Skip to content

Latest commit

 

History

History
37 lines (29 loc) · 1.62 KB

README.md

File metadata and controls

37 lines (29 loc) · 1.62 KB

LLM Guardrails

Overview

This repository contains resources and code for learning and implementing guardrails for Large Language Models (LLMs) at Folio3. The project is structured to help understand benchmarks, evaluate models, and enhance classification using various techniques.

Contents

  1. Aheed's Notion Link: A link to additional resources and documentation.
  2. Day-1_Learning_About_Benchmarks.pdf: Introduction to benchmark datasets.
  3. Day-2_Benchmark_data_samples.pdf: Samples from benchmark datasets.
  4. Day-3_How_Models_are_Evaluated.pdf: Scripts for evaluating models, specifically focusing on MMLU.
  5. Day-4_Guardrails_in_llms_&_Nemo.zip: Implementation of guardrails in LLMs and related resources.
  6. Day-5_Prompt_Classifier_using_ANNOY.zip: Using ANNOY for prompt classification.
  7. Day-6_Enhanced_Classifier_&_Increased_data.zip: Enhanced classifier and additional data for training.

Getting Started

To get started with this repository, clone it to your local machine:

git clone https://github.com/Tahiralira/llm-guardrails.git

Navigate to the repository directory:

cd llm-guardrails

Explore the different folders and files to understand the content and resources provided.

Usage

Each folder contains specific resources and code related to different aspects of implementing guardrails for LLMs. Refer to the PDFs and ZIP files for detailed information and code samples.

Contributing

Contributions are welcome! If you have any improvements or additional resources to share, please fork the repository and submit a pull request.

License

Me

Author

Aheed Tahir