This repository contains resources and code for learning and implementing guardrails for Large Language Models (LLMs) at Folio3. The project is structured to help understand benchmarks, evaluate models, and enhance classification using various techniques.
- Aheed's Notion Link: A link to additional resources and documentation.
- Day-1_Learning_About_Benchmarks.pdf: Introduction to benchmark datasets.
- Day-2_Benchmark_data_samples.pdf: Samples from benchmark datasets.
- Day-3_How_Models_are_Evaluated.pdf: Scripts for evaluating models, specifically focusing on MMLU.
- Day-4_Guardrails_in_llms_&_Nemo.zip: Implementation of guardrails in LLMs and related resources.
- Day-5_Prompt_Classifier_using_ANNOY.zip: Using ANNOY for prompt classification.
- Day-6_Enhanced_Classifier_&_Increased_data.zip: Enhanced classifier and additional data for training.
To get started with this repository, clone it to your local machine:
git clone https://github.com/Tahiralira/llm-guardrails.git
Navigate to the repository directory:
cd llm-guardrails
Explore the different folders and files to understand the content and resources provided.
Each folder contains specific resources and code related to different aspects of implementing guardrails for LLMs. Refer to the PDFs and ZIP files for detailed information and code samples.
Contributions are welcome! If you have any improvements or additional resources to share, please fork the repository and submit a pull request.
Me
Aheed Tahir