Chainlit RAG Application


📕 Table of Contents

  • 💡 What is this repo?
  • 📷 Screenshots
  • 🚀 Getting Started
  • 🚗 Usage
  • 🙌 Contributing

💡 What is this repo?

This is a hybrid RAG application designed to enhance text generation by integrating powerful retrieval mechanisms. By combining Microsoft's GraphRAG with traditional RAG techniques, we achieve state-of-the-art results. We also provide a web UI based on Chainlit for seamless integration, extensibility, and ease of deployment.

📷 Screenshots

(Screenshots: screenshot2.png, screenshotSettings.png, screenshotLight.png)

🚀 Getting Started

Prerequisites

  • Docker >= 24.0.0 & Docker Compose >= v2.26.1

    If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.

  • Python >= 3.9.0
  • Conda

    If you do not have Conda installed, follow the steps here to install Miniconda on your machine.
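
    You can quickly confirm the prerequisites by printing each tool's version (standard commands; exact output varies by platform):

$ docker --version          # should report 24.0.0 or newer
$ docker compose version    # should report v2.26.1 or newer
$ python --version          # should report 3.9.0 or newer
$ conda --version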

Initial Setup

  1. Initialize a new conda environment:
$ conda create python==3.11 -n chainlit_rag
$ conda activate chainlit_rag
  2. Clone this repository and install dependencies:
$ git clone https://github.com/agi-dude/chainlit-rag
$ cd chainlit-rag
$ pip install -r requirements.txt
  3. Configure GraphRAG. Open the settings.yaml file located in the main directory and change these lines:
llm:
  api_key: ${GRAPHRAG_API_KEY} # Change to your OpenAI API key if you are using OpenAI models
  type: openai_chat # or azure_openai_chat
  model: dolphin-mistral:latest # Change to your model
  ...
  api_base: http://localhost:11434/v1 # By default, it's configured to use Ollama. You can change it to `https://api.openai.com/v1` if you want to use OpenAI models
  ...
embeddings:
  ...
  llm:
    api_key: ${GRAPHRAG_API_KEY} # Change to your OpenAI API key if you are using OpenAI models
    type: openai_embedding # or azure_openai_embedding
    model: mxbai-embed-large:latest # Change to your model
    api_base: http://192.168.10.102:11434/v1 # By default, it's configured to use Ollama. You can change it to `https://api.openai.com/v1` if you want to use OpenAI models
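
If you stick with the default Ollama backend, the models named above need to be pulled locally before indexing, and the ${GRAPHRAG_API_KEY} placeholder still needs some value in your environment (Ollama does not check the key, so a dummy string is fine; use your real key only when pointing at OpenAI). A minimal sketch, assuming a standard local Ollama install:

$ export GRAPHRAG_API_KEY=dummy-key-for-ollama   # substitute your real OpenAI key if api_base points at OpenAI
$ ollama pull dolphin-mistral:latest
$ ollama pull mxbai-embed-large:latest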

Set up the database

  1. Create the path input/pdfs in the root folder of this project and place your PDF files into it (see the example commands after this list).
  2. Run loader.py:
$ python loader.py -c -n # This might take some time (~1 hour or more for large datasets), because it has to index everything, so be patient!
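
For reference, step 1 above can also be done from the shell (the source path for the PDFs is just an example):

$ mkdir -p input/pdfs
$ cp /path/to/your/*.pdf input/pdfs/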

🚗 Usage

  1. Start the server by running app.py:
$ python app.py
  2. Open http://localhost:8000 in your browser.
  3. Press the settings button to change your settings.

(Screenshot: Settings.png)

Add more files

  1. To add more documents to the database, first add them to input/pdfs. After that, run loader.py without -n:
$ python loader.py -c

🙌 Contributing

Feel free to fork the project, make some updates, and submit pull requests. Any contributions are welcome!