The application is rate limited: it accepts documents of up to 40 pages, but please keep uploads to 10-20 pages to stay safely within the limit. The limit comes from the free-tier Cohere embeddings API key; switching to another embedding model, or to a key with a higher rate limit, would raise it.
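If you do hit the free-tier limit, one common workaround is to embed document chunks in small batches with a pause between calls. A rough sketch using the Cohere Python SDK; the batch size, pause, and model name are assumptions, not the app's exact settings:

```python
import os
import time
import cohere  # pip install cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def embed_in_batches(chunks, batch_size=96, pause_s=1.5):
    """Embed text chunks in small batches, pausing between calls to stay
    under a free-tier rate limit. Batch size and pause are assumptions."""
    vectors = []
    for i in range(0, len(chunks), batch_size):
        batch = chunks[i:i + batch_size]
        resp = co.embed(
            texts=batch,
            model="embed-english-v3.0",   # assumed embedding model
            input_type="search_document",
        )
        vectors.extend(resp.embeddings)
        time.sleep(pause_s)               # crude throttle for free keys
    return vectors
```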
A RAG application to chat with your documents, with complete control over your data: you can see which files are loaded into the vector database, delete files from the database, and delete the vectors belonging to a particular file.
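Per-file cleanup boils down to removing that file's rows from the embeddings table. A minimal sketch, assuming psycopg2 and a hypothetical `document_chunks` table keyed by `file_name` (the real schema may differ):

```python
import os
import psycopg2  # pip install psycopg2-binary

def delete_file_vectors(file_name: str) -> int:
    """Remove all stored chunks/embeddings for one uploaded file.
    Table and column names (document_chunks, file_name) are assumptions."""
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "DELETE FROM document_chunks WHERE file_name = %s",
                (file_name,),
            )
            return cur.rowcount  # number of chunk rows deleted
```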
- 📄 Document Upload & Query:
  Upload your PDF documents, securely store them in Postgres, and query their content using Cohere embeddings.
- 🧠 RAG-Powered Question Answering:
  Ask questions about your uploaded documents and receive precise answers, thanks to Cohere's embeddings and Postgres vector storage (see the retrieval sketch after this list).
- 🔒 Full Data Control:
  You have total control over your documents, including the ability to delete, manage, or reference them whenever needed.
- 🎨 Dynamic UI with Animations:
  Built with TailwindCSS for responsiveness and Framer Motion for smooth transitions. Unauthorized pages are highlighted with engaging Rive animations for user feedback.
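The retrieval step referenced above can be sketched as: embed the question with Cohere, then run a similarity search against the stored vectors in Postgres (pgvector). Table and column names here (`document_chunks`, `content`, `embedding`) are assumptions for illustration only:

```python
import os
import cohere
import psycopg2

co = cohere.Client(os.environ["COHERE_API_KEY"])

def retrieve_context(question: str, top_k: int = 4):
    """Embed the question and fetch the most similar stored chunks.
    Assumes a pgvector column `embedding` on a `document_chunks` table."""
    q_emb = co.embed(
        texts=[question],
        model="embed-english-v3.0",   # assumed embedding model
        input_type="search_query",
    ).embeddings[0]
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT content
                FROM document_chunks
                ORDER BY embedding <=> %s::vector  -- cosine distance
                LIMIT %s
                """,
                (str(q_emb), top_k),
            )
            return [row[0] for row in cur.fetchall()]
```

The retrieved chunks are then passed to the language model as context for answering the question.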
- Clone the repository:
  git clone https://github.com/yourusername/LLM-Chat.git
- Change into the project directory:
  cd LLM-Chat
- Add the environment variables: change into the backend directory and create a .env file there:
  cd backend
To run this application, you need to set the following environment variables:
COHERE_API_KEY=
# Get the API key from the Cohere website
DATABASE_URL=
# Get the Postgres URL from NeonDB
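The backend can read these values at startup with python-dotenv; a minimal sketch, assuming the variable names match the .env keys above:

```python
# Config sketch: load the backend/.env values at startup
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current (backend) directory

COHERE_API_KEY = os.getenv("COHERE_API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")

if not COHERE_API_KEY or not DATABASE_URL:
    raise RuntimeError("Set COHERE_API_KEY and DATABASE_URL in backend/.env")
```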
- Install the backend packages:
  pip install -r requirements.txt
- Run the backend:
  uvicorn main:app --reload
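For context, `uvicorn main:app --reload` expects an `app` object in backend/main.py. A minimal stand-in (the real app defines its own routes; the `/chat` endpoint below is only a placeholder) might look like:

```python
# main.py (simplified): the `app` object that `uvicorn main:app --reload` serves
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="LLM-Chat backend")

class Query(BaseModel):
    question: str

@app.post("/chat")  # route name is an assumption, not the app's actual API
def chat(query: Query):
    # Retrieval and answer generation would be wired in here
    return {"answer": f"stub answer for: {query.question}"}
```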
- Change into the frontend directory:
  cd frontend
- Install the frontend packages:
  npm install
- Run the frontend:
  npm run dev