This application demonstrates the use of the ChatGroq API with the Llama3-8b-8192 model for answering questions based on provided document contexts. The application uses Streamlit for the web interface and integrates with Langchain for document processing and embedding.
- Python 3.7+
- Streamlit
- Langchain
- HuggingFace Embeddings
- FAISS
- PyPDFLoader
- dotenv
- Clone the repository:
git clone https://github.com/aswin-bs/RAG_USING_LLAMA_AND_GROQ.git
- Install the required dependencies:
pip install streamlit langchain huggingface_hub faiss-gpu pypdf2 python-dotenv
- Set up your environment variables: create a .env file in the root directory and add your GROQ API key:
GROQ_API_KEY=your_groq_api_key
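At startup the app reads this key via python-dotenv. A minimal pure-Python sketch of what `load_dotenv()` does (the `load_env` parser below is illustrative, not the real library, and the `.env.example` filename is just for demonstration):

```python
import os

def load_env(path=".env"):
    """Toy stand-in for dotenv.load_dotenv(): parse KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: an already-exported variable wins over the file
            os.environ.setdefault(key.strip(), value.strip())

# Demonstration: write a sample file and load it
with open(".env.example", "w") as f:
    f.write("GROQ_API_KEY=your_groq_api_key\n")
load_env(".env.example")
print(os.environ["GROQ_API_KEY"])
```

In the real application, `load_dotenv()` is called once before the ChatGroq client is constructed, so the key is available in `os.environ`.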
- Ensure you have the necessary API keys set in your .env file.
- Run the Streamlit application:
streamlit run app.py
- Upload your PDF documents using the file uploader. The application will process and embed the documents.
- Enter your questions in the text input field and click "Generate Response" to get answers based on the uploaded documents.
- app.py: Main application file
- .env: Environment variable file containing API keys
- requirements.txt: List of required dependencies
- File Upload: Users can upload multiple PDF documents; the application saves them to a temporary directory.
- Document Processing: PyPDFLoader reads and loads the content of the PDF documents.
- Embedding: The loaded documents are split into smaller chunks with RecursiveCharacterTextSplitter and embedded with HuggingFaceEmbeddings; the embeddings are stored in a FAISS vector store.
- Question Answering: When a user asks a question, a retriever fetches the relevant document chunks from the vector store, and the retrieved chunks are passed to the ChatGroq model to generate a response.
- Response Display: The generated response, along with the relevant document chunks, is displayed on the Streamlit interface.
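The splitting step above can be sketched in plain Python. This toy splitter stands in for RecursiveCharacterTextSplitter (the chunk size and overlap values are illustrative; the real splitter also tries to break on separators like paragraphs before falling back to fixed cuts):

```python
def split_text(text, chunk_size=100, chunk_overlap=20):
    """Toy stand-in for RecursiveCharacterTextSplitter:
    fixed-size chunks where each chunk re-includes the tail of the previous one."""
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # the rest of the text is already covered
    return chunks

doc = "".join(str(i % 10) for i in range(250))  # a 250-character dummy document
chunks = split_text(doc)
print(len(chunks), [len(c) for c in chunks])
```

Overlapping chunks ensure that a sentence cut at a chunk boundary still appears whole in at least one chunk, which improves retrieval quality.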
- Vector Store: The FAISS vector store is used to efficiently retrieve relevant document chunks based on the user's question.
- Model: The ChatGroq API with Llama3-8b-8192 model is used for generating answers.
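Retrieval itself reduces to nearest-neighbour search over the chunk embeddings. A pure-Python cosine-similarity sketch of what the FAISS retriever does conceptually (the three-dimensional vectors below are toy values, not real HuggingFace embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the k chunk texts whose embeddings are most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy vector store: (chunk text, embedding) pairs
store = [
    ("Groq serves Llama3 at low latency.", [0.9, 0.1, 0.0]),
    ("FAISS indexes dense vectors.",       [0.1, 0.9, 0.0]),
    ("Streamlit builds the web UI.",       [0.0, 0.1, 0.9]),
]
print(retrieve([1.0, 0.0, 0.0], store, k=1))
```

FAISS does the same ranking with optimized index structures instead of a linear scan; the retrieved chunks are then inserted into the prompt sent to ChatGroq.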
This application uses the following libraries and services: Streamlit, Langchain, HuggingFace Embeddings, FAISS, and the Groq API.
Feel free to contribute or raise issues if you find any bugs or have suggestions for improvements.