
LLM-local-RAG

A locally-hosted Retrieval Augmented Generation pipeline for querying a Large Language Model on YOUR documents.

Based on the Local-rag-Example framework, using Ollama, LangChain, and Streamlit.

Install and Run Ollama

Download Ollama from https://ollama.com/download

Unzip the download and move Ollama.app to your Applications folder (macOS).

Open a terminal and execute:

ollama pull llama3.1
ollama serve
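
Once ollama serve is running, the server listens on http://localhost:11434 by default. As a quick sanity check (a minimal Python sketch, assuming the default port), you can confirm it responds:

# Quick check that the local Ollama server is reachable.
# Assumes the default port 11434; adjust if you changed it.
import urllib.request

with urllib.request.urlopen("http://localhost:11434") as resp:
    print(resp.read().decode())  # prints "Ollama is running" when the server is up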

Clone the repo and set up dependencies

git clone https://github.com/Sydney-Informatics-Hub/LLM-local-RAG/
cd LLM-local-RAG
conda create -n localrag python=3.11 pip
conda activate localrag
pip install langchain streamlit streamlit_chat chromadb fastembed pypdf langchain_community cryptography
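
These packages cover the core of the pipeline: pypdf for loading documents, fastembed and chromadb for embedding and retrieval, and langchain to wire everything to the Ollama-served model. The sketch below shows how these pieces typically fit together; it is a hypothetical example (the file name sample.pdf, the chunking parameters, and the chain wiring are assumptions, not the repo's actual app.py):

# Minimal RAG sketch using the installed dependencies.
# Hypothetical example; the repo's app.py may differ.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# Load and chunk a PDF ("sample.pdf" is a placeholder path).
docs = PyPDFLoader("sample.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1024, chunk_overlap=100
).split_documents(docs)

# Embed the chunks locally with FastEmbed and index them in Chroma.
store = Chroma.from_documents(chunks, embedding=FastEmbedEmbeddings())
retriever = store.as_retriever(search_kwargs={"k": 4})

# Query the locally served model, grounding it in the retrieved chunks.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOllama(model="llama3.1")

question = "What is this document about?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = (prompt | llm | StrOutputParser()).invoke(
    {"context": context, "question": question}
)
print(answer)

Swap in your own PDF path and model tag as needed.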

Run the Frontend

streamlit run app.py
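
Streamlit serves the app at http://localhost:8501 by default; open that URL in a browser to chat with the model about your documents. For orientation, here is a minimal skeleton of a chat frontend built with streamlit_chat (a hypothetical sketch, not the repo's actual app.py):

# Hypothetical Streamlit chat skeleton; the real app.py wires this to the RAG chain.
import streamlit as st
from streamlit_chat import message

st.title("LLM-local-RAG")

if "history" not in st.session_state:
    st.session_state.history = []  # list of (text, is_user) tuples

if user_text := st.chat_input("Ask a question about your documents"):
    st.session_state.history.append((user_text, True))
    # Placeholder reply; a real app would call the RAG pipeline here.
    st.session_state.history.append(("(answer from the RAG pipeline)", False))

for i, (text, is_user) in enumerate(st.session_state.history):
    message(text, is_user=is_user, key=str(i))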
