octoml/LLM-RAG-Examples

LLM endpoint chat

Load a PDF file and ask questions about it via llama_index, LangChain, and an LLM endpoint hosted on OctoAI.

Instructions

  • Install the requirements:
pip install -r requirements.txt -U

Environment setup

To run the example apps, take these four simple steps:

  • Paste the endpoint URL into a file called .env in the root directory of the project:
ENDPOINT_URL="https://text.octoai.run/v1/chat/completions"
  • Get an API token from your OctoAI account page.
  • Add your API token to the same .env file:
OCTOAI_API_TOKEN=<your key here>
  • Run the chat_main.py script to chat with the hosted LLM endpoint:
python3 chat_main.py

or

  • Select a file from the menu, or replace the default file file.pdf with the PDF you want to use.
  • Run the pdf_qa_main.py script to ask questions about your PDF file via llama_index, LangChain, and the hosted endpoint:
python3 pdf_qa_main.py
  • Ask any questions about the content of the PDF.
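The scripts read ENDPOINT_URL and OCTOAI_API_TOKEN at startup, most likely via python-dotenv (an assumption; check requirements.txt). To illustrate the expected .env format, here is a stdlib-only stand-in for that loading step:

```python
"""Stdlib-only sketch of reading the .env file described above.
(The repo itself likely uses python-dotenv; this just illustrates the format.)"""
import os


def load_dotenv_minimal(path: str = ".env") -> dict:
    """Parse KEY=VALUE lines, strip optional quotes, and export to os.environ."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(values)  # make the values visible to the rest of the app
    return values
```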

For detailed setup steps, please see https://docs.octoai.cloud/docs/setup-steps-for-the-qa-app
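The /v1/chat/completions path in the endpoint URL suggests an OpenAI-compatible chat API, so a single request to the hosted endpoint can be sketched with the standard library alone. The model name below is a placeholder, not something confirmed by this repo; substitute one available on your OctoAI account:

```python
"""Minimal sketch of calling the hosted endpoint directly, assuming an
OpenAI-compatible /v1/chat/completions API (as the URL suggests)."""
import json
import os
import urllib.request


def build_request(question: str, model: str = "<your model here>") -> urllib.request.Request:
    """Build the HTTP request; the URL and token come from the environment."""
    url = os.environ.get("ENDPOINT_URL", "https://text.octoai.run/v1/chat/completions")
    token = os.environ.get("OCTOAI_API_TOKEN", "")
    payload = {
        "model": model,  # placeholder -- pick a model from your OctoAI account
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_request("What is retrieval-augmented generation?")
    # Network call: requires a valid OCTOAI_API_TOKEN in the environment.
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

This is the raw protocol the higher-level llama_index/LangChain plumbing in the repo wraps; it can be handy for verifying your .env values before running the full apps.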

About

OctoAI LLM RAG samples