Tonic Validate

Tonic Validate is a platform for Retrieval Augmented Generation (RAG) development and experiment tracking. To check it out, go to our repo.

This is a LlamaIndex project using FastAPI, bootstrapped with create-llama.

Getting Started

First, set up the environment:

poetry install
poetry shell

By default, we use the OpenAI LLM (though you can customize it; see app/api/routers/chat.py). As a result, you need to specify an OPENAI_API_KEY in a .env file in this directory.

Example backend/.env file:

OPENAI_API_KEY=<openai_api_key>
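
If you want to swap in a different model, the router mentioned above (app/api/routers/chat.py) is the place to do it. As a rough, illustrative sketch only (assuming a recent LlamaIndex release that exposes the Settings object; the file create-llama generated for this project may be organized differently), the change could look like:

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# Pick the LLM used to answer chat requests; the model name below is just an
# example, and any LlamaIndex-compatible LLM can be assigned here instead.
Settings.llm = OpenAI(model="gpt-4o-mini", temperature=0.1)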

Second, generate the embeddings of the documents in the ./data directory (if this folder exists; otherwise, skip this step):

python app/engine/generate.py
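
For context, a typical create-llama generation script reads every file under ./data, builds a vector index over it, and persists that index for the chat engine to load. A minimal sketch of that idea (not the exact contents of app/engine/generate.py, which may differ; the ./storage path is an assumption) looks like:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load all documents from ./data and embed them into a vector index.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Persist the index to disk so the API can load it at startup.
index.storage_context.persist(persist_dir="./storage")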

Third, run the development server:

python main.py

Then call the API endpoint /api/chat to see the result:

curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
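
The same request can be made from Python, for example with the requests library (an extra dependency, not installed by this project):

import requests

# Send a single user message to the locally running chat endpoint.
response = requests.post(
    "http://localhost:8000/api/chat",
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.status_code)
print(response.text)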

You can start editing the API by modifying app/api/routers/chat.py. The endpoint auto-updates as you save the file.

Open http://localhost:8000/docs with your browser to see the Swagger UI of the API.

The API allows CORS for all origins to simplify development. You can change this behavior by setting the ENVIRONMENT environment variable to prod:

ENVIRONMENT=prod uvicorn main:app
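
Under the hood this is the standard FastAPI CORS middleware pattern; a sketch of how such an environment toggle commonly looks (the actual main.py in this project may differ) is:

import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Only open CORS up to all origins outside of production.
if os.getenv("ENVIRONMENT", "dev") == "dev":
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
    )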

Learn More

To learn more about LlamaIndex, check out the LlamaIndex GitHub repository - your feedback and contributions are welcome!
