The backend is a Flask API that provides both a WebSocket connection and a REST endpoint to receive and return messages. It uses a LangChain agent to analyze the input and then uses various tools to best respond to it.
- Receives input
- Uses a LangChain agent to analyze the input
- Uses various tools to best answer the input
  - Ocean API tools
    - Calls the Ocean API via DefichainPython
  - Wiki tool
    - Embeds the input
    - Uses Supabase (pgvector) to find the best matching document
    - Generates an answer
  - Math tool
- Comes up with the final answer
- Saves the final answer to Supabase
- Returns the answer
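The steps above can be sketched as a toy pipeline. All function bodies here are stand-ins; the real backend wires these steps through a LangChain agent:

```python
# Toy sketch of the pipeline described above. All names and return values
# are illustrative, not the actual identifiers in the codebase.

def embed(text):
    # Stand-in for an OpenAI embedding call.
    return [float(len(text))]

def find_best_match(embedding):
    # Stand-in for the Supabase/pgvector similarity search.
    return "Masternodes require 20,011 DFI as collateral."

def generate_answer(question, document):
    # Stand-in for the LLM completion grounded on the matched document.
    return f"Based on the docs: {document}"

def handle_message(message):
    """Receive input, run the (sketched) wiki tool, return the final answer."""
    document = find_best_match(embed(message))
    answer = generate_answer(message, document)
    # The real backend also saves the question/answer pair to Supabase here.
    return answer

print(handle_message("How many DFI do I need to create a masternode?"))
```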
- Python
- LangChain
- Flask
- Web sockets
- OpenAI API
- DefichainPython
- Supabase
On every push to `main`, the backend is deployed to Fly.io.
The WebSocket is used to send messages to the backend and receive answers over the same connection. We use the Socket.IO protocol.
To connect to the WebSocket, use the following URL: https://jellychat.fly.dev.
The library used on the backend is Flask-SocketIO. There are libraries for many languages available on the Socket.IO website.
To send a message, emit an event called `user_message` with the following data:

- `user_token` - The user token to identify the user/session.
- `message` - The message to send.
- `application` - The application you are sending the message from (used for analytics).
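As a client-side sketch in Python, assuming the `python-socketio` package (the helper name and placeholder values are illustrative; the URL and event name come from the docs above):

```python
import json

def build_payload(user_token, message, application):
    """Payload for the user_message event (field names from the docs above)."""
    return {"user_token": user_token, "message": message, "application": application}

def send_question(message, user_token="my-user-token", application="docs-example"):
    """Connect and emit user_message; requires the python-socketio package."""
    import socketio  # deferred import: pip install python-socketio

    sio = socketio.Client()
    sio.connect("https://jellychat.fly.dev")
    sio.emit("user_message", build_payload(user_token, message, application))
    sio.wait()  # keep the connection open to receive the answer events

print(json.dumps(build_payload("my-user-token", "Hi", "docs-example")))
```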
This event is emitted when the agent starts using a tool. You can use this to display information to the user so they know what is happening.

- `tool_name` - The name of the tool being used.
This event is emitted when the agent has come up with a final answer. You can use this to display the answer to the user.

- `token` - One token of the final answer.
This event emits each token of the final answer individually. You can use this to display the answer instantly to the user as it is being generated.
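Since tokens arrive one event at a time, the client can append them to render the answer incrementally. A minimal sketch (the handler name and the driving loop are illustrative; in reality the tokens arrive via Socket.IO events):

```python
tokens = []

def on_token(data):
    # Called once per streamed token of the final answer.
    tokens.append(data["token"])

def current_answer():
    """The answer as rendered so far from the streamed tokens."""
    return "".join(tokens)

# With a python-socketio client you would register the handler like:
#   sio.on("<event name>", on_token)
# Here we simulate two incoming events:
for data in [{"token": "You need "}, {"token": "20,011 DFI."}]:
    on_token(data)

print(current_answer())
```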
Main endpoint to ask a question.
Request body

```json
{
  "message": "How many DFI do I need to create a masternode?",
  "user_token": "{usertoken}", // The user token to identify the user/session.
  "application": "{application}" // The application you are sending the message from (used for analytics).
}
```
Response body

```json
{
  "response": "You need 20,011 DFI to create a masternode."
}
```
Get all messages and answers.
Response body

```json
[
  {
    "date": "2023-02-18 23:38:50",
    "question": "How many DFI do I need to create a masternode?",
    "answer": "You need 20,011 DFI to create a masternode."
  },
  ...
]
```
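For display purposes, the entries can be parsed and ordered by their date strings; a small sketch assuming the documented response shape:

```python
from datetime import datetime

# Example entry in the shape documented above.
history = [
    {
        "date": "2023-02-18 23:38:50",
        "question": "How many DFI do I need to create a masternode?",
        "answer": "You need 20,011 DFI to create a masternode.",
    }
]

def newest_first(entries):
    """Sort history entries by their 'YYYY-MM-DD HH:MM:SS' date strings."""
    return sorted(
        entries,
        key=lambda e: datetime.strptime(e["date"], "%Y-%m-%d %H:%M:%S"),
        reverse=True,
    )

print(newest_first(history)[0]["question"])
```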
`OPENAI_API_KEY`
- Your OpenAI API key.
- Used to embed incoming questions.
- Used to generate text.
- Can be obtained here: platform.openai.com

`SUPABASE_URL`
- Supabase API URL.
- Used to save questions and answers with their rating.
- Used to find the best matching documents.
- Can be obtained here: app.supabase.io

`SUPABASE_KEY`
- Supabase anon key.
- Used to save questions and answers with their rating.
- Used to find the best matching documents.
- Can be obtained here: app.supabase.io
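These variables can be supplied via a `.env` file (the same file the Docker run command below passes with `--env-file`); the values here are placeholders:

```shell
OPENAI_API_KEY=sk-...
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
```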
```shell
python -m venv venv
.\venv\Scripts\activate
```

To deactivate the virtual environment, run `deactivate`.

```shell
pip install -r requirements.txt
```
We use Flask to create the API. It is a micro web framework written in Python.
Docs: https://flask.palletsprojects.com/en/2.2.x/quickstart/
To develop locally, run the `app.py` file.

```shell
python .\app.py
```

The app will be available at http://localhost:8080.
We use Docker to package and run the backend. This makes the deployment more reliable and easier.
When deploying to Fly.io, we don't use Docker commands ourselves. The generation of the Docker image is done by Fly.io.
```shell
docker build -t jellychat-backend .
docker container run --name JellyChat_Backend --env-file .env -d -p 8080:8080 jellychat-backend
```
The main agent is in `main_agent.py`. You can run it directly to test it.

To debug, make sure `langchain.debug = True` is active in `/agent/main_agent.py`.