ricardobalk/streamlit-ollama

Chat interface made with Python, Streamlit and Ollama (llama3)

# Reference implementation for a chatbot with Streamlit and Ollama

This is a chatbot application built with Streamlit for the web interface and Ollama as the backend language model server. The application is containerized with Docker, making it easy to deploy and scale.
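A setup like this typically defines two services in the compose file: the Streamlit frontend and the Ollama backend. The sketch below is illustrative only; the service names, image, ports, and volume name are assumptions, and the repository's actual `docker-compose.yml` may differ.

```yaml
# Hypothetical sketch of a compose file for this kind of setup; not the
# repository's actual docker-compose.yml.
services:
  streamlit:
    build: .                  # the Streamlit chat app in this repo
    ports:
      - "8501:8501"           # Streamlit's default port, exposed to the host
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama      # official Ollama image
    ports:
      - "11434:11434"         # Ollama's default API port
    volumes:
      - ollama-models:/root/.ollama   # persist pulled models across restarts

volumes:
  ollama-models:
```

Persisting `/root/.ollama` in a named volume means pulled models survive container restarts, so step 2 below only has to be done once.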

## Usage

### 1) Start the containers

Run the following command to start all the services defined in your Docker Compose file in detached mode:

```shell
docker compose up -d
```

### 2) Pull llama2 or llama3

Depending on the model you want to use, pull it with one of the following commands:

```shell
docker compose exec ollama ollama pull llama2
# or
docker compose exec ollama ollama pull llama3
```

This executes `ollama pull <model>` inside the `ollama` container.

Note: Make sure to select the same model in the preferences panel of the GUI.

### 3) Try it out

Navigate to `http://localhost:8501/` in your web browser to interact with the chatbot. The Streamlit interface should allow you to input text and display responses from the chatbot powered by the Ollama model.
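Besides the GUI, you can talk to the Ollama backend directly over its REST API on port 11434. The helper below is a minimal sketch using only the standard library; the function names are illustrative and not part of this repository's code, and it assumes the default Ollama port mapping.

```python
import json
import urllib.request

# Default Ollama API endpoint; adjust the host/port if your compose file
# maps them differently. These helpers are illustrative, not repo code.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(model: str, messages: list) -> bytes:
    """Serialize a chat request in the shape Ollama's /api/chat endpoint expects."""
    return json.dumps({"model": model, "messages": messages, "stream": False}).encode()

def chat(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the Ollama container and return the reply text."""
    payload = build_payload(model, [{"role": "user", "content": prompt}])
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires the containers from step 1 and a pulled model from step 2):
#   reply = chat("llama3", "Hello!")
```

Setting `"stream": False` makes Ollama return one complete JSON response instead of a stream of partial chunks, which keeps the client simple.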

### 4) Stop the containers

When you are done, you can stop the containers by running:

```shell
docker compose stop
```

### 5) Remove the containers and volumes (optional)

To completely remove all containers and clean up volumes created by Docker Compose, use:

```shell
docker compose down -v
```
