The aim of this project is to create the MVP of a new chatbot for the Faculty of Informatics and Statistics, Prague University of Economics and Business.
The whole project runs in Docker containers. Please refer to the image below for the high-level architecture.
Components:
- Rasa Chatbot - the core of the chatbot, built with the Rasa framework
- Rasa Actions - the support component for the chatbot, responsible for running custom actions (see the sketch after this list)
- Chatbot Redis - in-memory key-value database used to store scraped information
- Nginx Server - reverse proxy in front of the Chatbot and Actions servers
- Chatbot Web Demo - frontend component, taken from an existing repo and modified
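To illustrate how the Rasa Actions and Chatbot Redis components work together, below is a minimal sketch of a custom action that reads previously scraped data from Redis. The action name, Redis key, and connection settings are illustrative assumptions, not the actual code in this repo.

```python
# Sketch only: the action name, Redis key, and connection settings are assumptions.
from typing import Any, Dict, List, Text

import redis
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionFetchScrapedInfo(Action):
    """Example custom action that returns scraped data stored in Redis."""

    def name(self) -> Text:
        # Must match an action name declared in the domain file.
        return "action_fetch_scraped_info"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # "chatbot-redis" would be the Docker service name; use "localhost"
        # when running Redis locally.
        client = redis.Redis(host="chatbot-redis", port=6379, decode_responses=True)
        info = client.get("faculty:opening_hours")  # hypothetical key
        dispatcher.utter_message(text=info or "Sorry, I have no data for that yet.")
        return []
```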
You can spin up the containers by running the following command in the root directory:
docker compose up --build
You should also create a .env file with the host IP variable set to localhost (see .env.sample). Changes made on the frontend side are automatically propagated to the container.
Another option is to run the Rasa Chatbot and Rasa Actions components manually (the following commands may differ on Windows):
Note: for this option you also need a running Redis instance (either the Dockerized one or a local installation).
- create a venv for the project and install the required libraries
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
- start the Rasa actions server in the chatbot folder
rasa run actions --cors "*" --debug
- start the Rasa server in the chatbot folder (a quick smoke test is shown after these steps)
rasa run -m models --enable-api --cors "*" --debug
- every time you make changes to the chatbot (e.g. training data or the domain), first retrain the model and then restart the server
rasa train
If you only changed the custom actions, restarting the actions server is enough.
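Once the Rasa server is running with --enable-api, you can sanity-check it from Python. The snippet below is a minimal example that sends a message through the REST channel webhook; it assumes the rest channel is enabled in credentials.yml and that the server listens on the default port 5005.

```python
# Quick smoke test against a locally running Rasa server.
# Assumes the REST channel is enabled in credentials.yml and the
# server runs on the default port 5005.
import requests

response = requests.post(
    "http://localhost:5005/webhooks/rest/webhook",
    json={"sender": "test-user", "message": "Hello"},
)
for reply in response.json():
    print(reply.get("text"))
```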
Automatic deployment is handled by a simple CI/CD pipeline in GitHub Actions. To use this pipeline, several repository secrets must be set.