This project aims to develop an AI tutor powered by Large Language Models (LLMs) to assist university computing students with advanced concepts. The AI tutor is able to access local knowledge, search live websites, and provide concise explanations.
To serve an LLM locally, we use Ollama for its ease of use and its support for many popular models. For installation instructions, refer to Ollama's official repo.
After installation, run the following (or substitute another model of choice):
ollama run mistral:7b
This downloads the model before serving it through the Ollama API on http://localhost:11434.
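To confirm that the model is being served, you can query the Ollama API directly. The snippet below is a minimal Python sketch using the requests library; the prompt is purely illustrative.

import requests

# Minimal check: send one non-streaming generation request to the local Ollama server.
# Assumes mistral:7b has already been pulled and Ollama is listening on its default port.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral:7b",
        "prompt": "Explain the difference between a process and a thread in two sentences.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])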
Start the FastAPI backend with the following:
uvicorn backend:app --reload
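For context, a minimal sketch of what backend.py might contain is shown below: a single FastAPI route that forwards a question to the locally served model. The /ask route name, the request shape, and the hard-coded model are assumptions for illustration, not the project's actual API.

# backend.py -- minimal sketch only; the real backend exposes the project's own routes.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    prompt: str

@app.post("/ask")
def ask(question: Question) -> dict:
    # Forward the student's question to the local Ollama server and return the answer.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral:7b", "prompt": question.prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return {"answer": resp.json()["response"]}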
Start the ReactJS frontend with the following:
cd frontend
npm ci
npm start