Project Artemis began as Carli's hope—a journey into the world of artificial intelligence. The goal? To create an AI named Artemis. Here's how it all came together:
- Operating System: Fedora Linux (FC40)
- Hardware: My gaming laptop with a single NVIDIA GPU for development
- First, we run the Rocket LLM through Ollama.
- Next, we install open-webui to unlock a rich web interface on top of it.
Ollama caught our attention: it's like a Docker engine for LLMs, pulling and running models much the way Docker runs containers. Here's how we set it up:
- Download and install Ollama:

  curl -fsSL https://ollama.com/install.sh | sh
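A quick way to confirm the install landed is to ask the CLI for its version. A minimal sketch that degrades gracefully if the binary isn't on PATH yet:

```shell
# Check that the ollama binary is on PATH and report its version;
# fall back to a message so the check works even pre-install.
if command -v ollama >/dev/null 2>&1; then
  OLLAMA_VER=$(ollama --version)
else
  OLLAMA_VER="ollama not on PATH yet"
fi
echo "$OLLAMA_VER"
```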
- Enable and start the systemd service, then check that it's healthy:

  systemctl enable ollama
  systemctl start ollama
  systemctl status ollama
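By default the service only listens on localhost:11434. To have the systemd-managed server bind another address (like the 192.168.1.63 address used below), the Ollama docs describe setting OLLAMA_HOST through a systemd drop-in. A sketch of the override (0.0.0.0 here is an assumption that you want it reachable on all interfaces; narrow it to one address if not):

```
# Created via: sudo systemctl edit ollama
# (drop-in at /etc/systemd/system/ollama.service.d/override.conf)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, run `sudo systemctl daemon-reload` and `sudo systemctl restart ollama` to apply it.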
- Start the server manually for debugging (OLLAMA_HOST binds it to a specific address and port):

  OLLAMA_HOST=192.168.1.63:11435 ollama serve
- Verify that it's running:
  - In your browser: http://localhost:11434
  - Or via the Docker bridge IP: http://172.17.0.1:11434/
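The browser checks above can also be scripted. A minimal health-check sketch probing the API's model-list endpoint (`/api/tags`), assuming the default port 11434 — adjust OLLAMA_URL to match your setup:

```shell
# Probe the Ollama API; /api/tags lists locally available models.
# A short timeout keeps the check snappy when the server is down.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"
if curl -fsS --max-time 3 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  OLLAMA_STATE=up
else
  OLLAMA_STATE=down
fi
echo "Ollama at $OLLAMA_URL is $OLLAMA_STATE"
```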
- Pull and run the Rocket model (matching the OLLAMA_HOST used for the debug server):

  OLLAMA_HOST=192.168.1.63:11435 ollama run chand1012/rocket
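`ollama run` drops you into an interactive prompt; for scripting, the same model can be queried through the REST API's `/api/generate` endpoint. A sketch (the host/port and prompt are assumptions, and a live server is required for a real response — without one it falls back to an error message):

```shell
# Non-interactive generation via the REST API.
# "stream": false returns one JSON object instead of a token stream.
RESP=$(curl -fsS --max-time 5 http://localhost:11434/api/generate \
    -d '{"model":"chand1012/rocket","prompt":"Hello, Artemis","stream":false}' \
    2>/dev/null) || RESP='{"error":"server not reachable"}'
echo "$RESP"
```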
Let's add some flair to our project with open-webui:
- Run the container:

  docker run -d --network=host -e OLLAMA_API_BASE_URL=http://localhost:11434/api --name ollama-webui --restart always ollama-webui
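To confirm the container actually came up, `docker ps` can be filtered by name. A sketch that degrades gracefully when Docker isn't available:

```shell
# List the webui container's name and status, if Docker is present
# and the container is running; otherwise print a fallback message.
if command -v docker >/dev/null 2>&1; then
  WEBUI_STATUS=$(docker ps --filter "name=ollama-webui" \
      --format '{{.Names}}: {{.Status}}' 2>/dev/null)
fi
echo "${WEBUI_STATUS:-container not running (or docker unavailable)}"
```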
- Access the web interface:
  - URL: http://localhost:3000
  - Admin account: admin@localhost (the password is whatever was chosen at first sign-up)
If you encounter authentication issues, you can disable login entirely with WEBUI_AUTH=False (only do this on a trusted network):

docker run --env WEBUI_AUTH=False -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Remember, Artemis is watching over us! 🌙✨