Nano Jarvis is compatible with either a cloud-based (managed) LLM service (e.g. OpenAI GPT models, Groq, OpenRouter) or a locally hosted LLM server (e.g. llama.cpp, LocalAI, Ollama). Please continue reading for detailed instructions.
Requirement: Node.js v18 or later.
Launch with:

```
npm start
```

then open localhost:3000 with your favorite web browser.
Supported local LLM servers include llama.cpp, Jan, Ollama, and LocalAI.
To use Ollama locally, load a model and configure the environment variable LLM_API_BASE_URL:
```
ollama pull llama3.1
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='llama3.1'
```
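The base URL above points at Ollama's OpenAI-compatible API, so requests go to the standard chat completions endpoint. A minimal sketch of how such a request is assembled from these environment variables (the helper name `buildChatRequest` is illustrative, not part of Nano Jarvis itself; the endpoint path and payload shape follow the OpenAI API convention):

```javascript
// Build an OpenAI-compatible chat completion request from the
// environment variables configured above. This mirrors the API
// convention Nano Jarvis relies on; the app's internals may differ.
function buildChatRequest(baseUrl, model, userMessage) {
  // The chat completions endpoint lives under the base URL.
  const url = baseUrl.replace(/\/+$/, '') + '/chat/completions';
  const body = {
    model,
    messages: [{ role: 'user', content: userMessage }],
  };
  return { url, body };
}

const { url, body } = buildChatRequest(
  process.env.LLM_API_BASE_URL || 'http://127.0.0.1:11434/v1',
  process.env.LLM_CHAT_MODEL || 'llama3.1',
  'Hello!'
);

// With a server running, the request could be sent using the fetch
// built into Node.js v18+:
// const res = await fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
console.log(url);
```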
To use OpenRouter, select a model (e.g. meta-llama/llama-3-8b-instruct) and set the environment variables accordingly:
```
export LLM_API_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3-8b-instruct"
```
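The main difference from a local setup is that a hosted service expects the API key as a bearer token on every request. A small sketch of that logic (the helper `buildHeaders` is illustrative; the `Authorization: Bearer …` header follows the OpenAI API convention used by OpenRouter and similar services):

```javascript
// Build request headers for an OpenAI-compatible service.
// When LLM_API_KEY is set, it is sent as a standard Bearer token;
// local servers such as Ollama work without one, so the header is
// simply omitted when no key is configured.
function buildHeaders(apiKey) {
  const headers = { 'Content-Type': 'application/json' };
  if (apiKey) {
    headers['Authorization'] = `Bearer ${apiKey}`;
  }
  return headers;
}

console.log(buildHeaders(process.env.LLM_API_KEY));
```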
Once Nano Jarvis is running, try example questions such as:

- How much is $100 in IDR?
- How much is 52 EUR in IDR?
This project is a fork of the original nano-jarvis project created by Ariya Hidayat, modified to make it suitable for this workshop.