
# Nano Jarvis


Nano Jarvis is compatible with either a cloud-based (managed) LLM service (e.g. OpenAI GPT models, Groq, OpenRouter, etc.) or a locally hosted LLM server (e.g. llama.cpp, LocalAI, Ollama, etc.). Please continue reading for detailed instructions.

Requirement: Node.js v18 or later.

Launch with:

```bash
npm start
```

then open http://localhost:3000 in your favorite web browser.
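If you are starting from a fresh checkout, the dependencies must be installed first. This is the standard npm workflow, assuming the project ships the usual package.json:

```bash
npm install   # fetch the dependencies declared in package.json
npm start     # then serve the app on localhost:3000
```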

## Using Local LLM Servers

Supported local LLM servers include llama.cpp, Jan, Ollama, and LocalAI.

To use Ollama locally, pull a model and configure the environment variables LLM_API_BASE_URL and LLM_CHAT_MODEL:

```bash
ollama pull llama3.1
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='llama3.1'
```
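Before launching Nano Jarvis, it can be useful to confirm that the Ollama endpoint responds. Ollama exposes an OpenAI-compatible chat completions API, so a quick sanity check looks like this (adjust the model name to whichever model you pulled):

```bash
# minimal request against Ollama's OpenAI-compatible endpoint
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

A JSON response containing a `choices` array indicates the server is ready.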

## Using Managed LLM Services

To use a managed service such as OpenRouter, select a model (e.g. meta-llama/llama-3-8b-instruct) and set the environment variables accordingly:

```bash
export LLM_API_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3-8b-instruct"
```
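Other OpenAI-compatible providers, such as Groq, are configured the same way; only the base URL, API key, and model name change. The sketch below uses Groq's OpenAI-compatible endpoint and an example model id; verify both against Groq's current documentation before relying on them:

```bash
export LLM_API_BASE_URL=https://api.groq.com/openai/v1   # Groq's OpenAI-compatible endpoint
export LLM_API_KEY="yourownapikey"                       # your Groq API key
export LLM_CHAT_MODEL="llama-3.1-8b-instant"             # example model id, subject to change
```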

## Prompt Examples for ReAct

- How much is $100 in IDR?
- How much is 52 EUR in IDR?
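These prompts exercise the ReAct (reason + act) loop: the model cannot answer from its weights alone, so it has to fetch a current exchange rate and then do the arithmetic. A trace typically looks roughly like the sketch below; the tool name, wording, and rate are purely illustrative, not actual output:

```
Question: How much is 52 EUR in IDR?
Thought: I need the current EUR-to-IDR exchange rate.
Action: exchange_rate             (hypothetical tool name)
Action Input: EUR/IDR
Observation: 1 EUR = 17,000 IDR   (illustrative rate)
Thought: 52 * 17,000 = 884,000.
Answer: 52 EUR is approximately 884,000 IDR.
```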

## References

This project is a fork of the original nano-jarvis project created by Ariya Hidayat, modified to make it suitable for this workshop.