

AnythingLLM-LazyCat

What

This project ports the lightweight RAG implementation of AnythingLLM + Ollama to the LazyCat MicroServer.

Please do not expect it to help you run this RAG implementation on platforms other than the LazyCat MicroServer.

Dependencies

  • AnythingLLM: 1.2.4
  • Ollama: 0.4.1
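
To confirm which Ollama version is actually deployed, you can query its /api/version endpoint (a quick check using the same hostname pattern as the commands below; it is not part of the original setup steps):

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/version
# Expected output, e.g.: {"version":"0.4.1"}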

How

Pull a model through the exposed Ollama API. For example, to pull llama3.2:

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/pull -d '{
  "name": "llama3.2"
}'
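
To verify that the pull succeeded, you can list the models available locally via Ollama's standard /api/tags endpoint (a sanity check, not part of the original instructions):

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/tags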

Recommended Configuration

Because of the LazyCat MicroServer's limited memory and CPU, running LLMs with tens of billions of parameters for real-time chat on it is strongly discouraged.

To process documents that do not contain any sensitive data

  • Chat Model: API
  • Agent Model: API
  • Embedding Model: API
  • Vector DB: LanceDB

To process documents that contain sensitive data but are not highly confidential

  • Chat Model: API
  • Agent Model: API
  • Embedding Model: Ollama - mxbai-embed-large
  • Vector DB: LanceDB

To process documents that contain confidential data

  • Chat Model: Ollama - llama3.2
  • Agent Model: Ollama - llama3.2
  • Embedding Model: Ollama - mxbai-embed-large
  • Vector DB: LanceDB
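
If the LazyCat port honors AnythingLLM's standard environment configuration (an assumption; the variable names below come from upstream AnythingLLM's .env, and the port may only expose these settings through the UI), the fully local setup above would look roughly like this:

LLM_PROVIDER='ollama'                      # assumption: standard AnythingLLM .env names
OLLAMA_BASE_PATH='http://127.0.0.1:11434'  # Ollama runs alongside AnythingLLM in this port
OLLAMA_MODEL_PREF='llama3.2'
EMBEDDING_ENGINE='ollama'
EMBEDDING_MODEL_PREF='mxbai-embed-large'
VECTOR_DB='lancedb'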

Here are the commands for pulling the llama3.2 and mxbai-embed-large models.

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/pull -d '{
  "name": "llama3.2"
}'

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/pull -d '{
  "name": "mxbai-embed-large"
}'
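
After pulling, you can smoke-test both models directly against the Ollama API. The prompts here are illustrative, not part of the original setup:

# Chat model: request a single non-streamed JSON response
curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'

# Embedding model: request an embedding vector
curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/api/embed -d '{
  "model": "mxbai-embed-large",
  "input": "hello world"
}'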

How to Check the Status of Ollama
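
A quick way to check is to query Ollama's root endpoint, which responds with "Ollama is running" while the service is up (assuming the same host and port as in the pull commands above):

curl http://anythingllm.${YourLazyCatMicroServerName}.heiyu.space:11434/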

[Screenshot: Uptime Kuma tracking Ollama's status]

  • Alternatively, you can install Uptime Kuma and use it to track Ollama's status easily.
