This repository contains a custom bot built around Llama 2. The program ingests custom knowledge from a plain text file and engages in conversations about it. It uses the Ollama API to access the Llama 2 or Mistral model.
Before running this program, ensure you meet the following prerequisites:

- Node.js v18 or higher is installed.
- `ollama` is installed and the `mistral` model has been pulled (see the command after this list). Refer to the Ollama GitHub repository for comprehensive installation guides.
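If Ollama is installed but the model has not been downloaded yet, the following command pulls it (substitute `llama2` if you prefer Llama 2 over Mistral):

```bash
ollama pull mistral
```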
Once your Ollama model is prepared, install the required Node packages:

```bash
npm install
```
To start the bot, run:

```bash
npm start
```
The program uses the Ollama server to run the Large Language Model (LLM) and LangChain to access it, allowing you to feed the model your custom document. You can experiment with the chunk size in `index.js` to find the optimal configuration for your use case.
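For reference, the sketch below shows what such an ingestion and retrieval flow typically looks like with LangChain JS and Ollama. It is a minimal illustration, not the repository's exact code: the file name `knowledge.txt`, the chunk settings, and the import paths (which differ between LangChain versions) are assumptions; check `index.js` and `package.json` for the real values.

```javascript
// Minimal sketch of a LangChain + Ollama ingestion/QA flow (ESM, Node 18+).
// Import paths follow the @langchain/community 0.x layout and may differ
// from the versions pinned in this repository.
import { Ollama } from "@langchain/community/llms/ollama";
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RetrievalQAChain } from "langchain/chains";

// 1. Load the custom knowledge file (hypothetical path) and split it into
//    chunks. chunkSize and chunkOverlap are the values worth experimenting with.
const docs = await new TextLoader("./knowledge.txt").load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
});
const chunks = await splitter.splitDocuments(docs);

// 2. Embed each chunk via Ollama and keep the vectors in an in-memory store.
const store = await MemoryVectorStore.fromDocuments(
  chunks,
  new OllamaEmbeddings({ model: "mistral" })
);

// 3. Answer a question: retrieve the most relevant chunks, then ask the model.
const llm = new Ollama({ model: "mistral" });
const chain = RetrievalQAChain.fromLLM(llm, store.asRetriever());
const answer = await chain.call({
  query: "What does the document say about the main topic?",
});
console.log(answer.text);
```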