This guide walks you through the setup process for the 'analyticalite' project, which includes both backend and frontend components.
Before you start, ensure you have Python and Node.js installed on your system.
- Python: Download the installer for your operating system from the official Python downloads page (python.org).
- Node.js: Download and install Node.js from the official Node.js downloads page (nodejs.org). macOS users can also install it with Homebrew:
brew install node
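To confirm the interpreter on your PATH is recent enough, a quick check helper can be used (the 3.8 minimum is an assumption; adjust it to whatever the project actually requires):

```python
import sys

def python_ok(minimum: tuple = (3, 8)) -> bool:
    """Return True if the running interpreter meets the assumed minimum version."""
    return sys.version_info >= minimum

if __name__ == "__main__":
    # Prints the current interpreter version and whether it passes the check.
    print(sys.version.split()[0], python_ok())
```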
Navigate to the analyticalite/backend directory:
- Create a Python Virtual Environment:
python -m venv ./venv
or, if python points to an older interpreter on your system:
python3 -m venv ./venv
- Activate the Virtual Environment:
. venv/bin/activate
Note: On Windows, run venv\Scripts\activate instead.
- Install Dependencies:
pip install -r requirements.txt
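For reference, a requirements.txt for a backend like this typically lists at least the following (this package set is an assumption based on the commands used later in this guide, not the project's actual pin list):

```text
fastapi
uvicorn[standard]
huggingface-hub
```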
- Install Llama-CPP-Python:
For a default CPU-only build:
pip install llama-cpp-python
For a mixed CPU+GPU build on Macs with GPUs:
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
Note: Windows users might need additional installation steps.
- Download Llama Model:
huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q8_0.gguf --local-dir ./models --local-dir-use-symlinks False
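After the download, you can sanity-check that the GGUF file landed where the backend will look for it. This is a hypothetical helper; the backend's actual model path may differ:

```python
from pathlib import Path

def model_present(models_dir: str = "./models",
                  name: str = "llama-2-7b-chat.Q8_0.gguf") -> bool:
    """Return True if the expected GGUF model file exists and is non-empty."""
    path = Path(models_dir) / name
    return path.is_file() and path.stat().st_size > 0
```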
- Start the Backend Server:
uvicorn main:app
Note: If the server doesn't start, open a new terminal, activate the venv in the same directory, and run the command again.
Navigate to the analyticalite/frontend directory and run:
- Install Node Modules:
npm install
- Start the Frontend Server:
npm run dev