local_llm_fixes
rounak610 committed Jan 12, 2024
1 parent b141ce7 commit 2a75593
Showing 3 changed files with 3 additions and 2 deletions.
1 change: 0 additions & 1 deletion Dockerfile
@@ -13,7 +13,6 @@ ENV PATH="/opt/venv/bin:$PATH"
 COPY requirements.txt .
 RUN pip install --upgrade pip && \
     pip install --no-cache-dir -r requirements.txt
-RUN python3 -m pip install llama-cpp-python==0.2.7 --force-reinstall --upgrade --no-cache-dir

 RUN python3.10 -c "import nltk; nltk.download('punkt')" && \
     python3.10 -c "import nltk; nltk.download('averaged_perceptron_tagger')"
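Removing the standalone pinned install means llama-cpp-python must now arrive some other way; the natural place is `requirements.txt`, which the first `pip install` step already consumes. A hypothetical entry (this commit does not show the requirements file, so the exact line is an assumption):

```
# requirements.txt (hypothetical entry, not shown in this commit):
# keeps the previously pinned version without a separate RUN layer
llama-cpp-python==0.2.7
```

Folding the pin into `requirements.txt` also avoids the extra image layer and the `--force-reinstall` churn of the deleted `RUN` line.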
2 changes: 2 additions & 0 deletions docker-compose-gpu.yml
@@ -3,6 +3,7 @@ services:
   backend:
     volumes:
       - "./:/app"
+      - "/home/ubuntu/models/vicuna-7B-v1.5-GGUF/vicuna-7b-v1.5.Q5_K_M.gguf:/app/local_model_path"
     build:
       context: .
       dockerfile: Dockerfile-gpu
@@ -24,6 +25,7 @@ services:
     volumes:
       - "./:/app"
       - "${EXTERNAL_RESOURCE_DIR:-./workspace}:/app/ext"
+      - "/home/ubuntu/models/vicuna-7B-v1.5-GGUF/vicuna-7b-v1.5.Q5_K_M.gguf:/app/local_model_path"
     build:
       context: .
       dockerfile: Dockerfile-gpu
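The added bind mount hardcodes a host path (`/home/ubuntu/models/...`) that only exists on one particular machine. A hedged variant, following the same `${VAR:-default}` pattern the file already uses for `EXTERNAL_RESOURCE_DIR`, would parameterize it (`LOCAL_MODEL_PATH` is an assumed variable name, not part of this commit):

```yaml
# Hypothetical generalization of the mount added in this commit:
# the host-side model path comes from an env var with a fallback.
services:
  backend:
    volumes:
      - "${LOCAL_MODEL_PATH:-./models/model.gguf}:/app/local_model_path"
```

The container-side path `/app/local_model_path` must stay fixed, since `llm_loader.py` reads the model from exactly that location.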
2 changes: 1 addition & 1 deletion superagi/helper/llm_loader.py
@@ -22,7 +22,7 @@ def model(self):
         if self._model is None:
             try:
                 self._model = Llama(
-                    model_path="/app/local_model_path", n_ctx=self.context_length, n_gpu_layers=get_config('GPU_LAYERS'))
+                    model_path="/app/local_model_path", n_ctx=self.context_length, n_gpu_layers=get_config('GPU_LAYERS', '-1'))
             except Exception as e:
                 logger.error(e)
         return self._model
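The fix gives `get_config('GPU_LAYERS')` a default of `'-1'`, so the `Llama(...)` call no longer receives `None` when the config key is unset (in llama.cpp, `n_gpu_layers=-1` conventionally means "offload all layers to the GPU"). A minimal sketch of the lazy-loading property pattern around this call, assuming a simplified `get_config` backed by environment variables and a plain dict standing in for `llama_cpp.Llama` so the example runs without llama-cpp-python installed:

```python
import os

def get_config(key, default=None):
    # Hypothetical stand-in for SuperAGI's get_config: reads an
    # environment variable and falls back to the supplied default.
    return os.environ.get(key, default)

class LLMLoader:
    """Sketch of the lazy-loading @property pattern in llm_loader.py.

    A dict stands in for llama_cpp.Llama; the real helper passes the
    same three arguments to the Llama constructor.
    """

    def __init__(self, context_length=4096):
        self.context_length = context_length
        self._model = None  # loaded on first access, then cached

    @property
    def model(self):
        if self._model is None:
            try:
                # '-1' is the default this commit adds, so an unset
                # GPU_LAYERS no longer yields n_gpu_layers=None.
                self._model = {
                    "model_path": "/app/local_model_path",
                    "n_ctx": self.context_length,
                    "n_gpu_layers": int(get_config("GPU_LAYERS", "-1")),
                }
            except Exception as e:
                print(f"model load failed: {e}")
        return self._model

loader = LLMLoader()
print(loader.model["n_gpu_layers"])  # -1 when GPU_LAYERS is unset
```

Caching in `self._model` means the (expensive) model load happens once, on the first `model` access, rather than at construction time.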
