Commit
pin llama cpp python
Smartappli authored Jul 27, 2024
1 parent f8adc19 commit cb09745
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions Docker/cuda/cuda.Dockerfile
```diff
@@ -27,9 +27,9 @@ RUN python3 -m venv venv
 # RUN python3 -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette pydantic-settings starlette-context
 
 # Install llama-cpp-python (build with cuda)
-RUN CMAKE_ARGS="-DGGML_CUDA=on" venv/bin/pip install .[server]
-# RUN make clean
+RUN CMAKE_ARGS="-DGGML_CUDA=on" venv/bin/pip install .[server]==0.2.83
+# RUN make clean
 
 FROM nvidia/cuda:${CUDA_RUNTIME_IMAGE} as runtime
 
 # We need to set the host to 0.0.0.0 to allow outside access
```

Check warning — Code scanning / Hadolint (reported by Codacy) flags the added `RUN` line: "Ranges can only match single chars (mentioned due to duplicates)." The unquoted `[server]` extras specifier is parsed by the shell as a glob character range (with duplicate characters `e` and `r`), which is what triggers the warning.
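The Hadolint warning above appears to be ShellCheck's SC2102: in a POSIX shell, an unquoted `[server]` is a glob that matches a single character from the set {s, e, r, v}, not the literal extras name, so it can silently expand if a matching file exists. A minimal sketch of the behavior (the `/tmp/globdemo` path and the `.s` file are illustrative assumptions, not part of the commit):

```shell
# Create a scratch directory containing a hidden file named ".s"
mkdir -p /tmp/globdemo
cd /tmp/globdemo
touch .s

# Unquoted: the shell expands the glob .[server] to the matching file ".s"
echo .[server]      # prints: .s

# Quoted: the string is passed through literally, as pip expects
echo ".[server]"    # prints: .[server]
```

Quoting the argument (`pip install ".[server]==0.2.83"`) would silence the warning without changing pip's behavior.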
