update readme for the run-model-locally section
tybalex committed Jul 5, 2024
1 parent d47e062 commit 44c0130
Showing 1 changed file with 5 additions and 2 deletions.
docs/docs/README.md: 7 changes (5 additions & 2 deletions)
@@ -36,10 +36,13 @@ Try out the models immediately without downloading anything in [Huggingface Spaces]

## Run Rubra Models Locally

Check out our [documentation](https://docs.rubra.ai/category/serving--inferencing) to learn how to run Rubra models locally.
+We extend the following inference tools to run Rubra models locally in an OpenAI-compatible tool-calling format (see the usage sketch below):

-- [llama.cpp](https://github.com/ggerganov/llama.cpp)
-- [vllm](https://github.com/vllm-project/vllm)
+- [llama.cpp](https://github.com/rubra-ai/tools.cpp)
+- [vLLM](https://github.com/rubra-ai/vllm)

+Note: It is a known issue that Llama3 models (including 8B and 70B) are more prone to quality degradation from quantization. We recommend serving them with vLLM or using fp16 precision.
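
For illustration, here is a minimal sketch of tool calling against a locally served Rubra model through the OpenAI-compatible API exposed by either fork above. The endpoint `http://localhost:8000/v1`, the model name `rubra-model`, and the `get_weather` tool are placeholder assumptions, not values from the Rubra repositories:

```python
# Minimal sketch: OpenAI-compatible tool calling against a local Rubra server.
# Assumptions: one of the forks above is serving at http://localhost:8000/v1,
# and "rubra-model" is a placeholder for whatever model name it registers.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# A hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="rubra-model",  # placeholder model name
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call is in tool_calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

Because the format is OpenAI-compatible, the same client code should work unchanged against either serving backend.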

## Contributing
