
Commit 22b39e1

pavanjava and pavanmantha authored
llama_index serving integration documentation (vllm-project#6973)
Co-authored-by: pavanmantha <[email protected]>
1 parent f55a9ae commit 22b39e1

File tree: 2 files changed, +28 -0 lines changed


docs/source/serving/integrations.rst (+1)
@@ -12,3 +12,4 @@ Integrations
   deploying_with_lws
   deploying_with_dstack
   serving_with_langchain
+  serving_with_llamaindex

docs/source/serving/serving_with_llamaindex.rst (+27)
@@ -0,0 +1,27 @@
.. _run_on_llamaindex:

Serving with llama_index
============================

vLLM is also available via `llama_index <https://github.com/run-llama/llama_index>`_.

To install llama_index, run:

.. code-block:: console

    $ pip install llama-index-llms-vllm -q

To run inference on one or more GPUs, use the ``Vllm`` class from ``llama_index``.

.. code-block:: python

    from llama_index.llms.vllm import Vllm

    llm = Vllm(
        model="microsoft/Orca-2-7b",
        tensor_parallel_size=4,
        max_new_tokens=100,
        vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
    )
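
Once constructed, the ``llm`` object can be queried like any other LlamaIndex LLM. A minimal sketch of a completion call (the prompt string here is illustrative, not part of the original example):

.. code-block:: python

    # ``complete`` is the standard LlamaIndex single-prompt API; it runs the
    # prompt through the vLLM-backed model and returns a CompletionResponse.
    response = llm.complete("What is a black hole?")
    print(response.text)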

Please refer to this `Tutorial <https://docs.llamaindex.ai/en/latest/examples/llm/vllm/>`_ for more details.
