From eb21d4458ddd624dd21dfed12ba1de6d1a5da647 Mon Sep 17 00:00:00 2001
From: Jennifer Zhao <7443418+JenZhao@users.noreply.github.com>
Date: Wed, 26 Feb 2025 22:47:57 -0800
Subject: [PATCH] update

---
 docs/source/getting_started/v1_user_guide.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/docs/source/getting_started/v1_user_guide.md b/docs/source/getting_started/v1_user_guide.md
index e647a5861d57e..fc598687c0092 100644
--- a/docs/source/getting_started/v1_user_guide.md
+++ b/docs/source/getting_started/v1_user_guide.md
@@ -19,14 +19,18 @@ Previous blog post [vLLM V1: A Major Upgrade to vLLM's Core Architecture](https:
 more detailed list of the supported models.
 Encoder-decoder model support is not happening soon.
 
-## Unsupported features
+### Features deprecated in V1
+- best_of
+- logits_processors
+- beam_search
 
+## Unsupported Features
 ### LoRA
 
 - LoRA works for V1 on the main branch, but its performance is inferior to that of V0.
 The team is actively working on improving it; see this [PR](https://github.com/vllm-project/vllm/pull/13096).
 
-### Spec decode other than ngram
+### Spec Decode other than ngram
 - Currently, only ngram spec decode is supported in V1, as of this [PR](https://github.com/vllm-project/vllm/pull/12193).
 
 ### KV Cache Swapping & Offloading & FP8 KV Cache
@@ -34,7 +38,7 @@ Previous blog post [vLLM V1: A Major Upgrade to vLLM's Core Architecture](https:
 team is working actively on it.
 
 
-## Unsupported models
+## Unsupported Models
 
 ## FAQ
 
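A note on the deprecation list added above: `best_of` and `logits_processors` are V0-era fields of `SamplingParams`, and beam search is a separate generation path that was already split out of `SamplingParams` in recent releases. A minimal V0-style sketch of the first two, where the model name and the `ban_token` processor are placeholders for illustration:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model

def ban_token(token_ids, logits):
    # A logits processor receives the tokens generated so far plus the
    # next-token logits, and returns (possibly modified) logits.
    logits[42] = float("-inf")  # forbid an arbitrary token id
    return logits

params = SamplingParams(
    n=1,
    best_of=4,                      # sample 4 candidates, keep the best 1
    logits_processors=[ban_token],  # per-request logits manipulation
    max_tokens=32,
)
print(llm.generate(["Hello, my name is"], params)[0].outputs[0].text)
```

Per the list above, none of these three carry over to V1.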
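For the LoRA section: the standard adapter API is the natural way to exercise this on V1. A minimal sketch, with the base model, adapter name, and adapter path as placeholders:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Works on V1 main, but currently slower than V0 (see the PR above).
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

outputs = llm.generate(
    ["Write a SQL query that lists all users."],
    SamplingParams(max_tokens=64),
    # adapter name, integer id, and local path (all placeholders)
    lora_request=LoRARequest("sql-adapter", 1, "/path/to/sql-lora"),
)
print(outputs[0].outputs[0].text)
```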
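For the spec-decode section: ngram speculation proposes draft tokens by matching n-grams already present in the prompt, so no separate draft model is needed. A sketch of enabling it, assuming the V0-era engine arguments (`speculative_model="[ngram]"`, `num_speculative_tokens`, `ngram_prompt_lookup_max`) carry over to the V1 engine:

```python
import os

os.environ["VLLM_USE_V1"] = "1"  # opt in to the V1 engine

from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # placeholder model
    speculative_model="[ngram]",       # prompt-lookup (ngram) drafting
    num_speculative_tokens=5,          # draft tokens proposed per step
    ngram_prompt_lookup_max=4,         # longest n-gram matched in the prompt
)
```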
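For the KV-cache section: in V0 these features are controlled by engine arguments. The sketch below shows the V0 knobs, which per the section above should not be expected to work with V1 yet:

```python
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # placeholder model
    swap_space=4,                      # GiB of CPU swap space for KV swapping
    kv_cache_dtype="fp8",              # FP8 KV cache, V0 only for now
)
```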