
Commit

fix offline whisper+docs
Signed-off-by: NickLucche <[email protected]>
NickLucche committed Feb 14, 2025
1 parent 8be8105 commit ad59ed5
Showing 2 changed files with 21 additions and 1 deletion.
20 changes: 20 additions & 0 deletions docs/source/models/supported_models.md
@@ -939,6 +939,26 @@ The following table lists those that are tested in vLLM.
* ✅︎
:::

#### Transcription (`--task transcription`)

Speech-to-text models trained specifically for Automatic Speech Recognition (ASR).

:::{list-table}
:widths: 25 25 25 5 5
:header-rows: 1

- * Architecture
* Models
* Example HF Models
* [LoRA](#lora-adapter)
* [PP](#distributed-serving)
- * `Whisper`
* Whisper-based
* `openai/whisper-large-v3-turbo`
* 🚧
* 🚧
:::

_________________

## Model Support Policy
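For context (not part of this commit's diff), a minimal client-side sketch of how the new `--task transcription` section above could be exercised against a server launched with `vllm serve openai/whisper-large-v3-turbo --task transcription`. The server URL, API key, and `sample.wav` path are placeholders, and the use of the OpenAI Python client is an assumption rather than something this commit adds.

```python
# Hypothetical client sketch: query a vLLM server started with
#   vllm serve openai/whisper-large-v3-turbo --task transcription
# The base_url, api_key, and sample.wav path below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("sample.wav", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3-turbo",
        file=audio_file,
    )

print(transcription.text)
```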
2 changes: 1 addition & 1 deletion vllm/entrypoints/llm.py
@@ -421,7 +421,7 @@ def generate(
         instead pass them via the ``inputs`` parameter.
         """
         runner_type = self.llm_engine.model_config.runner_type
-        if runner_type != "generate":
+        if runner_type not in ["generate", "transcription"]:
             messages = [
                 "LLM.generate() is only supported for (conditional) generation "
                 "models (XForCausalLM, XForConditionalGeneration).",
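The relaxed `runner_type` check above is what allows `LLM.generate()` to run transcription models offline. A minimal sketch in the spirit of vLLM's offline Whisper example; the `mary_had_lamb` asset name, the prompt token, and the exact engine arguments are assumptions rather than part of this diff.

```python
# Hedged sketch of offline Whisper transcription via LLM.generate(),
# which the relaxed runner_type check above now permits.
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset  # bundled demo asset; name is an assumption

llm = LLM(
    model="openai/whisper-large-v3-turbo",
    max_model_len=448,                 # Whisper's decoder context length
    limit_mm_per_prompt={"audio": 1},  # one audio clip per prompt
)

prompts = [
    {
        "prompt": "<|startoftranscript|>",
        "multi_modal_data": {
            "audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
        },
    }
]

outputs = llm.generate(prompts, SamplingParams(temperature=0, max_tokens=200))
print(outputs[0].outputs[0].text)
```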
