
[V1][Metrics] Add several request timing histograms #12644

Merged

Conversation

markmc
Contributor

@markmc markmc commented Feb 1, 2025

Follow on from #12579, part of #10582. See the design doc in #12745 for more details.

Add the following:

  • vllm:e2e_request_latency_seconds
  • vllm:request_queue_time_seconds
  • vllm:request_inference_time_seconds
  • vllm:request_prefill_time_seconds
  • vllm:request_decode_time_seconds

e2e_request_latency is calculated relative to the arrival_time timestamp recorded by the frontend.
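
For a concrete picture, here is a minimal frontend-side sketch (not the PR's actual code) of defining such a histogram with prometheus_client and observing e2e latency against the recorded arrival_time; the helper name and bucket boundaries are made up for illustration:

```
import time

from prometheus_client import Histogram

# Histogram of end-to-end request latency; bucket boundaries here are
# placeholders, not the ones chosen in the PR.
histogram_e2e_request_latency = Histogram(
    name="vllm:e2e_request_latency_seconds",
    documentation="Histogram of end-to-end request latency in seconds.",
    buckets=[0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0],
)


def observe_finished_request(arrival_time: float) -> None:
    # arrival_time is the wall-clock timestamp the frontend recorded when
    # the request arrived, so e2e latency is simply "now - arrival_time".
    histogram_e2e_request_latency.observe(time.time() - arrival_time)
```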

For the rest ... we want to capture (in histograms) precise per-request timing intervals between certain events in the engine core:

  << queued timestamp >>
    [ queue interval ]
  << scheduled timestamp >>
    [ prefill interval ]
  << new token timestamp (FIRST) >>
    [ inter-token interval ]
  << new token timestamp >>
    [ decode interval ] (relative to first token time)
    [ inference interval ] (relative to scheduled time)
  << new token timestamp (FINISHED) >>

We want to collect these metrics in the frontend process, to keep the engine core freed up as much as possible. We need to calculate these intervals based on timestamps recorded by the engine core.

Engine core will include these timestamps in EngineCoreOutput (per request) as a sequence of timestamped events, and the frontend will calculate intervals and log them. Where we record these timestamped events:

  • QUEUED: scheduler add_request()
  • SCHEDULED: scheduler schedule()

There is an implicit NEW_TOKENS timestamp based on an initialization timestamp recorded on EngineCoreOutputs.
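
Roughly, the shape of this idea as a sketch (class and field names here are illustrative, not necessarily the PR's): the core stamps events with its monotonic clock, and the frontend reduces those timestamps to the intervals shown above:

```
import enum
import time
from dataclasses import dataclass, field


class EventType(enum.Enum):
    QUEUED = enum.auto()
    SCHEDULED = enum.auto()


@dataclass
class TimestampedEvent:
    # Recorded in the engine core process using its monotonic clock.
    type: EventType
    timestamp: float = field(default_factory=time.monotonic)


def request_intervals(queued: float, scheduled: float,
                      first_token: float, finished: float) -> dict:
    # All four timestamps come from the core, so these differences are the
    # queue/prefill/decode/inference intervals described above.
    return {
        "queue": scheduled - queued,
        "prefill": first_token - scheduled,
        "decode": finished - first_token,
        "inference": finished - scheduled,
    }
```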


github-actions bot commented Feb 1, 2025

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the v1 label Feb 1, 2025
@robertgshaw2-redhat robertgshaw2-redhat added the ready ONLY add when PR is ready to merge/full CI is needed label Feb 1, 2025
@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from 5755e17 to aa6b6a9 Compare February 5, 2025 11:48
@markmc markmc marked this pull request as ready for review February 5, 2025 11:49
Collaborator

@robertgshaw2-redhat robertgshaw2-redhat left a comment

Hey @markmc - left some small nits. This is looking good.

The one thing I am not sure about is that doing the timestamps from the perspective of the AsyncLLM does not quite give us the granularity to make a distinction between queue_time, prefill_time, and inference_time. If the prompt length is < chunked prefill size (which is usually the case), the timestamp of scheduled_time will be the same as the timestamp of first_token_time, since we will generate the first token in the same step as the first time it is scheduled.

I'm not sure how to get around this without inserting some timing logic into EngineCore which also feels not ideal + brittle. What do you think?

@markmc
Contributor Author

markmc commented Feb 5, 2025

Hey @markmc - left some small nits. This is looking good.

The one thing I am not sure about is that doing the timestamps from the perspective of the AsyncLLM does not quite give us the granularity to make a distinction between queue_time, prefill_time, and inference_time. If the prompt length is < chunked prefill size (which is usually the case), the timestamp of scheduled_time will be the same as the timestamp of first_token_time, since we will generate the first token in the same step as the first time it is scheduled.

I'm not sure how to get around this without inserting some timing logic into EngineCore which also feels not ideal + brittle. What do you think?

Ugh, how right you are! Yes, what I'm doing now seems very broken.

So, basically we would want scheduled_time to be before execute_model() is called?

We have:

<<< arrival timestamp >>>
  [ queued interval ]
<<< scheduled_timestamp >>>
  [ prefill interval ]
<<< first token timestamp >>>
  [ decode interval ]
<<< finished timestamp >>>

And the closest AsyncLLM can get to scheduled_timestamp is like "the completion of the first step that included this request" - we have no visibility into events that occur during a step.

So I guess our options are:

  1. Compare timestamps across processes - e.g. record scheduled_timestamp in the core and use it for interval calculations in the frontend. I would like to avoid this because we should be using monotonic time (unaffected by system clock changes) and you can't compare monotonic timestamps from different processes.
  2. Have the core send back an interval like "scheduling for this request happened N ms before first token"
  3. Calculate these intervals in the core - queue_time would need to be relative to the request's arrival in the core. I guess the computations would need to happen in the schedule() loop (queue time) and the update_from_output() loop (prefill and decode time).
  4. Collect timestamps in the core, but calculate the intervals on the frontend side. This might not be terrible, and likely more accurate?

I'll take a stab at (4), but definitely welcome feedback!
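
As a toy illustration of why (1) is problematic (assuming time.monotonic() on both sides): the monotonic clock's origin is arbitrary per process, so the difference between a frontend timestamp and a core timestamp carries no meaning, whereas with (4) the frontend only ever subtracts two timestamps that the core itself recorded:

```
import time

# Pretend these were taken in two different processes; each process's
# monotonic clock has its own arbitrary origin, so the difference below
# has no physical meaning.
core_scheduled_ts = time.monotonic()         # engine core process
frontend_first_token_ts = time.monotonic()   # frontend process
undefined_interval = frontend_first_token_ts - core_scheduled_ts

# Option (4): the core records both timestamps; the frontend only subtracts
# values taken from the same clock, so the interval is well defined.
core_first_token_ts = time.monotonic()       # also engine core process
prefill_interval = core_first_token_ts - core_scheduled_ts
```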

@robertgshaw2-redhat
Collaborator

(Quoting markmc's previous comment in full.)

  • I think (4) is very reasonable if we want to preserve these metrics. Especially if the timestamps are batch level (rather than per-request) the overhead should be very small (which is P0 for EngineCore)
  • The other option would be to just deprecate the concept of prefill_time and only implement metrics that can be computed from the POV of the AsyncLLM. This metric is valuable, but I would not describe prefill_time as an absolute must-have. I'm mostly just concerned about churning our users' Grafana setups. The other negative of this approach is that we might want to implement future metrics that need EngineCore-level granularity, and the setup you describe creates a good framework to build upon.

WDYT? Do you have any experience deprecating telemetry like this?

@markmc
Contributor Author

markmc commented Feb 6, 2025

  • I think (4) is very reasonable if we want to preserve these metrics. Especially if the timestamps are batch level (rather than per-request) the overhead should be very small (which is P0 for EngineCore)

See the new PR for what I got to, commit message pasted below

I might be missing something obvious about how to do this at the batch level; I'm really just thinking about it now. I guess the NEW_TOKENS event (and its timestamp) applies to all of the requests in the EngineCoreOutputs. The SCHEDULED and QUEUED events are only ever associated with a subset of the requests in EngineCoreOutputs, though. So ... yeah ... we can probably do better on this.

  • The other option would be to just deprecate the concept of prefill_time and only implement metrics that can be computed from the POV of the AsyncLLM. This metric is valuable, but I would not describe prefill_time as an absolute must-have. I'm mostly just concerned about churning our users' Grafana setups. The other negative of this approach is that we might want to implement future metrics that need EngineCore-level granularity, and the setup you describe creates a good framework to build upon.

WDYT? Do you have any experience deprecating telemetry like this?

I guess I'm being conservative and assuming that removing anything that's in the example dashboard would be disruptive, and so we'd need a good reason to not add it - e.g. if the overhead was too large. So I guess the point of this discussion is to see if we can do it with a reasonable level of overhead.

Commit message below:

We want to capture (in histograms) precise per-request timing
intervals between certain events in the engine core:

  << queued timestamp >>
    [ queue interval ]
  << scheduled timestamp >>
    [ prefill interval ]
  << new token timestamp (FIRST) >>
    [ inter-token interval ]
  << new token timestamp >>
    [ decode interval ] (relative to first token time)
    [ inference interval ] (relative to scheduled time)
  << new token timestamp (FINISHED) >>

We want to collect these metrics in the frontend process, to keep the
engine core freed up as much as possible. We need to calculate these
intervals based on timestamps recorded by the engine core.

Engine core will include these timestamps in EngineCoreOutput (per
request) as a sequence of timestamped events, and the frontend will
calculate intervals and log them.

Where we record these timestamped events:
- QUEUED: scheduler add_request()
- SCHEDULED: scheduler schedule()
- NEW_TOKEN: scheduler update_from_output()

There will always be a NEW_TOKEN event in each EngineCoreOutput, but there
may also be QUEUED and SCHEDULED events included.
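
As a sketch of how the frontend side might consume these (the helper and event attribute names are hypothetical, not the PR's exact classes): it keeps per-request timing state, updates it from each EngineCoreOutput's events, and observes the interval histograms when the request finishes:

```
from dataclasses import dataclass
from typing import Optional


@dataclass
class RequestTimes:
    # Per-request timing state kept by the frontend.
    queued: Optional[float] = None
    scheduled: Optional[float] = None
    first_token: Optional[float] = None


def update_from_output(times: RequestTimes, events, finished: bool,
                       histograms: dict) -> None:
    last_token_ts = None
    for event in events:
        if event.type_name == "QUEUED":
            times.queued = event.timestamp
        elif event.type_name == "SCHEDULED":
            times.scheduled = event.timestamp
        elif event.type_name == "NEW_TOKEN":
            last_token_ts = event.timestamp
            if times.first_token is None:
                times.first_token = event.timestamp
    if finished and None not in (times.queued, times.scheduled,
                                 times.first_token, last_token_ts):
        histograms["queue"].observe(times.scheduled - times.queued)
        histograms["prefill"].observe(times.first_token - times.scheduled)
        histograms["decode"].observe(last_token_ts - times.first_token)
        histograms["inference"].observe(last_token_ts - times.scheduled)
```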

@markmc
Contributor Author

markmc commented Feb 7, 2025

I might be missing something obvious about how to do this at the batch level; I'm really just thinking about it now. I guess the NEW_TOKENS event (and its timestamp) applies to all of the requests in the EngineCoreOutputs. The SCHEDULED and QUEUED events are only ever associated with a subset of the requests in EngineCoreOutputs, though. So ... yeah ... we can probably do better on this.

Good call on that, done now 👍
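
For illustration, a hedged sketch of that batch-level shape (the field names are assumptions; the real message layout may differ): a single timestamp recorded on EngineCoreOutputs stands in for the NEW_TOKENS time of every request in the batch, while QUEUED/SCHEDULED events remain per-request:

```
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class EngineCoreOutput:
    request_id: str
    # Per-request QUEUED/SCHEDULED events (only present when they occurred
    # since the last batch of outputs for this request).
    events: list = field(default_factory=list)


@dataclass
class EngineCoreOutputs:
    outputs: List[EngineCoreOutput]
    # One timestamp taken when the batch is constructed; it serves as the
    # implicit NEW_TOKENS time for every request in `outputs`.
    timestamp: float = field(default_factory=time.monotonic)
```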

@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from 3725022 to 38cf896 Compare February 7, 2025 15:24

mergify bot commented Feb 7, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @markmc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 7, 2025
@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from 38cf896 to 37e5b11 Compare February 7, 2025 16:57
@markmc
Contributor Author

markmc commented Feb 7, 2025

Rebased onto the logprobs commit 👍

@mergify mergify bot removed the needs-rebase label Feb 7, 2025

mergify bot commented Feb 9, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @markmc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 9, 2025
@markmc
Contributor Author

markmc commented Feb 9, 2025

Added another commit to make --disable-log-stats remove the overhead of this new stuff

[V1][Metrics] Make --disable-log-stats more effective
Avoid constructing:

- The logging and prometheus loggers
- IterationStats and RequestStateStats in the output processor
- SchedulerStats and EngineCoreEvents in the scheduler

Also, add TODO for https://github.com/vllm-project/vllm/pull/12592 to disable prefix cache stats.
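
The gating pattern, as a minimal sketch under assumed names (the real output processor differs): stats objects are only constructed when stats logging is enabled, so --disable-log-stats skips that work on the hot path entirely:

```
class IterationStats:
    # Stand-in for the real per-iteration stats object.
    def __init__(self) -> None:
        self.num_generation_tokens = 0


class OutputProcessor:
    def __init__(self, log_stats: bool) -> None:
        self.log_stats = log_stats

    def process_outputs(self, outputs):
        # With --disable-log-stats, log_stats is False and no stats objects
        # are ever allocated or updated while processing outputs.
        iteration_stats = IterationStats() if self.log_stats else None
        for _ in outputs:
            if iteration_stats is not None:
                iteration_stats.num_generation_tokens += 1
        return iteration_stats
```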

@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from 75e7574 to b8d495c Compare February 9, 2025 22:46
@mergify mergify bot removed the needs-rebase label Feb 9, 2025

mergify bot commented Feb 10, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @markmc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 10, 2025
@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from b8d495c to 86415ad Compare February 10, 2025 07:39
@mergify mergify bot removed the needs-rebase label Feb 10, 2025
@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch 2 times, most recently from 99be9c4 to dd11742 Compare February 10, 2025 11:58

mergify bot commented Feb 11, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @markmc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 11, 2025

@markmc markmc force-pushed the metrics-v1-prometheus-logger-6 branch from dd11742 to 79e8f3a Compare February 11, 2025 07:16
@mergify mergify bot removed the needs-rebase label Feb 11, 2025
@robertgshaw2-redhat
Collaborator

Nice work. This timestamp mechanism from the POV of the EngineCore has the added benefit of working when we add N processes in the frontend.

@robertgshaw2-redhat robertgshaw2-redhat merged commit 75e6e14 into vllm-project:main Feb 11, 2025
31 checks passed
SzymonOzog pushed a commit to SzymonOzog/vllm that referenced this pull request Feb 12, 2025
kwang1012 pushed a commit to kwang1012/vllm that referenced this pull request Feb 12, 2025
panf2333 pushed a commit to yottalabsai/vllm that referenced this pull request Feb 18, 2025
kerthcet pushed a commit to kerthcet/vllm that referenced this pull request Feb 21, 2025
hongxiayang pushed a commit to ROCm/vllm that referenced this pull request Feb 25, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Mar 5, 2025
Said-Akbar pushed a commit to Said-Akbar/vllm-rocm that referenced this pull request Mar 7, 2025