[Misc] Use VisionArena Dataset for VLM Benchmarking #12389
Conversation
Signed-off-by: Roger Wang <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
QQ: Do you still plan to add offline benchmarks (like #11196)?
We will use this one, since it is sampled from Chatbot Arena and should simulate real-world traffic much better.
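For reference, here is a minimal sketch (not part of this PR) of inspecting the VisionArena data that the benchmark pulls from the Hugging Face Hub, using the `datasets` library. The printed column names depend on the dataset's actual schema, so treat the field access below as illustrative only.

```python
# Minimal sketch (not part of this PR): peek at the VisionArena dataset
# used by the benchmark. Column names depend on the dataset's schema,
# so this only prints whatever fields the split actually provides.
from datasets import load_dataset

ds = load_dataset("lmarena-ai/vision-arena-bench-v0.1", split="train")
print(ds)            # number of rows and column names
sample = ds[0]       # first record
print(list(sample))  # field names available in each record
```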
* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)
* [ci/build] fix wheel size check (vllm-project#12396)
* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)
* [ci/build] sync default value for wheel size (vllm-project#12398)
* [Misc] Enable proxy support in benchmark script (vllm-project#12356)
* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)
* [Misc] Remove deprecated code (vllm-project#12383)
* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)
* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)
* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)
* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)
* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)
* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)
* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)
* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)
* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)
* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)
* [Frontend] generation_config.json for maximum tokens (vllm-project#12242)
* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)
* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)
* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)
* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)
* [Frontend] Rerank API (Jina- and Cohere-compatible API) (vllm-project#12376)
* [DOC] Add link to vLLM blog (vllm-project#12460)
* [V1] Avoid list creation in input preparation (vllm-project#12457)
* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)
* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)
* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)
* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)
* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)
Command:

```bash
python3 benchmark_serving.py \
    --model <server-model-id> \
    --backend openai-chat \
    --endpoint /v1/chat/completions \
    --dataset-name hf \
    --dataset-path lmarena-ai/vision-arena-bench-v0.1 \
    --hf-split train \
    --num-prompts 500 \
    --request-rate 1 \
    --percentile-metrics ttft,tpot,e2el
```
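Before running the command it can help to confirm the server answers multimodal chat requests on the same endpoint the benchmark targets. Below is a hedged sketch of the kind of request the `openai-chat` backend issues; the base URL, model id, image URL, and prompt are placeholders, not values taken from this PR.

```python
# Sketch of the kind of request benchmark_serving.py sends with
# --backend openai-chat; all values below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="<server-model-id>",  # placeholder: the model served by vLLM
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```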