Actions: ROCm/vllm

clang-format

725 workflow runs

RPD Profiling
clang-format #581: Pull request #208 synchronize by dllehr-amd
September 27, 2024 16:21 20s rpd_main_2

RPD Profiling
clang-format #580: Pull request #208 synchronize by dllehr-amd
September 27, 2024 16:20 22s rpd_main_2

[Int4-AWQ] Fix AWQ Marlin check for ROCm (#206)
clang-format #579: Commit b79f9f4 pushed by hegemanjw4amd
September 27, 2024 16:15 16s main

fix dbrx weight loader
clang-format #577: Pull request #212 opened by divakar-amd
September 26, 2024 22:58 17s divakar-amd-patch-1

multi-gpu fused_moe tuning support
clang-format #575: Pull request #143 synchronize by divakar-amd
September 26, 2024 21:59 18s distributed_fmoe_tuning

extend moe padding to DUMMY weights (#211)
clang-format #574: Commit 9858710 pushed by gshtras
September 26, 2024 21:52 20s main

extend moe padding to DUMMY weights
clang-format #573: Pull request #211 synchronize by divakar-amd
September 26, 2024 21:28 20s divakar-amd-patch-1

extend moe padding to DUMMY weights
clang-format #572: Pull request #211 synchronize by divakar-amd
September 26, 2024 21:22 18s divakar-amd-patch-1

extend moe padding to DUMMY weights
clang-format #571: Pull request #211 synchronize by gshtras
September 26, 2024 21:00 23s divakar-amd-patch-1

extend moe padding to DUMMY weights
clang-format #570: Pull request #211 opened by divakar-amd
September 26, 2024 20:59 17s divakar-amd-patch-1

add block_manager_v2.py into setup_cython: block_manager_v2 is used w…
clang-format #569: Commit 5c50fca pushed by gshtras
September 26, 2024 20:54 20s main

multi-gpu fused_moe tuning support
clang-format #568: Pull request #143 synchronize by divakar-amd
September 26, 2024 19:14 26s distributed_fmoe_tuning

add block_manager_v2.py into setup_cython
clang-format #567: Pull request #210 opened by sanyalington
September 26, 2024 17:58 24s shsanyal_dev_vllm61

re-enable avoid torch slice fix when chunked prefill is disabled (#209)
clang-format #566: Commit a5d87a1 pushed by gshtras
September 26, 2024 17:37 17s main

re-enable avoid torch slice fix when chunked prefill is disabled
clang-format #565: Pull request #209 opened by sanyalington
September 26, 2024 17:02 18s shsanyal_vllm61_dev

RPD Profiling
clang-format #564: Pull request #208 synchronize by AdrianAbeyta
September 26, 2024 01:01 20s rpd_main_2

RPD Profiling
clang-format #563: Pull request #208 opened by dllehr-amd
September 25, 2024 21:24 19s rpd_main_2

multi-gpu fused_moe tuning support
clang-format #562: Pull request #143 synchronize by divakar-amd
September 25, 2024 20:19 23s distributed_fmoe_tuning

Revert "[Kernel] changing fused moe kernel chunk size default to 32k …
clang-format #561: Commit cc2039c pushed by gshtras
September 25, 2024 15:34 23s main

Revert "[Kernel] changing fused moe kernel chunk size default to 32k (#7995)"
clang-format #560: Pull request #207 opened by gshtras
September 25, 2024 15:04 22s moe_size_revert

With chunked prefil, for large prompts, the sampler can encounter a z…
clang-format #558: Commit 48c0cb4 pushed by gshtras
September 23, 2024 22:19 23s main

Gating n=0 case from skinny gemm
clang-format #557: Pull request #204 opened by gshtras
September 23, 2024 22:00 19s skinny_chunked_prefil