
Organizations

@vllm-project


Pinned

  1. vllm-project/vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 40.5k stars · 6.1k forks

  2. vllm Public

    Forked from vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python

708 contributions in the last year

[Contribution graph: daily activity calendar covering March 2024 through early March 2025]

Activity overview

Contributed to vllm-project/vllm, sgl-project/sglang, ywang96/vllm and 11 other repositories
A graph representing ywang96's contributions from March 03, 2024 to March 07, 2025: 52% code review, 27% commits, 20% pull requests, 1% issues.

Contribution activity

March 2025

Created 1 commit in 1 repository

Created a pull request in vllm-project/vllm that received 7 comments

[Misc][V1] Avoid using envs.VLLM_USE_V1 in mm processing

The main difference between V0 and V1 multimodal input processing is that in V1 we need mm_hashes for downstream tasks (prefix caching, feature cac…

+38 −8 lines changed · 7 comments
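
The PR description above hinges on mm_hashes being stable, content-derived keys for multimodal inputs. Below is a minimal sketch of that idea, not vLLM's actual V1 implementation; the names `hash_mm_item`, `_feature_cache`, and `get_or_compute_features` are hypothetical, invented for illustration:

```python
# Minimal sketch of content-hashing multimodal inputs for caching.
# NOTE: hypothetical illustration, not vLLM's V1 code; all names below
# are invented for this example.
import hashlib
import pickle
from typing import Any, Callable

def hash_mm_item(item: Any) -> str:
    """Return a stable content hash for a multimodal input (e.g. image bytes).

    A deterministic key lets the engine detect that the same image appeared
    in an earlier request, so cached prefill state or extracted features can
    be reused instead of recomputed.
    """
    payload = pickle.dumps(item, protocol=pickle.HIGHEST_PROTOCOL)
    return hashlib.sha256(payload).hexdigest()

# Toy feature cache keyed by the content hash: same bytes -> cache hit,
# no matter which request (or prompt) they arrive with.
_feature_cache: dict[str, Any] = {}

def get_or_compute_features(item: Any, compute: Callable[[Any], Any]) -> Any:
    key = hash_mm_item(item)
    if key not in _feature_cache:
        _feature_cache[key] = compute(item)
    return _feature_cache[key]

# Both calls hash to the same key, so the expensive step runs only once.
feats = get_or_compute_features(b"<image bytes>", lambda b: len(b))
feats_again = get_or_compute_features(b"<image bytes>", lambda b: len(b))
```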
Reviewed 13 pull requests in 1 repository

Created an issue in vllm-project/vllm that received 1 comment

[Usage]: Clean up Engine Args & Documentation

Your current environment: Currently vLLM has a lot of engine arguments, listed here: https://docs.vllm.ai/en/latest/serving/engine_args.html. Over tim…

1 task done
1 comment
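
The issue above concerns the sprawl of engine arguments. For context, here is a small example setting a few commonly documented ones through the Python API; the argument set evolves, the docs page linked above is the authoritative list, and the values here are arbitrary:

```python
# Illustrative subset of vLLM engine arguments, set via the Python API; the
# same options exist as flags on the OpenAI-compatible server. Values are
# arbitrary examples -- consult the engine_args docs for the full list.
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",      # Hugging Face model to load
    max_model_len=2048,             # cap the context window
    gpu_memory_utilization=0.90,    # fraction of GPU memory to reserve
    tensor_parallel_size=1,         # GPUs used for tensor parallelism
)

out = llm.generate("Hello, my name is")
print(out[0].outputs[0].text)
```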
