[V1][Molmo] Fix get_multimodal_embeddings() in molmo.py #14161
Conversation
Expected: `get_multimodal_embeddings()` should return `list[Tensor]` for `GPUModelRunner` to iterate.

Actual: prior to this PR, Molmo's `_get_mm_embeds()` returns a list, so `get_multimodal_embeddings()` returns a list of lists.

This is reproducible when all of the following hold:
* more than one request
* the trailing part of each request differs slightly, to trigger a partial cache hit

This PR also updates vision_language.py to help reproduce the issue. Tested with:
```
VLLM_USE_V1=1 \
python examples/offline_inference/vision_language.py \
--model molmo \
--num-prompts=2 \
--use-different-prompt-per-request
```
Signed-off-by: Linkun Chen <[email protected]>
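The bug described above can be sketched with plain strings standing in for `torch.Tensor` (names and values here are hypothetical, for illustration only): appending each request's embedding list yields the nested `list[list[...]]` the model runner cannot iterate, while extending yields the expected flat `list[Tensor]`.

```python
# Sketch of the bug, with strings standing in for torch.Tensor.
def _get_mm_embeds(request_items):
    """Per-request embeddings (hypothetical stand-in values)."""
    return ["emb_" + item for item in request_items]

requests = [["imgA"], ["imgB"]]

# Buggy accumulation: appending the whole list produces a nested result.
buggy = []
for req in requests:
    buggy.append(_get_mm_embeds(req))

# Fixed accumulation: extending keeps the list flat, one entry per item,
# which is the shape the model runner iterates over.
fixed = []
for req in requests:
    fixed.extend(_get_mm_embeds(req))

assert buggy == [["emb_imgA"], ["emb_imgB"]]   # list of lists (the bug)
assert fixed == ["emb_imgA", "emb_imgB"]       # flat list (the fix)
```

Note the nesting only shows up with more than one request, which is why the reproduction above needs `--num-prompts=2` and differing prompts.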
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
```diff
-    def get_multimodal_embeddings(self, **kwargs) -> Optional[T]:
+    def get_multimodal_embeddings(
+        self, **kwargs
+    ) -> Union[list[torch.Tensor], torch.Tensor, tuple[torch.Tensor, ...]]:
```
In this particular case, we should change the typevar instead.
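A rough sketch of what "change the typevar instead" could look like: instead of widening the method's return annotation at the subclass, the shared `TypeVar` is constrained to the shapes the runner accepts. Everything below (`Tensor` stand-in class, `MolmoLike`, the exact constraint set) is hypothetical and only illustrates the idea, not vLLM's actual definitions.

```python
from typing import Optional, TypeVar

class Tensor:
    """Hypothetical stand-in for torch.Tensor, for illustration."""

# Constrained TypeVar: implementations may return a single tensor,
# a flat list of tensors, or a tuple of tensors.
T = TypeVar("T", Tensor, list, tuple)

class MolmoLike:
    def get_multimodal_embeddings(self, **kwargs) -> Optional[list]:
        # Returning a flat list keeps the contract: one entry per
        # multimodal item, never a nested list of lists.
        return [Tensor(), Tensor()]

embeds = MolmoLike().get_multimodal_embeddings()
assert isinstance(embeds, list) and len(embeds) == 2
```

Adjusting the typevar keeps every overriding model in sync through one shared interface definition, rather than patching each subclass's annotation individually.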
Done, tested
Signed-off-by: Linkun Chen <[email protected]>
Thanks for fixing!
Signed-off-by: Linkun Chen <[email protected]>
Head branch was pushed to by a user without write access
[V1][Molmo] Fix get_multimodal_embeddings() in molmo.py (#14161) Signed-off-by: Johnny <[email protected]>
cc @DarkLight1337 @ywang96