
Multimodal #66


Open

wants to merge 49 commits into main
Conversation

anmarques (Member) commented Nov 4, 2024

This PR adds support for benchmarking multimodal models.

It mostly extends the existing infrastructure to support requests containing images. For emulated requests it downloads images from an illustrated version of Pride and Prejudice and randomly selects from them.

The load_images logic is currently limited to downloading from URLs. It should be extended to HF datasets and local files in the future.
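A minimal sketch of the URL-only flow described above: download each image once up front, then sample randomly per emulated request. Function names and signatures here are hypothetical and may not match the PR's actual helpers.

```python
import random
import urllib.request


def load_images(urls):
    """Download each image URL and return the raw bytes.

    Hypothetical sketch of the PR's load_images helper. URL-only for now,
    mirroring the limitation noted above; HF datasets and local files
    would need separate branches here.
    """
    images = []
    for url in urls:
        with urllib.request.urlopen(url) as resp:  # network fetch
            images.append(resp.read())
    return images


def sample_image(images, rng=random):
    """Pick one preloaded image at random for an emulated request."""
    return rng.choice(images)
```

Downloading once and sampling from memory keeps the benchmark loop free of network latency that would otherwise skew request timings.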

I tested by running the following command:

guidellm --data="prompt_tokens=128,generated_tokens=128,images=1" --data-type emulated --model microsoft/Phi-3.5-vision-instruct --target "http://localhost:8000/v1" --max-seconds 20

On 2x A5000 GPUs I had to set max_concurrency=4 to run this command due to memory limitations.
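For reference, a multimodal request against the OpenAI-compatible endpoint targeted above embeds the image alongside the text prompt. A minimal sketch, with the payload shape assumed from the standard chat-completions API rather than taken from guidellm's internals:

```python
import base64


def build_multimodal_request(model, prompt, image_bytes, max_tokens=128):
    """Assemble an OpenAI-style chat payload with one inline image.

    Sketch only: guidellm's internal request builder may differ. The image
    is sent as a base64 data URL in an image_url content part.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }
```

Each inline image inflates the request body (base64 adds roughly 33% over the raw bytes) and consumes additional vision tokens server-side, which is consistent with needing a lower max_concurrency than a text-only benchmark.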

@anmarques anmarques requested a review from markurtz November 5, 2024 01:11
@rgreenberg1 (Collaborator)

This is for image models only, right? Text-to-image only?

Labels
multi-modal Support for benchmarking new multi-modal models
Projects
Status: In review