
Added Llava and BakLlava #2234

Merged
Merged 22 commits into main from joss-llava on Jan 13, 2024

Conversation

@JosselinSomervilleRoberts (Contributor) commented on Jan 13, 2024:

Resolves #1949

Example using BakLLaVA on VQA:

USER: <image>
Where is the horse?
ASSISTANT: 

[image attached to the PR description]

Output:

The horse is standing on a city street corner, next to a sidewalk.
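The example above follows the single-turn LLaVA conversation template, where the `<image>` placeholder marks where the processor inserts image tokens. As a minimal sketch (the helper name is hypothetical and not part of this PR), building such a prompt might look like:

```python
def format_llava_prompt(question: str) -> str:
    """Build a single-turn LLaVA-style VQA prompt.

    The "<image>" placeholder is where the model's processor
    substitutes the encoded image tokens.
    """
    return f"USER: <image>\n{question}\nASSISTANT:"


# Reproduces the prompt shown in the PR description.
prompt = format_llava_prompt("Where is the horse?")
print(prompt)
```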

@teetone (Member) left a comment:


Looks good to me, except for a few minor comments. I think the tests are failing because of the new files. Could you also include an example prompt/image and output in the PR description? Thanks!

Review threads on test_llava.py and src/helm/config/model_metadata.yaml (outdated, resolved)
@teetone added the models and VHELM (Holistic Evaluation of Vision-Language Models) labels on Jan 13, 2024
@JosselinSomervilleRoberts merged commit 30a6743 into main on Jan 13, 2024
6 checks passed
@JosselinSomervilleRoberts JosselinSomervilleRoberts deleted the joss-llava branch January 13, 2024 22:39
brianwgoldman pushed a commit that referenced this pull request Feb 6, 2024

Merging this pull request closes: Support LLaVA 1.5 (#1949)