
VLMs vs LLMs evaluation #12

Open
idan-tankel opened this issue Oct 22, 2023 · 1 comment

Comments

@idan-tankel

Hello 👋

First of all thank you for the great work and evaluation results!

I understand that in many cases you predicted the output for each question by choosing the candidate answer that minimizes the loss of the model being evaluated. I wanted to ask: was there any difference between the evaluation of VLMs and LLMs? And if so, how did you put the results on the same scale?

Many thanks,
Idan

@Bohao-Lee
Collaborator

Thank you for your interest in our work, and we apologize for the delayed response. In practice, there is no difference between the evaluation of VLMs and LLMs. The only distinction is that LLMs take only the question as input, while VLMs take both the question and its corresponding image. In both cases we simply compute the loss for each candidate answer and select the candidate with the lowest loss.
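For readers unfamiliar with this evaluation style, here is a minimal sketch of loss-based multiple-choice scoring (the function names `candidate_loss`, `predict`, and `toy_score` are illustrative, not from the authors' codebase). The model scores each candidate answer by the average token-level negative log-likelihood of that candidate given the question (a VLM would additionally condition on the image), and the candidate with the lowest loss is taken as the prediction. Since only the per-candidate loss is compared, LLMs and VLMs land on the same scale:

```python
def candidate_loss(token_logprobs):
    """Average negative log-likelihood over the candidate's tokens."""
    return -sum(token_logprobs) / len(token_logprobs)

def predict(question, candidates, score_fn):
    """Return the candidate whose continuation has the lowest loss.

    `score_fn(question, candidate)` is assumed to return the per-token
    log-probabilities of the candidate given the question; for a VLM it
    would also take the image as conditioning input. Only the ranking
    of losses matters, so the procedure is identical for both model types.
    """
    losses = [candidate_loss(score_fn(question, c)) for c in candidates]
    return candidates[min(range(len(candidates)), key=losses.__getitem__)]

# Toy deterministic scorer for illustration: pretends the model
# assigns much higher probability to the tokens of "Paris".
def toy_score(question, candidate):
    per_token_logprob = -0.1 if candidate == "Paris" else -2.0
    return [per_token_logprob] * max(len(candidate.split()), 1)

print(predict("What is the capital of France?",
              ["London", "Paris", "Berlin"], toy_score))
# → Paris
```

In a real evaluation, `score_fn` would run the model once per candidate (question plus candidate concatenated, with the image prepended for a VLM) and read off the log-probabilities of the candidate's tokens.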
