
DPO performance on other models #9

Open
thusharakart opened this issue Feb 20, 2024 · 7 comments
Labels: enhancement (New feature or request)

Comments

@thusharakart

Do you have data on the performance of DPO with models other than Qwen-VL-Chat? I found that it degrades both perception and cognition in MME when used with LLaVA-1.5.

@TobiasLee
Collaborator

We did not explore DPO with LLaVA models. Could you share your results and example outputs before/after DPO so we can dig into it?

@thusharakart
Author

The following are the results on the MME benchmark.

| Model | Perception | Cognition | OCR |
| --- | --- | --- | --- |
| LLaVA-v1.5-7B + DPO | 1342 | 313 | 125 |
| LLaVA-v1.5-13B + DPO | 1425 | 312 | 130 |

@TobiasLee
Collaborator

How many epochs have you trained with DPO?

@thusharakart
Author

The above results are from 1 epoch of training for the 7B model and 3 epochs for the 13B model.

@TobiasLee
Collaborator

I'm sorry for not getting back to you sooner. We also recently explored DPO training on the LLaVA backbone and observed degraded MME performance. However, scores on other benchmarks improved consistently.

| Model | MM-Vet | MMHal | MMBench |
| --- | --- | --- | --- |
| LLaVA-v1.5-7B | 30.5 | 2.42 | 63.0 |
| LLaVA-v1.5-7B + DPO | 31.7 | 2.62 | 63.9 |

We attribute this to the model no longer following the simple answer format required by MME after DPO training, and we would like to investigate it further.
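
For context on why that matters, here is a minimal sketch (not the official MME evaluation code) of the strict yes/no parsing that MME-style scoring relies on, assuming an answer is only credited when it can be mapped to a bare "Yes" or "No":

```python
from typing import Optional

def parse_yes_no(response: str) -> Optional[str]:
    """Map a free-form response to 'yes'/'no', or None if neither is recognized."""
    text = response.strip().lower()
    if text.startswith("yes"):
        return "yes"
    if text.startswith("no"):
        return "no"
    # e.g. "The image shows a cat, so the answer is yes." -> not recognized
    return None

def score(response: str, ground_truth: str) -> bool:
    """Count the answer as correct only if it parses and matches the label."""
    pred = parse_yes_no(response)
    return pred is not None and pred == ground_truth.strip().lower()
```

Under a parser like this, a more verbose post-DPO response can be scored as wrong even when the underlying judgment is correct.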

@choucaicai

choucaicai commented May 20, 2024

Maybe you can add a prompt like this: `query = f'<img>{img_path}</img>\n{question} You can only use "Yes" or "No" as your response, without adding any extra text or explanation.'`
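
For reference, a minimal sketch of building that constrained prompt, assuming the Qwen-VL-Chat `<img>...</img>` prompt format; `img_path` and `question` are hypothetical placeholder values for one MME sample:

```python
# Hypothetical example inputs for a single MME question.
img_path = "mme/images/000001.jpg"
question = "Is there a dog in this image?"

# Append an explicit answer-format constraint to the question, as suggested above.
query = (
    f"<img>{img_path}</img>\n"
    f"{question} "
    'You can only use "Yes" or "No" as your response, '
    "without adding any extra text or explanation."
)
print(query)
```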

@TobiasLee
Collaborator

TobiasLee commented Jun 5, 2024

Hi all, we found a great repo with support and results for many other models: https://github.com/TideDra/VL-RLHF

Performance is boosted almost consistently for LLaVA-Next series models. So my guess is that the current LLaVA-v1.5 series is too weak to serve as a starting model for DPO (possibly due to its lower input resolution of 336 vs. Qwen-VL). The LLaVA-Next series is more powerful thanks to its image tiling mechanism.
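
As a rough illustration of the tiling idea (a sketch of the general approach only, not LLaVA-Next's actual preprocessing): the image is encoded as a resized global view plus several fixed-size crops, so fine detail survives that a single 336x336 downscale would lose.

```python
from PIL import Image

TILE = 336  # base input resolution of the vision encoder used by LLaVA-v1.5

def make_views(image: Image.Image, grid: tuple[int, int] = (2, 2)) -> list[Image.Image]:
    """Return a resized global view plus grid[0] x grid[1] local tiles."""
    cols, rows = grid
    views = [image.resize((TILE, TILE))]           # low-res global view
    big = image.resize((cols * TILE, rows * TILE)) # upscaled canvas to crop from
    for r in range(rows):
        for c in range(cols):
            box = (c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE)
            views.append(big.crop(box))            # one 336x336 local tile
    return views
```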

Check it out if you want to further explore the DPO/RLHF with VLFeedback!

TobiasLee added the enhancement label on Jun 5, 2024