Add OpenFlamingo #2237
Conversation
…into michi_openflamingo
A general comment: I am not sure you took the most straightforward approach here by using an implementation as low-level as this one. Is there no Hugging Face implementation (or other framework)? Could you try something similar to the pipeline approach I used in the Llava PR? In its current state I feel like it will be very hard or impossible to update, and it will just become dead code once something breaks. If there is no other way, then let's keep it as is.
requirements.txt (outdated):
@@ -168,7 +171,7 @@ torchvision==0.13.1 ; sys_platform == "darwin"
 torch==1.12.1+cu113 ; sys_platform == "linux"
 torchvision==0.13.1+cu113 ; sys_platform == "linux"
 tqdm==4.64.1
-transformers==4.36.0
+transformers==4.32.0
Why did you revert this? This is going to break Llava
Thanks for the review! This was a bit tricky. OpenFlamingo seems to require 4.32.0; I tried 4.36.0, but encountered errors like ImportError: cannot import name '_expand_mask' from 'transformers.models.bloom.modeling_bloom'
(similar to salesforce/LAVIS#571).
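A hypothetical guard, not part of this PR, that illustrates the conflict being discussed: OpenFlamingo depends on private transformers helpers (such as `_expand_mask`) that were removed in later releases, so only the pinned version works. The function name and the `4.32.0` bound are taken from this thread, not from any official compatibility matrix.

```python
def parse_version(v: str) -> tuple:
    # "4.32.0" -> (4, 32, 0); pre-release suffixes are ignored for simplicity
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())


def check_transformers_for_openflamingo(installed: str, required: str = "4.32.0") -> bool:
    # OpenFlamingo relies on private transformers internals (e.g. `_expand_mask`)
    # that later releases removed, so only the exact pinned version passes.
    return parse_version(installed) == parse_version(required)
```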
setup.cfg (outdated):
@@ -52,7 +52,7 @@ install_requires=
 scikit-learn~=1.1.2

 # Models and Metrics Extras
-transformers~=4.36.0 # For anthropic_client, vision_language.huggingface_vlm_client, huggingface_client, huggingface_tokenizer, test_openai_token_cost_estimator, model_summac (via summarization_metrics)
+transformers>=4.28.0 # For anthropic_client, vision_language.huggingface_vlm_client, huggingface_client, huggingface_tokenizer, test_openai_token_cost_estimator, model_summac (via summarization_metrics)
Same, we need transformers>=4.36.0 here.
Resolved review threads (outdated):
- src/helm/proxy/clients/vision_language/open_flamingo/src/flamingo.py
- src/helm/proxy/clients/vision_language/open_flamingo/src/flamingo_lm.py
- src/helm/proxy/clients/vision_language/open_flamingo/src/utils.py
Thanks for the comment! I looked into the possibility of using a pipeline, but OpenFlamingo is not natively available on Hugging Face, so I'm following the current official usage provided in https://github.com/mlfoundations/open_flamingo and https://huggingface.co/openflamingo/OpenFlamingo-9B-vitl-mpt7b
Sounds good then! Can you just add those references somewhere as a comment, please? Thanks!
Hey @michiyasunaga, could you do the following things so that we get this merged?
Regarding the transformers version:
- The pinned dependencies in requirements.txt were updated recently by Update requirements.txt #2331 - you might want to check if this works.
- If things are still broken, you should put the output of your pip freeze in a Gist and send that to me, so that I can reproduce the breakage.
@@ -165,6 +165,12 @@ tokenizer_configs:
     class_name: "helm.proxy.tokenizers.huggingface_tokenizer.HuggingFaceTokenizer"
     end_of_text_token: "</s>"
     prefix_token: "<s>"

+  - name: anas-awadalla/mpt-7b
Is this different from the pre-existing mpt tokenizer (which is the same as EleutherAI/gpt-neox-20b)?
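For reference, the added entry (only its first line is visible in the diff above) presumably expands to something like the following, mirroring the pattern of the neighbouring tokenizer_configs entries. The class_name follows the surrounding entries, and the token values are assumptions based on the GPT-NeoX-style tokenizer mentioned in the question, not confirmed by the diff:

```yaml
# Hedged sketch only; field values below are guesses, not the actual PR content.
- name: anas-awadalla/mpt-7b
  tokenizer_spec:
    class_name: "helm.proxy.tokenizers.huggingface_tokenizer.HuggingFaceTokenizer"
  end_of_text_token: "<|endoftext|>"
  prefix_token: "<|endoftext|>"
```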
@michiyasunaga, I have fixed the comments and merge conflicts. Could you provide some commands to run to check that it works before we merge? Thanks!
Thank you @JosselinSomervilleRoberts! Here is the command:
Fixed the prompting issues we had with OpenFlamingo.
Joss cleaned up this PR earlier.
Add Open Flamingo client for VHELM evaluation.
Resolves #2069