
Olmo 0724 -hf checkpoints don't load the proper config when instantiating with OLMoForCausalLM #689

Open
sarahwie opened this issue Aug 5, 2024 · 2 comments
Labels: type/bug (An issue about a bug)

Comments

sarahwie (Contributor) commented Aug 5, 2024

🐛 Describe the bug

Hi, when I attempt to load an HF checkpoint as follows, there seems to be a config mismatch that prevents the checkpoint from loading. (In general I'm not sure I understand the difference between the models ending in -hf and those that are not, but I'd like to use intermediate checkpoints, which are currently only released for the 0424 -hf model.)

from hf_olmo import OLMoForCausalLM
model = OLMoForCausalLM.from_pretrained("allenai/OLMo-7B-0424-hf")

It seems to load the OLMoConfig for a much smaller model:

You are using a model of type olmo to instantiate a model of type hf_olmo. This is not supported for all configurations of models and can yield errors.
Can't set hidden_size with value 4096 for OLMoConfig {
  "activation_type": "swiglu",
  "alibi": false,
  "alibi_bias_max": 8.0,
  "architectures": [
    "OlmoForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attention_layer_norm": false,
  "attention_layer_norm_with_affine": true,
  "bias_for_layer_norm": null,
  "block_group_size": 1,
  "block_type": "sequential",
  "clip_qkv": 8.0,
  "d_model": 768,
  "embedding_dropout": 0.1,
  "embedding_size": 50304,
  "eos_token_id": 50279,
  "flash_attention": false,
  "hidden_act": "silu",
  "include_bias": true,
  "init_cutoff_factor": null,
  "init_device": null,
  "init_fn": "normal",
  "init_std": 0.02,
  "layer_norm_eps": 1e-05,
  "layer_norm_type": "default",
  "layer_norm_with_affine": true,
  "max_sequence_length": 1024,
  "mlp_hidden_size": null,
  "mlp_ratio": 4,
  "model_type": "hf_olmo",
  "multi_query_attention": null,
  "n_heads": 12,
  "n_kv_heads": null,
  "n_layers": 12,
  "pad_token_id": 1,
  "precision": null,
  "residual_dropout": 0.1,
  "rope": false,
  "rope_full_precision": true,
  "scale_logits": false,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.43.3",
  "vocab_size": 50304,
  "weight_tying": true
}

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/modeling_utils.py", line 3310, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/configuration_utils.py", line 610, in from_pretrained
    return cls.from_dict(config_dict, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/configuration_utils.py", line 772, in from_dict
    config = cls(**config_dict)
             ^^^^^^^^^^^^^^^^^^
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/hf_olmo/configuration_olmo.py", line 25, in __init__
    super().__init__(**all_kwargs)
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/configuration_utils.py", line 376, in __init__
    raise err
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/configuration_utils.py", line 373, in __init__
    setattr(self, key, value)
  File "/home/sarahw/miniconda3/envs/understanding_mcqa/lib/python3.12/site-packages/transformers/configuration_utils.py", line 259, in __setattr__
    super().__setattr__(key, value)
AttributeError: property 'hidden_size' of 'OLMoConfig' object has no setter
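
For reference, inspecting just the config (only config.json is fetched, no weights) confirms what the warning says: the -hf repo declares the native transformers model type rather than hf_olmo's. A minimal check:

from transformers import AutoConfig

# Only config.json is downloaded from the Hub here; no model weights.
config = AutoConfig.from_pretrained("allenai/OLMo-7B-0424-hf")
print(config.model_type)  # "olmo" (the native transformers type), not "hf_olmo"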

Note that everything works as expected for the following commands:

from hf_olmo import OLMoForCausalLM
model = OLMoForCausalLM.from_pretrained("allenai/OLMo-7B-0424")

I am assuming this has to do with the warning message "You are using a model of type olmo to instantiate a model of type hf_olmo. This is not supported for all configurations of models and can yield errors." Notably, though, I get the same warning message when running the following command, which does seemingly load the model:

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-hf")
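
For what it's worth, the class that call actually instantiates can be checked directly (the expected name here is inferred from the transformers-native implementation):

print(type(model).__name__)  # expected: OlmoForCausalLM, i.e. the transformers-native class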

Versions

Python 3.12.4
ai2-olmo==0.4.0
ai2-olmo-core==0.1.0
aiohappyeyeballs==2.3.4
aiohttp==3.10.0
aiosignal==1.3.1
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
attrs==23.2.0
boto3==1.34.152
botocore==1.34.152
cached_path==1.6.3
cachetools==5.4.0
certifi==2024.7.4
charset-normalizer==3.3.2
contourpy==1.2.1
cycler==0.12.1
datasets==2.20.0
dill==0.3.8
filelock==3.13.4
fonttools==4.53.1
frozenlist==1.4.1
fsspec==2024.5.0
google-api-core==2.19.1
google-auth==2.32.0
google-cloud-core==2.4.1
google-cloud-storage==2.18.0
google-crc32c==1.5.0
google-resumable-media==2.7.1
googleapis-common-protos==1.63.2
huggingface-hub==0.23.5
idna==3.7
importlib_resources==6.4.0
Jinja2==3.1.4
jmespath==1.0.1
jsonlines==4.0.0
kiwisolver==1.4.5
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.1
mdurl==0.1.2
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.16
networkx==3.3
numpy==2.0.1
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.6.20
nvidia-nvtx-cu12==12.1.105
omegaconf==2.3.0
packaging==24.1
pandas==2.2.2
pillow==10.4.0
proto-plus==1.24.0
protobuf==5.27.3
pyarrow==17.0.0
pyarrow-hotfix==0.6
pyasn1==0.6.0
pyasn1_modules==0.4.0
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
regex==2024.7.24
requests==2.32.3
rich==13.7.1
rsa==4.9
s3transfer==0.10.2
safetensors==0.4.3
setuptools==72.1.0
six==1.16.0
sympy==1.13.1
tokenizers==0.19.1
torch==2.3.1
tqdm==4.66.4
transformers==4.43.3
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.2
wheel==0.43.0
xxhash==3.4.1
yarl==1.9.4

sarahwie added the type/bug label Aug 5, 2024
2015aroras (Collaborator) commented Aug 5, 2024

The OLMoForCausalLM in hf_olmo and the OlmoForCausalLM in transformers are different models; repos for the latter carry the -hf suffix. You are trying to load a checkpoint of the latter type into the former model, hence the failures you're seeing.

AutoModelForCausalLM should load both types of checkpoints properly without warnings (you just need to also import hf_olmo for the former type). Can you share the exact warning message you get when you run AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-hf")?

More context: https://github.com/allenai/OLMo/blob/main/docs/Checkpoints.md
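
To make the distinction concrete, here is a minimal sketch of the two loading paths, using the repos from this thread:

from transformers import AutoModelForCausalLM
import hf_olmo  # registers hf_olmo's OLMoForCausalLM with the Auto classes

# Native transformers checkpoint (OlmoForCausalLM); these repos carry the -hf suffix.
model_native = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-hf")

# hf_olmo checkpoint (OLMoForCausalLM); no suffix, and it needs the hf_olmo import above.
model_hf_olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424")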

sarahwie (Contributor, Author) commented Aug 5, 2024

Oh I see, thanks for the clarification! I was confused about the casing difference too, but this explains it. I don't get any warning when I use AutoModelForCausalLM; I was just using that as an example of something that works fine.

I'm currently subclassing hf_olmo's OLMoForCausalLM with some custom inference-time hooks, but I want to load the intermediate training checkpoints, which are only available for OLMo-7B-0424-hf and not OLMo-7B-0424, from what I can see on the HF landing page. Is there any plan to add the checkpoints to the other model version?
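
For reference, the intermediate checkpoints on the -hf repo are exposed as revisions (branches) on the Hub; a minimal sketch, with an illustrative revision name (the real step names are listed among the repo's branches):

from transformers import AutoModelForCausalLM

# "step1000-tokens4B" is illustrative; check the repo's branches for actual names.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-0424-hf",
    revision="step1000-tokens4B",
)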

Also, maybe we should link this markdown doc from the official HF landing page for the model checkpoints. Might be helpful to others too.
