
[bug]: IP adapters don't work with SDXL: MetadataIncompleteBuffer #6885

Open
Alec-15 opened this issue Sep 19, 2024 · 1 comment
Labels
bug Something isn't working

Comments


Alec-15 commented Sep 19, 2024

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

RTX 3080

GPU VRAM

16GB

Version number

4.2.9

Browser

Brave 1.69.168

Python dependencies

{
"accelerate": "0.30.1",
"compel": "2.0.2",
"cuda": "12.1",
"diffusers": "0.27.2",
"numpy": "1.26.4",
"opencv": "4.9.0.80",
"onnx": "1.15.0",
"pillow": "10.4.0",
"python": "3.11.10",
"torch": "2.2.2+cu121",
"torchvision": "0.17.2",
"transformers": "4.41.1",
"xformers": "0.0.25.post1"
}

What happened

When using any IP adapter with any SDXL model, I get this error:

Traceback (most recent call last):
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
    output = self.invoke(context)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/invocations/denoise_latents.py", line 789, in invoke
    return self._old_invoke(context)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/lib64/python3.11/contextlib.py", line 81, in inner
    return func(*args, **kwds)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/invocations/denoise_latents.py", line 958, in _old_invoke
    image_prompts = self.prep_ip_adapter_image_prompts(context=context, ip_adapters=ip_adapters)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/invocations/denoise_latents.py", line 543, in prep_ip_adapter_image_prompts
    image_encoder_model_info = context.models.load(single_ip_adapter.image_encoder_model)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/services/shared/invocation_context.py", line 369, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/app/services/model_load/model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/backend/model_manager/load/load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/backend/model_manager/load/load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/invokeai/backend/model_manager/load/model_loaders/generic_diffusers.py", line 42, in _load_model
    result: AnyModel = model_class.from_pretrained(model_path, torch_dtype=self._torch_dtype, variant=variant)
  File "/mnt/ai/opt/invoke/test/.venv/lib64/python3.11/site-packages/transformers/modeling_utils.py", line 3531, in from_pretrained
    with safe_open(resolved_archive_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

What you expected to happen

Image generation without errors.

How to reproduce the problem

1. Install Invoke
2. Install any SDXL model from Starter Models
3. Install IP Adapter SDXL from Starter Models
4. Use any of the three IP Adapters with the model to generate an image.

Additional context

It happens with all SDXL models, including others downloaded from HF.
I made a fresh clean install of Invoke.
I've also tried downloading the IP Adapters from HF through the Model Manager and also manually. Same result.

Discord username

No response

@Alec-15 Alec-15 added the bug Something isn't working label Sep 19, 2024

Alec-15 commented Sep 19, 2024

I can get the basic IP Adapter to work if I use ViT-G.
I can't do that for the vit-h and plus-vit-h versions, though, so those are still broken.
