Inference problem, tokenizer shape mismatch #1292
JunlingWang0512 asked this question in Q&A (unanswered)
I want to run inference with my fine-tuned sd-3.5-large model. I followed the code in https://github.com/bghira/SimpleTuner/blob/main/documentation/LYCORIS.md, and it works well for the flux.1 model.
But when I adapt the same code to sd-3.5-large, I get this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x157696 and 2048x2432)
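In case it helps with reading the shapes, here is my tentative (possibly wrong) interpretation of the numbers in the error; the 768/1280 hidden sizes are what I believe the two CLIP encoders in sd-3.5 use:

```python
# Tentative reading of the error shapes (my assumption, not confirmed):
# 77 tokens * 2048 = 157696, and 2048 = 768 + 1280 (the two CLIP text-encoder
# hidden sizes concatenated), so mat1 looks like a flattened (batch=2, 77, 2048)
# CLIP prompt embedding hitting a 2048x2432 linear layer.
assert 77 * 2048 == 157696
assert 768 + 1280 == 2048
```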
This error is likely caused by a tokenizer mismatch, but I simply use the tokenizers shipped with the downloaded model, and the mismatch seems to involve tokenizer and tokenizer_2:
```python
from transformers import CLIPTextModel, CLIPTokenizer

# bfl_repo and dtype are defined earlier in my script (the downloaded sd-3.5-large
# directory and the dtype everything is loaded in). The T5 pieces
# (text_encoder_3 / tokenizer_3) are deliberately left as None.
text_encoder = CLIPTextModel.from_pretrained(bfl_repo, subfolder="text_encoder", torch_dtype=dtype)
tokenizer = CLIPTokenizer.from_pretrained(bfl_repo, subfolder="tokenizer", torch_dtype=dtype)
text_encoder_2 = CLIPTextModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
tokenizer_2 = CLIPTokenizer.from_pretrained(bfl_repo, subfolder="tokenizer_2", torch_dtype=dtype)
text_encoder_3 = None
tokenizer_3 = None
```
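For context, these components are then assembled into the pipeline roughly as in the sketch below. This is a simplified illustration rather than my exact script: the prompt and sampling settings are placeholders, and the step that applies the LyCORIS weights from the SimpleTuner guide is omitted.

```python
from diffusers import StableDiffusion3Pipeline

# Build the SD 3.5 pipeline from the components loaded above, dropping the
# T5 encoder (diffusers accepts text_encoder_3=None / tokenizer_3=None).
pipe = StableDiffusion3Pipeline.from_pretrained(
    bfl_repo,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    text_encoder_2=text_encoder_2,
    tokenizer_2=tokenizer_2,
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=dtype,
)
pipe.to("cuda")

# Placeholder prompt/settings, just to trigger a generation.
image = pipe("a test prompt", num_inference_steps=28, guidance_scale=4.5).images[0]
```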
Below are my complete code and the complete error. What might be the problem?
Code:
Error: