Add Support For OT Lora, Loha and Dora for HunYuan Video in ComfyUI #6673
Comments
To try to impose a consistent lora standard, and because it's a pain to deal with, I have decided to stop implementing any new lora format that uses diffusers keys. If you want your loras to work you will have to convert the keys to the original model or comfy format.
Why not provide an interface for people to define them for you? Then you could leave adding support to the people making training software.
Isn't diffusers, like, the basis of every major training software? Wouldn't that rule make comfy incompatible with basically every lora?
The conversion code is either going to be in comfyui or in the training tools, and it's better for the entire ecosystem for it to be in the training tools.
Oh, to relay here an answer I got to my question on Discord: while diffusers is commonly used as a library by training software, "the diffusers format" for files isn't. For example, Kohya's trainer uses diffusers code but has its own file format based on the original model keys. Technically it's "the diffusers format" that is "non-standard": for some reason it changes the keys from what the original model architecture authors used.

There's an ongoing conversation in the Open Model Initiative Discord about standardizing file formats for models and loras, and the essentially universal opinion of the stakeholders is to preserve the original model architecture authors' keys; no reformatting like what diffusers does should happen. The standard OMI pushes will be used by all major software vendors, from Comfy to diffusers to OneTrainer, most of whom are stakeholders taking part in the OMI discussions.

I will add the argument, though, that while the standard is still being defined and enforced, model formats already in use should ideally get automatic import support. The cutoff of refusing to add new formats should only be applied after the OMI standard rules are finalized, imo.
Script for converting diffusers lora weights to original format: https://gist.github.com/spacepxl/b6b8be274ef289250106e5bd85a92959 It won't support DoRA though, just the basic lora_A and lora_B keys. See a-r-r-o-w/finetrainers#231 for context.
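For readers wondering what such a conversion actually does: at its core it is a key rename. Below is a hypothetical, minimal sketch of the simplest case only; the linked gist handles many more module-path patterns, and the exact suffixes (diffusers' `.lora_A`/`.lora_B` versus the `.lora_down`/`.lora_up` suffixes common in original-key formats) are an assumption here, not taken from the gist.

```python
def diffusers_to_original(key: str) -> str:
    """Hypothetical sketch: rename one diffusers-style LoRA key to the
    lora_down/lora_up naming used by original-key formats.

    Real converters also remap the module path itself; this only
    handles the lora_A/lora_B -> lora_down/lora_up suffix swap.
    """
    key = key.replace(".lora_A.weight", ".lora_down.weight")
    return key.replace(".lora_B.weight", ".lora_up.weight")
```

A DoRA file additionally carries magnitude tensors (and LoHa carries two factor pairs per layer), which is why a plain suffix rename like this cannot cover those formats.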
@spacepxl that may work for finetrainers, but sadly I tried that script and it doesn't seem to work with OT loras. It prints an error message for each block saying "unknown or not implemented", and the output file is just 1KB and of course doesn't work. Unless I'm doing something wrong, that is. :(
Feature Idea
Please add support in ComfyUI for loading OneTrainer LoRA, LoHa and DoRA files.
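For context on what supporting these three formats entails, here is a rough sketch of how each adapter type merges into a base weight matrix. Conventions for alpha scaling and for which axis DoRA normalizes over vary between trainers, so this follows one common convention and is illustrative only, not OneTrainer's or ComfyUI's actual implementation.

```python
import numpy as np

def apply_lora(W, down, up, scale=1.0):
    """LoRA: add a low-rank update, W' = W + scale * (up @ down)."""
    return W + scale * (up @ down)

def apply_loha(W, d1, u1, d2, u2, scale=1.0):
    """LoHa: the update is a Hadamard (elementwise) product of two
    low-rank factors, giving a higher effective rank per parameter."""
    return W + scale * ((u1 @ d1) * (u2 @ d2))

def apply_dora(W, down, up, magnitude, scale=1.0):
    """DoRA: merge the low-rank update, normalize each column to unit
    norm, then rescale by a learned per-column magnitude vector."""
    merged = W + scale * (up @ down)
    direction = merged / np.linalg.norm(merged, axis=0, keepdims=True)
    return magnitude[None, :] * direction
```

The extra tensors (two factor pairs for LoHa, the magnitude vector for DoRA) are why a loader that only understands plain lora_down/lora_up pairs cannot handle these files.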
Attached are the key names for an OT LoRA, LoHa and DoRA with full layers, TE1 and TE2 trained, and a bundled embedding (essentially every option possible):
LoHaFullTETI_keys.txt
LoRaFullTETI_keys.txt
DoRaFullTETI_keys.txt
Safetensors if needed can be found here:
SafeTensor Files
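For anyone who wants to dump key lists like the ones attached above from their own files: the safetensors header can be read with just the standard library, since the on-disk format is an 8-byte little-endian header length followed by a UTF-8 JSON header. A minimal sketch:

```python
import json
import struct

def safetensors_keys(path: str) -> list[str]:
    """List tensor key names from a .safetensors file, stdlib only.

    Format: first 8 bytes are a little-endian u64 giving the JSON
    header length; the header maps each key to dtype/shape/offsets.
    Tensor data follows the header and is never read here.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len).decode("utf-8"))
    return sorted(k for k in header if k != "__metadata__")
```

The `safetensors` Python package offers the same via `safe_open(...).keys()`, but the stdlib version is handy for quickly diffing key names between trainer outputs.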
Existing Solutions
#6531 (comment)
is a workaround for an OT LoRA, but not for DoRA, and likely not for a LoRA with TE.
Other
No response