SDXL models trained w/ "One Trainer" are rejected w/ warning, "Warning model_a is SDNone but model_b is SDXL" #104
Comments
Are there any available/downloadable trained checkpoints?
Here's one to test with: an OT-trained model export that reports "SDNone". Oddly, I'm not getting the popup warning this morning, so it's probably an order-of-operations thing where Model Mixer will reject the 2nd model as not SDXL (or as "SDNone", which is how the OT-trained models seem to get listed). The screenshots are from the A1111 extension "Model Toolkit", which also finds the checkpoints aren't standard SDXL. I'm not 100% sure why this is occurring, but the checkpoints themselves do work fine (and mix fine if allowed), so being able to use them without the possibility of rejection would be good in my case.
The uploaded model is a valid SDXL checkpoint. (model-toolkit does not detect SDXL correctly.) For me, an OT checkpoint works as expected... so strange. I guess there is some bug preventing an SDXL model from being detected correctly.
```diff
diff --git a/requirements_versions.txt b/requirements_versions.txt
index 2a922f288..a574c7ca7 100644
--- a/requirements_versions.txt
+++ b/requirements_versions.txt
@@ -19,7 +19,7 @@ piexif==1.1.3
 psutil==5.9.5
 pytorch_lightning==1.9.4
 resize-right==0.0.2
-safetensors==0.3.1
+safetensors
 scikit-image==0.21.0
 spandrel==0.1.6
 tomesd==0.1.3
```

After this fix is applied, you can update. Currently, there are 3 ways to detect SDXL. Check
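For illustration, here is a minimal sketch of what header-based SDXL detection can look like. This is not Model Mixer's actual implementation; the function names `read_safetensors_keys` and `looks_like_sdxl` are hypothetical, and the heuristic relies on the assumption (common in single-file checkpoint layouts) that SDXL carries a second text encoder under `conditioner.embedders.1.*`, while SD 1.x uses `cond_stage_model.*`:

```python
import json
import struct


def read_safetensors_keys(path):
    """Read only the JSON header of a .safetensors file.

    Per the safetensors format, the first 8 bytes are a little-endian
    u64 giving the byte length of the JSON header that follows, so the
    tensor names can be listed without loading any weights.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry; exclude it.
    return set(header) - {"__metadata__"}


def looks_like_sdxl(keys):
    """Heuristic sketch: SDXL checkpoints include a second (OpenCLIP)
    text encoder under 'conditioner.embedders.1.*'; SD 1.x checkpoints
    keep their single text encoder under 'cond_stage_model.*'."""
    return any(k.startswith("conditioner.embedders.1.") for k in keys)
```

A trainer that writes valid tensors but slightly unexpected header contents (extra metadata, different key ordering, etc.) could trip up stricter detectors of this kind even though the checkpoint itself loads fine.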
Hey,
I think this is an issue/quirk on the OneTrainer side, but due to some difference in the way the header (maybe?) is formatted, some applications read SDXL models trained with "One Trainer" as broken. I asked in their Discord why models trained with OneTrainer might be reported incorrectly, and apparently it's a known thing, but the consensus seemed to be that Kohya's training script just does something differently that some applications look for/expect.
The models I've trained with OneTrainer (which gained a lot of popularity recently due to a nice GUI and a very active developer) have never failed to work anywhere for inferencing, training (with Kohya), or even merging when it's been allowed without the warning/rejection. On the other hand, A1111's extension "Model Toolkit" is one other place I've found OT-trained models listed as corrupt/invalid... so I don't know what the difference is.
Regardless, based on this, can you allow for an override if a model is reported as "SDNone"? I honestly think I've gotten it to accept my OT models anyway, but I can't remember how I did it. I don't know if I had to use a OneTrainer model as the 1st one and could then use "standard"/Kohya models in B, C, etc., or perhaps it was just an older version.
An override would allow people to still use an OT model until whatever the difference is can be sorted out. It's not an SD 1.5 vs. SDXL type issue; it's something minor with the header/file info (afaik). I could upload a model that's been exported with OT if that'd be helpful... but they are 6 GB, obviously...
https://github.com/Nerogar/OneTrainer