The issue of facial contamination in LoRA #166
Comments
Have you tried using regularisation images? |
You need to provide more info: trained with or without captions? Network Dim/alpha? Because if you've trained Dim 128, for example, it's most likely that your LoRA weights are huge, and weaker tokens can't break through them (faces of random AI-generated humans). But anyway, in order to generate images with more varied subjects, you just NEED to use attention masking and inpainting (I'm using ComfyUI for that, and it is amazing what you can achieve with masks + inpaint). |
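For the masking + inpainting route mentioned in the comment above, here is a minimal sketch using diffusers' `StableDiffusionInpaintPipeline`: only the masked face region is regenerated with the character LoRA loaded, so the other faces in a group shot are left untouched. The model ID, file paths, and prompt are placeholders rather than anything from this thread; ComfyUI achieves the same result with mask and inpaint nodes.

```python
# Sketch: repaint only one masked face with the character LoRA active,
# so the LoRA cannot bleed onto the other people in the image.
# Model ID, LoRA file, and prompt below are placeholders/assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed base inpainting model
    torch_dtype=torch.float16,
).to("cuda")

# Load the character LoRA only for this masked pass.
pipe.load_lora_weights("path/to/ohwx_character_lora.safetensors")

image = Image.open("group_photo.png").convert("RGB")
mask = Image.open("face_mask.png").convert("L")  # white = region to repaint

result = pipe(
    prompt="photo of ohwx man, detailed face",
    image=image,
    mask_image=mask,
    strength=0.6,             # keep most of the original composition
    num_inference_steps=30,
).images[0]
result.save("group_photo_fixed.png")
```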
Thank you for your reply. I was about to give up, but you gave me hope. Here's my training process: all of the data was captioned, using natural-language captions generated by ChatGPT-4.0 or LLaMA 3.1. Do you have X (formerly Twitter) or YouTube? I would like to follow you. |
I never EVER use captions for training faces, just trigger words (ohwx man, ohwx woman, girl, etc.). The default LR of 1e-4 is good. Set Dim/Alpha to 32/32. Optimizer: I prefer Adafactor, but you can use AdamW8bit, Prodigy... 150 dataset repeats, save every 10-15 epochs. All of this falls apart if the dataset is not good, of course. |
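For reference, the rank/alpha and optimizer choices described above (Dim/Alpha 32/32, Adafactor, LR 1e-4) could be expressed roughly like this with diffusers, peft, and transformers. The base model ID and target module names are assumptions, and a real trainer such as kohya's sd-scripts or SimpleTuner wires this up itself:

```python
# Rough sketch of the settings above (Dim/Alpha 32/32, Adafactor, LR 1e-4).
# The base model and module names are assumptions for illustration only.
from diffusers import UNet2DConditionModel
from peft import LoraConfig
from transformers.optimization import Adafactor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed base model
)
unet.requires_grad_(False)  # freeze the base weights

lora_config = LoraConfig(
    r=32,                    # network dim 32
    lora_alpha=32,           # alpha 32 (scale = alpha / r = 1.0)
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet.add_adapter(lora_config)  # only the LoRA layers stay trainable

trainable_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = Adafactor(
    trainable_params,
    lr=1e-4,                 # fixed LR instead of Adafactor's internal schedule
    scale_parameter=False,
    relative_step=False,
)
```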
I know that using masks, InstantID, and inpainting during the generation process can control the output. However, what I hope to achieve is to solve the face contamination issue during LoRA training itself. I've tried various methods like regularization and layer-wise training, but they all failed... I'll register an Instagram account and add you there to thank you. Thanks again! |
I’ve been following you on X and also left a comment on your X post regarding this issue. The regularization has been tested, but it still doesn't solve the problem. When you trained the character LoRA, did you experience any face contamination issues when generating a double or multi-person photo? How did you resolve this? |
Nothing works. Many, many people have tried and discussed it in the Discord, and it is basically impossible. Try training a LoKr with SimpleTuner. People have managed to train several people in the same LoKr, so it is possible to avoid bleeding with LoKr. I haven't seen it with my own eyes, though. |
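For anyone curious what a LoKr adapter looks like outside SimpleTuner, here is a rough sketch using peft's `LoKrConfig`. The base model and target modules are assumptions, and SimpleTuner manages this configuration internally:

```python
# Sketch: a LoKr (Kronecker-product) adapter, the adapter type the comment
# above suggests handled multiple identities with less bleeding.
# Base model and module names are assumptions for illustration only.
from diffusers import UNet2DConditionModel
from peft import LoKrConfig

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed base model
)
unet.requires_grad_(False)

lokr_config = LoKrConfig(
    r=16,
    alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed attention projections
)
unet.add_adapter(lokr_config)

trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable LoKr params: {trainable:,}")
```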
If the same Lora is trained on several people, then when generating group photos with this Lora, it will also be limited to these specific people, right? |
I am not sure; you may need to prompt some facial characteristics of the other people to avoid this. Unfortunately, I can't install SimpleTuner at the moment, so I can't try it.
|
https://huggingface.co/TheLastBen/The_Hound This LoRA solves the face contamination problem, but I wonder if that has anything to do with the subject being a celebrity? |
This one trains only 2 layers, and the dim value is relatively low. After careful testing, there is still face contamination, but it is relatively minor. |
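Restricting training to a few layers with a low dim can be expressed with peft's `layers_to_transform`. The model, layer indices, and module names below are illustrative assumptions, not the actual settings of The_Hound LoRA:

```python
# Sketch: a deliberately small LoRA limited to a couple of layers, in the
# spirit of the "only 2 layers, low dim" observation above.
# Model, layer indices, and module names are illustrative assumptions.
from transformers import CLIPTextModel
from peft import LoraConfig, get_peft_model

text_encoder = CLIPTextModel.from_pretrained(
    "openai/clip-vit-large-patch14"       # assumed text encoder
)

config = LoraConfig(
    r=4,                                  # deliberately low dim
    lora_alpha=4,
    target_modules=["q_proj", "v_proj"],  # attention projections inside each block
    layers_to_transform=[10, 11],         # only the last two encoder layers (assumed)
)
text_encoder = get_peft_model(text_encoder, config)
text_encoder.print_trainable_parameters()
```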
With a character LoRA, if the output is a group photo, the LoRA's face contaminates the faces of the other people in the group. Various methods have been tried, such as adjusting the dataset, lowering the learning rate, and layer-wise training, but the issue cannot be resolved. What exactly is going wrong?