faceswap with inpaint #89

Open · wants to merge 9 commits into main

Conversation

@nosiu commented Jan 27, 2024

Face swap with inpainting (the sample code uses the LCM LoRA).

Works with stabilityai/stable-diffusion-xl-base-1.0 and probably any base XL model; I tested it on a few others.
(Do not use dedicated inpainting models!)

Steps

  1. finding the face,
  2. adding padding,
  3. cutting out the face with the padding,
  4. creating the mask,
  5. rescaling all of that (optional),
  6. sending it all to the inpainting pipeline,
  7. integrating the face from the pipeline back into the original image.

Because InstantID doesn't handle small faces well, they are (optionally) scaled up, and after exiting the pipeline they are scaled back to their original size.
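A rough sketch of steps 1-7 (illustrative only, not the actual faceswap.py from this PR): it assumes insightface's antelopev2 FaceAnalysis detector and hides the inpainting call behind a hypothetical run_inpaint callable.

import numpy as np
from PIL import Image, ImageDraw
from insightface.app import FaceAnalysis

def swap_face(image, run_inpaint, padding=60, work_size=1024):
    # 1. find the face (insightface expects a BGR array)
    app = FaceAnalysis(name="antelopev2", providers=["CPUExecutionProvider"])
    app.prepare(ctx_id=0, det_size=(640, 640))
    face = app.get(np.array(image)[:, :, ::-1])[0]
    x1, y1, x2, y2 = face.bbox.astype(int)

    # 2-3. add padding and cut the padded face region out of the source image
    x1, y1 = max(x1 - padding, 0), max(y1 - padding, 0)
    x2, y2 = min(x2 + padding, image.width), min(y2 + padding, image.height)
    crop = image.crop((x1, y1, x2, y2))

    # 4. create the mask (white = area the pipeline is allowed to repaint)
    mask = Image.new("L", crop.size, 0)
    ImageDraw.Draw(mask).ellipse(
        (padding // 2, padding // 2, crop.width - padding // 2, crop.height - padding // 2),
        fill=255)

    # 5. (optional) upscale, since small faces come out of the pipeline poorly
    orig_size = crop.size
    crop_big = crop.resize((work_size, work_size), Image.BILINEAR)
    mask_big = mask.resize((work_size, work_size), Image.BILINEAR)

    # 6. send the crop and mask to the inpainting pipeline
    out = run_inpaint(image=crop_big, mask_image=mask_big)

    # 7. scale the result back down and paste it into the original image
    out = out.resize(orig_size, Image.BILINEAR)
    result = image.copy()
    result.paste(out, (int(x1), int(y1)), mask)
    return result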


Example 1:

Original image: (image)
Resized face from the pipeline (1 face embedding): (image)
Result (original image + downscaled face): (image)
Result (4 face embeddings): (image)

Example 2:

Original image: (image)
Face embed from: (image)
Result: (image)

@johndpope (Contributor)

This is cool. I made a gist (using ChatGPT) to simplify the Gradio demo:
https://gist.github.com/johndpope/4f2f237fab4b15101ccbe62ab8ab7af2

Presumably you modify a few lines to add the face swap into the image pipeline?
Could you share those lines, or even update this code?

Upgrade the Gradio samples?
(If you start a new chat with chat.openai.com and paste in the sample Python Gradio demo from this repo, you can simply ask it to upgrade it with your code snippet.)
https://chat.openai.com/share/2d039730-f0e6-43d6-a299-5873172cbf59

@nosiu (Author) commented Jan 28, 2024

@johndpope I don't want to play with the GUI (I have no experience with Gradio), but if you make a fork and add a checkbox, for example "use mask", that lets you draw a mask on a copy of the uploaded image, then I can probably make it work.

@johndpope (Contributor)

Hi @nosiu - understood completely.
My gist code actually throws away the Gradio code - it does not touch the GUI, it just runs from the terminal. If you can help show how to wire up the pipeline, that would help with testing.

@nosiu (Author) commented Jan 28, 2024

Hi @johndpope, I don't understand the problem: the faceswap.py file provided in this pull request is a minimal example of how to swap faces; just run python faceswap.py.
Look at the code (faceswap.py) from line 131 down if you want to change parameters like padding, mask size, or of course the input/output images.

So if you want to add or change something, it would be better to work from faceswap.py as a base.

@johndpope (Contributor)

Hi @nosiu - thanks for clarifying.
I ran your code (python faceswap.py) after logging into Hugging Face, and I get the error below.

A quick change to this should fix it:

#adapter_id = 'loras/pytorch_lora_weights.safetensors'
adapter_id = "latent-consistency/lcm-lora-sdxl"


python faceswap.py   
/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/media/2TB/InstantID/faceswap.py:17: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
  pad_to_max_side=False, mode=Image.BILINEAR, base_pixel_number=64):
/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./models/antelopev2/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./models/antelopev2/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./models/antelopev2/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./models/antelopev2/glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./models/antelopev2/scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
Loading pipeline components...: 100%|█████████████████████████████████████████████| 7/7 [00:01<00:00,  6.35it/s]
The config attributes {'skip_prk_steps': True} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Traceback (most recent call last):
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status
    response.raise_for_status()
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/loras/pytorch_lora_weights.safetensors

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/2TB/InstantID/faceswap.py", line 127, in <module>
    pipe.load_lora_weights(adapter_id)
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/diffusers/loaders/lora.py", line 1441, in load_lora_weights
    state_dict, network_alphas = self.lora_state_dict(
                                 ^^^^^^^^^^^^^^^^^^^^^
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/diffusers/loaders/lora.py", line 263, in lora_state_dict
    weight_name = cls._best_guess_weight_name(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/diffusers/loaders/lora.py", line 318, in _best_guess_weight_name
    files_in_repo = model_info(pretrained_model_name_or_path_or_dict).siblings
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 2085, in model_info
    hf_raise_for_status(r)
  File "/home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 323, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-65b6e44c-74ae84201c41506c012f12d1;1329239b-0ad7-4d28-98f2-dde339113861)

Repository Not Found for url: https://huggingface.co/api/models/loras/pytorch_lora_weights.safetensors.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
(comfyui) ➜  InstantID git:(main) ✗ 

@nosiu (Author) commented Jan 28, 2024

> hi @nosiu - thanks for clarifying. run your code - python faceswap.py after logging into huggingface - i get this error. [...] (full comment and traceback quoted from above)

Thanks, but that is a path to a file on my HDD, not a Hugging Face repo address.
It's just annoying to download 12 GB of models again just because you keep them somewhere other than the default location.
That's why I usually use paths on my drive and put links to the files in a comment.
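For anyone hitting the same 404: a minimal sketch (not from this PR) of the two ways diffusers can be pointed at the LCM LoRA - either the Hub repo id, or a local copy on disk so the weights aren't re-downloaded. The local folder and file name below are just placeholders.

import torch
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Option 1: fetch the LCM LoRA from the Hub
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Option 2: load the same weights from a local folder instead (use one option or the other)
# pipe.load_lora_weights("./loras", weight_name="pytorch_lora_weights.safetensors")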

@t00350320

How should I understand this InstantID inpainting compared with Stable Diffusion's original inpainting?

@t00350320

Running python faceswap.py, I get this error:

File "xx/InstantID/pipeline_stable_diffusion_xl_instantid_inpaint.py", line 326, in __call__
    init_image = self.image_processor.preprocess(
TypeError: VaeImageProcessor.preprocess() got an unexpected keyword argument 'resize_mode'

@nosiu (Author) commented Jan 30, 2024

Hi @t00350320, you are probably using a different version of diffusers; run pip show diffusers to see which version you have installed.
This works with version 0.25.0. A good way to fix your problem might simply be updating that outdated package.
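A quick way to confirm the installed version matches what this branch expects (a trivial check, added here only for convenience):

import diffusers
print(diffusers.__version__)  # if this prints something older than 0.25.0: pip install diffusers==0.25.0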

@t00350320

> You are probably using a different version of diffusers ... It works on version 0.25.0. (quoted from the reply above)

Thanks for your proposal. Here is our test result with your default script, but it seems weird: 1) there is a box around the edges of the face; 2) the face location does not match the original image well.
(result image)

@johndpope (Contributor)

You can play with these two values:
(screenshot)

@nosiu (Author) commented Jan 31, 2024

@t00350320 There are two things wrong with this picture.

  1. The mask seems to be too big, so it tried to generate part of the collar. You can also try to fix this with a negative prompt; just add the negative_prompt_2 parameter to the pipeline call.
  2. "The box" around the face. You would need to be extremely lucky not to get some kind of box, even a tiny one, by simply pasting the inpainted image into the source image. Solution? One I can think of is adding blur to the mask before using it to paste the result into the source image (see the sketch below).

I implemented all those features in ComfyUI as a custom node (at least I've tried).
You don't need to be familiar with Comfy to look here and check what is done to the image after it comes out of the pipeline.
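A minimal sketch of the mask-blur idea from point 2 (my own illustration, not the PR's or the ComfyUI node's code): feather the mask with a Gaussian blur and use it as the alpha channel when pasting the inpainted crop back, so the seam fades out instead of forming a hard box.

from PIL import Image, ImageFilter

def paste_back(source, inpainted_crop, mask, top_left, blur_radius=12):
    # source: original image; inpainted_crop and mask: same size as the cropped region;
    # top_left: (x, y) where the crop was taken from in the source image
    soft_mask = mask.convert("L").filter(ImageFilter.GaussianBlur(blur_radius))
    result = source.copy()
    result.paste(inpainted_crop, top_left, soft_mask)  # the blurred mask acts as alpha
    return result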

@markrmiller

It works much better with a good inpainting model. I swapped in mine, which takes the Hugging Face inpainting model and trains it heavily with random masks on people, and it works great - my first attempt with the base model came out with an extra forearm/hand thing (the subject had their hands on their head) :) Then use fpie/taichi_solver to do the replacement into the original image and the results are superb. I have not done a detailed comparison with insightface, but this is much higher resolution of course, and the patched-in region looks at least as good.

@qwerdf4 commented Feb 24, 2024

Running faceswap.py: TypeError: StableDiffusionXLControlNetInpaintPipeline.prepare_control_image() missing 2 required positional arguments: 'crops_coords' and 'resize_mode'

@nosiu (Author) commented Feb 24, 2024

@qwerdf4 This branch is using diffusers 0.25.0

@qwerdf4 commented Feb 25, 2024

> @qwerdf4 This branch is using diffusers 0.25.0

(image)
I uncommented this and that solved it.

@ipfans commented Jun 5, 2024

I tried to test it and chose to use two example images: musk_resize.jpeg and post2.jpeg (from this repo).

the parameters:

controlnet_conditioning_scale=0.8,
strength=mask_strength,
ip_adapter_scale=0.3, # keep it low

the generated results are as follows:

(output image)

Can you tell me where I made a mistake that caused this? It looks like the face wasn't replaced correctly - the face.jpg is still the same.
