
Has anybody actually got this to work? #15

Open
NovaSabre opened this issue Jan 23, 2024 · 33 comments

Comments

@NovaSabre

I'm at the stage where I think this is just a malicious prank breaking everybody's ComfyUI installs.

@jwvanderbeck

Does almost count? :p I think I have most of it worked out, but I'm still stumbling on the InsightFace loader. However, I think it might be because of the version of insightface I have installed, since I had an issue compiling it locally.

@GillesVermeulen

Got it technically working on Windows (not on Mac yet), but the resulting images look nothing like the input face.

@aminesoulaymani

I got it to work on Windows, but the VRAM consumption is very weird, with a huge peak right after the generation process. The results are not that good either: the pose is always the same as in the input picture, and the face doesn't resemble the input that much.

@johndpope

johndpope commented Jan 23, 2024

try this - https://github.com/huxiuhan/comfyui-instantid

UPDATE
I'm close - workflow loads
see notes here #17
[Screenshot from 2024-01-24 10-59-41]

but hitting a snag with controlnet asking for a config.json
[Screenshot from 2024-01-24 10-57-25]

maybe needs one of these
https://huggingface.co/collections/diffusers/sdxl-controlnets-64f9c35846f3f06f5abe351f

clone this and specify the path in the first node:

cd ComfyUI/models/controlnet/
git clone https://huggingface.co/lllyasviel/sd-controlnet-mlsd

which puts it at ComfyUI/models/controlnet/sd-controlnet-mlsd

@b4sh

b4sh commented Jan 24, 2024

Works for me. It's slow and VRAM hungry (RTX 4070), and some SDXL base checkpoints don't work (out of memory). With the InsightFace loader set to CUDA, generation is very slow. ComfyUI is running on Linux (WSL2).

[image: instantID]

@johndpope

@b4sh - what did you use for controlnet loader?

@Mossbraker

@b4sh - what did you use for controlnet loader?

How to Use

Download config.json and diffusion_pytorch_model.safetensors from InstantID/ControlNetModel, then enter the model path in the 📷ID ControlNet Loader node (e.g. ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints/controlnet)

Download ip-adapter.bin from InstantID/ip-adapter, then enter its path in the 📷Ipadapter_instantid Loader node (e.g. ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints)

Download all the models from DIAMONIK7777/antelopev2 and put them in ComfyUI/custom_nodes/ComfyUI-InstantID/models/antelopev2 (a scripted version of these downloads is sketched below)
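For reference, here is a minimal sketch of scripting those downloads with huggingface_hub. The repo id InstantX/InstantID for the ControlNet and ip-adapter files, and the exact target folders, are assumptions based on the steps above, so verify them and point the loader nodes at wherever the files actually land:

```python
# Sketch only: fetch the InstantID files into folders like the ones the custom node expects.
# Repo ids and target paths are assumptions -- verify against your own setup.
from huggingface_hub import hf_hub_download, snapshot_download

node_dir = "ComfyUI/custom_nodes/ComfyUI-InstantID"

# ControlNet: the weights and config.json must sit next to each other
for filename in ["ControlNetModel/config.json",
                 "ControlNetModel/diffusion_pytorch_model.safetensors"]:
    hf_hub_download("InstantX/InstantID", filename, local_dir=f"{node_dir}/checkpoints")

# ip-adapter.bin for the Ipadapter_instantid Loader node
hf_hub_download("InstantX/InstantID", "ip-adapter.bin", local_dir=f"{node_dir}/checkpoints")

# antelopev2 face-analysis models for the InsightFace loader
snapshot_download("DIAMONIK7777/antelopev2", local_dir=f"{node_dir}/models/antelopev2")
```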

@wandrzej

To be honest, now that InstantID is implemented in diffusers, I hope we might get a proper implementation.

https://github.com/huggingface/diffusers/blob/main/examples/community/pipeline_stable_diffusion_xl_instantid.py

On top of other issues, just like with PhotoMaker, we have two competing implementations that even share the same name.

@Starzilla29

Starzilla29 commented Jan 24, 2024

but hitting a snag with controlnet asking for a config.json

maybe needs one of these https://huggingface.co/collections/diffusers/sdxl-controlnets-64f9c35846f3f06f5abe351f

clone this and specify path in first node. ComfyUI/models/controlnet/ git clone https://huggingface.co/lllyasviel/sd-controlnet-mlsd

ComfyUI/models/controlnet/sd-controlnet-mlsd

I had this error; putting the json file in the same directory as the controlnet model and renaming it to config.json was enough to solve it for me. In case you didn't realize, you need to download the json file along with the controlnet model.

I got it working on my CPU, but it doesn't recognize CUDA for some reason... To get it to work on CPU, I had to uninstall IPAdapter, rename the json to config.json, redownload all the antelopev2 files, and replace them in the models/antelopev2 folder... I don't know if I will try to solve the CUDA error or just wait for a proper implementation.
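For the config.json part, here is a tiny sketch to verify that the folder you point the ControlNet loader at actually contains both files; the path below is only an example, use whatever you typed into the 📷ID ControlNet Loader node:

```python
# Tiny sketch: check the ControlNet folder has both files the loader needs.
# The folder path is an assumption -- use your own loader-node path.
from pathlib import Path

controlnet_dir = Path("ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints/controlnet")

for name in ["config.json", "diffusion_pytorch_model.safetensors"]:
    path = controlnet_dir / name
    print(f"{name}: {'found' if path.exists() else 'MISSING'}")
```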

@jwvanderbeck

Yeah, the json file is easy; it's just missing from the instructions. From the same place you downloaded the controlnet model, download the config json, name it "config.json", and put it in the same location as the model.

My issue is with insightface and the InsightFace Loader. I can't install insightface normally through pip install because it tries to compile it for my computer and fails. I CAN install it using --prefer-binary so it downloads a pre-compiled binary instead, but I'm not 100% sure that binary is compatible or the right version, because when I try to run the InsightFace Loader there is an error in the code: the constructor is passing in a kwarg that isn't recognized as valid. I did a few hours of debugging yesterday but didn't get to the bottom of it :(
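If it helps anyone, here is a minimal sketch for reproducing the loader call outside ComfyUI and checking which insightface build you actually have. The model name and providers mirror the call shown in the traceback further down this thread; the root path is an assumption, change it to your ComfyUI-InstantID folder:

```python
# Minimal sketch: reproduce the InsightFace loader call outside ComfyUI to see
# whether the installed insightface build accepts the same arguments.
import insightface
from insightface.app import FaceAnalysis

print("insightface version:", insightface.__version__)

app = FaceAnalysis(
    name="antelopev2",
    root="ComfyUI/custom_nodes/ComfyUI-InstantID",   # assumed path, change to yours
    providers=["CPUExecutionProvider"],              # swap in "CUDAExecutionProvider" to test GPU
)
app.prepare(ctx_id=0, det_size=(640, 640))
print("FaceAnalysis loaded OK")
```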

@johndpope

Are you on the right Python version? 3.11? Show the logs when ComfyUI loads.
I added these notes for the workflow:
https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID/pull/17/files
I'm using this
https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0

My impression of InstantID from the workflows in this repo: I want to have more control over the style. I attempted to load this image as an artist reference, but couldn't create anything that resembled it. Need to explore more.

[image: bubblegum]

@jwvanderbeck

jwvanderbeck commented Jan 24, 2024

3.11.6. I'm 90% certain it is coming from my install of insightface. If I could get it to compile locally like it wants to, that would probably just fix the issues.

@johndpope

johndpope commented Jan 24, 2024

try this pip install 'https://github.com/Gourieff/Assets/raw/main/Insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl'

@jwvanderbeck

Installing from that wheel at least gets me a new error so that's progress :D


```
[ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from F:\Generative Art Tools\ComfyUI\custom_nodes\ComfyUI-InstantID\models\antelopev2\1k3d68.onnx failed:Protobuf parsing failed.

File "F:\Generative Art Tools\ComfyUI\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Generative Art Tools\ComfyUI\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Generative Art Tools\ComfyUI\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Generative Art Tools\ComfyUI\custom_nodes\ComfyUI-InstantID\InstantIDNode.py", line 70, in load_insight_face
model = FaceAnalysis(name="antelopev2", root=current_directory, providers=[provider + 'ExecutionProvider',])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
model = model_zoo.get_model(onnx_file, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 452, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

@jwvanderbeck

Ahh, looking at the files on disk, I believe my downloads of the onnx models were corrupted. I will redownload them.
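A quick way to spot this is to check the file sizes; a rough sketch follows. The directory is an assumption based on the default ComfyUI-InstantID layout, and real antelopev2 model files should be tens of megabytes, not the ~1 KB placeholder files mentioned later in this thread:

```python
# Rough sketch: check that the antelopev2 .onnx downloads are real model files and not
# tiny corrupted/placeholder files, which cause the INVALID_PROTOBUF error above.
from pathlib import Path

model_dir = Path("ComfyUI/custom_nodes/ComfyUI-InstantID/models/antelopev2")

for f in sorted(model_dir.glob("*.onnx")):
    size_mb = f.stat().st_size / 1_000_000
    status = "OK" if size_mb > 1 else "LIKELY CORRUPT - redownload"
    print(f"{f.name}: {size_mb:.1f} MB  {status}")
```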

@jwvanderbeck

try this pip install 'https://github.com/Gourieff/Assets/raw/main/Insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl'

Thank you! That wheel was exactly what I needed. With that I got it all working!

@jwvanderbeck

A note for people who might be getting "safety check" errors: someone correct me if I'm wrong, but it seems to only work with SDXL-based models?

@johndpope

for now, yep.

FYI, there's a pytorch2 hotfix upstream that may speed things up (can't tell).
instantX-research/InstantID@abbb364

@hany-cmd

Why is it so slow? I have to wait 15 minutes for it to finish 50 steps on an RTX 3080.

@jwvanderbeck

It might be that onnx is using the CPU. I am having that issue and can't figure out why. That said, it's still nowhere near as slow as what you're describing.
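One way to check whether onnxruntime can actually see the GPU is a small sketch like the one below; if CUDAExecutionProvider is missing from the list, you likely have the CPU-only onnxruntime package installed rather than onnxruntime-gpu:

```python
# Small sketch: list which execution providers onnxruntime exposes.
# If CUDAExecutionProvider is absent, insightface ends up running on the CPU.
import onnxruntime as ort

print("onnxruntime version:", ort.__version__)
print("available providers:", ort.get_available_providers())
# Expected on a working CUDA setup: ['CUDAExecutionProvider', ..., 'CPUExecutionProvider']
```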

@PublicPrompts

@jwvanderbeck how did you fix the
[ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from F:\Generative Art Tools\ComfyUI\custom_nodes\ComfyUI-InstantID\models\antelopev2\1k3d68.onnx failed:Protobuf parsing failed.

@jwvanderbeck

jwvanderbeck commented Jan 26, 2024 via email

@PublicPrompts

Oh, that's it. I redownloaded the onnx files from here: https://huggingface.co/DIAMONIK7777/antelopev2/tree/main
Thanks!

@camoody1

I wonder why the files in the antelopev2 folder keep getting overwritten with the broken 1 KB versions. I had this node working last night, and when I tried again today, it was broken. I checked the files, and they were back to the 1 KB versions. I had to redownload the correct files. That's frustrating.

@jwvanderbeck

jwvanderbeck commented Jan 27, 2024 via email

Because they are in the repo.

@camoody1

Because they are in the repo. - John Vanderbeck - http://www.johnvanderbeck.com

Makes sense. I guess the owner of this repo just doesn't care enough to make his work user-friendly. I'm hoping for a better-made version of InstantID soon.

@jwvanderbeck

jwvanderbeck commented Jan 27, 2024 via email

@camoody1

@jwvanderbeck I probably would be, actually. 😂 I've spent three nights trying to get this to work and still haven't been able to get an image out of it.

@udappk

udappk commented Jan 27, 2024

I had this node working last night, and when I tried again today, it was broken.

@camoody1 Same. It was working perfectly, and today, after an update, it stopped working until I replaced the onnx files with newly redownloaded ones. But InstantID is still a must-have; the results are amazing!

@camoody1

camoody1 commented Jan 28, 2024

So, I was finally able to get some images out of this workflow after a LOT of memory and node struggles. And while the faces truly are amazing in stylized images, I have to say, the photorealistic image quality is really quite poor. Even with the Style set to "none" and my prompt including "raw photo, 8k uhd, ultra-detailed", the output looks blurry and grainy. I can convert the image to a latent, upscale it and then run it through a KSampler to make it come out gorgeous, but that totally removes the face likeness that is the whole point of the process. I'm not sure what is happening with the code behind the scenes, but if you're looking to produce a high-quality, photorealistic image, IPAdapter is still the better option.

@johndpope

@camoody1 - this might be the way to get faceswap to work better - instantX-research/InstantID#89

@camoody1

@camoody1 - this might be the way to get faceswap to work better - InstantID/InstantID#89

Yes. This would be very nice to have. Also, anything that could reduce the VRAM required would be huge.

@camoody1

Well, sadly, I had this working last night, but as I come back to use it again tonight: VRAM issues, again. My 12 GB 3060 must be teetering right on the edge of being usable. Unreal.
