Controlnet Support #90
base: master
Conversation
I've been testing this branch out and pushed a PR with minor fixes that were needed to get things working on my end. Just wanted to clarify for my own understanding - IIUC, this PR adds support for the TRT engine to use the ControlNet controls (i.e. output/middle/input) passed to the forward pass of Comfy's UNetModel. However, AFAICT the controls are actually generated outside of the UNet forward pass, so the TRT engine would only accelerate the UNet forward pass + sampling, not the ControlNet forward pass that generates the controls. This might differ from how TRT compilation for SD + ControlNet works in other environments - for example, the ControlNet forward pass is invoked in the UNet forward pass here, so the resulting engine would accelerate the ControlNet generation as well. So you'd need a separate step to accelerate the ControlNet generation, i.e. build another TRT engine for it or use something like torch.compile. Lmk if I'm off with anything here though!
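To illustrate the split I mean, here's a minimal schematic sketch (plain Python, not the actual Comfy/TRT code - all names are illustrative placeholders): the ControlNet forward pass runs outside the accelerated call, so only the UNet step is covered by the engine.

```python
def controlnet_forward(hint, x, timestep):
    # Runs in plain PyTorch in this branch -- NOT covered by the UNet engine.
    return {"input": [hint + x], "middle": [x * 0.5], "output": [x - hint]}

def trt_unet_forward(x, timestep, control=None):
    # Stand-in for the TRT-compiled UNet: it only *consumes* precomputed controls.
    out = x
    if control is not None:
        out = out + sum(control["middle"])
    return out

def sample_step(x, hint, timestep):
    control = controlnet_forward(hint, x, timestep)  # outside the engine
    return trt_unet_forward(x, timestep, control)    # inside the engine
```

So each sampling step pays the full PyTorch cost of `controlnet_forward` even with the engine in place.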
@yondonfu Thank you for your feedback, your understanding is right. I intended to have a more flexible system at the cost of some performance - otherwise you'd be required to build an engine for each combination of ControlNet + backbone. I have since realized issues with this approach, as it will not work with 3rd party implementations like XLabs. If you have suggestions on how to cleanly export the baked-in ControlNet from Comfy, I'd be happy to take a look.
That makes sense!
So, for background, I have a workflow that uses a torch.compile node on the ControlNet loaded by the ControlNetLoader node. This makes me wonder - do you think it would be possible to instead run a TensorRT convert node (or ONNX export followed by TensorRT convert) on the ControlNet output of the ControlNetLoader node? And then use a TensorRT loader node (instead of ControlNetLoader) to load the ControlNet TensorRT engine?
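Roughly, the node chain I'm imagining would accelerate each ControlNet as its own standalone engine. A toy sketch of that flow (every class, node, and function name here is hypothetical - not real Comfy nodes, and the "conversion" is just a stand-in for ONNX export + TRT build):

```python
class EngineRegistry:
    """Maps a converted ControlNet to its standalone engine."""
    def __init__(self):
        self._engines = {}

    def convert(self, name, forward_fn):
        # Stand-in for the proposed "TensorRT convert" node: builds a
        # separate engine for just this ControlNet, independent of the UNet.
        self._engines[name] = forward_fn
        return name

    def load(self, name):
        # Stand-in for the proposed TensorRT loader node (replacing
        # ControlNetLoader for the accelerated path).
        return self._engines[name]

registry = EngineRegistry()
key = registry.convert("canny_cn", lambda hint: {"middle": [hint * 2]})
engine = registry.load(key)
```

The upside would be that any ControlNet/backbone pairing works without rebuilding a combined engine per combination.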
Larger refactor to enable ControlNet by exposing output/middle/input controls as optional inputs. Nonetheless, this should be backwards compatible.
Some of the logic was moved into a model helper class to hopefully make it easier to adopt new architectures. Furthermore, ONNX Export and Import nodes were added to allow for a more flexible engine-building workflow.
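A minimal sketch of how optional control inputs can stay backwards compatible (function and tensor names below are illustrative assumptions, not the actual implementation): the extra bindings are only added when a ControlNet is active, so engines built without controls see the same inputs as before.

```python
def build_engine_inputs(x, timestep, context, control=None):
    # Base inputs match the pre-refactor engine interface.
    inputs = {"x": x, "timesteps": timestep, "context": context}
    if control is not None:
        # Only bind the extra control tensors when a ControlNet is active,
        # so existing engines without control inputs keep working unchanged.
        for kind in ("input", "middle", "output"):
            for i, tensor in enumerate(control.get(kind, [])):
                inputs[f"{kind}_control_{i}"] = tensor
    return inputs
```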