Nvidia TensorRT image generation is 6x slower!!! - A1111 WebUI - What is going on? #336
Unanswered · dezorianguy asked this question in Q&A
Replies: 1 comment, 2 replies
- Since I noticed you are using SDXL with this, are you positive you aren't overflowing to RAM? The TRT models take up approximately 5-6 GB on disk. It might be that you can't load all of that into your 12 GB. On my 3090 I'm hovering at 20 GB when using one LoRA model and one optimized engine.
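One way to test the overflow hypothesis above is to watch VRAM headroom while an image is generating. Below is a minimal sketch, not from the thread itself: it shells out to `nvidia-smi` (assumed to be on PATH, as it is with any NVIDIA driver install) and parses its CSV output. The helper names (`parse_mem`, `vram_headroom_mib`, `query_first_gpu_mem`) and the 1 GiB warning threshold are my own assumptions, not part of the extension.

```python
# Sketch: check whether the TensorRT engine plus model weights are close to
# exhausting VRAM, since spilling to system RAM is a common cause of large
# slowdowns. Helper names and the threshold below are hypothetical.
import subprocess


def parse_mem(csv_line: str) -> tuple[int, int]:
    """Parse one 'used, total' line (in MiB) from nvidia-smi CSV output."""
    used, total = (int(v.strip()) for v in csv_line.split(","))
    return used, total


def vram_headroom_mib(csv_line: str) -> int:
    """Return free VRAM in MiB for one GPU's 'used, total' CSV line."""
    used, total = parse_mem(csv_line)
    return total - used


def query_first_gpu_mem() -> str:
    """Ask nvidia-smi for 'used, total' (MiB) of the first GPU.

    Requires the NVIDIA driver; call this while an image is generating.
    """
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return out.splitlines()[0]


# Usage (run in a second terminal mid-generation):
#   headroom = vram_headroom_mib(query_first_gpu_mem())
#   if headroom < 1024:  # 1 GiB threshold is a guess; tune for your card
#       print("near VRAM capacity; driver may be spilling to system RAM")
```

If headroom drops to near zero while the TRT engine is loaded, the slowdown is very likely the RAM-spillover scenario described above rather than a problem with the engine itself.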
-
Thank you for your time.
I installed the TensorRT extension within the A1111 WebUI and followed the instructions to "export engine" (screenshot). However, generating images now takes several times as long.
For a test run, I first generated an image without the SD Unet activated, then one with the SD Unet activated for that specific model (screenshot). The regular generation takes about 0.5 minutes, while the TensorRT-boosted one takes around 6 minutes.
My setup: NVIDIA GeForce RTX 3060 (12GB).