Doesn't TensorRT support one model, all resolutions? #90
DuckersMcQuack started this conversation in General
-
I've so far built engines for 5 or 7 different resolutions across 2 models, and it's already 23 GB. Seeing as it said "bigger range, less speed", would further "research" on NVIDIA's side allow models to be as custom-resolution-adaptive as regular safetensors?
Replies: 1 comment
-
Currently, the model weights must be stored alongside the engines for TensorRT to work correctly. I'm working on a way to use the weights directly from the torch checkpoint, which should bring the size of an engine down to under 20 MB. You can already build with as large a range as you want (given your GPU has enough VRAM). #28
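For context on how one engine can cover a range of resolutions: TensorRT supports this through a dynamic-shape optimization profile, where the engine is tuned for one "opt" shape but accepts any shape between "min" and "max". Below is a minimal sketch using the TensorRT Python API; the ONNX path, the input tensor name "latent", and the latent dimensions are illustrative assumptions, not this project's actual values.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse an exported ONNX graph (path is a placeholder).
parser = trt.OnnxParser(network, logger)
with open("unet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()

# One profile covers the whole resolution range: the engine is tuned for
# the "opt" shape but accepts anything between "min" and "max". A wider
# range means one engine serves more resolutions, at some cost in speed
# and in VRAM while building.
profile = builder.create_optimization_profile()
profile.set_shape(
    "latent",              # input tensor name -- assumed for this sketch
    (1, 4, 32, 32),        # min: e.g. 256x256 images in latent space
    (1, 4, 64, 64),        # opt: 512x512, the shape the engine is tuned for
    (1, 4, 128, 128),      # max: 1024x1024 upper bound
)
config.add_optimization_profile(profile)

# Serialize one engine that handles every resolution in the profile.
engine_bytes = builder.build_serialized_network(network, config)
with open("unet.plan", "wb") as f:
    f.write(engine_bytes)
```

At run time, the execution context's input shape is set per call (set_input_shape in recent TensorRT releases), which is what lets a single engine serve every resolution inside the profile, generally somewhat slower near the extremes of the range than an engine built for one fixed shape.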