Add TensorRT models to work #66

Open · 2 tasks
My12123 opened this issue Nov 6, 2023 · 4 comments
Comments

@My12123 (Contributor) commented Nov 6, 2023

  • Add the ability to run TensorRT models
  • Add a TensorRT converter
@nagadomi (Owner) commented Nov 7, 2023

On Linux, torch.compile() is supported. torch.compile should support the TensorRT backend in the future.

To use it, specify the --compile option for waifu2x.web. In the GUI/CLI, it is used automatically when the input is a video.
Note that compiling the model can be very time-consuming. It is also not yet supported on Windows.
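For reference, a minimal sketch of what enabling torch.compile amounts to in plain PyTorch; the model below is a stand-in for illustration, not the actual waifu2x code:

```python
import torch

# Stand-in CNN; waifu2x loads its real models (upconv_7, cunet, swin_unet) internally.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
).eval().to(device)

# torch.compile wraps the module; the first call triggers compilation
# (the slow step mentioned above), later calls reuse the compiled graph.
compiled = torch.compile(model)

with torch.inference_mode():
    x = torch.randn(1, 3, 256, 256, device=device)
    y = compiled(x)
```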

Also, the ONNX models can use the TensorRT backend. However, only the JavaScript implementation (unlimited:waifu2x) currently uses onnxruntime.
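For the ONNX path, this is roughly how onnxruntime would be pointed at TensorRT. The model file name is a placeholder, and the TensorRT provider is only used when onnxruntime is built with TensorRT support:

```python
import numpy as np
import onnxruntime as ort

# "waifu2x_upconv_7.onnx" is a hypothetical exported ONNX model path.
session = ort.InferenceSession(
    "waifu2x_upconv_7.onnx",
    providers=[
        "TensorrtExecutionProvider",  # preferred when available
        "CUDAExecutionProvider",      # fallback
        "CPUExecutionProvider",
    ],
)
x = np.random.rand(1, 3, 256, 256).astype(np.float32)
(y,) = session.run(None, {session.get_inputs()[0].name: x})
```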

@nagadomi (Owner) commented Nov 7, 2023

I have confirmed that upconv_7 and cunet work with the TensorRT backend. However, it is slower than running with the default backend and no compilation.
swin_unet* requires PyTorch 2.1, but TensorRT does not yet support PyTorch 2.1.
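For anyone who wants to reproduce this, the test presumably comes down to switching the torch.compile backend to Torch-TensorRT. A sketch assuming the torch_tensorrt package is installed; the backend and option names follow the Torch-TensorRT docs and may differ between releases:

```python
import torch
import torch_tensorrt  # registers the "torch_tensorrt" backend for torch.compile

# Stand-in module; the actual test would use the loaded upconv_7 / cunet model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
).eval().cuda()

trt_model = torch.compile(
    model,
    backend="torch_tensorrt",
    options={"enabled_precisions": {torch.half}},  # allow fp16 TensorRT kernels
)

with torch.inference_mode():
    x = torch.randn(1, 3, 256, 256, device="cuda")
    y = trt_model(x)  # first call builds the TensorRT engine, which is slow
```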

@alanzchen commented

Looks like iw3 would also benefit from https://github.com/spacewalk01/depth-anything-tensorrt !

@nagadomi (Owner) commented

I have confirmed that using torch.compile with PyTorch 2.2 can speed up the waifu2x models by almost 2x.
So I do not think we should stick to TensorRT.

For iw3's DepthAnything model, I have optimized it with mini-batching and fp16, so it should already be 2x faster than the official video demo.
(Probably the same approach as LiheYoung/Depth-Anything#49 (comment))
I have not tried torch.compile with the DepthAnything model yet.
The one drawback of torch.compile is that it does not work on Windows.
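For context, the mini-batch + fp16 idea amounts to something like the following sketch. The depth model and batch size here are stand-ins, not iw3's actual code:

```python
import torch

device = "cuda"
# Stand-in for a depth estimation network; iw3 wraps DepthAnything itself.
depth_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 1, 3, padding=1),
).eval().to(device)

@torch.inference_mode()
def estimate_depth(frames, batch_size=4):
    """Run depth estimation over video frames in mini-batches with fp16 autocast."""
    outputs = []
    for i in range(0, len(frames), batch_size):
        batch = torch.stack(frames[i:i + batch_size]).to(device, non_blocking=True)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            depth = depth_model(batch)
        outputs.append(depth.float().cpu())
    return torch.cat(outputs, dim=0)

frames = [torch.rand(3, 384, 384) for _ in range(16)]  # dummy video frames
depth_maps = estimate_depth(frames)
```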
