Does it support NVIDIA TensorRT #108

Does it support NVIDIA TensorRT?
[TensorRT] WARNING: using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors
Any solution or suggestion?
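The warning above is emitted when a serialized engine plan built on one GPU model is deserialized on a different one; the usual remedy is to rebuild the engine on the device it will actually run on. Below is a minimal sketch of that rebuild, assuming an ONNX model and the TensorRT 7.x Python API; the file names and workspace size are placeholders, not values from this issue.

```python
import tensorrt as trt

# Sketch: rebuild the engine on the GPU it will run on, instead of reusing a
# plan file that was serialized on a different device model.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:          # hypothetical model file
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse model.onnx")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30          # 1 GiB workspace; adjust as needed

engine = builder.build_engine(network, config)   # built on *this* device
with open("model.plan", "wb") as f:              # plan is only valid for this GPU model
    f.write(engine.serialize())
```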
Comments
I had the same problem. It works normally when using the whole card, but not once it is fragmented.
Which gpu-manager version are you running, and can you provide the log?
Please follow the FAQ and provide the application container log.
You need to follow the FAQ: set the environment first, then run your application; the expected entries should then appear in the log.
I have followed the FAQ.
https://gist.github.com/xwttzz/1f4b3794a2fb19f430ebea828030d145
The error shows your application is missing a library.
The confusing part is that when I set vcore to 100 it is fine; it only fails when the card is fragmented. Besides, it has been confirmed that libnvinfer is installed in the container.
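Since the failure only shows up when vcore is below 100, one quick sanity check inside the running application container is whether libnvinfer can actually be loaded in that environment. A minimal sketch in Python; the soname versions listed are assumptions, so adjust them to the installed TensorRT version.

```python
import ctypes

# Sketch: try to dlopen TensorRT's core library inside the application container
# to confirm it is present and resolvable. The sonames below are assumptions.
candidates = ["libnvinfer.so", "libnvinfer.so.7", "libnvinfer.so.6"]

for name in candidates:
    try:
        ctypes.CDLL(name)
        print(f"loaded {name}")
        break
    except OSError as err:
        print(f"could not load {name}: {err}")
else:
    print("libnvinfer could not be loaded; check LD_LIBRARY_PATH and the TensorRT install")
```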
How about re-pulling the image thomassong/gpu-manager:1.1.5? We have fixed a recursion problem in vcuda.
I have updated the latest log at the original address: https://gist.github.com/xwttzz/1f4b3794a2fb19f430ebea828030d145
I think you should debug the core dump to find where it crashes.
@xwttzz Have you solved the problem?
I have the same problem. Does anyone have a solution?
Hi, have you solved the problem?