refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components #9762
base: main
Conversation
…nsorrt_common as unified lib for all perception components Signed-off-by: Amadeusz Szymko <[email protected]>
Thank you for contributing to the Autoware project! 🚧 If your pull request is in progress, switch it to draft mode. Please ensure:
Signed-off-by: Amadeusz Szymko <[email protected]>
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #9762      +/-   ##
==========================================
- Coverage   29.72%   29.64%   -0.08%
==========================================
  Files        1450     1457       +7
  Lines      108837   109123     +286
  Branches    42740    42825      +85
==========================================
+ Hits        32348    32352       +4
- Misses      73311    73592     +281
- Partials     3178     3179       +1
This pull request uses carry forward flags. View full report in Codecov by Sentry.
Description
The incoming TensorRT upgrade requires a refactor of the perception components. The new version of the autoware_tensorrt_common library will serve as a high-level TensorRT API. This PR maintains compatibility with both the current environment (TensorRT 8.6) and the future upgrade (TensorRT 10.7+), and it should be merged before the TensorRT upgrade.
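As a rough illustration of the compatibility goal, the pattern behind a unified wrapper that supports multiple library versions can be sketched as follows. This is not the actual TrtCommon implementation (which is C++ against the TensorRT API); all class and method names here are hypothetical, and the version numbers merely mirror the TensorRT 8.6 / 10.7+ split mentioned above.

```python
# Hypothetical sketch of a version-compatibility wrapper: one stable
# front-end API that dispatches to version-specific back-end code paths,
# so callers are unaffected by the underlying library upgrade.

class BackendV8:
    """Code path for the older library version (e.g. TensorRT 8.6)."""
    def build(self, model: str) -> str:
        return f"engine built from {model} with v8 API"

class BackendV10:
    """Code path for the newer library version (e.g. TensorRT 10.7+)."""
    def build(self, model: str) -> str:
        return f"engine built from {model} with v10 API"

class CompatWrapper:
    """Stable API: callers never see which backend version is in use."""
    def __init__(self, major_version: int):
        # Select the code path once, based on the detected library version.
        self._backend = BackendV10() if major_version >= 10 else BackendV8()

    def build(self, model: str) -> str:
        return self._backend.build(model)
```

With this shape, perception components depend only on the wrapper, and the version-specific differences stay confined to the back-end classes.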
Related links
Parent Issue:
How was this PR tested?
A. Current environment (without dependencies upgrade).
B. New environment (with dependencies upgrade; upgraded dependencies placed under src/universe/external).
Notes for reviewers
The unified high-level API is exposed through the TrtCommon class. The TensorRT documentation says, "They may not give the optimal performance and accuracy. As a workaround, use INT8 explicit quantization instead." Therefore, we might need to consider explicit quantization (during model deployment).
Interface changes
None.
Effects on system behavior
None.
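For context on the explicit-quantization workaround mentioned in the reviewer notes: explicit quantization fixes the quantization scales in the model itself (as Q/DQ nodes inserted at deployment time) instead of letting the runtime derive them via calibration. A minimal sketch of the underlying arithmetic, with illustrative values only (the scale and weights below are made up, not taken from any Autoware model):

```python
# Symmetric per-tensor INT8 quantization arithmetic (illustrative sketch).

def quantize_int8(values, scale):
    """Quantize: round(x / scale), clamped to the int8 range [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize_int8(qvalues, scale):
    """Dequantize: recover approximate real values as q * scale."""
    return [q * scale for q in qvalues]

# In explicit quantization this scale would be baked into the model's
# Q/DQ nodes at deployment time rather than found by a calibrator.
weights = [0.51, -1.2, 0.003, 2.4]          # made-up example weights
scale = max(abs(w) for w in weights) / 127  # symmetric scale for int8
quantized = quantize_int8(weights, scale)
recovered = dequantize_int8(quantized, scale)
```

The accuracy concern quoted above comes from how these scales are chosen: explicit quantization makes that choice a deliberate, reproducible part of model deployment.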