Here's a screenshot of the demo (JetPack-4.2.2, i.e. TensorRT 5).
5. The demo program supports 5 different image/video inputs. You could do `python3 trt_googlenet.py --help` to read the help messages. Or more specifically, the following inputs could be specified:
   * `--image test_image.jpg`: an image file, e.g. jpg or png.
   * `--video test_video.mp4`: a video file, e.g. mp4 or ts. An optional `--video_looping` flag could be enabled if needed.
   * `--usb 0`: USB webcam (/dev/video0).
   * `--rtsp rtsp://admin:[email protected]/live.sdp`: RTSP source, e.g. an IP cam. An optional `--rtsp_latency` argument could be used to adjust the latency setting in this case.
   * `--onboard 0`: Jetson onboard camera.

   In addition, you could use `--width` and `--height` to specify the desired input image size, and use `--do_resize` to force resizing of the image/video file source.
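The effect of `--width`/`--height` with `--do_resize` can be sketched with a simple nearest-neighbor resize. This is a simplified stand-in with hypothetical names, not the repo's actual code (the demos use OpenCV for resizing):

```python
import numpy as np

def resize_frame(frame, width, height):
    """Nearest-neighbor resize of an HxWxC frame to (height, width)."""
    h, w = frame.shape[:2]
    rows = np.arange(height) * h // height  # source row for each output row
    cols = np.arange(width) * w // width    # source column for each output column
    return frame[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # e.g. one 640x480 camera frame
resized = resize_frame(frame, 300, 300)          # e.g. a typical SSD input size
```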
   The `--usb`, `--rtsp` and `--onboard` video sources usually produce image frames at 30 FPS. If the TensorRT engine inference code runs faster than that (which happens easily on an x86_64 PC with a good GPU), one particular image could be inferenced multiple times before the next image frame becomes available. This causes a problem in the object detector demos, since the original image could have been altered (bounding boxes drawn) and the altered image would be taken for inference again. To cope with this problem, use the optional `--copy_frame` flag to force copying/cloning of image frames internally.
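The idea behind `--copy_frame` can be illustrated with a minimal sketch (hypothetical names, not the repo's actual code): drawing bounding boxes on a copy leaves the grabbed frame untouched, so a repeated inference on the same frame still sees clean pixels.

```python
import numpy as np

def draw_box(img, x1, y1, x2, y2):
    """Draw a crude white rectangle outline in place (stand-in for cv2.rectangle)."""
    img[y1, x1:x2] = 255
    img[y2 - 1, x1:x2] = 255
    img[y1:y2, x1] = 255
    img[y1:y2, x2 - 1] = 255

grabbed = np.zeros((120, 160, 3), dtype=np.uint8)  # latest frame from the camera thread

# Without copying, the detector would draw on (and later re-infer) the same buffer.
frame = grabbed.copy()        # conceptually, what --copy_frame enables
draw_box(frame, 10, 10, 60, 60)

assert grabbed.max() == 0     # original frame stays clean for the next inference
```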
6. Check out my blog post for implementation details:
Here's the result (JetPack-4.2.2, i.e. TensorRT 5). Frame rate was good (over 20 FPS).
I also tested the "ssd_mobilenet_v1_egohands" (hand detector) model with a video clip from YouTube, and got the following result. Again, frame rate was pretty good, but the detection didn't seem very accurate :-(
5. To verify accuracy (mAP) of the optimized TensorRT engines and make sure they do not degrade too much (due to reduced floating-point precision of "FP16") from the original TensorFlow frozen inference graphs, you could prepare validation data and run "eval_ssd.py". Refer to [README_mAP.md](README_mAP.md) for details.
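The FP16 precision loss mentioned above is easy to demonstrate: float16 has only a 10-bit mantissa (roughly 3 significant decimal digits), so model weights get rounded when the engine is built in FP16 mode. A minimal illustration with NumPy:

```python
import numpy as np

w32 = np.float32(0.1234567)      # a typical small weight value
w16 = np.float16(w32)            # rounded the way FP16 storage rounds it
err = abs(float(w16) - float(w32))
print(float(w16), err)           # small but nonzero rounding error
```

Per-weight errors like this are tiny, but they accumulate across layers, which is why checking mAP with "eval_ssd.py" after conversion is worthwhile.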