Feature/zoombinis infer single image #305
base: master
Conversation
…. Before, the same image would run inference multiple times, redundantly hogging resources.
* catkin_test_results (#302)
* only inferring a single image frame once for a huge performance boost. Before the same image would run inference multiple times redundantly hogging resources
* tests working after I update the image seq in the header
* now with test fixes the image header correctly sets Seq
* Update YoloObjectDetector.cpp
* Update YoloObjectDetector.cpp
* Update ObjectDetection.cpp

Co-authored-by: Tom Lankhorst <[email protected]>
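To make the intent concrete, here is a minimal sketch of a "run inference only once per frame" guard keyed on the image header's sequence number. The class and member names are hypothetical, not taken from this PR; they only illustrate the technique the description refers to.

```cpp
// Hypothetical sketch (not the PR's actual code): skip frames whose sequence
// number matches the last frame that was already run through the network.
#include <sensor_msgs/Image.h>

#include <cstdint>

class SingleInferenceGuard {
 public:
  // Returns true exactly once per distinct header.seq, so the caller can
  // run the expensive YOLO forward pass only for new frames.
  bool shouldInfer(const sensor_msgs::ImageConstPtr& msg) {
    if (hasProcessedFrame_ && msg->header.seq == lastProcessedSeq_) {
      return false;  // same frame as before: skip the redundant inference
    }
    hasProcessedFrame_ = true;
    lastProcessedSeq_ = msg->header.seq;
    return true;
  }

 private:
  bool hasProcessedFrame_ = false;
  uint32_t lastProcessedSeq_ = 0;
};
```

This is also why the test changes below pass an explicit seq: if every published image carried the same sequence number, a guard like this could not tell frames apart.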
@@ -171,7 +171,7 @@ void YoloObjectDetector::cameraCallback(const sensor_msgs::ImageConstPtr& msg) {
   if (cam_image) {
     {
       boost::unique_lock<boost::shared_mutex> lockImageCallback(mutexImageCallback_);
-      imageHeader_ = msg->header;
+      imageHeader_ = cam_image->header;
This is an optional change, just for consistency.
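For reference, a minimal sketch of the callback pattern this diff touches, under the assumption that it follows the usual cv_bridge flow (it is not the exact YoloObjectDetector code). Since cv_bridge::toCvCopy copies the header of the incoming message into the converted image, cam_image->header and msg->header should normally carry the same stamp and seq; reading it from cam_image just keeps the header tied to the image object that is passed downstream.

```cpp
// Sketch only, assuming the standard cv_bridge conversion pattern.
#include <cv_bridge/cv_bridge.h>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <std_msgs/Header.h>

void cameraCallback(const sensor_msgs::ImageConstPtr& msg) {
  cv_bridge::CvImagePtr cam_image;
  try {
    // toCvCopy copies msg->header into cam_image->header along with the pixels.
    cam_image = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
  } catch (const cv_bridge::Exception& e) {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }
  if (cam_image) {
    // Same stamp/seq as msg->header, read from the object handed to the detector.
    std_msgs::Header imageHeader = cam_image->header;
    (void)imageHeader;
  }
}
```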
  sensor_msgs::ImagePtr image = cv_ptr->toImageMsg();
  auto header = std_msgs::Header();
  header.stamp = ros::Time::now();
  header.seq = seq;
Without a distinct sequence number, the logic for preventing duplicate image detections fails.
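For context, a hedged sketch of how the publishing side shown in the diff above fits together, assuming the image is read with OpenCV and wrapped via cv_bridge; the helper name makeImageMsg is illustrative, not the test's actual function.

```cpp
// Illustrative helper (assumed, not the test's exact code): build an image
// message whose header carries an explicit, caller-chosen sequence number.
#include <cv_bridge/cv_bridge.h>
#include <opencv2/imgcodecs.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <std_msgs/Header.h>

#include <string>

sensor_msgs::ImagePtr makeImageMsg(const std::string& pathToImage, uint32_t seq) {
  cv_bridge::CvImage cv_image;
  cv_image.image = cv::imread(pathToImage, cv::IMREAD_COLOR);
  cv_image.encoding = sensor_msgs::image_encodings::BGR8;

  auto header = std_msgs::Header();
  header.stamp = ros::Time::now();
  header.seq = seq;  // distinct per frame, otherwise the duplicate check drops it
  cv_image.header = header;

  return cv_image.toImageMsg();
}
```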
@@ -104,8 +105,8 @@ TEST(ObjectDetection, DISABLED_DetectDog) {
   pathToTestImage += ".jpg";

   // Send dog image to yolo.
   ASSERT_TRUE(sendImageToYolo(nodeHandle, pathToTestImage));
-  ASSERT_TRUE(sendImageToYolo(nodeHandle, pathToTestImage));
+  ASSERT_TRUE(sendImageToYolo(nodeHandle, pathToTestImage, 1));
Notice that for each image we need to set a distinct sequence number.
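As a usage note, sending the same image twice would then look roughly like the following; the second call with seq 2 is an assumption about how an additional send would be written, not a line from this diff.

```cpp
// Distinct seq per call, so neither publish is dropped as a duplicate frame.
ASSERT_TRUE(sendImageToYolo(nodeHandle, pathToTestImage, 1));
ASSERT_TRUE(sendImageToYolo(nodeHandle, pathToTestImage, 2));
```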
-  EXPECT_LT(centerErrorPersonX, 30);
-  EXPECT_LT(centerErrorPersonY, 30);
+  // TODO: accuracy is still bad but not unacceptable (see images)
+  EXPECT_LT(centerErrorPersonX, 260);
This is very different from what the test was expecting; please confirm it is okay. The image output looked fine to me (see the previous PR).
@tomlankhorst @mbjelonic I understand it may take time to review, but at first glance, what concerns do you have that I could address to help with the review process?
Merge branch 'leggedrobotics-master' into feature/zoombinis-infer-single-image
@Zorbing feel free to carry the torch from here. I've run out of time waiting for this merge request and ended up writing my own ROS node. I must say the speedup gains from YoloV4 and TensorRT (if you can use it) are well worth the effort!
<?xml version="1.0" encoding="UTF-8"?>
<module type="CPP_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$" />
Remove unnecessary files.
@zoombinis sorry for the late reply. I went through your changes and left some comments.
@zoombinis I hope you are doing fine. Let us know if you need more info.
@tomlankhorst same to you, I hope you are well. As I posted above, this merge took too long, so I ended up implementing my own custom ROS node: it is very simple Python with a cv2 RTSP connection to the camera (I've found ROS image serialization to be the biggest bottleneck) and a TensorRT neural net using YoloV4, for a much faster frame rate and much better accuracy. On an NVIDIA Xavier NX I'm getting between 15 and 20 FPS, even though I only need between 5 and 10 :) So @Zorbing is the other contributor here, but I can no longer support this merge.
As far as I know, I cannot edit this pull request or push any changes to it. |
@Zorbing it's the same for me; every time I make a change, I've had to make a new PR. I work mostly with GitLab, so maybe a GitHub expert will say differently.
See description of results (including images) here: #301 (comment)