
segnet Node Crashes with "double free or corruption" Error During Launch #140

Open

YL-Tan opened this issue Aug 27, 2024 · 4 comments

YL-Tan commented Aug 27, 2024

When I try to launch the segnet node from the ros_deep_learning package with the following command:

roslaunch ros_deep_learning segnet.ros1.launch input:=v4l2:///dev/video4 output:=display://0

the node crashes with the following error:

...
[ INFO] [1724721245.224459610]: allocated CUDA memory for 1280x720 image conversion
double free or corruption (!prev)
[segnet-3] process has died [pid 176, exit code -6, cmd /ros_deep_learning/devel/lib/ros_deep_learning/segnet /segnet/image_in:=/video_source/raw __name:=segnet __log:=/root/.ros/log/a3c743d6-6411-11ef-87d9-28d0431cfa8d/segnet-3.log].
log file: /root/.ros/log/a3c743d6-6411-11ef-87d9-28d0431cfa8d/segnet-3*.log

Full Log Output:

... logging to /root/.ros/log/a3c743d6-6411-11ef-87d9-28d0431cfa8d/roslaunch-ubuntu-140.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ubuntu:39177/

SUMMARY
========

PARAMETERS
 * /rosdistro: noetic
 * /rosversion: 1.16.0
 * /segnet/class_colors_path: 
 * /segnet/class_labels_path: 
 * /segnet/input_blob: 
 * /segnet/mask_filter: linear
 * /segnet/model_name: fcn-resnet18-mhp-...
 * /segnet/model_path: 
 * /segnet/output_blob: 
 * /segnet/overlay_alpha: 180.0
 * /segnet/overlay_filter: linear
 * /segnet/prototxt_path: 
 * /video_output/bitrate: 0
 * /video_output/codec: unknown
 * /video_output/resource: display://0
 * /video_source/height: 0
 * /video_source/loop: 0
 * /video_source/resource: v4l2:///dev/video4
 * /video_source/width: 0

NODES
  /
    segnet (ros_deep_learning/segnet)
    video_output (ros_deep_learning/video_output)
    video_source (ros_deep_learning/video_source)

auto-starting new master
process[master]: started with pid [153]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to a3c743d6-6411-11ef-87d9-28d0431cfa8d
process[rosout-1]: started with pid [168]
started core service [/rosout]
process[video_source-2]: started with pid [175]
process[segnet-3]: started with pid [176]
process[video_output-4]: started with pid [177]
[ INFO] [1724721240.865774799]: opening video source: v4l2:///dev/video4
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video4
[ INFO] [1724721240.899158727]: opening video output: display://0
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[gstreamer] gstCamera -- found v4l2 device: Intel(R) RealSense(TM) Depth Ca
[gstreamer] v4l2-proplist, device.path=(string)/dev/video4, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"Intel\(R\)\ RealSense\(TM\)\ Depth\ Ca", v4l2.device.bus_info=(string)usb-3610000.xhci-1.1, v4l2.device.version=(uint)330360, v4l2.device.capabilities=(uint)2225078273, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 7 caps for v4l2 device /dev/video4
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)800, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1, 10/1, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 15/1, 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)848, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 60/1, 30/1, 15/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 90/1, 60/1, 30/1, 15/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)480, height=(int)270, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 90/1, 60/1, 30/1, 15/1, 5/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)424, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 90/1, 60/1, 30/1, 15/1, 5/1 };
[gstreamer] gstCamera -- selected device profile:  codec=raw format=yuyv width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video4 do-timestamp=true ! video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720 ! appsink name=mysink sync=false
[gstreamer] gstCamera successfully created device v4l2:///dev/video4
[video]  created gstCamera from v4l2:///dev/video4
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video4
     - protocol:  v4l2
     - location:  /dev/video4
     - port:      4
  -- deviceType: default
  -- ioType:     input
  -- codec:      raw
  -- codecType:  cpu
  -- width:      1280
  -- height:     720
  -- frameRate:  30
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0

segNet -- loading segmentation network model from:
       -- prototxt:   
       -- model:      networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx
       -- labels:     networks/FCN-ResNet18-MHP-512x320/classes.txt
       -- colors:     networks/FCN-ResNet18-MHP-512x320/colors.txt
       -- input_blob  'input_0'
       -- output_blob 'output_0'
       -- batch_size  1

[TRT]    TensorRT version 8.5.2
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::CoordConvAC version 1
[TRT]    Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT]    Registered plugin creator - ::GroupNorm version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT]    Registered plugin creator - ::LayerNorm version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT]    Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT]    Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT]    Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::PillarScatterPlugin version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::ProposalDynamic version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::ROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::ScatterND version 1
[TRT]    Registered plugin creator - ::SeqLen2Spatial version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::SplitGeLU version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline0
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     input
  -- width:      1920
  -- height:     1080
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[ INFO] [1724721241.062804675]: video_output node initialized, waiting for messages
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps:  video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, interlace-mode=(string)progressive
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=yuyv width=1280 height=720 size=1843200
[TRT]    [MemUsageChange] Init CUDA: CPU +222, GPU +0, now: CPU 259, GPU 3991 (MiB)
[TRT]    Trying to load shared library libnvinfer_builder_resource.so.8.5.2
[TRT]    Loaded shared library libnvinfer_builder_resource.so.8.5.2
[cuda]   allocated 4 ring buffers (1843200 bytes each, 7372800 bytes total)
[cuda]   allocated 4 ring buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[cuda]   allocated 4 ring buffers (2764800 bytes each, 11059200 bytes total)
[ INFO] [1724721241.876176221]: allocated CUDA memory for 1280x720 image conversion
[gstreamer] gstreamer message qos ==> v4l2src0
[TRT]    [MemUsageChange] Init builder kernel library: CPU +302, GPU +390, now: CPU 583, GPU 4405 (MiB)
[TRT]    native precisions detected for GPU:  FP32, FP16, INT8
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    found engine cache file /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx.1.1.8502.GPU.FP16.engine
[TRT]    found model checksum /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx.sha256sum
[TRT]    echo "$(cat /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx.sha256sum) /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx" | sha256sum --check --status
[TRT]    model matched checksum /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx.sha256sum
[TRT]    loading network plan from engine cache... /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx.1.1.8502.GPU.FP16.engine
[TRT]    device GPU, loaded /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx
[TRT]    Loaded engine size: 22 MiB
[TRT]    Deserialization required 27876 microseconds.
[TRT]    [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +22, now: CPU 0, GPU 22 (MiB)
[TRT]    Total per-runner device persistent memory is 1024
[TRT]    Total per-runner host persistent memory is 68096
[TRT]    Allocated activation device memory of size 7864320
[TRT]    [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +7, now: CPU 0, GPU 29 (MiB)
[TRT]    The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       25
[TRT]       -- maxBatchSize 1
[TRT]       -- deviceMemory 7864320
[TRT]       -- bindings     2
[TRT]       binding 0
                -- index   0
                -- name    'input_0'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  4
                -- dim #0  1
                -- dim #1  3
                -- dim #2  320
                -- dim #3  512
[TRT]       binding 1
                -- index   1
                -- name    'output_0'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  4
                -- dim #0  1
                -- dim #1  21
                -- dim #2  10
                -- dim #3  16
[TRT]    
[TRT]    binding to input 0 input_0  binding index:  0
[TRT]    binding to input 0 input_0  dims (b=1 c=3 h=320 w=512) size=1966080
[TRT]    binding to output 0 output_0  binding index:  1
[TRT]    binding to output 0 output_0  dims (b=1 c=21 h=10 w=16) size=13440
[TRT]    
[TRT]    device GPU, /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx initialized.
[TRT]    segNet outputs -- s_w 16  s_h 10  s_c 21
[TRT]    loaded 21 class labels
[TRT]    class 00  color 0.000000 0.000000 0.000000 255.000000
[TRT]    class 01  color 139.000000 69.000000 19.000000 255.000000
[TRT]    class 02  color 222.000000 184.000000 135.000000 255.000000
[TRT]    class 03  color 210.000000 105.000000 30.000000 255.000000
[TRT]    class 04  color 255.000000 255.000000 0.000000 255.000000
[TRT]    class 05  color 255.000000 165.000000 0.000000 255.000000
[TRT]    class 06  color 0.000000 255.000000 0.000000 255.000000
[TRT]    class 07  color 60.000000 179.000000 113.000000 255.000000
[TRT]    class 08  color 107.000000 142.000000 35.000000 255.000000
[TRT]    class 09  color 255.000000 0.000000 0.000000 255.000000
[TRT]    class 10  color 245.000000 222.000000 179.000000 255.000000
[TRT]    class 11  color 0.000000 0.000000 255.000000 255.000000
[TRT]    class 12  color 0.000000 255.000000 255.000000 255.000000
[TRT]    class 13  color 238.000000 130.000000 238.000000 255.000000
[TRT]    class 14  color 128.000000 0.000000 128.000000 255.000000
[TRT]    class 15  color 255.000000 0.000000 0.000000 255.000000
[TRT]    class 16  color 255.000000 0.000000 255.000000 255.000000
[TRT]    class 17  color 128.000000 128.000000 128.000000 255.000000
[TRT]    class 18  color 192.000000 192.000000 192.000000 255.000000
[TRT]    class 19  color 128.000000 128.000000 128.000000 255.000000
[TRT]    class 20  color 128.000000 128.000000 128.000000 255.000000
[TRT]    loaded 21 class colors
[ INFO] [1724721244.917549168]: model hash => 1987947487477878254
[ INFO] [1724721244.920395247]: hash string => /usr/local/bin/networks/FCN-ResNet18-MHP-512x320/fcn_resnet18.onnx/usr/local/bin/networks/FCN-ResNet18-MHP-512x320/classes.txt
[ INFO] [1724721244.923732964]: node namespace => /segnet
[ INFO] [1724721244.923812487]: class labels => /segnet/class_labels_1987947487477878254
[ INFO] [1724721244.943880455]: segnet node initialized, waiting for messages
[ INFO] [1724721245.192920619]: allocated CUDA memory for 1280x720 image conversion
[ INFO] [1724721245.224459610]: allocated CUDA memory for 1280x720 image conversion
double free or corruption (!prev)
[segnet-3] process has died [pid 176, exit code -6, cmd /ros_deep_learning/devel/lib/ros_deep_learning/segnet /segnet/image_in:=/video_source/raw __name:=segnet __log:=/root/.ros/log/a3c743d6-6411-11ef-87d9-28d0431cfa8d/segnet-3.log].
log file: /root/.ros/log/a3c743d6-6411-11ef-87d9-28d0431cfa8d/segnet-3*.log

YL-Tan (Author) commented Aug 27, 2024

After running catkin_make clean and then catkin_make, I noticed several "control reaches end of non-void function" warnings. They indicate that publish_overlay, publish_mask_color, and publish_mask_class in node_segnet.cpp are declared to return bool, but some code paths can reach the end of those functions without returning a value. The imagenet and detectnet nodes produce the same warning for their own publish_overlay functions.

root@ubuntu:/ros_deep_learning# catkin_make      
Base path: /ros_deep_learning
Source space: /ros_deep_learning/src
Build space: /ros_deep_learning/build
Devel space: /ros_deep_learning/devel
Install space: /ros_deep_learning/install
####
#### Running command: "make cmake_check_build_system" in "/ros_deep_learning/build"
####
####
#### Running command: "make -j6 -l6" in "/ros_deep_learning/build"
####
[  0%] Built target sensor_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_py
[  0%] Built target sensor_msgs_generate_messages_py
[  0%] Built target sensor_msgs_generate_messages_nodejs
[  0%] Built target sensor_msgs_generate_messages_cpp
[  0%] Built target sensor_msgs_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_lisp
[  0%] Built target geometry_msgs_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_cpp
[  0%] Built target geometry_msgs_generate_messages_cpp
[  0%] Built target geometry_msgs_generate_messages_py
[  0%] Built target std_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_eus
[  1%] Building CXX object ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o
[  3%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o
[  5%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/node_imagenet.cpp.o
[  6%] Building CXX object ros_deep_learning/CMakeFiles/video_source.dir/src/node_video_source.cpp.o
[  8%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/nodelet_imagenet.cpp.o
[ 10%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/node_video_output.cpp.o
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp: In function ‘bool publish_overlay(uint32_t, uint32_t)’:
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp:70:21: warning: control reaches end of non-void function [-Wreturn-type]
   70 |  sensor_msgs::Image msg;
      |                     ^~~
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp: In function ‘bool publish_mask_color(uint32_t, uint32_t)’:
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp:95:21: warning: control reaches end of non-void function [-Wreturn-type]
   95 |  sensor_msgs::Image msg;
      |                     ^~~
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp: In function ‘bool publish_mask_class(uint32_t, uint32_t)’:
/ros_deep_learning/src/ros_deep_learning/src/node_segnet.cpp:120:21: warning: control reaches end of non-void function [-Wreturn-type]
  120 |  sensor_msgs::Image msg;
      |                     ^~~
/ros_deep_learning/src/ros_deep_learning/src/node_imagenet.cpp: In function ‘bool publish_overlay(int, float)’:
/ros_deep_learning/src/ros_deep_learning/src/node_imagenet.cpp:82:21: warning: control reaches end of non-void function [-Wreturn-type]
   82 |  sensor_msgs::Image msg;
      |                     ^~~
[ 11%] Building CXX object ros_deep_learning/CMakeFiles/video_source.dir/src/image_converter.cpp.o
/ros_deep_learning/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘bool publish_overlay(detectNet::Detection*, int)’:
/ros_deep_learning/src/ros_deep_learning/src/node_detectnet.cpp:72:21: warning: control reaches end of non-void function [-Wreturn-type]
   72 |  sensor_msgs::Image msg;
      |                     ^~~
[ 13%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/image_converter.cpp.o
[ 15%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/image_converter.cpp.o
[ 16%] Building CXX object ros_deep_learning/CMakeFiles/segnet.dir/src/image_converter.cpp.o
[ 18%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/image_converter.cpp.o
[ 20%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/image_converter.cpp.o
[ 22%] Building CXX object ros_deep_learning/CMakeFiles/video_source.dir/src/ros_compat.cpp.o
[ 23%] Linking CXX shared library /ros_deep_learning/devel/lib/libros_deep_learning_nodelets.so
[ 25%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/ros_compat.cpp.o
[ 25%] Built target ros_deep_learning_nodelets
[ 25%] Built target _realsense2_camera_generate_messages_check_deps_DeviceInfo
[ 25%] Built target _realsense2_camera_generate_messages_check_deps_IMUInfo
[ 27%] Building CXX object ros_deep_learning/CMakeFiles/segnet.dir/src/ros_compat.cpp.o
[ 27%] Built target _realsense2_camera_generate_messages_check_deps_Extrinsics
[ 27%] Built target _realsense2_camera_generate_messages_check_deps_Metadata
[ 27%] Built target _catkin_empty_exported_target
[ 28%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/ros_compat.cpp.o
[ 30%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/ros_compat.cpp.o
[ 30%] Built target roscpp_generate_messages_cpp
[ 30%] Built target roscpp_generate_messages_eus
[ 30%] Built target roscpp_generate_messages_lisp
[ 32%] Linking CXX executable /ros_deep_learning/devel/lib/ros_deep_learning/video_source
[ 32%] Built target roscpp_generate_messages_nodejs
[ 32%] Built target roscpp_generate_messages_py
[ 32%] Built target rosgraph_msgs_generate_messages_cpp
[ 32%] Built target rosgraph_msgs_generate_messages_eus
[ 32%] Built target rosgraph_msgs_generate_messages_lisp
[ 32%] Built target rosgraph_msgs_generate_messages_nodejs
[ 32%] Built target rosgraph_msgs_generate_messages_py
[ 32%] Built target nav_msgs_generate_messages_cpp
[ 32%] Built target nav_msgs_generate_messages_eus
[ 32%] Built target nav_msgs_generate_messages_lisp
[ 32%] Built target nav_msgs_generate_messages_nodejs
[ 32%] Built target nav_msgs_generate_messages_py
[ 32%] Built target actionlib_msgs_generate_messages_cpp
[ 32%] Built target video_source
[ 32%] Built target actionlib_msgs_generate_messages_eus
[ 32%] Built target actionlib_msgs_generate_messages_lisp
[ 32%] Built target actionlib_msgs_generate_messages_py
[ 32%] Built target actionlib_msgs_generate_messages_nodejs
[ 32%] Built target std_srvs_generate_messages_eus
[ 32%] Built target std_srvs_generate_messages_cpp
[ 32%] Built target std_srvs_generate_messages_lisp
[ 32%] Built target std_srvs_generate_messages_nodejs
[ 32%] Built target std_srvs_generate_messages_py
[ 32%] Built target nodelet_generate_messages_eus
[ 32%] Built target nodelet_generate_messages_cpp
[ 32%] Built target nodelet_generate_messages_nodejs
[ 32%] Built target nodelet_generate_messages_lisp
[ 32%] Built target nodelet_generate_messages_py
[ 32%] Built target bond_generate_messages_cpp
[ 32%] Built target bond_generate_messages_eus
[ 32%] Built target bond_generate_messages_lisp
[ 32%] Built target bond_generate_messages_nodejs
[ 32%] Built target bond_generate_messages_py
[ 32%] Built target tf_generate_messages_cpp
[ 32%] Built target tf_generate_messages_eus
[ 32%] Built target tf_generate_messages_lisp
[ 32%] Built target tf_generate_messages_nodejs
[ 32%] Built target tf_generate_messages_py
[ 32%] Built target actionlib_generate_messages_cpp
[ 32%] Built target actionlib_generate_messages_eus
[ 32%] Built target actionlib_generate_messages_lisp
[ 32%] Built target actionlib_generate_messages_nodejs
[ 32%] Built target actionlib_generate_messages_py
[ 32%] Built target tf2_msgs_generate_messages_cpp
[ 32%] Built target tf2_msgs_generate_messages_eus
[ 32%] Built target tf2_msgs_generate_messages_lisp
[ 32%] Built target tf2_msgs_generate_messages_nodejs
[ 32%] Built target tf2_msgs_generate_messages_py
[ 32%] Built target dynamic_reconfigure_generate_messages_cpp
[ 32%] Built target dynamic_reconfigure_generate_messages_eus
[ 32%] Built target dynamic_reconfigure_generate_messages_lisp
[ 32%] Built target dynamic_reconfigure_generate_messages_nodejs
[ 32%] Built target dynamic_reconfigure_generate_messages_py
[ 32%] Built target dynamic_reconfigure_gencfg
[ 32%] Built target diagnostic_msgs_generate_messages_eus
[ 32%] Built target diagnostic_msgs_generate_messages_cpp
[ 32%] Built target diagnostic_msgs_generate_messages_lisp
[ 32%] Built target diagnostic_msgs_generate_messages_nodejs
[ 33%] Built target diagnostic_msgs_generate_messages_py
[ 33%] Linking CXX executable /ros_deep_learning/devel/lib/ros_deep_learning/video_output
[ 35%] Building CXX object vision_opencv/cv_bridge/src/CMakeFiles/cv_bridge.dir/cv_bridge.cpp.o
[ 37%] Building CXX object vision_opencv/cv_bridge/src/CMakeFiles/cv_bridge.dir/rgb_colors.cpp.o
[ 38%] Linking CXX executable /ros_deep_learning/devel/lib/ros_deep_learning/segnet
[ 40%] Linking CXX executable /ros_deep_learning/devel/lib/ros_deep_learning/detectnet
[ 40%] Built target video_output
[ 42%] Building CXX object vision_opencv/image_geometry/CMakeFiles/image_geometry.dir/src/pinhole_camera_model.cpp.o
[ 42%] Built target segnet
[ 44%] Generating Python from MSG realsense2_camera/IMUInfo
[ 44%] Built target detectnet
[ 45%] Generating C++ code from realsense2_camera/IMUInfo.msg
[ 47%] Generating Python from MSG realsense2_camera/Extrinsics
[ 49%] Generating C++ code from realsense2_camera/Extrinsics.msg
[ 50%] Generating C++ code from realsense2_camera/Metadata.msg
[ 52%] Generating Python from MSG realsense2_camera/Metadata
[ 54%] Linking CXX executable /ros_deep_learning/devel/lib/ros_deep_learning/imagenet
[ 55%] Generating C++ code from realsense2_camera/DeviceInfo.srv
[ 57%] Generating Python code from SRV realsense2_camera/DeviceInfo
[ 59%] Generating EusLisp code from realsense2_camera/IMUInfo.msg
[ 61%] Generating Python msg __init__.py for realsense2_camera
[ 61%] Built target realsense2_camera_generate_messages_cpp
[ 62%] Generating EusLisp code from realsense2_camera/Extrinsics.msg
[ 64%] Building CXX object vision_opencv/image_geometry/CMakeFiles/image_geometry.dir/src/stereo_camera_model.cpp.o
[ 66%] Generating Python srv __init__.py for realsense2_camera
[ 67%] Generating EusLisp code from realsense2_camera/Metadata.msg
[ 67%] Built target imagenet
[ 69%] Generating Lisp code from realsense2_camera/IMUInfo.msg
[ 69%] Built target realsense2_camera_generate_messages_py
[ 71%] Generating Lisp code from realsense2_camera/Extrinsics.msg
[ 72%] Generating EusLisp code from realsense2_camera/DeviceInfo.srv
[ 74%] Generating Lisp code from realsense2_camera/Metadata.msg
[ 76%] Generating Lisp code from realsense2_camera/DeviceInfo.srv
[ 77%] Generating EusLisp manifest code for realsense2_camera
[ 79%] Generating Javascript code from realsense2_camera/IMUInfo.msg
[ 79%] Built target realsense2_camera_generate_messages_lisp
[ 81%] Generating Javascript code from realsense2_camera/Extrinsics.msg
[ 83%] Generating Javascript code from realsense2_camera/Metadata.msg
[ 84%] Generating Javascript code from realsense2_camera/DeviceInfo.srv
[ 84%] Built target realsense2_camera_generate_messages_nodejs
[ 84%] Built target realsense2_camera_generate_messages_eus
[ 84%] Built target realsense2_camera_generate_messages
[ 86%] Linking CXX shared library /ros_deep_learning/devel/lib/libimage_geometry.so
[ 86%] Built target image_geometry
[ 88%] Linking CXX shared library /ros_deep_learning/devel/lib/libcv_bridge.so
[ 88%] Built target cv_bridge
[ 91%] Building CXX object vision_opencv/cv_bridge/src/CMakeFiles/cv_bridge_boost.dir/module_opencv4.cpp.o
[ 91%] Building CXX object vision_opencv/cv_bridge/src/CMakeFiles/cv_bridge_boost.dir/module.cpp.o
[ 93%] Building CXX object realsense-ros/realsense2_camera/CMakeFiles/realsense2_camera.dir/src/realsense_node_factory.cpp.o
[ 94%] Building CXX object realsense-ros/realsense2_camera/CMakeFiles/realsense2_camera.dir/src/t265_realsense_node.cpp.o
[ 96%] Building CXX object realsense-ros/realsense2_camera/CMakeFiles/realsense2_camera.dir/src/base_realsense_node.cpp.o
[ 98%] Linking CXX shared library /ros_deep_learning/devel/lib/python3/dist-packages/cv_bridge/boost/cv_bridge_boost.so
[ 98%] Built target cv_bridge_boost
[100%] Linking CXX shared library /ros_deep_learning/devel/lib/librealsense2_camera.so
[100%] Built target realsense2_camera

I suspect these warnings are behind the memory corruption error: in C++, letting control flow off the end of a non-void function is undefined behavior, so the caller reads an indeterminate return value and the program is free to misbehave afterwards, including crashing with heap corruption like the double free above.

Functions Identified:

  1. publish_overlay(uint32_t, uint32_t)
  2. publish_mask_color(uint32_t, uint32_t)
  3. publish_mask_class(uint32_t, uint32_t)
  4. publish_overlay(int, float)
  5. publish_overlay(detectNet::Detection*, int)

How I fixed the warnings

I updated all the affected functions by adding a return true; statement at the end of each function to ensure they return a value.
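
For reference, here is a minimal sketch of the kind of change I made, using publish_overlay from node_segnet.cpp as an example. The body is simplified and partly hypothetical; only the bool signature reported by the compiler and the added return true; correspond to the actual edit:

#include <sensor_msgs/Image.h>
#include <cstdint>

// node_segnet.cpp -- simplified sketch, not the full function body
bool publish_overlay( uint32_t width, uint32_t height )
{
    // ... resize the overlay buffer and run the network overlay
    //     (summary of the existing body, omitted here) ...

    // the declaration the compiler warning points at
    sensor_msgs::Image msg;

    // ... convert the CUDA overlay image into msg and publish it ...

    return true;   // added: previously control fell off the end of the function,
                   // which is undefined behavior for a non-void function in C++
}

A stricter version could also return false when the conversion or publish step fails, but adding return true; at the end of the success path is the minimal change needed to remove the undefined behavior the warning points at.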

Concerns:

I am not sure whether this change could break anything else in the codebase, but after making these modifications I can launch the nodes and run real-time inference without any issues.

dusty-nv (Owner) commented

Aha ok, thank you @YL-Tan. I had always wondered about those warnings too, but just thought it was some ROS thing, and I did not experience the double-free or memory corruption issue that you encountered. If you were to submit a PR for these changes I would be happy to merge it, thank you 👍

YL-Tan (Author) commented Aug 27, 2024

Hi @dusty-nv, my pleasure! I've submitted my PR (#141). Please review it when you have time.

dusty-nv (Owner) commented

Thanks @YL-Tan, much appreciated - just merged it 👍
