
Help with oakd #6

Open
SahilJain8 opened this issue Aug 9, 2023 · 26 comments

Comments

@SahilJain8
Contributor

Greetings everyone,
I am using this repo with ROS Humble, and the camera I am using is an OAK-D. The problem is that I am not receiving any data and the output folders are empty. When I try to log the triangle mesh, it shows 0 vertices. Any help would be appreciated.

@marrts
Collaborator

marrts commented Aug 9, 2023

I personally haven't run this on humble, but I expect it should work without issue. I also have not used that particular sensor, but from a quick search it seems like it should work.

A few tips to help debug your issue:

  • Turn on the point cloud in RVIZ and make sure the data is showing up and is where you expect it to be
  • Ensure both your color and depth images are being published on their expected topics
  • Ensure your color and depth images are aligned. They should have the same values in camera_info
  • Make sure you see the "Start Reconstruction" printout when you are triggering the start of the reconstruction process and "Stop Reconstruction" when triggering the end of the reconstruction
  • When making the start request, set the min and max box values in the TSDF volume to all zeros. This will prevent any cropping of the mesh from occurring. For example:
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "
 ...
tsdf_params:
  ...
  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}
...
"
  • Ensure you aren't getting TF lookup issues printing to your terminal, or if you are, that it's only a few every now and then and not a constant stream
  • You can try increasing the slop parameter when launching industrial_reconstruction to make sure you are properly triggering the callback that expects both your color and depth images.
  • If you see "Error converting ros msg to cv img", then you may have an issue with image types, which I believe some other people have seen before. You might need to implement a fix here
  • Make sure you are entering the cameraCallback method by attaching a debugger or adding debugging print statements. If you are never entering this callback, then you won't get any data added to your mesh
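The slop tip above can be illustrated outside ROS. An approximate-time synchronizer (as in message_filters) only fires the combined callback when a color stamp and a depth stamp fall within slop seconds of each other; if slop is too small for your camera, no pairs ever form and cameraCallback never runs. A plain-Python stand-in, for intuition only (not the actual node code):

```python
# Stand-in for approximate time synchronization: pair each color stamp
# with the nearest unused depth stamp, but only if they differ by less
# than `slop` seconds. No pairs means the camera callback never fires.

def pair_messages(color_stamps, depth_stamps, slop=0.05):
    pairs = []
    remaining = list(depth_stamps)
    for c in color_stamps:
        best = min(remaining, key=lambda d: abs(d - c), default=None)
        if best is not None and abs(best - c) <= slop:
            pairs.append((c, best))
            remaining.remove(best)
    return pairs

# With slop=0.05 only the first color/depth pair matches; raising slop
# to 0.15 also pairs the second set of stamps.
```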

@SahilJain8
Contributor Author

(screenshot attached)

@SahilJain8
Contributor Author

I just checked: the camera callback is not called. Could you please tell me where to call the function?

@marrts
Collaborator

marrts commented Aug 10, 2023

It should automatically get called when images are received. Ensure that the /camera/depth_image/raw and /camera/color_image/raw topics are being published.

@SahilJain8
Contributor Author

I have not made any changes to the code. I am running it exactly the way it's described in the repo: first the ros2 launch, then start reconstruction, and then stop reconstruction.

@SahilJain8
Contributor Author

And one more thing: in the repo's README, why is it just launch? It should be launch.xml, right?

@marrts
Collaborator

marrts commented Aug 10, 2023

And one more thing: in the repo's README, why is it just launch? It should be launch.xml, right?

You are correct, that is a mistake.

I have not made any changes to the code. I am running it exactly the way it's described in the repo: first the ros2 launch, then start reconstruction, and then stop reconstruction.

You might have to modify the service calls slightly for your application; the examples provided are intended as a template to follow, not explicit instructions to run verbatim. If your camera driver publishes the color and depth images on different topics, you need to ensure you are specifying the correct topic names. Additionally, you need to make sure to give the correct tracking and relative frame names for your robot setup.

I would first advise you to ensure you can view the camera images as ROS topics in RVIZ or RQT.

@SahilJain8
Contributor Author

So I need to change the start reconstruction call? I am new to ROS, but I am good with Open3D and Python.

@marrts
Collaborator

marrts commented Aug 10, 2023

So I need to change the start reconstruction call? I am new to ROS, but I am good with Open3D and Python.

Yes, change it to fit your application. Also the robot and camera drivers must be running and publishing information. These instructions assume that you already have robot and camera data publishing in your ROS 2 environment. It looks like a driver for your camera exists here. You might find some useful information for setting things up there.

@SahilJain8
Contributor Author

Wait, so I need to run a robot? I am working with a UR10e, so it's not just the camera but even the robot that has to be running? Could you please explain this to me?

@marrts
Collaborator

marrts commented Aug 10, 2023

Wait, so I need to run a robot? I am working with a UR10e, so it's not just the camera but even the robot that has to be running? Could you please explain this to me?

The robot needs to be publishing joint states in your ROS environment, so you need to connect over ethernet to the UR controller while the robot is on.

This application assumes you have a camera mounted at the end of a robot and that the robot is moving through a trajectory while the camera is collecting data. The Python code uses the camera's estimated position in conjunction with the depth and color images to stitch together a mesh using TSDF. For it to have access to all of that information, you need data from both the robot and the camera, and a URDF defined with the camera mounted in the appropriate location at the end of the robot.
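For intuition about what TSDF fusion does with each incoming frame: per voxel, it keeps a truncated signed distance to the surface as a running weighted average over frames. A toy version of that update rule (illustrative only; the node relies on Open3D for the actual integration, and parameter values here are made up):

```python
# Toy per-voxel TSDF update. Each depth observation contributes a signed
# distance, truncated to +/- trunc, fused into a weighted running average.

def fuse(tsdf, weight, d_obs, trunc=0.04, w_max=100.0):
    d = max(-trunc, min(trunc, d_obs))     # truncate the signed distance
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, w_max)  # cap so stale frames can be overridden
    return new_tsdf, new_weight
```

This is why a moving, tracked camera matters: each frame's distances are expressed via the camera pose, and consistent observations from many poses average into a clean surface while noise cancels out.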

You can find the ROS2 UR driver here.

If you are completely new to ROS, it might be worth going through our ROS-Industrial Training, which walks you through creating a workspace and setting up URDFs. There's specifically an example where we walk through adding a UR robot to your URDF.

@SahilJain8
Contributor Author

Thank you so much!

@SahilJain8
Contributor Author

  1. We have tried running the OAK-D in a separate terminal, where we published the OAK-D topics, and in another terminal we ran this repo. While running, we even specified the topics in the args, but still no progress.
  2. We tried the same approach in Foxy.
  3. Instead of a single camera, can we use multiple cameras?
  4. Is it necessary to place the camera on the robot?

@marrts
Collaborator

marrts commented Aug 11, 2023

We have tried running the OAK-D in a separate terminal, where we published the OAK-D topics, and in another terminal we ran this repo. While running, we even specified the topics in the args, but still no progress.

If the topics are being published and you run everything correctly, you should at least see something print out. You'll need a robot to get it fully working, though. Verify that you can visualize the camera topics with RVIZ or RQT.

Instead of a single camera can we use multiple cameras

You only need one camera.

Is it necessary to place the camera on robot

Yes, the camera needs to be defined in some frame that is moving.

@SahilJain8
Contributor Author

  1. With which camera was this tested?
  2. from sensor_msgs.msg import Image, CameraInfo — where does this package come from?

@marrts
Collaborator

marrts commented Aug 17, 2023

  1. With which camera was this tested?

This was primarily tested with Intel RealSense cameras, but I believe others have been successful with other cameras.

  1. from sensor_msgs.msg import Image, CameraInfo — where does this package come from?

The Image and CameraInfo msg types are standard ROS message types. If you are having trouble accessing them, then either:

  1. You have not sourced ROS properly: source /opt/ros/{your-ros2-version}/setup.bash
  2. You have not built your workspace properly. From the base of your workspace, run colcon build --symlink-install after sourcing ROS as described above. Then source your workspace: source {path-to-your-workspace}/install/setup.bash. Then run this node.
  3. It is possible you only have a barebones version of ROS2 installed that doesn't include the sensor_msgs package. Install it with sudo apt install ros-{your-ros2-version}-sensor-msgs

@SahilJain8
Contributor Author

After following your steps, I got this:

[industrial_reconstruction-1] 2023-08-18 10:28:14.439 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7414: open_and_lock_file failed -> Function open_port_internal
[industrial_reconstruction-1] 2023-08-18 10:28:14.439 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7415: open_and_lock_file failed -> Function open_port_internal

@SahilJain8
Contributor Author

For RVIZ we are using depthai_examples, and in RVIZ we are able to view the depth properly.
https://github.com/luxonis/depthai-ros/blob/humble/depthai_examples/launch/stereo.launch.py is the file we are using to display in RVIZ, and that works. But we followed all the steps you provided and there is still no update; I am not sure what the problem is. The RVIZ file was running while we were using your code.

@SahilJain8
Contributor Author

Failed to get transform: "world" passed to lookupTransform argument target_frame does not exist. I was able to publish the data to the cameraCallback function.

@SahilJain8
Contributor Author

After completing this, I am willing to contribute to this repo by making a branch for the OAK-D.

@marrts
Collaborator

marrts commented Aug 22, 2023

Failed to get transform: "world" passed to lookupTransform argument target_frame does not exist. I was able to publish the data to the cameraCallback function.

Sounds like potentially the world frame doesn't exist in your environment. Ensure in RVIZ you can see both the target frame and the reference frame in the TF display.
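To build intuition for that error: lookupTransform can only succeed if both frames hang off one connected TF tree, i.e. every link between them is being published. A dependency-free sketch of that check (the frame names used in the example are illustrative, not necessarily your robot's):

```python
# Toy model of a TF tree as a child-frame -> parent-frame mapping.
# A transform between two frames exists only if they share an ancestor,
# which fails whenever a link in the chain is not being published.

def has_transform(tree, target, source):
    def root_path(frame):
        path = [frame]
        while frame in tree:
            frame = tree[frame]
            path.append(frame)
        return path
    # Connected iff the two paths to their roots share a frame.
    return bool(set(root_path(target)) & set(root_path(source)))
```

If "world" never appears as a parent anywhere in the published tree, the lookup fails exactly as in the error above, no matter how good the camera data is.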

@SahilJain8
Contributor Author

So we were able to generate the mesh; we fixed the issue we were facing, and the data is getting saved in the archive folder. But when we try to access the archive data in RVIZ, as soon as I run ros2 service call /start_publishing std_srvs/srv/Trigger, RVIZ unexpectedly stops. Could you please help with this?

@marrts
Collaborator

marrts commented Aug 24, 2023

There could be an issue with that RVIZ config file; maybe something changed, or maybe it improperly references something. You should be able to launch RVIZ separately and just add the display fields manually. The rviz parameter that loads that config file was intended purely for convenience; it is not required.

@SahilJain8
Contributor Author

Greetings,

  1. I am trying to generate a mesh of an object, but the mesh surface is very uneven. We want to use it for path planning, so it should be flat, but it contains a lot of noise.
  2. I want to make a second PR to add different types of mesh algorithms, like alpha shapes, ball pivoting, and Poisson surface reconstruction.

@marrts
Collaborator

marrts commented Sep 5, 2023

  1. I am trying to generate a mesh of an object, but the mesh surface is very uneven. We want to use it for path planning, so it should be flat, but it contains a lot of noise.

Look at discussions on #1 for how to improve mesh reconstruction results.

  1. I want to make a second PR to add different types of mesh algorithms, like alpha shapes, ball pivoting, and Poisson surface reconstruction.

I think access to more mesh processing algorithms would be good. I don't want to make the call to the reconstruction process more complicated, though. Perhaps additional services could be created specifically for modifying the mesh, or some other clean implementation that allows extra processing without adding complexity to the core process.
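One way to keep the reconstruction call unchanged, in the spirit of the suggestion above, is a separate processing step that dispatches on an algorithm name. A hypothetical sketch (the registry entries below are stubs; a real implementation would call Open3D's alpha shape, ball pivoting, and Poisson reconstruction on the extracted point cloud):

```python
# Hypothetical dispatch for a standalone mesh-processing step, so the
# start/stop reconstruction services stay simple. Stub functions stand
# in for the real reconstruction algorithms.

def process_cloud(registry, name, cloud):
    if name not in registry:
        raise ValueError(f"unknown algorithm {name!r}; options: {sorted(registry)}")
    return registry[name](cloud)

# Stub registry; real entries would run the Open3D algorithms and save
# each result under that algorithm's folder.
registry = {
    "alpha_shapes": lambda cloud: ("alpha_shapes", len(cloud)),
    "ball_pivoting": lambda cloud: ("ball_pivoting", len(cloud)),
    "poisson": lambda cloud: ("poisson", len(cloud)),
}
```

Each algorithm could then map onto its own small service (or a single service taking the algorithm name), leaving the existing start/stop interface untouched.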

@SahilJain8
Contributor Author

  • The way I am planning to add algorithms: with TSDF I can extract the point cloud, then the remaining process is much the same, and I can save each mesh in its algorithm's respective folder.
  • Another idea is to print the number of mesh points after each reconstruction, as it helps the user determine how many points are required to fully generate a mesh, and it even acts as an indicator that the function is receiving the images properly and is able to convert them into a mesh.
