
Accessing the Gen3 color and depth streams #46

Closed
alexvannobel opened this issue Oct 9, 2019 · 10 comments
Labels
enhancement New feature or request

Comments

@alexvannobel
Contributor

The ros_kortex_vision repository contains the code to access the color and depth streams.
It will eventually be brought into the ros_kortex repository, but in the meantime you have to clone the ros_kortex_vision repo to access the streams. You will find the installation instructions and examples in that repo.

@RaduCorcodel
Contributor

The vision module works fine on my end: I get the point cloud, the registered cloud, and the color image. One thing, though: the xacro in kortex_description does not contain a fixed joint between camera_link and bracelet_link. I added a joint myself so that the 3D point cloud and the planning scene can coexist in the same RViz window, but I don't know the exact position and orientation between these two links. Could we update the robot description to contain camera_link? Also, do the frames camera_color_frame and camera_depth_frame need to be included in the robot description as well?
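For reference, a minimal sketch of the kind of fixed joint described above (the joint name and origin values below are placeholders, not the real calibrated camera offsets):

```xml
<!-- Hypothetical xacro fragment: attaches camera_link to bracelet_link
     with a fixed joint. The xyz/rpy values are placeholders only;
     substitute the actual offsets from the Gen3 documentation. -->
<link name="camera_link"/>

<joint name="camera_joint" type="fixed">
  <parent link="bracelet_link"/>
  <child link="camera_link"/>
  <!-- placeholder offsets, not calibrated values -->
  <origin xyz="0 0 0" rpy="0 0 0"/>
</joint>
```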

@alexvannobel
Contributor Author

Hi @RaduCorcodel ,

I asked a couple of people about those dimensions, and it seems they are documented in the Gen3 arm user guide on page 145. We will be adding frames for the color and depth streams to the kortex_description URDF for the Gen3, but for now you can use the values from the User Guide for the fixed joint you created in your URDF. That should do it for now.
As I said, we are going to add this to the repo pretty soon.

Hope this helps,
Best,
Alex

@marlow-fawn

Is there currently a way to access the color and depth streams in simulation?

@huiwenzhang

Hi, has any progress been made on this thread? The camera-related link frames are still missing. Also, how can I access the color and depth streams in simulation, as @marlow-fawn asked? Can I use the commonly used Kinect Gazebo plugin to retrieve the vision information? (I can't find ROS plugins for other vision sensors, such as the RealSense.)
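For the simulation question above, one common approach in ROS 1 is the depth-camera plugin from gazebo_plugins (`libgazebo_ros_openni_kinect.so`), attached to a link in the URDF. A minimal sketch follows; the link name, topic names, and sensor parameters here are assumptions for illustration, not part of the kortex description:

```xml
<!-- Hypothetical Gazebo sensor block; attach to an existing link
     (assumed here to be camera_link). Topic names and FOV are
     illustrative values, not Gen3 camera specifications. -->
<gazebo reference="camera_link">
  <sensor type="depth" name="gen3_sim_camera">
    <update_rate>30.0</update_rate>
    <camera>
      <horizontal_fov>1.21</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>8.0</far>
      </clip>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>camera</cameraName>
      <imageTopicName>color/image_raw</imageTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <frameName>camera_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```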

@polthergeist

Hi @huiwenzhang @marlow-fawn @RaduCorcodel,
There is nothing new regarding this topic for now. Because of the COVID-19 situation, most of the development process has slowed down a lot, and I'm very sorry about that. As soon as the situation is resolved, I will keep you posted.

@RaduCorcodel
Contributor

Hi @huiwenzhang,

I opened a pull request that adds the frames needed for the vision module. You can find my repo here and switch to the 'vision_fix' branch. Clone my repo (don't forget to switch to the vision_fix branch), then clone the vision module into the same catkin workspace and build normally. To get the depth/color streams, launch the driver, then in a different terminal launch the vision module: `roslaunch kinova_vision kinova_vision.launch device:=192.168.0.1` (see the vision repo for the list of parameters, and of course change the IP to your robot's IP address).
Hope it helps.

Radu
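The workflow above can be sketched as shell commands. The fork URL is a placeholder (use the repository Radu linked), and the workspace path and the ros_kortex_vision URL are assumptions:

```shell
# Sketch of the workspace setup described above; <radu-fork> is a
# placeholder -- use the repository linked in Radu's comment.
cd ~/catkin_ws/src

# Clone the fork and switch to the vision_fix branch
git clone https://github.com/<radu-fork>/ros_kortex.git
cd ros_kortex && git checkout vision_fix && cd ..

# Clone the vision module into the same workspace
git clone https://github.com/Kinovarobotics/ros_kortex_vision.git

# Build and source the workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash

# Terminal 1: launch the arm driver as usual.
# Terminal 2: launch the vision module (replace the IP with your robot's).
roslaunch kinova_vision kinova_vision.launch device:=192.168.0.1
```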

@huiwenzhang

@RaduCorcodel Great, I saw your code and it really helps. But I don't have a real robot on hand for now, and the purchasing process may take a few months. So I am working with the simulation and trying to figure out a way to use the vision information.

Alvin

@alexvannobel
Contributor Author

Hi @huiwenzhang ,

As Radu pointed out, he made a PR (#82) to add the vision frames to the kortex_description package. I will approve it as soon as I get my hands on a real robot, since we are currently working from home.

We didn't implement the Intel RealSense simulation in our Gen3 simulation, but if you want, you can take a look at the Intel RealSense Gazebo repository. You could probably get simulated images that way.

We don't plan on adding this to the Gen3 simulation in the short to mid term, but if this changes, I'll make sure to update this issue.

Best,

Alex

@2000222

2000222 commented Feb 21, 2022

> Is there currently a way to access the color and depth streams in simulation?

Is there any simulation available for the ros_kortex_vision package now? Thank you.

@felixmaisonneuve
Contributor

Hi @2000222,

No, there is still no implementation for the Intel Realsense simulation.

Like Alex said, you could look at the Intel RealSense Gazebo repository to get simulated images.

Best,
Felix
