Depth and color not aligning properly #1
I have the same problem.
I think the problem is in the transformation between the color and depth sensors. You can find this transform here. This is how the line should look with a properly calibrated transform. I intend to validate this solution, but I am not a ROS ninja, so bear with me; it will take some time on my end to set up my environment for ROS development. I did, however, use this exact transform in my other C++ vision projects (not ROS-based). Now, if you want to achieve even better color-to-depth alignment, the exact camera intrinsic parameters and extrinsic transformation have to be calibrated on a case-by-case basis using the following procedure:
References:
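To make the role of that extrinsic transform concrete, here is a minimal NumPy sketch of how a depth pixel is typically reprojected into the color frame. The intrinsic matrices and the extrinsic translation below are placeholders for illustration, not the actual Kinova calibration values.

```python
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) -- NOT the real Kinova values.
K_depth = np.array([[360.0,   0.0, 240.0],
                    [  0.0, 360.0, 135.0],
                    [  0.0,   0.0,   1.0]])
K_color = np.array([[640.0,   0.0, 640.0],
                    [  0.0, 640.0, 360.0],
                    [  0.0,   0.0,   1.0]])

# Placeholder depth-to-color extrinsic: identity rotation plus a small
# translation along x (metres), standing in for the calibrated transform.
R = np.eye(3)
t = np.array([-0.027, 0.0, 0.0])

def register_depth_pixel(u, v, z):
    """Reproject one depth pixel (u, v) at depth z into color pixel coords."""
    # Back-project the pixel to a 3D point in the depth camera frame.
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Move the point into the color camera frame via the extrinsic.
    p_color = R @ p_depth + t
    # Project the point into the color image.
    uvw = K_color @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# The depth principal point at 1 m lands left of the color principal point
# because of the x translation between the sensors.
u_c, v_c = register_depth_pixel(240.0, 135.0, 1.0)
print(round(u_c, 2), round(v_c, 2))
```

A wrong translation in the extrinsic shifts every reprojected pixel by roughly `fx * dx / z`, which is exactly the sideways "spill" described later in this thread.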
Hi @VitaliyKhomko, I have changed the transformation between the color and depth sensors as you suggested, but the change doesn't work. No matter how I change the args, the result remains the same. This problem has been bothering me for a long time. What else can I try? Thanks.
Hi @JingKangSun, have you tried the following suggestion?
Best,
Hello @VitaliyKhomko, I've been experiencing the same issue and am trying to follow your suggestion. However, I have a few questions:
Also, here are photos showing what the color-to-depth alignment looks like. This is what the scene looks like: note that the checkerboard is on top of a box, so with proper depth-to-color alignment we would expect the checkerboard to sit on top of a "mesa" in the point cloud. Using the default color-to-depth transform found in kinova_vision.launch and kinova_vision_rgbd.launch:
Using the transform @VitaliyKhomko suggested:
In both cases, the checkerboard is not centered on its corresponding "mesa" in the depth image and spills over the side. Thanks in advance for the help!
Hi @FalseFeint, I wanted to let you know that I am currently working on a solution for this. I am designing a program that will let you calibrate the cameras by trying different offset values for the streams, then optimizing the match between the color and depth streams using edge detection. I believe this method will be the most accurate for the least amount of work.
Hi @FalseFeint, it has been a while, but I want to assure you I am working on the issue. I am currently working from home because I got COVID, but I am going back to the office on Monday. I have made a lot of progress since I switched to using OpenCV in Python instead of C++; C++ was causing a lot of issues, which is why this took longer than expected.

My program requires a bit of setup. It needs a rectangular shape that you can hold at a certain distance from the background, and it works much better if the background is darker than the rectangle. For the setup I am using at home, I simply pointed a bright lamp at a Post-it note stuck on the end of a ruler, so the setup really doesn't need to be fancy at all; you can do it with pretty much anything.

I am currently able to find a rectangle in a picture, find its bounding box, and get the positions of the corners. I have made this work with the webcam on my laptop, and I also see it is possible to convert a ROS topic into a video capture for OpenCV, which is what I will be doing on Monday. I have written the logic to figure out the offset between two bounding boxes (color and depth); all that is left is to control the camera offsets from the kinova_vision driver.
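The bounding-box offset step described above can be sketched in a few lines. The author's tool uses OpenCV; the version below uses plain NumPy on synthetic boolean masks so the offset logic itself is self-contained (the mask shapes and the 5 px / 2 px shift are invented for illustration).

```python
import numpy as np

def bounding_box(mask):
    """Return (x_min, y_min, x_max, y_max) of the True pixels in a 2D mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def box_offset(box_a, box_b):
    """Offset (dx, dy) between the centers of two bounding boxes."""
    cx_a = (box_a[0] + box_a[2]) / 2.0
    cy_a = (box_a[1] + box_a[3]) / 2.0
    cx_b = (box_b[0] + box_b[2]) / 2.0
    cy_b = (box_b[1] + box_b[3]) / 2.0
    return cx_b - cx_a, cy_b - cy_a

# Synthetic "color" and "depth" detections of the same rectangle,
# with the depth copy shifted 5 px right and 2 px down.
color = np.zeros((100, 100), dtype=bool)
color[20:40, 30:60] = True
depth = np.zeros((100, 100), dtype=bool)
depth[22:42, 35:65] = True

dx, dy = box_offset(bounding_box(color), bounding_box(depth))
print(dx, dy)  # the correction to apply between the two streams
```

In the real tool, the masks would come from thresholding or edge detection on the color and depth images before this offset computation.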
It's finally here!
Hi again, I have made an update that makes configuring the streams a lot easier. Since I haven't had the opportunity to install it on a new machine, I don't know if the installation steps are accurate; if you run into issues, you can reply here. If you have any ideas on what should be improved, let me know.
Unfortunately, even after using Iliana's code we are still experiencing significant misalignment. @VitaliyKhomko @felixmaisonneuve would it be possible to set up a meeting to discuss this? @tfrasca @heiyumiao @JingKangSun were you able to resolve this and, if so, how?
Hi. I know that Kinect Fusion does this quite well, but from what I know, there is very little information on how the color texturing is done, and it is the result of over 10 years of research.
Hello, I didn't have time to test it properly with a checkerboard or similar. My use case is to combine the point cloud from the Kinova camera with a point cloud taken from another RGBD camera (fixed on a table, properly calibrated). With the parameters from the Kinova xacro, the point clouds align properly from any position of the robot (and its camera). So if anybody still has the same problem, this is worth a try.
When I visualize the point cloud from the RealSense in RViz, the color and depth images are not aligned properly. The color image appears shifted to the right, and I don't know how to correct this misalignment.
I'm running the kortex_vision_rgbd.launch file to start the RealSense on the Kortex arm; it is currently using the 1280x720 calibration file for the color stream and the 480x270 calibration file for the depth stream. I looked through the launch files and couldn't find any information about configuring depth registration. How can I properly align the color and depth images?
ROS version: Melodic
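One avenue worth noting (not confirmed by the kinova_vision maintainers): on ROS Melodic, depth registration is usually done with the stock `depth_image_proc/register` nodelet, which reprojects the depth image into the color frame using both `camera_info` topics and the TF between the optical frames. A hedged launch sketch, where the remapped topic names are assumptions that must be matched to what the driver actually publishes:

```xml
<launch>
  <!-- Nodelet manager; the name is arbitrary. -->
  <node pkg="nodelet" type="nodelet" name="register_manager" args="manager" />

  <!-- Reproject depth into the color camera frame.
       Topic names below are placeholders; remap to your driver's topics. -->
  <node pkg="nodelet" type="nodelet" name="register_depth"
        args="load depth_image_proc/register register_manager">
    <remap from="rgb/camera_info"   to="/camera/color/camera_info" />
    <remap from="depth/camera_info" to="/camera/depth/camera_info" />
    <remap from="depth/image_rect"  to="/camera/depth/image_raw" />
  </node>
</launch>
```

Note that the registration is only as good as the TF between the two optical frames, which is exactly the color-to-depth extrinsic debated earlier in this thread.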