
Kinect 2 calibration? #49

Open
tlpmit opened this issue Aug 31, 2018 · 11 comments

Comments

@tlpmit

tlpmit commented Aug 31, 2018

We have only recently started to use a Movo and have been having some problems with the Kinect2. Kinova does a calibration before shipping, but we're getting substantial errors. Before undertaking random acts of calibration, we wanted to see if someone can report that they get good behavior. That way we could rule out other sources for the problems.

If you look at the attached images you see one of the arms (with a Robotiq gripper) holding a ruler up to a block. You can see in the photograph that the left edge of the "actual" finger is aligned with the left edge of the ruler and the right edge of the block. In the Rviz picture you can see that the “simulated” finger is almost at the right edge of the ruler. It's also off in the approach distance but that's harder to judge from the picture.

The Rviz picture is showing the topic /kinect2/sd/points and the robot model. It’s being displayed relative to the robot base frame. So, this should simply be showing the Kinect point cloud and the robot model placed at whatever angles are being read as the current position.
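For reference, the transform RViz uses to place the cloud in the base frame can be printed directly; here is a minimal sketch (the frame names are guesses and may differ on your MOVO):

```python
import rospy
import tf2_ros

# Minimal check of the transform used to place the Kinect cloud in the base
# frame. The frame names below are assumptions; check the TF tree on the
# robot (e.g. `rosrun tf view_frames`) for the exact names.
rospy.init_node('check_kinect_extrinsics')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

t = buf.lookup_transform('base_link', 'kinect2_rgb_optical_frame',
                         rospy.Time(0), rospy.Duration(5.0))
print(t.transform.translation)
print(t.transform.rotation)
```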

As far as I understand it (but I may be missing something), this relatively large offset can only come from the following sources:

  • The encoders are reporting (very) incorrect joint angles
  • The actual geometry of this robot does not match the geometry in the URDF
  • The Kinect is (very) uncalibrated

Is your Kinect data reasonably aligned with the world? Note that I don't mind noise - 2 cm systematic offsets are another thing. Any suggestions?

Thanks,
Tomas Lozano-Perez

screenshot from 2018-08-30 10 09 30
img_1521

@JoshSong

JoshSong commented Sep 1, 2018

We (University of Queensland) have had the same issue. I have tried re-calibration with the checkerboard, but that did not help much. We have an OptiTrack system in our lab, so I did a small experiment by tracking the pose of a cylinder with it and comparing that to the pose from Movo's object detection. A plot is shown here with the Kinect pose in red and OptiTrack in blue; the X axis is forward distance from the center of the Movo.

capture2

We then hard-coded a (-3, 1.3) cm offset into our object detections, which mostly seemed to fix the problem at 0.8 m, which is usually the distance we do grasping at. (Although we don't use any offset when using our experimental POMDP planner.)
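For reference, the workaround is literally just a constant shift applied to each detection; a minimal sketch, assuming the detection is already expressed in the base frame and that the two offsets are along x and y (hypothetical helper, not our actual code):

```python
import copy

# Hand-tuned offsets in metres, found by comparing against OptiTrack at
# roughly 0.8 m; this is a workaround for one setup, not a calibration.
X_OFFSET = -0.030
Y_OFFSET = 0.013

def correct_detection(pose_stamped):
    """Shift an object pose detected from the Kinect by a constant offset.

    Assumes `pose_stamped` is a geometry_msgs/PoseStamped already expressed
    in the robot base frame.
    """
    corrected = copy.deepcopy(pose_stamped)
    corrected.pose.position.x += X_OFFSET
    corrected.pose.position.y += Y_OFFSET
    return corrected
```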

capture

So it seems like the Kinect loses accuracy the farther away the object is, but I feel like there could still be an error in the URDF.

@tlpmit
Author

tlpmit commented Sep 2, 2018

Has anyone succeeded in getting grasping based on detections from Kinect data to work? I would imagine that with this level of systematic error, it is not likely to work.

If you have gotten it to work, I would love to hear your positive experience.

@hannakur

hannakur commented Sep 2, 2018

We have done a bit of grasping where Kinect data is used to generate the initial positions of the objects (a video is at: https://www.dropbox.com/s/930zfqmyhgyoy9c/demoAtICRA-UQ.m4v?dl=0, 0:28-0:50). But we don't use visual servoing.

For grasping a cup from the table, we use the Kinect to scan the initial scene and detect the table and cup, but we then assume substantial error on the position of the cup. We use our POMDP solver to find a strategy for grasping, with the observation being whether the grasp has been successful or not, based on the finger encoders. The policy we got includes a small pushing motion (0:42-0:50 in the video) to help reduce uncertainty in the cup position. There are a number of failure cases we can't recover from at the moment (e.g., when the cup falls, clutter, etc.), but for the particular scenario we ran, the above strategy gave ~98% success (out of ~200 runs over a 7-day, 10 h/day demo).
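The grasp success/failure observation itself is simple in spirit; a rough sketch of the idea (the joint names and thresholds below are made up and would have to be read off the real gripper):

```python
import rospy
from sensor_msgs.msg import JointState

# Hypothetical finger joint names and positions -- the real names and the
# "closed on nothing" position must be checked on the actual gripper.
FINGER_JOINTS = ['gripper_finger1_joint', 'gripper_finger2_joint']
CLOSED_EMPTY = 0.78   # joint position the fingers reach when nothing is grasped
MARGIN = 0.05

def grasp_succeeded(joint_state):
    """Binary observation: if the fingers stop well short of fully closed,
    something is probably held between them."""
    positions = [p for n, p in zip(joint_state.name, joint_state.position)
                 if n in FINGER_JOINTS]
    return bool(positions) and all(p < CLOSED_EMPTY - MARGIN for p in positions)

if __name__ == '__main__':
    rospy.init_node('grasp_observation')
    state = rospy.wait_for_message('/joint_states', JointState)
    print('grasp succeeded:', grasp_succeeded(state))
```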

@martine1406
Contributor

Thank you for your feedback, Hanna. I am not an expert with the Kinect, but I wonder whether a 1-2 cm absolute accuracy error is not simply to be expected when comparing Kinect measurements to a motion-capture gold standard. Here are a few papers I found while searching the internet:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5115766/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3304120/

Let me know your impressions

@alexvannobel
Collaborator

Hi all,
I ran a few tests with our MOVO and could reproduce the behaviour @tlpmit and @JoshSong are reporting with the Kinect.
I had MOVO hold a ruler in both a vertical and a horizontal position and compared the Kinect's Pointcloud (HD) alone against the Kinect's Pointcloud plus the Robot Model.

Vertical position
reallifeoffset
no_robot_model_closeup 1
with_robot_model_closeup
with_robot_model_closeup 1
We could observe that the Robot Model's end effectors (in red) are not superimposed on the end effectors captured by the Kinect (in blue). When measured in RViz, the offset was roughly 2.5 cm in length, and it was present in all three axes.

Horizontal position
horizontal_real
no_robot_model_horizontal
with_robot_model_horizontal
with_robot_model_horizontal_modified
The same behaviour was seen in the horizontal position, with the measurements also giving a total offset of roughly 2.5 cm.
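For reference, turning two points measured this way in RViz into per-axis and total offsets is just a subtraction; a small sketch with placeholder numbers (not the actual measurements):

```python
import numpy as np

# One point picked on the robot model's fingertip and the matching point in
# the Kinect cloud, both read in the base frame (metres). Placeholder values.
model_pt = np.array([0.800, 0.100, 0.950])
cloud_pt = np.array([0.820, 0.112, 0.959])

offset = cloud_pt - model_pt
print('per-axis offset (m):', offset)               # 0.020, 0.012, 0.009 here
print('total offset (m):', np.linalg.norm(offset))  # 0.025 m with these numbers
```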

Do you see the same behaviour as we do when you compare only the Kinect's Pointcloud versus the Kinect's Pointcloud + the Robot Model?

@hannakur

hannakur commented Sep 5, 2018

Hi Martine: I believe Josh did compare with results from a standalone Kinect last time, and didn't find much difference in accuracy. I'm not sure if there's actually a better method/model for calibration that we could use. But we didn't dig deeper into this because we view the error as a sort of test of our planning-under-uncertainty methods too.

Of course, it would be nice to have higher accuracy.

@tlpmit
Author

tlpmit commented Sep 5, 2018

We got better accuracy than this on the PR2. I don't think what we're seeing is noise; it's a systematic offset.

From some initial looking around, I believe that the calibration that has been done on the Kinects registers the depth camera and the RGB camera to each other - this is all internal to the Kinect itself. I don't think there is any "exterior" calibration being done, that is, aligning the camera coordinate frame to the robot frame.

I'm trying to set up a simple procedure to do a rough calibration to see if it helps. I'll let people know if I get somewhere... But the school term starts tomorrow, so progress may be slow ;-)
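One simple way to do such a rough check is to collect a few corresponding points (the fingertip position predicted by TF/the URDF versus the same fingertip picked out of the Kinect cloud at several arm configurations) and fit a single rigid transform between the two point sets. A minimal sketch of the fitting step (standard Kabsch/SVD on paired points, nothing MOVO-specific):

```python
import numpy as np

def fit_rigid_transform(cloud_pts, model_pts):
    """Least-squares rigid transform (Kabsch, no scaling) mapping points
    measured in the Kinect cloud onto the same points predicted from the
    robot's TF/URDF. Inputs are paired (N, 3) arrays with N >= 3."""
    cloud_pts = np.asarray(cloud_pts, dtype=float)
    model_pts = np.asarray(model_pts, dtype=float)
    c_mean = cloud_pts.mean(axis=0)
    m_mean = model_pts.mean(axis=0)
    H = np.dot((cloud_pts - c_mean).T, (model_pts - m_mean))
    U, _, Vt = np.linalg.svd(H)
    R = np.dot(Vt.T, U.T)
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1, :] *= -1
        R = np.dot(Vt.T, U.T)
    t = m_mean - np.dot(R, c_mean)
    return R, t                     # cloud point p maps to R.dot(p) + t
```

If the residual after the fit is small, the error behaves like a single camera-pose offset and the correction could be folded into the static transform between the camera mount and the Kinect frame (e.g. with a static_transform_publisher); if the residual stays large, the encoders or the URDF are more likely to blame.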

@cbames

cbames commented Sep 5, 2018 via email

@tlpmit
Author

tlpmit commented Sep 5, 2018

Oh good. I didn't know about that package. Thanks.

I've seen the Fetch go through the calibration... it has LEDs built into the hand, which is very convenient. It looks like, if we can attach a checkerboard at a known location in a Movo hand instead, this code can handle that as well.

I had started to write some code using a thin box held in the Robotiq hand and fitting the shape to the depth data. It looks like the Ferguson package is already debugged, so I'll give it a try... but I have a low pain threshold when it comes to systems issues with other people's code, so if I can't get it to work, I'll just finish my own. Other people with a higher tolerance for systems issues might want to give it a try as well.

@martine1406
Contributor

Hi everyone.

Thank you for those suggestions. I just want to confirm that there is indeed no specific procedure we do at Kinova to calibrate the camera frame against the robot base frame (or hand frame). An extra "exterior" calibration procedure might help. @tlpmit, keep us posted on your developments and good luck with your semester! @cbames, thanks again for your suggestion. We would need to be able to locate some specific features, such as LEDs or screws, both on the URDF and on the real robot, I suppose. We'll have to think about that.

We'll keep in touch.

@tlpmit
Author

tlpmit commented Sep 5, 2018

That package seems like just the right thing; it's what they use on the Fetch. If somebody has the patience to figure out how to install the libraries it needs to compile and run under Ubuntu 14.04, that would be a great service to humanity...
