
Calibration Guide

Anuj Pasricha edited this page Jul 5, 2023 · 2 revisions
  1. Clone HIRO's calibration package into your home directory.

Intrinsics (repeat for each camera)

Compute the camera's focal length, optical center, and distortion parameters. This only needs to be done once per camera.

  1. Clone the ROS camera_calibration package into your catkin workspace, then build and source the workspace.
  2. Start a roscore in a terminal.
  3. In another terminal, publish camera frames to a ROS topic: `cd ~/aruco_camera_calibration/autolab_aruco && python publish_raw_img.py --cam=[left/right]`. Verify the image stream by running `rosrun image_view image_view image:=/logitech/image` in another terminal.
  4. Confirm that your printed chessboard pattern has 10x7 squares, i.e., 9x6 interior corners (the calibrator counts corners, so the board has one extra row and column of squares relative to 9x6). Also confirm that each square is 2.4 cm wide.
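To see why the 9x6 / 0.024 numbers matter: the calibrator matches each detected interior corner against a grid of physical "object points" spaced by the square width. A minimal sketch of how such a grid is typically constructed (the variable names here are illustrative, not from the HIRO package):

```python
import numpy as np

SQUARE_M = 0.024      # printed square width in metres, per this guide
COLS, ROWS = 9, 6     # interior corners of a 10x7-square board

# One (x, y, z) point per interior corner; z = 0 on the board plane.
objp = np.zeros((ROWS * COLS, 3), np.float64)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_M
```

If the printed square width differs from the `--square` value passed to the calibrator, every recovered distance will be scaled by the same ratio, so measure the squares rather than trusting the print settings.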

  5. In yet another terminal, start the camera calibration routine: `rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.024 image:=/logitech/image camera:=/logitech`.
  6. Move the chessboard around in the camera frame to get complete coverage in the x- and y-directions; the entire chessboard must be visible for a frame to be captured. Ensure ample coverage of chessboard skew and size by tilting the pattern and moving it closer to and further from the camera center. When all bars in the calibration GUI (besides skew) are green and full, hit SAVE and wait patiently until the window unfreezes.

  7. You can now terminate the roscore, publish_raw_img.py, and cameracalibrator.py processes.
  8. Extract ost.yaml from /tmp/calibrationdata.tar.gz and place it in ~/aruco_camera_calibration/autolab_aruco/. Prepend [unique_identifier]_ to the filename, e.g. left_ost.yaml.
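The ost.yaml file stores the camera matrix K (focal lengths fx, fy and optical center cx, cy) plus distortion coefficients; downstream extrinsics code uses these to relate pixels to 3D rays. A sketch of the pinhole projection those numbers define, with illustrative values rather than a real calibration result:

```python
import numpy as np

# Illustrative intrinsics in the ost.yaml camera-matrix layout:
# [fx, 0, cx / 0, fy, cy / 0, 0, 1]  (values are made up for this example).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point in the camera frame to pixel coordinates (no distortion)."""
    p = K @ point_cam
    return p[0] / p[2], p[1] / p[2]

# A point on the optical axis lands at the optical center (cx, cy).
u, v = project(np.array([0.0, 0.0, 1.0]))
```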

Extrinsics (repeat for each camera and robot)

Compute the camera pose in the robot frame. Redo this whenever the camera or robot is relocated.

  1. Tape the ArUco marker in a spot that is both visible in the camera frame and reachable by the robot end-effector. Navigate to HIRO's calibration package: `cd ~/aruco_camera_calibration/autolab_aruco`

  2. Make sure the marker is not occluded in the camera frame. Run `python capture_image.py --cam=[left/right]`. If multiple cameras are plugged in, verify you're viewing the correct one; change the --cam argument if not. Press SPACE to capture an image and then ESC to quit.
  3. Compute the transformation from the tag frame to the camera frame: `python tag_to_camera.py --cam=[left/right]`
  4. Touch each corner of the ArUco marker (in the order specified by the image below) with the robot end-effector and capture the end-effector position at each corner. Copy these xyz coordinates into ~/aruco_camera_calibration/autolab_aruco/ee_positions.txt. You may use the provided capture_ee_positions.py script to capture and append a single EE position (for the Franka Emika Panda arm) to the ee_positions.txt file.

  5. Make sure the ee_positions.txt file contains exactly 4 lines, i.e., one xyz tuple per corner, in the order specified above. Then compute the transformation from the tag frame to the robot frame: `python tag_to_robot.py`
  6. Compute the transformation from the camera frame to the robot frame: `python camera_to_robot.py`
  7. Export the camera-to-robot transform to yaml: `python transform_to_yaml.py --filename=T_cam_robot.tf`
  8. Verify the calibration by placing the yellow AprilTag object anywhere on the table and moving the end-effector to the object location: test. Check for XY position accuracy and z-orientation alignment of the EE with the object.
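The camera-to-robot step in the chain above reduces to composing the two tag transforms: writing T_cam_tag for the tag-to-camera transform and T_robot_tag for the tag-to-robot transform as 4x4 homogeneous matrices, the camera pose in the robot frame is T_robot_tag @ inv(T_cam_tag). A sketch under those naming assumptions, with made-up poses:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical poses: tag 0.8 m in front of the camera; tag on the table in
# front of the robot, with the camera looking back toward it.
T_cam_tag   = make_T(np.eye(3), [0.0, 0.0, 0.8])
T_robot_tag = make_T(np.diag([1.0, -1.0, -1.0]), [0.5, 0.0, 0.1])

# Camera pose in the robot frame: undo camera->tag, then apply tag->robot.
T_robot_cam = T_robot_tag @ np.linalg.inv(T_cam_tag)

# Sanity check: both routes send a tag-frame point to the same robot-frame point.
p_tag = np.array([0.05, 0.02, 0.0, 1.0])
assert np.allclose(T_robot_cam @ (T_cam_tag @ p_tag), T_robot_tag @ p_tag)
```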
