*Forked from BerkeleyAutomation/aruco_camera_calibration.*
# Calibration Guide

*Anuj Pasricha edited this page Jul 5, 2023.*
- Clone HIRO's calibration package in your home directory.

## Compute camera focal length, center, and distortion parameters

You only need to do this once per camera.
- Clone the ROS `camera_calibration` package in your catkin workspace, then build and source it.
- Start a `roscore` in a terminal.
- In another terminal, publish camera frames to a ROS topic: `cd ~/aruco_camera_calibration/autolab_aruco && python publish_raw_img.py --cam=[left/right]`. Verify your image stream by running `rosrun image_view image_view image:=/logitech/image` in another terminal.
- Confirm that your printed chessboard pattern has 10x7 squares (one extra row and column of padding around the 9x6 grid of interior corners). Also confirm that each square is 2.4 cm wide.
- In yet another terminal, start the camera calibration routine: `rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.024 image:=/logitech/image camera:=/logitech`.
- Move the chessboard around in the camera frame to get complete coverage in the x- and y-directions. The entire chessboard must be visible for a frame to be captured. Ensure ample coverage of chessboard skew and size by tilting the pattern and moving it closer to and further from the camera center. When all bars in the calibration GUI are green and full (besides skew), hit save and be patient until the window unfreezes.
- You can now terminate the `roscore`, `camera_publisher.py`, and `cameracalibrator.py` processes.
- Grab `ost.yaml` from `/tmp/calibrationdata.tar.gz` and place it in `~/aruco_camera_calibration/autolab_aruco/`. Prepend `[unique_identifier]_` to the filename. Example final name: `left_ost.yaml`.
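For reference, the `ost.yaml` produced by the ROS `camera_calibration` package typically has the structure below; the numeric values here are placeholders, not results from a real calibration:

```yaml
image_width: 640
image_height: 480
camera_name: narrow_stereo
camera_matrix:
  rows: 3
  cols: 3
  data: [653.2, 0.0, 304.9, 0.0, 653.7, 235.1, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.09, -0.25, 0.001, -0.002, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [658.4, 0.0, 303.6, 0.0, 0.0, 659.8, 234.5, 0.0, 0.0, 0.0, 1.0, 0.0]
```

The `camera_matrix` holds the focal lengths and principal point (fx, 0, cx, 0, fy, cy, 0, 0, 1), and `distortion_coefficients` holds the plumb-bob distortion parameters (k1, k2, p1, p2, k3).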
## Compute camera pose in robot frame

Redo this if the camera or robot is relocated.
- Tape the ArUco marker in a spot where it is both visible in the camera frame and accessible by the robot end-effector. Navigate to HIRO's calibration package: `cd ~/aruco_camera_calibration/autolab_aruco`
- Make sure the marker is not occluded in the camera frame. Run `python capture_image.py --cam=[left/right]`. If multiple cameras are plugged in, verify you're viewing the correct one; change the `--cam` argument if not. Press SPACE to capture an image and then ESC to quit.
- Compute the transformation from the tag frame to the camera frame: `python tag_to_camera.py --cam=[left/right]`
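Marker pose estimation of this kind typically yields a rotation vector and a translation vector (e.g., from OpenCV's `solvePnP`). A minimal numpy sketch of packing those into a homogeneous 4x4 tag-to-camera transform; the function names and example pose are illustrative, not taken from the repo's script:

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta        # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])               # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def pose_to_transform(rvec, tvec):
    """Pack a rotation vector and translation into a homogeneous 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = rodrigues_to_matrix(rvec)
    T[:3, 3] = tvec
    return T

# Example: tag rotated 90 degrees about the camera z-axis, 0.5 m in front of it.
T_tag_cam = pose_to_transform([0.0, 0.0, np.pi / 2], [0.0, 0.0, 0.5])
```

The same conversion is what OpenCV's `cv2.Rodrigues` performs; it is written out here only to keep the sketch dependency-free.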
- Touch each corner of the ArUco marker (in the order specified by the image below) with the robot end-effector and capture the end-effector position at each corner. Copy these xyz coordinates into `~/aruco_camera_calibration/autolab_aruco/ee_positions.txt`. You may use the provided `capture_ee_positions.py` script to capture and append a single EE position (for the Franka Emika Panda arm) to the `ee_positions.txt` file.
- Make sure the `ee_positions.txt` file contains only 4 lines, i.e., one xyz tuple per corner, in the order specified above. Then compute the transformation from the tag frame to the robot frame: `python tag_to_robot.py`
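The tag-to-robot fit can be understood as a rigid alignment between the marker corners expressed in the tag frame and the same corners measured in the robot frame. A sketch of that alignment using the SVD-based Kabsch method; the marker size, corner ordering, and coordinates below are made up for illustration and the repo's script may differ:

```python
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimizing ||R @ p + t - q|| over corresponding 3D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical 6 cm marker: corners in the tag frame (z = 0 plane) ...
tag_corners = np.array([[0.0, 0.0, 0.0], [0.06, 0.0, 0.0],
                        [0.06, 0.06, 0.0], [0.0, 0.06, 0.0]])
# ... and the same corners as touched by the end-effector (robot frame).
ee_corners = np.array([[0.40, 0.10, 0.02], [0.40, 0.16, 0.02],
                       [0.34, 0.16, 0.02], [0.34, 0.10, 0.02]])
R, t = rigid_transform(tag_corners, ee_corners)
```

Because only four coplanar points are available, the corner ordering must match between the two files exactly, which is why the step above insists on the specified order.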
- Compute the transformation from the camera frame to the robot frame: `python camera_to_robot.py`
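With both tag-frame transforms in hand, the camera-to-robot transform follows by composing them through the shared tag frame: T_cam_robot = T_tag_robot · (T_tag_cam)⁻¹. A sketch with placeholder 4x4 matrices (illustrative values, not real calibration output):

```python
import numpy as np

def invert_transform(T):
    """Invert a homogeneous 4x4 rigid transform using R^T, without a general inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Placeholder transforms: tag 0.5 m in front of the camera, and tag pose in robot frame.
T_tag_cam = np.eye(4);   T_tag_cam[:3, 3] = [0.0, 0.0, 0.5]
T_tag_robot = np.eye(4); T_tag_robot[:3, 3] = [0.40, 0.10, 0.02]

# Chain through the tag frame to express the camera in the robot frame.
T_cam_robot = T_tag_robot @ invert_transform(T_tag_cam)
```

A quick sanity check is that mapping the tag back through the result recovers its robot-frame pose, i.e. `T_cam_robot @ T_tag_cam == T_tag_robot`.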
- Export the camera-to-robot transform to `yaml`: `python transform_to_yaml.py --filename=T_cam_robot.tf`
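A sketch of what such an export step might look like, rendering the 4x4 matrix as plain YAML with only string formatting; the key name and layout are illustrative and not necessarily what `transform_to_yaml.py` emits:

```python
import numpy as np

def transform_to_yaml_str(T, name="T_cam_robot"):
    """Render a 4x4 homogeneous transform as a small YAML document."""
    rows = "\n".join(
        "  - [" + ", ".join(f"{v:.6f}" for v in row) + "]" for row in np.asarray(T)
    )
    return f"{name}:\n{rows}\n"

yaml_doc = transform_to_yaml_str(np.eye(4))
print(yaml_doc)
```

Writing the matrix row by row as YAML lists keeps the file readable and trivially parseable by any YAML loader downstream.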
- Verify the calibration by positioning the yellow AprilTag object anywhere on the table and moving the end-effector to the object location: `test`. Check for XY position accuracy and z-orientation alignment of the EE with the object.