Table of Contents
- Research Paper
- About
- Requirements
- Submodule
- Calibration Board
- Calibration Images
- Data Directory
- Frame Camera Intrinsic
- Setup Steps
- Calibration Configuration
- Initial Guesses
- Flipping Line-scan Image
- Results
- TODO
- License
- Contact
- Acknowledgments
You can find my research paper, which describes the active calibration algorithm, here:
Observability driven Multi-modal Line-scan Camera Calibration
```bibtex
@inproceedings{mehami2020observability,
  title={Observability driven multi-modal line-scan camera calibration},
  author={Mehami, Jasprabhjit and Vidal-Calleja, Teresa and Alempijevic, Alen},
  booktitle={2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)},
  pages={285--290},
  year={2020},
  organization={IEEE}
}
```
A probabilistic calibration framework for a camera system comprising a line-scan camera and a 2D frame camera. The framework includes uncertainty estimation due to measurement noise, so the optimised calibration parameters are returned with an associated error. The cameras are assumed to be rigidly mounted together, and calibration is performed using a calibration pattern with triangle features and cross-corner markers.
Observability is an important aspect of this calibration because it is common to capture the calibration pattern in similar poses. Such poses lack sufficient parallax, so they provide no additional information to support the calibration, and in some cases they degrade the results. An active calibration algorithm is implemented that keeps only measurements which improve parameter observability and ignores all others.
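The selection rule can be sketched as follows. This is a minimal Python illustration, not the repository's MATLAB implementation; the function names are hypothetical, and the use of eigenvalues of the Fisher information matrix (JᵀJ) is an assumption based on the summed normalised metric described in the configuration section.

```python
import numpy as np

def summed_normalised_metric(prev_eigs, new_eigs, min_eig=1e-6):
    """Sum of relative eigenvalue changes between two optimisations."""
    prev = np.maximum(prev_eigs, min_eig)  # avoid dividing by values near 0
    return float(np.sum((new_eigs - prev) / prev))

def active_update(kept_info, candidate_info):
    """Keep a candidate measurement only if it increases parameter information.

    kept_info, candidate_info: Fisher information matrices (J'J) of the
    currently kept measurement set and of the set with the candidate added.
    """
    prev_eigs = np.sort(np.linalg.eigvalsh(kept_info))
    new_eigs = np.sort(np.linalg.eigvalsh(candidate_info))
    return summed_normalised_metric(prev_eigs, new_eigs) > 0.0
```

A candidate image whose information matrix does not raise the eigenvalues overall would be rejected by this rule.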
- MATLAB 2020a or higher (Tested with 2021b)
- GCC 7.0 or higher (Tested with 7.5.0)
- OpenCV 4.0 or higher
- CMake 3.10 or higher
This repository has a submodule. Make sure to add the `--recurse-submodules` flag when pulling or cloning.

```shell
# clone repo and submodules
git clone --recurse-submodules **REPO**
```

Alternatively:

```shell
# first clone repo and then initialise and update submodule
git clone **REPO**
git submodule init
git submodule update
```
The calibration is performed using a pattern with known triangles on an ArUco board, as shown below. The triangles provide feature points for the line-scan camera. All triangles must be visible in the FOV of the line-scan camera in order to calibrate properly. The ArUco board markers are used by the frame camera to acquire the board's pose automatically using OpenCV functionality (further details of the ArUco board pose estimation can be found in the README of the submodule).
An A3 PDF version of the board can be found in calibration_board, and its corresponding dimension measurements, which are used in the calibration, are saved in the file config.yaml.
When capturing calibration images, the board should be moved and rotated so that there is variation across the set of calibration images. At least two ArUco markers should be present in the frame camera image to estimate the board pose (see PnP algorithm), and eight line features should be detected in the line-scan image as shown below. The line-scan image may need to be flipped to match the order of the lines on the board (see below for further details).
The calibration data should be organised as follows:
```
Calibration_Data_directory
├── Frame
│   ├── img1.png
│   ├── img2.png
│   └── ...
├── Line-scan
│   ├── hs1.png
│   ├── hs2.png
│   └── ...
├── frame_camera_intrinsic.mat
└── calibration_results (generated after successfully running calibration script)
```

Both `Frame` and `Line-scan` directories should contain the same number of images.
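A quick sanity check of this layout before running the calibration could look like the following Python sketch (a hypothetical helper, not part of the repository):

```python
from pathlib import Path

def check_calibration_data(data_dir):
    """Verify the expected layout: Frame/ and Line-scan/ with equal image counts."""
    root = Path(data_dir)
    frame = sorted((root / "Frame").glob("*.png"))
    linescan = sorted((root / "Line-scan").glob("*.png"))
    if len(frame) != len(linescan):
        raise ValueError(
            f"Image count mismatch: {len(frame)} frame vs {len(linescan)} line-scan"
        )
    return len(frame)  # number of image pairs available for calibration
```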
The intrinsic parameters of your frame (RGB) camera must be calibrated prior to performing this calibration. This can be done through checkerboard calibration using MATLAB's camera calibration app. The saved intrinsic parameters should be stored with the calibration data under the filename frame_camera_intrinsic.mat. Inside the MAT file is a MATLAB cameraParameters object named cameraParams.
- Extract images from the rosbag by running the script `Rosbag_Unpack_LineScan_Frame_Cameras.m` (located in `rosbag_processing`) in MATLAB. Pass the rosbag location to the first window. A new directory called `*NAME OF Rosbag*` will be created, and inside that directory will be `Frame` and `Line-scan` directories containing the images. Note: extraction and saving might take a couple of minutes.
- Compile the ArUco pixel detection MEX function from the submodule:

  ```shell
  cd ext_lib/mex_ChArUco_Pose/
  mkdir build
  cd build
  cmake ..
  make -j
  ```

- Edit the calibration configuration file `config.yaml` as needed. Details of the calibration configuration are described in the next section.
- Run the calibration script `Main_Calibration.m`. Pass the directory location of the extracted images, `Images/*NAME OF BAG*/`, which contains the `Frame` and `Line-scan` directories.
- Check that the line-scan lines are properly aligned to the frame camera images. If successful, the results will be saved in the directory `calibration_results`. If results already exist, the program will ask whether you want to override them.
The following configuration parameters are found in the config.yaml file.
| Flag | Description |
|---|---|
| `display_on` | TRUE/FALSE. Turns the intermediate visualisations on/off. Turn off to speed up processing. |
| `t1_approximate` | The approximate distance from the Resonon to the RGB camera along the positive x-axis of the Resonon (see below for further details). |
| `t3_approximate` | The approximate distance from the Resonon to the RGB camera along the positive z-axis of the Resonon (see below for further details). |
| `flip_linescan_img` | TRUE if the x-axis of the pattern is in the opposite direction to the y-axis of the Resonon (see below for further details). |
| `naive_calibration` | TRUE/FALSE. If TRUE, runs naive calibration with all images; otherwise runs the active algorithm. |
| `algorithm` | Algorithm for solving the optimisation: 1 - Levenberg-Marquardt (default), 2 - Trust-Region-Reflective. Trust-region-reflective will generally yield proper calibration parameters, as it takes in constraints that keep the intrinsics of the hyperspectral camera within a proper range. (These should generally remain fixed.) |
| `steadystate_readings` | Stopping criterion for the active algorithm: the minimum number of consecutive optimisations until steady state is reached. Checks the sum of the relative change in eigenvalues for each optimisation (referred to as the summed normalised metric). |
| `minimum_eigen_value` | Minimum eigenvalue magnitude. This avoids having to deal with values close to 0. |
| `sum_eigen_threshold` | Threshold value for the summed normalised metric. The default value is 0, to ensure there is an increase in information. |
| `skip_covariance` | TRUE/FALSE. Skips covariance/error estimation. Can only be skipped if `naive_calibration` is TRUE. |
| `std_pixel_error` | Standard deviation of the pixel error for all cameras (default value 1). |
| `lower_bounds` | Lower bounds for the trust-region-reflective algorithm in the vector form [tx, ty, tz, qw, qx, qy, qz, fy, v0, K1, K2, P2]. |
| `upper_bounds` | Upper bounds for the trust-region-reflective algorithm in the vector form [tx, ty, tz, qw, qx, qy, qz, fy, v0, K1, K2, P2]. |
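For reference, a config.yaml using these flags might look like the following. The field names come from the table above; the values are purely illustrative, not the repository's defaults.

```yaml
# Illustrative values only; start from the config.yaml shipped with the repository.
display_on: false
t1_approximate: -0.1    # metres, along the positive x-axis of the Resonon
t3_approximate: -0.05   # metres, along the positive z-axis of the Resonon
flip_linescan_img: true
naive_calibration: false
algorithm: 1            # 1 = Levenberg-Marquardt, 2 = Trust-Region-Reflective
steadystate_readings: 5
minimum_eigen_value: 0.000001
sum_eigen_threshold: 0
skip_covariance: false
std_pixel_error: 1
```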
The closed-form solution of the calibration requires an initial guess for the t1 and t3 parameters. This avoids the calibration converging to an incorrect mirror pose. The diagram below shows the required approximate measurements for the initial guess (they should be measured in metres):
Note that the extrinsic transformation describes the pose of the frame camera axes with respect to the line-scan camera axes. In the above example, both guesses would be negative.
Depending on the orientation of the calibration board, the line-scan camera images may need to be flipped horizontally so that the order of the feature points in the image corresponds to the order on the board. You will need to manually set the flag `flip_linescan_img` as shown above.
This depends on the setup of the cameras, but the general guideline for setting the flag to true is that the x-axis of the pattern needs to be in the opposite direction to the y-axis of the camera.
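In code terms, the flip amounts to mirroring the image horizontally. A minimal Python/NumPy sketch (the repository performs this in MATLAB; the function name is hypothetical):

```python
import numpy as np

def maybe_flip_linescan(img, flip_linescan_img):
    """Horizontally flip a line-scan image when the pattern x-axis opposes
    the camera y-axis (the flip_linescan_img flag from config.yaml)."""
    return np.fliplr(img) if flip_linescan_img else img
```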
For my setup, the following examples show when the line-scan image is flipped and not flipped respectively:
Results are saved as a MAT file called `optimised_parameters.mat`, which stores all the necessary data after calibration. Load this file into MATLAB.
The final optimised calibration parameters can be found in the matrix `linescan_Opt_Param`. This is an n x 11 matrix where each column holds the values of one of the 11 calibration parameters [t1, t2, t3, rz, ry, rx, fy, v0, K1, K2, P2]. The last row contains the final optimised parameters you should use. The covariance matrix for these parameters can be found in the 11 x 11 x n array `linescan_Opt_ParamErr` by taking the very last 11 x 11 matrix.
`numOpt` tells you how many optimisation iterations were performed. In each iteration of the active algorithm a new image is added, so `numOpt + 1` is the number of images used in the final optimisation.
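The indexing described above can be sketched in Python/NumPy (the MATLAB equivalents would be `linescan_Opt_Param(end, :)` and `linescan_Opt_ParamErr(:, :, end)`; the helper name is hypothetical):

```python
import numpy as np

def final_calibration(linescan_Opt_Param, linescan_Opt_ParamErr):
    """Return the final 11 optimised parameters and their 11 x 11 covariance.

    linescan_Opt_Param: n x 11 matrix, one row per optimisation.
    linescan_Opt_ParamErr: 11 x 11 x n covariance array.
    """
    params = linescan_Opt_Param[-1, :]       # last row = final parameters
    cov = linescan_Opt_ParamErr[:, :, -1]    # last 11 x 11 slice = their covariance
    return params, cov
```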
Below is the final output figure showing the calibrated cameras and one instance of the pattern with the projected line-scan view-line.
- Rotation optimisation over the manifold rather than using quaternion parameterisation. This should fix convergence issues.
- Read in frame camera intrinsic parameters from data folder or Rosbag
- Allow for passing in Rosbag of images
- Allow for any image naming schemes
- Automatic method to set flip_linescan_img to true/false.
Distributed under the MIT License. See LICENSE.txt for more information.
Email: [email protected]