Stereo camera system calibration: the need for two sets of parameters
The reconstruction of a scene via a stereo-camera system is a two-step process: first, images from different cameras are matched to identify the set of point-to-point correspondences that will then be reconstructed in the three dimensional real world. The performance of the system strongly relies on the calibration procedure, which has to be carefully designed to guarantee optimal results. We implemented three different calibration methods and we compared their performance on experimental datasets. We present experimental evidence that, due to image noise, a single set of parameters is not sufficient to achieve high accuracy in the identification of the correspondences and in the reconstruction at the same time. We propose to calibrate the system twice, estimating two different sets of parameters: one obtained by minimizing the reprojection error, to be used when dealing with quantities defined in the space of the cameras, and one obtained by minimizing the reconstruction error, to be used when dealing with quantities defined in the real world.
Stereo camera systems are an effective instrument for 3D reconstruction in a great variety of applications and in different fields of science, entertainment, industry and automotive [entertainment2000, cavagna2017Swarm, dell2014automated, robotics2017, entertainment2007, navigation2019, medicine2016, industrial2020].
The common objective of all these applications is the reconstruction of a scene - the one in the common field of view of the cameras - starting from the images. The central topic of all stereo-vision applications is then the calibration of the system, which defines the relation between the real world and the world of the cameras.
The role of the calibration parameters in the reconstruction of the scene is two-fold. On one side, they are needed to match the images across the cameras, i.e. to pair the points, belonging to different cameras, that are images of the same target. On the other side, they are needed in the actual reconstruction process, to define the geometry of the system with respect to the world reference frame. The calibration procedure must therefore be robust against noise in the identification of the correct correspondences, and accurate in the reconstruction.
The literature suggests different approaches to the calibration problem, as in [2DmeasureCalibration2019a, 2DmeasureCalibration2019b, 2DmeasureCalibration2018, 2DmeasureCalibration2016, 3DmeasureCalibration2020, 3DmeasureCalibration2019, 3DmeasureCalibration2016]. Regardless of the particular strategy, the set of parameters estimated with the calibration procedure is generally used both for the identification of the correct correspondences and for the reconstruction. Working with one set of parameters is reasonable in a noise-free world, but it is not sufficient to guarantee optimal results in the real world, where images are affected by noise.
We show this discrepancy by implementing three different calibration algorithms. In the first one, the essential method, we calibrate the system by estimating the essential matrix from a set of known point-to-point correspondences, following the basic calibration algorithms found in [hartley2003multiple] and in [faugeras]. In the other two methods, we use the set of parameters found with the essential method as the initial condition of a Monte Carlo algorithm, to find the parameters that minimize the reprojection error, in what we call the 2D minimization method, and the reconstruction error, in the 3D minimization method.
We tested the three methods on experimental datasets and we show that none of them produces high quality results on the identification of the correspondences and on the reconstruction at the same time. The 2D minimization method gives the best performance in terms of the identification of the correspondences, while the 3D minimization method gives the best performance in terms of the accuracy of the reconstruction.
We propose to estimate two different sets of parameters and to use one or the other according to the needs: to identify the correspondences we use the parameters estimated through the 2D minimization method, while to reconstruct the already identified correspondences in the 3D space we use the parameters estimated through the 3D minimization method.
In this section we summarize the notations and the basic concepts that we will use throughout the paper.
2.1 Single camera
Pinhole model. This is the basic camera model adopted in the field of computer vision. It assumes that the image of a target lies at the intersection between the camera sensor and the line connecting the camera optical center and the target itself, i.e. the optical ray passing through the optical center and the target, see Fig.1c. The target and its image are therefore related by a projection centered on the camera optical center, defined by the following equation:
where the image is expressed in projective coordinates, i.e. as the three dimensional point, defined up to a scale factor, whose first two components are the pixel coordinates of the image, and the target is expressed in projective coordinates as well, i.e. as the four dimensional point obtained by appending a unit component to its three world coordinates, which are expressed in the camera reference frame shown in Fig.1. P is the projection matrix that defines the relation between the 3D world, where coordinates are expressed in meters, and the world of the cameras, where coordinates are expressed in pixels.
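As an illustration of this central projection, a minimal numerical sketch in Python with NumPy; the focal length, image center and target position below are illustrative values, not those of our setup:

```python
import numpy as np

def project(P, X):
    """Central projection of a 3D point X (meters, camera frame)
    to pixel coordinates through the 3x4 projection matrix P."""
    X_h = np.append(X, 1.0)      # projective (homogeneous) coordinates of the target
    x_h = P @ X_h                # image in projective coordinates, defined up to scale
    return x_h[:2] / x_h[2]      # pixel coordinates

# Illustrative internal parameters: focal length 1000 px, no skew,
# image center at (640, 512) px; identity external parameters.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
x = project(P, np.array([0.1, 0.2, 2.0]))   # a target 2 m in front of the camera
```

Note that any point on the same optical ray projects onto the same pixel, which is the reconstruction ambiguity discussed below.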
Projection matrix. The general form of the projection matrix is the following:

P = K [R | T]

where K is the 3x3 matrix of the camera internal parameters and [R | T] is the 3x4 matrix of the camera external parameters.
Camera internal parameters. K is the matrix of the camera internal parameters, which stores the camera intrinsic characteristics:

K = | f  s  u0 |
    | 0  f  v0 |
    | 0  0   1 |

where f is the focal length expressed in pixels, s is the pixel skewness, and (u0, v0) are the coordinates of the image center expressed in the image reference frame, see Fig.1.
A camera with known internal parameters is referred to as a calibrated camera, and the coordinates of its images may be naturally normalized with the following transformation:

x_n = K^-1 x

where x_n represents the normalized and dimensionless projective coordinates of the image x, and K^-1 is the inverse of the intrinsic parameters matrix.
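A minimal sketch of this normalization, with the same kind of illustrative internal parameters matrix as above:

```python
import numpy as np

# Illustrative internal parameters matrix: focal length 1000 px,
# no skew, image center at (640, 512) px.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])

def normalize(K, x):
    """Map pixel coordinates to the normalized, dimensionless
    coordinates of a calibrated camera by applying the inverse of K."""
    x_h = np.array([x[0], x[1], 1.0])      # projective pixel coordinates
    x_n = np.linalg.inv(K) @ x_h
    return x_n[:2]

x_n = normalize(K, np.array([690.0, 612.0]))
```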
Camera external parameters. [R | T] is the matrix of the camera external parameters, which defines the orientation and position of the camera with respect to the world reference frame: R is the rotation matrix and T is the three dimensional translation vector that bring the camera reference frame into the world reference frame, i.e. the three dimensional reference frame where the coordinates of the target are defined.
3D reconstruction ambiguity. The correspondence between the 3D world and the world of the camera, defined by the central projection in eq.1, is not one-to-one: all the points lying on the same optical ray are imaged on the same point, as shown in Fig.1. Therefore it is not possible to reconstruct the 3D position of an object using the information of one camera only.
2.2 Two-camera systems
The reconstruction ambiguity of the single camera may be overcome by matching the information from two cameras. When a target is seen by two cameras, its two images identify two optical rays (the two red lines in Fig.2), at the intersection of which the target lies. The 3D coordinates of the target may be found by solving the following system:
where the knowns are the projective coordinates of the two images and the projection matrices P1 and P2 of the two cameras, and the unknown is the projective coordinates of the target.
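A standard way to solve this overdetermined homogeneous system is a linear least-squares solution via SVD; the sketch below assumes known projection matrices and, in the usage example, an illustrative canonical pair of calibrated cameras:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) solution of the two-camera system: the 3D point whose
    projections through P1 and P2 best match the image points x1 and x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: minimize |A X| subject to |X| = 1
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]      # back from projective to 3D coordinates

# Illustrative canonical pair of calibrated cameras (K = identity):
# primary camera at the origin, secondary displaced along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.0, 2.0])
x1 = X_true[:2] / X_true[2]                  # image in the primary camera
X2 = X_true + np.array([-1.0, 0.0, 0.0])     # target in the secondary frame
x2 = X2[:2] / X2[2]                          # image in the secondary camera
X_rec = triangulate(P1, P2, x1, x2)
```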
Canonical systems. In the particular situation where the world reference frame coincides with the reference frame of one of the two cameras, the system is in its canonical form. We denote the reference camera as the primary camera, highlighted in purple in Fig.2, and the other camera as the secondary one, highlighted in green in Fig.2. The projection matrix of the primary camera is of the form P1 = K1 [I | 0], while for the secondary camera the projection matrix is in the standard form P2 = K2 [R | T], where K1 and K2 are the two internal parameters matrices.
Note that in this configuration the matrix R and the vector T define the orientation and position of the secondary camera with respect to the primary camera, hence they describe the mutual orientation/position of the two cameras, see Fig.2.
3 The role of noise
The pinhole camera model and its generalization to multi-camera systems are based on the strong assumption of the absence of noise. This is reasonable for a model but not realistic, since real images are in fact affected by noise. The direct effect of noise is a mis-positioning of the images on the camera sensors, which indirectly affects both the accuracy of the reconstruction and the accuracy of the reprojection.
Because of the noise, the target that should be imaged at two ideal points on the two sensors is instead imaged at two displaced points, see Fig.3, where for the sake of simplicity we describe the effect of noise on one camera only. As a consequence, the two optical rays passing through the noisy images do not coincide with the two noise-free optical rays passing through the ideal images. Because of this, the noisy rays do not intersect at the target position, which lies instead at the intersection of the noise-free rays, and in the general case they do not intersect at all.
Therefore the system defined in eq.1 does not have an exact solution and in principle we cannot reconstruct the correspondence in the 3D space. But we can find an approximate solution of the system: the point at minimum distance from both noisy optical rays. This reconstructed point is back-projected onto two points on the sensors, which differ both from the noise-free pair of images and from the noise-corrupted pair, as shown in Fig.3.
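The approximate solution can be sketched as the classical midpoint of the two closest points on the two (possibly skew) optical rays; the function below assumes the rays are given by their origins (the optical centers) and directions, and that they are not parallel:

```python
import numpy as np

def midpoint(c1, d1, c2, d2):
    """Approximate intersection of two possibly skew optical rays:
    the midpoint of the segment at minimum distance from both lines.
    c1, c2: ray origins (optical centers); d1, d2: ray directions
    (assumed non-parallel, otherwise the system below is singular)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Closest points c1 + s d1 and c2 + t d2: the connecting segment
    # must be orthogonal to both directions.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

For noise-free rays that do intersect, the midpoint coincides with the exact intersection.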
4 Canonical systems calibration
We focus here on the calibration of a system in its canonical form. Therefore we assume that the internal parameters of the cameras are known or pre-calibrated and that only the external parameters of the system have to be calibrated.
We implemented three different calibration algorithms: i. the essential method, in which the essential matrix of the system is computed from a set of known point-to-point correspondences; ii. the 2D minimization method, which finds the set of parameters that minimizes the reprojection error over the same set of known point-to-point correspondences already used in the essential method; iii. the 3D minimization method, which finds the set of parameters that minimizes the reconstruction error over a set of target-to-target and point-to-point correspondences.
4.1 Essential method.
Following [hartley2003multiple] and [faugeras], we start from a set of point-to-point correspondences between the two cameras, i.e. the set of pairs made of the images of the same target in the primary and in the secondary camera respectively.
We use this set to estimate the essential matrix E, namely the matrix that minimizes the sum of the residuals of the epipolar constraint over the whole set, where the residual of each pair is the epipolar constraint evaluated with E on the projective coordinates of the two images, normalized as in eq.4.
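A minimal linear, eight-point style sketch of this estimation on normalized coordinates; this is only a baseline version of the procedure, without the refinements of [hartley2003multiple]:

```python
import numpy as np

def to_h(x):
    """Append a unit third coordinate to each normalized image point."""
    return np.hstack([x, np.ones((len(x), 1))])

def estimate_essential(xn1, xn2):
    """Linear estimate of the essential matrix: minimize the sum of
    squared epipolar residuals xn2_i^T E xn1_i over the correspondences,
    with the norm of E fixed to one (eight-point style)."""
    # Each correspondence gives one linear equation in the 9 entries of E
    A = np.array([np.outer(x2, x1).ravel()
                  for x1, x2 in zip(to_h(xn1), to_h(xn2))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null-space solution, unit norm
    # Enforce the essential-matrix structure: singular values (1, 1, 0)
    U, s, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

On noise-free synthetic correspondences the estimated matrix satisfies the epipolar constraint to numerical precision.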
The essential matrix is of the form:

E = [T_hat]x R

where R is the rotation matrix and T_hat is the unit vector associated to the translation vector T (T_hat = T / |T|) that bring the reference frame of the secondary camera into the one of the primary camera, and [T_hat]x denotes the skew-symmetric matrix expressing the cross product with the vector T_hat.
We express the unit translation vector in terms of the polar and zenith angles associated to the center of the primary reference frame in the secondary camera reference frame:
and the rotation matrix in terms of the three angles of yaw, pitch and roll of the secondary camera with respect to the primary camera reference frame:
where the yaw, pitch and roll rotations act about the z-axis, the y-axis and the x-axis respectively. The essential matrix has then five degrees of freedom, represented by the five angles: the two angles defining the unit translation vector and the three angles defining the rotation.
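The five-angle parametrization can be sketched as follows; the axis conventions (yaw about z, pitch about y, roll about x) and the spherical-angle convention for the unit translation vector are one standard choice, assumed here for illustration:

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Rotation matrix as a product of rotations about the z-, y- and x-axis."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def unit_translation(zenith, polar):
    """Unit translation vector from its zenith and polar angles."""
    return np.array([np.sin(zenith) * np.cos(polar),
                     np.sin(zenith) * np.sin(polar),
                     np.cos(zenith)])

def cross_matrix(t):
    """Skew-symmetric matrix [t]x such that [t]x @ v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(zenith, polar, yaw, pitch, roll):
    """Essential matrix built from the five angles."""
    return cross_matrix(unit_translation(zenith, polar)) @ rotation(yaw, pitch, roll)
```

By construction the resulting matrix is singular and has two equal non-null singular values, as an essential matrix must.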
Inverting eq.8, we find the rotation matrix and the unit translation vector from the estimated essential matrix. Hence we can compute the [R | T] matrix up to the scale factor |T|, which represents the distance between the centers of the two camera reference frames, as shown in Fig.2. We fix this scale factor by measuring the system baseline in the real world.
4.2 2D and 3D minimization methods.
Both these methods use the parameters estimated through the essential matrix method, via the five associated angles, as the initial condition of an iterative Monte Carlo procedure.
In detail, at each iteration:
Step 1. We randomly select one of the five angles.
Step 2. We randomly choose whether to add or subtract a quantity delta (whose initial value is fixed at the beginning of the procedure) to the angle selected at Step 1.
Step 3. We define the set of angles for the current iteration, obtained from the angles of the previous iteration by modifying the angle selected in Step 1 by the quantity chosen at Step 2.
Step 4. We compute the rotation matrix and the translation vector that correspond to the current set of angles.
Step 5. We compute a cost function, defined in detail in Section 4.2.1, over a set of known correspondences (point-to-point correspondences for the 2D minimization method and target-to-target correspondences for the 3D minimization method) and we compare this current cost with the one of the previous iteration.
If the current cost is smaller than the previous one, we accept the move.
Otherwise we reject the move, and we set the current set of angles back to the one of the previous iteration.
Step 6. We compute the acceptance ratio, i.e. the ratio between the number of accepted moves and the total number of iterations. If the acceptance ratio is smaller than a chosen threshold, we reset the iteration counter and we decrease delta by a constant factor. If delta is still larger than a minimum threshold we go back to Step 1, otherwise the procedure ends and the solution of the minimization problem is the pair of rotation matrix and translation vector associated to the current angles.
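The steps above can be sketched as follows; the batch length, initial step, shrink factor and thresholds in the signature are hypothetical placeholders, not the values used in the paper, and the cost function is passed in as a black box (the 2D or 3D cost of Section 4.2.1):

```python
import numpy as np

def refine_angles(angles0, cost, delta0=0.01, shrink=0.5,
                  target_acc=0.1, delta_min=1e-6, batch=100,
                  rng=np.random.default_rng(0)):
    """Monte Carlo refinement of the five angles, starting from the
    essential-method estimate angles0. `cost` maps an angle 5-vector
    to a scalar cost; all numeric parameters are placeholder values."""
    angles = np.array(angles0, dtype=float)
    best = cost(angles)
    delta = delta0
    while delta > delta_min:
        accepted = 0
        for _ in range(batch):
            k = rng.integers(5)                 # Step 1: pick one of the five angles
            sign = rng.choice([-1.0, 1.0])      # Step 2: add or subtract delta
            trial = angles.copy()
            trial[k] += sign * delta            # Step 3: trial set of angles
            c = cost(trial)                     # Steps 4-5: evaluate the cost
            if c < best:                        # accept only improving moves
                angles, best = trial, c
                accepted += 1
        if accepted / batch < target_acc:       # Step 6: low acceptance ratio ->
            delta *= shrink                     # decrease the step size
    return angles
```

On a simple convex cost the procedure converges to the minimizer, which is how we sanity-checked the sketch.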
The 2D minimization method and the 3D minimization method differ in the cost function used.
2D minimization method. Given the set of point-to-point correspondences defined in eq.6, we first reconstruct each pair in the 3D space and then we back-project these reconstructed points onto the planes of the camera sensors, obtaining the set of reprojected pairs.
We define the reprojection error of each point as the distance between its original position on the sensor and its reprojection:
Finally, we define the reprojection cost as the sum of the reprojection errors over all the points.
3D minimization method. We start from a set of pairs of targets whose mutual distance is measured in the real world. We define the corresponding set of correspondences as the collection of these target pairs together with the images of the two targets on the two cameras.
From the two pairs of images we obtain the reconstructed positions of the two targets, and we compute the distance between them.
We define the reconstruction error of each pair of targets as the difference between their measured distance and their reconstructed distance:
Finally, we define the reconstruction cost as the sum of the reconstruction errors over all the pairs of the set.
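The 3D cost can be sketched directly from its definition; the function below assumes the pairs of targets have already been reconstructed, and that the measured mutual distance is the same for all pairs, as it is within each of our datasets:

```python
import numpy as np

def reconstruction_cost(pairs_3d, measured_dist):
    """3D cost: sum over the target pairs of the absolute difference
    between the measured distance and the reconstructed distance.
    pairs_3d: list of (Xa, Xb) reconstructed 3D positions."""
    cost = 0.0
    for Xa, Xb in pairs_3d:
        d_rec = np.linalg.norm(Xa - Xb)      # reconstructed mutual distance
        cost += abs(measured_dist - d_rec)   # reconstruction error of the pair
    return cost
```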
5 Experimental setup
We tested the calibration accuracy on sets of images acquired with a system of two cameras (IDT-M5) equipped with Schneider Xenoplan lenses. We calibrated the internal parameters of each camera separately in the lab, using a method based on [StandardCalib]: we collect images of a checkerboard in different positions, we randomly pick a subset of these pictures and we estimate the focal length, the position of the image center and the distortion coefficients. We iterate this process several times and we choose each parameter as the median value obtained over the iterations.
The datasets used in the tests are sets of images of two calibration targets, namely the centers of two checkerboards mounted on an aluminum bar, which will be denoted by target A and target B in the following. We detect the positions of the targets on the images with the automatic subpixel routine in [subPixel].
The datasets slightly differ in the camera baseline and in the distance between the two targets, which is constant within the images of the same dataset. We accurately measured both quantities for each dataset with a laser range finder.
For each dataset we define the set of point-to-point correspondences as the set of the pairs of points that are the images, on each picture of the two cameras, of target A and of target B respectively.
We also define the set of the target-to-target correspondences as the set made of the pairs of targets on each picture together with their images on the two cameras.
6 Tests and results
We tested the three algorithms described in Section 4 on the datasets described in Section 5. We used a subset of the images of each dataset, i.e. the calibration images, to estimate the system parameters, and the remaining images, i.e. the validation images, to validate the calibration. To increase the statistical significance of the tests, for each dataset we performed several runs, choosing at each run a different set of calibration images.
In each run we performed the three algorithms described above, obtaining three different sets of parameters of the system: i. from the essential method we estimated the essential matrix and, inverting eq.8, we found its associated [R | T] matrix; ii. from the 2D minimization method we estimated the [R | T] matrix, and we computed the associated essential matrix from eq.8; iii. from the 3D minimization method we estimated the [R | T] matrix, and we computed the associated essential matrix from eq.8.
We evaluated the three calibration algorithms in terms of the accuracy in the identification of the point-to-point correspondences across the cameras and in terms of the accuracy in the reconstruction.
Correspondences identification. For the identification of the correspondences we need to define a quantity that discriminates the correct correspondences, i.e. the pairs of points that are images of the same target in the two cameras, from the wrong ones, i.e. the pairs of points that do not correspond to the same target.
We used the validation images of each run to define the set of correct correspondences, i.e. the set of the pairs of points (one for each camera) both corresponding to target A or both corresponding to target B on the same image.
We also used the same validation images to define the set of wrong correspondences, i.e. the set of the pairs obtained by associating the point corresponding to target A to the point corresponding to target B, either on the same image or on different images, together with the pairs of points both corresponding to the same target but on different images.
The two natural quantities for the identification of the correspondences are the residuals, defined in eq.7, and the reprojection error, defined in eq.11. We computed the residuals using the three different essential matrices and the reprojection errors using the three different matrices of the external parameters, obtained with the three calibration methods. Finally we computed the probability distribution functions (PDF) shown in the first two columns of Fig.4.
To discriminate the correct correspondences from the wrong ones, the two PDFs should be separated, namely the right tail of the distribution of the correct correspondences (highlighted with a black dashed line in the plots in the first two columns of Fig.4) should not overlap the left tail of the distribution of the wrong correspondences. We measured the false correspondence probability (FCP) as the overlapping area between the correct and wrong probability distributions, so that a high value of the FCP corresponds to a high probability of a mis-correspondence.
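The FCP can be sketched as the overlap of the two empirical PDFs, estimated here with plain normalized histograms (the binning is an illustrative choice):

```python
import numpy as np

def false_correspondence_probability(correct_vals, wrong_vals, bins=100):
    """FCP: overlapping area between the empirical PDFs of the error
    for correct and wrong correspondences, estimated with normalized
    histograms on a common support."""
    lo = min(correct_vals.min(), wrong_vals.min())
    hi = max(correct_vals.max(), wrong_vals.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_correct, _ = np.histogram(correct_vals, bins=edges, density=True)
    p_wrong, _ = np.histogram(wrong_vals, bins=edges, density=True)
    width = edges[1] - edges[0]
    # Overlap area = integral of the pointwise minimum of the two PDFs
    return np.sum(np.minimum(p_correct, p_wrong)) * width
```

Fully separated distributions give FCP close to 0, identical distributions give FCP close to 1.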
In Fig.4 we report the FCP for the three algorithms and for both the residuals and the reprojection error. The best performance is obtained when we use the 2D minimization calibration method and we identify the correspondences through the reprojection error: the plot with the red frame in the second column of Fig.4.
3D reconstruction. To evaluate the accuracy in terms of the reconstruction, we measured the reconstruction error defined in eq.12. In the last column of Fig.4 we show the probability distribution of the percentage reconstruction error, i.e. the reconstruction error divided by the measured distance, for the three algorithms we implemented. We chose to compute and to plot the percentage reconstruction error because this is the measure that is relevant in most applications: the same absolute error has a very different meaning over a short distance than over a long one.
The best performance is obtained using the 3D minimization calibration method, i.e. the plot with the red frame in the last column of Fig.4. The dashed black line marks a percentage error that is an acceptable accuracy for most of the applications. Note that, while the reconstruction error for the 3D minimization method is always smaller than this threshold, for the other two methods the errors are always above it.
7 The need for two sets of parameters
The tests presented in Section 6 show that using a single set of parameters both for the identification of the correspondences and for the reconstruction is not optimal. With the 2D minimization algorithm we can successfully detect the correct correspondences by computing their reprojection error, but we are not accurate in the reconstruction. On the opposite, with the 3D minimization algorithm we achieve a very high accuracy in the reconstruction, but we cannot discriminate the correct correspondences from the wrong ones.
The reason for this double set of parameters lies in the presence of noise and in how the two methods handle it.
In the 2D minimization method we minimize the reprojection error, hence we implicitly assume that the positions of the images on the camera sensors are not affected by noise and that all the noise is in the 3D space. Given a target and its two images, we look for those parameters such that the two optical lines passing through the images intersect in the 3D space in a point whose reprojections are as close as possible to the original image points. It does not matter how far the reconstructed point is from the original target; the only significant quantity here is the reprojection error.
On the opposite, in the 3D minimization method we minimize the reconstruction error, hence we implicitly assume that the positions of the targets are not affected by noise and that all the noise is in the 2D space of the cameras. Given two targets, their distance in the 3D space and their images, we look for those parameters such that the distance between the two reconstructed points is as close as possible to the measured distance. It does not matter whether the two pairs of optical lines passing through the images of the two targets intersect in the 3D space, but only that the distance between the two reconstructed points is close to the real one. The only significant quantity here is the distance between the targets, no matter how large the reprojection error is.
We implemented three different calibration methods for the estimation of the external parameters of a stereo-camera system, with the aim of comparing their quality in terms of two factors: i. the correct identification of the point-to-point correspondences; ii. the accuracy of the reconstruction.
We tested the three methods on experimental datasets and we presented the experimental evidence that, due to image noise, a single set of parameters cannot be optimal both for the identification of the correspondences and for the reconstruction.
To obtain good results in terms of the identification of the correspondences, we need to work in the 2D space of the cameras, minimizing the reprojection error and moving all the noise into the 3D space, hence producing a high reconstruction error. On the opposite, to obtain good results in terms of the reconstruction, we need to work in the 3D space, minimizing the reconstruction error and moving all the noise into the 2D space of the cameras, hence producing large reprojection errors and a high probability of false correspondences.
The optimal choice is then to calibrate the system twice and to use the two sets of parameters depending on the quantities that we need to compute: the optimal set for the quantities that live in the space of the cameras is the one obtained by minimizing the reprojection error, while the optimal set for the quantities that live in the real world is the one obtained by minimizing the reconstruction error.
We do not claim that the three algorithms presented in the paper are an exhaustive survey of all the calibration techniques provided by the literature. But the same tests that we presented here may be applied to different, even more complicated and more accurate, algorithms. The core message of the paper is that, for the two tasks the calibration is meant for, we face the usual blanket-too-short dilemma: if we improve the accuracy in the 3D space we lose accuracy in the 2D space and vice versa. Hence the easiest and optimal approach is to separate the two tasks and calibrate the system twice.
This work was supported by ERC grant RG.BIO (Grant No. 785932).
- Two constraints are also added to the minimization of the residuals: i. the norm of the essential matrix is fixed to one, since the matrix is defined up to a scale factor; ii. the product of the essential matrix with its transpose has only one non-null eigenvalue, with multiplicity equal to two.
- where the three entries denote the components of the unit translation vector along the x-axis, the y-axis and the z-axis respectively.
- The same procedure may be applied to the set of target-to-point correspondences, made of the 3D targets and their two images. The reconstruction error should then be defined as the distance between each target and the point reconstructed from its two images. But in this approach we would need to accurately measure the positions of the targets in the reference frame of the primary camera, which is not a physical reference frame. We chose instead to measure a quantity that does not depend on the particular reference frame, but that guarantees that the metric proportions of the scene are respected, so that also the dynamic quantities of the targets, i.e. velocity and acceleration, may be accurately computed.