Calibration method for line-structured light multi-vision sensor based on combined target
EURASIP Journal on Wireless Communications and Networking, volume 2013, Article number: 92 (2013)
Abstract
Calibration is one of the most important technologies for line-structured light vision sensors. Existing calibration methods depend on special calibration equipment, whose accuracy determines the calibration accuracy, and they have difficulty meeting the requirements of universality and field calibration. To solve these problems, a new calibration method based on a combined target is proposed for line-structured light multi-vision sensors. Each line-structured light vision sensor is locally calibrated with high precision using the combined target. Based on the global calibration principle and the characteristics of the combined target, global calibration and local calibration are unified. This unification avoids the loss of precision caused by coordinate transformation, and the occlusion problem of 3D reference objects is also avoided. An experimental system for 3D multi-vision measurement is set up with two vision sensors, and its calibration matrices are obtained. To improve the calibration accuracy, a method of acquiring high-precision calibration points is presented and the error factors in calibration are analyzed. After calibration, the experimental system is tested through a workpiece measurement experiment. The repeatability of the system is 0.04 mm, which shows that the proposed calibration method achieves high precision. Moreover, by changing the structure of the combined target, the method can be adapted to different multi-vision sensors while the accuracy of the combined target is still guaranteed. Thus, the method has the advantages of universality and suitability for field calibration.
1. Introduction
The line-structured light vision sensor is a non-contact, real-time sensor with the advantages of high precision and active controllability, and it is widely used in industrial inspection [1, 2]. When the laser plane of a line-structured light vision sensor is projected onto the surface of a workpiece to be measured, it produces a distorted stripe modulated by the shape of the workpiece. The cameras capture images of the modulated light stripe, from which the 3D information of the workpiece surface can be calculated. This is the working principle of the line-structured light vision sensor [3].
Calibration is one of the most important technologies for line-structured light vision sensors [4, 5]. The current methods for calibrating the vision model of a line-structured light vision sensor mainly include the pressed-wire method, the tooth-profile target method [6], and the cross-ratio invariance method [1, 7, 8]. These methods depend on special calibration equipment, which makes them unsuitable for general use and field calibration. Given their complicated calibration processes, there is also room to improve their precision, and methods based on 3D reference objects suffer from an occlusion problem. To address this, calibration methods based on plane reference objects and 1D targets have been studied [9]. No matter what kind of target is used, the calibration accuracy depends strongly on the target.
A new method based on a combined target is proposed in this article to calibrate the vision model of a line-structured light multi-vision sensor. The combined target is made up of standard gauge blocks. Since every gauge block is of very high precision, high precision of the combined target is easy to obtain. By splicing the gauge blocks in different ways, the structure of the combined target can be changed so that it matches the vision sensor to be calibrated. In this method, several calibration points on the laser plane are first collected with high precision, and then each vision sensor is locally calibrated. Based on the principle of global calibration [10] and the characteristics of the combined target, global calibration is unified with local calibration. The surface of the combined target exhibits characteristics between those of a plane reference object and a 3D target, so the occlusion problem of methods based on 3D reference objects does not arise. Therefore, the proposed method has the advantages of high precision, universality, and suitability for field calibration.
2. Calibration method based on combined target
The calibration process for a line-structured light vision sensor includes the calibration of the camera intrinsic parameters and the calibration of the position relationship between the laser plane and the camera.
2.1. Calibration for camera intrinsic parameters
There are many relatively mature calibration methods for camera intrinsic parameters. The main distortion errors of a camera include radial distortion, tangential distortion, and thin-prism distortion. For an ordinary lens, radial distortion is an adequate description of the nonlinear distortion. Introducing too many distortion coefficients does not improve the accuracy and may instead lead to an unstable solution. Therefore, only the first-order radial distortion is taken into account, which fulfills the accuracy requirement. The parameters to be calibrated are the effective focal length f (mm), the center coordinates of the image plane u_{0} and v_{0} (pixel), the image aspect ratio s_{ x }, and the first-order radial distortion coefficient k_{1}.
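As a minimal sketch of the first-order radial model just described (the function name and the normalized-coordinate convention are illustrative assumptions, not from the paper):

```python
def undistort_first_order(xd, yd, k1):
    """Map a distorted normalized image point (xd, yd) to its undistorted
    position using only the first-order radial term:
        x_u = x_d * (1 + k1 * r^2),  r^2 = xd^2 + yd^2.
    Sign and coordinate conventions vary between calibration toolkits."""
    r2 = xd * xd + yd * yd
    return xd * (1.0 + k1 * r2), yd * (1.0 + k1 * r2)
```

With k_{1} = 0 the mapping is the identity, and the correction grows quadratically with distance from the image center, which is why a single coefficient often suffices for an ordinary lens.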
2.2. Calibration principle of multi-vision sensor
The calibration principle of the multi-vision sensor is shown in Figure 1. The combined target is made up of standard gauge blocks of different lengths, arranged in a staggered way at certain intervals, as shown in Figure 2. By changing the arrangement and the specification of the gauge blocks, different types of combined target are formed, and the structure is designed according to the vision sensor to be calibrated. The accuracy of a single size parameter of the combined target is easily guaranteed by the standard gauge block. Meanwhile, because their surfaces are extremely smooth, several gauge blocks can be joined into a whole, which ensures the accuracy of the composite size parameters. A laser projects a beam with a rectangular cross-section. When the beam falls on the surface of the combined target, it becomes a discontinuous stripe. This stripe is imaged on the image plane of the camera, and the imaged stripe is also discontinuous; Figure 3 gives an acquired image of the stripe. By calculating the gray gradient of the stripe image, the endpoints of each stripe segment can be found. Thus, the length of each segment is acquired, which corresponds to the nominal thickness of the gauge block.
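The gradient-based endpoint search can be sketched on a one-dimensional intensity profile (the threshold and array layout are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def stripe_endpoints(profile, thresh):
    """Find (start, end) index pairs of bright stripe segments along a
    sampled intensity profile: a large positive gray gradient marks a
    segment start, a large negative gradient marks a segment end."""
    g = np.diff(np.asarray(profile, dtype=float))
    starts = np.flatnonzero(g > thresh) + 1   # first bright sample of a segment
    ends = np.flatnonzero(g < -thresh)        # last bright sample of a segment
    return list(zip(starts.tolist(), ends.tolist()))
```

The distance between paired endpoints gives the stripe-segment length, which is compared against the nominal gauge-block thickness.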
In Figure 1, the world coordinate system o_{w}–x_{w}y_{w}z_{w} is built on the combined target, with its origin o_{w} at the center of the target. The x_{w}, y_{w}, and z_{w} axes are parallel to the ridges of the combined target, and the laser plane is defined as Z_{w} = 0. The target is fixed on a one-dimensional mobile platform, and the laser beam is adjusted so that it is perpendicular to the combined target and parallel to o_{w}y_{w}. The target then moves at intervals of 10 mm along the o_{w}x_{w} direction, which is parallel to the laser plane. Several stripe points at different positions are acquired as the target moves: the coordinate x_{w} of the points on the laser plane changes, while the coordinate y_{w} remains invariant. The number of stripe points is determined by the size of the gauge block in the o_{w}y_{w} direction; the smaller the gauge block, the more stripe points there are in a fixed field of view. Having acquired the coordinate x_{w} of the platform at the different positions, the coordinates of n points (X_{wi}, Y_{wi}) (i = 1,…,n) in the laser plane can be obtained. All the stripe endpoints at every position are mapped into image points, and their corresponding image coordinates (u_{ i }, v_{ i }) (i = 1,…,n) are calculated by image processing.
After the intrinsic calibration, the camera can be regarded as an ideal perspective transformation model. The relationship between the space coordinates p_{wi}(X_{wi}, Y_{wi}, Z_{wi}) (i = 1,…,n) of the i th feature point and the corresponding coordinates p_{ci}(u_{ i }, v_{ i }) in the image plane can be described as follows:
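Using the quantities defined in the next paragraph, the ideal perspective projection takes the standard pinhole form:

$$w\left[\begin{array}{c}u_i\\ v_i\\ 1\end{array}\right]=\left[\begin{array}{ccc}f&0&0\\ 0&f&0\\ 0&0&1\end{array}\right]\left[\begin{array}{cc}\mathbf{R}&\mathbf{t}\end{array}\right]\left[\begin{array}{c}X_{wi}\\ Y_{wi}\\ Z_{wi}\\ 1\end{array}\right]\quad (1)$$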
In the above equation, w is a proportionality constant, f is the effective focal length of the CCD camera, R is a 3 × 3 rotation matrix, and t is a 3 × 1 translation vector. Taking the laser plane Z_{w} = 0 into consideration, the following equation is derived:
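Restricting the model to the plane Z_{w} = 0 removes the third column of R, so the mapping collapses to a 3 × 3 matrix (here r_{1} and r_{2} denote the first two columns of R):

$$w\left[\begin{array}{c}u_i\\ v_i\\ 1\end{array}\right]=A\left[\begin{array}{c}X_{wi}\\ Y_{wi}\\ 1\end{array}\right],\qquad A=\left[\begin{array}{ccc}f&0&0\\ 0&f&0\\ 0&0&1\end{array}\right]\left[\begin{array}{ccc}\mathbf{r}_1&\mathbf{r}_2&\mathbf{t}\end{array}\right]\quad (2)$$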
where A is a 3 × 3 matrix, the so-called projection matrix. This matrix maps world coordinates on the laser plane into image coordinates. The above equation can be expressed in another way:
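Eliminating the scale factor w and normalizing a_{33} = 1 (consistent with the calibrated matrices reported in Section 4) gives

$$u_i=\frac{a_{11}X_{wi}+a_{12}Y_{wi}+a_{13}}{a_{31}X_{wi}+a_{32}Y_{wi}+1},\qquad v_i=\frac{a_{21}X_{wi}+a_{22}Y_{wi}+a_{23}}{a_{31}X_{wi}+a_{32}Y_{wi}+1}\quad (3)$$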
where a_{ ij } is the element in the i th row and j th column of the transformation matrix A.
Equation (3) establishes the perspective transformation relationship between points on the laser plane and their projections on the image plane; it is the mathematical model of the line-structured light vision sensor. In this equation, the world coordinates (X_{wi}, Y_{wi}) and the corresponding image coordinates (u_{ i }, v_{ i }) are known. At least six characteristic points on the reference object are therefore needed to calculate the transformation matrix A, which determines the relationship between the laser plane and the camera position.
The proposed calibration method based on the combined target provides n calibration points with both three-dimensional world coordinates (X_{wi}, Y_{wi}, Z_{wi}) and two-dimensional image coordinates (u_{ i }, v_{ i }). Combining the above equations, the proportionality constant w can be eliminated, yielding 2n linear equations KA = U, where K, A, and U are given by
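Writing the unknown elements of the projection matrix as a column vector, each calibration point contributes the two rows of Equation (3):

$$K=\left[\begin{array}{cccccccc}X_{w1}&Y_{w1}&1&0&0&0&-u_1X_{w1}&-u_1Y_{w1}\\ 0&0&0&X_{w1}&Y_{w1}&1&-v_1X_{w1}&-v_1Y_{w1}\\ \vdots&&&&&&&\vdots\\ X_{wn}&Y_{wn}&1&0&0&0&-u_nX_{wn}&-u_nY_{wn}\\ 0&0&0&X_{wn}&Y_{wn}&1&-v_nX_{wn}&-v_nY_{wn}\end{array}\right]\quad (4)$$

$$A=\left[\begin{array}{cccccccc}a_{11}&a_{12}&a_{13}&a_{21}&a_{22}&a_{23}&a_{31}&a_{32}\end{array}\right]^{\mathrm T}\quad (5)$$

$$U=\left[\begin{array}{ccccc}u_1&v_1&\cdots&u_n&v_n\end{array}\right]^{\mathrm T}\quad (6)$$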
The solution of these overdetermined linear equations can be calculated by the method of least squares:
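In normal-equation form, the least-squares solution is

$$A=\left(K^{\mathrm T}K\right)^{-1}K^{\mathrm T}U\quad (7)$$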
Through Equation (3) and the matrix A, the following equation can be obtained:
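Rearranging Equation (3) into two linear equations in the unknown world coordinates gives

$$\left[\begin{array}{cc}a_{11}-a_{31}u_i&a_{12}-a_{32}u_i\\ a_{21}-a_{31}v_i&a_{22}-a_{32}v_i\end{array}\right]\left[\begin{array}{c}X_{wi}\\ Y_{wi}\end{array}\right]=\left[\begin{array}{c}u_i-a_{13}\\ v_i-a_{23}\end{array}\right],\qquad Z_{wi}=0\quad (8)$$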
Matrix A determines the relationship between the world coordinates and the image coordinates of a given point. According to Equation (8), once matrix A is calibrated, the three-dimensional world coordinates (X_{wi}, Y_{wi}, Z_{wi}) can be reconstructed from the two-dimensional image coordinates (u_{ i }, v_{ i }). Hence, three-dimensional measurement is achieved.
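This inversion amounts to solving a 2 × 2 linear system per image point. A minimal sketch (the function name and array layout are our own, assuming A is normalized so that its (3, 3) element is 1):

```python
import numpy as np

def reconstruct_world(A, u, v):
    """Recover (X_w, Y_w, 0) on the laser plane from image coords (u, v),
    given the 3x3 projection matrix A with A[2, 2] = 1 (Equation (8))."""
    M = np.array([[A[0, 0] - A[2, 0] * u, A[0, 1] - A[2, 1] * u],
                  [A[1, 0] - A[2, 0] * v, A[1, 1] - A[2, 1] * v]])
    b = np.array([u - A[0, 2], v - A[1, 2]])
    Xw, Yw = np.linalg.solve(M, b)
    return Xw, Yw, 0.0
```

A quick sanity check is to project a known planar point through A and verify that the solver returns the original coordinates.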
2.3. Calibration process of multi-vision sensor
To apply the above calibration method, a line-structured light multi-vision sensor is set up. As shown in Figure 4, two line-structured light single-vision sensors are configured on either side of the combined target, and their laser planes are adjusted to be parallel. Since each vision sensor has an independent coordinate system, the local measurement data are not unified, so it is necessary to unify the local measurement results of the single-vision sensors into a global coordinate system.
As shown in Figure 1, a unique world coordinate system is established on the target. Suppose the thickness of the combined target along the o_{w}x_{w} direction is d, and sensor 1 measures a point at some spatial position with the three-dimensional coordinate vector S_{w1} = [x_{w1}, y_{w1}, z_{w1}]^{T}. The corresponding coordinate vector for sensor 2 is then S_{w2} = [x_{w1} + d, y_{w1}, z_{w1}]^{T}. In the measurement state, each single-vision sensor can be locally calibrated with calibration points expressed in this world coordinate system, after which the structural parameters of the system are acquired.
Since the calibration points used for local calibration are already unified in the global coordinate system, all the single-vision sensors are globally unified, and no separate global calibration step is necessary. This method directly unifies the local coordinate systems into the global coordinate system, which improves the calibration accuracy because fewer coordinate transformations are required.
3. Calibration error analysis
The errors in this calibration method mainly include the error of the laser plane position and the endpoint-extraction error.
3.1. The error of the laser plane position
The ideal laser line should be parallel to the ridge of the combined target, as shown in Figure 5, but there inevitably exists a certain inclination angle between the actual laser plane and the ridge, and this adjustment deviation leads to a calibration error. Assuming the deviation angle between the actual laser line and its ideal position is Δα, the spacing between the gauge blocks is d, and the length of the segment of the actual laser line intercepted by the ridges of the gauge blocks is d', the error Δd is given by
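Since the intercepted segment is the spacing d viewed at the inclination Δα, d' = d/cos Δα, and therefore

$$\Delta d=d'-d=\frac{d}{\cos\Delta\alpha}-d\approx\frac{d\,(\Delta\alpha)^2}{2}\quad (9)$$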
As deduced from Equation (9), the deviation angle Δα has little influence on Δd because of their quadratic relationship: when Δα = 2°, Δd is only 0.000609 mm.
When the moving direction of the guide rail is not parallel to the laser plane, as shown in Figure 6, there is an angle Δβ between them. Given the moving distance l of the guide rail and the actual moving distance l' of the calibration point in the laser plane, the error Δl is obtained by
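Because l' is the projection of the rail travel onto the laser plane, l' = l cos Δβ, and therefore

$$\Delta l=l-l'=l\left(1-\cos\Delta\beta\right)\approx\frac{l\,(\Delta\beta)^2}{2}\quad (10)$$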
Similarly, the deviation angle Δβ has a quadratic relationship with Δl.
In terms of Equations (9) and (10), the laser plane position has little influence on the calibration accuracy. Therefore, the laser plane can be adjusted simply by comparing the vertical ridge of a gauge block with the actual location of the laser plane, and the corresponding calibration error is reduced to the order of 0.1 μm.
3.2. Extraction error and rectification
Figure 7 shows an endpoint, extracted from the laser stripe image, that is used as a calibration point. In practice, the laser stripe image may bend or deviate at the edges of the stripe, which leads to endpoint extraction errors. In a three-dimensional scene, a straight line remains straight on the image plane under ideal imaging, and finding the intersection point of fitted straight lines is more accurate than extracting a single endpoint. This way of acquiring calibration points reduces the extraction error.
To obtain calibration points with this method, the extracted endpoints are fitted to straight lines. As the moving trajectory of a gauge-block endpoint is a straight line, n straight lines can be fitted along the moving direction o_{w}x_{w}, and m straight lines can be fitted along the vertical direction o_{w}y_{w}, as shown in Figure 8. The intersection points of these fitted lines are then computed and used in place of the extracted gauge-block endpoints for rectification.
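One way to realize the fitting-and-intersection step is a total-least-squares line fit via SVD followed by a 2 × 2 solve; the sketch below uses our own function names and line representation (unit normal n with n·p = c), not the paper's:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns (n, c) with n . p = c,
    where n is the unit normal, via SVD of the centered point set."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # the normal is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(n @ centroid)

def intersect(line1, line2):
    """Intersection of two lines given in normal form n . p = c."""
    (n1, c1), (n2, c2) = line1, line2
    return np.linalg.solve(np.array([n1, n2]), np.array([c1, c2]))
```

Fitting uses every endpoint observation along a trajectory, so random extraction errors at individual endpoints are averaged out before the intersection is taken.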
To verify the rectification method, a rectification experiment is carried out. First, the 3D world coordinates of the calibration points are reconstructed from their 2D image coordinates without rectification; the reconstructed coordinates are compared with their theoretical values and the RMS errors are calculated. Second, the endpoint coordinates are corrected by line fitting and the first step is repeated. Table 1 gives the RMS errors of 14 calibration points after rectification.
Before rectification, the coordinate-component RMS errors of the vision measurement system are RMS_{ x } = 0.180 mm and RMS_{ y } = 0.162 mm. According to Table 1, after rectification these errors are RMS_{ x } = 0.088 mm and RMS_{ y } = 0.074 mm. It can be concluded that intersection-point extraction reduces the error caused by extracting single endpoints and improves the calibration accuracy.
4. Experiments and discussion
An experimental system of the line-structured light multi-vision sensor is set up, as shown in Figure 9. It comprises two vision sensors, each with a camera of 1280 pixel × 1024 pixel resolution and a fixed focal length lens with f = 8 mm. A series of round workpieces with different diameters are used as the measurement objects.
4.1. Calibration experiment
The calibration points are acquired and then the experiment system is calibrated according to the proposed calibration method. The results are as follows.

(1) The internal parameters of the camera
Camera 1: s_{ x } = 0.999207, k_{1} = −0.056308, u_{0} = 685.890 pixel, v_{0} = 488.186 pixel
Camera 2: s_{ x } = 0.996126, k_{1} = −0.056308, u_{0} = 602.681 pixel, v_{0} = 458.935 pixel

(2) The structural parameters of the multi-vision sensor
$$\begin{array}{c}{A}_{1}=\left[\begin{array}{ccc}5.207687& 0.036653& 775.216473\\ 1.395210& 5.570409& 504.777588\\ 0.002785& 0.000012& 1\end{array}\right]\\ \phantom{\rule{1em}{0ex}}{A}_{2}=\left[\begin{array}{ccc}1.474586& 0.052835& 491.491942\\ 1.501106& 5.439574& 559.661316\\ 0.002766& 0.000027& 1\end{array}\right]\end{array}$$
4.2. Repeatability experiment
After calibration, the experimental system collects images of the laser stripe on the workpiece surface to measure the diameter. Figure 10 gives two images captured by cameras 1 and 2, respectively. The diameter, the measurement position, and the attitude of the workpiece are changed, and the image collection is repeated many times. The diameters are derived from the reconstructed 3D data, and the results of 15 measurements are shown in Table 2.
The standard deviations of the four diameters in Table 2 are 0.039, 0.033, 0.034, and 0.040 mm, respectively. The repeatability of the experimental system is taken as 0.040 mm, the maximum of these standard deviations, so the system has good repeatability.
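The repeatability figure corresponds to a simple computation over the repeated measurements; a sketch (assuming the sample standard deviation, ddof = 1, and our own function name):

```python
import numpy as np

def repeatability(measurement_groups):
    """Worst-case sample standard deviation across groups of repeated
    diameter measurements, used as the repeatability figure."""
    return max(float(np.std(g, ddof=1)) for g in measurement_groups)
```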
The accuracy of the experiment system depends on many factors, but the calibration method is the primary one. Thus, it can be concluded from the experimental results that the proposed calibration method has high precision.
5. Conclusions
A new calibration method for the line-structured light multi-vision sensor based on a combined target is proposed. The method combines local calibration with global calibration and reduces the calibration error caused by coordinate transformation. Meanwhile, the occlusion problem of 3D reference objects is avoided by the combined target, whose surface exhibits characteristics between those of a plane reference object and a 3D target. Finally, an experimental system of the line-structured light multi-vision sensor is set up; its repeatability is 0.04 mm, which shows that the proposed calibration method is feasible and achieves high precision. The method also has the advantages of universality and suitability for field calibration.
References
 1.
Leandry I, Breque C, Valle V: Calibration of a structured-light projection system: development to large dimension objects. Opt Lasers Eng 2012, 50(3):373-379. 10.1016/j.optlaseng.2011.10.020
 2.
Miguel R, Markus B: State of the art on vision-based structured light systems for 3D measurements. In IEEE International Workshop on Robotic and Sensors Environments (ROSE 2005). Ottawa, Canada; 2005:1-7.
 3.
Sandro B, Alessandro P, Viviano RA: Three-dimensional point cloud alignment detecting fiducial markers by structured light stereo imaging. Mach Vis Appl 2012, 23(2):217-229. 10.1007/s00138-011-0340-1
 4.
Zhang B, Li YF, Wu YH: Self-recalibration of a structured light system via plane-based homography. Pattern Recognit 2007, 40(4):1368-1377. 10.1016/j.patcog.2006.04.001
 5.
Kim D, Lee S, Kim H, Lee S: Wide-angle laser structured light system calibration with a planar object. In International Conference on Control Automation and Systems (ICCAS 2010). Gyeonggi-do, Korea; 2010:1879-1882.
 6.
Galilea JLL, Lavest JM, Vazquez CAL, Vicente AG, Munoz IB: Calibration of a high-accuracy 3D coordinate measurement sensor based on laser beam and CMOS camera. IEEE Trans Instrum Meas 2009, 58(9):3341-3346.
 7.
Shi YQ, Sun CK, Wang BG, Wang P, Duan HX: A global calibration method of multi-vision sensors in the measurement of engine cylinder joint surface holes. In International Conference on Materials, Mechatronics and Automation (ICMMA 2011). Melbourne, Australia; 2011:1182-1188.
 8.
Marcuzzi E, Parzianello G, Tordi M, Bartolozzi M, Lunardelli M, Selmo A, Baglivo L, Debei S, Cecco M: Extrinsic parameters calibration of a structured light system via planar homography based on a reference solid. In Proceedings of Fundamental and Applied Metrology. Lisbon, Portugal; 2009:1903-1908.
 9.
de Alexandre JA, Stemmer MR, de França MB: A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter. Pattern Recognit 2012, 45(10):3636-3647. 10.1016/j.patcog.2012.04.006
 10.
Bangkui H, Zhen L, Guangjun Z: Global calibration of multi-sensor vision measurement system based on line structured light. J Optoelectron Laser 2011, 22(12):1816-1820.
Acknowledgment
This study was supported by the Science and Technology Support Project (State Key Laboratory of Mechatronical Engineering and Control).
Competing interests
The authors declare that they have no competing interests.
Keywords
 Combined target
 Multivision sensor
 Calibration
 Linestructured light