
Calibration method for line-structured light multi-vision sensor based on combined target

Abstract

Calibration is one of the most important technologies for line-structured light vision sensors. Existing calibration methods depend on specialized calibration equipment, whose accuracy determines the calibration accuracy, and they have difficulty meeting the requirements of universality and field calibration. To solve these problems, a new calibration method based on a combined target is proposed for line-structured light multi-vision sensors. Each line-structured light vision sensor is locally calibrated with high precision using the combined target. Based on the global calibration principle and the characteristics of the combined target, global calibration and local calibration are unified. This unification avoids the precision loss caused by coordinate transformation, and the occlusion problem of 3D reference objects is also avoided. An experimental system for 3D multi-vision measurement is set up with two sets of vision sensors, and its calibration matrices are obtained. To improve the calibration accuracy, a method of acquiring calibration points with high precision is presented and the error factors in calibration are analyzed. After calibration, the experimental system is tested through a workpiece measurement experiment. The repeatability of the system is 0.04 mm, which proves that the proposed calibration method achieves high precision. Moreover, by changing the structure of the combined target, the method can be adapted to different multi-vision sensors while the accuracy of the combined target is still guaranteed. Thus, the method has the advantages of universality and suitability for field calibration.

1. Introduction

The line-structured light vision sensor is a non-contact, real-time sensor with the advantages of high precision and active controllability, and it is widely used in industrial inspection [1, 2]. When the laser plane of a line-structured light vision sensor is projected onto the surface of a workpiece to be measured, it produces a distorted stripe modulated by the shape of the workpiece. The cameras capture images of the modulated light stripe, from which the 3D information of the workpiece surface can be calculated. This is the working principle of the line-structured light vision sensor [3].

Calibration is one of the most important technologies for line-structured light vision sensors [4, 5]. The current methods for calibrating the vision model of a line-structured light vision sensor mainly include the press wire method, the tooth profile target method [6], and the cross-ratio invariance method [1, 7, 8]. These methods depend on specialized calibration equipment, which makes them unsuitable for general use and field calibration. Given their complicated calibration processes, there is still room to improve their precision, and methods based on 3D reference objects also suffer from an occlusion problem. To address these issues, calibration methods based on a plane reference object or a 1D target have been studied [9]. However, no matter what kind of target is used, the calibration accuracy depends strongly on the target.

In this article, a new method based on a combined target is proposed to calibrate the vision model of a line-structured light multi-vision sensor. The combined target is made up of standard gauge blocks. Since every gauge block is of very high precision, high precision of the combined target is easily obtained. Through different splicings of the gauge blocks, the structure of the combined target can be changed so that it matches the vision sensor to be calibrated. In this method, several calibration points on the laser plane are first collected with high precision; then each of the vision sensors is locally calibrated. Based on the principle of global calibration [10] and the characteristics of the combined target, global calibration is unified with local calibration. The surface of the combined target exhibits characteristics intermediate between a plane reference object and a 3D target, so the occlusion problem of methods based on 3D reference objects does not arise. Therefore, the proposed method has the advantages of high precision, universality, and suitability for field calibration.

2. Calibration method based on combined target

The calibration process for a line-structured light vision sensor includes calibration of the camera intrinsic parameters and calibration of the position relationship between the laser plane and the camera.

2.1. Calibration for camera intrinsic parameters

There are many relatively mature calibration methods for camera intrinsic parameters. The main distortion errors of a camera include radial distortion, tangential distortion, and thin prism distortion. For an ordinary lens, radial distortion adequately describes the nonlinear distortion; introducing too many distortion coefficients does not improve the accuracy and may instead lead to an unstable solution. Therefore, only first-order radial distortion is taken into account, which fulfills the accuracy requirement. The parameters to be calibrated are the effective focal length $f$ (mm), the center coordinates of the image plane $u_0$ and $v_0$ (pixel), the image aspect ratio $s_x$, and the first-order radial distortion coefficient $k_1$.
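For concreteness, a minimal Python sketch of this one-coefficient radial correction is given below. It assumes a Tsai-style model in which pixel coordinates are first converted to metric image-plane coordinates; the pixel pitch values, the function name, and the sample point are illustrative assumptions, not parameters reported in this article.

```python
import numpy as np

# Hypothetical pixel size in mm/pixel; not given in the paper.
DX = DY = 0.0067

def undistort_point(u, v, u0, v0, sx, k1, dx=DX, dy=DY):
    """Map a distorted pixel (u, v) to ideal image-plane coordinates (mm)
    using the first-order radial model x_u = x_d * (1 + k1 * r^2)."""
    xd = dx * (u - u0) / sx          # pixel -> metric, centered on (u0, v0)
    yd = dy * (v - v0)
    r2 = xd * xd + yd * yd           # squared radial distance
    return xd * (1.0 + k1 * r2), yd * (1.0 + k1 * r2)

# Example with camera 1's calibrated values from Section 4.1.
print(undistort_point(700.0, 500.0, u0=685.890, v0=488.186,
                      sx=0.999207, k1=-0.056308))
```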

2.2. Calibration principle of multi-vision sensor

The calibration principle of the multi-vision sensor is shown in Figure 1. The combined target is made up of standard gauge blocks of different lengths, arranged in a staggered way at certain intervals, as shown in Figure 2. By changing the arrangement and the specification of the gauge blocks, different types of combined target are formed; the structure is designed according to the vision sensor to be calibrated. The accuracy of a single size parameter of the combined target is easily guaranteed by the standard gauge block. Meanwhile, owing to their extremely smooth surfaces, several gauge blocks can be wrung together into a whole so as to ensure the accuracy of the composite size parameters. A beam of light with a rectangular cross-section is projected by a laser. When the beam falls on the surface of the combined target, it becomes a discontinuous stripe. This stripe is imaged on the image plane of the camera, and the imaged stripe is also discontinuous; Figure 3 shows an acquired image of the stripe. By calculating the gray gradient of the stripe image, the endpoints of the stripe segments can be found. Thus, the length of each stripe segment is acquired, which corresponds to the nominal thickness of a gauge block.
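One possible realization of this gradient-based endpoint search is sketched below. It scans a single 1-D gray-level profile taken across the discontinuous stripe; the threshold value, the profile orientation, and the function name are illustrative assumptions rather than details given in this article.

```python
import numpy as np

def stripe_endpoints_1d(profile, grad_thresh=30):
    """Find stripe-segment endpoints along a 1-D gray-level profile taken
    across the discontinuous stripe.  Endpoints appear as large positive
    (dark-to-bright) and negative (bright-to-dark) gray-gradient steps."""
    grad = np.diff(profile.astype(np.int32))       # gray gradient
    rising = np.flatnonzero(grad > grad_thresh)    # entering a bright segment
    falling = np.flatnonzero(grad < -grad_thresh)  # leaving a bright segment
    return rising, falling

# Toy profile: two bright stripe segments separated by a dark gap.  The
# pixel length of each bright run corresponds to a gauge-block thickness.
profile = np.array([10]*5 + [240]*8 + [12]*4 + [235]*6 + [11]*5)
print(stripe_endpoints_1d(profile))  # -> (array([ 4, 16]), array([12, 22]))
```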

Figure 1

Calibration principle of multi-vision sensor.

Figure 2

Combined target.

Figure 3

Image of discontinuous stripe.

In Figure 1, the world coordinate system $o_w x_w y_w z_w$ is built on the combined target, with its origin $o_w$ at the center of the target. In the world coordinate system, the $x_w$, $y_w$, and $z_w$ axes are parallel to the respective ridges of the combined target, and the laser plane is defined as $Z_w = 0$. The target is fixed on a one-dimensional mobile platform, and the laser beam is adjusted to be perpendicular to the combined target and parallel to $o_w y_w$. Then the target is moved at intervals of 10 mm along the $o_w x_w$ direction, which is parallel to the laser plane. Several stripe points at different positions are acquired while moving the target. During this motion the coordinate $x_w$ of the points on the laser plane changes, but the coordinate $y_w$ stays invariant. The number of stripe points is determined by the size of the gauge blocks in the $o_w y_w$ direction: the smaller the gauge blocks, the more stripe points there are within a fixed field of view. Having acquired the coordinate $x_w$ of the platform at the different positions, the coordinates of $n$ points $(X_{wi}, Y_{wi})$ $(i = 1, \ldots, n)$ in the laser plane are obtained. All the stripe endpoints at every position are mapped to image points, and their corresponding image coordinates $(u_i, v_i)$ $(i = 1, \ldots, n)$ are calculated by image processing.
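The following sketch illustrates how such a set of laser-plane calibration points could be assembled from the platform readings and the known gauge-block geometry; the coordinate values and the number of platform steps are placeholders, not the target dimensions used in the experiments.

```python
import numpy as np

# yw coordinates of the gauge-block ridges crossed by the stripe, known
# from the nominal block thicknesses (placeholder values).
block_edges_yw = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
platform_xw = np.arange(0.0, 60.0, 10.0)   # 10 mm steps along ow-xw

# One calibration point per (platform position, ridge) pair.
world_pts = np.array([(xw, yw) for xw in platform_xw
                               for yw in block_edges_yw])
# Each world point is paired with the image point (u_i, v_i) extracted
# from the corresponding stripe endpoint in that position's image.
print(world_pts.shape)  # (42, 2)
```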

After the intrinsic calibration, the camera can be regarded as an ideal perspective transformation model. The relationship between the space coordinates $p_{wi}(X_{wi}, Y_{wi}, Z_{wi})$ $(i = 1, \ldots, n)$ of the $i$th feature point and the corresponding coordinates $p_{ci}(u_i, v_i)$ in the image plane can be described as follows:

$$\begin{bmatrix} w\,u_i \\ w\,v_i \\ w \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_{wi} \\ Y_{wi} \\ Z_{wi} \\ 1 \end{bmatrix} \tag{1}$$

In the above equation, $w$ is a proportionality constant, $f$ is the effective focal length of the CCD camera, $R$ is a $3 \times 3$ rotation matrix, and $t$ is a $3 \times 1$ translation vector. Taking the laser plane $Z_w = 0$ into consideration, the following equation is derived:

$$w \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f r_1 & f r_2 & f T_x \\ f r_4 & f r_5 & f T_y \\ r_7 & r_8 & T_z \end{bmatrix} \begin{bmatrix} X_{wi} \\ Y_{wi} \\ 1 \end{bmatrix} = A X_w \tag{2}$$

where $A$ is a $3 \times 3$ matrix, the so-called projection matrix, which maps world coordinates on the laser plane into image coordinates. The above equation can also be written as:

$$w \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} X_{wi} \\ Y_{wi} \\ 1 \end{bmatrix} \tag{3}$$

where $a_{ij}$ is the element in the $i$th row and $j$th column of the transformation matrix $A$.

Equation (3) establishes the perspective transformation relationship between points on the laser plane and their projections on the image plane; it is the mathematical model of the line-structured light vision sensor. In this equation, the world coordinates $(X_{wi}, Y_{wi})$ and their corresponding image coordinates $(u_i, v_i)$ are known. At least six characteristic points on the reference object are therefore needed to calculate the transformation matrix $A$. The relationship between the laser plane and the camera position can be determined in this way.

The proposed combined-target calibration method for the line-structured light multi-vision sensor provides $n$ calibration points in both three-dimensional world coordinates $(X_{wi}, Y_{wi}, Z_{wi})$ and two-dimensional image coordinates $(u_i, v_i)$. Combining the above equations, the proportionality constant $w$ can be eliminated, yielding $2n$ linear equations $KA = U$, where $K$, $A$, and $U$ are given by

$$K = \begin{bmatrix}
X_{w1} & Y_{w1} & 1 & 0 & 0 & 0 & -X_{w1} u_1 & -Y_{w1} u_1 & 0 \\
0 & 0 & 0 & X_{w1} & Y_{w1} & 1 & -X_{w1} v_1 & -Y_{w1} v_1 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
X_{wi} & Y_{wi} & 1 & 0 & 0 & 0 & -X_{wi} u_i & -Y_{wi} u_i & 0 \\
0 & 0 & 0 & X_{wi} & Y_{wi} & 1 & -X_{wi} v_i & -Y_{wi} v_i & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
X_{wn} & Y_{wn} & 1 & 0 & 0 & 0 & -X_{wn} u_n & -Y_{wn} u_n & 0 \\
0 & 0 & 0 & X_{wn} & Y_{wn} & 1 & -X_{wn} v_n & -Y_{wn} v_n & 0
\end{bmatrix} \tag{4}$$

$$A = \left[ a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}, a_{33} \right]^T \tag{5}$$

$$U = \left[ u_1 a_{33}, v_1 a_{33}, \ldots, u_n a_{33}, v_n a_{33} \right]^T \tag{6}$$

The solution of the above linear equations can be calculated through the method of least squares:

$$A = \left( K^T K \right)^{-1} K^T U \tag{7}$$

Through Equation (3) and the matrix A, the following equation can be obtained:

$$\begin{bmatrix} a_{11} - a_{31} u_i & a_{12} - a_{32} u_i \\ a_{21} - a_{31} v_i & a_{22} - a_{32} v_i \end{bmatrix} \begin{bmatrix} X_{wi} \\ Y_{wi} \end{bmatrix} = \begin{bmatrix} a_{33} u_i - a_{13} \\ a_{33} v_i - a_{23} \end{bmatrix} \tag{8}$$

Matrix $A$ determines the relationship between the world coordinates and the image coordinates of a given point. According to Equation (8), once matrix $A$ is calibrated, the three-dimensional world coordinates $(X_{wi}, Y_{wi}, Z_{wi})$ can be reconstructed from the two-dimensional image coordinates $(u_i, v_i)$. Hence, three-dimensional measurement is accomplished.
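The pipeline of Equations (3)-(8) can be condensed into a short numerical sketch, shown below under the common normalization $a_{33} = 1$ (the derivation above keeps $a_{33}$ symbolic in Equations (4)-(6)); the function names are illustrative.

```python
import numpy as np

def calibrate_A(world_pts, img_pts):
    """Least-squares estimate of the projection matrix A of Eq. (3), with
    the normalization a33 = 1.  world_pts: (n, 2) laser-plane points
    (Xw, Yw); img_pts: (n, 2) pixel points (u, v); n >= 6 recommended."""
    K, U = [], []
    for (Xw, Yw), (u, v) in zip(world_pts, img_pts):
        K.append([Xw, Yw, 1, 0, 0, 0, -Xw * u, -Yw * u])   # u-equation row
        K.append([0, 0, 0, Xw, Yw, 1, -Xw * v, -Yw * v])   # v-equation row
        U.extend([u, v])
    a, *_ = np.linalg.lstsq(np.asarray(K, float),
                            np.asarray(U, float), rcond=None)  # Eq. (7)
    return np.append(a, 1.0).reshape(3, 3)

def reconstruct(A, u, v):
    """Invert Eq. (8): recover the laser-plane point (Xw, Yw) from (u, v)."""
    M = np.array([[A[0, 0] - A[2, 0] * u, A[0, 1] - A[2, 1] * u],
                  [A[1, 0] - A[2, 0] * v, A[1, 1] - A[2, 1] * v]])
    b = np.array([A[2, 2] * u - A[0, 2], A[2, 2] * v - A[1, 2]])
    return np.linalg.solve(M, b)
```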

2.3. Calibration process of multi-vision sensor

To apply the above calibration method, a line-structured light multi-vision sensor is set up. As shown in Figure 4, two sets of line-structured light single-vision sensors are configured on the two sides of the combined target, and their laser planes are adjusted to be parallel. Since each vision sensor has an independent coordinate system, the local measurement data are not unified. It is therefore necessary to unify the local measurement results of the single-vision sensors into a global coordinate system.

Figure 4

Calibration of line-structured light multi-vision sensor.

As shown in Figure 1, a unique world coordinate system is established on the target. Suppose the thickness of the combined target along the $o_w x_w$ direction is $d$, and sensor 1 observes a point at some spatial position with three-dimensional coordinate vector $S_{w1} = [x_w, y_w, z_w]^T$. The corresponding coordinate vector for sensor 2 is then $S_{w2} = [x_w + d, y_w, z_w]^T$. In the measurement state, each single-vision sensor can be locally calibrated from calibration points expressed in this world coordinate system, after which the structure parameters of the system are acquired.

Since the calibration points used for local calibration have already been unified in the global coordinate system, all the single-vision sensors are globally unified as well, and no separate global calibration of the single-vision sensors is necessary. This method directly unifies the local coordinate systems with the global coordinate system, which improves the calibration accuracy because fewer coordinate transformations are required.
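As a sketch of this unification, the function below applies the relation $S_{w2} = [x_w + d, y_w, z_w]^T$ described above; the function name and the numeric value of $d$ are illustrative placeholders.

```python
import numpy as np

def to_global(point, sensor_id, d):
    """Express a locally measured point in the shared world frame of
    Figure 1.  Both sensors are calibrated against the same combined
    target, so only the known target thickness d along ow-xw separates
    the two faces they observe."""
    p = np.asarray(point, dtype=float)
    if sensor_id == 2:
        p = p + np.array([d, 0.0, 0.0])   # offset sensor 2 by thickness d
    return p

print(to_global([1.0, 2.0, 0.0], sensor_id=2, d=50.0))  # d is a placeholder
```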

3. Calibration errors analysis

The errors in this calibration method mainly comprise the error of the laser plane position and the endpoint extraction error.

3.1. The error of the laser plane position

The ideal laser line should be parallel to the ridge of the combined target as shown in Figure 5. There inevitably exists a certain inclination angle between the actual laser plane and the ridge of the combined target. This adjustment deviation will lead to a calibration error. Assuming the deviation angle between the actual laser line and its ideal position is Δα, the spacing between the gauge blocks is d and the length of the line segment on the actual laser line intercepted by the ridges of the gauge blocks is d', the error Δd is given by

$$\Delta d = d' - d = \frac{d}{\cos \Delta\alpha} - d \approx \frac{d\,\Delta\alpha^2}{2} \tag{9}$$
Figure 5

Analysis of laser plane adjustment deviation.

As deduced from Equation (9), the deviation angle Δα has little influence on Δd because of their quadratic relationship. For example, with d = 1 mm and Δα = 2°, Δd is only 0.000609 mm.

When the guide rail moving direction is not parallel to the laser plane, as shown in Figure 6, there is an angle Δβ between them. Given the guide rail travel l and the actual travel l' of the calibration point in the laser plane, the error Δl is obtained by

$$\Delta l = l' - l = l \left( \frac{1}{\cos \Delta\beta} - 1 \right) \approx \frac{l\,\Delta\beta^2}{2} \tag{10}$$
Figure 6

Analysis of guide rail moving direction deviation.

Similarly, the deviation angle Δβ has a quadratic relationship with Δl.

According to Equations (9) and (10), the laser plane position has little influence on the calibration accuracy. The laser plane position can therefore be adjusted simply by comparing the vertical ridge of a gauge block with the actual location of the laser plane, and the corresponding calibration error is reduced to the order of 0.1 μm.
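These error bounds are easy to check numerically. In the sketch below, d = 1 mm reproduces the 0.000609 mm figure quoted above (the value of d is inferred, since it is not stated for that example), and l = 10 mm matches the platform step of Section 2.2.

```python
import numpy as np

# Numeric check of Eqs. (9) and (10): both adjustment errors grow only
# quadratically with the deviation angle.
d_mm, l_mm = 1.0, 10.0
for deg in (0.5, 1.0, 2.0):
    ang = np.deg2rad(deg)
    dd = d_mm / np.cos(ang) - d_mm    # Eq. (9), exact form
    dl = l_mm / np.cos(ang) - l_mm    # Eq. (10), exact form
    print(f"{deg:4.1f} deg   dd = {dd:.6f} mm   dl = {dl:.6f} mm")
# At 2 degrees: dd = 0.000609 mm, matching the value quoted in the text.
```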

3.2. Extraction error and rectification

Figure 7 shows the image of a laser stripe endpoint used as a calibration point. In practice, the stripe image may bend or deviate near its edges, which leads to endpoint extraction errors. Under ideal imaging, a straight line in the three-dimensional scene remains straight on the image plane, and locating the intersection point of two straight lines is more accurate than extracting a single endpoint. Acquiring calibration points in this way reduces the extraction error.

Figure 7

The image of laser stripe and its endpoint.

To obtain calibration points based on the above idea, the extracted endpoints are fitted to straight lines. Since the trajectory of each gauge block endpoint is a straight line, n straight lines can be fitted along the moving direction, namely $o_w x_w$, and m straight lines along the vertical direction, namely $o_w y_w$, as shown in Figure 8. The intersection points of these fitted lines are then found and used in place of the extracted gauge block endpoints for rectification (a brief sketch of this step follows Figure 8).

Figure 8

Rectification of the gauge block endpoints.
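A minimal sketch of this fitting-and-intersection step is given below, working in image coordinates; the function names and the toy endpoint data are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line through 2-D points: returns (n, c) with
    unit normal n such that n . p = c for every point p on the line."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # last right singular vector
    normal = vt[-1]                            # = direction of least variance
    return normal, float(normal @ centroid)

def intersect(line_a, line_b):
    """Intersection of two lines given in (normal, offset) form."""
    (na, ca), (nb, cb) = line_a, line_b
    return np.linalg.solve(np.vstack([na, nb]), np.array([ca, cb]))

# Rectified calibration point: intersect a line fitted to endpoints along
# the moving direction (ow-xw) with one fitted along the vertical
# direction (ow-yw), instead of trusting any single extracted endpoint.
row = fit_line([(0.0, 10.1), (10.0, 10.0), (20.0, 9.9)])   # toy endpoints
col = fit_line([(10.1, 0.0), (10.0, 10.0), (9.9, 20.0)])
print(intersect(row, col))  # approximately [10., 10.]
```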

To verify the rectification method, a rectification experiment is carried out. First, the 3D world coordinates of the calibration points are reconstructed from their 2D image coordinates without rectification; the reconstructed coordinates are compared with their theoretical values and the RMS errors are calculated. Second, the endpoint coordinates are corrected by line fitting and the first step is repeated. Table 1 gives the RMS values of 14 calibration points after rectification.

Table 1 RMS values of calibration points after rectification

Before rectification, the coordinate component RMS errors of the vision measurement system are $\mathrm{RMS}_x = 0.180$ mm and $\mathrm{RMS}_y = 0.162$ mm. According to Table 1, after rectification these errors are $\mathrm{RMS}_x = 0.088$ mm and $\mathrm{RMS}_y = 0.074$ mm. It can be concluded that intersection point extraction reduces the error caused by separate endpoint extraction and improves the calibration accuracy.

4. Experiments and discussion

An experimental system of the line-structured light multi-vision sensor is set up, as shown in Figure 9. It comprises two sets of vision sensors; each camera has a resolution of 1280 × 1024 pixels and a fixed-focal-length lens with f = 8 mm. A series of round workpieces with different diameters are used as the measurement objects.

Figure 9

The experiment system of multi-vision sensor.

4.1. Calibration experiment

The calibration points are acquired and then the experiment system is calibrated according to the proposed calibration method. The results are as follows.

(1) The internal parameters of the cameras

Camera 1: $s_x = 0.999207$, $k_1 = -0.056308$, $u_0 = 685.890$ pixel, $v_0 = 488.186$ pixel

Camera 2: $s_x = 0.996126$, $k_1 = -0.056308$, $u_0 = 602.681$ pixel, $v_0 = 458.935$ pixel

(2) The structural parameters of the multi-vision sensor

$$A_1 = \begin{bmatrix} 5.207687 & 0.036653 & 775.216473 \\ 1.395210 & 5.570409 & 504.777588 \\ 0.002785 & 0.000012 & 1 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 1.474586 & -0.052835 & 491.491942 \\ -1.501106 & 5.439574 & 559.661316 \\ -0.002766 & -0.000027 & 1 \end{bmatrix}$$

4.2. Repeatability experiment

After calibration, the experimental system collects images of the laser stripe on the workpiece surface to measure the diameter. Figure 10 gives two images captured by cameras 1 and 2, respectively. The diameter, measurement position, and attitude of the workpiece are varied, and image collection is repeated many times. The diameters are derived from the reconstructed 3D data; the results of 15 measurements are shown in Table 2.

Figure 10

The image of the laser stripe.

Table 2 The results of diameter measurement experiments (mm)

The standard deviations of the four diameters in Table 2 are 0.039, 0.033, 0.034, and 0.040 mm, respectively. The repeatability of the experimental system is taken as 0.040 mm, the maximum of these standard deviations. The system therefore has good repeatability.

The accuracy of the experiment system depends on many factors, but the calibration method is the primary one. Thus, it can be concluded from the experimental results that the proposed calibration method has high precision.

5. Conclusions

A new calibration method for the line-structured light multi-vision sensor based on a combined target is proposed. The method combines local calibration with global calibration and reduces the calibration error caused by coordinate transformation. Meanwhile, the occlusion problem of 3D reference objects is solved by the combined target, whose surface exhibits characteristics intermediate between a plane reference object and a 3D target. Finally, an experimental system of the line-structured light multi-vision sensor is set up; its repeatability is 0.04 mm, which proves that the proposed calibration method is feasible and achieves high precision. The method also has the advantages of universality and suitability for field calibration.

References

  1. Leandry I, Breque C, Valle V: Calibration of a structured-light projection system: development to large dimension objects. Opt Lasers Eng 2012, 50(3):373-379. 10.1016/j.optlaseng.2011.10.020


  2. Miguel R, Markus B: State of the art on vision-based structured light systems for 3D measurements. In IEEE International Workshop on Robotic and Sensors Environments (ROSE 2005). Ottawa, Canada; 2005:1-7.


  3. Sandro B, Alessandro P, Viviano RA: Three-dimensional point cloud alignment detecting fiducial markers by structured light stereo imaging. Mach Vis Appl 2012, 23(2):217-229. 10.1007/s00138-011-0340-1


  4. Zhang B, Li YF, Wu YH: Self-recalibration of a structured light system via plane-based homography. Pattern Recognit. 2007, 40(4):1368-1377. 10.1016/j.patcog.2006.04.001


  5. Kim D, Lee S, Kim H, Lee S: Wide-angle laser structured light system calibration with a planar object. In International Conference on Control Automation and Systems (ICCAS 2010). Gyeonggi-do, Korea; 2010:1879-1882.


  6. Galilea JLL, Lavest J-M, Vazquez CAL, Vicente AG, Munoz IB: Calibration of a high-accuracy 3-D coordinate measurement sensor based on laser beam and CMOS camera. IEEE Trans Instrum Meas 2009, 58(9):3341-3346.


  7. Shi YQ, Sun CK, Wang BG, Wang P, Duan HX: A global calibration method of multi-vision sensors in the measurement of engine cylinder joint surface holes. In International Conference on Materials, Mechatronics and Automation (ICMMA 2011). Melbourne, Australia; 2011:1182-1188.


  8. Marcuzzi E, Parzianello G, Tordi M, Bartolozzi M, Lunardelli M, Selmo A, Baglivo L, Debei S, Cecco M: Extrinsic parameters calibration of a structured light system via planar homography based on a reference solid. In Proceedings of Fundamental and Applied Metrology. Lisbon, Portugal; 2009:1903-1908.


  9. de Alexandre JA, Stemmer MR, de França MB: A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter. Pattern Recognit. 2012, 45(10):3636-3647. 10.1016/j.patcog.2012.04.006


  10. Bangkui H, Zhen L, Guangjun Z: Global calibration of multi-sensor vision measurement system based on line structured light. J Optoelectron Laser 2011, 22(12):1816-1820.



Acknowledgment

This study was supported by the Science and Technology Support Project (State Key Laboratory of Mechatronical Engineering and Control).

Author information


Corresponding author

Correspondence to Yin-guo Huang.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Huang, Yg., Li, Xh. & Chen, Pf. Calibration method for line-structured light multi-vision sensor based on combined target. J Wireless Com Network 2013, 92 (2013). https://doi.org/10.1186/1687-1499-2013-92
