# Calibration method for line-structured light multi-vision sensor based on combined target

Yin-guo Huang^{1} (corresponding author), Xing-hua Li^{1}, and Pei-fen Chen^{1}

**2013**:92

https://doi.org/10.1186/1687-1499-2013-92

© Huang et al.; licensee Springer. 2013

**Received: **9 January 2013

**Accepted: **25 February 2013

**Published: **28 March 2013

## Abstract

Calibration is one of the most important technologies for line-structured light vision sensors. Existing calibration methods depend on special calibration equipment, whose accuracy determines the calibration accuracy, and they have difficulty meeting the requirements of universality and field calibration. To solve these problems, a new calibration method based on a combined target is proposed for line-structured light multi-vision sensors. Each vision sensor is locally calibrated with high precision using the combined target. Based on the global calibration principle and the characteristics of the combined target, global calibration and local calibration are unified. This unification avoids the precision loss caused by coordinate transformation, and the occlusion problem of 3D reference objects is also avoided. An experimental system for 3D multi-vision measurement is set up with two vision sensors, and its calibration matrices are obtained. To improve the calibration accuracy, the method of acquiring calibration points with high precision and the error factors in calibration are analyzed. After calibration, the experimental system is tested through a workpiece measurement experiment. The repeatability of the system is 0.04 mm, which proves that the proposed calibration method achieves high precision. Moreover, by changing the structure of the combined target, the method can be adapted to different multi-vision sensors while the accuracy of the combined target is still guaranteed. Thus, this calibration method has the advantages of universality and field calibration.


## 1. Introduction

The line-structured light vision sensor is a non-contact, real-time sensor with the advantages of high precision and active controllability, and it is widely used in industrial inspection [1, 2]. When the laser plane of a line-structured light vision sensor is projected onto the surface of a workpiece to be measured, the laser stripe is distorted by the modulation of the workpiece's dimensions. The cameras capture images of the modulated light stripe, from which the 3D information of the workpiece surface can be calculated. This is the principle of the line-structured light vision sensor [3].

Calibration is one of the most important technologies for line-structured light vision sensors [4, 5]. The current methods to calibrate the vision model of a line-structured light vision sensor mainly include the press wire method, the tooth profile target method [6], and the cross-ratio invariance method [1, 7, 8]. These methods depend on special calibration equipment, which makes them unsuitable for general use and field calibration. Given their complicated calibration processes, there is still room to improve their precision, and they also suffer from the occlusion problem of 3D reference objects. To address this, calibration methods based on plane reference objects and 1D targets have been studied [9]. No matter what kind of target is used, the calibration accuracy depends strongly on the target.

A new method based on a combined target is proposed in this article to calibrate the vision model of a line-structured light multi-vision sensor. The combined target is made up of standard gauge blocks. Since every gauge block is of very high precision, high precision of the combined target is easy to obtain. Through different splicing arrangements of the gauge blocks, the structure of the combined target can be changed so that it agrees with the vision sensor to be calibrated. In this method, several calibration points on the laser plane are first collected with high precision, and then each vision sensor is locally calibrated. Based on the global calibration principle [10] and the characteristics of the combined target, global calibration is unified with local calibration. The surface of the combined target exhibits characteristics between those of a plane reference object and a 3D target, so the occlusion problem of methods based on 3D reference objects does not arise. Therefore, the proposed method has the advantages of high precision, universality, and field calibration.

## 2. Calibration method based on combined target

The calibration process for line-structured light vision sensor includes calibration for camera intrinsic parameters and calibration for the position relationship between laser plane and camera.

### 2.1. Calibration for camera intrinsic parameters

There are many relatively mature calibration methods for camera intrinsic parameters. The main distortion errors of a camera include radial distortion, tangential distortion, and thin prism distortion. For an ordinary lens, radial distortion adequately describes the nonlinear distortion. Introducing too many distortion coefficients does not improve the accuracy and may instead lead to an unstable solution. Therefore, only the first-order radial distortion is taken into account, which fulfills the accuracy requirement. The parameters to be calibrated are the effective focal length (*f*/mm), the center coordinates of the image plane (*u*_{0} and *v*_{0}/pixel), the image aspect ratio (*s*_{x}), and the first-order radial distortion coefficient (*k*_{1}).
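As an illustration, the first-order radial correction can be sketched as follows. This is a minimal sketch of the distortion model just described, assuming the common convention that the factor (1 + *k*_{1}*r*²) is applied to normalized image coordinates; the function name and convention are ours, not from the paper.

```python
# Minimal sketch of the first-order radial distortion model (assumption:
# the factor (1 + k1 * r^2) maps distorted to undistorted coordinates,
# with (xd, yd) already normalized with respect to the focal length).
def undistort_normalized(xd, yd, k1):
    """Correct a normalized image point for first-order radial distortion."""
    r2 = xd * xd + yd * yd          # squared distance from the optical axis
    factor = 1.0 + k1 * r2          # only the first-order term is kept
    return xd * factor, yd * factor
```

With *k*_{1} = 0 the point is unchanged; a negative *k*_{1}, as calibrated in Section 4.1, pulls points slightly toward the image center under this convention.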

### 2.2. Calibration principle of multi-vision sensor

In Figure 1, the world coordinate system *o*_{w}–*x*_{w}*y*_{w}*z*_{w} is built on the combined target, whose origin *o*_{w} is the center of the combined target. In the world coordinate system, the *x*_{w}, *y*_{w}, and *z*_{w} axes are parallel to the ridges of the combined target, and the laser plane is defined as *Z*_{w} = 0. The target is fixed on a one-dimensional mobile platform, and the laser beam is adjusted to be perpendicular to the combined target and parallel to *o*_{w}*y*_{w}. Then the target moves at an interval of 10 mm along the *o*_{w}*x*_{w} direction, which is parallel to the laser plane. Several stripe points at different positions are acquired while moving the target. During this motion the coordinate *x*_{w} of the points on the laser plane changes, but the coordinate *y*_{w} remains invariant. The number of stripe points is determined by the size of the gauge block in the *o*_{w}*y*_{w} direction: the smaller the gauge block, the more stripe points there are in a fixed field range. Having acquired the coordinate *x*_{w} of the platform at different positions, the coordinates of *n* points (*X*_{wi}, *Y*_{wi}) (*i* = 1,…,*n*) in the laser plane can be obtained. All the endpoints of the stripe at every position are mapped into image points, and their corresponding image coordinates (*u*_{i}, *v*_{i}) (*i* = 1,…,*n*) are calculated by image processing.

The relationship between the world coordinates *p*_{wi}(*X*_{wi}, *Y*_{wi}, *Z*_{wi}) (*i* = 1,…,*n*) of the *i*th feature point and its corresponding coordinates *p*_{ci}(*u*_{i}, *v*_{i}) in the image plane can be described as follows:

$w\left[\begin{array}{c}{u}_{i}\\ {v}_{i}\\ 1\end{array}\right]=\left[\begin{array}{ccc}f& 0& 0\\ 0& f& 0\\ 0& 0& 1\end{array}\right]\left[\begin{array}{cc}R& t\end{array}\right]\left[\begin{array}{c}{X}_{wi}\\ {Y}_{wi}\\ {Z}_{wi}\\ 1\end{array}\right]$ (1)

where *w* indicates a proportionality constant, *f* indicates the effective focal length of the CCD camera, *R* is a 3 × 3 rotation matrix, and *t* is a 3 × 1 translation vector. Taking the laser plane *Z*_{w} = 0 into consideration, the following equation is derived:

$w\left[\begin{array}{c}{u}_{i}\\ {v}_{i}\\ 1\end{array}\right]=A\left[\begin{array}{c}{X}_{wi}\\ {Y}_{wi}\\ 1\end{array}\right]$ (2)

where *A* is a 3 × 3 matrix, the so-called projection matrix, which maps world coordinates on the laser plane into image coordinates. The above equation can be expressed in another way:

${u}_{i}=\frac{{a}_{11}{X}_{wi}+{a}_{12}{Y}_{wi}+{a}_{13}}{{a}_{31}{X}_{wi}+{a}_{32}{Y}_{wi}+{a}_{33}},\phantom{\rule{1em}{0ex}}{v}_{i}=\frac{{a}_{21}{X}_{wi}+{a}_{22}{Y}_{wi}+{a}_{23}}{{a}_{31}{X}_{wi}+{a}_{32}{Y}_{wi}+{a}_{33}}$ (3)

where *a*_{ij} is the element in the *i*th row and *j*th column of the transformation matrix *A*.

Equation (3) establishes the perspective transformation relationship between the points on the laser plane and their perspective points on the image plane; it is the mathematical model of the line-structured light vision sensor. In this equation, the world coordinates (*X*_{wi}, *Y*_{wi}) and their corresponding image coordinates (*u*_{i}, *v*_{i}) are known. Therefore, at least six characteristic points on the reference object are needed to calculate the transformation matrix *A*. The relationship between the laser plane and the camera position can be determined in this way.
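The perspective mapping of Equation (3) is straightforward to evaluate once *A* is known; the following is a minimal sketch (the function name is ours):

```python
# Evaluate the perspective transformation of Equation (3): map a point
# (Xw, Yw) on the laser plane to its image coordinates (u, v), given the
# 3x3 projection matrix A as a nested list.
def project(A, Xw, Yw):
    den = A[2][0] * Xw + A[2][1] * Yw + A[2][2]   # common denominator
    u = (A[0][0] * Xw + A[0][1] * Yw + A[0][2]) / den
    v = (A[1][0] * Xw + A[1][1] * Yw + A[1][2]) / den
    return u, v
```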

Suppose there are *n* calibration points with known three-dimensional world coordinates (*X*_{wi}, *Y*_{wi}, *Z*_{wi}) and two-dimensional image coordinates (*u*_{i}, *v*_{i}). Combining the above equations, the proportionality constant *w* can be eliminated, and 2*n* linear equations *KA* = *U* can be obtained. Here, with *a*_{33} = 1, the quantities *K*, *A*, and *U* are given by

$K=\left[\begin{array}{cccccccc}{X}_{w1}& {Y}_{w1}& 1& 0& 0& 0& -{X}_{w1}{u}_{1}& -{Y}_{w1}{u}_{1}\\ 0& 0& 0& {X}_{w1}& {Y}_{w1}& 1& -{X}_{w1}{v}_{1}& -{Y}_{w1}{v}_{1}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {X}_{wn}& {Y}_{wn}& 1& 0& 0& 0& -{X}_{wn}{u}_{n}& -{Y}_{wn}{u}_{n}\\ 0& 0& 0& {X}_{wn}& {Y}_{wn}& 1& -{X}_{wn}{v}_{n}& -{Y}_{wn}{v}_{n}\end{array}\right]$ (4)

$A={\left[\begin{array}{cccccccc}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{21}& {a}_{22}& {a}_{23}& {a}_{31}& {a}_{32}\end{array}\right]}^{T}$ (5)

$U={\left[\begin{array}{ccccc}{u}_{1}& {v}_{1}& \cdots & {u}_{n}& {v}_{n}\end{array}\right]}^{T}$ (6)

Solving the overdetermined system for *A* by least squares, the following equation can be obtained:

$A={\left({K}^{T}K\right)}^{-1}{K}^{T}U$ (7)

Matrix *A* determines the relationship between world coordinates and image coordinates for a given point. Inverting Equation (2) gives

$w\left[\begin{array}{c}{X}_{wi}\\ {Y}_{wi}\\ 1\end{array}\right]={A}^{-1}\left[\begin{array}{c}{u}_{i}\\ {v}_{i}\\ 1\end{array}\right]$ (8)

According to Equation (8), once matrix *A* is calibrated, the world coordinates (*X*_{wi}, *Y*_{wi}, *Z*_{wi}) on the laser plane can be reconstructed from the two-dimensional image coordinates (*u*_{i}, *v*_{i}). Hence, three-dimensional measurement is fulfilled.
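The least-squares estimation of *A* from point correspondences can be sketched as follows; this is a synthetic example assuming NumPy, with the row layout of *K* following the stacked system *KA* = *U* above and *a*_{33} fixed to 1.

```python
import numpy as np

def calibrate_A(world_pts, image_pts):
    """Estimate the 3x3 projection matrix A (with a33 = 1) from n >= 6
    correspondences by solving the stacked system K a = U in a
    least-squares sense."""
    K, U = [], []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        K.append([X, Y, 1, 0, 0, 0, -X * u, -Y * u]); U.append(u)
        K.append([0, 0, 0, X, Y, 1, -X * v, -Y * v]); U.append(v)
    a, *_ = np.linalg.lstsq(np.asarray(K, float), np.asarray(U, float),
                            rcond=None)
    return np.append(a, 1.0).reshape(3, 3)   # re-append the fixed a33 = 1
```

With exact synthetic correspondences, a known projection matrix is recovered to numerical precision; with noisy real calibration points the least-squares solution averages out the extraction errors.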

### 2.3. Calibration process of multi-vision sensor

As shown in Figure 1, a unique world coordinate system is established on the target. Suppose that the thickness of the combined target along the *o*_{w}*x*_{w} direction is *d* and that sensor 1 observes a point at some spatial position whose three-dimensional coordinate vector is *S*_{w1} = [*x*_{w1}, *y*_{w1}, *z*_{w1}]^{T}. Accordingly, the coordinate vector of the same point for sensor 2 can be expressed as *S*_{w2} = [*x*_{w1} + *d*, *y*_{w1}, *z*_{w1}]^{T}. Under the measurement state, each single-vision sensor can be locally calibrated by the calibration points in the world coordinate system, after which the structure parameters of the system are acquired.

Since the calibration points used for local calibration have already been unified in the global coordinate system, all the single-vision sensors are globally unified, and no separate global calibration of the single-vision sensors is necessary. This method directly unifies the local coordinate systems with the global coordinate system, which improves the calibration accuracy because fewer coordinate transformations are required.

## 3. Calibration errors analysis

The errors in this calibration method mainly include the error of the laser plane position and the error of endpoint extraction.

### 3.1. The error of the laser plane position

Suppose the deviation angle between the laser plane and the gauge block ridge is Δ*α*, the spacing between the gauge blocks is *d*, and the length of the line segment on the actual laser line intercepted by the ridges of the gauge blocks is *d*′. The error Δ*d* is given by

$\Delta d={d}^{\prime }-d=d\left(\frac{1}{cos\Delta \alpha }-1\right)\approx \frac{d\Delta {\alpha }^{2}}{2}$ (9)

As deduced from Equation (9), the deviation angle Δ*α* has little influence on Δ*d* because of their quadratic relationship. When Δ*α* = 2°, Δ*d* is only 0.000609 mm.

Similarly, suppose there is a deviation angle Δ*β* between the moving direction of the guide rail and the laser plane. Given the moving distance *l* of the guide rail and the actual moving distance *l*′ of the calibration point in the laser plane, the error Δ*l* is obtained by

$\Delta l=l-{l}^{\prime }=l\left(1-cos\Delta \beta \right)\approx \frac{l\Delta {\beta }^{2}}{2}$ (10)

Similarly, the deviation angle Δ*β* has a quadratic relationship with Δ*l*.

In terms of Equations (9) and (10), the laser plane position has little influence on the calibration accuracy. Therefore, the adjustment of the laser plane position can be implemented by comparing the vertical ridge of the gauge block with the actual location of the laser plane, and the corresponding calibration error is reduced to the order of 0.1 μm.
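A quick numerical check of the quadratic relationship reproduces the 0.000609 mm figure quoted above; taking the spacing *d* as 1 mm here is our assumption for illustration.

```python
import math

# Error of Equation (9): Delta_d = d * (1/cos(Delta_alpha) - 1),
# approximately d * Delta_alpha^2 / 2 for small deviation angles.
def plane_position_error(d_mm, alpha_deg):
    a = math.radians(alpha_deg)
    return d_mm * (1.0 / math.cos(a) - 1.0)

exact = plane_position_error(1.0, 2.0)        # ~0.000609 mm (assumed d = 1 mm)
approx = 1.0 * math.radians(2.0) ** 2 / 2.0   # small-angle approximation
```

The exact and small-angle values agree to within about 0.05%, confirming that a 2° misalignment is negligible at this scale.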

### 3.2. Extraction error and rectification

After the endpoints are extracted, *n* straight lines can be fitted along the moving direction, namely *o*_{w}*x*_{w}, and *m* straight lines can be fitted along the vertical direction, namely *o*_{w}*y*_{w}, as shown in Figure 8. Then the intersection points of the fitted lines can be found and used to replace the endpoints of the gauge blocks for rectification.
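The rectification step (fitting lines and replacing endpoints with line intersections) can be sketched as follows, assuming NumPy. A total-least-squares fit via SVD is one common choice; the paper does not specify the fitting method, so this is an illustration, not the authors' implementation.

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line fit: return the centroid and a unit
    direction vector (first principal component of the centred points)."""
    P = np.asarray(pts, float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0]

def intersect(line1, line2):
    """Intersection of two non-parallel lines given as (point, direction):
    solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2)."""
    (p1, d1), (p2, d2) = line1, line2
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1
```

Because each fitted line averages over many stripe points, its intersection is less sensitive to noise than any single extracted endpoint.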

**Table 1 RMS values of calibration points after rectification (mm)**

Calibration point | RMS_{x} | RMS_{y}
---|---|---
1 | 0.092 | 0.019
2 | 0.075 | 0.039
3 | 0.039 | 0.065
4 | 0.046 | 0.138
5 | 0.074 | 0.065
6 | 0.099 | 0.086
7 | 0.154 | 0.018
8 | 0.091 | 0.042
9 | 0.083 | 0.060
10 | 0.060 | 0.077
11 | 0.065 | 0.112
12 | 0.082 | 0.077
13 | 0.095 | 0.088
14 | 0.118 | 0.041

Before rectification, the coordinate component RMS errors of the vision measurement system are RMS_{x} = 0.180 mm and RMS_{y} = 0.162 mm. In terms of Table 1, after rectification these errors are RMS_{x} = 0.088 mm and RMS_{y} = 0.074 mm. It is concluded that the method of intersection point extraction can reduce the error caused by separate endpoint extraction and improve the calibration accuracy.

## 4. Experiments and discussion

The effective focal length of the cameras in the experimental system is *f* = 8 mm. A series of round workpieces with different diameters are used as the measurement objects.

### 4.1. Calibration experiment

- (1)
*The internal parameters of the camera*

Camera 1: *s*_{x} = 0.999207, *k*_{1} = −0.056308, *u*_{0} = 685.890 pixel, *v*_{0} = 488.186 pixel

Camera 2: *s*_{x} = 0.996126, *k*_{1} = −0.056308, *u*_{0} = 602.681 pixel, *v*_{0} = 458.935 pixel

- (2)
*The structural parameters of the multi-vision sensor*$\begin{array}{c}{A}_{1}=\left[\begin{array}{ccc}5.207687& 0.036653& 775.216473\\ 1.395210& 5.570409& 504.777588\\ 0.002785& 0.000012& 1\end{array}\right]\\ \phantom{\rule{1em}{0ex}}{A}_{2}=\left[\begin{array}{ccc}1.474586& -0.052835& 491.491942\\ -1.501106& 5.439574& 559.661316\\ -0.002766& -0.000027& 1\end{array}\right]\end{array}$

### 4.2. Repeatability experiment

**Table 2 The results of diameter measurement experiments (mm)**

Number | Diameter 1 | Diameter 2 | Diameter 3 | Diameter 4
---|---|---|---|---
1 | 120.057 | 116.060 | 112.051 | 108.085
2 | 120.038 | 116.030 | 112.040 | 108.069
3 | 120.058 | 116.064 | 112.046 | 108.010
4 | 120.064 | 116.050 | 112.050 | 108.031
5 | 120.064 | 116.046 | 112.050 | 108.055
6 | 119.971 | 116.037 | 112.042 | 108.087
7 | 119.979 | 116.044 | 112.034 | 108.061
8 | 119.989 | 116.035 | 112.021 | 108.067
9 | 119.990 | 116.052 | 112.034 | 108.054
10 | 119.990 | 116.019 | 112.047 | 108.034
11 | 120.056 | 116.115 | 112.098 | 108.003
12 | 120.062 | 116.113 | 111.994 | 107.993
13 | 120.064 | 116.115 | 111.940 | 107.994
14 | 120.064 | 116.098 | 112.021 | 107.989
15 | 120.085 | 116.103 | 112.028 | 107.953

The standard deviations of the four diameters in Table 2 are 0.039, 0.033, 0.034, and 0.040 mm, respectively. The repeatability of the experimental system is taken as 0.040 mm, the maximum of these standard deviations. Thus, the system has good repeatability.
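The repeatability figure can be reproduced directly from Table 2; for example, the sample standard deviation of the Diameter 1 column (the other columns are handled the same way):

```python
import statistics

# Diameter 1 measurements from Table 2 (mm).
diameter1 = [120.057, 120.038, 120.058, 120.064, 120.064, 119.971, 119.979,
             119.989, 119.990, 119.990, 120.056, 120.062, 120.064, 120.064,
             120.085]

# Sample standard deviation of the 15 repeated measurements; taking the
# maximum over the four columns gives the quoted 0.040 mm repeatability.
std1 = statistics.stdev(diameter1)
print(round(std1, 3))  # 0.039
```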

The accuracy of the experimental system depends on many factors, of which the calibration method is the primary one. Thus, it can be concluded from the experimental results that the proposed calibration method has high precision.

## 5. Conclusions

A new calibration method for line-structured light multi-vision sensors based on a combined target is proposed. The method combines local calibration with global calibration and reduces the calibration error caused by coordinate transformation. Meanwhile, the occlusion problem of 3D reference objects is solved with the combined target, whose surface exhibits characteristics between those of a plane reference object and a 3D target. Finally, an experimental system of line-structured light multi-vision sensors is set up, and its repeatability is 0.04 mm, which proves that the proposed calibration method is feasible and achieves high precision. The method also has the advantages of universality and field calibration.

## Declarations

### Acknowledgment

This study was supported by the Science and Technology Support Project (State Key Laboratory of Mechatronical Engineering and Control).

## Authors’ Affiliations

## References

1. Leandry I, Breque C, Valle V: Calibration of a structured-light projection system: development to large dimension objects. *Opt Lasers Eng* 2012, 50(3):373-379. doi:10.1016/j.optlaseng.2011.10.020
2. Miguel R, Markus B: State of the art on vision-based structured light systems for 3D measurements. In *IEEE International Workshop on Robotic and Sensors Environments (ROSE 2005)*. Ottawa, Canada; 2005:1-7.
3. Sandro B, Alessandro P, Viviano RA: Three-dimensional point cloud alignment detecting fiducial markers by structured light stereo imaging. *Mach Vis Appl* 2012, 23(2):217-229. doi:10.1007/s00138-011-0340-1
4. Zhang B, Li YF, Wu YH: Self-recalibration of a structured light system via plane-based homography. *Pattern Recognit* 2007, 40(4):1368-1377. doi:10.1016/j.patcog.2006.04.001
5. Kim D, Lee S, Kim H, Lee S: Wide-angle laser structured light system calibration with a planar object. In *International Conference on Control Automation and Systems (ICCAS 2010)*. Gyeonggi-do, Korea; 2010:1879-1882.
6. Galilea JLL, Lavest J-M, Vazquez CAL, Vicente AG, Munoz IB: Calibration of a high-accuracy 3-D coordinate measurement sensor based on laser beam and CMOS camera. *Instrum Meas* 2009, 58(9):3341-3346.
7. Shi YQ, Sun CK, Wang BG, Wang P, Duan HX: A global calibration method of multi-vision sensors in the measurement of engine cylinder joint surface holes. In *International Conference on Materials, Mechatronics and Automation (ICMMA 2011)*. Melbourne, Australia; 2011:1182-1188.
8. Marcuzzi E, Parzianello G, Tordi M, Bartolozzi M, Lunardelli M, Selmo A, Baglivo L, Debei S, Cecco M: Extrinsic parameters calibration of a structured light system via planar homography based on a reference solid. In *Proceedings of Fundamental and Applied Metrology*. Lisbon, Portugal; 2009:1903-1908.
9. de Alexandre JA, Stemmer MR, de França MB: A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter. *Pattern Recognit* 2012, 45(10):3636-3647. doi:10.1016/j.patcog.2012.04.006
10. Bangkui H, Zhen L, Guangjun Z: Global calibration of multi-sensor vision measurement system based on line structured light. *J Optoelectron Laser* 2011, 22(12):1816-1820.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.