Image acquisition and global motion estimation
When acquiring images of high-speed moving targets in unstable scenes, the camera must sometimes be mounted in a mobile shooting environment, which destabilizes the video and requires stabilization processing [8]. In this paper, a multi-exposure image feature point registration technique is designed, and image acquisition is carried out first. Feature points are robust to occlusion, noise, luminance change, viewing-angle change, and affine transformation, so an affine model is used to estimate the motion between two frames [9]. Under image rotation and scaling, the feature points of the image can be described as:
$$ {X}_t={AX}_{t-1}+t $$
(1)
where X_t = [x_t, y_t]^T is the coordinate of a feature point in frame t, and the rotation-scaling matrix A and the translation vector t are given by:
$$ A=s\left[\begin{array}{cc}\cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right],\quad t=\left[\begin{array}{c}{t}_x\\ {t}_y\end{array}\right] $$
(2)
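To make the affine motion model of Eqs. (1)–(2) concrete, the following sketch applies the rotation-scale-translation transform X_t = A X_{t-1} + t to a set of feature point coordinates; the scale s, angle θ, and translation (t_x, t_y) are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def affine_transform(points, s, theta, tx, ty):
    """Apply X_t = A @ X_{t-1} + t, where A is a scaled rotation matrix (Eq. 2)."""
    A = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    t = np.array([tx, ty])
    return points @ A.T + t  # points: (N, 2) array of [x, y] coordinates

# Example: rotate feature points by 5 degrees, scale by 1.02, shift by (3, -1)
prev_pts = np.array([[10.0, 20.0], [35.0, 48.0], [70.0, 15.0]])
curr_pts = affine_transform(prev_pts, s=1.02, theta=np.deg2rad(5), tx=3.0, ty=-1.0)
```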
To obtain the decision output of single-frame gray-level compensation information for the moving-target image, and thus achieve feature point matching and multi-exposure image feature point registration, image feature extraction preprocessing is needed while the image is photographed in mobile mode [10]. Acquisition in such an environment introduces horizontal displacement, vertical displacement, rotation, and scaling; the corresponding global motion estimation model is shown in Fig. 1.
During image stabilization, the image undergoes horizontal, vertical, rotational, and zooming motion, and the feature points are matched across these motions [11]. Using probability analysis and a statistically grounded data fusion method, the pixel-splitting fitting result of the multi-exposure image under mobile photography is obtained as:
$$ \frac{\partial u\left(x,y;t\right)}{\partial t}=M{\Delta}_su\left(x,y;t\right)+N{\Delta}_tu\left(x,y;d,t\right) $$
(3)
where (x, y) ∈ Ω, and M and N denote the number of pixels and the mean pixel value in the neighborhood, respectively. The feature point matching feature of motion frame compensation is expressed as:
$$ {u}_{ik}^{\ast }={u}_{ik}+{\pi}_{ik} $$
(4)
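A minimal numerical sketch of Eqs. (3)–(4) follows, under the assumption that Δ_s can be approximated by a spatial Laplacian and Δ_t by a frame difference; the coefficients M and N, the time step, and the compensation field π are placeholder values chosen only for illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_step(u_curr, u_prev, M=0.1, N=0.05, dt=1.0):
    """One explicit step of Eq. (3): du/dt = M * Delta_s u + N * Delta_t u."""
    spatial = laplace(u_curr)       # spatial operator Delta_s (Laplacian assumption)
    temporal = u_curr - u_prev      # temporal operator Delta_t (frame-difference assumption)
    return u_curr + dt * (M * spatial + N * temporal)

def compensate(u, pi):
    """Eq. (4): additive gray-level compensation u* = u + pi."""
    return u + pi
```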
On the superpixel plane of the image, the shape-rule feature forms, along the direction of the well-exposed area geometric flow, a minimum spanning tree MST(C, E) of the superpixel region C ⊆ V. The internal difference of the region's edge features is defined as the maximum edge weight in this tree:
$$ \mathrm{Int}(C)=\underset{e\in \mathrm{MST}\left(C,E\right)}{\max }w(e) $$
(5)
In the information entropy-based edge detection, the difference between two regions is the weight of the minimum-weight edge connecting them, that is:
$$ \mathrm{Dif}\left({C}_1,{C}_2\right)=\underset{v_i\in {C}_1,\ {v}_j\in {C}_2,\ \left({v}_i,{v}_j\right)\in E}{\min }w\left(\left({v}_i,{v}_j\right)\right) $$
(6)
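The two region measures of Eqs. (5) and (6) can be sketched with a small graph utility. In the sketch below, a region is a set of vertex indices, edges are (v_i, v_j, w) tuples, and MST(C, E) is built by a simple Kruskal pass, which is an assumed construction rather than the paper's exact procedure.

```python
def mst_edges(region, edges):
    """Kruskal-style minimum spanning tree restricted to a region's vertices."""
    parent = {v: v for v in region}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for vi, vj, w in sorted(edges, key=lambda e: e[2]):
        if vi in parent and vj in parent:   # keep only edges inside the region
            ri, rj = find(vi), find(vj)
            if ri != rj:
                parent[ri] = rj
                tree.append((vi, vj, w))
    return tree

def internal_difference(region, edges):
    """Eq. (5): Int(C) = maximum edge weight in MST(C, E)."""
    tree = mst_edges(region, edges)
    return max(w for _, _, w in tree) if tree else 0.0

def region_difference(region1, region2, edges):
    """Eq. (6): Dif(C1, C2) = minimum weight of an edge joining the two regions."""
    cross = [w for vi, vj, w in edges
             if (vi in region1 and vj in region2) or (vi in region2 and vj in region1)]
    return min(cross) if cross else float("inf")
```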
Finally, the variance, contrast, second-order moment, and detail signal energy of the image are selected as feature point matching features, and image parameters such as edge signal energy, sharpness, image correlation, mean value, power spectrum, and radiation accuracy steepness are used as initial evaluation parameters, realizing unstable image acquisition and global motion estimation in the mobile shooting environment [12].
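As a rough illustration of these initial evaluation parameters, the following sketch computes several of them (mean, variance, contrast, second-order moment, sharpness, and power spectrum energy) for a grayscale frame; since the paper does not give exact definitions, common default formulas are used.

```python
import numpy as np

def frame_statistics(gray):
    """Compute simple per-frame evaluation parameters for a float grayscale image."""
    gy, gx = np.gradient(gray)  # derivatives along rows (y) and columns (x)
    return {
        "mean": gray.mean(),
        "variance": gray.var(),
        "contrast": gray.std(),
        "second_moment": np.mean(gray ** 2),
        "sharpness": np.mean(np.hypot(gx, gy)),                         # mean gradient magnitude
        "power_spectrum_energy": np.mean(np.abs(np.fft.fft2(gray)) ** 2),
    }
```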
Preprocessing of image motion feature extraction based on single-frame feature point matching analysis
Building on the above analysis of image feature information collection and the instability of feature point matching, this paper uses single-frame visual difference analysis as preprocessing to extract the motion features of a multi-exposure image under fast mobile photography and thereby realize image stabilization [13]. Assuming that the image motion amplitude structure information is represented per unit length in unit time, the plane information field is defined as G(x, y; t), where:
$$ u\left(x,y;t\right)=G\left(x,y;t\right) $$
(7)
$$ p\left(x,t\right)=\underset{\Delta x\to 0}{\lim}\left[\sigma \frac{u-\left(u+\Delta u\right)}{\Delta x}\right]=-\sigma \frac{\partial u\left(x,t\right)}{\partial x} $$
(8)
Assume that the edge information of the image along the gradient is:
$$ {G}_x\left(x,y;t\right)=\partial u\left(x,y;t\right)/\partial x $$
(9)
$$ {G}_y\left(x,y;t\right)=\partial u\left(x,y;t\right)/\partial y $$
(10)
The image edge amplitude information is decomposed into two components along the gradient direction, and X_{i,j} is used to represent the gray value of the pixel at position (i, j). The horizontal displacement of the multi-exposure image is estimated as:
$$ p\left(x,y;t\right)=-\sigma \nabla u\left(x,y;t\right)=-\sigma G\left(x,y;t\right)=-\sigma \left[{G}_x\left(x,y;t\right)i+{G}_y\left(x,y;t\right)j\right] $$
(11)
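Equations (9)–(11) amount to forming the image gradient and scaling it by −σ. A direct numpy sketch, with σ as an arbitrary illustrative constant, is:

```python
import numpy as np

def edge_flux(u, sigma=1.0):
    """Eqs. (9)-(11): gradient components G_x, G_y and flux p = -sigma * grad(u)."""
    Gy, Gx = np.gradient(u)          # numpy returns the axis-0 (y) derivative first
    px, py = -sigma * Gx, -sigma * Gy
    return Gx, Gy, px, py
```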
In this paper, spatial neighborhood information is integrated into the amplitude detection of the moving multi-exposure image, the amplitude feature of the multi-exposure image is extracted, and the texture information of the image is mapped onto a globally moving RGB three-dimensional bit-plane random field. In this field, the relative motion parameters of the image are obtained by an appropriate selection method [14], and pixel feature information is collected from the inter-frame information of the prior edge information flow. The compensation differential equation of the white balance deviation is then obtained as:
$$ \frac{\partial u\left(x,y;t\right)}{\partial t}=\frac{\sigma }{\rho s}\nabla G\left(x,y;t\right)=k\left[\frac{\partial {G}_x\left(x,y;t\right)}{\partial x}+\frac{\partial {G}_y\left(x,y;t\right)}{\partial y}\right] $$
(12)
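Because the right-hand side of Eq. (12) is the divergence of the gradient field, one explicit compensation step can be sketched as follows; the coefficient k and the time step are illustrative choices, not values from the paper.

```python
import numpy as np

def compensation_step(u, k=0.2, dt=1.0):
    """One explicit step of Eq. (12): du/dt = k * (dGx/dx + dGy/dy)."""
    Gy, Gx = np.gradient(u)              # gradient components G_y, G_x
    dGx_dx = np.gradient(Gx, axis=1)     # d(G_x)/dx
    dGy_dy = np.gradient(Gy, axis=0)     # d(G_y)/dy
    return u + dt * k * (dGx_dx + dGy_dy)
```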
To match and detect the feature points of the image, SIFT-based feature point extraction is used to realize the motion estimation (a matching sketch follows Eq. (13)). The reference image is regarded as scale 1, the highest scale is M, and M − 1 transfer iterations are carried out. The iterative recurrence formula is:
$$ {d}_{i+1}=2F\left({x}_{i+1}+\frac{1}{2},\ {y}_i+2\right)=\begin{cases}2\left[\Delta x\left({y}_i+2\right)-\Delta y\left({x}_{i,r}+\frac{1}{2}-\Delta x\,B\right)\right], & {d}_i\le 0\\ 2\left[\Delta x\left({y}_i+2\right)-\Delta y\left({x}_{i,r}+1+\frac{1}{2}-\Delta x\,B\right)\right], & {d}_i>0\end{cases} $$
(13)
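A practical sketch of the SIFT-based matching and motion estimation step described above, assuming OpenCV is available (cv2.SIFT_create requires OpenCV 4.4 or a contrib build); the ratio-test threshold and RANSAC settings are common defaults rather than values from the paper.

```python
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray):
    """Match SIFT keypoints between frames and fit the affine model of Eq. (1)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # Rotation-scale-translation model, consistent with Eq. (2)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # 2x3 matrix [A | t]
```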
Through the above processing, the preprocessing of image motion feature extraction based on single-frame feature point matching analysis is completed, laying the foundation for multi-exposure image feature point registration.