
Research on feature point registration method for wireless multi-exposure images in mobile photography

Abstract

In a mobile shooting environment, multi-exposure artifacts readily occur because of camera jitter and sudden changes in ambient illumination, so feature point registration of multi-exposure images captured under mobile photography is needed to improve image quality. A feature point registration technique based on white balance offset compensation is proposed. Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics of the image are extracted. The texture information of the multi-exposure image is mapped onto a globally moving RGB 3D bit-plane random field, and the white balance deviation of the image is compensated. At different scales, a suitable white balance offset compensation function is used to describe the feature points of the multi-exposure image; parallax analysis and corner detection of the target pixels are carried out, and image stabilization is achieved by combining these with the feature registration method. Simulation results show that the proposed method registers the feature points of multi-exposure images under mobile photography with high accuracy and good performance, and the image quality is improved.

1 Introduction

With the development of computer image processing technology, the quality and accuracy of image acquisition and imaging are becoming more and more important. Cameras must sometimes be installed in a mobile shooting environment (for example, mounted on a car, an aircraft, or a ship), and in such environments the collected images are unstable. Blurred image information seriously affects image acquisition and video detection, so image stabilization is necessary [1]. Electronic image stabilization has gradually replaced mechanical and optical image stabilization and has become one of the main stabilization technologies. With electronic image stabilization, the image jitter introduced by random motion can be effectively filtered from images collected under mobile photography, and the stability of the image sequence can be improved. Image stabilization has become a hot topic in image processing [2].

Traditionally, research on multi-exposure image feature point registration has mainly been based on block matching, and some results have been obtained [3]. A multi-level reversible information hiding method has been used to correct the hidden image shift deviation and achieve feature point registration; that algorithm takes the least time in image feature matching because it applies the integral image concept to reduce the image scale and improve computing speed, but its matching accuracy is low [4]. Another registration method reduces the restoration accuracy of the image and has no advantage in either matching accuracy or time consumption [5]. The RGB component edge detection method extracts the target feature from high-speed moving target images; the time-domain and frequency-domain characteristics of the multi-frame image data are extracted and combined with time-frequency composite weighting to realize feature point registration, and image stabilization is completed in both domains using the weighted characteristics of the image features. However, that algorithm is complex, is strongly influenced by the sub-block sequence, and its resolution is not high [6]. A feature point registration algorithm based on high-resolution gray projection has also been proposed, but traditional block matching registration is affected by shadows, obstacle occlusion, and other interference, so its registration effect is not good [7].

To solve the above problems, this paper presents a feature point registration technique based on white balance offset compensation. Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics of the image are extracted. The texture information of the multi-exposure image is mapped onto a globally moving RGB 3D bit-plane random field, and the white balance deviation is compensated. The white balance offset compensation function is chosen to describe the feature points of multi-exposure images at different scales, and parallax analysis and corner detection are carried out. Finally, simulation experiments show that the proposed method improves feature point registration for multi-exposure images in mobile photography.

The rest of this paper is organized as follows. Section 2 discusses the methods. The improved algorithm implementation is discussed in Section 3. The experiment is discussed in Section 4. Section 5 concludes the paper with a summary and future research directions.

2 Methods

2.1 Image acquisition and global motion estimation

In image acquisition of high-speed moving targets and unstable scenes, the camera must sometimes be installed in a mobile shooting environment, which makes the video image unstable and requires processing [8]. In this paper, a multi-exposure image feature point registration technique is designed, and image acquisition is carried out first. Feature points are robust to occlusion, noise, luminance change, angle change, and affine transformation. An affine model is used to estimate the motion between two frames [9]. Under image rotation and scaling transformations, the feature points of the image can be described as:

$$ {X}_t={AX}_{t-1}+t $$
(1)

In which, X_t = [x_t, y_t]^T is the coordinate of a point in frame t; the rotation-scaling matrix A and the translation vector t are given by:

$$ A=s\left[\begin{array}{cc}\cos \theta & -\sin \theta \\ {}\sin \theta & \cos \theta \end{array}\right],t=\left[\begin{array}{c}{t}_x\\ {}{t}_y\end{array}\right] $$
(2)
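As a concrete illustration of Eqs. (1)–(2), the following numpy sketch applies the motion model X_t = A X_{t−1} + t, where A combines a scale s with a rotation by θ. This is an illustrative reconstruction, not the paper's own code (the paper's experiments use Matlab):

```python
import numpy as np

def affine_motion(points, s, theta, tx, ty):
    """Apply the global motion model of Eqs. (1)-(2):
    X_t = A @ X_{t-1} + t, with A a scaled rotation matrix."""
    A = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    t = np.array([tx, ty])
    # points is an (n, 2) array of [x, y] coordinates in frame t-1.
    return points @ A.T + t

# A pure 90-degree rotation with unit scale maps (1, 0) to (0, 1).
pts = np.array([[1.0, 0.0]])
moved = affine_motion(pts, s=1.0, theta=np.pi / 2, tx=0.0, ty=0.0)
```

Estimating s, θ, and (t_x, t_y) from matched feature points is then a least-squares fit of this model between consecutive frames.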

In order to realize the decision output of single-frame gray compensation information of the moving target image and achieve feature point matching and multi-exposure image feature point registration, image feature extraction preprocessing is needed [10]. When collecting images in a mobile shooting environment, horizontal displacement, vertical displacement, rotational displacement, and scaling displacement can occur; the corresponding global motion estimation models are shown in Fig. 1.

Fig. 1
Image global motion estimation model. a Horizontal displacement. b Vertical displacement. c Rotational motion. d Scaling motion

In the process of image stabilization, the image contains horizontal, vertical, rotational, and zooming motion, and the feature points are matched between frames accordingly [11]. Using probability analysis and statistically significant data fusion, the pixel fitting result of the multi-exposure image under mobile photography is obtained as follows:

$$ \frac{\partial u\left(x,y;t\right)}{\partial t}=M{\Delta}_su\left(x,y;t\right)+N{\Delta}_tu\left(x,y;d,t\right) $$
(3)

In which, (x, y) ∈ Ω, and M and N denote the number of pixels and the mean value of pixels in the neighborhood, respectively. The feature point matching feature of motion frame compensation is expressed as:

$$ {u}_{ik}^{\ast }={u}_{ik}+{\pi}_{ik} $$
(4)

For the superpixel plane of the image, the shape-regular features form a subset MST(C, E) along the direction of the regional geometric flow in well-exposed areas, and the internal difference of the edge features generates the superpixel tree graph C under the region shape rule. The maximum edge weight within a component is calculated as:

$$ \mathrm{Int}(C)=\underset{e\in \mathrm{MST}\left(C,E\right)}{\max }w(e) $$
(5)

In edge detection based on information entropy, the difference between two regions is the minimum weight among the edges connecting the two parts, that is:

$$ \mathrm{Dif}\left({C}_1,{C}_2\right)=\underset{v_i\in {C}_1,{v}_j\in {C}_{2,}\left({v}_i,{v}_j\right)\in E}{\min }w\left(\left({v}_i,{v}_j\right)\right) $$
(6)
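Eqs. (5)–(6) match the merge predicate of graph-based segmentation: Int(C) is the heaviest edge in a component's minimum spanning tree, Dif(C₁, C₂) is the cheapest edge linking the two components, and two regions merge when the crossing edge is no heavier than the internal differences. A minimal sketch, with the toy graph and helper names being our own illustration:

```python
def internal_difference(mst_edges):
    """Int(C) of Eq. (5): maximum edge weight in the component's MST."""
    return max(w for _, _, w in mst_edges)

def difference_between(edges, c1, c2):
    """Dif(C1, C2) of Eq. (6): minimum weight over edges crossing c1 and c2."""
    crossing = [w for u, v, w in edges
                if (u in c1 and v in c2) or (u in c2 and v in c1)]
    return min(crossing) if crossing else float("inf")

def should_merge(edges, mst1, mst2, c1, c2):
    """Merge when the crossing edge is no heavier than either internal difference."""
    return difference_between(edges, c1, c2) <= min(
        internal_difference(mst1), internal_difference(mst2))

# Toy graph: component C1 = {a, b}, C2 = {c, d}; edge (b, c) links them.
edges = [("a", "b", 1.0), ("c", "d", 5.0), ("b", "c", 2.0)]
```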

Finally, the variance, contrast, second-order moment, and detail signal energy of the image are selected as the feature point matching features, and the edge signal energy, sharpness, image correlation, mean value, and power spectrum are calculated. Image parameters such as radiation accuracy and steepness are used as initial evaluation parameters to realize unstable image acquisition and global motion estimation in the mobile shooting environment [12].

2.2 Preprocessing of image motion feature extraction based on single-frame feature point matching analysis

On the basis of the above analysis of image feature information collection and the instability of image feature point matching, this paper uses single-frame visual difference analysis to extract the motion features of multi-exposure images under fast mobile photography, and preprocessing is used to realize image stabilization [13]. Assuming that the image motion amplitude structure information per unit length in unit time is represented by the plane pheromone G(x, y; t), we have:

$$ u\left(x,y;t\right)=G\left(x,y;t\right) $$
(7)
$$ p\left(x,t\right)=\underset{\Delta x\to 0}{\lim}\left[\sigma \frac{u-\left(u+\Delta u\right)}{\Delta x}\right]=-\sigma \frac{\partial u\left(x,t\right)}{\partial x} $$
(8)

Assume that the edge information of the image along the gradient is:

$$ {G}_x\left(x,y;t\right)=\partial u\left(x,y;t\right)/\partial x $$
(9)
$$ {G}_y\left(x,y;t\right)=\partial u\left(x,y;t\right)/\partial y $$
(10)

The image edge amplitude information is decomposed into two components along the gradient direction, and X_{i,j} is used to represent the gray value of the pixel at position (i, j). The horizontal displacement of the multi-exposure image is estimated as follows:

$$ p\left(x,y;t\right)=-\sigma \nabla u\left(x,y;t\right)=-\sigma G\left(x,y;t\right)=-\sigma \left[{G}_x\left(x,y;t\right)i+{G}_y\left(x,y;t\right)j\right] $$
(11)
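Eqs. (9)–(11) amount to computing the two gradient components of u and scaling them by −σ to form the flux vector p. A hedged numpy sketch of this step (the function name and test image are our own illustration):

```python
import numpy as np

def gradient_flux(u, sigma=1.0):
    """Eqs. (9)-(11): decompose the edge information of image u into
    gradient components G_x, G_y and return the flux p = -sigma * grad u."""
    # np.gradient returns derivatives per axis: rows (y) first, columns (x) second.
    gy, gx = np.gradient(u.astype(float))
    return -sigma * gx, -sigma * gy

# A horizontal ramp has unit gradient along x and zero gradient along y.
u = np.tile(np.arange(5.0), (5, 1))
px, py = gradient_flux(u)
```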

In this paper, the spatial neighborhood information is integrated into the amplitude detection of the moving multi-exposure image, the amplitude features are extracted, and the texture information of the image is mapped onto a globally moving RGB 3D bit-plane random field. The relative motion parameters of the image are obtained by selecting an appropriate method [14], and the pixel feature information is collected via the inter-frame prior edge information flow. The compensation differential equation for the white balance deviation is obtained as follows:

$$ \frac{\partial u\left(x,y;t\right)}{\partial t}=\frac{\sigma }{\rho s}\nabla G\left(x,y;t\right)=k\left[\frac{\partial {G}_x\left(x,y;t\right)}{\partial x}+\frac{\partial {G}_y\left(x,y;t\right)}{\partial y}\right] $$
(12)

In order to match and detect the feature points of the image, a SIFT-based feature extraction method is used to realize motion estimation. The reference image is regarded as scale 1, the highest scale is M, and M − 1 transfer iterations are carried out. The iterative recurrence formula is as follows:

$$ {d}_{i+1}=2F\left({x}_{i+1}+\frac{1}{2},{y}_i+2\right)=\left\{\begin{array}{ll}2\left[\Delta x\left({y}_i+2\right)-\Delta y\left({x}_{i,r}+\frac{1}{2}-\Delta xB\right)\right] & {d}_i\le 0\\ {}2\left[\Delta x\left({y}_i+2\right)-\Delta y\left({x}_{i,r}+1+\frac{1}{2}-\Delta xB\right)\right] & {d}_i>0\end{array}\right. $$
(13)
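The scale-space construction described above (reference image at scale 1, highest scale M, M − 1 smoothing iterations) can be sketched with a separable Gaussian kernel. This is an illustrative pure-numpy reconstruction under our own choice of kernel size; it is not the paper's implementation:

```python
import numpy as np

def gaussian_blur(img, k=5, sigma=1.0):
    """Separable Gaussian convolution used as the scale-space kernel."""
    ax = np.arange(k) - k // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    pad = k // 2
    # Convolve each row, then each column, with edge replication padding.
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), g, "valid"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), g, "valid"), 0, tmp)

def scale_space(img, M=4):
    """Reference image is scale 1; M - 1 transfer iterations yield M scales."""
    scales = [img.astype(float)]
    for _ in range(M - 1):
        scales.append(gaussian_blur(scales[-1]))
    return scales
```

SIFT-style feature points are then the local extrema across adjacent levels of this stack.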

Based on the above processing, the preprocessing of image motion feature extraction based on single-frame feature point matching analysis is realized, which lays a foundation for the realization of multi-exposure image feature point registration.

3 Improved algorithm implementation

Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics are extracted. The texture information of the multi-exposure image is mapped onto a globally moving RGB 3D bit-plane random field. On this basis, a feature point registration technique based on white balance offset compensation is proposed. Using Gaussian filtering, the feature point matching algorithm for gray images is improved [15, 16]. The Gaussian filter compensates the white balance deviation by judging the noise within each sub-block, and an appropriate white balance offset compensation function is selected at different scales [17, 18] to describe the feature points of the image. A 3 × 3 window is selected, and the gray value of the pixel at position (i, j) is represented by X_{i,j}. The scale space of the image is obtained as follows:

$$ {M}_{i,j}=\mathrm{med}\left({X}_{i-1,j-1}\cdots {X}_{i,j}\cdots {X}_{i+1,j+1}\right) $$
(14)

The noise decision flag is then:

$$ {F}_{i,j}=\left\{\begin{array}{ll}1 & \left|{X}_{i,j}-{M}_{i,j}\right|\ge T\\ {}0 & \left|{X}_{i,j}-{M}_{i,j}\right|<T\end{array}\right. $$
(15)
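Eqs. (14)–(15) describe a standard median-based impulse detector: a pixel is flagged as noise when it deviates from its 3 × 3 neighborhood median by at least T. A minimal sketch (function name and test values are our own, and T is a free parameter):

```python
import numpy as np

def impulse_mask(img, T=20):
    """Eqs. (14)-(15): mark pixel (i, j) as noise (F = 1) when its gray
    value deviates from the 3x3 neighborhood median M by at least T."""
    h, w = img.shape
    F = np.zeros((h, w), dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            M = np.median(img[i - 1:i + 2, j - 1:j + 2])  # Eq. (14)
            F[i, j] = int(abs(img[i, j] - M) >= T)        # Eq. (15)
    return F
```

In the registration pipeline, pixels with F = 1 would be replaced (or down-weighted) before feature matching so impulse noise does not create spurious feature points.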

The adjacent frames of the current frame I_c are represented as NF_c = {n : c − k ≤ n ≤ c + k}. The Gaussian filter compensates the motion transformation parameters between the frames of the unstable image to achieve feature point matching. The feature point motion estimation algorithm uses the Gaussian function as the convolution kernel. In multi-scale space, the pixels corresponding to extrema are determined as feature points, and the judgment matrix is expressed as follows:

$$ D=\left[\begin{array}{cc}{I}_x^2& {I}_x{I}_y\\ {}{I}_x{I}_y& {I}_y^2\end{array}\right] $$
(16)
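The judgment matrix D in Eq. (16) is the familiar 2 × 2 structure matrix of the image derivatives, as used in Harris-style corner detection: two large eigenvalues indicate a corner, one indicates an edge. A hedged sketch (the Harris response with k = 0.04 is our own illustrative choice of decision rule):

```python
import numpy as np

def structure_matrix(patch):
    """Eq. (16): 2x2 matrix D of summed derivative products I_x, I_y over a patch."""
    iy, ix = np.gradient(patch.astype(float))
    return np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                     [np.sum(ix * iy), np.sum(iy * iy)]])

def corner_response(D, k=0.04):
    """Harris-style response: large and positive at corners, negative on edges."""
    return np.linalg.det(D) - k * np.trace(D) ** 2

# A horizontal ramp is an edge: one dominant eigenvalue, negative response.
patch = np.tile(np.arange(5.0), (5, 1))
D = structure_matrix(patch)
```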

The texture structure information of the image per unit length in unit time is defined as the plane pheromone G(x, y; t), where:

$$ u\left(x,y;t\right)=G\left(x,y;t\right) $$
(17)

In the above model, combining the variance of each pixel, contrast, and second-order moment of angle, the relation of motion compensation is obtained as follows:

$$ \frac{\partial u\left(x,y;t\right)}{\partial t}=\frac{\sigma }{\rho s}\nabla G\left(x,y;t\right)=k\left[\frac{\partial {G}_x\left(x,y;t\right)}{\partial x}+\frac{\partial {G}_y\left(x,y;t\right)}{\partial y}\right] $$
(18)

In which, k is the conduction coefficient, ρ is the medium density, and s is the frame parameter of the motion transformation. Based on this model, feature point registration and information reconstruction of multi-exposure images with motion frame compensation reduce the influence of occlusion. Parallax analysis of the target pixels and pixel-level image registration are carried out, the motion parameters are compensated by single-frame visual difference analysis, and the corresponding random jitter of the image is described. As the motion parameters change, multi-exposure image feature point registration is realized.

4 Experiment

In order to test the performance of the algorithm in multi-exposure image feature point registration, simulation experiments are carried out. VS2008 is used as the software platform, combined with Matlab programming for the algorithm code design. The CCD camera parameters of the image acquisition system are as follows: photosensitive element, CMOS; dynamic resolution, 1280 × 960; maximum frame rate, 60 FPS; and 5 million pixels. In the simulation experiment, video images are acquired in a mobile shooting environment: the camera is placed on a high-speed ship to capture the video. The test conditions are as follows: (a) the odd frames of the video sequence are used as key frames, and (b) a 4 × 4 DCT is adopted. Because different quantization steps correspond to different thresholds, and a larger quantization step size implies a larger threshold, the threshold chosen in this paper is 2.23. The original multi-exposure images are shown in Fig. 2.

Fig. 2
Original images. a Frame 1024. b Frame 2000

Taking the image shown in Fig. 2 as the research object, the registration of feature points is simulated, and the registration results of feature points are shown in Fig. 3.

Fig. 3
Result of feature point registration. a Frame 1024. b Frame 2000

Figure 3 shows that the proposed method performs well for feature point registration of multi-exposure images under mobile photography. The peak signal-to-noise ratio (PSNR) of the output images after registration by different methods is tested, and the results are shown in Fig. 4.

Fig. 4
Comparison of output PSNR

Figure 4 shows that the proposed method effectively improves the output peak signal-to-noise ratio of the image, so the image quality under mobile photography is improved.
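For reference, PSNR as reported in Fig. 4 is computed from the mean squared error against a reference frame. A minimal sketch of the standard definition (the 8-bit peak value of 255 is the usual assumption, not stated in the paper):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR after registration means the stabilized frame is closer to the reference, which is how the curves in Fig. 4 are compared.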

Although the second stage of this algorithm mitigates the influence of rollover, all nodes start with coordinate optimization based on anchor nodes. If the anchor nodes lie close to a straight line, or are too concentrated, the final positioning result worsens. For a small number of anchor nodes, the anchors should be placed on the edge of the region; for example, 3 anchor nodes are placed in any 3 corners of the square area, and 4 anchor nodes are placed in the 4 corners. The positioning result is shown in Fig. 5.

Fig. 5
Influence of anchor nodes in corner position on GIL location algorithm

As shown in Fig. 5, if the anchor nodes are placed in the corners of the region, only 3 or 4 anchor nodes are required and the positioning results are ideal. Although the positioning results fluctuate slightly, the positioning accuracy is very high, with all errors within 2.5%. For sensor nodes, the communication radius determines the communication range of each node, and thus the connectivity of the sensor network and the number of neighbor nodes. The DV-HOP location algorithm locates nodes using the distances between neighboring nodes. This section tests the influence of the communication radius on the DV-HOP location algorithm: a set of data is randomly generated, and 3, 4, and 5 anchor nodes are used to test the influence of the communication radius on the location result of the GIL location algorithm. Figure 6 shows the influence of the communication radius on the DV-HOP location algorithm.

Fig. 6
Influence of communication radius on DV-HOP location algorithm

The above experimental results show that the location error of the unknown node decreases as the communication radius increases, regardless of the number of anchor nodes. This is because the DV-HOP localization algorithm exploits the distance constraints between nodes and their neighbors: the larger the communication radius, the more distance constraints are available between nodes and neighbors, and the smaller the location error.
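The DV-HOP distance estimate discussed above works in two steps: each anchor computes an average hop size (its summed straight-line distances to the other anchors divided by the summed hop counts), and an unknown node estimates its distance to an anchor as hop size times hop count. A minimal sketch with illustrative coordinates and hop counts of our own:

```python
import math

def average_hop_size(anchor, other_anchors, hops_to_others):
    """DV-HOP step 1: an anchor's average hop size is the sum of its
    Euclidean distances to the other anchors divided by the total hops."""
    total_dist = sum(math.dist(anchor, o) for o in other_anchors)
    return total_dist / sum(hops_to_others)

def estimate_distance(hop_size, hop_count):
    """DV-HOP step 2: an unknown node's range estimate to an anchor."""
    return hop_size * hop_count

# Anchor at the origin, two other anchors 5 and 10 units away,
# reachable in 2 and 4 hops respectively -> hop size 2.5.
hs = average_hop_size((0.0, 0.0), [(3.0, 4.0), (6.0, 8.0)], [2, 4])
```

The final node position is then obtained by multilateration over these range estimates, which is why more anchors and a larger communication radius reduce the error.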

5 Results and discussion

Feature point registration of multi-exposure images under mobile photography is necessary to improve image quality. A feature point registration technique based on white balance offset compensation is proposed. Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics are extracted. The white balance deviation of the multi-exposure image is compensated. At different scales, a suitable white balance offset compensation function is used to describe the feature points, parallax analysis and corner detection of the target pixels are carried out, and image stabilization is realized by combining these with the feature registration method, reducing the impact of multiple exposures. Simulation results show that the proposed method registers the feature points of multi-exposure images under mobile photography with high accuracy and good performance, and the image quality is improved. The method has good application value in the optimization of multi-exposure images.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CCD:

Charge-coupled device

PSNR:

Peak signal to noise ratio

References

  1. L. Yaqian, Z. Shaowei, L. Haibin, et al., Face recognition method using Gabor wavelet and cross-covariance dimensionality reduction. Journal of Electronics and Information 39(8), 2023–2027 (2017)

  2. Y. Sun, H. Song, K. Zhang, F. Yan, Face super-resolution via very deep convolutional neural network. Journal of Computer Applications 38(4), 1141–1145 (2018)

  3. Z. Huang, X. Xu, J. Ni, H. Zhu, W. Cheng, Multimodal representation learning for recommendation in Internet of Things. IEEE Internet of Things Journal 6(6), 10675–10685 (2019)

  4. J.C. Yang, J. Wright, T.S. Huang, et al., Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19(11), 2861–2873 (2010)

  5. B. Wu, T.T. Cheng, T.L. Yip, Y. Wang, Fuzzy logic based dynamic decision-making system for intelligent navigation strategy within inland traffic separation schemes. Ocean Engineering 197, 106909 (2020)

  6. G. Gilboa, S. Osher, Nonlocal operators with applications to image processing. SIAM Journal on Multiscale Modeling and Simulation 7(3), 1005–1028 (2008)

  7. W.B. Yang, T.W. Ma, J. Liu, Elimination of impulse noise by non-local variation inpainting method. Chinese Optics 6(6), 876–884 (2013)

  8. H.M. Liu, X.H. Bi, Z.F. Ye, et al., Arc promoting inpainting using exemplar searching and priority filling. Journal of Image and Graphics 21(8), 993–1003 (2016)

  9. J.Y. Lin, D.X. Deng, J. Yan, et al., Self-adaptive group based sparse representation for image inpainting. Journal of Computer Applications 37(4), 1169–1173 (2017)

  10. Z. Wang, A.C. Bovik, H.R. Sheikh, et al., Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)

  11. T.Y. Fu, L.Q. Jin, Z. Lei, et al., Face super-resolution method based on key points layer by layer. Journal of Signal Processing 32(7), 834–841 (2016)

  12. L. Gang, L. Haifang, S. Fangxin, H. Guo, Noise image segmentation model with local intensity difference. Journal of Computer Applications 38(3), 842–847 (2018)

  13. L. Wang, C. Pan, Robust level set image segmentation via a local correntropy-based K-means clustering. Pattern Recognition 47(5), 1917–1925 (2014)

  14. Z. Chen, H. Cai, Y. Zhang, C. Wu, M. Mu, Z. Li, M.A. Sotelo, A novel sparse representation model for pedestrian abnormal trajectory understanding. Expert Systems with Applications 138, 112753 (2019)

  15. S. Niu, Q. Chen, L. de Sisternes, et al., Robust noise region-based active contour model via local similarity factor for image segmentation. Pattern Recognition 61, 104–119 (2016)

  16. M.F.A. Abdullah, S.M. Sayeed, K.M. Sonai, et al., Face recognition with symmetric local graph structure. Expert Systems with Applications 41(14), 6131–6137 (2014)

  17. Z. Huang, X. Xu, H. Zhu, M.C. Zhou, An efficient group recommendation model with multiattention-based neural networks. IEEE Transactions on Neural Networks and Learning Systems (2020). https://doi.org/10.1109/TNNLS.2019.2955567

  18. W. Wei, Y. Qi, Information potential fields navigation in wireless ad-hoc sensor networks. Sensors 11(2), 4794–4807 (2011)


Acknowledgements

None

Funding

None

Author information


Contributions

Hui Xu wrote the entire article. The author read and approved the final manuscript.

Corresponding author

Correspondence to Hui Xu.

Ethics declarations

Ethics approval and consent to participate

This article does not contain any studies with human participants or animals performed by the author.

The author agrees to submit this version and claims that no part of this manuscript has been published or submitted elsewhere.

Competing interests

The author declares that he has no conflict of interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Xu, H. Research on feature point registration method for wireless multi-exposure images in mobile photography. J Wireless Com Network 2020, 98 (2020). https://doi.org/10.1186/s13638-020-01695-4
