
Design of incomplete 3D information image recognition system based on SIFT algorithm and wireless network

Abstract

Traditional image recognition technology transforms some representation of an image into data that a computer can process and recognizes the image with a decision function. In practical applications, however, incomplete 3D images are often encountered. Screening the required image information out of a large number of images requires recognizing and matching the images, so this research has long-term application value. In this paper, the SIFT algorithm was used to extract feature vectors for incomplete 3D information recognition. A modeling method based on a circular matching pattern was then proposed, and a mathematical recognition pattern was adopted in the image recognition process. After that, on the basis of a large body of domestic and foreign research literature, the conversion from “image hypothesis” to “image recognition” was completed. Finally, the system was simulated in Microsoft Visual Studio, and an image feature recognition system for incomplete 3D information images was completed.

1 Introduction

With the rapid development of computer technology, image recognition technology has made great progress and has become an important branch of computer technology [1]. Analysis of the existing literature shows that although image recognition technology is already applied in many fields, many problems remain to be challenged and resolved [2]. For example, a camera can be used to recognize objects, but at the initial stage of recognition only a little image information is available. How, then, can the target information needed by researchers and identification workers be obtained from a huge database at this early stage? Only by solving this problem can artificial screening and identification be carried out [3]. How can an overall impression of an object be obtained from limited image information? Only by comparing with objects from the previous stage can the appropriate basic features of the image object be found. From this discussion, it is clear that a thorough study of incomplete 3D information is of great importance to the recognition process [4]. Human senses and stimuli are the main sensory pathways for recognizing the objective world; in this paper, from the viewpoint of object recognition, a computer was used to complete the recognition. From the perspective of basic concepts, incomplete 3D information recognition can be interpreted as a search process over incomplete information: the early stage of the search is information positioning, and the later stage confirms the source of the information.

Feature-based image matching algorithms make up for the shortcomings of gray-scale-based matching algorithms and perform well on image pairs related by affine or projective transformations. Moreover, because a feature-based algorithm does not match the whole image, but extracts a series of representative features from each image and then matches the features between the two images, the algorithmic complexity is greatly reduced and the matching speed is higher. In applications with strict real-time requirements, feature-based image matching algorithms are therefore often used, and this class of algorithms has been a hot research topic in recent years. Image recognition is a classic problem in digital image processing. In the past decade, a large number of publications have proposed image recognition methods for different application fields, aiming to improve the accuracy, speed, versatility, and interference resistance of image recognition [5]. For image edge and shape feature extraction, researchers first filter the image, then use classical feature point detection algorithms to detect corner points, and finally use some similarity measure over the feature points as the recognition function for image feature points [6]. Describing shape features with an image edge direction histogram is another approach; it is computationally simple and translation invariant, but it lacks scale and rotation invariance. To identify deformed images, an adaptive mapping method has been proposed [7]: two remote sensing images are first segmented automatically, the similarity of corresponding sub-blocks of the two segmented images is then maximized, and finally the original image is identified according to the spatial positions of the corresponding sub-blocks. The absolute balance search method directly measures the correlation between the template image and the image to be recognized [8]: if the difference is less than a set threshold, recognition succeeds; otherwise it fails. The method is simple and easy to implement.

In everyday problems, rather than the information being insufficient as in the earlier stage, the information in some images accumulates as processing time increases, so the required image information must be screened out of the background database from a limited amount of information [9]. When the number of images filtered in the later stage is large enough, comparison with earlier images reveals the basic features of composite images [10]. During this research, problems such as incomplete information and insufficient feature points may be encountered. In addition, images may be affected by external environmental information during transmission, resulting in relatively large image noise. How to quantify the features and use basic computer techniques to balance computation and accuracy is another problem that must be faced. Recognizing images with complex calculation formulas is a widely used method; however, the gray scale and depth of massive data fluctuate greatly and the program complexity is relatively high, so it is difficult to screen by data alone [11]. Image distortion is common in image recognition, and acquisition is easily affected by moving objects; various non-human factors may leave distortion information on the image and confuse the recognition. The focus of this paper is therefore how to obtain effective feature points of an image from limited features.

2 Theoretical method

2.1 Progress in research

Image recognition is widely applied in information processing, and in recent years the basic recognition technology of images has developed very rapidly. Starting from embedded applications and combined with Internet communication technology, image recognition has made its application advantages felt and has changed people's basic way of life, providing convenience for daily living. Development has gone from simple to complex: recognition technology was first applied to basic text recognition, later developed into image processing technology, and will eventually develop into object recognition [12]. The development of digital image information helps image compression and transmission, so that in real image transmission the image is not easily distorted and maintains good stability. Object recognition is the transition of computers or artificial intelligence robots toward the cognition of 3D objects. In recent years, some basic problems have gradually been exposed, one of which is poor image conversion and adaptation ability [13]. Recognition targets may be affected by large environmental noise, so that part of the image information is covered and the number of features researchers can extract is relatively small. The basic research direction of this paper is the processing of incomplete 3D information images. Through optimization and improvement of the algorithm, the running time of the computer can be reduced, and the real reliability and usability of the program can be enhanced [14]. Traditional research models include the pattern recognition method, the neural network identification method, and so on.

2.2 Image preprocessing technology

People can observe the movement and characteristics of objects through a camera. However, interference from many human and environmental factors may degrade the quality of the image itself, so that the acquired image is not clear; this is the basic characteristic of incomplete three-dimensional information. In this paper, the discriminant features of incomplete 3D information were studied, and image preprocessing was carried out at the same time. Vision and hearing are the most common human sensory modalities, and vision is used most for recognizing and applying image information [15]. During image collection and transmission, the image quality may change, whether because of the imaging equipment or the acquisition method. From the viewpoint of the target image, this can be regarded as unpredictable image noise. Image noise is of many kinds, including electromagnetic interference, sensor interference, filtering noise, and so on [16].

In image preprocessing, the basic information of the image is acquired by a sensor. Under the influence of the imaging equipment and environmental factors, the information is always affected by the imaging sensor, and the original image to be recognized is not ideal; this difference introduces errors into the recognition process. The purpose of image preprocessing is to put the image to be processed into a suitable form, and the purpose of image enhancement is to obtain an image with better definition and better visual effect, convenient for computer processing and calculation. To eliminate the noise that interferes with the image, the image should be enhanced; a common technique is convolution filtering, which corresponds to multiplication in the frequency domain. The basic transformation expression of image enhancement is [17]

$$ g\left(x,y\right)=h\left(x,y\right)\ast f\left(x,y\right) $$
(1)

In which, ∗ denotes the convolution of the two images h(x, y) and f(x, y). Since an image is made up of individual pixels, direct operation on pixels is the main means of spatial-domain image enhancement. General image enhancement methods are mainly divided into the global operation method, the neighborhood operation method, and the point operation method. In addition, enhancement techniques include histogram modification, gray level transformation, and basic color processing [18]. The histogram of the original image can be computed, a uniform histogram is then obtained by a function transformation, and after modification and equalization a basic image clearer than the original is obtained. The histogram can also be specified and matched to form an equalized histogram of a predetermined shape, highlighting the gray quality of the image [19].
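As a concrete illustration of histogram equalization, the following minimal sketch uses OpenCV in Python; the file names are placeholders, and the routine assumes an 8-bit grayscale input.

```python
import cv2

# Load an 8-bit grayscale image (the path is illustrative).
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization: redistribute the gray levels so the
# cumulative histogram becomes approximately uniform, which tends to
# raise contrast and highlight the gray quality of the image.
equalized = cv2.equalizeHist(img)

cv2.imwrite("equalized.png", equalized)
```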

For noisy images, when there is a gray difference between the noise gray level and its surroundings, it is necessary to smooth the image. The most commonly used method is mean filtering, as shown in Fig. 1.

Fig. 1. Smoothing method of mean filter

Assuming that the noisy image is f(x, y), the image is smoothed uniformly, and the expression is calculated as follows [20]:

$$ g\left(x,y\right)=\frac{1}{M}\sum \limits_{\left(x,y\right)\in S}f\left(x,y\right)=\frac{1}{M}\sum \limits_{\left(x,y\right)\in S}{f}^{\prime}\left(x,y\right)+\frac{1}{M}\sum \limits_{\left(x,y\right)\in S}n\left(x,y\right) $$
(2)

In which, f′(x, y) represents the image without noise, n(x, y) represents the noise, S represents the set of points (x, y) in the neighborhood, and M represents the total number of points in S. Examples of smoothing filtering processing are shown in Fig. 2.

Fig. 2. Examples of smoothing filtering processing
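A minimal sketch of the mean filtering of Eq. (2), written with OpenCV and NumPy; the 3 × 3 window (M = 9) and the file name are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Eq. (2): each output pixel g(x, y) is the average of the M pixels
# in its neighborhood S (here a 3x3 window, so M = 9).
smoothed = cv2.blur(img, (3, 3))

# Equivalent explicit form: convolution with a normalized box kernel.
kernel = np.ones((3, 3), np.float32) / 9.0
smoothed_explicit = cv2.filter2D(img, -1, kernel)
```

Averaging over M points reduces the noise variance by a factor of 1/M, as Eq. (3) below shows.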

Assuming that the noise has zero mean, is additive, and has little correlation with the signal, the variance of the image after smoothing is [21]

$$ D\left\{\frac{1}{M}\sum \limits_{\left(i,j\right)\in S}n\left(i,j\right)\right\}=\frac{1}{M^2}\sum \limits_{\left(i,j\right)\in S}D\left\{n\left(i,j\right)\right\}=\frac{1}{M}{\sigma}^2 $$
(3)

2.3 Image interpolation and geometric change model

In visual information research, since computers do not have the intelligent recognition ability of the human eye, a certain algorithm is needed when a computer identifies two or more images of the same scene obtained under different shooting conditions; this method is image matching. Image matching is defined as establishing a relationship in geometric space and gray intensity between a reference image and an image to be matched. Solving the spatial geometric transformation between images is an important step in image matching.

In image matching, there are two main kinds of spatial geometric transformation model: the global model and the local model. The global spatial geometric transformation model applies the same transformation model to the whole image, that is, a single transformation function between the two images; it is currently the method most often used in image matching. The local spatial geometric transformation model decomposes the image into several small blocks, each with its own transformation model, that is, one transformation function per block, so the entire transformation is implemented by multiple transformation functions. Since the local model is more complicated, it is currently used in a relatively small range of situations.

With image interpolation, the results are fuller, the resolution of the image itself is higher, and the data feature information obtained by the computer is more accurate, so interpolation is often used. Image interpolation refers to computing unknown data points from known data points. The commonly used methods are nearest-neighbor interpolation, bilinear interpolation, and trilinear interpolation. Nearest-neighbor interpolation is the least complicated of the three: each output pixel simply takes the gray value of the nearest input pixel, and its expression is [22]:

$$ f\left(x,y\right)=g\left(x,y\right) $$
(4)
$$ y=\left[v+0.5\right] $$
(5)
$$ x=\left[u+0.5\right] $$
(6)

Bilinear interpolation performs linear interpolation of a pixel in both the x and y directions, using a weighted average of the gray values in the nearest 2 × 2 neighborhood of the output, as shown in Fig. 3. The calculation formula of bilinear interpolation is [23]

$$ {\displaystyle \begin{array}{rl}f\left(x,y\right)=&x\left[f\left(1,0\right)-f\left(0,0\right)\right]+y\left[f\left(0,1\right)-f\left(0,0\right)\right]\\ {}&+ xy\left[f\left(1,1\right)+f\left(0,0\right)-f\left(0,1\right)-f\left(1,0\right)\right]+f\left(0,0\right)\end{array}} $$
(7)
Fig. 3. Weighted average method of gray value pixels
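The unit-square form of Eq. (7) can be written directly as a small Python function; this is a sketch for a single 2 × 2 neighborhood, with the corner gray values passed in by the caller.

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Eq. (7): interpolate inside the unit square from its four
    corner gray values; x and y are fractional offsets in [0, 1]."""
    return (x * (f10 - f00)
            + y * (f01 - f00)
            + x * y * (f11 + f00 - f01 - f10)
            + f00)

# Midpoint of a square with corners 0, 10, 20, 30 -> 15.0
print(bilinear(0.0, 10.0, 20.0, 30.0, 0.5, 0.5))
```

For whole images, the same operation is available in OpenCV as cv2.resize with interpolation=cv2.INTER_LINEAR.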

In order to obtain the transformation between the points of one coordinate system and those of another, rigid transformation, affine transformation, and projective transformation are often used. In a rigid transformation, the image is operated on rigidly, and the relative positions of the pixels remain unchanged. A rigid transformation obtains the transformed coordinates of a point in space by rotation and translation, and its expression is [24]

$$ \left[\begin{array}{c}{x}^{\prime}\\ {}{y}^{\prime}\end{array}\right]=\left[\begin{array}{cc}\cos \theta & \pm \sin \theta \\ {}\sin \theta & \mp \cos \theta \end{array}\right]\left[\begin{array}{c}x\\ {}y\end{array}\right]+\left[\begin{array}{c}{t}_x\\ {}{t}_y\end{array}\right] $$
(8)

In which, θ is the rotation angle, and \( \left[\begin{array}{c}{t}_x\\ {}{t}_y\end{array}\right] \) is the basic translation vector of the rigid transformation of the object.

A transformation after which originally parallel lines in the image remain parallel is called an affine transformation. The simplified matrix expression between two points is as follows:

$$ \left[\begin{array}{c}{x}^{\prime}\\ {}{y}^{\prime}\end{array}\right]=\left[\begin{array}{cc}{m}_0& {m}_1\\ {}{m}_3& {m}_4\end{array}\right]\left[\begin{array}{c}x\\ {}y\end{array}\right]+\left[\begin{array}{c}{m}_2\\ {}{m}_5\end{array}\right] $$
(9)

Mapping by a 3 × 3 homography matrix Hp is the projective transformation of images, and the relationship between the two projected points is

$$ q={H}_pP=\left(\begin{array}{cc}A& t\\ {}{v}^T& u\end{array}\right)P $$
(10)

In which, A is a 2 × 2 matrix, t is a 2 × 1 vector, v = [v1, v2]T, and u is a scalar; together they make up the 9 basic elements of the Hp matrix.

The expression of the matrix of similarity transformation is

$$ q={H}_sP=\left(\begin{array}{cc} sR& t\\ {}{0}^T& 1\end{array}\right)P $$
(11)

In which, the parameter s is the scale coefficient of the transformation, and R is a 2 × 2 orthogonal rotation matrix.
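The rigid, affine, and similarity transformations of Eqs. (8)–(11) all reduce to a matrix product plus a translation. The following NumPy sketch applies the similarity transformation of Eq. (11) to a set of 2D points; the test values are illustrative.

```python
import numpy as np

def similarity_transform(points, s, theta, t):
    """Eq. (11): q = H_s * P, with upper block sR and translation t,
    where R is a 2x2 rotation (orthogonal) matrix and s is the scale."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (s * R @ points.T).T + t

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
print(similarity_transform(pts, s=2.0, theta=np.pi / 2, t=np.array([1.0, 1.0])))
```

Setting s = 1 recovers the rigid transformation of Eq. (8); replacing sR by a general 2 × 2 matrix gives the affine transformation of Eq. (9).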

3 Research on quantitative simulation of incomplete 3D image features based on SIFT

After the detailed discussion above, a recognition method for incomplete 3D information is obtained, and the basic features of the image can be successfully recognized. To use computer processing for fast matching between different images, the image must be transformed into a basic form that the computer can handle. Using this transformed form, the computer can process the data of the corresponding image feature points. Therefore, basic quantitative processing at different scales must be performed, together with image difference matching.

For objects far from human vision, the eye forms only basic contour information, whereas the details of nearby objects can be expressed thanks to the characteristics of the human visual nervous system. Through computer expression and processing, the spatial relationship between the object and the image scale can be realized, confirming the practical significance of scale-space image processing. A preliminary framework for the application of scale space appeared in the 1960s. In the early stage, scholars put forward the basic concept and planning ideas of scale space. In the middle stage, after digital image processing technology appeared, differential equation methods were used to advance image processing. In order to capture image edges, Marr filtered the image with the mathematical Gauss function. Lindeberg summarized discrete spatial signals and proved that Gauss convolution can be treated as a linear operation. After the filtering method was put forward and combined with the basic concept of image scale space, image scale and the heat conduction equation were closely connected. Once the theory of scale-space invariance was confirmed, computer vision applications expanded.

If the original image is I0(X) and the image is transformed by a scale parameter, another image of that form is obtained; a scale-space operator can then be defined, and the set of such operators is defined as the scale space. This evolves into a Gauss scale space, reaching a function after the space transformation. Scholars have found that the Gauss function is a linear kernel whose scale can be changed, and its expression is as follows:

$$ L\left(X,\sigma \right)=G\left(X,\sigma \right)\otimes I(X) $$
(12)

In which, X = (x1, x2, …, xD) ∈ Ω, Ω is the control region space of the image, ⊗ represents the basic convolution operation, and G(X, σ) represents a Gauss function whose scale can change.

The basic expression of the heat diffusion equation is

$$ {\partial}_{\sigma }L=\frac{1}{2}{\nabla}^2L $$
(13)
$$ L\left(0,X\right)={L}_0(X) $$
(14)

Since the Gauss function and the diffusion equation share the same linear characteristics, Gaussian filtering is a solution of the heat diffusion equation, and the two define the same scale space.

The processing technology of stable variables has gradually matured, and stable-variable calculation methods can be used in image processing. On this basis, an image feature matching algorithm was proposed, known as the SIFT algorithm. This method not only collects stable image feature points, but is also robust to noise in different images, reducing the impact of the environment to a minimum. Figure 4 is the flow chart of the SIFT algorithm.

Fig. 4. Algorithm flow chart of SIFT

The steps of the SIFT algorithm strictly follow the flowchart of Fig. 4. First, the scale space is needed: the image to be processed is transformed into scale space, and Gauss blurring is applied with the Gauss function. For the blurring, the sample is analyzed and a Gaussian pyramid is built; in addition, to obtain the basic characteristics of the image, differential calculation is performed on the Gaussian pyramid, forming the difference-of-Gaussian (DoG) pyramid model. Then, the localization and detection of image feature points are carried out. If the scale changes, the monitored extreme points retain their nature; this is the basic attribute of a point feature, and it determines the relative position and the scale of the point in space. Images can also be affected by the environment; to ensure that the result is not affected by rotation, the algorithm assigns each key point an independent orientation. Through these basic steps, the basic vector model of the key points is obtained.
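In practice, these steps are bundled in common libraries. The following sketch runs the full SIFT pipeline with OpenCV's built-in implementation (available as cv2.SIFT_create in recent opencv-python releases); the file name is a placeholder.

```python
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# The SIFT pipeline of Fig. 4: scale-space construction, DoG extremum
# detection, key point localization, orientation assignment, and
# 128-dimensional descriptor generation.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), descriptors.shape)  # N key points, an (N, 128) array
```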

Before the SIFT calculation, the scale space of the image should be described, so as to detect the effect of changes of image scale on the location of stable points. The expression of the scale space function is

$$ L\left(x,y,\sigma \right)=G\left(x,y,\sigma \right)\ast I\left(x,y\right) $$
(15)

In which, σ represents the scale factor of the feature space, whose main physical meaning is the degree of image smoothing: as the scale factor increases, the image becomes increasingly blurred, as shown in Fig. 5. In Eq. (15), ∗ is the convolution rule of the Gauss function.

Fig. 5. Image scale and spatial variation trend map

The pyramid structure of the image is defined as follows: the original image is the first layer of the pyramid. In order to obtain more feature points, the image is usually magnified by interpolation, and the new image is used as the first layer of the pyramid structure. The number of pyramid layers n is calculated as

$$ n={\log}_2\left\{\min \left(M,N\right)\right\}-t $$
(16)
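A direct reading of Eq. (16) as code, where M and N are the image dimensions and t is the number of top pyramid layers that are too small to use (the value t = 3 here is only an illustrative assumption):

```python
import math

def num_octaves(M, N, t=3):
    """Eq. (16): n = log2(min(M, N)) - t, the number of usable
    pyramid layers for an M x N image."""
    return int(math.log2(min(M, N))) - t

print(num_octaves(1024, 768))  # int(log2(768)) = 9, so 9 - 3 = 6 layers
```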

According to the research theory of David Lowe, after the image is transformed by the Gauss function, the scale-substituted spatial function is obtained. After taking the Gauss difference, the spatial function is normalized, and the result is an expression very close to the scale-normalized Laplacian of the Gauss function, as shown in Fig. 6. Using the approximate substitution of the difference method, the expression obtained is:

$$ G\left(x,y, k\sigma \right)-G\left(x,y,\sigma \right)\approx \left(k-1\right){\sigma}^2{\nabla}^2G $$
(17)
Fig. 6. The pyramid model of Gauss function

In the Gauss pyramid expression, (k − 1) is a constant factor over all scales, so it does not affect the positions of the extreme values to be examined. As k approaches 1, the calculated error approaches 0. Traditional experiments show how the extreme values of the image fluctuate.

In the scale space, excluding the first and last layers of each octave, a layer-by-layer sampling method is established to compare each pixel with the surrounding pixels in space, giving a cubic (3 × 3 × 3) comparison process. Comparisons are made between the different layers, as shown in Fig. 7. The monitored points are compared, and it is detected whether each pixel is the extreme value of its adjacent points; if so, it is defined as a candidate point. The detection process seems complex and changeable, but most sampled points fail the first comparisons, and thus the time consumption of detection is relatively low.

Fig. 7. Extreme point detection in scale space
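A minimal sketch of this 26-neighbor comparison, assuming the DoG pyramid is stored per octave as a 3D NumPy array of shape (layers, height, width) and that (i, y, x) indexes an interior point:

```python
import numpy as np

def is_extremum(dog_octave, i, y, x):
    """True if pixel (y, x) in DoG layer i is the maximum or minimum
    of the 3x3x3 cube formed with its 26 neighbors (Fig. 7)."""
    cube = dog_octave[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
    value = dog_octave[i, y, x]
    return value == cube.max() or value == cube.min()
```

Because most pixels fail the very first comparisons, candidate detection remains cheap despite the apparent 27-way test.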

In order to give the detected key points a vector parameter, the gradient histogram method is used to assign a direction value to each key point; that is, the histogram is weighted. Taking a Gauss window of radius 1.5σ, the gradient values around each feature point are calculated. By exploiting the gradients of neighboring pixels and the histogram of pixel orientations, the image is little affected even if it is rotated, and the direction is calculated as

$$ \theta \left(x,y\right)=\arctan \left(\frac{L\left(x,y+1\right)-L\left(x,y-1\right)}{L\left(x+1,y\right)-L\left(x-1,y\right)}\right) $$
(18)

Sampling analysis is performed within this radius, and the neighborhood gradients are accumulated into a histogram. The 360° circumference is divided into 36 modules of 10° each. Then, in the calculated histogram, the peak (the direction of the arrow in the graph) is taken as the main direction value and as the basic direction of the feature key point. In theory, such a description method is complex, but in order to obtain an accurate description, such a preservation method is feasible, as shown in Fig. 8.

Fig. 8. Histogram of gradient direction
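A sketch of the 36-bin orientation histogram described above, with the window radius and Gaussian weighting chosen for illustration; np.arctan2 is used so that the quadrant of Eq. (18) is resolved automatically, and (y, x) is assumed to lie far enough from the image border.

```python
import numpy as np

def orientation_histogram(L, y, x, radius=8):
    """Accumulate a 36-bin (10-degree) gradient orientation histogram
    around key point (y, x) in the smoothed image L; the peak bin
    gives the key point's main direction."""
    hist = np.zeros(36)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            gx = L[yy, xx + 1] - L[yy, xx - 1]        # horizontal gradient
            gy = L[yy + 1, xx] - L[yy - 1, xx]        # vertical gradient
            mag = np.hypot(gx, gy)
            theta = np.degrees(np.arctan2(gy, gx)) % 360.0   # Eq. (18)
            w = np.exp(-(dx * dx + dy * dy) / (2.0 * radius * radius))
            hist[int(theta // 10) % 36] += w * mag    # Gaussian-weighted vote
    return hist, 10 * int(np.argmax(hist))            # histogram, main direction
```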

When the image is illuminated by a certain light, the camera angle changes with the shooting environment. Therefore, a descriptor of the feature key points must be set up that both preserves the independent features and avoids the influence of external factors, so as to improve the probability of successful image matching. In Fig. 9, the red dot is the key point, the grid cells are the basic pixels of the image, the blue area is the coordinate circle of the key point, and the Gauss function is calculated within the blue range. Each grid cell is treated as a micro unit, and the histogram is solved in different directions for each micro unit.

Fig. 9. Key point influence factor generation

4 Simulation results and analysis

4.1 Feature point image recognition simulation

In this paper, the importance of image matching was described in terms of the recognition of the reference image and the expression of the eigenvectors; the metric of this study is to match images whose feature points are very close. The nearest-neighbor approach realizes the matching of image feature points by minimizing the Euclidean distance, which is currently the best-performing calculation and matching method. In order to confirm the basic criterion of the matching measure, a specific matching process is needed. Since a picture contains a large number of basic feature points, when comparing two images, similar feature points are first obtained by a whole search, and the feature points are then searched using the data feature structure. The search takes the target image as the starting point and, with the similarity of the identified image as reference, searches the feature points in the image and gradually expands to similar feature points.

The main matching method is to calculate the Euclidean distance from each feature point of the reference image to all feature points of the image to be detected. Then, according to the results, the nearest (Dmin) and next-nearest (Dscn) Euclidean distances in the two images are obtained, and their ratio is calculated with the basic expression

$$ {R}_D={D}_{\mathrm{min}}/{D}_{scn} $$
(19)

After defining the threshold value as 0.7, the resulting trends of correct and incorrect matches are shown in Fig. 10.

Fig. 10. Match result trend chart
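A minimal sketch of this nearest/next-nearest ratio test with OpenCV, using the paper's threshold of 0.7; the image paths are placeholders.

```python
import cv2

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# For each reference descriptor, find its two nearest neighbors in the
# target image and keep the match only if R_D = D_min / D_scn < 0.7 (Eq. 19).
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

print(len(good), "matches pass the ratio test")
```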

Figure 11 shows the matching result. As can be seen from Fig. 11, the part surrounded by the blue rectangle is the image that can be identified after matching. The area around the left side of the green image is also a recognized part, but the left and right sides describe the same object. Using the basic SIFT calculation method, the incomplete 3D image information can be matched, and the information in the image identified, to the maximum extent. As can be seen from Fig. 11, the incomplete 3D image information recognition system based on the SIFT algorithm can recognize the figure in the image, and the simulation results basically meet the requirements of the research, verifying the feasibility of the algorithm.

Fig. 11. Identification of feature points with multiple factors

Table 1 shows the effect of different factors on SIFT-based image recognition. It can be seen from Table 1 that the detection time is 2.97 s and the number of feature points is 896. When the image is affected by other interference factors, the extracted feature points remain unchanged, which proves that the optimized algorithm can meet the basic requirements of image recognition and matching.

Table 1 The effect of different factors on the SIFT algorithm for image recognition

4.2 Euclidean distance ratio threshold parameter adaptive experiment

The Euclidean distance ratio is the ratio between the nearest-neighbor Euclidean distance and the next-nearest-neighbor Euclidean distance. When the ratio is within a certain threshold t, the pair of feature points corresponding to the nearest-neighbor distance is considered a matching pair. The Euclidean distance ratio threshold is the bond that establishes the matching relationship between feature points and thus plays an important role. The larger the threshold t, the looser the matching requirement and the more incorrect matching points there will be; the smaller the threshold t, the stricter the matching requirement and the fewer correct matching points there will be.

Due to the diversity of image content, the setting of the Euclidean distance ratio threshold should be different when matching different types of images. In other words, the fixed threshold setting does not meet the needs of different types of images. An effective algorithm should be as robust as possible and adaptively adapt to different situations. Therefore, when performing feature matching, the Euclidean distance ratio threshold parameter can be adaptively adjusted by matching, thereby ensuring an optimal match between images.

The probability density distribution of the distance ratio is shown in Fig. 12. As can be seen from Fig. 12, it is reasonable to set the distance ratio threshold parameter to 0.8: although about 5% of the correct matching points are lost, about 90% of the incorrect matching points are eliminated at the same time. However, Fig. 12 does not reflect the Euclidean distance ratio probability density distribution of all images; that is, fixing the threshold is not a very reasonable method. A fixed threshold does not always work best when experimenting with different images, so the parameter is not valid for all images. An effective algorithm should make the threshold parameter as robust as possible to meet the needs of different situations. To this end, the threshold is adaptively adjusted for each image so that the feature points can be matched more accurately.

Fig. 12. Probability density distribution of Euclidean distance ratio

In order to select an optimal Euclidean distance ratio threshold parameter, a parameter optimization process must be added to the algorithm, and whether the parameter is optimal is measured by the repetition rate. The higher the repetition rate, the higher the matching degree and the more reasonable the threshold selection; the lower the repetition rate, the lower the matching degree and the less reasonable the selection. However, there is no explicit functional relationship between the repetition rate and the Euclidean distance ratio threshold, which complicates the optimization process. A large number of experiments have shown that a suitable Euclidean distance ratio threshold lies in the interval [0.4, 0.8]. If the threshold is too large, the matching requirement is too loose and the number of mismatches increases; if the threshold is too small, the matching requirement is too strict and correct matching points are lost.

Figure 13 shows the iterative process of the distance ratio threshold parameter. Through the optimization algorithm, the distance ratio threshold parameter and the repetition rate are continuously refined until the maximum repetition rate and the optimal distance ratio threshold are reached. The algorithm performed 37 iterations; the repetition rate finally reached 0.95, and the distance ratio threshold parameter finally took the value 0.72. It can be seen that the distance ratio threshold parameter is iterated continuously by repeatedly halving the search interval until the required value is obtained.
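One way to realize this interval-narrowing optimization is the sketch below, which assumes a caller-supplied function repetition_rate(t) (a hypothetical callback that evaluates the match repetition rate at threshold t) and a single-peaked rate over [0.4, 0.8]; the step count echoes the 37 iterations reported above.

```python
def optimize_threshold(repetition_rate, lo=0.4, hi=0.8, steps=37):
    """Repeatedly shrink [lo, hi] toward the threshold t that maximizes
    repetition_rate(t), mirroring the iterative process of Fig. 13."""
    for _ in range(steps):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if repetition_rate(m1) < repetition_rate(m2):
            lo = m1   # the maximum lies in the upper part of the interval
        else:
            hi = m2   # the maximum lies in the lower part of the interval
    return (lo + hi) / 2.0
```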

Fig. 13. Iterative process of the distance ratio threshold

Through optimization of the distance ratio threshold parameter, the optimal threshold for the given image is obtained. Starting from the edges of the initial calculation interval, the optimal distance ratio threshold corresponding to the maximum repetition rate can be found without causing a large amount of calculation.

The optimal distance ratio threshold parameter differs between images and has different distributions, so the selection should be made separately for different images. Sometimes the optimal distance ratio threshold is not a single value but any value within a certain interval, as shown in Fig. 14.

Fig. 14. Relationship between distance ratio threshold and repetition rate

4.3 Simulation of correct matching rate and algorithm processing speed

Image registration is one of the image processing tasks most sensitive to input changes. It requires that the estimate of the transformation relationship be derived only from the images themselves. The quality of the estimate is determined by the number and location of the feature points identified in the image. Therefore, a large number of well-distributed, good feature points is a key factor in ensuring the quality of the registration.

In feature detection, the SIFT algorithm compares a pixel with its surrounding 26 pixels to determine whether it is an extremum and hence a feature point. Because the detection range of the algorithm is small, the number of detected feature points is large. But for the same reason, each detected feature point represents only the extremum of a 3 × 3 × 3 block of 27 pixels, so it is likely to be a merely local extremum. The spatial distribution of the detected feature points tends to concentrate in certain ranges, which may result in clustering. When clustering occurs, the feature points may reflect the characteristics of only one or a few objects in the image, whereas the required feature points should reflect the overall characteristics of the image, not just some local features.

In order to obtain more uniformly distributed feature points, detection can be performed over a larger range. Because the detection range of the feature points is larger, the range of the local extremum represented by each feature point is larger; when a feature point is a local extremum over a larger range, feature points lie farther from each other, and the detected distribution is more uniform. With this uniformity-oriented detection method, not only can uniformly distributed feature points be detected, but the number of detections can also be reduced, since the number of detections decreases as the detection range grows. Then, by setting a flag bit for each pixel, the detection step size is dynamically adjusted and the number of detections is further reduced, as sketched below.
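A rough sketch of this idea on a single DoG layer: extrema are sought over a larger window, and a per-pixel flag marks covered regions so that later detections can be skipped. The window radius is an illustrative assumption.

```python
import numpy as np

def uniform_extrema(dog_layer, radius=4):
    """Detect extrema over a (2*radius+1)^2 window and flag the covered
    area, yielding fewer, more evenly spread feature points."""
    H, W = dog_layer.shape
    visited = np.zeros((H, W), dtype=bool)    # flag bit for each pixel
    points = []
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            if visited[y, x]:
                continue                       # dynamically skip covered pixels
            window = dog_layer[y - radius:y + radius + 1,
                               x - radius:x + radius + 1]
            value = dog_layer[y, x]
            if value == window.max() or value == window.min():
                points.append((y, x))
                visited[y - radius:y + radius + 1,
                        x - radius:x + radius + 1] = True
    return points
```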

In order to compare the time consumption of the original SIFT algorithm with the improved algorithms, the three algorithms were run under the same conditions; their time consumption is shown in Fig. 15. It can be seen that, since the MAD algorithm requires iterative optimization in its middle stage, its feature point detection time is high, which makes the total time consumption of the MAD algorithm high; this is a disadvantage of the MAD algorithm. For the NCC algorithm, even though the feature point extraction part consumes a little more time than the original SIFT algorithm, the feature points extracted by the improved algorithm are more suitable for matching and more accurate, so in the feature point matching stage the SIFT algorithm consumes less time than the MAD and NCC algorithms. Comparing the overall time of the SIFT algorithm with the NCC algorithm shows that the SIFT algorithm introduces no additional complexity, and it sometimes takes less time than the MAD algorithm. This is an advantage over other methods of adding color features to the SIFT algorithm.

Fig. 15. Matching time comparison

It can be seen from Fig. 16 that in these five different cases, the correct matching rate of the improved algorithm is higher than that of the original SIFT algorithm. At the same time, the correct matching rate of the NCC algorithm is also higher than that of the MAD algorithm.

Fig. 16. Comparison of correct match rates

In summary, the analysis of the SIFT, MAD, and NCC results shows that the SIFT algorithm outperforms MAD in handling color information, although the complexity of the algorithm is larger and the time consumption is higher than that of the original algorithm. The SIFT algorithm is significantly superior to the MAD algorithm and the NCC algorithm in terms of the color information included and, at the same time, introduces no additional complexity. That is, consistent with Figs. 15 and 16, the performance of the algorithms from high to low is SIFT→NCC→MAD, and the time consumption from low to high is SIFT→NCC→MAD.

5 Conclusion

Recognition of incomplete 3D image information has long been a key problem in scientific research and engineering applications. In this paper, on the basis of the existing literature and results, incomplete-information image recognition technology and related theories were discussed and analyzed in depth, and the design of an incomplete 3D image information recognition system based on the SIFT algorithm was studied. The problem of transforming images from “blur recognition” to “clear recognition” was solved. Then, according to the influence of image noise, a processing method for the image feature vectors was selected, in which image enhancement improves the recognition clarity of the image. For the SIFT calculation method, based on the local feature description theory of the original image, the matching between SIFT feature vector extraction and image features was improved. Finally, combined with practical application characteristics, the algorithm was simulated, proving that the incomplete 3D image information recognition system based on the optimized SIFT algorithm is scientific and feasible. Because the algorithm extracts a large number of feature points compared with the traditional SIFT algorithm, matching takes a long time, and simplifying the construction of the SIFT operator alone is not enough. The next step is to improve the feature point matching method so as to improve matching efficiency.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

SIFT:

Scale Invariant Feature Transform

References

1. L. Jia, W. Fu, W. Wen, et al., Image matching based on improved SIFT algorithm. Chin. J. Sci. Instrum. 34(5), 1107–1112 (2013)

2. B. Liao, H. Wang, The optimization of SIFT feature matching algorithm on face recognition based on BP neural network. Appl. Mech. Mater. 743, 359–364 (2015)

3. E. Sadeghipour, N. Sahragard, Face recognition based on improved SIFT algorithm. Int. J. Adv. Comput. Sci. Appl. 7(1) (2016)

4. Y.W. Dong, L. Wan, Z.M. Shi, et al., An image registration algorithm based on improved SIFT feature. Appl. Mech. Mater. 347-350, 3411–3415 (2013)

5. K. Dhandapani, K. Venugopal, J.V. Kumar, Ecofriendly and green synthesis of carbon nanoparticles from rice bran: characterization and identification using image processing technique. Int. J. Plast. Technol. 23(1), 56–66 (2019)

6. A.K. Chandran, L.A. Poh, P. Vadakkepat, Real-time identification of pedestrian meeting and split events from surveillance videos using motion similarity and its applications. J. Real-Time Image Proc. 16(4), 1–17 (2016)

7. V.E. Antsiperov, Object identification on low-count images by means of maximum-likelihood descriptors of precedents. Pattern Recog. Image Analysis 29(1), 21–34 (2019)

8. S.L. Zhang, G.J. Wu, X.G. Yang, et al., Digital image-based identification method for the determination of the particle size distribution of dam granular material. KSCE J. Civ. Eng. 22(10), 1–14 (2017)

9. H.M. Yang, J.G. Sun, An improved face recognition algorithm based on SIFT and LBP. Appl. Mech. Mater. 427-429, 1999–2004 (2013)

10. Q. Gao, An image matching algorithm based on difference measure and improved SIFT algorithm. J. Inform. Comput. Sci. 11(10), 3631–3642 (2014)

11. J. Zhang, G. Chen, Z. Jia, An image stitching algorithm based on histogram matching and SIFT algorithm. Int. J. Pattern Recognit. Artif. Intell. 31(04), 1754006 (2016)

12. Y. Zhou, Z. Fu, L. Zhang, Research of face recognition based on the PCA-SIFT algorithm. J. Jiangnan Univ. (2014)

13. X. Ding, S. Han, H.H. Yang, et al., A study of recognition algorithms of large-scale image based on the fusion of SIFT features and BP neural network. Adv. Mater. Res. 1049-1050, 1558–1560 (2014)

14. L. Dong, Q. Guo, W. Wu, Speech corpora subset selection based on time-continuous utterances features. J. Comb. Optim. 37(4), 1237–1248 (2019)

15. S. Qi, H. Liu, Research on image matching method based on fractional order differential and SIFT algorithm. Semiconductor Optoelectronics (2016)

16. M.Y. Yin, F. Guan, P. Ding, et al., Implementation of image matching algorithm based on SIFT features. Appl. Mech. Mater. 602-605, 3181–3184 (2014)

17. H. Nie, K. Long, J. Ma, et al., Using an improved SIFT algorithm and fuzzy closed-loop control strategy for object recognition in cluttered scenes. PLoS One 10(2), e0116323 (2015)

18. Q. Li, Q.Y. Sun, Research on matching pair purification methods of image based on SIFT algorithm. Appl. Mech. Mater. 713-715, 1851–1854 (2015)

19. Z. Wang, N. Li, The research and design of intelligent traffic sign recognition system based on SIFT algorithm. Microcomp. Applic. (2014)

20. H.Y. Wang, S.H. Gu, J.D. Lv, Partially occluded object recognition based on SIFT features under hidden Markov model. Comput. Technol. Automation (2016)

21. S. Chen, L. Wang, An improved SIFT feature matching based on RANSAC algorithm. Inform. Technol. (2016)

22. K.D. Lakshmi, V. Vaithiyanathan, Image registration techniques based on the scale invariant feature transform. IETE Tech. Rev. 75(6), 1–8 (2016)

23. Z. Li, B. Zhao, Z. Liu, et al., Research of monocular vision recognition algorithm based on SIFT and Hu feature fusion. Microcomp. Applic. (2013)

24. X. Shen, W. Bao, The remote sensing image matching algorithm based on the normalized cross-correlation and SIFT. J. Indian Soc. Remote Sensing 42(2), 417–422 (2014)


Acknowledgements

Science and Technology Correspondent Project of Tianjin China (18JCTPJC67200).

Funding

Science and Technology Correspondent Project of Tianjin China (18JCTPJC67200).

Author information


Contributions

Zhang Zhixin is responsible for the experimental part of the article, and Jiang Shuhao is responsible for the theoretical part of the article. The authors read and approved the final manuscript.

Authors’ information

ZhiXin Zhang (1984–), male, Master of Computer Science and Technology, graduated from the Tianjin University of Technology. He is currently a lecturer in the College of Information Engineering, Tianjin University of Commerce. His research interests include image recognition and intelligent computing.

Shuhao Jiang (1980–), male, Master of Computer Science and Technology, graduated from Tianjin Normal University. He is currently a lecturer in the College of Information Engineering, Tianjin University of Commerce. His research interests include image recognition and intelligent computing.

Corresponding author

Correspondence to Jiang Shuhao.

Ethics declarations

Ethics approval and consent to participate

This article does not contain any studies with human participants or animals performed by any of the authors.

All authors agree to submit this version and claim that no part of this manuscript has been published or submitted elsewhere.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhixin, Z., Shuhao, J. Design of incomplete 3D information image recognition system based on SIFT algorithm and wireless network. J Wireless Com Network 2020, 95 (2020). https://doi.org/10.1186/s13638-020-01726-0
