A biologically-inspired embedded monitoring network system for moving target detection in panoramic view

Abstract

An embedded monitoring network system based on the visual principle of the compound eye is presented, which meets the requirements of a panoramic monitoring network in field angle, detection efficiency, and structural complexity. Three fixed wide-angle cameras are adopted as sub-eyes, and a main camera is installed on a high-speed rotary platform. The system ensures continuous tracking with high sensitivity and accuracy over a field of view (FOV) of 360 × 180°. In the non-overlapping FOV of the sub-eyes, we adopt a Gaussian background difference model and a morphological algorithm to detect moving targets; in the overlapping FOV, we use a lateral inhibition network strategy, which improves the continuity of detection and the speed of response. The experimental results show that our system locates a target within 0.15 s after it starts moving in the non-overlapping field; when a target moves in the overlapping field, the system locates it within 0.23 s. The system reduces the cost and complexity of a traditional panoramic monitoring network and lessens the labor intensity of monitoring.

1 Introduction

Conventional surveillance cameras have a limited field of view (FOV) and cannot continuously monitor a full 360 × 180° panorama. To solve this problem, a parallel network of multiple cameras is commonly used to cover the panoramic monitoring area [1]. However, such a network is expensive and complex. Worse still, multi-channel parallel video processing may degrade the real-time performance of the system and increase the misjudgment rate. In recent years, fish-eye lenses [2] have gradually become popular in panoramic monitoring. However, distortion is large at the edge of the field, where no effective information can be obtained.

Biologically-inspired design methods are developing rapidly [3–10]. The compound eye vision system of insects has a large FOV and high sensitivity. Such systems have advantages over conventional vision systems in applications such as community monitoring, robot vision, and intelligent vehicles. A system built on this principle can have small volume, light weight, a large field of view, and high sensitivity to moving targets. A compound eye vision system obtains original image information from different directions at the same time. Its unique structure significantly enlarges the field range and provides a paraxial optical path for each view angle, which decreases distortion [11–13]. Besides, the concept of lateral inhibition [14] among sub-eyes can be used in an artificial bionic network to improve sensitivity. Therefore, a bionic compound eye network can robustly and continuously track and locate moving targets in a panoramic view. It provides a new mode for the development of detectors and sensors.

Starting from the insects' compound eye system, this paper describes an embedded network system for continuously tracking moving targets in a panoramic view. Panoramic detection with low distortion is realized by multiple cameras. Meanwhile, global low-speed acquisition and local high-speed image acquisition are combined to shorten the time used by the detection algorithm and to improve sensitivity. Besides, a high-resolution automatic tracking mode and lateral inhibition are used to overcome the limitations of current systems in field angle, detection efficiency, and structural complexity.

2 The system principle and implementation

2.1 System components and setup

Three wide-angle cameras are fixed in a ring; we call them ‘sub-eye cameras.’ Each sub-eye camera covers about 120° in the horizontal field and 180° in the meridian plane; thus, the total field of view is 360° in the sagittal plane, and panoramic video information over a 360 × 180° space angle is obtained. Each sub-eye camera has a 1/3-inch charge-coupled device (CCD) with a resolution of 704 × 576.

A high-resolution camera is used as the main camera. It has a 1/2-inch CCD and an 18× automatic-zoom optical lens; its FOV is 45°, and its resolution is 1,280 × 1,024. The main camera is installed on a rotary platform with a top rotary speed of 400°/s, 128 preset positions, and a serial baud rate of 9,600 bps. The platform comprises a pitch and horizontal rotation axis motor system and a processing module. The system architecture is shown in Figure 1.

Figure 1. The architecture of the monitoring network system. (a) System probe of the detection network. (b) Overview of all sub-eyes.

2.2 Detection process

After a setup process, the three sub-eye cameras start global sub-sampling, that is, sampling pixel values in alternate lines. When a sub-eye camera detects a moving target, it switches to full-resolution sampling mode, that is, sampling every pixel value. It then extracts the centroid of the target and calculates the distance between the target and its optical axis. Using the calibrated position of the main camera, the visual information obtained by the sub-eye cameras is delivered to the main camera through serial communication in the PELCO-D protocol [15]. The main camera immediately turns toward the target and tracks it at high resolution, zooming automatically so that targets at various distances are accurately located and imaged. The images are saved in real time by a flip-flop register until the target leaves the FOV.
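To make the serial control concrete, the following minimal Python sketch builds PELCO-D frames for such a platform over a 9,600-bps serial link. The frame layout and command codes follow the public PELCO-D specification; the device path, camera address, and function names are illustrative assumptions, not details of the authors' implementation.

    import serial  # pyserial

    def pelco_d_frame(address, cmd1, cmd2, data1, data2):
        # 7-byte PELCO-D frame: sync (0xFF), address, command 1, command 2,
        # data 1, data 2, checksum (modulo-256 sum of bytes 2 through 6).
        body = [address, cmd1, cmd2, data1, data2]
        return bytes([0xFF] + body + [sum(body) & 0xFF])

    def pan_right(address, speed):
        # Command-2 bit 0x02 = pan right; data 1 carries the pan speed (0x00-0x3F).
        return pelco_d_frame(address, 0x00, 0x02, speed & 0x3F, 0x00)

    def goto_preset(address, preset):
        # Command 2 = 0x07 recalls a stored preset; data 2 is the preset number.
        return pelco_d_frame(address, 0x00, 0x07, 0x00, preset)

    port = serial.Serial("/dev/ttyUSB0", baudrate=9600)  # assumed device path
    port.write(goto_preset(0x01, 12))  # recall one of the 128 preset positions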

When the target moves out of the FOV of the main camera, the sub-eye cameras restart the detection mode. When a sub-eye camera spots a moving target, it immediately signals the main camera, which positions the target again. Continuous panoramic detection is thus achieved.

When multiple targets are spotted at the same time, the system adopts a default detection mode (size-priority mode, speed-priority mode, etc.). The entire process does not need complex manual operation.

Figure 2 shows the overall flowchart of this panoramic detection. On the one hand, in order to avoid information loss caused by dead zones, every two adjacent sub-eye cameras share a certain overlapping FOV. On the other hand, in order to prevent information aliasing and to position targets more accurately, different tracking strategies are used in the overlapping and non-overlapping FOVs. In the non-overlapping FOV, the background difference method under a Gaussian background model is used, while in the overlapping FOV, the lateral inhibition algorithm is used.

Figure 2. Strategy flowchart of the panoramic detection.

3 Tracking algorithm and experiments in non-overlapping FOV

3.1 Self-adaptive Gaussian background difference method

After the sub-eye cameras obtain images by global sub-sampling, primary detection is done by the background difference method. Commonly used target extraction algorithms include the background difference method [16] and the frame difference method [17]. We adopt an adaptive Gaussian background model to extract the foreground target and update the background model synchronously.

We assume that background changes follow a random probability distribution. To handle the unpredictability of slowly changing light, we build a sub-eye self-adaptive Gaussian background model, which adapts to different environmental changes and gives a better background estimate.

In the Gaussian background model, we assume that each pixel value f(x, y) follows a Gaussian distribution in the time domain [18]. We establish a Gaussian model for each pixel in the view. By fitting each new frame to the Gaussian model, we extract the background image. The background is updated synchronously, which makes the algorithm adaptive.

The probability distribution of the Gaussian background model is:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$$
(1)

where μ is the mean value and σ is the standard deviation. In this model, a Gaussian probability distribution η(μ, σ²) is established for each pixel. Let f_k(x, y) be the pixel value of the k-th frame image.

(1) Background image initialization:

$$\mu_0(x, y) = \frac{1}{N}\sum_{k=1}^{N} f_k(x, y)$$
(2)

$$\sigma_0^2(x, y) = \frac{1}{N}\sum_{k=1}^{N} \left( f_k(x, y) - \mu_0(x, y) \right)^2$$
(3)

where μ_0 and σ_0² are the estimates of the mean and variance of a point in the initial background, respectively, and N is the number of frames. The value of N should be appropriate, not too large; here, we let N = 5.

(2) Background image update. After the background model is built, we subtract the background model from the current frame to obtain a difference image and then apply a threshold Th (Section 3.2 describes in detail how the threshold is selected). If the difference at a pixel exceeds the threshold, that is,

$$\left| f_k(x, y) - f_{k-1}(x, y) \right| \ge \mathrm{Th},$$
(4)

the pixel is taken to belong to a moving target. For each such pixel, the model is left unchanged:

$$\mu_k(x, y) = \mu_{k-1}(x, y), \qquad \sigma_k^2(x, y) = \sigma_{k-1}^2(x, y).$$
(5)

If the difference is below the threshold, that is,

$$\left| f_k(x, y) - f_{k-1}(x, y) \right| < \mathrm{Th},$$
(6)

then the pixel is considered background, and the background model is continuously updated. The update rules are as follows:

$$\mu_k(x, y) = (1 - \alpha)\,\mu_{k-1}(x, y) + \alpha\, I_k(x, y),$$
$$\sigma_k^2(x, y) = (1 - \alpha)\,\sigma_{k-1}^2(x, y) + \alpha \left( f_k(x, y) - \mu_k(x, y) \right)^2,$$
(7)

where I_k(x, y) is the pixel value at (x, y) in the k-th frame and α is the background updating rate, ranging from 0 to 1. The larger α is, the faster the model updates. If the model updates too fast, noise appears in the background; if it updates too slowly, the model takes a long time to adapt to background changes. So α should be given an appropriate value; here, we initialize α = 0.5. Thereafter, α follows the probability distribution:

$$\alpha = A \cdot \frac{1}{\sqrt{2\pi}\,\sigma_{k-1}} \exp\left( -\frac{\left( f_k(x, y) - \mu_{k-1} \right)^2}{2\sigma_{k-1}^2} \right)$$
(8)

where A is a modulation factor. This makes the background update automatically according to a certain statistical regularity.
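To make Equations (2) to (8) concrete, here is a minimal NumPy sketch of the per-pixel adaptive model. The class name and the clamp on α are our assumptions, and the threshold test here compares each frame against the background mean rather than the previous frame; both variants appear in the literature.

    import numpy as np

    class GaussianBackground:
        def __init__(self, first_frames, A=1.0):
            # Eqs. (2)-(3): initialize mean and variance from N frames (N = 5)
            stack = np.stack(first_frames).astype(np.float64)
            self.mu = stack.mean(axis=0)
            self.var = stack.var(axis=0) + 1e-6  # small floor avoids division by zero
            self.A = A                           # modulation factor of Eq. (8)

        def apply(self, frame, th):
            frame = frame.astype(np.float64)
            diff = np.abs(frame - self.mu)
            foreground = diff >= th              # Eq. (4): moving-target test

            # Eq. (8): per-pixel update rate from the Gaussian likelihood
            sigma = np.sqrt(self.var)
            alpha = self.A * np.exp(-diff ** 2 / (2 * self.var)) / (np.sqrt(2 * np.pi) * sigma)
            alpha = np.clip(alpha, 0.0, 1.0)     # safety clamp (our addition)

            # Eq. (5): foreground keeps the old model; Eq. (7): background updates
            bg = ~foreground
            self.mu[bg] = (1 - alpha[bg]) * self.mu[bg] + alpha[bg] * frame[bg]
            self.var[bg] = ((1 - alpha[bg]) * self.var[bg]
                            + alpha[bg] * (frame[bg] - self.mu[bg]) ** 2)
            return (foreground * 255).astype(np.uint8)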

3.2 The adaptive threshold

After the background model is built, the difference image must be binarized to show the target. It is important to select an appropriate threshold, Th: if the threshold is too large, a target point may be mistaken for a background point; if it is too small, a background point may be mistaken for a target point.

The threshold is conventionally set manually, which lacks adaptability and requires human intervention. Here, we select the threshold by self-adaptive iteration [19] and obtain the global optimal threshold, achieving satisfying adaptability, as Figure 3 shows. The detailed iteration process is as follows:

(1) Calculate the maximum and minimum gray values t_max and t_min, and initialize the threshold as T_0 = (t_max + t_min)/2.

(2) Segment the image into two parts, the target and the background, using the current threshold T_k. Count the pixels in each part, N_1^k and N_2^k, and then calculate the average gray levels of the two parts, t_0 and t_A:

$$t_0 = \frac{\sum_{t(i,j) < T_k} t(i, j) \times N(i, j)}{N_1^k}, \qquad t_A = \frac{\sum_{t(i,j) > T_k} t(i, j) \times N(i, j)}{N_2^k},$$
(9)

where t(i, j) is the gray level of point (i, j) and N(i, j) is the weight of point (i, j); we set N(i, j) = 1.

(3) Calculate the new threshold T_{k+1} = (t_A + t_0)/2.

(4) If T_k = T_{k+1} or k > M, where M is the maximum number of iterations, then T_k is the suitable global threshold and the iteration ends. Otherwise, return to step (2) and iterate further.
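A compact sketch of this iteration, assuming unit weights N(i, j) = 1 as in the text; the function name and the convergence tolerance eps are ours.

    import numpy as np

    def iterative_threshold(gray, M=100, eps=0.5):
        # Step (1): initialize from the extreme gray values
        t = (float(gray.max()) + float(gray.min())) / 2.0
        for _ in range(M):                  # M = maximum number of iterations
            below = gray[gray < t]          # tentative target part
            above = gray[gray >= t]         # tentative background part
            if below.size == 0 or above.size == 0:
                break
            # Steps (2)-(3): average gray levels t_0 and t_A, then the new threshold
            t_new = (below.mean() + above.mean()) / 2.0
            if abs(t_new - t) < eps:        # step (4): T_k equals T_{k+1} (within eps)
                return t_new
            t = t_new
        return t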

Figure 3. Comparison between different threshold segmentation methods. (a) Using the static threshold. (b) Using the adaptive iterative threshold.

Moreover, the binarized target image still has shadows and discontinuities. We use the morphological opening operation [20] to enhance the target image.

After the target is extracted from the sub-eye images, we extract its centroid to determine whether the target has moved into the overlapping FOV. Once the centroid moves out of a certain rectangular boundary region, the system automatically switches to the algorithm for the overlapping FOV.
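An OpenCV sketch of this post-processing, combining the opening operation with the centroid test; the kernel size, margin fraction, and names are illustrative assumptions (the 1/5-width margin matches the rule given in Section 4).

    import cv2

    def centroid_and_overlap(mask, margin_frac=0.2):
        # Morphological opening [20] removes shadows and small discontinuities
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

        # Target centroid from image moments
        m = cv2.moments(opened, binaryImage=True)
        if m["m00"] == 0:
            return None, False               # no target in this sub-eye
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

        # Assumed rectangular boundary: a centroid within 1/5 of the image
        # width of either vertical edge counts as entering the overlapping FOV
        w = mask.shape[1]
        in_overlap = cx < margin_frac * w or cx > (1.0 - margin_frac) * w
        return (cx, cy), in_overlap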

3.3 Experiment of tracking in non-overlapping FOV

Figure 4 shows an experiment using a human body as the moving target. The three pictures on the left are images obtained by the three sub-eye cameras: black means no moving target is detected, while the white regions are the detected targets after the morphological closing operation [20]. The picture on the right is the current field image from the main camera.

Figure 4. A non-overlapping FOV detection experiment. (a) A moving target appears in the non-overlapping FOV. (b) The main camera detects the target with high resolution.

In Figure 4, the no.1 sub-eye camera detects a moving target in its 96th frame. The target is in the non-overlapping FOV and not yet in the view of the main camera. The main camera then rotates toward the target. By the time the no.1 sub-eye camera captures its 100th frame, the target appears in the FOV of the main camera. The detection takes 0.15 s.

In comparison, if the sub-eye cameras sample every pixel from the beginning, the detection takes about 0.4 s. The sub-sampling method thus clearly improves detection sensitivity.
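For clarity, the alternate-line sub-sampling of Section 2.2 can be expressed as a simple array slice (the function name is ours):

    def subsample_alternate_lines(frame):
        # Global detection mode: keep every other line, halving the data that
        # the background difference model must process in each frame.
        return frame[::2, :]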

4 Tracking algorithm and experiments in overlapping FOV

The detection strategy applied in the overlapping FOV differs from that used in the non-overlapping fields. Here, two sub-eye cameras are involved, so if we simply use the background difference method of the non-overlapping FOV, the extracted target edge is blurred, incomplete, and shadowed; the stability and efficiency of the algorithm are relatively low, and it is difficult to determine the centroid position of the target. We therefore adopt the lateral inhibition algorithm conventionally used in bionic compound eye systems, which extracts the edges of moving targets stably.

The phenomenon of lateral inhibition widely exists in the compound eye systems of insects. It refers to the fact that a receptor is inhibited by the receptors around it, and this inhibitory effect is spatially additive. Moreover, a receptor is inhibited more strongly by receptors near it than by those far away.

For the overlapping FOV, we first extract the edges quickly by the lateral inhibition algorithm and then extract the target image by the background difference method. This approach is stable, accurate, and resistant to gray-scale changes, and it improves detection accuracy and sensitivity.

We adopt the centroid position tracking method to determine whether the target moves into the overlapping fields. When a sub-eye camera detects a target whose centroid is 4/5 of the image width from the farthest vertical edge, and meanwhile the adjacent sub-eye also detects the target with its centroid within 1/5 of the image width of the nearest edge, we consider the target to be in the overlapping field, and the system automatically switches to the lateral inhibition algorithm. After the target has been detected, the main camera turns to the bisector of the angle between the optical axes of the two sub-eye cameras.

We take each pixel as a sub-eye receptor. Spatial contrast is large at the edges of the target. According to the bionic lateral inhibition principle, the receptor that detects an edge is inhibited by its neighbors, and the inhibition is stronger from nearer receptors. We enhance the edge according to the inhibition coefficients [21] and analyze this method in the time domain below.

Take a simple two-unit inhibition network as an example. Let y_1, y_2 be the gray values of the input units. We assume:

$$y_2 = k y_1,$$
(10)

where 0 < k < 1. The outputs of the network are X_1 and X_2:

$$X_1 = y_1 - \beta X_2, \qquad X_2 = y_2 - \beta X_1,$$
(11)

where we set 0 < β < k < 1, so that X1/X2 is non-negative.

y1/y2 is used to measure the input contrast while X1/X2 describes the output contrast. According to (10) and (11), we have:

$$X_1 = \frac{y_1 - \beta y_2}{1 - \beta^2}, \qquad X_2 = \frac{y_2 - \beta y_1}{1 - \beta^2},$$
(12)

$$\frac{X_1/X_2}{y_1/y_2} = \frac{k(1 - \beta k)}{k - \beta} = \frac{k - \beta k^2}{k - \beta} > 1.$$
(13)

Equation 13 shows that the output contrast is larger than the input contrast, proving that the inhibition network enhances the target edge.
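As a quick numerical check of Equation 13: with k = 0.5 and β = 0.25, the ratio is (0.5 − 0.25 × 0.5²)/(0.5 − 0.25) = 0.4375/0.25 = 1.75, so this two-unit network amplifies the input contrast by 75%.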

The inhibition model of the overlapping fields is:

$$r_p = \sum_{j=1}^{m} k_{p,j}\, I_j.$$
(14)

For a 3 × 3 network, this corresponds to the image:

$$I(m, n) = f\left( \sum_{i=-1}^{1} \sum_{j=-1}^{1} \alpha_{i,j}\, I_0(m+i, n+j) \right) = f\left( R_0(m, n) \right),$$
(15)

where I(m, n) is the pixel gray value after the inhibition process; α_{i,j} is the lateral inhibition coefficient for position (i, j) in the network; f is a function describing the inhibition competition between input and output; and R_0(m, n) is the lateral inhibition response at position (m, n).

According to the mechanism of the compound eye vision system, the lateral relationship between a nerve cell in the compound eye and those surrounding it is relatively stable and consistent. Since there is no directional constraint on the edges, the weights are symmetric about the center. Suppose the central weight is α_{00} and the eight surrounding weights all equal α_1. Then the lateral inhibition response is as follows:

$$R_0(m, n) = \alpha_{00} \times I_0(m, n) + \alpha_1 \left( \sum_{i=-1}^{1} \sum_{j=-1}^{1} I_0(m+i, n+j) - I_0(m, n) \right)$$
(16)

As the optic cells lie on a plane of equal inhibition, the response to a uniform field should be approximately zero, so α_{00} + 8α_1 = 0. Here, we let α_{00} = 1 and α_1 = −0.125, and the inhibition template is as follows:

$$\begin{pmatrix} -0.125 & -0.125 & -0.125 \\ -0.125 & 1 & -0.125 \\ -0.125 & -0.125 & -0.125 \end{pmatrix}$$

Substituting the template coefficients into (16) and binarizing at a threshold T, we have:

$$I(m, n) = \begin{cases} 255, & R_0(m, n) \ge T \\ 0, & R_0(m, n) < T. \end{cases}$$
(17)
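The template of Equation (16) and the binarization of Equation (17) amount to a center-surround convolution followed by a threshold. A minimal SciPy-based sketch, where the boundary handling and names are our assumptions:

    import numpy as np
    from scipy.signal import convolve2d

    # 3 x 3 lateral inhibition template: the central weight is 1 and the eight
    # surrounding weights are -0.125, so the coefficients sum to zero.
    TEMPLATE = np.full((3, 3), -0.125)
    TEMPLATE[1, 1] = 1.0

    def lateral_inhibition_edges(gray, T):
        # Eq. (16): weighted sum R_0(m, n) over each 3 x 3 neighborhood
        r0 = convolve2d(gray.astype(np.float64), TEMPLATE,
                        mode="same", boundary="symm")
        # Eq. (17): binarize at threshold T to keep strong edges
        return np.where(r0 >= T, 255, 0).astype(np.uint8)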

Figure 5 compares the edge detection results with and without the lateral inhibition method. In Figure 5, (a) and (b) are background difference images from the no.1 and no.2 sub-eye cameras in the overlapping field using the lateral inhibition method, while (c) and (d) are background difference images after Roberts operator treatment [22]. The pictures show that the proposed method obtains the edge of the target in the overlapping FOV more accurately and clearly. The total detection time is 0.23 s.

Figure 5. Comparison of edge detection results with and without the lateral inhibition method. (a) Background difference image from the no.1 sub-eye camera using the lateral inhibition method. (b) Background difference image from the no.2 sub-eye camera using the lateral inhibition method. (c) Background difference image from the no.1 sub-eye camera using the Roberts operator. (d) Background difference image from the no.2 sub-eye camera using the Roberts operator.

5 Experiments of multi-target panoramic detection

We detect multiple targets continuously with this embedded monitoring network system, using the speed-priority mode. The experimental results are shown in Figure 6. Target 1 first appears in the FOV of the no.2 camera, and the main camera tracks it immediately. Later, target 2, which moves faster than target 1, appears in the FOV of the no.3 camera. The main camera then immediately turns to track target 2. Meanwhile, target 1 is still detected by the sub-eye cameras. Continuous tracking is thus realized.

Figure 6. Panoramic detection of continuous multiple targets. (a) Target 1 appears in the FOV of the no.2 camera. (b) Target 1 tracked by the main camera. (c) Target 2 appears in the FOV of the no.3 camera. (d) Target 2 tracked by the main camera.

6 Conclusions

This paper proposes a bionic compound eye sensing network and a continuous tracking strategy for panoramic target tracking. We introduced the system structure and the related algorithms. The experimental results show that the system achieves a panoramic view with high sensitivity and continuity and extracts moving targets clearly, stably, and accurately. The system can be widely used in the security surveillance industry.

References

  1. Black J, Ellis TJ, Makris D: Wide area surveillance with a multi camera network. In Proceedings of the Intelligent Distributed Surveillance Systems. London; 2004:21-25.


  2. Huang F, Shen X, Wang Q, Zhou B, Hu W, Shen H, Li L: Correction method for fisheye image based on the virtual small-field camera. Opt. Lett. 2013, 38(9):1392-1394. 10.1364/OL.38.001392


  3. Liang Q: Biologically-inspired target recognition in radar sensor networks. EURASIP J. Wirel. Commun. Netw. 2010, 2010: 523435.


  4. Liang Q, Cheng X, Samn S: NEW: network-enabled electronic warfare for target recognition. IEEE T. Aero. Elec. Sys. 2010, 46(2):558-568.


  5. Liang Q: Automatic target recognition using waveform diversity in radar sensor networks. Pattern Recognit. Lett. 2008, 29(2):377-381.


  6. Liang Q, Cheng X: KUPS: knowledge-based ubiquitous and persistent sensor networks for threat assessment. IEEE Trans. Aerosp. Electron. Syst. 2008, 44(3):1060-1069.


  7. Liang Q: Waveform design and diversity in radar sensor networks: theoretical analysis and application to automatic target recognition. In Third Annual IEEE Communications Society on Sensor and Ad Hoc Communications and Networks. Volume 2. Reston; 2006:684-689.


  8. Liang Q: Situation understanding based on heterogeneous sensor networks and human-inspired favor weak fuzzy logic system. IEEE Syst. J. 2011, 5(2):156-163.


  9. Liang Q: Radar sensor networks: algorithms for waveform design and diversity with application to ATR with delay-Doppler uncertainty. EURASIP J. Wirel. Commun. Netw. 2007, 2007: 89103. 10.1155/2007/89103


  10. Zhong Z, Liang Q, Wang L: Biologically-inspired energy efficient distributed acoustic sensor networks. Ad Hoc & Sensor Wireless Networks 2011, 13(1–2):1-12.


  11. Horisaki R, Irie S, Ogura Y, Tanida J: Three-dimensional information acquisition using a compound imaging system. Opt. Rev. 2007, 14(5):347-350. 10.1007/s10043-007-0347-z


  12. Duparré JW, Wippermann FC: Micro-optical artificial compound eyes. Bioinspir. Biomim. 2006, 1(1):R1-16. 10.1088/1748-3182/1/1/R01


  13. Krishnasamy R, Wong W, Shen E, Pepic S, Hornsey R, Thomas PJ: High precision target tracking with a compound-eye image sensor. Can. Con. El. Comp. En. 2004, 4: 2319-2323.


  14. Strausfeld NJ, Campos-Ortega JA: Vision in insects: pathways possibly underlying neural adaptation and lateral inhibition. Science 1977, 195(4281):894-897. 10.1126/science.841315


  15. Yu X, Liu J, Sheng Q: The application for underwater special monitoring equipment based on the PELCO-D protocol. Appl. Mech. Mater. 2012, 217–219: 2550-2554.


  16. Tai J, Tseng S, Lin C, Song K: Real-time image tracking for automatic traffic monitoring and enforcement applications. Image Vision Comput. 2004, 22: 485-501. 10.1016/j.imavis.2003.12.001


  17. Liang R, Yan L, Gao P, Qian X, Zhang Z, Sun H: Aviation video moving-target detection with inter-frame difference. 3rd International Congress on Image and Signal Processing (CISP) 2010, 3: 1494-1497.


  18. Yan R, Song X, Yan S: Moving object detection based on an improved Gaussian mixture background model. In ISECS International Colloquium on Computing, Communication, Control, and Management. Volume 1. Sanya; 2009:12-15.


  19. Al-amri SS, Kalyankar NV, Khamitkar SD: Image segmentation by using threshold techniques. J. Comput. 2010, 2(5):83-86.


  20. Soille P, Vogt P: Morphological segmentation of binary patterns. Pattern Recognit. Lett. 2009, 30(4):456-459. 10.1016/j.patrec.2008.10.015


  21. Gao K, Dong M, Li D, Cheng W: An algorithm of extracting infrared image edge based on lateral inhibition network and wavelet phase filtration. In 9th International Conference on Electronic Measurement & Instruments (ICEMI '09). Beijing; 2009:303-307.


  22. Bansal B, Saini JS, Bansal V, Kaur G: Comparison of various edge detection techniques. J. Inform. Oper. Manag. 2012, 3(1):103-106.



Acknowledgments

This study was supported by the National Basic Research Program of China (973 Program, grant no. 2011CB706705), the National Natural Science Foundation of China (nos. 90923038 and 51175377), and the Tianjin Natural Science Foundation (no. 12JCQNJC02700).

Author information

Correspondence to Le Song.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Song, L., Zhang, Z. & Zhang, H. A biologically-inspired embedded monitoring network system for moving target detection in panoramic view. J Wireless Com Network 2013, 175 (2013). https://doi.org/10.1186/1687-1499-2013-175

