- Open Access
A biologically-inspired embedded monitoring network system for moving target detection in panoramic view
© Song et al.; licensee Springer. 2013
- Received: 22 May 2013
- Accepted: 30 May 2013
- Published: 24 June 2013
An embedded monitoring network system based on the visual principle of the compound eye is presented, which meets the requirements of a panoramic monitoring network for field angle, detection efficiency, and structural complexity. Three fixed wide-angle cameras are adopted as sub-eyes, and a main camera is installed on a high-speed platform. The system ensures continuous tracking with high sensitivity and accuracy over a field of view (FOV) of 360 × 180°. In the non-overlapping FOV of the sub-eyes, we adopt a Gaussian background difference model and a morphological algorithm to detect moving targets. In the overlapping FOV, we instead use a lateral inhibition network strategy, which improves the continuity of detection and the speed of response. The experimental results show that our system locates a target within 0.15 s after it starts moving in the non-overlapping field and within 0.23 s in the overlapping field. The system reduces the cost and complexity of a traditional panoramic monitoring network and lessens the labor intensity of monitoring.
- Moving targets
- Panoramic monitoring
- Gaussian background difference
- Lateral inhibition
- Detection network
Conventional surveillance cameras have a limited field of view (FOV) and cannot achieve continuous panoramic monitoring over a 360 × 180° FOV. To solve this problem, a parallel network of multiple cameras is commonly used to cover the panoramic monitoring area [1]. However, such a network is expensive and complex. Worse still, multi-channel parallel video processing may degrade the real-time performance of the system and increase the misjudgment rate. In recent years, fisheye lenses [2] have gradually become popular in panoramic monitoring. However, distortion is large at the edge of the field, where no effective information can be obtained.
Biologically-inspired design methods are developing rapidly [3–10]. The compound eye vision system of insects has a large FOV and high sensitivity, which gives it advantages over conventional vision systems in applications such as community monitoring, robot vision, and intelligent vehicles. A system built on this principle can combine small volume, light weight, a large field of view, and high sensitivity to moving targets. A compound eye vision system obtains original image information from different directions at the same time. Its unique structure significantly enlarges the field range and provides a paraxial optical path for each view angle, which decreases distortion [11–13]. Besides, the concept of lateral inhibition [14] among sub-eyes can be used in an artificial bionic network to improve sensitivity. Therefore, a bionic compound eye network can robustly realize continuous tracking and locating of moving targets in panoramic view. It provides a new mode for the development of detectors and sensors.
Starting from the insects' compound eye system, this paper describes an embedded network system for continuous tracking of moving targets in panoramic view. Panoramic detection with low distortion is realized by multiple cameras. Meanwhile, global low-speed acquisition and local high-speed image acquisition are combined to shorten the time consumed by the detection algorithm and to improve sensitivity. Besides, a high-resolution automatic tracking mode and lateral inhibition are used to overcome the limitations of current systems in field angle, detection efficiency, and structural complexity.
2.1 System components and setup
Three wide-angle cameras are fixed in a ring; we call them ‘sub-eye cameras.’ Each sub-eye camera covers about 120° in the horizontal plane and 180° in the meridian plane, so the three cameras together cover 360° horizontally, and panoramic video information over a 360 × 180° space angle is obtained. Each sub-eye camera has a 1/3-in. charge-coupled device (CCD) with a resolution of 704 × 576.
2.2 Detection process
After a setup process, the three sub-eye cameras start global sub-sampling, that is, sampling pixel values on alternate lines. When a sub-eye camera detects a moving target, it switches to full-resolution sampling mode, that is, sampling every pixel value. It then extracts the centroid of the target and calculates the distance between the target and its optical axis. According to the calibrated position of the main camera, the visual information obtained by the sub-eye cameras is delivered to the main camera through serial communication using the PELCO-D protocol [15]. The main camera immediately turns toward the target and tracks it at high resolution. The main camera zooms automatically and thus accurately locates and images targets at various distances. The images are saved in real time by a flip-flop register until the target leaves the FOV.
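As an illustration, the two sampling modes can be sketched in a few lines of NumPy (the array shape matches the 704 × 576 sub-eye CCD; the helper names are ours, not part of the original system):

```python
import numpy as np

def subsample(frame: np.ndarray) -> np.ndarray:
    """Global sub-sampling: keep only alternate lines, halving the
    number of pixels the detection algorithm must process."""
    return frame[::2, :]

def full_resolution(frame: np.ndarray) -> np.ndarray:
    """Full-resolution sampling: every pixel value is kept."""
    return frame

# A 576 x 704 sub-eye frame shrinks to 288 x 704 in detection mode.
frame = np.zeros((576, 704), dtype=np.uint8)
detection_view = subsample(frame)
```

Processing half the lines in detection mode is what allows the sub-eyes to run faster than a full-resolution scan would.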
When the target moves out of the FOV of the main camera, the sub-eye cameras restart detection mode. As soon as one of them spots a moving target, it immediately sends a signal to the main camera, which locates the target again. Panoramic continuous detection is thus achieved.
When multiple targets are spotted at the same time, the system adopts a default detection mode (size-priority mode, speed-priority mode, etc.). The entire process does not need complex manual operation.
3.1 Self-adaptive Gaussian background difference method
After the sub-eye cameras obtain images by global sub-sampling, primary detection is done by the background difference method. Commonly used target extraction algorithms include the background difference method [16] and the frame difference method [17]. We adopt an adaptive Gaussian background model to obtain the foreground target and update the background model synchronously.
We assume that the background changes follow a random probability distribution. To handle the unpredictability that arises when the light changes slowly, we build a sub-eye self-adaptive Gaussian background model, which adapts to different environmental changes and gives a better background estimate.
In the Gaussian background model, we assume that each pixel value f(x, y) follows a Gaussian distribution in the time domain [18]. We establish a Gaussian model for each pixel in view. By fitting each new frame to the Gaussian model, we extract the background image. The background is updated synchronously to make the algorithm adaptive.
- (1) Background image initialization. The model is initialized pixel by pixel from the mean and variance of the first N frames:

  μ0(x, y) = (1/N) Σi=1..N fi(x, y), (2)

  σ0²(x, y) = (1/N) Σi=1..N [fi(x, y) − μ0(x, y)]², (3)

  and the initial background is B0(x, y) = μ0(x, y).
- (2) Background image update. After the background model is built, we subtract it from the current frame to obtain a difference image and compare each pixel against a threshold (Section 3.2 describes in detail how the threshold is selected). Pixels whose difference exceeds the threshold are marked as moving-target points; the remaining pixels update the background:

  Bk+1(x, y) = Bk(x, y) if |fk(x, y) − Bk(x, y)| > Th; otherwise Bk+1(x, y) = (1 − A) Bk(x, y) + A fk(x, y), (4)
where A is a modulation factor. This makes the background update automatically according to a certain statistical regularity.
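To make the model concrete, the following is a minimal per-pixel single-Gaussian sketch in NumPy. The parameter a stands in for the modulation factor A, and k (a threshold scale on the per-pixel standard deviation) is our illustrative addition; the paper's exact update rule may differ in detail:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel single-Gaussian background model with running update."""

    def __init__(self, first_frames: np.ndarray, a: float = 0.05, k: float = 2.5):
        # Initialize mean and variance from an initial stack of frames.
        stack = first_frames.astype(np.float64)
        self.mean = stack.mean(axis=0)
        self.var = stack.var(axis=0) + 1e-6  # avoid zero variance
        self.a, self.k = a, k

    def apply(self, frame: np.ndarray) -> np.ndarray:
        """Return a boolean foreground mask and update the background."""
        f = frame.astype(np.float64)
        foreground = np.abs(f - self.mean) > self.k * np.sqrt(self.var)
        # Update mean/variance only where the pixel matched the background,
        # so moving targets do not pollute the model.
        bg = ~foreground
        self.mean[bg] = (1 - self.a) * self.mean[bg] + self.a * f[bg]
        self.var[bg] = (1 - self.a) * self.var[bg] + self.a * (f[bg] - self.mean[bg]) ** 2
        return foreground

# Build the model on a static scene, then present a frame with a bright blob.
frames = np.full((5, 8, 8), 50.0)
model = GaussianBackground(frames)
scene = np.full((8, 8), 50.0)
scene[2:4, 2:4] = 200.0
mask = model.apply(scene)
```

Only the pixels that agree with the model contribute to the update, which is what lets the background track slow illumination changes without absorbing moving targets.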
3.2 The adaptive threshold
After the background model is built, the difference image must be binarized to show the target. It is important to select an appropriate threshold Th: if the threshold is too large, a target point may be mistaken for a background point; if it is too small, a background point may be mistaken for a target point.
- (1) Calculate the maximum and minimum gray values t1 and tk, and initialize the threshold as T0 = (t1 + tk)/2.
- (2) Segment the image with Tk into two parts, target and background. Count the pixels in each part, N1k and N2k, and calculate the average gray levels of the two parts, tA and t0:

  tA = (1/N1k) Σf(x,y)>Tk f(x, y), t0 = (1/N2k) Σf(x,y)≤Tk f(x, y). (9)
- (3) Calculate the new threshold Tk+1 = (tA + t0)/2.
- (4) If Tk = Tk+1 or k > M, then Tk is the suitable global threshold and the iteration stops. Otherwise, go to step (2) and continue the iteration. M is the maximum number of iterations.
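The steps above can be sketched as a straightforward NumPy loop (the floating-point convergence tolerance is our own addition, since class means rarely match exactly in floating point):

```python
import numpy as np

def iterative_threshold(img: np.ndarray, max_iter: int = 100) -> float:
    """Iterative global threshold selection: start from the mid-point of
    the gray range, then repeatedly average the mean gray levels of the
    two classes produced by the current threshold."""
    t = (float(img.max()) + float(img.min())) / 2.0
    for _ in range(max_iter):
        target = img[img > t]       # pixels classified as target
        background = img[img <= t]  # pixels classified as background
        if target.size == 0 or background.size == 0:
            break
        t_new = (target.mean() + background.mean()) / 2.0
        if abs(t_new - t) < 1e-3:   # Tk == Tk+1 up to float tolerance
            return t_new
        t = t_new
    return t

# A bimodal image with gray levels 10 and 200 converges near their midpoint.
img = np.zeros((10, 10))
img[:, :5] = 10.0
img[:, 5:] = 200.0
th = iterative_threshold(img)
```

For a clean bimodal histogram the loop converges in one or two iterations; M merely guards against oscillation on noisy images.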
Moreover, the target region in the resulting binary image has shadows and discontinuities. We use the morphological opening operation [20] to enhance the target image.
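A self-contained sketch of opening with a 3 × 3 structuring element follows, implemented directly in NumPy rather than with an image-processing library (the structuring element size is our illustrative choice):

```python
import numpy as np

def _shifted_windows(mask: np.ndarray):
    """Yield the 9 views of `mask` shifted over a 3x3 neighborhood."""
    h, w = mask.shape
    p = np.pad(mask, 1, mode="constant", constant_values=False)
    for di in range(3):
        for dj in range(3):
            yield p[di:di + h, dj:dj + w]

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 erosion: a pixel survives only if its whole neighborhood is set."""
    out = np.ones(mask.shape, dtype=bool)
    for view in _shifted_windows(mask):
        out &= view
    return out

def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 dilation: a pixel is set if anything in its neighborhood is set."""
    out = np.zeros(mask.shape, dtype=bool)
    for view in _shifted_windows(mask):
        out |= view
    return out

def opening(mask: np.ndarray) -> np.ndarray:
    """Opening = erosion then dilation: removes isolated noise pixels
    while restoring the shape of larger target blobs."""
    return dilate(erode(mask))

# A 4x4 target blob survives opening; an isolated noise pixel does not.
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True
mask[8, 8] = True
cleaned = opening(mask)
```

This is why opening suppresses the speckle left by the background difference while keeping the target's outline intact for centroid extraction.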
After the target is extracted from the sub-eye images, we extract its centroid to determine whether the target has moved into the overlapping FOV. Once it moves out of a certain rectangular boundary region, the system automatically switches to the algorithm for the overlapping FOV.
3.3 Experiment of tracking in non-overlapping FOV
In Figure 4, the no. 1 sub-eye camera detects a moving target in its 96th frame. The target is in the non-overlapping FOV and not yet in the view of the main camera. The main camera then rotates toward the target. By the time the no. 1 sub-eye camera captures its 100th frame, the target appears in the FOV of the main camera. The detection takes 0.15 s.
In comparison, if the sub-eye cameras sampled every pixel from the beginning, the detection would take about 0.4 s. The sub-sampling method thus clearly improves detection sensitivity.
The detection strategy in the overlapping FOV differs from that used in the non-overlapping fields. Here two sub-eye cameras are involved, so if we simply apply the background difference method used in the non-overlapping FOV, the extracted edge of the target is blurred, incomplete, and shadowed; the stability and efficiency of the algorithm are relatively low; and it is difficult to determine the centroid position of the target. We therefore adopt the lateral inhibition algorithm conventionally used in bionic compound eye systems, which extracts the edges of moving targets stably.
The phenomenon of lateral inhibition widely exists in the compound eye systems of insects. It refers to the fact that a receptor is inhibited by the receptors around it, and this inhibitory effect is spatially additive. Moreover, a receptor is inhibited more strongly by nearby receptors than by those farther away.
For the overlapping FOV, we first extract the edge quickly with the lateral inhibition algorithm and then extract the target image by the background difference method. This algorithm is stable, accurate, and resistant to gray-scale change, and it improves detection accuracy and sensitivity.
We adopt the centroid position tracking method to determine whether the target has moved into the overlapping field. When one sub-eye camera detects a target whose centroid lies 4/5 of the image width from the far vertical edge, and the adjacent sub-eye camera also detects the target with its centroid within 1/5 of the image width from the near edge, we consider the target to be in the overlapping field, and the system automatically switches to the lateral inhibition algorithm. After the target is detected, the main camera turns toward the bisector of the angle between the optical axes of the two sub-eye cameras.
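One plausible reading of this rule can be sketched as follows. We assume each centroid's x coordinate is measured from the image edge farther from the neighboring camera, so the last fifth of camera A's width and the first fifth of camera B's width both lie near the shared boundary; the function name and this coordinate convention are ours:

```python
def target_in_overlap(cx_a: float, width_a: float,
                      cx_b: float, width_b: float) -> bool:
    """True when camera A sees the centroid in the last fifth of its
    image width (toward the adjacent camera B) while camera B sees the
    same target within the first fifth of its own width."""
    return cx_a >= 0.8 * width_a and cx_b <= 0.2 * width_b

# With 704-pixel-wide frames: a target near the shared boundary of both
# cameras triggers the switch; one seen only centrally does not.
near_boundary = target_in_overlap(600, 704, 100, 704)
central = target_in_overlap(300, 704, 100, 704)
```

Requiring both conditions at once prevents the system from switching algorithms on a target that only one camera sees near its edge.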
We take each pixel as a sub-eye receptor. The spatial contrast is large at the edge of the target. According to the bionic lateral inhibition principle, the receptor that detects the edge is inhibited by its neighboring receptors, and the inhibition is stronger from nearer receptors. We enhance the edge according to the inhibition coefficient [21]. We analyze this method in the time domain below.
where we set 0 < β < k < 1, so that X1/X2 is non-negative.
Equation 13 shows that the output contrast is larger than the input contrast, which proves that the inhibition network enhances the target edge.
where I(m, n) is the pixel gray value after the inhibition process; αi,j is the lateral inhibition coefficient for position (i, j) in the network; f is a function describing the inhibition competing relationship between input and output; and R0(m, n) is the original gray value at position (m, n) in the network.
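As an illustrative sketch of such a network, the following uses a uniform inhibition coefficient alpha over the 8 nearest receptors, with the center weight chosen so that flat regions pass through unchanged; a real compound-eye network would weight nearer receptors more strongly, and alpha here is our own parameter:

```python
import numpy as np

def lateral_inhibition(img: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Each pixel (receptor) is inhibited by its 8 neighbors with
    coefficient `alpha`; the center weight (1 + 8*alpha) keeps flat
    regions unchanged while step edges come out with larger contrast."""
    f = img.astype(np.float64)
    h, w = f.shape
    p = np.pad(f, 1, mode="edge")
    surround = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            if di == 1 and dj == 1:
                continue  # skip the receptor itself
            surround += p[di:di + h, dj:dj + w]
    return (1.0 + 8.0 * alpha) * f - alpha * surround

# A vertical step edge: the bright side is raised and the dark side
# depressed, so the output contrast across the edge exceeds the input.
img = np.zeros((6, 6))
img[:, 3:] = 100.0
out = lateral_inhibition(img, 0.1)
```

On this step input the jump across the edge grows from 100 to 160 gray levels while uniform regions keep their original values, which is the contrast-enhancement property described above.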
This paper proposes a bionic compound eye sensing network and a continuous tracking strategy for panoramic target tracking. We introduced the system structure and the related algorithms. The experimental results show that the system offers a panoramic view with high sensitivity and continuity and extracts moving targets clearly, stably, and accurately. The system can be widely used in the security surveillance industry.
This study was supported by the National Basic Research Program of China (973 Program, grant no. 2011CB706705), the National Natural Science Foundation of China (nos. 90923038, 51175377), and the Tianjin Natural Science Foundation (no. 12JCQNJC02700).
1. Black J, Ellis TJ, Makris D: Wide area surveillance with a multi camera network. In Proceedings of the Intelligent Distributed Surveillance Systems. London; 2004:21-25.
2. Huang F, Shen X, Wang Q, Zhou B, Hu W, Shen H, Li L: Correction method for fisheye image based on the virtual small-field camera. Opt. Lett. 2013, 38(9):1392-1394. doi:10.1364/OL.38.001392
3. Liang Q: Biologically-inspired target recognition in radar sensor networks. EURASIP J. Wirel. Commun. Netw. 2010, 2010:523435.
4. Liang Q, Cheng X, Samn S: NEW: network-enabled electronic warfare for target recognition. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(2):558-568.
5. Liang Q: Automatic target recognition using waveform diversity in radar sensor networks. Pattern Recognit. Lett. 2008, 29(2):377-381.
6. Liang Q, Cheng X: KUPS: knowledge-based ubiquitous and persistent sensor networks for threat assessment. IEEE Trans. Aerosp. Electron. Syst. 2008, 44(3):1060-1069.
7. Liang Q: Waveform design and diversity in radar sensor networks: theoretical analysis and application to automatic target recognition. In Third Annual IEEE Communications Society on Sensor and Ad Hoc Communications and Networks. Volume 2. Reston; 2006:684-689.
8. Liang Q: Situation understanding based on heterogeneous sensor networks and human-inspired favor weak fuzzy logic system. IEEE Syst. J. 2011, 5(2):156-163.
9. Liang Q: Radar sensor networks: algorithms for waveform design and diversity with application to ATR with delay-Doppler uncertainty. EURASIP J. Wirel. Commun. Netw. 2007, 2007:89103. doi:10.1155/2007/89103
10. Zhong Z, Liang Q, Wang L: Biologically-inspired energy efficient distributed acoustic sensor networks. Ad Hoc & Sensor Wireless Networks 2011, 13(1–2):1-12.
11. Horisaki R, Irie S, Ogura Y, Tanida J: Three-dimensional information acquisition using a compound imaging system. Opt. Rev. 2007, 14(5):347-350. doi:10.1007/s10043-007-0347-z
12. Duparré JW, Wippermann FC: Micro-optical artificial compound eyes. Bioinspir. Biomim. 2006, 1(1):R1-R16. doi:10.1088/1748-3182/1/1/R01
13. Krishnasamy R, Wong W, Shen E, Pepic S, Hornsey R, Thomas PJ: High precision target tracking with a compound-eye image sensor. Can. Conf. El. Comp. En. 2004, 4:2319-2323.
14. Strausfeld NJ, Campos-Ortega JA: Vision in insects: pathways possibly underlying neural adaptation and lateral inhibition. Science 1977, 195(4281):894-897. doi:10.1126/science.841315
15. Yu X, Liu J, Sheng Q: The application for underwater special monitoring equipment based on the PELCO-D protocol. Appl. Mech. Mater. 2012, 217–219:2550-2554.
16. Tai J, Tseng S, Lin C, Song K: Real-time image tracking for automatic traffic monitoring and enforcement applications. Image Vision Comput. 2004, 22:485-501. doi:10.1016/j.imavis.2003.12.001
17. Liang R, Yan L, Gao P, Qian X, Zhang Z, Sun H: Aviation video moving-target detection with inter-frame difference. In 3rd International Congress on Image and Signal Processing (CISP) 2010, 3:1494-1497.
18. Yan R, Song X, Yan S: Moving object detection based on an improved Gaussian mixture background model. In ISECS International Colloquium on Computing, Communication, Control, and Management. Volume 1. Sanya; 2009:12-15.
19. Al-amri SS, Kalyankar NV, Khamitkar SD: Image segmentation by using threshold techniques. J. Comput. 2010, 2(5):83-86.
20. Soille P, Vogt P: Morphological segmentation of binary patterns. Pattern Recognit. Lett. 2009, 30(4):456-459. doi:10.1016/j.patrec.2008.10.015
21. Gao K, Dong M, Li D, Cheng W: An algorithm of extracting infrared image edge based on lateral inhibition network and wavelet phase filtration. In 9th International Conference on Electronic Measurement & Instruments (ICEMI '09). Beijing; 2009:303-307.
22. Bansal B, Saini JS, Bansal V, Kaur G: Comparison of various edge detection techniques. J. Inform. Oper. Manag. 2012, 3(1):103-106.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.