 Research
 Open Access
Camshift tracking method based on correlation probability graph for model pig
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 108 (2020)
Abstract
The identification and tracking of model pigs, as a vital research topic for studying their habits, has drawn increasing attention. To meet practical requirements for tracking non-salient model pigs in a breeding environment, a Camshift tracking approach based on a correlation probability graph, i.e., CamTracor−PG, is proposed in this paper, in which the correlation probability graph is introduced to achieve target positioning and tracking. Technically, images are acquired through a vision sensor; then, according to the circular arrangement of pixels in the inverse probability projection graph, the inverse projection probability value of each pixel is multiplied by a weighted sum over its surrounding pixels. The target projection grayscale graph is then established from the resulting correlation probability values for the positioning, identification, and tracking of model pigs. Finally, extensive experiments are conducted to validate the reliability and efficiency of our approach.
Introduction
With the ever-increasing significance of model pigs in fields such as the life sciences, medicine, and health, the identification and tracking of model pigs has become a hot issue in machine vision research [1, 2]. Based on the video monitoring system of a pig farm, comparative research on the detection and tracking of pig objects by machine vision has been conducted on videos of model pig motion [3, 4]. In combination with the living habits of pigs, abnormality evaluation systems based on pig movement tracks have been established [5–7]. However, due to the complexity of the environment and the interference of similar backgrounds, such methods fall short in recognizing the target object. Complex environment: there are railings, fodder, and so on within the breeding environment [8, 9]. Similar background: (1) the colors of the model pigs are indistinguishable from their surroundings [10, 11]; (2) the skin colors of different model pigs are similar [12, 13]. Moreover, in light of the strong real-time demands of a typical tracking system, stringent requirements are imposed on the computational complexity of the target recognition and tracking algorithm, which means that moving targets must be accurately identified and tracked with only a handful of calculations [14–16]. In view of these challenges, we develop a probabilistic graphical model (PGM) mechanism that suppresses the probability values of both the background and the target in the projected image, with the background suppressed far more strongly, and we put forward a novel Camshift tracking approach based on a correlation probability graph, called CamTracor−PG. CamTracor−PG achieves a good trade-off among recognition accuracy, scalability, and computational expense.
The remainder of this paper is organized as follows. In Section 2, the proposed Camshift tracking method is presented. In Section 3, we compare experimental results in three different cases and demonstrate the superiority of our method. Related work is briefly surveyed in Section 4. In Section 5, we summarize the paper and point out future research directions.
The design of the tracking method
Correlated calculation of probabilistic projection graphs
The Camshift tracking method employs the chrominance (H) information in the HSV color space to establish a histogram model of the target [17, 18]. Subsequently, on the basis of the established histogram, an inverse probability projection graph of the target in the tracking window is set up [19, 20]. Since the histogram model is relatively simple to establish, the computational cost of the tracking method is low [21, 22].
Assume that the chromaticity value in the HSV color space is divided into m levels [23] and that there are a total of S pixels in the target area, where the coordinate position of the ith pixel is (xi,yi), i=1,2,…,S, and the corresponding chromaticity value of that point is b(xi,yi) [24, 25]. Then, the target chromaticity histogram model of the target area can be obtained by the following formula.
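The formula of Eq. (1) is not reproduced in this version of the text. In the standard Camshift formulation, which is consistent with the notation above, the histogram model is a normalized count over chromaticity levels; a plausible reconstruction (with δ the Kronecker delta) is:

```latex
q_u = \frac{1}{S} \sum_{i=1}^{S} \delta\left[\, b(x_i, y_i) - u \,\right],
\qquad u = 1, 2, \dots, m,
```

so that \(\sum_{u=1}^{m} q_u = 1\).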
Let the chromaticity value of the pixel at position (x,y) in the tracking window be u. According to the target histogram model of Eq. (1), the inverse projection probability value at this point can be obtained:
The pixel gray value represents the probability value, and the projection gray value corresponding to the above inverse probability value is p_{g}(x,y):
The symbol “ ⌊⌋” signifies rounding down (the floor operation). Computing the gray value of the inverse probability projection through Eq. (3) for all pixels within the tracking window yields the inverse probability projection graph. The gray value of a pixel in the inverse probability projection graph varies from 0 to 255. In particular, a pixel with a gray value of 255 appears white, meaning that the pixel very likely belongs to the target area, while a pixel with a gray value of 0 is black, indicating that the probability that the pixel belongs to the target area is comparatively tiny.
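As a concrete illustration of Eqs. (1)–(3), the following minimal NumPy sketch builds the hue histogram and the inverse probability projection graph. The quantization of hue values into m levels and the normalization by the histogram maximum are assumptions where the paper's exact equations are not reproduced here:

```python
import numpy as np

def hue_histogram(target_hue, m=16):
    """Chromaticity histogram model q = {q_u} of the target region (Eq. (1) style)."""
    levels = (target_hue.astype(np.int64) * m) // 256   # quantize hue into m levels
    hist = np.bincount(levels.ravel(), minlength=m).astype(float)
    return hist / hist.sum()                            # normalized: sum(q_u) = 1

def back_projection(window_hue, q, m=16):
    """Inverse probability projection graph of the tracking window (Eqs. (2)-(3) style).

    Each pixel receives the histogram probability of its own hue level, scaled
    to a gray value in [0, 255] and floored; scaling by q.max() is an assumption.
    """
    levels = (window_hue.astype(np.int64) * m) // 256
    p = q[levels]                                       # per-pixel probability
    return np.floor(255.0 * p / q.max()).astype(np.uint8)
```

A window pixel whose hue matches the target model projects to 255; a hue absent from the target histogram projects to 0.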
The above-described target inverse probability projection graph considers only the individual information of each pixel; the correlation between adjacent pixels is not taken into account. Therefore, if the background chromaticity is similar to the target chromaticity, each background pixel will also obtain a high probability value in the inverse projection graph, which causes severe interference with object identification.
Considering this challenge, there are two ways to deal with it: increase the probability gray value of the target area within the inverse projection graph, or suppress the probability gray value of the background area. Hence, we propose a chromaticity correlation calculation on the inverse probability projection graph obtained by Eq. (2); that is, each pixel is associated with the probability values of its surroundings so as to determine the inverse projection correlation probability value. The distribution of a pixel a_{0,0} and its surrounding pixels in the inverse projection graph is depicted in Fig. 1.
Here, a_{i,j} represents the jth pixel on the ith ring around a_{0,0}, and the probability value calculated by Eq. (2) for this pixel is \( p^{0}_{i,j} \). A correlation calculation is performed on the probability value \( p^{0}_{0,0}\) of the pixel a_{0,0} to obtain \(p^{1}_{0,0}\):
According to Eq. (4), the results of multiple correlation calculations can be deduced. For example, the k-fold correlation result for pixel a_{0,0} is:
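Equations (4) and (5) are not reproduced above. One reading consistent with the text (the probability of a_{0,0} multiplied by a weighted sum over its N surrounding rings, with weights w_{i,j} summing to 1) is:

```latex
p^{1}_{0,0} = p^{0}_{0,0} \sum_{i=1}^{N} \sum_{j} w_{i,j}\, p^{0}_{i,j},
\qquad
p^{k}_{0,0} = p^{k-1}_{0,0} \sum_{i=1}^{N} \sum_{j} w_{i,j}\, p^{k-1}_{i,j}.
```

Since every factor lies in [0, 1], repeated correlation can only shrink a pixel's probability, and it shrinks fastest where the neighborhood probabilities are low, i.e., in the background.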
Although the probability values of the target area are also suppressed by Eq. (5), the probability values of the background area are suppressed far more significantly. In this way, we achieve a good trade-off between highlighting the target area and suppressing the background area, ultimately making the target more prominent. Next, the probability values obtained by Eq. (5) are normalized, and the inverse projection probability gray value is calculated as follows:
Here, \(p^{k}_{\text {max}}\) denotes the maximum probability value in the tracking window after k correlation calculations, that is, \(p^{k}_{\text {max}}= \text {max}\left \lbrace p^{k}_{i,j}\right \rbrace \). The accuracy of target recognition is considerably improved by using the inverse projection probability graph after correlation calculation to identify and locate the target.
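A minimal NumPy sketch of the correlation step and the normalization of Eq. (6), assuming one surrounding ring (N = 1) and uniform neighborhood weights where the paper's exact weights are not reproduced:

```python
import numpy as np

def correlate_probability(p, k=1):
    """k-fold neighborhood correlation of a back-projection probability map.

    Sketch of Eqs. (4)-(6): each pixel's probability is multiplied by the mean
    probability of its eight surrounding pixels (one ring, N = 1). Isolated
    high-probability background pixels shrink much faster than pixels inside a
    coherent target region. Uniform weights are an assumption.
    """
    p = p.astype(float)
    for _ in range(k):
        padded = np.pad(p, 1, mode="edge")
        # sum over the 3x3 neighborhood, then drop the center pixel itself
        neigh = sum(padded[i:i + p.shape[0], j:j + p.shape[1]]
                    for i in range(3) for j in range(3)) - p
        p = p * (neigh / 8.0)
    if p.max() == 0:
        return np.zeros_like(p, dtype=np.uint8)
    # Eq. (6): renormalize by the window maximum and floor to a gray value
    return np.floor(255.0 * p / p.max()).astype(np.uint8)
```

On a map containing a coherent 3×3 block and one isolated bright pixel, the block's center keeps the full gray value of 255 while the isolated pixel is visibly suppressed, which is exactly the behavior the text describes.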
Algorithm description
Specifically, the algorithm for target recognition and tracking is described in detail as follows:
Step 1. Select the tracked target and employ the foregoing Eq. (1) to establish the chromaticity histogram model for the tracked target.
q={q_{u}},u=1,2,…,m
Step 2. Calculate the inverse projection probability value by Eq. (2).
Step 3. Compute the k-fold correlation probability value through Eq. (5).
Step 4. Build the inverse projection probability graph by using Eq. (6).
Step 5. Compute the zeroth- and first-order moments of the search window based on the gray values of the inverse probability projection graph:
Step 6. Compute the centroid position (x_{c},y_{c}) of the search window by using the zeroth- and first-order moments obtained in step 5:
Step 7. Adaptively adjust the side length of the search window:
The center of the search window is then shifted to the centroid of the search window. The drift distance is compared with a preset threshold: if the drift distance is greater than the threshold, steps 5–7 are repeated; once the drift distance falls below the threshold, the algorithm continues to step 8.
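Steps 5–7 can be sketched as follows. The window-side rule s = 2√(M00/256) is the standard Camshift choice and is assumed here for the unreproduced Eq. (10):

```python
import numpy as np

def mean_shift_step(gray, x0, y0, w, h):
    """Moments, centroid, and adaptive window side for steps 5-7.

    gray is the inverse probability projection graph; (x0, y0, w, h) is the
    search window (top-left corner plus size).
    """
    win = gray[y0:y0 + h, x0:x0 + w].astype(float)
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    m00 = win.sum()                    # zeroth-order moment
    m10 = (xs * win).sum()             # first-order moment in x
    m01 = (ys * win).sum()             # first-order moment in y
    xc, yc = m10 / m00, m01 / m00      # centroid of the search window
    s = 2.0 * np.sqrt(m00 / 256.0)     # adaptively updated window side length
    return xc, yc, s
```

The caller shifts the window center to (xc, yc) and repeats until the drift distance falls below the threshold.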
Step 8. Calculate the second-order moments of the search window based on the correlation probability values:
According to the second-order moments obtained above, the following three parameters are calculated:
Accordingly, on the basis of the obtained parameters, the size and direction of the target area are adaptively updated. Concretely, the length, width, and direction of the target area are updated as in (19), (20), and (21), respectively:
The width of the target area is updated as:
The direction of the target area is updated to:
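Equations (11)–(21) are not reproduced above. In the standard Camshift formulation, which we assume the paper follows (with p(x, y) the correlation-probability gray value and (x_c, y_c) the centroid), the second-order moments and the derived length l, width w, and orientation θ are:

```latex
M_{20} = \sum_{x,y} x^{2}\, p(x,y), \qquad
M_{02} = \sum_{x,y} y^{2}\, p(x,y), \qquad
M_{11} = \sum_{x,y} x\, y\, p(x,y),
```
```latex
a = \frac{M_{20}}{M_{00}} - x_c^{2}, \qquad
b = 2\left(\frac{M_{11}}{M_{00}} - x_c\, y_c\right), \qquad
c = \frac{M_{02}}{M_{00}} - y_c^{2},
```
```latex
l = \sqrt{\frac{(a + c) + \sqrt{b^{2} + (a - c)^{2}}}{2}}, \qquad
w = \sqrt{\frac{(a + c) - \sqrt{b^{2} + (a - c)^{2}}}{2}}, \qquad
\theta = \frac{1}{2}\arctan\!\left(\frac{b}{a - c}\right).
```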
At this point, the recognition and tracking of this frame are complete.
Step 9. Return to step 1 and move on to the next frame: re-identify, locate, and track the target of the next frame by employing steps 1–8.
Through the above series of steps, our proposed CamTracor−PG can track and recognize a model pig more effectively and with better precision in a scalable manner. In the tracking process described above, the target is recognized and tracked by means of multi-fold correlation probability gray values. The performance of our proposal is evaluated in Section 3.
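The per-step description above can be sketched as a single per-frame routine. The uniform 3×3 correlation weights, the hue quantization, and the convergence test are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def camtracor_pg_step(hue, q, window, m=16, k=1, n_iter=10, eps=1.0):
    """One frame of the tracking loop (steps 2-8), as a minimal sketch.

    hue: hue channel of the frame; q: target hue histogram (step 1);
    window: (x, y, w, h) search window from the previous frame.
    """
    # Steps 2-4: back-projection, k-fold correlation, gray-scale projection.
    p = q[(hue.astype(np.int64) * m) // 256].astype(float)
    for _ in range(k):
        pad = np.pad(p, 1, mode="edge")
        neigh = sum(pad[i:i + p.shape[0], j:j + p.shape[1]]
                    for i in range(3) for j in range(3)) - p
        p = p * neigh / 8.0
    gray = 255.0 * p / p.max()
    # Steps 5-7: drift the search window to its centroid until convergence.
    x, y, w, h = window
    xc = yc = None
    for _ in range(n_iter):
        win = gray[y:y + h, x:x + w]
        ys, xs = np.mgrid[y:y + h, x:x + w]
        m00 = win.sum()
        if m00 == 0:                       # window lost the target; stop drifting
            break
        xc, yc = (xs * win).sum() / m00, (ys * win).sum() / m00
        nx, ny = int(round(xc - w / 2)), int(round(yc - h / 2))
        if abs(nx - x) + abs(ny - y) <= eps:
            break
        x, y = max(nx, 0), max(ny, 0)
    return (x, y, w, h), (xc, yc)
```

On a synthetic frame where a block of target-colored pixels sits away from the initial window, the window drifts onto the block and the returned centroid lands at its center; step 8's size/orientation update (omitted here) would then refine the window shape.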
Results and discussion
To verify the effectiveness of this method, our proposed approach is compared with the basic Camshift method and the multi-feature fusion Camshift method, both under a similar background and in an actual complex environment. The parameters for computing the chromaticity correlation are selected as N = 1 and k = 1.
The data for this experiment come from the Zhuozhou experimental demonstration base of the China Agricultural University. The dataset totals 425 GB and spans 160 days, recording the whole process of the model pigs from entry to exit. The test data are randomly selected from the full dataset to compare detection and tracking performance, and some frames with large interference are selected for comparative analysis.
Motion target tracking in actual complex environment
In many practical applications, tracking a target is very complicated, and there are a large number of interference areas [26–28]. The three tracking methods mentioned are used to track model pigs in actual farms. As shown in Figs. 2, 3, 4, 5, 6, and 7, the body color of the model pig is uniformly black, the background in the scene is also grayish black, and the illumination is non-uniform. If one model pig is selected for tracking, the other becomes a distractor; it is difficult for a tracking algorithm to distinguish between the two model pigs, and the tracking task cannot be accomplished.
In this paper, the small target in the right eye of the left model pig is selected as the tracking object, and the pig is subjected to a motion tracking experiment. The tracking results obtained by the above methods and the corresponding probability projection graphs are shown in Figs. 2, 3, 4, 5, 6, and 7.
As can be seen from Figs. 2 and 3, with the basic Camshift tracking method, when the pig’s posture changes, disturbances such as changes in illumination make the difference in chromaticity between the target and the background smaller. As a result, the target is easily mis-positioned, which leads to inaccurate tracking.
As can be seen in Figs. 4 and 5, with the multi-feature fusion Camshift tracking method, the fusion of various features can overcome some of the weakening of the target features caused by changes in illumination and posture.
However, because of the small size of the tracking template, there is little difference between the fused features and the background features, resulting in low probabilities in the inverse projection graph. The algorithm therefore still lacks strong anti-interference ability; eventually, the target is incorrectly positioned, and the tracking task under this complex background cannot be completed.
As can be seen in Figs. 6 and 7, in our method the projection probability graph is established using the target’s correlation probability results. Although the target’s inverse projection probability values are close to those of the background, after the pixel-wise correlation calculation, the probability values of the background area around the target are suppressed; that is, a relatively prominent target area is obtained locally. Therefore, interference with target positioning caused by illumination changes and background clutter can be overcome, the effectiveness of positioning within the target area is guaranteed, and the tracking task in a complex background can be completed.
The tracking accuracies of the three approaches are then tested and compared, and the average overlap rates of the tracking targets for the three methods are presented in Fig. 8.
As shown in Fig. 8, in the tracking result of the basic Camshift tracking method, the average overlap rate of the tracking targets is 71% (136th frame), 66% (288th frame), 49% (304th frame), and 28% (410th frame); in the tracking result of the multifeature fusion Camshift method, the average overlap rate of the tracking targets is 60% (136th frame), 39% (288th frame), 34% (304th frame), and 20% (410th frame); and in the tracking result for this paper, the average overlap rate of the tracking targets is 91% (136th frame), 85% (288th frame), 88% (304th frame), and 89% (410th frame). The method in this paper is superior to the other two commonly used methods in experimental results.
The average overlap rate is one of the core criteria in the evaluation of target tracking. We further analyze the proportion of trajectories on which the target is mostly tracked, that is, the MT index. As shown in Fig. 9, in the tracking result of the basic Camshift tracking method, the MT rate of the tracking targets is 85% (1 h), 78% (2 h), 71% (3 h), and 60% (4 h); in the tracking result of the multi-feature fusion Camshift method, the MT rate of the tracking targets is 80% (1 h), 70% (2 h), 50% (3 h), and 10% (4 h); and in the tracking result of this paper, the MT rate of the tracking targets is 90% (1 h), 88% (2 h), 85% (3 h), and 83% (4 h). The method in this paper is again superior to the other two commonly used methods.
Related work
Owing to its stability in practical applications, the Meanshift algorithm has become one of the most effective techniques in various target tracking fields [29]. In this section, we briefly survey related tracking approaches from two perspectives: the Camshift algorithm and methods combining multiple features.
Camshift algorithm
The Camshift algorithm, an improved version of the Meanshift algorithm [30], can adjust the target window size adaptively. At the same time, its computational cost is low, so it can meet the requirements of real-time tracking [31]. However, the Camshift algorithm is mainly suitable for tracking salient targets. That is to say, when the difference between the target and the surrounding tones is obvious, the Camshift algorithm obtains ideal performance; when the target is similar to the surrounding background, the target is submerged in the background and cannot be identified or tracked. This is one of the key technical challenges in target identification and tracking [32]; in other words, the complexity of the environment and the similarity of the background have a significant influence on the performance of a target identification algorithm [33]. Even if the environment or background looks straightforward, the target will be submerged in the background and become very difficult to identify and track effectively when the chromaticity of the background is similar to that of the tracked target [34].
The method of combining multiple features
Methods combining multiple features are employed to compensate for the failure to accurately identify a target with a single feature [35, 36]. For example, in order to enhance targeting accuracy, a promising approach is to add features such as texture and edges on top of the chromaticity features, exploiting the complementary relationship between multiple features. In practice, however, features such as chromaticity, texture, and edges may also interfere with each other [37]. For instance, if an area of the background is similar to the texture or edges of the target, the tracking effect may be considerably compromised [38].
Conclusion
To deal with the problem of model pig tracking under similar and complex backgrounds, a Camshift tracking approach based on correlation probability calculation is proposed. Specifically, each pixel in the inverse projection graph is correlated with its surrounding projection probability values, which effectively suppresses the probability values within the background area. At the same time, it indirectly highlights the relative probability values of the target area, so that the target is not submerged in the background, thereby improving target tracking performance. The extensive experiments show that, on the one hand, when the model pigs have chromaticity characteristics similar to the background area, this approach can clearly separate the target from the background; on the other hand, when the target is in a complex background, the method effectively suppresses the probability gray values of interference areas, while realizing a good trade-off between the accuracy and the effectiveness of model pig identification.
Availability of data and materials
The dataset supporting the conclusions of this article is available and can be downloaded at https://pan.baidu.com/s/1RilVR7OzH1jWrfzslJ0nPQ (password: fctw).
Abbreviations
CamTracor−PG :

Camshift tracking approach based on correlation probability graph
References
Y. J. He, M. Li, J. Zhang, J. P. Yao, Infrared target tracking via weighted correlation filter. Infrared Phys. Technol.73:, 103–114 (2015).
L. Qi, Q. He, F. Chen, W. Dou, S. Wan, X. Zhang, X. Xu, Finding all you need: web APIs recommendation in web of things through keywords search. IEEE Trans. Comput. Soc. Syst. (2019). https://doi.org/10.1109/tcss.2019.2906925.
G. w. Yuan, Y. Gao, D. Xu, A moving objects tracking method based on a combination of local binary pattern texture and hue. Procedia Eng.15:, 3964–3968 (2011).
H. p. Sun, X. Wen, Research on learning progress tracking of multimedia port user based on improved CamShift algorithm. Multimed. Tools Appl., 1–14 (2019). https://doi.org/10.1007/s11042019077614.
X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, W. Dou, BeCome: blockchain-enabled computation offloading for IoT in mobile edge computing. IEEE Trans. Ind. Inform.PP:, 1–1 (2019).
X. Xu, Y. Chen, X. Zhang, Q. Liu, X. Liu, L. Qi, A blockchain-based computation offloading method for edge computing in 5G networks. Softw. Pract. Exp. (2019). https://doi.org/10.1002/spe.2749.
X. Xu, S. Fu, Zhang Qi X., Q. Liu, Q. He, S. Li, An IoT-oriented data placement method with privacy preservation in cloud environment. J. Netw. Comput. Appl.124:, 148–157 (2018).
G. Du, P. Zhang, A novel human–manipulators interface using hybrid sensors with Kalman filter and particle filter. Robot. Comput. Integr. Manuf.38:, 93–101 (2016).
S. Ding, S. Qu, Y. Xi, A. K. Sangaiah, S. Wan, Image caption generation with high-level image features. Pattern Recogn. Lett.123:, 89–95 (2019).
X. Xu, Huang Li T., Y. Xue, K. Peng, L. Qi, W. Dou, An energy-aware computation offloading method for smart edge computing in wireless metropolitan area networks. J. Netw. Comput. Appl.133:, 75–85 (2019).
C. h. DU, Z. Hong, L. m. LUO, L. Jie, X. y. HUANG, Face detection in video based on AdaBoost algorithm and skin model. J. China Univ. Posts Telecomm.20:, 6–24 (2013).
L. Qi, W. Dou, W. Wang, G. Li, H. Yu, S. Wan, Dynamic mobile crowdsourcing selection for electricity load forecasting. IEEE Access. 6:, 46926–46937 (2018).
Y. Xu, L. Qi, W. Dou, J. Yu, Privacy-preserving and scalable service recommendation based on simhash in a distributed cloud environment. Complexity (2017). https://doi.org/10.1155/2017/3437854.
X. Xu, Y. Xue, L. Qi, Y. Yuan, X. Zhang, T. Umer, S. Wan, An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles. Futur. Gener. Comput. Syst.96:, 89–100 (2019).
R. Belaroussi, M. Milgram, A comparative study on face detection and tracking algorithms. Expert Syst. Appl.39(8), 7158–7164 (2012).
L. Qi, X. Zhang, W. Dou, C. Hu, C. Yang, J. Chen, A two-stage locality-sensitive hashing based approach for privacy-preserving mobile service recommendation in cross-platform edge environment. Futur. Gener. Comput. Syst.88:, 636–643 (2018).
Z. Gao, D. Wang, S. Wan, H. Zhang, Y. Wang, Cognitive-inspired class-statistic matching with triple-constrain for camera-free 3D object retrieval. Futur. Gener. Comput. Syst.94:, 641–653 (2019).
X. Xu, X. Liu, L. Qi, Y. Chen, Z. Ding, J. Shi, Energy-efficient virtual machine scheduling across cloudlets in wireless metropolitan area networks. Mob. Netw. Appl., 1–15 (2019). https://doi.org/10.1007/s11036019012426.
I. Kyriakides, Target tracking using adaptive compressive sensing and processing. Signal Process.127:, 44–55 (2016).
X. Xu, Q. Liu, Y. Luo, K. Peng, X. Zhang, S. Meng, L. Qi, A computation offloading method over big data for IoT-enabled cloud-edge computing. Futur. Gener. Comput. Syst. 95:, 522–533 (2019).
Z. Gao, H. Z Xuan, H. Zhang, S. Wan, K. K. R. Choo, Adaptive fusion and category-level dictionary learning model for multi-view human action recognition. IEEE Internet Things J. (2019). https://doi.org/10.1109/jiot.2019.2911669.
K. L. Bell, C. J. Baker, G. E. Smith, J. T. Johnson, M. Rangaswamy, Cognitive radar framework for target detection and tracking. IEEE J. Sel. Top. Signal Process.9(8), 1427–1439 (2015).
S. Wan, Z. Gu, Q. Ni, Cognitive computing and wireless communications on the edge for healthcare service robots. Comput. Commun. (2019). https://doi.org/10.1016/j.comcom.2019.10.012.
S. Wan, Y. Zhao, T. Wang, Z. Gu, Q. H. Abbasi, K. K. R. Choo, Multidimensional data indexing and range query processing via Voronoi diagram for internet of things. Futur. Gener. Comput. Syst.91:, 382–391 (2019).
S. Wan, X. Li, Y. Xue, W. Lin, X. Xu, Efficient computation offloading for internet of vehicles in edge computing-assisted 5G networks. J. Supercomput., 1–30 (2019). https://doi.org/10.1007/s11227019030114.
S. Wan, S. Goudos, Faster R-CNN for multi-class fruit detection using a robotic vision system. Comput. Netw.168:, 107036 (2020).
Y. Zhao, H. Li, S. Wan, A. Sekuboyina, X. Hu, G. Tetteh, M. Piraud, B. Menze, Knowledge-aided convolutional neural network for small organ segmentation. IEEE J Biomed. Health Inf.23(4), 1363–1373 (2019).
L. Wang, H. Zhen, X. Fang, S. Wan, W. Ding, Y. Guo, A unified two-parallel-branch deep neural network for joint gland contour and segmentation learning. Futur. Gener. Comput. Syst.100:, 316–324 (2019).
M. Coşkun, S. Ünal, Implementation of tracking of a moving object based on camshift approach with a UAV. Procedia Technol.22:, 556–561 (2016).
H. Zhao, K. Xiang, S. Cao, X. Wang, Robust visual tracking via CAMFShift and structural local sparse appearance model. J. Vis. Commun. Image Represent.34:, 176–186 (2016).
R. Zhang, P. Xie, C. Wang, G. Liu, S. Wan, Classifying transportation mode and speed from trajectory data via deep multi-scale learning. Comput. Netw.162:, 106861 (2019).
S. Ding, S. Qu, Y. Xi, S. Wan, Stimulus-driven and concept-driven analysis for image caption generation. Neurocomputing (2019). https://doi.org/10.1016/j.neucom.2019.04.095.
S. Ding, S. Qu, Y. Xi, S. Wan, A long video caption generation algorithm for big video data retrieval. Futur. Gener. Comput. Syst.93:, 583–595 (2019).
H. Zeng, J. Chen, X. Cui, C. Cai, K. K Ma, Quad binary pattern and its application in meanshift tracking. Neurocomputing. 217:, 3–10 (2016).
X. Xu, Y. Chen, Y. Yuan, T. Huang, X. Zhang, L. Qi, Blockchainbased cloudlet management for multimedia workflow in mobile cloud computing. Multimed. Tools Appl., 1–26 (2019). https://doi.org/10.1007/s1104201907900x.
F. MasoumiGanjgah, R. FatemiMofrad, N. Ghadimi, Target tracking with fast adaptive revisit time based on steady state IMM filter. Digit. Signal Process.69:, 154–161 (2017).
S. Wan, Y. Zhang, J. Chen, On the construction of data aggregation tree with maximizing lifetime in large-scale wireless sensor networks. IEEE Sensors J.16(20), 7433–7440 (2016).
X. Xu, Q. X. Zhang, J. Zhang, L. Qi, W. Dou, A blockchain-powered crowdsourcing method with privacy preservation in mobile environment. IEEE Trans. Comput. Soc. Syst. (2019). https://doi.org/10.1109/tcss.2019.2909137.
Acknowledgements
The authors acknowledge the Ministry of Education of China and Chinese Academy of Sciences (Grant 444410099609) and the Fundamental Research Funds for the Central Universities (Grant 3142018047). This paper was funded by the China Agricultural University Graduate Internationalization Training Program.
Funding
This research is supported by the Ministry of Education of China and Chinese Academy of Sciences (Grant 444410099609) and the Fundamental Research Funds for the Central Universities (Grant 3142018047). This paper was funded by the China Agricultural University Graduate Internationalization Training Program.
Author information
Authors and Affiliations
Contributions
Xiangnan Zhang, Wenwen Gong, Qifeng He, and Haolong Xiang conceived and designed the study. Dan Li and Yawei Wang performed the simulations. Yifei Chen and Yongtao Liu wrote the paper. All authors reviewed and edited the manuscript. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, X., Gong, W., He, Q. et al. Camshift tracking method based on correlation probability graph for model pig. J Wireless Com Network 2020, 108 (2020). https://doi.org/10.1186/s13638020016990
DOI: https://doi.org/10.1186/s13638020016990
Keywords
 Identification and tracking
 Model pigs
 Vision sensor
 Correlation probability