A hierarchical propelled fusion strategy for SAR automatic target recognition
- Zongyong Cui^{1},
- Zongjie Cao^{1},
- Jianyu Yang^{1} and
- Jilan Feng^{1}
https://doi.org/10.1186/1687-1499-2013-39
© Cui et al.; licensee Springer. 2013
Received: 23 November 2012
Accepted: 15 January 2013
Published: 19 February 2013
Abstract
Synthetic aperture radar (SAR) automatic target recognition (ATR) plays an important role in military and civil fields, and much work has been done to improve the performance of SAR ATR systems. It is well known that ensemble methods can improve prediction performance, so recognition using multiple-classifier fusion (MCF) has become a research hotspot in SAR ATR. Most current research focuses on fusion methods with a parallel structure. However, a parallel structure has several disadvantages, such as high time consumption, conflicts between feature attributes, and poor confuser recognition. This paper proposes a hierarchical propelled strategy for multi-classifier fusion (HPSMCF), which combines the characteristics of both series and parallel structures. By extracting features and fusing the probabilistic outputs in a hierarchical propelled way, features are used more effectively and recognition efficiency is improved. Meanwhile, confuser recognition is achieved by setting thresholds on the confidence at each level. Experiments on the MSTAR public data set demonstrate that the proposed HPSMCF is robust under varying recognition conditions. Compared with the parallel structure, HPSMCF performs better in both time consumption and recognition rate.
1. Introduction
SAR plays an important role in both national defense and civil applications because it can work in all weather and day/night conditions. To make better use of SAR data, the problem of target recognition must be addressed. From 1997 to 2000, L.M. Novak published a number of articles on SAR ATR [1–3], and since then many researchers have worked to improve the performance of SAR ATR systems.
It is well known that ensemble methods can improve prediction performance [4]. The main idea behind the ensemble methodology is to combine several individual classifiers in order to obtain a classifier that outperforms every one of them. In 1998, Kittler et al. developed a common theoretical framework for combining classifiers [5]. In 2010, Lior Rokach proposed an ensemble system composed of several independent base-level models [4], each constructed using a different technique. With the development of MCF technology, MCF has been extensively applied in many areas, such as character recognition [6], multi-sensor data classification [7], and SAR ATR [8, 9].
MCF has thus become a research hotspot, and many new theories in pattern recognition are gradually being introduced to it. For example, Jorge Sánchez and Javier Redolfi proposed a novel approach for combining classifiers based on a graph defined in the space of concepts and a Markov chain defined on that graph [10]. Some researchers consider the contributions of different feature sets and classifiers, assigning weights to the features or classifiers [11–13]; such confidence-weighted learning is believed to be more consistent with the human prediction model. In [14], the authors use a clustering ensemble for classifier combination, since cluster analysis techniques have been shown to enhance the quality of supervised classification results.
In order to overcome these disadvantages of parallel-structure multi-classifier fusion (PSMCF), a hierarchical propelled strategy for multi-classifier fusion (HPSMCF) is proposed in this paper. HPSMCF combines the characteristics of both series and parallel structures. By extracting features and fusing the probabilistic outputs in a hierarchical propelled way, features are used more effectively and recognition efficiency is improved. Whether to proceed to the next level depends on the comparison between the confidence and an empirical threshold, so HPSMCF can reduce time consumption without lowering the recognition rate. Confuser recognition is also achieved by setting thresholds for the decisions at each level. Experiments on MSTAR public data demonstrate that the proposed HPSMCF performs better than PSMCF when applied to SAR ATR.
The rest of the paper is organized as follows. Section 2 introduces the flow of the proposed fusion strategy and the definitions of classification confidence and classification weight at each level. Section 3 gives a detailed description of the three-level HPSMCF. Section 4 presents experimental results and analysis on the MSTAR database. Finally, Section 5 states the conclusion and future work.
2. Hierarchical propelled fusion strategy
Suppose there are c classes of samples. The classifier at level l (1 ≤ l ≤ L) produces a posterior probability output for each class, denoted P^{ l } = {p_{1}, p_{2}, ..., p_{ c }}.
1) Classification confidence
$\{P^{l} \setminus \max_{1\le i\le c}(p_{i})\}$ denotes the set P^{ l } with its largest element $\max_{1\le i\le c}(p_{i})$ removed. If conf(P^{ l }) is greater than a given threshold, the class corresponding to $\max_{1\le i\le c}(p_{i})$ is accepted.
At level l (l > 1), there are two thresholds (T_{ l }, T_{ l }^{*}). T_{ l } is the threshold applied at level l before fusion, and T_{ l }^{*} is applied to the fusion result.
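Since the confidence equation itself is not reproduced here, one plausible reading of the definition, the gap between the largest posterior and the largest of the remaining ones, can be sketched as follows (the exact functional form is an assumption):

```python
import numpy as np

def confidence(p):
    """Confidence of a posterior vector P^l: the gap between the largest
    probability and the largest element of the rest of the set. The exact
    formula used in the paper is an assumption here."""
    q = np.sort(np.asarray(p, dtype=float))[::-1]   # sort descending
    return q[0] - q[1]

p_peaked = confidence([0.8, 0.1, 0.1])    # large gap: result acceptable
p_flat = confidence([0.4, 0.35, 0.25])    # small gap: propel to next level
```

A peaked posterior yields high confidence and the level outputs its decision; a flat posterior falls below the threshold and the flow proceeds.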
2) Weights of P^{ l }
Table 1. The matrix of fusion weights

| Fusion of P^{l} in | Level 2 | Level 3 | Level 4 | … | Level j | … | Level L |
|---|---|---|---|---|---|---|---|
| Level 1 | w_{12} | w_{13} | w_{14} | … | w_{1j} | … | w_{1L} |
| Level 2 | w_{22} | w_{23} | w_{24} | … | w_{2j} | … | w_{2L} |
| Level 3 | × | w_{33} | w_{34} | … | w_{3j} | … | w_{3L} |
| Level 4 | × | × | w_{44} | … | w_{4j} | … | w_{4L} |
| Level l | × | × | × | … | w_{lj} | … | w_{lL} |
| Level L | × | × | × | … | × | … | w_{LL} |
where 0 < μ < 1. That is, if the probability output P^{ l } cannot provide enough evidence for a classification decision at level j − 1, the weight of P^{ l } is reduced by the coefficient μ when P^{ l } is reused at level j.
3) Basic flow
The detailed process at each level is as follows:

At level 1, extract the first feature and feed it to the classifier. Compute the confidence of P^{ 1 }. If conf(P^{1}) > T_{1}, output the classification result; otherwise, proceed to level 2.

At level 2, extract the second feature and feed it to the classifier. Compute the confidence of P^{2}. If conf(P^{2}) > T_{2}, output the classification result; otherwise, fuse w_{12}P^{1} and w_{22}P^{2} and compute the confidence of the fusion result. If conf(fusion(w_{12}P^{1}, w_{22}P^{2})) > ${T}_{2}^{*}$, output the classification result; otherwise, proceed to the next level.

The processes at the following levels are the same as at level 2, except at the last level L, which makes the final decision: if no classification result can be obtained at level L, the whole process ends with the output 'cannot recognize'.

Note that if the threshold T_{ L }^{*} is set to 0 and all other thresholds are set to 1, HPSMCF reduces to PSMCF; if the fusion steps are removed, HPSMCF reduces to a pure series structure.
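The level-by-level flow above can be sketched in code; the gap-based confidence and the `fuse` placeholder are assumptions standing in for the paper's own definitions:

```python
import numpy as np

def conf(p):
    # assumed confidence measure: gap between the two largest posteriors
    q = np.sort(np.asarray(p, dtype=float))[::-1]
    return q[0] - q[1]

def hpsmcf(features, classifiers, fuse, w, T, T_star):
    """Hierarchical propelled fusion sketch. classifiers[l](features[l])
    returns the posterior vector of level l+1; w[j][l] weights level j+1's
    posterior when fused at level l+1; T / T_star are the pre- and
    post-fusion thresholds."""
    posteriors = []
    L = len(classifiers)
    for l in range(L):
        P = np.asarray(classifiers[l](features[l]), dtype=float)
        posteriors.append(P)
        if conf(P) > T[l]:                 # confident before fusion
            return int(np.argmax(P))
        if l > 0:                          # no fusion at level 1
            fused = fuse([w[j][l] * posteriors[j] for j in range(l + 1)])
            if conf(fused) > T_star[l]:
                return int(np.argmax(fused))
    return None                            # "cannot recognize"

# Toy 2-level run: level 1 is ambiguous, level 2 decides on its own.
clfs = [lambda x: [0.4, 0.35, 0.25], lambda x: [0.9, 0.05, 0.05]]
w = [[1.0, 1.0], [0.0, 1.0]]
result = hpsmcf([None, None], clfs, lambda ps: sum(ps), w,
                T=[0.3, 0.3], T_star=[1.0, 0.0])
```

In the toy run, the first classifier's flat posterior fails its threshold, so the flow propels to level 2, where the peaked posterior is accepted directly.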
3. Application of the proposed three-tier HPSMCF to SAR ATR
To verify the feasibility of the strategy proposed in this study, a three-tier HPSMCF is applied to SAR automatic target recognition. This work includes three main parts: feature extraction, classification, and decision fusion.
1) Feature extracting
In our current research, three projection features are used: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-negative Matrix Factorization (NMF).
A. PCA
PCA is a common feature extraction method in pattern recognition, which has been widely used for target classification in SAR images. PCA is based on the assumption that high information corresponds to high variance.
An eigendecomposition (EIG) yields all the eigenvalues and eigenvectors of Q. It can be shown that the first few principal components account for most of the variation, so they can be used to describe the data, leading to a reduced-dimension representation. The eigenvectors corresponding to the k largest positive eigenvalues are chosen to form the transformation matrix, and the PCA feature of X is then extracted by multiplying X by this matrix. Detailed information about PCA can be found in [15, 16].
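A minimal sketch of this PCA pipeline, assuming each SAR chip has been vectorized into one row of X:

```python
import numpy as np

def pca_features(X, k):
    """PCA feature extraction sketch: rows of X are vectorized image
    chips (n_samples x n_pixels). Returns the k-dimensional features
    and the transformation matrix."""
    Xc = X - X.mean(axis=0)              # center the data
    Q = np.cov(Xc, rowvar=False)         # covariance matrix Q
    vals, vecs = np.linalg.eigh(Q)       # EIG: eigenvalues and eigenvectors
    order = np.argsort(vals)[::-1][:k]   # indices of the k largest eigenvalues
    W = vecs[:, order]                   # transformation matrix
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 16))            # toy stand-in for 64x64 SAR chips
F, W = pca_features(X, 5)                # F has shape (20, 5)
```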
B. LDA
where x_{ i }^{ j } is the i-th sample of class j, μ_{ j } is the mean of class j, c is the number of classes, and N_{ j } is the number of samples in class j.
where μ represents the mean of all classes.
The goal of LDA is to maximize the between-class measure while minimizing the within-class measure. One way to do this is to maximize the ratio det |S_{ b }|/ det |S_{ w }|. If S_{ w } is a nonsingular matrix, the ratio is maximized when the column vectors of the projection matrix are W = PCA(S_{ w }^{−1}S_{ b }). It should be noted that W contains at most c − 1 eigenvectors.
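A minimal sketch of this LDA projection, assuming row-sample data and a nonsingular S_w:

```python
import numpy as np

def lda_features(X, y, k=None):
    """LDA sketch: builds the within-class (S_w) and between-class (S_b)
    scatter matrices, then projects onto the leading eigenvectors of
    S_w^{-1} S_b (at most c-1 of them)."""
    classes = np.unique(y)
    c = len(classes)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for cls in classes:
        Xj = X[y == cls]
        mj = Xj.mean(axis=0)
        Sw += (Xj - mj).T @ (Xj - mj)        # within-class scatter
        diff = (mj - mu)[:, None]
        Sb += len(Xj) * (diff @ diff.T)      # between-class scatter
    k = c - 1 if k is None else k
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:k]  # largest eigenvalues first
    W = vecs[:, order].real                  # projection matrix
    return X @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                 # toy data, 3 classes of 10
y = np.repeat(np.arange(3), 10)
F = lda_features(X, y)                       # at most c-1 = 2 dimensions
```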
C. NMF
NMF has been successfully used for matrix decomposition and dimensionality reduction. The non-negativity constraint leads to a part-based representation because it allows only additive, not subtractive, combinations of the original data [18, 19].
with 0 ≤ i < n − 1, 0 ≤ j < m − 1 and 0 ≤ μ < r − 1.
for 0 ≤ a < r, 0 ≤ μ < m and 0 ≤ i < n. Appropriate W, H can be found by iteration.
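The iteration can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective (a minimal version; practical implementations add convergence checks and better initialization):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W H with all entries
    non-negative. The updates multiply by non-negative ratios, so
    non-negativity is preserved at every step."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(12, 8)))  # toy data
W, H = nmf(V, 3)
```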
D. Feature ordering
In our current research, the features are ordered by the computational complexity of extracting them: PCA has the smallest complexity and NMF the largest. So in the three-tier HPSMCF, PCA is used at level 1, LDA at level 2, and NMF at level 3.
2) Classifier
To keep the metric of the probability outputs the same at every level, a Support Vector Machine (SVM) is used at all levels. The SVM classification method has extraordinary capacity; using kernel functions, SVMs handle non-linear classification problems well [6, 20].

An SVM discriminates between two classes by fitting an optimal linear separating hyperplane (OSH). The optimization principle is based on structural risk minimization (SRM), which aims to maximize the margin between the OSH and the closest training samples; these closest training samples are called support vectors. The details of solving the SVM optimization problem are given in [21].
where 0 < j ≤ c and c is the number of classes. P_{ j } is the probability that the test sample belongs to class j.

The parameters C and γ are set to C = 32 and γ = 1/32.
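A brief sketch of the classifier stage under these settings. The RBF kernel with γ = 1/32 matches the text, while the softmax mapping from decision scores to class probabilities is only an illustrative assumption (libsvm-style SVMs typically derive probabilities via Platt scaling instead):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0 / 32):
    # RBF kernel used by the SVM classifiers (gamma = 1/32 as in the text)
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

def posteriors_from_scores(scores):
    """Map per-class SVM decision scores to a probability vector via a
    softmax. This normalization is an assumption: the paper does not show
    its score-to-probability mapping."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())          # numerically stable softmax
    return e / e.sum()

x = np.array([1.0, 2.0])
k_self = rbf_kernel(x, x)            # kernel of a point with itself is 1
p = posteriors_from_scores(np.array([2.0, 0.5, -1.0]))
```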
3) Fusion theory
Dempster-Shafer evidence theory is close to the human decision-making principle and has been widely used in information fusion and classifier fusion [22, 23].
If A ⊆ Θ and m(A) > 0, A is called a focal element.

At level l, there are l pieces of evidence to be fused. So in the second and third levels of our three-tier HPSMCF, the value of n in (13) is 2 and 3, respectively.
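Dempster's combination rule, restricted to singleton class hypotheses, can be sketched as follows (the restriction to singletons is a simplification of the general rule, which also handles compound hypotheses; with singletons only, agreement is the elementwise product and everything else is conflict):

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass vectors over c singleton classes.
    The conflicting mass K is normalized away."""
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    agree = m1 * m2              # mass where both evidences agree
    K = 1.0 - agree.sum()        # total conflict
    return agree / (1.0 - K)     # renormalize the agreeing mass

def fuse_all(masses):
    # fold pairwise combination over n bodies of evidence (n = 2 or 3 here)
    out = np.asarray(masses[0], dtype=float)
    for m in masses[1:]:
        out = dempster_combine(out, m)
    return out

fused = fuse_all([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]])
```

Note how two evidences that weakly agree on class 1 reinforce each other: the fused mass on class 1 is larger than either input mass.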
4. Experiment analysis
Table 2. Part of the MSTAR public database

| Target number | Target type | Serial | Depression 17° | Depression 15° |
|---|---|---|---|---|
| 1 | BMP2 | sn-c9563 | 233 | 195 |
| | | sn-c9566 | 232 | 196 |
| | | sn-c21 | 233 | 196 |
| 2 | BTR70 | sn-c71 | 233 | 196 |
| 3 | T72 | sn-132 | 232 | 196 |
| | | sn-812 | 231 | 195 |
| | | sn-s7 | 228 | 191 |
| 4 | 2S1 | | 299 | 274 |
| 5 | D7 | | 299 | 274 |
| 6 | ZIL131 | | 299 | 274 |
| 7 | ZSU23-4 | | 299 | 274 |
The data format is RAW, including amplitude and phase information; in the following experiments, only the amplitude information is used. All images are cropped by extracting a 64×64 patch from the center of each image, without any other preprocessing.
(1) 3-Class recognition and confuser recognition
1) 3-Class recognition
In this experiment, 3 classes of targets (BMP2, BTR70, and T72) are used. BMP2 and T72 each have three serial variants, as shown in Table 2. Only the images of BMP2 sn-c9563, BTR70, and T72 sn-132 at 17° depression are used as training data. All images of these three classes at 15° depression are used as testing data.
Table 3. Confusion matrix of HPSMCF performance

| Testing data | BMP2 | BTR70 | T72 | Cannot recognize | Recognition rate |
|---|---|---|---|---|---|
| BMP2 sn-9563 | 181 | 4 | 7 | 3 | 85.5% |
| BMP2 sn-9566 | 153 | 15 | 24 | 4 | |
| BMP2 sn-c21 | 168 | 14 | 8 | 6 | |
| BTR70 sn-c71 | 1 | 192 | 3 | 0 | 98% |
| T72 sn-132 | 8 | 1 | 187 | 0 | 91.2% |
| T72 sn-812 | 15 | 3 | 176 | 1 | |
| T72 sn-s7 | 15 | 6 | 168 | 2 | |
2) Confuser recognition
In this experiment, four nontarget vehicles (2S1, D7, ZIL131, and ZSU23-4) are added to the testing set of Table 3 as confusers.
Table 4. Confuser rejection rate

| Methods | 2S1 | D7 | ZIL131 | ZSU23-4 | Average |
|---|---|---|---|---|---|
| PSMCF | 46.6% | 57.6% | 53.8% | 51.4% | 52.4% |
| HPSMCF | 61.9% | 75.6% | 71% | 72.4% | 70.2% |
(2) Depression angle and configuration variance
Table 5. New division of the 3-class data

| Set | Subset | Type | Depression angle | Size |
|---|---|---|---|---|
| Training set | Set 1 | BMP2-c21 | 17° | 233 |
| | | BTR70-c71 | 17° | 233 |
| | | T72-132 | 17° | 232 |
| Testing set | Set 2 | BMP2-c21 | 15° | 196 |
| | | BTR70-c71 | 15° | 196 |
| | | T72-132 | 15° | 196 |
| | Set 3 | BMP2-9563 | 17° | 233 |
| | | BMP2-9566 | 17° | 232 |
| | | T72-812 | 17° | 232 |
| | Set 4 | BMP2-9563 | 15° | 195 |
| | | BMP2-9566 | 15° | 196 |
| | | T72-812 | 15° | 195 |
Table 6. Recognition results on the new division

| Methods | Set 2 | Set 3 | Set 4 | Average |
|---|---|---|---|---|
| PCA+SVM | 94.22% | 79.02% | 73.72% | 82.14% |
| LDA+SVM | 92.52% | 80.03% | 77.47% | 83.16% |
| NMF+SVM | 97.79% | 88.07% | 82.76% | 89.47% |
| PSMCF | 98.30% | 86.49% | 84.47% | 89.57% |
| HPSMCF | 99.32% | 90.52% | 87.54% | 92.35% |
Meanwhile, Table 6 and Figure 5 also show that PSMCF indeed outperforms the single-feature methods, but the proposed hierarchical framework HPSMCF achieves the best results.
Table 7. Hierarchy depth for the three testing sets

| Hierarchy depth | Set 2 (588) | Set 3 (696) | Set 4 (586) |
|---|---|---|---|
| Level 1 | 522 | 357 | 288 |
| Level 2 | 21 | 90 | 91 |
| Level 3 | 44 | 244 | 201 |
| Cannot recognize | 1 | 5 | 6 |
Table 8. Comparison of average recognition rate (RR) and run time

| Methods | Average RR | Time |
|---|---|---|
| PSMCF | 89.57% | 5.02 s |
| HPSMCF | 92.35% | 3.9 s |
5. Conclusion and future work
In order to overcome the disadvantages of the common parallel-structure fusion method, a hierarchical propelled strategy for multiple-classifier fusion (HPSMCF) has been proposed in this paper. Recognition efficiency is improved by extracting features and fusing the probabilistic outputs in a hierarchical propelled way, and confuser recognition is achieved by computing the confidence and making decisions at each level. Experiments on the MSTAR public data set demonstrate the effectiveness of the proposed strategy: compared with single-classifier recognition processes, HPSMCF achieves a higher recognition rate, and it outperforms the traditional parallel structure in both time consumption and recognition rate.
The next step in our research will be to select the threshold T adaptively and to use more features and classifiers to evaluate the feasibility of the system. On this basis, our goal is to build a recognition framework based on human cognition theory. The proposed strategy can also be considered for multi-sensor fusion.
Declarations
Acknowledgement
This work was supported by the National Natural Science Foundation of China under Projects 60802065 and 61271287.
References
- Novak LM, Halversen SD, Owirka GJ, Hiett M: Effects of polarization and resolution on SAR ATR. IEEE Trans. Aerosp. Electron. Syst. 1997, 33(1):102-116.
- Novak LM, Owirka GJ, Weaver AL: Automatic target recognition using enhanced resolution SAR data. IEEE Trans. Aerosp. Electron. Syst. 1999, 35(1):157-175.
- Novak LM, Owirka GJ, Brower WS: Performance of 10- and 20-target MSE classifiers. IEEE Trans. Aerosp. Electron. Syst. 2000, 36(4):1279-1289.
- Rokach L: Ensemble-based classifiers. Artif. Intell. Rev. 2010, 33(1–2):1-39.
- Kittler J, Hatef M, Duin RPW, Matas J: On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20(3):226-239.
- Rahman AFR, Fairhurst MC: Multiple classifier decision combination strategies for character recognition: a review. Int. J. Doc. Anal. Recognit. 2003, 5(4):166-194.
- Waske B, Benediktsson JA: Fusion of support vector machines for classification of multisensor data. IEEE Trans. Geosci. Remote Sens. 2007, 45(12):3858-3866.
- Xin Y, Yukuan L, Jiao LC: SAR automatic target recognition based on classifiers fusion. In Proceedings of the 2011 International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM). Xiamen; 2011:1-5.
- Huan R, Pan Y: Decision fusion strategies for SAR image target recognition. IET Radar Sonar Navigat. 2011, 5(7):747-755.
- Sánchez J, Redolfi J: Classifier combination using random walks on the space of concepts. Prog. Pattern Recognit. Image Anal. Comput. Vis. Appl. 2012, 7441:789-796.
- Busagala LSP, Ohyama W, Wakabayashi T, Kimura F: Multiple feature-classifier combination in automated text classification. In Proceedings of the 2012 10th IAPR International Workshop on Document Analysis Systems (DAS). Gold Coast, QLD, Australia; March 2012:43-47.
- Crammer K, Dredze M, Pereira F: Confidence-weighted linear classification for text categorization. J. Mach. Learn. Res. 2012, 13:1891-1926.
- Jian H, Zhan-Shen F, Bo-Ping Z: A graph-theoretic approach to classifier combination. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Kyoto; March 2012:1017-1020.
- Duval-Poo M, Sosa-García J, Guerra-Gandón A, Vega-Pons S, Ruiz-Shulcloper J: A new classifier combination scheme using clustering ensemble. Prog. Pattern Recognit. Image Anal. Comput. Vis. Appl. 2012, 7441:154-161.
- Martinez AM, Kak AC: PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23(2):228-233.
- Changzhen Q, Hao R, Huanxin Z, Shilin Z: Performance comparison of target classification in SAR images based on PCA and 2D-PCA features. In Proceedings of the 2009 2nd Asian-Pacific Conference on Synthetic Aperture Radar (APSAR). Xi'an, Shaanxi; October 2009:868-871.
- Belhumeur PN, Hespanha JP, Kriegman DJ: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19(7):711-720.
- Lee DD, Seung HS: Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401:788-791.
- Nikolaus R: Learning the parts of objects using non-negative matrix factorization. Term Paper, MMER Team; 2007.
- Ying W, Ping H, Xiaoguang L, Renbiao W, Jingxiong H: The performance comparison of Adaboost and SVM applied to SAR ATR. In Proceedings of the 2006 International Conference on Radar. Shanghai; 2006:1-4.
- Vapnik VN: Statistical Learning Theory. Wiley, New York; 1998.
- Kuncheva LI: Combining Pattern Classifiers: Methods and Algorithms. Wiley, New York; 2004.
- Van-Nam H, Tri Thanh N, Cuong Anh L: Adaptively entropy-based weighting classifiers in combination using Dempster-Shafer theory for word sense disambiguation. Comput. Speech Lang. 2009, 24(3):461-473.
- Haichao Z, Nasser MN, Yanning Z, Thomas SH: Multi-view automatic target recognition using joint sparse representation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48(3):2481-2497.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.