 Research
 Open Access
Orthogonal maximum margin projection subspace for radar target HRRP recognition
EURASIP Journal on Wireless Communications and Networking, volume 2016, Article number: 72 (2016)
Abstract
In this paper, a novel target recognition method, namely orthogonal maximum margin projection subspace (OMMPS), is proposed for radar target recognition using the high-resolution range profile (HRRP). The core of OMMPS is to maximize the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. By introducing a nonlinear mapping function, we also derive the kernel version of OMMPS, namely the orthogonal kernel maximum margin projection subspace (OKMMPS). Compared with the maximum margin criterion (MMC) method, OMMPS is optimal in the sense of maximum margin because its coordinate axes are obtained sequentially by solving a constrained optimization problem, which improves the recognition performance. In addition, the number of effective features for OMMPS is not limited by the number of pattern classes, so appropriate features can still be obtained for separating the classes, even in a high-dimensional space with only a few classes. Moreover, the coordinate axes of OMMPS are mutually orthogonal, so the features extracted by OMMPS have reduced redundancy. Extensive experimental results show that the proposed method has better recognition performance than other methods such as MMC and LDA.
Introduction
A high-resolution range profile (HRRP) can be obtained with wideband radar. The HRRP is the amplitude of the radar-returned echoes from a target as a function of range cell, representing the distribution of the target's radar scattering centers projected along the radar line of sight. It provides geometric structure information such as target size and scattering center locations, which is very useful in target classification. Therefore, radar target recognition using HRRP has attracted intensive attention from the radar target recognition community [1–7]. K. T. Kim et al. propose some invariant features for HRRP [8, 9]. Y. Shi et al. [10] use a novel neural network classifier for HRRP recognition. S. K. Wong [11] presents a feature selection method in the frequency domain. D. E. Nelson et al. [12] study a new iterated wavelet feature for HRRP classification. R. A. Mitchell et al. [13] extract robust statistical features from HRRP for radar target recognition. X. J. Liao et al. [14] use sequential HRRPs to identify ground targets. C. Y. Wang et al. [15] model radar echoes for HRRP recognition with a T-mixture model. M. Li et al. [16] propose a sparse representation-based denoising method for improving recognition performance using HRRP. L. Du et al. [17] apply a statistical model for recognizing radar HRRPs. L. Shi et al. [18] use local factor analysis to model the non-Gaussianity of radar HRRP data. J. S. Fu et al. [19] extract between-class and among-class discriminant information to improve classification performance. However, HRRP is sensitive to target aspect, time shift, and amplitude scale. These factors increase the between-class ambiguities that must be resolved and degrade the classification accuracy. Moreover, HRRP is typically high-dimensional, non-Gaussian, and dependently distributed across dimensions, which increases the difficulty of statistical modeling of pattern objects. Thus, radar target recognition using HRRP remains a challenging task.
Many previous works have shown that subspace methods are very effective in pattern recognition tasks. For example, principal component analysis (PCA) preserves the directions of largest variance [20]. Linear discriminant analysis (LDA) maximizes the between-class distance and minimizes the within-class distance simultaneously [21]. PCA and LDA are widely applied for feature extraction and dimension reduction. To handle nonlinear problems, KPCA [22] and KFDA [23] were proposed based on the kernel trick. However, the performance of these methods cannot be improved further when the objects, such as HRRPs, are high-dimensional vectors that do not satisfy the Gaussian distribution assumption, because these methods only capture the global geometric structure of the dataset and do not consider the local geometric structure information that is very important for target recognition.
To capture local structure information, several manifold learning methods have been proposed. X. F. He et al. [24] present locality preserving projections (LPP) by means of a weight matrix (called the heat kernel). H. T. Chen et al. [25] propose local discriminant embedding (LDE) using the neighbors and class relations of the data. D. Cai et al. [26] study orthogonal Laplacianfaces (OLPP) by computing a set of orthogonal basis functions. L. Zhu et al. [27] propose orthogonal discriminant locality preserving projections (ODLPP) by orthogonalizing the basis vectors. S. J. Wang et al. [28] present exponential locality preserving projections (ELPP) by introducing the matrix exponential function. The above methods obtain impressive results. However, they only emphasize the compactness between neighbors or same-class data points and do not consider the optimal separation between different-class data points. Therefore, the discriminative power may be improved by combining manifold learning and discriminant analysis.
Motivated by this idea, S. Yan et al. [29] present the margin Fisher analysis (MFA) method. MFA uses an intrinsic graph and a penalty graph to characterize the local structure in discriminant analysis and thus increases the intra-class compactness and inter-class separability. M. Sugiyama [30] proposes the local Fisher discriminant analysis (LFD) approach, which takes the local structure of the data into account so that multimodal data can be embedded appropriately. D. Cai et al. [31] study the locality sensitive discriminant analysis (LSDA) method, which utilizes the local geometric structure of the data manifold and discriminant information at the same time. T. Zhang et al. [32] present a discriminative locality alignment (DLA) algorithm that imposes discriminative information in the part optimization stage. DLA can handle the nonlinearity of the measurement distribution and preserve discriminative ability while avoiding the small sample size problem. B. Li et al. [33] propose the locally linear discriminant embedding (LLDE) method, which applies constrained weights to strengthen classification ability. Y. Chen et al. [34] present a nonnegative local coordinate factorization (NLCF) method that adds a local coordinate constraint to the standard NMF objective function. Q. Gao et al. [35] propose the stable orthogonal local discriminant embedding algorithm by introducing an orthogonality constraint on the basis vectors. C. Hou et al. [36] propose a unified framework that explicitly unfolds the manifold and reformulates local approaches as semidefinite programs, thus improving the performance of algorithms such as locally linear embedding (LLE), Laplacian eigenmaps (LE), and local tangent space alignment (LTSA). Although the above methods are successful in many applications, their recognition performance may decrease when objects such as HRRPs suffer from large within-class variation, because these methods often lack robustness and generalization.
Inspired by the maximum margin of SVM, A. Kocsor et al. [37] propose the margin maximizing discriminant analysis (MMDA) approach. The core of MMDA is to maximize the between-class margin on the decision boundary by using the normals of a set of pairwise orthogonal margin-maximizing hyperplanes to construct a projection subspace. However, MMDA only fits the binary classification problem and cannot be applied directly to multiclass classification. Based on a similar idea, H. F. Li et al. [38] present the maximum margin criterion (MMC) method. The aim of MMC is to maximize the trace of the difference between the between-class scatter matrix and the within-class scatter matrix. It can be applied to multiclass classification directly and avoids the small sample size (SSS) problem. However, the coordinate axes of the MMC subspace are not optimal in the sense of maximum margin because they are obtained by singular value decomposition (SVD) of the difference between the between-class scatter matrix and the within-class scatter matrix without imposing any constraints. Thus, its performance can be improved further.
In this paper, a novel target recognition method, namely orthogonal maximum margin projection subspace (OMMPS), is proposed for radar target HRRP recognition. The aim of OMMPS is to maximize the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. By imposing an orthogonality constraint on the objective function, we can solve for the OMMPS. The OMMPS has three advantages. First, the number of features does not depend on the number of classes; as a result, appropriate features can still be obtained for separating the classes, even in a high-dimensional space with only a few classes. Second, the coordinate axes of OMMPS are optimal in the sense of maximum margin because they are solved sequentially under the orthogonality constraint. Third, the coordinate axes of OMMPS are mutually orthogonal, so the features extracted by OMMPS have reduced redundancy, which improves the recognition performance.
OMMPS
Let \( \mathbf{X}=\left[{\mathbf{x}}_{11}\cdots {\mathbf{x}}_{1{N}_1}\cdots {\mathbf{x}}_{C1}\cdots {\mathbf{x}}_{C{N}_C}\right] \) denote a training sample set, where x _{ ij } is the jth n-dimensional HRRP vector of the ith class. Each class contains N _{ i } training samples, and the total number of training samples for C classes is N (N = N _{1} + N _{2} + ⋯ + N _{ C }). Let A represent an n × m-dimensional matrix (m < n). Projecting x _{ ij } into the m-dimensional feature subspace gives
\( {\mathbf{y}}_{ij}={\mathbf{A}}^T{\mathbf{x}}_{ij} \)
where T denotes transposition and y _{ ij } is an m-dimensional vector, namely the subprofile of x _{ ij }. First, we compute the between-class scatter distance d _{ B } in the subprofile space
where Tr(⋅) is the trace of a matrix. \( {\overline{\mathbf{y}}}_i=\frac{1}{N_i}{\displaystyle \sum_{j=1}^{N_i}{\mathbf{y}}_{ij}} \), \( {\overline{\mathbf{y}}}_k=\frac{1}{N_k}{\displaystyle \sum_{j=1}^{N_k}{\mathbf{y}}_{kj}} \), and \( \overline{\mathbf{y}}=\frac{1}{N}{\displaystyle \sum_{i=1}^C{\displaystyle \sum_{j=1}^{N_i}{\mathbf{y}}_{ij}}} \) are the mean vectors of the ith class' training subprofiles, the kth class' training subprofiles, and all training subprofiles, respectively. Substituting Eq. (1) into Eq. (2), it follows that
where \( {\overline{\mathbf{x}}}_i \) is the mean vector of the ith class' training samples, and \( \overline{\mathbf{x}} \) is the mean vector of all training samples. S _{ B } is the between-class scatter matrix in the original sample space
Second, we compute the within-class scatter distance d _{ W } in the subprofile space
Substituting Eq. (1) into Eq. (5), we can get
where S _{ W } is the within-class scatter matrix in the original sample space
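As a concrete illustration, both scatter matrices can be computed directly from the labeled training set. The NumPy sketch below assumes the conventional definitions of S _{ B } and S _{ W } (class-size-weighted spread of the class means, and pooled spread around each class mean); the function name `scatter_matrices` is illustrative, not from the paper:

```python
import numpy as np

def scatter_matrices(X, labels):
    """Between-class (S_B) and within-class (S_W) scatter matrices in the
    original sample space, assuming the conventional definitions.
    X: (N, n) array of HRRP vectors; labels: length-N class labels."""
    n = X.shape[1]
    mean_all = X.mean(axis=0)
    S_B = np.zeros((n, n))
    S_W = np.zeros((n, n))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - mean_all)[:, None]
        S_B += Xc.shape[0] * diff @ diff.T       # weighted spread of class means
        S_W += (Xc - mean_c).T @ (Xc - mean_c)   # spread around each class mean
    return S_B, S_W
```

A useful sanity check is that S _{ B } + S _{ W } equals the total scatter matrix of the data.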
According to the geometric structure of the subprofile space, we define the between-class margin in the subprofile space as
\( {d}_M={d}_B-{d}_W \)
where d _{ M } is the between-class margin.
The aim of OMMPS is to seek an orthogonal projection subspace by maximizing the between-class margin under the orthogonality constraint, i.e., by solving the following maximization problem
and
where a _{ r } is the rth column vector of matrix A, i.e., A = [a _{1}, a _{2} … a _{m}]; the resulting subspace is the orthogonal maximum margin projection subspace (OMMPS). Although the objective function in Eq. (9) is similar to that of MMC [38], the MMC objective does not include the orthogonality constraints. Besides, MMC obtains the projection subspace from the eigenvectors corresponding to the largest eigenvalues of the matrix (S _{B} − S _{W}), and thus the projection vectors of MMC are not optimal in the sense of maximum margin. We solve the above optimization problem in the following steps.
To solve for a _{1}, we construct a Lagrangian function using Eqs. (9) and (10)
where λ _{1} is a Lagrangian multiplier. Taking the derivative of J(a _{1}, λ _{1}) with respect to a _{1} and setting the resulting equation to zero, we obtain the eigenvector equation
\( \left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_1={\lambda}_1{\mathbf{a}}_1 \)
Let \( {\lambda}_1^{\max } \) be the largest eigenvalue of the matrix (S _{ B } − S _{ W }) and \( {\boldsymbol{\upmu}}_1^{\max } \) the corresponding eigenvector; then, we may set \( {\mathbf{a}}_1={\boldsymbol{\upmu}}_1^{\max } \)
After obtaining a _{1}, we combine Eqs. (9)–(11) to form the Lagrangian function
where λ _{1}, λ _{2}, ⋯, λ _{ r − 1}, and λ _{ r } are Lagrangian multipliers. In a similar way, taking the derivative of J(a _{ r }, λ _{1}, λ _{2} ⋯ λ _{ r − 1}, λ _{ r }) in Eq. (15) with respect to a _{ r } and λ _{ l } (l = 1, 2, ⋯, r) and solving the resulting equations leads to
Let \( {\lambda}_r^{\max } \) be the largest eigenvalue of Eq. (16) and \( {\boldsymbol{\upmu}}_r^{\max } \) the corresponding eigenvector; then, we can set \( {\mathbf{a}}_r={\boldsymbol{\upmu}}_r^{\max } \)
According to the above discussion, the basis vectors of OMMPS are solved sequentially by imposing the orthogonality constraint on the objective function. As a result, they are mutually orthogonal and optimal in the sense of maximum margin. Therefore, OMMPS has better discriminative power than MMC. The steps of feature extraction based on OMMPS are shown in Algorithm 1.
Algorithm 1. The feature extraction based on OMMPS
Task: Solve the linear subprofile features using the training data set
Step 1) Determine the subprofile's dimensionality m
Step 2) Compute the matrices S _{ B } and S _{ W } by Eqs. (4) and (7)
Step 3) Apply SVD to the matrix (S _{ B } − S _{ W }) and obtain a _{1} by Eq. (14)
Step 4) Apply SVD to the matrix \( \left(\mathbf{I}-\left({\mathbf{a}}_1{\mathbf{a}}_1^T+\cdots +{\mathbf{a}}_{r-1}{\mathbf{a}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B-{\mathbf{S}}_W\right) \) for r = 2 and obtain a _{2} by Eq. (17)
Step 5) Set r = r + 1 and repeat Step 4 until a _{ m } is obtained. Then A = [a _{1} a _{2} ⋯ a _{ m }]
Step 6) Obtain the linear subprofile of an HRRP vector x using Eq. (1)
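Algorithm 1 can be sketched as follows. This is an illustrative implementation, not the authors' code: each basis vector is taken as the leading eigenvector of the deflated matrix of Step 4, with `numpy.linalg.eig` standing in for the SVD step mentioned in the paper, and `ommps_basis` is our own name:

```python
import numpy as np

def ommps_basis(S_B, S_W, m):
    """Sketch of Algorithm 1: solve the m orthogonal basis vectors
    sequentially; a_r is the leading eigenvector of the deflated matrix
    (I - sum_l a_l a_l^T)(S_B - S_W)."""
    n = S_B.shape[0]
    D = S_B - S_W
    A = np.zeros((n, m))
    P = np.eye(n)                      # deflation projector I - A A^T
    for r in range(m):
        # leading eigenvector of the (generally non-symmetric) P @ D
        w, V = np.linalg.eig(P @ D)
        a = np.real(V[:, np.argmax(np.real(w))])
        a /= np.linalg.norm(a)
        A[:, r] = a
        P -= np.outer(a, a)            # remove the direction just found
    return A                           # columns are mutually orthogonal
```

For the first iteration P = I, so a _{1} is the leading eigenvector of (S _{ B } − S _{ W }), consistent with Eq. (14); each later eigenvector lies in the range of the projector and is therefore orthogonal to all earlier ones.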
Orthogonal kernel maximum margin projection subspace (OKMMPS)
When the nonlinear variations in HRRPs are severe, the HRRPs of different classes may not be linearly separable. We introduce a nonlinear mapping to solve this problem. A nonlinear function φ is used to map x _{ ij } into a high-dimensional feature space F as follows
where the dimensionality of the feature space F is n ^{'}; n ^{'} may be arbitrarily large or even infinite. Let A _{ φ } denote an n ^{'} × m ^{φ}-dimensional transformation matrix, namely the orthogonal kernel maximum margin projection subspace; then, φ(x _{ ij }) is projected into the m ^{φ}-dimensional space as follows
where \( {\mathbf{y}}_{ij}^{\varphi } \) is an m ^{φ}-dimensional column vector, namely the nonlinear subprofile of the HRRP vector x _{ ij } in the low-dimensional feature space. In a similar way, we can compute the between-class margin \( {d}_M^{\varphi } \) in the nonlinear subprofile space
where \( {\mathbf{S}}_B^{\varphi } \) and \( {\mathbf{S}}_W^{\varphi } \) are the betweenclass scatter matrix and withinclass scatter matrix in highdimensional feature space F, respectively.
where \( {\mathbf{x}}_{ij}^{\varphi }=\varphi \left({\mathbf{x}}_{ij}\right) \), \( {\overline{\mathbf{x}}}_i^{\varphi }=\left(1/{N}_i\right){\displaystyle \sum_{j=1}^{N_i}\varphi \left({\mathbf{x}}_{ij}\right)} \), and \( {\overline{\mathbf{x}}}^{\varphi }=\left(1/N\right){\displaystyle \sum_{i=1}^C}{\displaystyle \sum_{j=1}^{N_i}\varphi \left({\mathbf{x}}_{ij}\right)} \). Following the same maximum margin principle, we obtain the orthogonal kernel maximum margin projection subspace (OKMMPS) by solving the following constrained maximization problem
and
where \( {\mathbf{a}}_r^{\varphi } \) is the rth column vector of matrix A _{ φ }, i.e., \( {\mathbf{A}}_{\varphi }=\left[{\mathbf{a}}_1^{\varphi}\;{\mathbf{a}}_2^{\varphi}\cdots {\mathbf{a}}_m^{\varphi}\right] \), namely the OKMMPS. Because the nonlinear mapping φ(⋅) is not defined explicitly, Eq. (23) cannot be solved directly to obtain the OKMMPS. We use the kernel trick to solve this problem.
Let
and
where α _{ rij } is a coefficient, x _{ ij } and x _{ lk } are n-dimensional column vectors, and k(x _{ ij }, x _{ lk }) is a kernel function. Substituting Eqs. (26) and (27) into Eqs. (23)–(25), it follows that
and
where
where
Combining Eqs. (28) and (29), we construct the following function to obtain α _{1}
where γ _{1} is a Lagrangian multiplier. Taking the derivative of J(α _{1}, γ _{1}) with respect to α _{1} and setting the resulting equation to zero, we obtain the generalized eigenvector equation
Similar to the observation in Section 2, we set
where \( {\boldsymbol{\upmu}}_1^{\alpha, \max } \) is the eigenvector corresponding to the largest eigenvalue \( {\gamma}_1^{\max } \) of the matrix \( {\mathbf{K}}^{-1}\left({\mathbf{S}}_B^{\varphi }-{\mathbf{S}}_W^{\varphi}\right) \).
Combining Eqs. (28)–(30), we construct the following function with Lagrangian multipliers to solve for α _{ r } (2 ≤ r ≤ m)
where γ _{1}, γ _{2}, ⋯, γ _{ r − 1}, and γ _{ r } are Lagrangian multipliers. In a similar way, we can obtain the following eigenvector equation
Let \( {\gamma}_r^{\max } \) be the largest eigenvalue of Eq. (40) and \( {\boldsymbol{\upmu}}_r^{\alpha, \max } \) the corresponding eigenvector; then, we set \( {\boldsymbol{\upalpha}}_r={\boldsymbol{\upmu}}_r^{\alpha, \max } \)
After obtaining α _{1}, α _{2}, ⋯, α _{ m }, φ(x) is projected into the nonlinear subprofile space according to Eq. (19); it follows that
where y ^{φ} is the corresponding nonlinear subprofile of x. The steps of feature extraction based on OKMMPS are shown in Algorithm 2.
Algorithm 2. The nonlinear feature extraction based on OKMMPS
Task: Solve the nonlinear subprofile features using the training data set
Step 1) Determine the subprofile's dimensionality m
Step 2) Select the kernel function
Step 3) Compute the matrices K, \( {\mathbf{S}}_B^{\alpha } \), and \( {\mathbf{S}}_W^{\alpha } \) by Eqs. (27), (32), and (33)
Step 4) Apply SVD to the matrix \( {\mathbf{K}}^{-1}\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right) \) and obtain α _{1} by Eq. (38)
Step 5) Apply SVD to the matrix \( {\mathbf{K}}^{-1}\left(\mathbf{I}-\left(\mathbf{K}{\boldsymbol{\upalpha}}_1{\boldsymbol{\upalpha}}_1^T+\cdots +\mathbf{K}{\boldsymbol{\upalpha}}_{r-1}{\boldsymbol{\upalpha}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right) \) for r = 2 and obtain α _{2} by Eq. (41)
Step 6) Set r = r + 1 and repeat Step 5 until α _{ m } is obtained
Step 7) Obtain the nonlinear subprofile of an HRRP vector x using Eq. (42)
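Steps 4–6 of Algorithm 2 can be sketched in the same deflation style as Algorithm 1. The sketch assumes `S_Ba` and `S_Wa` are the kernel-space scatter matrices of Eqs. (32) and (33), already expressed through the Gram matrix K; the function name, the pseudo-inverse of K, and the normalization choice are our own:

```python
import numpy as np

def okmmps_coefficients(K, S_Ba, S_Wa, m):
    """Sketch of Algorithm 2: solve the expansion coefficient vectors
    alpha_r sequentially from the deflated matrix
    K^{-1}(I - (K a_1 a_1^T + ...))(S_Ba - S_Wa)."""
    N = K.shape[0]
    D = S_Ba - S_Wa
    Kinv = np.linalg.pinv(K)          # pseudo-inverse guards against singular K
    alphas = []
    P = np.eye(N)                     # I - (K a_1 a_1^T + ... )
    for r in range(m):
        w, V = np.linalg.eig(Kinv @ P @ D)
        a = np.real(V[:, np.argmax(np.real(w))])
        a /= np.sqrt(a @ K @ a)       # normalize so a^T K a = 1
        alphas.append(a)
        P -= np.outer(K @ a, a)       # deflate in the K metric
    return np.column_stack(alphas)
```

The orthogonality of the kernel-space basis vectors translates into K-orthogonality of the coefficient vectors, i.e., \( {\boldsymbol{\upalpha}}_r^T\mathbf{K}{\boldsymbol{\upalpha}}_l=0 \) for r ≠ l, which this deflation preserves.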
Experimental results
To show the effectiveness of the proposed method, we perform extensive experiments on measured data of three types of airplanes.
Data description
The data used in the experiments are HRRPs measured from three airplanes: An-26, Jiang, and Yark-42. For each airplane, 240 HRRPs over a wide range of aspects are adopted; one quarter of the HRRPs are used for training and the rest for testing. Before running the experiments, each HRRP is preprocessed by energy normalization. The HRRPs of the three airplanes are illustrated in Fig. 1.
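The paper does not spell out the energy normalization formula; a common choice, assumed here, is to scale each profile to unit L2 energy:

```python
import numpy as np

def energy_normalize(hrrp):
    """Energy normalization of an HRRP vector (assumed convention):
    scale the profile so its L2 norm, and hence its energy, is 1."""
    hrrp = np.asarray(hrrp, dtype=float)
    return hrrp / np.linalg.norm(hrrp)
```

This removes the amplitude-scale sensitivity mentioned in the introduction, since any global scaling of the echo cancels out.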
The dimensionality of subspace
In this experiment, we consider the effect of the subspace's dimensionality on recognition performance. The training and testing data are as described above. The subspace's dimensionality is set from 1 to 10. The nearest-neighbor classifier is applied for classification. Two kernels are used: the radial basis function kernel (RBFK)
and polynomial function kernel (PFK)
where the kernel parameters σ and d are set by cross-validation.
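The two kernels can be written down directly. The exact parameterizations (e.g., whether the RBFK denominator is σ² or 2σ², and whether the PFK includes an offset term) are not visible in the extracted text, so the common conventions below are assumptions:

```python
import numpy as np

def rbf_kernel(x, z, sigma):
    """Radial basis function kernel k(x, z) = exp(-||x - z||^2 / sigma^2);
    the sigma^2 denominator is an assumed convention."""
    d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-(d @ d) / sigma ** 2)

def poly_kernel(x, z, d):
    """Polynomial kernel k(x, z) = (x . z + 1)^d; the +1 offset is an
    assumed convention."""
    return (np.asarray(x, dtype=float) @ np.asarray(z, dtype=float) + 1.0) ** d
```

Note that with d = 1 (the value selected later by cross-validation) the polynomial kernel is affine in the inner product, so OKMMPS with PFK essentially reduces to a linear method.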
Figure 2 shows the average recognition rates of two methods (MMC [38] and OMMPS) versus the subspace's dimensionality. From Fig. 2a, it can be seen that the average recognition rate rises sharply as the dimensionality increases from 1 to 5 and remains approximately constant above 5. Thus, the dimensionality of MMC is set to 5 in the following experiments. From Fig. 2b, the appropriate dimensionality of OMMPS can also be set to 5. In a similar way, the dimensionalities of OKMMPS with RBFK and PFK are set to 50 and 67, respectively.
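The recognition pipeline used throughout these experiments (project via Eq. (1), then nearest-neighbor matching) can be sketched as follows; `classify_nn` is an illustrative name, not from the paper:

```python
import numpy as np

def classify_nn(A, X_train, y_train, x):
    """Project profiles into the subspace spanned by the columns of A
    (Eq. (1), y = A^T x) and label a test profile by its nearest
    training subprofile in Euclidean distance."""
    feats = X_train @ A                      # training subprofiles
    f = np.asarray(x, dtype=float) @ A       # test subprofile
    dists = np.linalg.norm(feats - f, axis=1)
    return y_train[np.argmin(dists)]
```

The same routine serves the kernel methods once the subprofiles are replaced by the nonlinear subprofiles of Eq. (42).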
Kernel parameters
In this experiment, we set the appropriate parameters for the kernel methods OKMMPS, KPCA [22], and KFDA [23] by cross-validation. For the radial basis function kernel, the parameter σ is set as 5, 10, 20, 30, 40, and 50. For the polynomial function kernel, the parameter d varies from 1 to 10. The training and test data are the same as in the previous experiments. The nearest-neighbor classifier is applied for classification, and the experiment is run for each parameter value. Tables 1 and 2 list the average recognition rates, along with the dimensionalities, of the three kernel methods for the varying parameter values. As can be seen in Tables 1 and 2, OKMMPS achieves the best recognition results when the radial basis function kernel with σ = 20 or the polynomial function kernel with d = 1 is selected. We choose the parameters for the other kernel methods in a similar way; the best kernel parameters for the methods mentioned above are shown in Table 3. In addition, it can be observed that the methods with the radial basis function kernel have higher recognition rates than those with the polynomial function kernel, which shows that the radial basis function kernel better represents the nonlinearity appearing in the HRRP samples for these data.
The variation of target aspect
The HRRPs change greatly when the target aspect varies by a few degrees, which increases the difficulty of classifying the targets. In this experiment, we consider the robustness of MMC, OMMPS, and OKMMPS to the variation of target aspect. The training data are the same as in the previous experiments. Three subsets of testing data are selected, containing 300, 420, and 540 HRRPs, respectively; for each class, 100, 140, and 180 HRRPs are chosen for the three subsets, respectively. The variation of target aspect obviously becomes larger as the number of HRRPs increases. The radial basis function kernel is used, and the parameters of the methods are set according to the above experiments: the dimensionalities of MMC, OMMPS, and OKMMPS are 5, 5, and 50, respectively, and the radial basis function kernel parameter for OKMMPS is set to 20. The nearest-neighbor classifier is applied for classification. The recognition results of the three methods for the three subsets of testing data are illustrated in Fig. 3. Figure 3 shows that the average recognition rates decrease as the number of testing samples per class increases from 100 to 180, i.e., as the variation of target aspect becomes large. However, the recognition rates of OMMPS and OKMMPS remain better than those of MMC for all three subsets. This means that OMMPS and OKMMPS are more robust to aspect variation than MMC. The reason is that the basis vectors of OMMPS and OKMMPS are obtained by solving the optimization problem sequentially and are optimal in the sense of maximum margin. Thus, high classification accuracy can be obtained even when the within-class scatter is large due to large changes in the HRRPs.
Performance comparison
To show the effectiveness of the proposed method further, we evaluate the performance of OMMPS and OKMMPS against MMC [38], PCA [20], LDA [21], KPCA [22], and KFDA [23] under different SNRs. The SNR is set as 5, 10, 15, 20, 25, and 30 dB. For each SNR, the recognition results are averaged over 50 runs. The dimensionalities of the subspace for MMC, OMMPS, OKMMPS, PCA, LDA, KPCA, and KFDA are 5, 5, 50, 26, 2, 10, and 2, respectively. The radial basis function kernel is used. According to the experimental results of subsection 4.3, the kernel parameters for OKMMPS, KPCA, and KFDA are set as 20, 40, and 10, respectively. The nearest-neighbor classifier is applied for classification. Figure 4 shows the average recognition rates of the seven methods versus SNR. Several interesting observations can be made from Fig. 4.
(1) When the SNR is above 15 dB, the kernel methods (OKMMPS, KFDA, and KPCA) outperform the corresponding linear methods (OMMPS, LDA, and PCA). At SNR = 15 dB, the average recognition rates of OKMMPS, KFDA, KPCA, OMMPS, LDA, and PCA are 86.52, 81, 79.67, 85.33, 80, and 79.33 %, respectively. This shows that the kernel methods are more robust to noise than the linear methods: the nonlinearity in HRRPs is pronounced under noise, and the kernel methods can represent this nonlinear variation through the nonlinear mapping. Thus, the separability between the different classes is improved.
(2) MMC has better recognition performance than LDA at all SNR levels when the number of training samples is much smaller than the dimensionality of the HRRP. At SNR = 15 dB, the average recognition rates of MMC and LDA are 83.33 and 80 %, respectively. This demonstrates that MMC has better discriminative power than LDA for small training sets. The reason is that LDA suffers from the small sample size (SSS) problem in this case, whereas MMC does not need the inverse of the within-class scatter matrix and thus avoids the SSS problem. As a result, the features extracted by MMC are more robust.
(3) The discriminative ability of OMMPS and OKMMPS is superior to that of MMC over the whole SNR range from 5 to 30 dB. At SNR = 15 dB, the average recognition rates of OMMPS, OKMMPS, and MMC are 85.23, 86.42, and 83.33 %, respectively. The reason is that the basis vectors of OMMPS and OKMMPS are obtained by solving the optimization problem sequentially and are optimal in the sense of maximum margin. In particular, the basis vectors of OKMMPS remain orthogonal in the high-dimensional feature space. This means that the features extracted by OMMPS and OKMMPS are more discriminative than those extracted by MMC.
Conclusions
In this paper, we propose a novel radar target recognition method using HRRP, namely the orthogonal maximum margin projection subspace (OMMPS). The kernel version, called the orthogonal kernel maximum margin projection subspace (OKMMPS), is also derived. The proposed method maximizes the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. The experimental results on measured data of three types of airplanes show that
(1) OMMPS and OKMMPS can still obtain an appropriate subspace dimensionality for high-dimensional HRRP vectors with only three classes.
(2) The radial basis function kernel represents the nonlinearity appearing in HRRP samples better than the polynomial function kernel.
(3) OMMPS and OKMMPS are more robust to the variation of target aspect than the MMC method.
(4) OMMPS and OKMMPS have higher recognition performance than the other methods.
Abbreviations
HRRP: high-resolution range profile
MMC: maximum margin criterion
OKMMPS: orthogonal kernel maximum margin projection subspace
OMMPS: orthogonal maximum margin projection subspace
References
 1.
HJ Li, SH Yang, Using range profiles as feature vectors to identify aerospace objects. IEEE Trans. Antennas. Propag. 41(March), 261–268 (1993)
 2.
KB Eom, R Chellappa, Noncooperative target classification using hierarchical modeling of highrange resolution radar signatures. IEEE Trans. Signal Process. 45(September), 2318–2326 (1997)
 3.
SP Jacobs, JA Sullivan, Automatic target recognition using sequences of high range resolution radar range profiles. IEEE Trans. Aerosp. Electron. Syst 36, 364–381 (2000)
 4.
A Zyweck, RE Bogner, Radar target classification of commercial aircraft. IEEE Trans. Aerosp. Electron. Syst. 32(February), 598–606 (1996)
 5.
AK Shaw, R Vasgist, R Williams, HRRATR using eigentemplates with observation in unknown target scenario. Proc. SPIE 4053, 467–478 (2000)
 6.
BM Huther, SC Gustafson, RP Broussad, Wavelet preprocessing for high range resolution radar classification. IEEE. Trans. Aerosp. Electron. Syst 37, 1321–1331 (2001)
 7.
R Wu, Q Gao, J Liu, H Gu, ATR scheme based on 1D HRR profiles. Electronics Letters. 38(December), 1586–1587 (2002)
 8.
KT Kim, DK Seo, HT Kim, Efficient radar target recognition using the MUSIC algorithm and invariant feature. IEEE Trans. Antennas. Propag. 50(March), 325–337 (2002)
 9.
J Zwart, R Heiden, S Gelsema, F Groen, Fast translation invariant classification of HRR range profiles in a zero phase representation. IEE Proc. Radar Sonar Navig. 150(June), 411–418 (2003)
 10.
Y Shi, XD Zhang, A Gabor atom network for signal classification with application in radar target recognition. IEEE Trans. Signal. Process. 49(December), 2994–3004 (2001)
 11.
SK Wong, Noncooperative target recognition in the frequency domain. IEE Proc. Radar Sonar Navig. 151(February), 77–84 (2004)
 12.
DE Nelson, JA Starzyk, DD Ensley, Iterated wavelet transformation and discrimination for HRR radar target recognition. IEEE Trans. Systems, Man, and Cybernetics, Part A: Systems and Humans 33(January), 52–57 (2003)
 13.
RA Mitchell, JJ Westerkamp, Robust statistical feature based aircraft identification. IEEE Trans. Aerosp. Electron. Syst. 35(March), 1077–1093 (1999)
 14.
XJ Liao, P Runkle, L Carin, Identification of ground targets from sequential highrangeresolution radar signatures. IEEE Trans. Aerosp. Electron. Syst. 38(April), 1230–1242 (2002)
 15.
CY Wang, JL Xie, The Tmixture model approach for radar HRRP target recognition. Int. J. Comput. Electr. Eng. 5(5), 500–503 (2013)
 16.
M Li, GJ Zhou, B Zhao, TF Quan, Sparse representation denoising for radar high resolution range profiling. Int. J. Antennas Propag. 2014(3), 1–8 (2014)
 17.
L Du, HW Liu and Z Bao, Radar HRRP statistical recognition: parametric model and model selection, IEEE Transactions on Signal Processing, 56 (5), 1931–1944 (2008).
 18.
L Shi, PH Wang, HW Liu, L Xu, Z Bao, Radar HRRP statistical recognition with local factor analysis by automatic Bayesian YingYang harmony learning. IEEE Trans. Signal Processing 59(2), 610–617 (2011)
 19.
JS Fu, XH Deng, WL Yang, Radar HRRP recognition based on discriminant information analysis. WSEAS Trans. Inf. Sci. Appl. 8(4), 185–201 (2011)
 20.
LM Novak and GJ Owirka, Radar target recognition using an eigenimage approach. IEEE Int. Radar Conf., 129–131 (1994).
 21.
BY Liu, WL Yang, Radar target recognition using canonical transformation to extract features. Proc. SPIE 3545, 368–371 (1998)
 22.
B Chen, HW Liu and Z Bao, PCA and kernel PCA for radar high range resolution profiles recognition. 2005 IEEE International Radar conference, Virginia, USA, 2005, pp. 528–533.
 23.
S Mika, G Ratsch, J Weston, B Scholkopf and KR Muler, Fisher discriminant analysis with kernels. IEEE International Workshop on Neural networks for signal processing, Wisconsin, USA, 1999, pp. 41–48.
 24.
XF He, P Niyogi, Locality preserving projections (Proc. Conf. Advances in Neural Information Processing System 16, Vancouver, Canada, 2003)
 25.
HT Chen, HW Chang, TL Liu, Local discriminant embedding and its variants. IEEE Computer Society Conference on Computer Vision & Pattern Recognition, San Diego, California, USA, 2005, 2(2):846–853.
 26.
D Cai, XF He, JW Han, HJ Zhang, Orthogonal Laplacianfaces for face recognition. IEEE Transactions on Image Processing, 2006, 15(11):3608–3614.
 27.
L Zhu, SN Zhu, Face recognition based on orthogonal discriminant locality preserving projections. Neurocomputing 70, 1543–1546 (2007)
 28.
SJ Wang, HL Chen, XJ Peng, CG Zhou, Exponential locality preserving projections for small sample size problem. Neurocomputing 74, 3654–3662 (2011)
 29.
S Yan, D Xu, B Zhang, H Zhang, Q Yang, S Lin, Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 40–51 (2007)
 30.
M Sugiyama, Local fisher discriminant analysis for supervised dimensionality reduction, Proceedings of the International Conference on Machine Learning (ICML), Las Vegas, Nevada, USA, 2006, pp. 905–912.
 31.
D Cai, X He, K Zhou, J Han, H Bao, Locality sensitive discriminant analysis, Proceedings of the 20th International Joint Conference Artificial Intelligence (IJCAI), Hyderabad, India, 2007, pp. 708–713.
 32.
T Zhang, D Tao, X Li, J Yang, Patch alignment for dimensionality reduction. IEEE Trans. Knowl. Data Eng. 21(9), 1299–1313 (2009)
 33.
B Li, C Zheng, DS Huang, Locally linear discriminant embedding: an efficient method for face recognition. Pattern Recog. 41(12), 3813–3821 (2008)
 34.
Y Chen, J Zhang, D Cai, W Liu, X He, Nonnegative local coordinate factorization for image representation. IEEE Trans. Image Process. 22(3), 969–979 (2013)
 35.
Q Gao, J Ma, H Zhang, X Gao, Y Liu, Stable orthogonal local discriminant embedding for linear dimensionality reduction. IEEE Trans. Image Process. 22(7), 2521–2530 (2013)
 36.
C Hou, C Zhang, Y Wu, Y Jiao, Stable local dimensionality reduction approaches. Pattern Recog. 42(9), 2054–2066 (2009)
 37.
A Kocsor, K Kovacs, C Szepesvari, Margin maximizing discriminant analysis. Proc. 15th Eur. Conf. Mach. Learn. 32(1), 227–238 (2004)
 38.
H Li, T Jiang, K Zhang, Efficient and robust feature extraction by maximum margin criterion. IEEE Trans. Neural Netw. 17, 157–165 (2006)
Acknowledgements
The authors would like to thank the radar laboratory of University of Electronic Science and Technology of China (UESTC) for providing the measured data. The authors would also like to thank Prof. Qilian Liang of wireless communication Lab in University of Texas at Arlington (UTA) for his help and advice.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhou, D. Orthogonal maximum margin projection subspace for radar target HRRP recognition. J Wireless Com Network 2016, 72 (2016). https://doi.org/10.1186/s13638-016-0571-y
Keywords
 Radar target recognition
 HRRP
 Maximum margin criterion
 Orthogonal maximum margin projection subspace