Orthogonal maximum margin projection subspace for radar target HRRP recognition

Abstract

In this paper, a novel target recognition method, namely orthogonal maximum margin projection subspace (OMMPS), is proposed for radar target recognition using the high-resolution range profile (HRRP). The core of OMMPS is to maximize the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. By introducing a nonlinear mapping function, we also derive the kernel version of OMMPS, namely the orthogonal kernel maximum margin projection subspace (OKMMPS). Compared with the maximum margin criterion (MMC) method, OMMPS is optimal in the sense of maximum margin because its coordinate axes are obtained sequentially by solving a constrained optimization problem, which improves the recognition performance. In addition, the number of effective features for OMMPS is not limited by the number of pattern classes, so appropriate features can still be obtained for separating the classes even in a high-dimensional space with only a few classes. Moreover, the coordinate axes of OMMPS are mutually orthogonal, so the features extracted by OMMPS have reduced redundancy. Extensive experimental results show that the proposed method yields better recognition performance than other methods such as MMC and LDA.

1 Introduction

The high-resolution range profile (HRRP) can be obtained with a wideband radar. The HRRP is the amplitude of the radar echoes returned from a target as a function of range cell, and it represents the distribution of the target's radar scattering centers projected along the radar line of sight. It provides geometric structure information such as target size and scattering-center locations, which is very useful for target classification. Therefore, radar target recognition using HRRP has received intensive attention from the radar target recognition community [1–7]. K. T. Kim et al. propose some invariant features for HRRP [8–9]. Y. Shi et al. [10] use a novel neural network classifier for HRRP recognition. S. K. Wong [11] presents a feature selection method in the frequency domain. D. E. Nelson et al. [12] study a new iterated wavelet feature for HRRP classification. R. A. Mitchell et al. [13] extract robust statistical features from HRRP for radar target recognition. X. J. Liao et al. [14] use sequential HRRPs to identify ground targets. C. Y. Wang et al. [15] model the radar echoes for radar HRRP recognition with a T-mixture model. M. Li et al. [16] propose a sparse representation-based denoising method for improving recognition performance using HRRP. L. Du et al. [17] apply a statistical model to radar HRRP recognition. L. Shi et al. [18] use local factor analysis to model the non-Gaussianity of radar HRRP data. J. S. Fu et al. [19] extract between-class and among-class discriminant information to improve the classification performance. However, HRRP is sensitive to target aspect, time shift, and amplitude scale. These factors increase the between-class ambiguity, which must be resolved, and degrade the classification accuracy. Moreover, HRRP is typically high-dimensional, non-Gaussian, and dependently distributed across dimensions, which increases the difficulty of statistical modeling of the pattern objects. Thus, radar target recognition using HRRP is still a challenging task.

Many previous works have shown that subspace methods are very effective in pattern recognition tasks. For example, principal component analysis (PCA) preserves the directions of largest variance [20]. Linear discriminant analysis (LDA) maximizes the between-class distance and minimizes the within-class distance simultaneously [21]. PCA and LDA are widely applied for feature extraction and dimension reduction. To handle nonlinear problems, KPCA [22] and KFDA [23] have been proposed based on the kernel trick. However, the performance of these methods is limited when the objects, such as HRRPs, are high-dimensional vectors that do not satisfy the Gaussian assumption, because these methods only capture the global geometric structure of the data set and ignore the local geometric structure, which is very important for target recognition.

To capture the local structure information, several manifold learning methods have been proposed. X. F. He et al. [24] present the locality preserving projections (LPP) by means of a weight matrix (the heat kernel). H. T. Chen et al. [25] propose the local discriminant embedding (LDE) using the neighbors and class relations of data. D. Cai et al. [26] study the orthogonal Laplacianfaces (OLPP) method by computing a set of orthogonal basis functions. L. Zhu et al. [27] propose the orthogonal discriminant locality preserving projections (ODLPP) by orthogonalizing the basis vectors. S. J. Wang et al. [28] present the exponential locality preserving projections (ELPP) by introducing the matrix exponential function. The above methods obtain impressive results. However, they only emphasize the compactness among neighboring or same-class data points and do not consider the optimal separation between data points of different classes. Therefore, the discriminative power may be improved by combining manifold learning with discriminant analysis.

Motivated by this idea, S. Yan et al. [29] present the marginal Fisher analysis (MFA) method. MFA uses an intrinsic graph and a penalty graph to characterize the local structure in discriminant analysis and thus increases the intraclass compactness and interclass separability. M. Sugiyama [30] proposes the local Fisher discriminant analysis (LFD) approach, which takes the local structure of the data into account so that multimodal data can be embedded appropriately. D. Cai et al. [31] study the locality sensitive discriminant analysis (LSDA) method, which utilizes the local geometric structure of the data manifold and discriminant information at the same time. T. Zhang et al. [32] present a discriminative locality alignment (DLA) algorithm by imposing discriminative information in the part optimization stage. DLA can handle the nonlinear distribution of measurements and preserve discriminative ability while avoiding the small sample size problem. B. Li et al. [33] propose the locally linear discriminant embedding (LLDE) method, which applies constrained weights to strengthen the classification ability. Y. Chen et al. [34] present a nonnegative local coordinate factorization (NLCF) method by adding a local coordinate constraint to the standard NMF objective function. Q. Gao et al. [35] propose the stable orthogonal local discriminant embedding algorithm by introducing an orthogonality constraint on the basis vectors. C. Hou et al. [36] propose a unified framework that explicitly unfolds the manifold and reformulates local approaches as semi-definite programs, thus improving the performance of algorithms such as locally linear embedding (LLE), Laplacian eigenmaps (LE), and local tangent space alignment (LTSA). Although the above methods are successful in many applications, their recognition performance may degrade when objects such as HRRPs suffer from large within-class variation, because these methods often lack robustness and generalization ability.

Inspired by the maximum margin of SVM, A. Kocsor et al. [37] propose the margin maximizing discriminant analysis (MMDA) approach. The core of MMDA is to maximize the between-class margin on the decision boundary by using the normals of a set of pairwise orthogonal margin-maximizing hyperplanes to construct a projection subspace. However, MMDA is only suited to binary classification and cannot be applied directly to multiclass problems. Based on a similar idea, H. F. Li et al. [38] present the maximum margin criterion (MMC) method. The aim of MMC is to maximize the trace of the difference between the between-class scatter matrix and the within-class scatter matrix. It can be applied to multiclass classification directly and avoids the small sample size (SSS) problem. However, the coordinate axes of the MMC subspace are not optimal in the sense of maximum margin, because they are obtained by singular value decomposition (SVD) of the difference between the between-class and within-class scatter matrices without imposing any constraints. Thus, its performance can be improved further.

In this paper, a novel target recognition method, namely orthogonal maximum margin projection subspace (OMMPS), is proposed for radar target HRRP recognition. The aim of OMMPS is to maximize the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. The OMMPS is solved by imposing an orthogonality constraint on the objective function. OMMPS has three advantages. First, the number of features does not depend on the number of classes; as a result, appropriate features can still be obtained for separating the classes even in a high-dimensional space with only a few classes. Second, the coordinate axes of OMMPS are optimal in the sense of maximum margin because they are solved sequentially under the orthogonality constraint. Third, the coordinate axes of OMMPS are mutually orthogonal, so the features extracted by OMMPS have reduced redundancy, which improves the recognition performance.

2 OMMPS

Let \( \mathbf{X}=\left[{\mathbf{x}}_{11}\cdots {\mathbf{x}}_{1{N}_1}\cdots {\mathbf{x}}_{C1}\cdots {\mathbf{x}}_{C{N}_C}\right] \) denote a training sample set, where x ij is the jth n-dimensional HRRP vector of the ith class. Each class contains N i training samples, and the total number of training samples for the C classes is N (N = N 1 + N 2 + ⋯ + N C ). Let A denote an n × m matrix (m < n). The vector x ij is projected into an m-dimensional feature subspace as

$$ {\mathbf{y}}_{ij}={\mathbf{A}}^T{\mathbf{x}}_{ij} $$
(1)

where T denotes transposition and y ij is an m-dimensional vector, namely the subprofile of x ij . First, we compute the between-class scatter distance d B in the subprofile space

$$ \begin{array}{l}{d}_B=\frac{1}{2}{\displaystyle \sum_{i=1}^C{\displaystyle \sum_{k=1}^C\frac{N_i{N}_k}{N^2}{\left({\overline{\mathbf{y}}}_i-{\overline{\mathbf{y}}}_k\right)}^T\left({\overline{\mathbf{y}}}_i-{\overline{\mathbf{y}}}_k\right)}}\\ {}\kern1.12em ={\displaystyle \sum_{i=1}^C\frac{N_i}{N}{\left({\overline{\mathbf{y}}}_i-\overline{\mathbf{y}}\right)}^T\left({\overline{\mathbf{y}}}_i-\overline{\mathbf{y}}\right)}\\ {}\kern1.12em =Tr\left({\displaystyle \sum_{i=1}^C\frac{N_i}{N}\left({\overline{\mathbf{y}}}_i-\overline{\mathbf{y}}\right){\left({\overline{\mathbf{y}}}_i-\overline{\mathbf{y}}\right)}^T}\right)\end{array} $$
(2)

where Tr(⋅) is the trace of a matrix. \( {\overline{\mathbf{y}}}_i=\frac{1}{N_i}{\displaystyle \sum_{j=1}^{N_i}{\mathbf{y}}_{ij}} \), \( {\overline{\mathbf{y}}}_k=\frac{1}{N_k}{\displaystyle \sum_{j=1}^{N_k}{\mathbf{y}}_{kj}} \), and \( \overline{\mathbf{y}}=\frac{1}{N}{\displaystyle \sum_{i=1}^C{\displaystyle \sum_{j=1}^{N_i}{\mathbf{y}}_{ij}}} \) are the mean vectors of the ith class's training subprofiles, the kth class's training subprofiles, and all training subprofiles, respectively. Substituting Eq. (1) into Eq. (2), it follows that

$$ \begin{array}{l}{d}_B=Tr\left({\mathbf{A}}^T\left({\displaystyle \sum_{i=1}^C\frac{N_i}{N}\left({\overline{\mathbf{x}}}_i-\overline{\mathbf{x}}\right){\left({\overline{\mathbf{x}}}_i-\overline{\mathbf{x}}\right)}^T}\right)\mathbf{A}\right)\\ {}\kern1.12em =Tr\left({\mathbf{A}}^T{\mathbf{S}}_B\mathbf{A}\right)\end{array} $$
(3)

where \( {\overline{\mathbf{x}}}_i \) is the mean vector of the ith class's training samples and \( \overline{\mathbf{x}} \) is the mean vector of all training samples. S B is the between-class scatter matrix in the original sample space

$$ {\mathbf{S}}_B={\displaystyle \sum_{i=1}^C\frac{N_i}{N}\left({\overline{\mathbf{x}}}_i-\overline{\mathbf{x}}\right){\left({\overline{\mathbf{x}}}_i-\overline{\mathbf{x}}\right)}^T} $$
(4)

Second, we compute the within-class scatter distance d W in the subprofile space

$$ \begin{array}{l}{d}_W={\displaystyle \sum_{i=1}^C\left(\frac{N_i}{N}{\displaystyle \sum_{j=1}^{N_i}{\left({\mathbf{y}}_{ij}-{\overline{\mathbf{y}}}_i\right)}^T\left({\mathbf{y}}_{ij}-{\overline{\mathbf{y}}}_i\right)}\right)}\\ {}\kern1.12em =Tr\left({\displaystyle \sum_{i=1}^C\left(\frac{N_i}{N}{\displaystyle \sum_{j=1}^{N_i}\left({\mathbf{y}}_{ij}-{\overline{\mathbf{y}}}_i\right){\left({\mathbf{y}}_{ij}-{\overline{\mathbf{y}}}_i\right)}^T}\right)}\right)\end{array} $$
(5)

Substituting Eq. (1) into Eq. (5), we obtain

$$ \begin{array}{l}{d}_W=Tr\left({\mathbf{A}}^T{\displaystyle \sum_{i=1}^C\left(\frac{N_i}{N}{\displaystyle \sum_{j=1}^{N_i}\left({\mathbf{x}}_{ij}-{\overline{\mathbf{x}}}_i\right){\left({\mathbf{x}}_{ij}-{\overline{\mathbf{x}}}_i\right)}^T}\right)}\mathbf{A}\right)\\ {}\kern1.12em =Tr\left({\mathbf{A}}^T{\mathbf{S}}_W\mathbf{A}\right)\end{array} $$
(6)

where S W is the within-class scatter matrix in the original sample space

$$ {\mathbf{S}}_W={\displaystyle \sum_{i=1}^C\left(\frac{N_i}{N}{\displaystyle \sum_{j=1}^{N_i}\left({\mathbf{x}}_{ij}-{\overline{\mathbf{x}}}_i\right){\left({\mathbf{x}}_{ij}-{\overline{\mathbf{x}}}_i\right)}^T}\right)} $$
(7)

According to the geometric structure in the subprofile space, we define the between-class margin in the subprofile space as

$$ {d}_M={d}_B-{d}_W=Tr\left({\mathbf{A}}^T\left({\mathbf{S}}_B-{\mathbf{S}}_W\right)\mathbf{A}\right) $$
(8)

where d M is the between-class margin.
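
To make Eqs. (2)–(8) concrete, the following NumPy sketch computes S B, S W, and the margin Tr(A^T(S B − S W)A) for a toy data set using the N i/N weights defined above. It is only an illustration of the formulas; the function and variable names are ours and do not come from the paper.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Compute S_B (Eq. 4) and S_W (Eq. 7) with N_i/N class weights.

    X      : (n, N) matrix, one n-dimensional HRRP per column
    labels : length-N array of class indices
    """
    n, N = X.shape
    x_bar = X.mean(axis=1, keepdims=True)        # overall mean
    S_B = np.zeros((n, n))
    S_W = np.zeros((n, n))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        Ni = Xc.shape[1]
        xc_bar = Xc.mean(axis=1, keepdims=True)  # class mean
        diff = xc_bar - x_bar
        S_B += (Ni / N) * (diff @ diff.T)        # Eq. (4)
        D = Xc - xc_bar
        S_W += (Ni / N) * (D @ D.T)              # Eq. (7)
    return S_B, S_W

# toy example: 3 classes of 40-dimensional "range profiles"
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(mu, 1.0, size=(40, 20)) for mu in (0.0, 1.5, 3.0)])
labels = np.repeat([0, 1, 2], 20)
S_B, S_W = scatter_matrices(X, labels)

A = np.linalg.qr(rng.normal(size=(40, 5)))[0]    # any orthonormal 40x5 basis
d_M = np.trace(A.T @ (S_B - S_W) @ A)            # between-class margin, Eq. (8)
print(d_M)
```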

The aim of OMMPS is to seek an orthogonal projection subspace by maximizing the between-class margin under orthogonality constraints, i.e., by solving the following maximization problem

$$ \begin{array}{l}{\mathbf{a}}_r= \arg \underset{\left\{{\mathbf{a}}_r\right\}}{ \max }{d}_M\\ {}\kern2em = \arg \underset{\left\{{\mathbf{a}}_r\right\}}{ \max}\left\{Tr\left({\mathbf{a}}_r^T\left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_r\right)\right\}\\ {}\kern2em = \arg \underset{\left\{{\mathbf{a}}_r\right\}}{ \max}\left\{{\mathbf{a}}_r^T\left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_r\right\}\\ {}\kern2em r=1,2,\cdots, m\end{array} $$
(9)

and

$$ {\mathbf{a}}_r^T{\mathbf{a}}_r=1 $$
(10)
$$ {\mathbf{a}}_l^T{\mathbf{a}}_r=0\kern1em l=1,2,\cdots, r-1 $$
(11)

where a r is the column vector of the matrix A, i.e., A = [a 1, a 2 … a m], which is referred to as the orthogonal maximum margin projection subspace (OMMPS). Although the objective function in Eq. (9) is similar to that of MMC [38], the objective function of MMC does not include the orthogonality constraints. Besides, MMC obtains its projection subspace from the eigenvectors corresponding to the largest eigenvalues of the matrix (S B − S W), and thus the projection vectors of MMC are not optimal in the sense of maximum margin. We solve the above optimization problem by the following steps.

To solve for a 1, we construct a Lagrangian function using Eqs. (9) and (10)

$$ J\left({\mathbf{a}}_1,{\lambda}_1\right)={\mathbf{a}}_1^T\left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_1-{\lambda}_1\left({\mathbf{a}}_1^T{\mathbf{a}}_1-1\right) $$
(12)

where λ 1 is a Lagrangian multiplier. Taking the derivative of J(a 1, λ 1) with respect to a 1 and setting the result to zero, we obtain the eigenvector equation

$$ \left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_1={\lambda}_1{\mathbf{a}}_1 $$
(13)

Let \( {\lambda}_1^{\max } \) be the largest eigenvalue of the matrix (S B  − S W ) and \( {\boldsymbol{\upmu}}_1^{\max } \) be the corresponding eigenvector; then, we set

$$ {\mathbf{a}}_1={\boldsymbol{\upmu}}_1^{\max } $$
(14)

After obtaining a 1, we combine Eqs. (9)–(11) to form the Lagrangian function

$$ \begin{array}{l}J\left({\mathbf{a}}_r,{\lambda}_1,{\lambda}_2\cdots {\lambda}_{r-1},{\lambda}_r\right)={\mathbf{a}}_r^T\left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_r-{\lambda}_1{\mathbf{a}}_1^T{\mathbf{a}}_r-{\lambda}_2{\mathbf{a}}_2^T{\mathbf{a}}_r-\cdots \\ {}\kern9em -{\lambda}_{r-1}{\mathbf{a}}_{r-1}^T{\mathbf{a}}_r-{\lambda}_r\left({\mathbf{a}}_r^T{\mathbf{a}}_r-1\right)\end{array} $$
(15)

where λ 1, λ 2, ⋯, λ r − 1, and λ r are Lagrangian multipliers. In a similar way, taking the derivative of J(a r , λ 1, λ 2 ⋯ λ r − 1, λ r ) in Eq. (15) with respect to a r and λ l (l = 1, 2, ⋯, r) and solving the resulting equations leads to

$$ \left(\mathbf{I}-\left({\mathbf{a}}_1{\mathbf{a}}_1^T+{\mathbf{a}}_2{\mathbf{a}}_2^T+\cdots +{\mathbf{a}}_{r-1}{\mathbf{a}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B-{\mathbf{S}}_W\right){\mathbf{a}}_r={\lambda}_r{\mathbf{a}}_r $$
(16)

Let \( {\lambda}_r^{\max } \) be the largest eigenvalue of Eq. (16) and \( {\boldsymbol{\upmu}}_r^{\max } \) be the corresponding eigenvector; then, we set

$$ {\mathbf{a}}_r={\boldsymbol{\upmu}}_r^{\max } $$
(17)

According to the above discussion, the basis vectors of OMMPS are solved sequentially under the orthogonality constraint imposed on the objective function. As a result, they are mutually orthogonal and optimal in the sense of maximum margin. Therefore, OMMPS has better discriminative power than MMC. The steps of feature extraction based on OMMPS are summarized in Algorithm 1.

Algorithm 1. The feature extraction based on OMMPS

Task: Solve the linear subprofile features using the training data set

$$ \mathbf{X}=\left[{\mathbf{x}}_{11}\cdots {\mathbf{x}}_{1{N}_1}\cdots {\mathbf{x}}_{C1}\cdots {\mathbf{x}}_{C{N}_C}\right] $$

Step 1) Determine the subprofile's dimensionality m

Step 2) Compute the matrices S B and S W by Eqs. (4) and (7)

Step 3) Apply SVD to the matrix (S B  − S W ) and obtain a 1 by Eq. (14)

Step 4) Apply SVD to the matrix \( \left(\mathbf{I}-\left({\mathbf{a}}_1{\mathbf{a}}_1^T+\cdots +{\mathbf{a}}_{r-1}{\mathbf{a}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B-{\mathbf{S}}_W\right) \) for r = 2 and obtain a 2 by Eq. (17)

Step 5) Set r = r + 1 and repeat Step 4 until a m is obtained; then A = [a 1  a 2 ⋯ a m ]

Step 6) Obtain the linear subprofile of an HRRP vector x using Eq. (1)
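
A minimal NumPy sketch of Algorithm 1, as we read Eqs. (13)–(17), is given below: each basis vector is taken as the dominant eigenvector of the deflated matrix (I − Σ a l a l^T)(S B − S W). It assumes S B and S W have already been computed as in Eqs. (4) and (7) (for example with the sketch after Eq. (8)); the paper labels the eigenvector step SVD, whereas here the eigenvector equation is solved directly, and all function names are ours.

```python
import numpy as np

def ommps(S_B, S_W, m):
    """Sequentially solve the m orthogonal basis vectors of OMMPS (Algorithm 1)."""
    n = S_B.shape[0]
    M = S_B - S_W
    A = np.zeros((n, m))
    P = np.eye(n)                          # deflation operator I - sum_l a_l a_l^T
    for r in range(m):
        # dominant eigenpair of the deflated matrix, cf. Eqs. (13) and (16);
        # keep the real part, since the matrix is only numerically non-symmetric
        w, V = np.linalg.eig(P @ M)
        a = np.real(V[:, np.argmax(np.real(w))])
        a /= np.linalg.norm(a)             # a_r^T a_r = 1, Eq. (10)
        A[:, r] = a
        P -= np.outer(a, a)                # enforce a_l^T a_r = 0, Eq. (11)
    return A

def extract_features(X, A):
    """Project HRRPs (columns of X) onto the subprofile space, Eq. (1)."""
    return A.T @ X

# usage: A = ommps(S_B, S_W, m=5); Y = extract_features(X, A)
```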

3 Orthogonal kernel maximum margin projection subspace (OKMMPS)

When the nonlinear variations in HRRPs are severe, the HRRPs of different classes may not be linearly separable. We introduce a nonlinear mapping to solve this problem. A nonlinear function φ is used to map x ij into a high-dimensional feature space F as

$$ {R}^n:{\mathbf{x}}_{ij}\to F:\varphi \left({\mathbf{x}}_{ij}\right) $$
(18)

where the dimensionality of the feature space F is n ′, which may be arbitrarily large or even infinite. Let A φ denote an n ′ × m φ transformation matrix, namely the orthogonal kernel maximum margin projection subspace; then φ(x ij ) is projected into the m φ-dimensional space as follows

$$ {\mathbf{y}}_{ij}^{\varphi }={\mathbf{A}}_{\varphi}^T\varphi \left({\mathbf{x}}_{ij}\right) $$
(19)

where \( {\mathbf{y}}_{ij}^{\varphi } \) is an m φ-dimensional column vector, namely the nonlinear subprofile of the HRRP vector x ij in the low-dimensional feature space. In a similar way, we can compute the between-class margin \( {d}_M^{\varphi } \) in the nonlinear subprofile space

$$ {d}_M^{\varphi }=Tr\left({\mathbf{A}}_{\varphi}^T\left({\mathbf{S}}_B^{\varphi }-{\mathbf{S}}_W^{\varphi}\right){\mathbf{A}}_{\varphi}\right) $$
(20)

where \( {\mathbf{S}}_B^{\varphi } \) and \( {\mathbf{S}}_W^{\varphi } \) are the between-class scatter matrix and within-class scatter matrix in high-dimensional feature space F, respectively.

$$ {\mathbf{S}}_B^{\varphi }={\displaystyle \sum_{i=1}^C\frac{N_i}{N}\left({\overline{\mathbf{x}}}_i^{\varphi }-{\overline{\mathbf{x}}}^{\varphi}\right){\left({\overline{\mathbf{x}}}_i^{\varphi }-{\overline{\mathbf{x}}}^{\varphi}\right)}^T} $$
(21)
$$ {\mathbf{S}}_W^{\varphi }={\displaystyle \sum_{i=1}^C\left(\frac{N_i}{N}{\displaystyle \sum_{j=1}^{N_i}\left({\mathbf{x}}_{ij}^{\varphi }-{\overline{\mathbf{x}}}_i^{\varphi}\right){\left({\mathbf{x}}_{ij}^{\varphi }-{\overline{\mathbf{x}}}_i^{\varphi}\right)}^T}\right)} $$
(22)

where \( {\mathbf{x}}_{ij}^{\varphi }=\varphi \left({\mathbf{x}}_{ij}\right) \), \( {\overline{\mathbf{x}}}_i^{\varphi }=\left(1/{N}_i\right){\displaystyle \sum_{j=1}^{N_i}\varphi \left({\mathbf{x}}_{ij}\right)} \), and \( {\overline{\mathbf{x}}}^{\varphi }=\left(1/N\right){\displaystyle \sum_{i=1}^C}{\displaystyle \sum_{j=1}^{N_i}\varphi \left({\mathbf{x}}_{ij}\right)} \). The orthogonal kernel maximum margin projection subspace (OKMMPS) is then obtained by solving the following constrained maximization problem

$$ \begin{array}{l}{\mathbf{a}}_r^{\varphi }= \arg \underset{\left\{{\mathbf{a}}_r^{\varphi}\right\}}{ \max }{d}_M^{\varphi}\\ {}\kern1em = \arg \underset{\left\{{\mathbf{a}}_r^{\varphi}\right\}}{ \max}\left\{Tr\left({\left({\mathbf{a}}_r^{\varphi}\right)}^T\left({\mathbf{S}}_B^{\varphi }-{\mathbf{S}}_W^{\varphi}\right){\mathbf{a}}_r^{\varphi}\right)\right\}\\ {}\kern1em = \arg \underset{\left\{{\mathbf{a}}_r^{\varphi}\right\}}{ \max}\left\{{\left({\mathbf{a}}_r^{\varphi}\right)}^T\left({\mathbf{S}}_B^{\varphi }-{\mathbf{S}}_W^{\varphi}\right){\mathbf{a}}_r^{\varphi}\right\}\\ {}\kern2em r=1,2,\cdots, m\end{array} $$
(23)

and

$$ {\left({\mathbf{a}}_r^{\varphi}\right)}^T{\mathbf{a}}_r^{\varphi }=1 $$
(24)
$$ {\left({\mathbf{a}}_l^{\varphi}\right)}^T{\mathbf{a}}_r^{\varphi }=0\kern1em l=1,2,\cdots, r-1 $$
(25)

where \( {\mathbf{a}}_r^{\varphi } \) is the column vector of the matrix A φ , i.e., \( {\mathbf{A}}_{\varphi }=\left[{\mathbf{a}}_1^{\varphi}\;{\mathbf{a}}_2^{\varphi}\cdots {\mathbf{a}}_m^{\varphi}\right] \), namely the OKMMPS. Because the nonlinear mapping φ(⋅) is not defined explicitly, Eq. (23) cannot be solved for OKMMPS directly. We use the kernel trick to solve this problem.

Let

$$ {\mathbf{a}}_r^{\varphi }={\displaystyle \sum_{i=1}^C}{\displaystyle \sum_{j=1}^{N_i}}{\alpha}_{rij}\varphi \left({\mathbf{x}}_{ij}\right) $$
(26)

and

$$ k\left({\mathbf{x}}_{ij},{\mathbf{x}}_{lk}\right)={\varphi}^T\left({\mathbf{x}}_{ij}\right)\varphi \left({\mathbf{x}}_{lk}\right) $$
(27)

where α rij is a coefficient, x ij and x lk are n-dimensional column vectors, and k(x ij , x lk ) is a kernel function. Substituting Eqs. (26) and (27) into Eqs. (23)–(25), it follows that

$$ \begin{array}{l}{\boldsymbol{\upalpha}}_r= \arg \underset{\left\{{\boldsymbol{\upalpha}}_r\right\}}{ \max}\left\{{\boldsymbol{\upalpha}}_r^T\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right){\boldsymbol{\upalpha}}_r\right\}\\ {}\kern2em r=1,2,\cdots, m\end{array} $$
(28)

and

$$ {\boldsymbol{\upalpha}}_r^T\mathbf{K}{\boldsymbol{\upalpha}}_r=1 $$
(29)
$$ {\boldsymbol{\upalpha}}_l^T\mathbf{K}{\boldsymbol{\upalpha}}_r=0\kern1em l=1,2,\cdots, r-1 $$
(30)

where

$$ {\boldsymbol{\upalpha}}_r={\left[{\alpha}_{r11}\kern0.5em {\alpha}_{r12}\kern0.5em \cdots \kern0.5em {\alpha}_{rC{N}_C}\right]}^T $$
(31)
$$ {\mathbf{S}}_B^{\alpha }={\displaystyle \sum_{i=1}^C\frac{N_i}{N}}\left({\mathbf{P}}_i-\mathbf{P}\right){\left({\mathbf{P}}_i-\mathbf{P}\right)}^T $$
(32)
$$ {\mathbf{S}}_W^{\alpha }={\displaystyle \sum_{i=1}^C\frac{N_i}{N}}{\displaystyle \sum_{j=1}^{N_i}}\left({\left(\mathbf{K}\right)}_{ij}-{\mathbf{P}}_i\right){\left({\left(\mathbf{K}\right)}_{ij}-{\mathbf{P}}_i\right)}^T $$
(33)

where

$$ {\left({\mathbf{P}}_i\right)}_{lj}=\frac{1}{N_i}{\displaystyle \sum_{k=1}^{N_i}}k\left({\mathbf{x}}_{lj},{\mathbf{x}}_{ik}\right)\kern1em i,l=1,2,\cdots, C\kern1.5em j=1,2,\cdots, {N}_l $$
(34)
$$ \begin{array}{l}{\left({\left(\mathbf{K}\right)}_{ij}\right)}_{lk}=k\left({\mathbf{x}}_{lk},{\mathbf{x}}_{ij}\right)\kern1em \\ {}i,l=1,2,\cdots, C\kern1.5em \\ {}j=1,2,\cdots, {N}_i\kern1em k=1,2,\cdots, {N}_l\end{array} $$
(35)
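
All quantities in Eqs. (32)–(35) can be read off the N × N Gram matrix of the training set: P i is the mean of the kernel columns belonging to class i, the overall mean kernel column plays the role of P in Eq. (32) (our reading of the kernel counterpart of \( \overline{\mathbf{x}}^{\varphi} \)), and (K) ij is simply the kernel column of sample (i, j). The following sketch builds \( {\mathbf{S}}_B^{\alpha } \) and \( {\mathbf{S}}_W^{\alpha } \) this way from a precomputed Gram matrix; the notation is ours, not the paper's.

```python
import numpy as np

def kernel_scatter(K, labels):
    """Kernel scatter matrices S_B^alpha (Eq. 32) and S_W^alpha (Eq. 33).

    K      : (N, N) Gram matrix over the training set, K[a, b] = k(x_a, x_b)
    labels : length-N array of class indices for the training samples
    """
    N = K.shape[0]
    P = K.mean(axis=1, keepdims=True)                # overall mean kernel column
    S_B = np.zeros((N, N))
    S_W = np.zeros((N, N))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        Ni = idx.size
        P_i = K[:, idx].mean(axis=1, keepdims=True)  # Eq. (34)
        dB = P_i - P
        S_B += (Ni / N) * (dB @ dB.T)                # Eq. (32)
        D = K[:, idx] - P_i                          # columns are (K)_ij - P_i, cf. Eq. (35)
        S_W += (Ni / N) * (D @ D.T)                  # Eq. (33)
    return S_B, S_W
```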

Combining Eqs. (28) and (29), we construct the following Lagrangian function to obtain α 1

$$ J\left({\boldsymbol{\upalpha}}_1,{\gamma}_1\right)={\boldsymbol{\upalpha}}_1^T\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right){\boldsymbol{\upalpha}}_1-{\gamma}_1\left({\boldsymbol{\upalpha}}_1^T\mathbf{K}{\boldsymbol{\upalpha}}_1-1\right) $$
(36)

where γ 1 is a Lagrangian multiplier. Taking the derivative of J(α 1, γ 1) with respect to α 1 and setting the result to zero, we obtain the generalized eigenvector equation

$$ {\mathbf{K}}^{-1}\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right){\boldsymbol{\upalpha}}_1={\gamma}_1{\boldsymbol{\upalpha}}_1 $$
(37)

Similar to the observation in Section 2, we set

$$ {\boldsymbol{\upalpha}}_1={\boldsymbol{\upmu}}_1^{\alpha, \max } $$
(38)

where \( {\boldsymbol{\upmu}}_1^{\alpha, \max } \) is the eigenvector corresponding to the largest eigenvalue \( {\gamma}_1^{\max } \) of the matrix \( {\mathbf{K}}^{-1}\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right) \).

Combining Eqs. (28), (29), and (30), the Lagrangian function used to solve for α r (2 ≤ r ≤ m) is constructed as

$$ \begin{array}{l}J\left({\boldsymbol{\upalpha}}_r,{\gamma}_1,{\gamma}_2\cdots {\gamma}_{r-1},{\gamma}_r\right)={\boldsymbol{\upalpha}}_r^T\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right){\boldsymbol{\upalpha}}_r-{\gamma}_1{\boldsymbol{\upalpha}}_1^T\mathbf{K}{\boldsymbol{\upalpha}}_r-{\gamma}_2{\boldsymbol{\upalpha}}_2^T\mathbf{K}{\boldsymbol{\upalpha}}_r-\cdots \\ {}\kern9em -{\gamma}_{r-1}{\boldsymbol{\upalpha}}_{r-1}^T\mathbf{K}{\boldsymbol{\upalpha}}_r-{\gamma}_r\left({\boldsymbol{\upalpha}}_r^T\mathbf{K}{\boldsymbol{\upalpha}}_r-1\right)\end{array} $$
(39)

where γ 1, γ 2, ⋯, γ r − 1, and γ r are Lagrangian multipliers. In a similar way, we can obtain the following eigenvector equation

$$ {\mathbf{K}}^{-1}\left(\mathbf{I}-\left(\mathbf{K}{\boldsymbol{\upalpha}}_1{\boldsymbol{\upalpha}}_1^T+\mathbf{K}{\boldsymbol{\upalpha}}_2{\boldsymbol{\upalpha}}_2^T+\cdots +\mathbf{K}{\boldsymbol{\upalpha}}_{r-1}{\boldsymbol{\upalpha}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right){\boldsymbol{\upalpha}}_r={\gamma}_r{\boldsymbol{\upalpha}}_r $$
(40)

Let \( {\gamma}_r^{\max } \) be the largest eigenvalue of Eq. (40) and \( {\boldsymbol{\upmu}}_r^{\alpha, \max } \) be the corresponding eigenvector; then, we set

$$ {\boldsymbol{\upalpha}}_r={\boldsymbol{\upmu}}_r^{\alpha, \max } $$
(41)

After obtaining α 1, α 2, ⋯, α m , φ(x) is projected into the nonlinear subprofile space according to Eq. (19); it follows that

$$ {\mathbf{y}}^{\varphi }={\left[{\boldsymbol{\upalpha}}_1\kern1em {\boldsymbol{\upalpha}}_2\cdots {\boldsymbol{\upalpha}}_m\right]}^T\left[\begin{array}{c}\hfill k\left({\mathbf{x}}_{11},\mathbf{x}\right)\hfill \\ {}\hfill k\left({\mathbf{x}}_{12},\mathbf{x}\right)\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill k\left({\mathbf{x}}_{C{N}_C},\mathbf{x}\right)\hfill \end{array}\right] $$
(42)

where y φ is the corresponding nonlinear subprofile of x. The steps of feature extraction based on OKMMPS are summarized in Algorithm 2.

Algorithm 2. The nonlinear feature extraction based on OKMMPS

Task: Solve the nonlinear subprofile features using the training data set

$$ \mathbf{X}=\left[{\mathbf{x}}_{11}\cdots {\mathbf{x}}_{1{N}_1}\cdots {\mathbf{x}}_{C1}\cdots {\mathbf{x}}_{C{N}_C}\right] $$

Step 1) Determine the subprofile's dimensionality m

Step 2) Select the kernel function

Step 3) Compute the matrices K, \( {\mathbf{S}}_B^{\alpha } \), and \( {\mathbf{S}}_W^{\alpha } \) by Eqs. (27), (32), and (33)

Step 4) Apply SVD to the matrix \( {\mathbf{K}}^{-1}\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right) \) and obtain α 1 by Eq. (38)

Step 5) Apply SVD to the matrix \( {\mathbf{K}}^{-1}\left(\mathbf{I}-\left(\mathbf{K}{\boldsymbol{\upalpha}}_1{\boldsymbol{\upalpha}}_1^T+\cdots +\mathbf{K}{\boldsymbol{\upalpha}}_{r-1}{\boldsymbol{\upalpha}}_{r-1}^T\right)\right)\left({\mathbf{S}}_B^{\alpha }-{\mathbf{S}}_W^{\alpha}\right) \) for r = 2 and obtain α 2 by Eq. (41)

Step 6) Set r = r + 1 and repeat Step 5 until α m is obtained

Step 7) Obtain the nonlinear subprofile of an HRRP vector x using Eq. (42)
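
Putting Eqs. (37)–(42) together, a compact NumPy sketch of Algorithm 2 could look as follows. It assumes the kernel scatter matrices \( {\mathbf{S}}_B^{\alpha } \) and \( {\mathbf{S}}_W^{\alpha } \) are built as in the sketch after Eq. (35), adds a small ridge term before inverting K (a common regularization that the paper does not discuss), and uses our own function names rather than anything defined in the paper.

```python
import numpy as np

def okmmps(K, S_B_a, S_W_a, m, ridge=1e-8):
    """Sequentially solve the expansion coefficients alpha_1, ..., alpha_m (Algorithm 2).

    K            : (N, N) Gram matrix of the training HRRPs, Eq. (27)
    S_B_a, S_W_a : kernel scatter matrices, Eqs. (32)-(33)
    """
    N = K.shape[0]
    K_inv = np.linalg.inv(K + ridge * np.eye(N))   # regularized inverse of K
    M = S_B_a - S_W_a
    alphas = np.zeros((N, m))
    P = np.eye(N)                                  # deflation: I - sum_l K alpha_l alpha_l^T
    for r in range(m):
        w, V = np.linalg.eig(K_inv @ P @ M)        # Eqs. (37) and (40)
        a = np.real(V[:, np.argmax(np.real(w))])
        a /= np.sqrt(a @ K @ a)                    # alpha_r^T K alpha_r = 1, Eq. (29)
        alphas[:, r] = a
        P -= K @ np.outer(a, a)                    # enforce alpha_l^T K alpha_r = 0, Eq. (30)
    return alphas

def project(alphas, K_train_test):
    """Nonlinear subprofiles y^phi of test HRRPs, Eq. (42).

    K_train_test : (N, N_test) matrix with entries k(x_train_a, x_test_b)
    """
    return alphas.T @ K_train_test
```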

4 Experimental results

To show the effectiveness of the proposed method, we perform extensive experiments on measured data from three kinds of airplanes.

4.1 Data description

The data used in the experiments are HRRPs measured from three airplanes: the An-26, Jiang, and Yark-42. For each airplane, 240 HRRPs over a wide range of aspects are adopted; one quarter of the HRRPs are used for training and the rest for testing. Before the experiments, each HRRP is preprocessed by energy normalization. The HRRPs of the three airplanes are illustrated in Fig. 1.

Fig. 1 The HRRPs of the three airplanes. a An-26. b Jiang. c Yark-42

4.2 The dimensionality of subspace

In this experiment, we consider the effect of the subspace dimensionality on recognition performance. The training data and testing data are as described above. The subspace dimensionality is varied from 1 to 10. The nearest-neighbor classifier is applied for classification. Two kernels are used, i.e., the radial basis function kernel (RBFK)

$$ k\left(\mathbf{x},\mathbf{y}\right)={e}^{-\frac{{\left\Vert \mathbf{x}-\mathbf{y}\right\Vert}^2}{\sigma^2}} $$
(43)

and polynomial function kernel (PFK)

$$ k\left(\mathbf{x},\mathbf{y}\right)={\left(\mathbf{x}\cdot \mathbf{y}+1\right)}^d $$
(44)

where the kernel parameters σ and d are set by the cross-validation method.
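
For reference, the two kernels in Eqs. (43) and (44) can be written directly as below; this is a plain NumPy sketch with our own function names, not code from the paper.

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """Radial basis function kernel, Eq. (43)."""
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def poly_kernel(x, y, d):
    """Polynomial function kernel, Eq. (44)."""
    return (np.dot(x, y) + 1.0) ** d
```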

Figure 2 shows the average recognition rates of the two methods (MMC [38] and OMMPS) versus the subspace dimensionality. From Fig. 2a, it can be seen that the average recognition rate rises sharply as the dimensionality increases from 1 to 5 and remains approximately the same above 5. Thus, the dimensionality of MMC is set to 5 in the following experiments. From Fig. 2b, the appropriate dimensionality of OMMPS can also be set to 5. In a similar way, the dimensionalities of OKMMPS with the RBFK and the PFK are set to 50 and 67, respectively.

Fig. 2 The average recognition rates of two methods (MMC and OMMPS) versus the dimensionality of the subspace. a MMC. b OMMPS

4.3 Kernel parameters

In this experiment, we set the parameters for the kernel methods OKMMPS, KPCA [22], and KFDA [23] by cross-validation. For the radial basis function kernel, the parameter σ is set to 5, 10, 20, 30, 40, and 50. For the polynomial function kernel, the parameter d varies from 1 to 10. The training data and test data are the same as in the previous experiments, and the nearest-neighbor classifier is applied for classification. The experiment is run for each parameter value. Tables 1 and 2 list the average recognition rates, together with the corresponding dimensionalities, of the three kernel methods for the different parameter values. As can be seen from Tables 1 and 2, OKMMPS achieves its best recognition results with the radial basis function kernel at σ = 20 and with the polynomial function kernel at d = 1. The parameters of the other kernel methods are chosen in a similar way; the best kernel parameters for all the methods are listed in Table 3, and a minimal parameter-selection sketch is given after Table 3. In addition, the methods with the radial basis function kernel achieve higher recognition rates than those with the polynomial function kernel, which shows that the radial basis function kernel better represents the nonlinearity appearing in the HRRP samples for these data.

Table 1 The average recognition rates along with the dimensionalities using radial basis function kernel (%)
Table 2 The average recognition rates using polynomial function kernel (%)
Table 3 The best kernel parameters for kernel-based methods
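
The parameter selection described above can be reproduced with a simple cross-validated grid search scored by the nearest-neighbor classifier. The sketch below is ours, not the paper's procedure verbatim: it uses scikit-learn's KNeighborsClassifier and cross_val_score, and `train_features_for` is a hypothetical hook that would return the OKMMPS training features for a given kernel width σ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def select_sigma(train_features_for, y_train, sigmas=(5, 10, 20, 30, 40, 50)):
    """Pick the RBF width by cross-validated 1-NN accuracy.

    train_features_for : callable sigma -> (N, m) OKMMPS feature matrix
                         (hypothetical hook into the pipeline sketched earlier)
    y_train            : length-N class labels of the training HRRPs
    """
    knn = KNeighborsClassifier(n_neighbors=1)
    scores = []
    for s in sigmas:
        F = train_features_for(s)                      # features for this sigma
        scores.append(cross_val_score(knn, F, y_train, cv=5).mean())
    return sigmas[int(np.argmax(scores))], scores
```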

4.4 The variation of target aspect

The HRRPs change considerably when the target aspect varies by a few degrees, which increases the difficulty of classifying the targets. In this experiment, we consider the robustness of MMC, OMMPS, and OKMMPS to the variation of target aspect. The training data are the same as in the previous experiments. Three subsets of testing data are selected, containing 300, 420, and 540 HRRPs, respectively; for each class, 100, 140, and 180 HRRPs are chosen for the three subsets, respectively. Obviously, the variation of target aspect becomes larger as the number of HRRPs increases. The radial basis function kernel is used, and the parameters of the methods are set according to the above experiments: the dimensionalities of MMC, OMMPS, and OKMMPS are 5, 5, and 50, respectively, and the radial basis function kernel parameter for OKMMPS is set to 20. The nearest-neighbor classifier is applied for classification. The recognition results of the three methods for the three subsets of testing data are illustrated in Fig. 3. Figure 3 shows that the average recognition rates decrease when the number of testing samples per class is increased from 100 to 180, i.e., when the variation of target aspect becomes large. However, the recognition rates of OMMPS and OKMMPS remain better than those of MMC for all three subsets, which means that OMMPS and OKMMPS are more robust to aspect variation than MMC. The reason is that the basis vectors of OMMPS and OKMMPS are obtained by solving the optimization problem sequentially and are optimal in the sense of maximum margin. Thus, high classification accuracy can be obtained even when the within-class scatter is large due to large changes in the HRRPs.

Fig. 3 The average recognition rates of three methods versus the number of testing samples

4.5 Performance comparison

To further show the effectiveness of the proposed method, we evaluate the performance of OMMPS and OKMMPS against MMC [38], PCA [20], LDA [21], KPCA [22], and KFDA [23] under different SNRs. The SNR is set to 5, 10, 15, 20, 25, and 30 dB. For each SNR, the recognition results are averaged over 50 runs. The subspace dimensionalities for MMC, OMMPS, OKMMPS, PCA, LDA, KPCA, and KFDA are 5, 5, 50, 26, 2, 10, and 2, respectively. The radial basis function kernel is used. According to the experimental results of Section 4.3, the kernel parameters for OKMMPS, KPCA, and KFDA are set to 20, 40, and 10, respectively. The nearest-neighbor classifier is applied for classification. Figure 4 shows the average recognition rates of the seven methods versus SNR. Several interesting observations can be made from Fig. 4.

Fig. 4 The average recognition rates of seven methods versus SNR

(1) When the SNR is above 15 dB, the kernel methods (OKMMPS, KFDA, and KPCA) outperform the corresponding linear methods (OMMPS, LDA, and PCA). At SNR = 15 dB, the average recognition rates of OKMMPS, KFDA, KPCA, OMMPS, LDA, and PCA are 86.52, 81, 79.67, 85.33, 80, and 79.33 %, respectively. This shows that the kernel methods are more robust to noise than the linear methods. The reason is that the nonlinearity in the HRRPs becomes pronounced under noise, and the kernel methods can represent this nonlinear variation well through the nonlinear mapping; thus, the separability between the different classes is improved.

(2) MMC has better recognition performance than LDA at all SNR levels when the number of training samples is much smaller than the dimensionality of the HRRP. At SNR = 15 dB, the average recognition rates of MMC and LDA are 83.33 and 80 %, respectively. This demonstrates that MMC has better discriminative power than LDA for small training sets. The reason is that LDA suffers from the small sample size (SSS) problem in this case, whereas MMC does not require inverting the within-class scatter matrix and thus avoids the SSS problem. As a result, the features extracted by MMC are more robust.

(3) The discriminative ability of OMMPS and OKMMPS is superior to that of MMC over the whole SNR range from 5 to 30 dB. At SNR = 15 dB, the average recognition rates of OMMPS, OKMMPS, and MMC are 85.23, 86.42, and 83.33 %, respectively. The reason is that the basis vectors of OMMPS and OKMMPS are obtained by solving the optimization problem sequentially and are optimal in the sense of maximum margin. In particular, the basis vectors of OKMMPS remain orthogonal in the high-dimensional feature space. This means that the features extracted by OMMPS and OKMMPS are more discriminative than those extracted by MMC.

5 Conclusions

In this paper, we have proposed a novel radar target recognition method using HRRP, namely the orthogonal maximum margin projection subspace (OMMPS). Its kernel version, called the orthogonal kernel maximum margin projection subspace (OKMMPS), is also derived. The proposed method maximizes the between-class margin by increasing the between-class scatter distance and reducing the within-class scatter distance simultaneously. The experimental results on the measured data of three kinds of airplanes show that

(1) OMMPS and OKMMPS can still obtain an appropriate subspace dimensionality for high-dimensional HRRP vectors with only three classes.

(2) The radial basis function kernel represents the nonlinearity appearing in the HRRP samples better than the polynomial function kernel.

(3) OMMPS and OKMMPS are more robust to the variation of target aspect than the MMC method.

(4) OMMPS and OKMMPS achieve higher recognition performance than the other methods.

Abbreviations

HRRP: high-resolution range profile

MMC: maximum margin criterion

OKMMPS: orthogonal kernel maximum margin projection subspace

OMMPS: orthogonal maximum margin projection subspace

References

  1. HJ Li, SH Yang, Using range profiles as feature vectors to identify aerospace objects. IEEE Trans. Antennas. Propag. 41(March), 261–268 (1993)

  2. KB Eom, R Chellappa, Noncooperative target classification using hierarchical modeling of high-range resolution radar signatures. IEEE Trans. Signal Process. 45(September), 2318–2326 (1997)

  3. SP Jacobs, JA Sullivan, Automatic target recognition using sequences of high range resolution radar range profiles. IEEE Trans. Aerosp. Electron. Syst 36, 364–381 (2000)

  4. A Zyweck, RE Bogner, Radar target classification of commercial aircraft. IEEE Trans. Aerosp. Electron. Syst. 32(February), 598–606 (1996)

  5. AK Shaw, R Vasgist, R Williams, HRR-ATR using eigen-templates with observation in unknown target scenario. Proc. SPIE 4053, 467–478 (2000)

  6. BM Huther, SC Gustafson, RP Broussad, Wavelet preprocessing for high range resolution radar classification. IEEE. Trans. Aerosp. Electron. Syst 37, 1321–1331 (2001)

  7. R Wu, Q Gao, J Liu, H Gu, ATR scheme based on 1-D HRR profiles. Electronics Letters. 38(December), 1586–1587 (2002)

  8. KT Kim, DK Seo, HT Kim, Efficient radar target recognition using the MUSIC algorithm and invariant feature. IEEE Trans. Antennas. Propag. 50(March), 325–337 (2002)

  9. J Zwart, R Heiden, S Gelsema, F Groen, Fast translation invariant classification of HRR range profiles in a zero phase representation. IEE Proc. Radar Sonar Navig. 150(June), 411–418 (2003)

  10. Y Shi, XD Zhang, A Gabor atom network for signal classification with application in radar target recognition. IEEE Trans. Signal. Process. 49(December), 2994–3004 (2001)

  11. SK Wong, Non-cooperative target recognition in the frequency domain. IEE Proc. Radar Sonar Navig. 151(February), 77–84 (2004)

  12. DE Nelson, JA Starzyk, DD Ensley, Iterated wavelet transformation and discrimination for HRR radar target recognition. IEEE Trans. System Man and Cybernetics-part: system and humans 33(January), 52–57 (2003)

  13. RA Mitchell, JJ Westerkamp, Robust statistical feature based aircraft identification. IEEE Trans. Aerosp. Electron. Syst. 35(March), 1077–1093 (1999)

  14. XJ Liao, P Runkle, L Carin, Identification of ground targets from sequential high-range-resolution radar signatures. IEEE Trans. Aerosp. Electron. Syst. 38(April), 1230–1242 (2002)

  15. CY Wang, JL Xie, The T-mixture model approach for radar HRRP target recognition. Int. J. Comput. Electr. Eng. 5(5), 500–503 (2013)

  16. M Li, GJ Zhou, B Zhao, TF Quan, Sparse representation denoising for radar high resolution range profiling. Int. J. Antennas Propag. 2014(3), 1–8 (2014)

  17. L Du, HW Liu and Z Bao, Radar HRRP statistical recognition: parametric model and model selection, IEEE Transactions on Signal Processing, 56 (5), 1931–1944 (2008).

  18. L Shi, PH Wang, HW Liu, L Xu, Z Bao, Radar HRRP statistical recognition with local factor analysis by automatic Bayesian Ying-Yang harmony learning. IEEE Trans. Signal Processing 59(2), 610–617 (2011)

  19. JS Fu, XH Deng, WL Yang, Radar HRRP recognition based on discriminant information analysis. WSEAS Trans. Inf. Sci. Appl. 8(4), 185–201 (2011)

  20. LM Novak and GJ Owirka, Radar target recognition using an eigen-image approach. IEEE Int. Radar Conf., 129-131 (1994).

  21. BY Liu, WL Yang, Radar target recognition using canonical transformation to extract features. Proc. SPIE 3545, 368–371 (1998)

  22. B Chen, HW Liu and Z Bao, PCA and kernel PCA for radar high range resolution profiles recognition. 2005 IEEE International Radar conference, Virginia, USA, 2005, pp. 528–533.

  23. S Mika, G Ratsch, J Weston, B Scholkopf and KR Muler, Fisher discriminant analysis with kernels. IEEE International Workshop on Neural networks for signal processing, Wisconsin, USA, 1999, pp. 41–48.

  24. XF He, P Niyogi, Locality preserving projections (Proc. Conf. Advances in Neural Information Processing System 16, Vancouver, Canada, 2003)

  25. HT Chen, HW Chang, TL Liu, Local discriminant embedding and its variants. IEEE Computer Society Conference on Computer Vision & Pattern Recognition, San Diego, California, USA, 2005, 2(2):846–853.

  26. D Cai, XF He, JW Han, HJ Zhang, Orthogonal Laplacianfaces for face recognition. IEEE Trans. Image Process. 15(11), 3608–3614 (2006)

  27. L Zhu, SN Zhu, Face recognition based on orthogonal discriminant locality preserving projections. Neurocomputing 70, 1543–1546 (2007)

  28. SJ Wang, HL Chen, XJ Peng, CG Zhou, Exponential locality preserving projections for small sample size problem. Neurocomputing 74, 3654–3662 (2011)

  29. S Yan, D Xu, B Zhang, H Zhang, Q Yang, S Lin, Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 40–51 (2007)

  30. M Sugiyama, Local fisher discriminant analysis for supervised dimensionality reduction, Proceedings of the International Conference on Machine Learning (ICML), Las Vegas, Nevada, USA, 2006, pp. 905–912.

  31. D Cai, X He, K Zhou, J Han, H Bao, Locality sensitive discriminant analysis, Proceedings of the 20th International Joint Conference Artificial Intelligence (IJCAI), Hyderabad, India, 2007, pp. 708–713.

  32. T Zhang, D Tao, X Li, J Yang, Patch alignment for dimensionality reduction. IEEE Trans. Knowl. Data Eng. 21(9), 1299–1313 (2009)

  33. B Li, C Zheng, DS Huang, Locally linear discriminant embedding: an efficient method for face recognition. Pattern Recog. 41(12), 3813–3821 (2008)

  34. Y Chen, J Zhang, D Cai, W Liu, X He, Nonnegative local coordinate factorization for image representation. IEEE Trans. Image Process. 22(3), 969–979 (2013)

  35. Q Gao, J Ma, H Zhang, X Gao, Y Liu, Stable orthogonal local discriminant embedding for linear dimensionality reduction. IEEE Trans. Image Process. 22(7), 2521–2530 (2013)

  36. C Hou, C Zhang, Y Wu, Y Jiao, Stable local dimensionality reduction approaches. Pattern Recog. 42(9), 2054–2066 (2009)

  37. A Kocsor, K Kovacs, C Szepesvari, Margin maximizing discriminant analysis. Proc. 15th Eur. Conf. Mach. Learn. 32(1), 227–238 (2004)

  38. H Li, T Jiang, K Zhang, Efficient and robust feature extraction by maximum margin criterion. IEEE Trans. Neural Netw. 17, 157–165 (2006)

Acknowledgements

The authors would like to thank the radar laboratory of University of Electronic Science and Technology of China (UESTC) for providing the measured data. The authors would also like to thank Prof. Qilian Liang of wireless communication Lab in University of Texas at Arlington (UTA) for his help and advice.

Author information

Corresponding author

Correspondence to Daiying Zhou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhou, D. Orthogonal maximum margin projection subspace for radar target HRRP recognition. J Wireless Com Network 2016, 72 (2016). https://doi.org/10.1186/s13638-016-0571-y
