 Research
 Open Access
Compressive sampling-based CFO estimation with exploited features
EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 240 (2016)
Abstract
Based on the compressed sensing (CS) technique, the carrier frequency offset (CFO) is estimated in compressive sampling scenarios. We first confirm the compressibility of the estimation metric vector (EMV) of conventional maximum likelihood (ML)-based CFO estimation, and thus conduct compressive sampling at the receiver. By exploiting the EMV features, introducing a circle cluster, and proposing a novel coherence pattern, we then form a feature-aided weight coherence (FAWC) optimization to optimize the measurement matrix. Besides the proposed FAWC optimization, by referencing the compressive sampling matching pursuit (CoSaMP) algorithm and exploiting the EMV features, a metric-feature-based CoSaMP (MFB-CoSaMP) algorithm is proposed to improve the EMV reconstruction accuracy and to reduce the computational complexity of classic CoSaMP. With the reconstructed EMV, we finally develop a CFO estimation method to estimate the coarse and fine CFO. Relative to weighted coherence minimization (WCM) and classic CoSaMP, elaborate performance evaluations show that FAWC and MFB-CoSaMP can independently or jointly improve the accuracy of CFO estimation (including coarse and fine CFO estimation), and the improvement is robust to system parameters, e.g., the sparsity level, the number of measurements, etc. Furthermore, the mean squared error (MSE) of the proposed CFO estimation method can almost reach its Cramér-Rao lower bound (CRLB) when a relatively large number of measurements, a relatively high carrier-to-noise ratio (CNR), and a reasonable length of observed signal are available.
Introduction
The carrier frequency offset (CFO), one of the well-understood radio frequency (RF) impairments, may result in severe performance degradation at the receiver [1, 2]. To improve receiver performance, CFO estimation has been studied comprehensively. In [3–5], CFO estimation for additive white Gaussian noise (AWGN) channels, flat fading channels, and frequency-selective fading channels is addressed, respectively. Recently, the compressive sensing (CS) approach [6, 7], which enables sub-Nyquist sampling of signals that are sparse or compressible in some domain, has been employed to reduce system complexity and to save power significantly. By exploiting the sparsity profile, CS-based CFO estimation is presented in [8, 9] for the multiuser uplink. Compared with CFO estimation that does not utilize CS, the estimation accuracy is improved owing to the a priori information of the sparse approximation. Although various CFO estimation methods have been proposed with and without CS, the sampling rate of the existing methods, e.g., [3–5, 8, 9], needs to be at least the Nyquist rate, resulting in excessive power consumption and design difficulty for the analog-to-digital converter (ADC) when a high sampling rate is required [10, 11].
To reduce the sampling rate, CS is introduced into the synchronization problem in [12–14]. In [12], a fast, rough estimate of pseudo-noise (PN) code phase and Doppler frequency with a reduced number of parallel correlators (i.e., compressed correlators) is proposed, where the sparse expression is based on autocorrelation. For binary phase-shift keying (BPSK) signals and binary offset carrier (BOC) modulated signals, the two-dimensional compressed correlator (TDCC) technique for roughly estimating the PN code phase and Doppler frequency is introduced in [13] and [14], respectively. Based on the observation that a hypothesis test for a code phase and Doppler frequency adjacent to the true hypothesis can yield a non-negligible amount of signal energy, the compressed-correlator techniques in [12–14] test a compressed hypothesis and coherently combine the signal energy of neighboring hypotheses. Although the number of correlators is reduced, the compressed-correlator technique can only roughly estimate the Doppler frequency. Furthermore, the features of the estimation metric vector (EMV) of CFO estimation are not exploited for compressive sampling and signal reconstruction. Thus, CS-based CFO estimation, which includes coarse and fine estimation, is not intensively investigated in [12–14].
By exploiting the EMV features, a novel CS-based CFO estimation is proposed in this paper. Compressive sampling is introduced into maximum likelihood (ML)-based CFO estimation to reduce the sampling rate while preventing the estimation performance from deteriorating significantly. We briefly describe some critical points of the proposed CS-based CFO estimation as follows.

∙
Feasibility analysis of compressive sampling: based on the compressibility of the EMV in ML-based CFO estimation [15], we first verify that the received signal can be acquired with compressive sampling, so that the ADC requirement can be relaxed.

∙
Optimization of measurement matrix: in compressive sampling, the measurement matrix directly determines whether the reconstruction can be realized successfully [6, 7]. The design of efficient measurement matrices is thus the core problem for a higher probability of reconstruction. In [16], Baraniuk et al. proved that many random matrices are good measurement matrices, and optimized design methods can be found in the existing literature, e.g., [17–22]. These existing methods, however, are not specially designed for CFO estimation, and thus cannot achieve optimized performance (e.g., improved reconstruction accuracy for EMV recovery). To obtain a more suitable measurement matrix, we exploit the EMV features. First, the EMV is expressed as a circle cluster to reduce the block sparsity to one (i.e., the significant amplitudes gather in one sub-block when the EMV is divided into multiple sub-blocks). With this special block-sparse structure, a novel coherence pattern is proposed to fully utilize the structure information of the circle cluster. Then, a feature-aided weight coherence (FAWC) optimization, based on the weighted coherence minimization (WCM) algorithm [22], is developed to optimize the measurement matrix without increasing the computational complexity.

∙
Reconstruction algorithm: the reconstruction algorithm is another critical factor for successful reconstruction. Many recovery algorithms are available for signals acquired by compressive sampling at the receiver. Among them, we mainly reference compressive sampling matching pursuit (CoSaMP) [23, 24] due to its high reconstruction accuracy and excellent robustness to noise. Based on the CoSaMP algorithm and the EMV features, a metric-feature-based CoSaMP (MFB-CoSaMP) algorithm is proposed to improve the EMV reconstruction accuracy and to reduce the computational complexity of classic CoSaMP.

∙
CFO estimation: with the reconstructed EMV, we implement CFO estimation using a two-step procedure that includes coarse and fine CFO estimation. In the coarse CFO estimation, the likelihood function is constructed from the reconstructed EMV, and its maximum is sought. For the fine CFO estimation, the received signal vector at the Nyquist rate is recovered from the reconstructed EMV and then used to generate the likelihood function, with which an interpolation method seeks the local maximum near the coarse CFO estimate.
Performance evaluation shows that the proposed CS-based CFO estimation can be implemented at a reduced sampling rate, with an acceptable estimation deterioration in terms of mean squared error (MSE). Compared with the weighted coherence minimization (WCM) optimization [22] and the CoSaMP reconstruction algorithm, the elaborate performance evaluations show that the proposed FAWC and MFB-CoSaMP can independently or jointly improve the accuracy of CFO estimation (including coarse and fine CFO estimation), and the improvement is robust to the system parameters, e.g., the sparsity level, the number of measurements, and the length of the observed signal. Furthermore, the MSE of the proposed CFO estimation can almost reach its Cramér-Rao lower bound (CRLB) under reasonable conditions.
The main contributions of this paper are summarized as follows.

(a)
We confirm the compressibility of the CFO EMV in conventional maximum likelihood (ML)-based CFO estimation. Thus, compressive sampling can be employed for CFO estimation.

(b)
A novel FAWC optimization method is proposed by exploiting the EMV features. Compared with WCM, the proposed FAWC obtains a measurement matrix better suited to CFO estimation, improving the reconstruction accuracy at comparable computational complexity. The proposed method is also robust to the design parameters and easily reaches convergence.

(c)
An MFB-CoSaMP algorithm is proposed to reconstruct the EMV by exploiting its features. Compared with the classic CoSaMP algorithm, the proposed method improves the recovery accuracy and reduces the computational complexity. Furthermore, the improvement in recovery accuracy is robust to varying parameters.

(d)
We implement the CFO estimation (including coarse and fine estimation) with compressive sampling. Furthermore, the MSE performance can reach the CRLB under reasonable system parameters.
The rest of this paper is organized as follows. In Section 2, we formulate the compressive sampling method for CFO estimation, where the sampling expression is derived from the ML-based approach in a conventional Nyquist-rate system model. Section 3 deals with the optimization of the measurement matrix by exploiting the EMV features. In Section 4, the CFO estimation method is proposed, where we present the MFB-CoSaMP recovery method, the coarse CFO estimation, and the fine CFO estimation. Performance evaluations are shown in Section 5. Finally, Section 6 concludes the paper.
Notation: We use boldface letters to denote matrices and column vectors; 0 denotes the zero vector of arbitrary size; (·)^{T}, (·)^{H}, (·)^{−1}, (·)^{†}, and ⌊·⌋ denote the transpose, conjugate transpose, matrix inversion, Moore-Penrose pseudo-inversion, and floor operation, respectively; I _{ P } is the P×P identity matrix; G(i,j) is the (i,j)th element of the matrix G; we write ∥·∥_{ p } for the usual ℓ _{ p } vector norm: \({\left\| {\mathbf {x}} \right\|_{p}} = {\left({\sum\nolimits_{i} {{{\left| {x_{i}} \right|}^{p}}}} \right)^{{1 / p}}}\); supp(x)={i : x _{ i }≠0} is the support set, i.e., the index set of the nonzero elements of x; Φ _{ T } denotes the column submatrix comprising the columns of Φ indexed by the set T; x _{ T } denotes the entries of the vector x indexed by the set T; the complementary set of T is denoted by T ^{c}; ∅ denotes the empty set; and E{·} is the expectation operator.
Compressive sampling for CFO estimation
According to the conventional ML-based CFO estimation, we verify the feasibility of compressive sampling for CFO estimation in this section. In Subsection 2.1, we briefly describe the conventional ML-based CFO estimation method. Then, in Subsection 2.2, we derive the compressible EMV and summarize its features. Based on the compressibility of the EMV, the feasibility of compressive sampling for CFO estimation is verified in Subsection 2.3, where we show that compressive sampling can be conducted directly on the received signal (not on the EMV).
Conventional ML-based CFO estimation
From [15], without compressive sampling, the observation of the sampled signal can be expressed as

$$ {r_{k}} = {e^{j\left({2\pi \Delta f \cdot k{T_{s}} + \theta} \right)}} + {v_{k}},\quad k = 1,2,\cdots,N, $$(1)

where Δ f is the frequency offset to be estimated, T _{ s }≤1/(2·Δ f) is the sampling interval, θ is an unknown random phase with uniform probability density in [0,2π), and v _{ k } is a sample of the complex AWGN with zero mean and variance σ ^{2}. The carrier-to-noise ratio (CNR) ρ, which is the ratio of the signal power to the noise power in (1), is defined as [15]

$$ \rho = \frac{1}{{{\sigma ^{2}}}}. $$(2)
In the conventional estimation method [15], the problem of ML estimation of the frequency Δ f is to seek the maximum of the equivalent likelihood function

$$ \Lambda \left({\Delta \widetilde f} \right) = \left| {\sum\limits_{k = 1}^{N} {{r_{k}}{e^{ - j2\pi \Delta \widetilde f \cdot k{T_{s}}}}}} \right|, $$(3)

where \(\Delta \widetilde f\) is a tentative value for Δ f.
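As a concrete illustration of the grid search over the likelihood, the minimal sketch below assumes the unit-amplitude single-tone observation model r_k = e^{j(2πΔf·kT_s+θ)} + v_k consistent with [15]; all numeric values (N, T_s, the CNR, and the CFO) are illustrative choices, not specified by the paper at this point.

```python
import numpy as np

# Minimal sketch of conventional ML-based CFO estimation via grid search.
# Assumed model: r_k = exp(j(2*pi*df*k*Ts + theta)) + v_k (single tone in AWGN).
rng = np.random.default_rng(0)
N, P, Ts = 64, 64, 1e-9
df_true = 0.3241 / Ts                      # normalized CFO of 0.3241
theta = rng.uniform(0, 2 * np.pi)
sigma2 = 0.01                              # noise variance; CNR rho = 1/sigma2
k = np.arange(1, N + 1)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = np.exp(1j * (2 * np.pi * df_true * k * Ts + theta)) + noise

# Likelihood Lambda(df~) = |sum_k r_k exp(-j*2*pi*df~*k*Ts)| on P tentative CFOs.
grid = (np.arange(P) - P // 2) / P / Ts    # normalized tentative CFOs in [-0.5, 0.5)
Lambda = np.abs(np.array([np.sum(r * np.exp(-1j * 2 * np.pi * f * k * Ts)) for f in grid]))
df_hat = grid[np.argmax(Lambda)]           # coarse estimate, limited by grid spacing
print(abs(df_hat - df_true) * Ts)          # normalized error, below 1/P here
```

The residual error of the grid search is bounded by the grid spacing 1/(P·T_s), which is why the paper later refines the coarse estimate by interpolation.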
Sparsity of CFO EMV
Let \(\Psi \left ({\Delta \widetilde f} \right) = \sum \limits _{i = 1}^{N} {{r_{i}}{e^{ - j2\pi \Delta \widetilde f \cdot i{T_{s}}}}}\); then \({\Psi \left ({\Delta \widetilde f} \right)}\) can be expressed in vector form as

$$ \Psi \left({\Delta \widetilde f} \right) = {{\mathbf{r}}^{T}}{\boldsymbol{\Gamma}}\left({\Delta \widetilde f} \right), $$(4)

where r and \({\boldsymbol {\Gamma }}\left ({\Delta \widetilde f} \right)\) are, respectively,

$$ \mathbf{r} = {\left[ {{r_{1}},{r_{2}}, \cdots,{r_{N}}} \right]^{T}} $$(5)

and

$$ {\boldsymbol{\Gamma}}\left({\Delta \widetilde f} \right) = {\left[ {{e^{ - j2\pi \Delta \widetilde f \cdot {T_{s}}}},{e^{ - j2\pi \Delta \widetilde f \cdot 2{T_{s}}}}, \cdots,{e^{ - j2\pi \Delta \widetilde f \cdot N{T_{s}}}}} \right]^{T}}. $$(6)
In this paper, we name \(\Psi \left ({\Delta \widetilde f} \right)\) the CFO estimation metric (EM). From (3), the equivalent likelihood function \(\Lambda \left ({\Delta \widetilde f} \right)\) can be rewritten as

$$ \Lambda \left({\Delta \widetilde f} \right) = \left| {\Psi \left({\Delta \widetilde f} \right)} \right|. $$(7)
For a grid search, P (P≥N) tentative values of Δ f, denoted as \({\Delta {{\widetilde f}_{1}},\Delta {{\widetilde f}_{2}}, \cdots,\Delta {{\widetilde f}_{P}}}\), are considered. For simplicity, we consider P=N in this paper, since the same conclusions hold. According to the P tentative values, we form an EMV (denoted by \(\widetilde {\boldsymbol {\Psi }}\)) as

$$ \widetilde{\boldsymbol{\Psi}} = {\left[ {\Psi \left({\Delta {{\widetilde f}_{1}}} \right),\Psi \left({\Delta {{\widetilde f}_{2}}} \right), \cdots,\Psi \left({\Delta {{\widetilde f}_{P}}} \right)} \right]^{T}}. $$(8)
Substituting \(\Psi \left ({\Delta {{\widetilde f}_{p}}} \right){{ = }}{{\mathbf {r}}^{T}} {\boldsymbol {\Gamma }}\left ({\Delta {{\widetilde f}_{p}}} \right)\), p=1,2,⋯,P into (8), we have

$$ \widetilde{\boldsymbol{\Psi}} = \widetilde{\boldsymbol{\Gamma}}\mathbf{r}, $$(9)

where the P×N matrix \(\widetilde {\boldsymbol {\Gamma }}\) is

$$ \widetilde{\boldsymbol{\Gamma}} = {\left[ {{\boldsymbol{\Gamma}}\left({\Delta {{\widetilde f}_{1}}} \right),{\boldsymbol{\Gamma}}\left({\Delta {{\widetilde f}_{2}}} \right), \cdots,{\boldsymbol{\Gamma}}\left({\Delta {{\widetilde f}_{P}}} \right)} \right]^{T}}. $$(10)
In (8), the EMV is approximately sparse. That is, among the element amplitudes of the EMV (i.e., \(\left| {\Psi \left ({\Delta {{\widetilde f}_{1}}} \right)} \right|, \cdots, \left| {\Psi \left ({\Delta {{\widetilde f}_{P}}} \right)} \right|\)), only a few amplitudes are significant and the rest are nearly zero or negligible.
Examples are given in Fig. 1 to illustrate the compressibility of the EMV, where N=64, P=64, T _{ s }=10^{−9} s, and Δ f·T _{ s }∈(−0.5,0.5) is the normalized CFO. Note that we consider only the noise-free case in order to reveal the CFO features clearly. Four cases of normalized CFO, i.e., Δ f·T _{ s }=−0.4976 (near −0.5), Δ f·T _{ s }=−0.2441 (between −0.5 and 0), Δ f·T _{ s }=0.3241 (between 0 and 0.5), and Δ f·T _{ s }=0.4757 (near 0.5), are given in (a)–(d), respectively. From (a)–(d) in Fig. 1, and a large number of other experiments, the intrinsic features of the EMV can be summarized as follows.

(a)
Only a few element amplitudes in the EMV are significant.

(b)
The significant amplitudes gather in only one cluster when the normalized CFO axis from −0.5 to 0.5 is connected into a circle. In this paper, this cluster on a circle is denominated the circle cluster (i.e., the significant amplitudes form a cluster on a circle).
Note that, strictly speaking, the CFO estimation metrics do not form a single contiguous cluster in the special case that the normalized CFO is located near −0.5 (or 0.5): in that case, some significant amplitudes appear near 0.5 (or −0.5). We still describe this feature as a single cluster because of its cyclic periodicity when the normalized CFO axis from −0.5 to 0.5 is connected into a circle, which is why we call it the circle cluster.
According to these intrinsic features, the EMV can be compressed according to compressed sensing theory [6, 7]. For convenience of expression, we refer to the intrinsic features of the EMV simply as the EMV features in this paper.
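The two EMV features can be checked numerically. The sketch below reproduces the noise-free setting described for Fig. 1a (N = P = 64, T_s = 10^{-9} s, normalized CFO −0.4976, significance threshold 0.05) and verifies that the few significant amplitudes form one circle cluster around the peak; the zero initial phase is an arbitrary assumption.

```python
import numpy as np

# Numerical illustration of EMV features (a) and (b) in the Fig. 1a setting.
N = P = 64
Ts = 1e-9
df = -0.4976 / Ts                                   # normalized CFO near -0.5
k = np.arange(1, N + 1)
r = np.exp(1j * 2 * np.pi * df * k * Ts)            # noise-free, theta = 0 assumed
grid = (np.arange(P) - P // 2) / P / Ts             # P tentative CFOs in [-0.5, 0.5)
emv = np.array([np.sum(r * np.exp(-1j * 2 * np.pi * f * k * Ts)) for f in grid])
amp = np.abs(emv) / np.abs(emv).max()

# Feature (a): only a few normalized amplitudes exceed the 0.05 threshold.
significant = np.flatnonzero(amp > 0.05)
# Feature (b): they sit in one circle cluster around the peak, wrapping
# from the -0.5 end of the grid to the +0.5 end.
peak = int(np.argmax(amp))
circ_dist = np.minimum((significant - peak) % P, (peak - significant) % P)
print(len(significant), int(circ_dist.max()))
```

The significant indexes split between the two ends of the linear grid but stay within a small circular distance of the peak, which is exactly the circle-cluster feature.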
Feasibility of compressive sampling for CFO estimation
As verified in Subsection 2.2, the CFO EMV, i.e., \(\widetilde {\boldsymbol {\Psi }}\), can be compressed. However, the compressibility of the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) does not mean that the received signal r can undergo compressive sampling, because the sparsity lies in the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) rather than in the received signal r. Thus, we need to further analyze whether the compressibility of the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) can be mapped to compressive sampling of the received signal r.
Based on the compressed sensing theory [6, 7], an M×P (M≪N≤P) measurement matrix Φ can be employed to compress the EMV \(\widetilde {\boldsymbol {\Psi }}\) due to its sparsity. Then, the M×1 measurement vector, denoted as y, is given by

$$ \mathbf{y} = {\boldsymbol{\Phi}}\widetilde{\boldsymbol{\Psi}}. $$(11)
Substituting \({\widetilde {\boldsymbol {\Psi }}}={\boldsymbol {\widetilde {\Gamma }\mathbf {r}}}\) (see (9)) into (11), we can derive

$$ \mathbf{y} = {\boldsymbol{\Phi}}\widetilde{\boldsymbol{\Gamma}}\mathbf{r} = {\boldsymbol{\Theta}}\mathbf{r}, $$(12)

where the M×N matrix \({\boldsymbol {\Theta }} = {\boldsymbol {\Phi }}\widetilde {\boldsymbol {\Gamma }}\) is defined as the sensing matrix and can be expressed as

$$ {\boldsymbol{\Theta}} = {\left[ {{{\boldsymbol{\Theta}}_{1}},{{\boldsymbol{\Theta}}_{2}}, \cdots,{{\boldsymbol{\Theta}}_{M}}} \right]^{T}}, $$(13)

where Θ _{ m }=[θ _{ m1},θ _{ m2},⋯,θ _{ mN }]^{T}, m=1,2,⋯,M.
Fortunately, the derived expression in (12) can be directly employed to perform compressive sampling of the received signal r due to its form y=Θ r. Note that, since M is significantly smaller than N, y=Θ r implies that r (not the EMV) can be compressed by the M×N matrix Θ, i.e., compressive sampling of the received signal r can be conducted directly. With the sensing matrix Θ, we can adopt the generic circuit architecture of the analog-to-information converter (AIC) [25] or the modulated wideband converter (MWC) model [26] to implement the compressive sampling. Since M≪N, the sampling rate is naturally reduced, i.e., ADCs at a sub-Nyquist rate can be employed for CFO estimation.
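The equivalence y = ΦΨ̃ = Θr can be checked directly. In the sketch below, a random Gaussian Φ stands in for the optimized matrix of Section 3, P = N as assumed in the paper, and a noise-free r is used for simplicity.

```python
import numpy as np

# Sketch of the mapping (11)-(13): compressing the EMV with the M x P matrix
# Phi is equivalent to compressing the received signal r with the M x N
# sensing matrix Theta = Phi * Gamma~.
rng = np.random.default_rng(1)
N = P = 64
M = 16                                              # M << N measurements
Ts = 1e-9
k = np.arange(1, N + 1)
grid = (np.arange(P) - P // 2) / P / Ts             # P tentative CFOs
Gamma = np.exp(-1j * 2 * np.pi * np.outer(grid, k) * Ts)   # P x N, row p = Gamma(df_p)^T
r = np.exp(1j * 2 * np.pi * (0.3241 / Ts) * k * Ts)        # noise-free received signal
Psi = Gamma @ r                                     # EMV, Eq. (9)
Phi = (rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P))) / np.sqrt(2 * M)
Theta = Phi @ Gamma                                 # sensing matrix, Eq. (13)
y = Theta @ r                                       # sub-Nyquist measurements, Eq. (12)
# The two viewpoints coincide: y = Phi * Psi~ = Theta * r.
assert np.allclose(y, Phi @ Psi)
```

Only the M-dimensional y needs to be digitized, which is what allows the sub-Nyquist ADC operation described above.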
After conducting the compressive sampling according to (12), we use a reconstruction approach to reconstruct the EMV and then perform the CFO estimation based on the reconstructed EMV. The reconstruction accuracy is mainly decided by the measurement matrix and the reconstruction algorithm [6, 7]. Thus, we optimize the measurement matrix in Section 3 and improve the reconstruction algorithm in Section 4 for better EMV reconstruction accuracy.
Optimization of measurement matrix
In CS theory, the measurement matrix plays an important role in determining the reconstruction performance [6, 7], because a more efficient measurement matrix for compressive sampling leads to a higher probability of successful reconstruction. In [16], Baraniuk et al. proved that many random matrices are good measurement matrices, and optimized design methods can be found in [17–22]. However, these existing methods are not specially designed for CFO estimation, and thus the EMV features (see Subsection 2.2) are not exploited in optimizing the measurement matrix. Typically, the CFO EMV exhibits the intrinsic features that only a few element amplitudes are significant and that the significant amplitudes gather together to form a circle cluster, as depicted in Fig. 1. To optimize the measurement matrix Φ, we exploit the EMV features and propose the FAWC optimization method in this paper.
In [22], an optimization method, i.e., the WCM, is proposed for the block-sparse case. According to the EMV features, the sparsity of the EMV is a typical block-sparse case, i.e., the nonzero entries of the EMV gather in some clusters. Furthermore, when the circle cluster is introduced, the EMV becomes the special case in which the block sparsity is one; that is, the nonzero entries of the EMV occur in only one cluster. Therefore, the proposed FAWC mainly references the WCM optimization method of [22]. The main differences between the proposed FAWC and WCM are as follows:

(a)
The circle cluster is introduced to reduce the block sparsity to one with sub-block length K (i.e., the sparsity level), while WCM suffers from sub-block uncertainty. Consider Fig. 1a, an example in which the normalized CFO is located near −0.5. Assuming K=7 (i.e., amplitudes less than 0.05 are treated as ignorable), our FAWC with the circle cluster has only one sub-block of non-ignorable amplitudes and knows the exact number of non-ignorable amplitudes in that sub-block (i.e., the sub-block length is K=7), while the method in [22] has to consider two sub-blocks (i.e., a block sparsity of 2) of non-ignorable amplitudes, and the numbers of non-ignorable amplitudes in those two sub-blocks are uncertain. In fact, the actual numbers of non-ignorable amplitudes in Fig. 1a are 3 and 4, respectively, in the two sub-blocks identified according to the method in [22]. However, each of the two sub-blocks has to be treated as possibly containing up to seven non-ignorable amplitudes to cover all possibilities (i.e., the actual number of non-ignorable amplitudes may be 1,2,...,7 in each sub-block).

(b)
The concerned patterns of the Gram matrix G (defined in Eq. (15)) are different. For example, the concerned patterns of WCM and FAWC are given in Fig. 2a and Fig. 2b, respectively. In Fig. 2a, WCM considers three blocks of size 7, and its concerned pattern is based on the sub-block coherence. Unlike WCM, the concerned pattern in the proposed FAWC is mainly based on the significant amplitudes. Furthermore, minimizing the sub-block coherence is the main task of WCM in [22], whereas we minimize the coherence near the maximum of the significant amplitudes of the CFO estimation metric. This more appropriate coherence minimization exploits the EMV features, as will be verified in a later section.

(c)
The measurement matrix is optimized as a complex matrix, rather than as a real measurement matrix.
A summary of the proposed FAWC is exhibited in Table 1. Some details of the proposed FAWC are explained as follows.

1).
Objective of optimization
According to Eq. (9), the sparse vector \(\widetilde {\boldsymbol {\Psi }} = \widetilde {\boldsymbol {\Gamma }}\mathbf {r}\). Then, we have
$$ \mathbf{r} = \widetilde{\boldsymbol{\Gamma}}^{\dag} \widetilde{\boldsymbol{\Psi}}=\mathbf{D}\widetilde{\boldsymbol{\Psi}}, $$(14)where \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\) is just for expression convenience. Equation (14) indicates that D can be viewed as a dictionary under the CS framework. Then the Gram matrix of E=Φ D with normalized columns can be expressed as
$$ {\mathbf{G}} = {{\mathbf{E}}^{H}}{\mathbf{E}} = {{\mathbf{D}}^{H}}{{\boldsymbol{\Phi }}^{H}}{\boldsymbol{\Phi} \mathbf{D}}. $$(15)Similar to [22], the optimization objective in this paper, which minimizes the total coherence of the concerned pattern (the red entries in Fig. 2b, denoted by \({\mu _{C}^{t}}\)), the total coherence of the non-concerned pattern (the green entries in Fig. 2b, denoted by \(\mu _{NC}^{t}\)), and the normalization penalty (denoted by η) of the Gram matrix G, is given by
$$ {\boldsymbol{\Phi }} =\mathop {\arg \min }\limits_{\boldsymbol{\Phi }} \left\{ {\frac{1}{2}\eta + \left({1 - \alpha} \right)\mu_{NC}^{t} + \alpha {\mu_{C}^{t}}} \right\}, $$(16)where 0<α<1 is a weighting parameter between the total coherence of the concerned pattern and that of the non-concerned pattern. The normalization penalty η, the total coherence of the non-concerned pattern \(\mu _{NC}^{t}\), and the total coherence of the concerned pattern \({\mu _{C}^{t}}\) are defined as
$$ \left\{ \begin{array}{l} \eta = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{I}}} {{{\left| {G\left({i,j} \right) - 1} \right|}^{2}}} = \sum\limits_{j = 1}^{P} {{{\left| {G\left({j,j} \right) - 1} \right|}^{2}},} \\ \mu_{NC}^{t} = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{NC}}} {{{\left| {G\left({i,j} \right)} \right|}^{2}}},\\ {\mu_{C}^{t}} = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{C}}} {{{\left| {G\left({i,j} \right)} \right|}^{2}}}, \end{array} \right. $$(17)where i=1,2,⋯,P and j=1,2,⋯,P; Ω _{ I }, Ω _{ NC }, and Ω _{ C } are the index sets of the diagonal entries, the non-concerned pattern, and the concerned pattern of the Gram matrix, respectively (i.e., the index sets of the yellow entries, the green entries, and the red entries in Fig. 2b). By defining the complete set Ω={(i,j) | 1≤i≤P, 1≤j≤P}, we have
$$ \left\{ \begin{array}{l} {{\boldsymbol{\Omega }}_{I}}~~= \left\{ {\left({i,j} \right)\left| {~i = j} \right.} \right\},\\ {{\boldsymbol{\Omega }}_{C}}~= \left\{ {\left({i,j} \right)\left| {\left| {i - j} \right| \le \left\lfloor {\frac{K}{2}} \right\rfloor,~i \ne j} \right.} \right\}\\ ~~~~~~\cup \left\{ {\left({i,j} \right)\left| {\left| {i - j} \right| \ge P - \left\lfloor {\frac{K}{2}} \right\rfloor,~i \ne j} \right.} \right\},\\ {{\boldsymbol{\Omega }}_{NC}} = {\boldsymbol{\Omega }} - {{\boldsymbol{\Omega }}_{I}} - {{\boldsymbol{\Omega }}_{C}}. \end{array} \right. $$(18)In Eq. (18), Ω _{ NC } is expressed as the difference set of Ω, Ω _{ I }, and Ω _{ C }.
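As a sketch of (18), the three index sets can be generated with circular wrap-around as boolean masks; the toy sizes and the mask realization are my compact reading of the set-builder definitions.

```python
import numpy as np

# Index sets of Eq. (18): the concerned pattern Omega_C is a circular band
# (|i-j| <= floor(K/2) or |i-j| >= P - floor(K/2), i != j), reflecting the
# wrap-around of the circle cluster.
P, K = 16, 7
half = K // 2                                          # floor(K/2)
I, J = np.meshgrid(np.arange(P), np.arange(P), indexing="ij")
d = np.abs(I - J)
omega_I = (I == J)                                     # diagonal entries
omega_C = ((d <= half) | (d >= P - half)) & ~omega_I   # concerned pattern
omega_NC = ~omega_I & ~omega_C                         # non-concerned pattern
# The three patterns partition all P*P index pairs.
assert (omega_I.sum() + omega_C.sum() + omega_NC.sum()) == P * P
print(int(omega_C.sum()))
```

Each of the P rows contributes exactly 2·⌊K/2⌋ concerned entries (the circular neighbors within distance ⌊K/2⌋), so the concerned pattern stays small relative to the full Gram matrix.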

2).
Initialization of optimization
Duarte-Carvajalino and Sapiro [27] proposed designing Φ by minimizing \(\left\| {{{\mathbf {D}}^{T}}{{\boldsymbol {\Phi }}^{T}}{\boldsymbol {\Phi } \mathbf {D}} - {\mathbf {I}}_{P}} \right\|_{F}^{2}\), which is used to initialize Φ in [22] for the WCM algorithm.
Different from the real dictionaries in [22] and [27], the dictionary \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\) is a complex matrix due to the complex values of the CFO EM. Thus, we initialize Φ by minimizing \(\left\| {{{\mathbf {D}}^{H}}{{\boldsymbol {\Phi }}^{H}}{\boldsymbol {\Phi } \mathbf {D}} - {\mathbf {I}}_{P}} \right\|_{F}^{2}\), i.e.,
$$ {{\boldsymbol{\Phi }}^{\left(0 \right)}} = \mathop {\arg \min }\limits_{\boldsymbol{\Phi }} \left\| {{{\mathbf{D}}^{H}}{{\boldsymbol{\Phi }}^{H}}{\boldsymbol{\Phi} \mathbf{D}} - {{\mathbf{I}}_{P}}} \right\|_{F}^{2}. $$(19)The objective (19) can be solved by using the eigenvalue decomposition (EVD) of D D ^{H}, i.e.,
$$ {\mathbf{DD}}^{H}= {\mathbf{U}} {\boldsymbol{\Lambda}} {\mathbf{U}}^{H}, $$(20)where U is a unitary matrix, and Λ is a real diagonal matrix in which the diagonal entries are the eigenvalues of D D ^{H}. Then, the initial value of Φ, denoted by Φ ^{(0)}, can be determined by
$$ {{\boldsymbol{\Phi }}^{\left(0 \right)}} = \left[ {\begin{array}{cc} {{{\mathbf{I}}_{M}}}&{\mathbf{0}} \end{array}} \right]{{\boldsymbol{\Lambda }}^{ - \frac{1}{2}}}{{\mathbf{U}}^{H}}. $$(21)
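A compact sketch of the initialization (19)–(21), using a small random complex matrix as a stand-in for the dictionary D = Γ̃^† (with P = N, so D is square here):

```python
import numpy as np

# Initialization (19)-(21): with the EVD D D^H = U Lambda U^H,
# Phi^(0) = [I_M 0] Lambda^{-1/2} U^H.
rng = np.random.default_rng(2)
M, N = 4, 8
P = N
D = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
lam, U = np.linalg.eigh(D @ D.conj().T)        # Hermitian EVD, ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]                 # reorder: largest eigenvalues first
Phi0 = np.hstack([np.eye(M), np.zeros((M, N - M))]) @ np.diag(lam ** -0.5) @ U.conj().T
# Sanity check: this choice whitens the dictionary, Phi0 D D^H Phi0^H = I_M,
# which is what minimizing (19) along the top-M eigen-directions achieves.
assert np.allclose(Phi0 @ D @ D.conj().T @ Phi0.conj().T, np.eye(M))
```

The selector [I_M 0] keeps the M strongest whitened eigen-directions of D D^H, so Φ^(0) is a sensible starting point before the pattern-aware iterations begin.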
3).
The nth Iteration of optimization
According to [22], the updated value of Φ after the nth iteration, i.e., Φ ^{(n+1)}, is given by

$$ {{\boldsymbol{\Phi }}^{\left({n+1} \right)}} = {\boldsymbol{\Delta }}_{M}^{\frac{1}{2}}{\mathbf{V}}_{M}^{H}{{\boldsymbol{\Lambda }}^{ - \frac{1}{2}}}{{\mathbf{U}}^{H}}, $$(22)

where U and Λ are obtained from the eigenvalue decomposition of D D ^{H} (see (20)), and Δ _{ M } is the diagonal matrix of the M largest eigenvalues, with V _{ M } collecting the corresponding eigenvectors, of

$$ {\boldsymbol{\Upsilon}}={{\boldsymbol{\Lambda }}^{ - \frac{1}{2}}}{{\mathbf{U}}^{H}}{\mathbf{D}}{h_{t}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right){{\mathbf{D}}^{H}}{\mathbf{U}}{{\boldsymbol{\Lambda }}^{ - \frac{1}{2}}}. $$(23)

In (23), h _{ t }(G ^{(n)}) is defined as
$$ \begin{array}{l} {h_{t}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right) \buildrel \Delta \over = \frac{1}{3}{h_{\eta} }\left({{{\mathbf{G}}^{\left(n \right)}}} \right) + \frac{2}{3}\alpha {h_{{\mu_{C}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\\ ~~~~~~~~~~~~+ \frac{2}{3}\left({1 - \alpha} \right){h_{{\mu_{NC}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right), \end{array} $$(24)where the entries of h _{ η }(G ^{(n)}), \({h_{{\mu _{C}}}}\left ({{{\mathbf {G}}^{\left (n \right)}}} \right)\), and \({h_{{\mu _{NC}}}}\left ({{{\mathbf {G}}^{\left (n \right)}}} \right)\) are defined as
$$ \left\{ \begin{array}{l} {h_{\eta} }\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {1,\quad \left({i,j} \right) \in {{\boldsymbol{\Omega }}_{I}}}\\ {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),\quad \text{else}} \end{array}} \right.\\ {h_{{\mu_{C}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),\quad \left({i,j} \right) \in {{\boldsymbol{\Omega }}_{C}}}\\ {0,\quad \text{else}} \end{array}} \right.\\ {h_{{\mu_{NC}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),\quad \left({i,j} \right) \in {{\boldsymbol{\Omega }}_{NC}}}\\ {0,\quad \text{else}} \end{array}} \right. \end{array} \right. $$(25)For measurement-matrix optimization, the proposed FAWC satisfies the conditions of the surrogate objective of the bound-optimization method. Moreover, its iterative minimization guarantees convergence to a local solution. The proofs are omitted here because similar proofs can be obtained from Appendix B and Appendix A of [22] for the bound-optimization conditions and the convergence, respectively. Similar to [22], the computational complexity of the proposed optimization algorithm is O(N ^{3}) (the same as WCM), due to the application of the EVD (whose complexity is O(N ^{3})). Therefore, the proposed FAWC maintains a computational complexity comparable to WCM.
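The whole FAWC iteration can be sketched compactly. Combining (24) and (25), h_t(G) equals 1/3 on the diagonal, (1/3 + (2/3)α)·G(i,j) on the concerned pattern, and (1/3 + (2/3)(1−α))·G(i,j) on the non-concerned pattern. The toy-size sketch below follows (21)–(23) under that reading; the eigenvalue clipping is a numerical safeguard I add, not part of the paper.

```python
import numpy as np

# Toy sketch of the FAWC iteration (22)-(25), with P = N as in the paper.
rng = np.random.default_rng(3)
M, N, K, alpha, n_iter = 4, 12, 5, 0.5, 5
P = N
half = K // 2
I, J = np.meshgrid(np.arange(P), np.arange(P), indexing="ij")
d = np.abs(I - J)
om_I = (I == J)
om_C = ((d <= half) | (d >= P - half)) & ~om_I          # circular pattern of (18)

D = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
lam, U = np.linalg.eigh(D @ D.conj().T)
lam, U = lam[::-1], U[:, ::-1]
W = np.diag(lam ** -0.5) @ U.conj().T                   # Lambda^{-1/2} U^H
Phi = np.hstack([np.eye(M), np.zeros((M, N - M))]) @ W  # Phi^(0), Eq. (21)

for _ in range(n_iter):
    E = Phi @ D
    E = E / np.linalg.norm(E, axis=0)                   # normalized columns
    G = E.conj().T @ E                                  # Gram matrix, Eq. (15)
    coeff = 1 / 3 + (2 / 3) * np.where(om_C, alpha, 1 - alpha)
    ht = np.where(om_I, 1 / 3, coeff * G)               # h_t(G), Eqs. (24)-(25)
    Ups = W @ D @ ht @ D.conj().T @ W.conj().T          # Upsilon, Eq. (23)
    mu, V = np.linalg.eigh(Ups)
    mu, V = mu[::-1], V[:, ::-1]                        # top-M eigenpairs first
    mu = np.clip(mu.real, 0.0, None)                    # guard tiny negative values
    Phi = np.diag(mu[:M] ** 0.5) @ V[:, :M].conj().T @ W   # Eq. (22)
print(Phi.shape)
```

Each iteration costs one EVD of an N×N matrix, consistent with the O(N³) complexity stated above.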
CFO estimation method
Based on the compressive sampling (see Section 2) and the optimized measurement matrix (see Section 3), the proposed CFO estimation method first reconstructs the EMV. Then, we estimate the coarse CFO by seeking the maximum of the equivalent likelihood function built from the reconstructed EMV. Finally, for the fine CFO estimation, the Nyquist-rate received signal is recovered from the reconstructed EMV, and interpolation of the likelihood function locates the local maximum near the result of the coarse CFO estimation.
Sparse reconstruction of EMV
In this subsection, we present the proposed MFB-CoSaMP reconstruction method for the EMV (i.e., for \(\widetilde {\boldsymbol {\Psi }}\) recovery). The proposed reconstruction method mainly exploits the EMV features as a priori information, and thus improves the reconstruction accuracy. We denote the reconstructed EMV as \(\overset \smile {\boldsymbol {\Psi }}\) and implement the CFO estimation (including coarse and fine CFO estimation) on the basis of the reconstructed EMV.
Among the currently available CS signal recovery algorithms, the proposed MFB-CoSaMP mainly references the CoSaMP algorithm due to its high reconstruction accuracy and excellent robustness to noise [23, 24]. By further referencing the methodology of model-based CoSaMP [28], an efficient support-set identification method is developed. The objective of MFB-CoSaMP is to recover the EMV, i.e., the algorithm output is \(\overset \smile {\boldsymbol {\Psi }}\). We describe the critical points of MFB-CoSaMP in detail as follows.
A.1 Initialization of MFB-CoSaMP
The input parameters and initialization of MFB-CoSaMP are similar to those of the CoSaMP algorithm. As input parameters, we also need the measurement matrix Φ, the noisy measurements y, and the sparsity level K. In the initialization step, the initial target vector \({{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left (0 \right)}\) and the initial residual v are set to a zero vector and y, respectively, since no a priori information is available.
A.2 Identification based on EMV proxy
Similar to the classical CoSaMP algorithm in [23, 24], we form an EMV proxy u for CFO estimation, i.e.,

$$ \mathbf{u} = {{\boldsymbol{\Phi}}^{H}}\mathbf{v}, $$(26)

where Φ is the measurement matrix optimized in Section 3, and v is the residual of the current iteration. For description convenience, the P×1 vector u is expressed as u=[u _{1},u _{2},...,u _{ P }]^{T}. Unlike the CoSaMP algorithm, in which the 2K largest components of the proxy u are located, MFB-CoSaMP first locates the index of the maximal amplitude in u, i.e.,
In CoSaMP, the identification process locates the signal components that carry most of the energy, whereas the EMV features indicate that the significant amplitudes gather in only one circle cluster. Thus, the maximal amplitude is of special importance for determining the location of the circle cluster, owing to its usually high reliability. Starting from W _{1}, we then search the 2K−1 nearest indexes, which together with W _{1} form a circle cluster. The identification result, i.e., the support set W _{1}, is given by
where \({\widetilde {\mathbf {W}}_{11}}\) is defined as
where f _{ I }(X) is an index-indication function defined as
In (28), \({\widetilde {\mathbf {W}}_{12}}\) is determined by the value of W _{1}:
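Under my reading of this identification step (the exact odd/even index split of Eqs. (28)–(31) is not reproduced here, so the split around the center is one reasonable choice), the circle-cluster support selection can be sketched as:

```python
import numpy as np

# Circle-cluster identification of step A.2: take the index of the largest
# proxy amplitude, then the circularly nearest indexes, so the selected
# support is one contiguous cluster on the circle.
def circle_support(u, width):
    """Indexes of a circle cluster of `width` entries centered on argmax|u|."""
    P = len(u)
    center = int(np.argmax(np.abs(u)))          # most reliable index
    offsets = np.arange(width) - (width - 1) // 2
    return np.sort((center + offsets) % P)

K = 3
u = np.zeros(16)
u[[14, 15, 0, 1]] = [0.2, 0.7, 1.0, 0.4]        # cluster wrapping around the ends
W1 = circle_support(u, 2 * K)                   # A.2 keeps 2K indexes in total
print(W1)
```

Even though the large amplitudes straddle the two ends of the index range, the modulo wrap keeps them inside one selected cluster, which is the point of the circle-cluster feature.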
A.3 Support-set merger and metric-vector estimation
After obtaining the identified support set W_1, we unite it with the support set of the current approximation \({{{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left ({k - 1} \right)}}\) to construct a merged support set T in the kth iteration, i.e.,
Based on the merged support set T, a least-squares estimation is employed. Denoting b=[b_1,b_2,...,b_P]^T and the estimated metric vector as b_T, we have
Besides the estimated components b_T, the other components of b are set to zero, i.e.,
Compared with the CoSaMP algorithm, the procedures of support-set merger and metric-vector estimation in the proposed MFB-CoSaMP are similar, only with a different support set T.
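The least-squares step on the merged support, with all other components zeroed, can be sketched as below, assuming b_T is the pseudoinverse solution restricted to T (as in classic CoSaMP); the helper name is ours.

```python
import numpy as np

def ls_on_support(Phi, y, T):
    """Least-squares estimate restricted to support set T:
    b_T = pinv(Phi[:, T]) @ y, with every other entry of b set to zero.
    A sketch of the metric-vector estimation step."""
    P = Phi.shape[1]
    b = np.zeros(P, dtype=complex)
    b[T] = np.linalg.pinv(Phi[:, T]) @ y
    return b
```

With Phi = I and T = [1, 3], the result simply copies y onto positions 1 and 3 and leaves zeros elsewhere, which is an easy sanity check.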
A.4 Identification based on EMV
In the CoSaMP algorithm, the K largest components of the estimated b are located. By contrast, MFB-CoSaMP locates the maximal amplitude in b, i.e.,
On the basis of W_2 and the EMV features, the K−1 indexes nearest to W_2 in the circle cluster are searched. The identification result, i.e., the support set W_2, is given by
where \({\mathbf {W}}_{2}^{\left (o \right)}\), \({\mathbf {W}}_{21}^{\left (e \right)}\), and \({\mathbf {W}}_{22}^{\left (e \right)}\) are, respectively, given by
and
In (37)-(39), the index-indication function f_I(X) is defined in Eq. (30).
With the novel support set W_2, the components of b whose indexes lie in W_2 are retained, while the others are set to zero, i.e.,
A.5 Update of EMV
In the kth iteration, the metric vector \({{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left (k \right)}\) is updated according to b in (33), (34), and (40). Then, we have
With the current samples y and the updated metric vector \({{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left (k \right)}\), the residual v (i.e., the part of the metric vector that has not yet been approximated) is updated as
After K iterations of A.2 through A.5, the halting criterion of MFB-CoSaMP is satisfied, and the reconstructed \({{\overset \smile {\boldsymbol {\Psi }} }}\) is \({{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left (k \right)}\), i.e.,
A summary of the MFB-CoSaMP algorithm is exhibited in Table 2. Compared with classic CoSaMP, the proposed MFB-CoSaMP improves the reconstruction accuracy owing to the a priori information exploited from the EMV features. In the CoSaMP algorithm, the support set W_1 locates the 2K largest components of {u_1,u_2,...,u_P}. Based on the EMV feature that the significant amplitudes gather in a circle cluster, MFB-CoSaMP first locates the index of the maximal amplitude in u during the W_1 identification and then searches for the 2K−1 indexes nearest to it. The same method is adopted for the W_2 identification, except that K components of {b_1,b_2,...,b_P} form the support set W_2. In MFB-CoSaMP, the maximal amplitude is of special importance for determining the location of the circle cluster, owing to its usually highest reliability.
In addition to the accuracy improvement, MFB-CoSaMP also reduces the computational complexity, as the following comparison shows. Because the initialization, support-set merger, metric-vector estimation, and update are identical, CoSaMP and MFB-CoSaMP have the same computational complexity in these procedures. The main difference lies in the support-set identification presented in A.2 and A.4. In A.2 (or A.4), CoSaMP locates the 2K (or K) largest components of the proxy u (or b) in the whole P×1 space; thus, classic CoSaMP requires \({\sum \nolimits }_{i = 1}^{2K} {\left ({P - i} \right)} + {\sum \nolimits }_{i = 1}^{K} {\left ({P - i} \right)} \) real additions in each iteration. By contrast, MFB-CoSaMP only locates the maximum of u (or b) in the P×1 space and directly chooses the 2K−1 (or K−1) components nearest to that maximum, so it requires 2P real additions in each iteration. Obviously, \(2P<{\sum \nolimits }_{i = 1}^{2K} {\left ({P - i} \right)} + {\sum \nolimits }_{i = 1}^{K} {\left ({P - i} \right)} \) for any reasonable K≥1. Therefore, the proposed MFB-CoSaMP reduces the computational complexity compared with classic CoSaMP.
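Putting A.2-A.5 together, the iteration can be sketched as follows. This is our minimal reading of the steps, not the paper's Table 2: the variable names, the wrap-around circle-cluster convention, and the fixed iteration count are assumptions.

```python
import numpy as np

def mfb_cosamp(Phi, y, K, n_iter):
    """A self-contained sketch of the MFB-CoSaMP loop (A.2-A.5).
    Assumes a circularly wrapped cluster around the maximal amplitude."""
    M, P = Phi.shape
    psi = np.zeros(P, dtype=complex)      # current EMV approximation
    v = y.astype(complex).copy()          # residual

    def cluster(x, size):                 # circle cluster around the maximum
        c = int(np.argmax(np.abs(x)))
        return np.unique((c + np.arange(-(size // 2), -(size // 2) + size)) % P)

    for _ in range(n_iter):
        u = Phi.conj().T @ v                        # EMV proxy (A.2)
        W1 = cluster(u, 2 * K)                      # 2K-index circle cluster
        T = np.union1d(W1, np.flatnonzero(psi))     # support merger (A.3)
        b = np.zeros(P, dtype=complex)
        b[T] = np.linalg.pinv(Phi[:, T]) @ y        # least squares on T
        W2 = cluster(b, K)                          # prune to K-index cluster (A.4)
        psi = np.zeros(P, dtype=complex)
        psi[W2] = b[W2]                             # keep only the cluster entries
        v = y - Phi @ psi                           # residual update (A.5)
    return psi
```

In a noiseless toy run with a clustered 5-sparse EMV, P=128, and M=64 Gaussian measurements, this sketch recovers the vector essentially exactly, which matches the behavior the paper attributes to the cluster-based identification.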
Coarse CFO estimation
With the reconstructed EMV in (43), i.e., \({\overset \smile {\boldsymbol {\Psi }} } = {\left [ {\overset \smile {{\Psi }} \left ({\Delta {{\widetilde f}_{1}}} \right),\overset \smile {{\Psi }} \left ({\Delta {{\widetilde f}_{2}}} \right), \cdots,\overset \smile {{\Psi }} \left ({\Delta {{\widetilde f}_{P}}} \right)} \right ]^{T}}\), the coarse CFO estimation, denoted as \(\Delta {\widehat f_{{\text {coarse}}}}\), can be derived as
where p=1,2,...,P.
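Equation (44), as we read it, simply picks the tentative CFO whose reconstructed EMV amplitude is maximal; a minimal sketch (names are ours):

```python
import numpy as np

def coarse_cfo(psi_rec, f_grid):
    """Return the tentative CFO on the grid whose reconstructed EMV
    amplitude is maximal. `f_grid` holds the P tentative CFOs."""
    return f_grid[int(np.argmax(np.abs(psi_rec)))]
```
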
Fine CFO estimation
To implement the fine CFO estimation, we first utilize the reconstructed EMV to recover the received signal r at the Nyquist rate. Then, an interpolation method is employed to construct the equivalent likelihood function. Finally, we seek the local maximum of the constructed likelihood function to estimate the fine CFO.
From (9), we have \(\mathbf {r} = \widetilde {\boldsymbol {\Gamma }}^{\dag } {\boldsymbol {\widetilde \Psi }}\). With the reconstructed EMV (i.e., \({\overset \smile {\boldsymbol {\Psi }} }\)), the received signal r (sampled at the Nyquist rate) can be expressed as
where n is the N×1 noise vector caused by the inaccurate reconstruction and the approximate sparsity of \(\boldsymbol {\widetilde \Psi }\). Then, an approximation of r, denoted as \({{\overset \smile {\mathbf {r}} }}\), is given by
In (46), if the dominant element amplitudes of the EMV (i.e., \({{\boldsymbol {\widetilde \Psi }}}\)) are reconstructed accurately and a good sparse representation is obtained, the effect of the noise vector n is insignificant. Fortunately, with a good recovery algorithm (e.g., MFB-CoSaMP) and sufficient observations (i.e., a relatively large N) at a relatively high CNR, it is feasible to ignore the noise vector n.
With the recovered \(\overset \smile {\mathbf {r}}\) and the coarse estimate \(\Delta {\widehat f_{{\text {coarse}}}}\), we estimate the fine CFO (denoted as \(\Delta {\widehat f_{{\text {fine}}}}\)) near \(\Delta {\widehat f_{{\text {coarse}}}}\), where the frequency range for searching \(\Delta {\widehat f_{{\text {fine}}}}\) is \(\left [ {\Delta {{\widehat f}_{{\text {coarse}}}} - \zeta,\Delta {{\widehat f}_{{\text {coarse}}}} + \zeta } \right ]\) with ζ>0. Without loss of generality, ζ is chosen as half the search step of the coarse CFO estimation.
According to Eq. (6), we use the tentative frequency \({\Delta \overset \smile {{f}} }\) in \(\left [ {\Delta {{\widehat f}_{{\text {coarse}}}}  \zeta,\Delta {{\widehat f}_{{\text {coarse}}}} + \zeta } \right ]\) to construct N×1 vector \(\overset \smile {\boldsymbol {\Gamma }}\left ({\Delta \overset \smile {{f}}} \right)\) as
After replacing r and \({\boldsymbol {\Gamma }}\left ({\Delta \widetilde f} \right)\) with \(\overset \smile {\mathbf {r}}\) (in (46)) and \(\overset \smile {\boldsymbol {\Gamma }}\left ({\Delta \overset \smile {{f}}} \right)\) (in (47)), respectively, we express the equivalent likelihood function as
Thus, the fine CFO estimation \(\Delta {\widehat f_{{\text {fine}}}}\) can be obtained by seeking the maximum of the equivalent likelihood function \(\Lambda \left ({\Delta \overset \smile {{f}}} \right)\), i.e.,
In (49), the pseudoinverse \(\widetilde {\boldsymbol {\Gamma }}^{\dag }\) can be computed and stored in advance to save the processing resources during the fine CFO estimation.
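The fine search described above can be sketched as a grid search. We assume here that the equivalent likelihood reduces to the squared correlation of the recovered signal with a tentative complex exponential, which simplifies the paper's Eq. (48); treat this as an illustration rather than the exact estimator.

```python
import numpy as np

def fine_cfo(r_rec, f_coarse, zeta, n_grid=101):
    """Grid-search the equivalent likelihood in
    [f_coarse - zeta, f_coarse + zeta]; the likelihood is taken as the
    squared correlation with exp(j*2*pi*f*n) (our assumption)."""
    N = len(r_rec)
    n = np.arange(N)
    best_f, best_val = f_coarse, -np.inf
    for f in np.linspace(f_coarse - zeta, f_coarse + zeta, n_grid):
        val = np.abs(np.exp(-2j * np.pi * f * n) @ r_rec) ** 2
        if val > best_val:
            best_f, best_val = f, val
    return best_f
```

For a pure complex exponential at normalized frequency 0.1234 and a coarse estimate of 0.12, the search locks onto the true frequency to within the grid resolution.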
Performance evaluation
In this section, we evaluate the performance of the proposed methods. For the proposed FAWC, we evaluate its cost function, recovery performance, and robustness. For the proposed MFB-CoSaMP, we consider the reconstruction accuracy and robustness. For their combination, the coarse and fine CFO estimations are evaluated, respectively.
Performance of the optimized measurement matrix
To verify the effectiveness of the proposed optimization method FAWC in Section 3, comparisons against the WCM method in [22] are given in this subsection.
First, we plot the evolution of the cost function in (16) (i.e., \(\frac {1}{2}\eta + \left ({1 - \alpha } \right)\mu _{NC}^{t} + \alpha {\mu _{C}^{t}}\)) in Fig. 3 to observe its convergence behavior, where N=128, P=N=128, K=5, and M=N/2=64. Three cases, i.e., α=0.1, α=0.9, and α=0.99, are considered. For both FAWC and WCM, with a relatively large number of iterations, increasing α decreases the cost-function value of FAWC. After around 20 iterations, the proposed FAWC reaches a stable cost-function value, while jumps still occur in WCM. Furthermore, for the given dictionary \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\) constructed from the tentative CFOs, FAWC attains a smaller cost-function value, capturing a smaller coherence of both concerned and non-concerned patterns.
Similar to WCM, α≈1 is also a good choice for the proposed FAWC method. To avoid completely ignoring the coherence of non-concerned patterns, α=1 is not considered in the following simulations of this subsection. For the sake of fairness, both WCM and the proposed FAWC employ the classic CoSaMP method to reconstruct the EMV; we deliberately do not adopt the proposed MFB-CoSaMP here, so that the observed improvement is attributable to the optimization of the measurement matrix rather than to our reconstruction method. We evaluate the MSE performance, where the MSE in this paper is defined as
where E{·} denotes the expectation operator, and \({\widehat {\mathbf {X}}}\) is the estimate of X.
The MSE performance of the EMV recovery is given in Figs. 4 and 5, where N=128, P=N=128, α=0.9, and the reconstruction method is classic CoSaMP. Note that the main purpose of introducing the circle cluster in Section 3 is to solve the uncertainty of the sub-block when the normalized CFO is near +0.5 or −0.5; for the other cases, with the same CoSaMP recovery algorithm, the two optimization methods yield similar MSE performance. Thus, in this simulation, the unknown normalized CFO is randomly generated near +0.5 or −0.5, and we employ the interval \([0.45,0.5)\bigcup (-0.5,-0.45]\) to represent this space. The K-sparse EMV is formed in each simulation by the following procedure: (a) generate a noise-free EMV with a normalized CFO randomly chosen near +0.5 or −0.5; (b) find the maximum among the EMV element amplitudes; (c) set the elements of the EMV to zero except the maximum-amplitude element and the K−1 elements whose indexes are nearest to it (similar to forming W_2 in A.4 of Section 4). The formed EMV passes through the noisy channel and then generates measurements according to the measurement matrices optimized by WCM and by the proposed optimization method, respectively. In Fig. 4, different Ks are adopted while M is kept unchanged at N/2=64. From Fig. 4, the proposed FAWC optimization method slightly reduces the MSE compared with WCM. With increasing K, the MSE improvement becomes easier to distinguish, owing to the increasing significance of suitable concerned patterns for a larger K. Similar conclusions can be drawn from Fig. 5, where different Ms are adopted while K is kept at 13; again, the proposed FAWC optimization method slightly improves the MSE performance.
In Fig. 6, we simulate M, P, and K varying with N, where M=N/2, P=N, K=⌈N/10⌉, α=0.9, and three cases of CNR (i.e., ρ=30 dB, ρ=40 dB, and ρ=50 dB) are considered. From Fig. 6, the MSE improvement becomes more apparent with increasing CNR.
Besides coping with the uncertainty of the sub-block, the proposed optimization method also benefits the proposed reconstruction method, as shown in the later simulations.
Effectiveness of MFB-CoSaMP
In this subsection, we compare the reconstruction performance of the EMV when CoSaMP and MFB-CoSaMP are adopted, respectively. To fairly present the merits of MFB-CoSaMP, a Gaussian random matrix [16], generated with each entry independently drawn from a Gaussian distribution with zero mean and unit variance, is employed as the measurement matrix for both algorithms. We do not use the optimized measurement matrices (e.g., those optimized by the WCM method or by the proposed FAWC), so that no improvement is attributable to the optimization of the measurement matrix.
Similar to the MSE evaluation in Subsection 5.1, the same procedure is adopted to generate the K-sparse EMV and pass it through the noisy channel. Then, we employ the Gaussian random matrix to compress the EMV and obtain the measurements. To compare the reconstruction performance of CoSaMP and MFB-CoSaMP, Figs. 7, 8, and 9 give the MSE performance. In Fig. 7, the MSE performance with different sparsities K (i.e., K=7, K=9, and K=13) is considered, with N=128, P=N=128, and M=N/2=64. It can be seen that the proposed MFB-CoSaMP effectively reduces the MSE relative to classic CoSaMP. Similar conclusions can be drawn from Figs. 8 and 9. Figure 8 gives the MSE comparison for different numbers of measurements, where N=128, P=N=128, K=13, and four cases of measurements (i.e., M=64, M=80, M=96, and M=112) are considered. In Fig. 9, M, P, and K vary approximately linearly with N, i.e., N varies from 128 to 256 with M=N/2, P=N, and K=⌈N/20⌉ (K does not vary exactly linearly with N; we describe it as approximately linear for convenience). Besides these basic parameters, three cases of CNR, i.e., ρ=10 dB, ρ=20 dB, and ρ=30 dB, are considered. Compared with CoSaMP, MFB-CoSaMP obviously improves the MSE performance in Figs. 8 and 9. Moreover, Fig. 9 illustrates that a more significant improvement is obtained as the CNR increases, owing to the more remarkable EMV features at higher CNR. The MSE improvements in Figs. 7, 8, and 9 are mainly due to the prior information developed from the EMV features (see steps c) and g) in Table 2, or A.2 and A.4 in Subsection 4.1 for details).
Performance of CFO estimation
In this subsection, we discuss the influence of the proposed methods on the coarse and fine CFO estimation, respectively. For convenience of expression, the following abbreviations are used.
- "WCM + CoSaMP": the measurement matrix is optimized by the WCM method, and the reconstruction algorithm is CoSaMP.
- "FAWC + CoSaMP": the measurement matrix is optimized by the proposed FAWC optimization method, and the reconstruction algorithm is CoSaMP.
- "WCM + MFB-CoSaMP": the measurement matrix is optimized by the WCM method, and the reconstruction algorithm is MFB-CoSaMP.
- "FAWC + MFB-CoSaMP": the measurement matrix is optimized by FAWC, and the reconstruction algorithm is MFB-CoSaMP.
- "ML (Nyquist rate)": ML-based coarse CFO estimation with Nyquist-rate sampling.
Unlike the aforementioned simulations with known sparsity, the sparsity K is usually unknown during CFO estimation in practical systems. Thus, a sparsity level (i.e., an inexact sparsity) is employed in this section. To obtain a reasonable sparsity level, we use the maximum amplitude of the EMV to set the threshold. The maximum amplitude can be expressed as \(\gamma = \max \left \{ {\left | {\Psi \left ({\Delta {{\widetilde f}_{1}}} \right)} \right |,\left | {\Psi \left ({\Delta {{\widetilde f}_{2}}} \right)} \right |, \cdots,\left | {\Psi \left ({\Delta {{\widetilde f}_{P}}} \right)} \right |} \right \}\), where \(\Psi \left ({\Delta {{\tilde f}_{p}}} \right),p = 1,2, \cdots,P\) is defined in (4). Three thresholds, i.e., Th_1=0.1×γ, Th_2=0.05×γ, and Th_3=0.01×γ, are considered. An amplitude larger than Th_i, i=1,2,3, is viewed as a significant amplitude under the threshold Th_i, and the number of significant amplitudes is counted as the sparsity K in each experiment. The results of 10^5 statistical experiments are given in Tables 3 and 4, where the ceiling operator is employed to make the mean value and variance integers.
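The threshold rule for choosing the sparsity level can be sketched as below; the function name and the strict inequality follow our reading of the text.

```python
import numpy as np

def sparsity_level(emv, ratio):
    """Count EMV amplitudes strictly above ratio * gamma, where gamma is
    the maximum amplitude; this count serves as the sparsity level K when
    the true sparsity is unknown (Th_i = ratio * gamma)."""
    amps = np.abs(emv)
    return int(np.count_nonzero(amps > ratio * amps.max()))
```

For instance, with amplitudes [1, 0.06, 0.04, 0.005], the threshold Th_2 (ratio 0.05) gives K=2, while the looser Th_3 (ratio 0.01) gives K=3.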
From Tables 3 and 4, the moderate threshold Th_2 commendably covers the significant amplitudes while holding a relatively small K for good reconstruction accuracy, whereas Th_3 keeps a better approximation at the cost of a bigger K. In this paper, we always consider the case P=N; increasing P or N makes the significant amplitudes of the EMV more concentrated and easier to cover with a smaller K. The sparsity levels are chosen as follows.
- For N=128, we choose K=10 as the sparsity level according to its mean with Th_2. Th_1 is too high, since only one element of the EMV is retained (according to the mean), while Th_3 results in too large a K to ensure M≪N.
- For N=256, we choose the sparsity level K=23 according to the mean with Th_3.
- For N=512 and N=1024, the threshold Th_3 is employed, and the mean and variance with Th_3 are considered simultaneously, since N is large enough to cover more significant amplitudes of the EMV. The sparsity levels are then chosen as 35 (\(12+ \left \lceil {\sqrt {503}} \right \rceil \)) and 37 (\(7+ \left \lceil {\sqrt {3 \times 287}} \right \rceil \)), respectively.
- For simulation convenience, we also choose the sparsity level K=⌈N/10⌉ in some simulations.
C.1 Performance evaluation of coarse CFO estimation
We first verify that, compared with the WCM optimization, the correct probability of the coarse CFO estimation can be improved by using the proposed FAWC. A correct coarse CFO estimation is defined as
i.e., the offset between the estimated CFO and the real CFO is no more than half a search step △. In this paper, the search step for the coarse CFO estimation is set as △=1/P.
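The correctness criterion in (51) is a direct comparison against half a search step; a one-line sketch (names are ours):

```python
def is_correct_coarse(f_hat, f_true, P):
    """Correct coarse estimate per Eq. (51): offset at most half the
    search step, with step 1/P (a direct transcription of the text)."""
    return abs(f_hat - f_true) <= 0.5 / P
```
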
The correct probability of the coarse CFO estimation is given in Fig. 10, where N=128, P=N=128, α=0.9, M=N/2=64, and K=10 (according to Th_2 in Table 3). The measurement matrix is optimized by FAWC and WCM, respectively. The unknown normalized CFO is randomly generated in [0.45,+0.5) or (−0.5,−0.45] to illustrate that the proposed optimization method can solve the uncertainty of the sub-block. Compared with "WCM + CoSaMP", "FAWC + CoSaMP" shows that the proposed measurement matrix improves the correct probability, since the two methods share the same recovery algorithm (i.e., CoSaMP). Likewise, "WCM + MFB-CoSaMP" shows the recovery effectiveness of MFB-CoSaMP, since WCM is utilized in both methods. Although MFB-CoSaMP presents a significant improvement relative to "WCM + CoSaMP", the measurement matrix optimized by FAWC brings further improvement. Thus "FAWC + MFB-CoSaMP", whose correct probability is nearest to that of ML, obtains the best coarse CFO estimation. Figure 10 manifests the effectiveness of FAWC and MFB-CoSaMP, i.e., they can independently or jointly improve the correct probability of the coarse CFO estimation.
To elaborate the parameter influence, we investigate the influences of K, M, and N in Figs. 11, 12, and 13, respectively, where the normalized CFO is randomly generated in [−0.5,+0.5). With different K (i.e., K=7, K=9, K=13) and the other parameters the same as in Fig. 10, the correct probability of the coarse CFO estimation is given in Fig. 11. Again, the proposed FAWC and MFB-CoSaMP jointly improve the correct probability of the coarse CFO estimation compared with "WCM + CoSaMP". Besides this improvement, a smaller K (near the sparsity level K=10) appears to obtain a better correct probability when the CNR is relatively low (e.g., ρ≤−4 dB); when ρ≥−4 dB, the influence of K (near the sparsity level) is not clear.
Relative to the simulation in Fig. 11, we fix the sparsity K=13 (near 10, and ⌈N/10⌉=⌈128/10⌉=13), change M, and keep the other parameters the same. The curves of correct probability with different M are plotted in Fig. 12, where N=128, P=N=128, α=0.9, K=13, different measurement matrices (i.e., optimized by the FAWC and WCM methods), and different Ms (i.e., M=80, M=96, and M=112) are considered. With the increase of M, a higher correct probability is obtained, and the improvement is easier to observe at lower CNR. In Fig. 13, M, P, and K vary with N, where M=N/2, P=N, α=0.9, and K=⌈N/10⌉. Compared with "WCM + CoSaMP", "FAWC + MFB-CoSaMP" improves the correct probability of the coarse CFO estimation. When the CNR is relatively low, e.g., ρ≤−4 dB, a bigger N obtains a higher correct probability for both "WCM + CoSaMP" and "FAWC + MFB-CoSaMP", while this rule does not always hold at higher CNR. Even so, the improvement from "FAWC + MFB-CoSaMP" clearly exists.
In summary, for the coarse CFO estimation, our "FAWC + MFB-CoSaMP" effectively improves the correct probability compared with the conventional "WCM + CoSaMP".
C.2 Performance of fine CFO estimation
Under the compressive sampling scenario, our objective is to obtain a better MSE of the fine CFO estimation than that of the conventional CS-based method. Furthermore, the MSE performance is expected to reach the Cramér-Rao lower bound (CRLB) of Nyquist-rate sampling. From [15], the CRLB at the Nyquist rate is given by
where the CNR ρ is defined in (2).
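For reference, the well-known Nyquist-rate CRLB for estimating the normalized frequency of a single tone in AWGN, as given in [15], takes the textbook form below; readers should check it against the paper's own equation.

```latex
\mathrm{CRLB}\!\left( \Delta \widehat{f} \right)
  \;=\; \frac{3}{2\pi^{2}\,\rho\, N\left(N^{2}-1\right)},
```

where ρ is the CNR and N is the number of Nyquist-rate samples; the bound decays as 1/N^3, which is consistent with the observation in Figs. 16 and 17 that a larger N drives the MSE toward the CRLB.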
First, we investigate the MSE performance of the fine CFO estimation with different numbers of measurements. The performance evaluation is given in Fig. 14, where N=128, P=N=128, α=0.9, K=10, different measurement matrices (i.e., constructed by the proposed optimization method and the WCM method), and different Ms (i.e., M=64, M=96, and M=112) are considered. From Fig. 14, the proposed "FAWC + MFB-CoSaMP" improves the MSE performance compared with the conventional "WCM + CoSaMP". Increasing M obviously yields a better MSE for both "FAWC + MFB-CoSaMP" and "WCM + CoSaMP". However, continuing to increase M brings no significant MSE improvement once M is relatively large (e.g., M≥96) and the CNR is relatively high (e.g., ρ≥−2 dB).
Based on the statistics of the sparsity K (listed in Tables 3 and 4), the MSE performance with different K is shown in Fig. 15, where N=128, P=N=128, M=96, and α=0.9. In addition to the sparsity level K=10, three cases, i.e., K=7, K=9, and K=13, are considered. From Fig. 15, the proposed "FAWC + MFB-CoSaMP" has a better MSE performance than the conventional "WCM + CoSaMP". For both methods, the smallest MSE is reached at K=7 under low CNR (e.g., ρ≤−2 dB), while at high CNR (e.g., ρ≥2 dB) the smallest MSE is obtained with K=13. That is, a higher CNR (i.e., lower noise) favors a larger K, which covers enough significant amplitudes and yields a better EMV approximation. In fact, the influence of the given sparsity K in Fig. 15 is not significant.
Unlike the coarse CFO estimation, for which only the EMV needs to be recovered, the fine CFO estimation usually needs the signal r recovered at the Nyquist rate to construct the equivalent likelihood function (see Subsection 4.1). An approximation of the received signal r is given in (46), which requires a good enough approximation of the EMV so that enough of its "energy" is covered. A larger K appears more effective at relatively high CNR (e.g., in Fig. 15); however, a larger K usually results in worse reconstruction accuracy. From Tables 3 and 4, a larger N makes the "energy" of the EMV more concentrated. Thus, to balance K against the covered "energy", a larger N is a good choice.
To verify that a larger N yields better MSE performance, a simulation is given in Fig. 16, where P=N, M=⌈0.85×N⌉, K=⌈0.1×N⌉, α=0.9, and N=128, N=192, and N=256 are considered. As expected, increasing N reduces the MSE for both "FAWC + MFB-CoSaMP" and "WCM + CoSaMP", and the proposed "FAWC + MFB-CoSaMP" obtains a smaller MSE than "WCM + CoSaMP" for each N.
Another phenomenon observed in Fig. 16 is that the larger N is, the closer the MSE comes to its CRLB. To verify this, an extended simulation is given in Fig. 17, where P=N, M=⌈0.85×N⌉, K=⌈0.1×N⌉, α=0.9, and three cases of N (i.e., N=256, N=512, and N=1024) are considered. It is obvious that a large N lets the MSE nearly reach the CRLB, with only an insignificant remaining discrepancy.
Conclusions
In this paper, a preliminary study of CFO estimation based on compressed sensing has been presented. We first confirmed that compressive sampling is feasible for ML-based CFO estimation. To solve the number uncertainty of the sub-block in block-sparsity CS scenarios, we then introduced the circle cluster, proposed a new coherence pattern, and formed the FAWC optimization method by exploiting the features of the EMV. Compared with WCM, the proposed FAWC shows improvements in the full performance evaluations: it attains a smaller cost-function value to capture small coherence, converges better, and effectively solves the uncertainty of the sub-block, thereby improving the reconstruction accuracy and the robustness to the sparsity level, the received signal length, and the number of measurements. Furthermore, based on the EMV features, MFB-CoSaMP was proposed to boost the support-set merger, improve the reconstruction accuracy, reduce the computational complexity, and keep the improvement robust against the simulation parameters. Finally, the joint "FAWC + MFB-CoSaMP" has been verified by elaborate performance evaluations; for example, its MSE performance, which is close to the CRLB, is better than that of "WCM + CoSaMP", "WCM + MFB-CoSaMP", or "FAWC + CoSaMP", while the improvement is robust to the simulation parameters (e.g., sparsity level, number of measurements, and received signal length).
References
1. L Haring, A Czylwik, M Speth, in Proc. 2004 International OFDM Workshop. Analysis of synchronization impairments in multiuser OFDM systems (Dresden, 2004), pp. 91–95.
2. X Wang, B Hu, A low-complexity ML estimator for carrier and sampling frequency offsets in OFDM systems. IEEE Commun. Lett. 18(3), 503–506 (2014).
3. M Morelli, U Mengali, Feedforward frequency estimation for PSK: a tutorial review. Eur. Trans. Telecommun. 9(2), 103–116 (1998).
4. W Kuo, M Fitz, Frequency offset compensation of pilot symbol assisted modulation in frequency flat fading. IEEE Trans. Commun. 45(11), 1412–1416 (1997).
5. M Morelli, U Mengali, Carrier-frequency estimation for transmissions over selective channels. IEEE Trans. Commun. 48(9), 1580–1589 (2000).
6. D Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
7. E Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
8. P Cheng, Z Chen, Y Guo, L Gui, in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Distributed Bayesian compressive sensing based blind carrier-frequency offset estimation for interleaved OFDMA uplink (London, 2013), pp. 801–806.
9. J Zhang, K Niu, Z He, in Proc. IEEE International Conference on Communications (ICC). Multi-layer distributed Bayesian compressive sensing based blind carrier-frequency offset estimation in uplink OFDMA systems (Kuala Lumpur, 2016), pp. 1–5.
10. J Zhou, M Ramirez, S Palermo, S Hoyos, Digital-assisted asynchronous compressive sensing front-end. IEEE J. Emerg. Sel. Topics Circuits Syst. 2(3), 482–492 (2012).
11. X Chen, E Sobhy, Z Yu, S Hoyos, J Silva-Martinez, S Palermo, B Sadler, A sub-Nyquist rate compressive sensing data acquisition front-end. IEEE J. Emerg. Sel. Topics Circuits Syst. 2(3), 542–551 (2012).
12. S Kong, A deterministic compressed GNSS acquisition technique. IEEE Trans. Veh. Technol. 62(2), 511–521 (2013).
13. S Kong, B Kim, Two-dimensional compressed correlator for fast PN code acquisition. IEEE Trans. Wireless Commun. 12(11), 5859–5867 (2013).
14. B Kim, S Kong, Two-dimensional compressed correlator for fast acquisition of BOC(m, n) signals. IEEE Trans. Veh. Technol. 63(6), 2662–2672 (2014).
15. M Luise, R Reggiannini, Carrier frequency recovery in all-digital modems for burst-mode transmissions. IEEE Trans. Commun. 43(2/3/4), 1169–1178 (1995).
16. R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008).
17. V Abolghasemi, S Ferdowsi, S Sanei, A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing. Signal Process. 92(4), 999–1009 (2012).
18. G Li, Z Zhu, D Yang, L Chang, H Bai, On projection matrix optimization for compressive sensing systems. IEEE Trans. Signal Process. 61(11), 2887–2898 (2013).
19. W Chen, M Rodrigues, I Wassell, Projection design for statistical compressive sensing: a tight frame based approach. IEEE Trans. Signal Process. 61(8), 2016–2029 (2013).
20. L Zelnik-Manor, K Rosenblum, Y Eldar, Dictionary optimization for block-sparse representations. IEEE Trans. Signal Process. 60(5), 2386–2395 (2012).
21. N Cleju, Optimized projections for compressed sensing via rank-constrained nearest correlation matrix. Appl. Comput. Harmon. Anal. 36(3), 495–507 (2014).
22. L Zelnik-Manor, K Rosenblum, Y Eldar, Sensing matrix optimization for block-sparse decoding. IEEE Trans. Signal Process. 59(9), 4300–4312 (2011).
23. D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 53(12), 93–100 (2010).
24. D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009).
25. J Tropp, J Laska, M Duarte, J Romberg, R Baraniuk, Beyond Nyquist: efficient sampling of sparse bandlimited signals. IEEE Trans. Inf. Theory 56(1), 520–544 (2010).
26. M Mishali, Y Eldar, Blind multiband signal reconstruction: compressed sensing for analog signals. IEEE Trans. Signal Process. 57(3), 993–1009 (2009).
27. J Duarte-Carvajalino, G Sapiro, Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. IEEE Trans. Image Process. 18(7), 1395–1408 (2009).
28. R Baraniuk, V Cevher, M Duarte, C Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010).
Acknowledgements
The authors wish to thank the editor and the anonymous reviewers for their valuable suggestions, which helped significantly improve the quality of the paper. This work is supported in part by the project of the Meteorological Information and Signal Processing Key Laboratory of Sichuan Higher Education Institutes (Grant No. QXXCSYS201402), the Science and Technology Plan of Sichuan Province (Grant No. 2015JY0138), the Xihua University Young Scholars Training Program (Grant No. 01201408), the Key Scientific Research Fund of Xihua University (Grant Nos. Z1120941, Z1120945, Z1320927), the Key Projects of the Education Department of Sichuan Province (Grant No. 15ZA0134), the Open Research Subject of the Key Laboratory (Research Base) of Signal and Information Processing (Grant No. szjj2015071), and the Chunhui Plan of the Ministry of Education of China (Grant No. Z2015113).
Authors’ contributions
All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Qing, C., Wang, J., Huang, C. et al. Compressive samplingbased CFOestimation with exploited features. J Wireless Com Network 2016, 240 (2016). https://doi.org/10.1186/s1363801607301
Keywords
 Carrier frequency offset
 Compressed sensing
 Estimation metric
 CoSaMP
 Equivalent likelihood function