Compressive sampling-based CFO estimation with exploited features
 Chaojin Qing^{1},
 Jiafan Wang^{2},
 Chuan Huang^{3} and
 Hongyuan Chen^{4}
https://doi.org/10.1186/s13638-016-0730-1
© The Author(s) 2016
Received: 10 April 2016
Accepted: 15 September 2016
Published: 4 October 2016
Abstract
Based on the compressed sensing (CS) technique, the carrier frequency offset (CFO) is estimated in compressive sampling scenarios. We first confirm the compressibility of the estimation metric vector (EMV) of conventional maximum likelihood (ML)-based CFO estimation, and thus conduct compressive sampling at the receiver. By exploiting the EMV features, introducing a circle cluster, and proposing a novel coherence pattern, we then form a feature-aided weight coherence (FAWC) optimization to optimize the measurement matrix. Besides the proposed FAWC optimization, by referencing the compressive sampling matching pursuit (CoSaMP) algorithm and exploiting the EMV features, a metric-feature-based CoSaMP (MFB-CoSaMP) algorithm is proposed to improve the EMV reconstruction accuracy and to reduce the computational complexity of classic CoSaMP. With the reconstructed EMV, we finally develop a CFO estimation method to estimate the coarse CFO and fine CFO. Relative to weighted coherence minimization (WCM) and classic CoSaMP, elaborate performance evaluations show that FAWC and MFB-CoSaMP can independently or jointly improve the accuracy of the CFO estimation (including coarse CFO estimation and fine CFO estimation), and the improvement is robust to system parameters, e.g., sparsity level, number of measurements, etc. Furthermore, the mean squared error (MSE) of the proposed CFO estimation method can almost reach its Cramér-Rao lower bound (CRLB) when a relatively large number of measurements, a relatively high carrier-to-noise ratio (CNR), and a reasonable length of observed signals can be obtained.
Keywords
1 Introduction
The carrier frequency offset (CFO), which is one of the well-understood radio frequency (RF) impairments, may result in severe performance degradation at the receiver [1, 2]. To improve receiver performance, CFO estimation has been studied comprehensively. In [3–5], CFO estimation for additive white Gaussian noise (AWGN) channels, flat fading channels, and frequency-selective fading channels is respectively addressed. Recently, the compressive sensing (CS) approach [6, 7], which enables sub-Nyquist sampling of signals that are sparse or compressible in some domain, has been employed to reduce system complexity and to save power significantly. By exploiting the sparsity profile, CS-based CFO estimation is presented in [8, 9] for the multiuser uplink. Compared with CFO estimation that does not utilize CS, the estimation accuracy is improved due to the a priori information of the sparse approximation. Although various methods of CFO estimation have been proposed with and without CS, the sampling rate of the existing methods, e.g., [3–5, 8, 9], needs to be at least the Nyquist rate, resulting in excessive power consumption and design difficulty for the analog-to-digital converter (ADC) when a high sampling rate is required [10, 11].
To reduce the sampling rate, CS is introduced into the synchronization issue in [12–14]. In [12], a fast and rough estimate of pseudo-noise (PN) code phase and Doppler frequency with a reduced number of parallel correlators (i.e., compressed correlators) is proposed, where the sparse expression is based on autocorrelation. For binary phase-shift keying (BPSK) signals and binary offset carrier (BOC) modulation signals, the two-dimensional compressed correlator (TDCC) technique for the rough estimate of PN code phase and Doppler frequency is introduced in [13] and [14], respectively. Based on the observation that a hypothesis test for a code phase and Doppler frequency next to the true hypothesis can still yield a non-negligible amount of signal energy, the compressed correlator technique in [12–14] tests a compressed hypothesis and coherently combines the signal energy in the neighboring hypotheses. Although the number of correlators is reduced, the compressed correlator technique can only roughly estimate the Doppler frequency. Furthermore, the features of the estimation metric vector (EMV) of CFO estimation are not exploited for compressive sampling and signal reconstruction. Thus, CS-based CFO estimation, which includes coarse estimation and fine estimation, is not intensively investigated in [12–14].
 ∙
Feasibility analysis of compressive sampling: Based on the compressibility of the EMV in ML-based CFO estimation [15], we first verify that the received signal can be acquired with compressive sampling and that the ADC requirement can thereby be reduced.
 ∙
Optimization of the measurement matrix: In compressive sampling, the measurement matrix directly determines whether the reconstruction can be realized successfully [6, 7]. The design of efficient measurement matrices thus becomes the core problem for a higher probability of reconstruction. In [16], Baraniuk et al. proved that many random matrices are good measurement matrices, and some optimization methods can also be found in the existing literature, such as [17–22]. These existing methods, however, are not specially designed for CFO estimation, and thus cannot obtain optimized performance (e.g., reconstruction-accuracy improvement for EMV recovery). To obtain a more suitable measurement matrix, we exploit the features of the EMV. Firstly, the EMV is expressed as a circle cluster to reduce the block sparsity to one (i.e., the significant amplitudes are gathered in one sub-block when the EMV is divided into multiple sub-blocks). With this special block-sparsity structure, a novel coherence pattern is proposed to fully utilize the structure information of the circle cluster. Then, a feature-aided weight coherence (FAWC) optimization, based on the algorithm of weighted coherence minimization (WCM) [22], is developed to optimize the measurement matrix without increasing the computational complexity.
 ∙
Reconstruction algorithm: The reconstruction algorithm is another critical factor for successful reconstruction. Given the compressive sampling at the receiver, many recovery algorithms have been proposed. Among these reconstruction algorithms, we mainly reference compressive sampling matching pursuit (CoSaMP) [23, 24], due to its high reconstruction accuracy and excellent robustness to noise. Based on the CoSaMP algorithm and the EMV features, a metric-feature-based CoSaMP (MFB-CoSaMP) algorithm is proposed to improve the EMV reconstruction accuracy and to reduce the computational complexity of classic CoSaMP.
 ∙
CFO estimation: With the reconstructed EMV, we implement the CFO estimation by using a two-step procedure that includes coarse and fine CFO estimation. In the coarse CFO estimation, the likelihood function is constructed from the reconstructed EMV and used to seek the local maximum. As for the fine CFO estimation, the received signal vector is recovered at the Nyquist rate from the reconstructed EMV and then used to generate the likelihood function, with which an interpolation method is employed to seek the local maximum near the value of the coarse CFO estimation.
Performance evaluation shows that the proposed CS-based CFO estimation can be implemented with a reduced sampling rate, along with an acceptable estimation deterioration in terms of mean squared error (MSE). Compared with the weighted coherence minimization (WCM) optimization [22] and the CoSaMP reconstruction algorithm, elaborate performance evaluations show that the proposed FAWC and MFB-CoSaMP can independently or jointly improve the accuracy of the CFO estimation (including coarse CFO estimation and fine CFO estimation), and the improvement is robust to system parameters, e.g., the sparsity level, the number of measurements, and the length of the observed signal. Furthermore, the MSE of the proposed CFO estimation can almost reach its Cramér-Rao lower bound (CRLB) when reasonable conditions are obtained.
 (a)
We confirm the compressibility of the CFO EMV in conventional maximum likelihood (ML)-based CFO estimation. Thus, compressive sampling can be employed for CFO estimation.
 (b)
A novel FAWC optimization method is proposed by exploiting the features of the EMV. Compared with WCM, the proposed FAWC can obtain a measurement matrix better suited to CFO estimation, improving the reconstruction accuracy with comparable computational complexity. Also, the proposed method is robust to the design parameters and easily reaches convergence.
 (c)
An MFB-CoSaMP algorithm is proposed to reconstruct the EMV by exploiting its features. Compared with the classic CoSaMP algorithm, the proposed method improves the recovery accuracy and reduces the computational complexity. Furthermore, the improvement in recovery accuracy is robust to parameter variations.
 (d)
We implement the CFO estimation (including coarse and fine estimation) with compressive sampling. Furthermore, the MSE performance can reach its CRLB when reasonable system parameters are obtained.
The rest of this paper is organized as follows. In Section 2, we formulate the method of compressive sampling for CFO estimation, where the sampling expression is derived from the ML-based approach in a conventional Nyquist-rate system model. Section 3 deals with the optimization of the measurement matrix by exploiting the EMV features. In Section 4, the CFO estimation method is proposed, where we present the MFB-CoSaMP recovery method, the coarse CFO estimation, and the fine CFO estimation. Performance evaluations are shown in Section 5. Finally, Section 6 concludes this paper.
Notation: We use boldface letters to denote matrices and column vectors; 0 denotes the zero vector of arbitrary size; (·)^{ T }, (·)^{ H }, (·)^{−1}, (·)^{ † }, and ⌊·⌋ denote the transpose, conjugate transpose, matrix inversion, Moore-Penrose matrix inversion, and floor operation, respectively; I _{ P } is the P×P identity matrix; G(i,j) is the (i,j)th element of the matrix G; we write ∥·∥_{ p } for the usual ℓ _{ p } vector norm: \({\left\| {\mathbf {x}} \right\|_{p}} = {\left ({\sum_{i} {\left| {x_{i}} \right|^{p}}} \right)^{{1 / p}}}\); supp(x)={i:x _{ i }≠0} is the support set that denotes the index set of nonzero elements in x; Φ _{ T } denotes the column submatrix comprising the T columns of Φ; x _{ T } denotes the entries of the vector x in the set T; the complementary set of set T is denoted by T ^{ c }, ∅ denotes the empty set, and E{·} is the expectation operator.
2 Compressive sampling for CFO estimation
According to the conventional ML-based CFO estimation, we verify the feasibility of compressive sampling for CFO estimation in this section. In Subsection 2.1, we briefly describe the method of conventional ML-based CFO estimation. Then, in Subsection 2.2, we present the compressible EMV and summarize its features. Based on the compressibility of the EMV, the feasibility of compressive sampling for CFO estimation is verified in Subsection 2.3, by deriving that the received signals (not the EMV) can directly undergo compressive sampling.
2.1 Conventional MLbased CFO estimation
where \(\Delta \widetilde f\) is a tentative value for Δ f.
2.2 Sparsity of CFO EMV
In (8), the EMV is approximately sparse. That is, among the element amplitudes of the EMV (i.e., \(\left| {\Psi \left ({\Delta {{\widetilde f}_{1}}} \right)} \right|,\cdots \), \(\left| {\Psi \left ({\Delta {{\widetilde f}_{P}}} \right)} \right|\)), only a few amplitudes are significant and the rest are nearly zero or negligible.
 (a)
Only a few element amplitudes in the EMV are significant.
 (b)
The significant amplitudes gather in only one cluster when the normalized CFOs from −0.5 to 0.5 are connected into a circle. In this paper, this cluster on a circle is termed a circle cluster (i.e., the significant amplitudes form a cluster on a circle).
Note that the CFO estimation metrics do not form a cluster in the strict sense for the special case that the normalized CFO is located near −0.5 (or 0.5): some significant amplitudes then appear near 0.5 (or −0.5). We still describe this feature as a cluster because of its cyclic periodicity when the normalized CFOs from −0.5 to 0.5 are connected into a circle, which motivates the name circle cluster.
According to these intrinsic features, the EMV can be compressed according to compressed sensing theory [6, 7]. For convenience of expression, we also refer to the intrinsic features of the EMV as the EMV features in this paper.
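As a minimal numerical sketch of these features (all parameter values here are illustrative, not taken from the paper), the ML-style estimation metric can be evaluated on a grid of tentative normalized CFOs; only a few amplitudes near the true CFO are significant, and with a CFO near the ±0.5 edge the significant amplitudes wrap around the grid, i.e., form a circle cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 128, 256                      # observation length and CFO-grid size (illustrative)
df_true = 0.47                       # normalized CFO near the +0.5 edge
n = np.arange(N)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))    # known unit-modulus training signal
r = x * np.exp(2j * np.pi * df_true * n)         # noiseless received signal

grid = np.arange(P) / P - 0.5        # tentative normalized CFOs in [-0.5, 0.5)
# ML-style estimation metric: correlation of r with the CFO-rotated training signal
Psi = np.abs(np.array([np.sum(r * np.conj(x) * np.exp(-2j * np.pi * f * n))
                       for f in grid]))
Psi /= Psi.max()

significant = np.flatnonzero(Psi > 0.05)         # "significant" element amplitudes
print(len(significant), P)           # only a small fraction of the P bins is significant
```

Plotting `Psi` against `grid` would show the significant amplitudes clustered around 0.47 and continuing past the +0.5 edge to the −0.5 end of the grid, i.e., a single cluster on the circle.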
2.3 Feasibility of compressive sampling for CFO estimation
As verified in Subsection 2.2, the CFO EMV, i.e., \(\widetilde {\boldsymbol {\Psi }}\), can be compressed. However, the compressibility of the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) does not mean that the received signal r can undergo compressive sampling, because the sparsity lies in the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) rather than in the received signal r. Thus, we need to further analyze whether the compressibility of the CFO EMV \(\widetilde {\boldsymbol {\Psi }}\) can be mapped to compressive sampling of the received signal r.
where Θ _{ m }=[θ _{ m1},θ _{ m2},⋯,θ _{ mN }]^{ T },m=1,2,⋯,M.
Fortunately, the derived expression in (12) can be directly employed to perform compressive sampling of the received signal r due to its form y=Θ r. Note that, since M is significantly smaller than N, y=Θ r implies that r (not the EMV) can be compressed by the M×N matrix Θ, i.e., compressive sampling of the received signal r can be conducted directly. With the sensing matrix Θ, we can adopt the generic circuit architecture of the analog-to-information converter (AIC) [25] or the modulated wideband converter (MWC) model [26] to implement the compressive sampling. Since M≪N, the sampling rate is naturally reduced, i.e., ADCs at a sub-Nyquist rate can be employed for CFO estimation.
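A minimal sketch of this sampling step, assuming a random complex sensing matrix that stands in for the Θ of Eq. (12) (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 128, 32          # Nyquist-rate length N and M << N compressive measurements
r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Random complex sensing matrix; each of its M rows plays the role of one Theta_m
Theta = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)

y = Theta @ r           # y = Theta r: M samples acquired instead of N
print(y.shape[0], N)    # 32 measurements versus 128 Nyquist-rate samples
```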
After conducting the compressive sampling according to (12), we use a reconstruction approach to reconstruct the EMV and then perform the CFO estimation based on the reconstructed EMV. In particular, the reconstruction accuracy is mainly determined by the measurement matrix and the reconstruction algorithm [6, 7]. Thus, we optimize the measurement matrix in Section 3 and improve the reconstruction algorithm in Section 4 for better EMV reconstruction accuracy.
3 Optimization of measurement matrix
In CS theory, the measurement matrix plays an important role in determining the reconstruction performance [6, 7], because a more efficient measurement matrix for the compressive sampling leads to a higher probability of reconstruction. In [16], Baraniuk et al. proved that many random matrices are good measurement matrices. Optimization methods can be found in [17–22]. However, these existing methods are not specially designed for CFO estimation. Thus, the EMV features (see Subsection 2.2) are not exploited in the optimization of the measurement matrices. Usually, the CFO EMV exhibits the intrinsic features that only a few element amplitudes in the EMV are significant, and the significant amplitudes gather together to form a circle cluster, as depicted in Fig. 1. To optimize the measurement matrix Φ, we exploit the EMV features and propose an FAWC optimization method in this paper.
 (a)
The circle cluster is introduced to reduce the block sparsity to one with the sub-block length K (i.e., the sparsity level), while WCM suffers from sub-block uncertainty. In Fig. 1 a, an example in which the normalized CFO is located near −0.5 is considered. Assuming K=7 (i.e., amplitudes less than 0.05 are treated as ignorable), our FAWC with the circle cluster has only one sub-block with non-ignorable amplitudes and the exact number of non-ignorable amplitudes in that sub-block (i.e., the sub-block length is K=7), while the method in [22] has to consider two sub-blocks (i.e., the block sparsity is 2) with non-ignorable amplitudes, and the numbers of non-ignorable amplitudes in those two sub-blocks are uncertain. In fact, the actual numbers of non-ignorable amplitudes in Fig. 1 a are, respectively, 3 and 4 in the two sub-blocks according to the method in [22]. However, each of the two sub-blocks has to be considered as containing seven non-ignorable amplitudes to cover all possibilities (i.e., the actual numbers of non-ignorable amplitudes may be 1, 2, …, 7 in the two sub-blocks).
 (b)
The concerned patterns of the Gram matrix G (defined in Eq. (15)) are different. For example, the concerned patterns of WCM and FAWC are given in Fig. 2 a and Fig. 2 b, respectively. In Fig. 2 a, WCM considers three blocks of size 7, and its concerned patterns are based on sub-block coherence. Unlike WCM, the concerned patterns in the proposed FAWC are mainly based on the significant amplitudes. Furthermore, minimizing the sub-block coherence is the main task of WCM in [22], while we minimize the coherence close to the maximum of the significant amplitudes in the CFO estimation metric. This more appropriate coherence minimization is obtained by exploiting the EMV features, which will be verified in a later section.
 (c)
The measurement matrix is optimized on the basis of a complex matrix, rather than optimizing a real measurement matrix.
 1).
Objective of optimization
According to Eq. (9), the sparse vector \(\widetilde {\boldsymbol {\Psi }} = \widetilde {\boldsymbol {\Gamma }}\mathbf {r}\). Then, we have$$ \mathbf{r} = \widetilde{\boldsymbol{\Gamma}}^{\dag} \widetilde{\boldsymbol{\Psi}}=\mathbf{D}\widetilde{\boldsymbol{\Psi}}, $$(14)where \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\) is introduced for expression convenience. Equation (14) indicates that D can be viewed as a dictionary under the CS framework. Then the Gram matrix of E=Φ D with normalized columns can be expressed as$$ {\mathbf{G}} = {{\mathbf{E}}^{H}}{\mathbf{E}}= {{\mathbf{D}}^{H}}{{\boldsymbol{\Phi }}^{H}}{\boldsymbol{\Phi} \mathbf{D}}. $$(15)Similar to [22], the optimization objective in this paper, which minimizes the total coherence of the concerned pattern (the red entries in Fig. 2 b, denoted by \({\mu _{C}^{t}}\)), the total coherence of the non-concerned pattern (the green entries in Fig. 2 b, denoted by \(\mu _{NC}^{t}\)), and the normalization penalty (denoted by η) of the Gram matrix G, is given by$$ {\boldsymbol{\Phi }} =\mathop {\arg \min }\limits_{\boldsymbol{\Phi }} \left\{ {\frac{1}{2}\eta + \left({1 - \alpha} \right)\mu_{NC}^{t} + \alpha {\mu_{C}^{t}}} \right\}, $$(16)where 0<α<1 is a weighting parameter between the total coherence of the concerned pattern and that of the non-concerned pattern. The normalization penalty η, the total coherence of the non-concerned pattern \(\mu_{NC}^{t}\), and the total coherence of the concerned pattern \({\mu_{C}^{t}}\) are defined as$$ \left\{ \begin{array}{l} \eta = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{I}}} {\left| {G\left({i,j} \right) - 1} \right|^{2}} = \sum\limits_{j = 1}^{P} {\left| {G\left({j,j} \right) - 1} \right|^{2}}, \\ \mu_{NC}^{t} = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{NC}}} {\left| {G\left({i,j} \right)} \right|^{2}},\\ {\mu_{C}^{t}} = \sum\limits_{\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{C}}} {\left| {G\left({i,j} \right)} \right|^{2}}, \end{array} \right. $$(17)where i=1,2,⋯,P and j=1,2,⋯,P; Ω _{ I }, Ω _{ NC }, and Ω _{ C } are the index sets of the diagonal entries, the non-concerned pattern, and the concerned pattern of the Gram matrix, respectively (i.e., the index sets of the yellow, green, and red entries in Fig. 2 b). By defining the complete set Ω={(i,j) | 1≤i≤P, 1≤j≤P}, we have$$ \left\{ \begin{array}{l} {{\boldsymbol{\Omega }}_{I}} = \left\{ {\left({i,j} \right)\left| {~i = j} \right.} \right\},\\ {{\boldsymbol{\Omega }}_{C}} = \left\{ {\left({i,j} \right)\left| {\left| {i - j} \right| \le \left\lfloor {\frac{K}{2}} \right\rfloor,~i \ne j} \right.} \right\} \cup \left\{ {\left({i,j} \right)\left| {\left| {i - j} \right| \ge P - \left\lfloor {\frac{K}{2}} \right\rfloor,~i \ne j} \right.} \right\},\\ {{\boldsymbol{\Omega }}_{NC}} = {\boldsymbol{\Omega }} - {{\boldsymbol{\Omega }}_{I}} - {{\boldsymbol{\Omega }}_{C}}. \end{array} \right. $$(18)In Eq. (18), Ω _{ NC } is expressed as the difference set of Ω, Ω _{ I }, and Ω _{ C }.
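The index sets of Eq. (18) and the objective of Eq. (16) can be sketched as follows (a small illustrative P and K; note that the circular band defining Ω _{ C } wraps modulo P, reflecting the circle cluster):

```python
import numpy as np

def pattern_sets(P, K):
    """Masks for Eq. (18): diagonal (Omega_I), concerned pattern (Omega_C,
    a circular band of half-width floor(K/2)), and non-concerned (Omega_NC)."""
    i, j = np.meshgrid(np.arange(P), np.arange(P), indexing="ij")
    d = np.abs(i - j)
    omega_I = i == j
    omega_C = ((d <= K // 2) | (d >= P - K // 2)) & ~omega_I
    omega_NC = ~omega_I & ~omega_C
    return omega_I, omega_C, omega_NC

def fawc_objective(G, K, alpha):
    """Eq. (16): eta/2 + (1 - alpha) * mu_NC^t + alpha * mu_C^t, via Eq. (17)."""
    omega_I, omega_C, omega_NC = pattern_sets(G.shape[0], K)
    eta = np.sum(np.abs(G[omega_I] - 1.0) ** 2)
    mu_C_t = np.sum(np.abs(G[omega_C]) ** 2)
    mu_NC_t = np.sum(np.abs(G[omega_NC]) ** 2)
    return 0.5 * eta + (1 - alpha) * mu_NC_t + alpha * mu_C_t

# An identity Gram matrix is perfectly incoherent: the objective is exactly zero.
print(fawc_objective(np.eye(8), K=3, alpha=0.5))   # 0.0
```

For P=8 and K=3, each index has K−1=2 circular neighbors within distance ⌊K/2⌋=1, so Ω _{ C } contains 16 entries; the two corner entries with |i−j|=7 are what distinguish the circular pattern from a plain band.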
 2).
Initialization of optimization
Duarte-Carvajalino and Sapiro [27] proposed designing Φ by minimizing \(\left\| {{{\mathbf {D}}^{T}}{{\boldsymbol {\Phi }}^{T}}{\boldsymbol {\Phi } \mathbf {D}} - {\mathbf {I}}_{P}} \right\|_{F}^{2}\), which is used to initialize Φ in [22] for the WCM algorithm.
Different from the real dictionary in [22] and [27], the dictionary \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\) is a complex matrix due to the complex values of the CFO estimation metric. Thus, we initialize Φ by minimizing \(\left\| {{{\mathbf {D}}^{H}}{{\boldsymbol {\Phi }}^{H}}{\boldsymbol {\Phi } \mathbf {D}} - {\mathbf {I}}_{P}} \right\|_{F}^{2}\), i.e.,$$ {{\boldsymbol{\Phi }}^{\left(0 \right)}}\mathop { = \arg\min }\limits_{\boldsymbol{\Phi }} \left\| {{{\mathbf{D}}^{H}}{{\boldsymbol{\Phi }}^{H}}{\boldsymbol{\Phi} \mathbf{D}} - {{\mathbf{I}}_{P}}} \right\|_{F}^{2}. $$(19)The objective (19) can be solved by using the eigenvalue decomposition (EVD) of D D ^{ H }, i.e.,$$ {\mathbf{DD}}^{H}= {\mathbf{U}} {\boldsymbol{\Lambda}} {\mathbf{U}}^{H}, $$(20)where U is a unitary matrix, and Λ is a real diagonal matrix whose diagonal entries are the eigenvalues of D D ^{ H }. Then, the initial value of Φ, denoted by Φ ^{(0)}, can be determined by$$ {{\boldsymbol{\Phi }}^{\left(0 \right)}} = \left[ {\begin{array}{cc} {{{\mathbf{I}}_{M}}}&{\mathbf{0}} \end{array}} \right]{{\boldsymbol{\Lambda }}^{- \frac{1}{2}}}{{\mathbf{U}}^{H}}. $$(21)
 3).
The nth iteration of optimization
According to [22], the value of Φ in the nth iteration, i.e., Φ ^{(n+1)}, is given by$$ {{\boldsymbol{\Phi }}^{\left({n+1} \right)}} = {\boldsymbol{\Delta }}_{M}^{\frac{1}{2}}{\mathbf{V}}_{M}^{H}{{\boldsymbol{\Lambda }}^{- \frac{1}{2}}}{{\mathbf{U}}^{H}}, $$(22)where U and Λ are obtained from the eigenvalue decomposition of D D ^{ H } (see (20)); Δ _{ M } contains the M largest eigenvalues and V _{ M } the corresponding eigenvectors of$$ {\boldsymbol{\Upsilon}}={{\boldsymbol{\Lambda }}^{- \frac{1}{2}}}{{\mathbf{U}}^{H}}{\mathbf{D}}{h_{t}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right){{\mathbf{D}}^{H}}{\mathbf{U}}{{\boldsymbol{\Lambda }}^{- \frac{1}{2}}}. $$(23)In (23), h _{ t }(G ^{(n)}) is defined as$$ {h_{t}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right) \buildrel \Delta \over = \frac{1}{3}{h_{\eta} }\left({{{\mathbf{G}}^{\left(n \right)}}} \right) + \frac{2}{3}\alpha {h_{{\mu_{C}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right) + \frac{2}{3}\left({1 - \alpha} \right){h_{{\mu_{NC}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right), $$(24)where the entries of h _{ η }(G ^{(n)}), \({h_{{\mu _{C}}}}\left ({{{\mathbf {G}}^{\left (n \right)}}} \right)\), and \({h_{{\mu _{NC}}}}\left ({{{\mathbf {G}}^{\left (n \right)}}} \right)\) are defined as$$ \left\{ \begin{array}{l} {h_{\eta} }\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {1,~\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{I}}}\\ {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),~\text{else}} \end{array}} \right.\\ {h_{{\mu_{C}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),~\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{C}}}\\ {0,~\text{else}} \end{array}} \right.\\ {h_{{\mu_{NC}}}}\left({{{\mathbf{G}}^{\left(n \right)}}} \right)\left({i,j} \right) = \left\{ {\begin{array}{l} {{{\mathbf{G}}^{\left(n \right)}}\left({i,j} \right),~\left({i,j} \right) \in {{\boldsymbol{\Omega }}_{NC}}}\\ {0,~\text{else}} \end{array}} \right. \end{array} \right. $$(25)For measurement-matrix optimization, the proposed FAWC satisfies the conditions of the surrogate objective of the bound-optimization method. Moreover, its iterative minimization guarantees convergence to a local solution. The proofs are omitted here because similar proofs can be found in Appendix B and Appendix A of [22] for the conditions of the bound-optimization method and the convergence, respectively. Similar to [22], the computational complexity of the proposed optimization algorithm is also O(N ^{3}) (the same as WCM), due to the application of EVD (whose complexity is O(N ^{3})). Therefore, the proposed FAWC maintains a computational complexity comparable to WCM.Table 1 FAWC optimization
Objective: Measurement matrix optimization with
given dictionary D, i.e.,
\({\boldsymbol {\Phi }} =\mathop {\arg \min }\limits _{\boldsymbol {\Phi }} \left \{ {\frac {1}{2}\eta + \left ({1 - \alpha } \right)\mu _{NC}^{t} + \alpha {\mu _{C}^{t}}} \right \}.\)
Initialization:Set n=0, and calculate the
eigenvalue decomposition of D D ^{ H }, i.e.,
D D ^{ H }=U Λ U ^{ H }.
Then, we calculate the initial value of Φ
according to
\({{\boldsymbol {\Phi }}^{\left (0 \right)}} = \left [ {\begin {array}{cc} {{{\mathbf {I}}_{M}}}&{\mathbf{0}} \end {array}} \right ]{{\boldsymbol {\Lambda }}^{- \frac {1}{2}}}{{\mathbf {U}}^{H}}.\)
Repeat:
a). Update G ^{(n)} according to Φ ^{(n)}:
G ^{(n)}=(Φ ^{(n)} D)^{ H } Φ ^{(n)} D.
b). Calculate h _{ t }(G ^{(n)}) according to
Eq. (24), and form the matrix
\( {\boldsymbol {\Upsilon }=} {{\boldsymbol {\Lambda }}^{- \frac {1}{2}}}{{\mathbf {U}}^{H}}{\mathbf {D}}{h_{t}}\left ({{{\mathbf {G}}^{\left (n \right)}}} \right){{\mathbf {D}}^{H}}{\mathbf {U}}{{\boldsymbol {\Lambda }}^{- \frac {1}{2}}}.\)
c). Calculate the eigenvalue decomposition of Υ,
and find its M top eigenvalues Δ _{ M } and the
corresponding eigenvectors V _{ M } of Υ.
d). Update measurement matrix according to
\({{\boldsymbol {\Phi }}^{\left ({n+1} \right)}} = {\boldsymbol {\Delta }}_{M}^{\frac {1}{2}}{\mathbf {V}}_{M}^{H}{{\boldsymbol {\Lambda }}^{- \frac {1}{2}}}{{\mathbf {U}}^{H}}.\)
e). n=n+1.
Until: Convergence criterion is satisfied.
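The steps of Table 1 can be sketched in numpy under stated assumptions: a random complex dictionary stands in for \(\mathbf {D} = \widetilde {\boldsymbol {\Gamma }}^{\dag }\), near-zero eigenvalues are floored for numerical stability, and a fixed iteration count replaces the convergence criterion.

```python
import numpy as np

def fawc(D, M, K, alpha, n_iter=20):
    """One possible implementation of the FAWC loop in Table 1."""
    N, P = D.shape
    lam, U = np.linalg.eigh(D @ D.conj().T)            # Eq. (20): D D^H = U Lam U^H
    lam_isqrt = np.diag(1.0 / np.sqrt(np.maximum(lam, 1e-12)))  # floor tiny eigenvalues
    A = lam_isqrt @ U.conj().T                         # Lam^{-1/2} U^H
    Phi = np.hstack([np.eye(M), np.zeros((M, N - M))]) @ A      # Eq. (21)

    # Circular concerned pattern, as in Eq. (18)
    i, j = np.meshgrid(np.arange(P), np.arange(P), indexing="ij")
    d = np.abs(i - j)
    on_I = i == j
    on_C = ((d <= K // 2) | (d >= P - K // 2)) & ~on_I
    on_NC = ~on_I & ~on_C

    for _ in range(n_iter):
        G = (Phi @ D).conj().T @ (Phi @ D)             # a). Gram matrix, Eq. (15)
        h_eta = np.where(on_I, 1.0, G)
        H = (h_eta / 3 + (2 / 3) * alpha * np.where(on_C, G, 0)
             + (2 / 3) * (1 - alpha) * np.where(on_NC, G, 0))   # b). Eq. (24)
        Ups = A @ D @ H @ D.conj().T @ A.conj().T      # Eq. (23)
        w, V = np.linalg.eigh(Ups)                     # c). EVD of Upsilon
        top = np.argsort(w)[::-1][:M]                  # M largest eigenpairs
        Phi = np.diag(np.sqrt(np.maximum(w[top], 0.0))) @ V[:, top].conj().T @ A  # d). Eq. (22)
    return Phi

rng = np.random.default_rng(2)
D = rng.standard_normal((16, 32)) + 1j * rng.standard_normal((16, 32))
Phi = fawc(D, M=8, K=5, alpha=0.5)
print(Phi.shape)     # (8, 16)
```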
4 CFO estimation method
Based on the compressive sampling (see Section 2) and the optimized measurement matrix (see Section 3), the proposed CFO estimation method first reconstructs the EMV. Then, we estimate the coarse CFO by seeking the maximum of the equivalent likelihood function built from the reconstructed EMV. Finally, for the fine CFO estimation, the Nyquist-rate received signal is recovered from the reconstructed EMV, and likelihood-function interpolation locates the local maximum starting from the result of the coarse CFO estimation.
4.1 Sparse reconstruction of EMV
In this subsection, we present the proposed MFB-CoSaMP reconstruction method for the EMV (i.e., for the recovery of \(\widetilde {\boldsymbol {\Psi }}\)). The proposed reconstruction method mainly exploits the EMV features as a priori information, and thus improves the reconstruction accuracy. We denote the reconstructed EMV as \(\overset \smile {\boldsymbol {\Psi }}\) and implement the CFO estimation (including coarse and fine CFO estimation) on the basis of the reconstructed EMV.
Among the currently available CS signal recovery algorithms, our proposed MFB-CoSaMP mainly references the CoSaMP algorithm due to its high reconstruction accuracy and excellent robustness to noise [23, 24]. By further referencing the methodology of model-based CoSaMP [28], an effective method of support-set identification is developed. The objective of MFB-CoSaMP is to recover the EMV, i.e., the algorithm output is \(\overset \smile {\boldsymbol {\Psi }}\). We describe some critical points of MFB-CoSaMP in detail as follows.
A.1 Initialization of MFBCoSaMP
The input parameters and the initialization of MFB-CoSaMP are similar to those of the CoSaMP algorithm. As input parameters, we also need the measurement matrix Φ, the noisy measurements y, and the sparsity level K. In the initialization step, the initial target vector \({{{\overset \smile {\boldsymbol {\Psi }} }}}^{\left (0 \right)}\) and the initial residual v are set to a zero vector and to y, respectively, since no prior information is available.
A.2 Identification based on EMV proxy
A.3 Supportset merger and metricvector estimation
Compared with the CoSaMP algorithm, the procedures of support-set merging and metric-vector estimation in the proposed MFB-CoSaMP are similar, differing only in the support set T.
A.4 Identification based on EMV
In (37)–(39), the index-indication function f _{ I }(X) is defined in Eq. (30).
A.5 Update of EMV
MFBCoSaMP algorithm
Input:Measurement matrix Φ, noisy measurements y, and sparsity level K. 

Output: CFO EMV \(\overset \smile {\boldsymbol {\Psi }}\) 
Initial: \({\overset \smile {\boldsymbol {\Psi }}^{\left (0 \right)}} \leftarrow {\mathbf {0}}\), v←y, k←0. 
Repeat: 
a). k=k+1. 
b). Form the metricvector proxy: u=Φ ^{ H } v. 
c). Identify the circlecluster location according to u 
W _{1}={i: |u _{ i }| = max{|u _{1}|,|u _{2}|,⋯,|u _{ P }|}};
W _{1} ← the 2K indexes nearest to W _{1} in index set { 1,2, ⋯,P} including W _{1}. 
d). Merge the support set: 
\({\mathrm {T}} \leftarrow {\text {supp}}\left ({{\overset \smile {\boldsymbol {\Psi }}^{\left ({k  1} \right)}}} \right) \bigcup {{\mathbf {W}}_{1}}.\) 
e). Least square estimation: b _{T}←(Φ _{T})^{ † } y. 
f). \({\mathbf {b}}\left| {{~}_{{{\mathrm {T}}^{c}}}} \right. \leftarrow {\mathbf {0}}\).
g). Identify circlecluster location according to b 
W _{2}={i: |b _{ i }| = max{|b _{1}|,|b _{2}|,⋯,|b _{ P }|}};
W _{2} ←the K indexes nearest to W _{2} in index set { 1,2, ⋯, P} including W _{2}. 
h). \({\mathbf {b}}\left| {{~}_{{\mathbf {W}}_{2}^{c}}} \right. \leftarrow {\mathbf {0}}.\)
i). Prune to obtain the next approximation: 
\({\overset \smile {\boldsymbol {\Psi }}^{\left (k \right)}} \leftarrow {\mathbf {b}}.\) 
j). Update current samples \({\mathbf {v }} \leftarrow {\mathbf {y}}  {\boldsymbol {\Phi }}{\overset \smile {\boldsymbol {\Psi }}^{\left (k \right)}}.\) 
Until: k=K 
\(\overset \smile {\boldsymbol {\Psi }} \leftarrow {\overset \smile {\boldsymbol {\Psi }}^{\left (K \right)}}\). 
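The listing above can be sketched in numpy as follows; `circular_window` is a hypothetical helper name for the "nearest indexes" steps c) and g), and the modulo-P indexing realizes the circle-cluster assumption:

```python
import numpy as np

def circular_window(center, width, P):
    """The `width` indexes circularly nearest to `center` in {0, ..., P-1}."""
    return np.mod(np.arange(center - width // 2, center - width // 2 + width), P)

def mfb_cosamp(Phi, y, K):
    """Sketch of MFB-CoSaMP: locate only the proxy maximum, then take its
    circular neighborhood, instead of sorting for the 2K (or K) largest."""
    M, P = Phi.shape
    psi = np.zeros(P, dtype=complex)                 # initial target vector
    v = y.astype(complex)                            # initial residual
    for _ in range(K):                               # the listing halts after K iterations
        u = Phi.conj().T @ v                         # b). metric-vector proxy
        W1 = circular_window(int(np.argmax(np.abs(u))), 2 * K, P)   # c). 2K nearest
        T = np.union1d(W1, np.flatnonzero(psi))      # d). merge support sets
        b = np.zeros(P, dtype=complex)
        b[T] = np.linalg.pinv(Phi[:, T]) @ y         # e)-f). least squares on T
        W2 = circular_window(int(np.argmax(np.abs(b))), K, P)       # g). K nearest
        keep = np.zeros(P, dtype=bool)
        keep[W2] = True
        b[~keep] = 0                                 # h). prune outside W2
        psi = b                                      # i). next approximation
        v = y - Phi @ psi                            # j). update residual
    return psi

# Noiseless demo: a circular cluster of K = 4 significant amplitudes, peak at index 20
rng = np.random.default_rng(3)
M, P, K = 32, 64, 4
Phi = rng.standard_normal((M, P)) / np.sqrt(M)
psi_true = np.zeros(P, dtype=complex)
psi_true[circular_window(20, K, P)] = [1.0, 1.0, 3.0, 1.0]
y = Phi @ psi_true
psi_hat = mfb_cosamp(Phi, y, K)
print(np.round(np.linalg.norm(psi_hat - psi_true), 6))
```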
In addition to the accuracy improvement, MFB-CoSaMP also reduces the computational complexity. The computational complexities of CoSaMP and MFB-CoSaMP are compared as follows. Due to the same processing for initialization, support-set merging, metric-vector estimation, and updating, CoSaMP and MFB-CoSaMP have the same computational complexity in these procedures. The main differences lie in the support-set identification procedure, presented in A.2 and A.4. In A.2 (or A.4), CoSaMP locates the 2K (or K) largest components of the proxy u (or b) in the entire P×1 space. Thus, the classic CoSaMP requires \({\sum \nolimits }_{i = 1}^{2K} {\left ({P - i} \right)} + {\sum \nolimits }_{i = 1}^{K} {\left ({P - i} \right)} \) real additions in each iteration. In comparison, MFB-CoSaMP only locates the maximum of u (or b) in the entire P×1 space in A.2 (or A.4), and directly chooses the locations of the other 2K−1 (or K−1) components nearest to the location of the maximum. Hence, MFB-CoSaMP requires 2P real additions in each iteration. Obviously, \(2P<{\sum \nolimits }_{i = 1}^{2K} {\left ({P - i} \right)} + {\sum \nolimits }_{i = 1}^{K} {\left ({P - i} \right)} \) for reasonable K≥1. Therefore, the proposed MFB-CoSaMP reduces the computational complexity compared to the classic CoSaMP.
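These addition counts can be checked directly (P and K here are illustrative):

```python
# Real additions per iteration in the identification steps:
# classic CoSaMP searches for the 2K and then K largest entries,
# while MFB-CoSaMP locates only the single maximum of each proxy (2P total).
def cosamp_additions(P, K):
    return sum(P - i for i in range(1, 2 * K + 1)) + sum(P - i for i in range(1, K + 1))

def mfb_additions(P, K):
    return 2 * P

P, K = 256, 8
print(cosamp_additions(P, K), mfb_additions(P, K))   # 5972 512
```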
4.2 Coarse CFO estimation
where p=1,2,...,P.
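In other words, the coarse estimate is the grid frequency whose reconstructed metric amplitude is largest; a minimal sketch, where the reconstructed EMV is a placeholder vector rather than an actual reconstruction:

```python
import numpy as np

P = 256
grid = np.arange(P) / P - 0.5            # the P tentative normalized CFOs
psi_rec = np.zeros(P)
psi_rec[200] = 1.0                       # placeholder: reconstruction peaked at bin 200
df_coarse = grid[int(np.argmax(np.abs(psi_rec)))]
print(df_coarse)                         # 200/256 - 0.5 = 0.28125
```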
4.3 Fine CFO estimation
To implement the fine CFO estimation, we first utilize the reconstructed EMV to recover the received signal r with Nyquist rate. Then, an interpolation method is employed to construct the equivalent likelihoodfunction. Finally, we seek the local maximum by using the constructed likelihoodfunction to estimate the fine CFO.
In (46), if the dominant element amplitudes of the EMV (i.e., \({{\boldsymbol {\widetilde \Psi }}}\)) can be reconstructed accurately and a good sparse representation can be obtained, the effect of the noise vector n is insignificant. Fortunately, with a good recovery algorithm (e.g., MFB-CoSaMP) and sufficient observations (i.e., a relatively large N) at a relatively high CNR, it is feasible to ignore the effect of the noise vector n.
With the recovered \(\overset \smile {\mathbf {r}}\) and the coarse CFO estimate \(\Delta {\widehat f_{{\text {coarse}}}}\), we estimate the fine CFO (denoted as \(\Delta {\widehat f_{{\text {fine}}}}\)) in the vicinity of \(\Delta {\widehat f_{{\text {coarse}}}}\), where the search range for \(\Delta {\widehat f_{{\text {fine}}}}\) is assumed to be \(\left [ {\Delta {{\widehat f}_{{\text {coarse}}}} - \zeta,\Delta {{\widehat f}_{{\text {coarse}}}} + \zeta } \right ]\) with ζ>0. Without loss of generality, ζ is chosen as half the search step of the coarse CFO estimation.
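A minimal sketch of this local refinement, assuming a generic likelihood-like metric supplied by the caller; the function and parameter names are hypothetical, and the paper constructs its actual metric from the recovered Nyquist-rate signal via interpolation.

```python
import numpy as np

def fine_cfo_search(metric, f_coarse, zeta, n_grid=101):
    """Refine a coarse CFO estimate by maximizing a likelihood-like metric
    over the local interval [f_coarse - zeta, f_coarse + zeta]."""
    grid = np.linspace(f_coarse - zeta, f_coarse + zeta, n_grid)
    vals = np.array([metric(f) for f in grid])
    return grid[int(np.argmax(vals))]

# Toy usage: a quadratic metric peaking at 0.31, searched near 0.3.
f_fine = fine_cfo_search(lambda f: -(f - 0.31) ** 2, f_coarse=0.3, zeta=0.05)
```

Choosing ζ as half the coarse search step guarantees the true CFO lies inside the refinement interval whenever the coarse stage is correct.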
In (49), the pseudo-inverse \(\widetilde {\boldsymbol {\Gamma }}^{\dag }\) can be computed and stored in advance to save processing resources during the fine CFO estimation.
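The precompute-and-reuse pattern can be illustrated as follows; `Gamma` here is a random stand-in for the interpolation matrix of (49), used only to show the design choice of computing the pseudo-inverse once, offline.

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = rng.standard_normal((8, 4))      # stand-in for the matrix in (49)
Gamma_pinv = np.linalg.pinv(Gamma)       # computed once and stored

# Per-estimate application is then a cheap matrix-vector product.
b = rng.standard_normal(8)
x = Gamma_pinv @ b
```

Since the interpolation matrix depends only on fixed system parameters, none of the online fine-estimation cost involves a matrix inversion.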
5 Performance evaluation
In this section, we evaluate the performance of the proposed methods. For the proposed FAWC, we evaluate its cost function, recovery performance, and robustness. For the proposed MFB-CoSaMP, we consider the reconstruction accuracy and robustness. For their combinations, the coarse and fine CFO estimations are evaluated, respectively.
5.1 Performance of the optimized measurement matrix
To verify the effectiveness of the proposed FAWC optimization method in Section 3, comparisons against the WCM method in [22] are given in this subsection.
where E{·} denotes the expectation operator and \({\widehat {\mathbf {X}}}\) is the estimate of X.
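As an illustration, the expectation in such an error metric can be approximated empirically over Monte Carlo trials; the exact normalization in the paper's metric may differ (e.g., by the energy of X), so this is a hedged sketch.

```python
import numpy as np

def empirical_mse(X, X_hat):
    """Empirical mean squared error between a target X and its estimate,
    approximating E{|X - X_hat|^2} by an average over all entries."""
    X, X_hat = np.asarray(X, dtype=float), np.asarray(X_hat, dtype=float)
    return float(np.mean(np.abs(X - X_hat) ** 2))
```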
Besides coping with the uncertainty of the sub-block, the proposed optimization method also helps to improve the proposed reconstruction method, as seen in the later simulations.
5.2 Effectiveness of MFB-CoSaMP
In this subsection, we compare the reconstruction performance of the EMV when CoSaMP and MFB-CoSaMP are, respectively, adopted. To isolate the merits of MFB-CoSaMP, a Gaussian random matrix [16], generated with each entry independently drawn from a Gaussian distribution with zero mean and unit variance, is employed as the measurement matrix for both algorithms. We do not use the optimized measurement matrices (e.g., the matrix optimized by the WCM method or the proposed FAWC) to avoid any improvement brought by the optimization of the measurement matrix.
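The measurement setup described above can be sketched as follows, with a toy clustered-sparse vector standing in for the EMV (the cluster position and values are arbitrary assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 128, 64
# Gaussian random measurement matrix: i.i.d. zero-mean, unit-variance entries.
Phi = rng.standard_normal((M, N))

# Toy stand-in for the EMV: a few clustered nonzero amplitudes.
x = np.zeros(N)
x[5:8] = [2.0, -1.5, 1.0]

# Compressive measurements at sub-Nyquist rate (M < N).
y = Phi @ x
```

Both reconstruction algorithms then operate on the same pair (y, Phi), so any accuracy gap is attributable to the algorithm alone.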
5.3 Performance of CFO estimation

- “WCM + CoSaMP” denotes that the measurement matrix is optimized by the WCM method and the reconstruction algorithm is CoSaMP.
- “FAWC + CoSaMP” denotes that the measurement matrix is optimized by the proposed FAWC method and the reconstruction algorithm is CoSaMP.
- “WCM + MFB-CoSaMP” denotes that the measurement matrix is optimized by the WCM method and the reconstruction algorithm is MFB-CoSaMP.
- “FAWC + MFB-CoSaMP” denotes that the measurement matrix is optimized by FAWC and the reconstruction algorithm is MFB-CoSaMP.
- “ML (Nyquist Rate)” denotes the ML-based coarse CFO estimation with Nyquist-rate sampling.
Table 3 Mean value of sparsity K with different thresholds

P (=N)             128   256   512   1024
Mean of K (Th_1)     1     1     1      1
Mean of K (Th_2)    10     6     4      3
Mean of K (Th_3)    61    23    12      7
Table 4 Variance of sparsity K with different thresholds

P (=N)                 128   256   512   1024
Variance of K (Th_1)     0     0     0      0
Variance of K (Th_2)    35    36    23     13
Variance of K (Th_3)  1792   828   503    278

- For N=128, we choose K=10 as the sparsity level according to its mean with Th_2, because Th_1 is too high (only one element of the EMV is reserved, according to the mean) and Th_3 results in too large a K to ensure M≪N measurements.
- For N=256, we choose the sparsity level K=23 according to the mean with Th_3.
- For N=512 and N=1024, the threshold Th_3 is usually employed, and the mean and variance with Th_3 are considered simultaneously, since N is large enough to cover more of the significant amplitudes of the EMV. The sparsity levels are then chosen as 35 (\(12+ \left \lceil {\sqrt {503}} \right \rceil \)) and 37 (\(7+ \left \lceil {\sqrt {3 \times 287}} \right \rceil \)), respectively.
- For simulation convenience, we also choose the sparsity level K=⌈N/10⌉ in some simulations.
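The selection rule applied above (mean sparsity under the chosen threshold, optionally inflated by the ceiling of the standard deviation) can be sketched as follows; the function name is our own.

```python
import math

def choose_sparsity(mean_k, var_k=None):
    """Pick a sparsity level K from the mean (and optionally the variance)
    of the thresholded EMV sparsity, as in the selection rules above:
    K = mean, or K = mean + ceil(sqrt(variance)) when variance is used."""
    if var_k is None:
        return mean_k
    return mean_k + math.ceil(math.sqrt(var_k))
```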
C.1 Performance evaluation of coarse CFO estimation
i.e., the offset between the estimated CFO and the real CFO is no more than half a search step Δ. In this paper, the search step for the coarse CFO estimation is set as Δ=1/P.
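This correct-detection criterion can be written compactly as follows; the function name and the normalized-frequency convention are our assumptions.

```python
def is_correct_coarse(f_hat, f_true, P):
    """A coarse CFO estimate counts as correct when it deviates from the
    true CFO by at most half a search step, with step delta = 1/P."""
    delta = 1.0 / P
    return abs(f_hat - f_true) <= delta / 2
```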
In the simulation of Fig. 11, we fix the sparsity K=13 (near 10, and ⌈N/10⌉=⌈128/10⌉=13), change M, and keep the other parameters the same. The curves of correct probability with different M are plotted in Fig. 12, where N=128, P=N=128, α=0.9, K=13, different measurement matrices (i.e., optimized by the FAWC method and the WCM method), and different values of M (i.e., M=80, M=96, and M=112) are considered. With the increase of M, a higher correct probability is obtained, and the improvement is easier to observe at lower CNR. In Fig. 13, M, P, and K vary with N, where M=N/2, P=N, α=0.9, and K=⌈N/10⌉. Compared with “WCM + CoSaMP”, “FAWC + MFB-CoSaMP” improves the correct probability of the coarse CFO estimation. When the CNR is relatively low, e.g., ρ≤−4 dB, a larger N yields a higher correct probability for both “WCM + CoSaMP” and “FAWC + MFB-CoSaMP”, while this rule does not necessarily hold at higher CNR. Even so, the improvement from “FAWC + MFB-CoSaMP” clearly exists.
According to the aforementioned results, the proposed “FAWC + MFB-CoSaMP” effectively improves the correct probability of the coarse CFO estimation compared with the conventional “WCM + CoSaMP”.
C.2 Performance of fine CFO estimation
where the CNR ρ is defined in (2).
Unlike the coarse CFO estimation, which only requires recovering the EMV, the fine CFO estimation usually needs the recovered signal r at the Nyquist rate to construct the equivalent likelihood function (see Section 4.1). An approximation of the received signal r is given in (46), which requires a good enough approximation of the EMV so that enough of its “energy” is covered. It appears that a larger K is more effective at a relatively high CNR (e.g., in Fig. 15). However, a larger K usually results in worse reconstruction accuracy. From Tables 3 and 4, a larger N makes the “energy” of the EMV more concentrated. Therefore, to balance K against the covered “energy”, a larger N is a good choice.
6 Conclusions
In this paper, a preliminary study of compressed sensing-based CFO estimation has been presented. We first confirmed that compressive sampling is feasible for ML-based CFO estimation. To resolve the uncertainty in the number of sub-blocks in block-sparse CS scenarios, we then introduced the circle cluster, proposed a new coherence pattern, and formed the FAWC optimization method by exploiting the features of the EMV. Compared with WCM, the proposed FAWC shows improvements in the full performance evaluations: it attains a smaller cost-function value, capturing smaller coherence, and it converges better. Besides the small coherence and good convergence, FAWC effectively resolves the sub-block uncertainty and thus improves the reconstruction accuracy and the robustness to the sparsity level, the received signal length, and the number of measurements. Furthermore, based on the EMV features, MFB-CoSaMP was proposed to improve the support-set merging, improve the reconstruction accuracy, reduce the computational complexity, and keep the improvement robust to the simulation parameters. Finally, the joint “FAWC + MFB-CoSaMP” has been verified by elaborate performance evaluations. For example, its MSE performance, which is close to the CRLB, is better than that of “WCM + CoSaMP”, “WCM + MFB-CoSaMP”, or “FAWC + CoSaMP”, and the improvement is robust to the simulation parameters (e.g., sparsity level, number of measurements, and received signal length).
Declarations
Acknowledgements
The authors wish to thank the editor and the anonymous reviewers for their valuable suggestions, which helped significantly improve the quality of the paper. This work is supported in part by the project of Meteorological information and Signal Processing Key Laboratory of Sichuan Higher Education Institutes (Grant No. QXXCSYS201402), the project of science and technology plan of Sichuan Province (Grant No. 2015JY0138), the Xihua University Young Scholars Training Program (Grant No. 01201408), the key scientific research fund of Xihua University (Grant No: Z1120941, Z1120945, Z1320927), the key projects of Education Department of Sichuan Province (Grant No. 15ZA0134), the Open Research Subject of Key Laboratory (Research Base) of signal and information processing (Grant No. szjj2015071), and the Chunhui plan of Ministry of education (Grant No. Z2015113) of China.
Authors’ contributions
All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 L Haring, A Czylwik, M Speth, in Proc. 2004 International OFDM Workshop. Analysis of synchronization impairments in multiuser OFDM systems (Dresden, 2004), pp. 91–95.
 X Wang, B Hu, A low-complexity ML estimator for carrier and sampling frequency offsets in OFDM systems. IEEE Commun. Lett. 18(3), 503–506 (2014).
 M Morelli, U Mengali, Feedforward frequency estimation for PSK: a tutorial review. Eur. Trans. Telecommun. 9(2), 103–116 (1998).
 W Kuo, M Fitz, Frequency offset compensation of pilot symbol assisted modulation in frequency flat fading. IEEE Trans. Commun. 45(11), 1412–1416 (1997).
 M Morelli, U Mengali, Carrier-frequency estimation for transmissions over selective channels. IEEE Trans. Commun. 48(9), 1580–1589 (2000).
 D Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
 E Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
 P Cheng, Z Chen, Y Guo, L Gui, in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Distributed Bayesian compressive sensing based blind carrier-frequency offset estimation for interleaved OFDMA uplink (London, 2013), pp. 801–806.
 J Zhang, K Niu, Z He, in Proc. IEEE International Conference on Communications (ICC). Multi-layer distributed Bayesian compressive sensing based blind carrier-frequency offset estimation in uplink OFDMA systems (Kuala Lumpur, 2016), pp. 1–5.
 J Zhou, M Ramirez, S Palermo, S Hoyos, Digital-assisted asynchronous compressive sensing front-end. IEEE J. Emerg. Sel. Topics Circuits Syst. 2(3), 482–492 (2012).
 X Chen, E Sobhy, Z Yu, S Hoyos, J Silva-Martinez, S Palermo, B Sadler, A sub-Nyquist rate compressive sensing data acquisition front-end. IEEE J. Emerg. Sel. Topics Circuits Syst. 2(3), 542–551 (2012).
 S Kong, A deterministic compressed GNSS acquisition technique. IEEE Trans. Veh. Technol. 62(2), 511–521 (2013).
 S Kong, B Kim, Two-dimensional compressed correlator for fast PN code acquisition. IEEE Trans. Wireless Commun. 12(11), 5859–5867 (2013).
 B Kim, S Kong, Two-dimensional compressed correlator for fast acquisition of BOC(m, n) signals. IEEE Trans. Veh. Technol. 63(6), 2662–2672 (2014).
 M Luise, R Reggiannini, Carrier frequency recovery in all-digital modems for burst-mode transmissions. IEEE Trans. Commun. 43(2/3/4), 1169–1178 (1995).
 R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008).
 V Abolghasemi, S Ferdowsi, S Sanei, A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing. Signal Process. 92(4), 999–1009 (2012).
 G Li, Z Zhu, D Yang, L Chang, H Bai, On projection matrix optimization for compressive sensing systems. IEEE Trans. Signal Process. 61(11), 2887–2898 (2013).
 W Chen, M Rodrigues, I Wassell, Projection design for statistical compressive sensing: a tight frame based approach. IEEE Trans. Signal Process. 61(8), 2016–2029 (2013).
 L Zelnik-Manor, K Rosenblum, Y Eldar, Dictionary optimization for block-sparse representations. IEEE Trans. Signal Process. 60(5), 2386–2395 (2012).
 N Cleju, Optimized projections for compressed sensing via rank-constrained nearest correlation matrix. Appl. Comput. Harmon. Anal. 36(3), 495–507 (2014).
 L Zelnik-Manor, K Rosenblum, Y Eldar, Sensing matrix optimization for block-sparse decoding. IEEE Trans. Signal Process. 59(9), 4300–4312 (2011).
 D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 53(12), 93–100 (2010).
 D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009).
 J Tropp, J Laska, M Duarte, J Romberg, R Baraniuk, Beyond Nyquist: efficient sampling of sparse bandlimited signals. IEEE Trans. Inf. Theory 56(1), 520–544 (2010).
 M Mishali, Y Eldar, Blind multiband signal reconstruction: compressive sensing for analog signals. IEEE Trans. Signal Process. 57(3), 993–1009 (2009).
 J Duarte-Carvajalino, G Sapiro, Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. IEEE Trans. Image Process. 18(7), 1395–1408 (2009).
 R Baraniuk, V Cevher, M Duarte, C Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010).