
# Coprime sampling for nonstationary signal in radar signal processing

*EURASIP Journal on Wireless Communications and Networking*
**volume 2013**, Article number: 58 (2013)

## Abstract

Estimating the spectrogram of a non-stationary signal relates to many important applications in radar signal processing. In recent years, coprime sampling and coprime arrays have attracted attention for their potential in sparse sensing: their difference co-array can estimate autocorrelation coefficients at all lags, from which the power spectral density can in turn be calculated. But this theoretical merit rests on the premise that the input signal is wide-sense stationary. In this article, we discuss how to implement coprime sampling for non-stationary signals, in particular how to attain the benefits of coprime sampling while limiting the disadvantages caused by the lack of observations for estimation. Furthermore, we investigate the use of coprime sampling for calculating the ambiguity function of the matched filter in a radar system. We also examine its effect and draw several useful guidelines for choosing a configuration that allows sparse sensing while retaining detection quality.

## Introduction

Both the design of radar systems and that of sensor networks can be attributed to obtaining sufficient samples to generate the correlation function, so that a good ambiguity scale or spectrum estimate can be obtained[1]. The design of a radar system needs to take advantage of the ambiguity function (AF) between the received and transmitted signals to determine the resolution of the radar, side lobe behavior, and ambiguities in both time and Doppler domains. The AF is calculated via the correlation of the transmitted signal with the received signal, which contains a copy of the transmitted signal, noise, and a Doppler shift caused by the movement of the target. Furthermore, considering the cost of deployment over a broad range, many sensor network applications require the sensor elements to be distributed sparsely. The power spectral density (PSD) acquired by these sensors describes the power incident from a given direction and area, and the PSD is the Fourier transform of the autocorrelation function of the received signal, or of the correlation function among the signals received at different sensors in the array. Hence, both scenarios could benefit from sparsely sensing a rapidly changing signal sequence with near-optimal performance, in terms of retaining resolution or detection ability, compared with dense sampling.

The degrees of freedom (DoF) of sampling define the minimum number of sample points that can specify certain properties of the sequence as a whole[2]. Before the research on coprime samplers, the available sensors were considered as a signal array, and increased DoF could be achieved by performing an augmentation algorithm on the covariances obtained via minimum redundancy arrays (MRA)[3], which consisted of uniform linear arrays with the maximum possible aperture. Bedrosian[4] extended the linear array to non-uniform distributions such that the pairwise differences could generate full coverage over a certain span; the article also enumerated array sizes *M* from 3 to 11 to achieve full coverage of as much as *M*(*M*−1)/2. The algorithm proposed in[5] could find near-optimal integer sensor locations that maximized the number of distinct nonnegative integers, but it also restated the fact that the element locations of an MRA can only be approximated rather than specified in closed form. There are also other ways to generate extra degrees of freedom, including methods based on higher order statistics, methods based on the Khatri-Rao product, and the nested array[6]. The article[7] developed the application of the nested array beyond the DoF, finding that nested arrays can improve spectrum efficiency.

Coprime sampling was first used for identifying sinusoids in noise[8], along with other methods proposed for synthetic aperture radar location and imaging of moving targets[9]. Further research explored the properties and applications of coprime sampling and coprime arrays in both the time and frequency domains. The article[10] used coprime samplers to increase the dimensions of DFT filter banks after sensor arrays, as well as to estimate the power spectral density of the received signal. In[11], multidimensional coprime sensing extended the previous implementations to acquire a densely sampled domain. The article[12] proposed a spatial smoothing algorithm together with coprime sampling to estimate the frequencies of sinusoids buried in noise and the directions-of-arrival of signals impinging on a sensor array.

Note that the article presenting coprime sampling[10] strictly confines the discussion to the underlying assumption of wide-sense stationary signals, so that the expectation of the autocorrelation can approach the true value via repeated averaging; this increased delay is used to compensate for the variation introduced by sub-Nyquist sampling. In real-world applications, however, just as described in the first paragraph, the working scenarios of many applications involve non-stationary signals, and the sampled points cannot simply be ascribed to an independent and identical distribution either. Consequently, the autocorrelation coefficients may change dramatically within a short period. In this article, we address this inconsistency and discuss coprime sampling for non-stationary signals to obtain their second-order statistical properties. In general, the classic point of view regards a non-stationary signal as piecewise stationary, but combining these two theories raises many research problems, such as the stability of the estimation, the coverage of the second-order statistics, and so on. In the following, we discuss these problems and our tentative solutions in detail.

The rest of this article is organized as follows. We first revisit the basic concepts and properties of coprime sampling in Section Theory and properties for coprime sampling. In Section STFT for coprime sampling non-stationary signal, we propose and simulate the 2-step coprime sampling algorithm designed for non-stationary signals. In Section Implementation in radar signal processing, we extend the implementation to radar signal processing and discuss several critical trade-offs in designing a radar signal processing system with coprime sampling. Finally, we conclude in Section Conclusions and future research.

## Theory and properties for coprime sampling

The algorithm of coprime sampling was introduced in[10]. The input signal is *s*(*t*) and the original sampling period is *T*_{s}; the down-sampling factors for the two sample streams are *M* and *N*, whose greatest common divisor is one. Then, except for the beginning point, the two generated sample streams do not overlap anywhere in the original signal sequence.
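As a quick illustration, the two decimated streams can be generated directly; the helper name `coprime_streams` and the toy sequence are ours, not from the paper:

```python
import numpy as np

def coprime_streams(x, M, N):
    """Return the two coprime-decimated streams x[nM] and x[nN]."""
    return x[::M], x[::N]

M, N = 3, 5                      # coprime pair: gcd(3, 5) = 1
x = np.arange(30)                # stand-in for the Nyquist-rate sequence
x1, x2 = coprime_streams(x, M, N)
# The two streams coincide only at multiples of M*N = 15:
common = sorted(set(range(0, 30, M)) & set(range(0, 30, N)))
print(common)
```

Within one period of length *MN*, the streams share only the initial point, matching the statement above.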

### Definition 1

The difference co-array *x*_{k}[*n*_{1},*n*_{2}] is generated by two sample sequences *x*_{1}[*n*_{1}] and *x*_{2}[*n*_{2}] coprime sampled from the input signal. Its index *k* satisfies

*k* = *M* *n*_{1} − *N* *n*_{2}.

The marker ⌊*Z*⌋ stands for the largest integer not exceeding *Z*, and *L* stands for the total length of the signal segment. The coprimality of *M* and *N* can be used to show that the number of distinct values in *x*_{k}[*n*_{1},*n*_{2}] is the product of the coprime factors[10], that is, *MN*.

First of all, the physical meaning of this difference co-array is that, through it, the correlation of the original sequence can be calculated at all lags from the two coprime sampled streams. Note that the rate of down sampling is not constrained, which may result in a sample rate well below the Nyquist limit; that is, the sampling can be arbitrarily sparse. On the other hand, however, two major drawbacks are associated with large coprime pair values: latency in the time domain and the resolution range in the frequency domain. We discuss them in detail in the following sections.

Besides, minor differences in the index ranges of the coprime sampled streams generate different coverage of the difference co-array and hence different coverage of the autocorrelation coefficients.

### Property 1

With *n*_{1} and *n*_{2} restricted to the range 0 ≤ *n*_{1} ≤ *N* − 1 and 0 ≤ *n*_{2} ≤ *M* − 1, index of the resulting difference co-array *k* = *M* *n*_{1} − *N* *n*_{2} will have *MN* distinct values in the range −(*M* − 1)*N* ≤ *k* ≤ (*N* − 1)*M*, which also indicates that there are absent values in the given range of *k*.
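Property 1 is easy to verify numerically for a small pair; the snippet below is purely illustrative:

```python
M, N = 3, 5  # coprime pair
# Property 1: k = M*n1 - N*n2 with 0 <= n1 <= N-1, 0 <= n2 <= M-1
ks = {M * n1 - N * n2 for n1 in range(N) for n2 in range(M)}
print(len(ks), min(ks), max(ks))  # MN distinct values in -(M-1)N .. (N-1)M
missing = sorted(set(range(min(ks), max(ks) + 1)) - ks)
print(missing)                    # the absent values inside the range
```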

### Property 2

If the ranges of *n*_{1} and *n*_{2} are 0 ≤ *n*_{1} ≤ *N* − 1 and −*M* + 1 ≤ *n*_{2} ≤ *M* − 1, the resulting index of difference co-array will achieve full coverage for 0 ≤ *k* ≤ *MN* − 1.
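Property 2 can be checked the same way; extending *n*_{2} to negative values fills every hole in 0 ≤ *k* ≤ *MN* − 1 (illustrative snippet):

```python
M, N = 3, 5  # coprime pair
# Property 2: allowing -M+1 <= n2 <= M-1 gives full coverage of 0 .. MN-1
ks = {M * n1 - N * n2
      for n1 in range(N)               # 0 <= n1 <= N-1
      for n2 in range(-M + 1, M)}      # -M+1 <= n2 <= M-1
full_coverage = set(range(M * N)) <= ks
print(full_coverage)
```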

The detailed demonstrations of the two properties above can be found in[10]. Furthermore, in this article we implement coprime sampling beyond the limit of *MN* − 1, which leads to the following property.

### Property 3

Given sample points in the range (−*L*,*L*), the largest coprime pair is *M* and *N* subject to *MN* < *L*. With *n*_{1} and *n*_{2} restricted to the ranges 0 ≤ *n*_{1} ≤ ⌊*L*/*M*⌋ and −⌊*L*/*N*⌋ ≤ *n*_{2} ≤ ⌊*L*/*N*⌋, the resulting index of the difference co-array *k* = *M* *n*_{1} − *N* *n*_{2} achieves full coverage of the range 0 ≤ *k* ≤ *L* − 1.

### Proof

It follows from the Euclidean algorithm[13] that for any integer *k* in the range [0,*L* − 1] there always exist integers *n*_{1}^{′} and *n*_{2}^{′} such that *k* = *M* *n*_{1}^{′} − *N* *n*_{2}^{′}.

Adding *lMN* to both terms on the right-hand side with a proper selection of the integer *l*, we can let *n*_{1} = *n*_{1}^{′} + *lN* such that *n*_{1} ∈[0, ⌊*L*/*M*⌋]. Then we have

*k* = *M*(*n*_{1}^{′} + *lN*) − *N*(*n*_{2}^{′} + *lM*) = *M* *n*_{1} − *N*(*n*_{2}^{′} + *lM*).

Since we already know that *k* ∈[0,*L* − 1] and *M* *n*_{1} ∈[0,*L*], the range of *N*(*n*_{2}^{′} + *lM*) must be [ −*L*,*L*]. Letting *n*_{2} = *n*_{2}^{′} + *lM*, we have *n*_{2} ∈[−⌊*L*/*N*⌋, ⌊*L*/*N*⌋], which concludes the proof. □

Moreover, in the range −*MN* + 1 ≤ *k* ≤ 0 there are still absent values. But by the symmetry property of the autocorrelation, these results can be used for averaging with their symmetric positive counterparts.
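Property 3 can likewise be checked numerically; *L* = 100 and the pair (7, 9) below are arbitrary illustrative choices:

```python
L = 100
M, N = 7, 9  # coprime pair with M*N = 63 < L
ks = {M * n1 - N * n2
      for n1 in range(L // M + 1)                 # 0 <= n1 <= floor(L/M)
      for n2 in range(-(L // N), L // N + 1)}     # |n2| <= floor(L/N)
full_coverage = set(range(L)) <= ks
print(full_coverage)
```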

## STFT for coprime sampling non-stationary signal

### Short time Fourier transform with coprime sampling

The presumption behind generating the autocorrelation from the coprime sampled sequence in the previous section is that the second-order expectations of the sequence remain unchanged over time, i.e., that the signal is wide-sense stationary (WSS). In radar signal processing applications, however, this criterion no longer holds. In this section, we discuss how to combine coprime sampling with the short time Fourier transform (STFT-CS) to process non-stationary signals, and demonstrate that this algorithm preserves the original quality of the signal while dramatically decreasing the sample rate.

We choose the short time Fourier transform (STFT) because it is widely used for analyzing the time-frequency properties of non-stationary signals. In an STFT, the signal is segmented by a window function and a Fourier transform is performed within each window. The window width is a trade-off between temporal and frequency resolution: a narrower window achieves better time resolution, while a wider window achieves better frequency resolution. In addition, in the coprime sampling scenario, by Property 3 the window size also dictates the upper bound on the coprime pair values. Consequently, it determines the trade-off between the stability of the estimation and the computational complexity of STFT-CS.

First, one definition simplifies the description of the algorithm. Because the number of available autocorrelation estimates changes with the choice of coprime pair, we define the averaging procedure as a single operator.

#### Definition 2

*E*(*R*_{xy}(*k*)) stands for the mathematical expectation of the autocorrelation *R*(*k*) for a given *k* using all available estimates. The value of *k* is determined by two independent index variables of the input sequences *x* and *y*.

The algorithm involves several important independent variables listed in Table 1.

Based on STFT, within every slicing window we consider the sequence

The estimate of autocorrelation is

where *c*_{xx}[ −*m*] = *c*_{xx}[*m*],

This estimate can be implemented by applying the fast *N*-point DFT algorithm three times.

Finally, we can calculate the PSD of the input signal via the autocorrelation estimate${\widehat{\phi}}_{\mathit{\text{xx}}}\left[m\right]$.

The resulting PSD for a given signal slice is

As the slicing window moves, we acquire the spectrogram of the input signal via STFT-CS.
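To make the per-window procedure concrete, here is a minimal one-pair sketch that averages products of the two coprime streams to estimate the autocorrelation at each lag and then transforms it into a PSD. The function name `coprime_psd`, the lag folding by symmetry, and the even extension are our illustrative choices, not the paper's exact three-DFT implementation:

```python
import numpy as np

def coprime_psd(seg, M, N, nfft=512):
    """Per-window PSD sketch (illustrative): average products of the
    samples taken at multiples of M and of N to estimate the
    autocorrelation at each lag, then take the DFT of the evenly
    extended estimate."""
    L = len(seg)
    sums = np.zeros(L)
    counts = np.zeros(L)
    for n1 in range(0, L, M):        # stream 1: every M-th sample
        for n2 in range(0, L, N):    # stream 2: every N-th sample
            k = abs(n1 - n2)         # lag, folded since R[-k] = R[k]
            sums[k] += seg[n1] * seg[n2]
            counts[k] += 1
    r = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    r_full = np.concatenate([r, r[-2:0:-1]])   # even extension
    return np.abs(np.fft.rfft(r_full, nfft))
```

For a 256-sp window containing a tone at one quarter of the sample rate, `coprime_psd(seg, 3, 5)` peaks near bin 0.25 × `nfft` while touching only roughly half of the samples.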

We implement the algorithm with a linear frequency modulation (LFM) signal to test its validity. The sample rate of the signal is 8000 Hz, sweeping from 0 Hz to 4000 Hz in ten seconds, as can be observed in the top row of Figure 1. The configurations of the important variables in Table 1 are: the length of the slicing window is 256 sample points (sp), the length of the STFT is 512 sp, the processed length of the autocorrelation is 255 sp, and the window function is a Hamming window with size equal to that of the Fourier transform.

As shown in Figure 1, the first row is the standard STFT spectrogram, and the other rows use the STFT-CS algorithm described above. Both standard STFT and STFT-CS accurately trace the change of frequency.

Besides, comparing the lower three sub-figures in Figure 1, which use STFT-CS, we can see that as the coprime pair values increase, more and more traces of aliasing frequencies appear in the spectrogram. This is because as the algorithm selects fewer sample points to estimate the autocorrelation, the estimates exhibit more variation.

On one hand, reducing the number of sample points is desirable for signal processing. For example, the fourth row in Figure 1 uses only about 17 percent of the sample points to achieve the same instantaneous PSD estimate with minor quality degradation. On the other hand, however, the variation becomes more pronounced if we continue increasing the coprime pair values. This motivates the 2-step STFT-CS presented in the following section.

### 2-step STFT coprime sampling

As the spectrograms above show, large coprime pair values can generate considerable noise. An intuitive method to identify a fundamental frequency buried in noise is to calculate its autocorrelation. This leads to an interesting procedure of iterated autocorrelation, that is, estimating the autocorrelation by applying convolution three times.

In the time domain, we calculate the autocorrelation function based on (7)

where *φ*_{1xx}[*m*] and *φ*_{2xx}[*m*] are two autocorrelation estimates obtained with either the same or different coprime pairs. The frequency-domain counterpart is straightforward: it is the product of the PSDs generated by the two coprime pairs.
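A toy illustration of the 2-step idea in the frequency domain, using a masked periodogram as a stand-in for the autocorrelation-based per-pair PSD (the estimator `pair_psd` and all parameter values are our illustrative choices): the aliases of the two pairs land at different frequencies, so the product keeps mainly the spectral content on which the two estimates agree.

```python
import numpy as np

def pair_psd(seg, M, N, nfft=512):
    """Crude one-pair PSD: periodogram of the coprime-thinned window
    (unsampled points zeroed). Illustrative stand-in only."""
    mask = np.zeros(len(seg))
    kept = sorted(set(range(0, len(seg), M)) | set(range(0, len(seg), N)))
    mask[kept] = 1.0
    return np.abs(np.fft.rfft(seg * mask, nfft)) ** 2

n = np.arange(256)
seg = np.cos(2 * np.pi * 0.25 * n)           # tone at a quarter of fs
# Product of PSDs from two different coprime pairs suppresses aliases:
psd = pair_psd(seg, 17, 19) * pair_psd(seg, 11, 13)
print(int(np.argmax(psd)))                   # peak near bin 0.25 * nfft
```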

In Figure 2, we compare the result of 2-step STFT-CS with three results of 1-step STFT-CS under different configurations. The first row shows STFT without coprime sampling as a benchmark. The second and third rows are consistent with the findings of the previous section: when the coprime pair increases to 17 and 19, we can hardly distinguish the real trace of the spectrogram from the aliasing noise. The fourth row is the result of 2-step STFT-CS using *M*_{1} = 17, *N*_{1} = 19, and *M*_{2} = 11, *N*_{2} = 13. The resulting sequence has roughly the same down-sampling rate (about 27 percent of the original sample points) as the experiment in the second row, but through the 2-step autocorrelation the false positive PSD estimates are noticeably reduced.

### Variation analysis for estimating autocorrelation

In the article[10], coprime sampling is a method for dealing with sub-Nyquist sampling frequencies. Though it provides the promising potential of dramatically decreasing the sampling rate via the coprime pair, the estimation inherently suffers from much longer latency. In the non-stationary scenario, this becomes the major problem, generating pronounced estimation variation: only a small segment of samples can be considered stationary and processed at once with the autocorrelation estimation in STFT-CS, so there is not enough latency available for averaging.

In other words, statistical stability is sacrificed in proportion to the degree of coprime sampling. As the coprime pair values increase, the density of the generated difference co-array decreases correspondingly, even though the coprime sampling may still achieve full coverage of all lags by satisfying Property 3. The correlation estimates at a given lag may then deteriorate, deviating from the true values.

The article[14] examined the error of estimating the autocorrelation, and the article[15] linked the variation with the sampling rate and refined it in the form of mean-square error. Moreover, [15] advocated that for short data records, with fewer than 50 sample points or a product of bandwidth and sampling period less than 25, the preferred sampling rate is twice the Nyquist rate; otherwise there are obvious increases in the variance of the estimate.

Comparing this claim with the experimental scenario in this article, the sampling periods fall into the category of short data records, while the sampling rate is sub-Nyquist, much lower than the rate this criterion recommends. Hence, the estimates inevitably suffer from significant variance.

The method of statistical differentials can be used to estimate the covariances of the autocorrelation coefficients[16]. For convenience of analysis, we treat the LFM as a piecewise stationary signal and define it as

where the series${\sum}_{n=0}^{\infty}{h}_{n}$ is absolutely convergent, and *ϵ*_{n} is a WSS process with zero mean and variance *δ*^{2}, that is

Then, the real value of autocorrelation is

and the estimation of autocorrelation is

standing for averaging all of the available values of${x}_{{n}_{1}M}{x}_{{n}_{2}N}$ to calculate the autocorrelation at lag *k* within the range *L*.

Assuming *h*_{t} = 0, we can calculate the covariance based on (16)

where *κ*_{4} = *E*(*ϵ*^{4}) − 3*δ*^{4}.

Therefore, we could have[17]

and the particular case is the variance of autocorrelation

Another estimator for the autocorrelation is

which confines the estimate to the available sample points only.

Similarly to (22) and (23), we have

Compared with (22), we could have

Based on (23) and (26) together with the Schwarz inequality, we obtain two measures of the variation of the autocorrelation estimate in terms of the number of available sample points.

From (28) and (29) we can see why the estimation variance increases as the number of sample points decreases. This is an inherent problem constraining the choice of coprime pairs when processing non-stationary signals with coprime sampling.
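The intermediate equations are not reproduced here; as a point of orientation, Bartlett's classical large-sample approximation[17] for the variance of an autocovariance estimate computed from *L* samples can be sketched as follows (Gaussian case; a hedged restatement, not necessarily the exact expression referenced above):

```latex
\operatorname{Var}\left(\hat{c}_{xx}[k]\right) \;\approx\; \frac{1}{L} \sum_{m=-\infty}^{\infty} \left( c_{xx}^{2}[m] + c_{xx}[m+k]\, c_{xx}[m-k] \right)
```

This makes the 1/*L* dependence explicit: the fewer sample points the coprime pair retains, the larger the estimation variance.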

## Implementation in radar signal processing

The working principle of the matched filter in radar signal processing is to output the cross-correlation of the target-plus-noise signal with the transmitted signal[18], so it is possible to implement the matched filter as a correlation process. When the signal-to-noise ratio (SNR) is large, the output of the matched filter can usually be approximated by the autocorrelation function of the transmitted signal. Hence, we can use far fewer sample points via coprime sampling to estimate the output of the matched filter.

In this section, we again consider the typical LFM waveform, consistent with the previous section; it is also used as a basic waveform in radar transmission because it independently controls pulse energy through its duration and range resolution through its bandwidth[19]. Thus, if the transmitted signal is processed to have long duration and a narrowly concentrated autocorrelation, good range resolution and good energy can be obtained simultaneously.

Consider a modified waveform *x*^{′}(*t*) obtained by modulating *x*(*t*) with an LFM complex chirp, and compute its complex ambiguity function

The instantaneous frequency of this waveform is the derivative of the phase function

in which *βτ* is called the time-bandwidth product of the LFM pulse. The time-delay measurement error is proportional to *τ*, and the frequency measurement error is proportional to 1/*τ*.
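For orientation, a standard form of the LFM complex chirp modulation and the resulting instantaneous frequency (a textbook expression[19]; the exact waveform definition above is not reproduced here) is

```latex
x'(t) = x(t)\, e^{j\pi (\beta/\tau) t^{2}}, \qquad
f_i(t) = \frac{1}{2\pi}\,\frac{d}{dt}\!\left(\pi\frac{\beta}{\tau}t^{2}\right) = \frac{\beta}{\tau}\,t ,
\qquad |t| \le \frac{\tau}{2},
```

so the frequency sweeps across a bandwidth of *β* hertz during the pulse width *τ*, which gives the time-bandwidth product *βτ*.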

In many radar applications, a moving target generates a Doppler shift in its echo, so the output of the matched filter should be treated as the cross-correlation between the Doppler-shifted received signal and the transmitted signal. In this case, we use the ambiguity function (AF) to characterize the behavior of a waveform paired with its matched filter. Based on the AF, we can easily examine resolution, side lobe behavior, and ambiguities in both time and Doppler domains.

Assume the Doppler frequency is *F*_{D}; then the input waveform with a Doppler-shifted response is$x\left(t\right){e}^{j2\pi {F}_{D}t}$. Also assume that the filter is designed to peak at *T*_{M} = 0, which means that the time axis at the filter output is relative to the expected peak output time for the range of a target. Assuming *M* and *N* are the coprime pair and *T*_{s} is the sampling period, the AF can be defined as

where *k* is the difference between two sample points, and$\hat{A}(k,{F}_{D})$ is the original complex ambiguity function for the simple pulse signal

And its amplitude is

Then we have the amplitude of the AF of the LFM waveform

The zero-Doppler cut of the LFM ambiguity function, which is just the matched filter output when there is no Doppler mismatch, is

and the zero-delay response is

In the experiment, we apply coprime sampling to both the transmitted signal in the matched filter and the received signal. Because the length of the chirp is predefined and needs to be analyzed in full, based on Property 2, we can only obtain a difference co-array index with missing values. But since missing values occur more often at larger lags, and we have already assumed *T*_{M} = 0 so that the AF is located relative to the time axis, the missing values have no obvious effect on the image generated by the coprime sampled AF. The following simulation also confirms this claim.
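The zero-Doppler cut above can also be checked numerically: the matched-filter output of an LFM chirp is its autocorrelation, whose main lobe is roughly 1/*β* wide, far narrower than the pulse itself (pulse compression). The parameter values below are arbitrary illustrative choices:

```python
import numpy as np

fs, tau, beta = 1000.0, 0.5, 200.0    # sample rate (Hz), pulse width (s), swept bandwidth (Hz)
t = np.arange(-tau / 2, tau / 2, 1 / fs)
x = np.exp(1j * np.pi * beta * t ** 2 / tau)           # unit-amplitude LFM chirp
ac = np.abs(np.correlate(x, x, mode="full")) / len(x)  # matched-filter output, zero Doppler
# -3 dB main-lobe width: lags whose amplitude exceeds peak / sqrt(2)
width = np.count_nonzero(ac >= ac.max() / np.sqrt(2)) / fs
print(width)   # on the order of 1/beta = 5 ms, far below tau = 500 ms
```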

From Figure 3, we can see that when we use small coprime pair values, as in the upper right plot, the resulting AF shows inconspicuous degradation compared with the upper left one, which is derived directly from the formula. But as the coprime pair values increase, duplicated aliasing parts move closer to the correct estimate. With *M* = 9 and *N* = 7, the aliasing parts can still be easily eliminated, but with *M* = 10 and *N* = 11, or larger, the resulting AF is unusable because all of the estimates overlap with each other.

Then, based on Figures 4 and 5, we can observe the different effects of coprime sampling on the estimates of Doppler shift and time delay. Both were generated simultaneously with Figure 3. In Figure 4, because the coprime sampling is implemented in the time domain, the variation becomes more and more obvious as the coprime factors increase; we thoroughly discussed the reason for this phenomenon in the previous section. In Figure 5, since we keep the iteration along the Doppler axis the same, no variation appears. As the coprime pair values increase, however, the distance between Doppler shifts becomes smaller and smaller. Hence, we conclude that increasing the coprime pair values has deleterious effects, including amplified variation along the time axis and a reduced scope of Doppler shift frequency.

To further quantify the effect of coprime sampling, we enumerate all coprime combinations under 17. We choose 17 as the threshold because, for pair values above it, severe overlapping of the aliasing parts makes the output useless. Besides, as shown in the following experiments, we find that most of the results can be consistently arranged according to the product of the coprime pair. That is, four out of five important properties of the coprime sampled AF depend on the product of the coprime pair rather than on the value of either factor.

The distance between main lobes along the Doppler axis determines the scope of the Doppler frequency. From Figure 6, we can see that this distance decreases monotonically from out-of-scope to about 33 Hz as the product of the coprime pair increases. Considering the main lobe width given in Figure 7, in the case of a 33 Hz distance, the second lobes of the two AF estimates overlap. Note that for products less than 50 there is no duplicated main lobe in the scope; in the worst case, the largest side lobes of the duplicates overlap.

The width of the main lobe along the Doppler axis determines the Doppler resolution. In Figure 7, its range is from 19.8 to 16.2 Hz. The width takes only three discrete values and is not directly related to the product of the coprime pair, though the general trend is toward smaller widths at larger products. This finding is instructive for choosing a coprime pair with a narrow main lobe but also low variation in the time domain and a long distance between main lobes along the Doppler axis.

Although the largest side lobes along the Doppler axis grow with increasing coprime pair values, as shown in Figure 8, this is still not the major challenge compared with the main lobes approaching each other, shown in Figure 6. Note that there is one abnormal value generated by *M* = 14 and *N* = 11, but it is more likely a main lobe cut off at the edge of the scope than a real side lobe.

In Figure 9, the radiated shape shows no obvious relationship between the trend of the main lobe and the choice of coprime pairs.

Comparing Figure 10 with Figure 8, we can see that the main problem in the time domain is caused by the variation, which in turn makes the largest side lobes comparable to the main lobe. Note that there is a turning point at a product of 88, where the ratio changes from stable at around 18 percent to increasing with the product.

## Conclusions and future research

In this article, we develop the STFT-CS algorithm to deal with non-stationary signals. The reduction of processed data is favorable for sparse sampling and decreases the computational complexity, but at the cost of increased estimation variation. To alleviate this side effect, we introduce the 2-step STFT-CS; simulation indicates it is effective in eliminating aliased estimates.

Besides, we implement coprime sampling with the matched filter in radar signal processing and quantify its effect. Based on our analysis, one could integrate coprime sampling into a radar system to detect targets, choosing a suitable configuration for the specific circumstances and needs.

Future research directions include further optimizing the algorithm and applying it to real-world radar data. Besides, coprime sampling and coprime sensor arrays have many interesting features that might be useful in other applications, such as wireless communications or image/audio signal processing. Moreover, just as STFT-CS converts a time-domain signal into a more meaningful PSD representation, coprime sampling could be regarded as preprocessing of contaminated data to restore fundamental information.

## References

- 1.
Lang S, Duckworth G, McClellan J: Array design for MEM and MLM array processing. In

*IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP’81*. Atlanta, Georgia; 1981:145-148. - 2.
Hastie T, Tibshirani R, Friedman J:

*The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. Berlin: Springer; 2009. - 3.
Moffet A: Minimum-redundancy linear arrays.

*IEEE Trans. Antennas Propag*1968, 16(2):172-175. 10.1109/TAP.1968.1139138 - 4.
Bedrosian S: Nonuniform linear arrays: Graph-theoretic approach to minimum redundancy.

*Proc. IEEE*1986, 74(7):1040-1043. - 5.
Pearson D, Pillai S, Lee Y: An algorithm for near-optimal placement of sensor elements.

*IEEE Trans. Inf. Theory*1990, 36(6):1280-1284. 10.1109/18.59928 - 6.
Pal P, Vaidyanathan P: Nested arrays: a novel approach to array processing with enhanced degrees of freedom.

*IEEE Trans. Signal Process*2010, 58(8):4167-4181. - 7.
Chen J, Liang Q, Wang J, Choi HA, Wang X, Zheng R: Spectrum efficiency of nested sparse sampling. In

*Wireless Algorithms, Systems, and Applications vol. 7405 of Lecture Notes in Computer Science*. Edited by: Jing T, Xing K. Berlin Heidelberg: Springer; 2012:574-583. 10.1007/978-3-642-31869-6_50 - 8.
Xia XG: On estimation of multiple frequencies in undersampled complex valued waveforms.

*IEEE Trans. Signal Process*1999, 47(12):3417-3419. 10.1109/78.806088 - 9.
Li G, Xu J, Peng YN, Xia XG: Location and imaging of moving targets using nonuniform linear antenna array SAR.

*IEEE Trans. Aerosp. Electron. Syst*2007, 43(3):1214-1220. - 10.
Vaidyanathan P, Pal P: Sparse sensing with co-prime samplers and arrays.

*IEEE Trans. Signal Process*2011, 59(2):573-586. - 11.
Vaidyanathan P, Pal P: Theory of sparse coprime sensing in multiple dimensions.

*IEEE Trans. Signal Process*2011, 59(8):3592-3608. - 12.
Pal P, Vaidyanathan P: Coprime sampling and the music algorithm. In

*2011 IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE)*. Sedona, Arizona; 2011:289-294. - 13.
Nagell T:

*Introduction to Number Theory*. Providence, RI: American Mathematical Society; 2001. - 14.
Marriott F, Pope J: Bias in the estimation of autocorrelations.

*Biometrika*1954, 41: 390-402. - 15.
Kay S: The effect of sampling rate on autocorrelation estimation.

*IEEE Trans. Acoustics Speech Signal Process*1981, 29(4):859-867. 10.1109/TASSP.1981.1163634 - 16.
Lomnicki Z, Zaremba S: On the estimation of autocorrelation in time series.

*Annals Math. Stat*1957, 28: 140-158. 10.1214/aoms/1177707042 - 17.
Bartlett M: On the theoretical specification and sampling properties of autocorrelated time-series.

*Suppl. J. Royal Stat. Soc*1946, 8: 27-41. 10.2307/2983611 - 18.
Skolnik M:

*Introduction to Radar Systems*. New York: McGraw-Hill; 2002. - 19.
Richards MA:

*Fundamentals of Radar Signal Processing*. New York: McGraw-Hill; 2005.

## Acknowledgements

This study was supported by Office of Naval Research under Grants N00014-13-1-0043, N00014-11-1-0071, N00014-11-1-0865, and U.S. National Science Foundation under Grants CNS-1247848, CNS-1116749, CNS-0964713.

## Additional information

### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Wu, Q., Liang, Q. Coprime sampling for nonstationary signal in radar signal processing.
*J Wireless Com Network* **2013**, 58 (2013). https://doi.org/10.1186/1687-1499-2013-58


### Keywords

- Coprime sampling
- Non-stationary signal
- Short-time Fourier transform
- Radar signal processing