Performance Analysis and Optimal Power Allocation for Linear Receivers Based on Superimposed Training

In this paper, we carry out a performance comparison between two training-based schemes for Multiple-Input Multiple-Output (MIMO) systems: the time-division multiplexed training (TDMT) scheme and the recently proposed data-dependent superimposed training (DDST) scheme. For both schemes, closed-form expressions for the Bit Error Rate (BER) are provided. We also determine, for both schemes, the optimal allocation of power between pilot and data that minimizes the BER.


Introduction
The use of Multiple-Input Multiple-Output (MIMO) antenna systems enables high data rates without any increase in bandwidth or power consumption. However, the good performance of MIMO systems requires a priori knowledge of the channel at the receiver. In many practical systems, the receiver estimates the channel by time-division multiplexing pilot symbols with the data. Although high-quality channel estimation can be achieved, especially when a large number of pilot symbols is used [1], this method may entail a waste of the available channel resources. An alternative method is conventional superimposed training, which consists of transmitting pilots and data at the same time. However, since the data symbols act as a source of noise during channel estimation, the estimation accuracy is degraded. In the literature, the impact of the channel estimation error on the performance indices has been investigated. In [2] and [3], a comparison between the performance of the conventional superimposed training scheme and the time-multiplexing-based scheme has been carried out, and the optimal power allocation between pilot and data that maximizes a lower bound on the mutual information has been provided. It has been shown that the optimal conventional superimposed training scheme yields a gain in terms of channel capacity only in special scenarios (many receive antennas and/or short coherence time). In other scenarios, the superimposed training scheme suffers from high channel estimation errors and its gain over the time-multiplexing-based scheme is often lost. For this reason, many alternatives to the conventional superimposed training scheme have been proposed in recent works.
In [4], M. Ghogho et al. proposed to introduce a distortion to the data symbols, prior to adding the known pilots, in such a way as to guarantee the orthogonality between the pilot and data sequences. It has been shown that the channel estimation performance is greatly enhanced compared to the standard superimposed scheme. This technique is referred to as data-dependent superimposed training (DDST). While the DDST scheme exhibits the same channel estimation performance as its TDMT counterpart, the introduced distortion may considerably affect the detection performance. The aim of this paper is thus to study the BER performance of the DDST and TDMT schemes and to evaluate to what extent the performance of the DDST scheme is altered.
In the literature, the few works focusing on BER performance have relied on unrealistic assumptions, such as the noise being uncorrelated with the channel estimation error [5], [6]. These assumptions make the calculations feasible for fixed dimensions, but they are far from realistic. To make the derivations possible while keeping realistic conditions, we relax the assumption of finite dimensions by allowing the space and time dimensions to grow to infinity at the same rate. Working in this asymptotic regime simplifies the derivations and, at the same time, we observe that the obtained results also apply to usual sample and antenna-array sizes. We also show that the obtained expressions can be used to determine the optimal power allocation that minimizes the BER.
The remainder of this paper is organized as follows. In the next section, we introduce the system model. After that, we review in section 3 the channel estimation and data detection processes for the TDMT and DDST schemes. Section 4 is dedicated to the derivation of the asymptotic BER expressions. Based on these results, we determine the optimal allocation of power between data and training for both schemes. Finally, simulation results are provided in section 7 to validate the analytical derivations.
Notation: The superscripts H and # and the operator Tr(.) denote the Hermitian transpose, the pseudo-inverse and the trace, respectively. The statistical expectation and the Kronecker product are denoted by E and ⊗. The (K × K) identity matrix is denoted by I_K, and the (Q × Q) matrix of all ones by 1_Q. The (i, j)-th entry of a matrix A is denoted by A_{i,j}.
2 System model and problem setting

2.1 Time-division multiplexing scheme

We consider an M × K MIMO system operating over a flat fading channel. Two phases are considered.

First phase: each transmitting antenna sends N_1 pilot symbols. The received signal matrix Y_1 writes as:

Y_1 = H P_t + V_1,

where P_t is the K × N_1 pilot matrix and

Assumption A.1. H is the M × K channel matrix with independent and identically distributed (i.i.d.) Gaussian entries with zero mean and variance 1/K,

Assumption A.2. V_1 is the M × N_1 noise matrix whose entries are i.i.d. with variance σ_v².

Second phase: N_2 data symbols with power σ²_{w_t} are sent by each antenna, so that the received signal matrix Y_2 writes as:

Y_2 = H W_t + V_2,

where

Assumption A.3. W_t is the K × N_2 data matrix with i.i.d. bounded data symbols of power σ²_{w_t}, and V_2 is the M × N_2 additive Gaussian noise matrix with entries of zero mean and variance σ_v². Moreover, W_t is independent of V_1 and V_2.
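To make the two-phase model concrete, the following NumPy sketch generates one TDMT block. All dimensions and power levels are illustrative choices (not the optimized values derived later), and the DFT-type pilot matrix is just one convenient choice satisfying the orthogonality condition P_t P_t^H = N_1 σ²_{P_t} I_K used in the sequel.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 4, 2                      # receive / transmit antennas
N1, N2 = 2, 30                   # pilot / data phase lengths
sigma_v = 0.1                    # noise standard deviation
sigma_Pt, sigma_wt = 1.0, 1.0    # pilot / data symbol amplitudes (illustrative)

def cgauss(shape, std):
    """Circularly symmetric complex Gaussian entries of standard deviation std."""
    return std * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Assumption A.1: i.i.d. channel entries of zero mean and variance 1/K
H = cgauss((M, K), 1 / np.sqrt(K))

# Phase 1: DFT-type pilots (N1 a multiple of K) with P_t P_t^H = N1 * sigma_Pt^2 * I_K
k, n = np.arange(K)[:, None], np.arange(N1)[None, :]
Pt = sigma_Pt * np.exp(2j * np.pi * k * n / K)
Y1 = H @ Pt + cgauss((M, N1), sigma_v)      # received pilot block (assumption A.2)

# Phase 2: QPSK data of power sigma_wt^2 (assumption A.3)
bits = rng.integers(0, 2, (K, N2, 2))
Wt = sigma_wt * ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
Y2 = H @ Wt + cgauss((M, N2), sigma_v)      # received data block
```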

Data-dependent superimposed training scheme (DDST)
An alternative to TDMT-based schemes is to send the training and data sequences at the same time.
Since data is transmitted all the time, these schemes achieve high bandwidth efficiency, but they may suffer from the interference caused by the training sequence. Ghogho et al. thus proposed to distort the data so that it becomes orthogonal to the training sequence. The proposed distortion matrix D is defined as:

D = I_N − J, with J = (K/N) (1_{N/K} ⊗ I_K)

(we assume that N/K is integer-valued, N being the sample size). This distortion matrix was shown to be optimal in the sense that it minimizes the average Euclidean distance between the distorted and non-distorted data [7]. The received signal matrix at each block is therefore given by:

Y = H (W_d D + P_d) + V,

where

Assumption A.4. W_d is the data matrix with i.i.d. bounded data symbols of power σ²_{w_d}, and V is the M × N noise matrix whose entries are i.i.d. zero mean with variance σ_v².
Moreover, P_d is the K × N training matrix. The chosen pilot matrix P_d should fulfill two requirements. It should be orthogonal to the distortion matrix D, thus satisfying D P_d^H = 0, and it should also verify the orthogonality relation P_d P_d^H = N σ²_{P_d} I_K in order to minimize the channel estimation error subject to a fixed training power. A possible pilot matrix that meets these requirements is:

P_d(k, n) = σ_{P_d} exp(j2πkn/K), with k = 0, · · · , K − 1 and n = 0, · · · , N − 1. (1)

3 Channel estimation and data detection

3.1 TDMT scheme

In the first phase, we assume that the receiver estimates the channel in the least-squares (LS) sense. Hence, the channel estimate is given by:

Ĥ_t = Y_1 P_t^H (P_t P_t^H)^{−1} = H + ΔH_t, with ΔH_t = V_1 P_t^H (P_t P_t^H)^{−1},

and the mean square error writes as MSE_t = E[Tr(ΔH_t ΔH_t^H)]. As shown in [1], the optimal training matrix that minimizes the MSE under a constant training energy N_1 σ²_{P_t} should satisfy:

P_t P_t^H = N_1 σ²_{P_t} I_K,

where σ²_{P_t} denotes the amount of power devoted to the transmission of a pilot symbol. The optimal minimum value of MSE_t is then given by:

MSE_t = M K σ_v² / (N_1 σ²_{P_t}).

In the data transmission phase, the linear receiver uses the channel estimate in order to retrieve the transmitted data. After channel inversion, the estimated data matrix is given by:

Ŵ_t = Ĥ_t^# Y_2, (2)

where Ĥ_t^# denotes the pseudo-inverse of Ĥ_t. Assuming that the channel estimation error is small, the pseudo-inverse of the estimated matrix can be approximated by the linear part of its Taylor expansion [8]. Substituting H^# by (H^H H)^{−1} H^H in (2), and letting Π = I_M − H (H^H H)^{−1} H^H denote the orthogonal projector on the null space of H^H, the zero-forcing estimate of the transmitted matrix follows. Consequently, the effective post-processing noise ΔW_t = Ŵ_t − W_t can be written as a sum of terms involving the channel estimation error ΔH_t and the additive noise V_2.
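The LS estimation step and the resulting MSE scaling are easy to verify numerically. The sketch below estimates the channel from DFT-type pilots and checks that the empirical per-entry error variance matches σ_v²/(N_1 σ²_{P_t}); this explicit constant is an assumption of the sketch, consistent with the LS error ΔH_t = V_1 P_t^H (P_t P_t^H)^{−1}.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, N1 = 4, 2, 2
sigma_v, sigma_Pt = 0.5, 1.0
trials = 4000

k, n = np.arange(K)[:, None], np.arange(N1)[None, :]
Pt = sigma_Pt * np.exp(2j * np.pi * k * n / K)     # P_t P_t^H = N1 sigma_Pt^2 I_K

err2 = 0.0
for _ in range(trials):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * K)
    V1 = sigma_v * (rng.standard_normal((M, N1)) + 1j * rng.standard_normal((M, N1))) / np.sqrt(2)
    # LS estimate: H_hat = Y1 P_t^H (P_t P_t^H)^{-1} = H + V1 P_t^H (P_t P_t^H)^{-1}
    H_hat = (H @ Pt + V1) @ Pt.conj().T @ np.linalg.inv(Pt @ Pt.conj().T)
    err2 += np.mean(np.abs(H_hat - H) ** 2)

per_entry_mse = err2 / trials
predicted = sigma_v**2 / (N1 * sigma_Pt**2)        # 0.125 for these values
```

Summed over the MK entries of the channel matrix, this per-entry variance reproduces the total MSE_t = MKσ_v²/(N_1σ²_{P_t}) scaling.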

DDST scheme
The LS channel estimate is obtained by multiplying Y by P_d^H (P_d P_d^H)^{−1}, thus giving:

Ĥ_d = Y P_d^H (P_d P_d^H)^{−1} = H + ΔH_d,

where ΔH_d = V P_d^H (P_d P_d^H)^{−1} denotes the channel estimation error matrix for the DDST scheme. As mentioned in assumption A.5, the optimal training matrix that minimizes the MSE should satisfy P_d P_d^H = N σ²_{P_d} I_K. The MSE is thus given by:

MSE_d = M K σ_v² / (N σ²_{P_d}).

For the DDST scheme, we consider a zero-forcing receiver which, prior to inverting the channel matrix, cancels the contribution of the training symbols by right-multiplying Y by (I − J), W_d being the sent data matrix. Thus, the zero-forcing estimate of W_d is given by:

Ŵ_d = Ĥ_d^# Y (I − J),

and the post-processing noise is ΔW_d = Ŵ_d − W_d.

4 Bit error rate performance

TDMT scheme
In order to evaluate the bit error rate performance, we need to characterize the asymptotic behaviour of the post-processing noise observed at each entry of the matrix ΔW_t. Using the 'characteristic function' approach, we can prove that, conditioned on the channel matrix, the noise behaves asymptotically like a Gaussian random variable. This result is stated in the following theorem; its proof is postponed to Appendix A.
Theorem 1. Under assumptions A.1-A.3, and in the asymptotic regime in which the dimensions K, M, N_1 and N_2 grow to infinity at the same rate, the post-processing noise experienced by the i-th antenna at each time k, ΔW_t(i, k), for the TDMT scheme behaves as a Gaussian random variable. In the sequel, the notation K → +∞ refers to this asymptotic regime.
Remark 1. Note that, compared to the results in [9], our expressions reveal a new additive term of order σ_v⁴.
The Gaussianity of the post-processing noise being established in the asymptotic case, we can derive the bit error rate for a QPSK constellation with Gray encoding as [10]:

BER_t = E[Q(√γ_t)], (3)

where the expectation is taken with respect to the probability density function of the post-processing SNR at the i-th branch, denoted γ_t. From [11] and [12], we know that 1/[(H^H H)^{−1}]_{i,i} is a weighted chi-square distributed random variable with 2(M − K + 1) degrees of freedom; its density function involves the indicator function 1_{[0,+∞[} of the interval [0, +∞[. The probability density function of γ_t follows in (4). Plugging (4) into (3), we get (5). To compute (5), we use the integral function J(m, a, c) defined in (6). The integral in (6) has been shown to have, for c > 0, a closed-form expression (7) in terms of the Gauss hypergeometric function 2F1(p, q; n, z), [13], [14]. If c = 0, it is easy to see that J(m, a, 0) = 1/2. When m is restricted to positive integer values, the above expression can be further simplified to [15]:

J(m, a, c) = ((1 − μ)/2)^m Σ_{k=0}^{m−1} C(m−1+k, k) ((1 + μ)/2)^k, (8)

where μ = √(c/(1 + c)). Plugging (8) into (7), we get:

BER_t = ((1 − μ_t)/2)^{M−K+1} Σ_{k=0}^{M−K} C(M−K+k, k) ((1 + μ_t)/2)^k, (9)

where μ_t = 1/√(2Kδ_t + 1).
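The integer-order simplification (8) corresponds to a classical closed form for Q-function averages over chi-square fading (see e.g. Proakis). As a hedged numerical check, the sketch below compares it against direct numerical integration over the SNR density; m and g are illustrative values, with m playing the role of M − K + 1 and g the mean per-branch SNR.

```python
import numpy as np
from math import comb, gamma
from scipy.integrate import quad
from scipy.stats import norm

def ber_closed_form(m, g):
    """Closed form of E[Q(sqrt(2*x))] for x ~ Gamma(shape m, scale g):
    the integer-m simplification with mu = sqrt(g / (1 + g)), cf. (8)."""
    mu = np.sqrt(g / (1.0 + g))
    return (0.5 * (1 - mu))**m * sum(
        comb(m - 1 + k, k) * (0.5 * (1 + mu))**k for k in range(m))

def ber_numerical(m, g):
    """Same quantity by direct numerical integration over the SNR density."""
    pdf = lambda x: x**(m - 1) * np.exp(-x / g) / (gamma(m) * g**m)
    val, _ = quad(lambda x: norm.sf(np.sqrt(2 * x)) * pdf(x), 0, np.inf)
    return val

m, g = 3, 4.0    # illustrative: m = M - K + 1 = 3 for M = 4, K = 2
```

For m = 1 the formula reduces to the familiar flat-Rayleigh expression (1/2)(1 − √(g/(1+g))).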

DDST scheme
Unlike the TDMT scheme, the asymptotic distribution of the entries of the post-processing noise matrix is not Gaussian. Actually, we prove the following result.

Theorem 2. Under assumptions A.4 and A.5, and in the asymptotic regime defined above, the post-processing noise experienced by the i-th antenna at each time k behaves asymptotically as a Gaussian mixture random variable, where Q denotes the cardinality of the set of possible mixture means. Conditioning on the transmitted data, an analogous result holds with Q′ the cardinality of the corresponding conditional set.

Note that, in previous works, the Gaussianity of the post-processing noise has always been assumed. For time-division multiplexed training, this assumption is well-founded, since the post-processing noise converges to a Gaussian distribution in the asymptotic regime (see Theorem 1).
In the superimposed training case, the distortion caused by the presence of the data symbols affects the distribution of the post-processing noise, which becomes asymptotically Gaussian mixture distributed. To assess the system performance in this particular case, we start from the elementary definition of the bit error rate. Let ΔW_{i,k} denote the post-processing noise experienced at the i-th antenna at time k (we omit the subscript d for ease of notation). As previously shown, ΔW_{i,k} behaves asymptotically as a Gaussian mixture random variable. Using the symmetry of the transmitted data, the BER at the i-th branch, for a QPSK constellation and a given channel realization, can be expressed directly in terms of the mixture parameters: in the asymptotic regime, ΔW_{i,k} converges to a mixed Gaussian distribution whose probability density function is a weighted sum of Gaussian densities, and, conditioned on the channel, the asymptotic bit error rate is approximated by the corresponding weighted sum of Q-functions. The proposed approximation of the BER is finally obtained by averaging with respect to the channel realization H. For QPSK constellations, it can be shown that √Q′ = 1/c_1, where 1/c_1 = N/K is assumed to be an integer; moreover, the set S of the values taken by ℜ(α_s), together with the associated probabilities p_s, can be characterized explicitly. The BER expression then becomes an expectation over the distribution of γ_d, and its computation can be treated similarly to the TDMT scheme, thus leading to the closed-form expression (15).
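Evaluating a conditional error probability under a Gaussian mixture noise model reduces to a probability-weighted sum of Q-functions, one per mixture component. The sketch below illustrates this mechanism with purely illustrative component means and weights (they are not the set S of the paper, whose exact values depend on the distortion structure).

```python
import numpy as np
from scipy.stats import norm

def Q(x):
    """Gaussian tail function Q(x) = P(N(0,1) > x)."""
    return norm.sf(x)

def ber_gaussian_mixture(d, alphas, ps, sigma):
    """Probability that the decision statistic d + Re(noise) falls below zero,
    when the noise is a Gaussian mixture with component means alphas,
    weights ps and common standard deviation sigma (illustrative model)."""
    return sum(p * Q((d + np.real(a)) / sigma) for a, p in zip(alphas, ps))

# A single zero-mean component recovers the plain Gaussian error probability
single = ber_gaussian_mixture(1.0, [0.0], [1.0], 0.5)

# A symmetric two-component mixture (means +/- 0.3, equal weights)
mixed = ber_gaussian_mixture(1.0, [0.3, -0.3], [0.5, 0.5], 0.5)
```

Since Q is convex on [0, +∞), the symmetric mixture yields a larger error probability than a single zero-mean Gaussian of the same variance, which is the mechanism behind the BER penalty of the data distortion.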

Optimal power allocation
So far, we have provided approximations of the BER for the TDMT and DDST schemes. As previously shown, these expressions depend on the power allocated to data and training, in addition to other parameters. While the system has no control over the noise power or the number of transmitting and receiving antennas, it can still optimize the power allocation so as to minimize this performance index. Next, we provide, for the TDMT and DDST schemes, the optimal data and training power amounts that minimize the BER under the constraint of a constant total power.

Optimal power allocation for the TDMT scheme
Referring to the expressions of the BER, we can easily see that the optimal amounts of power allocated to data and pilots for the TDMT scheme are the ones that minimize δ_t. Let c̃_1 = (1 + r)c_1, where r = N_2/N_1. Minimizing δ_t with respect to σ²_{w_t} and σ²_{P_t} under the constraint N_1 σ²_{P_t} + N_2 σ²_{w_t} = (N_1 + N_2) σ²_T (σ²_T being the mean energy per symbol) results in the following lemma:

Lemma 3. The optimal power allocation minimizing the BER under the total power constraint σ²_T is given by (16) and (17).

Optimal power allocation for the DDST scheme
For the DDST scheme, we can deduce from (14) that maximizing γ_d amounts to minimizing the BER. To maximize γ_d, we need to optimize δ_d as a function of σ²_{w_d} and σ²_{P_d} under the constraint σ²_{P_d} + (1 − c_1) σ²_{w_d} = σ²_T. After straightforward calculations, we find the optimal values of σ²_{w_d} and σ²_{P_d}, as stated in the following lemma.

Lemma 4. Under the data model, the optimal power allocation minimizing the BER under a total power constraint σ²_T is given by:

Discussion
To get more insight into the proposed analysis, we provide here some comments and workouts on the theoretical results derived in the previous sections.
High SNR behaviour of the BER: At high SNR, the error variance parameters δ_t and δ_d are close to zero; hence, by using a first-order Taylor expansion of the BER expressions in (9) and (15), we obtain approximations in which O(x) denotes a real value of the same order of magnitude as x. From these approximated expressions, one can observe that the BER of the TDMT scheme is a monomial function of the estimation error variance parameter δ_t and of the number of transmitters K. For example, if the noise power is decreased by a factor of 2, then the BER decreases by a factor of 2^{M−K+1}. The diversity gain is thus equal to M − K + 1, which is in accordance with the works in [16] and [5]. Also, we observe that, in the DDST case, there is a floor effect on the BER (i.e., the BER is lower bounded by (1/2)^{1/c_1}) due to the data distortion inherent to this transmission scheme.
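The diversity order can be read off numerically from the closed-form TDMT BER. The sketch below uses the integer-order formula of the form (9) with μ_t = 1/√(2Kδ_t + 1); treat this exact expression as an assumption of the sketch. Halving δ_t (equivalently, halving the noise power) should divide the BER by about 2^{M−K+1}.

```python
import numpy as np
from math import comb

def ber_tdmt(delta, M=4, K=2):
    """TDMT BER of the form (9): integer-order chi-square average with
    mu_t = 1/sqrt(2*K*delta + 1) (assumed exact expression)."""
    m = M - K + 1
    mu = 1.0 / np.sqrt(2 * K * delta + 1)
    return (0.5 * (1 - mu))**m * sum(
        comb(m - 1 + k, k) * (0.5 * (1 + mu))**k for k in range(m))

# At high SNR (small delta), halving delta divides the BER by ~2^(M-K+1) = 8
ratio = ber_tdmt(1e-4) / ber_tdmt(2e-4)
```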
Gaussian vs. Gaussian mixture model: In our derivations, we have found that the post-processing noise in the DDST case behaves asymptotically as a Gaussian mixture process, while in most of the existing works the noise is assumed to be asymptotically Gaussian distributed. In fact, one can show that for large sample sizes (i.e. when c_1 → 0) the Gaussian mixture converges to a Gaussian distribution, allowing us to retrieve the standard Gaussian noise assumption. However, for small or moderate sample sizes, the considered Gaussian mixture model leads to a much better approximation of the BER than the one we would obtain with a Gaussian post-processing noise model. In other words, the results of Theorem 2 allow us to derive closed-form expressions for the BER that remain valid for relatively small sample sizes.
Workouts on the optimal power allocation expressions for the TDMT scheme: We consider here two limit cases: (i) the high SNR case where σ_v² ≪ σ²_T, and (ii) the case of a high-dimensional system (the number of transmit antennas is of the same order of magnitude as the number of receive antennas), where c_2 − 1 ≪ 1. From (17), the data-to-pilot power ratio can then be approximated accordingly. Equation (22) shows that the optimal power allocation in the high SNR case realizes a kind of trade-off between the pilot size and its power such that the total energy N_1 σ²_{P_t} is kept constant. This suggests using the smallest possible pilot size that meets the technical constraint of limited transmit power, so as to increase the effective channel throughput without loss of performance. It also shows that, in the difficult case of a large-dimensional system, one needs to allocate the same total energy to pilot and data symbols, i.e. N_1 σ²_{P_t} ≈ N_2 σ²_{w_t}. In other words, we should give similar importance (in terms of power allocation) to channel estimation and to data detection.
Workouts on the optimal power allocation expressions for the DDST scheme: A similar workout is considered here for the DDST scheme. We consider the two previous limit cases and assume that the sample size is much larger than the number of transmitters, i.e. N ≫ K. In this context, we obtain approximations of the data-to-pilot power ratio. Again, we observe that in the large-dimensional system case one needs to allocate the same total energy to pilots and data. For high SNR, one observes a kind of trade-off between the pilot power and size, but in a different way from the TDMT case: if the sample size is increased by a factor of 4, the data-to-pilot power ratio can be increased by a factor of 2 without affecting the BER performance.
High SNR BER comparison of the two pilot design schemes: For the DDST scheme, the BER expression can be lower bounded by using the convexity of Q(√(bx)) as a function of b; the resulting inequality follows from the fact that J(m, a, b) is a decreasing function of its last argument. Now, in the high SNR and large sample size scenario (i.e., for σ_v²/σ²_T ≪ 1 and N ≫ N_1, K), we have δ_t ≈ δ_d and, by continuity, J(M − K + 1, Kδ_d, 1) ≈ J(M − K + 1, Kδ_t, 1) = BER_t. Consequently, in this context, the TDMT scheme is better than the DDST scheme in terms of BER, i.e. BER_d ≥ BER_t.

Simulations
Although our results are only valid in the asymptotic regime, they are found to yield good accuracy even for very small system dimensions. In this section, we present simulation results that compare the TDMT and DDST schemes.

Performance comparison between DDST and TDMT based schemes
In this section, unless otherwise mentioned, we consider a 2 × 4 MIMO system (K = 2, M = 4) with a data block size N = 32. Fig. 1 plots the empirical and theoretical BER under a QPSK constellation for N = 32, K = 2 and M = 4 for the TDMT- and DDST-based schemes. All comparisons are conducted under the constraint that both schemes use the same total energy. The number of training symbols is set to N_1 = 2 (N_2 = 30) for the TDMT scheme. For low SNR values (SNR below 6 dB), both schemes achieve approximately the same BER performance; therefore, the DDST scheme outperforms its TDMT counterpart in terms of data rate, since it has a better bandwidth efficiency. For high SNR values, the noise caused by the data distortion exceeds the additive Gaussian noise, thus degrading the performance of the DDST scheme.
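For reproducibility, the following self-contained Monte Carlo sketch mimics this comparison. The power levels and the even power split are illustrative rather than the optimized allocations of Section 5, and J is the candidate distortion projector J = (K/N)(1_{N/K} ⊗ I_K) assumed throughout, so the absolute BER values only indicate the general trend.

```python
import numpy as np

rng = np.random.default_rng(7)

M, K, N = 4, 2, 32
N1, N2 = 2, 30
snr_db = 15.0
sigma_v = np.sqrt(10 ** (-snr_db / 10))    # total symbol power normalized to 1

def cgauss(shape, std):
    return std * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def qpsk(shape):
    b = rng.integers(0, 2, shape + (2,))
    return ((2 * b[..., 0] - 1) + 1j * (2 * b[..., 1] - 1)) / np.sqrt(2), b

def demod(x):
    return np.stack([(x.real > 0).astype(int), (x.imag > 0).astype(int)], axis=-1)

k = np.arange(K)[:, None]
Pt = np.exp(2j * np.pi * k * np.arange(N1)[None, :] / K)        # TDMT pilots, unit power
Pd = 0.5 * np.exp(2j * np.pi * k * np.arange(N)[None, :] / K)   # DDST pilots (power 1/4)
J = (K / N) * np.kron(np.ones((N // K, N // K)), np.eye(K))
D = np.eye(N) - J

trials, err_t, err_d = 1000, 0, 0
for _ in range(trials):
    H = cgauss((M, K), 1 / np.sqrt(K))

    # TDMT: LS estimation on the pilot block, then ZF on the data block
    Hh = (H @ Pt + cgauss((M, N1), sigma_v)) @ Pt.conj().T @ np.linalg.inv(Pt @ Pt.conj().T)
    Wt, bt = qpsk((K, N2))
    err_t += np.sum(demod(np.linalg.pinv(Hh) @ (H @ Wt + cgauss((M, N2), sigma_v))) != bt)

    # DDST: superimposed distorted data + pilots, LS estimation, pilot cancellation, ZF
    Wd, bd = qpsk((K, N))
    Y = H @ (Wd @ D + Pd) + cgauss((M, N), sigma_v)
    Hh = Y @ Pd.conj().T @ np.linalg.inv(Pd @ Pd.conj().T)
    err_d += np.sum(demod(np.linalg.pinv(Hh) @ Y @ D) != bd)

ber_tdmt = err_t / (trials * K * N2 * 2)
ber_ddst = err_d / (trials * K * N * 2)
```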

Applications
To compare the efficiency of the TDMT and DDST schemes, we consider applications in which the BER should be below a certain threshold, say 10^{-2}. This may be the case, for instance, of circuit-switched voice applications. Note that for non-coded systems, a target BER of 10^{-2} is commonly used.

Application 1: In this scenario, we set the SNR σ²_T/σ_v² to 15 dB. We then vary the ratio c_1 = K/N from 0.01 to 0.5. Since we consider K = 2 and M = 4, N = K/c_1 also varies with c_1. For each value of N, we compute the BER by using (9) and (15). Fig. 2 illustrates the obtained results. We also superpose in the same plot the empirical results for the TDMT and DDST schemes. The results show a good match, thereby supporting the usefulness of the derived expressions. We note that the DDST scheme may be interesting for long frames (N ≥ 16). For small frames (high distortion ratio c_1), the distortion of the data becomes too high, thus reducing the interest of the DDST scheme.

Application 2: In this experiment, we propose to determine, for the TDMT scheme (K = 2, M = 4, N = 32), the optimal ratio N_2/N_1 that has to be used to meet a certain quality of service. For that, we consider a scenario where the BER should be below 10^{-2}. Using (16), (17) and (9), we determine the minimum number of training symbols required to meet the BER requirement. We then plot the corresponding ratio r = N_2/N_1 with respect to the SNR. We note that if the SNR is below 2 dB, the BER requirement cannot be achieved. This is to be compared with the DDST scheme, for which the SNR should be at least 10.5 dB to meet the BER requirement, as shown in Fig. 3. Moreover, for an SNR above 8.5 dB, the minimum number of pilot symbols for channel identification (equal to K) is sufficient to meet the BER requirement.

A Proof of theorem 1

In the sequel, we determine the asymptotic distribution of each entry of the post-processing noise matrix ΔW_t.
Actually, the (i, j) entry of ΔW_t is given by an expression in which h_i^# and h_i denote, respectively, the i-th row of H^# and of (H^H H)^{−1}, and w_j and v_{2,j} denote the j-th columns of W_t and V_2, respectively. Conditioned on H, V_1 and W_t, (ΔW_t)_{i,j} is a Gaussian random variable with mean −h_i^# ΔH_t w_j and variance σ²_{w,K}. Since our proof is based on the 'characteristic function' approach, we first recall the expression of the characteristic function for complex random variables.

Theorem 5. Let X_n be a complex Gaussian random variable with mean m_{X,n} and variance σ²_{X,n}, such that E(X_n − m_{X,n})² = 0. Then X_n can be seen as a two-dimensional random variable corresponding to its real and imaginary parts, and its characteristic function follows.

Applying Theorem 5, the conditional characteristic function of (ΔW_t)_{i,j} can be written in terms of two quantities A_{σ,K} and B_{σ,K}. The limiting behaviour of A_{σ,K} can be derived by using the following known results describing the asymptotic behaviour of an important class of quadratic forms. Noticing that Tr(AA^H) ≤ N‖A‖² and that Tr((AA^H)^{p/2}) ≤ N‖A‖^p, we obtain a simpler inequality. Hence, if A and x have, respectively, a finite spectral norm and a finite eighth-order moment, we can conclude, using the Borel-Cantelli lemma, that the quadratic form (1/N) x^H A x converges almost surely, thus yielding the following corollary.

Corollary 7. Let x = [x_1, · · · , x_N]^T be an N × 1 vector whose entries x_i are centered i.i.d. complex random variables with unit variance and finite eighth-order moment. Let A be a deterministic N × N complex matrix with bounded spectral norm. Then (1/N) x^H A x − (1/N) Tr(A) → 0 almost surely.

By Corollary 7, the asymptotic behaviour of A_{σ,K} follows. Since (1/K) Tr((H^H H)^{−1}) converges asymptotically to 1/(c_2 − 1) as the dimensions go to infinity [18], the limit of A_{σ,K} is obtained. Note that Corollary 7 can be applied since the smallest eigenvalue of the Wishart matrix H^H H is almost surely uniformly bounded away from zero by (1 − √c_2)² > 0, [19].
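Corollary 7 (the classical 'trace lemma') is easy to illustrate numerically: for a fixed matrix A with bounded spectral norm, the gap between (1/N) x^H A x and (1/N) Tr(A) shrinks at rate O(1/√N). The sketch below uses an illustrative diagonal A, so the quadratic form reduces to a weighted sum of |x_i|².

```python
import numpy as np

rng = np.random.default_rng(3)

def quadratic_form_gap(N, trials=50):
    """Average of |x^H A x / N - Tr(A) / N| over random draws of x, for a fixed
    diagonal A = diag(a) with bounded spectral norm (||A|| <= 1.5 here)."""
    a = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # eigenvalues of A
    gaps = []
    for _ in range(trials):
        # centered i.i.d. complex entries with unit variance
        x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        gaps.append(abs(np.sum(a * np.abs(x) ** 2) / N - np.mean(a)))
    return float(np.mean(gaps))

# The gap decreases as the dimension grows, as Corollary 7 predicts
gap_small, gap_large = quadratic_form_gap(50), quadratic_form_gap(5000)
```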
Before determining the limiting behaviour of B_{σ,K}, we shall need the following lemma.

Lemma 8. Let Y be an M × K matrix with Gaussian i.i.d. entries. Then, in the asymptotic regime given above, the convergence stated below holds.

Proof. Without loss of generality, we can restrict our proof to the case i = 1. Let y_1, · · · , y_K denote the columns of Y; the matrix Y^H Y can then be written in block form. Using the formula for the inverse of block matrices and Corollary 7, the quantity of interest tends to 0 almost surely.
Since the remaining term tends to 1/(c_2 − 1) almost surely, we get the desired result.
We are now in a position to deal with the term B_{σ,K}. Using Corollary 7, and since the relevant normalized trace converges almost surely to 1/(c_2 − 1) (its inverse is the mean of independent random variables [12]), the limit of B_{σ,K} follows. To prove the almost sure convergence to zero of ε_{σ,K}, we rely on the following result about the asymptotic behaviour of weighted averages.

Theorem 9 (Almost sure convergence of weighted averages, [20]). Let a_N = [a_1, · · · , a_N]^T be a sequence of N × 1 deterministic real vectors with sup_N (1/N) a_N^T a_N < +∞, and let x_N = [x_1, · · · , x_N]^T be an N × 1 real random vector with i.i.d. entries such that E x_1 = 0 and E|x_1| < +∞. Then (1/N) a_N^T x_N converges almost surely to zero as N tends to infinity.
This theorem was proved in [20] for real variables. Since we are interested in the asymptotic convergence of the real part of ε_{σ,K}, it is possible to transpose our problem to the real case. Indeed, let x = V_1^H h_i^# and a = P_t^H (H^H H)^{−1} h_i^#; then ℜ(ε_{σ,K}) can be written accordingly. Let a_r, x_r (resp. a_i, x_i) denote the real parts (resp. imaginary parts) of a and x. Referring to Theorem 9, the convergence to zero of ℜ(ε_{σ,K}) is ensured if (1/(2N_1))(a_r^T a_r + a_i^T a_i) = (1/(2N_1)) ‖a‖² is finite, which holds almost surely. This leads to σ²_{w,K} − σ̃²_{w,K} → 0 almost surely, where σ̃²_{w,K} denotes the asymptotic equivalent given above. Substituting σ²_{w,K} by its asymptotic equivalent in (26), we obtain the limiting conditional characteristic function. Also, conditioning on W_t and H, h_i^# ΔH_t w_j is a Gaussian random variable with zero mean and variance σ²_{m,K}. Since (1/K) w_j^H w_j → σ²_{w_t} almost surely, σ²_{m,K} converges almost surely to σ̃²_{m,K}. Using the characteristic function of h_i^# ΔH_t w_j, we obtain the result conditionally on the channel, where e_j and J_j denote the j-th columns of I_N and J, respectively, and w̃_i denotes the i-th row of the matrix W. Using the same techniques as before, the remaining convergences can be proved.