
# A Fast Soft Bit Error Rate Estimation Method

*EURASIP Journal on Wireless Communications and Networking*
**volume 2010**, Article number: 372370 (2010)

## Abstract

In a previous publication, we suggested a method to estimate the Bit Error Rate (BER) of a digital communications system as an alternative to the classical Monte Carlo (MC) simulation. That method was based on the estimation of the probability density function (pdf) of soft observed samples, using the kernel method. In this paper, we propose using a Gaussian Mixture (GM) model instead. The Expectation Maximisation (EM) algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using Mutual Information Theory. The analytical expression of the BER is then simply given in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three methods: Monte Carlo, Kernel, and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run time, even at very low BER.

## 1. Introduction

To study the performance of a digital communications system, we generally need to use the Monte Carlo (MC) method to estimate the BER. A tutorial exposition of different techniques is provided in [1], with particular reference to four other specific methods: modified Monte Carlo simulation (importance sampling), extreme value theory, tail extrapolation, and the quasianalytical method. The modified Monte Carlo method is achieved by importance sampling, which means that important events, and hence errors, are artificially generated by biasing the noise process; at the end of the simulation, the error count must be properly unbiased (see also [2]). Extreme value theory (see [3]) assumes that the pdf can be approximated by an exponential function. The tail extrapolation method, a subset of the previous one, is based on the assumption that only the tail region of the pdf can be described by a generalized exponential class. The quasianalytical method combines noiseless simulation with an analytical representation of the noise. In [4], High-Order Statistics (HOS) of the bit Log-Likelihood-Ratio (LLR) are used for evaluating the performance of turbo-like codes. In this case, the characteristic function of the bit LLR is estimated using its first cumulants (or moments), and the pdf is then computed using the inverse Fourier transform. The reader can find other recent papers on this topic in [5–8].

In [9], we suggested a soft BER estimation based on the nonparametric computation of the pdf of the received data. We showed, in that case, that hard decisions are not needed to compute the BER and that the total number of transmitted samples needed is very small compared to the classical MC simulation. This allows a significant reduction in run time for computer simulations and may also be used as the basis for rapid real-time BER estimation in real communication systems.

In this paper, we propose using a Gaussian Mixture (GM) model to estimate the pdf of the observed samples. Two conditional pdfs are computed, corresponding to the transmitted bits equal to $\pm 1$. The Expectation Maximisation (EM) algorithm is used to estimate, in an iterative fashion, the different parameters of this mixture, that is, the means, the variances, and the a priori probabilities. The parameters are estimated by maximizing the conditional expectation of the joint likelihood. The analytical expression of the BER is then simply given in terms of the estimated parameters of the Gaussian Mixture. The choice of the number of Gaussians for each pdf is very important. In [10], a method based on Mutual Information Theory was presented to find the optimal number of Gaussians in order to give an accurate estimation of the pdf. Our suggested analytical expression of the BER, based on the Gaussian Mixture model whose parameters are jointly estimated by the EM algorithm and Mutual Information Theory, leads to an efficient, fast way to estimate the performance of a digital communications system. Simulation results are carried out to compare the three methods: Monte Carlo, Kernel, and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access (CDMA) system with single user detection and show that attractive performance is achieved compared with conventional Monte Carlo (MC) or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the simulation run time, even at very low BER.

The main idea of this paper is the use, in an iterative way, of the Expectation Maximisation (EM) Algorithm, for the pdf estimation with Gaussian Mixture model, jointly with the Mutual Information Theory, for the computation of the optimal number of Gaussians. The analytical expression of the BER is therefore given by using the different estimated parameters of the Gaussian Mixture.

The EM algorithm was introduced by Dempster et al. [11]. It is an iterative method for computing maximum likelihood estimates of missing data from observable variables. In this paper, the observable variables are simply the soft output values at the receiver of a digital communications system. The missing data is the unknown true component (Gaussian) from which each observation comes. Two conditional pdfs, corresponding to the transmitted bits $\pm 1$, must be estimated. For each pdf, a Gaussian Mixture model with a large enough initial number of components is used. Then, the EM algorithm performs, in an iterative way, the estimation of the parameters of each component, that is, means, variances, and a priori probabilities. The Mutual Information (MI), in the sense of Shannon Theory, is computed. A component with positive MI is assumed to be dependent on other components and can be removed without damaging the pdf estimation. The EM algorithm is then performed with a new, decreased number of Gaussians. The algorithm stops when the maximum of the computed MI over all the components is nonpositive, which means that all the remaining components are likely independent, and therefore gives an optimal structure of the Gaussian Mixture model. The two conditional pdfs are thus estimated, in a parallel fashion, by using Mutual Information Theory to compute iteratively the optimal number of components, with a subiteration of the EM algorithm to estimate the parameters of each component. An analytical expression of the BER is then obtained from all parameters of the Gaussian model at the last iteration.

Let us recall that the EM algorithm has mainly been used, over the past twenty years, in image processing, or more precisely in image segmentation, for different applications such as image or video compression. The reader can find in [12] an example of an application of the SEM (Stochastic EM) algorithm to SPOT satellite image segmentation, where a Gaussian distribution is assumed for each class. In [13], a hybrid version of SEM is used assuming a generalized Dirichlet distribution.

Nonparametric pdf estimation has also been used in different applications such as speech coding and pattern recognition [14, 15]. The Gaussian Mixture model has also been used in speaker identification [16].

Let us consider a general communications system (see Figure 1) where bit information is transmitted using any kind of transmission scheme, such as CDMA, FDMA, TDMA, or MC-CDMA, with or without channel coding or space time coding, using single or multiple transmit antennas, transmitting over a Gaussian, fading or multipath, fixed or time variant channel. At the receiver, any kind of detection, such as MIMO equalization, multiuser detection, turbo detection, or simply a Rake receiver, may be implemented. The only assumption we use in this paper is that the receiver is able to compute a soft decision for each transmitted bit, taken right before the hard decision (see Figure 1). In this paper, only binary phase shift keying (BPSK) modulation is used. The case of other kinds of modulation is left for future work.

Let $\{b_k\}_{k=1,\dots,N}$ be a set of transmitted bits. The $b_k$ are assumed to be independent and identically distributed with $P(b_k = +1) = \pi_{+1}$ and $P(b_k = -1) = \pi_{-1}$, where $\pi_{+1} + \pi_{-1} = 1$. Let $y_k$ denote the corresponding soft output at the receiver; the hard decision is taken by using its sign: $\hat{b}_k = \mathrm{sign}(y_k)$. All the received soft output decisions are random variables having the same pdf, $f(y)$.

Throughout the paper, the following notation is used. The cardinality of a set $A$ is denoted $|A|$. When $X$ is a random variable, $E[X]$ and $\mathrm{Var}[X]$ denote the mathematical expectation and variance of $X$, respectively. When $g$ is a twice-differentiable function, $g'(x)$ and $g''(x)$ denote its first and second derivatives at point $x$, respectively. $\mathrm{sign}(\cdot)$ denotes the sign of the argument, and $\log(\cdot)$ is the natural log function. $P(\cdot)$ is the probability of a given event, and superscript $T$ denotes the transpose.

The paper is organized as follows. Section 2 briefly shows how the probability density function (pdf) of the soft output signal at the receiver is estimated using the kernel method, in a nonparametric way, by estimating the optimal smoothing parameter; the BER estimate is then computed from all soft observations and the smoothing parameter value. In Section 3, we show how a Gaussian Mixture model can be fitted, for each conditional pdf, by using the Expectation Maximization (EM) algorithm. Mutual Information is used to compute iteratively the optimal number of Gaussians. The BER is then simply computed from all parameters (means, variances, and a priori probabilities) of each conditional pdf given by the EM algorithm at the last iteration. Simulation results are presented in Section 4. Finally, a brief conclusion is given in Section 5. Proofs of all theoretical results are given in the appendices.

## 2. Kernel Method for BER Estimation

### 2.1. Pdf Estimation Based on Kernel Method

A brief description of the kernel method will be given in this section. The reader can find more details in our previous work [9]. Let us note that $f(y)$, the pdf of the output observations, is a mixture of the two conditional pdfs and can then be written as

$$f(y) = \pi_{+1} f_{+1}(y) + \pi_{-1} f_{-1}(y), \tag{1}$$

where $f_{+1}$ (resp., $f_{-1}$) is the conditional pdf of the soft output $y$ given that the transmitted bit is $+1$ (resp., $-1$), and $\pi_{+1} = P(b = +1)$ and $\pi_{-1} = P(b = -1)$, with $\pi_{+1} + \pi_{-1} = 1$. We assume that we know the exact partition of the observations into two classes $\Omega_{+1}$ and $\Omega_{-1}$ which, respectively, contain the observed received soft bits whose corresponding transmitted bit is $+1$ (resp., $-1$). Let $N_{1}$ (resp., $N_{-1}$) be the cardinality of $\Omega_{+1}$ (resp., $\Omega_{-1}$). The kernel method [17–19] is used to estimate the different pdfs. In this case, the estimate of the conditional pdf $f_{+1}$ can be given by the following formula:

$$\hat{f}_{+1}(y) = \frac{1}{N_{1} h_{N_{1}}} \sum_{y_i \in \Omega_{+1}} K\!\left(\frac{y - y_i}{h_{N_{1}}}\right), \tag{2}$$

where $h_{N_{1}}$ is the smoothing parameter, which depends on the number $N_{1}$ of observed samples, and $K$ is any pdf (called the kernel) assumed to be an even and regular (i.e., square integrable) function with unit variance and zero mean. For simplicity, we will not give the corresponding equations for the conditional pdf $f_{-1}$; the reader can easily obtain them by replacing "$+1$" by "$-1$".

The choice of the smoothing parameter is very important ([14, 15]). Let us note that for a Gaussian conditional distribution with standard deviation $\sigma_{+1}$, and for a Gaussian kernel, the optimal smoothing parameter is given by

$$h_{N_{1}} = \left(\frac{4}{3 N_{1}}\right)^{1/5} \sigma_{+1}. \tag{3}$$
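As an illustration, the kernel estimate (2) with the rule-of-thumb bandwidth (3) can be sketched in a few lines of Python. This is our own sketch, not code from the paper: the function names are ours, and the sketch assumes a Gaussian kernel throughout.

```python
import math
import random

def silverman_bandwidth(samples):
    """Rule-of-thumb smoothing parameter h = (4 / (3 N))**(1/5) * sigma,
    optimal for a Gaussian kernel when the underlying pdf is Gaussian."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((y - mean) ** 2 for y in samples) / (n - 1))
    return (4.0 / (3.0 * n)) ** 0.2 * sigma

def kernel_pdf_estimate(y, samples, h):
    """Gaussian-kernel estimate of the pdf at point y, as in Eq. (2)."""
    n = len(samples)
    total = sum(math.exp(-0.5 * ((y - yi) / h) ** 2) for yi in samples)
    return total / (n * h * math.sqrt(2.0 * math.pi))

# Demo: soft outputs drawn from N(1, 1); the true density at the mean
# is 1/sqrt(2*pi) ~ 0.399
random.seed(0)
obs = [random.gauss(1.0, 1.0) for _ in range(5000)]
h = silverman_bandwidth(obs)
density_at_mean = kernel_pdf_estimate(1.0, obs, h)
```

With a few thousand samples the estimate at the mode lands close to the true value; the small downward bias is the usual kernel-smoothing bias, of order $h^2$.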

### 2.2. BER Estimation Based on Kernel Method

The BER is given by

$$\mathrm{BER} = \pi_{+1} \int_{-\infty}^{0} f_{+1}(y)\, dy + \pi_{-1} \int_{0}^{+\infty} f_{-1}(y)\, dy. \tag{4}$$

To estimate the BER of our system, we must evaluate the expression in (4). We can show that, for the chosen Gaussian kernel, a soft BER estimate is given by the following expression (see proof in [20]):

$$\widehat{\mathrm{BER}} = \hat{\pi}_{+1} \frac{1}{N_{1}} \sum_{y_i \in \Omega_{+1}} Q\!\left(\frac{y_i}{h_{N_{1}}}\right) + \hat{\pi}_{-1} \frac{1}{N_{-1}} \sum_{y_i \in \Omega_{-1}} Q\!\left(\frac{-y_i}{h_{N_{-1}}}\right), \tag{5}$$

where $Q(\cdot)$ denotes the complementary unit cumulative Gaussian distribution, that is, $Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{+\infty} e^{-t^{2}/2}\, dt$.

Theoretical studies regarding the convergence of this BER estimator are given in [20], where we show that the estimator is asymptotically unbiased.
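A minimal sketch of this kernel-based soft BER estimator, under the same Gaussian-kernel assumption (helper names are ours): each soft sample contributes its kernel-smoothed probability of falling on the wrong side of the zero threshold, so no hard-decision error counting is needed.

```python
import math
import random

def q_func(x):
    """Complementary unit cumulative Gaussian distribution Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def kernel_ber(obs_plus, obs_minus, h_plus, h_minus):
    """Soft BER estimate in the spirit of Eq. (5): every soft sample adds
    its smoothed probability of lying on the wrong side of zero."""
    n1, n0 = len(obs_plus), len(obs_minus)
    pi_plus, pi_minus = n1 / (n1 + n0), n0 / (n1 + n0)
    term_plus = sum(q_func(y / h_plus) for y in obs_plus) / n1
    term_minus = sum(q_func(-y / h_minus) for y in obs_minus) / n0
    return pi_plus * term_plus + pi_minus * term_minus

# Demo: BPSK over AWGN with unit amplitude and sigma = 1,
# so the true BER is Q(1) ~ 0.1587 (bandwidth 0.3 is illustrative)
random.seed(1)
obs_p = [1.0 + random.gauss(0.0, 1.0) for _ in range(2000)]
obs_m = [-1.0 + random.gauss(0.0, 1.0) for _ in range(2000)]
ber_est = kernel_ber(obs_p, obs_m, 0.3, 0.3)
```

Note that every one of the 4000 soft samples contributes to the estimate, which is why far fewer samples are needed than with error counting.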

## 3. Gaussian Mixture for BER Estimation

In this section, instead of the kernel method (given by (2)), a Gaussian Mixture (GM) model will be used. The mixture model is generally used for its mathematical flexibility. For example, a mixture of two Gaussian distributions with different means and different variances results in a density with two modes, which is not a standard parametric distribution model. Mixture distributions can also model extreme events better than a single Gaussian. More details about mixture distributions can be found in [21].

The following subsection shows how to estimate the two conditional pdfs using a Gaussian Mixture model. The Expectation Maximization (EM) algorithm is used to compute the mean, the variance, and the a priori probability of each component of this mixture. Then, Section 3.2 shows how the BER can be simply computed from these estimated parameters. For simplicity, equations are developed only for one conditional pdf, $f_{+1}$; the reader can easily derive the corresponding equations for the estimation of $f_{-1}$.

### 3.1. Pdf Estimation Based on Gaussian Mixture Method

In this section, we will assume that the conditional pdf $f_{+1}$ is a mixture of $K_{1}$ Gaussians (see [21]):

$$f_{+1}(y) = \sum_{j=1}^{K_{1}} \alpha_j\, \mathcal{N}\big(y; \mu_j, \sigma_j^{2}\big), \tag{6}$$

where $\alpha_j$ is the a priori probability of the $j$th component of the Gaussian mixture and $\mathcal{N}(y; \mu_j, \sigma_j^{2})$ is a Gaussian pdf with mean $\mu_j$ and variance $\sigma_j^{2}$. We have the constraint that $\sum_{j=1}^{K_{1}} \alpha_j = 1$.

Let $y_1, \dots, y_{N_{1}}$ be the soft observed samples corresponding to the transmitted bits equal to $+1$. As the pdf of the observed samples is a mixture of Gaussians, each $y_i$ is produced by one component of this mixture (see [21]). We have to find the index of this component: this is the missing data that we will try to compute. Let $Z = (z_1, \dots, z_{N_{1}})$ be the missing data, a sequence of variables that determines the component from which each observation originates. $z_i = j$ means that $y_i$ is generated by the $j$th component of the Gaussian mixture, that is, $y_i \sim \mathcal{N}(\mu_j, \sigma_j^{2})$. The a priori probability $\alpha_j$ represents the probability that $z_i = j$, that is, $\alpha_j = P(z_i = j)$.

In order to estimate the conditional pdf $f_{+1}$ from (6), we have to estimate the unknown parameters represented by $\Phi = (\alpha_j, \mu_j, \sigma_j^{2})_{j=1,\dots,K_{1}}$. The criterion we use is the maximization of the conditional expectation of the joint likelihood of both the observed samples, $Y$, and the missing data, $Z$ (see [11] for details about this criterion). The likelihood function is given by

$$L(Y, Z; \Phi) = \prod_{i=1}^{N_{1}} \prod_{j=1}^{K_{1}} \Big[\alpha_j\, \mathcal{N}\big(y_i; \mu_j, \sigma_j^{2}\big)\Big]^{\mathbb{1}(z_i = j)}, \tag{7}$$

where $\mathbb{1}(\cdot)$ is the indicator function given by

$$\mathbb{1}(z_i = j) = \begin{cases} 1 & \text{if } z_i = j, \\ 0 & \text{otherwise.} \end{cases} \tag{8}$$

In this section, we will use the Expectation Maximization algorithm to estimate, in an iterative way, the unknown parameter $\Phi$. At each new iteration $t$, given the estimate $\Phi^{(t-1)}$ computed at the previous iteration, two steps are performed. In the first one, the Estimation step, we compute the different a posteriori probabilities (APP): $P(z_i = j \mid y_i; \Phi^{(t-1)})$. In the second one, the Maximization step, we compute the new parameter $\Phi^{(t)}$ by maximizing the conditional expectation of the joint log-likelihood of the observed samples, $Y$, and the missing data, $Z$.

#### 3.1.1. Estimation Step

In this step, at iteration $t$, we estimate the unobserved component of the Gaussian mixture for each observed sample $y_i$, using the parameter value $\Phi^{(t-1)}$ computed at the Maximization step of the previous iteration. Using simple Bayes' rule, we have

$$P\big(z_i = j \mid y_i; \Phi^{(t-1)}\big) = \frac{P(z_i = j)\, f\big(y_i \mid z_i = j; \Phi^{(t-1)}\big)}{f\big(y_i; \Phi^{(t-1)}\big)}. \tag{9}$$

Therefore, for $i = 1, \dots, N_{1}$ and for $j = 1, \dots, K_{1}$, we have

$$\gamma_{ij}^{(t)} \triangleq P\big(z_i = j \mid y_i; \Phi^{(t-1)}\big) = \frac{\alpha_j^{(t-1)}\, \mathcal{N}\big(y_i; \mu_j^{(t-1)}, (\sigma_j^{2})^{(t-1)}\big)}{\sum_{l=1}^{K_{1}} \alpha_l^{(t-1)}\, \mathcal{N}\big(y_i; \mu_l^{(t-1)}, (\sigma_l^{2})^{(t-1)}\big)}. \tag{10}$$

#### 3.1.2. Maximization Step

Now, at the current iteration $t$, we maximize the conditional expectation of the log-likelihood of the joint event, assuming independent observations:

$$\Phi^{(t)} = \arg\max_{\Phi}\; E\big[\log L(Y, Z; \Phi) \mid Y; \Phi^{(t-1)}\big], \tag{11}$$

where $L(Y, Z; \Phi)$ is the joint likelihood given by (7).

We can show that, for $j = 1, \dots, K_{1}$, the new parameters are given by (see Appendices A–D for proofs)

$$\alpha_j^{(t)} = \frac{1}{N_{1}} \sum_{i=1}^{N_{1}} \gamma_{ij}^{(t)}, \tag{12}$$

$$\mu_j^{(t)} = \frac{\sum_{i=1}^{N_{1}} \gamma_{ij}^{(t)}\, y_i}{\sum_{i=1}^{N_{1}} \gamma_{ij}^{(t)}}, \tag{13}$$

$$(\sigma_j^{2})^{(t)} = \frac{\sum_{i=1}^{N_{1}} \gamma_{ij}^{(t)}\, \big(y_i - \mu_j^{(t)}\big)^{2}}{\sum_{i=1}^{N_{1}} \gamma_{ij}^{(t)}}. \tag{14}$$
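The E-step (10) and the M-step updates (12)-(14) can be sketched for a one-dimensional mixture as follows. This is our own sketch: the initialization strategy and function names are illustrative choices, not prescribed by the paper.

```python
import math
import random

def gauss_pdf(y, mu, var):
    return math.exp(-0.5 * (y - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def em_gmm_1d(samples, k, iters=100):
    """EM for a 1-D Gaussian mixture: the E-step computes the a posteriori
    probabilities (Eq. (10)); the M-step applies the closed-form updates
    for weights, means, and variances (Eqs. (12)-(14))."""
    n = len(samples)
    lo, hi = min(samples), max(samples)
    # Illustrative init: means spread over the data range, equal weights
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    var = [((hi - lo) / k) ** 2 + 1e-6] * k
    alpha = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities gamma[i][j] = P(z_i = j | y_i)
        gamma = []
        for y in samples:
            w = [alpha[j] * gauss_pdf(y, mu[j], var[j]) for j in range(k)]
            s = sum(w) or 1e-300
            gamma.append([wj / s for wj in w])
        # M-step: closed-form re-estimation of each component
        for j in range(k):
            nj = sum(g[j] for g in gamma)
            alpha[j] = nj / n
            mu[j] = sum(g[j] * y for g, y in zip(gamma, samples)) / nj
            var[j] = sum(g[j] * (y - mu[j]) ** 2
                         for g, y in zip(gamma, samples)) / nj + 1e-9
    return alpha, mu, var

# Demo: two well-separated components at -2 and +2
random.seed(2)
data = [random.gauss(-2.0, 0.5) for _ in range(1000)] + \
       [random.gauss(2.0, 0.5) for _ in range(1000)]
alpha, mu, var = em_gmm_1d(data, 2)
```

For well-separated components like these, the estimated means converge to the true cluster centers and the weights to roughly one half each.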

### 3.2. BER Estimation Based on Gaussian Mixture Method

In this section, we derive the expression of the BER estimate assuming a Gaussian Mixture based pdf estimator. Let $\Phi_{+1}$ and $\Phi_{-1}$ be the values reached at the last iteration of the EM algorithm described in Section 3.1. The parameter $\Phi_{+1}$ (resp., $\Phi_{-1}$) allows the estimation of the conditional pdf $f_{+1}$ (resp., $f_{-1}$). Let us underline that we need to run the EM algorithm twice, in an independent way. Each time, a different database is used: $\Omega_{+1}$ (resp., $\Omega_{-1}$), the soft observations corresponding to the transmitted bits equal to $+1$ (resp., $-1$), to estimate $f_{+1}$ (resp., $f_{-1}$).

At the last iteration $T$ of the EM algorithm, reliable estimates of $\Phi_{+1}$ and $\Phi_{-1}$ are reached and the BER is computed using the obtained estimates. Let us recall the expression of the BER given by (4). Replacing the two conditional pdfs by their Gaussian Mixture based estimates (6) using the parameters $\Phi_{+1}$ and $\Phi_{-1}$, the BER estimate is simply computed as

$$\widehat{\mathrm{BER}} = \hat{\pi}_{+1} \sum_{j=1}^{K_{1}} \alpha_j\, Q\!\left(\frac{\mu_j}{\sigma_j}\right) + \hat{\pi}_{-1} \sum_{j=1}^{K_{-1}} \alpha'_j\, Q\!\left(\frac{-\mu'_j}{\sigma'_j}\right), \tag{15}$$

where $Q(\cdot)$ denotes the classical complementary unit cumulative Gaussian distribution, $(\alpha_j, \mu_j, \sigma_j^{2})$ are the parameters of $\Phi_{+1}$, and $(\alpha'_j, \mu'_j, \sigma'^{2}_j)$ those of $\Phi_{-1}$. Details regarding the derivation of (15) are provided in Appendix D.
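Once the EM parameters are available, a BER expression of this form is just a weighted sum of Q-function values, one per Gaussian component. A small sketch (function names are ours):

```python
import math

def q_func(x):
    """Complementary unit cumulative Gaussian distribution Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gm_ber(pi_plus, gm_plus, pi_minus, gm_minus):
    """BER in the spirit of Eq. (15): each Gaussian component contributes
    its analytically known probability mass on the wrong side of zero.
    gm_plus / gm_minus are lists of (weight, mean, variance) triples."""
    err_plus = sum(a * q_func(m / math.sqrt(v)) for a, m, v in gm_plus)
    err_minus = sum(a * q_func(-m / math.sqrt(v)) for a, m, v in gm_minus)
    return pi_plus * err_plus + pi_minus * err_minus

# Sanity check: a single-component "mixture" with means +/-1 and unit
# variance reduces to the textbook BPSK result BER = Q(1)
ber = gm_ber(0.5, [(1.0, 1.0, 1.0)], 0.5, [(1.0, -1.0, 1.0)])
```

Because the integral of each Gaussian tail is known in closed form, the evaluation cost is independent of the SNR, which is the source of the run-time advantage reported later.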

### 3.3. Optimal Choice of the Number of Components

The choice of $K_{1}$ and $K_{-1}$ is very important. It is clear that if the number of components is too low, the corresponding pdf will be too smooth and the BER estimate less reliable. On the other hand, if this number is too high, the same class of observed samples comes from different components, and these components will be correlated, which is not useful since all the observed data are assumed to be independent. Consequently, the optimal number of components is the largest one such that all the components are independent. A similar method has been suggested in [16] for speaker identification applications, by increasing the number of classes in the $k$-means algorithm. More mathematical details can be found in [10]. Here, we suggest initializing the algorithm with a high enough value, performing the EM algorithm to estimate the different components, and testing their independence after the last iteration. If independence does not hold, we iteratively decrease the number of components until it is reached.

To test the independence of two components $i$ and $j$, mutual information theory, as proposed by Shannon [22], can be used. For the speaker identification application in [16], the mutual relationship $MI(i,j)$ of the two components is defined in terms of the probability of each component in the mixture (see (6)) and the joint probability of the two components.

The sign of the expression of (16) tells us whether the two components are statistically independent: if $MI(i,j) = 0$, the two components are independent; if $MI(i,j) > 0$, the two components are statistically dependent, and one of them can be removed without damaging the estimation of the pdf; if $MI(i,j) < 0$, the two components are only weakly correlated. So the quantity

$$\beta_i = \max_{j \neq i} MI(i,j) \tag{19}$$

allows us to know whether we have to reduce the number of components or not. To find the optimal numbers of components $K_{1}$ and $K_{-1}$, we just choose a large enough initial number and, at the end of the EM algorithm, the sign of $\beta^{(+1)}$ (resp., $\beta^{(-1)}$) tells us whether we have to reduce $K_{1}$ (resp., $K_{-1}$). If $\max_i \beta_i > 0$, we decrease the number of components by one; otherwise we stop the algorithm. The computation of the optimal values $K_{1}$ and $K_{-1}$, which can of course be different, can be performed in a parallel fashion. For a new decreased value, $K_{1} - 1$, the initial GM parameters of the EM algorithm can be given by the output parameters at the last iteration of the previous EM run, after removing the $i^{\ast}$th component, where $i^{\ast} = \arg\max_i \beta_i$.

The quantity $\beta_i$ represents the mutual information for component $i$ and indicates whether this component has a significant and independent contribution to the pdf estimation. The component with the biggest positive value has the least independent contribution to the GM estimate and should therefore be removed first. The proposed Gaussian Mixture based BER estimation using the EM algorithm and Mutual Information theory can now be summarized in Algorithm 1. Figure 2 gives the flow chart of the suggested algorithm.

**Algorithm 1:** Summary of the Gaussian Mixture based BER estimation using the EM algorithm and Mutual Information theory. $K_{1}$ and $K_{-1}$ are computed in a parallel fashion.

**1. Initialization:**

1.1. Classify the soft outputs according to their transmitted bits into $\Omega_{+1}$ and $\Omega_{-1}$. Let $N_{1} = |\Omega_{+1}|$ and $N_{-1} = |\Omega_{-1}|$.

1.2. Deduce the two a priori probabilities: $\hat{\pi}_{+1} = N_{1}/(N_{1}+N_{-1})$ and $\hat{\pi}_{-1} = N_{-1}/(N_{1}+N_{-1})$.

1.3. Choose a large enough number of Gaussians for each mixture: $K_{1}$ and $K_{-1}$.

**2. EM algorithm for** $f_{+1}$:

2.1. Initialization of $\Phi_{+1}^{(0)}$.

2.2. At each iteration $t = 1, \dots, T$:

2.2.1 Estimation step: estimate the APPs $\gamma_{ij}^{(t)}$ using (10).

2.2.2 Maximization step: compute $\Phi_{+1}^{(t)}$ using (12), (13), and (14).

**3. Verification of optimality of** $K_{1}$:

3.1. Compute $\beta^{(+1)}$ from (19).

3.2. If $\beta^{(+1)} \le 0$, save $\Phi_{+1}$ and go to step 4. Otherwise, set $K_{1} := K_{1} - 1$, remove the $i^{\ast}$th component, and go to step 2.

**4. EM algorithm for** $f_{-1}$:

4.1. Initialization of $\Phi_{-1}^{(0)}$.

4.2. At each iteration $t = 1, \dots, T$:

4.2.1 Estimation step: estimate the APPs using (10).

4.2.2 Maximization step: compute $\Phi_{-1}^{(t)}$ using (12), (13), and (14).

**5. Verification of optimality of** $K_{-1}$:

5.1. Compute $\beta^{(-1)}$ from (19).

5.2. If $\beta^{(-1)} \le 0$, save $\Phi_{-1}$ and go to step 6. Otherwise, set $K_{-1} := K_{-1} - 1$, remove the $i^{\ast}$th component, and go to step 4.

**6. BER Computation:**

6.1. Compute the BER estimate from (15) using $\Phi_{+1}$ and $\Phi_{-1}$.
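The outer decrease-and-retest loop of Algorithm 1 can be sketched as below. Note the hedge: the per-component score in [10] is the mutual information, whereas `component_scores` here is a deliberately simplified stand-in based on responsibility overlap (with an arbitrary 0.05 threshold), used only to make the control flow concrete; the demo's `mock_fit_em` is likewise a toy stand-in for a real EM fit.

```python
def component_scores(gamma, k):
    """Stand-in redundancy score per component (NOT the MI formula of [10]):
    a component scores high when another component claims almost the same
    samples, measured by the average overlap of posterior responsibilities.
    The 0.05 threshold is an arbitrary illustrative choice."""
    n = len(gamma)
    scores = []
    for i in range(k):
        overlap = max(sum(g[i] * g[j] for g in gamma) / n
                      for j in range(k) if j != i)
        scores.append(overlap - 0.05)
    return scores

def prune_components(fit_em, samples, k_init, k_min=1):
    """Outer loop of Algorithm 1: run EM, score the components, remove the
    worst positive-scoring one, and repeat until all scores are <= 0.
    fit_em(samples, k) must return (params, gamma)."""
    k = k_init
    while k > k_min:
        params, gamma = fit_em(samples, k)
        scores = component_scores(gamma, k)
        worst = max(range(k), key=lambda i: scores[i])
        if scores[worst] <= 0.0:   # all components ~ independent: stop
            return params, k
        k -= 1                     # drop one component and rerun EM
    params, _ = fit_em(samples, k)
    return params, k

# Demo with a mock EM fit: two genuine clusters; any extra component
# duplicates the responsibilities of cluster 0 and is pruned away
def mock_fit_em(samples, k):
    n = len(samples)
    extras = list(range(2, k))
    w = 1.0 / (len(extras) + 1)
    gamma = []
    for idx in range(n):
        row = [0.0] * k
        if idx < n // 2:
            row[0] = w
            for e in extras:
                row[e] = w
        else:
            row[1] = 1.0
        gamma.append(row)
    return ("params_k%d" % k, gamma)

params, k_opt = prune_components(mock_fit_em, list(range(100)), 4)
```

Starting from four components, the loop discards the two redundant ones and stops at the two genuinely independent clusters.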

## 4. Performance Evaluation

To evaluate the performance of the three methods, we consider the framework of a synchronous CDMA system with two users using binary phase-shift keying (BPSK) and operating over an additive white Gaussian noise (AWGN) channel. We restrict ourselves to the conventional single user CDMA detector. Performance assessment in the case of advanced signaling/receivers is not reported in this paper due to space limitation and is left for future contributions.

With respect to the considered framework, the received chip-level signal vector at discrete time instant $n$ can be expressed as

$$\mathbf{r}(n) = A_1 b_1(n)\, \mathbf{c}_1 + A_2 b_2(n)\, \mathbf{c}_2 + \mathbf{n}(n), \tag{21}$$

where $\mathbf{c}_k$ is the unit-norm spreading code of length $N_c$ (the spreading factor) corresponding to user $k$, $A_k$ is the amplitude of user $k$, $b_k(n)$ is the information bit value of user $k$ at time instant $n$, and $\mathbf{n}(n)$ is the temporally and spatially white Gaussian noise, that is, $\mathbf{n}(n) \sim \mathcal{N}(\mathbf{0}, \sigma^{2} \mathbf{I}_{N_c})$. The *a priori* probabilities of the information bits are supposed to be identical and uniform for both users, that is, $P(b_k(n) = \pm 1) = 1/2$.

The decision statistic that serves for detecting user 1 at time instant $n$ is the matched filter output [23], given as

$$y_1(n) = \mathbf{c}_1^{T} \mathbf{r}(n) = A_1 b_1(n) + \rho\, A_2 b_2(n) + \eta_1(n), \tag{22}$$

where $\rho = \mathbf{c}_1^{T}\mathbf{c}_2$ is the normalized cross-correlation between the two spreading codes $\mathbf{c}_1$ and $\mathbf{c}_2$, and $\eta_1(n) = \mathbf{c}_1^{T}\mathbf{n}(n)$ is the Gaussian noise at the output of the single user detector, that is, $\eta_1(n) \sim \mathcal{N}(0, \sigma^{2})$. The decision about information bit $b_1(n)$ corresponds to the sign of the decision statistic, that is, $\hat{b}_1(n) = \mathrm{sign}(y_1(n))$. Note that the soft output in (22) contains a mixture of Gaussians. Using (22), we can easily show that the BEP for user 1 is

$$P_e = \frac{1}{2}\, Q\!\left(\frac{A_1 + \rho A_2}{\sigma}\right) + \frac{1}{2}\, Q\!\left(\frac{A_1 - \rho A_2}{\sigma}\right). \tag{23}$$

In the following, we use two given spreading codes $\mathbf{c}_1$ and $\mathbf{c}_2$ with cross-correlation $\rho = \mathbf{c}_1^{T}\mathbf{c}_2$. We consider the case where the two users have equal powers, $A_1 = A_2$. The SNR at the output of the matched filter is therefore the same for both users.
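Under the system model above, the soft outputs of (22) and the closed-form BEP of user 1 can be simulated as follows; the amplitudes, cross-correlation, and noise level used in the demo are illustrative values of our own choosing, not the paper's simulation settings.

```python
import math
import random

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def cdma_soft_outputs(n_bits, a1, a2, rho, sigma, seed=3):
    """Matched-filter soft outputs for user 1 of a two-user synchronous
    CDMA system, as in Eq. (22): y = A1*b1 + rho*A2*b2 + noise."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_bits):
        b1 = rng.choice((-1, 1))
        b2 = rng.choice((-1, 1))
        y = a1 * b1 + rho * a2 * b2 + rng.gauss(0.0, sigma)
        outs.append((b1, y))
    return outs

def exact_bep(a1, a2, rho, sigma):
    """Closed-form bit error probability for user 1, as in Eq. (23)."""
    return 0.5 * (q_func((a1 + rho * a2) / sigma) +
                  q_func((a1 - rho * a2) / sigma))

# Demo: equal powers, moderate cross-correlation and noise
obs = cdma_soft_outputs(20000, 1.0, 1.0, 0.25, 0.6)
mc_ber = sum(1 for b1, y in obs if b1 * y < 0) / len(obs)
th_ber = exact_bep(1.0, 1.0, 0.25, 0.6)
```

The error-counting estimate converges to the closed-form value, but only at the slow Monte Carlo rate; the soft estimators of Sections 2 and 3 are applied to exactly these soft outputs.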

### 4.1. Output Pdf Estimation Comparison

First of all, in this simulation, we compare three different pdf estimators: the histogram method, which corresponds to the MC method, the kernel method, and the Gaussian Mixture method. In order to make a fair comparison, the three methods are used under optimal conditions. In particular, the bin width of the histogram is chosen equal to the smoothing parameter computed for the kernel method, so that convergence of the histogram in the MSE and IMSE sense is guaranteed.

For this first simulation, we have chosen a pdf that is a mixture of three different pdfs following Gaussian, Rayleigh, and Beta (first kind) laws, with fixed parameters and fixed mixture weights. Figure 3 plots the true pdf together with the estimated pdf for the three methods. A large set of samples was generated for this first simulation. The GM method found that 4 components are sufficient to estimate the pdf.

The Integrated Square Error (ISE), defined as the integral of the squared difference between the true and estimated pdfs, has been computed for this simulation. We carried out one hundred different trials and computed the Mean ISE (MISE) and the variance of the ISE for the three methods. Table 1 summarizes these results and shows that the kernel method gives the best estimation of the pdf in the sense of minimum MISE, while the Gaussian Mixture method is the worst under this criterion. So, for general applications such as pattern recognition or speech coding, the kernel method seems to be the best choice. For our application, that is, BER estimation, we do not need an accurate estimate of the whole pdf: we only need an accurate estimate of the tail. That is why, in another simulation, we are interested in estimating the area of the tail delimited by two fixed points. In this region, the tail is a mixture of the Gaussian and Rayleigh laws, and its true area can be computed in closed form. We then estimated this quantity using the three different pdf estimates over one hundred different trials. Table 2 shows the mean and the standard deviation for the three methods. The GM method clearly gives the best estimate of the area. This result will be confirmed in the following subsection for BER estimation in the framework of the CDMA system described at the beginning of Section 4.
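The ISE used in this comparison can be approximated numerically, for example with a trapezoidal rule (a sketch with our own function names):

```python
import math

def ise(f_true, f_est, lo, hi, n=2000):
    """Integrated Square Error between two pdfs over [lo, hi],
    approximated with the trapezoidal rule on n intervals."""
    step = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * step
        d2 = (f_true(x) - f_est(x)) ** 2
        total += d2 if 0 < i < n else 0.5 * d2  # half-weight endpoints
    return total * step

# Sanity check: the ISE of a pdf against itself is zero, and against a
# shifted copy it is strictly positive
f = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
g = lambda x: math.exp(-0.5 * (x - 0.5) ** 2) / math.sqrt(2.0 * math.pi)
zero_ise = ise(f, f, -6.0, 6.0)
pos_ise = ise(f, g, -6.0, 6.0)
```

The same routine, restricted to a sub-interval, approximates the tail-area error discussed above.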

### 4.2. Performance Comparison for the Three Methods

In order to compare the three methods (MC, Kernel, and GM), soft outputs were generated for each SNR value to estimate the BER. Several different trials were simulated, for each method, to compute the mean, minimum, and maximum of the BER values. The number of EM iterations is fixed. All these results are given in Figure 4.

In addition, the mean and the standard deviation, together with the theoretical value of the BER for different values of SNR, are given in Table 3. This table shows that the GM method provides the best performance in the sense of minimum Mean Squared Error of the BER estimation, even if a small bias is observed for medium values of SNR. This bias completely disappears as the number of EM iterations increases, as will be seen in Section 4.3. At the same time, the MC technique fails and stops at moderate SNR because of the very limited number of transmitted information bits and the resulting lack of errors. The kernel method leads to a smaller bias but has the largest standard deviation and is thus less reliable than the GM method.

On the other hand, for the GM method and for a fixed SNR value, we computed the mean BER and the standard deviation for different numbers of soft outputs, over several different trials.

The number of EM iterations is fixed. All these results are given in Table 4. The precision of the estimation, defined as the ratio of the standard deviation to the mean of the BER, does not decrease with the number of samples. This is linked to the bias mentioned above.

Figure 5 shows the performance of the receiver, for the GM method, with different numbers of EM iterations. This figure clearly shows that the accuracy of the GM method increases with the number of iterations.

### 4.3. Performance of GM Method in the High SNR Region

We now test the suggested algorithm under severe conditions, at high values of SNR, while using only a few soft output observations. Figure 6 shows the performance of the receiver (one random simulation run). We obtain a remarkable result: an extremely low BER estimate has been measured, limited only by the computer precision. For each SNR point, the simulation takes only a few seconds, as measured by the CPU-time command of the Matlab software. The run time does not depend on the value of the SNR, which is a huge advantage of our suggested method. With fewer EM iterations, the run time drops below one second for each SNR value.

It is, of course, impossible today to plot this kind of figure using the Monte Carlo method while waiting for 100 errors to obtain an accurate estimate. Suppose that the world contains 10 billion inhabitants, that each one has 10 PCs, and that each PC is 10,000 times more powerful than the one used for our simulation. Suppose also that all these PCs can be used in a parallel and optimized structure. A simple calculation shows that we would still need to wait more than one year to estimate a BER at such a low level, hoping that there is no failure or power cut. The CPU time for the MC method at different values of SNR is given in Table 5.

### 4.4. Comparison with Importance Sampling Technique

Importance sampling, also known as the modified Monte Carlo method, was introduced by Shanmugam and Balaban (see [2]) to estimate error probabilities in digital communications systems. The technique modifies the probability density function of the noise process so as to make the simulation feasible. An estimate of the pdf of the soft decision is constructed in histogram form. The idea is to modify this pdf, that is, the statistical properties of the soft decision sequence, in such a way that errors occur at a higher rate during the simulation. The error count must then be corrected appropriately to obtain an unbiased estimate of the true error probability. The main drawback of importance sampling is the difficulty, for complex systems, of determining which regions of the pdf to bias and how to bias them.

Compared to the classical Monte Carlo method, the importance sampling technique reduces the sample size requirement by a substantial factor. Even granting a large reduction factor, so that the CPU times given in Table 5 for the Monte Carlo simulation are divided accordingly, our suggested GM method still has a huge advantage, as its run time does not depend on the value of the SNR: for each SNR point, the simulation takes only a few seconds (see Section 4.2). Another advantage of the GM method is the possibility of estimating the performance of a system at very low BER (see Figure 6 for the CDMA system case of this paper). The only limitation is the precision of the computer used.

## 5. Conclusions

In this paper, we considered the problem of BER estimation for a digital communications system using any transmission technology or channel coding, with BPSK modulation, under the assumption that the receiver can compute soft decisions. We proposed a BER estimation algorithm that uses only the soft observations that serve for computing hard decisions about the information bits. First, we formulated the problem and showed that BER estimation is equivalent to the estimation of the conditional pdfs of the soft observations corresponding to the transmitted bits equal to $\pm 1$. We then proposed a BER computation technique using Gaussian Mixture based pdf estimation. The Expectation Maximisation (EM) algorithm was used to estimate, in an iterative way, the different parameters of this mixture, that is, the means, the variances, and the a priori probabilities. The analytical expression of the BER then follows simply from the estimated parameters of the Gaussian Mixture. The optimal number of Gaussians was computed using Mutual Information Theory. Finally, we evaluated the performance of the proposed BER estimation technique in the framework of CDMA systems, comparing it by simulation with the MC technique and the kernel method. Interestingly, we showed that while the classical MC method fails to estimate the BER in the high SNR region, the proposed GM estimator provides reliable estimates, better in the sense of minimum Mean Squared Error than the kernel method, using only a few soft observations. An extremely low BER was measured in a few seconds using only a small number of soft output samples.

## References

1. Jeruchim MC: Techniques for estimating the bit error rate in the simulation of digital communication systems. *IEEE Journal on Selected Areas in Communications* 1984, 2(1):153-170. doi:10.1109/JSAC.1984.1146031
2. Shanmugam KS, Balaban P: A modified Monte Carlo simulation technique for the evaluation of error rate in digital communication systems. *IEEE Transactions on Communications* 1980, 28(11):1916-1924. doi:10.1109/TCOM.1980.1094613
3. Gumbel EJ: *Statistics of Extremes*. Columbia University Press, New York, NY, USA; 1958.
4. Abedi A, Thompson ME, Khandani AK: Application of cumulant method in performance evaluation of turbo-like codes. *Proceedings of the IEEE International Conference on Communications (ICC '07)*, June 2007, 986-989.
5. Cavus E, Haymes CL, Daneshrad B: Low BER performance estimation of LDPC codes via application of importance sampling to trapping sets. *IEEE Transactions on Communications* 2009, 57(7):1886-1888.
6. Sakai T, Shibata K: Fast BER estimation of LDPC codes. *Proceedings of the International ITG Conference on Source and Channel Coding*, January 2010, Siegen, Germany, 1-6.
7. Mahadevan A, Morris JM: SNR-invariant importance sampling for hard-decision decoding performance of linear block codes. *IEEE Transactions on Communications* 2007, 55(1):100-111.
8. Dong L, Wenbo W, Yonghua L: Monte Carlo simulation with error classification for multipath Rayleigh fading channel. *Proceedings of the 16th International Conference on Telecommunications (ICT '09)*, 2009, 223-227.
9. Saoudi S, Troudi M, Ghorbel F: An iterative soft bit error rate estimation of any digital communication systems using a nonparametric probability density function. *EURASIP Journal on Wireless Communications and Networking* 2009, 2009:-9.
10. Yang ZR, Zwolinski M: Mutual information theory for adaptive mixture models. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2001, 23(4):396-403. doi:10.1109/34.917574
11. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society, Series B* 1977, 39(1):1-38.
12. Masson P, Pieczynski W: SEM algorithm and unsupervised statistical segmentation of satellite images. *IEEE Transactions on Geoscience and Remote Sensing* 1993, 31(3):618-633. doi:10.1109/36.225529
13. Bouguila N, Ziou D: A hybrid SEM algorithm for high-dimensional unsupervised learning using a finite generalized Dirichlet mixture. *IEEE Transactions on Image Processing* 2006, 15(9):2657-2668.
14. Saoudi S, Ghorbel F, Hillion A: Non-parametric probability density function estimation on a bounded support: applications to shape classification and speech coding. *Applied Stochastic Models and Data Analysis* 1994, 10(3):215-231. doi:10.1002/asm.3150100306
15. Saoudi S, Ghorbel F, Hillion A: Some statistical properties of the kernel-diffeomorphism estimator. *Applied Stochastic Models and Data Analysis* 1997, 13(1):39-58. doi:10.1002/(SICI)1099-0747(199703)13:1<39::AID-ASM292>3.0.CO;2-J
16. Lee Y, Lee KY, Lee J: The estimating optimal number of Gaussian mixtures based on incremental k-means for speaker identification. *International Journal of Information Technology* 2006, 12(7):13-21.
17. Rosenblatt M: Remarks on some non-parametric estimates of a density function. *The Annals of Mathematical Statistics* 1956, 27(3):832-837. doi:10.1214/aoms/1177728190
18. Parzen E: On estimation of a probability density function and mode. *The Annals of Mathematical Statistics* 1962, 33(3):1065-1076. doi:10.1214/aoms/1177704472
19. Kronmal R, Tarter M: The estimation of probability densities and cumulatives by Fourier series methods. *Journal of the American Statistical Association* 1968, 63:925-952. doi:10.2307/2283885
20. Saoudi S, Ait-Idir T, Mochida Y: An unsupervised and nonparametric iterative soft bit error rate estimation. Submitted to the *IEEE International Conference on Communications (ICC 2011)*, 5-9 June 2011, Kyoto, Japan.
21. Titterington D, Smith A, Makov U: *Statistical Analysis of Finite Mixture Distributions*. John Wiley & Sons, New York, NY, USA; 1985.
22. Shannon CE: A mathematical theory of communication. *Bell System Technical Journal* 1948, 27:379-423, 623-656.
23. Verdu S: *Multiuser Detection*. Cambridge University Press, Cambridge, UK; 1998.

## Author information

### Authors and Affiliations

### Corresponding author

## Appendices

### A. Proof of (12)

Proof.

Using (7), the conditional expectation of the log-likelihood function can be written as:

We must maximize taking into account the constraint . If we add a Lagrange multiplier, we get:

Setting the derivative to zero, we find, for

By invoking the fact that and that , it follows from (A.3) that . Equation (12) is then obtained.

### B. Proof of (13)

Proof.

Let us use the conditional expectation of the log-likelihood function given in (A.1) of Appendix A. Setting the derivative to zero, we find, for

Then, for , we have,

### C. Proof of (14)

Proof.

Let us use the conditional expectation of the log-likelihood function given in (A.1) of Appendix A. Setting the derivative to zero, we find, for

Then, for , we have,
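For reference, since the displayed equations above are given as images in the published version, we note that the re-estimation formulas derived in Appendices A, B, and C take the standard EM form for a one-dimensional Gaussian mixture. In the notation below, \(\gamma_{i,k}\) is the responsibility of component \(k\) for sample \(x_i\), \(N\) is the number of samples, and \(K\) the number of components; this is the generic textbook form that (12), (13), and (14) instantiate:

```latex
\gamma_{i,k} = \frac{\pi_k \,\mathcal{N}(x_i;\mu_k,\sigma_k^2)}
                    {\sum_{j=1}^{K}\pi_j\,\mathcal{N}(x_i;\mu_j,\sigma_j^2)},
\qquad
\hat{\pi}_k = \frac{1}{N}\sum_{i=1}^{N}\gamma_{i,k},
\qquad
\hat{\mu}_k = \frac{\sum_{i=1}^{N}\gamma_{i,k}\,x_i}{\sum_{i=1}^{N}\gamma_{i,k}},
\qquad
\hat{\sigma}_k^2 = \frac{\sum_{i=1}^{N}\gamma_{i,k}\,(x_i-\hat{\mu}_k)^2}
                        {\sum_{i=1}^{N}\gamma_{i,k}}.
```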

### D. Proof of (15)

Proof.

By using the expression of the Bit Error Probability (4) and the two conditional pdf estimates, and , we can express the BER estimate as,

Given that the two conditional pdfs are estimated with the Gaussian Mixture-based pdf estimator obtained by the EM algorithm, according to (6), and using the change of variables (resp., ) for (resp., ), we get,

The BER estimate given by (15) is simply obtained by combining (D.1), (D.2) and (D.3).
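Concretely, under the assumptions used throughout the paper (BPSK signalling, decision threshold at zero, equiprobable bits), combining the two fitted conditional mixtures yields a closed form with the structure below, where the superscripts \(+\) and \(-\) distinguish the mixtures fitted to the two conditioned sample sets and \(Q\) is the Gaussian tail function; the exact notation of (15) may differ:

```latex
\hat{P}_e \;=\; \frac{1}{2}\sum_{k=1}^{K^{+}} \hat{\pi}_k^{+}\,
                Q\!\left(\frac{\hat{\mu}_k^{+}}{\hat{\sigma}_k^{+}}\right)
          \;+\; \frac{1}{2}\sum_{k=1}^{K^{-}} \hat{\pi}_k^{-}\,
                Q\!\left(-\frac{\hat{\mu}_k^{-}}{\hat{\sigma}_k^{-}}\right),
\qquad
Q(x) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right).
```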

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Saoudi, S., Derham, T., Ait-Idir, T. *et al.* A Fast Soft Bit Error Rate Estimation Method.
*J Wireless Com Network* **2010**, 372370 (2010). https://doi.org/10.1155/2010/372370

Received:

Revised:

Accepted:

Published:

DOI: https://doi.org/10.1155/2010/372370

### Keywords

- Monte Carlo
- Gaussian Mixture Model
- Code Division Multiple Access
- Expectation Maximization Algorithm
- Code Division Multiple Access System