Distributed joint source-channel code design for GMAC using irregular LDPC codes
 Iqbal Shahid and Pradeepa Yahampath
https://doi.org/10.1186/1687-1499-2014-3
© Shahid and Yahampath; licensee Springer. 2014
Received: 3 September 2013
Accepted: 23 December 2013
Published: 9 January 2014
Abstract
Separate source and channel coding is known to be suboptimal for communicating correlated sources over a Gaussian multiple access channel (GMAC). This paper presents an approach to designing distributed joint source-channel (DJSC) codes for encoding correlated binary sources over a two-user GMAC, using systematic irregular low-density parity check (LDPC) codes. The degree profile defining the LDPC code is optimized for the joint source probabilities using extrinsic information transfer (EXIT) analysis and linear programming. A key issue addressed is the Gaussian modeling of log-likelihood ratios (LLRs) generated by nodes representing the joint source probabilities in the combined factor graph of the two LDPC codes, referred to as source-channel factor (SCF) nodes. It is shown that the analytical expressions based on additive combining of incoming LLRs, as done in variable nodes and parity check nodes of the graph of a single LDPC code, cannot be used with SCF nodes. To this end, we propose a numerical approach based on Monte-Carlo simulations to fit a Gaussian density to the outgoing LLRs from the SCF nodes, which makes the EXIT analysis of the joint decoder tractable. Experimental results are presented which show that LDPC codes designed with the proposed approach outperform previously reported DJSC codes for the GMAC. Furthermore, they demonstrate that when the sources are strongly dependent, the proposed DJSC codes can achieve code rates higher than the theoretical upper bound for separate source and channel coding.
1 Introduction
Wireless communication of multiple correlated information sources to a common receiver has become an important research problem due to potential applications in emerging information-gathering systems such as wireless sensor networks (WSNs) [1]. Most of the recent work on this problem has focused on approaches based on the separation of source and channel coding, which rely on distributed source coding (DSC) [2] to first generate independent bit streams from the two sources and then use a multiple-access method (such as TDMA, FDMA, or CDMA) [3] to convert a multiple-access channel (MAC) into a set of orthogonal channels. While it is known that for communication of correlated sources over orthogonal channels source-channel separation is optimal [3-5], the same does not hold true for a MAC [6-8]. Hence, there can be a loss of performance when separate source and channel coding is used to transmit correlated sources over a MAC. This is because, when the sources are correlated, even if the transmitters cannot communicate with each other, it is possible to generate correlated inputs to a MAC by using a DJSC code and thereby improve the performance relative to a system with independent channel inputs. In contrast, with separate source and channel coding, distributed source coding of correlated sources yields independent bit streams, and hence the MAC inputs cannot be made dependent unless the transmitters are allowed to collaborate in channel coding. Therefore, DJSC coding can be expected to outperform separate source and channel coding in systems such as WSNs, which are equipped with low-complexity, narrowband sensors designed to communicate only with a common information-gathering receiver.
DJSC coding of correlated sources for a GMAC has been studied only sparsely in the literature. While there is no known tractable way to optimize a DJSC code for a given set of correlated sources and a MAC, a suboptimal but effective and tractable framework is to encode each source using an independent channel code in such a manner that the resulting dependence between the MAC input codewords can be exploited by a joint decoder [9, 10]. In particular, the use of a systematic channel code for each source preserves the dependence between the information bits of the MAC input codewords. For example, [9] presents an approach in which much of the correlation between the sources is preserved in the MAC input codewords by encoding each source using a systematic low-density generator matrix (LDGM) code. However, as LDGM codes exhibit a high error floor due to their poor distance properties, this approach requires the use of two concatenated LDGM codes for each source to achieve good performance, which increases the delay and complexity. Furthermore, no known method exists for designing the LDGM codes to ensure that the codes are in some sense matched to the inter-source correlation and the channel noise level. An improved system design based on LDGM codes is presented in [10], which however requires an additional channel between each source and the common receiver. In another closely related work, Roumy et al. [11] consider the joint design of LDPC codes for independent sources transmitted over a two-input GMAC. The design of LDPC codes for correlated sources transmitted over orthogonal channels appears in [12].
In contrast to previous work, in this paper we present a DJSC code design approach for a pair of correlated binary sources, in which the degree profile of a systematic irregular LDPC (SI-LDPC) code is optimized for the joint distribution of the two sources and the signal-to-noise ratio (SNR) of the GMAC. Our motivations for using SI-LDPC codes are the following: (1) systematic codes can be used to exploit the inter-source correlation in joint decoding of the two codes, (2) LDPC codes can be optimized by linear programming, in conjunction with the EXIT analysis of the belief propagation (BP)-based joint decoder, and (3) LDPC codes are known to be capacity achieving in the single-user case [13] and hence can be expected to perform very well in coding correlated sources as well. One of the key issues addressed here is the mutual information computation (as required for EXIT analysis) for messages passed from factor nodes in the joint factor graph of the two LDPC codes, referred to as source-channel factor (SCF) nodes, which represent the joint probabilities of the two sources and the output conditional probability density function (pdf) of the GMAC. It is shown that the analytical computation of mutual information based on additive combining of incoming LLRs in variable nodes and parity check nodes of a factor graph, as done for single LDPC codes, does not apply to SCF nodes. In order to make the mutual information computation in the EXIT analysis tractable using a Gaussian approximation [14], we propose a simple numerical approach based on Monte-Carlo simulations to fit a Gaussian pdf to the outgoing LLRs from the SCF nodes. Simulation results show that codes designed with this method not only outperform previously reported GMAC codes for both independent and correlated sources [9, 11], but can also achieve code rates higher than the theoretical upper bound for independent sources over the same GMAC when the sources are strongly dependent.
This paper is organized as follows: Section 2 formulates the DJSC code design problem addressed in this paper, and the code optimization procedure is presented in Section 3. Section 4 studies the problem of modeling the pdf of the outgoing LLRs from SCF nodes and presents a numerical method for computing the mutual information of these messages in the EXIT analysis. Section 5 presents and discusses the simulation results. Conclusions are given in Section 6.
2 Problem setup
The optimal DJSC code for the given GMAC must induce a distribution P(X_{1},X_{2}) which maximizes the sum rate of the two channel inputs [3]. While there appears to be no known tractable approach for designing such a code, the main idea pursued in this paper is to use a systematic channel code for each source, so that the two sources are essentially transmitted directly over the GMAC and therefore P(X_{1},X_{2}) = P(U_{1},U_{2}) for the systematic bits of the channel input codewords (the parity bits of each source are related to the information bits as given by the parity-check equations of the code [15]). The two channel codes are decoded by a joint decoder which observes the channel output Y. This approach essentially exploits the inter-source correlation to enhance the performance of the channel codes. In particular, if the two sources are independent, then each channel code requires a sufficient number of parity bits to correct the errors due to channel noise and the mutual interference between the two independent bit streams. However, when the two sources are dependent, the joint distribution P(X_{1},X_{2}) of the information bits of the two channel input codewords provides an additional joint decoding gain, and hence the number of parity bits required for encoding each source is reduced, or equivalently, the achievable sum rate is higher. With practical (finite-length) channel codes, this implies that the same decoding error probability can be achieved at a higher sum rate. Note that by construction, the aforementioned DJSC coding scheme requires that the code length n and the number of systematic information bits m (and hence the code rate R_{ c } = m/n) be identical for both sources; therefore, the resulting designs correspond to symmetric rates. Achieving asymmetric rates would likely require some form of rate splitting [16] and is not considered in this paper.
The code design approach presented in this paper is based on systematic irregular LDPC (SI-LDPC) codes. First, consider an n-bit SI-LDPC code [17] whose parity check matrix H can be represented by a factor graph with code-bit variable (CBV) nodes x(1),…,x(n), parity-check factor (PCF) nodes (representing the parity-check equations), channel output variable (COV) nodes y(1),…,y(n), and channel factor (CF) nodes. In the case of a Gaussian channel, a CF node represents the conditional pdf p(y(i)|x(i)). The channel outputs are decoded by applying the BP algorithm to the factor graph [17]. For the purpose of code design, a length-n SI-LDPC code can be completely specified by the parameters (n,λ(x),ρ(x)), where $\lambda (x)={\sum}_{i=2}^{{d}_{\text{vmax}}}{\lambda}_{i}{x}^{i-1}$ and $\rho (x)={\sum}_{i=2}^{{d}_{\text{cmax}}}{\rho}_{i}{x}^{i-1}$ are the edge-perspective degree polynomials of the variable nodes and parity check nodes, respectively, and λ_{ i } (resp. ρ_{ i }) is the fraction of edges connected to CBV (resp. PCF) nodes of degree i (the degree of a node is the number of edges connected to it), satisfying the constraints ${\sum}_{i}{\lambda}_{i}=1$ and ${\sum}_{i}{\rho}_{i}=1$ [17]. The parameters d_{cmax} and d_{vmax} are typically chosen in such a manner that the sparsity of the corresponding factor graph is maintained (i.e., the number of edges in the factor graph grows linearly with the codeword length [17]). It is known that a concentrated degree polynomial of the form ρ(x) = ρx^{s-2} + (1-ρ)x^{s-1} for some s ≥ 2 and 0 < ρ ≤ 1 is sufficient for achieving near-optimal performance ([13], Theorem 2).
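The edge-perspective polynomials above also determine the design rate of the code, R_{c} = 1 - (Σ_{i} ρ_{i}/i)/(Σ_{i} λ_{i}/i). A minimal illustrative sketch of this bookkeeping (not from the paper; the dictionary representation is our own convention):

```python
def design_rate(lambda_, rho):
    """R_c = 1 - (sum_i rho_i/i) / (sum_i lambda_i/i), where lambda_ and rho
    map node degree i -> fraction of edges attached to nodes of that degree
    (edge perspective)."""
    int_lambda = sum(f / d for d, f in lambda_.items())
    int_rho = sum(f / d for d, f in rho.items())
    return 1.0 - int_rho / int_lambda

# Example: a regular (3,6) LDPC code has design rate 1/2.
print(design_rate({3: 1.0}, {6: 1.0}))
```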
In the factor graph representation of (3), each factor node represents a term in the product [17]. As usual, the factors $\mathbb{I}\{{\underset{\_}{x}}_{1}\in \mathcal{C}\}$ and $\mathbb{I}\{{\underset{\_}{x}}_{2}\in \mathcal{C}\}$ are represented by the PCF nodes of the two codes, respectively. On the other hand, each term ϕ_{ i }(·), i = 1,…,n, which is a function of the code bits x_{1}(i) and x_{2}(i) and the channel output y(i), is represented by an SCF node, as shown in Figure 3. As the codes are systematic, for the information bits P(x_{1}(i),x_{2}(i)) is identical to the joint distribution P(u_{1},u_{2}) of the source bits. For the parity bits of an LDPC code (which has a dense generator matrix), it can be assumed that P(x_{1}(i),x_{2}(i)) = P(x_{1}(i))P(x_{2}(i)) with P(x_{1}(i)) = P(x_{2}(i)) = 0.5 [18].
Sparse parity-check matrices obtained through the EXIT-analysis design procedure do not necessarily correspond to systematic generator matrices. As usual, the codes can be converted to systematic form by Gaussian elimination. However, the resulting codes have dense parity-check matrices, which makes the computational complexity of BP decoding impractically high. In order to get around this problem, a bit-remapping operation is used in the joint decoder to rearrange the systematic code bits, so that the codewords correspond to the sparse matrices, as shown in Figure 3.
3 Code optimization
A well-known and simple method for constructing a near-capacity-achieving SI-LDPC code for a single-input AWGN channel with noise variance σ^{2} and some fixed ρ(x) is to determine the coefficients λ_{ i } which maximize the rate of the code under BP decoding, subject to a Gaussian approximation (GA) for the messages passed in the decoder [13]. The code design in this case is a linear programming problem of the form ([17], Ch. 4): ${\text{maximize}}_{{\lambda}_{i}}\ {\sum}_{i\ge 2}{\lambda}_{i}/i$, subject to the constraints (1) ${\sum}_{i}{\lambda}_{i}=1,\ (0<{\lambda}_{i}\le 1)$ (normalization constraint), (2) ${\lambda}_{2}<\exp(\frac{1}{2{\sigma}^{2}})/{\sum}_{j}(j-1){\rho}_{j}$ (stability condition), and (3) a linear inequality ensuring the convergence of the BP algorithm ([13], Sec. 3) (decoder convergence constraint).
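As a sketch of this formulation, the LP can be set up directly with `scipy.optimize.linprog`; the function and parameter names below are ours, and the decoder-convergence rows (which come from the EXIT recursion of Section 4) are left as an optional placeholder argument rather than derived here:

```python
import numpy as np
from scipy.optimize import linprog

def optimize_lambda(rho, sigma2, d_vmax, A_conv=None, b_conv=None):
    """Maximize sum_i lambda_i / i over lambda_2..lambda_{d_vmax}.

    rho maps check-node degree j -> edge fraction rho_j. A_conv/b_conv, if
    given, hold the linear decoder-convergence rows from the EXIT analysis
    (A_conv @ lambda <= b_conv); they are omitted in this sketch."""
    degrees = np.arange(2, d_vmax + 1)
    c = -1.0 / degrees                        # linprog minimizes, so negate
    A_eq = np.ones((1, degrees.size))         # normalization: sum lambda_i = 1
    b_eq = [1.0]
    # Stability: lambda_2 < exp(1/(2 sigma^2)) / sum_j (j-1) rho_j
    rho_prime1 = sum((j - 1) * f for j, f in rho.items())
    lam2_max = np.exp(1.0 / (2.0 * sigma2)) / rho_prime1
    A_ub = np.zeros((1, degrees.size))
    A_ub[0, 0] = 1.0
    b_ub = [lam2_max]
    if A_conv is not None:                    # append convergence constraints
        A_ub = np.vstack([A_ub, A_conv])
        b_ub = b_ub + list(b_conv)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * degrees.size)
    return dict(zip(degrees.tolist(), res.x))
```

Without the convergence rows, the LP simply pushes as much edge mass as allowed onto the low degrees, so the example is structural only; a real design iterates this LP together with the EXIT analysis.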
for k = 1,2. In the following, ${I}_{v\to p}^{(k)}$ will be shown to be linear in the λ_{ i }. Since the objective function and the constraints are all linear in the code parameters λ_{ i }, the problem can be solved by linear programming. The rest of this section is devoted to the EXIT analysis of BP decoding on the joint factor graph and the iterative computation of ${I}_{v\to p}^{(k)}(l)$.
The details of the BP decoding algorithm and the EXIT analysis for single-user LDPC codes can be found in [17, 19]. In EXIT analysis, the analytical computation of mutual information is feasible only if the outgoing LLRs from the nodes in the factor graph have a Gaussian (or Gaussian-mixture) distribution [19]. When the channel is a binary-input AWGN (BI-AWGN) channel, the outgoing LLR values are Gaussian distributed [14]. For other types of channels, the Gaussian model of the LLRs is known to be a good approximation, due to the universality of LDPC codes (a code designed for one type of channel performs well on another type of channel) [15]. The analytical expressions for the iterative mutual information updates through CBV nodes and PCF nodes in EXIT analysis are well known [17]. In particular, the mutual information update through a CBV node stems from the message update through that variable node and the central limit theorem: since an outgoing message of a given node has a mean equal to the sum of the means of the incoming messages to that node, given a reasonably high node degree, the outgoing message is approximately Gaussian. The mutual information update for a PCF node, on the other hand, relies on the deterministic relationship between the PCF node and the CBV nodes connected to it, as defined by the parity-check equations. As a result, the mutual information update for a PCF node can be computed by simply using the duality relationship that it has with a CBV node [20].
4 EXIT analysis
Let the mutual information between two random variables r and s be I(r;s) and define the following: ${I}_{v\to p}^{(k)}\triangleq I({X}_{k};{m}_{v\to p}^{(k)})$, ${I}_{p\to v}^{(k)}\triangleq I({X}_{k};{m}_{p\to v}^{(k)})$, ${I}_{v\to s}^{(k)}\triangleq I({X}_{k};{m}_{v\to s}^{(k)})$, and ${I}_{s\to v}^{(k)}\triangleq I({X}_{k};{m}_{s\to v}^{(k)})$. In the (l+1)th iteration of the EXIT analysis, given ${I}_{v\to p}^{(k)}(l)$ and ${I}_{v\to s}^{(k)}(l)$, ${I}_{p\to v}^{(k)}(l+1)$ and ${I}_{s\to v}^{(k)}(l+1)$ are first updated, followed by ${I}_{v\to p}^{(k)}(l+1)$ and ${I}_{v\to s}^{(k)}(l+1)$. The convergence of the degree polynomial λ(x) to a valid code is then verified by (5).
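The mutual information of a "consistent" Gaussian LLR message (mean μ, variance 2μ, conditioned on the transmitted symbol) is the J-function of EXIT analysis, I = 1 - E[log₂(1 + e^{-L})]. A small Monte-Carlo sketch of this quantity (illustrative, not the paper's code; the sample size and seed are arbitrary choices):

```python
import numpy as np

def mi_consistent_gaussian(mu, n=200_000, seed=0):
    """Estimate I(X; L) for a consistent Gaussian LLR L ~ N(mu, 2*mu)
    conditioned on X = +1, via I = 1 - E[log2(1 + exp(-L))]."""
    rng = np.random.default_rng(seed)
    L = rng.normal(mu, np.sqrt(2.0 * mu), n)
    # logaddexp(0, -L) = log(1 + exp(-L)) computed without overflow
    return 1.0 - np.mean(np.logaddexp(0.0, -L)) / np.log(2.0)
```

The estimate goes to 0 as μ → 0 (uninformative messages) and to 1 as μ grows (near-certain messages), which is the behaviour the EXIT recursion tracks from iteration to iteration.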
For approximating an arbitrary distribution by a Gaussian, transformation-based methods are widely used; see [22, 23]. These methods are essentially parametric, with the parameter estimation usually done through methods such as maximum likelihood (ML) or Bayesian inference. They also require performing the inverse transform operation once the required processing has been done on the Gaussian density. For our problem, both the parameter estimation and the inverse operation can make the LDPC code optimization algorithm intractable. Since our code optimization procedure is based on the mutual information transfer through SCF nodes as computed in (17), we seek a computationally simple Gaussian approximation which yields the maximum mutual information ${I}_{s\to v}^{(1)}$ for the messages ${m}_{s\to v}^{(1)}$, for given ${\mu}_{v\to s}^{(2)}$, α, and σ^{2}. To this end, we consider the following three approaches.

- Mean-matched Gaussian approximation: the mean μ is estimated from the observations, and the variance is set to 2μ.
- Mode-matched Gaussian approximation: the mode m of the pdf is estimated from the observations, and we set the mean μ = m and the variance to 2μ. Fitting a Gaussian distribution at the mode of an arbitrary distribution is closely related to the Laplace approximation [24].
- Two-component Gaussian mixture approximation: the density is approximated by fitting a two-component Gaussian mixture ${a}_{1}\mathcal{N}({\mu}_{1},{\sigma}_{1}^{2})+{a}_{2}\mathcal{N}({\mu}_{2},{\sigma}_{2}^{2})$, where ${\mu}_{1},{\mu}_{2},{\sigma}_{1}^{2},{\sigma}_{2}^{2},{a}_{1}$, and a_{2} are estimated from the observations.

- Step 1: Given the mean value ${\mu}_{v\to s}^{(\bar{k})}$, generate a sufficiently large number N of samples of ${m}_{v\to s}^{(\bar{k})}\sim \mathcal{N}({\mu}_{v\to s}^{(\bar{k})},2{\mu}_{v\to s}^{(\bar{k})})$.
- Step 2: Given P(X_{1},X_{2}) and σ^{2}, generate N samples from the pdf of the GMAC output y.
- Step 3: Use (9) (if k = 1) or (10) (if k = 2) to compute the corresponding N samples of ${m}_{s\to v}^{(k)}$. Estimate the mean m of the pdf of ${m}_{s\to v}^{(k)}$ using either the mean-matched or the mode-matched approximation described above. Set ${\mu}_{s\to v}^{(k)}=m$ and $\text{var}({m}_{s\to v}^{(k)})=2m$.

In the case of the Gaussian mixture approximation, the mean values μ_{1}, μ_{2} and the weights a_{1}, a_{2} can be estimated from the sample set of ${m}_{s\to v}^{(k)}$ [25].
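The fitting operation in Step 3 can be sketched as below. This is an illustrative stand-in, not the paper's implementation: the mode is located by a simple histogram peak, which is only one of several possible mode estimators, and the function name is ours.

```python
import numpy as np

def fit_consistent_gaussian(samples, method="mode", bins=100):
    """Fit a consistent Gaussian N(mu, 2*mu) to a sample of outgoing LLRs.

    method='mean': mean-matched fit, mu = sample mean.
    method='mode': mode-matched fit, mu = centre of the tallest histogram bin
    (a crude but cheap mode estimator). Returns (mu, variance)."""
    samples = np.asarray(samples, dtype=float)
    if method == "mean":
        mu = samples.mean()
    else:
        counts, edges = np.histogram(samples, bins=bins)
        k = counts.argmax()
        mu = 0.5 * (edges[k] + edges[k + 1])  # centre of the peak bin
    return mu, 2.0 * mu
```

For a symmetric unimodal sample both fits agree; for the skewed or bimodal SCF-node densities discussed in Section 5 they diverge, which is exactly why the choice of fit matters.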
5 Simulation results
In this section, we present simulation results obtained by designing DJSC codes for a pair of uniformly distributed binary sources (whose statistical dependence is given by α) and a GMAC with noise variance σ^{2}.
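For concreteness, correlated bit pairs of this kind can be generated as follows. We assume here that α plays the role of a crossover probability P(u₁ ≠ u₂), so that α = 0.5 corresponds to independent sources; the exact parameterization used in the paper is the one defined earlier in the text, and the function name is ours.

```python
import numpy as np

def correlated_binary_pair(n, alpha, seed=0):
    """Draw n pairs (u1, u2) of uniformly distributed bits whose dependence
    is controlled by alpha, taken here as P(u1 != u2)."""
    rng = np.random.default_rng(seed)
    u1 = rng.integers(0, 2, n)           # uniform bits for source 1
    flip = rng.random(n) < alpha         # disagree with probability alpha
    u2 = u1 ^ flip.astype(u1.dtype)      # source 2: u1 passed through a BSC
    return u1, u2
```

Because u₁ is uniform and the flips are independent of it, u₂ is also uniform, so both marginals match the setup above while the joint distribution is controlled entirely by α.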
First, we investigate the impact of the three message-density approximations considered in Section 4. As evidenced by Figures 6 and 7, codes designed using the mode-matched approximation give the maximum output mutual information from an SCF node. In order to compare the three approximation methods on the basis of code performance, in Figure 8 we present the probability of decoding error of codes designed using each of these methods. Here, the correlation parameter α and the channel noise variance σ^{2} are identical to those used in Figure 7 (for which the density of outgoing messages from SCF nodes is bimodal). These results confirm that the mode-matched approximation tends to yield the best codes, owing to the skewed nature of the pdf of the output messages from SCF nodes. For example, at an error probability of 10^{-6}, the codeword length required with the mode-matched approximation is approximately 1.7 × 10^{4} bits, while that with the mean-matched approximation is approximately 3.8 × 10^{4} bits. In obtaining the simulation results in the rest of this section, we have used the mode-matched approximation.
Degree profiles for LDPC codes generated by the proposed design algorithm

σ^{2} = 0.3: ρ(x) = x^{9}; λ(x) = 0.284x + 0.3124x^{2} + 0.0222x^{4} + 0.1344x^{7} + 0.0977x^{8} + 0.1277x^{19} + 0.08x^{99}; R_{ c } = 0.5614, I_{ ind } = 0.5014, I_{ c } = 0.6328
σ^{2} = 0.4: ρ(x) = x^{8}; λ(x) = 0.3012x + 0.3321x^{2} + 0.0982x^{3} + 0.1322x^{10} + 0.1363x^{99}; R_{ c } = 0.5226, I_{ ind } = 0.4731, I_{ c } = 0.6017
σ^{2} = 0.5: ρ(x) = x^{7}; λ(x) = 0.3111x + 0.3544x^{2} + 0.1655x^{3} + 0.0786x^{11} + 0.0321x^{16} + 0.0583x^{99}; R_{ c } = 0.4857, I_{ ind } = 0.4393, I_{ c } = 0.5744
σ^{2} = 0.6: ρ(x) = x^{6}; λ(x) = 0.4411x + 0.4235x^{2} + 0.0233x^{17} + 0.0321x^{17} + 0.0682x^{99}; R_{ c } = 0.4528, I_{ ind } = 0.4113, I_{ c } = 0.5509

- Scheme 1: Regardless of the actual inter-source correlation α, the two sources are assumed to be independent (α = 0.5) in code design as well as in decoding. Essentially, these codes can at best achieve the channel capacity I(X_{1},X_{2};Y | α = 0.5). We denote this scheme by (α_{design} = 0.5, α_{decode} = 0.5).
- Scheme 2: Independent sources are assumed for code design (α_{design} = 0.5), but the actual value of α is used in joint decoding. We denote this scheme by (α_{design} = 0.5, α_{decode} = α_{actual}).
- Scheme 3: The actual value of α is used both in code design and in joint decoding. We denote this scheme by (α_{design} = α_{decode} = α_{actual}).
Degree profiles for LDPC codes used in Figure 14

Code rate R_{ c } = 0.3: ρ(x) = 0.3x^{7} + 0.7x^{8}; λ(x) = 0.145x + 0.231x^{2} + 0.154x^{3} + 0.086x^{5} + 0.055x^{11} + 0.329x^{48}
Code rate R_{ c } = 0.5: ρ(x) = 0.3x^{7} + 0.7x^{8}; λ(x) = 0.308x + 0.282x^{2} + 0.121x^{3} + 0.176x^{6} + 0.021x^{22} + 0.092x^{48}
Code rate R_{ c } = 0.6: ρ(x) = 0.29x^{7} + 0.71x^{8}; λ(x) = 0.398x + 0.321x^{2} + 0.021x^{3} + 0.179x^{4} + 0.032x^{40} + 0.049x^{48}
6 Conclusions
An approach to designing DJSC codes with symmetric rates for a pair of correlated binary sources transmitted over a GMAC, based on SI-LDPC codes, has been developed. For the EXIT analysis of the joint BP decoder for the two sources, the accurate modeling of the density of the outgoing LLRs from the factor nodes in the combined factor graph of the two LDPC codes which represent the joint source probabilities and the GMAC output conditional density (SCF nodes) has been investigated. While a tractable analytical expression appears difficult to obtain, a numerical method appropriate for EXIT analysis has been proposed for fitting a Gaussian or Gaussian-mixture model to the density of the outgoing LLRs from SCF nodes. Experimental results have been presented which show that SI-LDPC codes designed with this approach outperform previously reported DJSC codes. Furthermore, these results demonstrate that, for strongly dependent sources, the proposed DJSC codes can achieve code rates higher than the theoretical upper bound for independent sources over the same GMAC.
Appendix
from which (9) follows.
Declarations
Acknowledgements
This work has been supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
References
1. Culler D, Estrin D, Srivastava M: Overview of sensor networks. IEEE Comput. 2004, 37(8):41-49.
2. Slepian D, Wolf JK: Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19(4):471-480. 10.1109/TIT.1973.1055037
3. Cover T, Thomas J: Elements of Information Theory. New York: Wiley-Interscience; 2006.
4. Shamai SS, Verdu S: Capacity of channels with uncoded-message side-information. Proceedings of the International Symposium on Information Theory, Whistler, 17-22 September 1995, 77.
5. Barros J, Servetto SD: Network information flow with correlated sources. IEEE Trans. Inf. Theory 2006, 52(1):155-170.
6. Cover T, El Gamal A, Salehi M: Multiple-access channel with arbitrarily correlated sources. IEEE Trans. Inf. Theory 1980, IT-26(6):648-657.
7. Ray S, Medard M, Effros M, Koetter R: On separation for multiple access channels. Proceedings of the IEEE Inf. Theory Workshop, Chengdu, 22-26 October 2006, 399-403.
8. Pradhan SS, Choi S, Ramachandran K: A graph-based framework for transmission of correlated sources over multiple-access channels. IEEE Trans. Inf. Theory 2007, 53(12):4583-4604.
9. Garcia-Frias J, Zhao Y, Zhong W: Turbo-like codes for transmission of correlated sources over noisy channels. IEEE Signal Process. Mag. 2007, 24: 58-66.
10. Murugan AD, Gopala PK, El Gamal H: Correlated sources over wireless channels: cooperative source-channel coding. IEEE J. Selected Areas Commun. 2004, 22(6):988-998. 10.1109/JSAC.2004.830889
11. Roumy A, Declercq D: Characterization and optimization of LDPC codes for the 2-user Gaussian multiple access channel. EURASIP J. Wireless Commun. Netw. 2007, Article ID 74890.
12. Shahid I, Yahampath P: Distributed joint source-channel coding using unequal error protection LDPC codes. IEEE Trans. Commun. 2013, 61(8):3472-3482.
13. Chung SY, Forney GD, Richardson TJ, Urbanke R: On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit. IEEE Commun. Lett. 2001, 5(2):58-60.
14. Chung SY, Richardson T, Urbanke R: Analysis of sum-product decoding of LDPC codes using a Gaussian approximation. IEEE Trans. Inf. Theory 2001, 47(2):657-670. 10.1109/18.910580
15. Ryan WE, Lin S: Channel Codes: Classical and Modern. Cambridge: Cambridge University Press; 2009.
16. Rimoldi B, Urbanke R: A rate-splitting approach to the Gaussian multiple-access channel. IEEE Trans. Inf. Theory 1996, 42(2):364-375. 10.1109/18.485709
17. Richardson T, Urbanke R: Modern Coding Theory. Cambridge: Cambridge University Press; 2008.
18. Sartipi M, Fekri F: Distributed source coding using short to moderate length rate-compatible LDPC codes: the entire Slepian-Wolf region. IEEE Trans. Commun. 2008, 56(3):400-411.
19. Richardson T, Shokrollahi A, Urbanke R: Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inf. Theory 2001, 47(2):619-637. 10.1109/18.910578
20. ten Brink S, Kramer G, Ashikhmin A: Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun. 2004, 52(4):670-678. 10.1109/TCOMM.2004.826370
21. ten Brink S: Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 2001, 49: 1727-1737. 10.1109/26.957394
22. Box GEP, Cox DR: An analysis of transformations. J. R. Stat. Soc. B 1964, 26: 211-252.
23. Gasser T, Bacher P, Mocks J: Transformations towards the normal distribution of broad band spectral parameters of the EEG. Electroencephalogr. Clin. Neurophysiol. 1982, 53: 119-124. 10.1016/0013-4694(82)90112-2
24. Azevedo-Filho A, Shachter RD: Laplace's method approximations for probabilistic inference in belief networks with continuous variables. 10th Conference on Uncertainty in Artif. Intell., Seattle, 29-31 July 1994, 28-36.
25. Cohen AC: Estimation in mixtures of two Gaussian distributions. Technometrics 1967, 9(1):15-28. 10.1080/00401706.1967.10490438
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.