
Distributed joint source-channel code design for GMAC using irregular LDPC codes

Abstract

Separate source and channel coding is known to be sub-optimal for communicating correlated sources over a Gaussian multiple access channel (GMAC). This paper presents an approach to designing distributed joint source-channel (DJSC) codes for encoding correlated binary sources over a two-user GMAC, using systematic irregular low-density parity check (LDPC) codes. The degree profile defining the LDPC code is optimized for the joint source probabilities using extrinsic information transfer (EXIT) analysis and linear programming. A key issue addressed is the Gaussian modeling of log-likelihood ratios (LLRs) generated by nodes representing the joint source probabilities in the combined factor graph of the two LDPC codes, referred to as source-channel factor (SCF) nodes. It is shown that the analytical expressions based on additive combining of incoming LLRs, as done in variable nodes and parity check nodes of the graph of a single LDPC code, cannot be used with SCF nodes. To this end, we propose a numerical approach based on Monte-Carlo simulations to fit a Gaussian density to outgoing LLRs from the SCF nodes, which makes the EXIT analysis of the joint decoder tractable. Experimental results are presented which show that LDPC codes designed with the proposed approach outperform previously reported DJSC codes for the GMAC. Furthermore, they demonstrate that when the sources are strongly dependent, the proposed DJSC codes can achieve code rates higher than the theoretical upper-bound for separate source and channel coding.

1 Introduction

Wireless communication of multiple correlated information sources to a common receiver has become an important research problem due to potential applications in emerging information gathering systems such as wireless sensor networks (WSNs) [1]. Most of the recent work on this problem has focused on approaches based on the separation of source and channel coding, which rely on using distributed source coding (DSC) [2] to first generate independent bit streams from the two sources and then use a multiple-access method (such as TDMA, FDMA, or CDMA) [3] to convert a multiple-access channel (MAC) into a set of orthogonal channels. While it is known that for communication of correlated sources over orthogonal channels the source-channel separation is optimal [3–5], the same does not hold true for a MAC [6–8]. Hence, there can be a loss of performance when separate source and channel coding is used to transmit correlated sources over a MAC. This is because when the sources are correlated, even if the transmitters cannot communicate with each other, it is possible to generate correlated inputs to a MAC by using a DJSC code and thereby improve the performance relative to a system with independent channel inputs. In contrast, with separate source and channel coding, distributed source coding of correlated sources yields independent bit streams, and hence the MAC inputs cannot be made dependent unless the transmitters are allowed to collaborate in channel coding. Therefore, DJSC coding can be expected to outperform separate source and channel coding in systems such as WSNs, which are equipped with low-complexity, narrow-band sensors designed to communicate only with a common information gathering receiver.

DJSC coding of correlated sources for a GMAC has received only sparse treatment in the literature. While there is no known tractable way to optimize a DJSC code for a given set of correlated sources and a MAC, a sub-optimal but effective and tractable framework is to encode each source using an independent channel code in such a manner that the resulting dependence between the MAC input codewords can be exploited by a joint decoder [9, 10]. In particular, the use of a systematic channel code for each source preserves the dependence between the information bits of the MAC input codewords. For example, [9] presents an approach in which much of the correlation between the sources is preserved in the MAC input codewords by encoding each source using a systematic low-density generator matrix (LDGM) code. However, as LDGM codes exhibit a high error floor due to their poor distance properties, this approach requires the use of two concatenated LDGM codes for each source to achieve good performance, which increases the delay and complexity. Furthermore, no known method exists for designing the LDGM codes to ensure that the codes are in some sense matched to the inter-source correlation and the channel noise level. An improved system design based on LDGM codes is presented in [10], which however requires an additional channel between each source and the common receiver. In another closely related work, Roumy et al. [11] consider the joint design of LDPC codes for independent sources transmitted over a two-input GMAC. The design of LDPC codes for correlated sources transmitted over orthogonal channels appears in [12].

In contrast to previous work, in this paper we present a DJSC code design approach for a pair of correlated binary sources, in which the degree profile of a systematic irregular LDPC (SI-LDPC) code is optimized for the joint distribution of the two sources and the signal-to-noise ratio (SNR) of the GMAC. Our motivations for using SI-LDPC codes are the following: (1) systematic codes can be used to exploit inter-source correlation in joint decoding of the two codes, (2) LDPC codes can be optimized by linear programming, in conjunction with the EXIT analysis of the belief propagation (BP)-based joint decoder, and (3) LDPC codes are known to be capacity achieving for the single-user case [13] and hence can be expected to exhibit very good performance in coding correlated sources as well. One of the key issues addressed here is the mutual information computation (as required for EXIT analysis) for messages passed from factor nodes in the joint factor graph of the two LDPC codes, referred to as source-channel factor (SCF) nodes, which represent the joint probabilities of the two sources and the output conditional probability density function (pdf) of the GMAC. It is shown that the analytical computation of mutual information based on additive combining of incoming LLRs in variable nodes and parity check nodes of a factor graph, as done for single LDPC codes, does not apply to SCF nodes. In order to make the mutual information computation in the EXIT analysis tractable using a Gaussian approximation [14], we propose a simple numerical approach based on Monte-Carlo simulations to fit a Gaussian pdf to the outgoing LLRs from the SCF nodes. Simulation results show that codes designed with this method not only outperform previously reported GMAC codes for both independent and correlated sources [9, 11], but can also achieve code rates higher than the theoretical upper-bound for independent sources over the same GMAC, when the sources are strongly dependent.

This paper is organized as follows: Section 2 formulates the DJSC code design problem addressed in this paper, and Section 3 presents the code optimization procedure. Section 4 studies the problem of modeling the pdf of outgoing LLRs from SCF nodes and presents a numerical method for computing the mutual information of these messages in the EXIT analysis. Section 5 presents and discusses the simulation results. Conclusions are given in Section 6.

2 Problem setup

A block diagram of the system under consideration is shown in Figure 1. Let $U_1$ and $U_2$ be two dependent, uniform binary sources. Let the dependence between the two sources be described by the inter-source correlation parameter $\alpha \in [0, 0.5]$, where $P(U_1 \ne U_2) = \alpha$. A sequence of source bits from $U_k$, $k = 1,2$, is encoded by a source-channel code to produce a channel codeword whose bits are modulated to produce the GMAC input $X_k \in \{+1,-1\}$ in the equivalent base-band representation. The output $Y \in \mathbb{R}$ of the GMAC is given by

$$Y = X_1 + X_2 + W,$$
(1)

where $W \sim \mathcal{N}(0,\sigma^2)$ is the channel noise and $\mathcal{N}(\mu,\sigma^2)$ denotes the Gaussian pdf with mean $\mu$ and variance $\sigma^2$. In general, the maximum sum rate achievable over a GMAC for two dependent sources is not known. However, when the sources are independent, i.e., $\alpha = 0.5$, it is known that the maximum sum rate achievable by any code design is $I(X_1,X_2;Y)$ [3]. In the case of dependent sources, we can consider $I(X_1,X_2;Y)$ as an upper-bound on the achievable sum rate. Consider the direct transmission of the sources, so that $P(X_1,X_2) = P(U_1,U_2)$. In Figure 2, we have plotted $I(X_1,X_2;Y)$ of a GMAC with noise variance $\sigma^2$ as a function of $\alpha$ for different values of $\sigma^2$. Notice that the maximum of $I(X_1,X_2;Y)$ depends on $(\alpha,\sigma^2)$, which shows that optimizing the codes for the values of these parameters will result in higher sum rates for dependent sources over the same GMAC, compared to independent sources.
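To make the dependence of $I(X_1,X_2;Y)$ on $(\alpha,\sigma^2)$ concrete, the curve of Figure 2 can be reproduced numerically. The sketch below (our own illustration, not code from the paper) evaluates $I(X_1,X_2;Y) = h(Y) - h(W)$ by numerical integration under the direct-transmission assumption $P(X_1,X_2) = P(U_1,U_2)$; the integration grid and the example values of $\alpha$ are arbitrary choices.

```python
import numpy as np

def gmac_mutual_information(alpha, sigma2, grid=None):
    """Evaluate I(X1,X2;Y) = h(Y) - h(W) for Y = X1 + X2 + W, W ~ N(0, sigma2),
    with BPSK inputs having uniform marginals and P(X1 != X2) = alpha."""
    if grid is None:
        grid = np.linspace(-15, 15, 6001)
    # Joint input pmf implied by uniform sources with P(U1 != U2) = alpha
    pmf = {(+1, +1): (1 - alpha) / 2, (-1, -1): (1 - alpha) / 2,
           (+1, -1): alpha / 2, (-1, +1): alpha / 2}
    gauss = lambda y, mu: np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    # Output pdf p(y): Gaussian mixture centred at x1 + x2 in {-2, 0, +2}
    py = sum(p * gauss(grid, x1 + x2) for (x1, x2), p in pmf.items())
    h_y = -np.trapz(py * np.log2(py + 1e-300), grid)   # differential entropy h(Y)
    h_w = 0.5 * np.log2(2 * np.pi * np.e * sigma2)     # h(Y | X1, X2) = h(W)
    return h_y - h_w

# Qualitative behaviour of Figure 2: I(X1,X2;Y) grows as the sources become
# more strongly dependent (alpha -> 0)
for a in (0.5, 0.3, 0.1, 0.01):
    print(f"alpha = {a:4.2f}: I(X1,X2;Y) = {gmac_mutual_information(a, 1.0):.3f} bits")
```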

Figure 1. Block diagram of the system considered in this paper.

Figure 2. $I(X_1,X_2;Y)$ as a function of the inter-source correlation parameter $\alpha$ for different values of the GMAC noise variance $\sigma^2$.

The optimal DJSC code for the given GMAC must induce a distribution $P(X_1,X_2)$ which maximizes the sum rate of the two channel inputs [3]. While there appears to be no known tractable approach for designing such a code, the main idea pursued in this paper is to use a systematic channel code for each source, so that the two sources are essentially transmitted directly over the GMAC and therefore $P(X_1,X_2) = P(U_1,U_2)$ for the systematic bits of the channel input codewords (the parity bits of each source are related to the information bits through the parity-check equations of the code [15]). The two channel codes are decoded by a joint decoder which observes the channel output Y. This approach essentially exploits the inter-source correlation to enhance the performance of the channel codes. In particular, if the two sources are independent, then each channel code requires a sufficient number of parity bits to correct the errors due to channel noise and the mutual interference between the two independent bit streams. However, when the two sources are dependent, the joint distribution $P(X_1,X_2)$ of the information bits of the two channel input codewords provides an additional joint decoding gain, and hence the number of parity bits required for encoding each source is reduced, or equivalently, the achievable sum rate is higher. With practical (finite length) channel codes, this implies that the same decoding error probability can be achieved at a higher sum rate. Note that by construction, the aforementioned DJSC coding scheme requires that the code length n and the number of systematic information bits m (and hence the code rate $R_c = m/n$) be identical for both sources; therefore, the resulting designs correspond to symmetric rates. Achieving asymmetric rates would possibly require some form of rate splitting [16] and is not considered in this paper.

The code design approach presented in this paper is based on systematic irregular LDPC (SI-LDPC) codes. First, consider an n-bit SI-LDPC code [17] whose parity check matrix H can be represented by a factor graph with code bit variable (CBV) nodes $x(1),\ldots,x(n)$, parity check factor (PCF) nodes (representing the parity check equations), channel output variable (COV) nodes $y(1),\ldots,y(n)$, and channel factor (CF) nodes. In the case of a Gaussian channel, a CF node represents the conditional pdf $p(y(i)\,|\,x(i))$. The channel outputs are decoded by applying the BP algorithm to the factor graph [17]. For the purpose of code design, a length-n SI-LDPC code can be completely specified by the parameters $(n, \lambda(x), \rho(x))$, where $\lambda(x) = \sum_{i=1}^{d_{\mathrm{vmax}}} \lambda_i x^{i-1}$ and $\rho(x) = \sum_{i=1}^{d_{\mathrm{cmax}}} \rho_i x^{i-1}$ are the edge-perspective degree polynomials of the variable nodes and the parity check nodes, respectively, and $\lambda_i$ (resp. $\rho_i$) is the fraction of edges connected to CBV (resp. PCF) nodes of degree i (the degree of a node is the number of edges connected to it), satisfying the constraints $\sum_i \lambda_i = 1$ and $\sum_i \rho_i = 1$ [17]. The parameters $d_{\mathrm{cmax}}$ and $d_{\mathrm{vmax}}$ are typically chosen in such a manner that the sparsity of the corresponding factor graph is maintained (i.e., the number of edges in the factor graph grows linearly with the codeword length [17]). It is known that a concentrated degree polynomial of the form $\rho(x) = \rho x^{s-2} + (1-\rho)x^{s-1}$ for some $s \ge 2$ and $0 < \rho \le 1$ is sufficient for achieving near-optimal performance ([13], Theorem 2).
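As a concrete illustration of these definitions, the following sketch evaluates the design rate implied by a pair of edge-perspective degree profiles, using the standard relation $R_c = 1 - (\sum_i \rho_i/i)/(\sum_i \lambda_i/i)$ (a textbook identity, not stated in this excerpt); the profiles themselves are hypothetical.

```python
import numpy as np

def design_rate(lam, rho):
    """Design rate R_c = 1 - (sum_i rho_i/i) / (sum_i lambda_i/i) implied by
    edge-perspective profiles given as {degree: edge fraction} maps."""
    return 1.0 - sum(r / i for i, r in rho.items()) / sum(l / i for i, l in lam.items())

def concentrated_rho(s, p):
    """Concentrated check profile rho(x) = p*x^(s-2) + (1-p)*x^(s-1),
    i.e. edges attached to check nodes of degrees s-1 and s only."""
    return {s - 1: p, s: 1.0 - p}

lam = {2: 0.3, 3: 0.3, 8: 0.4}          # hypothetical variable-node profile
rho = concentrated_rho(s=7, p=0.5)
assert abs(sum(lam.values()) - 1) < 1e-12 and abs(sum(rho.values()) - 1) < 1e-12
print(f"R_c = {design_rate(lam, rho):.3f}")
```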

Now consider a two-input GMAC with an SI-LDPC code applied to each input. Since (1) is symmetric with respect to $X_1$ and $X_2$ and the same rate is used for both sources, the same channel code can be used for both sources. While the parity check matrix H of each SI-LDPC code, whose code bits are $x_k(1),\ldots,x_k(n)$, can be represented by its own factor graph, the joint decoding of the two codes requires the combined factor graph shown in Figure 3, where the COV nodes $y(1),\ldots,y(n)$ are linked to factor nodes $\phi(y(i), x_1(i), x_2(i))$, $i = 1,\ldots,n$, which represent the (combined) SCF nodes. As described later in this paper, the message passing to and from these nodes in BP decoding is also crucial to the design of the codes. To determine $\phi(\cdot)$, consider the maximum a posteriori (MAP) decoding of the codeword $\underline{x}_k = (x_k(1),\ldots,x_k(n))$ transmitted on GMAC input $k = 1,2$, based on the GMAC outputs $\underline{y} = (y(1),\ldots,y(n))$. Let $\underline{x}_k[i]$ denote the code bits in $\underline{x}_k$ except $x_k(i)$. Also, for $k \in \{1,2\}$ define

$$\bar{k} = \begin{cases} 1, & k = 2 \\ 2, & k = 1. \end{cases}$$
Figure 3. Combined factor graph used for joint decoding of the two LDPC codes. Bit remapping is used to convert the systematic channel code $X_k(1),\ldots,X_k(n)$ into equivalent sparse-graph form (factor graphs on either side).

Then, it is easy to verify that the MAP-decoded value of the ith bit of the input codeword of GMAC input k is given by

$$\hat{x}_k(i) = \arg\max_{x_k(i) \in \{\pm 1\}} \sum_{\underline{x}_k[i]} \sum_{\underline{x}_{\bar{k}}} f(\underline{x}_1, \underline{x}_2, \underline{y}),$$
(2)

where i = 1,…,n,

$$\begin{aligned} f(\underline{x}_1, \underline{x}_2, \underline{y}) &= p(\underline{y}\,|\,\underline{x}_1, \underline{x}_2)\, P(\underline{x}_1, \underline{x}_2)\, I\{\underline{x}_1 \in \mathcal{C}\}\, I\{\underline{x}_2 \in \mathcal{C}\} \\ &= \prod_{j=1}^{n} p(y(j)\,|\,x_1(j), x_2(j))\, P(x_1(j), x_2(j)) \times I\{\underline{x}_1 \in \mathcal{C}\}\, I\{\underline{x}_2 \in \mathcal{C}\} \\ &= \prod_{j=1}^{n} \phi_j\big(y(j), x_1(j), x_2(j)\big)\, I\{\underline{x}_1 \in \mathcal{C}\}\, I\{\underline{x}_2 \in \mathcal{C}\}, \end{aligned}$$
(3)

where $\mathcal{C}$ denotes the set of all codewords of the code, $I\{\cdot\}$ denotes the indicator function, and

$$\phi_j\big(y(j), x_1(j), x_2(j)\big) = p(y(j)\,|\,x_1(j), x_2(j))\, P(x_1(j), x_2(j)).$$
(4)

In the factor-graph representation of (3), each factor node represents a term in the product [17]. As usual, the factors $I\{\underline{x}_1 \in \mathcal{C}\}$ and $I\{\underline{x}_2 \in \mathcal{C}\}$ are represented by the PCF nodes of the two codes, respectively. On the other hand, each term $\phi_j(\cdot)$, $j = 1,\ldots,n$, which is a function of the code bits $x_1(j)$ and $x_2(j)$ and the channel output $y(j)$, is represented by an SCF node as shown in Figure 3. As the codes are systematic, for information bits $P(x_1(i), x_2(i))$ is identical to the joint distribution $P(u_1, u_2)$ of the source bits. For parity bits of an LDPC code (which has a dense generator matrix), it can be assumed that $P(x_1(i), x_2(i)) = P(x_1(i))\,P(x_2(i))$ with $P(x_1(i)) = P(x_2(i)) = 0.5$ [18].
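To make (4) concrete, the following minimal sketch evaluates one SCF factor under the source model of Section 2. The `systematic` flag, which switches between the information-bit and parity-bit models discussed above, is our own illustrative parameterization.

```python
import numpy as np

def scf_factor(y, x1, x2, alpha, sigma2, systematic=True):
    """Evaluate phi(y, x1, x2) = p(y|x1,x2) * P(x1,x2) for one SCF node, per (4).
    Systematic (information) positions carry the source statistics; parity
    positions are modelled as independent uniform bits, as argued above."""
    if systematic:
        p_joint = (1 - alpha) / 2 if x1 == x2 else alpha / 2
    else:
        p_joint = 0.25
    likelihood = np.exp(-(y - (x1 + x2)) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return likelihood * p_joint

print(scf_factor(y=1.7, x1=+1, x2=+1, alpha=0.1, sigma2=0.5))
```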

Sparse parity check matrices obtained through the EXIT-analysis design procedure do not necessarily correspond to systematic generator matrices. As usual, the codes can be converted to systematic form by using Gaussian elimination. However, the resulting codes have dense parity-check matrices, which makes the computational complexity of BP decoding impractically high. To get around this problem, a bit re-mapping operation is used in the joint decoder to rearrange the systematic code bits so that the codewords correspond to sparse matrices, as shown in Figure 3.

3 Code optimization

A well-known and simple method for constructing a near-capacity SI-LDPC code for a single-input AWGN channel with noise variance $\sigma^2$ and some fixed $\rho(x)$ is to determine the coefficients $\lambda_i$ which maximize the rate of the code under BP decoding, subject to a Gaussian approximation (GA) for the messages passed in the decoder [13]. The code design in this case is a linear programming problem of the form ([17], Ch. 4): maximize over $\lambda_i$ the sum $\sum_{i \ge 2} \lambda_i / i$, subject to the constraints (1) $\sum_i \lambda_i = 1$, $0 < \lambda_i \le 1$ (normalization constraint), (2) $\lambda_2 < e^{1/(2\sigma^2)} / \sum_j (j-1)\rho_j$ (stability condition), and (3) a linear inequality which ensures the convergence of the BP algorithm ([13], Sec. 3) (decoder convergence constraint).
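For readers who wish to reproduce this single-user baseline, the sketch below sets up the LP with scipy, under the consistent-Gaussian convention $\mathcal{N}(\mu, 2\mu)$ and the $J(\cdot)$ function used later in this paper. Note that in the single-user convention the variable-node degree counts only check edges, so a degree-d node combines d-1 check messages with the channel LLR of mean $2/\sigma^2$. The grid resolution, the margin $\epsilon$, and the example inputs are our assumptions; the paper does not specify them.

```python
import numpy as np
from scipy.optimize import linprog, brentq

# J(mu): MI of a consistent Gaussian LLR L ~ N(mu, 2*mu), via Gauss-Hermite
# quadrature; J_inv by root finding.
_t, _w = np.polynomial.hermite.hermgauss(64)

def J(mu):
    if mu <= 0:
        return 0.0
    return 1.0 - np.dot(_w, np.log2(1 + np.exp(-(mu + 2 * np.sqrt(mu) * _t)))) / np.sqrt(np.pi)

def J_inv(I):
    return 0.0 if I <= 0 else brentq(lambda m: J(m) - I, 1e-9, 1e3)

def design_lambda(sigma2, rho, dvmax=20, eps=1e-4, grid=np.linspace(0.001, 0.999, 60)):
    """LP design of lambda(x) for a single-user BiAWGN channel under the GA:
    maximize sum_d lambda_d/d subject to normalization, stability, and the
    convergence constraint f(I) >= I + eps on a grid of mutual-information values."""
    degs = np.arange(2, dvmax + 1)
    mu_ch = 2.0 / sigma2                    # mean of the channel LLR
    def I_cv(I):                            # check-node update (duality), lambda-free
        return 1.0 - sum(r * J((j - 1) * J_inv(1.0 - I)) for j, r in rho.items())
    A_ub, b_ub = [], []
    for I in grid:                          # -sum_d lambda_d * J(...) <= -(I + eps)
        A_ub.append([-J((d - 1) * J_inv(I_cv(I)) + mu_ch) for d in degs])
        b_ub.append(-(I + eps))
    stab_row = np.zeros(len(degs)); stab_row[0] = 1.0   # lambda_2 bound (stability)
    A_ub.append(stab_row.tolist())
    b_ub.append(np.exp(1.0 / (2 * sigma2)) / sum((j - 1) * r for j, r in rho.items()))
    res = linprog(c=[-1.0 / d for d in degs],           # maximize sum lambda_d/d
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.ones(len(degs)).tolist()], b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * len(degs))
    return {int(d): x for d, x in zip(degs, res.x) if x > 1e-6} if res.success else None

print(design_lambda(sigma2=0.97, rho={6: 0.5, 7: 0.5}))  # hypothetical inputs
```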

In the optimization of an SI-LDPC code for two correlated sources to be transmitted over a GMAC, the objective is to determine the degree distribution $\lambda(x)$ which maximizes the code rate $R_c$, given the source correlation parameter $\alpha$, the GMAC noise variance $\sigma^2$, and some fixed $\rho(x)$. Clearly, the objective function and the constraints (1) and (2) remain the same as for single channel code design. However, since the two sources are decoded jointly using a combined factor graph, the third constraint has to be re-established. Specifically, let the mutual information between the code bits $X_k$ and the LLRs passed from the corresponding CBV nodes to the PCF nodes be $I_{v\to p}^{(k)}(l)$, where l is the iteration index in the EXIT analysis. Then, for the convergence of BP decoding of the codes on the combined factor graph, it is required that

$$I_{v\to p}^{(k)}(l+1) \ge I_{v\to p}^{(k)}(l), \quad l = 1, 2, \ldots,$$
(5)

for k = 1,2. In the following, $I_{v\to p}^{(k)}$ will be shown to be linear in the $\lambda_i$. Since the objective function and the constraints are all linear in the code parameters $\lambda_i$, we can use a linear program to solve the problem. The rest of this section is devoted to the EXIT analysis of BP decoding on the joint factor graph and the iterative computation of $I_{v\to p}^{(k)}(l)$.

The details of the BP decoding algorithm and the EXIT analysis for single-user LDPC codes can be found in [17, 19]. In EXIT analysis, the analytical computation of mutual information is feasible only if the outgoing LLRs from the nodes in the factor graph have a Gaussian (or Gaussian-mixture) distribution [19]. When the channel is a binary-input AWGN (BiAWGN) channel, the outgoing LLR values are Gaussian distributed [14]. For other types of channels, the Gaussian approximation of LLRs is known to be a good approximation, due to the universality of LDPC codes (a code designed for one type of channel performs well on another type of channel) [15]. The analytical expressions for iterative mutual information updates through CBV nodes and PCF nodes in EXIT analysis are well known [17]. In particular, the mutual information update through a CBV node stems from the message update through that variable node and the central limit theorem: since an outgoing message of a given node has a mean equal to the sum of the means of the incoming messages to that node, given a reasonably high node degree, the outgoing message is approximately Gaussian. The mutual information update for a PCF node, on the other hand, relies on the deterministic relationship between the PCF node and the CBV nodes connected to it, as defined by the parity check equations. As a result, the mutual information update for a PCF node can be computed by simply using the duality relationship it has with a CBV node [20].

In the case of LDPC codes applied to two correlated sources transmitted over a GMAC and decoded using a combined factor graph, the mutual information updates through CBV nodes and PCF nodes can be analytically computed as in the case of single-user LDPC codes. Denote the messages passed between the various nodes in the factor graph as in Figure 4. Accordingly, the message passed from a degree-$d_v$ CBV node of code $k \in \{1,2\}$, along its jth edge to a PCF node, is given by

$$m_{v\to p,j}^{(k)} = \sum_{\substack{i=1 \\ i \ne j}}^{d_v - 1} m_{p\to v,i}^{(k)} + m_{s\to v}^{(k)}.$$
(6)
Figure 4. Message (LLR) flow through the ith SCF node in the combined factor graph ($i = 1,\ldots,n$).

and the message passed from a PCF node of degree $d_c$ on its jth edge to a CBV node is given by

$$m_{p\to v,j}^{(k)} = 2 \tanh^{-1}\left( \prod_{\substack{i=1 \\ i \ne j}}^{d_c} \tanh\frac{m_{v\to p,i}^{(k)}}{2} \right),$$
(7)

see ([17], Ch. 2). Similarly, the LLR passed from a CBV node to an SCF node is formed by summing the incoming LLRs from the PCF nodes, i.e.,

$$m_{v\to s}^{(k)} = \sum_{i=1}^{d_v - 1} m_{p\to v,i}^{(k)}.$$
(8)

Note however that the LLRs computed by an SCF node cannot be formed as the sum of incoming LLRs, but are determined by (4). In the Appendix, it is shown that when the sources are uniformly distributed,

$$m_{s\to v}^{(1)} = \log\frac{P(x_i^{(1)} = +1\,|\,y_i)}{P(x_i^{(1)} = -1\,|\,y_i)} = \log\frac{q_{1,1}(y_i)(1-\alpha)\,e^{m_{v\to s}^{(2)}} + q_{1,-1}(y_i)\,\alpha}{q_{-1,1}(y_i)\,\alpha\, e^{m_{v\to s}^{(2)}} + q_{-1,-1}(y_i)(1-\alpha)},$$
(9)

where $q_{j,k}(y_i) \triangleq f(y_i\,|\,x_i^{(1)} = j,\, x_i^{(2)} = k)$, $j,k \in \{-1,+1\}$, and we have used the fact that $m_{v\to s}^{(2)} = \log\frac{P(x_i^{(2)} = +1)}{P(x_i^{(2)} = -1)}$. Similarly,

$$m_{s\to v}^{(2)} = \log\frac{q_{1,1}(y_i)(1-\alpha)\,e^{m_{v\to s}^{(1)}} + q_{-1,1}(y_i)\,\alpha}{q_{1,-1}(y_i)\,\alpha\,e^{m_{v\to s}^{(1)}} + q_{-1,-1}(y_i)(1-\alpha)}.$$
(10)
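A direct transcription of (9) and (10) is given below (our own sketch; it vectorizes over NumPy arrays). The common Gaussian normalization constant of $q_{j,k}(y_i)$ cancels in the ratio and is therefore omitted.

```python
import numpy as np

def scf_llr_out(y, m_in, alpha, sigma2, to_code=1):
    """Outgoing SCF-node LLR per (9)/(10). For to_code=1 the incoming LLR is
    m_in = m_{v->s}^{(2)}; for to_code=2 it is m_{v->s}^{(1)}. The Gaussian
    normalization in q_{j,k} cancels in the ratio and is omitted."""
    q = {(j, k): np.exp(-(y - (j + k)) ** 2 / (2 * sigma2))
         for j in (1, -1) for k in (1, -1)}
    e = np.exp(m_in)
    if to_code == 1:   # equation (9)
        num = q[1, 1] * (1 - alpha) * e + q[1, -1] * alpha
        den = q[-1, 1] * alpha * e + q[-1, -1] * (1 - alpha)
    else:              # equation (10)
        num = q[1, 1] * (1 - alpha) * e + q[-1, 1] * alpha
        den = q[1, -1] * alpha * e + q[-1, -1] * (1 - alpha)
    return np.log(num / den)
```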

4 EXIT analysis

Let the mutual information between two random variables r and s be $I(r;s)$ and define the following: $I_{v\to p}^{(k)} \triangleq I(X_k; m_{v\to p}^{(k)})$, $I_{p\to v}^{(k)} \triangleq I(X_k; m_{p\to v}^{(k)})$, $I_{v\to s}^{(k)} \triangleq I(X_k; m_{v\to s}^{(k)})$, and $I_{s\to v}^{(k)} \triangleq I(X_k; m_{s\to v}^{(k)})$. In the (l+1)th iteration of the EXIT analysis, given $I_{v\to p}^{(k)}(l)$ and $I_{v\to s}^{(k)}(l)$, first $I_{p\to v}^{(k)}(l+1)$ and $I_{s\to v}^{(k)}(l+1)$ are updated, followed by $I_{v\to p}^{(k)}(l+1)$ and $I_{v\to s}^{(k)}(l+1)$. The convergence of the degree polynomial $\lambda(x)$ to a valid code is then verified using (5).

Let $\mu_{v\to p}^{(k,d)} = E\{m_{v\to p,j}^{(k)}\}$, where v represents a degree-d CBV node in the factor graph, $d = 2,\ldots,d_{\mathrm{vmax}}$. Given that all incoming messages to a CBV node are independent, the mean of the outgoing message is given by

$$\mu_{v\to p}^{(k,d)} = (d-2)\,\mu_{p\to v}^{(k)} + \mu_{s\to v}^{(k)},$$
(11)

Under the assumption that the LLRs generated by a CBV node are Gaussian with mean $\mu$ and variance $2\mu$, the mutual information I between the code bit represented by the CBV node and the LLR is given by

$$I = J(\mu),$$
(12)

where J(·) is given by Equation (24) of [21]. Thus, the mutual information between degree-d CBV nodes and the messages passed to PCF nodes is

$$I_{v\to p}^{(k,d)} = J\!\left( (d-2)\, J^{-1}\!\big(I_{p\to v}^{(k)}\big) + J^{-1}\!\big(I_{s\to v}^{(k)}\big) \right).$$
(13)

Therefore,

$$I_{v\to p}^{(k)} = \sum_{d=2}^{d_{\mathrm{vmax}}} \lambda_d\, J\!\left( (d-2)\, J^{-1}\!\big(I_{p\to v}^{(k)}\big) + J^{-1}\!\big(I_{s\to v}^{(k)}\big) \right).$$
(14)

Let the mutual information between a CBV node and the messages it receives from degree-d PCF nodes be $I_{p\to v}^{(k,d)}$. Then, from the duality approximation ([17], p. 236), it follows that

$$I_{p\to v}^{(k)} = \sum_{d=2}^{d_{\mathrm{cmax}}} \rho_d^{(k)}\, I_{p\to v}^{(k,d)} = 1 - \sum_{d=2}^{d_{\mathrm{cmax}}} \rho_d^{(k)}\, J\!\left( (d-1)\, J^{-1}\!\big(1 - I_{v\to p}^{(k)}\big) \right).$$
(15)

Furthermore, the average mutual information between CBV nodes and the messages passed to SCF nodes is given by

$$I_{v\to s}^{(k)} = \sum_{d=2}^{d_{\mathrm{vmax}}} \lambda_d\, J\!\left( (d-1)\, J^{-1}\!\big(I_{p\to v}^{(k)}\big) \right).$$
(16)
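The analytical updates (14) to (16) are straightforward to code once $J(\cdot)$ and its inverse are available. The stand-alone sketch below restates quadrature-based $J$/$J^{-1}$ helpers (the same assumption as in the earlier LP sketch) and passes degree profiles as degree-to-coefficient maps.

```python
import numpy as np
from scipy.optimize import brentq

_t, _w = np.polynomial.hermite.hermgauss(64)

def J(mu):
    """MI between a bit and a consistent Gaussian LLR N(mu, 2*mu) (quadrature)."""
    if mu <= 0:
        return 0.0
    return 1.0 - np.dot(_w, np.log2(1 + np.exp(-(mu + 2 * np.sqrt(mu) * _t)))) / np.sqrt(np.pi)

def J_inv(I):
    return 0.0 if I <= 0 else brentq(lambda m: J(m) - I, 1e-9, 1e3)

def update_I_vp(lam, I_pv, I_sv):
    """(14): MI from CBV nodes to PCF nodes, averaged over the degree profile."""
    return sum(l * J((d - 2) * J_inv(I_pv) + J_inv(I_sv)) for d, l in lam.items())

def update_I_pv(rho, I_vp):
    """(15): MI from PCF nodes to CBV nodes via the duality approximation."""
    return 1.0 - sum(r * J((d - 1) * J_inv(1.0 - I_vp)) for d, r in rho.items())

def update_I_vs(lam, I_pv):
    """(16): MI from CBV nodes to SCF nodes."""
    return sum(l * J((d - 1) * J_inv(I_pv)) for d, l in lam.items())
```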

Next, consider the computation of the mutual information $I_{s\to v}^{(1)}$ and $I_{s\to v}^{(2)}$ between the CBV nodes and the messages $m_{s\to v}^{(1)}$ and $m_{s\to v}^{(2)}$, as given by (9) and (10), respectively, passed from SCF nodes. Unfortunately, it is not straightforward to compute these quantities, as the mean values of $m_{s\to v}^{(k)}$, k = 1,2, cannot be computed using an analytical relation as in (11). In this case, $\mu_{s\to v}^{(k)}$ is a function of $\mu_{v\to s}^{(\bar{k})}$, $\alpha$, and $\sigma^2$. As will be demonstrated below, $m_{s\to v}^{(k)}$ is in general not Gaussian distributed, even if $m_{v\to s}^{(k)}$ is Gaussian. However, in order to be able to apply the EXIT analysis based on (12), suppose that, by using a suitable approach, we model the pdf of $m_{s\to v}^{(k)}$ by a Gaussian with variance equal to twice the mean. Then, we can obtain the required mutual information simply as

$$I_{s\to v}^{(k)} = J\!\big(\mu_{s\to v}^{(k)}\big).$$
(17)

On the other hand, if we model the pdf of $m_{s\to v}^{(k)}$ by a more general Gaussian mixture

$$a_1^{(k)}\,\mathcal{N}\big(\mu_1^{(k)}, 2\mu_1^{(k)}\big) + a_2^{(k)}\,\mathcal{N}\big(\mu_2^{(k)}, 2\mu_2^{(k)}\big),$$

where $a_1^{(k)}$, $a_2^{(k)}$, $\mu_1^{(k)}$, and $\mu_2^{(k)}$ are now functions of $\mu_{v\to s}^{(\bar{k})}$, $\alpha$, and $\sigma^2$, then [20]

$$I_{s\to v}^{(k)} = a_1^{(k)} J\big(\mu_1^{(k)}\big) + a_2^{(k)} J\big(\mu_2^{(k)}\big).$$
(18)

Next, we present an approach to numerically estimate the mean values $\mu_{s\to v}^{(k)}$. To get an idea of the pdf of $m_{s\to v}^{(1)}$, let the LLR $m_{v\to s}^{(2)} \sim \mathcal{N}\big(\mu_{v\to s}^{(2)}, 2\mu_{v\to s}^{(2)}\big)$ be passed from a CBV node v of $X_2$ to an SCF node s, as shown in Figure 4. Figure 5 shows histograms of $m_{s\to v}^{(1)}$ obtained through Monte-Carlo simulations for several values of $\alpha$ and $\mu_{v\to s}^{(2)}$, suggesting that the pdf of $m_{s\to v}^{(1)}$ can be highly skewed or even bi-modal depending on the values of $\alpha$ and $\mu_{v\to s}^{(2)}$. Similar observations hold for $m_{s\to v}^{(2)}$. Therefore, the problem at hand is to find a suitable method for fitting a Gaussian pdf to $m_{s\to v}^{(k)}$ which is effective over a range of values of $\alpha$ and $\sigma^2$.

Figure 5. Histograms of the outgoing messages $m_{s\to v}^{(1)}$ from SCF nodes for different values of the incoming message mean $\mu_{v\to s}^{(2)}$ and different values of $\alpha$ ($\sigma^2 = 5$).

For approximating an arbitrary distribution by a Gaussian, transformation-based methods are widely used; see [22, 23]. These methods are essentially parametric, with the parameters usually estimated by methods such as maximum likelihood (ML) or Bayesian inference. They also require an inverse transform once the required processing has been done on the Gaussian density. For our problem, both the parameter estimation and the inverse operation can make the LDPC code optimization algorithm intractable. Since our code optimization procedure is based on the mutual information transfer through SCF nodes as computed in (17), we seek a computationally simple Gaussian approximation which yields the maximum mutual information $I_{s\to v}^{(1)}$ for the messages $m_{s\to v}^{(1)}$, for given $\mu_{v\to s}^{(2)}$, $\alpha$, and $\sigma^2$. To this end, we consider the following three approaches.

  •  Mean-matched Gaussian approximation - The mean $\mu$ is estimated from the observations, and the variance is set to $2\mu$.

  •  Mode-matched Gaussian approximation - The mode m of the pdf is estimated from the observations, and we set the mean $\mu = m$ and the variance to $2\mu$. Fitting a Gaussian at the mode of an arbitrary distribution is closely related to the Laplace approximation [24].

  •  Two-component Gaussian mixture approximation - The density is approximated by fitting a two-component Gaussian mixture $a_1\,\mathcal{N}(\mu_1, \sigma_1^2) + a_2\,\mathcal{N}(\mu_2, \sigma_2^2)$, where $\mu_1$, $\mu_2$, $\sigma_1^2$, $\sigma_2^2$, $a_1$, and $a_2$ are estimated from the observations.

The rationale for using these approximations can be seen from Figure 5. Note that for some values of $\alpha$, $\mu_{v\to s}^{(2)}$, and channel noise variance $\sigma^2$, the density of $m_{s\to v}^{(1)}$ displays two dominant modes. Note also that for some values the density does resemble a Gaussian (for which the mean and mode coincide), while for others it is uni-modal but highly skewed. In particular, the skewed density functions suggest the use of the mode-matched approximation. In order to compare the performance of these approaches on the basis of the mutual information of outgoing messages from SCF nodes, Figures 6 and 7 plot $I_{s\to v}^{(1)}$ as a function of $I_{v\to s}^{(2)}$ for two cases selected from Figure 5. In Figure 6, we compare the mean-matched and mode-matched approximations for a case in which the density is uni-modal, and hence a Gaussian mixture does not provide a better fit. In Figure 7, we have chosen a case in which the histogram is bi-modal, and hence a Gaussian mixture is a better fit. It is evident that in either case the highest output mutual information is achieved with the mode-matched approach. The mode-matched method was also found to yield the maximum output mutual information for various other values of $\alpha$, $\mu_{v\to s}^{(2)}$, and $\sigma^2$. Furthermore, as will be shown (see Figure 8), the joint codes designed using this approximation also yield the lowest decoding bit error probability among the three approaches.

Figure 6. Mutual information update through SCF nodes: $I_{s\to v}^{(1)}$ (output mutual information) as a function of $I_{v\to s}^{(2)}$ (input mutual information) for $\mu_{v\to s}^{(2)} = 1.8$ and $\alpha = 0.01$. The corresponding message histogram is shown in Figure 5.

Figure 7. Mutual information update through SCF nodes: $I_{s\to v}^{(1)}$ (output mutual information) as a function of $I_{v\to s}^{(2)}$ (input mutual information) for $\mu_{v\to s}^{(2)} = 3.24$ and $\alpha = 0.01$. Note the corresponding bi-modal histogram in Figure 5.

Figure 8. Decoding error probability of codes designed with the three approximation methods shown in Figure 7 ($\sigma^2 = 5$, $\alpha = 0.01$).

With all three approximation methods, the mean value $\mu_{s\to v}^{(k)}$ of the pdf of $m_{s\to v}^{(k)}$ can be estimated using Monte-Carlo simulation. For example, $\mu_{s\to v}^{(k)}$ can be estimated using the mean-matched or mode-matched approximation as follows (a sketch implementing these steps is given after the list):

  • Step 1: Given the mean value $\mu_{v\to s}^{(\bar{k})}$, generate a sufficiently large number N of samples of $m_{v\to s}^{(\bar{k})} \sim \mathcal{N}\big(\mu_{v\to s}^{(\bar{k})}, 2\mu_{v\to s}^{(\bar{k})}\big)$.

  • Step 2: Given P(X1,X2) and σ2, generate N samples from the pdf of the GMAC output y.

  • Step 3: Use (9) (if k = 1) or (10) (if k = 2) to compute the corresponding N samples of $m_{s\to v}^{(k)}$. Estimate the mean m of the pdf of $m_{s\to v}^{(k)}$ using either the mean-matched or the mode-matched approximation described above. Set $\mu_{s\to v}^{(k)} = m$ and $\mathrm{var}\big(m_{s\to v}^{(k)}\big) = 2m$.

In the case of the Gaussian mixture approximation, the mean values $\mu_1$, $\mu_2$ and the weights $a_1$, $a_2$ can be estimated from the sample set of $m_{s\to v}^{(k)}$ [25].
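The sketch below implements Steps 1 to 3 with the mean- and mode-matched options; for the mixture option, a two-component EM fit (e.g., with scikit-learn's GaussianMixture) can be applied to the same sample set. The all-(+1) reference convention for code 1 and the histogram-based mode estimator are our assumptions, as the paper does not spell out these details.

```python
import numpy as np

def estimate_mu_sv(mu_in, alpha, sigma2, N=200_000, method="mode", seed=0):
    """Steps 1-3: Monte-Carlo fit of a consistent Gaussian N(m, 2m) to the
    outgoing SCF message m_{s->v}^{(1)}, given incoming mean mu_in = mu_{v->s}^{(2)}."""
    rng = np.random.default_rng(seed)
    # All-(+1) reference convention for code 1 (our assumption); x2 differs
    # from x1 with probability alpha
    x2 = np.where(rng.random(N) < alpha, -1.0, 1.0)
    # Step 1: incoming LLRs about x2, symmetric Gaussian N(x2*mu_in, 2*mu_in)
    m_in = rng.normal(x2 * mu_in, np.sqrt(2 * mu_in))
    # Step 2: GMAC outputs y = x1 + x2 + w with x1 = +1
    y = 1.0 + x2 + rng.normal(0.0, np.sqrt(sigma2), N)
    # Step 3: outgoing LLRs via (9); normalization of q_{j,k} cancels
    q = lambda j, k: np.exp(-(y - (j + k)) ** 2 / (2 * sigma2))
    e = np.exp(m_in)
    m_out = np.log((q(1, 1) * (1 - alpha) * e + q(1, -1) * alpha)
                   / (q(-1, 1) * alpha * e + q(-1, -1) * (1 - alpha)))
    if method == "mean":
        m = m_out.mean()
    else:                                  # mode via the histogram peak
        hist, edges = np.histogram(m_out, bins=200)
        k = np.argmax(hist)
        m = 0.5 * (edges[k] + edges[k + 1])
    return m                               # mu_{s->v}^{(1)}; variance is 2m

print(estimate_mu_sv(mu_in=1.8, alpha=0.01, sigma2=5.0))
```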

5 Simulation results

In this section, we present simulation results obtained by designing DJSC codes for a pair of uniformly distributed binary sources (whose statistical dependence is given by $\alpha$) and a GMAC with noise variance $\sigma^2$.

First, we investigate the impact of the three message density approximations considered in Section 4. As evidenced by Figures 6 and 7, the mode-matched approximation gives the maximum output mutual information from an SCF node. In order to compare the three approximation methods on the basis of code performance, Figure 8 presents the probability of decoding error of codes designed using each of these methods. Here, the correlation parameter $\alpha$ and the channel noise variance $\sigma^2$ are identical to those used in Figure 7 (for which the density of the outgoing messages from SCF nodes is bi-modal). These results confirm that the mode-matched approximation tends to yield the best codes, owing to the skewed nature of the pdf of the output messages from SCF nodes. For example, at an error probability of $10^{-6}$, the codeword length required with the mode-matched approximation is approximately $1.7 \times 10^4$ bits, while that with the mean-matched approximation is approximately $3.8 \times 10^4$ bits. The mode-matched approximation is used in obtaining the remaining simulation results in this section.

As discussed in Section 2, the capacity of a GMAC can be higher for dependent sources than for independent sources. Table 1 presents several example DJSC code designs together with their code rates $R_c$ for sources with a correlation level of $\alpha = 0.1$. The table also indicates the maximum code rate achievable (channel capacity) with independent sources, $\frac{1}{2} I(X_1,X_2;Y\,|\,\alpha = 0.5)$, over the same channel (noise level), as well as the actual value of the joint mutual information $\frac{1}{2} I(X_1,X_2;Y\,|\,\alpha = 0.1)$ of the DJSC code. In particular, note that the DJSC codes can actually achieve code rates (bits/channel use) higher than the capacity of the GMAC for independent sources. While the capacity of the GMAC for correlated sources remains unknown, these results clearly demonstrate that the proposed codes are able to outperform separate source and channel coding for the GMAC (with separate source and channel coding, a good DSC would render the two bit streams nearly independent). In Figure 9, we plot the joint source-channel coding (JSC) rates, in channel uses per source bit, $R_{\mathrm{JSC}}$, achieved by the proposed DJSC codes as a function of $\alpha$, at a codeword length of $10^6$ bits and a decoding error probability of $10^{-6}$. The rate lower-bound for independent sources [3] over the same GMAC is also shown. Note that for $\alpha \le 0.36$, the DJSC codes achieve JSC rates below the theoretical lower-bound for independent sources with the same marginal probabilities.

Table 1 Degree profiles for LDPC codes generated by the proposed design algorithm
Figure 9. JSC rate $R_{\mathrm{JSC}}$ (channel uses per source bit) of the proposed DJSC codes ($\sigma^2 = 0.5$; codeword length $10^6$ bits). The theoretical lower-bound for independent sources over the same GMAC is also shown.

In order to further demonstrate the advantage of the proposed DJSC codes compared to separate source and channel coding, we next compare three different system designs which differ in terms of the use of prior information about the inter-source correlation parameter α, as follows:

  • Scheme 1: Regardless of the actual inter-source correlation α, the two sources are assumed to be independent (α = 0.5) in code design as well as in decoding. Essentially, these codes can at best achieve a channel capacity of $I(X_1,X_2;Y\,|\,\alpha = 0.5)$. We denote this scheme by (αdesign = 0.5, αdecode = 0.5).

  • Scheme 2: Independent sources are assumed for code design (αdesign = 0.5), but the actual value of α is used in joint decoding. We denote this scheme by (αdesign = 0.5,αdecode = αactual).

  • Scheme 3: The actual value of α is used for both code design and in joint decoding. We denote the scheme by (αdesign = αdecode = αactual).

Figure 10 shows the code rates (measured in bits per channel use) achieved by Scheme 2 and Scheme 3 for different values of the correlation parameter α, for $\sigma^2 = 1$ and a decoding error probability of $10^{-6}$. While the rate achieved by both schemes increases as the inter-source correlation increases (α decreases), as expected, note that for α = 0.1 and α = 0.2, Scheme 3 can actually achieve code rates higher than the theoretical upper-bound for independent sources over the same channel, which demonstrates the advantage of optimizing the code for the joint distribution of the sources. Figure 11 shows the decoding error probability of the three schemes as a function of α. As expected, the codes optimized for the joint distribution of the sources yield the best performance. It can also be seen that even the codes designed for independent sources can achieve a significant performance improvement if the actual joint probabilities of the sources are used in joint decoding. Note that with (αdesign = 0.5, αdecode = 0.5) and (αdesign = 0.5, αdecode = αactual), the same pair of codes has been used for all values of α. As α decreases, the improvement achieved by incorporating the joint source statistics in both code optimization and joint decoding becomes more pronounced. Figure 12 shows the probability of decoding error of the three schemes as a function of the codeword length n for several values of the SNR of the GMAC (for a given SNR, the JSC rates are kept the same for all three schemes).

Figure 10. Code rates (in bits per channel use) achieved by Scheme 2 and Scheme 3. The points correspond to α = 0.5 (lowest rate points), 0.4, 0.3, 0.2, and 0.1 (highest rate points). The decoding error probability is $10^{-6}$, $\sigma^2 = 1$, and the codeword length is $10^6$ bits.

Figure 11. Comparison of the three code-design/decoding schemes at different values of inter-source correlation (codeword length $10^6$ bits).

Figure 12. Performance of the three code-design/decoding schemes as a function of codeword length, at different channel SNR values (α = 0.2).

It is of interest to compare the performance of the proposed LDPC code constructions with the concatenated LDGM codes reported in [9]. The use of LDGM codes has the advantage that all bits of the two GMAC input codewords are correlated, whereas in the proposed scheme with systematic LDPC codes only the information bits are correlated. However, LDGM codes have the inherent disadvantage of a typically high error floor [17]. To get around this problem, the authors of [9] use serial, parallel, and hybrid concatenations of LDGM codes with interleavers, but this also increases the encoding complexity. This problem is not present in LDPC codes. Additionally, unlike the LDGM codes in [9], the degree polynomials of the LDPC codes can be optimized for correlated sources, as proposed in this paper. While the theoretical limit on the SNR required to achieve a given decoding error probability is not known for correlated sources, Figure 8 of [9] shows the SNR gain of LDGM codes relative to the theoretical limit for independent sources. In Figure 13, we compare the SNR gaps of the proposed LDPC codes and of the LDGM codes in Figure 8 of [9]. The SNR gap (in decibels) in this figure is the difference between the channel SNR for which the code is designed and the theoretical limit on the SNR for independent sources.

Figure 13. Comparison of the proposed LDPC codes with the LDGM codes of Figure 8 of [9] (α = 0.1). The SNR gap refers to the difference between the SNR of the actual channel for which the code is designed and the SNR corresponding to the theoretical limit for independent sources.

While the proposed LDPC code design is aimed at DJSC coding of correlated sources over a GMAC, it can also be applied to channel coding of independent sources over a GMAC, similar to [11]. Specifically, recall that a DJSC code designed for α = 0.5 yields a channel code for independent sources. Figure 14 compares the performance of a channel code designed in this manner with that of the codes reported in Figure 3 of [11]. The degree profiles of the LDPC codes are given in Table 2. For example, the proposed code designs achieve a coding gain of ≈ 0.2 dB at a decoding error probability of $10^{-3}$ for all coding rates considered here. While the approach in [11] is also based on a combined factor graph for the two codes, the improved performance of the codes proposed in this paper is due to the more accurate approximation of the density functions of the outgoing messages from SCF nodes, as discussed in Section 4.

Figure 14. Channel coding of independent sources over a GMAC: comparison of the proposed LDPC code designs with those reported in Figure 3 of [11]. $R_c$ is the code rate.

Table 2 Degree profiles for LDPC codes used in Figure 14

6 Conclusions

An approach to designing DJSC codes with symmetric rates for a pair of correlated binary sources transmitted over a GMAC, based on SI-LDPC codes, has been developed. For the EXIT analysis of the joint BP decoder for the two sources, the accurate modeling of the density of the outgoing LLRs from the factor nodes in the combined factor graph of the two LDPC codes which represent the joint source probabilities and the GMAC output conditional density (SCF nodes) has been investigated. While a tractable analytical expression appears difficult to obtain, a numerical method appropriate for EXIT analysis has been proposed for fitting a Gaussian or Gaussian-mixture model to the density of the outgoing LLRs from SCF nodes. Experimental results show that SI-LDPC codes designed with this approach outperform previously reported DJSC codes. Furthermore, these results demonstrate that, for strongly dependent sources, the proposed DJSC codes can achieve code rates higher than the theoretical upper-bound for independent sources over the same GMAC.

Appendix

Since $m_{v\to s}^{(2)} = \log\frac{P(x_i^{(2)} = +1)}{P(x_i^{(2)} = -1)}$, we have

$$L_i^{(2)} \triangleq \frac{P(x_i^{(2)} = +1)}{P(x_i^{(2)} = -1)} = e^{m_{v\to s}^{(2)}}.$$

Also, define $P_{j,k} = P\big(x_i^{(1)} = j,\, x_i^{(2)} = k\big)$, where $j,k \in \{-1,+1\}$. It follows that

$$P_{1,1} = P\big(x_i^{(2)} = +1\big)\, P\big(x_i^{(1)} = +1\,|\,x_i^{(2)} = +1\big) = P\big(x_i^{(2)} = +1\big)(1-\alpha),$$

and similarly $P_{1,-1} = P\big(x_i^{(2)} = -1\big)\,\alpha$, $P_{-1,1} = P\big(x_i^{(2)} = +1\big)\,\alpha$, and $P_{-1,-1} = P\big(x_i^{(2)} = -1\big)(1-\alpha)$. Now consider

$$\begin{aligned} p\big(x_i^{(1)} = +1\,|\,y_i\big) &= \frac{p\big(x_i^{(1)} = +1,\, y_i\big)}{p(y_i)} \\ &= \frac{p\big(y_i, x_i^{(1)} = +1, x_i^{(2)} = +1\big) + p\big(y_i, x_i^{(1)} = +1, x_i^{(2)} = -1\big)}{p(y_i)} \\ &= \frac{p\big(y_i\,|\,x_i^{(1)} = +1, x_i^{(2)} = +1\big)\, P_{1,1} + p\big(y_i\,|\,x_i^{(1)} = +1, x_i^{(2)} = -1\big)\, P_{1,-1}}{p(y_i)} \\ &= \frac{q_{1,1}(y_i)\, P_{1,1} + q_{1,-1}(y_i)\, P_{1,-1}}{p(y_i)}. \end{aligned}$$

Similarly, it can be shown that

$$p\big(x_i^{(1)} = -1\,|\,y_i\big) = \frac{q_{-1,1}(y_i)\, P_{-1,1} + q_{-1,-1}(y_i)\, P_{-1,-1}}{p(y_i)}.$$

Therefore

$$\begin{aligned} \frac{p\big(x_i^{(1)} = +1\,|\,y_i\big)}{p\big(x_i^{(1)} = -1\,|\,y_i\big)} &= \frac{q_{1,1}(y_i)\, P_{1,1} + q_{1,-1}(y_i)\, P_{1,-1}}{q_{-1,1}(y_i)\, P_{-1,1} + q_{-1,-1}(y_i)\, P_{-1,-1}} \\ &= \frac{q_{1,1}(y_i)\, P\big(x_i^{(2)} = +1\big)(1-\alpha) + q_{1,-1}(y_i)\, P\big(x_i^{(2)} = -1\big)\,\alpha}{q_{-1,1}(y_i)\, P\big(x_i^{(2)} = +1\big)\,\alpha + q_{-1,-1}(y_i)\, P\big(x_i^{(2)} = -1\big)(1-\alpha)} \\ &= \frac{q_{1,1}(y_i)\, L_i^{(2)}(1-\alpha) + q_{1,-1}(y_i)\,\alpha}{q_{-1,1}(y_i)\, L_i^{(2)}\,\alpha + q_{-1,-1}(y_i)(1-\alpha)}, \end{aligned}$$

from which (9) follows.
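As a quick numerical sanity check on this derivation (our own verification, not part of the paper), the snippet below compares the direct Bayes ratio built from the $P_{j,k}$ against the closed form (9) at random test points.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma2 = 0.1, 0.5
for _ in range(5):
    y, m2 = rng.normal(0, 2), rng.normal(0, 2)      # arbitrary test point
    L2 = np.exp(m2)                                  # L_i^{(2)} = e^{m_{v->s}^{(2)}}
    q = lambda j, k: np.exp(-(y - (j + k)) ** 2 / (2 * sigma2))
    # Left side: direct Bayes ratio with P_{j,k} built from L2 and alpha
    p2 = L2 / (1 + L2)                               # P(x^{(2)} = +1)
    num = q(1, 1) * p2 * (1 - alpha) + q(1, -1) * (1 - p2) * alpha
    den = q(-1, 1) * p2 * alpha + q(-1, -1) * (1 - p2) * (1 - alpha)
    lhs = np.log(num / den)
    # Right side: closed form (9)
    rhs = np.log((q(1, 1) * (1 - alpha) * L2 + q(1, -1) * alpha)
                 / (q(-1, 1) * alpha * L2 + q(-1, -1) * (1 - alpha)))
    assert np.isclose(lhs, rhs)
print("Equation (9) verified at random test points.")
```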

References

  1. Culler D, Estrin D, Srivastava M: Overview of sensor networks. IEEE Comput 2004, 37(8):41-49.

  2. Slepian D, Wolf JK: Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19(4):471-480.

  3. Cover T, Thomas J: Elements of Information Theory. New York: Wiley-Interscience; 2006.

  4. Shamai SS, Verdu S: Capacity of channels with uncoded-message side-information. Proceedings of the International Symposium on Information Theory, Whistler, 17-22 September 1995, p. 7.

  5. Barros J, Servetto SD: Network information flow with correlated sources. IEEE Trans. Inf. Theory 2006, 52(1):155-170.

  6. Cover T, El Gamal A, Salehi M: Multiple-access channel with arbitrarily correlated sources. IEEE Trans. Inf. Theory 1980, IT-26(6):648-657.

  7. Ray S, Medard M, Effros M, Koetter R: On separation for multiple access channels. Proceedings of the IEEE Information Theory Workshop, Chengdu, 22-26 October 2006, 399-403.

  8. Pradhan SS, Choi S, Ramachandran K: A graph-based framework for transmission of correlated sources over multiple-access channels. IEEE Trans. Inf. Theory 2007, 53(12):4583-4604.

  9. Garcia-Frias J, Zhao Y, Zhong W: Turbo-like codes for transmission of correlated sources over noisy channels. IEEE Signal Process. Mag 2007, 24:58-66.

  10. Murugan AD, Gopala PK, El Gamal H: Correlated sources over wireless channels: cooperative source-channel coding. IEEE J. Sel. Areas Commun 2004, 22(6):988-998.

  11. Roumy A, Declercq D: Characterization and optimization of LDPC codes for the 2-user Gaussian multiple access channel. EURASIP J. Wireless Commun. Netw 2007, Article ID 74890.

  12. Shahid I, Yahampath P: Distributed joint source-channel coding using unequal error protection LDPC codes. IEEE Trans. Commun 2013, 61(8):3472-3482.

  13. Chung SY, Forney GD, Richardson TJ, Urbanke R: On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit. IEEE Commun. Lett 2001, 5(2):58-60.

  14. Chung SY, Richardson T, Urbanke R: Analysis of sum-product decoding of LDPC codes using a Gaussian approximation. IEEE Trans. Inf. Theory 2001, 47(2):657-670.

  15. Ryan WE, Lin S: Channel Codes: Classical and Modern. Cambridge: Cambridge University Press; 2009.

  16. Rimoldi B, Urbanke R: A rate-splitting approach to the Gaussian multiple-access channel. IEEE Trans. Inf. Theory 1996, 42(2):364-375.

  17. Richardson T, Urbanke R: Modern Coding Theory. Cambridge: Cambridge University Press; 2008.

  18. Sartipi M, Fekri F: Distributed source coding using short to moderate length rate-compatible LDPC codes: the entire Slepian-Wolf region. IEEE Trans. Commun 2008, 56(3):400-411.

  19. Richardson T, Shokrollahi A, Urbanke R: Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inf. Theory 2001, 47(2):619-637.

  20. ten Brink S, Kramer G, Ashikhmin A: Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun 2004, 52(4):670-678.

  21. ten Brink S: Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun 2001, 49:1727-1737.

  22. Box GEP, Cox DR: An analysis of transformations. J. R. Stat. Soc. B 1964, 26:211-252.

  23. Gasser T, Bacher P, Mocks J: Transformations towards the normal distribution of broad band spectral parameters of the EEG. Electroencephalogr. Clin. Neurophysiol 1982, 53:119-124.

  24. Azevedo-Filho A, Shachter RD: Laplace's method approximations for probabilistic inference in belief networks with continuous variables. Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence, Seattle, 29-31 July 1994, 28-36.

  25. Cohen AC: Estimation in mixtures of two Gaussian distributions. Technometrics 1967, 9(1):15-28.


Acknowledgements

This work has been supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Author information

Correspondence to Pradeepa Yahampath.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Shahid, I., Yahampath, P. Distributed joint source-channel code design for GMAC using irregular LDPC codes. J Wireless Com Network 2014, 3 (2014). https://doi.org/10.1186/1687-1499-2014-3