A block diagram of the system under consideration is shown in Figure 1. Let U1 and U2 be two dependent, uniform binary sources, and let the dependence between them be described by the inter-source correlation parameter α ∈ [0, 0.5], where P(U1 ≠ U2) = α. A sequence of source bits from Uk, k = 1, 2, is encoded by a source-channel code to produce a channel codeword whose bits are modulated to produce the GMAC input Xk ∈ {+1, −1} in equivalent base-band representation. The output of the GMAC is given by

Y = X1 + X2 + W,   (1)

where W ∼ N(0, σ²) is the channel noise and N(μ, σ²) denotes the Gaussian pdf with mean μ and variance σ². In general, the maximum sum rate achievable over a GMAC for two dependent sources is not known. However, when the sources are independent, i.e., α = 0.5, it is known that the maximum sum rate achievable by any code design is I(X1, X2; Y) [3]. In the case of dependent sources, we can consider I(X1, X2; Y) as an upper bound on the achievable sum rate. Consider the direct transmission of the sources, so that P(X1, X2) = P(U1, U2). In Figure 2, we have plotted I(X1, X2; Y) of a GMAC with noise variance σ² as a function of α for different values of σ². Notice that the maximum of I(X1, X2; Y) depends on (α, σ²), which shows that optimizing the codes for the values of these parameters will result in higher sum rates for dependent sources over the same GMAC, compared to independent sources.
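Since direct transmission makes the channel input sum X1 + X2 take the values −2, 0, +2 with probabilities (1 − α)/2, α, (1 − α)/2, the output Y is a three-component Gaussian mixture and I(X1, X2; Y) = h(Y) − h(W) can be evaluated numerically. The following Python sketch (the function name and grid parameters are our own choices, not from the paper) reproduces the qualitative behaviour plotted in Figure 2:

```python
import numpy as np

def gmac_sum_rate(alpha, sigma2, n_grid=20001):
    """I(X1,X2;Y) in bits for Y = X1 + X2 + W, W ~ N(0, sigma2), with direct
    transmission of correlated uniform binary sources: P(X1 != X2) = alpha,
    Xk in {+1,-1}.  The sum X1 + X2 equals -2, 0, +2 with probabilities
    (1-alpha)/2, alpha, (1-alpha)/2, so Y is a 3-component Gaussian mixture
    and I(X1,X2;Y) = h(Y) - h(W)."""
    s_vals = (-2.0, 0.0, 2.0)
    s_probs = ((1 - alpha) / 2, alpha, (1 - alpha) / 2)

    y_max = 2.0 + 8.0 * np.sqrt(sigma2)        # covers the mixture support
    y = np.linspace(-y_max, y_max, n_grid)
    dy = y[1] - y[0]

    # Mixture density p(y) = sum_s P(s) N(y; s, sigma2)
    p_y = sum(ps * np.exp(-(y - s) ** 2 / (2 * sigma2))
              for s, ps in zip(s_vals, s_probs))
    p_y = p_y / np.sqrt(2 * np.pi * sigma2)

    mask = p_y > 0                                       # guard against underflow
    h_y = -np.sum(p_y[mask] * np.log2(p_y[mask])) * dy   # h(Y), numerical integration
    h_w = 0.5 * np.log2(2 * np.pi * np.e * sigma2)       # h(W), closed form
    return h_y - h_w

# The alpha that maximizes the sum rate shifts with the noise variance sigma2.
for sigma2 in (0.25, 0.5, 1.0):
    print(sigma2, [round(gmac_sum_rate(a, sigma2), 3) for a in (0.1, 0.3, 0.5)])
```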
The optimal DJSC code for the given GMAC must induce a distribution P(X1, X2) which maximizes the sum rate of the two channel inputs [3]. While there appears to be no known tractable approach for designing such a code, the main idea pursued in this paper is to use a systematic channel code for each source, so that the two sources are essentially transmitted directly over the GMAC and therefore P(X1, X2) = P(U1, U2) for the systematic bits of the channel input codewords (the parity bits of each source are related to the information bits as given by the parity-check equations of the code [15]). The two channel codes are decoded by a joint decoder which observes the channel output Y. This approach essentially exploits the inter-source correlation to enhance the performance of the channel codes. In particular, if the two sources are independent, then each channel code requires a sufficient number of parity bits to correct the errors due to channel noise and the mutual interference between the two independent bit streams. However, when the two sources are dependent, the joint distribution P(X1, X2) of the information bits of the two channel input codewords provides an additional joint decoding gain, and hence the number of parity bits required for encoding each source is reduced, or equivalently, the achievable sum rate is higher. With practical (finite-length) channel codes, this implies that the same decoding error probability can be achieved at a higher sum rate. Note that by construction, the aforementioned DJSC coding scheme requires that the code length n and the number of systematic information bits m (and the code rate Rc = m/n) be identical for both sources, and therefore the resulting designs correspond to symmetric rates. Achieving asymmetric rates will possibly require some form of rate splitting [16] and will not be considered in this paper.
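As a small illustration of this construction (the code length and parity matrix below are hypothetical toy choices, not a designed LDPC code), the following sketch draws a correlated source pair with P(U1 ≠ U2) = α and encodes both with the same systematic encoder, so that the first m code bits of each codeword are the source bits themselves and retain the joint statistics P(U1, U2):

```python
import numpy as np

rng = np.random.default_rng(0)
m, alpha = 4, 0.1
P = rng.integers(0, 2, size=(m, m))        # assumed parity part of G = [I | P]

def systematic_encode(u, P):
    """x = [u | u.P mod 2]: the first m code bits are the source bits themselves."""
    return np.concatenate([u, (u @ P) % 2])

u1 = rng.integers(0, 2, size=m)            # uniform source U1
u2 = u1 ^ (rng.random(m) < alpha)          # U2 differs from U1 with probability alpha
x1, x2 = systematic_encode(u1, P), systematic_encode(u2, P)
assert np.array_equal(x1[:m], u1)          # systematic part carries P(U1, U2) directly
```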
The code design approach presented in this paper is based on systematic irregular LDPC (SI-LDPC) codes. First, consider an n-bit SI-LDPC code [17] whose parity-check matrix H can be represented by a factor graph with code bit variable (CBV) nodes x(1),…,x(n), parity check factor (PCF) nodes (representing parity-check equations), channel output variable (COV) nodes y(1),…,y(n), and channel factor (CF) nodes. In the case of a Gaussian channel, a CF node represents the conditional pdf p(y(i)|x(i)). The channel outputs are decoded by applying the BP algorithm to the factor graph [17]. For the purpose of code design, a length-n SI-LDPC code can be completely specified by the parameters (n, λ(x), ρ(x)), where

λ(x) = Σ_{i=2}^{dvmax} λi x^{i−1} and ρ(x) = Σ_{i=2}^{dcmax} ρi x^{i−1}

are the edge-perspective degree polynomials of the variable nodes and parity-check nodes, respectively, and λi (resp. ρi) is the fraction of edges connected to CBV (resp. PCF) nodes of degree i (the degree of a node is the number of edges connected to it), satisfying the constraints λ(1) = 1 and ρ(1) = 1 [17]. The parameters dcmax and dvmax are typically chosen in such a manner that the sparsity of the corresponding factor graph is maintained (i.e., the number of edges in the factor graph grows linearly with the codeword length [17]). It is known that a concentrated degree polynomial of the form ρ(x) = ρ x^{s−2} + (1 − ρ) x^{s−1} for some s ≥ 2 and 0 < ρ ≤ 1 is sufficient for achieving near-optimal performance ([13], Theorem 2).
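For a given degree-distribution pair, the design rate follows from the standard relation Rc = 1 − (Σi ρi/i)/(Σi λi/i). A minimal sketch, using a hypothetical degree profile with the concentrated check polynomial above (s = 7, ρ = 0.3):

```python
def design_rate(lam, rho):
    """Design rate R_c = 1 - (sum_i rho_i/i) / (sum_i lambda_i/i) for
    edge-perspective degree distributions given as {degree: edge fraction}."""
    assert abs(sum(lam.values()) - 1) < 1e-9       # lambda(1) = 1
    assert abs(sum(rho.values()) - 1) < 1e-9       # rho(1) = 1
    int_lam = sum(f / d for d, f in lam.items())   # integral of lambda(x) over [0,1]
    int_rho = sum(f / d for d, f in rho.items())   # integral of rho(x) over [0,1]
    return 1 - int_rho / int_lam

lam = {2: 0.3, 3: 0.3, 8: 0.4}     # hypothetical variable degree profile
rho = {6: 0.3, 7: 0.7}             # rho(x) = 0.3 x^5 + 0.7 x^6, i.e. s = 7
print(design_rate(lam, rho))       # -> 0.5
```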
Now consider a two-input GMAC with an SI-LDPC code applied to each input. Since (1) is symmetric with respect to X1 and X2 and the same rate is used for both sources, the same channel code can be used for both sources. While the parity-check matrix H of each SI-LDPC code whose code bits are xk(1),…,xk(n) can be represented by a factor graph, for the joint decoding of the two codes, the combined factor graph shown in Figure 3 has to be used, where the COV nodes y(1),…,y(n) are linked to factor nodes ϕ(y(i), x1(i), x2(i)), i = 1,…,n, which represent (combined) SCF nodes. As described later in this paper, the message passing to and from these nodes in BP decoding is also crucial to the design of the codes. To determine ϕ(·), consider the maximum a posteriori (MAP) decoding of the codeword transmitted on GMAC input k = 1, 2, based on the GMAC output sequence y = (y(1),…,y(n)). Let xk(∼i) denote those code bits in xk = (xk(1),…,xk(n)), except xk(i). Also, for k ∈ {1,2} define k′ = 3 − k, the index of the other GMAC input. Then, it is easy to verify that the MAP decoded value of the i-th bit of the input codeword of GMAC input k is given by

x̂k(i) = argmax_{xk(i) ∈ {+1,−1}} Σ_{xk(∼i)} Σ_{xk′} P(x1, x2 | y),   (2)

where i = 1,…,n,

P(x1, x2 | y) ∝ 1[x1 ∈ C] 1[x2 ∈ C] Π_{i=1}^{n} ϕi(y(i), x1(i), x2(i)),   (3)

C denotes the set of all codewords of the code, 1[·] denotes the indicator function, and

ϕi(y(i), x1(i), x2(i)) = p(y(i) | x1(i), x2(i)) P(x1(i), x2(i)).   (4)
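For the GMAC in (1), the likelihood in (4) is the Gaussian pdf p(y(i)|x1(i), x2(i)) = N(y(i); x1(i) + x2(i), σ²). A minimal sketch of the resulting SCF factor as a 2×2 table (the function name and test values are our own):

```python
import numpy as np

def scf_factor(y, sigma2, p_joint):
    """phi_i(y, x1, x2) = N(y; x1 + x2, sigma2) * P(x1, x2), returned as a
    2x2 table with index 0 <-> +1 and index 1 <-> -1 for both code bits."""
    vals = (+1.0, -1.0)
    tab = np.empty((2, 2))
    for a, x1 in enumerate(vals):
        for b, x2 in enumerate(vals):
            lik = (np.exp(-(y - (x1 + x2)) ** 2 / (2 * sigma2))
                   / np.sqrt(2 * np.pi * sigma2))
            tab[a, b] = lik * p_joint[a, b]
    return tab

alpha, sigma2 = 0.1, 0.5
# Joint prior at a systematic position: P(x1 = x2) = 1 - alpha, uniform marginals.
p_sys = np.array([[(1 - alpha) / 2, alpha / 2],
                  [alpha / 2, (1 - alpha) / 2]])
print(scf_factor(0.7, sigma2, p_sys))
```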
In the factor graph representation of (3), each factor node represents a term in the product [17]. As usual, the factors 1[x1 ∈ C] and 1[x2 ∈ C] are represented by the PCF nodes of the two codes, respectively. On the other hand, each term ϕi(·), i = 1,…,n, which is a function of the code bits x1(i) and x2(i) and the channel output y(i), is represented by an SCF node as shown in Figure 3. As the codes are systematic, for the information bits P(x1(i), x2(i)) is identical to the joint distribution P(u1, u2) of the source bits. For the parity bits of an LDPC code (which has a dense generator matrix), it can be assumed that P(x1(i), x2(i)) = P(x1(i))P(x2(i)) with P(x1(i)) = P(x2(i)) = 0.5 [18].
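Consequently, in BP decoding the message an SCF node passes to a CBV node of one code marginalizes (4) over the other code's bit, weighted by that code's incoming message; the correlated prior at the systematic positions is precisely what produces the joint decoding gain. A self-contained sketch under these assumptions (all names are our own):

```python
import numpy as np

def scf_to_cbv_message(y, sigma2, p_joint, msg_from_other):
    """BP message from an SCF node to the CBV node x1(i):
    mu(x1) ~ sum_{x2} N(y; x1 + x2, sigma2) P(x1, x2) mu_{x2->SCF}(x2).
    Index 0 <-> +1, index 1 <-> -1; the Gaussian normalization cancels."""
    vals = (+1.0, -1.0)
    msg = np.zeros(2)
    for a, x1 in enumerate(vals):
        for b, x2 in enumerate(vals):
            lik = np.exp(-(y - (x1 + x2)) ** 2 / (2 * sigma2))
            msg[a] += lik * p_joint[a, b] * msg_from_other[b]
    return msg / msg.sum()

alpha, sigma2 = 0.1, 0.5
p_sys = np.array([[(1 - alpha) / 2, alpha / 2],    # systematic position: correlated
                  [alpha / 2, (1 - alpha) / 2]])
p_par = np.full((2, 2), 0.25)                      # parity position: independent, uniform
m2 = np.array([0.8, 0.2])                          # incoming message from code 2's CBV node
print(scf_to_cbv_message(0.7, sigma2, p_sys, m2))  # correlated prior pulls x1 toward x2
print(scf_to_cbv_message(0.7, sigma2, p_par, m2))
```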
Sparse parity-check matrices obtained through the EXIT-analysis design procedure do not necessarily correspond to systematic generator matrices. As usual, the codes can be converted to systematic form by using Gaussian elimination. However, the resulting codes have dense parity-check matrices, which makes the computational complexity of BP decoding impractically high. To get around this problem, a bit re-mapping operation is used in the joint decoder to rearrange the systematic code bits, so that the codewords correspond to the sparse parity-check matrices, as shown in Figure 3.
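A minimal sketch of this idea, under our own reading of the re-mapping step (the toy parity-check matrix and all names are hypothetical): GF(2) elimination with recorded column swaps yields a systematic encoder, and the recorded permutation maps the encoded bits back to the order of the original sparse H, so that BP decoding can still operate on the sparse graph:

```python
import numpy as np

def gf2_systematize(H):
    """Row-reduce a binary parity-check matrix H (r x n, assumed full rank) to
    [I | A] over GF(2), swapping columns when needed.  Returns (A, perm), where
    column j of the reduced matrix is column perm[j] of the original H."""
    H = H.copy() % 2
    r, n = H.shape
    perm = np.arange(n)
    for i in range(r):
        rows, cols = np.nonzero(H[i:, i:])     # pivot search in the remaining block
        assert rows.size, "H is rank deficient"
        pr, pc = rows[0] + i, cols[0] + i
        H[[i, pr]] = H[[pr, i]]                # row swap
        H[:, [i, pc]] = H[:, [pc, i]]          # column swap = bit re-mapping
        perm[[i, pc]] = perm[[pc, i]]
        for k in range(r):
            if k != i and H[k, i]:
                H[k] ^= H[i]                   # GF(2) row elimination
    return H[:, r:], perm

def encode(u, A, perm, n):
    """Systematic encoding consistent with the ORIGINAL sparse H: parities
    p = A.u, permuted codeword [p | u], then un-permute so that the sparse H
    (and hence sparse BP decoding) still applies."""
    x_perm = np.concatenate([(A @ u) % 2, u])
    x = np.empty(n, dtype=int)
    x[perm] = x_perm                           # undo the recorded column swaps
    return x

# Toy 3 x 6 parity-check matrix (rate 1/2) standing in for a designed sparse H.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
A, perm = gf2_systematize(H)
x = encode(np.array([1, 0, 1]), A, perm, H.shape[1])
assert not np.any((H @ x) % 2)                 # x is a codeword of the sparse code
```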