In order to estimate the aforementioned original information sequence , the optimum joint receiver would symbolwise apply the Maximum A Posteriori (MAP) decision criterion, that is,
where denotes conditional probability. Instead of directly evaluating the above decision criterion, a suboptimum practical scheme would first compute the conditional probabilities of the encoded symbol given the received sequence, which are given, for and , as
where the proportionality stands for , and denotes that all binary variables are included in the sum except , that is, the sum is evaluated over all possible combinations of the set . Once the conditional probabilities for the th sensor codeword are computed, an estimation of the original sensor sequence would be obtained by performing (1) iterative LDGM decoding based on , carried out independently of the LDGM decoding procedures of the other sensors, and (2) an outer BCH decoding based on the hard-decoded sequence at the output of the LDGM decoder. Finally, the recovered sensor sequences () would be fused to render the estimation as
that is, by symbolwise majority voting over the estimated sensor sequences. Notice that this practical scheme sequentially performs channel detection, LDGM decoding, BCH decoding, and fusion of the decoded data.
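For illustration, the final fusion step can be sketched as follows (a minimal Python sketch, assuming the hard-decoded sensor sequences are available as the rows of a binary array; the function name is hypothetical):

```python
import numpy as np

def fuse_majority(estimates):
    """Symbolwise majority voting over M hard-decoded binary sensor sequences.

    estimates: (M, n) array of decoded bits in {0, 1}, one row per sensor.
    Returns the fused length-n binary sequence.
    """
    estimates = np.asarray(estimates)
    # A symbol is decided as 1 when more than half of the sensors vote 1.
    votes = estimates.sum(axis=0)
    return (votes > estimates.shape[0] / 2).astype(int)
```

For instance, with three sensors, a symbol position is decoded as 1 whenever at least two of the three recovered sequences carry a 1 at that position.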
However, the performance of the above separate approach can be easily improved upon if one notices that, since we assume (see Section 2), the sensor sequences are symbolwise spatially correlated, that is,
for . As widely evidenced in the literature on the transmission of correlated information sources (see references in Section 1), this correlation should be exploited at the receiver in order to enhance the reliability of the fused sequence . In other words, the considered scenario should take advantage of this correlation, not only through an enhanced effective SNR at the receiver thanks to the correlation-preserving properties of LDGM codes, but also through the exploitation of the statistical relation between sequences corresponding to different sensors . The latter dependence between and can be efficiently capitalized on by (1) describing the joint probability distribution of all the variables involved in the system by means of factor graphs and (2) marginalizing for via the message-passing Sum-Product Algorithm (SPA). This methodology reduces the computational complexity with respect to a direct marginalization based on exhaustive evaluation of the entire joint probability distribution. In particular, the statistical relation between sensor sequences is exploited in one of the compounding factor subgraphs of the receiver, as will be detailed later.
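The complexity advantage of message passing over exhaustive marginalization follows from the distributive law, which the following toy sketch illustrates on a two-factor chain of binary variables (an illustrative example only, not the receiver's actual factor graph):

```python
import numpy as np

np.random.seed(0)

# Joint distribution that factors as f(a, b) * g(b, c) over binary variables.
f = np.random.rand(2, 2)  # factor f(a, b)
g = np.random.rand(2, 2)  # factor g(b, c)

# Exhaustive evaluation of p(b) (up to normalization): sum over all 2^2
# combinations of the remaining variables for each value of b.
brute = np.array([sum(f[a, b] * g[b, c] for a in range(2) for c in range(2))
                  for b in range(2)])

# Sum-product rearrangement: each factor sends b a message obtained by
# summing out its other variable; the marginal is the product of messages.
msg_f = f.sum(axis=0)   # message from factor f to b: sum over a
msg_g = g.sum(axis=1)   # message from factor g to b: sum over c
spa = msg_f * msg_g

assert np.allclose(brute, spa)
```

In a chain of such factors, the exhaustive sum grows exponentially with the number of variables, whereas the message-passing rearrangement grows only linearly.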
This factor graph is exemplified in Figure 3(a), where the graph structure of the joint detector, decoder, and data fusion scheme is depicted for sensors. As shown in this plot, the graph is built by interconnecting different subgraphs: the subgraph modeling the statistical dependence between and for all (labeled as SENSING), the factor graph that relates sensor sequence to codeword through the LDGM parity check matrix and the BCH code (to be detailed later), and the subgraph capturing the relationship between the received sequence and the codewords , with (labeled as MAC). Observe that the interconnection between subgraphs is done via the variable nodes corresponding to and . In this context, since the concatenation of the LDGM and BCH codes is systematic, variable nodes and collapse into a single node , which is not shown in the plots for the sake of clarity. Before delving into each subgraph, it is also important to note that this interconnected set of subgraphs embodies an overall cyclic factor graph over which the SPA iterates, for a fixed number of iterations , in the order MAC → LDGM → BCH → LDGM → LDGM → BCH → SENSING.
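The fixed-iteration schedule over the interconnected subgraphs can be sketched generically as follows (a hypothetical skeleton in which each update function stands in for the actual message computations of one subgraph):

```python
def run_spa_schedule(subgraph_updates, state, n_iters):
    """Run a fixed number of SPA sweeps over a cyclic factor graph.

    subgraph_updates: ordered list of (name, update_fn) pairs, e.g. the
        MAC, per-sensor LDGM/BCH, and SENSING updates, visited in the
        prescribed order within each iteration.
    state: the message/soft-information state passed between subgraphs.
    n_iters: the fixed number of outer iterations.
    """
    for _ in range(n_iters):
        for _name, update in subgraph_updates:
            # Each subgraph refines the state using the messages produced
            # by the subgraphs visited before it in the schedule.
            state = update(state)
    return state
```

Because the overall graph is cyclic, the SPA is not exact here; running a fixed number of sweeps in a prescribed subgraph order is the usual practical compromise.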
Let us start by analyzing the MAC subgraph, which is represented in Figure 3(b). Variable nodes are linked to the received symbol through the auxiliary variable node , which stands for the noiseless version of the MAC output as defined in expression (1). If we denote as the set of possible values of determined by the possible combinations of and the MAC coefficients , then the message corresponding to will be given by the conditional probability distribution of the AWGN channel, that is
where the value of the constant is selected so as to satisfy . On the other hand, the function associated with the check node connecting to is an indicator function defined as
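Assuming an AWGN likelihood of the form exp(-(y - z)^2 / (2σ²)) evaluated over the candidate noiseless MAC outputs, the normalized message can be sketched as follows (function name and signature are illustrative):

```python
import numpy as np

def mac_message(y, candidates, sigma2):
    """Message from the received symbol y to the auxiliary noiseless node.

    candidates: possible noiseless MAC outputs, i.e. the values produced by
        all combinations of the transmitted bits and the MAC coefficients.
    sigma2: AWGN noise variance.
    Returns probabilities proportional to the Gaussian likelihood of each
    candidate, scaled so that the message sums to one.
    """
    z = np.asarray(candidates, dtype=float)
    lik = np.exp(-(y - z) ** 2 / (2.0 * sigma2))
    # The normalization constant plays the role of the constant chosen so
    # that the message is a valid probability distribution.
    return lik / lik.sum()
```

As expected, the candidate closest to the received value receives the largest probability mass, with the contrast between candidates sharpening as the noise variance decreases.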
In regard to Figure 3(b), observe that a set of switches controlled by the binary variables and drives the connection/disconnection of the systematic () and parity () variable nodes from the MAC subgraph. The reason is that, as detailed later in Section 4, the degradation of the iterative SPA due to short-length cycles in the underlying factor graph can be minimized by properly setting these switches.
The analysis continues with Figure 3(c), where the block integrating the BCH decoder is depicted in detail. At this point it is worth mentioning that the rationale behind concatenating the BCH code with the LDGM code lies in the statistics of the errors per simulated block, as the simulation results in Section 4 will clearly show. Based on these statistics, it is concluded that the error floor is due to most of the simulated blocks having a low number of symbols in error, rather than to a few blocks with errors in most of their constituent symbols. Consequently, a BCH code capable of correcting up to errors can be applied to detect and correct these few errors per block at a small loss in performance. Having said this, the integration of the BCH decoder into the proposed iterative receiver requires some preliminary definitions.

(i) : a posteriori soft information for the value of the node , which is computed, at iteration and , as the product of the a posteriori soft information rendered by the SPA when applied to the MAC and LDGM subgraphs.

(ii) : similar to the previously defined , this notation refers to the a posteriori information for the value of node , which is calculated, at iteration and , as the product of the corresponding a posteriori information produced at both the MAC and LDGM subgraphs.

(iii) : extrinsic soft information for built upon the information provided by the rest of the sensors at iteration and time tick .

(iv) : refined a posteriori soft information of node for the value , which is produced as a consequence of the processing stage in Figure 3(c).
Under the above definitions, the processing scheme depicted in Figure 3(c) aims at refining the input soft information coming from the MAC and LDGM subgraphs by first performing a hard decision (HD) on the BCH-encoded sequence based on , , and the information output by the SENSING subgraph in the previous iteration, that is, . This is done within the current iteration . Once the binary estimated sequence corresponding to the BCH-encoded block at the th sensor is obtained and decoded, the binary output is used to adaptively refine the a posteriori soft information as under the flipping rule
which is performed for . It is interesting to observe that, in this expression, all indices detected as erroneous by the BCH decoder consequently drive a flip in the soft information fed to the SENSING subgraph.
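One way to read the flipping rule is that the soft pair associated with each position corrected by the BCH decoder is swapped before being fed to the SENSING subgraph; a minimal sketch under that interpretation (names are hypothetical, and the exact rule is given by the expression above):

```python
import numpy as np

def refine_soft_info(p1, error_idx):
    """Flip the a posteriori soft information at BCH-corrected positions.

    p1: array with the a posteriori probability of bit value 1 per symbol
        (the probability of value 0 is its complement, so swapping the pair
        amounts to replacing p1 with 1 - p1).
    error_idx: indices flagged and corrected by the BCH decoder.
    Returns the refined soft information to be fed to the SENSING subgraph.
    """
    refined = np.asarray(p1, dtype=float).copy()
    idx = list(error_idx)
    refined[idx] = 1.0 - refined[idx]  # flip only the detected positions
    return refined
```

Positions not flagged by the BCH decoder pass through unchanged, so the refinement only acts where the outer code has detected an error.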
Finally, we consider Figure 3(d), corresponding to the SENSING subgraph, where the refined soft information from all sensors is fused to provide an estimation of as . Let denote the soft information on (for the value and computed for ) contributed by sensor at iteration . The SPA applied to this subgraph renders (see [41, equations (5) and (6)])
where denotes the sensing error probability, which in turn establishes the amount of correlation between sensors. The factors account for the normalization of each pair of messages, that is, for all . The estimation of at iteration is then given by
that is, by the product of all messages arriving at variable node at iteration . The iteration ends by computing the soft information fed back from the SENSING subgraph directly to the corresponding LDGM decoder, namely,
where, as before, represents a normalization factor for each message pair.
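Assuming the sensing model is a binary symmetric channel with error probability p, the SENSING-subgraph fusion and the extrinsic feedback to each sensor can be sketched as follows (message forms are illustrative, not a transcription of [41]; all names are hypothetical):

```python
import numpy as np

def sensing_fuse(soft, p):
    """SENSING-subgraph update under a BSC sensing model (sketch).

    soft: (M, 2) array; soft[m] = [Pr(bit = 0), Pr(bit = 1)] refined soft
        information contributed by sensor m at the current iteration.
    p: sensing error probability, which sets the inter-sensor correlation.
    Returns (theta_est, feedback): theta_est is the normalized product of
    all incoming messages at the fused variable node; feedback[m] is the
    normalized leave-one-out product fed back to sensor m's LDGM decoder.
    """
    soft = np.asarray(soft, dtype=float)
    # Pass each sensor's soft pair through the BSC sensing factor.
    flip = np.array([[1.0 - p, p], [p, 1.0 - p]])
    msgs = soft @ flip
    msgs /= msgs.sum(axis=1, keepdims=True)  # normalize each message pair
    prod = np.prod(msgs, axis=0)
    theta_est = prod / prod.sum()            # product of all incoming messages
    # Extrinsic feedback: product over all sensors except m, passed back
    # through the sensing factor and normalized per pair.
    feedback = np.empty_like(msgs)
    for m in range(msgs.shape[0]):
        loo = np.prod(np.delete(msgs, m, axis=0), axis=0)
        loo = loo @ flip
        feedback[m] = loo / loo.sum()
    return theta_est, feedback
```

Note that the feedback to sensor m deliberately excludes sensor m's own contribution, in keeping with the extrinsic-information convention of iterative decoding.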