# Analog joint source-channel coding over MIMO channels

Francisco J. Vazquez-Araujo^{1} (Email author), Oscar Fresnedo^{1}, Luis Castedo^{1} and Javier Garcia-Frias^{2}

*EURASIP Journal on Wireless Communications and Networking* **2014**:25

https://doi.org/10.1186/1687-1499-2014-25

© Vazquez-Araujo et al.; licensee Springer. 2014

**Received: **14 June 2013

**Accepted: **20 January 2014

**Published: **10 February 2014

## Abstract

Analog joint source-channel coding (JSCC) is a communication strategy that does not follow the separation principle of conventional digital systems but has been shown to approach the optimal distortion-cost tradeoff over additive white Gaussian noise channels. In this work, we investigate the feasibility of analog JSCC over multiple-input multiple-output (MIMO) fading channels. Since, due to complexity constraints, directly recovering the analog source information from the MIMO channel output is not possible, we propose the utilization of low-complexity two-stage receivers that separately perform detection and analog JSCC maximum likelihood decoding. We study analog JSCC MIMO receivers that utilize either linear minimum mean square error or decision feedback MIMO detection. Computer experiments show the ability of the proposed analog JSCC receivers to approach the optimal distortion-cost tradeoff both in the low and high channel signal-to-noise ratio regimes. Performance is analyzed over both synthetically computer-generated Rayleigh fading channels and real indoor wireless measured channels.

## 1 Introduction

The splitting of source compression and channel coding is a fundamental design principle in digital communications known as the ‘separation principle’. The use of separate source and channel coding (SSCC) was introduced and shown to be optimum by Shannon[1] for the case of lossless compression and additive white Gaussian noise (AWGN) channels. Source coding aims at compressing the source information down to its ultimate entropy limit, *H*. If the channel capacity limit, *C*, is larger than *H*, the source information can be optimally sent over the channel using an appropriate capacity-approaching channel coding method (such as Turbo codes or LDPC codes) with rate *R*_{c} as long as *R*_{c}*H*<*C*.

The separation principle has also been shown to be optimum by Berger[2] for lossy compression of analog sources. In this case, the source is compressed down to a certain rate, *R*(*D*), which depends on the desired distortion target, *D*. Again, if *R*_{c}*R*(*D*)<*C*, channel coding allows the source information to be sent over the channel with no errors.

Nevertheless, it should be noticed that the optimality of the separation principle rests on the assumption of infinite complexity and infinite delay. Indeed, when digital communication systems are designed to perform close to their theoretical limit, sources have to be compressed using powerful vector quantization (VQ) and entropy coding methods, and data has to be transmitted using capacity-approaching digital codes that make use of long block lengths. Thus, the suitability of the separation principle for the design of practical communication systems with severe constraints on delay and/or complexity is not clear.

Discrete-time analog communication systems based on the transmission of continuous-amplitude channel symbols can be considered an attractive alternative to digital communication systems. For a lossy source-channel communication system to be optimal, the source distortion and the channel cost should lie on the optimal distortion-cost tradeoff curve. An example of such an optimal system is the direct transmission of discrete-time uncoded Gaussian samples over AWGN channels, both with the same bandwidth[3]. In this case, optimality arises because Gaussian sources are probabilistically matched to the AWGN channel. This idea is further explored in[4] where a set of necessary and sufficient conditions for any discrete-time memoryless point-to-point communication system to be optimal is provided. These conditions are satisfied not only by digital systems designed according to the separation principle but also by analog joint source-channel coded (JSCC) systems for which the complexity and delay can be reduced to the minimum while approaching the optimal distortion-cost tradeoff.

Several authors[5–10] have recently investigated the utilization of non-linear mappings for analog JSCC. These mappings keep complexity and delay at a minimum and can be used for either bandwidth reduction or bandwidth expansion. Nevertheless, the performance of analog JSCC is closer to the optimal distortion-cost curve when the non-linear mappings are used for bandwidth reduction. This is because in the case of bandwidth expansion, it is not possible to envisage a mapping that efficiently fills the entire channel space without simultaneously creating multiple neighbors that are far away in the source space[8]. Thus, the utilization of analog non-linear mappings is particularly well-suited for applications in which broadband analog sources, such as images or audio, are to be transmitted over narrowband channels.

In the literature, most work on analog JSCC focuses on AWGN channels. Exceptions are[11, 12] that consider a two-user single-antenna scenario under a flat fading Rayleigh channel. Another exception is[13] where the implementation on a software-defined radio testbed of a wireless system based on analog JSCC is presented. Excellent performance over wireless channels is attained when the encoder parameters are continuously adapted to the time-varying signal-to-noise ratio (SNR).

In this work, we study the feasibility of analog JSCC over multiple-input multiple-output (MIMO) fading channels that make use of multiple antennas at both transmission and reception.

It is well known from information theory that MIMO channels have a capacity considerably larger than that of their single-input single-output (SISO) counterparts. Thus, broadband analog sources can be transmitted over MIMO channels using a significantly smaller amount of bandwidth.

Optimum decoding of the vector symbols observed at the output of a MIMO channel is difficult due to the non-linear characteristic of the analog JSCC procedure. We circumvent this drawback by considering a low-complexity two-stage receiver structure in which a linear detector is first used to transform the MIMO channel into several parallel SISO channels and then a bank of conventional maximum likelihood (ML) SISO decoders is used to recover the transmitted source samples. Feeding back the SNR information of the equivalent SISO channels allows us to adapt the encoder parameters to the channel time-variations and attain a performance close to the theoretical bounds.

We then examine the feasibility of utilizing a decision feedback (DF) MIMO detector rather than linear detection as the first stage of our receiver. The detector now has a feedforward filter that transforms the MIMO channel into an equivalent lower triangular MIMO channel with unit diagonal entries. Spatial interference can then be sequentially eliminated with a feedback filter whose inputs are the decoded symbols from previously detected antennas. We will show that this non-linear receiver structure outperforms the linear scheme.

The rest of this paper is organized as follows. Section 2 describes the basic principles of analog JSCC and its optimization over SISO channels. Section 3 focuses on analog JSCC over MIMO fading channels. Section 4 presents performance results for the different analog JSCC transmission techniques considered in this work. Two types of channels were considered: synthetic computer-generated Rayleigh fading channels and real indoor fading channels measured with a multiuser MIMO testbed. Finally, Section 5 is devoted to the conclusions.

## 2 Analog joint source-channel coding

At the transmitter, *N*-independent and identically distributed (i.i.d.) analog source symbols are packed into the source vector **x**=(*x*_{1},*x*_{2},…,*x*_{N}) and compressed into one channel symbol *s*. The encoding procedure has two steps: the compression function *M*_{δ}(·) and the matching function *T*_{α}(·).

The first step applies the compression function *M*_{δ}(·) that maps the *N* source symbols into a single value $\widehat{\theta}$. As an example, a particular type of parameterized space-filling continuous curves, called spiral-like curves, can be used to encode the source samples. These curves were proposed for the transmission of Gaussian sources over AWGN channels by Chung and Ramstad[5–7]. For the case of 2:1 compression (i.e., *N*=2), they are formally defined as

$$\mathbf{z}_{\delta}(\theta)=\left(\operatorname{sign}(\theta)\,\frac{\delta}{\pi}\,\theta\sin\theta,\ \frac{\delta}{\pi}\,\theta\cos\theta\right),\qquad(1)$$

where *θ* is the angle from the origin to the point **z**=(*z*_{1},*z*_{2}) on the curve and *δ* is the analog JSCC parameter that determines the distance between two neighboring spiral arms. In the ensuing section, we explain that the encoder parameter *δ* should be optimized if the optimal cost-distortion tradeoff is to be approached.

Although other non-linear continuous mappings can be used, spiral-like curves are frequently utilized for bandwidth reduction in analog JSCC because they can be interpreted as a parametric approximation to power-constrained channel-optimized vector quantization (PCCOVQ)[14]. Indeed, when connecting the adjacent vectors in a PCCOVQ codebook, we obtain a non-linear continuous curve that, for moderate to high SNR, is very similar to the spiral-like curve defined before.

Given a source vector **x**, the compression function *M*_{δ}(·) provides the value $\widehat{\theta}$ corresponding to the point on the spiral that minimizes the distance to **x**, i.e.,

$$\widehat{\theta}=M_{\delta}(\mathbf{x})=\arg\min_{\theta}\ \left\|\mathbf{x}-\mathbf{z}_{\delta}(\theta)\right\|^{2},$$

where **z**_{δ}(*θ*) denotes the point on the spiral with angle *θ*. Therefore, each pair of analog source samples, *x*_{1} and *x*_{2}, that corresponds to a specific point in ℜ^{2} is represented by a value $\widehat{\theta}\in \Re $ that corresponds to the point on the spiral closest to **x**. It is possible to achieve higher compression rates (i.e., *N*>2) by extending (1) to generate more complex curves[15, 16].
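As an illustrative numerical sketch (not the authors' implementation), the compression step can be approximated by a grid search over *θ*; the spiral form of (1) and all function names here are assumptions of this example:

```python
import numpy as np

def spiral_point(theta, delta):
    """Point z_delta(theta) on the spiral-like curve of Eq. (1)."""
    r = (delta / np.pi) * theta
    return np.array([np.sign(theta) * r * np.sin(theta), r * np.cos(theta)])

def M_delta(x, delta, theta_max=30.0, n_grid=200001):
    """Compression function: angle of the spiral point closest to x (grid search)."""
    thetas = np.linspace(-theta_max, theta_max, n_grid)
    r = (delta / np.pi) * thetas
    z1 = np.sign(thetas) * r * np.sin(thetas)
    z2 = r * np.cos(thetas)
    d2 = (z1 - x[0]) ** 2 + (z2 - x[1]) ** 2   # squared distance to every grid point
    return thetas[np.argmin(d2)]
```

A finer grid (or a local refinement around the coarse minimum) trades complexity for encoding accuracy.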

The second step applies an invertible matching function *T*_{α}(·) to transform the compressed samples. In[5, 7, 8], the invertible function

$$T_{\alpha}(\theta)=\operatorname{sign}(\theta)\,|\theta|^{\alpha}$$

with *α*=2 was proposed. However, as shown in[10], the system performance can be improved if *α* is optimized together with *δ*. We have empirically determined through computer simulations that using *α*=1.3 provides a good overall performance for 2:1 analog JSCC systems over AWGN channels and a wide range of SNR and *δ* values.

The resulting channel symbol *s* is normalized in power and transmitted over an AWGN channel, so the received symbol is *y*=*s*+*n*, where $n\sim \mathcal{N}(0,{N}_{0})$ is a real-valued zero-mean Gaussian random variable that represents the channel noise with variance *N*_{0}. Notice that, since the power of the channel symbols is normalized, the SNR is 1/*N*_{0}.

At the receiver, we calculate an estimate$\widehat{\mathbf{x}}$ of the transmitted source symbols given the noisy observation *y*. Previous work[5, 6, 8] considers ML decoding to recover the source symbols from the received symbols. ML decoding exhibits a very low complexity, but it presents a poor performance at low SNRs. This drawback is addressed in[10], where minimum mean square error (MMSE) analog JSCC decoding is proposed as an alternative to ML. When MMSE decoding is employed, the analog system attains a performance close to the optimal distortion-cost tradeoff in the whole SNR region. Unfortunately, it leads to a significant increase on the overall complexity at the receiver, since MMSE estimates are obtained after solving an integral that can be only calculated numerically.

### 2.1 Two-stage approximation to analog JSCC decoding

Rather than directly estimating the source symbols **x** from the received symbol *y*, as in standard ML and MMSE decoding methods, it is possible to use an alternative two-stage decoding approach in which we first calculate an estimate $\widehat{s}$ of the transmitted channel symbol and then decode the source samples from this symbol estimate[17]. In the case of AWGN channels, the linear MMSE estimate of the channel symbol *s* is given by

$$\widehat{s}=\frac{y}{1+{N}_{0}},$$

since the power of the channel symbols is normalized.

Notice that the complexity of our two-stage receiver is the same as that of the ML receiver, since the estimation of the input channel symbol reduces to a simple factor normalization. This factor normalization is key for the ML decoder to approach the optimal cost-distortion tradeoff at low SNR values.
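A sketch of this two-stage AWGN decoder, assuming unit-power channel symbols, the matching function *T*_{α}(*θ*)=sign(*θ*)|*θ*|^{α}, and illustrative parameter values (the names and constants below are not from the paper):

```python
import numpy as np

ALPHA, DELTA, N0 = 1.3, 2.0, 0.01   # illustrative encoder parameters and noise level

def T(theta):
    """Matching function T_alpha(theta) = sign(theta)*|theta|^alpha."""
    return np.sign(theta) * np.abs(theta) ** ALPHA

def T_inv(s):
    """Inverse of the matching function."""
    return np.sign(s) * np.abs(s) ** (1.0 / ALPHA)

def spiral_point(theta):
    """Point on the spiral-like curve for the given angle."""
    r = (DELTA / np.pi) * theta
    return np.array([np.sign(theta) * r * np.sin(theta), r * np.cos(theta)])

def decode(y):
    s_hat = y / (1.0 + N0)          # stage 1: linear MMSE estimate (factor normalization)
    theta_hat = T_inv(s_hat)        # stage 2: invert the matching function...
    return spiral_point(theta_hat)  # ...and map back to the source space
```

The first stage is just a scalar multiplication, which is why the two-stage receiver has the same complexity as plain ML decoding.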

It is interesting to note that the idea of introducing a linear MMSE estimator prior to ML decoding in digital communications has been analyzed in[18]. In this case, it is shown that MMSE estimation is instrumental for achieving the capacity of AWGN channels when using lattice-type coding.

Consider now the transmission of the channel symbols over a flat fading channel, *y*=*h* *s*+*n*, where *h* is a random variable that represents the fading channel response. In the case of Rayleigh fading channels, *h* is a complex-valued zero-mean circularly symmetric Gaussian random variable. Now the SNR fluctuates with the channel response *h*. Assuming normalized channel symbols, the SNR in fading channels is given by

$$\text{SNR}=\frac{|h{|}^{2}}{{N}_{0}}.$$

If the fading channel is normalized so that *E*[ |*h*|^{2}]=1, the average SNR is 1/*N*_{0}.

In this case, the linear MMSE estimate of the channel symbol *s* is given by

$$\widehat{s}=\frac{{h}^{\ast}y}{|h{|}^{2}+{N}_{0}},$$

where the super-index ^{∗} represents complex conjugation. This symbol estimate $\widehat{s}$ can then be decoded using (6).
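A one-line sketch of this estimator (function name is illustrative):

```python
import numpy as np

def mmse_symbol_estimate(y, h, N0):
    """Linear MMSE estimate of a unit-power symbol s from y = h*s + n."""
    return np.conj(h) * y / (np.abs(h) ** 2 + N0)
```

As *N*_{0}→0 the estimate approaches the zero-forcing solution *y*/*h*; at low SNR it shrinks toward zero, which is what keeps the subsequent ML decoder well-behaved.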

### 2.2 Code optimization

System performance is measured by the signal-to-distortion ratio (SDR), where the distortion is the mean square error (MSE) between the decoded and the original source symbols. Thus, denoting the source signal variance as ${\sigma}_{x}^{2}$, the SDR is calculated as $\text{SDR}={\sigma}_{x}^{2}/\text{MSE}$.

The analog JSCC encoder is parameterized by *δ*, which determines the separation between the spiral arms. When considering ML decoding, high SNR, and *α*=2, it is possible to obtain an analytic expression for the optimal value of the analog encoder parameter *δ*[8]. When *α*≠2, however, the analytical optimization of *δ* is not feasible. Instead, *δ* can be numerically optimized by computing the SDR for each SNR over a wide range of values for *δ*. As an example, Table 1 shows the best values of *δ* that were found via computer simulations for the 2:1 compression of a Gaussian source with ${\sigma}_{x}^{2}=1$, using *α*=1.3 and for different SNR values. Optimum *δ* values were determined by exhaustive search over the range 0<*δ*<10 using a 0.1 step-size. A similar optimization procedure can be followed to determine the optimum analog encoder parameters as a function of the SNR for larger reduction ratios (i.e., *N*>2).

**Table 1 Optimal values for** *δ*

| SNR (dB) | *δ* | SNR (dB) | *δ* |
|---|---|---|---|
| 0 | 9.8 | 20 | 1.8 |
| 1 | 8.0 | 21 | 1.7 |
| 2 | 5.6 | 22 | 1.5 |
| 3 | 5.0 | 23 | 1.4 |
| 4 | 4.2 | 24 | 1.3 |
| 5 | 4.0 | 25 | 1.2 |
| 6 | 3.9 | 26 | 1.1 |
| 7 | 3.7 | 27 | 1.0 |
| 8 | 3.6 | 28 | 0.9 |
| 9 | 3.4 | 29 | 0.8 |
| 10 | 3.2 | 30 | 0.8 |
| 11 | 3.1 | 31 | 0.8 |
| 12 | 3.0 | 32 | 0.7 |
| 13 | 2.9 | 33 | 0.7 |
| 14 | 2.7 | 34 | 0.6 |
| 15 | 2.5 | 35 | 0.6 |
| 16 | 2.3 | 36 | 0.5 |
| 17 | 2.2 | 37 | 0.5 |
| 18 | 2.1 | 38 | 0.4 |
| 19 | 2.0 |  |  |

In the case of fading channels, the fact that the transmitter has to select the optimal encoder parameter *δ* prior to transmission implies that *δ* has to be continuously updated according to the instantaneous SNR, which depends on *h*. In a practical setup, the SNR can be estimated at the receiver and sent to the transmitter over a feedback channel. The rate at which the SNR is updated depends on the channel coherence time and must be supported by the feedback channel. The feedback delay should also be smaller than the channel coherence time for the available SNR to be an adequate prediction of the actual SNR. Once the SNR is available, the transmitter can select the optimal analog encoder using a look-up table such as Table 1, which was obtained for a 2:1 bandwidth reduction system.
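The look-up step can be sketched as follows, using a subset of the entries of Table 1 (the nearest-entry rule and names are illustrative):

```python
# SNR (dB) -> optimal delta, a subset of Table 1 (2:1 Gaussian source, alpha = 1.3)
DELTA_TABLE = {0: 9.8, 5: 4.0, 10: 3.2, 15: 2.5, 20: 1.8, 25: 1.2, 30: 0.8, 35: 0.6}

def select_delta(snr_db):
    """Select the encoder parameter from the table entry closest to the fed-back SNR."""
    closest = min(DELTA_TABLE, key=lambda k: abs(k - snr_db))
    return DELTA_TABLE[closest]
```

In practice the full table (0.1 dB granularity is unnecessary given the 1 dB spacing of Table 1) would be stored at the transmitter and indexed by the quantized feedback SNR.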

## 3 Analog JSCC over MIMO channels

In this section, we focus on analog JSCC over MIMO wireless channels with *n*_{T} transmit and *n*_{R}≥*n*_{T} receive antennas. We assume the source symbols are spatially multiplexed over the *n*_{T} transmit antennas. At each transmit antenna, a vector **x**_{i} of *N* analog source symbols is encoded into the channel symbol *s*_{i}, *i*=1,⋯,*n*_{T}, using the encoder described in Section 2. Notice that the bandwidth reduction in this system setup is equal to *N* *n*_{T}, so significant bandwidth reductions can be achieved when using multiple transmit antennas.

The MIMO channel is represented by an *n*_{R}×*n*_{T} channel matrix **H** whose entries *h*_{ij} are random variables. The observed symbols at the MIMO channel output are given by

$$\mathbf{y}=\mathbf{H}\mathbf{s}+\mathbf{n},$$

where **s**, **y**, and **n** are the vectors that represent the channel symbols, the received symbols, and the additive thermal noise, respectively. The thermal noise is assumed to be complex-valued zero-mean circularly symmetric Gaussian and spatially white, i.e., the noise covariance matrix is ${N}_{0}{\mathbf{I}}_{{n}_{R}}$, with ${\mathbf{I}}_{{n}_{R}}$ the *n*_{R}-dimension identity matrix. Channel symbols are also spatially white and normalized so that the radiated power at each antenna is 1/*n*_{T} (i.e., the total radiated power is one). Thus, the covariance matrix of the channel symbols is $\frac{1}{{n}_{T}}{\mathbf{I}}_{{n}_{T}}$, with ${\mathbf{I}}_{{n}_{T}}$ the *n*_{T}-dimension identity matrix. Hence, the MIMO SNR is

$$\text{SNR}=\frac{\operatorname{tr}\left(\mathbf{H}{\mathbf{H}}^{H}\right)}{{n}_{T}{n}_{R}{N}_{0}},$$

where tr(·) denotes the trace operator and the super-index ^{H} represents conjugate transposition. If the MIMO fading channels are normalized so that *E*_{H}[tr(**H** **H**^{H})]=*n*_{T}*n*_{R}, the average SNR is 1/*N*_{0}.

The optimum MMSE decoding of the source symbols would require the direct computation of

$$\widehat{\mathbf{x}}=E\left[\mathbf{x}|\mathbf{y}\right]=\int \mathbf{x}\,p(\mathbf{x}|\mathbf{y})\,d\mathbf{x}\propto\int \mathbf{x}\,p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x})\,d\mathbf{x},\qquad(14)$$

where the mapping function *M*_{δ}(·) is used to obtain the conditional probability *p*(**y**|**x**) and **x** represents the vector with the *Nn*_{T} source samples. Notice that the integral in (14) can only be calculated numerically because *M*_{δ}(·) is discontinuous and highly non-linear. The complexity of such a detector would be extremely high even for a small number of antennas, since it would involve the discretization of an *Nn*_{T}-dimensional space.

As an alternative, we can extend the two-stage receiver described for SISO channels to the MIMO case. Instead of directly calculating an MMSE estimate of the source symbols, we first perform an estimation of the channel symbols transmitted from each antenna using a conventional MIMO detector and then proceed to the decoding of the estimated channel symbols. In this work, we will study two suboptimal MIMO detectors: the MMSE linear detector and the MMSE DF with ordering detection. The basic premise of these two detectors is to perform a spatial filtering of the observations to cancel the spatial interferences and thus transform the MIMO channel into *n*_{
T
} parallel SISO channels. This way the analog JSCC encoding and decoding procedures described in the previous section can be applied.

### 3.1 MIMO analog JSCC decoding with MMSE linear detection

The linear MMSE filter **W**_{MMSE} minimizes the MSE between the transmitted symbol vector **s** and the estimated symbol vector $\widehat{\mathbf{s}}=\mathbf{Wy}$ and is given by

$${\mathbf{W}}_{\text{MMSE}}={\left({\mathbf{H}}^{H}\mathbf{H}+{n}_{T}{N}_{0}{\mathbf{I}}_{{n}_{T}}\right)}^{-1}{\mathbf{H}}^{H}.$$

**W**_{MMSE} does not completely cancel the spatial interference of the MIMO channel, i.e., at ${\widehat{s}}_{i}$, the desired symbol *s*_{i} is corrupted by thermal noise and a residual spatial interference from symbols transmitted through other antennas. Considering this residual spatial interference as Gaussian noise that adds to the thermal noise, it is shown in[19] that the equivalent SNR at each output of the MMSE linear receiver can be expressed as

$${\text{SNR}}_{i}=\frac{{\mu}_{i}}{1-{\mu}_{i}},\qquad(16)$$

where *μ*_{i}=(**W**_{MMSE}**H**)_{ii}. Thus, the equivalent channel that comprises the concatenation of the MIMO channel and the MIMO linear MMSE receiver can be interpreted as a set of SISO parallel channels, each with an equivalent SNR given by (16). Each entry of the estimated symbol vector can be decoded independently using the analog JSCC decoding approach explained in Section 2.

Similarly to the SISO case, we assume that our MIMO system setup is equipped with a feedback channel: the receiver estimates the SNR_{i} values and sends them to the transmitter (see Figure 2). This way we can approach the optimal cost-distortion tradeoff because we can continuously adapt the analog JSCC encoder parameters *δ*_{i}. Again, look-up Table 1 can be used to select the optimal analog encoders for a 2:1 bandwidth reduction system.

It should be noticed that linear MMSE detection is optimum only when the channel symbols are Gaussian, which is not the case. Indeed, even when the sources are Gaussian, the analog JSCC encoder is a non-linear transformation that produces non-Gaussian channel symbols. It is possible to formulate the optimum non-linear MMSE detector, but this requires knowledge of the channel symbol probability *p*(**s**) and the calculation of an integral similar to that in (14). Notice the extraordinary complexity of the optimum non-linear MMSE detector. On the one hand, *p*(**s**) has to be discretized and estimated using Monte Carlo methods since it is not possible, in general, to find an analytical expression for it. When considering fading channels, obtaining *p*(**s**) is particularly difficult since it depends on the analog JSCC encoding parameters, which in turn change with the SNR. On the other hand, the non-linear MMSE detection integral has to be computed numerically, which requires a refined discretization of *p*(**s**) to approach optimality. In the ensuing subsection, we investigate the utilization of a non-linear receiving structure for analog JSCC over MIMO channels that, although suboptimal, outperforms linear MMSE detection while keeping complexity at a low level.

### 3.2 MIMO analog JSCC decoding with decision feedback detection

The feedforward (FF) filter of the DF receiver is obtained from the Cholesky factorization

$${\mathbf{H}}^{H}\mathbf{H}+{n}_{T}{N}_{0}{\mathbf{I}}_{{n}_{T}}={\mathbf{L}}^{H}\mathit{\Delta}\mathbf{L},$$

where **L** is an *n*_{T}×*n*_{T} lower triangular matrix with unit diagonal and Δ is an *n*_{T}×*n*_{T} diagonal matrix. If we define the whitening filter **B**^{H}=Δ^{-1}**L**^{-H}, the FF filter is the product of the matched and whitening filters, i.e., **W**_{MMSE-DF}=**B**^{H}**H**^{H}. The overall response of the FF filter and the channel is

$${\mathbf{W}}_{\text{MMSE-DF}}\mathbf{H}=\mathbf{L}-{n}_{T}{N}_{0}{\mathit{\Delta}}^{-1}{\mathbf{L}}^{-H}.\qquad(17)$$

In order to simplify the derivation of the DF receiver, we will assume that there are no decoding errors. Under this assumption, the spatially causal component of the interference in (17) can be successively removed with the FB filter$\mathbf{L}-{\mathbf{I}}_{{n}_{T}}$ without altering the noise statistics at the decoder inputs. An advantage of analog JSCC is that there is no delay in the encoding and re-encoding steps which significantly simplifies the implementation of DF MIMO receivers.

Once the spatial interference has been removed, the resulting parallel SISO channels can be decoded with the analog JSCC decoders, adapting the *δ*_{i} parameters following look-up Table 1. It can be easily shown that Equation (16) is also valid to calculate the SNR value of each equivalent SISO channel with

$${\mu}_{i}=1-\frac{{n}_{T}{N}_{0}}{{\mathit{\Delta}}_{\mathit{\text{ii}}}}\phantom{\rule{1em}{0ex}}\Rightarrow\phantom{\rule{1em}{0ex}}{\text{SNR}}_{i}=\frac{{\mathit{\Delta}}_{\mathit{\text{ii}}}}{{n}_{T}{N}_{0}}-1\approx \frac{{\mathit{\Delta}}_{\mathit{\text{ii}}}}{{n}_{T}{N}_{0}},$$

where the approximation holds when *N*_{0}≪1.

The performance of DF detection depends on the order in which the streams are detected. Detection ordering can be represented by the column-permuted channel matrix $\stackrel{\u0304}{\mathbf{H}}=\mathbf{HP}$, where **P** is a permutation matrix. Thus, the optimum ordering is

$${\mathbf{P}}_{\text{opt}}=\arg \underset{\mathbf{P}}{min}\ \sum_{i=1}^{{n}_{T}}\frac{{n}_{T}{N}_{0}}{{\stackrel{\u0304}{\mathit{\Delta}}}_{\mathit{\text{ii}}}},\qquad(19)$$

where $\stackrel{\u0304}{\mathit{\Delta}}$ results from the Cholesky factorization of ${\stackrel{\u0304}{\mathbf{H}}}^{H}\stackrel{\u0304}{\mathbf{H}}+{n}_{T}{N}_{0}{\mathbf{I}}_{{n}_{T}}$. This optimization problem can be readily solved by searching over the *n*_{T}! possible permutation matrices and selecting the one that minimizes the MMSE cost function (19). Computer simulations show that, in the low SNR regime, the same ordering results are obtained when considering either the exact or the approximate expression in (19).

## 4 Experimental results

In this section, we present the results of several computer experiments that were carried out to illustrate the performance of the proposed MIMO analog JSCC transmission methods. We considered two types of source distributions, Gaussian and Laplacian, which are typically encountered in practical applications such as image transmission or compressive sensing. System performance is evaluated in terms of the SDR at reception for the given scenario and channel SNR. In each experiment, we generated 1,000,000 source symbols of a given distribution, simulated their transmission over the considered channel at the given SNR, and calculated an estimate of the MSE between the decoded symbols $\widehat{\mathbf{x}}$ and the original symbols **x** (see (10)).

The optimal distortion-cost tradeoff is the maximum attainable SDR for a given SNR. In the literature, this theoretical limit is known as the optimum performance theoretically attainable (OPTA)[21]. The OPTA is calculated by applying *R*_{c}*R*(*D*)=*C*, that is, by equating the product of the number of source samples transmitted per channel use, *R*_{c}, and the rate-distortion function, *R*(*D*), to the channel capacity, *C*.

Since we transmit *R*_{c}=2*Nn*_{T} source samples in each (complex) channel use, the OPTA for a Gaussian source is calculated as

$${\text{SDR}}_{\text{OPTA}}={e}^{2C/{R}_{c}},$$

where, for an *n*_{T}×*n*_{R} system, the channel capacity (expressed in nats per channel use) is given by

$$C={E}_{\mathbf{H}}\left[\ln \det \left({\mathbf{I}}_{{n}_{R}}+\frac{1}{{n}_{T}{N}_{0}}\mathbf{H}{\mathbf{H}}^{H}\right)\right],$$

and the expectation is taken over the realizations of the channel matrix **H**. On the other hand, the rate-distortion function depends on the considered distribution. If we assume the MSE as the measure of distortion, the rate-distortion function of a Gaussian source can be expressed as a function of the SDR as

$$R(D)=\frac{1}{2}\ln \left(\text{SDR}\right)\phantom{\rule{1em}{0ex}}\text{nats per source sample.}$$

For the Laplacian source, *R*(*D*) can be lower-bounded by the Shannon bound

$$R(D)\ge h(x)-\frac{1}{2}\ln \left(2\pi eD\right),$$

where *D* is the distortion and *h*(*x*) is the differential entropy of the source. The entropy of a Laplacian distribution with variance ${\sigma}_{x}^{2}$ is given by

$$h(x)=1+\ln \left(\sqrt{2}{\sigma}_{x}\right).$$

Two types of channels were considered during our computer experiments: computer-generated synthetic Rayleigh fading channels and real fading channels obtained after a measurement campaign carried out in a multiuser indoor scenario.
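A sketch of the OPTA computation for a Gaussian source, assuming the capacity expression *C*=*E*_{H}[ln det(**I**+**HH**^{H}/(*n*_{T}*N*_{0}))] and *R*(*D*)=½ln(SDR); averaging `mimo_capacity_nats` over many draws of **H** gives the ergodic OPTA (function names are illustrative):

```python
import numpy as np

def mimo_capacity_nats(H, N0):
    """Capacity (nats per channel use) of one MIMO realization, per-antenna power 1/nT."""
    n_r, n_t = H.shape
    return np.log(np.linalg.det(np.eye(n_r) + (H @ H.conj().T) / (n_t * N0))).real

def opta_sdr_gaussian(H, N0, Rc):
    """OPTA SDR for a Gaussian source from Rc*R(D) = C with R(D) = 0.5*ln(SDR)."""
    return np.exp(2.0 * mimo_capacity_nats(H, N0) / Rc)
```

For a SISO channel with |*h*|=1, *N*_{0}=1 and *R*_{c}=2, this reduces to SDR=1+SNR=2, the classic result for uncoded Gaussian transmission.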

### 4.1 Synthetic MIMO Rayleigh channels

In this subsection, symmetric channels with *n*_{T}=*n*_{R}=2 and 4 transmit and receive antennas were synthetically generated. In particular, we emulated ergodic spatially white MIMO Rayleigh fading channels whose entries *h*_{ij} are realizations of complex-valued zero-mean circularly symmetric Gaussian independent and identically distributed (i.i.d.) random variables.

The OPTA curves were obtained considering the channel normalization *E*_{H}[tr(**H** **H**^{H})]=*n*_{T}*n*_{R} and the channel symbols covariance matrix normalization given by (13). It can be seen that, for Gaussian sources and high SNRs, the SDR obtained with DF MIMO receivers is 2 dB below the OPTA, while this difference is 4 dB for linear receivers, i.e., DF MIMO receivers produce a distortion that is 2 dB better than that obtained with linear MIMO receivers. For Laplacian sources, the performance of the proposed MIMO receivers is worse than with Gaussian sources: the distortion obtained with DF and linear MIMO receivers is 3 dB and 4.2 dB below the OPTA, respectively. Still, DF MIMO receivers clearly outperform linear MIMO receivers, yielding a 1.2 dB better SDR. Similarly to the digital case, the superior performance of DF MIMO receivers is due to the non-linear decoding and re-encoding operations carried out during the decision feedback stage.

### 4.2 Real indoor channels

In order to get a more complete assessment of the analog JSCC scheme considered in this work, we carried out a series of computer experiments considering real channels measured from an indoor scenario. Within the COMONSENS project (http://www.comonsens.org), a wireless network hardware demonstrator was constructed for the practical evaluation of multiuser multiantenna transmission techniques. The testbed was jointly designed and implemented by two research groups from University of Cantabria (UC) and University of A Coruña (UDC) in Spain. The testbed consists of three transmit and three receive nodes each equipped with MIMO capabilities. For a detailed description of the COMONSENS Multiuser MIMO testbed, see the URL above.

## 5 Conclusions

Analog JSCC of discrete-time continuous-amplitude sources over MIMO wireless channels has been investigated. Since directly recovering the analog source information from the MIMO channel output is not possible, we proposed the utilization of two-stage receivers that separately perform detection and analog JSCC decoding. We considered analog JSCC MIMO receivers that utilize either linear MMSE or DF MIMO detection. Different computer experiments were carried out to illustrate the ability of the different analog JSCC MIMO receivers to approach the optimal distortion-cost tradeoff. Both synthetic computer-generated Rayleigh fading channels and real indoor wireless measured channels were considered. Particularly remarkable is the performance of analog JSCC DF MIMO receivers, which attain distortion values within 3 dB from the OPTA over all considered channels for both Gaussian and Laplacian sources.

## Declarations

### Acknowledgements

This work has been funded by Xunta de Galicia, MINECO of Spain, and FEDER funds of the EU under grants 2012/287, TEC2010-19545-C04-01, and CSD2008-00010; and by NSF award CIF-0915800.


## References

1. Shannon CE: A mathematical theory of communication. *Bell Syst. Tech. J.* 1948, 27:379-423.
2. Berger T: *Rate Distortion Theory: A Mathematical Basis for Data Compression*. Englewood Cliffs: Prentice Hall; 1971.
3. Goblick TJ: Theoretical limitations on the transmission of data from analog sources. *IEEE Trans. Inf. Theory* 1965, 11(4):558-567.
4. Gastpar M, Rimoldi B, Vetterli M: To code, or not to code: lossy source-channel communication revisited. *IEEE Trans. Inf. Theory* 2003, 49(5):1147-1158.
5. Chung SY: On the construction of some capacity-approaching coding schemes. Ph.D. dissertation, Dept. EECS, Massachusetts Institute of Technology, 2000.
6. Ramstad TA: Shannon mappings for robust communication. *Telektronikk* 2002, 98:114-128.
7. Hekland F, Oien GE, Ramstad TA: Using 2:1 Shannon mapping for joint source-channel coding. In *Proceedings of the 2005 Data Compression Conference (DCC)*. Snowbird, UT; 2005:223-232.
8. Hekland F, Floor PA, Ramstad TA: Shannon-Kotelnikov mappings in joint source-channel coding. *IEEE Trans. Commun.* 2009, 57:95-104.
9. Rüngeler M, Schotsch B, Vary P: Improved decoding of Shannon-Kotel’nikov mappings. In *Proceedings of the 2010 International Symposium on Information Theory and Its Applications*. Taichung; 2010:633-638.
10. Hu Y, Garcia-Frias J, Lamarca M: Analog joint source-channel coding using non-linear curves and MMSE decoding. *IEEE Trans. Commun.* 2011, 59(11):3016-3026.
11. Brante G, Souza R, Garcia-Frias J: Analog joint source-channel coding in Rayleigh fading channels. In *Proceedings of the 2011 International Conference on Acoustics, Speech, and Signal Processing (ICASSP)*. Prague; 2011:3148-3151.
12. Brante G, Souza R, Garcia-Frias J: Spatial diversity using analog joint source channel coding in wireless channels. *IEEE Trans. Commun.* 2013, 61:301-311.
13. Garcia-Naya J, Fresnedo O, Vazquez-Araujo F, Gonzalez-Lopez M, Castedo L, Garcia-Frias J: Experimental evaluation of analog joint source-channel coding in indoor environments. In *Proceedings of the 2011 International Conference on Communications (ICC)*. Kyoto; 2011.
14. Fuldseth A, Ramstad TA: Bandwidth compression for continuous-amplitude channels based on vector approximation to a continuous subset of the source signal space. In *Proceedings of ICASSP 1997*. Munich; 1997.
15. Floor PA, Ramstad TA: Dimension reducing mappings in joint source-channel coding. In *Proceedings of the 2006 Nordic Signal Processing Symposium (NORSIG)*. Reykjavik; 2006.
16. Floor PA: On the theory of Shannon-Kotelnikov mappings in joint source-channel coding. Ph.D. dissertation, Norwegian University of Science and Technology, 2008.
17. Fresnedo O, Vazquez-Araujo FJ, Castedo L, Garcia-Frias J: Low-complexity near-optimal decoding for analog joint source channel coding using space-filling curves. *IEEE Commun. Lett.* 2013, 17(4):745-748.
18. Forney GD Jr: On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener. In *Proceedings of the 41st Allerton Conference on Communication, Control, and Computing*. Monticello, IL; 2003.
19. Wang X, Poor HV: Iterative (turbo) soft interference cancellation and decoding for coded CDMA. *IEEE Trans. Commun.* 1999, 47(7):1046-1061.
20. Foschini G, Golden G, Valenzuela R, Wolniansky P: Simplified processing for high spectral efficiency wireless communication employing multi-element arrays. *IEEE J. Sel. Areas Commun.* 1999, 17(11):1841-1852.
21. Berger T, Tufts D: Optimum pulse amplitude modulation I: transmitter-receiver design and bounds from information theory. *IEEE Trans. Inf. Theory* 1967, 13(2):196-208.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.