Open Access

Analog joint source-channel coding over MIMO channels

  • Francisco J. Vazquez-Araujo,
  • Oscar Fresnedo,
  • Luis Castedo and
  • Javier Garcia-Frias

EURASIP Journal on Wireless Communications and Networking 2014, 2014:25

Received: 14 June 2013

Accepted: 20 January 2014

Published: 10 February 2014


Analog joint source-channel coding (JSCC) is a communication strategy that does not follow the separation principle of conventional digital systems but has been shown to approach the optimal distortion-cost tradeoff over additive white Gaussian noise channels. In this work, we investigate the feasibility of analog JSCC over multiple-input multiple-output (MIMO) fading channels. Since, due to complexity constraints, directly recovering the analog source information from the MIMO channel output is not possible, we propose the utilization of low-complexity two-stage receivers that separately perform detection and analog JSCC maximum likelihood decoding. We study analog JSCC MIMO receivers that utilize either linear minimum mean square error or decision feedback MIMO detection. Computer experiments show the ability of the proposed analog JSCC receivers to approach the optimal distortion-cost tradeoff both in the low and high channel signal-to-noise ratio regimes. Performance is analyzed over both synthetic computer-generated Rayleigh fading channels and real indoor wireless measured channels.

1 Introduction

The splitting of source compression and channel coding is a fundamental design principle in digital communications known as the ‘separation principle’. The use of separate source and channel coding (SSCC) was introduced and shown to be optimum by Shannon [1] for the case of lossless compression and additive white Gaussian noise (AWGN) channels. Source coding aims at compressing the source information down to its ultimate entropy limit, H. If the channel capacity limit, C, is larger than H, the source information can be optimally sent over the channel using an appropriate capacity-approaching channel coding method (such as Turbo codes or LDPC codes) with rate R_c as long as R_c H < C.

The separation principle has also been shown to be optimum by Berger [2] for lossy compression of analog sources. In this case, the source is compressed down to a certain rate, R(D), which depends on the desired distortion target, D. Again, if R_c R(D) < C, channel coding allows the source information to be sent over the channel with no errors.

Nevertheless, it should be noticed that the optimality of the separation principle grounds on the assumption of infinite complexity and infinite delay. Indeed, when digital communication systems are designed to perform close to their theoretical limit, sources have to be compressed using powerful vector quantization (VQ) and entropy coding methods, and data has to be transmitted using capacity-approaching digital codes that make use of long block lengths. Thus, the suitability of the separation principle for the design of practical communication systems with severe constraints on delay and/or complexity is not clear.

Discrete-time analog communication systems based on the transmission of continuous-amplitude channel symbols can be considered an attractive alternative to digital communication systems. For a lossy source-channel communication system to be optimal, the source distortion and the channel cost should lie on the optimal distortion-cost tradeoff curve. An example of such an optimal system is the direct transmission of discrete-time uncoded Gaussian samples over AWGN channels, both with the same bandwidth[3]. In this case, optimality arises because Gaussian sources are probabilistically matched to the AWGN channel. This idea is further explored in[4] where a set of necessary and sufficient conditions for any discrete-time memoryless point-to-point communication system to be optimal is provided. These conditions are satisfied not only by digital systems designed according to the separation principle but also by analog joint source-channel coded (JSCC) systems for which the complexity and delay can be reduced to the minimum while approaching the optimal distortion-cost tradeoff.

Several authors [5–10] have recently investigated the utilization of non-linear mappings for analog JSCC. These mappings preserve complexity and delay at the minimum and can be used for either bandwidth reduction or bandwidth expansion. Nevertheless, the performance of analog JSCC is closer to the optimal distortion-cost curve when the non-linear mappings are used for bandwidth reduction. This is because in the case of bandwidth expansion, it is not possible to envisage a mapping that efficiently fills the entire channel space without simultaneously creating multiple neighbors that are far away in the source space [8]. Thus, the utilization of analog non-linear mappings is particularly well-suited for applications in which broadband analog sources, such as images or audio, are to be transmitted over narrowband channels.

In the literature, most work on analog JSCC focuses on AWGN channels. Exceptions are [11, 12], which consider a two-user single-antenna scenario under a flat fading Rayleigh channel. Another exception is [13], where the implementation on a software-defined radio testbed of a wireless system based on analog JSCC is presented. Excellent performance over wireless channels is attained when the encoder parameters are continuously adapted to the time-varying signal-to-noise ratio (SNR).

In this work, we study the feasibility of analog JSCC over multiple-input multiple-output (MIMO) fading channels that make use of multiple antennas at both transmission and reception.

It is well known from information theory that MIMO channels have a capacity considerably larger than that of their single-input single-output (SISO) counterparts. Thus, broadband analog sources can be transmitted over MIMO channels using a significantly smaller amount of bandwidth.

Optimum decoding of the vector symbols observed at the output of a MIMO channel is difficult due to the non-linear characteristic of the analog JSCC procedure. We circumvent this drawback by considering a low-complexity two-stage receiver structure in which a linear detector is first used to transform the MIMO channel into several parallel SISO channels and then a bank of conventional maximum likelihood (ML) SISO decoders is used to recover the transmitted source samples. Feeding back the SNR information of the equivalent SISO channels allows us to adapt the encoder parameters to the channel time-variations and attain a performance close to the theoretical bounds.

We then examine the feasibility of utilizing a decision feedback (DF) MIMO detector rather than linear detection as the first stage of our receiver. The detector now has a feedforward filter that transforms the MIMO channel into an equivalent lower triangular MIMO channel with unit diagonal entries. Spatial interference can then be sequentially eliminated with a feedback filter whose inputs are the decoded symbols from previous antennas. We will show that this non-linear receiver structure exhibits a superior performance with respect to the linear scheme.

The rest of this paper is organized as follows. Section 2 describes the basic principles of analog JSCC and its optimization over SISO channels. Section 3 focuses on analog JSCC over MIMO fading channels. Section 4 presents performance results for the different analog JSCC transmission techniques considered in this work. Two types of channels were considered: synthetic computer-generated Rayleigh fading channels and real indoor fading channels measured with a multiuser MIMO testbed. Finally, Section 5 is devoted to the conclusions.

2 Analog joint source-channel coding

Figure 1 shows the block diagram of a discrete-time analog-amplitude joint source-channel coded (JSCC) transmission system over an AWGN channel that performs N:1 bandwidth compression. As explained in the previous section, analog JSCC systems that reduce bandwidth are more interesting for wireless communications because they allow for a better usage of the radio spectrum.
Figure 1

Block diagram of a bandwidth reduction N:1 analog JSCC system with AWGN channel.

At the transmitter, N independent and identically distributed (i.i.d.) analog source symbols are packed into the source vector x = (x_1, x_2, …, x_N) and compressed into one channel symbol s. The encoding procedure has two steps: the compression function M_δ(·) and the matching function T_α(·).

As explained in [8], Shannon-Kotel’nikov mappings can be used to define compression functions M_δ(·) that map the N source symbols into a single value θ̂. As an example, a particular type of parameterized space-filling continuous curves, called spiral-like curves, can be used to encode the source samples. These curves were proposed for the transmission of Gaussian sources over AWGN channels by Chung and Ramstad [5–7]. For the case of 2:1 compression (i.e., N=2), they are formally defined as

$$\mathbf{z}_\delta(\theta) = \operatorname{sign}(\theta)\left(\frac{\delta}{\pi}\,\theta\sin\theta,\ \frac{\delta}{\pi}\,\theta\cos\theta\right), \qquad (1)$$

where θ is the angle from the origin to the point z = (z_1, z_2) on the curve and δ is the analog JSCC parameter that determines the distance between two neighboring spiral arms. In the ensuing section, we explain that the encoder parameter δ should be optimized if the optimal cost-distortion tradeoff is to be approached.
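As an illustration, the mapping in (1) can be evaluated numerically; the following is a minimal NumPy sketch (the function name `spiral` is ours, not from the paper):

```python
import numpy as np

def spiral(theta, delta):
    """Point z_delta(theta) on the spiral-like curve of Eq. (1).

    theta can be a scalar or an array; delta sets the spacing between
    neighboring spiral arms.
    """
    theta = np.asarray(theta, dtype=float)
    s = np.sign(theta)
    return np.stack([s * delta / np.pi * theta * np.sin(theta),
                     s * delta / np.pi * theta * np.cos(theta)], axis=-1)
```

For instance, `spiral(np.pi/2, 1.0)` evaluates to the point (0.5, 0); the sign factor mirrors negative angles into the second spiral branch.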

Although other non-linear continuous mappings can be used, spiral-like curves are frequently utilized for bandwidth reduction in analog JSCC because they can be interpreted as a parametric approximation to power-constrained channel-optimized vector quantization (PCCOVQ)[14]. Indeed, when connecting the adjacent vectors in a PCCOVQ codebook, we obtain a non-linear continuous curve that, for moderate to high SNR, is very similar to the spiral-like curve defined before.

Once a spiral-like curve has been selected, the compression function M_δ(·) provides the value θ̂ corresponding to the point on the spiral that minimizes the distance to x, i.e.,

$$\hat{\theta} = M_\delta(\mathbf{x}) = \arg\min_\theta \left\|\mathbf{x} - \mathbf{z}_\delta(\theta)\right\|^2.$$

Therefore, each pair of analog source samples, x_1 and x_2, that corresponds to a specific point in ℝ² is represented by a value θ̂ that corresponds to the point on the spiral closest to x. It is possible to achieve higher compression rates (i.e., N>2) by extending (1) to generate more complex curves [15, 16].
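In practice, M_δ(·) can be approximated by a dense grid search over θ, as in this sketch (the grid range and resolution are our own assumptions, not values from the paper):

```python
import numpy as np

def encode_theta(x, delta, theta_max=30.0, grid_points=200001):
    """Approximate M_delta(x): the angle whose spiral point is closest to x."""
    thetas = np.linspace(-theta_max, theta_max, grid_points)
    s = np.sign(thetas)
    # All candidate points on the spiral-like curve of Eq. (1)
    pts = np.stack([s * delta / np.pi * thetas * np.sin(thetas),
                    s * delta / np.pi * thetas * np.cos(thetas)], axis=-1)
    d2 = np.sum((pts - np.asarray(x, dtype=float)) ** 2, axis=-1)
    return thetas[np.argmin(d2)]
```

A point lying exactly on the spiral is mapped back to (approximately) its generating angle, e.g. (0.5, 0) with δ=1 yields θ̂ ≈ π/2.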

Next, we use the invertible function T_α(·) to transform the compressed samples. In [5, 7, 8], the invertible function

$$T_\alpha(\theta) = \operatorname{sign}(\theta)\,|\theta|^\alpha$$

with α=2 was proposed. However, as shown in[10], the system performance can be improved if α is optimized together with δ. We have empirically determined through computer simulations that using α=1.3 provides a good overall performance for 2:1 analog JSCC systems over AWGN channels and a wide range of SNR and δ values.

Finally, the coded value is normalized by γ to ensure the average transmitted power is equal to one. Thus, the symbol sent over the channel is constructed as
$$s = \frac{T_\alpha\!\left(M_\delta(\mathbf{x})\right)}{\gamma}.$$
Assuming an AWGN channel, the received symbol is
y = s + n ,

where n ∼ 𝒩(0, N_0) is a real-valued zero-mean Gaussian random variable that represents the channel noise with variance N_0. Notice that, since the power of the channel symbols is normalized, the SNR is 1/N_0.
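Putting the pieces together, the transmitter side can be sketched as follows: apply T_α, estimate the power-normalization factor γ empirically, and add AWGN. This is a hypothetical sketch; the θ values are drawn at random here purely to illustrate the normalization step, whereas in a real system they come from M_δ(·).

```python
import numpy as np

rng = np.random.default_rng(0)

def t_alpha(theta, alpha=1.3):
    """Invertible matching function T_alpha."""
    return np.sign(theta) * np.abs(theta) ** alpha

# Stand-in compressed values theta (in a real system: theta = M_delta(x))
theta = rng.normal(0.0, 3.0, size=100000)
coded = t_alpha(theta)
gamma = np.sqrt(np.mean(coded ** 2))   # empirical power-normalization factor
s = coded / gamma                      # unit average transmit power

N0 = 0.1                               # noise variance; SNR = 1/N0
y = s + rng.normal(scale=np.sqrt(N0), size=s.shape)  # AWGN channel
```

By construction the transmitted sequence has unit average power, so the SNR is 1/N_0 as stated in the text.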

At the receiver, we calculate an estimate x̂ of the transmitted source symbols given the noisy observation y. Previous work [5, 6, 8] considers ML decoding to recover the source symbols from the received symbols. ML decoding exhibits a very low complexity, but it presents a poor performance at low SNRs. This drawback is addressed in [10], where minimum mean square error (MMSE) analog JSCC decoding is proposed as an alternative to ML. When MMSE decoding is employed, the analog system attains a performance close to the optimal distortion-cost tradeoff in the whole SNR region. Unfortunately, it leads to a significant increase in the overall complexity at the receiver, since MMSE estimates are obtained after solving an integral that can only be calculated numerically.

2.1 Two-stage approximation to analog JSCC decoding

Rather than directly decoding the source samples x from the received symbol y, as in standard ML and MMSE decoding methods, it is possible to use an alternative two-stage decoding approach in which we first calculate an estimate of the transmitted channel symbols ŝ and then decode the source samples from this symbol estimate[17]. In the case of AWGN channels, the linear MMSE estimate of the channel symbols s is given by
$$\hat{s} = \frac{y}{1+N_0}.$$
Then, ML decoding is used to obtain an estimate of the analog source symbol from ŝ
$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}\in\text{curve}} p(\hat{s}\,|\,\mathbf{x}) = \mathbf{z}_\delta\!\left(\operatorname{sign}(\hat{s})\,|\gamma\hat{s}|^{1/\alpha}\right). \qquad (6)$$

Notice that the complexity of our two-stage receiver is the same as that of the ML receiver, since the estimation of the input channel symbol reduces to a simple factor normalization. This factor normalization is key for the ML decoder to approach the optimal cost-distortion tradeoff at low SNR values.
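The two-stage decoder above can be sketched in a few lines: scale the observation by the linear MMSE factor, invert T_α analytically, and read off the corresponding spiral point. The function name and the closed-form inversion are ours, under the assumptions stated in the text.

```python
import numpy as np

def two_stage_decode(y, delta, gamma, N0, alpha=1.3):
    """Linear MMSE symbol estimate followed by ML decoding, as in (6)."""
    s_hat = y / (1.0 + N0)                  # linear MMSE estimate of s
    # Invert T_alpha: theta_hat = sign(s_hat) * |gamma * s_hat|^(1/alpha)
    theta_hat = np.sign(s_hat) * np.abs(gamma * s_hat) ** (1.0 / alpha)
    sg = np.sign(theta_hat)
    # Point on the spiral corresponding to theta_hat
    return np.stack([sg * delta / np.pi * theta_hat * np.sin(theta_hat),
                     sg * delta / np.pi * theta_hat * np.cos(theta_hat)], axis=-1)
```

In the noiseless case (N_0 = 0) the chain is exact: encoding θ = π/2 with γ = 1 and decoding recovers the spiral point (0.5, 0).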

It is interesting to note that the idea of introducing a linear MMSE estimator prior to ML decoding in digital communications has been analyzed in[18]. In this case, it is shown that MMSE estimation is instrumental for achieving the capacity of AWGN channels when using lattice-type coding.

The previously described two-stage decoding approach can be readily extended to analog JSCC over SISO Rayleigh channels. Under the assumption of a flat fading channel, the symbols observed at the output of a SISO Rayleigh channel can be expressed as
y = hs + n ,
where h is a random variable that represents the fading channel response. In the case of Rayleigh fading channels, h is a complex-valued zero-mean circularly symmetric Gaussian i.i.d. random variable. Now, the SNR fluctuates with the channel response h. Assuming normalized channel symbols, the SNR in fading channels is given by
$$\mathrm{SNR}(h) = \frac{|h|^2}{N_0}.$$

If the fading channel is normalized so that E[|h|²] = 1, the average SNR is 1/N_0.

Assuming this channel model, the linear MMSE estimate of the transmitted symbol s is given by
$$\hat{s} = \frac{h^{*}\, y}{|h|^2 + N_0},$$

where the super-index * represents complex conjugation. This symbol estimate ŝ can then be decoded using (6).
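In code, the per-symbol MMSE estimate over a fading channel is a one-liner (a sketch with our own naming):

```python
import numpy as np

def mmse_estimate(y, h, N0):
    """Linear MMSE estimate of a unit-power symbol s from y = h*s + n."""
    return np.conj(h) * y / (np.abs(h) ** 2 + N0)
```

As N_0 → 0 the estimate tends to y/h (the zero-forcing solution), while for large N_0 it shrinks toward zero, which is the usual MMSE behavior.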

2.2 Code optimization

The performance of analog JSCC systems is measured in terms of the source signal-to-distortion ratio (SDR) with respect to the SNR. The distortion is the mean square error (MSE) between the decoded and source analog symbols, i.e.,
$$\mathrm{MSE} = \frac{1}{N}\,\mathrm{E}\!\left[\left\|\mathbf{x}-\hat{\mathbf{x}}\right\|^2\right]. \qquad (10)$$

Thus, denoting the source signal variance as σ_x², the SDR is calculated as SDR = σ_x²/MSE.

System performance not only depends on the SNR but also on the non-linear encoder mapping. In the case of spiral-like curves, the way they fill the multidimensional source space depends on the parameter δ that determines the separation between spiral arms. When considering ML decoding, high SNR, and α=2, it is possible to obtain an analytic expression for the optimal value of the analog encoder parameter δ [8]. When α≠2, however, the analytical optimization of δ is not feasible. Instead, δ can be numerically optimized by computing the SDR for each SNR over a wide range of values for δ. As an example, Table 1 shows the best values of δ that were found via computer simulations for the 2:1 compression of a Gaussian source with σ_x² = 1, using α=1.3 and for different SNR values. Optimum δ values were determined by exhaustive search over the range 0<δ<10 using a 0.1 step-size. A similar optimization procedure can be followed to determine the optimum analog encoder parameters as a function of the SNR for larger reduction ratios (i.e., N>2).
Table 1

Optimal values for δ as a function of the SNR (dB)
In the case of fading channels, the fact that the transmitter has to select the optimal encoder parameter δ prior to transmission implies that δ has to be continuously updated according to the instantaneous SNR, which depends on h. In a practical setup, the SNR can be estimated at the receiver and sent to the transmitter over a feedback channel. The rate at which the SNR is to be updated depends on the channel coherence time and must be supported by the feedback channel. The feedback channel delay should also be smaller than the channel coherence time for the available SNR to be an adequate prediction of the actual SNR. Once the SNR is available, the transmitter can select the optimal analog encoder using a look-up table such as Table 1, which was obtained for a 2:1 bandwidth reduction system.
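The numerical optimization of δ described in this section can be reproduced with a small Monte Carlo sweep. The sketch below is ours and deliberately crude (coarse θ grid, few samples, ML-style decoding only), so the resulting values are illustrative rather than those of Table 1:

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHA = 1.3

def spiral_points(theta, delta):
    s = np.sign(theta)
    return np.stack([s * delta / np.pi * theta * np.sin(theta),
                     s * delta / np.pi * theta * np.cos(theta)], axis=-1)

def sdr_for_delta(delta, snr_db, n=500):
    """SDR (dB) of a 2:1 analog JSCC link over AWGN for one delta value."""
    x = rng.normal(size=(n, 2))                      # unit-variance Gaussian pairs
    thetas = np.linspace(-20.0, 20.0, 4001)
    pts = spiral_points(thetas, delta)
    idx = np.argmin(((x[:, None, :] - pts[None, :, :]) ** 2).sum(-1), axis=1)
    theta = thetas[idx]                              # M_delta via grid search
    coded = np.sign(theta) * np.abs(theta) ** ALPHA
    gamma = np.sqrt(np.mean(coded ** 2))
    s = coded / gamma                                # normalized channel symbols
    N0 = 10.0 ** (-snr_db / 10.0)
    y = s + rng.normal(scale=np.sqrt(N0), size=n)    # AWGN channel
    s_hat = y / (1.0 + N0)                           # two-stage decoding
    th_hat = np.sign(s_hat) * np.abs(gamma * s_hat) ** (1.0 / ALPHA)
    x_hat = spiral_points(th_hat, delta)
    return 10.0 * np.log10(1.0 / np.mean((x - x_hat) ** 2))

# Sweep a few candidate delta values at one SNR and keep the best
deltas = [0.5, 1.0, 2.0, 4.0]
best_delta = max(deltas, key=lambda d: sdr_for_delta(d, snr_db=15.0))
```

Repeating the sweep over a grid of SNR values produces a look-up table in the spirit of Table 1.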

3 Analog JSCC over MIMO channels

In this section, we focus on analog JSCC over MIMO wireless channels with n_T transmit and n_R ≥ n_T receive antennas. We assume the source symbols are spatially multiplexed over the n_T transmit antennas. At each transmit antenna, a vector x_i of N analog source symbols is encoded into the channel symbol s_i, i = 1, …, n_T, using the encoder described in Section 2. Notice that the bandwidth reduction in this system setup is equal to Nn_T, and significant bandwidth reductions can be achieved when using multiple transmit antennas.

We assume channel symbols are sent over a frequency-flat MIMO fading channel represented by an n_R × n_T channel matrix H whose entries h_{ij} are random variables. The observed symbols at the MIMO channel output are given by
y = Hs + n ,
where s, y, and n are the vectors that represent the channel symbols, the received symbols, and the additive thermal noise, respectively. The thermal noise is assumed to be complex-valued zero-mean circularly symmetric Gaussian and spatially white, i.e., the noise covariance matrix is
$$\mathbf{C}_n = \mathrm{E}\!\left[\mathbf{n}\mathbf{n}^H\right] = N_0\,\mathbf{I}_{n_R},$$

with I_{n_R} being the n_R-dimensional identity matrix. Channel symbols are also spatially white and normalized so that the radiated power at each antenna is 1/n_T (i.e., the total radiated power is one). Thus, the covariance matrix of the channel symbols is

$$\mathbf{C}_s = \mathrm{E}\!\left[\mathbf{s}\mathbf{s}^H\right] = \frac{1}{n_T}\,\mathbf{I}_{n_T}, \qquad (13)$$
with I_{n_T} being the n_T-dimensional identity matrix. Hence, the MIMO SNR is

$$\mathrm{SNR}(\mathbf{H}) = \frac{\operatorname{tr}\!\left(\mathbf{H}\mathbf{C}_s\mathbf{H}^H\right)}{\operatorname{tr}\!\left(\mathbf{C}_n\right)} = \frac{\operatorname{tr}\!\left(\mathbf{H}\mathbf{H}^H\right)}{n_T\, n_R\, N_0},$$

where tr(·) denotes the trace operator and the super-index H represents conjugate transposition. If the MIMO fading channels are normalized so that E_H[tr(H H^H)] = n_T n_R, the average SNR is 1/N_0.
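The SNR definition and the normalizations above can be checked numerically with a short sketch (antenna counts and N_0 are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
nT, nR, N0 = 2, 2, 0.1

# Spatially white Rayleigh channel with E[tr(H H^H)] = nT * nR
H = (rng.normal(size=(nR, nT)) + 1j * rng.normal(size=(nR, nT))) / np.sqrt(2)

Cs = np.eye(nT) / nT              # channel-symbol covariance, Eq. (13)
Cn = N0 * np.eye(nR)              # spatially white noise covariance
snr = np.trace(H @ Cs @ H.conj().T).real / np.trace(Cn).real
snr_alt = np.trace(H @ H.conj().T).real / (nT * nR * N0)  # same quantity
```

Both expressions coincide, and averaging `snr` over many channel draws converges to 1/N_0 under the stated normalization.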

The optimum MMSE receiver [10] for this channel model would be given by

$$\hat{\mathbf{x}}_{\mathrm{MMSE}} = \mathrm{E}\left[\mathbf{x}\,|\,\mathbf{y}\right] = \int \mathbf{x}\, p(\mathbf{x}|\mathbf{y})\, d\mathbf{x} = \frac{1}{p(\mathbf{y})} \int \mathbf{x}\, p(\mathbf{y}|\mathbf{x})\, p(\mathbf{x})\, d\mathbf{x}, \qquad (14)$$

where the mapping function M_δ(·) is used to obtain the conditional probability p(y|x) and x represents the vector with the Nn_T source samples. Notice that the integral in (14) can only be calculated numerically because M_δ(·) is discontinuous and highly non-linear. The complexity of such a detector would be extremely high even for a small number of antennas, since it would involve the discretization of an Nn_T-dimensional space.

As an alternative, we can extend the two-stage receiver described for SISO channels to the MIMO case. Instead of directly calculating an MMSE estimate of the source symbols, we first estimate the channel symbols transmitted from each antenna using a conventional MIMO detector and then decode the estimated channel symbols. In this work, we study two suboptimal MIMO detectors: the MMSE linear detector and the MMSE DF detector with ordered detection. The basic premise of these two detectors is to spatially filter the observations to cancel the spatial interference and thus transform the MIMO channel into n_T parallel SISO channels. This way, the analog JSCC encoding and decoding procedures described in the previous section can be applied.

3.1 MIMO analog JSCC decoding with MMSE linear detection

Figure 2 shows the block diagram of an analog JSCC MIMO transmission system with MMSE linear detection. The MMSE filter that minimizes the mean squared error between the channel symbol vector s and the estimated symbol vector ŝ = Wy is given by

$$\mathbf{W}_{\mathrm{MMSE}} = \left(\mathbf{H}^H\mathbf{H} + n_T N_0\,\mathbf{I}_{n_T}\right)^{-1}\mathbf{H}^H.$$
Figure 2

Analog MIMO system with MMSE linear detection.

Notice that, contrary to zero-forcing linear detection, W_MMSE does not completely cancel the spatial interference of the MIMO channel, i.e., at ŝ_i, the desired symbol s_i is corrupted by thermal noise and residual spatial interference from the symbols transmitted through the other antennas. Considering this residual spatial interference as Gaussian noise that adds to the thermal noise, it is shown in [19] that the equivalent SNR at each output of the MMSE linear receiver can be expressed as

$$\mathrm{SNR}_i = \frac{\mu_i^2}{\mu_i - \mu_i^2} = \frac{\mu_i}{1-\mu_i}, \qquad i = 1, \ldots, n_T, \qquad (16)$$

where μ_i = [W_MMSE H]_{ii}. Thus, the equivalent channel that comprises the concatenation of the MIMO channel and the MIMO linear MMSE receiver can be interpreted as a set of SISO parallel channels, each with an equivalent SNR given by (16). Each entry of the estimated symbol vector can be decoded independently using the analog JSCC decoding approach explained in Section 2.
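A minimal sketch of this MMSE linear detection stage, returning both the symbol estimates and the per-stream equivalent SNRs of (16) (the function name is ours):

```python
import numpy as np

def mmse_linear_detect(H, y, N0):
    """Linear MMSE MIMO detection with per-stream equivalent SNRs, Eq. (16)."""
    nT = H.shape[1]
    # W = (H^H H + nT*N0*I)^(-1) H^H, computed via a linear solve
    W = np.linalg.solve(H.conj().T @ H + nT * N0 * np.eye(nT), H.conj().T)
    s_hat = W @ y
    mu = np.real(np.diag(W @ H))          # mu_i = [W H]_ii
    snr_i = mu / (1.0 - mu)               # equivalent per-stream SNR
    return s_hat, snr_i
```

For the trivial channel H = I with n_T = 2 and N_0 = 0.1, each stream carries power 1/n_T, so the per-stream SNR is 1/(n_T N_0) = 5, which (16) reproduces.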

Similarly to the SISO case, we assume that our MIMO system setup is equipped with a feedback channel: the SNR_i values are estimated at reception and sent back to the transmitter (see Figure 2). This way, we can approach the optimal cost-distortion tradeoff because we can continuously adapt the analog JSCC encoder parameters δ_i. Again, a look-up table such as Table 1 can be used to select the optimal analog encoders for a 2:1 bandwidth reduction system.

It should be noticed that linear MMSE detection is optimum only when the channel symbols are Gaussian, which is not the case here. Indeed, even when the sources are Gaussian, the analog JSCC encoder is a non-linear transformation that produces non-Gaussian channel symbols. It is possible to formulate the optimum non-linear MMSE detector, but this requires knowledge of the channel symbol probability p(s) and the calculation of an integral similar to that in (14). Notice the extraordinary complexity of the optimum non-linear MMSE detector. On the one hand, p(s) has to be discretized and estimated using Monte Carlo methods, since it is not possible, in general, to find an analytical expression for p(s). When considering fading channels, obtaining p(s) is particularly difficult since it depends on the analog JSCC encoding parameters, which in turn change with the SNR. On the other hand, the non-linear MMSE detection integral has to be computed numerically, which requires a refined discretization of p(s) to approach optimality. In the ensuing subsection, we investigate the utilization of a non-linear receiving structure for analog JSCC over MIMO channels that, although suboptimal, outperforms linear MMSE detection while keeping complexity at a low level.

3.2 MIMO analog JSCC decoding with decision feedback detection

Figure 3 plots the block diagram of an analog JSCC MIMO transmission system with a DF receiver. Both the feedforward (FF) and feedback (FB) filters are optimized according to the MMSE criterion. The FF filter is obtained from the Cholesky factorization of

$$\mathbf{H}^H\mathbf{H} + n_T N_0\,\mathbf{I}_{n_T} = \mathbf{L}^H\boldsymbol{\Delta}\mathbf{L}, \qquad (17)$$

where L is an n_T × n_T unit lower triangular matrix and Δ is an n_T × n_T diagonal matrix. If we define the whitening filter B^H = Δ^{-1} L^{-H}, the FF filter is the product of the matched and whitening filters, i.e., W_{MMSE-DF} = B^H H^H. The overall response of the FF filter and the channel is

$$\mathbf{W}_{\mathrm{MMSE\text{-}DF}}\,\mathbf{H} = \mathbf{L} - n_T N_0\,\boldsymbol{\Delta}^{-1}\mathbf{L}^{-H}. \qquad (18)$$
Figure 3

MIMO analog JSCC system with decision feedback detection.

In order to simplify the derivation of the DF receiver, we will assume that there are no decoding errors. Under this assumption, the spatially causal component of the interference in (18) can be successively removed with the FB filter L − I_{n_T} without altering the noise statistics at the decoder inputs. An advantage of analog JSCC is that there is no delay in the encoding and re-encoding steps, which significantly simplifies the implementation of DF MIMO receivers.

Similarly to the case of linear detection, we assume the instantaneous SNR at the decoder inputs is known at the transmitter thanks to the existence of a feedback channel (see Figure 3). This allows the continuous update of the δ_i parameters following the look-up table (Table 1). It can be easily shown that Equation (16) is also valid to calculate the SNR value of each equivalent SISO channel with

$$\mu_i = \left[\mathbf{B}^H\mathbf{H}^H\mathbf{H} - \mathbf{L} + \mathbf{I}_{n_T}\right]_{ii} = \left[\mathbf{I}_{n_T} - n_T N_0\,\boldsymbol{\Delta}^{-1}\mathbf{L}^{-H}\right]_{ii}.$$
Finally, notice that the decoding ordering is important and significantly impacts the performance of DF MIMO receivers [20]. Nevertheless, contrary to [20], the optimum ordering in our case is the one that minimizes the MMSE at the decoder input, i.e.,

$$\mathrm{MMSE} = N_0 \operatorname{tr}\!\left(\boldsymbol{\Delta}^{-1}\left(\mathbf{I}_{n_T} - N_0\,\mathbf{L}^{-H}\mathbf{L}^{-1}\boldsymbol{\Delta}^{-1}\right)\right) \approx N_0 \operatorname{tr}\!\left(\boldsymbol{\Delta}^{-1}\right), \qquad (19)$$

where the approximation holds when N_0 ≪ 1.

Ordering can be interpreted as a permutation of the columns of the MIMO channel matrix, i.e., H̄ = HP, where P is a permutation matrix. Thus, the optimum ordering is

$$\mathbf{P}_{\mathrm{opt}} = \arg\min_{\mathbf{P}}\; N_0 \operatorname{tr}\!\left(\bar{\boldsymbol{\Delta}}^{-1}\right),$$

where Δ̄ results from the Cholesky factorization of H̄^H H̄ + n_T N_0 I_{n_T}. This optimization problem can be readily solved by searching over the n_T! possible permutation matrices and selecting the one that minimizes the MMSE cost function (19). Computer simulations show that, in the low SNR regime, the same ordering results are obtained when considering either the exact or the approximate expression in (19).
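For small n_T, the exhaustive search over orderings is straightforward; the sketch below (our naming) reuses the reversed-Cholesky factorization and only needs the diagonal of Δ̄:

```python
import numpy as np
from itertools import permutations

def best_ordering(H, N0):
    """Exhaustive search minimizing the approximate MMSE N0*tr(Delta^-1) of (19)."""
    nT = H.shape[1]
    J = np.eye(nT)[::-1]
    best_p, best_cost = None, np.inf
    for p in permutations(range(nT)):
        Hp = H[:, list(p)]                    # permuted channel H P
        A = Hp.conj().T @ Hp + nT * N0 * np.eye(nT)
        # Squared Cholesky diagonal of the reversed matrix gives diag(Delta)
        d = np.real(np.diag(np.linalg.cholesky(J @ A @ J))) ** 2
        cost = N0 * np.sum(1.0 / d)           # N0 * tr(Delta^{-1})
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p, best_cost
```

For a symmetric channel such as H = I all orderings are equivalent and the cost reduces to N_0 n_T/(1 + n_T N_0); for ill-conditioned channels the ordering choice matters considerably more.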

4 Experimental results

In this section, we present the results of several computer experiments that were carried out to illustrate the performance of the proposed MIMO analog JSCC transmission methods. We considered two types of source distributions: Gaussian and Laplacian. These distributions are typically encountered in practical applications such as image transmission or compressive sensing. System performance is evaluated in terms of the SDR at reception for the given scenario and channel SNR. In the computer experiments, we generated 1,000,000 source symbols of a given distribution, simulated their transmission over the considered channel at a given channel SNR, and calculated an estimate of the MSE between the decoded symbols x̂ and the original symbols x (see (10)).

The optimal distortion-cost tradeoff is the maximum attainable SDR for a given SNR. In the literature, this theoretical limit is known as the optimum performance theoretically attainable (OPTA) [21]. The OPTA is calculated by applying R_c R(D) = C, that is, by equating the product of the number of source samples transmitted per channel use, R_c, and the rate-distortion function, R(D), to the channel capacity, C.

Expressing the rate-distortion function in terms of the SDR and the channel capacity in terms of the SNR, and since we are sending R_c = 2Nn_T source samples in each channel use, the OPTA is calculated as

$$2 N n_T\, R(\mathrm{SDR}) = C(\mathrm{SNR}).$$
For a generic n_T × n_R MIMO system, the channel capacity (expressed in nats per channel use) is given by

$$C(\mathrm{SNR}) = \mathrm{E}_{\mathbf{H}}\!\left[\log\det\left(\mathbf{I}_{n_R} + \frac{\mathrm{SNR}}{n_T}\,\mathbf{H}\mathbf{H}^H\right)\right],$$
where E_H[·] represents expectation with respect to H. On the other hand, the rate-distortion function depends on the considered distribution. If we assume the MSE as the measure of distortion, the rate-distortion function of a Gaussian source can be expressed as a function of the SDR as

$$R(\mathrm{SDR}) = \frac{1}{2}\log \mathrm{SDR}.$$
There is no closed-form expression for the rate-distortion function of a Laplacian source, but it can be approximated by the Shannon lower bound for the squared error distortion of an arbitrary distribution, given by [2]

$$R(D) \geq h(x) - \frac{1}{2}\log\left(2\pi e D\right), \qquad (24)$$
where D is the distortion and h(x) is the differential entropy of the source. The differential entropy of a Laplacian distribution with variance σ_x² is given by

$$h(x) = 1 + \frac{1}{2}\log\left(2\sigma_x^2\right),$$
which, substituted in (24), yields

$$R(\mathrm{SDR}) \geq \frac{1}{2}\log\left(\frac{e}{\pi}\,\mathrm{SDR}\right).$$
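Putting the OPTA relations together, the bound can be evaluated by Monte Carlo integration of the ergodic capacity. The sketch below is ours (trial counts and the use of natural logarithms throughout are our choices); for the Laplacian source it inverts the Shannon lower bound, so the result is an upper bound on the actual OPTA, as noted in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def opta_sdr_db(snr_db, nT=2, nR=2, N=2, trials=5000, laplacian=False):
    """OPTA SDR (dB) for N:1 compression over an nT x nR Rayleigh MIMO channel.

    Solves 2*N*nT*R(SDR) = C(SNR), working in nats throughout.
    """
    snr = 10.0 ** (snr_db / 10.0)
    Hs = (rng.normal(size=(trials, nR, nT))
          + 1j * rng.normal(size=(trials, nR, nT))) / np.sqrt(2)
    G = np.eye(nR) + (snr / nT) * Hs @ np.conj(np.transpose(Hs, (0, 2, 1)))
    C = np.mean(np.linalg.slogdet(G)[1])   # ergodic capacity in nats/channel use
    rate = C / (2 * N * nT)                # rate per source sample, R(SDR)
    # Invert the rate-distortion relation for each source distribution
    sdr = (np.pi / np.e) * np.exp(2 * rate) if laplacian else np.exp(2 * rate)
    return 10.0 * np.log10(sdr)
```

The Laplacian curve sits 10 log10(π/e) ≈ 0.6 dB above the Gaussian one, consistent with the OPTA being only an upper bound in that case.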

Two types of channels were considered during our computer experiments: computer-generated synthetic Rayleigh fading channels and real fading channels obtained after a measurement campaign carried out in a multiuser indoor scenario.

4.1 Synthetic MIMO Rayleigh channels

In this subsection, symmetric channels with n_T = n_R = 2 and 4 transmit and receive antennas were synthetically generated. In particular, we emulated ergodic spatially white MIMO Rayleigh fading channels whose entries h_{ij} are realizations of complex-valued zero-mean circularly symmetric Gaussian i.i.d. random variables.

Figures 4 and 5 show the performance results obtained for a 2:1 bandwidth reduction analog JSCC system over a 2×2 MIMO Rayleigh channel with Gaussian and Laplacian source symbols, respectively. The SDR versus SNR performance curves for the two analog JSCC MIMO receivers described in Section 3, together with the OPTA, are plotted in each figure. Notice that the plotted OPTA is an upper bound of the actual performance limit in the case of Laplacian sources. It can be seen that, for Gaussian sources and high SNR values, the SDR obtained with DF MIMO receivers is 2 dB below the OPTA while the SDR obtained with linear MMSE MIMO receivers is 3 dB below the OPTA, i.e., DF MIMO receivers produce a distortion that is 1 dB better than that obtained with linear MIMO receivers. For Laplacian sources, DF MIMO receivers perform only slightly better than linear MIMO receivers. The SDR distance to the OPTA for high SNR values is about 2.5 dB in both cases.
Figure 4

Performance of 2:1 analog JSCC over 2×2 MIMO Rayleigh channels with Gaussian sources.

Figure 5

Performance of 2:1 analog JSCC over 2×2 MIMO Rayleigh channels with Laplacian sources.

Performance differences between DF and linear MIMO receivers become significantly larger as the number of transmit and receive antennas increases. Figures 6 and 7 show the performance results obtained for a 2:1 bandwidth reduction analog JSCC system over a 4×4 MIMO Rayleigh channel with Gaussian and Laplacian source symbols, respectively. Notice the similarity between the OPTA curves in Figures 4 and 5 (2×2) and Figures 6 and 7 (4×4), caused by the MIMO fading channel normalization E_H[tr(H H^H)] = n_T n_R and the channel symbol covariance matrix normalization given by (13). It can be seen that for Gaussian sources and high SNRs, the SDR obtained with DF MIMO receivers is 2 dB below the OPTA while this difference is 4 dB for linear receivers, i.e., DF MIMO receivers produce a distortion that is 2 dB better than that obtained with linear MIMO receivers. For Laplacian sources, the performance of the proposed MIMO receivers is worse than that with Gaussian sources: the distortion obtained with DF and linear MIMO receivers is 3 dB and 4.2 dB below the OPTA, respectively. Yet, DF MIMO receivers clearly outperform linear MIMO receivers, yielding a 1.2 dB better SDR. Similarly to the digital case, the superior performance of DF MIMO receivers is due to the non-linear decoding and re-encoding operations carried out during the decision feedback stage.
Figure 6

Performance of 2:1 analog JSCC over 4×4 MIMO Rayleigh channels with Gaussian sources.

Figure 7

Performance of 2:1 analog JSCC over 4×4 MIMO Rayleigh channels with Laplacian sources.

4.2 Real indoor channels

In order to get a more complete assessment of the analog JSCC scheme considered in this work, we carried out a series of computer experiments considering real channels measured in an indoor scenario. Within the COMONSENS project, a wireless network hardware demonstrator was constructed for the practical evaluation of multiuser multiantenna transmission techniques. The testbed was jointly designed and implemented by two research groups from the University of Cantabria (UC) and the University of A Coruña (UDC) in Spain. The testbed consists of three transmit and three receive nodes, each equipped with MIMO capabilities. For a detailed description of the COMONSENS multiuser MIMO testbed, see the COMONSENS project web page.

Figure 8 shows a picture of the setup, where the locations of the different transmitters and receivers can be clearly appreciated. A total of 5,844 2×2 MIMO channel realizations were obtained from the measurement campaign. They are freely available to the research community and can be downloaded from the COMONSENS project web page, which also provides a detailed description of the setup and of the measurement campaign from which the real channels were obtained.
Figure 8

Picture of the real indoor scenario setup during measurement campaign.

Figures 9 and 10 plot the performance results for the proposed scheme when transmission is performed over real measured indoor 2×2 MIMO channels with Gaussian and Laplacian sources, respectively. The resulting distortion when DF detection is applied is within 1 dB of the OPTA for Gaussian sources, and within 2 dB for Laplacian sources. Contrary to the synthetic case, a significant performance improvement (2 dB) is obtained with either Gaussian or Laplacian sources when using DF MIMO detection rather than linear MIMO detection. This is due to differences in the eigenvalue spread (i.e., the ratio between the largest and the smallest eigenvalue) between the synthetic and the measured channels. As in the digital case, the advantage of DF over linear MIMO receivers becomes more apparent as the channel eigenvalue spread increases. This spread is quite small in the synthetic channels, whereas it is large in a significant number of the measured channels.
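As a concrete illustration of this quantity, the sketch below computes the eigenvalue spread of H^H H for i.i.d. Rayleigh draws and for a nearly rank-deficient channel. The matrices here are illustrative stand-ins, not the measured channels; strong spatial correlation indoors can produce similarly ill-conditioned realizations:

```python
import numpy as np

def eigenvalue_spread(H):
    """Ratio of the largest to the smallest eigenvalue of H^H H."""
    eig = np.linalg.eigvalsh(H.conj().T @ H)  # ascending order
    return eig[-1] / eig[0]

rng = np.random.default_rng(1)

# Synthetic i.i.d. Rayleigh 2x2 channels
spreads = [eigenvalue_spread((rng.standard_normal((2, 2))
                              + 1j * rng.standard_normal((2, 2))) / np.sqrt(2))
           for _ in range(10000)]
print(f"median spread, i.i.d. Rayleigh 2x2: {np.median(spreads):.1f}")

# A nearly rank-deficient channel (highly correlated columns)
H_bad = np.array([[1.0, 0.95], [0.95, 1.0]], dtype=complex)
print(f"spread of an ill-conditioned channel: {eigenvalue_spread(H_bad):.1f}")
```

For a well-conditioned channel the two subchannels carry comparable gains and linear detection loses little; when the spread is large, the weak subchannel dominates the MMSE error, and the successive cancellation in the DF receiver pays off.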
Figure 9

Performance of 2:1 analog JSCC over indoor measured 2×2 MIMO channels with Gaussian sources.

Figure 10

Performance of 2:1 analog JSCC over indoor measured 2×2 MIMO channels with Laplacian sources.

5 Conclusions

Analog JSCC of discrete-time, continuous-amplitude sources over MIMO wireless channels has been investigated. Since directly recovering the analog source information from the MIMO channel output is not feasible due to complexity constraints, we proposed the utilization of two-stage receivers that separately perform detection and analog JSCC decoding. We considered analog JSCC MIMO receivers that utilize either linear MMSE or DF MIMO detection. Computer experiments were carried out to illustrate the ability of the proposed analog JSCC MIMO receivers to approach the optimal distortion-cost tradeoff. Both synthetic computer-generated Rayleigh fading channels and real indoor measured wireless channels were considered. Particularly remarkable is the performance of the analog JSCC DF MIMO receivers, which attain distortion values within 3 dB of the OPTA over all considered channels for both Gaussian and Laplacian sources.



This work has been funded by Xunta de Galicia, MINECO of Spain, and FEDER funds of the EU under grants 2012/287, TEC2010-19545-C04-01, and CSD2008-00010; and by NSF award CIF-0915800.

Authors’ Affiliations

Department of Electronics and Systems, University of A Coruña
Department of Electrical and Computer Engineering, University of Delaware


  1. Shannon CE: A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27: 379-423.
  2. Berger T: Rate Distortion Theory: A Mathematical Basis for Data Compression. Englewood Cliffs: Prentice Hall; 1971.
  3. Goblick TJ: Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inf. Theory 1965, 11(4): 558-567. doi:10.1109/TIT.1965.1053821
  4. Gastpar M, Rimoldi B, Vetterli M: To code, or not to code: lossy source-channel communication revisited. IEEE Trans. Inf. Theory 2003, 49(5): 1147-1158. doi:10.1109/TIT.2003.810631
  5. Chung SY: On the construction of some capacity-approaching coding schemes. Ph.D. dissertation, Dept. EECS, Massachusetts Institute of Technology, 2000.
  6. Ramstad TA: Shannon mappings for robust communication. Telektronikk 2002, 98: 114-128.
  7. Hekland F, Oien GE, Ramstad TA: Using 2:1 Shannon mapping for joint source-channel coding. In Proceedings of the 2005 Data Compression Conf. (DCC). Snowbird, UT; 2005: 223-232.
  8. Hekland F, Floor PA, Ramstad TA: Shannon-Kotelnikov mappings in joint source-channel coding. IEEE Trans. Commun. 2009, 57: 95-104.
  9. Rüngeler M, Schotsch B, Vary P: Improved decoding of Shannon-Kotel'nikov mappings. In Proceedings of the 2010 Int. Symp. on Information Theory and Its Applications. Taichung; 2010: 633-638.
  10. Hu Y, Garcia-Frias J, Lamarca M: Analog joint source-channel coding using non-linear curves and MMSE decoding. IEEE Trans. Commun. 2011, 59(11): 3016-3026.
  11. Brante G, Souza R, Garcia-Frias J: Analog joint source-channel coding in Rayleigh fading channels. In Proceedings of the 2011 Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP). Prague; 2011: 3148-3151.
  12. Brante G, Souza R, Garcia-Frias J: Spatial diversity using analog joint source channel coding in wireless channels. IEEE Trans. Commun. 2013, 61: 301-311.
  13. Garcia-Naya J, Fresnedo O, Vazquez-Araujo F, Gonzalez-Lopez M, Castedo L, Garcia-Frias J: Experimental evaluation of analog joint source-channel coding in indoor environments. In Proceedings of the 2011 Int. Conf. Communications (ICC). Kyoto; 2011.
  14. Fuldseth A, Ramstad TA: Bandwidth compression for continuous-amplitude channels based on vector approximation to a continuous subset of the source signal space. In Proceedings of the ICASSP 1997. Munich; 1997.
  15. Floor PA, Ramstad TA: Dimension reducing mappings in joint source-channel coding. In Proceedings of the 2006 Nordic Signal Processing Symposium (NORSIG). Reykjavik; 2006.
  16. Floor PA: On the theory of Shannon-Kotelnikov mappings in joint source-channel coding. Ph.D. dissertation, Norwegian University of Science and Technology, 2008.
  17. Fresnedo O, Vazquez-Araujo FJ, Castedo L, Garcia-Frias J: Low-complexity near-optimal decoding for analog joint source channel coding using space-filling curves. IEEE Commun. Lett. 2013, 17(4): 745-748.
  18. Forney GD Jr: On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener. In Proceedings of the 41st Allerton Conf. Communication, Control, and Computing. Monticello, IL; 2003.
  19. Wang X, Poor HV: Iterative (turbo) soft interference cancellation and decoding for coded CDMA. IEEE Trans. Commun. 1999, 47(7): 1046-1061. doi:10.1109/26.774855
  20. Foschini G, Golden G, Valenzuela R, Wolniansky P: Simplified processing for high spectral efficiency wireless communication employing multi-element arrays. IEEE J. Sel. Areas Commun. 1999, 17(11): 1841-1852. doi:10.1109/49.806815
  21. Berger T, Tufts D: Optimum pulse amplitude modulation–I: transmitter-receiver design and bounds from information theory. IEEE Trans. Inf. Theory 1967, 13(2): 196-208.


© Vazquez-Araujo et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.