# Compressive cooperation for Gaussian half-duplex relay channel

- Xiao Lin Liu^{1} (Email author)
- Chong Luo^{2}
- Feng Wu^{2}

**2012**:227

https://doi.org/10.1186/1687-1499-2012-227

© Liu et al.; licensee Springer. 2012

**Received: **1 March 2012

**Accepted: **11 July 2012

**Published: **23 July 2012

## Abstract


Motivated by the compressive sensing (CS) theory and its close relationship with low-density parity-check code, we propose compressive transmission which utilizes CS as the channel code and directly transmits multi-level CS random projections through amplitude modulation. This article focuses on the compressive cooperation strategies in a relay channel. Four decode-and-forward (DF) strategies, namely receiver diversity, code diversity, successive decoding and concatenated decoding, are analyzed and their achievable rates in a three-terminal half-duplex Gaussian relay channel are quantified. The comparison among the four schemes is made through both numerical calculation and simulation experiments. In addition, we compare compressive cooperation with a separate source channel coding scheme for transmitting sparse sources. Results show that compressive cooperation has great potential in both transmission efficiency and its adaptation capability to channel variations.


## Introduction

Compressive sensing (CS) [1, 2] is an emerging theory concerning the acquisition and recovery of sparse signals from a small number of random linear projections. Recently, it has been observed that CS is closely related to the well-known low-density parity-check (LDPC) channel codes [3, 4]. In particular, when the measurement matrix in CS is chosen to be the parity-check matrix of an LDPC code, the CS reconstruction algorithm proposed by Baron et al. [5] is almost identical to Luby’s LDPC decoding algorithm [6]. It is this similarity between CS and LDPC codes that inspires us to propose and study *compressive transmission*, which utilizes CS as the channel code and directly transmits multi-level CS random projections through amplitude modulation.

Since CS has both source compression and channel protection capabilities, it can be regarded as a joint source-channel code. When the data being transmitted are sparse or compressible, a conventional scheme first uses source coding to compress the data and then adopts channel coding to protect the compressed data over the lossy channel. Compressive transmission brings some unique advantages over such a conventional scheme. First, since CS uses random projections to generate measurements regardless of the compressibility pattern, it reduces complexity at the sender side. This benefits lightweight signal acquisition devices, such as the single-pixel camera [7] and sensor nodes. Second, it improves robustness. It is well known that compressed data are very sensitive to bit errors. In the conventional scheme, when the channel code is not strong enough to protect data in a suddenly deteriorated channel, the entire coding block or even the entire data sequence may become undecodable. In contrast, CS random projections operate directly over source bits, so sporadic bit errors do not affect the overall data quality.

In this article, we focus on cooperative strategies for compressive transmission in a relay channel, or *compressive cooperation*. We consider a three-terminal Gaussian relay channel consisting of the source, the relay and the destination. Such a relay channel was first introduced by van der Meulen [8] in 1971 and has attracted intense interest since then. However, most previous research on cooperative strategies is based on binary channel codes, such as LDPC codes [9–11] and turbo codes [12–14]. This article presents the first work that applies CS as the joint source-channel code in a relay channel. In particular, we present four decode-and-forward (DF) strategies, three of which resemble those for binary channel codes, while the fourth is peculiar to CS because it takes the arithmetic property of CS into account. We theoretically analyze the four strategies and quantify their achievable rates in a half-duplex Gaussian relay channel. Numerical studies and simulations show that all strategies except receiver diversity have high transmission efficiency and a small implementation gap, while the code diversity scheme has the most stable performance. We further compare compressive cooperation with a separate source-channel coding scheme, and the results show that using CS as a joint source-channel code has great potential in a relay channel.

The rest of the article is organized as follows. Section 2 describes the channel model. Section 3 overviews compressive cooperation and analyzes the information-rate bound in a half-duplex Gaussian relay channel. Section 4 studies four DF schemes and their respective achievable rates. Section 5 reports results of numerical studies and simulations. Section 6 concludes the article.

## Channel model

Consider a three-terminal relay channel whose source, relay, and destination are denoted by *S*, *R*, and *D*, respectively. Let the channel gains of the three direct links (*S*, *D*), (*S*, *R*), and (*R*, *D*) be *c*_{sd}, *c*_{sr}, and *c*_{rd}. In this work, the relay is located on the *S*−*D* line, equidistant from the source and the destination. Setting the attenuation exponent to 2, the channel gains are *c*_{sd}=1 and *c*_{sr}=*c*_{rd}=4. By half-duplex, we mean that the relay *R* cannot receive and transmit at the same time. Therefore, the channel is time-shared between broadcast (BC) mode and multiple access (MAC) mode, as depicted in Figure 1. Let *t* (0≤*t*≤1) denote the time proportion of BC mode; then 1−*t* is the time proportion of MAC mode.

In BC mode, the source transmits *x*_{1}, and both the relay and the destination listen. The received signals at the relay and the destination are *y*_{r} and ${y}_{{d}_{1}}$:

$${y}_{r}=\sqrt{{c}_{sr}}\,{x}_{1}+{z}_{r},\qquad {y}_{{d}_{1}}=\sqrt{{c}_{sd}}\,{x}_{1}+{z}_{{d}_{1}},$$

where *z*_{r} and ${z}_{{d}_{1}}$ are Gaussian noises perceived at *R* and *D*.

At the end of BC mode, the relay decodes the source message and re-encodes it into *w* based on its received signals. Then, in MAC mode, the source transmits *x*_{2} while the relay simultaneously transmits *w*. The destination receives the superposition of the two signals, which can be represented by:

$${y}_{{d}_{2}}=\sqrt{{c}_{sd}}\,{x}_{2}+\sqrt{{c}_{rd}}\,w+{z}_{{d}_{2}},$$

where ${z}_{{d}_{2}}$ is the Gaussian noise perceived at *D*. Finally, the destination *D* decodes the original message from the signals received during both BC and MAC modes.

We assume that the noise random variables *Z*_{r}, ${Z}_{{d}_{1}}$ and ${Z}_{{d}_{2}}$, corresponding to the noises *z*_{r}, ${z}_{{d}_{1}}$ and ${z}_{{d}_{2}}$, all have unit energy. The system resource can therefore be characterized by a single transmission energy budget *E*. Denote by ${E}_{{s}_{1}}$, ${E}_{{s}_{2}}$ and *E*_{r} the average symbol energies of the random variables *X*_{1}, *X*_{2} and *W*, which correspond to *x*_{1}, *x*_{2} and *w*, respectively. Then the system constraint can be described by the following inequality:

$$t{E}_{{s}_{1}}+(1-t)\left({E}_{{s}_{2}}+{E}_{r}\right)\le E. \qquad (4)$$

For later use, we also define the link SNRs ${P}_{s{d}_{1}}={c}_{sd}{E}_{{s}_{1}}$, ${P}_{sr}={c}_{sr}{E}_{{s}_{1}}$, and ${P}_{s{d}_{2}}={c}_{sd}{E}_{{s}_{2}}$. (5)
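The channel model above can be sketched in a few lines of Python. This is a hedged illustration, not part of the original system: the phase functions, symbol energies, and sample sizes are our own choices, and the energy check encodes the half-duplex time/energy budget under the stated unit-noise assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel gains: relay midway on the S-D line, attenuation exponent 2.
c_sd, c_sr, c_rd = 1.0, 4.0, 4.0

def bc_phase(x1):
    """BC mode: the source transmits x1; relay and destination both listen.

    Noise at each receiver is Gaussian with unit energy, so the received
    powers are c_sr * E_s1 + 1 at R and c_sd * E_s1 + 1 at D.
    """
    y_r = np.sqrt(c_sr) * x1 + rng.standard_normal(x1.shape)
    y_d1 = np.sqrt(c_sd) * x1 + rng.standard_normal(x1.shape)
    return y_r, y_d1

def mac_phase(x2, w):
    """MAC mode: source sends x2 while the relay sends w; D hears the sum."""
    return np.sqrt(c_sd) * x2 + np.sqrt(c_rd) * w + rng.standard_normal(x2.shape)

def energy_ok(t, E_s1, E_s2, E_r, E):
    """Half-duplex energy budget: t*E_s1 + (1-t)*(E_s2 + E_r) <= E."""
    return t * E_s1 + (1.0 - t) * (E_s2 + E_r) <= E
```

A quick Monte Carlo check of `bc_phase` confirms the received powers implied by the gains and the unit-energy noise.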

## Compressive transmission overview

### Compressive transmission in a relay channel

Consider a binary source that generates bits taking value 1 with probability *p* and value 0 with probability 1−*p*. When *p*≠0.5, the source is considered sparse or compressible. During transmission, source bits are segmented into length-*n* blocks. Let **u**=[*u*_{1}, *u*_{2}, …, *u*_{n}]^{T} be one source block. To transmit **u** over the relay channel, the source first generates CS measurements using a sparse Rademacher matrix with elements drawn from {0, 1, −1}, and transmits them in BC mode. The transmitted symbols, which consist of *m*_{1} measurements, can be represented by:

$${\mathbf{x}}_{1}={\alpha}_{{s}_{1}}{A}_{1}\mathbf{u},$$

where ${\alpha}_{{s}_{1}}$ is a power scaling parameter chosen to match the sender’s power constraint.

In MAC mode, the source generates another *m*_{2} measurements using an identical or different Rademacher matrix, which can be represented by:

$${\mathbf{x}}_{2}={\alpha}_{{s}_{2}}{A}_{2}\mathbf{u}.$$

Meanwhile, the relay, having decoded **u** at the end of BC mode, generates its own measurements of **u** and transmits them in MAC mode:

$$\mathbf{w}={\alpha}_{r}B\mathbf{u},$$

where *B* is also a Rademacher matrix, and **w** contains *m*_{2} measurements.

Under these power constraints, the corresponding scaling parameters ${\alpha}_{{s}_{1}}$, ${\alpha}_{{s}_{2}}$ and *α*_{r} can be derived, since the average powers of the symbols *A*_{1}**u**, *A*_{2}**u** and *B***u** are determined by the row weights of the corresponding sampling matrices and the sparsity probability of **u**.
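A minimal sketch of the measurement generation follows. The helper names are hypothetical, and the row weight and energy target are illustrative; the scaling mirrors the role of the *α* parameters above.

```python
import numpy as np

rng = np.random.default_rng(1)

def rademacher_matrix(m, n, row_weight):
    """Sparse sampling matrix with entries in {0, +1, -1} and a fixed
    number of nonzero entries per row."""
    A = np.zeros((m, n))
    for i in range(m):
        cols = rng.choice(n, size=row_weight, replace=False)
        A[i, cols] = rng.choice([1.0, -1.0], size=row_weight)
    return A

def scaled_measurements(A, u, E_symbol):
    """Measurements alpha * (A @ u), with alpha chosen so the average
    symbol energy matches the sender's budget E_symbol."""
    y = A @ u
    alpha = np.sqrt(E_symbol / np.mean(y ** 2))
    return alpha * y
```

The scaling is exact by construction: the transmitted symbols have average energy equal to the budget regardless of the source realization.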

Since *m*_{1} measurements are transmitted in BC mode and *m*_{2} measurements in MAC mode, the time proportion of BC mode can be calculated as:

$$t=\frac{{m}_{1}}{{m}_{1}+{m}_{2}},$$

and the achieved rate is:

$$R=\frac{H(\mathbf{u})}{{m}_{1}+{m}_{2}}, \qquad (11)$$

where *H*(**u**) is the entropy of **u**, and *m*_{1} and *m*_{2} determine the time slots consumed by BC mode and MAC mode, respectively. If the base of the logarithm in the entropy computation is 2, the rate *R* is expressed in bits per channel use. Note that the rate *R* in Equation (11) depends on the symbol energies ${E}_{{s}_{1}}$, ${E}_{{s}_{2}}$ and *E*_{r}: for compressive transmission over a single link, a larger transmission power yields higher-quality measurements, so fewer measurements are needed for source recovery and a higher rate is achieved.
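The time-sharing and rate bookkeeping can be checked numerically. This is a sketch with hypothetical helper names; it uses the fact that *H*(**u**) = *n·H*(*p*) for i.i.d. source bits.

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) source bit."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def bc_time_and_rate(n, p, m1, m2):
    """Time share t of BC mode and rate R in bits per channel use for a
    length-n Bernoulli(p) block sent as m1 + m2 measurements."""
    t = m1 / (m1 + m2)
    R = n * binary_entropy(p) / (m1 + m2)
    return t, R
```

For instance, a 0.1-sparse block of length 6000 sent as 3000 + 3000 measurements uses BC mode half the time and attains roughly 0.469 bits per channel use.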

In such a compressive transmission system, the encoding complexity is rather low because computing the measurements at the source node involves only sums and differences over a small subset of the source vector. The complexity of the belief-propagation-based decoding algorithm is *O*(*TMLQ* log(*Q*)) [5], where *L* is the average row weight, *Q* is the dimension of the messages passed in belief propagation, *T* is the number of iterations, and *M* is the number of received measurements.

### Information-rate bounds

The capacity of the half-duplex relay channel is upper-bounded by the cut-set bound [15]:

$$C=\sup \min\left\{tI({X}_{1};{Y}_{r}{Y}_{{d}_{1}})+(1-t)I({X}_{2};{Y}_{{d}_{2}}|W),\ tI({X}_{1};{Y}_{{d}_{1}})+(1-t)I({X}_{2}W;{Y}_{{d}_{2}})\right\}, \qquad (12)$$

where *I*(*X*;*Y*) represents the mutual information conveyed by a channel with input *X* and output *Y*. The supremum is taken over *t* (0≤*t*≤1) and all joint distributions *p*(*x*_{1}*x*_{2}*w*) subject to the alphabet constraints on *X*_{1}, *X*_{2} and *W*.

In order to approach the capacity, one more parameter, not explicitly shown in (12), also needs to be optimized: the correlation *r* between *X*_{2} and *W*, i.e., between the codewords sent by the source and the relay in MAC mode. The source and the relay send identical messages at one extreme (*r*=1), and entirely different messages at the other extreme (*r*=0). In the design of cooperative LDPC codes, it has been observed that the optimal achievable rate is well approximated by the better of the two cases *r*=0 and *r*=1 [9]. We make the same simplification and only consider *r*=0 and *r*=1. In these two extreme cases, several terms in (12) simplify: for *r*=1, *X*_{2} and *W* carry the same message, so their signals combine coherently at the destination; for *r*=0, *X*_{2} and *W* are independent.

The mutual information terms in (12) are determined by both channel signal-to-noise ratio (SNR) and the input alphabet. The input alphabet is determined by the modulation scheme in conventional transmission and jointly determined by the binary source and the measurement matrix in compressive transmission. It is impossible to compute a general information rate curve for compressive cooperation.

We compare the information rates of direct communication (*S*→*D*) and cooperative communication when the channel inputs at the source and the relay are the CS measurements described above. When the sensing matrix is fixed and the statistics of the source **u** are known, the distribution of the CS measurements can be calculated. Under the assumption that all measurements are independent, the mutual information terms in (12) are derived from the distribution of the channel input and the AWGN channel assumption. The comparison shows that cooperative communication achieves a higher rate than direct communication over a large range of channel SNRs. When the channel SNR is high, direct and cooperative communication saturate at the same rate, which is determined by the properties of the sparse signal **u**. The results also show that the non-sparse source (*p*=0.5) has a higher saturation rate than the sparse source (*p*=0.1).

The achievable rate of compressive transmission is a function of the channel SNR once the source statistics and the measurement matrix are given; for ease of presentation, we define two such functions. When all measurements come from the same channel with SNR *P*, the achievable rate is denoted by *R*(*P*). When measurements are received from different channels, the achievable rate is denoted by *R*((*γ*_{1},*P*_{1}),…,(*γ*_{k},*P*_{k})), where *k* is the number of channel realizations, and *γ*_{i} and *P*_{i} (1≤*i*≤*k*) are the time proportion and SNR of the *i*-th channel realization.

## Compressive cooperative strategies

In this section, we specify four DF strategies for compressive cooperation, namely receiver diversity, code diversity, successive decoding and concatenated decoding. The first three strategies resemble those for binary channel codes, while the last one has no binary counterpart in conventional relay communication because it combines the arithmetic property of CS with the signal superposition of MAC mode.

Both *r*=0 and *r*=1 are considered. When *r*=1, the binary message **u** is treated as a whole. The transmitted signals **x**_{1}, **x**_{2} and **w** are the CS measurements of **u** obtained with matrices *A*_{1}, *A*_{2} and *B*, respectively. When *r*=0, the message **u** is viewed as the concatenation of two parts, $\mathbf{u}={\left[{\mathbf{u}_{1}}^{T}\ {\mathbf{u}_{2}}^{T}\right]}^{T}$. The source transmits the measurements of **u**_{1} in BC mode, while it transmits the measurements of **u**_{2} in MAC mode. The relay decodes **u**_{1} and then transmits its own measurements of **u**_{1} in MAC mode. Therefore, the two signals **w** and **x**_{2} transmitted in MAC mode are CS measurements of different parts of **u**.

The purpose of cooperative strategy design is to choose appropriate matrices *A*_{1}, *A*_{2} and *B* such that the original message can be recovered at the destination with the minimum number of channel uses. Since we adopt Rademacher sampling matrices throughout, the choice of *A*_{1}, *A*_{2} and *B* reduces to choosing the numbers of rows of these matrices, which in turn determines the time proportion *t* of BC mode. The number of measurements needed for successful CS reconstruction depends on their reliability, which must be ensured by appropriate energy allocation at the source and the relay during the BC and MAC modes.

### DF strategies for *r*= 1

We first consider *r*=1. As the source and the relay transmit measurements of the same message **u**, we let *A*_{2}=*B*. Then **w** and **x**_{2} carry the same message, and their signal strengths add up in the air. Using the notations defined in (5), the SNR of ${y}_{{d}_{2}}$ is:

$${P}_{2}={\left(\sqrt{{c}_{sd}{E}_{{s}_{2}}}+\sqrt{{c}_{rd}{E}_{r}}\right)}^{2}. \qquad (13)$$

#### Receiver diversity

In this scheme, the source in BC mode and the relay in MAC mode transmit the same set of CS measurements (*A*_{1}=*A*_{2}=*B*), such that *m*_{1}=*m*_{2} and *t*=0.5. At the destination, the two noisy versions of the same measurement, received in BC and MAC mode, are combined into one through maximal ratio combining (MRC) [16]. The SNR of the combined signal is the sum of SNRs of the received signals from independent Gaussian channels. As the SNR of ${Y}_{{d}_{1}}$ is ${P}_{s{d}_{1}}$, and the SNR of ${Y}_{{d}_{2}}$ is *P*_{2} as defined in (13), the SNR of the combined signal is ${P}_{s{d}_{1}}+{P}_{2}$.
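The MRC step can be sketched concretely. This is a generic illustration of maximal ratio combining under unit-variance noise (the helper name is our own); it shows why the combined SNR is the sum of the branch SNRs.

```python
import numpy as np

def mrc_combine(ys, amps):
    """Maximal ratio combining of copies y_i = a_i * x + z_i with
    independent unit-variance noise z_i.

    The combiner sum(a_i * y_i) / sum(a_i**2) is an unbiased estimate
    of x, and its effective SNR is the sum of the per-branch SNRs
    a_i**2 * E_x."""
    num = sum(a * y for a, y in zip(amps, ys))
    return num / sum(a * a for a in amps)
```

The residual noise variance of the combined estimate is 1/Σa_i², which is exactly the sum-of-SNRs statement in normalized form.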

The general receiver diversity scheme allows *m*_{1}≠*m*_{2}, i.e., *A*_{1} and *A*_{2} have different numbers of rows. In this case, some measurements are received twice while the others are received only once, either in BC mode (corresponding to *t*>0.5) or in MAC mode (corresponding to *t*<0.5). By discussing the two categories (*t*=0.5 can be treated as a special case of either), we obtain the achievable rate of the receiver diversity strategy, where the supremum is taken over all time and power allocations that satisfy the constraint (4). The term *tR*(*P*_{sr}) in the rate expression captures the requirement that the relay must be able to fully decode the source message by the end of BC mode.

#### Code diversity

In the code diversity scheme, *A*_{1}≠*A*_{2}. The destination jointly decodes the original message from the signals received in BC mode and MAC mode. The linear equations to be solved can be written as:

$$\left[\begin{array}{c}{\mathbf{y}}_{1}\\ {\mathbf{y}}_{2}\end{array}\right]=\left[\begin{array}{c}{A}_{1}\\ {A}_{2}\end{array}\right]\mathbf{u}+\left[\begin{array}{c}{\mathbf{z}}_{1}\\ {\mathbf{z}}_{2}\end{array}\right],$$

where **z**_{1} and **z**_{2} are noise realizations corresponding to ${\mathbf{Z}}_{{d}_{1}}$ and ${\mathbf{Z}}_{{d}_{2}}$, and the numbers of measurements in **y**_{1} and **y**_{2} are *m*_{1} and *m*_{2}, respectively. Note that the above equation is obtained by dividing the received signals by their respective power scaling parameters, so the SNRs of **y**_{1} and **y**_{2} remain ${P}_{s{d}_{1}}$ and *P*_{2}.

### DF strategies for *r*= 0

Intuitively, when the channel condition is good, the source can transmit new information to the destination during MAC mode. The destination receives the measurements of message **u**_{1} in BC mode, and the superposition of the measurements of **u**_{1} and **u**_{2} in MAC mode. We propose two decoding strategies and the corresponding matrix designs for *r*=0.

#### Successive decoding

Successive decoding is commonly used in conventional relay networks. The destination first decodes message **u**_{1} from the signals received in both BC and MAC modes; the information about message **u**_{2} is treated as noise at this stage. After **u**_{1} is decoded, the destination removes the contribution of **u**_{1} from the signals received in MAC mode, and then decodes **u**_{2}.

To decode **u**_{1}, the destination solves the linear equations formed by the BC-mode signal **y**_{1}, which contains the measurements *A*_{1}**u**_{1}, and the MAC-mode signal **y**_{2}, which contains the measurements *B***u**_{1} corrupted by both the channel noise and the interfering term *A*_{2}**u**_{2}. The SNR of **y**_{1} is ${P}_{s{d}_{1}}$. As *A*_{2}**u**_{2} is viewed as noise, the SNR of **y**_{2} in solving **u**_{1} can be computed as:

$$\frac{{c}_{rd}{E}_{r}}{1+{c}_{sd}{E}_{{s}_{2}}}.$$

After **u**_{1} is decoded, the destination removes its contribution from the MAC-mode signal to obtain ${\mathbf{y}}_{2}^{\prime}$ and decodes message **u**_{2} from:

$${\mathbf{y}}_{2}^{\prime}={A}_{2}{\mathbf{u}}_{2}+{\mathbf{z}}_{2}^{\prime}.$$

By definition (5), the SNR of ${\mathbf{y}}_{2}^{\prime}$ is ${P}_{s{d}_{2}}$.
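The two-stage SNR bookkeeping can be made concrete with a hypothetical helper. This is our reading of the model, assuming unit-variance noise and treating the undecoded measurements as additional Gaussian noise in stage 1.

```python
def successive_decoding_snrs(E_s2, E_r, c_sd=1.0, c_rd=4.0):
    """Stage 1: decode u1 from the relay's component (power c_rd*E_r)
    while the source's measurements of u2 (power c_sd*E_s2) act as
    extra noise on top of the unit-variance channel noise.
    Stage 2: after cancelling u1, decode u2 against channel noise only."""
    snr_stage1 = c_rd * E_r / (1.0 + c_sd * E_s2)
    snr_stage2 = c_sd * E_s2
    return snr_stage1, snr_stage2
```

For example, with the default gains, ${E}_{{s}_{2}}=2$ and *E*_{r}=3 give a stage-1 SNR of 12/3 = 4 and a stage-2 SNR of 2.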

#### Concatenated decoding

Although successive decoding is the capacity-achieving decoding strategy in the Gaussian relay channel, it may not be optimal in compressive cooperation. This is because the achievable rate of compressive transmission *R*(*P*) has a very different form from the Shannon capacity $C=\frac{1}{2}\log(1+P)$. The intuition behind concatenated decoding is that higher efficiency may be achieved by jointly decoding **u**_{1} and **u**_{2} rather than treating **u**_{2} as noise when recovering **u**_{1}.

In this scheme, the destination directly exploits the superposition of the measurements of **u**_{1} and **u**_{2} in MAC mode. The superposed signal can be viewed as a measurement of the whole message **u**, which can be decoded from:

$${\mathbf{y}}_{2}=\left[B\ \ {A}_{2}\right]\left[\begin{array}{c}{\mathbf{u}}_{1}\\ {\mathbf{u}}_{2}\end{array}\right]+{\mathbf{z}}_{2}=\left[B\ \ {A}_{2}\right]\mathbf{u}+{\mathbf{z}}_{2}.$$

To make the concatenated matrix behave like a single Rademacher sampling matrix, we need to ensure that *B***u**_{1} and *A*_{2}**u**_{2} have matching energy at the destination, or:

$${c}_{rd}{E}_{r}={c}_{sd}{E}_{{s}_{2}}.$$

Since **u**_{1} and **u**_{2} are independent messages, and the CS measurements of both messages are zero-centered, we can compute the SNR of **y**_{2}:

$${c}_{sd}{E}_{{s}_{2}}+{c}_{rd}{E}_{r}.$$
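A small helper makes the concatenated-decoding SNR and the matching-energy condition explicit. The function name is hypothetical and unit-variance noise is assumed, following our reading of the model above.

```python
def concatenated_snr(E_s2, E_r, c_sd=1.0, c_rd=4.0):
    """SNR of the superposed MAC-mode signal under joint decoding of
    u = [u1; u2]: both the relay's and the source's components are
    signal, so their received powers add over unit-variance noise.
    The matching-energy condition keeps the concatenated sampling
    matrix balanced across its two halves."""
    assert abs(c_rd * E_r - c_sd * E_s2) < 1e-9, "matching-energy condition"
    return c_sd * E_s2 + c_rd * E_r
```

With the default gains, ${E}_{{s}_{2}}=4$ and *E*_{r}=1 satisfy the matching condition (4 = 4) and yield an SNR of 8.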

In all the four achievable rate expressions, the supremum is taken over all possible time proportion *t* and transmission powers that satisfy the energy constraint (4).

## Numerical study and simulations

In all experiments, a sparse binary source with *p*=0.1 is considered. As the source is binary, we can evaluate the channel rate with the *bit rate* and characterize imperfect transmissions with the *bit error rate* (BER). For convenience, instead of the information rate we present results using the bit rate:

$${R}_{b}(P)=\frac{n}{{m}_{1}+{m}_{2}},$$

where *n* is the block length of **u**. We set *n*=6000 unless otherwise stated. All the results shown in this section are about *R*_{b}(*P*); however, we continue to use the notation *R*(*P*) when a statement is valid for both rates. In fact, for 0.1-sparse data, the bit rate *R*_{b}(*P*) differs from the information rate *R*(*P*) in (11) only by a constant coefficient:

$${R}_{b}(P)=\frac{R(P)}{H(0.1)},$$

where *H*(0.1)≈0.469 is the binary entropy of a source bit.

Recall that we use *R*((*γ*_{1},*P*_{1}),…,(*γ*_{k},*P*_{k})) to denote the achievable rate when CS measurements are received from multiple channels. This creates an additional dimension in characterizing channel rates. Without reasonable simplification, we would be unable to compute the optimal rates of the different DF schemes even through numerical integration. Therefore, we approximate the achievable rate of the combined channels with:

$$R(({\gamma}_{1},{P}_{1}),\dots,({\gamma}_{k},{P}_{k}))\approx \sum_{i=1}^{k}{\gamma}_{i}R({P}_{i}).$$

This approximation is reasonable because, otherwise, a source would need to perform per-measurement energy allocation to achieve the optimal performance.

### Evaluating compressive cooperation strategies

In the formulation of the proposed four DF schemes, the supremum is taken over all possible time proportion and transmission powers that satisfy (4). The analytical solution to the optimization problem is hard to find since *R*(*P*) is unknown. Therefore, we first obtain *R*(*P*) for compressive transmission through simulations, and then compute the achievable rates of the four DF strategies through numerical integration. Baron et al. [5] have reported that there is an optimal row weight *L*_{opt}≈2/*p* beyond which any performance gain is marginal. We slightly adjust *L* to 15 and use eight −1’s and seven 1’s. For simplicity, we use the amplitude modulation of only one carrier wave. The performance for quadrature amplitude modulation (QAM) can be easily deduced from our reported results.

The achievable rates of the four DF schemes are compared, with the schemes labeled *codd* (code diversity), *recd* (receiver diversity), *succ* (successive decoding), and *conc* (concatenated decoding). It is observed that transmitting through a relay greatly increases the channel throughput when the channel SNR is low, while the benefit becomes insignificant when the SNR exceeds 15 dB.

Although *R*(*P*) shows an “S” shape when the *x*-axis is plotted in dB, it is a concave function with respect to *P*. Together with *R*(0)≥0, this implies that *R*(·) is subadditive, i.e.:

$$R({P}_{1}+{P}_{2})\le R({P}_{1})+R({P}_{2}).$$

Using this property, it can be derived that the rate of receiver diversity is no greater than that of code diversity.
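The subadditivity argument can be sanity-checked with any concave, nonnegative rate curve. The Shannon AWGN formula below is only a stand-in for the numerically obtained compressive-transmission *R*(*P*), chosen because it shares the relevant properties.

```python
import numpy as np

def stand_in_rate(P):
    """Concave stand-in for R(P) with R(0) = 0 (Shannon AWGN capacity)."""
    return 0.5 * np.log2(1.0 + P)

def is_subadditive(f, P1, P2):
    """Concavity plus f(0) >= 0 implies f(P1 + P2) <= f(P1) + f(P2);
    a small epsilon absorbs floating-point rounding."""
    return f(P1 + P2) <= f(P1) + f(P2) + 1e-12
```

Checking the inequality over a grid of SNR pairs confirms the property numerically.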

The comparison between the code diversity scheme for *r*=1 and the two *r*=0 schemes leads to conclusions consistent with those for conventional relay channels. First, the performance difference between the *r*=0 and *r*=1 schemes is not significant. Second, the *r*=0 schemes show an advantage when the channel SNR is high, but the *r*=1 schemes perform better when the SNR is low. Our numerical results show that the achievable rate of the *r*=0 schemes exceeds that of the *r*=1 schemes when the SNR is above 13 dB. Although the two *r*=0 schemes exhibit similar performance, concatenated decoding appears to be better than successive decoding when the channel SNR is above 13 dB.

In the simulations, a BER of 10^{−5} is considered the threshold of reliable transmission; for each scheme, we increase the channel SNR until the BER goes below 10^{−5}, and plot the resulting SNR-rate pair in Figure 4.

In Figure 4, the simulation results of three DF schemes are compared with the highest numerical rate computed over *r*=0 and *r*=1. It can be seen that the implementation gap is within 1.4 dB for all three schemes. During simulation, we observe that code diversity has very stable performance at both high and low SNRs, while the performance of the two *r*=0 schemes varies slightly more. In addition, when the channel SNR is below 12 dB, both *r*=0 schemes degrade to two-hop transmission, i.e., ${E}_{{s}_{2}}=0$. Given that the *r*=0 schemes do not significantly improve the channel rate at high SNR and that code diversity is easier to implement, sticking to the code diversity scheme is a wise choice for practical systems.

In fact, when channel state information is unavailable at the source node, CS measurements can serve as a rateless channel code: the source can keep generating and transmitting measurements until a predefined recovery quality is achieved at the receiver. Each additional CS measurement adds redundancy that helps overcome channel noise, as shown in [5]. This rateless property makes a compressive cooperative communication system much easier to adapt to channel variations than traditional fixed-rate LDPC codes.
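The rateless behavior can be sketched abstractly. The measurement generator and decoder below are stubs with hypothetical names; a real system would plug in the Rademacher measurement stream and the belief-propagation decoder.

```python
def rateless_transmit(next_measurement, try_decode, max_uses=100000):
    """Stream CS measurements one channel use at a time until the
    receiver's decoder reports success; return the number of channel
    uses spent and the decoded block (or None on budget exhaustion)."""
    received = []
    for k in range(max_uses):
        received.append(next_measurement(k))
        ok, decoded = try_decode(received)
        if ok:
            return k + 1, decoded
    return max_uses, None
```

The loop terminates as soon as the decoder succeeds, so the effective rate adapts automatically to the realized channel quality.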

At the end of this part, we analyze and compare the computational complexities of the four DF strategies.

### Comparing compressive transmission with a separate source channel coding scheme

We find that compressive transmission incurs no rate loss relative to the reference schemes when the channel SNR is below 15 dB. Above 15 dB, the rate achieved by compressive transmission starts to saturate. Additional experiments show that the saturation SNR is determined by the sparsity of the Rademacher measurement matrix: with a denser measurement matrix, compressive transmission covers a larger dynamic range. Even with the current setting, compressive transmission shows better channel adaptation capability than the reference schemes.

## Conclusion

This article proposes compressive transmission, which utilizes CS random projections as the joint source-channel code. We describe and analyze four DF cooperative strategies for compressive transmission in a three-terminal half-duplex Gaussian relay network. Both numerical studies and simulation experiments are carried out to evaluate the strategies’ achievable rates. We have also compared compressive cooperation with a conventional separate source-channel coding scheme. The results show that the proposed compressive cooperation has great potential in wireless relay channels because it not only has high transmission efficiency but also adapts well to channel variations.

## Declarations

### Acknowledgements

The authors would like to thank the anonymous reviewers, whose valuable comments helped to greatly improve the quality of the article.


## References

- Candès E, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. *IEEE Trans. Inf. Theory* 2006, 52(2):489-509.
- Donoho D: Compressed sensing. *IEEE Trans. Inf. Theory* 2006, 52(4):1289-1306.
- Zhang F, Pfister H: Compressed sensing and linear codes over real numbers. *Proc. of Information Theory and Applications Workshop* (IEEE, San Diego, 2008), pp. 558-561.
- Dimakis A, Vontobel P: LP decoding meets LP decoding: a connection between channel coding and compressed sensing. *Proc. of the 47th Annual Allerton Conference on Communication, Control, and Computing* (IEEE, Monticello, 2009), pp. 8-15.
- Baron D, Sarvotham S, Baraniuk RG: Bayesian compressive sensing via belief propagation. *IEEE Trans. Signal Process* 2010, 58(1):269-280.
- Luby M, Mitzenmacher M: Verification-based decoding for packet-based low-density parity-check codes. *IEEE Trans. Inf. Theory* 2005, 51:120-127.
- Duarte M, Davenport M, Takhar D, Laska J, Sun T, Kelly K, Baraniuk R: Single-pixel imaging via compressive sampling. *IEEE Signal Process. Mag* 2008, 25:83-91.
- van der Meulen E: Three-terminal communication channels. *Adv. Appl. Probab* 1971, 3:120-154.
- Chakrabarti A, De Baynast A, Sabharwal A, Aazhang B: Low density parity check codes for the relay channel. *IEEE J. Sel. Areas Commun* 2007, 25:280-291.
- Hu J, Duman T: Low density parity check codes over wireless relay channels. *IEEE Trans. Wirel. Commun* 2007, 6(9):3384-3394.
- Razaghi P, Yu W: Bilayer low-density parity-check codes for decode-and-forward in relay channels. *IEEE Trans. Inf. Theory* 2007, 53(10):3723-3739.
- Valenti M, Zhao B: Distributed turbo codes: towards the capacity of the relay channel. *Proc. of 2003 IEEE 58th Vehicular Technology Conference*, vol. 1 (2003), pp. 322-326.
- Zhang Z, Duman TM: Capacity-approaching turbo coding and iterative decoding for relay channels. *IEEE Trans. Commun* 2005, 53(11):1895-1905.
- Zhang Z, Duman TM: Capacity approaching turbo coding for half-duplex relaying. *IEEE Trans. Commun* 2007, 55(9):1822-1822.
- Khojastepour M, Sabharwal A, Aazhang B: On capacity of Gaussian ‘cheap’ relay channel. *Proc. of IEEE GLOBECOM*, vol. 3 (2003), pp. 1776-1780.
- Proakis J, Salehi M: *Digital Communications* (McGraw-Hill, New York, 2001).

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.