
Compressive cooperation for Gaussian half-duplex relay channel

Abstract

Motivated by the compressive sensing (CS) theory and its close relationship with low-density parity-check codes, we propose compressive transmission, which utilizes CS as the channel code and directly transmits multi-level CS random projections through amplitude modulation. This article focuses on compressive cooperation strategies in a relay channel. Four decode-and-forward (DF) strategies, namely receiver diversity, code diversity, successive decoding, and concatenated decoding, are analyzed, and their achievable rates in a three-terminal half-duplex Gaussian relay channel are quantified. The four schemes are compared through both numerical calculation and simulation experiments. In addition, we compare compressive cooperation with a separate source-channel coding scheme for transmitting sparse sources. Results show that compressive cooperation has great potential in both transmission efficiency and adaptation to channel variations.

Introduction

Compressive sensing (CS) [1, 2] is an emerging theory concerning the acquisition and recovery of sparse signals from a small number of random linear projections. Recently, it has been observed that CS is closely related to the well-known class of channel codes called low-density parity-check (LDPC) codes [3, 4]. In particular, when the measurement matrix in CS is chosen to be the parity-check matrix of an LDPC code, the CS reconstruction algorithm proposed by Baron et al. [5] is almost identical to Luby's LDPC decoding algorithm [6]. It is this similarity between CS and LDPC codes that inspires us to propose and study compressive transmission, which utilizes CS as the channel code and directly transmits multi-level CS random projections through amplitude modulation.

Since CS has both source compression and channel protection capabilities, it can be regarded as a joint source-channel code. When the data being transmitted is sparse or compressible, a conventional scheme first uses source coding to compress the data and then adopts channel coding to protect the compressed data over the lossy channel. Compressive transmission brings some unique advantages over such a conventional scheme. First, since CS uses random projections to generate measurements regardless of the compressible patterns, it reduces complexity at the sender side. This benefits thin signal acquisition devices, such as the single-pixel camera [7] and sensor nodes. Second, it improves robustness. It is well known that compressed data are very sensitive to bit errors. In the conventional scheme, when the channel code is not strong enough to protect data in a suddenly deteriorated channel, the entire coding block or even the entire data sequence may become undecodable. In contrast, CS random projections operate directly over source bits, and sporadic bit errors will not affect the overall data quality.

In this article, we focus on cooperative strategies for compressive transmission in a relay channel, or compressive cooperation. We consider a three-terminal Gaussian relay channel consisting of the source, the relay, and the destination. Such a relay channel was first introduced by van der Meulen [8] in 1971 and has attracted intense interest since then. However, most previous research on cooperative strategies is based on binary channel codes, such as LDPC codes [9–11] and turbo codes [12–14]. This article presents the first work that applies CS as the joint source-channel code in a relay channel. In particular, we present four decode-and-forward (DF) strategies, three of which resemble those for binary channel codes, while the fourth is peculiar to CS because it takes the arithmetic property of CS into account. We theoretically analyze the four strategies and quantify their achievable rates in a half-duplex Gaussian relay channel. Numerical studies and simulations show that all strategies except receiver diversity have high transmission efficiency and a small implementation gap, while the code diversity scheme has the most stable performance. We further compare compressive cooperation with a separate source-channel coding scheme, and the results show that using CS as a joint source-channel code has great potential in a relay channel.

The rest of the article is organized as follows. Section 2 describes the channel model. Section 3 overviews compressive cooperation and analyzes the information-rate bound in a half-duplex Gaussian relay channel. Section 4 studies four DF schemes and their respective achievable rates. Section 5 reports results of numerical studies and simulations. Section 6 concludes the article.

Channel model

We consider a half-duplex three-terminal relay channel [9]. The source, relay, and destination are denoted by S, R, and D, respectively. Let the channel gains of the three direct links (S, D), (S, R), and (R, D) be c_sd, c_sr, and c_rd. In this work, the relay is located on the S-D line at equal distance from the source and the destination. With the attenuation exponent set to 2, the channel gains are c_sd = 1 and c_sr = c_rd = 4 (since the relay is at half the S-D distance from each terminal). Half-duplex operation means that the relay R cannot receive and transmit at the same time. Therefore, the channel is time-shared by a broadcast (BC) mode and a multiple access (MAC) mode, as depicted in Figure 1. Let t (0 ≤ t ≤ 1) denote the time proportion of BC mode; then 1−t is the time proportion of MAC mode.

Figure 1. A three-terminal relay network with R operating in half-duplex mode.

In BC mode, the source transmits symbol x_1. Both the relay and the destination are able to hear it. The received signals at the relay and the destination are y_r and y_d1:

$$y_r = \sqrt{c_{sr}}\, x_1 + z_r \qquad (1)$$

$$y_{d1} = \sqrt{c_{sd}}\, x_1 + z_{d1} \qquad (2)$$

where z_r and z_d1 are the Gaussian noises perceived at R and D.

At the end of BC mode, the relay generates a message w based on its received signals. Then, in MAC mode, the source transmits x_2 while the relay simultaneously transmits w. The destination receives the superposition of the two signals, which can be represented by:

$$y_{d2} = \sqrt{c_{sd}}\, x_2 + \sqrt{c_{rd}}\, w + z_{d2} \qquad (3)$$

where z_d2 is the Gaussian noise perceived at D. Finally, the destination D decodes the original message from the signals received during the BC and MAC modes.

Assume that the random variables Z_r, Z_d1, and Z_d2, corresponding to the noises z_r, z_d1, and z_d2, have the same unit energy. The system resource can then be characterized by the transmission energy budget E. Denote by E_s1, E_s2, and E_r the average symbol energies of the random variables X_1, X_2, and W, which correspond to x_1, x_2, and w, respectively. The system constraint can be described by the following inequality:

$$t E_{s1} + (1-t)(E_{s2} + E_r) \le E \qquad (4)$$

For clarity of presentation, the following notations are defined for the received signal strengths over the different links:

$$P_{sd1} = c_{sd} E_{s1}, \quad P_{sr} = c_{sr} E_{s1}, \quad P_{sd2} = c_{sd} E_{s2}, \quad P_{rd} = c_{rd} E_r \qquad (5)$$

Compressive transmission overview

Compressive transmission in a relay channel

In this research, we model the source data as i.i.d. bits that take the value 1 with probability p and 0 with probability 1−p. When p ≠ 0.5, the source is considered sparse or compressible. During transmission, source bits are segmented into length-n blocks. Let u = [u_1, u_2, …, u_n]^T be one source block. In order to transmit u over the relay channel, the source first generates CS measurements using a sparse Rademacher matrix with elements drawn from {0, 1, −1}, and transmits them in BC mode. The transmitted symbols, which consist of m_1 measurements, can be represented by:

$$x_1 = \alpha_{s1} A_1 u \qquad (6)$$

where α_s1 is a power scaling parameter that matches the sender's power constraint.
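As a concrete illustration, the following minimal sketch (in Python/NumPy) generates a sparse Rademacher matrix and the BC-mode symbols of Equation (6). The matrix construction, block sizes, and the energy value E_s1 = 1 are illustrative assumptions rather than the paper's exact settings, and the scaling α_s1 is estimated empirically here.

```python
import numpy as np

def sparse_rademacher_matrix(m, n, row_weight, rng):
    # each row has `row_weight` nonzero entries drawn from {+1, -1}; all others are 0
    # (an illustrative construction; the paper later fixes the per-row sign pattern)
    A = np.zeros((m, n))
    for i in range(m):
        cols = rng.choice(n, size=row_weight, replace=False)
        A[i, cols] = rng.choice([-1.0, 1.0], size=row_weight)
    return A

rng = np.random.default_rng(0)
n, m1, L, p = 6000, 3000, 15, 0.1            # block length, #measurements, row weight, sparsity
u = (rng.random(n) < p).astype(float)        # i.i.d. Bernoulli(p) source block

A1 = sparse_rademacher_matrix(m1, n, L, rng)
E_s1 = 1.0                                   # assumed average symbol energy in BC mode
alpha_s1 = np.sqrt(E_s1 / np.mean((A1 @ u) ** 2))   # empirical power scaling
x1 = alpha_s1 * (A1 @ u)                     # transmitted BC-mode symbols, Equation (6)
print(np.round(np.mean(x1 ** 2), 3))         # approximately E_s1
```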

In MAC mode, the source generates and transmits another m_2 measurements using an identical or a different Rademacher matrix, which can be represented by:

$$x_2 = \alpha_{s2} A_2 u \qquad (7)$$

This article studies DF strategies and leaves compress-and-forward (CF) strategies to future research. A prerequisite of DF relaying is that the relay can fully decode the messages transmitted by the source in BC mode. Under this assumption, the relay generates new measurements of u and transmits them in MAC mode:

$$w = \alpha_r B u \qquad (8)$$

where B is also a Rademacher matrix, and w contains m_2 measurements.

The power scaling parameters in the above equations ensure that:

$$E[X_1^2] \le E_{s1}; \quad E[X_2^2] \le E_{s2}; \quad E[W^2] \le E_r \qquad (9)$$

Under these power constraints, the corresponding scaling parameters α_s1, α_s2, and α_r can be derived; the average powers of the symbols A_1 u, A_2 u, and B u are determined by the row weight of the corresponding sampling matrix and the sparsity probability of u.
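As one concrete illustration (a derivation sketch under the stated i.i.d. Bernoulli(p) source model, not reproduced from the paper), consider a single row of A_1 with L nonzero entries whose signs are s_1, …, s_L ∈ {±1}. Then

$$E\big[(A_1 u)_i^2\big] = \mathrm{Var}\Big(\sum_{j=1}^{L} s_j u_j\Big) + \Big(E\Big[\sum_{j=1}^{L} s_j u_j\Big]\Big)^2 = L\,p(1-p) + p^2\Big(\sum_{j=1}^{L} s_j\Big)^2, \qquad \alpha_{s1} = \sqrt{\frac{E_{s1}}{E\big[(A_1 u)_i^2\big]}}$$

and α_s2 and α_r follow in the same way from their respective energy budgets.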

Since m_1 measurements are transmitted in BC mode and m_2 measurements are transmitted in MAC mode, the time proportion of BC mode is:

$$t = \frac{m_1}{m_1 + m_2} \qquad (10)$$

The destination will perform CS decoding from all the measurements received in both modes. The belief propagation algorithm (CS-BP) proposed by Baron et al. [5] is adopted in our system. If the decoding is successful, the transmission rate can be computed by:

$$R = \frac{H(u)}{m_1 + m_2} \qquad (11)$$

where H(u) is the entropy of u, and m_1 and m_2 determine the numbers of time slots used in the BC and MAC modes, respectively. If the base of the logarithm in the entropy computation is 2, the rate R is expressed in bits per channel use. Note that the rate R in Equation (11) depends on the symbol energies E_s1, E_s2, and E_r: for compressive transmission over a single link, a larger transmission power yields higher-quality measurements, so fewer measurements are needed for source recovery and a higher rate can be achieved from a larger allocated transmission energy.
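For instance, the following small sketch computes the entropy of an i.i.d. Bernoulli(p) block and the resulting rate of Equation (11); only n and p come from the paper's later experiments, while the measurement counts are illustrative assumptions.

```python
import numpy as np

def binary_entropy(p):
    # entropy in bits of a single Bernoulli(p) source bit
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

n, p = 6000, 0.1            # block length and sparsity used later in the paper
m1, m2 = 3000, 2000         # illustrative measurement counts for BC and MAC mode
H_u = n * binary_entropy(p) # entropy of the i.i.d. source block u
R = H_u / (m1 + m2)         # rate in bits per channel use, Equation (11)
t = m1 / (m1 + m2)          # time proportion of BC mode, Equation (10)
print(round(R, 3), round(t, 3))   # 0.563 0.6
```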

In such a compressive transmission system, the encoding complexity is rather low because computing the measurements at the source node involves only sums and differences of a small subset of the source vector. The complexity of the belief-propagation-based decoding algorithm is O(TMLQ log Q) [5], where L is the average row weight, Q is the dimension of the messages exchanged in the belief propagation process, T is the number of iterations, and M is the number of received measurements.

Information-rate bounds

We concentrate on DF relaying. It has been shown that the capacity of the above half-duplex Gaussian relay channel under the DF strategy is [15]:

$$C = \sup_{t,\, p(\cdot)} \min\Big\{\, t\, I(X_1; Y_r) + (1-t)\, I(X_2; Y_{d2} \mid W),\;\; t\, I(X_1; Y_{d1}) + (1-t)\, I(X_2, W; Y_{d2}) \,\Big\} \qquad (12)$$

where I(X;Y) represents the mutual information conveyed by a channel with input X and output Y. The supremum is taken over t (0 ≤ t ≤ 1) and all joint distributions p(x_1, x_2, w) subject to the alphabet constraints on X_1, X_2, and W.

In order to approach the capacity, a parameter that is not explicitly shown in (12) also needs to be optimized: the correlation r between X_2 and W, i.e. between the codewords sent by the source and the relay in MAC mode. At one extreme (r = 1) the source and the relay send identical messages, while at the other extreme (r = 0) they send entirely different messages. In the design of cooperative LDPC codes, it has been observed that the optimal achievable rate can be well approximated by the better of the two cases r = 0 and r = 1 [9]. We make the same simplification and only consider the cases r = 0 and r = 1. In these two extreme cases, several terms in (12) simplify:

• When r = 1:

$$I(X_2; Y_{d2} \mid W) = 0, \qquad I(X_2, W; Y_{d2}) = I(X_2; Y_{d2})$$

• When r = 0:

$$I(X_2; Y_{d2} \mid W) = I(X_2; Y_{d2}), \qquad I(X_2, W; Y_{d2}) = I(X_2; Y_{d2}) + I(W; Y_{d2} \mid X_2)$$

The mutual information terms in (12) are determined by both the channel signal-to-noise ratio (SNR) and the input alphabet. The input alphabet is determined by the modulation scheme in conventional transmission, and jointly by the binary source and the measurement matrix in compressive transmission. It is therefore impossible to compute a general information-rate curve for compressive cooperation.

Instead, we provide an intuitive understanding of the information rate of compressive cooperation by presenting the upper bounds for several settings in Figure 2. The figure shows the information-rate bounds of direct (S-D) and cooperative communication when the channel inputs at the source and the relay are the CS measurements described above. When the sensing matrix is determined and the properties of the source u are known, the distribution of the CS measurements can be calculated. Under the assumption that all measurements are independent, the mutual information terms in (12) are derived from the distribution of the channel input and the AWGN channel assumption. It can be seen that cooperative communication achieves a higher rate than direct communication over a large range of channel SNRs. When the channel SNR is high, direct and cooperative communication saturate at the same rate, which is determined by the properties of the sparse signal u. The non-sparse source (p = 0.5) has a higher saturation rate than the sparse source (p = 0.1).
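For reference, the mutual information of a discrete-input AWGN channel can be estimated numerically once the input distribution is known. The sketch below uses Monte Carlo estimation with a toy three-level alphabet standing in for the actual CS measurement distribution; the alphabet, probabilities, and sample count are illustrative assumptions.

```python
import numpy as np

def awgn_mutual_information(alphabet, probs, num_samples=200_000, rng=None):
    # Monte Carlo estimate of I(X;Y) in bits for Y = X + Z, Z ~ N(0,1),
    # with X drawn from a finite alphabet (e.g. scaled CS measurement values)
    rng = rng or np.random.default_rng(0)
    x = rng.choice(alphabet, size=num_samples, p=probs)
    y = x + rng.standard_normal(num_samples)
    # mixture density f_Y(y) = sum_x p(x) * N(y; x, 1)
    dens = np.zeros(num_samples)
    for a, pa in zip(alphabet, probs):
        dens += pa * np.exp(-0.5 * (y - a) ** 2) / np.sqrt(2 * np.pi)
    h_y = -np.mean(np.log2(dens))             # differential entropy of Y
    h_z = 0.5 * np.log2(2 * np.pi * np.e)     # differential entropy of the noise
    return h_y - h_z                          # I(X;Y) = h(Y) - h(Y|X) = h(Y) - h(Z)

# toy example: a symmetric 3-level input as a stand-in for a CS measurement alphabet
I = awgn_mutual_information(np.array([-2.0, 0.0, 2.0]), np.array([0.25, 0.5, 0.25]))
print(round(I, 3))
```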

Figure 2. Information-rate bounds of compressive cooperation for sources at different sparsities.

The achievable rate of compressive transmission is a function of the channel SNR when the properties of the source message and the measurement matrix are given; two functions are defined for ease of presentation. When all measurements come from the same channel with SNR P, the achievable rate is denoted by R(P). When measurements are received from different channels, the achievable rate is denoted by R((γ_1, P_1), …, (γ_k, P_k)), where k is the number of channel realizations and γ_i and P_i (1 ≤ i ≤ k) are the time proportion and SNR of the i-th channel realization.

Compressive cooperative strategies

In this section, we specify four DF strategies for compressive cooperation, namely receiver diversity, code diversity, successive decoding, and concatenated decoding. The first three strategies resemble those for binary channel codes, while the last one has no binary counterpart in conventional relay communication because it combines the arithmetic property of CS with the signal superposition of MAC mode.

Both r = 0 and r = 1 are considered. When r = 1, the binary message u is treated as a whole; the transmitted signals x_1, x_2, and w are the CS measurements of u obtained with matrices A_1, A_2, and B, respectively. When r = 0, the message u is viewed as the concatenation of two parts, u = [u_1^T u_2^T]^T. The source transmits the measurements of u_1 in BC mode and the measurements of u_2 in MAC mode. The relay decodes u_1 and then transmits measurements of u_1 in MAC mode. Therefore, the two signals w and x_2 transmitted in MAC mode are CS measurements of different parts of u.

The purpose of cooperative strategy design is to choose appropriate matrices A_1, A_2, and B such that the original message can be recovered at the destination with the minimum number of channel uses. Since we adopt Rademacher sampling matrices throughout, the choice of A_1, A_2, and B reduces to choosing the numbers of rows of these matrices, which also implies the selection of the time proportion t of BC mode. The number of measurements needed for successful CS reconstruction depends on their reliability, which should be ensured by appropriate energy allocation at the source and the relay during the BC and MAC modes.

DF strategies for r = 1

We propose two cooperative strategies, namely receiver diversity and code diversity, for the case r = 1. As the source and the relay transmit measurements of the same message u, we let A_2 = B. Then w and x_2 contain the same message, and their signals add up in the air. Using the notation defined in (5), the SNR of y_d2 is:

$$P_2 = \left(\sqrt{c_{rd} E_r} + \sqrt{c_{sd} E_{s2}}\right)^2 = \left(\sqrt{P_{rd}} + \sqrt{P_{sd2}}\right)^2 \qquad (13)$$

Receiver diversity

In this scheme, the source in BC mode and the relay in MAC mode transmit the same set of CS measurements (A_1 = A_2 = B), so that m_1 = m_2 and t = 0.5. At the destination, the two noisy versions of each measurement, received in BC and MAC mode, are combined into one through maximal ratio combining (MRC) [16]. The SNR of the combined signal is the sum of the SNRs of the signals received over the independent Gaussian channels. As the SNR of y_d1 is P_sd1 and the SNR of y_d2 is P_2 as defined in (13), the SNR of the combined signal is P_sd1 + P_2.
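The following sketch (with assumed SNR values and a toy ±1 symbol stream, not the actual CS measurements) illustrates the MRC step and verifies numerically that the combined SNR is approximately P_sd1 + P_2:

```python
import numpy as np

rng = np.random.default_rng(1)
num = 100_000
s = rng.choice([-1.0, 1.0], size=num)              # unit-energy symbol stream (illustrative)

P_sd1, P_2 = 2.0, 6.0                              # assumed SNRs of the BC- and MAC-mode copies
y1 = np.sqrt(P_sd1) * s + rng.standard_normal(num) # copy received in BC mode
y2 = np.sqrt(P_2)   * s + rng.standard_normal(num) # copy received in MAC mode

# MRC: weight each copy by (channel amplitude / noise variance); with unit-variance
# noise the weights reduce to the channel amplitudes
y_mrc = np.sqrt(P_sd1) * y1 + np.sqrt(P_2) * y2
amp = np.mean(y_mrc * s)                           # estimated signal amplitude after combining
snr_mrc = amp ** 2 / np.var(y_mrc - amp * s)
print(round(snr_mrc, 2))                           # approximately P_sd1 + P_2 = 8
```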

A more complicated implementation of receiver diversity is to let m_1 ≠ m_2, i.e. to let A_1 and A_2 have different numbers of rows. In this case, some measurements are received twice while the others are received only once, either in BC mode (corresponding to t > 0.5) or in MAC mode (corresponding to t < 0.5). By treating the two categories separately (t = 0.5 is a special case of either), we obtain the achievable rate of the receiver diversity strategy:

$$R_{DF}^{recd} = \begin{cases} \sup \min\big\{ R\big((t,\, P_{sd1}+P_2),\, (1-2t,\, P_2)\big),\; t R(P_{sr}) \big\}, & \text{if } t < 0.5; \\[4pt] \sup \min\big\{ R\big((2t-1,\, P_{sd1}),\, (1-t,\, P_{sd1}+P_2)\big),\; t R(P_{sr}) \big\}, & \text{if } t \ge 0.5. \end{cases} \qquad (14)$$

where the supremum is taken over all time and power allocations that satisfy constraint (4). The term tR(P_sr) expresses the constraint that the relay must be able to fully decode the source message by the end of BC mode.

Code diversity

In order to exploit code diversity, the source transmits different measurements in BC and MAC mode, i.e. A_1 ≠ A_2. The destination jointly decodes the original message from the signals received in BC and MAC mode. The linear system to be solved can be written as:

$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} u + \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \qquad (15)$$

where z_1 and z_2 are unknown noise realizations corresponding to Z_d1 and Z_d2. The numbers of measurements in y_1 and y_2 are m_1 and m_2, respectively. Note that the above equation is obtained by dividing the received signals by their respective power scaling parameters; the SNRs of y_1 and y_2 remain P_sd1 and P_2.
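A minimal sketch of how the destination could assemble the joint system (15) is given below, assuming A_2 = B as in this r = 1 scheme and that the gains and scaling values are available; all names are illustrative and the CS decoder itself is not shown.

```python
import numpy as np

def stacked_code_diversity_system(A1, A2, u, alpha_s1, alpha_s2, alpha_r,
                                  c_sd, c_rd, rng):
    # BC-mode observation of the A1 measurements
    y1 = np.sqrt(c_sd) * alpha_s1 * (A1 @ u) + rng.standard_normal(A1.shape[0])
    # in MAC mode the source and relay send the same measurements (A2 = B),
    # so their amplitudes add up before the noise is applied
    amp2 = np.sqrt(c_sd) * alpha_s2 + np.sqrt(c_rd) * alpha_r
    y2 = amp2 * (A2 @ u) + rng.standard_normal(A2.shape[0])
    # divide out the scaling so both blocks directly measure A*u, as in (15)
    A_stack = np.vstack([A1, A2])
    y_stack = np.concatenate([y1 / (np.sqrt(c_sd) * alpha_s1), y2 / amp2])
    return A_stack, y_stack   # feed these to the CS decoder (e.g. CS-BP)
```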

Considering the constraint that the relay must fully decode the original message by the end of BC mode, the achievable rate of the code diversity strategy is:

$$R_{DF}^{codd} = \sup \min\big\{ t R(P_{sr}),\; R\big((t,\, P_{sd1}),\, (1-t,\, P_2)\big) \big\} \qquad (16)$$

DF strategies for r = 0

Intuitively, when the channel condition is good, the source can transmit new information to the destination during MAC mode. The destination receives the measurements of message u_1 in BC mode, and receives the superposition of the measurements of u_1 and u_2 in MAC mode. We propose two different decoding strategies and the corresponding matrix designs for r = 0.

Successive decoding

Successive decoding is commonly used in conventional relay networks. The destination first decodes message u_1 from the signals received in both BC and MAC mode; at this stage, the information about message u_2 is treated as noise. After u_1 is decoded, the destination removes the contribution of u_1 from the signals received in MAC mode and then decodes u_2.

In order to decode u1, the destination solves the following linear equations:

$$\begin{bmatrix} A_1 \\ B \end{bmatrix} u_1 + \begin{bmatrix} z_1 \\ \dfrac{\sqrt{c_{sd}}\,\alpha_{s2}}{\sqrt{c_{rd}}\,\alpha_r} A_2 u_2 + z_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \qquad (17)$$

The SNR of y_1 is P_sd1. As A_2 u_2 is viewed as noise, the SNR of y_2 when solving for u_1 can be computed as:

$$P_{12} = P_{rd}/(P_{sd2} + 1) \qquad (18)$$

After u_1 is decoded, the destination generates y_2' and decodes message u_2 from:

$$A_2 u_2 + z_2' = y_2', \qquad y_2' = \frac{\sqrt{c_{rd}}\,\alpha_r}{\sqrt{c_{sd}}\,\alpha_{s2}} \left( y_2 - B u_1 \right) \qquad (19)$$

By definition (5), the SNR of y_2' is P_sd2.
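A minimal sketch of the two successive-decoding steps, Equations (17)-(19), is shown below; cs_decode is a placeholder for a CS reconstruction routine such as CS-BP, and y_2 is assumed to be already normalized as in (17). All names are illustrative.

```python
import numpy as np

def successive_decode(y1, y2, A1, A2, B, alpha_s2, alpha_r, c_sd, c_rd, cs_decode):
    # Step 1: decode u1 from the stacked system, treating the u2 part of y2 as noise
    # (the stacked matrix is [A1; B], cf. Equation (17))
    u1_hat = cs_decode(np.vstack([A1, B]), np.concatenate([y1, y2]))

    # Step 2: cancel the relay's contribution, rescale as in Equation (19), decode u2
    y2_clean = (np.sqrt(c_rd) * alpha_r) / (np.sqrt(c_sd) * alpha_s2) * (y2 - B @ u1_hat)
    u2_hat = cs_decode(A2, y2_clean)
    return u1_hat, u2_hat
```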

With successive decoding, the achievable rate of the relay channel can be expressed by:

$$R_{DF}^{succ} = \sup \min\big\{ t R(P_{sr}),\; R\big((t,\, P_{sd1}),\, (1-t,\, P_{12})\big) + (1-t) R(P_{sd2}) \big\} \qquad (20)$$

Concatenated decoding

Although successive decoding is the capacity-achieving decoding strategy in the Gaussian relay channel, it may not be optimal for compressive cooperation. This is because the achievable rate of compressive transmission R(P) has a very different form from the Shannon capacity C = (1/2) log(1+P). The intuition behind concatenated decoding is that higher efficiency may be achieved by jointly decoding u_1 and u_2 rather than treating u_2 as noise when recovering u_1.

This scheme is peculiar to compressive cooperation because the destination receives the superposition of the CS measurements of u_1 and u_2 in MAC mode. The superposed signal can be viewed as a measurement of the whole message u, which can be decoded from:

$$\begin{bmatrix} A_1 & 0 \\ B & A_2 \end{bmatrix} u + \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \qquad (21)$$

This equation is valid only when B u_1 and A_2 u_2 have matching scaling at the destination, i.e.:

$$\sqrt{c_{rd}}\,\alpha_r = \sqrt{c_{sd}}\,\alpha_{s2} \triangleq \eta \qquad (22)$$

Assuming that u_1 and u_2 are independent messages and that the CS measurements of both messages are zero-centered, we can compute the SNR of y_2 (denoted P_2' here to distinguish it from P_2 in (13)):

$$P_2' = E\big[(\eta B u_1 + \eta A_2 u_2)^2\big] = \eta^2 E\big[(B u_1)^2\big] + \eta^2 E\big[(A_2 u_2)^2\big] = P_{rd} + P_{sd2} \qquad (23)$$
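A minimal sketch of how the joint block system (21) could be assembled is given below; the CS decoder itself and the η normalization of (22) are assumed to be handled elsewhere, and all names are illustrative.

```python
import numpy as np

def concatenated_system(A1, A2, B, y1, y2):
    # Build the joint block matrix of Equation (21); y2 is assumed to be the MAC-mode
    # signal already divided by eta from Equation (22).
    top = np.hstack([A1, np.zeros((A1.shape[0], A2.shape[1]))])  # BC mode measures only u1
    bottom = np.hstack([B, A2])                                  # MAC mode: u1 and u2 superposed
    A_joint = np.vstack([top, bottom])
    y_joint = np.concatenate([y1, y2])
    return A_joint, y_joint        # pass to the CS decoder to recover u = [u1; u2]
```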

The transmission information rate of the relay channel should satisfy:

$$H(u)/(m_1 + m_2) \le R\big((t,\, P_{sd1}),\, (1-t,\, P_2')\big) \qquad (24)$$

In addition, the perfect-decoding assumption at the relay and the achievable rate region of MAC mode can be expressed as:

$$H(u_1)/m_1 \le R(P_{sr}), \qquad H(u_1)/(m_1+m_2) \le R\big((t,\, P_{sd1}),\, (1-t,\, P_{rd})\big), \qquad H(u_2)/m_2 \le R(P_{sd2}) \qquad (25)$$

With concatenated decoding, the overall achievable rate of the relay channel is:

R DF conc = sup min R ( t , P s d 1 ) , ( 1 t , P 2 ) , tR ( P sr ) + ( 1 t ) R ( P s d 2 ) , R ( t , P s d 1 ) , ( 1 t , P rd ) + ( 1 t ) R ( P s d 2 )
(26)

In all four achievable-rate expressions, the supremum is taken over all possible time proportions t and transmission powers that satisfy the energy constraint (4).

Numerical study and simulations

In the previous section, we proposed four DF schemes and formulated their achievable rates. In this section, we first evaluate the four compressive cooperation strategies through both numerical studies and Matlab simulations, and then compare compressive transmission with a conventional scheme based on source compression and binary channel coding. In both evaluations, a binary source message with p = 0.1 is considered. As the source is binary, we can evaluate the channel rate in terms of bit rate and characterize imperfect transmissions by the bit error rate (BER). For convenience, instead of the information rate we present the results using the bit rate:

$$R_b(P) = n/(m_1 + m_2) \qquad (27)$$

where n is the block length of u. We set n = 6000 unless otherwise stated. All the results shown in this section are for R_b(P); however, we continue to use the notation R(P) when a statement is valid for both rates. In fact, for 0.1-sparse data, the bit rate R_b(P) differs from the information rate R(P) in (11) only by a constant factor:

$$R(P) = H(p=0.1) \times R_b(P) \approx 0.469 \times R_b(P) \qquad (28)$$

At the end of Section 3, we introduced the notation R((γ_1, P_1), …, (γ_k, P_k)) to denote the achievable rate when CS measurements are received over multiple channels. This creates an additional dimension in characterizing channel rates; without a reasonable simplification, we would be unable to compute the optimal rates of the different DF schemes even through numerical integration. Therefore, we approximate the achievable rate of combined channels by:

$$R\big((\gamma_1, P_1), \ldots, (\gamma_k, P_k)\big) \approx \sum_i \gamma_i R(P_i) \qquad (29)$$

This approximation is reasonable because, without it, the source would have to perform per-measurement energy allocation to achieve the optimal performance.
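As an illustration of how the suprema can be evaluated numerically under (29), the sketch below performs a grid search for the code diversity rate (16). It simplifies the power allocation to E_s1 = E_s2 + E_r = E, which satisfies (4) with equality, and uses a toy placeholder curve in place of the simulated R(P); these choices are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def code_diversity_rate(E, c_sd, c_sr, c_rd, R_of_P, grid=50):
    # grid search over the BC time proportion t and the MAC-mode power split,
    # with E_s1 = E_s2 + E_r = E so that constraint (4) holds with equality
    best = 0.0
    for t in np.linspace(0.05, 0.95, grid):
        for rho in np.linspace(0.0, 1.0, grid):      # fraction of MAC-mode energy at the relay
            E_s1, E_r, E_s2 = E, rho * E, (1 - rho) * E
            P_sr, P_sd1 = c_sr * E_s1, c_sd * E_s1
            P_2 = (np.sqrt(c_rd * E_r) + np.sqrt(c_sd * E_s2)) ** 2   # Equation (13)
            rate = min(t * R_of_P(P_sr),                              # relay must decode
                       t * R_of_P(P_sd1) + (1 - t) * R_of_P(P_2))     # (16) with (29)
            best = max(best, rate)
    return best

# usage with a toy placeholder curve; the paper obtains R(P) from simulations instead
print(round(code_diversity_rate(1.0, 1.0, 4.0, 4.0, lambda P: 0.5 * np.log2(1 + P)), 3))
```

The same kind of search applies to (14), (20), and (26) by swapping the rate expression inside the min{·}.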

Evaluating compressive cooperation strategies

In the formulation of the four proposed DF schemes, the supremum is taken over all possible time proportions and transmission powers that satisfy (4). An analytical solution to this optimization problem is hard to find since R(P) is not known in closed form. Therefore, we first obtain R(P) for compressive transmission through simulations, and then compute the achievable rates of the four DF strategies through numerical integration. Baron et al. [5] reported that there is an optimal row weight L_opt ≈ 2/p beyond which any performance gain is marginal. We slightly adjust L to 15 and use eight −1's and seven +1's per row. For simplicity, we use amplitude modulation on a single carrier wave; the performance for quadrature amplitude modulation (QAM) can be easily deduced from the reported results.

Figure 3 shows the achievable rates of the four DF schemes as well as direct transmission. The four schemes are denoted by codd (code diversity), recd (receiver diversity), succ (successive decoding), and conc (concatenated decoding). It is observed that transmitting through a relay greatly increases the channel throughput when the channel SNR is low, while the benefit is not significant when the SNR is higher than 15 dB.

Figure 3. Comparing the bit rates of different DF schemes.

The receiver diversity scheme underperforms the other three schemes. We find that, although R(P) shows an "S" shape when the x-axis is plotted in dB, it is a concave function of P. Since R(0) ≥ 0 and R(·) is concave, R(·) is subadditive, i.e.

$$R(P_1) + R(P_2) \ge R(P_1 + P_2) \qquad (30)$$

Using this property, it can be shown that the rate of receiver diversity is no greater than that of code diversity: combining two noisy copies of the same measurement (SNR P_sd1 + P_2) cannot yield a higher rate than spending the same channel uses on two different measurements with SNRs P_sd1 and P_2.

The comparison between the code diversity scheme (r = 1) and the two r = 0 schemes leads to conclusions consistent with those for conventional relay channels. First, the performance difference between the r = 0 and r = 1 schemes is not significant. Second, the r = 0 schemes show an advantage when the channel SNR is high, while the r = 1 schemes perform better when the SNR is low. Our numerical results show that the achievable rate of the r = 0 schemes exceeds that of the r = 1 schemes when the SNR is higher than 13 dB. Although the two r = 0 schemes exhibit similar performance, concatenated decoding appears to be better than successive decoding when the channel SNR is higher than 13 dB.

We next carry out simulations to evaluate the gap between real implementations and the numerical computations. The simulations proceed as follows. First, the optimized parameters, including the time proportion and the energy allocation, are taken from the numerical study for all three schemes. Then, the average BER is measured through a set of test runs. If the BER is larger than 10^-5, which is considered the threshold of reliable transmission, we increase the channel SNR until the BER drops below 10^-5. This SNR-rate pair is plotted in Figure 4.

Figure 4. Simulation results of three DF schemes.

In Figure 4, the simulation results of the three DF schemes are compared with the highest numerical rate computed over r = 0 and r = 1. It can be seen that the implementation gap is within 1.4 dB for all three schemes. During the simulations, we observe that code diversity has very stable performance at both high and low SNRs, whereas the performance of the two r = 0 schemes varies slightly more. In addition, when the channel SNR is lower than 12 dB, both r = 0 schemes degrade to two-hop transmission, i.e. E_s2 = 0. Considering that the r = 0 schemes do not significantly improve the channel rate at high SNR, and that code diversity is easier to implement, it is a wise choice to stick to the code diversity scheme in practical systems.

We also evaluate the BER performance of compressive cooperation. Because the three DF schemes have very similar BER performance, we only present the results of the code diversity scheme in Figure 5. The target rates of the five curves are computed at 6, 8, 10, 12, and 14 dB, respectively. For each target rate and its computed optimal parameters, we slightly vary the channel SNR and evaluate the average BER. An interesting finding from the figure is that the BER of compressive cooperation does not increase steeply when the channel SNR drops below the value that ensures reliable transmission. This is in sharp contrast to conventional coding and modulation schemes, whose typical BER curves are shown in Figure 6. This BER property suggests that compressive transmission is more robust for highly dynamic channels where the precise channel SNR is hard to obtain.

Figure 5. BER performance of compressive cooperation.

Figure 6. BER of 8-PAM transmission at typical channel conditions.

In fact, when the wireless channel state information is unknown at the source node, CS measurements can be generated and transmitted indefinitely until a predefined recovery quality is achieved at the receiver: increasing the number of CS measurements adds redundancy that helps overcome the channel noise, as shown in [5]. This rateless property makes the compressive cooperation system much easier to adapt to channel variations than traditional LDPC codes.

At the end of this part, we analyze and compare the computational complexity of the four DF strategies.

Comparing compressive transmission with a separate source-channel coding scheme

Compressive transmission utilizes CS as a joint source-channel code; it is therefore natural to compare its performance with a separate source-channel coding scheme. In the reference scheme, sparse sources are first compressed with 7ZIP; through experiments, we found that the compression ratio is around 1.6 for 0.1-sparse data with block length 6000. The compressed bit sequences are then protected by regular LDPC codes over the relay channel. We tested both 4-PAM and 8-PAM (pulse-amplitude modulation), and the results are shown in Figure 7.

Figure 7. Comparison of compressive cooperation with conventional schemes.

We find that compressive transmission incurs no rate loss relative to the reference schemes when the channel SNR is below 15 dB. Above 15 dB, the rate achieved by compressive transmission starts to saturate. Additional experiments show that the saturation SNR is determined by the sparsity of the Rademacher measurement matrix: if a denser measurement matrix is used, compressive transmission covers a larger dynamic range. Even with the current setting, compressive transmission shows better channel adaptation capability than the reference schemes.

Conclusion

This article proposes compressive transmission, which utilizes CS random projections as the joint source-channel code. We describe and analyze four DF cooperative strategies for compressive transmission in a three-terminal half-duplex Gaussian relay network, and carry out both numerical studies and simulation experiments to evaluate their achievable rates. We also compare compressive cooperation with a conventional separate source-channel coding scheme. The results show that the proposed compressive cooperation has great potential in the wireless relay channel because it not only has high transmission efficiency but also adapts well to channel variations.

References

1. Candès E, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489–509.

2. Donoho D: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289–1306.

3. Zhang F, Pfister H: Compressed sensing and linear codes over real numbers. Proc. of the Information Theory and Applications Workshop (IEEE, San Diego, 2008), pp. 558–561.

4. Dimakis A, Vontobel P: LP decoding meets LP decoding: a connection between channel coding and compressed sensing. Proc. of the 47th Annual Allerton Conference on Communication, Control, and Computing (IEEE, Monticello, 2009), pp. 8–15.

5. Baron D, Sarvotham S, Baraniuk RG: Bayesian compressive sensing via belief propagation. IEEE Trans. Signal Process. 2010, 58(1):269–280.

6. Luby M, Mitzenmacher M: Verification-based decoding for packet-based low-density parity-check codes. IEEE Trans. Inf. Theory 2005, 51:120–127. doi:10.1109/TIT.2004.839499

7. Duarte M, Davenport M, Takhar D, Laska J, Sun T, Kelly K, Baraniuk R: Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25:83–91.

8. van der Meulen E: Three-terminal communication channels. Adv. Appl. Probab. 1971, 3:120–154. doi:10.2307/1426331

9. Chakrabarti A, De Baynast A, Sabharwal A, Aazhang B: Low density parity check codes for the relay channel. IEEE J. Sel. Areas Commun. 2007, 25:280–291.

10. Hu J, Duman T: Low density parity check codes over wireless relay channels. IEEE Trans. Wirel. Commun. 2007, 6(9):3384–3394.

11. Razaghi P, Yu W: Bilayer low-density parity-check codes for decode-and-forward in relay channels. IEEE Trans. Inf. Theory 2007, 53(10):3723–3739.

12. Valenti M, Zhao B: Distributed turbo codes: towards the capacity of the relay channel. Proc. of the 2003 IEEE 58th Vehicular Technology Conference, vol. 1 (2003), pp. 322–326.

13. Zhang Z, Duman TM: Capacity-approaching turbo coding and iterative decoding for relay channels. IEEE Trans. Commun. 2005, 53(11):1895–1905. doi:10.1109/TCOMM.2005.858654

14. Zhang Z, Duman TM: Capacity approaching turbo coding for half-duplex relaying. IEEE Trans. Commun. 2007, 55(9):1822–1822.

15. Khojastepour M, Sabharwal A, Aazhang B: On capacity of Gaussian 'cheap' relay channel. Proc. of IEEE GLOBECOM, vol. 3 (2003), pp. 1776–1780.

16. Proakis J, Salehi M: Digital Communications (McGraw-Hill, New York, 2001).


Acknowledgements

The authors would like to thank the anonymous reviewers, whose valuable comments helped to greatly improve the quality of the article.

Author information


Corresponding author

Correspondence to Xiao Lin Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
