
Lattice-coded cooperation protocol for the half-duplex Gaussian two-way relay channel

Abstract

This paper studies the Gaussian two-way relay channel (GTWRC), where two nodes exchange messages with the help of a half-duplex relay. We investigate a cooperative transmission protocol consisting of four phases: a multiple access (MA) phase, a broadcast (BC) phase, and two cooperative phases. For this setup, we propose a new transmission scheme based on superposition coding for nested lattice codes, random coding, and jointly typical decoding. This scheme divides the message of each node into two parts, referred to as the satellite codeword and the cloud center. Depending on the phase type, the encoder sends a linear combination of satellite codewords or of cloud centers. For comparison, a rate region outer bound based on the cut-set bound is provided. We show that the proposed scheme can achieve the capacity region in the high signal-to-noise ratio (SNR) regime. In addition, the achievable rate region is within 0.5 bit of the outer bound, regardless of the channel parameters. Using numerical examples, we show that our proposed scheme achieves a larger rate region than the best previously known 4-phase transmission strategy, the Hybrid Broadcast (HBC) protocol of Kim et al. Our proposed scheme not only improves upon previous 2-, 3-, and 4-phase protocols but also can, in some cases, outperform the 6-phase protocol introduced by Gong, Yue, and Wang (which is more complex than our 4-phase protocol).

1 Introduction

In recent years, cooperative communication and relaying have attracted great interest in wireless networks, and several scenarios have been studied from an information-theoretic perspective. The first model for this problem, consisting of three nodes, was introduced by van der Meulen [1]. Cover and El Gamal presented two coding strategies for this model [2]. In addition to one-way relaying, two-way communication between two nodes, or bidirectional relaying, is of great interest. In the two-way relay channel (TWRC), a relay facilitates the exchange of messages between two nodes. In full-duplex mode, each node is able to transmit and receive simultaneously, whereas in half-duplex communication, each node can either receive or transmit at each time slot.

Due to practical constraints on wireless nodes, in this paper we study the Gaussian two-way relay channel (GTWRC) in half-duplex mode. In the literature, there exist many transmission protocols for the half-duplex GTWRC, see, e.g., [3–7]. Each transmission protocol consists of a sequence of phases (or states), where each phase is specified jointly by the modes (transmit or receive) of the half-duplex nodes. For instance, in the 3-node half-duplex TWRC, there exist 8 possible states, out of which 6 are useful phases, as shown in Fig. 1 (the 2 phases in which all nodes are transmitting or all are receiving are not useful [8]).

Fig. 1

Possible phases in the half-duplex two-way relay channel

The basic protocol for the TWRC, which consists of two phases (phases 1 and 2 of Fig. 1), is the Multiple Access and Broadcast (MABC) protocol. In the first phase, referred to as the MA phase, the two nodes simultaneously transmit to the relay. In the second phase, i.e., the BC phase, the relay broadcasts a signal to both nodes. Several practical coding schemes investigate this protocol, see, e.g., [9–13]. In the BC phase, the relay combines the data from both nodes and broadcasts the combined data back to them. For this phase, several strategies exist for the processing at the relay node, e.g., amplify-and-forward (AF) [5], decode-and-forward (DF) [5, 14], and compress-and-forward (CF) [15]. The AF protocol is a simple scheme that amplifies the signal transmitted by both nodes and retransmits it to them; unlike the DF protocol, no decoding is performed at the relay. In two-way AF relaying, the signals at the relay are combined on the symbol level. Due to the amplification of noise, its performance degrades at low signal-to-noise ratios (SNRs). The two-way DF relaying strategy was proposed in [5], where the relay decodes the received data bits from both nodes. Since the decoded data at the relay can be combined on the symbol level or on the bit level, different data-combining schemes have been proposed for two-way DF relaying: superposition coding, network coding, and lattice coding [16]. In the superposition coding scheme, applied in [5], the data from the two nodes are combined on the symbol level, where the relay sends the linear sum of the decoded symbols from both nodes. Shortly after the introduction of the two-way relay channel, its connection to network coding [17] was observed and investigated. Network coding schemes combine the data from the nodes on the bit level using the XOR operation, see, e.g., [10, 18–22].
Lattice-based coding uses modulo addition in a multi-dimensional space and utilizes nonlinear operations for data combining. Applying lattice codes to two-way relaying systems was considered in, e.g., [7, 12, 13, 23–25]. In general, as in CF or partial DF relaying strategies, the relay node does not need to decode the source messages; it only needs to pass sufficient information to the destination nodes. A strategy based on symbol-wise network coding, in which two modulated symbols with different modulation types are directly mapped to a transmitted signal at the relay, is investigated in [9]. A combination of bit-wise network coding and channel decoding is considered in [10]. A coding scheme based on distributed linear-dispersion space-time codes is considered in [11]. Nested lattice codes for the GTWRC in the symmetric and asymmetric cases are considered in [12, 13], respectively. For the asymmetric case, based on a lattice partition chain, [12] shows that the achievable rate region is within 0.5 bit of the capacity region for each user. Note that in [12] all nodes operate in full-duplex mode and there is no direct channel between the source nodes. Using a compress-and-forward strategy based on nested lattice codes, new achievable rate regions for the GTWRC are provided in [7], where it is assumed that all nodes operate in half-duplex mode without any direct link between the communication nodes. In the scheme proposed in [7], layered coding is applied: a common layer is decoded by both receivers, and a refinement layer is recovered only by the receiver with the better channel condition. In [24], the GTWRC operating in full-duplex mode is considered. Based on decoding a non-integer linear combination of lattice codewords (instead of an integer linear combination), it is shown that the capacity region of the GTWRC under the MABC protocol is partially achieved [24].
However, it has been shown that the MABC protocol may not perform well when the channel gains are asymmetric [3]. Thus, to improve performance and achieve a larger rate region, protocols with more phases have been proposed in the literature, e.g., [3, 6, 26–28].

The capacity region of the relay channel with state information at the sources or at the relay is investigated in [29–36]. The relay channel and the cooperative relay broadcast channel controlled by random parameters are studied in [29]. It is shown that when the state is non-causally known to the transmitter and intermediate nodes, decode-and-forward can achieve the capacity region in some cases. The relay channel with state known non-causally at the relay is investigated in [30, 31]. Using Gelfand-Pinsker coding, rate splitting, and decode-and-forward, a lower bound on the channel capacity is obtained, and it is shown that for degraded Gaussian channels the lower bound meets the upper bound, so the capacity is achieved. The relay channel with state available only at the source is studied in [32–34]. By deriving lower and upper bounds, it is shown that the capacity is achieved in a number of special cases. A partially cooperative relay broadcast channel (PC-RBC) with state is studied in [35], where two situations are analyzed: the state is available non-causally either at both the source and the relay or only at the source. The relay interference channel with a cognitive source, where only the source knows (non-causally) the interference from the interferer, is considered in [36], and achievable rate regions are obtained.

In [26], the GTWRC with four phases (phases 1, 2, 5, and 6 of Fig. 1) is considered. It is shown that for both full- and half-duplex modes, partial decode-and-forward can achieve a rate region strictly larger than the time-shared region of pure decode-and-forward and direct transmission. Two decode-and-forward protocols with three and four phases, which perform better than MABC under some constraints in the asymmetric setting, are considered in [3]. These protocols are referred to as Time Division Broadcast (TDBC) (phases 2, 5, and 6 of Fig. 1) and Hybrid Broadcast (HBC) (phases 1, 2, 5, and 6 of Fig. 1). As the channel coefficients approach the symmetric case, the TDBC protocol performs poorly compared with MABC. However, it is shown that in some cases the achievable sum rate of the HBC protocol contains points that lie outside the outer bounds of the MABC and TDBC protocols.

To achieve a larger rate region than the HBC protocol of [3, 37], a protocol that uses all 6 possible phases (shown in Fig. 1) is proposed in [27, 28]. Although increasing the number of transmission phases improves the achievable rate region with respect to the HBC protocol, it is more complex than a 4-phase protocol. In [8], by deriving achievable rate regions and outer bounds for a 6-phase protocol, it is shown that it can achieve a larger rate region than other protocols in some cases. In [6], two protocols are investigated: the MABC and TDBC protocols. Using decode-and-forward, compress-and-forward, amplify-and-forward, and a new mixed-forward scheme, achievable rate regions for these protocols are obtained. A 3-phase protocol, called Cooperative Multiple Access Broadcast (CoMABC) and consisting of phases 1, 2, and 4 of Fig. 1, is proposed in [38]. Using doubly nested lattice codes, an achievable rate region for this scheme is obtained, and it is shown numerically that CoMABC outperforms the MABC and TDBC protocols in terms of sum rate under asymmetric channel conditions. Two-way relaying in a Gaussian diamond channel is considered in [39], where it is shown that lattice codes can, under certain conditions, achieve rate regions close to the outer bound.

In this paper, in contrast to [27], instead of increasing the number of phases to achieve a larger rate region than the HBC protocol, we propose a new 4-phase cooperative MABC protocol for the half-duplex GTWRC, comprising phases 1, 2, 3, and 4 of Fig. 1. First, consider the MABC protocol, which consists of the MA and BC phases. It is well known that lattice codes can achieve the capacity region of the MABC protocol within 0.5 bit [6, 12]. Thus, it may seem unnecessary to consider the GTWRC with more transmission phases (i.e., three, four, or six phases). However, suppose that the link from the relay node to node 2 (respectively, node 1) is very weak (noisy), so that node 2 (node 1) can correctly decode the message of node 1 (node 2) only at a very low rate. Then additional phases are required to increase the rate. Now, consider the CoMABC protocol [38], which consists of three phases: phases 1 and 2 are the same as in the MABC protocol, and in the third phase, node 1 and the relay cooperate to send information to node 2. To see why the CoMABC protocol is useful, suppose the MABC protocol is used. At the end of the second phase, the relay has sent information bits to both nodes. Node 1 can recover its intended data, while node 2, having a weak link, can decode the message of node 1 only at a very low rate. Thus, to increase the data rate, an extra phase is used to send data to node 2. Now, to motivate our proposed scheme, consider the CoMABC protocol again. Suppose that at the end of phase 2, node 1 can recover the message of node 2 only at a very low rate. Since phase 3 sends no data to node 1, additional phases are needed to send extra data to node 1 so that it can decode the message of node 2 at a higher rate. In our proposed scheme, in phases 3 and 4, we send extra data to nodes 1 and 2 so that each can decode the other node's message at a higher rate.

Our proposed protocol is denoted by 2-CoMABC. Phases 1 and 2 are similar to those of the MABC protocol. In these phases, the two nodes cannot completely recover each other's messages. Thus, we introduce two additional phases. In phases 3 and 4, each node, with the help of the relay and the other node, tries to recover the message of the other node. These two phases are referred to as the cooperative phases. For the first time, we propose a scheme based on a "superposition coding for nested lattice codes" for the GTWRC. In superposition coding, we divide the message of each node into two parts using nested lattice codes: a satellite codeword and a cloud center. Thus, to recover a message, we must recover both the satellite codeword and the cloud center. In phase 1, based on the idea of computation coding [40], we recover a linear combination of the messages. Thanks to the structured codes, we can compute the satellite codeword and cloud center of this linear combination. Then, in phase 2, we send the satellite codewords to the nodes using "random coding". In phases 3 and 4, we send the cloud centers to both nodes. At the end of phase 4, both messages can be recovered. Although we apply superposition coding with nested lattice codes to the 2-CoMABC protocol, it can also be used to achieve rate regions equal to or larger than those obtained in other works. For example, applying it to the CoMABC protocol proposed in [38] shows that our proposed scheme includes the CoMABC scheme as a special case.

Finally, by examining many numerical examples (some of which are presented here) and comparing the achievable rate region of the proposed scheme with that of the HBC protocol, we observe that our proposed scheme outperforms the HBC protocol. In addition, this scheme not only improves upon previous 2-, 3-, and 4-phase protocols but also can, in some cases, outperform the 6-phase protocol proposed in [27], which is more complex than a 4-phase protocol.

In summary, our main contributions are as follows:

  • Proposing a new transmission scheme based on “superposition coding for nested lattice codes” and “random coding”.

  • Analyzing the proposed protocol in a new cooperative transmission scheme and showing that it achieves the capacity region in the high SNR regime and is within 0.5 bit of the outer bound in general.

  • Improving the rate region of the 4-phase HBC protocol without increasing complexity (in contrast to the scheme proposed in [27]).

The remainder of the paper is organized as follows. We present the channel model and the preliminaries on lattice codes in Section 2. In Section 3, first, we present the superposition coding for nested lattice codes and then we introduce and analyze our proposed scheme. In Section 4, an achievable rate region as well as a rate region outer bound based on the cut-set bound are provided. Using numerical examples, achievable rate regions of different cooperative protocols are compared in Section 5. Section 6 concludes the paper.

Notations: Let \(C(x)=\frac{1}{2}\log\left(1+x\right)\). Logarithms are of base two. Random variables (RVs) and their realizations are denoted by capital and small letters, respectively. \(\boldsymbol{x}\) stands for a vector of length \(n\), \((x_{1},x_{2},\ldots,x_{n})\). \([x]^{+}=\max\{x,0\}\) for \(x\in\mathbb{R}\).
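As a quick numerical aid, the notation above can be captured in a few lines of Python (the function names `C` and `pos_part` are our own, not from the paper):

```python
import math

def C(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * math.log2(1 + x)

def pos_part(x):
    """[x]^+ = max{x, 0}."""
    return max(x, 0.0)

# C(1) = 0.5 * log2(2) = 0.5; the positive part clips negatives to zero.
print(C(1.0))        # 0.5
print(pos_part(-2))  # 0.0
```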

2 Preliminaries: channel model and lattices

2.1 Channel model

The channel model for the GTWRC via the 2-CoMABC protocol is shown in Fig. 2. This paper studies a GTWRC with four phases operating in half-duplex mode, i.e., each node can only either listen or transmit at any given time. In this model, nodes 1 and 2 intend to exchange independent messages \(W_{1}\in\left\{ 1,2,\ldots,2^{nR_{1}}\right\}\) and \(W_{2}\in\left\{ 1,2,\ldots,2^{nR_{2}}\right\}\) with the assistance of a relay (represented by node r). We denote the relative time duration of the \(m\)th phase by \(t_{m}\), where \(\sum_{m}t_{m}=1\). For a given block size \(n\), we denote the normalized duration of the \(m\)th phase by \(t_{m,n}\), and in achievability schemes, we must have \(\underset{n\rightarrow\infty}{\lim}t_{m,n}=t_{m}\) [6]. The random sequences \(\boldsymbol{X}_{i}^{k}\in\mathcal{X}_{i}\) and \(\boldsymbol{Y}_{i}^{k}\in\mathcal{Y}_{i}\), respectively, denote the channel input and output at the \(k\)th channel use at node \(i\), for \(i\in\{1,2,r\}\). Note that the distributions of \(\boldsymbol{X}_{i}^{k}\) and \(\boldsymbol{Y}_{i}^{k}\) depend on the value of \(k\): for \(k\leq t_{1,n}\cdot n\), we are in phase 1; for \(t_{1,n}\cdot n<k\leq(t_{1,n}+t_{2,n})\cdot n\), we are in phase 2; for \((t_{1,n}+t_{2,n})\cdot n<k\leq(t_{1,n}+t_{2,n}+t_{3,n})\cdot n\), we are in phase 3; and for \((t_{1,n}+t_{2,n}+t_{3,n})\cdot n<k\leq n\), we are in phase 4 [6]. With a slight abuse of notation, \(\boldsymbol{X}_{i}^{(m)}\) denotes the random variable with alphabet \(\mathcal{X}_{i}\) during phase \(m\).

Fig. 2

The channel model for the Gaussian two-way relay channel via 2-CoMABC protocol in the half-duplex mode

In the following, similar to [6], we define the encoders, decoders, and associated error probabilities: let \(W_{S,T}:=\left\{ W_{i,j}\,|\,i\in S,\,j\in T,\,S,T\subset\mathcal{M}\right\}\) denote the set of messages from nodes in set \(S\) to nodes in set \(T\). Note that if node \(i\) does not have a message for node \(j\), then \(W_{i,j}=\emptyset\). At node \(i\), the encoder at channel use \(k\) is a function \({X_{i}^{k}}\left(W_{\left\{ i\right\},\mathcal{M}},{Y_{i}^{1}},{Y_{i}^{2}},\ldots,Y_{i}^{k-1}\right)\in\mathcal{X}_{i}\); the decoder at node \(i\) after all \(n\) channel uses produces an estimate of the message \(W_{j,i}\) using the function \(\widehat{W}_{j,i}\left({Y_{i}^{1}},{Y_{i}^{2}},\ldots,{Y_{i}^{n}},W_{\left\{ i\right\},\mathcal{M}}\right)\). The error event for decoding message \(W_{i,j}\) at the end of the block of length \(n\) is defined by \(E_{i,j}:=\left\{ W_{i,j}\neq\widehat{W}_{i,j}\left(\cdot\right)\right\}\), and the error event at node \(j\) in which node \(j\) wants to find \(w_{i}\) at the end of phase \(m\) is denoted by \(E_{i,j}^{(m)}\). For a protocol with phase durations \(\{t_{m}\}\), a set of rates \(R_{i,j}\) is said to be achievable if there exist encoders/decoders of block length \(n=1,2,\ldots\) with both \(P[E_{i,j}]\rightarrow0\) and \(t_{m,n}\rightarrow t_{m}\) as \(n\rightarrow\infty\) for all \(i\), \(j\), \(m\). An achievable rate region is the closure of a set of achievable rate tuples for fixed \(\{t_{m}\}\). The set of all achievable rate tuples is the capacity region of the TWRC.

In this paper, we assume all links in the bidirectional relay channel are subject to independent, identically distributed (i.i.d.) white Gaussian noise. In the following, we describe the Gaussian channel model for the 2-CoMABC protocol. The communication process takes place in four phases, i.e., the MA phase, the BC phase, and two cooperative phases, as follows:

$${} {\fontsize{8.9pt}{9.6pt}\selectfont{\begin{aligned} \textrm{MA phase }(\textrm{phase }t_{1}): & \boldsymbol{Y}_{r}^{(1)}=g_{1r}\boldsymbol{X}_{1}^{(1)}+g_{2r}\boldsymbol{X}_{2}^{(1)}+\boldsymbol{Z}_{r}^{(1)},\\ \textrm{BC phase }(\textrm{phase }t_{2}): & \boldsymbol{Y}_{i}^{(2)}=g_{ri}\boldsymbol{X}_{r}^{(2)}+\boldsymbol{Z}_{i}^{(2)},\:i\in\left\{ 1,2\right\} \\ \textrm{Cooperative phases }(\textrm{phase }t_{3}): & \boldsymbol{Y}_{1}^{(3)}=g_{r1}\boldsymbol{X}_{r}^{(3)}+g_{21}\boldsymbol{X}_{2}^{(3)}+\boldsymbol{Z}_{1}^{(3)},\\ \textrm{Cooperative phases }(\textrm{phase }t_{4}): & \boldsymbol{Y}_{2}^{(4)}=g_{r2}\boldsymbol{X}_{r}^{(4)}+g_{12}\boldsymbol{X}_{1}^{(4)}+\boldsymbol{Z}_{2}^{(4)}, \end{aligned}}} $$

where all Gaussian noise sequences are zero mean with unit variance and the channel inputs are subject to average power constraints as the following:

$$\frac{1}{n}\mathbb{E}\left\Vert \boldsymbol{X}_{i}^{(m)}\right\Vert^{2}\leq P_{i},\quad\text{for }i=1,2,r,\;m=1,2,3,4. $$

In addition, \(g_{ij}\) is the channel gain between transmitter \(i\) and receiver \(j\). We assume that the channel is reciprocal, i.e., \(g_{ij}=g_{ji}\), and that each node is fully aware of \(g_{1r}\), \(g_{2r}\), and \(g_{12}\) (i.e., full CSI). Considering channel reciprocity, the channel coefficient between nodes 1 and r is denoted collectively by \(g_{1}\), i.e., \(g_{1r}=g_{r1}=g_{1}\). Similarly, we have \(g_{2r}=g_{r2}=g_{2}\) and \(g_{12}=g_{21}=g_{3}\).

2.2 Lattice definitions

Here, we provide some necessary definitions on lattices and nested lattice codes. Interested readers can refer to [4042] and the references therein for more details.

Definition 1.

(Lattice): A lattice \(\Lambda^{(n)}\) is a discrete additive subgroup of \(\mathbb{R}^{n}\). A lattice \(\Lambda^{(n)}\) can always be written in terms of a generator matrix \(\mathbf{G}\in\mathbb{R}^{n\times n}\) as

$$\Lambda^{(n)}=\{\boldsymbol{x}=\boldsymbol{z}\mathbf{G}:\boldsymbol{z}\in\mathbb{Z}^{n}\}, $$

where \(\mathbb {Z}\) represents the set of integers.

Definition 2.

(Quantizer): The nearest neighbor quantizer \(\mathcal {Q}_{\Lambda }\) maps any point \(\boldsymbol {x}\in \mathbb {R}^{n}\) to the nearest lattice point:

$$\mathcal{Q}_{\Lambda}(\boldsymbol{x})=\arg\underset{\boldsymbol{l}\in\Lambda}{\min}\left\Vert \boldsymbol{x}-\boldsymbol{l}\right\Vert. $$

Definition 3.

(Voronoi region): The fundamental Voronoi region of lattice \(\Lambda^{(n)}\) is the set of points in \(\mathbb{R}^{n}\) closest to the zero codeword, i.e.,

$$\mathcal{V}_{0}(\Lambda^{(n)})=\{\boldsymbol{x}\in\mathbb{\mathbb{R}}^{n}:\mathcal{Q}(\boldsymbol{x})=0\}. $$

Definition 4.

(Moments): The second moment of lattice \(\Lambda^{(n)}\), denoted \(\sigma^{2}(\Lambda^{(n)})\), is defined as

$$ \sigma^{2}(\Lambda^{(n)})=\frac{1}{n}\frac{\int_{\mathcal{V}(\Lambda)}\left\Vert \boldsymbol{x}\right\Vert^{2}d\boldsymbol{x}}{\int_{\mathcal{V}(\Lambda)}d\boldsymbol{x}}, $$
(1)

and the normalized second moment of lattice Λ can be expressed as

$$G(\Lambda^{(n)})=\frac{\sigma^{2}(\Lambda^{(n)})}{\left[\int_{\mathcal{V}(\Lambda)}d\boldsymbol{x}\right]^{\frac{2}{n}}}=\frac{\sigma^{2}(\Lambda)}{V^{\frac{2}{n}}}, $$

where \(V=\int _{\mathcal {V}(\Lambda)}d\boldsymbol {x}\) is the Voronoi region volume.

Definition 5.

(Modulus): The modulo-\(\Lambda\) operation with respect to lattice \(\Lambda^{(n)}\) returns the quantization error as

$$\boldsymbol{x}\text{mod }\Lambda^{(n)}=\boldsymbol{x}-\mathcal{Q}(\boldsymbol{x}), $$

which maps \(\boldsymbol{x}\) into a point in the fundamental Voronoi region; the result always lies in \(\mathcal{V}\).

The modulo lattice operation satisfies the following distributive property [43]

$$\left[\boldsymbol{x}\text{mod }\Lambda^{(n)}+\boldsymbol{y}\right]\text{mod }\Lambda^{(n)}=\left[\boldsymbol{x}+\boldsymbol{y}\right]\text{mod }\Lambda^{(n)}. $$
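For intuition, the quantizer, modulo operation, and the distributive property can be checked numerically on the scaled one-dimensional integer lattice \(\Lambda=q\mathbb{Z}\), for which both operations have closed forms. This is an illustrative sketch of ours; the lattice choice is not from the paper:

```python
import numpy as np

def quantize(x, q):
    """Nearest-neighbor quantizer onto the lattice q*Z^n."""
    return q * np.round(x / q)

def mod_lattice(x, q):
    """Modulo-lattice operation: the quantization error, lying in [-q/2, q/2]."""
    return x - quantize(x, q)

q = 2.0
x = np.array([3.3, -1.2, 0.4])
y = np.array([1.4, 0.9, -2.1])

# Distributive property: [x mod Lambda + y] mod Lambda == [x + y] mod Lambda
lhs = mod_lattice(mod_lattice(x, q) + y, q)
rhs = mod_lattice(x + y, q)
print(np.allclose(lhs, rhs))  # True
```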

Definition 6.

(Quantization goodness or Rogers-good): A sequence of lattices \(\Lambda ^{(n)}\subseteq \mathbb {R}^{n}\) is good for mean-squared error (MSE) quantization if

$$\underset{n\rightarrow\infty}{\lim}G\left(\Lambda^{(n)}\right)=\frac{1}{2\pi e}. $$

The sequence is indexed by the lattice dimension n. The existence of such lattices is shown in [44, 45].
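As a concrete check, the normalized second moment of the one-dimensional lattice \(q\mathbb{Z}\) is \(G=(q^{2}/12)/q^{2}=1/12\approx0.0833\), independent of \(q\), which is strictly larger than \(1/(2\pi e)\approx0.0585\); hence the integer lattice is not Rogers-good, and good lattices must approach that limit as the dimension grows. A numerical sketch (our own illustration):

```python
import math

# For Lambda = q*Z: sigma^2 = q^2/12 (variance of Unif[-q/2, q/2)),
# Voronoi volume V = q, so G = sigma^2 / V^2 = 1/12 for every q.
def normalized_second_moment_1d(q):
    sigma2 = q**2 / 12.0
    volume = q
    return sigma2 / volume**2  # n = 1, so V^(2/n) = V^2

print(normalized_second_moment_1d(3.0))  # 1/12 = 0.0833...
print(1.0 / (2.0 * math.pi * math.e))    # 0.0585..., the Rogers limit
```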

Definition 7.

(AWGN channel coding goodness or Poltyrev-good): Let Z be a length-n i.i.d. Gaussian vector, \(\boldsymbol {Z}\thicksim \mathcal {N}\left (\boldsymbol {0},{\sigma _{Z}^{2}}\boldsymbol {I}_{n}\right)\). The volume-to-noise ratio of a lattice is given by

$$\mu\left(\Lambda,\epsilon\right)=\frac{\left(\text{Vol}(\mathcal{V})\right)^{2/n}}{2\pi e{\sigma_{Z}^{2}}}, $$

where \({\sigma_{Z}^{2}}\) is chosen such that \(\text{Pr}\left\{ \boldsymbol{Z}\notin\mathcal{V}\right\} =\epsilon\) and \(\boldsymbol{I}_{n}\) is the \(n\times n\) identity matrix. A sequence of lattices \(\Lambda^{(n)}\) is Poltyrev-good if

$$\underset{n\rightarrow\infty}{\lim}\mu\left(\Lambda^{(n)},\epsilon\right)=1,\,\,\,\,\,\forall\epsilon\in\left(0,1\right) $$

and, for fixed volume-to-noise ratio greater than 1, \(\text {Pr}\left \{ \boldsymbol {Z}\notin \mathcal {V}^{n}\right \} \) decays exponentially in n.

Definition 8.

(Nested lattices): A lattice \(\Lambda^{(n)}\) is said to be nested in lattice \(\Lambda_{c}^{(n)}\) if \(\Lambda^{(n)}\subseteq\Lambda_{c}^{(n)}\). \(\Lambda^{(n)}\) is referred to as the coarse lattice and \(\Lambda_{c}^{(n)}\) as the fine lattice. The set of all points of a fine lattice \(\Lambda_{c}^{(n)}\) that lie within the fundamental Voronoi region \(\mathcal{V}\) of a coarse lattice \(\Lambda^{(n)}\) forms a nested lattice code. The rate of a nested lattice code is defined as

$$R=\frac{1}{n}\log\left|\mathcal{C}^{(n)}\right|=\frac{1}{n}\log\frac{\text{Vol}\left(\mathcal{V}\right)}{\text{Vol}\left(\mathcal{V}_{c}\right)}. $$
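For example, with fine lattice \(\mathbb{Z}^{n}\) nested in coarse lattice \(q\mathbb{Z}^{n}\) (integer \(q\)), the Voronoi cell of the coarse lattice contains exactly \(q^{n}\) fine-lattice points, so the code rate is \(R=\frac{1}{n}\log q^{n}=\log q\) bits. A small sketch of ours counting the points directly (not the paper's construction):

```python
import math
import itertools

def nested_code_rate(q, n):
    """Count points of Z^n inside the Voronoi cell [-q/2, q/2)^n of q*Z^n
    and return the nested-lattice code rate (1/n) * log2(|C|)."""
    ticks = [k for k in range(-q, q + 1) if -q / 2 <= k < q / 2]
    count = sum(1 for _ in itertools.product(ticks, repeat=n))
    return math.log2(count) / n

print(nested_code_rate(4, 2))  # 2.0 = log2(4)
```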

Erez et al. show that there exists a sequence of lattices that is simultaneously good for packing, covering, source coding (Rogers-good), and channel coding (Poltyrev-good). In the following, we present a key property of dithered lattice codes.

Lemma 1.

(The Crypto Lemma [41]): Let \(\boldsymbol{V}\) be a random vector with an arbitrary distribution over \(\mathbb{R}^{n}\). If \(\boldsymbol{D}\) is independent of \(\boldsymbol{V}\) and uniformly distributed over \(\mathcal{V}\), then \(\left(\boldsymbol{V}+\boldsymbol{D}\right)\textrm{mod }\Lambda\) is also independent of \(\boldsymbol{V}\) and uniformly distributed over \(\mathcal{V}\).

Proof.

See lemma 1 in [41].

3 Lattice-coded cooperation protocol

In this section, based on nested lattice codes, we derive an achievable rate region for the GTWRC, i.e., an inner bound on its capacity region. First, we present the superposition scheme for nested lattice codes, which is key to our code construction.

3.1 Superposition coding for nested lattice codes

Consider the following nested lattices:

$$ \Lambda_{s1}^{(n)}\subseteq\Lambda_{s2}^{(n)}\subseteq\Lambda_{m}^{(n)}\subseteq\Lambda_{c}^{(n)}. $$
(2)

The coding lattice (i.e., fine lattice) \(\Lambda_{c}^{(n)}\) provides the codewords, while the shaping sublattices (i.e., coarse lattices) \(\Lambda_{s1}^{(n)}\) and \(\Lambda_{s2}^{(n)}\) satisfy the power constraint. The set of points of the fine lattice \(\Lambda_{c}^{(n)}\) that lie in the fundamental Voronoi region of the shaping lattice \(\Lambda_{si}^{(n)}\) forms the codebook for node \(i\), i.e.,

$$\mathcal{C}_{i}^{(n)}=\left\{ \Lambda_{c}^{(n)}\cap\mathcal{V}_{si}^{(n)}\right\}, $$

and its rate is given by

$$ R_{i}=\frac{1}{n}\log\left|\mathcal{C}_{i}^{(n)}\right|=\frac{1}{n}\log\left(\frac{\text{Vol}\left(\mathcal{V}_{si}^{(n)}\right)}{\text{Vol}\left(\mathcal{V}_{c}^{(n)}\right)}\right). $$
(3)

The meso-lattice [46] \(\Lambda_{m}^{(n)}\) partitions the set of codewords of node \(i\) into clouds, decomposing each codeword into two parts. To clarify this discussion, we define two additional codebooks as follows:

$$\begin{array}{@{}rcl@{}} \mathcal{C}_{a}^{(n)} & = & \left\{ \Lambda_{c}^{(n)}\cap\mathcal{V}_{m}^{(n)}\right\},\\ \mathcal{C}_{b,i}^{(n)} & = & \left\{ \Lambda_{m}^{(n)}\cap\mathcal{V}_{si}^{(n)}\right\}, \end{array} $$

where the associated coding rates are

$$\begin{array}{@{}rcl@{}} R_{a} & = & \frac{1}{n}\log\left(\frac{\text{Vol}\left(\mathcal{V}_{m}^{(n)}\right)}{\text{Vol}\left(\mathcal{V}_{c}^{(n)}\right)}\right),\\ R_{b,i}=R_{i}-R_{a} & = & \frac{1}{n}\log\left(\frac{\text{Vol}\left(\mathcal{V}_{si}^{(n)}\right)}{\text{Vol}\left(\mathcal{V}_{m}^{(n)}\right)}\right). \end{array} $$

Now, we can decompose each lattice codeword \(\boldsymbol{V}_{i}\in\mathcal{C}_{i}^{(n)}\) by \(\Lambda_{m}^{(n)}\) into two points: \(\boldsymbol{V}_{a,i}\) (an individual codeword within a cloud, referred to as the satellite codeword) and \(\boldsymbol{V}_{b,i}\) (referred to as the cloud center):

$$ \boldsymbol{V}_{i}=\left[\boldsymbol{V}_{a,i}+\boldsymbol{V}_{b,i}\right]\textrm{mod }\Lambda_{si}^{(n)}; $$
(4)

where

$$\begin{array}{@{}rcl@{}} \boldsymbol{V}_{a,i} & = & \boldsymbol{V}_{i}\textrm{mod }\Lambda_{m}^{(n)}\in\mathcal{C}_{a}^{(n)},\\ \boldsymbol{V}_{b,i} & = & \left[\boldsymbol{V}_{i}-\boldsymbol{V}_{a,i}\right]\textrm{mod }\Lambda_{si}^{(n)}\in\mathcal{C}_{b,i}^{(n)}. \end{array} $$

The meso-lattice point \(\boldsymbol{V}_{b,i}\) determines the cloud in which \(\boldsymbol{V}_{i}\) resides, while \(\boldsymbol{V}_{a,i}\) identifies its location within that cloud (i.e., the individual codeword within the cloud). This scheme is similar to superposition coding for the broadcast channel [47].
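The decomposition (4) can be verified on a one-dimensional chain of scaled integer lattices, e.g., \(\Lambda_{si}=12\mathbb{Z}\subseteq\Lambda_{m}=4\mathbb{Z}\subseteq\Lambda_{c}=\mathbb{Z}\) (lattice choices are ours, for illustration only): every codeword splits into a satellite codeword and a cloud center, and the mod-\(\Lambda_{si}\) sum reassembles it exactly.

```python
def quantize(x, q):
    """Nearest-lattice-point quantizer onto q*Z."""
    return q * round(x / q)

def mod_lattice(x, q):
    return x - quantize(x, q)

q_s, q_m = 12, 4  # coarse shaping lattice 12Z and meso-lattice 4Z; fine lattice is Z

for v in range(-5, 6):          # codewords of Z inside the Voronoi cell of 12Z
    v_a = mod_lattice(v, q_m)                # satellite codeword, in V(4Z)
    v_b = mod_lattice(v - v_a, q_s)          # cloud center, a point of 4Z in V(12Z)
    assert mod_lattice(v_a + v_b, q_s) == v  # reconstruction: V = [V_a + V_b] mod 12Z
print("decomposition verified for all codewords")
```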

The following theorem presents the main result of this paper.

Theorem 1.

An achievable rate region of the half-duplex bidirectional relay channel under the 2-CoMABC protocol is the closure of the set of all points \((R_{1},R_{2})\) satisfying:

$$\begin{array}{@{}rcl@{}} R_{1} & \leq & \min\left(\vphantom{+ \left. t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\right)}t_{1}R_{1,r}^{*}+t_{4}C\left({g_{3}^{2}}P_{1}\right),t_{2}C\left({g_{2}^{2}}P_{r}\right)\right.\\[-2pt] && + \left. t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\right), \end{array} $$
(5)
$$\begin{array}{@{}rcl@{}} R_{2} & \leq & \min\left(\vphantom{+ \left. t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\right)}t_{1}R_{2,r}^{*}+t_{3}C\left({g_{3}^{2}}P_{2}\right),t_{2}C\left({g_{1}^{2}}P_{r}\right)\right.\\[-2pt]&& + \left. t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)\right), \end{array} $$
(6)

where \(R_{i,r}^{*}\overset{\triangle}{=}\left[\frac{1}{2}\log\left(\frac{{g_{i}^{2}}P_{i}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}+{g_{i}^{2}}P_{i}\right)\right]^{+}\) and \([x]^{+}=\max\{0,x\}\).
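To evaluate the region numerically, the two bounds can be coded directly from (5) and (6). The sketch below uses hypothetical parameter values (unit gains, powers equal to 7, equal phase durations, and zero correlations \(\rho_{1r}=\rho_{2r}=0\)); it is an illustration of the formulas, not an optimized evaluation:

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

def pos(x):
    return max(x, 0.0)

def rate_bounds(g1, g2, g3, P1, P2, Pr, t, rho1r=0.0, rho2r=0.0):
    """Evaluate the right-hand sides of (5) and (6) for phase durations t = (t1, t2, t3, t4)."""
    t1, t2, t3, t4 = t
    denom = g1**2 * P1 + g2**2 * P2
    R1_star = pos(0.5 * math.log2(g1**2 * P1 / denom + g1**2 * P1))
    R2_star = pos(0.5 * math.log2(g2**2 * P2 / denom + g2**2 * P2))
    R1 = min(t1 * R1_star + t4 * C(g3**2 * P1),
             t2 * C(g2**2 * Pr)
             + t4 * C(g2**2 * Pr + g3**2 * P1 + 2 * rho1r * g2 * g3 * math.sqrt(Pr * P1)))
    R2 = min(t1 * R2_star + t3 * C(g3**2 * P2),
             t2 * C(g1**2 * Pr)
             + t3 * C(g1**2 * Pr + g3**2 * P2 + 2 * rho2r * g1 * g3 * math.sqrt(Pr * P2)))
    return R1, R2

R1, R2 = rate_bounds(1, 1, 1, 7, 7, 7, (0.25, 0.25, 0.25, 0.25))
print(R1, R2)  # fully symmetric setup, so the two bounds coincide
```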

In the following, the steps of the proof are presented. First, we give a brief overview of our coding scheme and then present it in more detail. Without loss of generality, we assume that \(R_{1}\geq R_{2}\). Since we need two codebooks, three nested lattices are required to generate them. One lattice, \(\Lambda_{c}^{(n)}\), constructs the codewords, while the other two (shaping) lattices, \(\Lambda_{s1}^{(n)}\) and \(\Lambda_{s2}^{(n)}\), satisfy the channel power constraints. Based on the idea of computation coding [40], at the end of phase 1, we decode two linear combinations of the messages. In order to decompose these linear combinations, which are points of \(\Lambda_{c}^{(n)}\), we define another lattice, \(\Lambda_{m}^{(n)}\), that partitions \(\Lambda_{c}^{(n)}\) into clouds. Under this coding strategy, both linear combinations have the same satellite codeword (i.e., an individual codeword in \(\mathcal{V}_{m}\)) but different cloud centers (i.e., individual codewords in \(\mathcal{V}_{s1}\) or \(\mathcal{V}_{s2}\)).

In phase 2, we send the satellite codeword of the linear combinations to both nodes, while in phases 3 and 4, we communicate the cloud centers associated with the linear combinations of codewords to nodes 1 and 2, respectively. Thus, at the end of phase 4, having both the cloud center and the individual codeword within that cloud, each node can fully recover the linear combination of messages. From this linear combination, each node can then decode the message of the other node, since it knows its own message. In the following, we present our scheme in more detail.

3.2 Phase 1 (MA phase)

  • Encoding:

By calculating the optimum phase durations \(t_{1}\), \(t_{2}\), \(t_{3}\), and \(t_{4}\), we can determine the codeword length in each phase as \(n_{1}=\frac{t_{1}}{T_{s}}\), \(n_{2}=\frac{t_{2}}{T_{s}}\), \(n_{3}=\frac{t_{3}}{T_{s}}\), and \(n_{4}=\frac{t_{4}}{T_{s}}\), where \(T_{s}\) is the sampling interval. In the following, without loss of generality, we assume that \({g_{1}^{2}}P_{1}\geq{g_{2}^{2}}P_{2}\). In order to apply rate splitting, we choose a chain of lattices as in (2), such that \(\Lambda_{s1}^{(n_{1})}\), \(\Lambda_{s2}^{(n_{1})}\), and \(\Lambda_{m}^{(n_{1})}\) are both Rogers-good and Poltyrev-good, while \(\Lambda_{c}^{(n_{1})}\) is Poltyrev-good. The generation of these lattices is fully explained in [45].

To send the message \(W_{i}\), \(i\in\{1,2\}\), to the relay node, we first map it, using a one-to-one mapping, to a lattice codeword \(\boldsymbol {V}_{i}\in \mathcal {C}_{i}^{(n_{1})}\). Then, we construct the following sequence to transmit over the channel:

$$\boldsymbol{X}_{i}^{(1)} = \frac{1}{g_{i}}\left[\boldsymbol{V}_{i}+\boldsymbol{D}_{i}\right]\textrm{mod }\Lambda_{si}^{(n_{1})}, $$

where D i is a dither that is uniformly distributed over the Voronoi region of \(\Lambda _{\textit {si}}^{(n_{1})}\), i.e., \(\boldsymbol {D}_{i}\sim \text {Unif}\left (\mathcal {V}_{\textit {si}}\right)\). Since the channel gains from node 1→r and 2→r are different, and we also aim to decode the sum of codewords V 1 and V 2 at the relay node, we pre-amplify the transmit signals by \(\frac {1}{g_{i}}\). According to the channel power constraints, we choose the second moments of lattices as the following:

$$\begin{array}{@{}rcl@{}} \sigma^{2}\left(\Lambda_{si}\right) & = & {g_{i}^{2}}P_{i}\qquad i\in\left\{ 1,2\right\}. \end{array} $$
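To make the mod-\(\Lambda\) encoding concrete, the following is a minimal numerical sketch using one-dimensional lattices \(q\mathbb{Z}\) as stand-ins for the Rogers-/Poltyrev-good lattices of the proof; the channel values \(g_{1}\), \(P_{1}\) are illustrative assumptions. The Voronoi interval \([-q/2,q/2)\) of \(q\mathbb{Z}\) has second moment \(q^{2}/12\), so choosing \(q_{si}=\sqrt{12{g_{i}^{2}}P_{i}}\) enforces \(\sigma^{2}(\Lambda_{si})={g_{i}^{2}}P_{i}\).

```python
import math
import random

def quantize(x, q):
    """Nearest-point quantizer Q_Lambda for the scalar lattice q*Z."""
    return q * math.floor(x / q + 0.5)

def mod_lattice(x, q):
    """[x] mod Lambda: the quantization error, lying in the Voronoi interval [-q/2, q/2)."""
    return x - quantize(x, q)

def encode_phase1(v, d, g, q_s):
    """X_i^{(1)} = (1/g_i) [V_i + D_i] mod Lambda_si  (scalar sketch)."""
    return (1.0 / g) * mod_lattice(v + d, q_s)

# Illustrative values; sigma^2(q*Z) = q^2/12, so q_s1 = sqrt(12 g_1^2 P_1)
# enforces sigma^2(Lambda_s1) = g_1^2 P_1 as in the text.
g1, P1 = 1.5, 4.0
q_s1 = math.sqrt(12 * g1**2 * P1)

random.seed(0)
d1 = random.uniform(-q_s1 / 2, q_s1 / 2)      # dither D_1 ~ Unif(V_s1)
x1 = encode_phase1(v=3.0, d=d1, g=g1, q_s=q_s1)

# g1*X1 always lies in the Voronoi interval; with the uniform dither, its
# average power is q_s1^2/12 = g1^2 P1, i.e., E[X1^2] = P1.
assert -q_s1 / 2 <= g1 * x1 < q_s1 / 2
```

With the uniform dither, the mod output is uniform over the Voronoi interval, so the pre-amplified transmit signal meets the power constraint \(P_{i}\).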
  • Decoding:

The relay aims to recover a linear combination of the \(\boldsymbol{V}_{i}\)'s instead of recovering \(\boldsymbol{V}_{1}\) and \(\boldsymbol{V}_{2}\) separately. Thus, the lattice scheme inherits the idea of computation coding [40] and physical-layer network coding [48]. To this end, upon receiving the sequence

$$\boldsymbol{Y}_{r}^{(1)}=g_{1}\boldsymbol{X}_{1}^{(1)}+g_{2}\boldsymbol{X}_{2}^{(1)}+\boldsymbol{Z}_{r}^{(1)}, $$

the relay performs the following operations:

$$\begin{aligned} \boldsymbol{Y}_{d_{r}}^{(1)} & = \alpha\boldsymbol{Y}_{r}^{(1)}-\boldsymbol{D}_{1}-\boldsymbol{D}_{2}\\ & = \alpha g_{1}\boldsymbol{X}_{1}^{(1)}+\alpha g_{2}\boldsymbol{X}_{2}^{(1)}+\alpha\boldsymbol{Z}_{r}^{(1)}-\boldsymbol{D}_{1}-\boldsymbol{D}_{2}\\ & = \boldsymbol{V}_{1}+\boldsymbol{V}_{2}+\alpha g_{1}\boldsymbol{X}_{1}^{(1)}-\left(\boldsymbol{V}_{1}+\boldsymbol{D}_{1}\right)+\alpha g_{2}\boldsymbol{X}_{2}^{(1)}-\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)+\alpha\boldsymbol{Z}_{r}^{(1)}\\ & = \boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{1}+\boldsymbol{D}_{1}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)+g_{1}\left(\alpha-1\right)\boldsymbol{X}_{1}^{(1)}+g_{2}\left(\alpha-1\right)\boldsymbol{X}_{2}^{(1)}+\alpha\boldsymbol{Z}_{r}^{(1)}\\ & = \boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{1}+\boldsymbol{D}_{1}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)+\boldsymbol{Z}_{\text{eff}}, \end{aligned} $$

where

$$\boldsymbol{Z}_{\text{eff}}=g_{1}\left(\alpha-1\right)\boldsymbol{X}_{1}^{(1)}+g_{2}\left(\alpha-1\right)\boldsymbol{X}_{2}^{(1)}+\alpha\boldsymbol{Z}_{r}^{(1)}. $$

Due to the dithers, the vectors \(\boldsymbol {V}_{1},\boldsymbol {V}_{2},\boldsymbol {X}_{1}^{(1)},\boldsymbol {X}_{2}^{(1)}\) are mutually independent and also independent of \(\boldsymbol {Z}_{r}^{(1)}\). Therefore, \(\boldsymbol{Z}_{\text{eff}}\) is independent of \(\boldsymbol{V}_{1}\) and \(\boldsymbol{V}_{2}\). Now, we choose \(\alpha\) such that the variance of the effective noise \(\boldsymbol{Z}_{\text{eff}}\) is minimized. Hence, we obtain

$$\alpha_{\text{MMSE}}=\frac{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}+1}. $$
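As a quick check of this choice: per dimension, \(\text{Var}\left(\boldsymbol{Z}_{\text{eff}}\right)=(\alpha-1)^{2}\left({g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}\right)+\alpha^{2}\), and the sketch below (with illustrative channel values) confirms this is minimized at \(\alpha_{\text{MMSE}}\), with minimum value \(S/(S+1)\) for \(S={g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}\).

```python
import math

def var_zeff(alpha, S):
    """Per-dimension variance of Z_eff = (alpha-1)(g1 X1 + g2 X2) + alpha Z,
    with S = g1^2 P1 + g2^2 P2 and unit-variance channel noise Z."""
    return (alpha - 1) ** 2 * S + alpha ** 2

S = 1.5**2 * 4.0 + 1.0**2 * 2.0               # illustrative g1^2 P1 + g2^2 P2
alpha_mmse = S / (S + 1)

# alpha_MMSE minimizes the effective-noise variance ...
for a in (alpha_mmse - 0.1, alpha_mmse + 0.1, 0.5, 1.0):
    assert var_zeff(alpha_mmse, S) <= var_zeff(a, S)

# ... and the minimum value is S/(S+1), which drives the rate bound in (13)
assert abs(var_zeff(alpha_mmse, S) - S / (S + 1)) < 1e-12
```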

After calculating \(\boldsymbol {Y}_{d_{r}}^{(1)}\), we need to obtain estimates of the following linear combinations:

$$\begin{array}{@{}rcl@{}} \boldsymbol{V}_{r,1} & = & \left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right]\textrm{mod }\Lambda_{s1}^{(n_{1})}, \end{array} $$
((7))
$$\begin{array}{@{}rcl@{}} \boldsymbol{V}_{r,2} & = & \left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}. \end{array} $$
((8))

To decode (7) using \(\boldsymbol {Y}_{d_{r}}^{(1)}\), we perform the following operation:

$$\begin{array}{@{}rcl@{}} \boldsymbol{Y}_{d_{r,1}} & = & \left[\boldsymbol{Y}_{d_{r}}^{(1)}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \\ & = & \left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{1}+\boldsymbol{D}_{1}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right.\\&&\qquad\!+ \left.\boldsymbol{Z}_{\text{eff}}\vphantom{\left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{1}+\boldsymbol{D}_{1}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right.}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \\ & = & \left[\boldsymbol{V}_{r,1}+\boldsymbol{Z}_{\text{eff}}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \end{array} $$
((9))

where (9) is based on the distributive law of the modulo operation. Now, we use minimum Euclidean distance lattice decoding [41, 49] to decode \(\boldsymbol{V}_{r,1}\) correctly. Thus, we get

$$\widehat{\boldsymbol{V}}_{r,1}=\mathcal{Q}_{\Lambda_{c}}\left(\boldsymbol{Y}_{d_{r,1}}\right). $$

From (9), we can see that the estimation is incorrect if

$$ \boldsymbol{Z}_{\text{eff}}\notin\mathcal{V}_{c}. $$
((10))

Condition (10) shows that the estimate of \(\boldsymbol{V}_{r,1}\) is incorrect only if the effective noise \(\boldsymbol{Z}_{\text{eff}}\) leaves the Voronoi region surrounding the true codeword, i.e., \(P_{e}=\text {Pr}\left (\boldsymbol {Z}_{\text {eff}}\notin \mathcal {V}_{c}\right).\)

To show that \(P_{e}=\text {Pr}\left (\boldsymbol {Z}_{\text {eff}}\notin \mathcal {V}_{c}\right)\) goes to zero exponentially in \(n_{1}\), we consider a Gaussian sequence \(\boldsymbol {Z}_{\text {eff}}^{*}\sim \mathcal {N}\left (\boldsymbol {0},\text {Var}\left (\boldsymbol {Z}_{\text {eff}}\right)\boldsymbol {I}_{n_{1}}\right)\) with the same second moment as \(\boldsymbol {Z}_{\text {eff}}\). Since the fine lattice \(\Lambda _{c}^{(n_{1})}\) is Poltyrev-good, we know from Definition 7 that the error probability \(\text {Pr}\left (\boldsymbol {Z}_{\text {eff}}^{*}\notin \mathcal {V}_{c}\right)\) vanishes as \(n_{1}\rightarrow \infty \) if

$$ \mu=\frac{\left(\text{Vol}\left(\mathcal{V}_{c}^{(n_{1})}\right)\right)^{\frac{2}{n_{1}}}}{2\pi e\text{Var}\left(\boldsymbol{Z}_{\text{eff}}^{*}\right)}>1. $$
((11))

If (11) holds, then from Lemma 11 in [41], \(P_{e}=\text {Pr}\left (\boldsymbol {Z}_{\text {eff}}\notin \mathcal {V}_{c}\right)\) goes to zero exponentially in \(n_{1}\) as well. Now, from (3), we can obtain the rates of links 1→r and 2→r, i.e., \(R_{i,r}^{(1)}\), as follows:

$$\begin{array}{@{}rcl@{}} R_{i,r}^{(1)} & = & \frac{1}{n_{1}}\log\left(\frac{\text{Vol}\left(\mathcal{V}_{si}^{(n_{1})}\right)}{\text{Vol}\left(\mathcal{V}_{c}^{(n_{1})}\right)}\right) \\ & = & \frac{1}{2}\log\left(\frac{\sigma^{2}\left(\Lambda_{si}^{(n_{1})}\right)}{G\left(\Lambda_{si}^{(n_{1})}\right)\left(\text{Vol}\left(\mathcal{V}_{c}^{(n_{1})}\right)\right)^{\frac{2}{n_{1}}}}\right) \\ & \leq & \frac{1}{2}\log\left(\frac{\sigma^{2}\left(\Lambda_{si}^{(n_{1})}\right)}{G\left(\Lambda_{si}^{(n_{1})}\right)2\pi e\text{Var}\left(\boldsymbol{Z}_{\text{eff}}^{*}\right)}\right) \end{array} $$
((12))
$$\begin{array}{@{}rcl@{}} & \leq & \frac{1}{2}\log\left(\frac{\sigma^{2}\left(\Lambda_{si}^{(n_{1})}\right)}{\text{Var}\left(\boldsymbol{Z}_{\text{eff}}^{*}\right)}\right)\\ & = & \frac{1}{2}\log\left(\frac{{g_{i}^{2}}P_{i}}{\text{Var}\left(\boldsymbol{Z}_{\text{eff}}^{*}\right)}\right) \\ & = & \frac{1}{2}\log\left(\frac{{g_{i}^{2}}P_{i}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}+{g_{i}^{2}}P_{i}\right), \end{array} $$
((13))

where (12) follows from (11), and (13) is based on the Rogers-goodness of \(\Lambda _{\textit {si}}^{(n_{1})}\) and the fact that \(G\left (\Lambda _{\textit {si}}^{(n_{1})}\right)\geq \frac {1}{2\pi e}\). Thus, we obtain the achievable rates of links 1→r and 2→r as

$$\begin{array}{@{}rcl@{}} R_{i,r}^{(1)} & \leq & R_{i,r}^{*}, \end{array} $$
((14))

where

$$R_{i,r}^{*}\overset{\triangle}{=}\left[\frac{1}{2}\log\left(\frac{{g_{i}^{2}}P_{i}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}+{g_{i}^{2}}P_{i}\right)\right]^{+}. $$
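The computation rate \(R_{i,r}^{*}\) can be evaluated with a small helper; the channel values below are illustrative assumptions. For \({g_{i}^{2}}P_{i}\geq 1\), the helper also checks that \(R_{i,r}^{*}\) sits within 0.5 bit below the MAC cut \(C\left({g_{i}^{2}}P_{i}\right)\), consistent with the gap result stated in the abstract.

```python
import math

def C(x):
    """Gaussian capacity function C(x) = 1/2 log2(1 + x)."""
    return 0.5 * math.log2(1 + x)

def comp_rate(gi2Pi, S):
    """R*_{i,r} = [1/2 log2(gi^2 Pi / S + gi^2 Pi)]^+ with S = g1^2 P1 + g2^2 P2."""
    return max(0.0, 0.5 * math.log2(gi2Pi / S + gi2Pi))

g1, g2, P1, P2 = 1.5, 1.0, 4.0, 2.0           # illustrative values
S = g1**2 * P1 + g2**2 * P2
R1_star = comp_rate(g1**2 * P1, S)
R2_star = comp_rate(g2**2 * P2, S)

# The computation rate never exceeds the MAC cut C(gi^2 Pi), and for these
# values (gi^2 Pi >= 1) it stays within 0.5 bit of it.
for gi2Pi, R in [(g1**2 * P1, R1_star), (g2**2 * P2, R2_star)]:
    assert C(gi2Pi) - 0.5 <= R <= C(gi2Pi)
```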

In order to decode the second term, (8), we assume that the estimate of \(\boldsymbol{V}_{r,1}\) is correct. This assumption is valid if \(R_{1,r}^{(1)}\) and \(R_{2,r}^{(1)}\) satisfy (14). Thus, we can calculate \(\boldsymbol{V}_{r,2}\) as:

$$\begin{aligned} \left[\boldsymbol{V}_{r,1}\right]\textrm{mod }\Lambda_{s2}^{(n_{1})} & = \left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right]\textrm{mod }\Lambda_{s2}^{(n_{1})} \\ & = \left[\boldsymbol{V}_{1}+\boldsymbol{V}_{2}\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}\\ & = \boldsymbol{V}_{r,2}, \end{aligned} $$
((15))

where (15) follows from \(\Lambda _{s1}^{(n_{1})}\subseteq \Lambda _{s2}^{(n_{1})}\) and the distributive law of the modulo operation.

Now, the relay node decomposes the linear combinations of messages, V r,1 and V r,2, as the following:

$$\begin{array}{@{}rcl@{}} \boldsymbol{L}_{a,1} & \overset{\triangle}{=} & \left[\boldsymbol{V}_{r,1}\right]\textrm{mod }\Lambda_{m}^{(n_{1})} \\ & = & \left[\left(\boldsymbol{V}_{1}+\boldsymbol{V}_{2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right)\textrm{mod }\Lambda_{s1}^{(n_{1})}\right]\textrm{mod }\Lambda_{m}^{(n_{1})} \\ & = & \left[\left(\left[\boldsymbol{V}_{a,1}+\boldsymbol{V}_{b,1}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})}+\left[\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}\right.\right.\\&&-\left.\left.\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right)\textrm{mod }\Lambda_{s1}^{(n_{1})}\right]\textrm{mod }\Lambda_{m}^{(n_{1})} \end{array} $$
((16))
$$\begin{array}{@{}rcl@{}} & = & \left[\boldsymbol{V}_{a,1}+\boldsymbol{V}_{b,1}+\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right.\\&& -\left.\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{b,1}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right)\right]\textrm{mod }\Lambda_{m}^{(n_{1})} \\ & = & \left[\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right]\textrm{mod }\Lambda_{m}^{(n_{1})} \end{array} $$
((17))

where (16) follows from (4), and the last equality follows from \(\Lambda _{s1}^{(n_{1})}\subseteq \Lambda _{s2}^{(n_{1})}\subseteq \Lambda _{m}^{(n_{1})}\) and \(\boldsymbol {V}_{b,i}\in \mathcal {C}_{b,i}^{(n_{1})}\). To determine the cloud center, we perform the following operation:

$$\begin{array}{@{}rcl@{}} \boldsymbol{L}_{b,1} & \overset{\triangle}{=} & \left[\boldsymbol{V}_{r,1}-\boldsymbol{L}_{a,1}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \end{array} $$
((18))
$$\begin{array}{@{}rcl@{}} & = & \left[\left(\left[\boldsymbol{V}_{a,1}+\boldsymbol{V}_{b,1}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})}+\left[\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right] \textrm{mod }\Lambda_{s2}^{(n_{1})}\right.\right.\\&& -\left.\left.\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)\right)\textrm{mod }\Lambda_{s1}^{(n_{1})}-\boldsymbol{L}_{a,1}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \\ & = & \left[\boldsymbol{V}_{b,1}+\boldsymbol{V}_{b,2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)-\mathcal{Q}_{\Lambda_{s1}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{b,1}\right)\right.\\&&-\left.\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right)+\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right)\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \\ & = & \left[\boldsymbol{V}_{b,1}+\boldsymbol{V}_{b,2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right)\right.\\&&+\left.\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right)\right]\textrm{mod }\Lambda_{s1}^{(n_{1})} \end{array} $$
((19))

where the last equality is based on the distributive law of the modulo operation. Thus using (18), we can decompose V r,1 as follows:

$$\boldsymbol{V}_{r,1}=\left[\boldsymbol{L}_{a,1}+\boldsymbol{L}_{b,1}\right]\textrm{mod }\Lambda_{s1}^{(n_{1})}. $$

Similarly, for V r,2, we get:

$$\boldsymbol{V}_{r,2}=\left[\boldsymbol{L}_{a,2}+\boldsymbol{L}_{b,2}\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}, $$

where

$$\begin{array}{@{}rcl@{}} \boldsymbol{L}_{a,2} & = & \left[\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right]\textrm{mod }\Lambda_{m}^{(n_{1})},\\ \boldsymbol{L}_{b,2} & = & \left[\boldsymbol{V}_{b,1}+\boldsymbol{V}_{b,2}+\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right)\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}. \end{array} $$

Note that \(\boldsymbol {L}_{a,i}\in \mathcal {C}_{a}^{(n_{1})}\) and \(\boldsymbol {L}_{b,i}\in \mathcal {C}_{b,i}^{(n_{1})}\) for i=1,2. As shown in (17) and (19), due to the structure of nested lattice codes, we can determine \(\boldsymbol{L}_{a,i}\) and \(\boldsymbol{L}_{b,i}\) for i=1,2 from \(\boldsymbol{V}_{r,i}\). Our coding strategy sends the linear combination of satellite codewords (associated with \(\boldsymbol{V}_{1}\) and \(\boldsymbol{V}_{2}\)), i.e., \(\boldsymbol{L}_{a,1}\), to both nodes in phase 2. In phases 3 and 4, we communicate the cloud centers associated with the linear combination of codewords, i.e., \(\boldsymbol{L}_{b,2}\) and \(\boldsymbol{L}_{b,1}\), to nodes 1 and 2, respectively.
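The decomposition of \(\boldsymbol{V}_{r,i}\) into satellite part and cloud center, and its reassembly, can be illustrated with one-dimensional stand-ins for the nested chain \(\Lambda _{s1}^{(n_{1})}\subseteq \Lambda _{s2}^{(n_{1})}\subseteq \Lambda _{m}^{(n_{1})}\subseteq \Lambda _{c}^{(n_{1})}\); here \(16\mathbb{Z}\subseteq 8\mathbb{Z}\subseteq 4\mathbb{Z}\subseteq \mathbb{Z}\), an illustrative choice.

```python
import math

def quantize(x, q):
    return q * math.floor(x / q + 0.5)

def mod_lat(x, q):
    return x - quantize(x, q)

# Scalar stand-ins for Lambda_s1 ⊆ Lambda_s2 ⊆ Lambda_m ⊆ Lambda_c
q_m, q_s2, q_s1 = 4, 8, 16

for v_r in range(-8, 8):                  # points of Lambda_c inside V_s1
    L_a = mod_lat(v_r, q_m)               # satellite part: [V_r] mod Lambda_m
    L_b = mod_lat(v_r - L_a, q_s1)        # cloud center: [V_r - L_a] mod Lambda_s1
    # Reassembly exactly as in the text: V_r = [L_a + L_b] mod Lambda_s1
    assert mod_lat(L_a + L_b, q_s1) == v_r
    # Same decomposition holds with the finer shaping lattice Lambda_s2
    assert mod_lat(L_a + mod_lat(v_r - L_a, q_s2), q_s2) == mod_lat(v_r, q_s2)
```

The identities hold by the distributive law of the modulo operation, independently of the particular nesting ratios chosen here.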

3.3 Phase 2 (broadcast phase)

  • Encoding:

In this phase, we send the codeword \(\boldsymbol {L}_{a,1}=\left [\boldsymbol {V}_{a,1}+\boldsymbol {V}_{a,2}\right ]\textrm {mod }\Lambda _{m}^{(n_{1})}\) to both nodes by random coding. Note that \(\boldsymbol {L}_{a,1}\in \mathcal {C}_{a}^{(n_{1})}\). Here, we apply a joint-typicality scheme. First, we generate \(2^{n_{2}R_{r}^{(2)}}\) sequences, each with i.i.d. elements drawn according to \(\mathcal {N}\left (0,P_{r}\right)\), where \(R_{r}^{(2)}=\max \left (R_{r,1}^{(2)},R_{r,2}^{(2)}\right)\) and \(R_{r,1}^{(2)}\) and \(R_{r,2}^{(2)}\) will be determined later. These sequences form a codebook \(\mathcal {C}_{r}^{(n_{2})}\). We assume a one-to-one correspondence between each \(\boldsymbol {L}_{a,1}\in \mathcal {C}_{a}^{(n_{1})}\) and a codeword \(\boldsymbol {X}_{r}^{(2)}\in \mathcal {C}_{r}^{(n_{2})}\).

  • Decoding:

Let us denote the relay codeword by \(\boldsymbol {X}_{r}^{(2)}\left (\boldsymbol {L}_{a,1}\right)\). Based on \(\boldsymbol {Y}_{1}^{(2)}=g_{1}\boldsymbol {X}_{r}^{(2)}+\boldsymbol {Z}_{1}^{(2)},\) node 1 finds the relay message L a,1 as \(\widehat {\boldsymbol {L}}_{a,1}\) if a unique codeword \(\boldsymbol {X}_{r}\left (\widehat {\boldsymbol {L}}_{a,1}\right)\in \mathcal {C}_{r,2}^{(n_{2})}\) exists such that \(\left (\boldsymbol {X}_{r}\left (\widehat {\boldsymbol {L}}_{a,1}\right),\boldsymbol {Y}_{1}^{(2)}\right)\) are jointly typical, where

$$\mathcal{C}_{r,2}^{(n_{2})}=\left\{ \boldsymbol{X}_{r}\left(\boldsymbol{L}_{a,1}\right):\boldsymbol{L}_{a,1}=\left[\boldsymbol{v}_{a,1}+\boldsymbol{V}_{a,2}\right]\textrm{mod }\Lambda_{m}^{(n_{1})}\right\}. $$

Note that \(|\mathcal {C}_{r,2}^{(n_{2})}|=2^{n_{2}R_{r,1}^{(2)}}\) (since \(\boldsymbol{V}_{a,1}\) is known at node 1). Since node 1 has access to its own codeword \(\boldsymbol{V}_{1}\), the corresponding satellite codeword \(\boldsymbol{v}_{a,1}\) can be determined easily. Using the knowledge of \(\boldsymbol{v}_{a,1}\) and \(\widehat {\boldsymbol {L}}_{a,1}\), node 1 can decode the satellite codeword of node 2 as:

$$\widehat{\boldsymbol{V}}_{a,2}=\left[\widehat{\boldsymbol{L}}_{a,1}-\boldsymbol{v}_{a,1}\right]\textrm{mod }\Lambda_{m}^{(n_{1})}. $$

From the argument of random coding and jointly typical decoding [50], we get

$$ R_{r,1}^{(2)}\leq C\left({g_{1}^{2}}P_{r}\right). $$
((20))

Similarly, node 2, which has \(\boldsymbol{V}_{2}\) and thus its corresponding satellite codeword \(\boldsymbol{V}_{a,2}\), can find \(\boldsymbol{V}_{a,1}\) if

$$ R_{r,2}^{(2)}\leq C\left({g_{2}^{2}}P_{r}\right). $$
((21))
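The self-interference cancellation step \(\widehat {\boldsymbol {V}}_{a,2}=\left [\widehat {\boldsymbol {L}}_{a,1}-\boldsymbol {v}_{a,1}\right ]\textrm {mod }\Lambda _{m}^{(n_{1})}\) can be checked with the same scalar stand-in used earlier (\(\Lambda_{m}=4\mathbb{Z}\), an illustrative choice); node 1 subtracts its own satellite codeword from the broadcast combination and always recovers that of node 2.

```python
import math

def quantize(x, q):
    return q * math.floor(x / q + 0.5)

def mod_lat(x, q):
    return x - quantize(x, q)

q_m = 4   # scalar stand-in for Lambda_m; satellite codewords live in V_m = [-2, 2)

for v_a1 in range(-2, 2):
    for v_a2 in range(-2, 2):
        L_a1 = mod_lat(v_a1 + v_a2, q_m)        # relay broadcast in phase 2
        v_a2_hat = mod_lat(L_a1 - v_a1, q_m)    # node 1 removes its own codeword
        assert v_a2_hat == v_a2                 # node 2's satellite part recovered
```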

3.4 Phase 3 (first cooperative phase)

During this phase, only node 2 and the relay node transmit.

  • Encoding:

In this phase, node 1 attempts to decode \(\boldsymbol{V}_{b,2}\) in order to construct \(\boldsymbol{V}_{2}\) together with the satellite codeword \(\widehat {\boldsymbol {V}}_{a,2}\) decoded in the previous phase. At the relay node, the following sequence is available from phase 1:

$$\boldsymbol{L}_{r}^{(3)}\overset{\triangle}{=}\boldsymbol{L}_{b,2}=\left[\boldsymbol{V}_{b,1}+\boldsymbol{V}_{b,2}+\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right)\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}. $$

The relay encodes \(\boldsymbol {L}_{r}^{(3)}\) and transmits it to node 1. Node 2 has access to \(\boldsymbol{V}_{2}\) and \(\boldsymbol{V}_{a,1}\) (from phase 2) and thus can generate \(\boldsymbol {L}_{2}^{(3)}\overset {\triangle }{=}\left [\boldsymbol {V}_{b,2}+\mathcal {Q}_{\Lambda _{m}}\left (\boldsymbol {V}_{a,1}+\boldsymbol {V}_{a,2}\right)\right ]\textrm {mod }\Lambda _{s2}^{(n_{1})}\). Since the relay and node 2 want to send \(\boldsymbol {L}_{r}^{(3)}\) and \(\boldsymbol {L}_{2}^{(3)}\) to node 1, respectively, we have a conventional MAC and can apply any capacity-achieving code. Note that \(\boldsymbol {L}_{r}^{(3)}=\left [\boldsymbol {L}_{2}^{(3)}+\boldsymbol {V}_{b,1}\right ]\textrm {mod }\Lambda _{s2}^{(n_{1})}\) and that the sets of possible \(\boldsymbol {L}_{r}^{(3)}\) and \(\boldsymbol {L}_{2}^{(3)}\) have equal cardinality (by the Crypto lemma in [41, 51]). We denote this cardinality by \(2^{n_{3}R^{(3)}}\). To construct the codebooks at node 2 and the relay node, we first find all pairs of \(\boldsymbol {L}_{2}^{(3)}\) and \(\boldsymbol {V}_{b,1}\) that result in the same \(\boldsymbol {L}_{r}^{(3)}\). Suppose that for each \(\boldsymbol {L}_{r}^{(3)}\), there are \(m_{i}\), \(i\in \left \{ 1,2,\ldots,2^{n_{3}R^{(3)}}\right \}\), pairs of \(\boldsymbol {L}_{2}^{(3)}\) and \(\boldsymbol {V}_{b,1}\) whose sum results in that \(\boldsymbol {L}_{r}^{(3)}\). Now, consider a multivariate Gaussian distribution \(p\left (x_{2,1},x_{2,2},\ldots,x_{2,2^{n_{3}R^{(3)}}},x_{r}\right)\) with the following covariance matrix:

$${} \Sigma=\left[\begin{array}{ccccc} P_{2} & 0 & \cdots & 0 & \rho_{2r}\sqrt{P_{r}P_{2}}\\ 0 & P_{2} & \cdots & 0 & \rho_{2r}\sqrt{P_{r}P_{2}}\\ \vdots & \vdots & \cdots & \vdots & \vdots\\ 0 & 0 & \cdots & P_{2} & \rho_{2r}\sqrt{P_{r}P_{2}}\\ \rho_{2r}\sqrt{P_{r}P_{2}} & \rho_{2r}\sqrt{P_{r}P_{2}} & \cdots & \rho_{2r}\sqrt{P_{r}P_{2}} & P_{r} \end{array}\right], $$

where \(\rho _{2r}\) denotes the correlation coefficient between \(x_{2,i}\) and \(x_{r}\) for \(i\in \left \{ 1,2,\ldots,2^{n_{3}R^{(3)}}\right \} \). To generate the codebook at node 2, we use the marginal distributions \(p\left (x_{2,1}\right),p\left (x_{2,2}\right),\ldots,p\left (x_{2,2^{n_{3}R^{(3)}}}\right)\) and, from each of them, construct a codeword with i.i.d. elements. These sequences form a codebook \(\mathcal {C}_{2}^{(n_{3})}\) for node 2, which enables node 2 to map \(\boldsymbol {L}_{2}^{(3)}\) to a codeword \(\boldsymbol {X}_{2}^{(3)}\left (\boldsymbol {L}_{2}^{(3)}\right)\in \mathcal {C}_{2}^{(n_{3})}\).

Now, at the relay node, we generate the codebook \(\mathcal {C}_{r}^{(n_{3})}\). To construct this codebook, let \(\mathcal {X}_{i}\) be the set of all \(\boldsymbol {X}_{2,i}\left (\boldsymbol {L}_{2}^{(3)}\right)\) such that for the corresponding \(\boldsymbol {L}_{2}^{(3)}\), there exists a \(\boldsymbol {V}_{b,1}\) yielding the same \(\boldsymbol {L}_{r}^{(3)}\). We then generate \(2^{n_{3}R^{(3)}}\) sequences according to the conditional distribution \(p\left (x_{r}|\mathcal {X}_{i}\right)\). The relay maps \(\boldsymbol {L}_{r}^{(3)}\) to a codeword \(\boldsymbol {X}_{r}^{(3)}\left (\boldsymbol {L}_{r}^{(3)}\right)\in \mathcal {C}_{r}^{(n_{3})}\). Note that both mappings, at the relay and at node 2, are one-to-one.

  • Decoding:

In this phase, the decoder of node 1 attempts to decode \(\boldsymbol{V}_{b,2}\) to construct \(\boldsymbol{V}_{2}\). Note that node 1 already has \(\boldsymbol{V}_{a,2}\) from phase 2. Since \(\boldsymbol{V}_{b,1}\) is known at node 1 and \(\boldsymbol {L}_{r}^{(3)}=\left [\boldsymbol {L}_{2}^{(3)}+\boldsymbol {V}_{b,1}\right ]\textrm {mod }\Lambda _{s2}^{(n_{1})}\), it suffices for node 1 to decode either \(\boldsymbol {L}_{2}^{(3)}\) or \(\boldsymbol {L}_{r}^{(3)}\). Based on the received sequence in this phase, \(\boldsymbol {Y}_{1}^{(3)}=g_{1}\boldsymbol {X}_{r}^{(3)}+g_{3}\boldsymbol {X}_{2}^{(3)}+\boldsymbol {Z}_{1}^{(3)},\) node 1 estimates the message of node 2, \(\boldsymbol {L}_{2}^{(3)}\), as \(\widehat {\boldsymbol {L}_{2}^{(3)}}\) if a unique codeword \(\boldsymbol {X}_{2}^{(3)}\left (\widehat {\boldsymbol {L}_{2}^{(3)}}\right)\in \mathcal {C}_{2}^{(n_{3})}\) exists such that \(\left (\boldsymbol {X}_{r}^{(3)}\left (\left [\widehat {\boldsymbol {L}_{2}^{(3)}}+\boldsymbol {v}_{b,1}\right ]\textrm {mod }\Lambda _{s2}^{(n_{1})}\right),\boldsymbol {X}_{2}^{(3)}\left (\widehat {\boldsymbol {L}_{2}^{(3)}}\right),\boldsymbol {Y}_{1}^{(3)}\right)\) are jointly typical, where

$$\begin{array}{@{}rcl@{}} \mathcal{C}_{2}^{(n_{3})} & = & \left\{ \boldsymbol{X}_{2}^{(3)}\left(\boldsymbol{L}_{2}^{(3)}\right):\boldsymbol{L}_{2}^{(3)}=\left[\boldsymbol{V}_{b,2}+\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{v}_{a,1}+\boldsymbol{v}_{a,2}\right)\right]\right.\\&&\left.\textrm{mod }\Lambda_{s2}^{(n_{1})}\right\}. \end{array} $$

From the argument of random coding and jointly typical decoding [50], we get

$$R^{(3)}\leq C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right). $$

Note that by decoding \(\boldsymbol {L}_{2}^{(3)}\), we can decode \(\boldsymbol{V}_{b,2}\) as follows:

$$\widehat{\boldsymbol{V}}_{b,2}=\left[\boldsymbol{L}^{(3)}_{2}-\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{v}_{a,1}+\boldsymbol{v}_{a,2}\right)\right]\textrm{mod }\Lambda_{s2}^{(n_{1})}. $$

Also, using flow constraints, we have

$$ t_{3}R^{(3)}=t_{1}R_{2,r}^{(1)}-t_{2}R_{r,1}^{(2)}. $$
((22))

Thus, if the rate of \(\boldsymbol {X}_{r}^{(3)}\left (\boldsymbol {L}_{r}^{(3)}\right)\) or \(\boldsymbol {X}_{2}^{(3)}\left (\boldsymbol {L}_{2}^{(3)}\right)\) (the two rates are equal) is less than the sum capacity of the multiple-access channel, node 2 can transmit another sequence to node 1 in this phase. Let \(\boldsymbol {X}_{2}^{'(3)}\) denote this supplementary sequence and \(R_{2,1}^{(3)}\) its data rate. The error probabilities vanish as \(n_{3}\rightarrow \infty \) if the following constraints are satisfied:

$$\begin{array}{@{}rcl@{}} R_{2,1}^{(3)} & \leq & C\left({g_{3}^{2}}P_{2}\right), \end{array} $$
((23))
$$\begin{array}{@{}rcl@{}} \!\!\!R^{(3)}+R_{2,1}^{(3)} & \leq & C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)\!. \end{array} $$
((24))

Using (22), the constraints in (23) and (24) can be rewritten as follows:

$${\kern20pt} R_{2,1}^{(3)} \leq C\left({g_{3}^{2}}P_{2}\right), $$
((25))
$$ t_{1}R_{2,r}^{(1)}+t_{3}R_{2,1}^{(3)} \leq t_{2}R_{r,1}^{(2)}+t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right). $$
((26))
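The rewriting of (23)-(24) into (25)-(26) is just the substitution of the flow constraint (22) followed by multiplication by \(t_{3}>0\), so the two inequality pairs are satisfied or violated together. A numeric spot-check with arbitrary illustrative values:

```python
def constraints_equivalent(t1, t2, t3, R2r1, Rr12, R213, Cmac):
    """Check that (24) and its rewritten form (26) agree for given values.
    R2r1 = R_{2,r}^{(1)}, Rr12 = R_{r,1}^{(2)}, R213 = R_{2,1}^{(3)},
    Cmac = C(g1^2 Pr + g3^2 P2 + 2 rho g1 g3 sqrt(Pr P2))."""
    R3 = (t1 * R2r1 - t2 * Rr12) / t3                       # flow constraint (22)
    holds_24 = R3 + R213 <= Cmac                            # constraint (24)
    holds_26 = t1 * R2r1 + t3 * R213 <= t2 * Rr12 + t3 * Cmac   # constraint (26)
    return holds_24 == holds_26

# Both a satisfied case and a violated case agree between the two forms
assert constraints_equivalent(0.4, 0.3, 0.3, 1.2, 0.9, 0.5, 2.0)
assert constraints_equivalent(0.4, 0.3, 0.3, 1.2, 0.9, 0.5, 0.6)
```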

3.5 Phase 4 (second cooperative phase)

In this phase, we can reuse the scheme described for phase 3. Since node 1 has recovered the message of node 2 in phase 3, it can construct the following sequence, which the relay also has:

$$\begin{aligned} \boldsymbol{L}_{b,1}&=\left[\boldsymbol{V}_{b,1}+\boldsymbol{V}_{b,2}-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{2}+\boldsymbol{D}_{2}\right)-\mathcal{Q}_{\Lambda_{s2}}\left(\boldsymbol{V}_{a,2}+\boldsymbol{V}_{b,2}\right)\right.\\ &\quad+\left.\mathcal{Q}_{\Lambda_{m}}\left(\boldsymbol{V}_{a,1}+\boldsymbol{V}_{a,2}\right)\right]\textrm{mod }\Lambda_{s1}^{(n_{1})}. \end{aligned} $$

Here, we assume that \(p(x_{1},x_{r})\) is a bivariate Gaussian distribution with \(\text{Var}(X_{1})=P_{1}\), \(\text{Var}(X_{r})=P_{r}\), and correlation coefficient \(\rho _{1r}\) between \(X_{1}\) and \(X_{r}\). Using this distribution, we generate \(2^{n_{4}R^{(4)}}\) sequences with i.i.d. elements drawn according to \(p(x_{1},x_{r})\). We choose the first component of each generated sequence as a codeword for node 1 and the second component as a codeword for the relay node. These sequences form two codebooks, \(\mathcal {C}_{1}^{(n_{4})}\) and \(\mathcal {C}_{r}^{(n_{4})}\). Node 1 and the relay map \(\boldsymbol {L}_{b,1}\) to \(\boldsymbol {X}_{1}^{(4)}\left (\boldsymbol {L}_{b,1}\right)\) and \(\boldsymbol {X}_{r}^{(4)}\left (\boldsymbol {L}_{b,1}\right)\), respectively, and send them to node 2. Thus, we have a conventional MAC, and the capacity region is easily achieved. Moreover, if the individual rate of node 1 and the relay is less than the sum capacity of the MAC, node 1 can communicate another data sequence in this phase, denoted by \(\boldsymbol {X}_{1}^{'(4)}\). Thus, to correctly recover \(\boldsymbol {X}_{r}^{(4)}\left (\boldsymbol {L}_{b,1}\right)\) and \(\boldsymbol {X}_{1}^{'(4)}\) in this phase, we require:

$$\begin{array}{@{}rcl@{}} {}R_{1,2}^{(4)} & \leq & C\left({g_{3}^{2}}P_{1}\right), \end{array} $$
((27))
$$\begin{array}{@{}rcl@{}} {}R_{1,2}^{(4)}+R^{(4)} & \leq & C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right), \end{array} $$
((28))

where \(t_{4}R^{(4)}=t_{1}R_{1,r}^{(1)}-t_{2}R_{r,2}^{(2)}\). Thus, we can rewrite (27) and (28) as follows:

$${\kern22pt} R_{1,2}^{(4)} \leq C\left({g_{3}^{2}}P_{1}\right), $$
((29))
$$ t_{1}R_{1,r}^{(1)}+t_{4}R_{1,2}^{(4)} \leq t_{2}R_{r,2}^{(2)}+t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right). $$
((30))

The encoding and decoding operations at all nodes in the four phases are summarized in Table 1.

Table 1 Encoding and decoding at nodes

3.6 Achievable rate region

From (14), (20), (21), (25), (26), (29) and (30), the following rate region is achieved:

$$\begin{array}{@{}rcl@{}} R_{1,r}^{(1)} & \leq & R_{1,r}^{*}, \end{array} $$
((31))
$$\begin{array}{@{}rcl@{}} R_{2,r}^{(1)} & \leq & R_{2,r}^{*}, \end{array} $$
((32))
$$\begin{array}{@{}rcl@{}} R_{r,1}^{(2)} & \leq & C\left({g_{1}^{2}}P_{r}\right), \end{array} $$
((33))
$$\begin{array}{@{}rcl@{}} R_{r,2}^{(2)} & \leq & C\left({g_{2}^{2}}P_{r}\right), \end{array} $$
((34))
$$\begin{array}{@{}rcl@{}} R_{2,1}^{(3)} & \leq & C\left({g_{3}^{2}}P_{2}\right), \end{array} $$
((35))
$$\begin{array}{@{}rcl@{}} t_{1}R_{2,r}^{(1)}+t_{3}R_{2,1}^{(3)} & \leq & t_{2}R_{r,1}^{(2)}+t_{3}C\left(\vphantom{\left.2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)}{g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}\right.\\ &&+\left.2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right), \end{array} $$
((36))
$$\begin{array}{@{}rcl@{}} R_{1,2}^{(4)} & \leq & C\left({g_{3}^{2}}P_{1}\right), \end{array} $$
((37))
$$\begin{array}{@{}rcl@{}} t_{1}R_{1,r}^{(1)}+t_{4}R_{1,2}^{(4)} & \leq & t_{2}R_{r,2}^{(2)}+t_{4}C\left(\vphantom{\left.2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)}{g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}\right.\\ &&+\left.2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right). \end{array} $$
((38))

Since there are outgoing data flows from node 1 to node 2 in phases 1 and 4, from (31) and (37), we get:

$$\begin{array}{@{}rcl@{}} t_{1}R_{1,r}^{(1)}+t_{4}R_{1,2}^{(4)} & \leq & t_{1}R_{1,r}^{*}+t_{4}C\left({g_{3}^{2}}P_{1}\right). \end{array} $$
((39))

Similarly, there are information flows from node 2 to node 1 in phases 1 and 3. Thus, we get

$$\begin{array}{@{}rcl@{}} t_{1}R_{2,r}^{(1)}+t_{3}R_{2,1}^{(3)} & \leq & t_{1}R_{2,r}^{*}+t_{3}C\left({g_{3}^{2}}P_{2}\right). \end{array} $$
((40))

Combining (36), (38), (39), and (40) completes the proof of the theorem.
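To evaluate the achieved region numerically, the sketch below bounds the two end-to-end flows \(t_{1}R_{1,r}^{(1)}+t_{4}R_{1,2}^{(4)}\) and \(t_{1}R_{2,r}^{(1)}+t_{3}R_{2,1}^{(3)}\) via (36) and (38)-(40), with the relay broadcast rates set to their caps in (33)-(34) as a simplification; the channel values, time allocation, and \(\rho_{1r}=\rho_{2r}=1\) are illustrative assumptions.

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

def achievable_rates(t, g1, g2, g3, P1, P2, Pr, rho1r=1.0, rho2r=1.0):
    """Upper limits on R1 = t1*R_{1,r}^{(1)} + t4*R_{1,2}^{(4)} and
    R2 = t1*R_{2,r}^{(1)} + t3*R_{2,1}^{(3)} implied by (36) and (38)-(40)."""
    t1, t2, t3, t4 = t
    S = g1**2 * P1 + g2**2 * P2
    R1r_star = max(0.0, 0.5 * math.log2(g1**2 * P1 / S + g1**2 * P1))
    R2r_star = max(0.0, 0.5 * math.log2(g2**2 * P2 / S + g2**2 * P2))
    R1 = min(t1 * R1r_star + t4 * C(g3**2 * P1),                       # (39)
             t2 * C(g2**2 * Pr)                                        # (38), cap (34)
             + t4 * C(g2**2 * Pr + g3**2 * P1
                      + 2 * rho1r * g2 * g3 * math.sqrt(Pr * P1)))
    R2 = min(t1 * R2r_star + t3 * C(g3**2 * P2),                       # (40)
             t2 * C(g1**2 * Pr)                                        # (36), cap (33)
             + t3 * C(g1**2 * Pr + g3**2 * P2
                      + 2 * rho2r * g1 * g3 * math.sqrt(Pr * P2)))
    return R1, R2

R1, R2 = achievable_rates((0.4, 0.3, 0.15, 0.15), 1.5, 1.0, 0.8, 4.0, 2.0, 4.0)
assert R1 > 0 and R2 > 0
```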

4 The rate region outer bound and capacity results

4.1 The rate region outer bound

In this subsection, using the cut-set bound, we obtain an outer bound on the rate region of the half-duplex GTWRC. This bound can be derived from the half-duplex cut-set bound in [52].

Lemma 2.

All rate pairs of the discrete memoryless restricted half-duplex two-way relay channel, as shown in Fig. 2, that are achievable for some joint probability distributions

$$\begin{array}{@{}rcl@{}} P\left(x_{1}^{(1)}x_{2}^{(1)}y_{r}^{(1)}\right) & = & P\left(x_{1}^{(1)}\right)P\left(x_{2}^{(1)}\right)P\left(y_{r}^{(1)}|x_{1}^{(1)}x_{2}^{(1)}\right),\\ P\left(x_{r}^{(2)}y_{1}^{(2)}y_{2}^{(2)}\right) & = & P\left(x_{r}^{(2)}\right)P\left(y_{1}^{(2)}|x_{r}^{(2)}\right)P\left(y_{2}^{(2)}|x_{r}^{(2)}\right),\\ P\left(x_{r}^{(3)}x_{2}^{(3)}y_{1}^{(3)}\right) & = & P\left(x_{r}^{(3)}x_{2}^{(3)}\right)P\left(y_{1}^{(3)}|x_{r}^{(3)}x_{2}^{(3)}\right),\\ P\left(x_{r}^{(4)}x_{1}^{(4)}y_{2}^{(4)}\right) & = & P\left(x_{r}^{(4)}x_{1}^{(4)}\right)P\left(y_{2}^{(4)}|x_{r}^{(4)}x_{1}^{(4)}\right), \end{array} $$

must satisfy

$${} \begin{aligned} R_{1}&\leq\min\left\{ \left(t_{1}I\left(X_{1}^{(1)};Y_{r}^{(1)}|X_{2}^{(1)}\right)+t_{4}I\left(X_{1}^{(4)};Y_{2}^{(4)}|X_{r}^{(4)}\right)\right)\!,\right.\\&\quad\qquad\left.\left(t_{2}I\left(X_{r}^{(2)};Y_{2}^{(2)}\right)+t_{4}I\left(X{}_{1}^{(4)},X_{r}^{(4)};Y_{2}^{(4)}\right)\right)\right\} \end{aligned} $$
((41))
$${} \begin{aligned} R_{2}&\leq\min\left\{ \left(t_{1}I\left(X_{2}^{(1)};Y_{r}^{(1)}|X_{1}^{(1)}\right)+t_{3}I\left(X_{2}^{(3)};Y_{1}^{(3)}|X_{r}^{(3)}\right)\right)\!,\right.\\&\quad\qquad\left.\left(t_{2}I\left(X_{r}^{(2)};Y_{1}^{(2)}\right)+t_{3}I\left(X_{2}^{(3)},X_{r}^{(3)};Y_{1}^{(3)}\right)\right)\right\} \end{aligned} $$
((42))

where all t m are non-negative subject to \(\overset {4}{\underset {m=1}{\sum }}t_{m}=1\).

Proof.

For a half-duplex relay network with k phases, in which the sequence of phases and their time fractions are fixed, any achievable rate R of information flow is upper bounded as follows [53]:

$$R\leq\underset{S}{\min}\overset{k}{\underset{m=1}{\sum}}t_{m}I\left(X_{S}^{(m)};Y_{S^{c}}^{(m)}|X_{S^{c}}^{(m)}\right), $$

where a cut partitions the nodes into two sets, \(S\) and its complement \(S^{c}\), such that the source nodes are in \(S\) and the destination nodes are in \(S^{c}\). Using this, we bound \(R_{1}\) and \(R_{2}\). For the communication rate from node 1 to node 2, i.e., \(R_{1}\), we consider two cuts, \(S=\{1\}\) and \(S=\{1,r\}\). As seen from Fig. 2, node 1 transmits to the relay and to node 2 only in phases 1 and 4, i.e., information flows from \(S=\{1\}\) to \(S^{c}=\{r,2\}\) only in phases 1 and 4. Thus,

$$ \begin{aligned} \overset{4}{\underset{m=1}{\sum}}t_{m}I\left(X_{S}^{(m)};Y_{S^{c}}^{(m)}|X_{S^{c}}^{(m)}\right)&=t_{1}I\left(X_{1}^{(1)};Y_{r}^{(1)}|X_{2}^{(1)}\right)\\&\quad+t_{4}I\left(X_{1}^{(4)};Y_{2}^{(4)}|X_{r}^{(4)}\right). \end{aligned} $$
((43))

On the other hand, information flows from \(S=\{1,r\}\) to \(S^{c}=\{2\}\) only in phases 2 and 4. Therefore, we get

$$ \begin{aligned} \overset{4}{\underset{m=1}{\sum}}t_{m}I\left(X_{S}^{(m)};Y_{S^{c}}^{(m)}|X_{S^{c}}^{(m)}\right)&=t_{2}I\left(X_{r}^{(2)};Y_{2}^{(2)}\right)\\&\quad+t_{4}I\left(X_{1}^{(4)},X_{r}^{(4)};Y_{2}^{(4)}\right). \end{aligned} $$
((44))

Now, minimizing over the two cuts, i.e., over (43) and (44), yields the desired bound in (41). Similarly, we can obtain the bound on \(R_{2}\) given in (42).

For the Gaussian model, we can upper bound the various mutual information terms as follows [53]:

$$\begin{array}{@{}rcl@{}} I\left(X_{1}^{(1)};Y_{r}^{(1)}|X_{2}^{(1)}\right) & \leq & C\left({g_{1}^{2}}P_{1}\right),\\ I\left(X_{2}^{(1)};Y_{r}^{(1)}|X_{1}^{(1)}\right) & \leq & C\left({g_{2}^{2}}P_{2}\right),\\ I\left(X_{1}^{(4)};Y_{2}^{(4)}|X_{r}^{(4)}\right) & \leq & C\left({g_{3}^{2}}P_{1}\right),\\ I\left(X_{2}^{(3)};Y_{1}^{(3)}|X_{r}^{(3)}\right) & \leq & C\left({g_{3}^{2}}P_{2}\right),\\ I\left(X_{r}^{(2)};Y_{1}^{(2)}\right) & \leq & C\left({g_{1}^{2}}P_{r}\right),\\ I\left(X_{r}^{(2)};Y_{2}^{(2)}\right) & \leq & C\left({g_{2}^{2}}P_{r}\right),\\ I\left(X_{2}^{(3)},X_{r}^{(3)};Y_{1}^{(3)}\right) & \leq & C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right),\\ I\left(X{}_{1}^{(4)},X_{r}^{(4)};Y_{2}^{(4)}\right) & \leq & C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right), \end{array} $$

where ρ 1r is the correlation coefficient between \(X{}_{1}^{(4)}\) and \(X_{r}^{(4)}\) and ρ 2r is the correlation coefficient between \(X{}_{2}^{(3)}\) and \(X_{r}^{(3)}\). Using these mutual information terms at the outer bounds, given in (41) and (42), we get:

$$\begin{array}{@{}rcl@{}} R_{1} & \leq&\min\left\{ \left(\vphantom{t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)}t_{1}C\left({g_{1}^{2}}P_{1}\right)+t_{4}C\left({g_{3}^{2}}P_{1}\right)\right),\left(t_{2}C\left({g_{2}^{2}}P_{r}\right)\right.\right.\\ &&+\left.\left.t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\right)\right\}, \end{array} $$
((45))
$$\begin{array}{@{}rcl@{}} R_{2}&\leq&\min\left\{\!\vphantom{t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)} \left(t_{1}C\left({g_{2}^{2}}P_{2}\right)+t_{3}C\left({g_{3}^{2}}P_{2}\right)\right),\left(t_{2}C\left({g_{1}^{2}}P_{r}\right)\right.\right.\\ &&+\left.\left. t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)\right)\right\}. \end{array} $$
((46))
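As a sanity check, the outer bounds (45) and (46) can be evaluated numerically. The sketch below uses C(x) = (1/2) log2(1 + x) and the channel, power, and correlation symbols defined above; the function name and argument layout are our own and serve only as an illustration.

```python
from math import log2, sqrt

def C(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * log2(1.0 + x)

def outer_bounds(t, g, P, rho1r, rho2r):
    """Evaluate the cut-set outer bounds (45) and (46).

    t = (t1, t2, t3, t4) phase durations, g = (g1, g2, g3) channel gains,
    P = (P1, P2, Pr) transmit powers, rho1r/rho2r correlation coefficients.
    """
    t1, t2, t3, t4 = t
    g1, g2, g3 = g
    P1, P2, Pr = P
    # Bound (45): minimum over the two cuts for R1.
    R1 = min(t1 * C(g1**2 * P1) + t4 * C(g3**2 * P1),
             t2 * C(g2**2 * Pr)
             + t4 * C(g2**2 * Pr + g3**2 * P1 + 2 * rho1r * g2 * g3 * sqrt(Pr * P1)))
    # Bound (46): minimum over the two cuts for R2.
    R2 = min(t1 * C(g2**2 * P2) + t3 * C(g3**2 * P2),
             t2 * C(g1**2 * Pr)
             + t3 * C(g1**2 * Pr + g3**2 * P2 + 2 * rho2r * g1 * g3 * sqrt(Pr * P2)))
    return R1, R2
```

For instance, with unit gains and powers, zero correlation, and equal phase durations, both bounds reduce to (1/4)C(1) + (1/4)C(1) = 1/4 bit per channel use.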

4.1.1 Linear resource allocation problem

In (45) and (46), the phase durations t 1, t 2, t 3, and t 4 are not yet determined. Since the capacity region, which should be the convex hull of all achievable rate pairs (R 1,R 2), is two-dimensional, it has no unique maximum and we cannot determine optimal phase durations directly. To resolve this, we use an alternative metric commonly used in [3, 6]: we maximize the sum rate, i.e., R 1+R 2. Thus, we have the following optimization problem with t 1, t 2, t 3, and t 4 as optimization parameters:

$${} {\fontsize{8.3pt}{12.6pt}\selectfont{\begin{aligned} \underset{t_{1},t_{2},t_{3},t_{4}}{\max} &\qquad\qquad\qquad\qquad\qquad\quad R_{1}+R_{2} \\ s.t. & \begin{array}{lll} R_{1}-t_{1}C\left({g_{1}^{2}}P_{1}\right)-t_{4}C\left({g_{3}^{2}}P_{1}\right) & \leq & 0\\ R_{1}-t_{2}C\left({g_{2}^{2}}P_{r}\right)-t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right) & \leq & 0\\ R_{2}-t_{1}C\left({g_{2}^{2}}P_{2}\right)-t_{3}C\left({g_{3}^{2}}P_{2}\right) & \leq & 0\\ R_{2}-t_{2}C\left({g_{1}^{2}}P_{r}\right)-t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right) & \leq & 0\\ t_{1}+t_{2}+t_{3}+t_{4} & = & 1\\ 0<t_{1},t_{2},t_{3},t_{4}<1 \end{array} \end{aligned}}} $$
((47))

This problem can easily be transformed into a standard linear program and solved.
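To make the resource allocation concrete, the sketch below maximizes the sum rate of (47) by a coarse grid search over the phase durations with t1 + t2 + t3 + t4 = 1; in practice one would feed the equivalent linear program to an LP solver instead. The key simplification is that for a fixed time allocation, the optimal R1 and R2 are just the minima of their rate caps, so only the durations need to be searched. Function and variable names are our own.

```python
from math import log2, sqrt
from itertools import product

def C(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * log2(1.0 + x)

def max_sum_rate(g, P, rho1r, rho2r, steps=20):
    """Grid search over phase durations for the sum-rate problem (47)."""
    g1, g2, g3 = g
    P1, P2, Pr = P
    best, best_t = 0.0, (1.0, 0.0, 0.0, 0.0)
    n = steps
    for i, j, k in product(range(n + 1), repeat=3):
        if i + j + k > n:
            continue
        t1, t2, t3 = i / n, j / n, k / n
        t4 = 1.0 - t1 - t2 - t3
        # For fixed durations, the largest feasible R1 and R2 are the
        # minima of the rate constraints in (47).
        R1 = min(t1 * C(g1**2 * P1) + t4 * C(g3**2 * P1),
                 t2 * C(g2**2 * Pr)
                 + t4 * C(g2**2 * Pr + g3**2 * P1 + 2 * rho1r * g2 * g3 * sqrt(Pr * P1)))
        R2 = min(t1 * C(g2**2 * P2) + t3 * C(g3**2 * P2),
                 t2 * C(g1**2 * Pr)
                 + t3 * C(g1**2 * Pr + g3**2 * P2 + 2 * rho2r * g1 * g3 * sqrt(Pr * P2)))
        if R1 + R2 > best:
            best, best_t = R1 + R2, (t1, t2, t3, t4)
    return best, best_t
```

The grid search returns the best sum rate found and the corresponding time allocation; refining `steps` tightens the approximation to the LP optimum.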

4.2 Capacity results

Corollary 1.

The proposed scheme via the 2-CoMABC protocol, as shown in Fig. 2, achieves the capacity region of the half-duplex Gaussian two-way relay channel to within 0.5 bit.

Proof.

We first calculate the gap for R 1. Comparing the right-hand sides (RHSs) of (5) and (45), the second term in both minimizations is the same, and the first terms differ by at most \(\frac {1}{2}\) bit. To see this, we have:

$${} \begin{aligned} \frac{1}{2}\log\left(1+{g_{1}^{2}}P_{1}\right)-\left[\frac{1}{2}\log\left(\frac{{g_{1}^{2}}P_{1}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}+{g_{1}^{2}}P_{1}\right)\right]^{+}\\\leq\frac{1}{2}\log\left(2-\frac{{g_{1}^{2}}P_{1}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}\right)\leq\frac{1}{2}, \end{aligned} $$
((48))

where (48) follows from the fact that the maximum gap occurs when \(\frac {{g_{1}^{2}}P_{1}}{{g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}}+{g_{1}^{2}}P_{1}=1\). Now, from the simple inequality min(a 1,a 2)− min(b 1,b 2)≤ max(a 1−b 1,a 2−b 2), the RHSs of (5) and (45) differ by at most \(\frac {1}{2}\) bit. The same holds for (6) and (46); thus, the achievable rate region given by (5) and (6) is within 0.5 bit of the outer bound for each user, regardless of the channel parameters.
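The bound in (48) can also be checked numerically: the sketch below sweeps the gap between C(g1²P1) and the lattice term [½ log2(s1/(s1+s2) + s1)]⁺ over a wide range of received SNRs and confirms it never exceeds 1/2 bit. The shorthands s1 = g1²P1 and s2 = g2²P2 are ours.

```python
from math import log2

def C(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * log2(1.0 + x)

def gap(s1, s2):
    """Gap in (48): C(s1) minus [0.5*log2(s1/(s1+s2) + s1)]^+,
    where s1 = g1^2*P1 and s2 = g2^2*P2."""
    inner = s1 / (s1 + s2) + s1
    achievable = max(0.0, 0.5 * log2(inner))
    return C(s1) - achievable

# Sweep both received SNRs over nine orders of magnitude.
snrs = [10.0 ** (k / 2.0) for k in range(-6, 13)]  # 1e-3 ... 1e6
max_gap = max(gap(s1, s2) for s1 in snrs for s2 in snrs)
```

The maximum observed gap stays below 1/2 bit, and approaches it when s1 ≈ 1 while s2 is much larger, in agreement with the worst case identified above.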

Now, we investigate the achievable rate region of the GTWRC via 2-CoMABC protocol in the high SNR regime.

Corollary 2.

At high SNRs (i.e., \({g_{1}^{2}}P_{1}\gg 1\) and \({g_{2}^{2}}P_{2}\gg 1\)), the capacity region of the GTWRC via 2-CoMABC protocol is given by

$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} R_{1} & \leq \min\left\{ \left(t_{1}C\left({g_{1}^{2}}P_{1}\right)+t_{4}C\left({g_{3}^{2}}P_{1}\right)-o(1)\right),\right.\\ &\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left(t_{2}C\left({g_{2}^{2}}P_{r}\right)+t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+ 2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\right)\right\}, \end{aligned}}} $$
((49))
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} R_{2} & \leq \min\left\{ \left(t_{1}C\left({g_{2}^{2}}P_{2}\right)+t_{3}C\left({g_{3}^{2}}P_{2}\right)-o(1)\right),\right. \\ & \left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left(t_{2}C\left({g_{1}^{2}}P_{r}\right)+t_{3}C\left({g_{1}^{2}}P_{r} +{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)\right)\right\}, \end{aligned}}} $$
((50))

where o(1)→0 as \({g_{1}^{2}}P_{1},{g_{2}^{2}}P_{2}\rightarrow \infty \).

To evaluate the achievable rate region of the GTWRC via 2-CoMABC protocol that is given by (5) and (6), in the high SNR regime, we consider \({g_{1}^{2}}P_{1}\gg 1\) and \({g_{2}^{2}}P_{2}\gg 1\). Thus, we have:

$$\begin{array}{@{}rcl@{}} R_{1} & \leq & \min\left(t_{1}\left[\frac{1}{2}\log\left({g_{1}^{2}}P_{1}\right)\right]^{+}+t_{4}C\left({g_{3}^{2}}P_{1}\right),t_{2}C \left({g_{2}^{2}}P_{r}\right)\right.\\&&+\left. t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}+2\rho_{1r}g_{2}g_{3}\sqrt{P_{r}P_{1}}\right)\vphantom{\left[\frac{1}{2}\log\left({g_{1}^{2}}P_{1}\right)\right]}\!\right),\\ R_{2} & \leq & \min\left(t_{1}\left[\frac{1}{2}\log\left({g_{2}^{2}}P_{2}\right)\right]^{+}+t_{3}C\left({g_{3}^{2}}P_{2}\right),t_{2}C \left({g_{1}^{2}}P_{r}\right)\right.\\&&+\left.t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}+2\rho_{2r}g_{1}g_{3}\sqrt{P_{r}P_{2}}\right)\vphantom{\left[\frac{1}{2}\log\left({g_{1}^{2}}P_{1}\right)\right]}\!\right). \end{array} $$

Comparing this region with the outer bound in (45) and (46) for \({g_{1}^{2}}P_{1}\gg 1\) and \({g_{2}^{2}}P_{2}\gg 1\), we see that the capacity region is achieved at high SNRs.
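The o(1) convergence behind Corollary 2 can be illustrated directly: the loss of the achievable term relative to the outer bound is at most C(s1) − [½ log2 s1]⁺ = ½ log2(1 + 1/s1), which vanishes as s1 → ∞. The shorthand s1 = g1²P1 and the function name below are ours.

```python
from math import log2

def C(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * log2(1.0 + x)

def high_snr_loss(s1):
    """Difference between C(s1) and its high-SNR approximation [0.5*log2(s1)]^+."""
    return C(s1) - max(0.0, 0.5 * log2(s1))

# Evaluate the loss for s1 = 1, 10, ..., 10^6.
losses = [high_snr_loss(10.0 ** k) for k in range(0, 7)]
```

The loss starts at exactly 1/2 bit for s1 = 1 and decays monotonically toward zero, which is the o(1) term in (49) and (50).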

5 Numerical results

In this section, we compare the achievable rate regions and the outer bounds, in the sum-rate sense, of the bidirectional coded cooperation protocols: the MABC protocol [3], the TDBC protocol [3], the CoMABC protocol [38], the HBC protocol [3], the 6-phase protocol [27], and our proposed scheme, 2-CoMABC. Since our protocol has four phases while the MABC and TDBC protocols have two and three phases, respectively, we compare the achievable rate region of our protocol with the outer bounds of the MABC and TDBC protocols.

  • MABC protocol (outer bound): The MABC protocol is a two-phase protocol (phases 1 and 2 of Fig. 1) where both users simultaneously transmit during the first phase and the relay alone transmits during the second. The outer bound of the MABC protocol is given by [3]:

    $$\begin{array}{@{}rcl@{}} R_{1} & \leq & \min\left\{ t_{1}C\left({g_{1}^{2}}P_{1}\right),t_{2}C\left({g_{2}^{2}}P_{r}\right)\right\},\\ R_{2} & \leq & \min\left\{ t_{1}C\left({g_{2}^{2}}P_{2}\right),t_{2}C\left({g_{1}^{2}}P_{r}\right)\right\}. \end{array} $$
  • TDBC protocol (outer bound): The second protocol considers sequential transmissions from the two users followed by a transmission from the relay:

    $${\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} R_{1} & \leq \min\left\{ t_{1}C\left(P_{1}\left({g_{1}^{2}}+{g_{3}^{2}}\right)\right),t_{1}C\left({g_{3}^{2}}P_{1}\right)+t_{3}C\left({g_{2}^{2}}P_{r}\right)\right\},\\ R_{2} & \leq \min\left\{ t_{2}C\left(P_{2}\left({g_{2}^{2}}+{g_{3}^{2}}\right)\right),t_{2}C\left({g_{3}^{2}}P_{2}\right)+t_{3}C\left({g_{1}^{2}}P_{r}\right)\right\}.\end{aligned}}} $$
  • In [38], using doubly nested lattice codes, an achievable rate region for the three-phase CoMABC protocol is obtained, given by [38]

    $$\begin{array}{@{}rcl@{}} R_{1} & \leq & \min\left\{ t_{1}R_{1,r}^{*}+t_{3}C\left({g_{3}^{2}}P_{1}\right),t_{2}C\left({g_{2}^{2}}P_{r}\right)\right.\\&&+\left. t_{3}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}\right)\right\},\\ R_{2} & \leq & \min\left\{ t_{1}R_{2,r}^{*},t_{2}C\left({g_{1}^{2}}P_{r}\right)\right\}. \end{array} $$
  • HBC protocol (achievable rate region): The HBC protocol contains four phases (phases 1, 2, 5, and 6 of Fig. 1) which starts with the broadcast phases (5 and 6) followed by the MABC phases (1 and 2). In [3], it is shown that for the HBC protocol, the following rate region is achievable:

    $$\begin{array}{@{}rcl@{}} {}R_{1} & \leq & \min\left\{ t_{1}C\left({g_{1}^{2}}P_{1}\right)+t_{3}C\left({g_{1}^{2}}P_{1}\right)\right., \\ &&t_{1}C\left({g_{3}^{2}}P_{1}\right) + \left.t_{4}C\left({g_{2}^{2}}P_{r}\right)\right\}, \end{array} $$
    ((51))
    $$\begin{array}{@{}rcl@{}} {}R_{2} & \leq & \min\left\{ t_{2}C\left({g_{2}^{2}}P_{2}\right)+t_{3}C\left({g_{2}^{2}}P_{2}\right)\right., \\ &&t_{2}C\left({g_{3}^{2}}P_{2}\right) +\left. t_{4}C\left({g_{1}^{2}}P_{r}\right)\right\}, \end{array} $$
    ((52))
    $$\begin{array}{@{}rcl@{}} {}R_{1}+R_{2} & \leq & t_{1}C\left({g_{1}^{2}}P_{1}\right)+t_{2}C\left({g_{2}^{2}}P_{2}\right)\\&&+t_{3}C\left({g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}\right). \end{array} $$
    ((53))

An outer bound for the HBC protocol using the cut-set bound is given in [3]. Since it is not clear that jointly Gaussian distributions are optimal, the outer bound of the HBC protocol is difficult to compute numerically [3]. However, using the approach presented in [53], we can bound the outer bound given in [3] as follows for the Gaussian case:

  • HBC protocol (outer bound)

    $$\begin{array}{@{}rcl@{}} R_{1} & \leq & \min\left\{ t_{1}C\left({g_{1}^{2}}P_{1}+{g_{3}^{2}}P_{1}\right)+t_{3}C\left({g_{1}^{2}}P_{1}\right),\right.\\ && \left.t_{1}C\left({g_{3}^{2}}P_{1}\right)+t_{4}C\left({g_{2}^{2}}P_{r}\right)\right\},\\ R_{2} & \leq & \min\left\{ t_{2}C\left({g_{2}^{2}}P_{2}+{g_{3}^{2}}P_{2}\right)+t_{3}C\left({g_{2}^{2}}P_{2}\right),\right.\\&&\left. t_{2}C\left({g_{3}^{2}}P_{2}\right)+t_{4}C\left({g_{1}^{2}}P_{r}\right)\right\}. \end{array} $$
  • 6-phase protocol (achievable rate region) [27]

    $$\begin{array}{@{}rcl@{}} {}R_{1} & \leq & R_{1,2}^{(4)}+\min\left\{\vphantom{\left.t_{2}C\left({g_{2}^{2}}P_{r}\right)+R_{r,2}^{(4)}\right\}}\left(t_{1}+t_{5}\right)C\left({g_{1}^{2}}P_{1}\right),t_{5}C\left({g_{3}^{2}}P_{1}\right)\right.\\&&+\left.t_{2}C\left({g_{2}^{2}}P_{r}\right)+R_{r,2}^{(4)}\right\},\\ {}R_{2} & \leq & R_{2,1}^{(3)}+\min\left\{\vphantom{\left.t_{2}C\left({g_{2}^{2}}P_{r}\right)+R_{r,2}^{(4)}\right\}} \left(t_{1}+t_{6}\right)C\left({g_{2}^{2}}P_{2}\right),t_{6}C\left({g_{3}^{2}}P_{2}\right)\right.\\&&+\left.t_{2}C\left({g_{1}^{2}}P_{r}\right)+R_{r,1}^{(3)}\right\},\\{} R_{1}+R_{2} & \leq & t_{5}C\left({g_{1}^{2}}P_{1}\right)+R_{1,2}^{(4)}+t_{6}C\left({g_{2}^{2}}P_{2}\right)\\&&+t_{1}C\left({g_{1}^{2}}P_{1}+{g_{2}^{2}}P_{2}\right)+R_{2,1}^{(3)},\\ {}R_{1,2}^{(4)} & \leq & t_{4}C\left({g_{3}^{2}}P_{1}\right),\:R_{r,2}^{(4)}\leq t_{4}C\left({g_{2}^{2}}P_{r}\right),\:R_{1,2}^{(4)}\\&&+R_{r,2}^{(4)}\leq t_{4}C\left({g_{2}^{2}}P_{r}+{g_{3}^{2}}P_{1}\right),\\ {}R_{2,1}^{(3)} & \leq & t_{3}C\left({g_{3}^{2}}P_{2}\right),\:R_{r,1}^{(3)}\leq t_{3}C\left({g_{1}^{2}}P_{r}\right),\:R_{2,1}^{(3)}\\&&+R_{r,1}^{(3)}\leq t_{3}C\left({g_{1}^{2}}P_{r}+{g_{3}^{2}}P_{2}\right). \end{array} $$

Kim et al. [3] show that the achievable rate region of the HBC protocol contains points outside the outer bounds of the MABC and TDBC protocols. In [27], it is shown that a 6-phase protocol can achieve a larger rate region than that of the HBC protocol in [3]. Note that this improvement comes from increasing the number of transmission phases from 4 to 6, which incurs higher complexity. Here, we numerically compare 2-CoMABC with the above-mentioned protocols. When comparing sum-rate outer bounds or achievable sum rates across protocols, linear programming is used to optimize the fraction of time allocated to each phase. In the following, we assume that the power constraint of every node equals P and define \(\text {SNR}_{i}={g_{i}^{2}}P\) for i∈{1,2,3}.

We compare the sum rates in an environment with path loss. We assume the channel gains are \(g_{1}=\left (1+d\right)^{-\frac {\gamma }{2}}\), \(g_{2}=\left (1-d\right)^{-\frac {\gamma }{2}}\), and \(g_{3}=2^{-\frac {\gamma }{2}}\), where d is the position of the relay and γ is the path loss exponent.
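Under this geometry, the gains used in Figs. 3 and 4 are simple to compute. The helper below (its name is ours) returns (g1, g2, g3) for a relay position d ∈ (−1, 1) and path loss exponent γ; note the normalization g1 = g2 = 1 at d = 0.

```python
def channel_gains(d, gamma):
    """Path-loss channel gains for relay position d in (-1, 1):
    g1 = (1+d)^(-gamma/2), g2 = (1-d)^(-gamma/2), g3 = 2^(-gamma/2)."""
    if not -1.0 < d < 1.0:
        raise ValueError("relay position d must lie strictly between -1 and 1")
    g1 = (1.0 + d) ** (-gamma / 2.0)
    g2 = (1.0 - d) ** (-gamma / 2.0)
    g3 = 2.0 ** (-gamma / 2.0)
    return g1, g2, g3
```

By symmetry, moving the relay from d to −d swaps g1 and g2 while leaving the direct gain g3 unchanged.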

The achievable sum rate and the sum-rate outer bound versus d for the HBC and 2-CoMABC protocols are given in Fig. 3. The curves are evaluated at P=5 dB and 15 dB, respectively, with γ=4 and ρ1r = ρ2r = 0.2. As expected from Corollary 2, for 2-CoMABC the inner bound meets the outer bound at high SNR and outperforms the HBC protocol. Moreover, at P=15 dB, the achievable rate of the 2-CoMABC protocol exceeds the HBC outer bound. For P=5 dB, there is a gap between the inner and outer bounds of the 2-CoMABC protocol. This gap tends to zero at d=±1 because \(R_{1,r}^{*}\rightarrow C\left ({g_{1}^{2}}P_{1}\right)\) or \(R_{2,r}^{*}\rightarrow C\left ({g_{2}^{2}}P_{2}\right)\).

Fig. 3 Achievable sum rate and sum-rate outer bound vs. the relay position for different SNRs (γ=4, ρ1r = ρ2r = 0.2)

In Fig. 4, we compare the achievable sum rates and sum-rate outer bounds of the 2-CoMABC and HBC protocols for different path loss exponents. The results are for γ=2,5 with P=10 dB. Since we have normalized the channel gain at d=0, the two groups of curves meet at this position. As we observe, for smaller γ, the gap between the 2-CoMABC and HBC protocols is significantly larger than in an environment with a larger path loss exponent. Moreover, the achievable sum rate and the sum-rate outer bound of the 2-CoMABC protocol are larger than those of HBC in all cases considered.

Fig. 4 Achievable sum rate and sum-rate outer bound vs. the relay position for different path loss exponents (ρ1r = ρ2r = 0.2)

In Figs. 5 and 6, we compare the rate regions of the different protocols in asymmetric and symmetric scenarios. For the asymmetric case, the achievable rate region of our proposed scheme not only contains the achievable rate regions of all previous protocols but also includes some rate pairs outside the region of the 6-phase protocol in [27]. For the symmetric case, the achievable rate region of our proposed scheme strictly contains those of all other protocols.

Fig. 5 Achievable rate regions for six protocols in an asymmetric scenario: SNR1=35 dB, SNR2=30 dB, and SNR3=13 dB (ρ1r = ρ2r = 0.3)

Fig. 6 Achievable rate regions for six protocols in the symmetric scenario: SNR1=SNR2=SNR3=10 dB (ρ1r = ρ2r = 0.3)

In Fig. 7, we compare the rate regions of the protocols at low SNRs: SNR1=−3 dB, SNR2=−4 dB, and SNR3=2 dB. As we observe, the performance of our transmission scheme via the 2-CoMABC protocol is similar to that of the 6-phase protocol, and both schemes achieve the same rate region. In addition, this rate region coincides with the outer bound of the TDBC protocol. Note that the achievable rate region of the CoMABC protocol is zero, i.e., the scheme proposed in [38] achieves no positive rate pair in this setting.

Fig. 7 Achievable rate regions for five protocols: SNR1=−3 dB, SNR2=−4 dB, and SNR3=2 dB

Finally, we compare the performance of the HBC, 6-phase, CoMABC, TDBC, MABC, and 2-CoMABC protocols (in terms of achievable sum rate) for P=15 dB, g2=0 dB, and g3=−5 dB with varying \(G_{1}={g_{1}^{2}}\). Figure 8 shows that the achievable sum rate of 2-CoMABC is larger than that of all other protocols. Note that in this example, the 2-CoMABC protocol, which uses only four transmission phases, attains a larger sum rate than the 6-phase protocol in [27].

Fig. 8 Achievable sum rates of different protocols. Channel parameters are P=15 dB, g2=0 dB, and g3=−5 dB

6 Conclusions

In this paper, the Gaussian two-way relay channel in the half-duplex mode, operating in four phases, is studied. Using superposition coding, a scheme that achieves the outer bound to within 0.5 bit is proposed. The scheme employs both structured codes and random coding. In phase 1 (MA phase), the message of each user is decomposed into two parts to exploit structured codes. In phase 2 (BC phase) and phases 3 and 4 (cooperative phases), random coding is applied. In the high SNR regime, the proposed scheme meets the cut-set outer bound, and thus the capacity region is achieved. Using numerical examples, we also showed that our 2-CoMABC protocol outperforms the well-known HBC protocol (which has the same number of transmission phases). Although a comparison over a few examples may not, in general, indicate which scheme outperforms the others, similar behavior has been observed when evaluating the achievable sum rates and rate regions in many other examples with different channel parameters.

References

  1. E. van der Meulen, Three-terminal communication channels. Adv. Appl. Probab. 3(1), 120–154 (1971).

  2. T.M. Cover, A.A. El Gamal, Capacity theorems for the relay channel. IEEE Trans. Inf. Theory 25(5), 572–584 (1979).

  3. S. Kim, P. Mitran, V. Tarokh, Performance bounds for bidirectional coded cooperation protocols. IEEE Trans. Inf. Theory 54(11), 5235–5241 (2008).

  4. B. Rankov, A. Wittneben, Spectral efficient protocols for half-duplex fading relay channels. IEEE J. Sel. Areas Commun. 25(2), 379–389 (2007).

  5. B. Rankov, A. Wittneben, Spectral efficient protocols for nonregenerative half-duplex relaying, in Proc. 43rd Allerton Conf. Commun., Contr., Comput. (Monticello, IL, 2005).

  6. S. Kim, N. Devroye, P. Mitran, V. Tarokh, Achievable rate regions and performance comparison of half duplex bi-directional relaying protocols. IEEE Trans. Inf. Theory 57(10), 6405–6418 (2011).

  7. S. Smirani, M. Kamoun, M. Sarkiss, A. Zaidi, P. Duhamel, Achievable rate regions for two-way relay channel using nested lattice coding. IEEE Trans. Wireless Commun. 13(10), 5607–5620 (2014).

  8. K. Ishaque Ashar, V. Prathyusha, S. Bhashyam, A. Thangaraj, Outer bounds for the capacity region of a Gaussian two-way relay channel, in Proc. 50th Allerton Conf. Commun., Contr., Comput. (Monticello, IL, 2012), pp. 1645–1652.

  9. T. Koike-Akino, P. Popovski, V. Tarokh, Optimized constellations for two-way wireless relaying with physical network coding. IEEE J. Sel. Areas Commun. 27(5), 773–787 (2009).

  10. S. Zhang, S.C. Liew, Channel coding and decoding in a relay system operated with physical-layer network coding. IEEE J. Sel. Areas Commun. 27(5), 788–796 (2009).

  11. Q.F. Zhou, Y. Li, F.C.M. Lau, B. Vucetic, Decode-and-forward two-way relaying with network coding and opportunistic relay selection. IEEE Trans. Commun. 58(11), 3070–3076 (2010).

  12. W. Nam, S. Chung, Y.H. Lee, Capacity of the Gaussian two-way relay channel to within 1/2 bit. IEEE Trans. Inf. Theory 56(11), 5488–5494 (2010).

  13. M.P. Wilson, K. Narayanan, H. Pfister, A. Sprintson, Joint physical layer coding and network coding for bidirectional relaying. IEEE Trans. Inf. Theory 56(11), 5641–5654 (2010).

  14. T.J. Oechtering, H. Boche, Optimal resource allocation for a bidirectional regenerative half-duplex relaying, in Proc. ISITA (Seoul, Korea, 2006).

  15. C. Schnurr, T.J. Oechtering, S. Stanczak, Achievable rates for the restricted half-duplex two-way relay channel, in Proc. 41st Asilomar Conf. Signals, Systems, and Computers (Pacific Grove, CA, 2007).

  16. J. Zhao, Analysis and design of communication techniques in spectrally efficient wireless relaying systems. Master's thesis, ETH Zurich (2010).

  17. R. Ahlswede, N. Cai, S.-Y.R. Li, R.W. Yeung, Network information flow. IEEE Trans. Inf. Theory 46(5), 1204–1216 (2000).

  18. Y. Wu, P. Chou, S.-Y. Kung, Information exchange in wireless networks with network coding and physical-layer broadcast, in Proc. 39th Conf. Inf. Sci. Syst. (CISS) (Baltimore, MD, 2005).

  19. P. Larsson, N. Johansson, K.-E. Sunell, Coded bidirectional relaying, in Proc. IEEE Vehicular Technology Conf. (VTC) (Melbourne, Australia, 2006).

  20. P. Popovski, H. Yomo, Physical network coding in two-way wireless relay channels, in Proc. IEEE Int. Conf. Commun. (ICC) (Glasgow, Scotland, 2007), pp. 707–712.

  21. J. Liu, M. Tao, Y. Xu, X. Wang, Superimposed XOR: a new physical layer network coding scheme for two-way relay channels, in Proc. IEEE GLOBECOM (Honolulu, HI, 2009), pp. 1–6.

  22. Q. Zhou, Y. Li, F. Lau, B. Vucetic, Decode-and-forward two-way relaying with network coding and opportunistic relay selection. IEEE Trans. Commun. 58(11), 3070–3076 (2010).

  23. I.-J. Baik, S.-Y. Chung, Network coding for two-way relay channels using lattices, in Proc. IEEE Int. Conf. Commun. (ICC) (Beijing, China, 2008), pp. 3898–3902.

  24. S. Ghasemi-Goojani, H. Behroozi, On the Ice-Wine problem: recovering linear combination of codewords over the Gaussian multiple access channel, in Proc. IEEE Inf. Theory Workshop (ITW) (Hobart, Australia, 2014).

  25. S. Ghasemi-Goojani, H. Behroozi, Nested lattice codes for Gaussian two-way relay channels. Available at http://arxiv.org/abs/1301.6291, accessed Jan. 2013.

  26. P. Zhong, M. Vu, Partial decode-forward coding schemes for the Gaussian two-way relay channel, in Proc. IEEE Int. Conf. Commun. (ICC) (Ottawa, Canada, 2012).

  27. C. Gong, G. Yue, X. Wang, A transmission protocol for a cognitive bidirectional shared relay system. IEEE J. Sel. Top. Sign. Proces. 5(1), 160–170 (2011).

  28. M. Khafagy, A. El-Keyi, M. Nafie, T. ElBatt, Degrees of freedom for separated and non-separated half-duplex cellular MIMO two-way relay channels, in Proc. IEEE Int. Conf. Commun. (ICC) (Ottawa, Canada, 2012).

  29. A. Zaidi, L. Vandendorpe, P. Duhamel, Lower bounds on the capacity regions of the relay channel and the cooperative relay-broadcast channel with non-causal side-information, in Proc. IEEE Int. Conf. Commun. (ICC) (Glasgow, Scotland, 2007), pp. 6005–6011.

  30. A. Zaidi, S.P. Kotagiri, J.N. Laneman, L. Vandendorpe, Cooperative relaying with state available non-causally at the relay. IEEE Trans. Inf. Theory 56(5), 2272–2298 (2010).

  31. A. Zaidi, S.P. Kotagiri, J.N. Laneman, L. Vandendorpe, Cooperative relaying with state at the relay, in Proc. IEEE Inf. Theory Workshop (ITW) (Porto, Portugal, 2008), pp. 139–143.

  32. A. Zaidi, S. Shamai, P. Piantanida, L. Vandendorpe, Bounds on the capacity of the relay channel with noncausal state at source. IEEE Trans. Inf. Theory 59(5), 2639–2672 (2013).

  33. A. Zaidi, L. Vandendorpe, Lower bounds on the capacity of the relay channel with states at the source. EURASIP J. Wireless Commun. Netw. 2009, 1–23 (2009).

  34. A. Zaidi, S. Shamai, P. Piantanida, L. Vandendorpe, Bounds on the capacity of the relay channel with noncausal state information at source, in Proc. IEEE ISIT (Austin, TX, 2010), pp. 639–643.

  35. A. Zaidi, L. Vandendorpe, Rate regions for the partially-cooperative relay broadcast channel with noncausal side information, in Proc. IEEE ISIT (Nice, France, 2007), pp. 1246–1250.

  36. A. Zaidi, L. Vandendorpe, Achievable rates for the Gaussian relay interferer channel with a cognitive source, in Proc. IEEE Int. Conf. Commun. (ICC) (Dresden, Germany, 2009), pp. 1–6.

  37. S. Ghasemi-Goojani, H. Behroozi, A new achievable rate region for the Gaussian two-way relay channel via hybrid broadcast protocol. IEEE Commun. Lett. 18(11), 1883–1886 (2014).

  38. Y. Tian, D. Wu, C. Yang, A. Molisch, Asymmetric two-way relay with doubly nested lattice codes. IEEE Trans. Wireless Commun. 11(2), 694–702 (2012).

  39. S. Bhashyam, A. Thangaraj, The Gaussian two-way diamond channel, in Proc. 51st Allerton Conf. Commun., Contr., Comput. (Monticello, IL, 2013), pp. 1292–1299.

  40. B. Nazer, M. Gastpar, Compute-and-forward: harnessing interference through structured codes. IEEE Trans. Inf. Theory 57(10), 6463–6486 (2011).

  41. U. Erez, R. Zamir, Achieving 1/2 log(1 + SNR) on the AWGN channel with lattice encoding and decoding. IEEE Trans. Inf. Theory 50(10), 2293–2314 (2004).

  42. J.H. Conway, N.J.A. Sloane, Sphere Packings, Lattices and Groups (Springer-Verlag, New York, 1992).

  43. B. Nazer, M. Gastpar, Computation over multiple-access channels. IEEE Trans. Inf. Theory 53(10), 3498–3516 (2007).

  44. R. Zamir, M. Feder, On lattice quantization noise. IEEE Trans. Inf. Theory 42(4), 1152–1159 (1996).

  45. U. Erez, S. Litsyn, R. Zamir, Lattices which are good for (almost) everything. IEEE Trans. Inf. Theory 51(10), 3401–3416 (2005).

  46. M. Nokleby, B. Aazhang, Lattice coding over the relay channel, in Proc. IEEE Int. Conf. Commun. (ICC) (Kyoto, Japan, 2011), pp. 1–5.

  47. T.M. Cover, Broadcast channels. IEEE Trans. Inf. Theory 18(1), 2–14 (1972).

  48. S. Zhang, S.C. Liew, P.P. Lam, Hot topic: physical-layer network coding, in Proc. 12th Annual Int. Conf. Mobile Computing and Networking (Los Angeles, CA, 2006), pp. 358–365.

  49. G. Poltyrev, On coding without restrictions for the AWGN channel. IEEE Trans. Inf. Theory 40(2), 409–417 (1994).

  50. T.M. Cover, J.A. Thomas, Elements of Information Theory, 2nd edn. (John Wiley & Sons, New York, 2006).

  51. G.D. Forney, On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener, in Proc. 41st Allerton Conf. Commun., Control, Comput. (Monticello, IL, 2003).

  52. M.A. Khojastepour, A. Sabharwal, B. Aazhang, Bounds on achievable rates for general multi-terminal networks with practical constraints, in Proc. 2nd Int. Workshop on Information Processing in Sensor Networks (IPSN) (Palo Alto, CA, 2003).

  53. M. Khojastepour, A. Sabharwal, B. Aazhang, On capacity of Gaussian cheap relay channel, in Proc. IEEE GLOBECOM (San Francisco, CA, 2003), pp. 1776–1780.

Acknowledgements

This work has been supported by the Iran NSF under Grant No. 93046836.

Author information


Correspondence to Hamid Behroozi.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ghasemi-Goojani, S., Behroozi, H. Lattice-coded cooperation protocol for the half-duplex Gaussian two-way relay channel. J Wireless Com Network 2015, 252 (2015). https://doi.org/10.1186/s13638-015-0483-2
