On the capacity of state-dependent Gaussian cognitive interference channel

A Gaussian cognitive interference channel with state (G-CICS) is studied. We focus on the two-sender, two-receiver case and consider the communication situation in which the two senders transmit a common message to the two receivers. Transmitter 1 knows only message W1, while transmitter 2, referred to as the cognitive user, knows both messages W1 and W2 as well as the channel state sequence, non-causally. Receiver 1 needs to decode only W1, while receiver 2 needs to decode both messages. We investigate the weak and moderate interference case, in which the channel gain a satisfies |a| ≤ 1, and derive inner and outer bounds on the capacity region in the regime of high state power, i.e., when the channel state sequence has unbounded variance. First, we show that the rate achievable by Gelfand-Pinsker coding vanishes in the high state power regime under a condition on the channel gain. In contrast, we propose a transmission scheme, based on lattice codes, that achieves positive rates independent of the interference. Our transmission scheme achieves the capacity region in the high signal-to-noise ratio (SNR) regime, and regardless of the channel parameters, the gap between the achievable rate region and the outer bound is at most 0.5 bits.


Introduction
In the exchange of information among many nodes, interference between different transmitter-receiver pairs is unavoidable. In the classical interference channel (IC), this interference exists between two different transmitters and receivers. In [1], Carleial, using superposition coding, obtains general bounds on the capacity region of discrete memoryless interference channels. By using rate splitting at the transmitters and sequential decoding at the destinations, Han and Kobayashi establish the best achievable rate region known to date [2]. Unfortunately, the problem of characterizing the capacity region of a general IC has been open for more than 30 years. Except for a few cases (the very strong and strong Gaussian IC, the sum capacity of the degraded Gaussian IC, and the very weak interference regime), characterizing the capacity region of a Gaussian IC is still an open problem [3][4][5][6]. Etkin et al., by deriving new outer bounds, show that an explicit version of the Han-Kobayashi scheme achieves the capacity region to within 1 bit for all channel parameters [7].
The cognitive interference channel, where one user has full non-causal knowledge of the other user's message, is studied in [8][9][10]. This setup is also referred to as the interference channel with degraded message sets.

*Correspondence: behroozi@sharif.edu, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
Recently, interference channels with state have received considerable attention. In general, channels with random states can model a time-varying wireless channel as well as interfering signals. The two-user state-dependent Gaussian interference channel where the state information is non-causally known at both encoders is studied in [11]. By proposing an active interference-cancellation mechanism, which is a generalized dirty-paper coding (DPC) [12] technique, some achievable rate regions for this channel are obtained. A Gaussian IC with the same state at both links which is scaled differently at two receivers is studied in [13]. For the very strong interference regime, as well as for the weak regime, the sum capacity is obtained under certain conditions on channel parameters [13]. In [14], a state-dependent Gaussian Z-interference channel model in the regime of high state power is investigated. By utilizing a layered coding scheme, inner and outer bounds on the capacity region are derived.
In [15], a model of cognitive state-dependent interference channels is studied, in which one of the transmitters knows both messages as well as the channel states in a non-causal manner, while the other transmitter knows only one of the messages and does not know the channel states. Each of the two decoders tries to decode only its intended message. By using a generalized binning principle, inner and outer bounds on the capacity region are established.
In this paper, we study the Gaussian cognitive interference channel with state (G-CICS) with two transmitters and two receivers (see Figure 1). In this model, transmitter 1 knows only message 1, while transmitter 2 (the cognitive transmitter) knows both messages 1 and 2. The state sequence is known only at transmitter 2; transmitter 1 does not know the channel state. The common message known to both transmitters, i.e., message 1, needs to be decoded at both receivers instead of at receiver 1 only. This model is investigated in [16], in which inner bounds are established by using superposition coding, rate splitting, and the Gelfand-Pinsker binning scheme. It is shown that the inner bounds coincide with the outer bounds for a degraded semi-deterministic channel and for channels that satisfy a less noisy condition. Gaussian channels are also studied, and inner and outer bounds are derived for them.
The main result of this paper is a novel transmission scheme for the Gaussian interference channel with state in which a common message must be recovered at two decoders. To reach this goal, we treat this channel as two state-dependent Gaussian multiple-access channels (MACs) and try to recover the common message at both decoders simultaneously. Prior to this work, different types of the state-dependent two-user MAC have been investigated in the literature (see, e.g., [17][18][19][20][21][22][23][24]). In [17], a two-user state-dependent multiple-access channel in which the state is known only at the encoder that sends both messages is investigated. By generalizing the Gelfand-Pinsker model, the capacity region for both non-causal and causal state information is characterized. If the state information is non-causally known only at the encoder that sends the common message, then the capacity region for the Gaussian scenario is characterized in some cases in [18]. In [19][20][21], the state-dependent two-user multiple-access channel in which the channel states are known non-causally at one of the encoders and only strictly causally at the other encoder is considered. By generalizing the framework of [21], the capacity region of this model is fully characterized in [19], and the optimal schemes for achieving the capacity region are also studied. In [22][23][24], the two-user multiple-access channel with state is considered, in which the states are known causally or strictly causally at both encoders or only at one encoder. For the causal case, the capacity region is fully characterized. If the state is known strictly causally at both encoders or only at one encoder, then the capacity region is characterized in some cases.
The capacity of the relay channel with state is investigated in [25][26][27][28][29][30][31][32]. The relay channel and the cooperative relay broadcast channel controlled by random parameters are studied in [25]. It is shown that when the state is non-causally known to the transmitter and the intermediate nodes, decode-and-forward can achieve the capacity in some cases. The relay channel with the state known non-causally at the relay is investigated in [26] and [27]. Using Gelfand-Pinsker coding, rate splitting, and decode-and-forward, a lower bound on the channel capacity is obtained, and it is shown that for degraded Gaussian channels the lower bound meets the upper bound, thus characterizing the capacity. The relay channel in which the state is available only at the source is studied in [28][29][30]. By deriving lower and upper bounds, it is shown that the capacity is characterized in a number of special cases. A partially cooperative relay broadcast channel (PC-RBC) with state is studied in [31], where two situations are analyzed: the state is available non-causally either at both the source and the relay or only at the source. The relay interference channel with a cognitive source, where only the source knows (non-causally) the interference from the interferer, is considered in [32], and some achievable rate regions are obtained.
All achievable rate regions in [16] are based on random coding. In this paper, we use lattice-based coding (in particular, lattice alignment) to establish achievable rate regions for this channel. A comprehensive study on the performance of lattices is presented in [33]. The performance of lattice codes over the additive white Gaussian noise (AWGN) channel is studied in [34]. A dirty paper AWGN channel in which the interference is known non-causally or causally at the transmitter is investigated in [35]. In [36], it is shown that the lattice coding strategy may outperform DPC in a doubly dirty MAC. In [37], we also show that if the noise variance satisfies some constraints, then the capacity region of an additive state-dependent Gaussian interference channel with two independent channel states is achieved as the state power goes to infinity. In [38], a Gaussian relay channel with state is considered in which the additive state is either added at the destination and known non-causally at the source or experienced at the relay and known at the destination. It is shown that a scheme based on nested lattice codes can achieve the capacity within 0.5 bits. In [39], by using nested lattice codes, the generalized degrees of freedom of the two-user cognitive interference channel are characterized when one of the transmitters knows a linear combination of the two information messages. Using lattice codes for the state-dependent Gaussian Z-interference channel, some rate regions are established in [40].
Here, we evaluate the performance of lattice-based coding schemes in obtaining achievable rate regions for the G-CICS. Similar to [14,36], we assume that the channel state has unbounded variance; this is referred to as the high state power regime. In addition, we consider the weak and moderate interference cases, i.e., the channel gain is at most one, |a| ≤ 1. First, we show that the achievable rate region by random coding vanishes in the high state power regime under a condition on the channel gain. Then, by using a lattice-based coding scheme, we obtain an achievable rate region for the G-CICS. As Figure 1 shows, the G-CICS can be treated as two state-dependent MACs with a common message: one from encoders 1 and 2 to decoder 1 and the other from encoders 1 and 2 to decoder 2. For both of these MACs, the capacity region is completely characterized in [19]. However, in the G-CICS, we need to decode the common message simultaneously at both decoders. Since these two MACs are different, we cannot apply the scheme proposed in [19] to this channel.
The main challenge of this paper is designing a scheme that can achieve a rate region close to the outer bound for the state-dependent Gaussian interference channel with a common message (set W2 = 0 in Figure 1). Although this channel can be treated as two state-dependent MACs with a common message, these two MACs are different, and since the common message must be recovered simultaneously at both decoders, the known schemes in the literature cannot be applied directly. To solve this problem, we use lattice codes and recover, at each decoder, a linear combination of the signals carrying the common message sent by the two transmitters. Note that lattice codes are among the best codes for recovering a linear combination of messages [41]. As we will show, at high signal-to-noise ratios (SNRs), the achievable rate region meets the outer bound, and regardless of the channel parameters, the achievable rate region is within 0.5 bits of the outer bound.
The paper is organized as follows: We present the channel model in Section 2. The achievable rate region by random coding is presented in Section 3. Section 4 establishes an achievable rate region for the G-CICS using lattice codes. Using numerical examples, the achievable rate regions of our proposed scheme and of random coding are compared in Section 5. Section 6 concludes the paper.

System model
Throughout the paper, random variables and their realizations are denoted by capital and small letters, respectively.
x stands for a vector of length n, (x_1, x_2, . . . , x_n). Also, ‖·‖ denotes the Euclidean norm, and all logarithms are with respect to base 2.
In this paper, we consider a G-CICS in which two transmitters send a common message W1 to two receivers, and transmitter 2 wishes to communicate a message W2 to receiver 2 only. The channel is also corrupted by an independent and identically distributed (i.i.d.) state sequence. We investigate the asymmetric cognitive scenario, as in [15,16], where the state is non-causally known at transmitter 2 and is unknown at transmitter 1 and at the receivers. The system model is depicted in Figure 1. The interference channel is described by (X1, X2, S, Y1, Y2, P(y1, y2|x1, x2, s)), where X1 and X2 are the two input alphabets, S is the state alphabet, and Y1 and Y2 are the output alphabets associated with the two receivers. In the Gaussian case, the alphabets of the inputs, the outputs, and the state are real. The messages at the encoders, W1 and W2, are independent random variables uniformly distributed on the sets {1, 2, . . . , 2^{nR_i}} for i = 1, 2, respectively, where n represents the block length and R_i the transmission rate. Encoder 2 (i.e., the cognitive user), in addition to W2, also knows W1, thus allowing for full unidirectional cooperation. Both encoders wish to send the message W1 to both decoders over n channel uses, while encoder 2 also wants to communicate the message W2 to decoder 2. The channel outputs at receivers 1 and 2 at time instant j are given, respectively, by

Y_{1,j} = X_{1,j} + a X_{2,j} + a S_j + Z_{1,j},
Y_{2,j} = (1/a) X_{1,j} + X_{2,j} + S_j + Z_{2,j},

where Z_{1,j} ~ N(0, N) and Z_{2,j} ~ N(0, N) are independent Gaussian random variables, and the normally distributed state variable S_j ~ N(0, Q) is independent of Z_{1,j} and Z_{2,j}. Both the noise variables and the state variable are i.i.d. over channel uses. The state sequence {S_j}_{j=1}^n is non-causally known only at transmitter 2. In this paper, similar to [42,43], we assume that the channel gain is rational, a = p/q ∈ Q. The channel inputs X_i (i ∈ {1, 2}) are average-power limited to P > 0, i.e.,

(1/n) Σ_{j=1}^n E[X_{i,j}^2] ≤ P.  (1)

A (2^{nR_1}, 2^{nR_2}, n) code consists of message sets W1 = {1, 2, . . . , 2^{nR_1}} and W2 = {1, 2, . . . , 2^{nR_2}}, two encoding functions, and two decoding functions such that each transmitted codeword X_i satisfies the power constraint given in (1). We define the average probability of error P_e^{(n)} as the probability that at least one decoder fails to recover its intended message(s). A rate pair (R1, R2) of non-negative real values is achievable if there exists a sequence of (2^{nR_1}, 2^{nR_2}, n) codes such that P_e^{(n)} → 0 as n → ∞. The capacity region is defined as the convex closure of the set of all achievable rate pairs (R1, R2).

Achievable rate region by random coding
In this section, we evaluate achievable rate regions by random coding for the G-CICS in the regime of high state power. In [16], by using random coding, two inner bounds for the G-CICS are provided when |a| ≤ 1. By evaluating inner bound 1 of Proposition 3 in [16] and replacing S → aS, b → a, c → 1/a, we can see that this inner bound vanishes as the channel gain tends to zero, and thus we cannot achieve any positive rate region by such a scheme. The following lemma presents the second inner bound, which is achieved by Gelfand-Pinsker coding and rate splitting at transmitter 2.

Lemma 1. [16] For the Gaussian cognitive interference channel with state non-causally known at transmitter 2, if |a| ≤ 1, then an inner bound on the capacity region in the high state power regime consists of all rate pairs (R1, R2) satisfying the rate constraints of Proposition 4 in [16].

Proof. See Proposition 4 in [16].
Now, from Lemma 1, we can see that if the condition in (2) holds, then the achievable rate of this random coding argument vanishes. Thus, under this condition, such a random coding scheme fails to achieve any positive rate for the G-CICS in the high state power regime. In Figure 2, we set P = 5 dB and a = 0.15 (Figure 2a) and P = 10 dB and a = 0.1 (Figure 2b), and then, by considering the left-hand side (LHS) of the condition in (2), we plot the range of parameters under which we cannot achieve any positive rate by using random coding. Note that, in order to plot this figure, we consider a fixed channel gain and a fixed channel power constraint. Since the achievable rate region depends on the power split P' and P'' with P' + P'' = (1 − ρ_21^2 − ρ_2s^2)P, we vary P' over the interval 0 ≤ P' ≤ (1 − ρ_21^2 − ρ_2s^2)P, set P'' = (1 − ρ_21^2 − ρ_2s^2)P − P', and then plot the left-hand side of the condition in (2).

Lattice definitions
Here, we provide some necessary definitions on lattices and nested lattice codes. The reader can find more details in [34,41,44].

Definition 1. (Lattice):
An n-dimensional lattice Λ is a set of points in Euclidean space R^n such that, if x, y ∈ Λ, then x + y ∈ Λ, and if x ∈ Λ, then −x ∈ Λ. A lattice Λ can always be written in terms of a generator matrix G ∈ R^{n×n} as Λ = {λ = Gz : z ∈ Z^n}, where Z represents the integers.
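As a toy numerical check of this definition, with a hypothetical 2-D generator matrix of our own choosing (not from the paper), the image of the integer vectors under G is closed under addition and negation:

```python
import numpy as np

# Hypothetical 2-D generator matrix G; the lattice is {Gz : z an integer vector}.
G = np.array([[2.0, 1.0],
              [0.0, 1.0]])

def lattice_point(z):
    """Map an integer vector z to the lattice point Gz."""
    return G @ np.asarray(z, dtype=float)

x = lattice_point([1, -2])
y = lattice_point([3, 1])

# Closure under addition: x + y is the image of the sum of the integer vectors.
assert np.allclose(x + y, lattice_point([4, -1]))
# Closure under negation: -x is the image of the negated integer vector.
assert np.allclose(-x, lattice_point([-1, 2]))
```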

Definition 2. (Quantizer): The nearest neighbor quantizer Q_Λ(·) associated with the lattice Λ is
Q_Λ(x) = arg min_{λ ∈ Λ} ‖x − λ‖.

Definition 3. (Voronoi region): The fundamental Voronoi region V of the lattice Λ is the set of points that are quantized to the zero vector, V = {x ∈ R^n : Q_Λ(x) = 0}, with volume V = Vol(V).

Definition 4. (Second moment): The second moment per dimension of the lattice Λ is
σ^2(Λ) = (1/(nV)) ∫_V ‖x‖^2 dx,
and the normalized second moment of the lattice Λ is
G(Λ) = σ^2(Λ) / V^{2/n},
where V = Vol(V) is the Voronoi region volume.
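For intuition, the per-dimension second moment of the simplest lattice, the integers Z, can be estimated by Monte Carlo (a sketch of ours, not from the paper). Its normalized second moment is 1/12 ≈ 0.0833, which sits above the value 1/(2πe) ≈ 0.0586 that good high-dimensional shaping lattices approach:

```python
import numpy as np

# The Voronoi region of the lattice Z is [-1/2, 1/2) with volume V = 1, so
# G(Z) equals the second moment of a uniform sample over that interval.
rng = np.random.default_rng(0)
u = rng.uniform(-0.5, 0.5, size=200_000)   # uniform over the Voronoi region of Z
second_moment = np.mean(u ** 2)            # Monte Carlo estimate of sigma^2(Z)

assert abs(second_moment - 1 / 12) < 1e-3  # sigma^2(Z) = 1/12 exactly
assert 1 / 12 > 1 / (2 * np.pi * np.e)     # above the sphere bound 1/(2*pi*e)
```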

Definition 5. (Modulus):
The modulo operation with respect to the lattice Λ is defined as
x mod Λ = x − Q_Λ(x),
which maps x into a point in the fundamental Voronoi region.
For all x, y ∈ R^n, a ∈ Z, and nested lattices Λ ⊆ Λ1, the modulo lattice operation satisfies the following properties:

(x mod Λ + y) mod Λ = (x + y) mod Λ,  (4)
(Q_{Λ1}(x)) mod Λ = (Q_{Λ1}(x mod Λ)) mod Λ,  (5)
a (x mod Λ) = (a x) mod aΛ,  (6)
if x ∈ Λ1, then x mod Λ ∈ Λ1.  (7)
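These modulo identities are easy to verify numerically for the one-dimensional lattice cZ. The sketch below (ours, using the nearest-point convention for mod, with arbitrary toy values) checks the additive and scaling properties:

```python
import numpy as np

def mod_lattice(x, c):
    """x mod Λ for the 1-D lattice Λ = cZ, nearest-point convention."""
    x = np.asarray(x, dtype=float)
    return x - c * np.round(x / c)

x, y, a, c = 3.7, -1.2, 3.0, 2.0

# Additive property: reducing a summand first does not change the result.
lhs = mod_lattice(mod_lattice(x, c) + y, c)
rhs = mod_lattice(x + y, c)
assert np.isclose(lhs, rhs)

# Scaling property: a * (x mod Λ) = (a x) mod aΛ.
assert np.isclose(a * mod_lattice(x, c), mod_lattice(a * x, a * c))
```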

Definition 6. (Quantization goodness or Rogers-good):
A sequence of lattices Λ^(n), indexed by the lattice dimension n, is Rogers-good if its normalized second moment approaches that of a sphere, i.e., lim_{n→∞} G(Λ^(n)) = 1/(2πe). The existence of such lattices is shown in [45,46].

Definition 7. (AWGN channel coding goodness or Poltyrev-good):
Let Z be a length-n i.i.d. Gaussian vector, Z ~ N(0, σ_Z^2 I_n), where I_n is the n × n identity matrix. The volume-to-noise ratio of a lattice Λ is given by
μ(Λ, ε) = V^{2/n} / σ_Z^2,
where σ_Z^2 is chosen such that Pr{Z ∉ V} = ε. A sequence of lattices Λ^(n) is Poltyrev-good if, for a fixed volume-to-noise ratio greater than 2πe, Pr{Z ∉ V_n} decays exponentially in n. Poltyrev showed that sequences of such lattices exist [47]. The existence of a sequence of lattices Λ^(n) that is good in both senses (i.e., simultaneously Poltyrev-good and Rogers-good) has been shown in [46].

Definition 8. (Nested lattices): A lattice Λ is said to be nested in a lattice Λ1 if Λ ⊆ Λ1. Λ is referred to as the coarse lattice and Λ1 as the fine lattice.
Note that if a ∈ Z, then always aΛ ⊆ Λ.

Definition 9. (Nested lattice codes): A nested lattice code C is the set of all points of a fine lattice Λ1^(n) that lie within the fundamental Voronoi region V of a coarse lattice Λ^(n), i.e., C = Λ1^(n) ∩ V.

Definition 10. (Rate):
The rate of a nested lattice code C is
R = (1/n) log |C| = (1/n) log (Vol(V) / Vol(V1)).  (8)

In the following, we present a key property of dithered nested lattice codes.

Lemma 2. (Crypto lemma) [34,48]: Let V be a random vector with an arbitrary distribution over R^n. If D is independent of V and uniformly distributed over V, then (V + D) mod Λ is also independent of V and uniformly distributed over V.
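The dither property can be illustrated by Monte Carlo in one dimension (a sketch of ours, not from the paper): for any fixed v, the dithered and reduced value (v + D) mod Λ for the lattice Λ = Z is again uniform on the Voronoi region [-1/2, 1/2), and hence carries no information about v.

```python
import numpy as np

def mod_Z(x):
    """x mod Z, nearest-point convention (result in [-1/2, 1/2])."""
    return x - np.round(x)

rng = np.random.default_rng(1)
d = rng.uniform(-0.5, 0.5, size=200_000)   # dither, uniform over the Voronoi region

for v in (0.0, 0.37, -12.9):               # arbitrary fixed "codeword" values
    w = mod_Z(v + d)
    # Moments of the uniform law on [-1/2, 1/2): mean 0, variance 1/12,
    # regardless of v.
    assert abs(np.mean(w)) < 5e-3
    assert abs(np.var(w) - 1 / 12) < 5e-3
```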
Before presenting our proposed scheme, we prove the following lemma, which plays an important role in the proof of the achievable rate region by lattice codes.

Lemma 3. Suppose that Λ and Λ1 are two lattices such that Λ ⊆ Λ1. Then, the modulo operation is commutative, i.e.,
(x mod Λ1) mod Λ = (x mod Λ) mod Λ1.  (9)

Proof. We start with manipulating the left-hand side of (9). Since x mod Λ1 lies in the fundamental Voronoi region of Λ1, which is contained in that of Λ, we obtain
(x mod Λ1) mod Λ = x mod Λ1,  (10)
where the last equality follows from the fact that Λ ⊆ Λ1. For the RHS of (9), we have
(x mod Λ) mod Λ1 = (x − Q_Λ(x)) mod Λ1  (11)
= x mod Λ1,  (12)
where (12) is based on the fact that Q_Λ(x) ∈ Λ ⊆ Λ1, so subtracting it does not change the result modulo Λ1. Now, by comparing (10) and (12), the proof of the lemma is complete.
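The commutativity in Lemma 3 can be sanity-checked numerically for a nested one-dimensional pair, e.g. Λ = 4Z ⊆ Λ1 = Z (a sketch of ours; both orders of reduction equal reduction to the finer lattice):

```python
import numpy as np

def mod_lattice(x, c):
    """x mod Λ for the 1-D lattice Λ = cZ, nearest-point convention."""
    return x - c * np.round(np.asarray(x, dtype=float) / c)

coarse, fine = 4.0, 1.0            # Λ = 4Z (coarse) nested in Λ1 = Z (fine)
rng = np.random.default_rng(2)

for x in rng.uniform(-20, 20, size=100):
    lhs = mod_lattice(mod_lattice(x, fine), coarse)    # (x mod Λ1) mod Λ
    rhs = mod_lattice(mod_lattice(x, coarse), fine)    # (x mod Λ) mod Λ1
    assert np.isclose(lhs, rhs)                        # the two orders agree
    assert np.isclose(lhs, mod_lattice(x, fine))       # both equal x mod Λ1
```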

Our proposed scheme
In this section, we obtain an achievable rate region for the G-CICS using lattice codes. If we use the common encoding and decoding explained in [34], then, similar to random coding, we cannot achieve the capacity region within a constant gap. Thus, we need to introduce a new scheme for this channel. In this scheme, we use two modulo operations at the decoder and then, using Lemma 3, interchange them. As we will see, this scheme can achieve the capacity region at high SNRs, and it is within 0.5 bits of the outer bound regardless of the channel parameters. In the following, we present our scheme in more detail.
One method to obtain a rate region is to achieve two corner points of that region; time sharing between the two corner points then yields the region between them. Suppose that V1 and V2 are two lattice codewords that carry the information for users 1 and 2, respectively. We use a DPC or lattice scheme to decode V2 at decoder 2, and a scheme that estimates a linear combination of the signals carrying the common message to decode V1 at both decoders. In the following, we explain both schemes in more detail.
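The corner-point-plus-time-sharing step can be sketched numerically. The corner values below are hypothetical placeholders of our choosing (the paper's actual corners depend on P, N, and a); the point is only the convex combination:

```python
import numpy as np

# Hypothetical corner points of an achievable region (placeholders, not the
# paper's expressions).
corner_a = np.array([1.5, 0.0])   # (R1, R2): only the common message is sent
corner_b = np.array([0.0, 2.0])   # (R1, R2): only the private message is sent

def time_share(theta):
    """Operate a fraction theta of the time at corner_a and 1 - theta at corner_b."""
    return theta * corner_a + (1 - theta) * corner_b

# Any convex combination of the corners is achievable by time sharing.
mid = time_share(0.5)
assert np.allclose(mid, [0.75, 1.0])
assert np.allclose(time_share(1.0), corner_a)
```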
First, suppose that the lattice Λ is Rogers-good and has second moment σ^2(Λ) = P.

• Sending the private message, V2 (for decoder 2): Here, we assume that encoder 1 has no message to send. Thus, we can consider the G-CICS as a point-to-point channel with state whose aim is to send V2 to decoder 2. Since transmitter 1 has no message to send, we set X1 = 0, so decoder 2 sees the dirty paper channel Y2 = X2 + S + Z2. Now, we can use a DPC or lattice scheme to achieve the capacity of this channel. By using a lattice coding scheme, transmitter 2 sends the signal

X2 = (V2 − αS + D2) mod Λ,

where the dither sequence D2 is uniformly distributed over the Voronoi region V. Thus, based on the crypto lemma, the power constraint is met. Now, by using lattice decoding and choosing α = P/(P + N), we achieve the corner point (0, R2) given in (14) [35].

• Encoding the common message, V1: To estimate the common message V1, we first assume that 0 ≤ a ≤ 1; then, by a simple change in the encoding, we extend our scheme to −1 ≤ a ≤ 0. Suppose that V2 = 0 and that we intend to send V1 to both decoders. Consider the nested lattices Λ ⊆ qΛ1 ⊆ Λ1, where the coding lattice Λ1 is Poltyrev-good while the shaping lattice Λ is both Rogers-good and Poltyrev-good. For instance, a lattice partition chain is visualized in Figure 3 for the two-dimensional case. Without loss of generality, and for a reason that will become clear later, we assume that Λ = q(1 + a)Λ1. Based on this lattice chain, we construct the codebook C_i = {qΛ1 ∩ V(Λ)} for each node. For the lattice chain shown in Figure 3, the codebook contains ten codewords, C_i = {1, 2, . . . , 9, 10}. Now, using a one-to-one mapping at encoder i, we map the message W1 to a lattice codeword V1 of the codebook C_i and send the following signals over the channel:

X1 = (V1 + D1) mod Λ,  X2 = (V1 − S + D2) mod Λ,  (15)

where D1 and D2 are two independent dithers that are uniformly distributed over the Voronoi region V. Note that, by the crypto lemma, the power constraint is satisfied.
Now, we explain decoding at decoders 1 and 2.
• Decoding the common message, V1, at decoder 1: Decoder 1, upon receiving Y1, performs the operations in (16) to (20) to estimate V1. Equation (17) follows from (6), and (18) is based on Lemma 3. Equation (19) is based on applying (4), while (20) follows from Lemma 3. By using minimum Euclidean distance lattice decoding [34,49], which finds the closest point to Y_d1 in qΛ1, we estimate V̄1 = ((1 + a)qV1) mod Λ as in (21) and (22), where (22) is based on the fact that Λ ⊆ qΛ1 and property (7). Thus, the estimation of V1 is correct if the condition in (23) holds. Note that, to decode ((1 + a)qV1) mod Λ, since we have used a quantizer associated with the lattice qΛ1, we may map several points of qΛ1 to ((1 + a)qV1) mod Λ. That is, by finding such a point of qΛ1, and since we used a one-to-one mapping, we can recover ((1 + a)qV1) mod Λ.
Therefore, (23) shows that the estimation of V1 is incorrect if the effective noise Z_eff leaves the Voronoi region surrounding the codeword (1 + a)qV1, i.e., P_e = Pr(Z_eff ∉ V(qΛ1)). Now, from [34,47], the error probability vanishes as n → ∞ if the condition in (24) holds, where Z*_eff ~ N(0, Var(Z_eff)). Since Λ1 is Poltyrev-good, the condition in (24) is satisfied. Now, from (8), for R1 we obtain (25) and (26), where (25) follows from (24), and (26) is based on the Rogers goodness of Λ.

Now, we have V̄1 = ((1 + a)qV1) mod Λ and must try to decode V1. In the following lemma, we show that it is possible to decode it correctly.

Lemma 4. Suppose Λ1 and Λ are two lattices such that Λ ⊆ Λ1. For x, y ∈ Λ1 with x ≠ y and a ∈ Z, we have
(ax) mod Λ ≠ (ay) mod Λ.

Proof. By using the definition of the modulo operation, (ax) mod Λ − (ay) mod Λ is congruent to a(x − y) modulo Λ. Since x ≠ y, x, y ∈ Λ1, and a ∈ Z, a(x − y) is a nonzero element of the lattice Λ1. On the other hand, for the lattice Λ, we know Λ = aΛ1. Thus, the element a(x − y) of the lattice Λ1 is not an element of the lattice Λ, and therefore we get (ax) mod Λ − (ay) mod Λ ≠ 0.

Now, we return to our problem, where we aim to estimate V1. Since Λ = q(1 + a)Λ1, according to the preceding lemma, for two distinct codewords V1 ≠ V1' we have ((1 + a)qV1) mod Λ ≠ ((1 + a)qV1') mod Λ. Thus, there exists only one codeword that can satisfy V̄1 = ((1 + a)qV1) mod Λ, and it is the transmitted codeword. Therefore, we can achieve the corner point given in (27) at decoder 1.

• Decoding the common message, V1, at decoder 2: Decoding at decoder 2 is exactly the same as at decoder 1. Thus, we can achieve the corresponding corner point at decoder 2 as well. Therefore, by using time sharing between the two corner points, given in (14) and (27), we can achieve the rate region in (29). It is easy to see that the rate region in (30) is also achievable, since it lies inside the region given by (29). This transmission scheme is depicted in Figure 4.
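The error criterion above (decoding fails exactly when the effective noise escapes the Voronoi cell around the sent lattice point) can be checked by simulation in one dimension, where the cell of the lattice dZ is an interval. The spacing and noise variance below are arbitrary toy values of ours, a stand-in for the n-dimensional Poltyrev argument in the text:

```python
import numpy as np
from math import erfc, sqrt

# For the 1-D lattice dZ, a nearest-point decoder errs iff |Z_eff| > d/2.
# For Gaussian noise of variance var, Pr(|Z| > d/2) = erfc(d / (2*sqrt(2*var))).
rng = np.random.default_rng(3)
d, var = 4.0, 1.0
z = rng.normal(0.0, np.sqrt(var), size=500_000)

p_emp = np.mean(np.abs(z) > d / 2)           # simulated error probability
p_theory = erfc(d / (2 * sqrt(2 * var)))     # closed-form Gaussian tail

assert abs(p_emp - p_theory) < 2e-3
```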

Remark 1.
In order to extend our scheme to the case −1 ≤ a ≤ 0, we modify the encoding as follows:

X1 = (−V1 + D1) mod Λ,  X2 = (V1 − S + D2) mod Λ.  (31)

By comparing (31) with (15), we can see that at encoder 1, instead of sending V1, we transmit −V1. At the decoder, instead of decoding V̄1 = ((q + p)V1) mod Λ, we find V̄1 = ((−q + p)V1) mod Λ. But since p ≥ 0 and q ≤ 0, we have p − q ≥ −q, which enables us to estimate V1 correctly. Note that for the case −1 ≤ a ≤ 0, if we estimate ((q + p)V1) mod Λ, since p + q ≤ −q, we cannot find the desired lattice point correctly.

Rate-region outer bound
For comparison, an outer bound on the capacity region of the G-CICS is provided. This outer bound is similar to the bound provided in [16], though it is obtained using a different approach.

Lemma 5.
For the Gaussian cognitive interference channel with state non-causally known at transmitter 2, if the power of the state goes to infinity (Q → ∞), an outer bound consists of all rate pairs (R1, R2) satisfying the constraints in (33), where the union is taken over all parameters 0 ≤ ρ_21, ρ_2s ≤ 1 such that ρ_21^2 + ρ_2s^2 ≤ 1.
Proof. We bound the rates starting from Fano's inequality, on which (34) is based. For the second term, (35) follows from the fact that S is independent of (W1, W2). Equation (36) follows from the fact that the Gaussian distribution maximizes differential entropy for a fixed second moment, together with the Cauchy-Schwarz inequality. For the asymptotic case of strong interference, i.e., Q → +∞, we obtain the bound stated in the lemma.
We now calculate the maximum gap between the outer bound, given in (33), and the achievable rate region, given by (30). The gap is at most 0.5 bits per rate, where the bounding step (38) is based on the fact that the maximum gap occurs at 1/(a^2 + 1) + P/N = 1 for i = 1, 2. Thus, we have the following result.

Theorem 1. The capacity region of the state-dependent Gaussian cognitive interference channel for the weak and moderate interference case in the high state power regime is achievable within 0.5 bits.

Numerical results
In this section, we numerically compare the achievable rates of random coding with those of our lattice-based transmission scheme. For comparison, the outer bound is also provided; for simplicity, we use a simplified form of the outer bound in our simulations. In Figure 5, we compare the achievable rate regions and the outer bound at SNR = 10 dB for a = 0.3, 0.5, 0.8 (note that |a| ≤ 1). We observe that the achievable rate region of our lattice-based coding scheme is significantly larger than that of random coding. As the channel gain a increases, the achievable rate region of the lattice codes stays within 0.5 bits of the outer bound.
In Figure 6, we compare the achievable rate regions when the channel gain is fixed at a = 0.5 and SNR varies. The values of the SNRs are SNR = 5, 15, 25 dB. As we observe, for high SNRs, the achievable rate region by lattice codes coincides with the outer bound.

Conclusions
In this paper, the state-dependent Gaussian cognitive interference channel in the weak and moderate interference case and in the high state power regime was studied. First, we showed that the achievable rate by random coding, which is based on Gelfand-Pinsker coding, vanishes under a condition on the channel gain. Then, we showed that a scheme based on lattice codes can achieve the capacity region at high SNRs and is within 0.5 bits of the outer bound for all channel parameters.