Open Access

On the capacity of state-dependent Gaussian cognitive interference channel

EURASIP Journal on Wireless Communications and Networking 2014, 2014:196

https://doi.org/10.1186/1687-1499-2014-196

Received: 30 June 2014

Accepted: 27 October 2014

Published: 22 November 2014

Abstract

A Gaussian cognitive interference channel with state (G-CICS) is studied. In this paper, we focus on the two-sender, two-receiver case and consider the communication situation in which two senders transmit a common message to two receivers. Transmitter 1 knows only message W1, while transmitter 2, referred to as the cognitive user, knows both messages W1 and W2 as well as the channel state sequence non-causally. Receiver 1 needs to decode only W1, while receiver 2 needs to decode both messages. We investigate the weak and moderate interference case, where we assume that the channel gain a satisfies |a|≤1. In addition, inner and outer bounds on the capacity region are derived in the regime of high state power, i.e., when the channel state sequence has unbounded variance. First, we show that the rate achievable by Gelfand-Pinsker coding vanishes in the high state power regime under a condition on the channel gain. In contrast, we propose a transmission scheme, based on lattice codes, that achieves positive rates independent of the interference. Our transmission scheme achieves the capacity region in the high signal-to-noise ratio (SNR) regime. Moreover, regardless of the channel parameters, the gap between the achievable rate region and the outer bound is at most 0.5 bits.

1 Introduction

In the exchange of information among many nodes, interference between different transmitter-receiver pairs is unavoidable. In the classical interference channel (IC), this interference exists between two different transmitters and receivers. In [1], Carleial, using superposition coding, obtains general bounds on the capacity region of discrete memoryless interference channels. By using rate splitting at the transmitters and sequential decoding at the destinations, Han and Kobayashi establish the best achievable rate region known to date [2]. Unfortunately, the problem of characterizing the capacity region of a general IC has been open for more than 30 years. Except for the very strong Gaussian IC, the strong Gaussian IC, the sum capacity of the degraded Gaussian IC, and the very weak interference regime, characterizing the capacity region of a Gaussian IC is still an open problem [3–6]. Etkin et al., by deriving new outer bounds, show that an explicit Han-Kobayashi-type scheme achieves the capacity region to within 1 bit for all channel parameters [7].

The cognitive interference channel, where one user has full non-causal knowledge of the other user’s message, is studied in [8–10]. This setup is also referred to as the interference channel with degraded message sets.

Recently, interference channels with state have received considerable attention. In general, channels with random states can model a time-varying wireless channel as well as interfering signals. The two-user state-dependent Gaussian interference channel where the state information is non-causally known at both encoders is studied in [11]. By proposing an active interference-cancellation mechanism, which is a generalized dirty-paper coding (DPC) [12] technique, some achievable rate regions for this channel are obtained. A Gaussian IC with the same state at both links which is scaled differently at two receivers is studied in [13]. For the very strong interference regime, as well as for the weak regime, the sum capacity is obtained under certain conditions on channel parameters [13]. In [14], a state-dependent Gaussian Z-interference channel model in the regime of high state power is investigated. By utilizing a layered coding scheme, inner and outer bounds on the capacity region are derived.

In [15], a model of cognitive state-dependent interference channels is studied, in which one of the transmitters knows both messages and also the states of the channel in a non-causal manner while the other transmitter knows only one of the messages and does not know the channel states. Each of the two decoders try to decode only its intended message. By using a generalized binning principle, inner and outer bounds on the capacity region are established.

In this paper, we study the Gaussian cognitive interference channel with state (G-CICS) with two transmitters and two receivers (see Figure 1). In this model, transmitter 1 knows only message 1 while transmitter 2 (the cognitive transmitter) knows both messages 1 and 2. The state sequence is known only at transmitter 2; transmitter 1 does not know the channel state. The common message known to both transmitters, i.e., message 1, needs to be decoded at both receivers instead of at receiver 1 only. This model is investigated in [16], in which, by using superposition coding, rate splitting, and the Gelfand-Pinsker binning scheme, inner bounds are established. It is shown that the inner bounds coincide with the outer bounds for a degraded semi-deterministic channel and for channels that satisfy a less noisy condition. The Gaussian channels are also studied, and inner and outer bounds are derived.
Figure 1

System model. Gaussian cognitive interference channel with state (G-CICS).

The main result of this paper is the design of a novel transmission scheme for the Gaussian interference channel with state where we aim to recover a common message at two decoders. To reach this goal, we treat this channel as two state-dependent Gaussian multiple-access channels (MACs) and try to simultaneously recover the common message at both decoders. Prior to this work, different types of the state-dependent two-user MAC were investigated in the literature (see, e.g., [17–24]). In [17], a two-user state-dependent multi-access channel in which the state is known only at the encoder that sends both messages is investigated. By generalizing the Gelfand-Pinsker model, the capacity region for both non-causal and causal state information is characterized. If the state information is non-causally known only at the encoder that sends the common message, then the capacity region for the Gaussian scenario is characterized in some cases in [18]. In [19–21], the state-dependent two-user multi-access channel in which the states of the channel are known non-causally at one of the encoders and only strictly causally at the other encoder is considered. By generalizing the framework of [21], the capacity region of this model is fully characterized in [19], and the optimal schemes for achieving the capacity region are also studied. In [22–24], the two-user multiple-access channel with state is considered in which the states are known causally or strictly causally at both encoders or only at one encoder. For the causal state, it is shown that the capacity region is fully characterized. If the state is known strictly causally at both encoders or only at one encoder, then the capacity region is characterized in some cases.

The capacity region of the relay channel with state is investigated in [25–32]. The relay channel and the cooperative relay broadcast channel controlled by random parameters are studied in [25]. It is shown that when the state is non-causally known to the transmitter and intermediate nodes, decode-and-forward can achieve the capacity region in some cases. The relay channel with the state known non-causally at the relay is investigated in [26] and [27]. Using Gelfand-Pinsker coding, rate splitting, and decode-and-forward, a lower bound on the channel capacity is obtained for this channel, and it is shown that for degraded Gaussian channels, the lower bound meets the upper bound and thus the capacity region is achieved. The relay channel when the state is available only at the source is studied in [28–30]. By obtaining lower and upper bounds, it is shown that in a number of special cases the capacity region is achieved. A partially cooperative relay broadcast channel (PC-RBC) with state is studied in [31], where two situations are analyzed: the state available non-causally at both the source and the relay, and only at the source. The relay interference channel with a cognitive source where only the source knows (non-causally) the interference from the interferer is considered in [32], and some achievable rate regions are obtained.

All achievable rate regions in [16] are based on random coding. In this paper, we use the lattice-based coding scheme (especially lattice alignment) to establish capacity regions for this channel. A comprehensive study on the performance of lattices is presented in [33]. Performance of lattice codes over the additive white Gaussian noise (AWGN) channel is studied in [34]. A dirty paper AWGN channel in which the interference is known non-causally or causally at the transmitter is investigated in [35]. In [36], it is shown that the lattice coding strategy may outperform the DPC in a doubly dirty MAC. In [37], we also show that if the noise’s variance satisfies some constraints, then the capacity region of an additive state-dependent Gaussian interference channel with two independent channel states is achieved when the state power goes to infinity. In [38], a Gaussian relay channel with a state is considered in which the additive state is either added at the destination and known non-causally at the source or experienced at the relay and known at the destination. It is shown that a scheme based on nested lattice codes can achieve the capacity region within 0.5 bits. In [39], by using nested lattice codes, the generalized degrees of freedom for the two-user cognitive interference channel are characterized where one of the transmitters has knowledge of a linear combination of the two information messages. Using lattice codes for the state-dependent Gaussian Z-interference channel, some rate regions are established in [40].

Here, we evaluate the performance of lattice-based coding schemes on obtaining achievable rate regions for the G-CICS. Similar to [14, 36], we assume that the channel state has unbounded variance. This is referred to as a high state power regime. In addition, we consider the weak and moderate interference cases, i.e., the channel gain is smaller than one; |a|≤1. First, we show that the achievable rate region by random coding vanishes in a high state power regime under a condition over the channel gain. Then, by using a lattice-based coding scheme, we obtain an achievable rate region for the G-CICS. As Figure 1 shows, we can see that the G-CICS can be treated as two state-dependent MACs with a common message: one from encoders 1 and 2 to decoder 1, and the other from encoders 1 and 2 to decoder 2. For both these MACs, the capacity region is completely characterized in [19]. However in the G-CICS, we need to decode the common message simultaneously at both decoders. Since these two MACs are different, we cannot apply the proposed scheme in [19] for this channel.

The main challenge of this paper is designing a scheme that can achieve a rate region close to the outer bound for the state-dependent Gaussian interference channel with a common message (set W2=0 in Figure 1). Although this channel can be treated as two state-dependent MACs with a common message, these two MACs are different, and since the common message should be recovered simultaneously at both decoders, the known schemes in the literature cannot be directly applied. To solve this problem, we use lattice codes and obtain a linear combination of the common message, sent by the two transmitters, at the decoders. Note that lattice codes are among the best codes for recovering linear combinations of messages [41]. As we will show, at high signal-to-noise ratios (SNRs), the achievable rate region meets the outer bound, and regardless of the channel parameters, the achievable rate region is within 0.5 bits of the outer bound.

The paper is organized as follows: We present the channel model in Section 2. The achievable rate region by random coding is presented in Section 3. Section 4, by using lattice codes, establishes an achievable rate region for the G-CICS. Using numerical examples, the achievable rate regions of our proposed scheme and random coding are compared in Section 5. Section 6 concludes the paper.

2 System model

Throughout the paper, random variables and their realizations are denoted by capital and small letters, respectively. $\mathbf{x}$ stands for a vector of length $n$, $(x_1, x_2, \ldots, x_n)$. Also, $\|\cdot\|$ denotes the Euclidean norm, and all logarithms are with respect to base 2.

In this paper, we consider a G-CICS in which two transmitters send a common message $W_1$ to two receivers, and transmitter 2 wishes to communicate a message $W_2$ to receiver 2 only. The channel is also corrupted by an independent and identically distributed (i.i.d.) state sequence. We investigate the asymmetric cognitive scenario, as in [15, 16], where the state is non-causally known at transmitter 2 and is unknown at transmitter 1 and at the receivers. The system model is depicted in Figure 1. The interference channel is described by $(\mathcal{X}_1, \mathcal{X}_2, \mathcal{S}, \mathcal{Y}_1, \mathcal{Y}_2, P(y_1, y_2 | x_1, x_2, s))$, where $\mathcal{X}_1$ and $\mathcal{X}_2$ are the two input alphabets, $\mathcal{S}$ is the state alphabet, and $\mathcal{Y}_1$ and $\mathcal{Y}_2$ are the output alphabets associated with the two receivers. In the Gaussian case, the alphabets of the inputs, outputs, and state are the reals. The messages at the encoders, $W_1$ and $W_2$, are independent random variables, uniformly distributed on the sets $\{1, 2, \ldots, 2^{nR_i}\}$ for $i = 1, 2$, respectively, where $n$ represents the block length and $R_i$ the transmission rate. Encoder 2 (i.e., the cognitive user), in addition to $W_2$, also knows $W_1$, thus allowing for full unidirectional cooperation. Both encoders wish to send the message $W_1$ to both decoders over $n$ channel uses, while encoder 2 also wants to communicate the message $W_2$ to decoder 2. The channel outputs at receivers 1 and 2 at time instant $j$ are given, respectively, by
$$Y_{1,j} = X_{1,j} + aX_{2,j} + aS_j + Z_{1,j}, \qquad Y_{2,j} = aX_{1,j} + X_{2,j} + S_j + Z_{2,j},$$
where $Z_{1,j} \sim \mathcal{N}(0, N)$ and $Z_{2,j} \sim \mathcal{N}(0, N)$ are independent Gaussian random variables, and the normally distributed state variable $S_j \sim \mathcal{N}(0, Q)$ is independent of $Z_{1,j}$ and $Z_{2,j}$. Both the noise variables and the state variable are i.i.d. over channel uses. The state sequence $\{S_j\}_{j=1}^{n}$ is non-causally known only at transmitter 2. In this paper, similar to [42, 43], we assume that the channel gain is rational, $a = \frac{p}{q} \in \mathbb{Q}$. The channel inputs $X_i$ ($i \in \{1,2\}$) are average-power limited to $P > 0$, i.e.,
$$\frac{1}{n}\mathbb{E}\left[\|\mathbf{X}_i\|^2\right] \le P, \quad \text{for } i = 1, 2.$$
(1)
A $(2^{nR_1}, 2^{nR_2}, n)$ code consists of message sets $\mathcal{W}_1 = \{1, 2, \ldots, 2^{nR_1}\}$ and $\mathcal{W}_2 = \{1, 2, \ldots, 2^{nR_2}\}$, two encoding functions
$$f_1 : \mathcal{W}_1 \to \mathcal{X}_1^n, \qquad f_2 : \mathcal{W}_1 \times \mathcal{W}_2 \times \mathcal{S}^n \to \mathcal{X}_2^n,$$
and two decoding functions
$$g_1 : \mathcal{Y}_1^n \to \mathcal{W}_1, \qquad g_2 : \mathcal{Y}_2^n \to \mathcal{W}_1 \times \mathcal{W}_2$$
such that the transmitted codeword $\mathbf{X}_i$ satisfies the power constraint given by (1). We define the probability of error as
$$P_e^{(n)} = \frac{1}{2^{n(R_1+R_2)}} \sum_{\omega_1=1}^{2^{nR_1}} \sum_{\omega_2=1}^{2^{nR_2}} \Pr\left[\left(\hat{\omega}_1^{(1)}, \hat{\omega}_1^{(2)}, \hat{\omega}_2\right) \ne \left(\omega_1, \omega_1, \omega_2\right)\right].$$

A rate pair $(R_1, R_2)$ of non-negative real values is achievable if there exists a sequence of $(2^{nR_1}, 2^{nR_2}, n)$ codes such that $\lim_{n\to\infty} P_e^{(n)} = 0$. The capacity region is defined as the convex closure of the set of all achievable rate pairs $(R_1, R_2)$.
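As a quick sanity check of the model, the following sketch (a minimal simulation with arbitrary example values for $P$, $N$, $Q$, and $a$, and i.i.d. Gaussian inputs standing in for actual codewords) generates one block of channel outputs according to the two equations above and verifies the power constraint (1) empirically:

```python
import numpy as np

# Example parameters (assumed values, not from the paper)
n, P, N, Q, a = 100_000, 1.0, 0.5, 100.0, 0.5
rng = np.random.default_rng(0)

# Placeholder inputs: i.i.d. Gaussian with power P (real codewords would be
# lattice points or random-coding codewords obeying the same power limit)
X1 = rng.normal(0.0, np.sqrt(P), n)
X2 = rng.normal(0.0, np.sqrt(P), n)
S = rng.normal(0.0, np.sqrt(Q), n)   # high-power channel state
Z1 = rng.normal(0.0, np.sqrt(N), n)
Z2 = rng.normal(0.0, np.sqrt(N), n)

# Channel outputs, per the model equations
Y1 = X1 + a * X2 + a * S + Z1
Y2 = a * X1 + X2 + S + Z2

# Empirical check of the power constraint (1/n) E[||X_i||^2] <= P
print(np.mean(X1**2), np.mean(X2**2))
```

Note that the high-power state $S$ dominates both outputs, which is exactly the regime in which naive schemes break down.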

3 Achievable rate region by random coding

In this section, we evaluate achievable rate regions by random coding for the G-CICS in the regime of high state power. In [16], by using random coding, two inner bounds for the G-CICS are provided when $|a| \le 1$. By evaluating inner bound 1 of Proposition 3 in [16] (and replacing $S \to aS$, $b \to a$, $c \to \frac{1}{a}$), we can see that this inner bound vanishes as the channel gain tends to zero, and thus we cannot achieve any positive rate region by such a scheme. The following lemma presents the second inner bound. To achieve this region, Gelfand-Pinsker coding and rate splitting at transmitter 2 are used.

Lemma 1

[16] For the Gaussian cognitive interference channel with state non-causally known at transmitter 2, if $|a| \le 1$, then an inner bound on the capacity region in the high state power regime consists of the rate pairs $(R_1, R_2)$ satisfying
$$R_1 \le \frac{1}{2}\log\left(\frac{a^2 P'}{(\alpha-1)^2 a^2 P' + \alpha^2 a^2 P'' + \alpha^2 N}\right), \quad R_2 \le \frac{1}{2}\log\left(1 + \frac{P''}{N}\right), \quad R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{\left(1 - \rho_{21}^2 - \rho_{2s}^2\right)P}{N}\right),$$
where
$$\rho_{21} = \frac{\mathbb{E}[X_2 X_1]}{\sqrt{\mathbb{E}[X_2^2]\,\mathbb{E}[X_1^2]}}, \qquad \rho_{2s} = \frac{\mathbb{E}[X_2 S]}{\sqrt{\mathbb{E}[X_2^2]\,\mathbb{E}[S^2]}},$$
$$P' + P'' = \left(1 - \rho_{21}^2 - \rho_{2s}^2\right)P, \qquad \rho_{21}^2 + \rho_{2s}^2 \le 1, \qquad \alpha = \frac{P'}{P' + P'' + N}.$$

Proof

See Proposition 4 in [16].

Now from Lemma 1, we can see that if
$$\left(\alpha^2 - 2\alpha\right)P' + \alpha^2 P'' + \frac{\alpha^2}{a^2} N > 0,$$
(2)
then the achievable rate of such a random coding argument vanishes. Thus, under this condition, such a random coding scheme fails to achieve any positive rate for the G-CICS in the high state power regime. In Figure 2, we set $P = 5$ dB and $a = 0.15$ (Figure 2a) and $P = 10$ dB and $a = 0.1$ (Figure 2b), and then, by considering the left-hand side (LHS) of the condition in (2), we plot the range of the parameter $P'$ under which we cannot achieve any positive rate by using random coding. Note that, in order to plot this figure, we consider a fixed channel gain and a fixed channel power constraint. Since the achievable rate region depends on $P'$ and $P''$, where $P' + P'' = (1 - \rho_{21}^2 - \rho_{2s}^2)P$, we vary $P'$ over the interval $0 \le P' \le (1 - \rho_{21}^2 - \rho_{2s}^2)P$ and then, by setting $P'' = (1 - \rho_{21}^2 - \rho_{2s}^2)P - P'$, we can plot the left-hand side of the condition in (2).
Figure 2

LHS of (2) versus the parameter $P'$ under which the achievable rate region by random coding is zero. (a) System parameters are $P = 5$ dB and $a = 0.15$. (b) System parameters are $P = 10$ dB and $a = 0.1$.
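The sweep just described can be reproduced numerically. The sketch below (assuming $\rho_{21} = \rho_{2s} = 0$, $N = 1$, and the Figure 2b parameters $P = 10$ dB and $a = 0.1$) evaluates the LHS of (2) over the admissible $P'$ interval; it is positive everywhere, so random coding achieves no positive rate at this operating point:

```python
import numpy as np

P, N, a = 10.0, 1.0, 0.1        # P = 10 dB in linear scale, Figure 2b setting
kappa = 1.0                      # 1 - rho21^2 - rho2s^2, taken as 1 here
Pp = np.linspace(0.01, kappa * P, 500)   # P' grid (excluding P' = 0)
Ppp = kappa * P - Pp                     # P'' = kappa*P - P'
alpha = Pp / (Pp + Ppp + N)

# LHS of condition (2): (alpha^2 - 2*alpha) P' + alpha^2 P'' + (alpha^2/a^2) N
lhs = (alpha**2 - 2 * alpha) * Pp + alpha**2 * Ppp + (alpha**2 / a**2) * N
print(lhs.min())
```

For this parameter choice the expression simplifies to $\alpha[\alpha(P + N/a^2) - 2P']$, which stays strictly positive for all $P' > 0$.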

4 Lattice alignment

4.1 Lattice definitions

Here, we provide some necessary definitions on lattices and nested lattice codes. The reader can find more details in [34, 41, 44].

Definition 1

(Lattice): An $n$-dimensional lattice $\Lambda$ is a set of points in Euclidean space $\mathbb{R}^n$ such that, if $\mathbf{x}, \mathbf{y} \in \Lambda$, then $\mathbf{x} + \mathbf{y} \in \Lambda$, and if $\mathbf{x} \in \Lambda$, then $-\mathbf{x} \in \Lambda$. A lattice $\Lambda$ can always be written in terms of a generator matrix $\mathbf{G} \in \mathbb{R}^{n \times n}$ as
$$\Lambda = \{\mathbf{x} = \mathbf{z}\mathbf{G} : \mathbf{z} \in \mathbb{Z}^n\},$$

where $\mathbb{Z}$ represents the integers.

Definition 2.

(Quantizer): The nearest-neighbor quantizer $Q_\Lambda(\cdot)$ associated with the lattice $\Lambda$ is
$$Q_\Lambda(\mathbf{x}) = \arg\min_{\mathbf{l} \in \Lambda} \|\mathbf{x} - \mathbf{l}\|.$$

Definition 3.

(Voronoi region): The fundamental Voronoi region of a lattice $\Lambda$ is the set of points in $\mathbb{R}^n$ closest to the zero codeword, i.e.,
$$\mathcal{V}_0(\Lambda) = \{\mathbf{x} \in \mathbb{R}^n : Q_\Lambda(\mathbf{x}) = \mathbf{0}\}.$$

Definition 4.

(Moments): The second moment of the lattice $\Lambda$, denoted $\sigma^2(\Lambda)$, is given by
$$\sigma^2(\Lambda) = \frac{1}{n}\,\frac{\int_{\mathcal{V}(\Lambda)} \|\mathbf{x}\|^2 \, d\mathbf{x}}{\int_{\mathcal{V}(\Lambda)} d\mathbf{x}},$$
(3)
and the normalized second moment of the lattice $\Lambda$ is
$$G(\Lambda) = \frac{\sigma^2(\Lambda)}{\left(\int_{\mathcal{V}(\Lambda)} d\mathbf{x}\right)^{2/n}} = \frac{\sigma^2(\Lambda)}{V^{2/n}},$$

where $V = \int_{\mathcal{V}(\Lambda)} d\mathbf{x}$ is the Voronoi region volume, i.e., $V = \mathrm{Vol}(\mathcal{V})$.

Definition 5.

(Modulus): The modulo-$\Lambda$ operation with respect to the lattice $\Lambda$ is defined as
$$\mathbf{x} \bmod \Lambda = \mathbf{x} - Q_\Lambda(\mathbf{x}),$$

which maps $\mathbf{x}$ into a point in the fundamental Voronoi region.

For all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ and $\Lambda \subseteq \Lambda_1$, the modulo lattice operation satisfies the following properties:
$$\left[\mathbf{x} \bmod \Lambda + \mathbf{y}\right] \bmod \Lambda = \left[\mathbf{x} + \mathbf{y}\right] \bmod \Lambda,$$
(4)
$$\left[a\left[\mathbf{x} \bmod \Lambda\right]\right] \bmod \Lambda = \left[a\mathbf{x}\right] \bmod \Lambda, \quad a \in \mathbb{Z},$$
(5)
$$\beta\left[\mathbf{x} \bmod \Lambda\right] = \left[\beta\mathbf{x}\right] \bmod \beta\Lambda, \quad \beta \in \mathbb{R},$$
(6)
$$\left[Q_{\Lambda_1}(\mathbf{x})\right] \bmod \Lambda = \left[Q_{\Lambda_1}\left(\mathbf{x} \bmod \Lambda\right)\right] \bmod \Lambda.$$
(7)

Definition 6.

(Quantization goodness or Rogers-good): A sequence of lattices $\Lambda^{(n)} \subseteq \mathbb{R}^n$ is good for mean-squared error (MSE) quantization if
$$\lim_{n\to\infty} G\left(\Lambda^{(n)}\right) = \frac{1}{2\pi e}.$$

The sequence is indexed by the lattice dimension n. The existence of such lattices is shown in [45, 46].

Definition 7.

(AWGN channel coding goodness or Poltyrev-good): Let $\mathbf{Z}$ be a length-$n$ i.i.d. Gaussian vector, $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \sigma_Z^2 \mathbf{I}_n)$. The volume-to-noise ratio of a lattice is given by
$$\mu(\Lambda, \varepsilon) = \frac{\mathrm{Vol}(\mathcal{V})^{2/n}}{\sigma_Z^2},$$
where $\sigma_Z^2$ is chosen such that $\Pr\{\mathbf{Z} \notin \mathcal{V}\} = \varepsilon$, and $\mathbf{I}_n$ is the $n \times n$ identity matrix. A sequence of lattices $\Lambda^{(n)}$ is Poltyrev-good if
$$\lim_{n\to\infty} \mu\left(\Lambda^{(n)}, \varepsilon\right) = 2\pi e, \quad \forall\, \varepsilon \in (0, 1),$$

and, for a fixed volume-to-noise ratio greater than $2\pi e$, $\Pr\{\mathbf{Z} \notin \mathcal{V}^{(n)}\}$ decays exponentially in $n$.

Poltyrev showed that sequences of such lattices exist [47]. The existence of a sequence of lattices $\Lambda^{(n)}$ which is good in both senses (i.e., simultaneously Poltyrev-good and Rogers-good) has been shown in [46].

Definition 8.

(Nested lattices): A lattice $\Lambda$ is said to be nested in a lattice $\Lambda_1$ if $\Lambda \subseteq \Lambda_1$. $\Lambda$ is referred to as the coarse lattice and $\Lambda_1$ as the fine lattice.

Note that if $a \in \mathbb{Z}$, then always $a\Lambda \subseteq \Lambda$.

Definition 9.

(Nested lattice codes): A nested lattice code is the set of all points of a fine lattice $\Lambda_1^{(n)}$ that lie within the fundamental Voronoi region of a coarse lattice $\Lambda^{(n)}$,
$$\mathcal{C} = \Lambda_1 \cap \mathcal{V}.$$

Definition 10.

(Rate): The rate of a nested lattice code is
$$R = \frac{1}{n}\log|\mathcal{C}| = \frac{1}{n}\log\frac{\mathrm{Vol}(\mathcal{V})}{\mathrm{Vol}(\mathcal{V}_1)}.$$
(8)
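As a concrete (one-dimensional, hence toy) instance of Definitions 8 to 10, take the fine lattice $\Lambda_1 = \mathbb{Z}$ nested in the coarse lattice $\Lambda = 8\mathbb{Z}$. The codebook $\mathbb{Z} \cap \mathcal{V}(8\mathbb{Z})$ contains 8 points, and (8) gives $R = \log 8 = 3$ bits:

```python
import numpy as np

def mod_lat(x, s):
    # x mod s*Z, using the half-open fundamental cell [-s/2, s/2)
    return x - s * np.floor(x / s + 0.5)

coarse = 8                                   # Lambda = 8Z (shaping lattice)
fine_points = np.arange(-100, 101)           # points of Lambda_1 = Z
codebook = np.unique(mod_lat(fine_points, coarse))

# Rate from (8): (1/n) log |C|; here n = 1 and Vol(V)/Vol(V1) = 8/1
R = np.log2(len(codebook))
print(len(codebook), R)
```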

In the following, we present a key property of dithered nested lattice codes.

Lemma 2.

The Crypto Lemma [34, 48]. Let $\mathbf{V}$ be a random vector with an arbitrary distribution over $\mathbb{R}^n$. If $\mathbf{D}$ is independent of $\mathbf{V}$ and uniformly distributed over $\mathcal{V}$, then $(\mathbf{V} + \mathbf{D}) \bmod \Lambda$ is also independent of $\mathbf{V}$ and uniformly distributed over $\mathcal{V}$.

Proof.

See Lemma 2 in [48].

Before presentation of our proposed scheme, we prove the following lemma that plays an important role in the proof of achievable rate region by lattice codes.

Lemma 3.

Suppose that $\Lambda$ and $\Lambda_1$ are two lattices such that $\Lambda \subseteq \Lambda_1$. Then, the modulo operation is commutative, i.e.,
$$\left[\mathbf{x} \bmod \Lambda_1\right] \bmod \Lambda = \left[\mathbf{x} \bmod \Lambda\right] \bmod \Lambda_1.$$
(9)

Proof.

We start with manipulating the left-hand side of (9):
$$\left[\mathbf{x} \bmod \Lambda_1\right] \bmod \Lambda = \mathbf{x} \bmod \Lambda_1 - Q_\Lambda\left(\mathbf{x} \bmod \Lambda_1\right) = \mathbf{x} - Q_{\Lambda_1}(\mathbf{x}) - Q_\Lambda\left(\mathbf{x} - Q_{\Lambda_1}(\mathbf{x})\right) = \mathbf{x} - Q_{\Lambda_1}(\mathbf{x}),$$
(10)
where the last equality follows from the fact that $\Lambda \subseteq \Lambda_1$. For the RHS of (9), we have:
$$\left[\mathbf{x} \bmod \Lambda\right] \bmod \Lambda_1 = \mathbf{x} \bmod \Lambda - Q_{\Lambda_1}\left(\mathbf{x} \bmod \Lambda\right) = \mathbf{x} - Q_\Lambda(\mathbf{x}) - Q_{\Lambda_1}\left(\mathbf{x} - Q_\Lambda(\mathbf{x})\right) = \mathbf{x} - Q_\Lambda(\mathbf{x}) - Q_{\Lambda_1}(\mathbf{x}) + Q_\Lambda(\mathbf{x})$$
(11)
$$= \mathbf{x} - Q_{\Lambda_1}(\mathbf{x}),$$
(12)

where (11) is based on the fact that $\Lambda \subseteq \Lambda_1$. Now, by comparing (10) and (12), the proof of the lemma is complete.
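Lemma 3 can be spot-checked numerically with nested one-dimensional lattices, e.g., $\Lambda = 6\mathbb{Z} \subseteq \Lambda_1 = 2\mathbb{Z}$ (example scales; any nested pair behaves the same way):

```python
import numpy as np

def mod_lat(x, s):
    # x mod s*Z with the half-open cell convention [-s/2, s/2)
    return x - s * np.floor(x / s + 0.5)

coarse, fine = 6.0, 2.0          # Lambda = 6Z is nested in Lambda_1 = 2Z
rng = np.random.default_rng(2)
x = rng.uniform(-50.0, 50.0, 2000)

lhs = mod_lat(mod_lat(x, fine), coarse)    # [x mod Lambda_1] mod Lambda
rhs = mod_lat(mod_lat(x, coarse), fine)    # [x mod Lambda] mod Lambda_1
print(np.allclose(lhs, rhs))
```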

4.2 Our proposed scheme

In this section, we obtain an achievable rate region using lattice codes for the G-CICS. If we use the common encoding and decoding as explained in [34], then, similar to random coding, we cannot achieve the capacity region within a constant gap. Thus, we need to introduce a new scheme for this channel. In this scheme, we apply two modulo operations at the decoder and then, using Lemma 3, interchange them. As we will see, this scheme achieves the capacity region at high SNRs and is within 0.5 bits of the outer bound regardless of the channel parameters. In the following, we present our scheme in more detail.

One method to obtain a rate region is to achieve its two corner points and then time-share between them. Suppose that $V_1$ and $V_2$ are two lattice codewords that carry the information for users 1 and 2, respectively. We use DPC or a lattice scheme to decode $V_2$ at decoder 2, and a scheme that estimates a linear combination of the common codeword at both decoders to decode $V_1$ for both users. In the following, we explain both schemes in more detail.

First, suppose that the lattice $\Lambda$ is Rogers-good and has the following second moment:
$$\sigma^2(\Lambda) = P.$$
  • Sending the private message, V2 (for decoder 2):

Here, we assume that encoder 1 has no message to send. Thus, we can consider the G-CICS as a point-to-point channel with state, whose aim is to send $V_2$ to decoder 2. This channel is characterized as follows:
$$Y_2 = X_2 + S + Z_2.$$
(13)
Since transmitter 1 has no message to send, we set $X_1 = 0$. Now, we can use a DPC or lattice scheme to achieve the capacity of this channel. By using a lattice coding scheme, transmitter 2 sends the following signal:
$$X_2 = \left[V_2 - \alpha S + D_2\right] \bmod \Lambda,$$
where the dither sequence $D_2$ is uniformly distributed over the Voronoi region $\mathcal{V}$ of $\Lambda$. Thus, based on the crypto lemma, the power constraint is met. Now, by using lattice decoding and choosing $\alpha = \frac{P}{P+N}$, we achieve the following corner point [35]:
$$\left(R_1, R_2\right) = \left(0, \frac{1}{2}\log\left(1 + \frac{P}{N}\right)\right).$$
(14)
  • Encoding the common message, V1:

To estimate the common message $V_1$, we first assume that $0 \le a \le 1$. Then, by a simple change in the encoding, we extend our scheme to $-1 \le a \le 0$. Suppose that $V_2 = 0$ and we intend to send $V_1$ to both decoders. Consider the following nested lattices:
$$\Lambda \subseteq q\Lambda_1,$$
where the coding lattice, $\Lambda_1$, is Poltyrev-good while the shaping lattice, $\Lambda$, is both Rogers-good and Poltyrev-good. For instance, a lattice partition chain is visualized in Figure 3 for the two-dimensional case. Without loss of generality, and for a reason that will become clear later, we assume that $\Lambda \subseteq q(1+a)\Lambda_1$. Based on this lattice chain, we construct the following codebook for each node:
$$\mathcal{C}_i = \Lambda_1 \cap \mathcal{V}, \quad i = 1, 2.$$
Figure 3

Example of a lattice partition chain. The blue circles and the dashed circles denote the lattice points associated with lattice q Λ1 and lattice Λ, respectively.

For the lattice chain shown in Figure 3, the codebook $\mathcal{C}_i$ contains ten codewords, indexed $\{1, 2, \ldots, 10\}$. Now, using a one-to-one mapping at encoder $i$, we map the message $W_1$ to a lattice codeword $V_1$ of the codebook $\mathcal{C}_i$ and send the following signals over the channel:
$$X_1 = \left[V_1 + D_1\right] \bmod \Lambda,$$
(15)
$$X_2 = \left[V_1 - \alpha S + D_2\right] \bmod \Lambda,$$
(16)

where $D_1$ and $D_2$ are two independent dithers uniformly distributed on the Voronoi region $\mathcal{V}$. Note that, by the crypto lemma, the power constraint is satisfied. Now, we explain decoding at decoders 1 and 2.

  • Decoding the common message, V1, at decoder 1

Decoder 1, upon receiving Y1,
$$Y_1 = X_1 + aX_2 + aS + Z_1,$$
performs the following operations to estimate V1:
$$Y_{d1} = \left[q\left[\left(\alpha Y_1 - D_1 - aD_2\right) \bmod \Lambda\right]\right] \bmod \Lambda = \left[\left[\alpha q\left(X_1 + aX_2 + aS + Z_1\right) - qD_1 - qaD_2\right] \bmod q\Lambda\right] \bmod \Lambda$$
(17)
$$= \left[\left[\alpha q\left(X_1 + aX_2 + aS + Z_1\right) - qD_1 - qaD_2\right] \bmod \Lambda\right] \bmod q\Lambda$$
(18)
$$= \left[\left[(1+a)qV_1 - q\left(V_1 + D_1\right) - aq\left(V_1 - \alpha S + D_2\right) + \alpha q\left(X_1 + aX_2 + Z_1\right)\right] \bmod \Lambda\right] \bmod q\Lambda = \left[\left[(1+a)qV_1 - qX_1 - aqX_2 + \alpha q\left(X_1 + aX_2 + Z_1\right)\right] \bmod \Lambda\right] \bmod q\Lambda$$
(19)
$$= \left[\left[(1+a)qV_1 + (\alpha-1)qX_1 + a(\alpha-1)qX_2 + \alpha qZ_1\right] \bmod q\Lambda\right] \bmod \Lambda = \left[\left[(1+a)qV_1 + Z_{\mathrm{eff}}\right] \bmod q\Lambda\right] \bmod \Lambda,$$
(20)
where
$$Z_{\mathrm{eff}} = \left[(\alpha-1)qX_1 + a(\alpha-1)qX_2 + \alpha qZ_1\right] \bmod q\Lambda.$$
Equation (17) follows from (6), and (18) is based on Lemma 3. Equation (19) is based on applying (4), while (20) follows from Lemma 3. By using minimum Euclidean distance lattice decoding [34, 49], which finds the closest point to $Y_{d1}$ in $q\Lambda_1$, we estimate $V_1' = \left[(1+a)qV_1\right] \bmod \Lambda$ as:
$$\hat{V}_1' = \left[Q_{q\Lambda_1}\left(Y_{d1}\right)\right] \bmod \Lambda$$
(21)
$$= \left[Q_{q\Lambda_1}\left(\left[\left[(1+a)qV_1 + Z_{\mathrm{eff}}\right] \bmod q\Lambda\right] \bmod \Lambda\right)\right] \bmod \Lambda = \left[Q_{q\Lambda_1}\left(\left[(1+a)qV_1 + Z_{\mathrm{eff}}\right] \bmod \Lambda\right)\right] \bmod \Lambda = \left[Q_{q\Lambda_1}\left((1+a)qV_1 + Z_{\mathrm{eff}}\right)\right] \bmod \Lambda,$$
(22)
where (22) is based on the fact that $\Lambda \subseteq q\Lambda_1$ and property (7). Thus, the estimation of $V_1'$ is correct if
$$Z_{\mathrm{eff}} \in \mathcal{V}\left(q\Lambda_1\right).$$
(23)

Note that, to decode $\left[(1+a)qV_1\right] \bmod \Lambda$, since we have used a quantizer associated with the lattice $q\Lambda_1$, several points of $q\Lambda_1$ may map to $\left[(1+a)qV_1\right] \bmod \Lambda$. That is, upon finding any such point of $q\Lambda_1$, since we used a one-to-one mapping, we can recover $\left[(1+a)qV_1\right] \bmod \Lambda$.

Therefore, (23) shows that the estimation of $V_1'$ is incorrect if the effective noise $Z_{\mathrm{eff}}$ leaves the Voronoi region surrounding the codeword $(1+a)qV_1$, i.e.,
$$P_e = \Pr\left\{Z_{\mathrm{eff}} \notin \mathcal{V}\left(q\Lambda_1\right)\right\}.$$
Now, from [34, 47], the error probability vanishes as $n \to \infty$ if
$$\mu = \frac{\mathrm{Vol}\left(\mathcal{V}\left(q\Lambda_1\right)\right)^{2/n}}{2\pi e \, \mathrm{Var}\left(Z_{\mathrm{eff}}\right)} > 1,$$
(24)
where $Z_{\mathrm{eff}} \sim \mathcal{N}\left(0, \mathrm{Var}\left(Z_{\mathrm{eff}}\right)\right)$. Since $\Lambda_1$ is Poltyrev-good, the condition in (24) can be satisfied. Now, from (8), for $R_1$ we have
$$R_1 = \frac{1}{n}\log\frac{\mathrm{Vol}(\mathcal{V})}{\mathrm{Vol}(\mathcal{V}_1)} = \frac{1}{2}\log\frac{\sigma^2(\Lambda)}{G(\Lambda)\,\mathrm{Vol}(\mathcal{V}_1)^{2/n}} \le \frac{1}{2}\log\frac{\sigma^2(\Lambda)}{G(\Lambda)\,2\pi e\,\mathrm{Var}\left(Z_{\mathrm{eff}}\right)}$$
(25)
$$= \frac{1}{2}\log\frac{\sigma^2(\Lambda)}{\mathrm{Var}\left(Z_{\mathrm{eff}}\right)} = \frac{1}{2}\log\frac{P}{\mathrm{Var}\left(Z_{\mathrm{eff}}\right)} = \frac{1}{2}\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right),$$
(26)

where (25) follows from (24), (26) is based on the Rogers goodness of $\Lambda$, and the last equality follows by choosing the MMSE scaling coefficient $\alpha = \frac{(1+a^2)P}{(1+a^2)P + N}$.

Now, we have $V_1' = \left[(1+a)qV_1\right] \bmod \Lambda$ and must recover $V_1$ from it. In the following lemma, we show that this can be done correctly.

Lemma 4.

Suppose $\Lambda_1$ and $\Lambda$ are two lattices such that $\Lambda \subseteq \Lambda_1$. For $\mathbf{x}, \mathbf{y} \in \Lambda_1$, $\mathbf{x} \ne \mathbf{y}$, and $a \in \mathbb{Z}$, we have
$$\left[a\mathbf{x}\right] \bmod \Lambda \ne \left[a\mathbf{y}\right] \bmod \Lambda,$$

if $\Lambda \subseteq a\Lambda_1$.

Proof.

By using the definition of the modulo operation, we have
$$\left[a\mathbf{x}\right] \bmod \Lambda - \left[a\mathbf{y}\right] \bmod \Lambda = a(\mathbf{x} - \mathbf{y}) - Q_\Lambda(a\mathbf{x}) + Q_\Lambda(a\mathbf{y}).$$
Since $\mathbf{x} \ne \mathbf{y}$, $\mathbf{x}, \mathbf{y} \in \Lambda_1$, and $a \in \mathbb{Z}$, $a(\mathbf{x} - \mathbf{y})$ is a non-zero element of the lattice $a\Lambda_1$. On the other hand, for the lattice $\Lambda$, we know $\Lambda \subseteq a\Lambda_1$. Thus, the non-zero element $a(\mathbf{x} - \mathbf{y})$ of the lattice $a\Lambda_1$ is not an element of the lattice $\Lambda$, and therefore we get
$$\left[a\mathbf{x}\right] \bmod \Lambda - \left[a\mathbf{y}\right] \bmod \Lambda \ne \mathbf{0}.$$
Now, we return to our problem, where we aim to estimate $V_1$. Since $\Lambda \subseteq q(1+a)\Lambda_1$, according to the preceding lemma, for two distinct codewords $V_1 \ne V_2$ we have
$$V_1' = \left[(1+a)qV_1\right] \bmod \Lambda \ne V_2' = \left[(1+a)qV_2\right] \bmod \Lambda.$$
Thus, there exists only one codeword that can satisfy $V_1' = \left[(1+a)qV_1\right] \bmod \Lambda$, and it is the transmitted codeword. Therefore, we can achieve the following corner point at decoder 1:
$$\left(R_1, R_2\right) = \left(\frac{1}{2}\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right), 0\right).$$
(27)
  • Decoding the common message, V1, at decoder 2:

Decoding at decoder 2 is exactly the same as decoder 1. Thus, we can achieve the following corner point
$$\left(R_1, R_2\right) = \left(\frac{1}{2}\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right), 0\right).$$
(28)
Therefore, by using time sharing between the two corner points given in (14) and (27), we can achieve the following rate region:
$$R_2 + \frac{\log\left(1 + \frac{P}{N}\right)}{\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right)}\, R_1 \le \frac{1}{2}\log\left(1 + \frac{P}{N}\right).$$
(29)
It is easy to see that the following rate region is also achievable (since it is inside the region given by (29)):
$$R_2 + R_1 \le \frac{1}{2}\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right).$$
(30)
This transmission scheme is depicted in Figure 4.
Figure 4

Encoding/decoding scheme.
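Numerically, the two corner points (14) and (27) and the time-sharing region (29) can be written out directly. The sketch below (example values $a = 0.5$, SNR = 10 dB) also checks that every boundary point of the simpler sum-rate region (30) satisfies the time-sharing constraint (29):

```python
import numpy as np

a, snr = 0.5, 10.0**(10 / 10)    # example: a = 0.5, SNR = 10 dB, snr = P/N
x = 1.0 / (1.0 + a**2)

corner_dpc = (0.0, 0.5 * np.log2(1 + snr))          # corner point (14)
corner_common = (0.5 * np.log2(x + snr), 0.0)       # corner point (27)

# Boundary of the sum-rate region (30)
R1 = np.linspace(0.0, corner_common[0], 200)
R2 = 0.5 * np.log2(x + snr) - R1

# Time-sharing constraint (29): R2 + c*R1 <= (1/2) log(1 + P/N)
c = np.log2(1 + snr) / np.log2(x + snr)
inside = np.all(R2 + c * R1 <= 0.5 * np.log2(1 + snr) + 1e-9)
print(corner_dpc, corner_common, inside)
```

Both constraints are linear in $(R_1, R_2)$, so checking the boundary of (30) suffices.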

Remark 1

In order to extend our scheme to the case $-1 \le a \le 0$, we must modify our encoding as follows:
$$X_1 = \left[-V_1 + D_1\right] \bmod \Lambda,$$
(31)
$$X_2 = \left[V_1 - \alpha S + D_2\right] \bmod \Lambda.$$
(32)

By comparing (31) with (15), we can see that at encoder 1, instead of sending $V_1$, we transmit $-V_1$. At the decoder, instead of decoding $V_1' = \left[(q+p)V_1\right] \bmod \Lambda$, we find $V_1' = \left[(p-q)V_1\right] \bmod \Lambda$. But since $p \ge 0$ and $q \le 0$, we have $p - q \ge -q$, which enables us to estimate $V_1$ correctly. Note that for the case $-1 \le a \le 0$, if we estimate $\left[(q+p)V_1\right] \bmod \Lambda$, since $p + q \le -q$, we cannot find the desired lattice point correctly.

4.2.1 Rate-region outer bound

For comparison, an outer bound on the capacity region of the G-CICS is provided. This outer bound is similar to the bound provided in [16], here obtained using a different approach.

Lemma 5.
For the Gaussian cognitive interference channel with state non-causally known at transmitter 2, if the power of the state goes to infinity ($Q \to \infty$), an outer bound consists of the rate pairs $(R_1, R_2)$ satisfying:
$$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P\left(1 - \rho_{21}^2 - \rho_{2s}^2\right)}{N}\right),$$
(33)

where the union is taken over all parameters $0 \le \rho_{21}, \rho_{2s} \le 1$ such that $\rho_{21}^2 + \rho_{2s}^2 \le 1$.

Proof.
We have
$$n(R_1 + R_2) = H(W_1, W_2) = H(W_1, W_2 | \mathbf{Y}_2) + I(W_1, W_2; \mathbf{Y}_2) \le n\varepsilon_n + I(W_1, W_2; \mathbf{Y}_2),$$
(34)
where (34) is based on Fano's inequality. For the second term, we have
$$I(W_1, W_2; \mathbf{Y}_2) = h(\mathbf{Y}_2) - h(\mathbf{Y}_2 | W_1, W_2) = h(\mathbf{Y}_2) + h(\mathbf{S} | W_1, W_2, \mathbf{Y}_2) - h(\mathbf{Y}_2, \mathbf{S} | W_1, W_2) \le h(\mathbf{Y}_2) + h(\mathbf{S} | \mathbf{X}_1, \mathbf{Y}_2) - h(\mathbf{Y}_2 | \mathbf{X}_1, \mathbf{X}_2, \mathbf{S}) - h(\mathbf{S})$$
(35)
$$= h(\mathbf{Y}_2) - h(\mathbf{S}) + h(\mathbf{S} | \mathbf{X}_1, \mathbf{Y}_2) - h(\mathbf{Z}_2) \le \frac{n}{2}\log\left(\frac{N + a^2P + P + 2a\rho_{21}P + 2\rho_{2s}\sqrt{PQ} + Q}{Q}\right) + \frac{n}{2}\log\left(1 + \frac{P\left(1 - \rho_{21}^2 - \rho_{2s}^2\right)}{N}\right),$$
(36)
where $\rho_{21} = \frac{\mathbb{E}[X_2 X_1]}{\sqrt{\mathbb{E}[X_2^2]\mathbb{E}[X_1^2]}}$, $\rho_{2s} = \frac{\mathbb{E}[X_2 S]}{\sqrt{\mathbb{E}[X_2^2]\mathbb{E}[S^2]}}$, and (35) follows from the fact that $\mathbf{S}$ is independent of $(W_1, W_2)$. Equation (36) follows from the fact that the Gaussian distribution maximizes differential entropy for a fixed second moment, together with the Cauchy-Schwarz inequality. In the limit of high state power, i.e., $Q \to +\infty$, the first term in (36) vanishes, and we get
$$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P\left(1 - \rho_{21}^2 - \rho_{2s}^2\right)}{N}\right).$$

4.3 Capacity results

By comparing the outer bound (33) with the achievable region in (29), we conclude that the outer bound is indeed tight at high SNRs for the weak and moderate interference case in the high state power regime. Thus, we have the following corollary.

Corollary 1.

At high SNRs and in the high state power regime, the capacity region of the state-dependent Gaussian cognitive interference channel for the weak and moderate interference case is given by the set of all rate pairs satisfying
$$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P}{N}\right) - o(1),$$
(37)

where $o(1) \to 0$ as $\frac{P}{N} \to \infty$.

We now calculate the maximum gap between the outer bound, given in (33), and the achievable rate region, given by (30). We have
$$\frac{1}{2}\log\left(1 + \frac{P}{N}\right) - \frac{1}{2}\log\left(\frac{1}{a^2+1} + \frac{P}{N}\right) \le \frac{1}{2}\log\left(2 - \frac{1}{a^2+1}\right) \le \frac{1}{2},$$
(38)

where (38) is based on the fact that the maximum gap occurs at $\frac{1}{a^2+1} + \frac{P}{N} = 1$. Thus, we have the following result.
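The half-bit claim in (38) can be verified numerically by sweeping the gap between the outer bound $\frac{1}{2}\log(1+\frac{P}{N})$ and the lattice sum rate (30) over a grid of channel gains and SNRs (the grid ranges are arbitrary):

```python
import numpy as np

a = np.linspace(0.0, 1.0, 101)[:, None]        # channel gains, |a| <= 1
snr = np.logspace(-2, 3, 201)[None, :]         # P/N from 0.01 to 1000

# Gap (bits) between the outer bound and the achievable sum rate (30),
# computed over the full a-by-snr grid via broadcasting
gap = 0.5 * np.log2((1 + snr) / (1.0 / (1 + a**2) + snr))
print(gap.max())
```

The gap is monotonically decreasing in the SNR and grows with $|a|$, peaking just below 0.5 bits.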

Theorem 1

The capacity region of the state-dependent Gaussian cognitive interference channel for the weak and moderate interference case in the high state power regime is achievable to within 0.5 bits.

5 Numerical results

In this section, we numerically compare the achievable rates of random coding with those of our lattice-based transmission scheme. For comparison, the outer bound is also provided. For simplicity, we use the following outer bound in our simulations:
$$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P}{N}\right).$$
In Figure 5, we compare the achievable rate regions with the outer bound at SNR = 10 dB for a = 0.3, 0.5, 0.8 (note that |a| ≤ 1). We observe that the achievable rate region of our lattice-based coding scheme is significantly larger than that of random coding. Moreover, as the channel gain a increases, the rate achieved by lattice codes remains within 0.5 bits of the outer bound.
Figure 5

Rate-region outer bound and achievable rate regions of random coding and lattice-based coding scheme for different channel gains. System parameters are a=0.3,0.5,0.8 and S N R=10 dB.

In Figure 6, we compare the achievable rate regions when the channel gain is fixed at a = 0.5 and the SNR varies over SNR = 5, 15, 25 dB. As we observe, at high SNRs the achievable rate region of the lattice codes coincides with the outer bound.
Figure 6

Comparison between the achievable rate-regions and the outer bound for different SNR values. System parameters are a=0.5 and S N R=5,15,25 dB.
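The convergence seen in Figure 6 can be reproduced in a few lines. As before, this sketch assumes the lattice achievable sum rate has the form $\frac{1}{2}\log_2\left(\frac{1}{a^2+1} + \mathrm{SNR}\right)$, an assumption carried over from the gap computation in (38).

```python
import numpy as np

# Gap between the outer bound and the assumed lattice achievable rate
# at a = 0.5 for the SNR values of Figure 6; the gap shrinks as the SNR
# grows, matching the observed coincidence of the two regions at high SNR.
a = 0.5
gaps = []
for snr_db in (5, 15, 25):
    snr = 10.0 ** (snr_db / 10.0)
    outer = 0.5 * np.log2(1.0 + snr)
    inner = 0.5 * np.log2(1.0 / (a**2 + 1.0) + snr)
    gaps.append(outer - inner)
    print(f"SNR = {snr_db} dB: gap = {outer - inner:.4f} bits")
```

At 25 dB the gap is on the order of $10^{-4}$ bits, consistent with the claim that the lattice scheme achieves the capacity region at high SNR.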

6 Conclusions

In this paper, we studied the state-dependent Gaussian cognitive interference channel in the weak and moderate interference case and in the high state power regime. First, we showed that the rate achievable by random coding, based on Gelfand-Pinsker coding, vanishes under a condition on the channel gain. Then, we showed that a scheme based on lattice codes can achieve the capacity region at high SNRs, and is within 0.5 bits of the outer bound for all channel parameters.

Authors’ information

The authors are members of IEEE.

Declarations

Acknowledgments

This work has been supported in part by the Iran NSF under Grant No. 88114.46. The material in this paper was presented in part at the 2nd Iran Workshop on Communication and Information Theory (IWCIT 2014), Tehran, Iran.

Authors’ Affiliations

(1)
Electrical Engineering Department, Sharif University of Technology

References

  1. Carleial AB: Interference channels. IEEE Trans. Inf. Theory 1978, 24(1):60-70. 10.1109/TIT.1978.1055812
  2. Han TS, Kobayashi K: A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory 1981, 27(1):49-60. 10.1109/TIT.1981.1056307
  3. Sato H: The capacity of the Gaussian interference channel under strong interference. IEEE Trans. Inf. Theory 1981, 27(6):786-788. 10.1109/TIT.1981.1056416
  4. Carleial AB: A case where interference does not reduce capacity. IEEE Trans. Inf. Theory 1975, 21(5):569-570. 10.1109/TIT.1975.1055432
  5. Sason I: On achievable rate regions for the Gaussian interference channel. IEEE Trans. Inf. Theory 2004, 50(6):1345-1356.
  6. Motahari A, Khandani A: Capacity bounds for the Gaussian interference channel. IEEE Trans. Inf. Theory 2009, 55(2):620-643.
  7. Etkin RH, Tse DNC, Wang H: Gaussian interference channel capacity to within one bit. IEEE Trans. Inf. Theory 2008, 54(12):5534-5562.
  8. Wu W, Vishwanath S, Arapostathis A: Capacity of a class of cognitive radio channels: interference channels with degraded message sets. IEEE Trans. Inf. Theory 2007, 53(11):4391-4399.
  9. Maric I, Goldsmith A, Kramer G, Shamai (Shitz) S: On the capacity of interference channels with one cooperating transmitter. Eur. Trans. Telecomm. 2008, 19:405-420. 10.1002/ett.1298
  10. Jovicic A, Viswanath P: Cognitive radio: an information-theoretic perspective. IEEE Trans. Inf. Theory 2009, 55(9):3945-3958.
  11. Zhang L, Jinhua J, Shuguang C: Gaussian interference channel with state information. IEEE Trans. Wireless Commun. 2013, 12(8):4058-4071.
  12. Costa M: Writing on dirty paper. IEEE Trans. Inf. Theory 1983, 29(3):439-441. 10.1109/TIT.1983.1056659
  13. Duan R, Liang Y, Shamai (Shitz) S: On the capacity region of Gaussian interference channels with state. In Proc. IEEE ISIT. Istanbul, Turkey; 2013:1097-1101.
  14. Duan R, Liang Y, Khisti A, Shamai (Shitz) S: State-dependent Gaussian Z-channel with mismatched side-information and interference. In Proc. IEEE Inf. Theory Workshop (ITW). Sevilla, Spain; 2013.
  15. Somekh-Baruch A, Shamai (Shitz) S, Verdu S: Cognitive interference channels with state information. In Proc. IEEE Int. Symp. Information Theory (ISIT). Toronto, Canada; 2008:1353-1357.
  16. Duan R, Liang Y, Shamai (Shitz) S: Bounds and capacity theorems for cognitive interference channels with state. IEEE Trans. Inf. Theory, June 2012, revised Oct 2013. [http://arxiv.org/abs/1207.0016]
  17. Somekh-Baruch A, Shamai (Shitz) S, Verdu S: Cooperative multiple access encoding with states available at one transmitter. IEEE Trans. Inf. Theory 2008, 54(10):4448-4469.
  18. Zaidi A, Vandendorpe L, Kotagiri SP, Laneman JN: Multiaccess channels with state known to one encoder: another case of degraded message sets. In Proc. IEEE ISIT. Seoul, South Korea; 2009:2376-2380.
  19. Zaidi A, Piantanida P, Shamai (Shitz) S: Capacity region of cooperative multiple-access channel with states. IEEE Trans. Inf. Theory 2013, 59(10):6153-6174.
  20. Zaidi A, Piantanida P, Shamai (Shitz) S: Wyner-Ziv type versus noisy network coding for a state-dependent MAC. In Proc. IEEE ISIT. Cambridge, MA; 2012:1682-1686.
  21. Zaidi A, Piantanida P, Shamai (Shitz) S: Multiple access channel with states known noncausally at one encoder and only strictly causally at the other encoder. In Proc. IEEE ISIT. Saint Petersburg, Russia; 2011:2801-2805.
  22. Zaidi A, Shamai (Shitz) S: On cooperative multiple access channels with delayed CSI at transmitters. IEEE Trans. Inf. Theory 2014, 60(10):6204-6230.
  23. Zaidi A, Shamai (Shitz) S: Asymmetric cooperative multiple access channels with delayed CSI. In Proc. IEEE ISIT. Honolulu, HI; 2014:1186-1190.
  24. Zaidi A, Shamai (Shitz) S: On cooperative multiple access channels with delayed CSI. In Proc. IEEE ISIT. Istanbul, Turkey; 2013:982-986.
  25. Zaidi A, Vandendorpe L, Duhamel P: Lower bounds on the capacity regions of the relay channel and the cooperative relay-broadcast channel with non-causal side-information. In Proc. IEEE Int. Commun. Conf. (ICC). Glasgow, Scotland; 2007:6005-6011.
  26. Zaidi A, Kotagiri SP, Laneman JN, Vandendorpe L: Cooperative relaying with state available non-causally at the relay. IEEE Trans. Inf. Theory 2010, 56(5):2272-2298.
  27. Zaidi A, Kotagiri SP, Laneman JN, Vandendorpe L: Cooperative relaying with state at the relay. In Proc. IEEE Information Theory Workshop (ITW). Porto, Portugal; 2008:139-143.
  28. Zaidi A, Shamai S, Piantanida P, Vandendorpe L: Bounds on the capacity of the relay channel with noncausal state at source. IEEE Trans. Inf. Theory 2013, 59(5):2639-2672.
  29. Zaidi A, Vandendorpe L: Lower bounds on the capacity of the relay channel with states at the source. EURASIP J. Wireless Commun. Netw. 2009, 2009:1-23.
  30. Zaidi A, Shamai (Shitz) S, Piantanida P, Vandendorpe L: Bounds on the capacity of the relay channel with noncausal state information at source. In Proc. IEEE ISIT. Austin, TX; 2010:639-643.
  31. Zaidi A, Vandendorpe L: Rate regions for the partially-cooperative relay broadcast channel with non-causal side information. In Proc. IEEE ISIT. Nice, France; 2007:1246-1250.
  32. Zaidi A, Vandendorpe L: Achievable rates for the Gaussian relay interferer channel with a cognitive source. In Proc. IEEE Int. Commun. Conf. (ICC). Dresden, Germany; 2009:1-6.
  33. Zamir R: Lattices are everywhere. In Proc. 4th Annual Workshop on Information Theory and its Applications (ITA 2009). San Diego, CA; 2009:392-421.
  34. Erez U, Zamir R: Achieving 1/2 log(1+SNR) on the AWGN channel with lattice encoding and decoding. IEEE Trans. Inf. Theory 2004, 50(10):2293-2314.
  35. Erez U, Shamai (Shitz) S, Zamir R: Capacity and lattice strategies for canceling known interference. IEEE Trans. Inf. Theory 2005, 51(11):3820-3833.
  36. Philosof T, Zamir R, Erez U, Khisti AJ: Lattice strategies for the dirty multiple access channel. IEEE Trans. Inf. Theory 2011, 57(8):5006-5035.
  37. Ghasemi-Goojani S, Behroozi H: On the transmission strategies for the two-user state-dependent Gaussian interference channel. In Proc. Tenth International Symposium on Wireless Communication Systems (ISWCS). Ilmenau, Germany; 2013.
  38. Song Y, Devroye N: Structured interference-mitigation in two-hop networks. In Proc. Information Theory and Applications Workshop (ITA). UCSD, San Diego; 2011.
  39. Hong S-N, Caire G: Generalized degrees of freedom for network-coded cognitive interference channel. In Proc. IEEE ISIT. Istanbul, Turkey; 2013:1769-1773.
  40. Ghasemi-Goojani S, Behroozi H: State-dependent Gaussian Z-interference channel: new results. In Proc. IEEE ISITA. Melbourne, Australia; 2014.
  41. Nazer B, Gastpar M: Compute-and-forward: harnessing interference through structured codes. IEEE Trans. Inf. Theory 2011, 57(10):6463-6486.
  42. Jafarian A, Vishwanath S: Achievable rates for k-user Gaussian interference channels. IEEE Trans. Inf. Theory 2012, 58(7):4367-4380.
  43. Ordentlich O, Erez U: On the robustness of lattice interference alignment. IEEE Trans. Inf. Theory 2013, 59(5):2735-2759.
  44. Conway JH, Sloane NJA: Sphere Packings, Lattices and Groups. Springer-Verlag, New York; 1992.
  45. Zamir R, Feder M: On lattice quantization noise. IEEE Trans. Inf. Theory 1996, 42(4):1152-1159. 10.1109/18.508838
  46. Erez U, Litsyn S, Zamir R: Lattices which are good for (almost) everything. IEEE Trans. Inf. Theory 2005, 51(10):3401-3416.
  47. Poltyrev G: On coding without restrictions for the AWGN channel. IEEE Trans. Inf. Theory 1994, 40(2):409-417.
  48. Forney GD: On the role of MMSE estimation in approaching the information theoretic limits of linear Gaussian channels: Shannon meets Wiener. In Proc. 41st Ann. Allerton Conf. Monticello, IL; 2003:430-439.
  49. Gamal HE, Caire G, Damen MO: Lattice coding and decoding achieve the optimal diversity-multiplexing tradeoff of MIMO channels. IEEE Trans. Inf. Theory 2004, 50(6):968-985. 10.1109/TIT.2004.828067

Copyright

© Ghasemi-Goojani and Behroozi; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.