- Research article
- Open Access

# Slotted Gaussian Multiple Access Channel: Stable Throughput Region and Role of Side Information

- Vaneet Aggarwal^{1}
- Ashutosh Sabharwal^{2}

**2008**:2

https://doi.org/10.1155/2008/695894

© V. Aggarwal and A. Sabharwal. 2008

**Received:** 2 September 2007 | **Accepted:** 19 March 2008 | **Published:** 19 March 2008

## Abstract

We study the relation between the stable throughput regions and the capacity regions for a Gaussian multiple-access channel. Our main focus is to study how the extent of side information about source arrival statistics and/or instantaneous queue states at each transmitter influence the achievable stable throughput region. Two notions of MAC capacity are studied. The first notion is the conventional Shannon capacity which relies on large coding block lengths for finite *SNR*, while the second uses finite code blocks with high *SNR*. We find that the stable throughput region coincides with the Shannon capacity region for many scenarios of side information, where side information is defined as a mix of statistical description and instantaneous queue states. However, a lack of sufficient side information about arrival statistics can lead to a significant reduction in the stable throughput region. Finally, our results lend strong support to centralized architectures implementing some form of congestion/rate control to achieve Shannon capacity, primarily to counter lack of detailed information about source statistics at the mobile nodes.

## Keywords

- Arrival Rate
- Queue Length
- Capacity Region
- Side Information
- Arrival Distribution

## 1. Introduction

Often in communication networks, the traffic distribution is unknown. However, in bounding the performance of a network, it is commonly assumed [1-5] that the transmitters and the receivers have complete knowledge of the source arrival distribution (i.e., the probability distribution of the number of packets arriving per unit time), which is then used in the design of optimal transmission methods. In this paper, we study the impact of such side information regarding the traffic on the stability performance of Gaussian multiple-access channels.

We will focus our attention on a slotted Gaussian multiple-access channel with *K* users sending bursty data to a single receiver. In capacity analysis, it is implicitly assumed that the users are aware of all the transmission rates (of every user) and have optimal codebooks, which allows the whole system to operate close to the boundary of the capacity region; this can be understood as all nodes having complete statistical knowledge of all sources. In this work, we take a first step towards understanding exactly how much information is needed at each node about the *other* sources. We study this through a series of five cases with different amounts of statistical information at each node. Furthermore, unlike prior work [3, 4], we adopt a more general source model in which data arrives randomly at a user with a general distribution in each symbol duration. This arrival process decouples the definition of source arrivals from that of the communication-system design, which occurs in codewords of block length *n*.

In the first of the five cases, the transmitters and the receiver are assumed to know the arrival distribution of all the sources, which represents the case of complete statistical information. We show that in this case the stable throughput region coincides with the Shannon capacity region. A similar result was shown in [3] for a two-user multiple-access system, but it required not only complete distributional knowledge but also one bit of instantaneous queue state information. Our encoding strategy holds for a general *K*-user system *and* requires no instantaneous state information to achieve every point in the Shannon capacity region. Keeping our objective in mind, in the second case we reduce the amount of side information from full statistical knowledge to knowledge of only the mean arrival rates of every node. Since the probability of a given input bit pattern depends on the length of the bit pattern, coding techniques that assume every input bit pattern is equally likely are not optimal. As a result, it appears that knowledge of mean arrival rates alone is insufficient to guarantee achieving every point in the Shannon capacity region (optimal operation would potentially require a multiuser universal source coder and, since we are transmitting over a noisy channel, multiuser source/channel separation; thus, the chances that the full Shannon capacity region is achievable with only information about mean arrival rates appear slim). However, we show that if each node sends quantized one-bit information about its own queue state to the receiver every time-slot, then the entire Shannon capacity region can be achieved with this reduced statistical information. Again, in contrast to the main result in [3], we show that limited statistical information suffices if one bit of instantaneous system state information is available.

In contrast to the above cases, we then consider a case where the sources are not aware of the mean arrival rates of other nodes. In this case, the stability region is significantly reduced compared to the Shannon capacity region, even with one bit of quantized state information every time-slot. The significant loss can be attributed to the fact that each node has to essentially assume that the other nodes potentially have the highest possible load, thus *predividing* the total available capacity into *K* equal portions. This division is inefficient when some of the nodes have lower-than-maximum arrival rates. Thus, the stable throughput region of actual systems may be significantly smaller than the Shannon capacity region due to lack of knowledge of the source arrival distributions, underscoring the importance of side information.

We next show that this loss can be completely recovered if the receiver either has full statistical knowledge (as in the first case above) or knows the mean arrival rates of all nodes along with one-bit instantaneous queue state information, and can send feedback to each node once during system operation. This one-time feedback is akin to congestion/rate control [6, 7], where the receiver (e.g., a base station) informs each node of its allowable rate at the beginning of the communication. In this case, the stable throughput region coincides with the Shannon capacity region. Hence, compared to the first scenario, we have traded knowledge of the complete arrival distributions of other sources at each source for feedback from the receiver.

For each case of side information, we consider two notions of Shannon capacity. The first is the conventional notion of capacity, which is defined for a fixed value of *SNR* with growing encoding block length [5]. Under this notion, the problem of understanding delay in an information-theoretic setting remains intractable, since *n → ∞* makes all talk of per-bit delay irrelevant: a bit, even if it is in the queue for only a finite number of time-slots, will have infinitely large delay since the time-slot itself is infinitely long. Hence, we explore an alternate asymptote, partially inspired by classical diversity-order analysis, where we study finite block lengths in a high-*SNR* regime. In this regime, we no longer work with the exact capacity region, but with the rates of growth. However, as *SNR* increases, the capacity increases in an unbounded fashion, so for any meaningful discussion of stability regions, the sources have to produce data at rates which scale with the *SNR*. This leads us to reconsider the definition of stability. Note that this is not an uncommon assumption, and is implicit in asymptotic-in-*SNR* frameworks like diversity-multiplexing studies [8, 9]. We formalize these new notions of capacity and stable throughput region and show that conceptually similar results hold for the finite block length case. While the results are conceptually similar under the two notions of capacity, they differ in the details of their proofs.

We quickly note that the problem of not knowing statistics of other nodes can be understood as a multiuser universal source-coding problem, where the distributed sources are not aware of the full joint distribution. In this case, the transmitters will encode assuming incomplete system knowledge and the receiver will decode assuming similarly reduced information about the system. Such a general formulation will form the obvious next step following our current work, and will be a topic of discussion elsewhere.

The relation between queuing stability and Shannon capacity remains a problem of interest, since most sources are bursty and require delay-bounded delivery. For random-access systems, the problem has been studied extensively (e.g., see [1, 4, 10-13]). A key result in [2, 4] is that the queuing stability region coincides with the Shannon capacity region in many cases on the collision channel, illustrating that the bursty nature of the arriving packets does not limit the data rate at which the probability of error can be made arbitrarily small. The entire body of work on collision channels assumes that the arrival distributions are known to all the users, and also that the receiver knows whether a particular user is sending data, in order to determine collisions. Also, the transmitting users have some side information by which they know if a collision has occurred and, hence, decide to retransmit. Similar information is needed for random-access systems with multipacket reception capability in order to decide retransmissions.

Following the work on the collision channel, the effect of scheduling and power control on the stability in multiuser channels under Poisson arrivals was studied in [7, 14-16]. The stability of the queues in wired switches has been studied for general arrivals [17, 18] for certain scheduling algorithms. While numerous works characterize stability conditions, stability policies and stable-system behavior, we concentrate on the influence of side information on the stable throughput region.

The relation between queuing stability and Shannon capacity for multiple-access systems over AWGN channels has been studied earlier in [3, 7, 14, 15, 19]. Most of this work [7, 14, 15, 19] assumes complete knowledge of the queue states. In [3], under the assumptions of large time slots and a maximum of one packet of fixed length arriving at each user in a time slot for a two-user system, it was shown that the stable throughput region is independent of the burstiness and the shared queue information. That model assumed a fixed form of side information, rather than the tradeoff of side information against stable throughput regions that is addressed in this paper.

We begin in Section 2 by describing the channel model and the capacity regions. In Section 3, we assume finite *SNR* with large time-slots. Various achievable stable throughput regions are described depending on the amount of side information. In Section 4, we consider the dual problem of finite time-slots with large *SNR*, where the influence of the amount of side information on the achievable stable throughput region is studied.

## 2. Problem Formulation

### 2.1. Channel and source model

*K* users transmit to a single receiver. We will assume a time-slotted system where the transmissions only occur at the slot boundaries. Each slot is indexed by *k* and, within each slot, the *n* symbols are indexed by *j*. The channel output at time unit *j*,

$$Y\left[j\right]=\sum_{i=1}^{K}{X}_{i}\left[j\right]+Z\left[j\right],\phantom{\rule{2em}{0ex}}\left(1\right)$$

is the sum of the transmissions of the users and Gaussian noise, where *Z*[*j*] is zero-mean i.i.d. Gaussian noise with variance *N* and is independent of the channel inputs *X*_{i}[*j*] ∈ ℝ, *i* = 1, 2,…, *K* ((*X*_{i}[1],…, *X*_{i}[*n*]) represents the *n*-length codeword sent in a time-slot). The channel output is *Y*[*j*] ∈ ℝ. The transmit power of user *i* averaged over all the codewords is *P*_{i} if it is transmitting and 0 otherwise; thus, no power control is performed. We also assume that the receiver has perfect timing information to ensure synchronization at the receiver. Finally, define **SNR** = (*SNR*_{1}, *SNR*_{2},…, *SNR*_{K}), where *SNR*_{i} = *P*_{i}/*N* is the signal-to-noise ratio for user *i* (*SNR*_{i} is the ratio of the average signal power of the *i*th user to the noise variance, and does not include interference from other users).

While the transmissions occur only at the slot boundaries, the data can arrive anytime during the slot. Data arrives for user *i* in the form of bits with an average rate of *λ*_{i} bits per unit time (one time unit represents one symbol duration). The arrival process is thus defined independently of the slot boundaries or the slot length, directly on the smallest time unit, the symbol duration. We define Δ_{i}(*k*) to be the number of bits that arrive for transmission at user *i* in time-slot *k*, with *E*[Δ_{i}(*k*)] = *nλ*_{i}. Thus, unlike the previous works [1, 2, 4], we do not assume that the arrivals occur only at the start or the end of the time-slot, which makes the definition of the arrival process independent of the communication-system design, notably the slot length *n*.

We denote the number of bits served from the queue of user *i* in time slot *k* as Ω_{i}(*k*). Further, we denote the number of bits in the *i*th queue at the beginning of time-slot *k* as *Q*_{i}(*k*). The queue is updated as

$${Q}_{i}\left(k+1\right)={Q}_{i}\left(k\right)-{\mathrm{\Omega}}_{i}\left(k\right)+{\mathrm{\Delta}}_{i}\left(k\right),\phantom{\rule{2em}{0ex}}\left(2\right)$$

with *Q*_{i}(1) = 0. We assume that the number of elements in the queue of the *i*th user (or the queue state) is known only to the *i*th user. The stable throughput region is the region where the queues do not grow unbounded in the steady state. The mathematical definitions of these stability regions for large time-slots and large *SNR*s are given in Section 2.3.
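The queue update above can be sketched in a few lines of simulation; the binomial arrival law and the numerical rates below are illustrative choices, not part of the model:

```python
import random

def simulate_queue(lam=2.0, n=100, slots=10_000, seed=1):
    """Iterate Q(k+1) = Q(k) - Omega(k) + Delta(k) for one user, with
    Omega(k) = min(Q(k), n*gamma) bits served per slot and binomial
    arrivals Delta(k) of mean n*lam. Returns the final and peak queue."""
    random.seed(seed)
    gamma = 1.1 * lam                      # serve slightly above the arrival rate
    q = 0                                  # Q(1) = 0
    peak = 0
    for _ in range(slots):
        served = min(q, int(n * gamma))    # cannot serve more than is queued
        # Binomial(2*n*lam, 1/2) arrivals: mean n*lam, independent of slot edges
        delta = sum(1 for _ in range(int(2 * n * lam)) if random.random() < 0.5)
        q = q - served + delta
        peak = max(peak, q)
    return q, peak

q, peak = simulate_queue()
print(q, peak)                             # queue stays of order n, not growing
```

With the service rate above the arrival rate, the queue-to-blocklength ratio stays bounded, previewing the stability notion formalized in Section 2.3.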

### 2.2. Capacity regions

In this section, we define the two notions of capacity. The first is the conventional definition, where the *SNR* is held fixed and the blocklength is allowed to grow unbounded; in the second, the blocklength is held fixed and the *SNR* grows unbounded. To contrast the two definitions, we first state the preliminaries for the conventional definition of capacity.

A $\left(\left({2}^{n{R}_{1}},{2}^{n{R}_{2}},\dots ,{2}^{n{R}_{K}}\right),n\right)$ code for the multiple-access channel consists of *K* encoding functions ${f}_{i}:{\mathbb{W}}_{i}\to {\mathbb{X}}_{i}^{n}$, where ${\mathbb{W}}_{i}$ is the input to the encoder at user *i* and takes ${2}^{n{R}_{i}}$ values, and a decoding function $g:{\mathbb{R}}^{n}\to {\mathbb{W}}_{1}\times {\mathbb{W}}_{2}\times \cdots \times {\mathbb{W}}_{K}$, where ${\mathbb{X}}_{i}^{n}\subseteq {\mathbb{R}}^{n}$.

*Definition 1*. A rate tuple (*R*_{1}, *R*_{2},…, *R*_{K}) is achievable if there exists a sequence of $\left(\left({2}^{n{R}_{1}},{2}^{n{R}_{2}},\dots ,{2}^{n{R}_{K}}\right),n\right)$ codes with error probability *P*_{e} → 0 as *n* → ∞. The capacity region is the closure of all achievable rate tuples (*R*_{1}, *R*_{2},…, *R*_{K}).

**Lemma 1** (see [5]). *The capacity region of the Gaussian multiple-access channel is given by the closure of the convex hull of all* (*R*_{1}, *R*_{2},…, *R*_{K}) *satisfying*

$$\sum_{i\in S}{R}_{i}\le C\left(\frac{\sum_{i\in S}{P}_{i}}{N}\right),$$

*where S is any nonempty subset of* {1, 2,…, *K*} *and C*(*x*) = (1/2)log(1 + *x*).

The classical definition of capacity requires the block length *n* to approach infinity. In contrast, we will define the SNR-capacity region for a fixed blocklength *n* but with increasing *SNR*. Our motivation stems from our aim to analytically understand the role of delay in communication; towards that end, the high-*SNR* regime keeps delay in check and is useful for high-data-rate systems. Part of our motivation for choosing the high-*SNR* regime comes from the diversity-multiplexing tradeoff [8, 9], which has proven useful in fading channels.

To maintain differentiation in the quality of the user channels, we model the growth of *SNR* on the different transmitter-receiver links with different exponents. Let $SN{R}_{i}\doteq {u}^{{\alpha}_{i}}$ for some fixed *α*_{i} > 0 and some base *SNR* denoted by *u*; in other words, ${\alpha}_{i}\triangleq {lim}_{u\to \infty }\left(log\left(SN{R}_{i}\right)/log\left(u\right)\right)$. (We adopt the notation of [8], using ≐, $\stackrel{\text{.}}{\le}$ and $\stackrel{\text{.}}{\ge}$ to denote exponential equality and inequalities; *u* can be chosen to be any base *SNR*, e.g., max(*SNR*_{i}), min(*SNR*_{i}), or an average.) For our asymptotic analysis, we take the rates to vary as ${R}_{i}\doteq {r}_{i}log\left(SN{R}_{i}\right)$. Thus, ${R}_{i}\doteq {r}_{i}log\left(SN{R}_{i}\right)\doteq {r}_{i}{\alpha}_{i}log\left(u\right)$.
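Numerically, this scaling just says the rate in bits per symbol grows linearly in log(*u*) with slope *r*_{i}*α*_{i}; a quick illustration (the values below are arbitrary, chosen only for the example):

```python
from math import log2

def rate_bits_per_symbol(r, alpha, u):
    """R_i = r_i * alpha_i * log2(u): rate grows linearly in log2 of the
    base SNR u, with multiplexing slope r_i * alpha_i."""
    return r * alpha * log2(u)

# With r = 0.25 and alpha = 2, the slope is 0.5 bit per doubling of u.
for u in (10.0, 100.0, 1000.0):
    print(round(rate_bits_per_symbol(0.25, 2.0, u), 3))
```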

*Definition 2*. A rate tuple (*r*_{1}, *r*_{2},…, *r*_{K}) is SNR-achievable if there exists a sequence of $\left(\left({2}^{n{r}_{1}log\left(SN{R}_{1}\right)},{2}^{n{r}_{2}log\left(SN{R}_{2}\right)},\dots ,{2}^{n{r}_{K}log\left(SN{R}_{K}\right)}\right),n\right)$ codes with error probability *P*_{e} → 0 as *u* → ∞ (equivalently, $SN{R}_{i}\doteq {u}^{{\alpha}_{i}}\to \infty$). The *SNR*-capacity region is the closure of all *SNR*-achievable rate tuples (*r*_{1}, *r*_{2},…, *r*_{K}).

Figure 2 shows an example *SNR*-capacity region when *α*_{1} > *α*_{2}.

**Lemma 2.** *The SNR-capacity region of the Gaussian multiple-access channel is given by the closure of all* (*r*_{1}, *r*_{2},…, *r*_{K}) *satisfying*

$$\sum_{i\in S}{r}_{i}{\alpha}_{i}\le \frac{1}{2}\underset{i\in S}{max}\phantom{\rule{0.3em}{0ex}}{\alpha}_{i},$$

*where S is any nonempty subset of* {1, 2,…, *K*} *and α*_{i} *is defined as above*.
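The Lemma 2 condition is a finite set of linear constraints on (*r*_{1},…, *r*_{K}); a sketch with illustrative exponents (function name ours):

```python
from itertools import combinations

def in_snr_capacity_region(r, alpha):
    """Lemma 2 check: sum_{i in S} r_i * alpha_i <= (1/2) max_{i in S} alpha_i
    for every nonempty subset S of {1, ..., K}."""
    K = len(r)
    return all(
        sum(r[i] * alpha[i] for i in S) <= 0.5 * max(alpha[i] for i in S)
        for size in range(1, K + 1)
        for S in combinations(range(K), size)
    )

# alpha = (2, 1): user 1's SNR grows as u^2, user 2's as u.
print(in_snr_capacity_region((0.20, 0.10), (2.0, 1.0)))   # all subset sums fit
print(in_snr_capacity_region((0.25, 0.60), (2.0, 1.0)))   # user 2 alone: 0.6 > 0.5
```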

*Proof*. The achievability proof follows along similar lines as in [9]. Consider the ensemble of i.i.d. random codes. Specifically, each user generates a codebook ℂ^{(i)} containing ${2}^{n{R}_{i}}\doteq SN{R}_{i}^{{r}_{i}n}$ codewords denoted ${\mathrm{X}}_{1}^{\left(i\right)},\dots ,{\mathrm{X}}_{SN{R}_{i}^{{r}_{i}n}}^{\left(i\right)}$. Each codeword is an *n*-length vector with i.i.d. zero-mean, unit-variance Gaussian entries. Once picked, the codebooks are revealed to the receiver. In each block period, the transmitted signal of user *i* is chosen from ℂ^{(i)}.

For each *S* ⊆ {1, 2,…, *K*}, define an error event *ε*^{S} as the event that the decoded message ${\widehat{m}}_{i}$ differs from the actual message *m*_{i} for every user *i* ∈ *S*, while the receiver decides correctly for all users outside *S*. Clearly, the probability of error *P*_{e}(*u*) satisfies

$${P}_{e}\left(u\right)\le \sum_{\varnothing \ne S\subseteq \left\{1,\dots ,K\right\}}\mathrm{Pr}\left({\epsilon}^{S}\right).$$

Without loss of generality, assume *S* = {1, 2,…, |*S*|}. Let **X**_{0} = (X_{0}^{(1)},…, X_{0}^{(K)}) be the transmitted signal, where X_{0}^{(i)} ∈ ℂ^{(i)} is the codeword transmitted by user *i*. Denote by **X**_{1} another codeword tuple that differs from **X**_{0} on the symbols transmitted by all users in *S* but coincides on those of the other users, that is, **X**_{1} = (X_{1}^{(1)},…, X_{1}^{(|S|)}, X_{0}^{(|S|+1)},…, X_{0}^{(K)}). The event *ε*^{S} occurs if the receiver makes a wrong decision in favor of any such **X**_{1}. This is equivalent to the error event of a system with rate Σ_{i ∈ S}*R*_{i}, |*S*| transmit antennas, and 1 receive antenna. Since **X**_{0} and **X**_{1} differ in the set *S*, construct a matrix Δ**X** of size |*S*| × *n* containing **X**_{0} − **X**_{1} at all |*S*| users; this reflects the difference in the transmit matrices for the |*S*| transmit antennas.

Since Δ**X** is isotropic, ||*H*Δ**X**||^{2} has the same distribution as *ev*(*HH*^{†})||Δ**x**||^{2}, where *ev*(*HH*^{†}) is defined as the eigenvalue of *HH*^{†} and equals ${\sum}_{i\in S}SN{R}_{i}\doteq {u}^{{max}_{i\in S}{\alpha}_{i}}$, and Δ**x** is any row vector of Δ**X**. Hence, the pairwise error probability between **X**_{0} and **X**_{1} decays as ${u}^{-\left(n/2\right){max}_{i\in S}{\alpha}_{i}}$. Since the number of possible **X**_{1} is ${\prod}_{i\in S}\left(SN{R}_{i}^{n{r}_{i}}-1\right)$, the overall error probability satisfies

$$\mathrm{Pr}\left({\epsilon}^{S}\right)\stackrel{\text{.}}{\le }\prod_{i\in S}SN{R}_{i}^{n{r}_{i}}\cdot {u}^{-\left(n/2\right){max}_{i\in S}{\alpha}_{i}}\doteq {u}^{n\left({\mathrm{\Sigma}}_{i\in S}{r}_{i}{\alpha}_{i}-\left(1/2\right){max}_{i\in S}{\alpha}_{i}\right)},$$

which goes to zero as *u* → ∞ provided Σ_{i ∈ S}*r*_{i}*α*_{i} − (1/2)max_{i ∈ S}*α*_{i} < 0 for all *S*, which is true by the statement of the lemma.

For the converse, suppose Σ_{i ∈ S}*r*_{i}*α*_{i} > (1/2)max_{i ∈ S}*α*_{i} for some *S*. Again, assume *S* = {1, 2,…, |*S*|} without loss of generality. The probability of error can be bounded by Fano's inequality as

$${P}_{e}\left(u\right)\ge 1-\frac{I\left({X}_{1}^{n},{X}_{2}^{n},\dots ,{X}_{\left|S\right|}^{n};{Y}^{n}\mid {X}_{\left|S\right|+1}^{n},\dots ,{X}_{K}^{n}\right)+1}{nR},$$

where *R* = Σ_{i ∈ S}*R*_{i}. Since $I\left({X}_{1}^{n},{X}_{2}^{n},\dots ,{X}_{\left|S\right|}^{n};{Y}^{n}\mid {X}_{\left|S\right|+1}^{n},\dots ,{X}_{K}^{n}\right)=\left(n/2\right)log\left(1+{\mathrm{\Sigma}}_{i\in S}{P}_{i}/N\right)\doteq \left(n/2\right){max}_{i\in S}{\alpha}_{i}log\left(u\right)$, while $nR\doteq n{\mathrm{\Sigma}}_{i\in S}{r}_{i}{\alpha}_{i}log\left(u\right)$, we get that *P*_{e}(*u*) is bounded away from zero for large *SNR* when Σ_{i ∈ S}*r*_{i}*α*_{i} > (1/2)max_{i ∈ S}*α*_{i} for any *S*, thus proving the converse.□

We will sometimes use the term *interior of the capacity region* or *interior of SNR-capacity region* to imply that the vector is not on the boundary, but inside the region.

### 2.3. Stable throughput region

In the case of finite *SNR*, the stable throughput region is the region formed by the closure of vectors of mean arrival rates at the different queues for which there exists a transmission scheme under which the queues are stable. In our discussion, stability of queues means that the ratio of queue length to block length is finite with probability one for large block lengths. This definition corresponds to a "time-observed" version of the stability definition used in standard practice [12, 20, 21], according to which the queue length is finite with probability one. There are other, weaker notions of stability in the literature, such as substability [12, 17, 20, 21], which also hold when the queues satisfy stability. Formally, we will use the following definition of the stable throughput region.

*Definition 3* (Stable throughput (large *n* and finite *SNR*)). The stable throughput region in the case *n* → ∞ is defined as the closure of all (λ_{1}, λ_{2},…, λ_{K}) such that there exists a transmission scheme for which the signal received at the destination has probability of error *P*_{e} → 0 as *n* → ∞ and, furthermore, there exists an integer *M* such that all the queues are stable for *n* > *M*. In other words, there exists an integer *M* such that lim_{j→∞} Pr[Q_{i}(*j*) < *nx*] = *F*(*x*, *n*) and lim_{x→∞} *F*(*x*, *n*) = 1, for *i* = 1, 2,…, *K* and *n* > *M*, when the arrival rate at user *i* is *λ*_{i}.

When the *SNR* is large, a large amount of data can be serviced in a time-slot even if the time-slot is not large. Hence, the stability of a queue is defined by normalizing with the order of the amount of information that can be serviced in each block, which is of the order of log(*u*) as mentioned in Section 2.2. Since we can serve such large rates, incoming rates of lower order than log(*u*) can certainly be served; hence we assume that the incoming arrival rates are also of the order of log(*u*) (i.e., *λ*_{i} ≐ *l*_{i} log(*SNR*_{i}) ≐ *l*_{i}*α*_{i} log(*u*)). The stable throughput region in this case is the region formed by the closure of vectors of normalized mean arrival rates at the different queues for which there exists a transmission scheme under which the queues are stable. In our discussion, stability of the queues implies that the ratio of queue length to log(*u*) is finite with probability one for large *SNR* (or *u*). Hence, the stable throughput region in this case is defined as follows.

*Definition 4* (SNR-stable throughput (finite *n* and large *SNR*)). The SNR-stable throughput region in the case *SNR* → ∞ is defined as the closure of all (*l*_{1}, *l*_{2},…, *l*_{K}) such that there exists a transmission scheme for which the signal received at the destination has probability of error *P*_{e} → 0 as *u* → ∞ and, furthermore, all the queues are stable for *u* large enough. In other words, there exists *M* < ∞ such that lim_{j→∞} Pr[Q_{i}(*j*) < *x* log(*u*)] = *F*(*x*, *u*) and lim_{x→∞} *F*(*x*, *u*) = 1, for *i* = 1, 2,…, *K* and *u* > *M*, when the arrival rate at user *i* is *λ*_{i} ≐ *l*_{i}*α*_{i} log(*u*).

As *u* → ∞, the region is defined as the closure of (*l*_{1}, *l*_{2},…, *l*_{K}), which is a finite region even though the *λ*_{i} grow to infinity. We refer to *l*_{i} as the arrival rate per *SNR*-unit, used in defining the stable throughput region for large *SNR*, in contrast to *λ*_{i}, the arrival rate per unit time, used to define the stable throughput region for large *n*. In both cases, stability of the queues means that the queue length normalized by the growth factor is finite with probability one, where the growth factor is the block length (*n*) in the case of large *n* and log(*u*) in the case of large *SNR*. Likewise, the stable throughput region is the region in which the arrival rates per unit growth (per time-unit or per SNR-unit in the two cases) lie so that the queues remain stable.

### 2.4. Source side information

To keep our analysis tractable, we consider the case of i.i.d. arrivals. Thus, each source *i* is described by an arrival distribution *p*_{i}(*a*), and the joint distribution of the arrival process is **p**(*a*) = Π_{i}*p*_{i}(*a*). While the joint distribution is often used in capacity analysis to obtain the best possible capacity region [5], it is seldom known in such detail in operational systems. In source-coding parlance, this is a problem of universal source coding over noisy channels, where the distribution is only partially known (see universal source coding over noiseless channels in [5]). Our objective is to study the role of this statistical information in the stability of the multiuser queuing system and, in the process, understand the interplay between statistical information and instantaneous source information (in the form of the queue state *Q*_{i}(*j*)). Hence, we will study the following series of cases.

- (1a) Full statistical information **p**(*a*) known to all nodes: the *K* transmitters and the central receiver.
- (1b) Each node knows the arrival means ($\mathbb{E}$(*a*_{1}), $\mathbb{E}$(*a*_{2}),…, $\mathbb{E}$(*a*_{K})) of all nodes, and the transmitters convey 1 bit of quantized queue information to the receiver every time-slot.
- (2a) Each transmitter only knows its own arrival statistics *p*_{i}(*a*), but the receiver knows **p**(*a*) and is allowed to feed back some information to all the transmitters.
- (2b) Each transmitter knows only its own arrival mean $\mathbb{E}$(*a*_{i}) and conveys 1 bit of quantized queue information to the receiver every time-slot. In this case, the receiver knows ($\mathbb{E}$(*a*_{1}), $\mathbb{E}$(*a*_{2}),…, $\mathbb{E}$(*a*_{K})) and is allowed to feed back some information to all the transmitters.
- (3) No statistical information is available to any node, and the transmitters can convey 1 bit of queue information.

Our focus is on finite-bit overhead information about the queue state from the transmitters to the receiver to counter the lack of statistical knowledge. In the process, we discovered that a single bit of queue state information is sufficient to achieve our goals; as a result, we state our results only for single-bit information and note that the proof techniques can be generalized to the multibit case. Furthermore, implicit in our constructions is the desire to use the simplest multiuser receiver and avoid universal decoding (which is matched not to any particular source distribution but to a whole class). Thus, the receiver adapts its decoder to match the current state of the queues in those cases where the arrival distributions (and hence the prior message probabilities) are (partially) unknown.

An analogue of quantized instantaneous source information is the quantized channel state information studied in fading channels [22, 23] where the receivers convey a few bits of information about the current fading states. In the current case, the transmitters have instantaneous information about the sources; and, hence, they convey a few bits to the receiver to enable improved decoding. This source-channel duality in information sharing, while interesting in its own right, is not further discussed in this paper.

Finally, we note that each node in the system is assumed to know the transmit powers of each user and their channel gains, and hence knows the capacity region accurately. Thus the only uncertainty at the transmitters is potentially about the source arrival statistics of other nodes; and, in some cases (2a, 2b and 3), about the available capacity for each node.

## 3. Finite *SNR* With *n* → **∞**

In this section, we assume that the time-slots can be made arbitrarily large so that we can use the conventional large-block-length coding strategies [5] for multiple-access channels. We will give an achievable rate region for all five side-information cases listed in Section 2.4. Two main code constructions are used, namely in Theorem 1 for Case 1a and Theorem 2 for Case 1b; the rest of the side-information cases (2a, 2b, and 3) follow from them. Throughout this section, we assume that *λ*_{i} < ∞ and lim_{n→∞}(Var(Δ_{i}(*k*))/*n*^{2}) < ∞.

**Theorem 1** (side-information Case 1a). *The stable throughput region (as in Definition 3) coincides with the Shannon capacity region given in Lemma 1, when all K* + 1 *nodes know the complete arrival distribution of all K sources.*

*Proof*. Let the incoming rates (λ_{1}, λ_{2},…, λ_{K}) be such that Σ_{i∈S}λ_{i} < *C*(Σ_{i∈S}*P*_{i}/*N*) for all *S* ∈ $\mathbb{S}$, where $\mathbb{S}$ is the set of all nonempty subsets of {1, 2,…, *K*}. Let *τ* = (1/2*K*) min_{S∈$\mathbb{S}$}(*C*(Σ_{i∈S}*P*_{i}/*N*) − Σ_{i∈S}*λ*_{i}). Let *γ*_{i} = *λ*_{i} + *τ*. Note that (*γ*_{1}, *γ*_{2},…, *γ*_{K}) is also in the interior of the capacity region of Lemma 1.

Suppose that the number of bits at transmitter *i* at the beginning of time-slot *k* is *Q*_{i}(*k*). Then, Ω_{i}(*k*) = min(*Q*_{i}(*k*), *nγ*_{i}) bits are sent to the encoder. The encoding and decoding operations make use of the prior probabilities of all 2^{*nγ*_{i}+1} − 1 possible messages (consisting of all messages of length ≤ *nγ*_{i}). The asymptotic rate at user *i* is thus no more than *γ*_{i}. Since (*γ*_{1}, *γ*_{2},…, *γ*_{K}) is in the interior of the capacity region, there exists an encoding and decoding scheme in which the average probability of error at the decoder goes to 0 as *n* → ∞. Note that we do not change the codebooks in every time-slot: once chosen in the first time-slot, the codebook for each user remains the same throughout all subsequent time-slots.

Using the above encoding scheme, the queue update equation (2) for user *i* reduces to *Q*_{i}(*k* + 1) = *Q*_{i}(*k*) − min(*Q*_{i}(*k*), *nγ*_{i}) + Δ_{i}(*k*) = (*Q*_{i}(*k*) − *nγ*_{i})^{+} + Δ_{i}(*k*). This queue update equation differs from [24] because in our case the encoding is done at the beginning of a time-slot; hence, the Δ_{i}(*k*) bits that arrive during slot *k* remain in the queue at the start of the next time-slot. We prove that this queue is stable in the appendix. This completes the proof. □

When the statistical information is incomplete, we could assume a distribution and encode as if the distribution were known. But this approach leads to a loss in the stable throughput region due to the mismatch between the assumed distributions at the sources and the actual distribution (such mismatch leads to a loss in achievable rates, as shown in [25]). To avoid this loss, we consider 1-bit queue state information, which can alleviate the loss in performance.

**Theorem 2** (side-information Case 1b). *The stable throughput region* (*as in Definition 3*) *coincides with the Shannon capacity region given in Lemma 1*, *when all K* +1 *nodes know the mean arrival rates of all K sources and transmitters convey 1-bit of quantized queue state information to the receiver in each time-slot*.

*Proof*. Let the incoming rates (λ_{1}, λ_{2},…, λ_{K}) be such that Σ_{i∈S}*λ*_{i} < *C*(Σ_{i∈S}*P*_{i}/*N*) for all *S* ∈ $\mathbb{S}$, where $\mathbb{S}$ is the set of all nonempty subsets of {1, 2,…, *K*}. Let *τ* = (1/2*K*) min_{S∈$\mathbb{S}$}(*C*(Σ_{i∈S}*P*_{i}/*N*) − Σ_{i∈S}*λ*_{i}). Let *γ*_{i} = *λ*_{i} + min(*τ*, *λ*_{i}). Note that (*γ*_{1}, *γ*_{2},…, *γ*_{K}) is also in the interior of the capacity region.

Let us consider transmitter *i*, and suppose that the number of bits at the transmitter at the beginning of time-slot *k* is *Q*_{i}(*k*). If *Q*_{i}(*k*) < *nγ*_{i}, we serve Ω_{i}(*k*) = 0 bits; else we serve Ω_{i}(*k*) = *nγ*_{i} bits. The input to the encoder at transmitter *i* can be any bit-sequence of length *nγ*_{i}. These are encoded into 2^{*nγ*_{i}} codewords of length *n* (since we do not know the whole statistical information, we encode only equally likely sequences, in contrast to Theorem 1 where we encode all possible bit sequences of length at most *nγ*_{i}). The transmitter conveys whether Ω_{i}(*k*) = 0 by sending the single-bit quantized queue state information. The codebooks are chosen at the beginning of operation and stay fixed throughout. Since (*γ*_{1}, *γ*_{2},…, *γ*_{K}) is in the interior of the capacity region, there exists an encoding and decoding scheme with average probability of error at the decoder going to 0 as *n* goes to infinity.

Using the above scheme, the queue update (2) reduces to *Q*_{i}(*k* + 1) = *f*(*Q*_{i}(*k*), *nγ*_{i}) + Δ_{i}(*k*), where *f*(*A*, *B*) = *A* − *B* if *A* ≥ *B*, and = *A* otherwise. Note that *Q*_{i}(*k* + 1) ≤ (*Q*_{i}(*k*) − 2*nγ*_{i})^{+} + (*nγ*_{i} + Δ_{i}(*k*)). In other words, this queue is upper bounded by a queue whose stability can be proven as in the appendix. Hence, the queues are stable. This completes the proof. □

**Lemma 3** (side-information Case 3). *Suppose that the transmit power P*_{i} *= P for all i. All transmitters send 1-bit queue information as in Theorem 2. If nodes do not know anything about the arrival distribution of any node* (*not even their own*), *then the stable throughput region contains the closure of* (*λ*_{1}, *λ*_{2},…, *λ*_{K}) *where* (*λ*_{1}, *λ*_{2},…, *λ*_{K}) *satisfy λ*_{i} ≤ (1/*K*)*C*(*KP*/*N*) − *δ*/*K for all* 1 ≤ *i* ≤ *K*, *for any predecided variable δ* > 0 *chosen without any knowledge of arrival distribution or arrival means*, *and known to all the transmitting nodes and the base station*.

*Proof*. By choosing *γ*_{i} = (1/*K*)*C*(*KP*/*N*) − *δ*/(2*K*) and using the same protocol as in the proof of Theorem 2 to encode, we see that any point in the above region is in the stable throughput region. □

*Remark 1*. Although *δ* can be chosen arbitrarily small, the knowledge of the same *δ* > 0 at all the nodes can itself be considered as side information.

Two important conclusions can be drawn from Theorems 1 and 2 and Lemma 3. First, there is a significant loss in the achievable stable throughput region if the transmitters know nothing about the source arrival statistics at the other nodes, which is the common scenario in actual practice. The loss is entirely due to the fact that each transmitter has to guarantee that its packets will be received error-free *without* any knowledge of the amount of data the other transmitters may be trying to send. Thus, each transmitter assumes that every source is sending at its peak rate, so the transmitters essentially split the sum-rate capacity equally. However, when some sources have lower arrival rates, this scheme can lead to a large loss compared to the full-information scenario of Theorem 1 or the partial knowledge with a feed-forward bit of Theorem 2.

Second, the proofs of Theorems 1 and 2 suggest a simple architecture to get around the requirement that every transmitter know the statistics of every source in the system. Essentially, the transmitters need to know only an appropriate value of *τ* to determine how far the arrival rates are from the capacity region and hence decide their transmission rates. Thus, if the receiver knows the statistics or arrival rates at each node, it can calculate the appropriate “backoff” parameter *τ* and send it to each transmitter. This is akin to congestion/rate control by base stations and leads to the following result.
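
The division of labor this architecture implies can be sketched as follows: the base station computes *τ* from the mean rates it alone knows and feeds it back once, after which each transmitter needs only its own mean rate. Function names and numeric values are illustrative:

```python
import math
from itertools import combinations

def capacity(snr):
    return 0.5 * math.log2(1 + snr)

def base_station_tau(lams, powers, noise):
    """Base-station side: compute the common backoff tau from all
    mean arrival rates (known only at the receiver)."""
    K = len(lams)
    gaps = [capacity(sum(powers[i] for i in S) / noise) - sum(lams[i] for i in S)
            for r in range(1, K + 1) for S in combinations(range(K), r)]
    return min(gaps) / (2 * K)

def transmit_rate(own_lam, tau):
    """Transmitter side: user i needs only its own mean rate and the
    one-time fed-back tau to set gamma_i = lambda_i + min(tau, lambda_i)."""
    return own_lam + min(tau, own_lam)

lams, powers, noise = [0.1, 0.2, 0.3], [1.0, 1.0, 1.0], 1.0
tau = base_station_tau(lams, powers, noise)
rates = [transmit_rate(l, tau) for l in lams]
print(tau, rates)
```

The feedback is a single real number sent once, not per-slot signaling, which is why it resembles a one-shot congestion/rate-control message.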

**Corollary 1** (side-information Case 2). *The stable throughput region* (*as in Definition 3*) *coincides with the Shannon capacity region given in Lemma 1 if*

- (2a) *the K transmitters know only their own complete arrival distributions but the base station knows the complete arrival distributions of all K sources*, *or*
- (2b) *the K transmitters know only their own mean arrival rates but the base station knows all the mean arrival rates; in addition*, *transmitters send one bit of instantaneous queue state information*,

*if the base station can feed back a real number to all K sources*.

*Proof*. The proof is similar to that of Theorems 1 and 2 except that *τ* is calculated at the base station and fed back to all the *K* sources. □

### 3.1. Discussion

In Theorem 1, it is shown that full statistical information about *all* the nodes at *all* the nodes guarantees that every point in the Shannon capacity region of Lemma 1 is in the stable throughput region. Thus, full statistical information guarantees maximal achievable performance. The key ingredient of this result is that the probability of different bit sequences is known to the receiver, which allows the receiver to use the optimal maximum *a posteriori* probability decoder. Thus, while each codeblock may carry a different number of information bits, the receiver has accurate knowledge of the probability of each message.

As the information about the distribution of source bits decreases, the transmitters and the receiver can no longer perform optimal allocation of code rates and thus suffer the additional “overhead” of indicating the number of encoded bits in every codeword. With less information, the senders and the receiver may disagree on the encoding rate, resulting in a significant number of errors and stable throughput regions much smaller than the capacity region. Surprisingly, this overhead can be eliminated if the encoders share one bit of information about their current queue state. When the distribution of arrivals is not known, we have to assume that the incoming symbols are equally likely. For this assumption to hold, we encode equally likely symbols that have the same number of input bits. Hence, a transmitter sends one bit of queue information when its queue holds fewer bits than its service rate supports, and sends the number of bits supported by the service rate otherwise. This ensures that all blocks are sent with equal probability. Choosing the encoding rates then requires only the side information of mean arrival rates rather than the whole distribution, since the prior probabilities of the codewords are already known. This is why in Theorem 2, when each node knows the arrival rates at all the nodes, we get stable queues whenever the mean arrival rates are within the capacity region. There is a loss in achievable rates as less and less information is known.

We further see in Lemma 3 that if no statistical information is available, we have to predecide the encoding and decoding rates in the interior of the capacity region, which leads to a stability region smaller than the capacity region. Note that the 1-bit side information is not needed when all the queues always have data to send, the case commonly considered in information-theoretic analysis and resulting in Lemma 1.

We further see in Corollary 1 that if the base station can feed back a real number to the transmitting users once, much less side information is needed at the transmitters. This side information is needed only once and hence can be communicated at the same time as the codebooks are shared with the receiver. In this case, every point in the capacity region is stable. Hence, we have a practical scheme, akin to congestion control, in which side information about other transmitters is traded for a single feedback value from the receiver.

## 4. Fixed Block-Length *n* With *SNR* **→** ∞

Till now, we concentrated on the block-length being large enough to make the error probability at the receiver decrease to 0. We now consider the dual-like problem of keeping the block-length fixed while letting the *SNR* = (*SNR*_{1}, *SNR*_{2},…, *SNR*_{K}) grow high enough for the probability of error at the decoder to be very small, where $SN{R}_{i}={P}_{i}/N\doteq {u}^{{\alpha}_{i}}$ and *u* is some base *SNR*. We will give an achievable rate region for all five side-information cases listed in Section 2.4. Two main code constructions are used, namely in Theorem 3 for Case 1a and Theorem 4 for Case 1b; the remaining side-information cases (2a, 2b, and 3) follow from them.

In this section, we also assume that lim_{SNR→∞}($\mathbb{E}$(Δ_{i}(*k*))/log(*SNR*_{i})) < ∞ and lim_{SNR→∞}(Var(Δ_{i}(*k*))/log^{2}(*SNR*_{i})) < ∞. As the *SNR* is large, the channel can support rates that are logarithmic in *SNR*; hence, we can also support arrival rates that are logarithmic in *SNR*. We therefore let all the arrival rates be functions of *SNR*, with *λ*_{i} ≐ *l*_{i} log(*SNR*_{i}) ≐ *l*_{i}*α*_{i} log(*u*). We also assume that *SNR*_{i} for all *i* is known to all the nodes.
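
Membership in the *SNR*-capacity region used in the theorems below reduces to subset checks on the pairs (*l*_{i}, *α*_{i}). A minimal sketch with illustrative multiplexing gains and *SNR* exponents:

```python
from itertools import combinations

def in_snr_region(ls, alphas):
    """Check the SNR-capacity region condition used in Theorems 3 and 4:
    for every nonempty subset S,
    sum_{i in S} l_i * alpha_i < (1/2) * max_{i in S} alpha_i."""
    K = len(ls)
    for r in range(1, K + 1):
        for S in combinations(range(K), r):
            if sum(ls[i] * alphas[i] for i in S) >= 0.5 * max(alphas[i] for i in S):
                return False
    return True

# Illustrative values: the first pair sits inside the region, the second outside.
print(in_snr_region([0.2, 0.1], [1.0, 1.0]))
print(in_snr_region([0.3, 0.3], [1.0, 1.0]))
```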

**Theorem 3** (side-information Case 1a). *The SNR-stable throughput region* (*as in Definition 4*) *coincides with the SNR- capacity region given in Lemma 2*, *when all K* +1 *nodes know the complete arrival distribution of all K sources*.

*Proof*. Let the incoming rates (*λ*_{1}, *λ*_{2},…, *λ*_{K}) be *λ*_{i} ≐ *l*_{i}*α*_{i} log(*u*), with Σ_{i∈S} *l*_{i}*α*_{i} < (1/2) max_{i∈S} *α*_{i} for every *S* ∈ $\mathbb{S}$, where $\mathbb{S}$ is the set of all nonempty subsets of {1, 2,…, *K*}. Let *τ* = (1/*K*)(1 − max_{S∈$\mathbb{S}$}(Σ_{i∈S} *l*_{i}*α*_{i}/((1/2) max_{i∈S} *α*_{i}))). Let *g*_{i} = *l*_{i} + *τ*/4. Note that (*g*_{1}, *g*_{2},…, *g*_{K}) is also in the interior of the *SNR*-capacity region. We can take the departure rate *γ*_{i} = *λ*_{i} + (*g*_{i} − *l*_{i}) log *SNR*_{i} ≐ *g*_{i} log *SNR*_{i} ≐ *g*_{i}*α*_{i} log *u*.
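
The margin *τ* and service exponents *g*_{i} of this proof can be computed directly; a sketch with illustrative values of *l*_{i} and *α*_{i}:

```python
from itertools import combinations

def snr_backoff(ls, alphas):
    """tau = (1/K) * (1 - max over nonempty S of
    [ sum_{i in S} l_i*alpha_i / ((1/2) * max_{i in S} alpha_i) ])."""
    K = len(ls)
    ratios = [sum(ls[i] * alphas[i] for i in S) / (0.5 * max(alphas[i] for i in S))
              for r in range(1, K + 1) for S in combinations(range(K), r)]
    return (1 - max(ratios)) / K

ls, alphas = [0.1, 0.15], [1.0, 1.0]
tau = snr_backoff(ls, alphas)
gs = [l + tau / 4 for l in ls]   # Theorem 3's service exponents g_i = l_i + tau/4
print(tau, gs)
```

Since the worst-case subset ratio stays below 1 inside the region, *τ* > 0 and the inflated exponents (*g*_{1},…, *g*_{K}) remain in the interior, as claimed.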

The constructive queuing scheme is similar to that in Theorem 1 with the above arrival and departure rates. We get probability of error at the decoder going to 0 as *u* → ∞, since (*g*_{1}, *g*_{2},…, *g*_{K}) is in the interior of the *SNR*-capacity region.

Using the above scheme, the queue states evolve as *Q*_{i}(*k* + 1) = (*Q*_{i}(*k*) − *nγ*_{i})^{+} + Δ_{i}(*k*). Using a similar procedure as in the appendix, we can easily show that the queues are stable. □

**Theorem 4** (side-information Case 1b). *The SNR-stable throughput region* (*as in Definition 4*) *coincides with the SNR- capacity region given in Lemma 2*, *when all K* +1 *nodes know the mean arrival rates of all K sources and transmitters convey 1-bit of quantized queue state information to the receiver in each time-slot*.

*Proof*. Let the incoming rates (*λ*_{1}, *λ*_{2},…, *λ*_{K}) be *λ*_{i} ≐ *l*_{i}*α*_{i} log(*u*), with Σ_{i∈S} *l*_{i}*α*_{i} < (1/2) max_{i∈S} *α*_{i} for every *S* ∈ $\mathbb{S}$, where $\mathbb{S}$ is the set of all nonempty subsets of {1, 2,…, *K*}. Let *τ* = (1/*K*)(1 − max_{S∈$\mathbb{S}$}(Σ_{i∈S} *l*_{i}*α*_{i}/((1/2) max_{i∈S} *α*_{i}))). Let *g*_{i} = *l*_{i} + min(*l*_{i}, *τ*/4). Note that (*g*_{1}, *g*_{2},…, *g*_{K}) is also in the interior of the *SNR*-capacity region. We can take the departure rate *γ*_{i} = *λ*_{i} + (*g*_{i} − *l*_{i}) log *SNR*_{i} ≐ *g*_{i}*α*_{i} log *u*.

Let us consider transmitter *i*. Suppose that the number of bits with the transmitter at the beginning of time-slot *k* is *Q*_{i}(*k*). If *Q*_{i}(*k*) < *nγ*_{i}, we serve Ω_{i}(*k*) = 0 bits; else we serve Ω_{i}(*k*) = *nγ*_{i} bits. The input to the encoder at transmitter *i* can be any bit-sequence of length *nγ*_{i}. These are encoded to 2^{*nγ*_{i}} codewords of length *n*. The transmitter conveys Ω_{i}(*k*) = 0 by sending the single-bit information that the queue is empty. Since (*g*_{1}, *g*_{2},…, *g*_{K}) is in the interior of the *SNR*-capacity region, there exists *u* large enough that there exists an encoding and decoding scheme with arbitrarily small average probability of error as *u* goes to infinity.

Using the above scheme, the queue states evolve as *Q*_{i}(*k* + 1) = *f*(*Q*_{i}(*k*), *nγ*_{i}) + Δ_{i}(*k*), where *f*(*A*, *B*) = *A* − *B* if *A* ≥ *B*, and = *A* otherwise. Using arguments similar to those used in the proof of Theorem 2, we can show that the queues are stable. □

**Lemma 4** (side-information Case 3). *Suppose that the transmit power P*_{i} *= P for all i; hence SNR*_{i} *= u. All transmitters send 1-bit queue information as in Theorem 4. If nodes do not know anything about the arrival distribution of any node* (*not even their own*), *then the SNR-stable throughput region contains the closure of* (*l*_{1}, *l*_{2},…, *l*_{K}), *where* (*l*_{1}, *l*_{2},…, *l*_{K}) *satisfy l*_{i} ≤ 1/(2*K*) − *δ*/*K for all* 1 ≤ *i* ≤ *K*, *for any predecided variable δ* > 0 *chosen without any knowledge of arrival distribution or arrival means*, *and known to all the transmitting nodes and the base station*.

*Proof*. The *SNR*-capacity region in this case is the closure of Σ_{1≤i≤K} *r*_{i} < 1/2. By choosing *g*_{i} = 1/(2*K*) − *δ*/(2*K*), *γ*_{i} ≐ *g*_{i} log *u*, and using the same protocol as in the proof of Theorem 4 to encode, we see that any point in the above region is stable. □

Cases 1a and 1b in Theorems 3 and 4 both suggest that a transmitting node uses only the information of its own arrival distribution/mean, while the arrival distributions/means at the other nodes are absorbed into the parameter *τ*, similar to Section 3. Hence, we consider a method in which only the receiver knows the full arrival distributions/means, computes an appropriate *τ*, and relays it to all the transmitters, as in Section 3. Note that this parameter is needed only once and not in every time-slot. With this, Theorems 3 and 4 change as follows.

**Corollary 2** (side-information Case 2). *The SNR-stable throughput region* (*as in Definition 4*) *coincides with the SNR- capacity region given in Lemma 2*, *when*

- (2a) *all K transmitting nodes know the complete arrival distributions of their own sources*, *and the base station knows the complete arrival distributions of all K sources*, *or*
- (2b) *all K transmitting nodes know the mean arrival rates of their own sources*, *the base station knows the mean arrival rates of all K sources*, *and transmitters convey 1-bit of quantized queue state information to the receiver in each time-slot*,

*if the base station can feed back a real number to all K sources*.

*Proof*. The proof is similar to that of Theorems 3 and 4 except that *τ* is calculated at the base station and fed back to all the *K* sources. □

### 4.1. Discussion

The results follow behavior similar to that in Section 3 for finite *SNR*. When each node knows the distribution of arrivals at all the nodes, the *SNR*-stable throughput region coincides with the *SNR*-capacity region, as in Theorem 3. As we decrease the side information, we lose part of the *SNR*-stable throughput region, since some parameters must be predecided. With knowledge of the whole arrival distribution, we can compute the a priori probability of every message sent from the encoder, which yields the maximum stable throughput region. We show that knowledge of the whole arrival distribution can be relaxed if we allow one bit of information from the transmitters indicating whether the queue is empty. Along these lines, if each node knows the mean arrival rates at all the queues, stable queues are achieved as long as the mean arrival rates are in the interior of the *SNR*-capacity region (Theorem 4). There is a loss in the throughput stability region due to predeciding some terms when there is less information. Finally, Corollary 2 shows that knowledge of the arrival distributions/means of other transmitters is not needed at a transmitter as long as there is a real-valued feedback from the base station.

Till now, we saw the similarities between the two cases: finite *SNR* with infinite block-length, and finite block-length with infinite *SNR*. The main difference between the two approaches is the definition of stability. In the case of infinite block-length, stability means that the queue length is a finite multiple of the block-length with probability 1. This implies that a bit may take an extremely long time to get serviced (as *n* → ∞, a bit that waits even one time-slot waits for an unbounded number of channel uses). In the case of infinite *SNR*, stability refers to the queue length being a finite multiple of log(*SNR*) with probability 1. Since an increasingly large number of bits is served in each time unit, the average bit delay is smaller.

## 5. Conclusion

In this paper, we studied the impact of side information on stable throughput regions. We found that when every node knows the arrival distribution of every node, the stable throughput region coincides with the Shannon capacity region, both for finite time-slots with large *SNR* and for finite *SNR* with large time-slots. We also considered a variant in which knowledge of the whole distribution is replaced by the arrival means along with one bit of side information in every time-slot, and this proves sufficient. Further, we studied the case in which a transmitter knows only its own arrival statistics, and found that a one-time feedback from the receiver is enough. Finally, we showed that any information content less than the mean arrival rates known to all nodes implies a reduced stable throughput region, a case indicative of the performance of real systems.

## Declarations

### Acknowledgments

We acknowledge fruitful discussions with Robert Calderbank and Tian Lan (Princeton University). We would also like to thank the anonymous reviewers for many suggestions that improved this paper. V. Aggarwal was partially supported by NSF Awards ANI-0338807, CCF-0635331, CNS-0325971, and AFOSR under Contract 00852833. A. Sabharwal was partially supported by NSF Awards CCF-0635331 and CNS- 0325971.

## References

1. J. Luo and A. Ephremides, “On the throughput, capacity, and stability regions of random multiple access,” *IEEE Transactions on Information Theory*, vol. 52, no. 6, pp. 2593–2607, 2006.
2. J. Massey and P. Mathys, “The collision channel without feedback,” *IEEE Transactions on Information Theory*, vol. 31, no. 2, pp. 192–204, 1985.
3. M. Medard, J. Huang, A. J. Goldsmith, S. P. Meyn, and T. P. Coleman, “Capacity of time-slotted ALOHA packetized multiple-access systems over the AWGN channel,” *IEEE Transactions on Wireless Communication*, vol. 3, no. 2, pp. 486–499, 2004.
4. R. Rao and A. Ephremides, “On the stability of interacting queues in a multiple-access system,” *IEEE Transactions on Information Theory*, vol. 34, no. 5, pp. 918–930, 1988.
5. T. M. Cover and J. A. Thomas, *Elements of Information Theory*, John Wiley & Sons, New York, NY, USA, 1991.
6. H. Ohsaki, M. Murata, H. Suzuki, C. Ikeda, and H. Miyahara, “Rate-based congestion control for ATM networks,” *ACM SIGCOMM Computer Communication Review*, vol. 25, no. 2, pp. 60–72, 1995.
7. E. M. Yeh and A. S. Cohen, “Throughput optimal power and rate control for queued multiaccess and broadcast communications,” in *Proceedings of the International Symposium on Information Theory (ISIT ’04)*, p. 112, Chicago, Ill, USA, June–July 2004.
8. L. Zheng and D. N. C. Tse, “Diversity and multiplexing: a fundamental tradeoff in multiple-antenna channels,” *IEEE Transactions on Information Theory*, vol. 49, no. 5, pp. 1073–1096, 2003.
9. D. N. C. Tse, P. Viswanath, and L. Zheng, “Diversity-multiplexing tradeoff in multiple-access channels,” *IEEE Transactions on Information Theory*, vol. 50, no. 9, pp. 1859–1874, 2004.
10. B. Tsybakov and V. Mikhailov, “Ergodicity of a slotted ALOHA system,” *Problems of Information Transmission*, vol. 15, no. 4, pp. 301–312, 1979.
11. S. Ghez, S. Verdu, and S. C. Schwartz, “Stability properties of slotted Aloha with multipacket reception capability,” *IEEE Transactions on Automatic Control*, vol. 33, no. 7, pp. 640–649, 1988.
12. W. Szpankowski, “Stability conditions for some distributed systems: buffered random access systems,” *Advances in Applied Probability*, vol. 26, no. 2, pp. 498–515, 1994.
13. B. Shrader and A. Ephremides, “On the Shannon capacity and queueing stability of random access multicast,” submitted to *IEEE Transactions on Information Theory*, May 2007.
14. H. Boche and M. Wiczanowski, “Optimal scheduling for high speed uplink packet access—a cross-layer approach,” in *Proceedings of the 59th IEEE Vehicular Technology Conference (VTC ’04)*, vol. 5, pp. 2575–2579, Milan, Italy, May 2004.
15. M. J. Neely, E. Modiano, and C. E. Rohrs, “Power allocation and routing in multibeam satellites with time-varying channels,” *IEEE/ACM Transactions on Networking*, vol. 11, no. 1, pp. 138–152, 2003.
16. M. Kobayashi and G. Caire, “Joint beamforming and scheduling for a MIMO downlink with random arrivals,” in *Proceedings of the IEEE International Symposium on Information Theory (ISIT ’06)*, pp. 1442–1446, Seattle, Wash, USA, July 2006.
17. E. Leonardi, M. Mellia, F. Neri, and M. A. Marsan, “Bounds on average delays and queue size averages and variances in input-queued cell-based switches,” in *Proceedings of the 20th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM ’01)*, vol. 2, pp. 1095–1103, Anchorage, Alaska, USA, April 2001.
18. N. McKeown, V. Anantharam, and J. Walrand, “Achieving 100 percent throughput in an input-queued switch,” in *Proceedings of the 15th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM ’96)*, pp. 296–302, San Francisco, Calif, USA, March 1996.
19. I. E. Telatar and R. G. Gallager, “Combining queueing theory with information theory for multiaccess,” *IEEE Journal on Selected Areas in Communications*, vol. 13, no. 6, pp. 963–969, 1995.
20. R. M. Loynes, “The stability of a queue with non-independent interarrival and service times,” *Proceedings of the Cambridge Philosophical Society*, vol. 58, pp. 497–520, 1968.
21. S. P. Meyn and R. L. Tweedie, *Markov Chains and Stochastic Stability*, Springer, New York, NY, USA, 1996.
22. A. Khoshnevis and A. Sabharwal, “Performance of quantized power control in multiple antenna systems,” in *Proceedings of the IEEE International Conference on Communications (ICC ’04)*, vol. 2, pp. 803–807, Paris, France, June 2004.
23. C. Swannack, G. W. Wornell, and E. Uysal-Biyikoglu, “MIMO broadcast scheduling with quantized channel state information,” in *Proceedings of the IEEE International Symposium on Information Theory (ISIT ’06)*, pp. 1788–1792, Seattle, Wash, USA, July 2006.
24. L. Kleinrock, *Queueing Systems*, John Wiley & Sons, New York, NY, USA, 1975.
25. F. Hekland, G. E. Øien, and T. A. Ramstad, “Quantifying performance losses in source-channel coding,” in *Proceedings of the European Wireless Conference (EW ’07)*, Paris, France, April 2007.
26. E. Gelenbe and G. Pujolle, *Introduction to Queueing Networks*, John Wiley & Sons, New York, NY, USA, 1987.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.