In this section, we review lower and upper bounds on the channel capacity of CDMA systems. The first part defines the concept of sum capacity; the second part surveys lower and upper bounds for binary and nonbinary CDMA transmission in the noiseless case; and the third part discusses the extension to channels with additive noise.

### A Definition of sum capacity

In Multiple Access Channels (MAC), additive noise and multi user interference are the main factors that disturb CDMA transmission, and both affect the capacity of such channels. In [28], the authors defined capacity regions to characterize all achievable transmission rates in such channels. To assign a single value as a measure of channel capacity, the sum capacity is the natural choice. The sum capacity is defined as the maximum achievable sum of all user rates, i.e., $\max_{p_1\times p_2\times\cdots\times p_n} I\left(X_1, X_2, \ldots, X_n; Y\right)$, where $p_i$ is the input distribution of the $i$th user.

The sum capacity for CDMA as a special case of MAC is also defined in [43]. For the noiseless case, the channel capacity of a system with binary signature matrix **A** is $C(m,n)=\max_{A\in\mathcal{M}_{m\times n}(\pm 1)} C(A)$, where $C(A)$ is the sum capacity of the channel for a given **A**. For the noisy case, we again use the channel model $Y=\frac{1}{\sqrt{m}}AX+N$. The total power of the users is $\operatorname{tr}\left(E\left(\frac{1}{m}AXX^{*}A^{*}\right)\right)$ and the noise power is $E\left(N^{*}N\right)$. Thus, the multi user SNR at the receiver is defined in [20, 21] as

$$\mathrm{SNR}=\frac{\operatorname{tr}\left(E\left(\frac{1}{m}AXX^{*}A^{*}\right)\right)}{E\left(N^{*}N\right)},$$

(17)

where the entries of $N$ are i.i.d. random variables with common probability distribution function (pdf) $f(\cdot)$ and variance $\sigma_f^2$. This implies that the overall noise power at the receiver is equal to $m\sigma_f^2$. If $\frac{m}{n}\mathrm{SNR}\le\eta$, we have

$$\operatorname{tr}\left(E\left(\frac{1}{m}AXX^{*}A^{*}\right)\right)\le n\eta\sigma_f^2.$$

(18)

For a given signature matrix **A** and *η*, the sum channel capacity is defined as

$$C\left(A,\eta\right)=\max\left\{I\left(X;Y\right)\mid X\sim p_1\left(x_1\right)\times p_2\left(x_2\right)\times\cdots\times p_n\left(x_n\right)\right\},$$

(19)

such that the above inequality is satisfied.
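To make these definitions concrete, the following sketch (not from the cited papers; the dimensions, noise variance, random seed, and trial count are arbitrary choices) estimates the numerator of (17) by Monte Carlo for a random $\pm 1$ signature matrix with i.i.d. $\pm 1$ inputs. In this setting the exact signal power $\operatorname{tr}(E(\frac{1}{m}AXX^{*}A^{*}))$ equals $n$, so the load-normalized quantity $\frac{m}{n}\mathrm{SNR}$ of (18) should come out near 1 for unit noise variance:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 80        # chips and users (illustrative values)
sigma_f = 1.0        # noise standard deviation
trials = 2000

# Monte Carlo estimate of the signal power tr(E[(1/m) A X X* A*]) for a
# random +/-1 signature matrix A and i.i.d. +/-1 user symbols X.
power = 0.0
for _ in range(trials):
    A = rng.choice([-1.0, 1.0], size=(m, n))
    X = rng.choice([-1.0, 1.0], size=n)
    y = (A @ X) / np.sqrt(m)
    power += float(y @ y)
power /= trials

noise_power = m * sigma_f**2        # E(N* N) for i.i.d. noise entries
snr = power / noise_power           # Eq. (17)
print(f"signal power ~ {power:.1f} (exact value is n = {n}), "
      f"SNR ~ {snr:.3f}, (m/n)*SNR ~ {m / n * snr:.3f}")
```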

The authors of [43] derived lower bounds for both noisy and noiseless binary CDMA channels by choosing a random signature matrix and computing the expected value of the sum capacity of the channel corresponding to this random matrix. In other words, the lower bound is the average sum capacity of a typical signature matrix. According to [41–43], the capacity of a channel with a random signature matrix is, with high probability, higher than this expected value.

All the upper bounds derived for noisy and noiseless channels rely on a conjecture stating that the capacity-achieving input vectors have uniform distribution [43]. In [41], the authors used this conjecture for the special case of Gaussian noise, while the authors of [43] assumed it holds for all noise distributions. Although this conjecture looks very simple, it is still an open problem.

### B Noiseless channel capacity bounds

In this subsection, we review the lower and upper bounds on the sum capacity of general CDMA systems. These bounds are then investigated for several special cases, such as COW matrices and active user detection systems. In the noiseless case, multi user interference is the only disturbance that must be taken into account for CDMA transmission.

#### 1 Lower bounds for the sum capacity of CDMA systems for the noiseless case

In the general mode, where the signature alphabets and input vectors are not binary, the authors of [20, 21] first defined $\tilde p$ and $\pi$ as follows. Suppose that $\tilde{\mathcal{I}}$ is the difference set of $\mathcal{I}$, defined as

$$\tilde{\mathcal{I}}=\mathcal{I}-\mathcal{I}=\left\{i-i'\mid i,i'\in\mathcal{I}\right\}.$$

(20)

$\tilde p(\cdot)$ is defined as the probability distribution on $\tilde{\mathcal{I}}$ of the difference of two independent random variables drawn from $\mathcal{I}$, each with the same distribution $p(\cdot)$. $\pi(\cdot)$ is a probability distribution on $\mathcal{S}$. The probability measure on the signature matrices $\mathcal{M}_{m\times n}$ is induced by choosing the entries of the random matrix independently with the common distribution $\pi(\cdot)$.

In [20, 21], a lower bound on the channel capacity for the general case was introduced, stated in the following theorem:

*Theorem* **7**

$$C\left(m,n,\mathcal{I},\mathcal{S}\right)\ge\sup_{p,\pi}\left\{-\log E_{\tilde{X}}\left(\mathbb{P}\left(a^{T}\tilde{X}=0\right)^{m}\right)\right\},$$

(21)

where $a\in\mathcal{S}^{n}$ and $\tilde{X}\in\tilde{\mathcal{I}}^{n}$ have i.i.d. entries with distributions $\pi(\cdot)$ and $\tilde p(\cdot)$, respectively.

For the special case, where the input and signature matrix alphabets are finite, a simpler form for the above expression is derived in [20, 21].

For example, in the COW mode, the above lower bound simplifies to the lower bound obtained in [43]

$$C\left(m,n\right)\ge n-\log_{2}\sum_{j=0}^{\lfloor n/2\rfloor}\binom{n}{2j}\left[\frac{\binom{2j}{j}}{2^{2j}}\right]^{m}.$$

(22)
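Equation (22) is straightforward to evaluate exactly with integer arithmetic. The sketch below (a small helper written here for illustration; it assumes the sum runs to $\lfloor n/2\rfloor$ for odd $n$) reproduces, up to rounding, the 12.164-bit figure quoted later in the text for an 8 × 13 COW matrix:

```python
from math import comb, log2

def cow_lower_bound(m: int, n: int) -> float:
    """Noiseless lower bound of Eq. (22) for an m x n binary (COW) system."""
    total = sum(comb(n, 2 * j) * (comb(2 * j, j) / 4**j) ** m
                for j in range(n // 2 + 1))
    return n - log2(total)

# The 8 x 13 COW case discussed in the text.
print(f"C(8, 13) >= {cow_lower_bound(8, 13):.3f} bits")
```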

Another example could be the case where the input vectors are binary and \mathcal{S}=\left\{0,\pm 1,...,\pm p\right\}. The lower bound has been derived in [21, 51].

#### 2 Conjectured upper bounds for the sum capacity of CDMA systems for the noiseless case

In [20, 21], the authors introduced a theorem that presents a conjectured upper bound for the channel capacity in the general case:

*Theorem* **8** In the absence of additive noise, if $\mathcal{I}=\left\{i_1,\ldots,i_q\right\}$ with distribution $p\left(i_j\right)=p_j$ and $\mathcal{S}=\left\{s_1,\ldots,s_l\right\}$, the upper bound is as follows:

$$C\left(m,n,\mathcal{I},\mathcal{S}\right)\le\max_{\substack{\sum_{i=1}^{l}u_i=n\\ p\left(\cdot\right)}}\min\left(n\,\mathbb{H}\left(\mathcal{I}\right),\,m\,\mathbb{H}\left(\tilde{f}\right)\right),$$

(23)

in which

$$\tilde{f}(z)=\sum_{\substack{\sum_{j=1}^{q}v_{ij}=u_i\\ 1\le i\le l}}\left(\prod_{k=1}^{l}\binom{u_k}{v_{k1},\ldots,v_{kq}}\right)\left(\prod_{k=1}^{q}p_k^{\sum_{\alpha=1}^{l}v_{\alpha k}}\right)\delta\left(z-\frac{1}{\sqrt{m}}\sum_{k=1}^{l}s_k\sum_{\alpha=1}^{q}v_{k\alpha}i_{\alpha}\right),$$

(24)

where *δ* is the Dirac delta function and $\mathbb{H}(f)$ is the entropy of the distribution *f*.

Also, when $s_i=e^{\frac{2\pi i}{l}\sqrt{-1}}$ and *l* divides *n*, we conjecture that $u_1=u_2=\cdots=u_l=\frac{n}{l}$.

The above upper bound is simplified for COW matrices in [43]. For COW matrices, a simpler upper bound is obtained in [22]:

$$C\left(m,n\right)\le m\left(\tfrac{1}{2}\log_{2}n+\log_{2}\lambda\right)+1,$$

(25)

where *λ* is the unique positive solution of the equation

$$\left(\lambda\sqrt{n}\right)^{m}=m\,e^{-\lambda^{2}/2}\,2^{n+1}.$$

(26)

Another example could be the case where the input vectors are binary and \mathcal{S}=\left\{0,\pm 1,...,\pm p\right\}. The upper bound has been derived in [21, 51].

Although Equation (25) gives a tight upper bound on the channel capacity, in some regions there are bounds that are slightly tighter. These bounds are conceptually obvious and are shown below:

$$C\left(m,n\right)\le n$$

(27)

$$C\left(m,n\right)\le m\log_{2}n+1$$

(28)
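The interplay between (25) and the trivial bounds (27)-(28) can be checked numerically. The sketch below (the parameters $m$ and $n$ and the bisection bracket are arbitrary implementation choices) solves (26) for the unique positive $\lambda$ and evaluates all three upper bounds:

```python
from math import log, log2, sqrt

def conjectured_upper_bound(m: int, n: int) -> float:
    """Eq. (25): m*(0.5*log2(n) + log2(lam)) + 1, with lam solving Eq. (26)."""
    # g(lam) = m*ln(lam*sqrt(n)) + lam^2/2 - ln(m) - (n+1)*ln(2) is strictly
    # increasing for lam > 0, so bisection finds the unique root of (26).
    def g(lam):
        return m * log(lam * sqrt(n)) + lam**2 / 2 - log(m) - (n + 1) * log(2)
    lo, hi = 1e-9, 1e3
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return m * (0.5 * log2(n) + log2(lam)) + 1

m, n = 64, 100                      # illustrative values
bounds = {
    "(25)": conjectured_upper_bound(m, n),
    "(27)": n,
    "(28)": m * log2(n) + 1,
}
best = min(bounds, key=bounds.get)
print({k: round(v, 2) for k, v in bounds.items()}, "tightest:", best)
```

For this overloaded example the trivial bound (27) happens to be the tightest, consistent with the observation below that (27)-(28) can beat (25) in some regions.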

The conjectured upper bound introduced in Equation (25) is a special case in which the noise variance is zero, without restricting the signatures to have unit magnitude. Another conjectured upper bound that is more general and does not depend on the symbol alphabet is introduced in [20, 21].

In Figure 4, the lower and upper bounds for a COW matrix with a fixed number of chips (*m* = 64) are plotted versus the number of users. An interesting result that can be drawn from this figure is that the channel capacity increases almost linearly with the number of users until *n* reaches a certain threshold value *n*_th. In this region, errorless transmission is achieved, which implies that overloaded signature matrices exist for these values of *m* and *n*. As *n* grows beyond this threshold, the lower and upper bounds diverge from each other; errorless transmission can no longer be achieved and multi user interference causes transmission errors. The lower bound for an 8 × 13 COW matrix equals 12.164 bits, which shows the extreme tightness of this bound.

In Figure 5, the capacity bounds are sketched versus the number of chips for a fixed number of users (*n* = 220). In this figure, we see that as the number of chips increases but remains below a certain value *m*_th, the channel is lossy and errorless transmission is not achievable. This is due to the fact that when the number of chips is less than a certain value, the mapping of the input vectors into an *m*-dimensional space is not one-to-one. As *m* increases beyond this threshold, errorless transmission can be achieved. In this figure, we can also observe that in some regions, the upper bounds introduced in (27) and (28) are slightly tighter than the bound introduced in (25).

Figures 6 and 7 show the same facts as Figures 4 and 5, respectively, but for several values of *m* and *n*.

Figure 8 shows the normalized channel capacity bounds for binary CDMA systems. From this figure we can conclude that systems with a higher spreading factor can support more users.

In CDMA systems with relatively small values of *n* and *m* (small scale systems), the sum channel capacity depends on the input and signature alphabets. In Figures 9 and 10, this dependence is shown for different systems with *m* = 32. In Figure 9, binary signature CDMA systems are considered, while in Figure 10 the systems have binary inputs and ternary signatures.

### C Noisy channel capacity bounds

In the presence of noise, not only multi user interference but also additive noise can reduce the sum capacity of the system. In this subsection, lower and upper bounds on the sum channel capacity for arbitrary noise distributions are surveyed. However, only the Gaussian noise distribution is discussed in detail.

In the presence of additive noise, the calculation of channel capacity is a challenging problem. In [20, 21], a lower and a conjectured upper bound for the general case are introduced and will be discussed below. Assume **A** = *r* **B**, where *r* is a fixed number and **B** is randomly chosen with distribution $\mathcal{P}^{\pi}$.

After taking the expectation over $\mathcal{P}^{\pi}$, the power constraint (18) yields:

$$r\le\sqrt{\frac{\eta\,\sigma_f^{2}}{\left(\sigma_p^{2}+n\mu_p^{2}\right)\left(\sigma_{\pi}^{2}+n\mu_{\pi}^{2}\right)}},$$

(29)

where $\mu_p$ and $\sigma_p^{2}$ are the mean and variance of the input distribution $p(\cdot)$, respectively, and $\mu_{\pi}$ and $\sigma_{\pi}^{2}$ are the mean and variance of the signature code distribution $\pi(\cdot)$.

#### 1 Lower bounds for the sum capacity of CDMA systems for the noisy case

The authors of [20, 21] presented a theorem giving a lower bound in the most general case, for any given input and signature matrix symbols and additive noise with arbitrary distribution:

*Theorem* **9**

$$C\left(m,n,\mathcal{I},\mathcal{S},\eta\right)\ge\sup_{\pi,p}\,\sup_{q}\left[-m\,E\left(q\left(N_1\right)\right)-\log E_{\tilde{X}}\left(\left(E_{b,N_1}\left(2^{-q\left(N_1-r\frac{b^{T}\tilde{X}}{\sqrt{m}}\right)}\right)\right)^{m}\right)\right].$$

(30)

Here, $q(\cdot)$ is an arbitrary function, $N_1$ is the first entry of the noise vector, and $b$ and $\tilde{X}$ are vectors of length $n$ with i.i.d. entries with distributions $\pi(\cdot)$ and $\tilde p(\cdot)$, respectively.

For the special case when the additive noise is Gaussian, the above theorem can be stated more explicitly by setting $q(x)=\frac{\gamma}{2}\left|\frac{x}{\sigma}\right|^{2}\log e$; the resulting lower bound is shown below:

$$C\left(m,n,\mathcal{I},\mathcal{S},\eta\right)\ge\sup_{\pi,p}\,\sup_{\gamma}\left[-m\left(\gamma\log e-\log\left(1+\gamma\right)\right)-\log\mathbb{E}_{\tilde{X}}\left(\left(\mathbb{E}_{b}\left(e^{-\frac{\gamma r^{2}}{2\left(1+\gamma\right)m}\left|b^{T}\tilde{X}\right|^{2}}\right)\right)^{m}\right)\right].$$

(31)

For special cases where the input alphabets and the signature matrix alphabets are finite, a lower bound is presented in [20, 21]. The same authors have also obtained this lower bound for arbitrary noise distributions.

For example, when the input vectors and signature matrix are binary (the COW case), the following inequality gives the sum capacity lower bound for any noise distribution *f* and any function *q*:

$$C\left(m,n,f\right)\ge n-m\,\mathbb{E}\left(q\left(N_1\right)\right)-\log\left(\sum_{k=0}^{n}\binom{n}{k}\left(\mathbb{E}\left(2^{-q\left(N_1-\frac{2s_k}{\sqrt{m}}\right)}\right)\right)^{m}\right),$$

(32)

where $s_k$ is the sum of $k$ independent random variables taking values $\pm 1$ with equal probability.

In [20, 21], the authors considered the function $q(x)=-\gamma\log\left(f(x)\right)$, where $f$ is the pdf of additive Gaussian noise with variance $\sigma^{2}$. Denoting the capacity in this case by $C_G\left(m,n,\sigma^{2}\right)$, we have the following family of lower bounds:

$$C_G\left(m,n,\sigma^{2}\right)\ge n-m\gamma\log\sqrt{e}-\log\left(\sum_{k=0}^{n}\binom{n}{k}\left(\sum_{j=0}^{k}\frac{\binom{k}{j}}{2^{k}}\,\frac{e^{-2\left(\frac{2j-k}{\sigma\sqrt{m}}\right)^{2}\frac{\gamma}{1+\gamma}}}{\sqrt{1+\gamma}}\right)^{m}\right).$$

(33)
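Since (33) holds for every *γ*, the tightest member of the family is obtained by taking a supremum over *γ*. The sketch below (the *γ* grid and the example parameters are arbitrary choices, not from the cited papers) evaluates the bound directly with exact binomial coefficients:

```python
from math import comb, e, exp, log2, sqrt

def cow_gaussian_lower_bound(m, n, sigma2):
    """Best member of the family of lower bounds (33) over a grid of gamma."""
    best = float("-inf")
    for i in range(1, 101):                  # crude grid for the sup over gamma
        g = 0.1 * i
        total = 0.0
        for k in range(n + 1):
            inner = sum(
                comb(k, j) / 2**k
                * exp(-2 * (2 * j - k) ** 2 / (sigma2 * m) * g / (1 + g))
                / sqrt(1 + g)
                for j in range(k + 1)
            )
            total += comb(n, k) * inner**m
        best = max(best, n - m * g * log2(sqrt(e)) - log2(total))
    return best

print(f"C_G(8, 13, sigma2=0.25) >= {cow_gaussian_lower_bound(8, 13, 0.25):.2f} bits")
```

As expected, shrinking the noise variance pushes the bound toward the noiseless value of (22).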

#### 2 Conjectured upper bounds for the sum capacity of CDMA systems for the noisy case

An upper bound is derived in [20, 21] for the general mode with finite user inputs in the presence of noise. Here, we only review a conjectured upper bound for the special case where the input vectors and signature matrices are binary (COW), introduced in [43]. The following theorem shows this conjectured upper bound:

*Theorem* **10** For any symmetric pdf function *f*, we have

$$C\left(m,n,f\right)\le\min\left(n,\,m\left(h\left(\tilde{f}\right)-h\left(f\right)\right)\right),$$

(34)

where

$$\tilde{f}(x)=\sum_{j=0}^{n}\frac{\binom{n}{j}}{2^{n}}\,f\left(x-\frac{2j-n}{\sqrt{m}}\right),$$

(35)

and *h*(*f*) is the differential entropy of the distribution *f*. (For the noiseless case, the ordinary entropy was used instead of the differential entropy.)

For the special case when the noise has Gaussian distribution, $\tilde{f}$ becomes

$$\tilde{f}(x)=\sum_{j=0}^{n}\frac{\binom{n}{j}}{2^{n}}\,\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{\left(x-\frac{2j-n}{\sqrt{m}}\right)^{2}}{2\sigma^{2}}}.$$

(36)

Then, we have:

$$C_{G}\left(m,n,\sigma^{2}\right)\le\min\left(n,\,m\left(h\left(\tilde{f}\right)-\tfrac{1}{2}\log\left(2\pi e\sigma^{2}\right)\right)\right).$$

(37)
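The bound (37) only requires the differential entropy of the Gaussian mixture (36), which can be approximated by numerical integration on a grid. A sketch (the grid extent and resolution are arbitrary implementation choices):

```python
from math import comb
import numpy as np

def gaussian_upper_bound(m, n, sigma2):
    """Eq. (37): C_G(m,n,sigma^2) <= min(n, m*(h(f~) - 0.5*log2(2*pi*e*sigma^2)))."""
    sigma = np.sqrt(sigma2)
    centers = np.array([(2 * j - n) / np.sqrt(m) for j in range(n + 1)])
    weights = np.array([comb(n, j) / 2**n for j in range(n + 1)])
    # f~ is the binomial Gaussian mixture of Eq. (36); integrate its
    # differential entropy on a grid covering every mixture component.
    x = np.linspace(centers.min() - 8 * sigma, centers.max() + 8 * sigma, 200001)
    dx = x[1] - x[0]
    f = np.zeros_like(x)
    for w, c in zip(weights, centers):
        f += w * np.exp(-(x - c) ** 2 / (2 * sigma2)) / (sigma * np.sqrt(2 * np.pi))
    h = -np.sum(f * np.log2(np.maximum(f, 1e-300))) * dx   # Riemann sum for h(f~)
    return min(n, m * (h - 0.5 * np.log2(2 * np.pi * np.e * sigma2)))

print(gaussian_upper_bound(8, 13, 1.0))
```

At very low noise the entropy term exceeds $n$ and the bound saturates at $n$, matching the errorless regime discussed for the noiseless case; at very high noise the bound collapses toward zero.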

For some other noise distributions, capacity upper bounds are derived in [21, 43]. In Figure 11, the normalized sum channel capacity is shown for three values of *γ*. It can be concluded that the capacity bound increases for larger values of *γ*.

As mentioned in the previous subsection, the channel capacity in the small scale system depends on the input and signature alphabets. In Figure 12, this dependence is shown in a noisy channel for binary input and binary signature alphabets.

### D Asymptotic analysis of CDMA systems

The asymptotic analysis of CDMA channels refers to the case in which the number of users and the spreading factor tend to infinity while their ratio (*β*) remains constant. This asymptotic case, also called the *large scale system* [39–42, 57], has been studied in many recent works, most of which rely on replica theory from statistical physics [39, 58]. In the replica method, a quantity called the free energy is used; it is the cumulant generating function carrying all the information about the statistics of the system and is defined as follows:

$$\mathcal{F}_{m}\left(Y,C\right)=\frac{1}{m}\log_{e}Z\left(Y,C\right),$$

(38)

where

$$Z\left(Y,C\right)=\sum_{X}p\left(X\right)e^{-\frac{1}{2\sigma^{2}}\left\|Y-m^{-\frac{1}{2}}CX\right\|^{2}}.$$

(39)

This quantity has the self-averaging property; in communication terms, this means that in the asymptotic case the differential entropy normalized by the number of users equals its average. Using this assumption, the capacity of the large system channel was derived as follows:

$$C=\lim_{n\to\infty}\frac{1}{n}I\left(X_{1},X_{2},\ldots,X_{n};Y\right)=-\left(\mathcal{F}_{0}+\frac{1}{2\beta}\right).$$

(40)

This expression can be applied for both binary and Gaussian distributed inputs. For the Gaussian input case, the expression is somewhat trivial but for the binary input case, the capacity is shown to be

$$\lim_{m\to\infty}C=\min_{t\in\left[0,1\right]}\frac{\lambda}{2}\left(1+t\right)-\frac{1}{2\beta}\lambda\sigma^{2}-\int\frac{e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi}}\ln\left(2\cosh\left(\sqrt{\lambda}z+\lambda\right)\right)dz,$$

(41)

where

$$\lambda=\frac{1}{\sigma^{2}+\beta\left(1-t\right)},$$

(42)

and

$$t=\int\frac{e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi}}\tanh\left(\sqrt{\lambda}z+\lambda\right)dz.$$

(43)
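The pair (42)-(43) can be solved by simple fixed-point iteration, with the Gaussian integral evaluated by Gauss-Hermite quadrature. A sketch (the quadrature order, starting point, and iteration count are arbitrary choices; for *β* below the coexistence threshold of about 1.49 discussed next, the iteration settles on the unique solution):

```python
import numpy as np

def tanaka_fixed_point(beta, sigma2, iters=500):
    """Solve Eqs. (42)-(43) by fixed-point iteration:
    t = E[tanh(sqrt(lam)*z + lam)], lam = 1/(sigma2 + beta*(1 - t)), z ~ N(0,1)."""
    z, w = np.polynomial.hermite_e.hermegauss(81)   # nodes/weights, weight e^{-z^2/2}
    w = w / np.sqrt(2.0 * np.pi)                    # normalize to E over N(0,1)
    t = 0.0
    for _ in range(iters):
        lam = 1.0 / (sigma2 + beta * (1.0 - t))               # Eq. (42)
        t = float(np.sum(w * np.tanh(np.sqrt(lam) * z + lam)))  # Eq. (43)
    return t, lam

t, lam = tanaka_fixed_point(1.0, 1.0)
print(f"beta=1, sigma2=1: t = {t:.4f}, lambda = {lam:.4f}")
```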

This equation does not always yield a unique value for the capacity. This phenomenon, called phase coexistence, occurs for *β* values greater than 1.49. Tanaka considered the lowest solution to be the actual capacity. Montanari and Tse [40] used a new sparse signature scheme to prove the correctness of Tanaka's capacity for the binary input case for *β* ≤ 1.49, where the above expression for *t* has a unique solution. Furthermore, they proved that for these values of *β*, optimal detection can be achieved using Belief Propagation (BP).

It was shown in [39] that as long as the channel capacity of the Gaussian input system is less than 1, it is almost equal to the binary input system capacity for large *β*. But since the binary input capacity cannot exceed 1 bit per user, the channel capacity saturates at 1 when the equivalent Gaussian input system capacity exceeds 1.

The replica method is nonrigorous, so the channel capacities obtained from it are conjectures. In [41], it is proved via an interpolation method that Tanaka's expression is an upper bound on the actual channel capacity for all values of *β*. The authors of [41] also proved that the channel capacity in the large system limit (**C**) concentrates around its mean $E_S\{C\}$ [42]. In [40, 42], the authors further proved that the sum channel capacity is independent of the signature alphabet for large scale systems.

In [24], decoding techniques for the large scale system were studied, also using the replica method. For the MUD scheme, the authors devised a technique to convert multi user detection into single user detection with modified AWGN parameters, as shown below:

$$C_{\text{sep}}\left(\beta\right)=\beta\,E\left\{I\left(\eta'\,\mathsf{snr}\right)\right\},$$

(44)

in which

$$I\left(\eta'\,\mathsf{snr}\right)=D\left(p_{Z|X,\mathsf{snr};\eta'}\,\middle\|\,p_{Z|\mathsf{snr};\eta'}\mid p_{X}\right),$$

(45)

where $\mathsf{snr}$ is the single user SNR, $\eta'$ is the multiuser efficiency, and $Z=\sqrt{\mathsf{snr}}\,X+\frac{N}{\sqrt{\eta'}}$.

In the same paper, an expression relating the channel capacity of optimal joint decoding (MUD) to that of separate decoding was derived in the large system limit, as shown below:

$$C_{\text{joint}}\left(\beta\right)=C_{\text{sep}}\left(\beta\right)+\left(\eta'-1\right)\log e-\log\eta'.$$

(46)

Finally, it was concluded that for large scale systems, successive decoding with an individually optimal detection front end achieves the CDMA channel capacity with arbitrary inputs. For the special case of Gaussian inputs, the sum channel capacity can be achieved with Minimum Mean Square Error (MMSE) decoding.

Independently, bounds on the asymptotic sum channel capacity were derived without using the replica method. The following result gives the lower bound on the asymptotic sum capacity derived in [20, 21]:

Let *b* and $\tilde{X}$ be vectors of length *n* with i.i.d. entries of distributions $\pi(\cdot)$ and $\tilde p(\cdot)$, respectively. Then

$$\begin{aligned}
\lim_{\substack{m,n\to\infty\\ n/m\to\beta}}\frac{1}{n}&\left[-m\gamma\log e-\log\mathbb{E}_{\tilde{X}}\left(\left(\mathbb{E}_{b}\left(\frac{e^{-\frac{\gamma r^{2}}{2\left(1+\gamma\right)m}\left|b^{T}\tilde{X}\right|^{2}}}{1+\gamma}\right)\right)^{m}\right)\right]\\
=\sup_{\gamma}\Bigg\{\inf_{\hat{p}\left(\cdot\right),\,\mu_{\pi}\mu_{\hat{p}}=0}\Bigg\{&\,\mathbb{D}\left(\hat{p}\,\middle\|\,\tilde{p}\right)-\frac{1}{\beta}\left(\gamma\log e-\log\left(1+\gamma\right)\right)\\
&+\frac{1}{2\beta}\left(\log\left(1+\frac{2\beta\eta\gamma\lambda_{1}}{\left(1+\gamma\right)\sigma_{p}^{2}\left(\sigma_{\pi}^{2}+\mu_{\pi}^{2}\right)}\right)+\log\left(1+\frac{2\beta\eta\gamma\lambda_{2}}{\left(1+\gamma\right)\sigma_{p}^{2}\left(\sigma_{\pi}^{2}+\mu_{\pi}^{2}\right)}\right)\right)\Bigg\}\Bigg\},
\end{aligned}$$

(47)

where $\hat{p}$ is the empirical distribution of $\tilde p$; $\lambda_1$ and $\lambda_2$ are the eigenvalues of the covariance matrix of a random variable distributed as the product of two independent variables with distributions $\hat{p}$ and $\pi$; and $D\left(\cdot\|\cdot\right)$ is the Kullback-Leibler distance. The term inside the limit is the sum capacity lower bound for finite alphabets derived in [21].

In [21], the authors also obtained an upper bound on the sum channel capacity for the binary input case in the presence of additive noise with arbitrary distribution. The following inequality shows this upper bound:

$$\lim_{\substack{n/m=\beta\\ n,m\to\infty}}c\left(m,n,f\right)\le\min\left\{1,\frac{1}{\beta}\left(h\left(N_{1}+\sqrt{\beta}Z\right)-h\left(N_{1}\right)\right)\right\},$$

(48)

where *Z* is a Gaussian random variable independent of *N*_{1}. If the additive noise is Gaussian, then *N*_{1} is a Gaussian random variable with variance *σ*^{2}.

Thus, $h\left(N_{1}\right)=\frac{1}{2}\log\left(2\pi e\sigma^{2}\right)$ and $h\left(N_{1}+\sqrt{\beta}Z\right)=\frac{1}{2}\log\left(2\pi e\left(\sigma^{2}+\beta\right)\right)$. Hence,

$$\lim_{\substack{n/m=\beta\\ n,m\to\infty}}c\left(m,n,f\right)\le\min\left\{1,\frac{1}{2\beta}\log\left(1+\frac{\beta}{\sigma^{2}}\right)\right\}.$$

(49)

The above upper bound is reminiscent of the Shannon capacity of an AWGN channel, where $\frac{1}{\beta}=\frac{m}{n}$ is the normalized bandwidth and $\mathrm{SNR}=\frac{\beta}{\sigma^{2}}$. As *β* approaches zero, the above bound goes to $\frac{\log e}{2\sigma^{2}}$. This bound is appropriate for low SNR (for $\frac{E_b}{N_0}\le 1.593$ dB, this upper bound is less than 1 bit per user). However, for *β* ≤ 1, the actual channel capacity reaches the single user capacity.
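The closed form (49) is a one-liner; the sketch below also checks the small-*β* limit $\frac{\log e}{2\sigma^{2}}$ mentioned above (parameter values are arbitrary):

```python
from math import log2

def asymptotic_upper_bound(beta, sigma2):
    """Eq. (49): normalized sum capacity <= min(1, log2(1 + beta/sigma2)/(2*beta))."""
    return min(1.0, log2(1.0 + beta / sigma2) / (2.0 * beta))

print(asymptotic_upper_bound(1.0, 1.0))    # beta = sigma^2 = 1  -> 0.5
print(asymptotic_upper_bound(1e-9, 1.0))   # approaches (log2 e)/2 ~ 0.7213
```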

For the binary input case, the normalized sum capacity bounds are plotted in Figure 13. Tanaka's result lies between the conjectured lower and upper bounds introduced in [43]. As *β* increases, Tanaka's capacity tends to reach the upper bound and the lower and upper bounds become tighter.

Figure 14 shows the asymptotic lower bound for the normalized sum capacity versus *η* for QPSK inputs and for *β* = 1 and 3. As *β* increases, the lower bound that was introduced in [21] becomes closer to Guo-Verdu's result (*η* is defined in (18)).

For a noiseless channel, the sum capacity was derived in [21], where the authors compared Tanaka's asymptotic capacity results [24, 39–42, 59] with their bounds. The asymptotic sum channel capacity for a fixed *β* equals [22]

$$\lim_{\substack{n/m=\beta\\ n,m\to\infty}}\frac{1}{n}c\left(m,n\right)=1.$$

(50)

As shown in the first section, as *m* becomes large, *n* increases much faster than *m* for COW codes. Thus, the assumption of reaching full capacity in (50) is justifiable. For nonbinary inputs and signature matrices, the asymptotic lower capacity bound follows from the following theorem [21]:

*Theorem* **11**

$$\lim_{\substack{n/\left(m\log n\right)\to\zeta\\ n,m\to\infty}}\frac{1}{n}C\left(m,n,\mathcal{I},\mathcal{S}\right)\ge\min_{\mathcal{J}\subseteq\tilde{\mathcal{I}}}\left\{\frac{\operatorname{rank}\left(\mathcal{J}\cdot\mathcal{S}\right)}{2\zeta}-\log\tilde{p}\left(\mathcal{J}\right)\right\},$$

(51)

where $\mathcal{J}\cdot\mathcal{S}=\left\{js\mid j\in\mathcal{J}, s\in\mathcal{S}\right\}$, $\tilde{p}\left(\mathcal{J}\right)=\sum_{j\in\mathcal{J}}\tilde{p}\left(j\right)$, and for a set of numbers Λ, rank(Λ) denotes the dimension of Λ as a set of vectors over the field of rational numbers $\mathbb{Q}$.

For the special case when $\mathcal{I}=\mathcal{S}=\left\{\pm 1\right\}$ and *π* and *p* are uniform distributions on $\mathcal{S}$ and $\mathcal{I}$ (the binary case), we have $\tilde{\mathcal{I}}=\left\{-2,0,2\right\}$ with $\tilde{p}\left(-2\right)=\tilde{p}\left(2\right)=\frac{1}{4}$ and $\tilde{p}\left(0\right)=\frac{1}{2}$. Thus, the above bound simplifies to:

$$\lim_{\substack{n/\left(m\log n\right)\to\zeta\\ n,m\to\infty}}\frac{1}{n}C\left(m,n\right)\ge\min\left\{1,\frac{1}{2\zeta}\right\}$$

(52)
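The minimization in (51) is easy to enumerate for the binary case, confirming the simplification (52). A sketch (the rank-over-$\mathbb{Q}$ helper is specialized here to this one-dimensional alphabet; it is not a general implementation):

```python
from itertools import combinations
from math import log2

# Difference alphabet of I = {+-1} and its distribution p~ (binary case):
I_tilde = {-2: 0.25, 0: 0.5, 2: 0.25}
S = (-1, 1)

def q_rank(values):
    # Dimension over the rationals; for this scalar alphabet it is 0 if all
    # elements are zero and 1 otherwise (nonzero reals here are pairwise
    # rationally dependent).
    return 0 if all(v == 0 for v in values) else 1

def binary_lower_bound(zeta):
    """Evaluate the minimization of Eq. (51) for the binary case."""
    elems = list(I_tilde)
    best = float("inf")
    for r in range(1, len(elems) + 1):
        for J in combinations(elems, r):
            JS = {j * s for j in J for s in S}
            p_J = sum(I_tilde[j] for j in J)
            best = min(best, q_rank(JS) / (2 * zeta) - log2(p_J))
    return best

for zeta in (0.25, 0.5, 1.0, 2.0):
    print(zeta, binary_lower_bound(zeta), min(1, 1 / (2 * zeta)))
```

For every ζ the enumeration returns exactly min{1, 1/(2ζ)}: the subset {0} contributes the value 1, while the full difference set contributes 1/(2ζ).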

In [43], an upper bound was also derived and the result is given below:

$$\lim_{\substack{n/\left(m\log n\right)=\zeta\\ n,m\to\infty}}\frac{1}{n}C\left(m,n\right)\le\min\left\{1,\frac{1}{2\zeta}\right\}$$

(53)

The above results for binary matrices show that the lower and upper capacity bounds approach each other asymptotically; therefore, we obtain the actual capacity.

In Figure 15, the normalized sum channel capacity for small to medium scale systems is shown. This figure shows that such systems cannot be accurately estimated by the asymptotic lower bound for high values of *ζ*.