### Appendix

### Proof of Proposition 1

We first need the following lemma.

**Lemma 3**

*Let* \(\alpha > 0\), \(s_{i},s_{j}\in {\text{supp}}\left\{ X_{d}\right\}\), *and* \(g\left( n,\alpha \right) =\sum _{j=1,j\ne i}^{n}e^{-\alpha \left( s_{i}-s_{j}\right) ^{2}}\). *Then we have*

$$\begin{aligned} g\left( n,\alpha \right)&\le 2e^{-\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) } \\ &\quad +\sqrt{\frac{\pi }{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}{\text{Erfc}}\left( \sqrt{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\right) \\ & \le \sqrt{\frac{\pi }{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}. \end{aligned}$$

(25)

*Proof*

Note that we can always re-label the \(\left\{ s_{j}\right\} _{j=1,j\ne i}^{n}\) such that \(s_{1}\) is the closest point to \(s_{i}\), \(s_{2}\) is the next closest, and so on. Clearly, \(\left| s_{i}-s_{j}\right| \ge d_{{\mathrm{min}} }\left( X_{d}\right)\) by definition. More generally, since consecutive points are at least \(d_{{\mathrm{min}} }\left( X_{d}\right)\) apart, the *j*-th closest point on either side of \(s_{i}\) satisfies

$$\begin{aligned} \left( s_{i}-s_{j}\right) ^{2}\ge \left( s_{i}-\left[ s_{i}\pm jd_{{\mathrm{min}} }\left( X_{d}\right) \right] \right) ^{2}=j^{2}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) , \end{aligned}$$

and this allows us to upper bound \(g\left( n,\alpha \right)\) by \(2\sum _{j=1}^{\frac{n-1}{2}}e^{-\alpha j^{2}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\). Bounding all but the first term of this sum by the integral \(2\int _{1}^{\infty }e^{-\alpha x^{2}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\,dx\) and making use of [28, Sect. 8.12], we get (25). The proof of the last inequality in the lemma is simple and therefore omitted. \(\square\)
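Since the bound in Lemma 3 holds for any one-dimensional constellation with the stated minimum distance, it can be spot-checked numerically. The sketch below (the grid spacing and the values of \(\alpha\) are arbitrary choices, not from the paper) evaluates \(g(n,\alpha)\) directly and compares it against both bounds in (25), using `math.erfc` for the \({\text{Erfc}}\) term:

```python
import math

def lemma3_bounds(points, i, alpha):
    """Return (g, middle bound, loose bound) from Lemma 3 for the i-th
    point of the sorted support."""
    s = sorted(points)
    d2 = min(b - a for a, b in zip(s, s[1:])) ** 2   # d_min^2(X_d)
    g = sum(math.exp(-alpha * (s[i] - sj) ** 2)
            for k, sj in enumerate(s) if k != i)
    x = alpha * d2                                    # alpha * d_min^2
    mid = 2 * math.exp(-x) + math.sqrt(math.pi / x) * math.erfc(math.sqrt(x))
    return g, mid, math.sqrt(math.pi / x)

# Check g <= middle bound <= sqrt(pi / (alpha d_min^2)) on a uniform grid.
grid = [j * 0.37 for j in range(9)]
for alpha in (0.5, 2.0, 10.0):
    for i in range(len(grid)):
        g, mid, loose = lemma3_bounds(grid, i, alpha)
        assert g <= mid <= loose
```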

We start from the bound reported in [4, Eq. (18a)]:

$$\begin{aligned} I\left( X_{d};\sqrt{P}X_{d}+Z\right) \ge -\log \left[ \sum _{i,j\in \left[ 1:n\right] ^{2}}{\tilde{p}}_{i}{\tilde{p}}_{j}\frac{1}{\sqrt{4\pi }}e^{-\frac{\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}}{4}}\right] -c_{1}, \end{aligned}$$

(26)

where \(s_{i}\in {\text{supp}}\left\{ X_{d}\right\}\), \({\tilde{p}}_{i}=\Pr \left\{ X_{d}=s_{i}\right\}\), and \(c_{1}=\frac{1}{2}\log \left( 2\pi e\right)\). This result is leveraged below to produce an analytical bound that improves on the Ozarow–Wyner bound in Eq. (5).

We can now prove the proposition. The main idea is to bound the summation in (26) using a *staircase* approximation:

$$\begin{aligned} \frac{1}{\sqrt{4\pi }}\sum _{i,j\in \left[ 1:n\right] ^{2}}{\tilde{p}}_{i}{\tilde{p}}_{j}e^{-\frac{1}{4}\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}} &\le \frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\sum _{i\in \left[ 1:n\right] }{\tilde{p}}_{i}\left[ 1+\sum _{j\in \left[ 1:n\right] \backslash i}e^{-\frac{1}{4}\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}}\right] \\ &\overset{\left( a\right) }{\le }\frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\Biggl [1+2e^{-\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\\&\quad +\sqrt{\frac{\pi }{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}{\text{Erfc}}\left( \sqrt{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\right) \Biggr ]\\ &\le \frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\left[ 1+\sqrt{\frac{\pi }{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}\right] , \end{aligned}$$

where in (*a*) we use Lemma 3 with \(\alpha =\frac{1}{4}\frac{P}{N}\).
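The chain of inequalities above can likewise be spot-checked. The following sketch compares the double sum appearing in (26) against the final closed-form upper bound with \(\alpha =\frac{1}{4}\frac{P}{N}\); the support, probabilities, and SNR values are arbitrary illustrative choices:

```python
import math

def double_sum(support, probs, snr):
    """The double sum from (26), scaled by 1/sqrt(4*pi)."""
    c = 1.0 / math.sqrt(4 * math.pi)
    return c * sum(pi * pj * math.exp(-snr * (si - sj) ** 2 / 4)
                   for si, pi in zip(support, probs)
                   for sj, pj in zip(support, probs))

def staircase_bound(support, probs, snr):
    """Final closed-form upper bound, with alpha = snr / 4."""
    s = sorted(support)
    d2 = min(b - a for a, b in zip(s, s[1:])) ** 2   # d_min^2
    x = snr * d2 / 4
    return max(probs) / math.sqrt(4 * math.pi) * (1 + math.sqrt(math.pi / x))

n = 8
support = [(2 * k - n + 1) * 0.5 for k in range(n)]  # symmetric grid
probs = [1.0 / n] * n                                # uniform input
for snr in (1.0, 10.0, 100.0):
    assert double_sum(support, probs, snr) <= staircase_bound(support, probs, snr)
```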

### Proof of Theorem 2

A constant-gap approximation to capacity is relevant for large *P*, so throughout this section we assume that *P* is sufficiently large. The transmissions \(X_{i}\) are symmetric \(n_{i}\)-PAM modulations, where \(n_{i}\) is the number of discrete levels \(X_{i}\) may take, i.e., \(n_{i}=\left| {\text{supp}}\left[ X_{i}\right] \right|\). Furthermore, for a unit-power PAM input it is easy to verify (see (1)) that \(d_{{\mathrm{min}} }\left( X_{i}\right) =\sqrt{\frac{12}{n_{i}^{2}-1}}.\)
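As a quick check of this formula, the sketch below builds a symmetric \(n\)-PAM constellation with spacing \(\sqrt{12/(n^{2}-1)}\) and verifies that it has unit power and exactly that minimum distance:

```python
import math

def pam(n):
    """Unit-power symmetric n-PAM: equally spaced, zero mean, E[X^2] = 1."""
    d = math.sqrt(12.0 / (n ** 2 - 1))   # claimed minimum distance
    return [(2 * k - n + 1) * d / 2 for k in range(n)]

for n in (2, 4, 8, 16):
    pts = pam(n)
    power = sum(x * x for x in pts) / n
    d_min = min(b - a for a, b in zip(pts, pts[1:]))
    assert abs(power - 1.0) < 1e-12
    assert abs(d_min - math.sqrt(12.0 / (n ** 2 - 1))) < 1e-12
```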

The two cases require distinct proofs and are treated individually.

#### Case 1: \(h_{2}^{2}\ge h_{1}^{4}\frac{P}{N}\)

For the very strong interference regime, \(h_{2}^{2}\ge h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\), the result was proved in [4, Th. 7]. We therefore only have to prove it for \(h_{2}^{2}<h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\). Choose \(n=\left\lfloor \sqrt{\frac{3}{4}}\frac{|h_{2}|}{|h_{1}|}\right\rfloor\); by the definition of the floor function we may bound

$$\begin{aligned} \left( \sqrt{\frac{3}{4}}\frac{|h_{2}|}{|h_{1}|}-1\right) ^{2}\le n^{2}\le \frac{3}{4}\frac{h_{2}^{2}}{h_{1}^{2}}. \end{aligned}$$

Using the result in [4, Prop. 2], we obtain the minimum distance of the discrete constellation \(\sqrt{P}\left( h_{1}X_{1}+h_{2}X_{2}\right)\) as

$$\begin{aligned} d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right) =\sqrt{h_1^2P}d_{{\mathrm{min}} }\left( X_{i}\right) . \end{aligned}$$

For a fixed constant \(c>0\), Proposition 1 (or Eq. (5)) gives

$$\begin{aligned} I\left( Y_{i}-Z_{i};Y_{i}\right) +c\ge \log \left( n^{2}\right) -\frac{1}{2}\log \left( 1+\frac{12}{d_{{\mathrm{min}} }^{2}\left( Y_{i}-Z_{i}\right) /N}\right) . \end{aligned}$$

We now lower bound the normalized minimum distance. Using \(d_{{\mathrm{min}} }^{2}\left( X_{i}\right) =\frac{12}{n^{2}-1}\ge \frac{12}{n^{2}}\) together with \(n^{2}\le \frac{3}{4}\frac{h_{2}^{2}}{h_{1}^{2}}\), we have

$$\begin{aligned} d_{{\mathrm{min}} }^{2}\left( Y_{i}-Z_{i}\right) /N&\ge 16\frac{h_{1}^{4}P/N}{h_{2}^{2}}\\ &\ge 16\frac{h_{1}^{2}P/N}{1+h_{1}^{2}P/N}\\ &\ge 8, \end{aligned}$$

where the second inequality uses \(h_{2}^{2}<h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\), and the last follows since *P* is large (so that \(h_{1}^{2}P/N\ge 1\)). The above estimate allows us to state that \(R\ge \frac{1}{2}\log \left( \frac{3}{4}\frac{h_{2}^{2}}{h_{1}^{2}}\right) -c_{1}\), for some constant \(c_{1}>0\). Finally, it is easy to verify that

$$\begin{aligned} \left| 2R-\frac{1}{2}\log \left( 1+h_2^2\frac{P}{N}+h_1^2\frac{P}{N}\right) \right| &<c_{3} \end{aligned}$$

for some constant \(c_{3}>0\).
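The chain of inequalities in this case can be verified numerically. The sketch below assumes \(n=\left\lfloor \sqrt{3/4}\,|h_{2}|/|h_{1}|\right\rfloor\), unit-power PAM with \(d_{{\mathrm{min}} }^{2}\left( X_{i}\right) =\frac{12}{n^{2}-1}\), and sample gains placed inside the sub-case \(h_{1}^{4}\frac{P}{N}\le h_{2}^{2}<h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\); the specific values of \(P\) and \(h_{1}\) are arbitrary:

```python
import math

# Sanity check of the Case 1 distance chain for sample parameters.
N = 1.0
for P in (1e3, 1e5, 1e7):
    for h1 in (0.5, 1.0, 2.0):
        # Pick h2 inside the sub-case h1^4 P/N <= h2^2 < h1^2 (1 + h1^2 P/N).
        h2 = math.sqrt(h1 ** 4 * P / N + h1 ** 2 / 2)
        assert h1 ** 4 * P / N <= h2 ** 2 < h1 ** 2 * (1 + h1 ** 2 * P / N)
        n = math.floor(math.sqrt(0.75) * h2 / h1)
        assert n ** 2 <= 0.75 * h2 ** 2 / h1 ** 2
        # d_min^2(Y_i - Z_i)/N = h1^2 (P/N) * 12/(n^2 - 1) for unit-power PAM.
        d2_over_N = h1 ** 2 * (P / N) * 12.0 / (n ** 2 - 1)
        assert d2_over_N >= 16 * h1 ** 4 * (P / N) / h2 ** 2 >= 8
```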

#### Case 2: \(h_{1}^{2}<h_{2}^{2}<h_{1}^{4}\frac{P}{N}\)

For large *P*, we can write the signal at receiver \(i\), with \(i,j\in \left\{ 1,2\right\}\) and \(i\ne j\), as

$$\begin{aligned} Y_{i}=2^{\ell _{i}}\tilde{h_{i}}X_{i}+2^{\ell _{j}}\tilde{h_{j}}X_{j}+Z_{i},\ i\ne j. \end{aligned}$$

Recall that \({\tilde{h}}_{i}\in [1,2)\) was defined as the fractional part of \(h_i\) *in the log domain*, and \(\ell _i \triangleq \log _2 \frac{h_i}{{\tilde{h}}_i}\); therefore, \(\ell _i\in {\mathbb{Z}}^{+}.\) Now set \(n=\left\lfloor \left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{\frac{1}{4}}\right\rfloor -1\) and \(m=\frac{n-1}{2}\). It is easy to show that

$$\begin{aligned} m\,d_{{\mathrm{min}} }\left( X_{i}\right) &\le\sqrt{3}, \\ n&\le\left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{\frac{1}{4}}. \end{aligned}$$

(27)
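Both inequalities in (27) can be checked numerically. The sketch below uses unit-power PAM (so \(d_{{\mathrm{min}} }\left( X_{i}\right) =\sqrt{12/(n^{2}-1)}\)) and arbitrary sample gains and powers:

```python
import math

# Check (27): with n = floor((1 + (h1^2 + h2^2) P/N)^(1/4)) - 1 and
# m = (n - 1)/2, unit-power n-PAM satisfies m * d_min <= sqrt(3).
N = 1.0
for P in (1e2, 1e4, 1e6):
    for h1, h2 in [(1.0, 2.0), (0.7, 1.5), (2.0, 3.0)]:
        snr_sum = 1 + (h1 ** 2 + h2 ** 2) * P / N
        n = math.floor(snr_sum ** 0.25) - 1
        d_min = math.sqrt(12.0 / (n ** 2 - 1))
        m = (n - 1) / 2
        assert m * d_min <= math.sqrt(3) + 1e-12
        assert n <= snr_sum ** 0.25
```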

In this case, using [29, Prop. 2], which holds for certain channel gains \(h_{1},h_{2}\), we have the following bound on \(d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right)\):

$$\begin{aligned} d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right) & \ge \frac{\gamma }{8\sqrt{3}}\frac{\sqrt{P}\,2^{\ell _{1}}{\tilde{h}}_{1}2^{\ell _{2}}{\tilde{h}}_{2}\,d_{{\mathrm{min}} }^{2}\left( X_{i}\right) }{2^{\ell _{1}}+2^{\ell _{2}}}\\ &=\frac{\gamma }{8\sqrt{3}}\frac{P\left| h_{2}h_{1}\right| d_{{\mathrm{min}} }^{2}\left( X_{i}\right) }{\sqrt{P}\left( \left| h_{1}\right| /{\tilde{h}}_{1}+\left| h_{2}\right| /{\tilde{h}}_{2}\right) }\\ & \overset{\left( a\right) }{\ge } \frac{\gamma \cdot 2}{3\sqrt{3}}\frac{\sqrt{P}\left| h_{2}h_{1}\right| }{\left| h_{1}\right| /{\tilde{h}}_{1}+\left| h_{2}\right| /{\tilde{h}}_{2}}\left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{-\frac{1}{2}}\\ & \overset{\left( b\right) }{\ge }\frac{\gamma \cdot 2}{3\sqrt{3}}\frac{\left| h_{2}h_{1}\right| \left( 1+2h_{2}^{2}\right) ^{-\frac{1}{2}}}{\left| h_{1}\right| +\left| h_{2}\right| }=C_{\gamma ,h_{1},h_{2}}. \end{aligned}$$

To arrive at (*a*), we first lower bound the term \(d_{{\mathrm{min}} }\left( X_{i}\right)\) by \(\frac{\sqrt{12}}{n}\) and then use the upper bound on *n* from Eq. (27). At \(\left( b\right)\), we make use of the relation

$$\begin{aligned} 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\le \frac{P}{N}+2h_{2}^{2}\frac{P}{N}\end{aligned}$$

which holds since \(P/N\ge 1\) and \(h_{1}^{2}\le h_{2}^{2}\), and lower bound the constants \({\tilde{h}}_{i}\ge 1\) to arrive at a constant that depends on \(\gamma ,h_{1}\), and \(h_{2}.\) Finally, note that for the channel realizations \(h_{1},h_{2}\) for which the above bound on \(d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right)\) holds, we have

$$\begin{aligned} H\left( \sqrt{P}h_{1}X_{i}+\sqrt{P}h_{2}X_{j}\right) =H\left( X_{i}\right) +H\left( X_{j}\right) , \end{aligned}$$

which allows us to use Eq. (6) and Proposition 1 to bound \(R\ge \log \left( n\right) -O\left( \log \gamma \right) .\)
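The entropy identity above reflects the fact that, for generic channel gains, all \(n^{2}\) points of the sum constellation are distinct, so the entropies of the two independent inputs add. A small illustration (the gains below are arbitrary examples, not the paper's regime conditions):

```python
import math
from collections import Counter

def sum_entropy_bits(h1, h2, n):
    """Entropy (bits) of h1*X + h2*Y for independent uniform n-PAM X, Y."""
    pts = [2 * k - n + 1 for k in range(n)]   # PAM levels, up to a common scale
    counts = Counter(round(h1 * a + h2 * b, 9) for a in pts for b in pts)
    total = n * n
    return -sum(v / total * math.log2(v / total) for v in counts.values())

n = 8
# Generic gains (here h2/h1 irrational): all n^2 sums distinct, entropy adds.
assert abs(sum_entropy_bits(1.0, math.sqrt(2), n) - 2 * math.log2(n)) < 1e-9
# Degenerate gains (h1 = h2): many sums coincide and the entropy is smaller.
assert sum_entropy_bits(1.0, 1.0, n) < 2 * math.log2(n)
```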