Problem (10) is an optimization problem with several constraints. Convex optimization is a powerful tool for solving such problems. A convex optimization problem in standard form is formulated as follows:
$$\begin{array}{*{20}l} {\text{min}} \quad &{f_{0}}(x)\\ \text{s.t.} \quad &{f_{i}}(x) \le 0,i = 1, \cdots,q\\ &{a_{j}^{T}}x = {b_{j}},{\rm{}}j = 1, \cdots,p \end{array} $$
(11)
where all of \(f_0, \cdots, f_q\) are convex functions, \(f_i(x) \le 0\) are the inequality constraints, and \({h_{j}}(x) = {a_{j}^{T}}x - {b_{j}} = 0\) are the equality constraints [27]. However, as shown in problem (10), neither the objective nor the constraints are convex, so convex optimization cannot be applied directly. We therefore first investigate the original objective function and approximate it with a series of convex ones.
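For concreteness, a problem in the standard form (11) can be prototyped with an off-the-shelf modeling package. The following minimal sketch uses Python with cvxpy (our illustrative choice, not part of the original formulation) and random placeholder data:

```python
# Minimal sketch of the standard convex form (11) using cvxpy;
# the objective and problem data are arbitrary placeholders.
import cvxpy as cp
import numpy as np

n, q, p = 5, 3, 2                      # variables, inequalities, equalities
rng = np.random.default_rng(0)
F = rng.standard_normal((q, n))        # f_i(x) = F[i] @ x - 1 <= 0 (affine, hence convex)
A = rng.standard_normal((p, n))        # a_j^T x = b_j
b = rng.standard_normal(p)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x)),   # a convex f_0
                  [F @ x - 1 <= 0, A @ x == b])
prob.solve()
print(prob.status, prob.value)
```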
The objective function can be rewritten as the difference of two concave functions, u and w:
$$\begin{array}{*{20}l} \sum\limits_{i = 1}^{N} {{{\log }_{2}}\left({1 + \frac{{{p_{i}}{g_{i,i}}}}{{\sum_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} + {\sigma_{0}^{2}}}}} \right)} {= }u(\mathbf{p}) - w(\mathbf{p}), \end{array} $$
(12)
where both u(p) and w(p) are concave, since each is a sum of logarithms of affine functions of p:
$$\begin{array}{*{20}l} &u(\mathbf{p}) = \sum\limits_{i = 1}^{N} {{{\log }_{2}}\left({\sum\limits_{j = 1}^{N} {{p_{j}}{g_{j,i}} + {\sigma_{0}^{2}}}} \right)} \\ &w(\mathbf{p}) = \sum\limits_{i = 1}^{N} {{{\log }_{2}}\left({\sum\limits_{j \ne i}^{N} {{p_{j}}{g_{j,i}} + {\sigma_{0}^{2}}}} \right)}. \end{array} $$
(13)
Then, we compute the partial derivative of \(w(\mathbf{p})\) with respect to \(p_i\):
$$\begin{array}{*{20}l} \frac{{\partial w(\mathbf{p})}}{{\partial {p_{i}}}} &= \sum\limits_{l \ne i}^{N} {\frac{1}{{\ln 2}}\frac{{{g_{i,l}}}}{{\sum_{j \ne l}^{N} {{p_{j}}{g_{j,l}}} + {\sigma_{0}^{2}}}}} \\ & = \sum\limits_{l = 1}^{N} {\frac{{{e_{i,l}}}}{{\sum_{j \ne l}^{N} {{p_{j}}{g_{j,l}}} + {\sigma_{0}^{2}}}}}, \end{array} $$
(14)
where \({e_{i,l}} = \left\{ \begin{array}{ll} 0, & l = i\\ \frac{g_{i,l}}{\ln 2}, & l \ne i \end{array} \right.\). The derivatives are collected into the column vector \(\nabla w(\mathbf{p}) = \left\{ {{\tau_{i}}} \right\} = \left\{ {\frac{{\partial w(\mathbf{p})}}{{\partial {p_{i}}}}} \right\}\). We can approximate \(w(\mathbf{p})\) by its first-order Taylor expansion \(w(\mathbf{p}^{(k)}) + \left\langle \nabla w(\mathbf{p}^{(k)}), \mathbf{p} - \mathbf{p}^{(k)} \right\rangle\) around any given feasible solution \(\left\{ {p_{i}^{(k)}} \right\}\). Note that the superscript \(k\) denotes the iteration index. As a result, a conservative surrogate for the objective function is formulated as follows:
$$\begin{array}{*{20}l} - {f_{0}}(\mathbf{p}) = u(\mathbf{p}) - w({\mathbf{p}^{(k)}}) - \left\langle {\nabla w({\mathbf{p}^{(k)}}),\mathbf{p} - {\mathbf{p}^{(k)}}} \right\rangle. \end{array} $$
(15)
Since \(w(\mathbf{p})\) is concave, as noted above, it is upper-bounded by its first-order Taylor expansion at any point:
$$\begin{array}{*{20}l} w(\mathbf{p}) \le w({\mathbf{p}^{(k)}}) + \left\langle {\nabla w({\mathbf{p}^{(k)}}),\mathbf{p} - {\mathbf{p}^{(k)}}} \right\rangle. \end{array} $$
(16)
If \(\left\{ p_i \right\}\) is feasible and \(-f_0(\mathbf{p}^{(k+1)}) \ge -f_0(\mathbf{p}^{(k)})\), the following chain of inequalities holds:
$$\begin{array}{*{20}l}{} u({\mathbf{p}^{(k + 1)}}) - w({\mathbf{p}^{(k + 1)}})\ge - {f_{0}}({\mathbf{p}^{(k + 1)}})\ge u({\mathbf{p}^{(k)}}) - w({\mathbf{p}^{(k)}}). \end{array} $$
(17)
It can be seen that \(-f_0(\mathbf{p}^{(k)})\) provides a lower bound, so the iterates are non-decreasing in objective value. Maximizing (12) is equivalent to maximizing the initial function. In summary, although the original problem is non-convex, we can approximate it with a sequence of convex problems.
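The resulting successive convex approximation loop can be sketched in code. Below is a minimal Python sketch under our own assumptions: the channel matrix `G` (with `G[j, i]` \(= g_{j,i}\)), noise power `s2`, step sizes, and the projected-gradient inner solver are all illustrative choices, and constraint C2 is handled by a crude rescaling heuristic rather than an exact projection:

```python
# Successive convex approximation (SCA) for the sum-rate objective:
# at each outer iteration, w is replaced by its tangent plane at p^(k)
# and the concave surrogate u(p) - <grad_w(p^(k)), p> is maximized.
import numpy as np

LN2 = np.log(2.0)

def u(p, G, s2):
    # u(p) = sum_i log2( sum_j p_j g_{j,i} + sigma^2 )
    return np.sum(np.log2(G.T @ p + s2))

def grad_w(p, G, s2):
    # dw/dp_m = sum_{l != m} g_{m,l} / (ln2 * (sum_{j != l} p_j g_{j,l} + sigma^2))
    interf = G.T @ p - np.diag(G) * p               # interference at each receiver
    denom = (interf + s2) * LN2
    return G @ (1.0 / denom) - np.diag(G) / denom   # drop the l = m term

def sca_power_allocation(G, s2, pmax, Imax, outer=50, inner=200, lr=0.02):
    N = G.shape[0]
    p = np.full(N, 0.1 * pmax)            # assumed-feasible starting point
    for _ in range(outer):
        tau = grad_w(p, G, s2)            # linearize w at p^(k)
        x = p.copy()
        for _ in range(inner):            # projected gradient on the surrogate
            grad_u = G @ (1.0 / ((G.T @ x + s2) * LN2))
            x = np.clip(x + lr * (grad_u - tau), 0.0, pmax)   # project onto C1
            I = G.T @ x - np.diag(G) * x
            if I.max() > Imax:            # heuristic rescaling for C2
                x *= Imax / I.max()
        p = x
    return p
```

By (16) and (17), each outer iteration can only increase \(u(\mathbf{p}) - w(\mathbf{p})\), which is what makes this loop a monotone ascent on the original objective.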
We now examine the constraints. Since the CSI between SCs can be robustly estimated, C1 is a simple linear constraint, and so is C2. They can be equivalently written as:
$$\begin{array}{*{20}l} &{f_{1,i}}\left(\mathbf{p} \right) = {p_{i}} - {p_{\max }} \le 0,~\forall i\\ &{f_{2,i}}\left(\mathbf{p} \right) = \sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }} \le 0{\rm{}},~\forall i. \end{array} $$
(18)
For constraint C3, however, the uncertainty of the channel between the S-Tx and the P-Rx makes C3 intractable; that is, C3 cannot be expressed directly as a convex constraint. We therefore treat the constraints differently: the first two fit into the convex framework developed above (17), while C3 requires the transformation described below.
Let \({X_{i}} = {p_{i}}g_{i}^{\text{PU}}\), where the \(X_i\) are independent and exponentially distributed with mean \(p_i\lambda\). Denote by \(X = {\sum\nolimits}_{i} {{X_{i}}}\) the sum of these random variables. The original constraint can then be rewritten in the following probabilistic form:
$$\begin{array}{*{20}l} \text{Pr} \left[ {X = \sum\limits_{i} {{X_{i}}} \le I_{\max }^{p}} \right] \ge 1 - \varepsilon. \end{array} $$
(19)
How can constraint (19) be verified? The key lies in the distribution of \(X\). To characterize this distribution explicitly, we use a Gaussian approximation based on the following Lemma 1 [28].
Lemma 1. The Lyapunov central limit theorem (CLT): If \(X_1, X_2, \ldots, X_n\) are mutually independent with means \(E(X_k) = \mu_k\) and variances \(D\left({{X_{k}}}\right) = {\sigma_{k}^{2}} > 0\), then, provided the Lyapunov condition (22) below is satisfied, we obtain
$$\begin{array}{*{20}l} {Z_{n}} = \frac{{\sum_{k = 1}^{n} {{X_{k}}} - \sum_{k = 1}^{n} {{\mu_{k}}} }}{{{B_{n}}}} \sim N\left({0,1} \right) \end{array} $$
(20)
and
$$\begin{array}{*{20}l} \sum\limits_{k = 1}^{n} {{X_{k}}} \sim N\left({\sum\limits_{k = 1}^{n} {{\mu_{k}}},{B_{n}^{2}}} \right), \end{array} $$
(21)
where \({B_{n}^{2}} = \sum \limits _{k = 1}^{n} {{\sigma _{k}^{2}}}\).
To apply the Lyapunov CLT of Lemma 1, the following Lyapunov condition must hold:
$$ {\lim}_{N \to \infty} \frac{{{{\left({\sum_{i = 1}^{N} {{r_{i}}}} \right)}^{\frac{1}{3}}}}}{{{{\left({\sum_{i = 1}^{N} {{\sigma_{i}^{2}}}} \right)}^{\frac{1}{2}}}}} = 0, $$
(22)
where \(r_i\) is the third-order central moment of \(X_i\), i.e., \(r_i = E[(X_i - m_i)^3]\), and \(m_i\) and \({\sigma_{i}^{2}}\) are the mean and variance of the exponentially distributed random variable \(X_i\), respectively. This condition is easy to verify [20].
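To make the check concrete (a short verification under the assumption of bounded means, which the original leaves implicit): an exponential variable with mean \(m_i\) has \(\sigma_i^2 = m_i^2\) and \(r_i = 2m_i^3\), so if \(0 < c \le m_i \le C\) for all \(i\), then

$$ \frac{\left(\sum_{i=1}^{N} r_i\right)^{\frac{1}{3}}}{\left(\sum_{i=1}^{N} \sigma_i^2\right)^{\frac{1}{2}}} = \frac{\left(2\sum_{i=1}^{N} m_i^3\right)^{\frac{1}{3}}}{\left(\sum_{i=1}^{N} m_i^2\right)^{\frac{1}{2}}} = O\left(N^{\frac{1}{3} - \frac{1}{2}}\right) \to 0, $$

so the Lyapunov condition (22) indeed holds as the number of connections grows.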
In general, for massive connections, we can approximately regard \(X\) as a normally distributed random variable with mean \(m\) and variance \(\sigma^2\), where
$$\begin{array}{*{20}l} &m \sim \sum_{i} {{p_{i}}\lambda} \\ &{\sigma^{2}} \sim \sum_{i} {{{\left({{p_{i}}\lambda} \right)}^{2}}}. \end{array} $$
(23)
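As a quick numerical sanity check of (23) (a sketch with arbitrary powers and \(\lambda = 1\); the values are ours, not from the paper):

```python
# Monte Carlo check of the Gaussian approximation (23) for
# X = sum_i X_i with X_i ~ Exp(mean = p_i * lam); placeholder values.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0
p = rng.uniform(0.5, 1.5, size=50)            # 50 hypothetical transmit powers
means = p * lam
m, var = means.sum(), np.sum(means ** 2)      # moments predicted by (23)

samples = rng.exponential(means, size=(100_000, means.size)).sum(axis=1)
print("empirical mean/var :", samples.mean(), samples.var())
print("approximation (23) :", m, var)
```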
Therefore, the following constraint is a conservative surrogate for (19):
$$\begin{array}{*{20}l} P(\mathbf{p}) &= 1 - {F_{N}}(I_{\max }^{p})\\ &= \frac{1}{2}\text{erfc}\left(\frac{{I_{\max }^{p} - m}}{{\sqrt 2 \sigma }}\right) \le \varepsilon, \end{array} $$
(24)
where \(F_N(\cdot)\) is the cumulative distribution function (CDF) of a normal distribution with mean \(m\) and variance \(\sigma^2\), and \(\text{erfc}(z) = \frac{2}{{\sqrt \pi }}\int_{z}^{\infty} {{e^{- {t^{2}}}}dt}\). For (24), we define \({f_{3}}\left(\mathbf{p}\right) = \frac{1}{2}\text{erfc}\left(\frac{{I_{\max }^{p} - m}}{{\sqrt 2 \sigma }}\right) - \varepsilon\). Clearly, this is not a standard convex function, so it needs separate treatment. Inspired by the scheme proposed in [20], once a power allocation is obtained from the first two constraints, we can check whether it satisfies the outage constraint (24).
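This feasibility test reduces to a one-line computation. The sketch below (the function name and interface are our own) evaluates the surrogate with `scipy.special.erfc`:

```python
# Outage-constraint check f_3(p) <= 0 from (24); a sketch.
import numpy as np
from scipy.special import erfc

def outage_ok(p, lam, I_max_p, eps):
    """Return True if the conservative surrogate (24) is satisfied."""
    means = np.asarray(p) * lam
    m = means.sum()
    sigma = np.sqrt(np.sum(means ** 2))
    P_out = 0.5 * erfc((I_max_p - m) / (np.sqrt(2.0) * sigma))
    return P_out <= eps
```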
In summary, \((\mathcal{P})\) can thus be recast as
$$\begin{array}{*{20}l} (\mathcal{P}1)~~&\min {f_{0}}(\mathbf{p})\\ ~~~\text{s.t.}~~&C1:{f_{1,i}}\left(\mathbf{p} \right) = {p_{i}} - {p_{\max }} \le 0,~~\forall i\\ &C2:{f_{2,i}}\left(\mathbf{p} \right) = \sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }} \le 0,~~\forall i\\ &C3:{f_{3}}(\mathbf{p}){{= }}\frac{1}{2}{\text{erfc}}\left(\frac{{I_{\max }^{p} - m}}{{\sqrt 2 \sigma}}\right) - \varepsilon \le 0. \end{array} $$
(25)
Since C3 is not convex, as discussed above, problem \((\mathcal{P}1)\) does not match the standard form (11) and cannot be handled directly. To obtain a feasible solution, we divide the original problem \((\mathcal{P}1)\) into two parts:
$$\begin{array}{*{20}l} \left({\mathcal{SP}1} \right):~~&\min {f_{0}}\left(\mathbf{p} \right)\\ ~~~\text{s.t.}~~&C1, C2 \end{array} $$
(26)
and
$$\begin{array}{*{20}l} (\mathcal{SP}2):{f_{3}}(\mathbf{p}) = \frac{1}{2}{\text{erfc}}\left(\frac{{I_{\max }^{p} - m}}{{\sqrt 2 \sigma}}\right) - \varepsilon \le 0. \end{array} $$
(27)
If we obtain an optimal solution \(\mathbf{p}\) from \((\mathcal{SP}1)\), we examine whether it satisfies \((\mathcal{SP}2)\) by substitution. In other words, the output of \((\mathcal{SP}1)\) is the input of \((\mathcal{SP}2)\).
The first part, \((\mathcal{SP}1)\), can be handled with the convex approximation developed above. The Lagrangian of problem \((\mathcal{SP}1)\) is given by
$$ \begin{aligned} L(\mathbf{p},\mathbf{\lambda },\mathbf{\nu }) &= {f_{0}}\left(\mathbf{p} \right) + \sum\limits_{i = 1}^{N} {{\lambda_{i}}\left({{p_{i}} - {p_{\max }}} \right)}\\ &+ \sum\limits_{i = 1}^{N} {{v_{i}}\left({\sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }}} \right)}, \end{aligned} $$
(28)
where λ and ν are the non-negative dual variables associated with the corresponding constraints C1 and C2, respectively. The Lagrangian dual function is defined as follows:
$$ \begin{aligned} G(\mathbf{\lambda },\mathbf{\nu })& = \inf_{\mathbf{p} = \left\{ {{p_{i}}} \right\}} L(\mathbf{p},\mathbf{\lambda },\mathbf{\nu }) \\ &= \inf_{\mathbf{p} = \left\{ {{p_{i}}} \right\}} \left({f_{0}}\left(\mathbf{p} \right) + \sum\limits_{i = 1}^{N} {{\lambda_{i}}\left({{p_{i}} - {p_{\max}}} \right)} \right.\\ &\left. \quad+ \sum\limits_{i = 1}^{N} {{v_{i}}\left({\sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }}} \right)} \right). \end{aligned} $$
(29)
If the Lagrangian is unbounded below in \(\mathbf{p}\), the dual function takes the value \(-\infty\).
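As an illustration only (this is not the paper's algorithm), the dual function (29) can be evaluated numerically by minimizing the Lagrangian over \(\mathbf{p} \ge 0\); the use of `scipy.optimize.minimize` and the implicit non-negativity bound are our assumptions, and since \(f_0\) is non-convex the inner minimizer only approximates the infimum:

```python
# Numerical evaluation of the dual function (29); a sketch only.
import numpy as np
from scipy.optimize import minimize

def dual_value(f0, G, pmax, Imax, lam, nu, p0):
    """Approximate G(lam, nu) = inf_p L(p, lam, nu) over p >= 0."""
    def lagrangian(p):
        interf = G.T @ p - np.diag(G) * p       # sum_{j != i} p_j g_{j,i}
        return f0(p) + lam @ (p - pmax) + nu @ (interf - Imax)
    res = minimize(lagrangian, p0, method="L-BFGS-B",
                   bounds=[(0.0, None)] * len(p0))
    return res.fun    # a very large negative value signals unboundedness
```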
Next, we analyze the feasibility of this subproblem. Problem (26) is feasible if and only if the following linear program (LP) is feasible:
$$\begin{array}{*{20}l} (\mathcal{SP}1'):~~&{\text{find}}~~ \mathbf{p}\\ ~~~\text{s.t.}~~&C1:{f_{1,i}}\left(\mathbf{p} \right) = {p_{i}} - {p_{\max}} \le 0,~~\forall i\\ ~~&C2:{f_{2,i}}\left(\mathbf{p} \right) = \sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }} \le 0,~~\forall i. \end{array} $$
(30)
The corresponding Lagrangian is given by
$$ \begin{aligned} L'(\mathbf{p},\mathbf{\lambda '},\mathbf{\nu '}) &= \sum\limits_{i = 1}^{N} {{{\lambda '}_{i}}\left({{p_{i}} - {p_{\max }}} \right)}\\ &+ \sum\limits_{i = 1}^{N} {{{v'}_{i}}\left({\sum\limits_{j = 1,j \ne i}^{N} {{p_{j}}{g_{j,i}}} - {I_{\max }}} \right)}, \end{aligned} $$
(31)
where \(\mathbf{\lambda}' = [\lambda'_1, \cdots, \lambda'_N] \ge 0\) and \(\mathbf{\nu}' = [\nu'_1, \cdots, \nu'_N] \ge 0\) are the Lagrange multipliers associated with the \(N\) connections' constraints. The dual function of problem \((\mathcal{SP}1')\) is then given by
$$ G'\left({\mathbf{\lambda '},\mathbf{\nu '}} \right){= \inf_{\mathbf{p} = \left\{ {{p_{i}}} \right\}}}L'(\mathbf{p},\mathbf{\lambda '},\mathbf{\nu '}). $$
(32)
Theorem 1. For given \(I_{\max} > 0\) and \(p_{\max} > 0\), problem \((\mathcal{SP}1')\) is infeasible if and only if there exist \(\mathbf{\lambda}' \ge 0\) and \(\mathbf{\nu}' \ge 0\) such that \(G'(\mathbf{\lambda}', \mathbf{\nu}') > 0\); otherwise, \((\mathcal{SP}1')\) is feasible.
Proof. See Appendix for details. □
For the second part, \((\mathcal{SP}2)\), we check whether the optimal solution \(\mathbf{p}_t^*\) obtained from \((\mathcal{SP}1)\) satisfies the outage constraint (27). If it does, we take it as the optimal solution to \(\mathcal{P}1\); otherwise, all transmit powers are decreased by a step \(\gamma\) and we search for the optimal solution again.
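Putting the two parts together yields the back-off loop sketched below; `sca_power_allocation` and `outage_ok` are the sketches from earlier in this section, and interpreting the back-off as a shrinking power cap (rather than a one-off power reduction) is our reading of the step-\(\gamma\) search:

```python
# Two-stage solution: solve (SP1), then verify (SP2); on failure,
# shrink the power cap by gamma and re-search. A sketch only.
def two_stage_power_allocation(G, s2, pmax, Imax, lam, I_max_p, eps, gamma=0.05):
    cap = pmax
    while cap > 0:
        p = sca_power_allocation(G, s2, cap, Imax)   # part 1: solve (SP1)
        if outage_ok(p, lam, I_max_p, eps):          # part 2: check (SP2)
            return p
        cap -= gamma                                 # back off by step gamma
    raise RuntimeError("no allocation satisfies the outage constraint")
```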
Since the original intractable problem has been transformed into a more tractable form, we obtain a near-optimal solution with acceptable computational complexity. The detailed algorithm design is provided in the following section.