In this section, to reduce the computational complexity, we transform problem P2 so that the number of optimization variables decreases dramatically from (*K*+1)(*T*+1) to \frac{T}{2}+1. Based on the bisection search method, a low-complexity greedy algorithm is then proposed to solve problem P2.

Firstly, since Lemma 1 provides a necessary condition for the optimal solutions of (12) and problem P2, based on (16), the resource allocation variables *μ*_{k}(*t*) at slot *t* can be substituted by a single variable *x*(*t*). The objective function of problem P2 then simplifies to

\begin{array}{ll}\sum _{t=0}^{T}\sum _{k\in \mathcal{K}}{\omega}_{k}\ln\left({\mu}_{k}(t)\right)&=\sum _{t=0}^{T}\sum _{k\in \mathcal{K}}{\omega}_{k}\left(\ln\left({\omega}_{k}\right)+\ln\left(x(t)\right)\right)\\ &=\alpha +\gamma \sum _{t=0}^{T}\ln\left(x(t)\right),\end{array}

(17)

where \alpha =(T+1)\sum _{k}{\omega}_{k}\ln\left({\omega}_{k}\right) and \gamma =\sum _{k}{\omega}_{k} are both constants. Similarly, constraint (7c) becomes \sum _{k}{\mu}_{k}\left(t\right)=\gamma x\left(t\right). Thus, problem P2 can be transformed into
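As a quick sanity check, the identity in (17) can be verified numerically, assuming *μ*_{k}(*t*)=*ω*_{k}*x*(*t*) from (16); the weights and *x*(*t*) values below are arbitrary illustrative numbers:

```python
import math

# Illustrative values (assumed): K = 3 service weights and a
# horizon of T + 1 = 4 slots with arbitrary positive x(t).
omega = [0.5, 1.0, 1.5]
x = [2.0, 3.0, 1.5, 4.0]          # x(t) for t = 0..T with T = 3
T = len(x) - 1

# Left-hand side of (17): sum_t sum_k omega_k * ln(mu_k(t)),
# with mu_k(t) = omega_k * x(t) from (16).
lhs = sum(w * math.log(w * xt) for xt in x for w in omega)

# Right-hand side: alpha + gamma * sum_t ln(x(t)), where
# alpha = (T+1) * sum_k omega_k ln(omega_k) and gamma = sum_k omega_k.
alpha = (T + 1) * sum(w * math.log(w) for w in omega)
gamma = sum(omega)
rhs = alpha + gamma * sum(math.log(xt) for xt in x)

assert abs(lhs - rhs) < 1e-9
```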

\begin{array}{ll}\left(\mathbf{\text{P3}}\right)\phantom{\rule{0.75em}{0ex}}\text{maximize}& \sum _{t=0}^{T}\ln\left(x(t)\right)\phantom{\rule{2em}{0ex}}\end{array}

(18a)

\begin{array}{ll}\text{subject to}& \frac{1}{T+1}\sum _{t=0}^{T}P\left(t\right)\le {P}_{\text{av}},\phantom{\rule{2em}{0ex}}\end{array}

(18b)

\begin{array}{l}\phantom{\rule{5em}{0ex}}\gamma x\left(t\right)\le \tilde{C}\left(t\right)=\frac{{T}_{s}W}{L}{\log}_{2}\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right),\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{12em}{0ex}}\forall t\in [0,T],\phantom{\rule{2em}{0ex}}\end{array}

(18c)

\begin{array}{ll}\text{variables}& x\left(t\right)\ge 0,P\left(t\right)\ge 0,\forall t\in [0,T].\phantom{\rule{2em}{0ex}}\end{array}

(18d)

**Lemma 2.**
*Suppose that the optimal solution of problem P3 exists; then, the optimal solution provides proportionally fair resource allocation over time for each service.*

*Proof*. The proof of Lemma 2 is provided in Appendix 2.

After the problem transformation, the total number of variables decreases from (*K*+1)(*T*+1) to 2(*T*+1), and hence, the computational complexity is dramatically reduced when *K* is large. Based on the investigation on problem P3, the total number of variables can be further reduced to *T*+1 as shown below.

It is easy to show that at the optimum of problem P3, constraints (18b) and (18c) are both tight; otherwise, one could increase the values of *x*(*t*) and *P*(*t*) to further increase the objective function. Thus, we have

\frac{1}{T+1}\sum _{t=0}^{T}P\left(t\right)={P}_{\text{av}},

(19)

and

\gamma x\left(t\right)=\tilde{C}\left(t\right)=\frac{{T}_{s}W}{L\ln 2}\ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right),\phantom{\rule{1em}{0ex}}\forall t\in [0,T].

(20)
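Note that (20) is simply (18c) with the base-2 logarithm rewritten in natural logarithms; the two forms agree exactly, as a quick numeric check (with assumed values for *T*_{s}*W*/*L*, *P*(*t*), and *N*(*t*)) confirms:

```python
import math

# Assumed illustrative values: Ts*W/L = 10, P(t) = 3, N(t) = 1.
TsW_L = 10.0
P, N = 3.0, 1.0

# Capacity written with log2, as in (18c).
c_log2 = TsW_L * math.log2(1.0 + P / N)
# Same capacity written with ln / ln 2, as in (20).
c_ln = (TsW_L / math.log(2)) * math.log(1.0 + P / N)

assert abs(c_log2 - c_ln) < 1e-9
```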

Based on (20), there exists a one-to-one correspondence between *x*(*t*) and *P*(*t*), which is expressed by

x\left(t\right)=\eta \ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right),

(21)

where \eta =\frac{{T}_{s}W}{\gamma L\ln 2}.
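Since the map in (21) is strictly increasing in *P*(*t*), the correspondence is indeed one-to-one. A minimal sketch, with assumed values for the constants in *η*:

```python
import math

# Assumed illustrative constants (not from the paper):
Ts_W_over_L = 100.0   # T_s * W / L
gamma = 3.0           # sum of the weights omega_k
eta = Ts_W_over_L / (gamma * math.log(2))

def x_of_P(P, N):
    """Packet-allocation variable x(t) induced by power P(t), eq. (21)."""
    return eta * math.log(1.0 + P / N)

# The map is strictly increasing in P, hence one-to-one: larger
# power always yields a larger x(t) for a fixed noise level N(t).
N = 2.0
powers = [0.5, 1.0, 2.0, 4.0]
xs = [x_of_P(P, N) for P in powers]
assert all(a < b for a, b in zip(xs, xs[1:]))
```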

Thus, plugging (19) and (21) into problem P3 yields

\begin{array}{ll}\left(\mathbf{\text{P4}}\right)\phantom{\rule{0.25em}{0ex}}\text{maximize}& \sum _{t=0}^{T}\ln\left(\eta \ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)\right)\phantom{\rule{2em}{0ex}}\end{array}

(22a)

\begin{array}{ll}\text{subject to}& \sum _{t=0}^{T}P\left(t\right)=(T+1){P}_{\text{av}},\phantom{\rule{2em}{0ex}}\end{array}

(22b)

\begin{array}{ll}\text{variables}& P\left(t\right)\ge 0,\forall t\in \phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}0,T].\phantom{\rule{2em}{0ex}}\end{array}

(22c)

**Lemma 3.**
*The optimal solution of problem P4 is the same as that of the proportional fair power allocation (PFPA) problem.*

*Proof*. The proof of Lemma 3 is provided in Appendix 3.

Based on Lemma 3, the optimal solution of problem P4 provides proportionally fair power allocation over time. Furthermore, we can observe that problem P2 can be equivalently decomposed into two subproblems: problem P4 and the PAS problem, which correspond to the power allocation problem along the time and the packet allocation problem among the services, respectively. This equivalence can be explained as follows. On the one hand, problem P2 can be decomposed into *T*+1 PAS problems for a given power allocation along the time. Based on Lemma 1, if the optimal power allocation *P*^{∗}(*t*) is given, the optimal packet allocation {\mu}_{k}^{\ast}\left(t\right) of problem P2 can be obtained by solving the PAS problems. On the other hand, based on (15) and (16), the virtual variable *x*(*t*) builds a bridge between *μ*_{k}(*t*) and *P*(*t*). By variable substitution, problem P2 can be equivalently transformed into problem P3, which is further simplified into problem P4 based on the necessary optimality conditions. These equivalent transformations guarantee that the optimal power allocation *P*^{∗}(*t*) of problem P4 is the same as that of problem P2 at any slot *t*. Based on the above two points, we conclude that problem P2 can be equivalently decomposed into problem P4 and the PAS problem. Thus, to obtain the optimal solution of problem P2, we first solve problem P4, and then, using the resulting power allocation, the packet allocation solution can be obtained via (16) and (21).
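The recovery step described above can be sketched as follows; the constants and the optimal powers are assumed placeholder values, and *μ*_{k}(*t*)=*ω*_{k}*x*(*t*) follows from (16):

```python
import math

# Assumed illustrative parameters.
omega = [0.5, 1.0, 1.5]                 # service weights omega_k
gamma = sum(omega)
eta = 100.0 / (gamma * math.log(2))     # eta = Ts*W/(gamma*L*ln 2), values assumed

def packet_allocation(P_opt, N):
    """Given optimal powers P*(t) and noise N(t), recover mu_k(t)."""
    mu = []
    for P, n in zip(P_opt, N):
        x = eta * math.log(1.0 + P / n)       # eq. (21)
        mu.append([w * x for w in omega])     # eq. (16): mu_k(t) = omega_k * x(t)
    return mu

P_opt = [1.0, 2.0, 1.0]                 # placeholder optimal powers
N = [1.0, 1.0, 1.0]                     # placeholder noise levels
mu = packet_allocation(P_opt, N)

# Constraint (7c) holds by construction: sum_k mu_k(t) = gamma * x(t).
for P, n, row in zip(P_opt, N, mu):
    assert abs(sum(row) - gamma * eta * math.log(1.0 + P / n)) < 1e-9
```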

To solve problem P4 efficiently, the following lemma allows us to further reduce the computational complexity based on the distance symmetry at the base station, which has been mentioned in Section 3.1.

**Lemma 4.** *In the optimal solution vector* **P**^{∗}= [ *P*^{∗}(0),…,*P*^{∗}(*T*)], *there exists a symmetry on the optimal solution, i.e.,* *P*^{∗}(*t*)=*P*^{∗}(*T*−*t*),∀*t*∈ [ 0,*T*].

*Proof*. The proof of Lemma 4 is provided in Appendix 4.

As a consequence of Lemma 4, problem P4 can be simplified into a power allocation problem from slot 0 to slot \frac{T}{2}, which is labeled as P5.
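The symmetry of Lemma 4 means that only the powers of slots 0 through *T*/2 need to be computed; the rest follow by mirroring. A minimal sketch (assuming *T* is even):

```python
def expand_symmetric(P_half, T):
    """Expand P(0..T/2) to the full horizon using P(t) = P(T - t)."""
    full = [0.0] * (T + 1)
    for t, p in enumerate(P_half):
        full[t] = p
        full[T - t] = p
    return full

# Placeholder half-horizon powers for T = 4 (slots 0..T/2).
P_half = [3.0, 2.0, 1.0]
P_full = expand_symmetric(P_half, 4)
assert P_full == [3.0, 2.0, 1.0, 2.0, 3.0]
```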

\begin{array}{ll}\left(\mathbf{\text{P5}}\right)\phantom{\rule{0.25em}{0ex}}\text{maximize}& \sum _{t=0}^{T/2}g\left(P(t)\right)\phantom{\rule{2em}{0ex}}\end{array}

(23a)

\begin{array}{ll}\text{subject to}& \sum _{t=0}^{T/2}P\left(t\right)=\left(\frac{T}{2}+1\right){P}_{\text{av}},\phantom{\rule{2em}{0ex}}\end{array}

(23b)

\begin{array}{ll}\text{variables}& P\left(t\right)\ge 0,\forall t\in \phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}0,T/2],\phantom{\rule{2em}{0ex}}\end{array}

(23c)

where g\left(P(t)\right)=\ln\left(\eta \ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)\right) and the total number of variables decreases by nearly half, from *T*+1 to \frac{T}{2}+1.

Problem P5 is a convex optimization problem, which can be solved by CVX [29]. In addition, since problem P5 has a structure similar to the PFPA problem, the algorithm proposed in [10] can be used to find the *ε*-optimal solution of problem P5. However, that algorithm introduces the Lambert W function, resulting in high computing time. In this paper, the bisection search method is employed to reduce the computing time of searching for the optimal solution.

Specifically, using standard optimization techniques, the corresponding Lagrangian function is obtained as

\begin{array}{ll}L\left(\{P\left(t\right)\},\lambda \right)&=\sum _{t=0}^{T/2}g\left(P(t)\right)-\lambda \left(\sum _{t=0}^{T/2}P\left(t\right)-\left(\frac{T}{2}+1\right){P}_{\text{av}}\right)\phantom{\rule{2em}{0ex}}\\ &=\sum _{t=0}^{T/2}\left[\ln\left(\eta \ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)\right)-\lambda P\left(t\right)+\lambda {P}_{\text{av}}\right].\phantom{\rule{2em}{0ex}}\end{array}

(24)

Based on the KKT conditions, we have

\frac{\partial L\left(\{P\left(t\right)\},\lambda \right)}{\partial P\left(t\right)}=\frac{1}{\eta \ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)}\cdot \frac{\eta}{P\left(t\right)+N\left(t\right)}-\lambda =0,

(25)

which can be rewritten by

\ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)\left(P\left(t\right)+N\left(t\right)\right)=\frac{1}{\lambda}.

(26)

Let f\left(P(t)\right)=\ln\left(1+\frac{P\left(t\right)}{N\left(t\right)}\right)\left(P\left(t\right)+N\left(t\right)\right), which is a monotonically increasing function of *P*(*t*) at any slot *t*. Let \beta =\frac{1}{\lambda}; then, (26) becomes *f*(*P*(*t*))=*β*. Due to the monotonicity of *f*(*P*(*t*)), the bisection search method can be used to find the *P*(*t*) satisfying *f*(*P*(*t*))=*β* for a given *β* at each slot *t*. In addition, for any slot *t*, *P*(*t*)=*f*^{−1}(*β*) is also a monotonically increasing function of *β*. Thus, to satisfy the average power constraint (23b), the bisection search method can also be used to find the optimal *β*^{∗}.
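The inner bisection search on *f*(*P*(*t*))=*β* can be sketched as follows; the noise level, target power, and search interval are assumed illustrative values:

```python
import math

def f(P, N):
    """f(P(t)) = ln(1 + P/N) * (P + N), monotonically increasing in P."""
    return math.log(1.0 + P / N) * (P + N)

def solve_inner(beta, N, P_max, eps=1e-10):
    """Bisection search for the P in [0, P_max] with f(P) = beta."""
    lo, hi = 0.0, P_max
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if f(mid, N) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with assumed values N(t) = 1.5 and P(t) = 2.0:
# computing beta = f(P) and inverting should recover P.
N, P_true = 1.5, 2.0
beta = f(P_true, N)
P_rec = solve_inner(beta, N, P_max=10.0)
assert abs(P_rec - P_true) < 1e-6
```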

The specific steps of the bisection search method are provided in Algorithm 1. The search regions of *P*(*t*) and *β* should be initialized based on their maximums and minimums. First, it is easy to verify that the maximum and minimum of *P*(*t*) at each slot *t* can be set as {P}_{\text{max}}=\left(\frac{T}{2}+1\right){P}_{\text{av}} and *P*_{min}=0, respectively. Then, the maximum and minimum of *β* can be obtained from the following lemma:

**Lemma 5.** *Based on the equality* *f*(*P*(*t*))=*β*, *the maximum of* *β* *can be obtained when* *P*(*t*)=*P*_{max}*and* *t*=0 *in function* *f*(*P*(*t*)), i.e., *β*_{max}=*f*(*P*_{max})|_{t=0}*and the minimum of* *β* *can be set as* *β*_{min}=0.

*Proof*. The proof of Lemma 5 is provided in Appendix 5.

Algorithm 1 consists of two loops to find the optimal power allocation. The outer loop performs the bisection search over *β*, and the inner loop solves *f*(*P*(*t*))=*β* for a given *β*. In addition, the convergence of Algorithm 1 is ensured by the bisection search, where *ε*_{Δ*P*} and *ε*_{Δ*β*} are small constants that control the convergence accuracy.
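A compact sketch of this two-loop structure is given below; the tolerances, the per-slot noise values *N*(*t*), and *P*_{av} are assumed placeholders, and the *β* search region follows Lemma 5:

```python
import math

def bisection_power_allocation(N, P_av, eps_P=1e-9, eps_beta=1e-9):
    """Two-loop bisection: the outer loop searches beta so that the total
    power meets (23b); the inner loop solves f(P(t)) = beta per slot."""
    half = len(N)                       # slots 0..T/2, so half = T/2 + 1
    budget = half * P_av                # (T/2 + 1) * P_av, eq. (23b)
    P_max = budget                      # per-slot power upper bound

    def f(P, n):                        # f(P) = ln(1 + P/n) * (P + n)
        return math.log(1.0 + P / n) * (P + n)

    def inner(beta, n):                 # solve f(P) = beta by bisection
        lo, hi = 0.0, P_max
        while hi - lo > eps_P:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid, n) < beta else (lo, mid)
        return 0.5 * (lo + hi)

    # Search region for beta from Lemma 5: beta_min = 0,
    # beta_max = f(P_max) evaluated at slot t = 0.
    beta_lo, beta_hi = 0.0, f(P_max, N[0])
    while beta_hi - beta_lo > eps_beta:
        beta = 0.5 * (beta_lo + beta_hi)
        total = sum(inner(beta, n) for n in N)
        if total < budget:
            beta_lo = beta              # P(t) increases with beta
        else:
            beta_hi = beta
    beta = 0.5 * (beta_lo + beta_hi)
    return [inner(beta, n) for n in N]

# Placeholder data: three half-horizon slots (T = 4) and P_av = 2.
P = bisection_power_allocation(N=[1.0, 1.5, 2.0], P_av=2.0)
assert abs(sum(P) - 3 * 2.0) < 1e-4    # average power constraint met
assert all(p >= 0.0 for p in P)
```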

### Algorithm 1 **Bisection search method**