It is clear that the problem in (8) is a non-convex MINLP, for which finding the globally optimal solution is NP-hard in general. In this section, we design a polynomial-time iterative algorithm based on the big-M reformulation, the penalized SCP, and the generalized Dinkelbach’s method to obtain a feasible suboptimal solution of (8).
Big-M reformulation
Due to the product terms \({\rho_{k,n}}{p_{k,n}}\) in (8), it is not easy to design an efficient algorithm for solving (8) directly. Fortunately, we can use the big-M reformulation to decouple the product terms [25]. Specifically, we first introduce auxiliary variables \({\tilde p_{k,n}} = {\rho _{k,n}}{p_{k,n}}, \forall k \in \mathcal {K}, n \in \mathcal {N}\). Then, all the product terms \({\rho_{k,n}}{p_{k,n}}\) in (8) are replaced with \({\tilde p_{k,n}}\), which yields
$$\begin{array}{*{20}l} & {{\tilde \eta }_{k}} = \frac{{\tilde R}_{k}}{{\tilde P}_{k}} = \frac{ \sum\limits_{n = 1}^{N} {{\tilde R}_{k,n}} + \frac{f_{k}}{C_{k}} }{ {\tilde P}_{k} }, \end{array} $$
(9a)
$$\begin{array}{*{20}l} & {\tilde{\mathcal{C}}}_{1}: {\tilde R}_{k} \geq R_{k}^{\min },\forall k \in \mathcal{K}, \end{array} $$
(9b)
$$\begin{array}{*{20}l} & {\tilde{\mathcal{C}}}_{2}: {\tilde P}_{k} \leq P_{k}^{\max },\forall k \in \mathcal{K}, \end{array} $$
(9c)
where
$$\begin{array}{*{20}l} {{\tilde R}_{k,n}} & = {W_{n}}{\log_{2}}\left({1 + \frac{{{g_{k,n}}{{\tilde p}_{k,n}}}}{{\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}{{\tilde p}_{i,n}}} + {n_{0}}{W_{n}}}}} \right), \end{array} $$
(10a)
$$\begin{array}{*{20}l} {{\tilde P}_{k}} & = {\sum\limits_{n = 1}^{N} {\xi_{k} {\tilde p}_{k,n}}} + {\gamma_{k}}f_{k}^{3} + {p_{c,k}}. \end{array} $$
(10b)
Afterwards, we add the following constraints into (8):
$$\begin{array}{*{20}l} & {\mathcal{C}_{7}}: {{\tilde p}_{k,n}} \leq P_{k}^{\max }{\rho_{k,n}},\forall k \in \mathcal{K},n \in \mathcal{N}, \end{array} $$
(11a)
$$\begin{array}{*{20}l} & {\mathcal{C}_{8}}:{{\tilde p}_{k,n}} \leq {p_{k,n}},\forall k \in \mathcal{K},n \in \mathcal{N}, \end{array} $$
(11b)
$$\begin{array}{*{20}l} & {\mathcal{C}_{9}}: {{\tilde p}_{k,n}} \geq {p_{k,n}} - \left({1 - {\rho_{k,n}}} \right)P_{k}^{\max },\forall k \in \mathcal{K},n \in \mathcal{N}, \end{array} $$
(11c)
$$\begin{array}{*{20}l} & {\mathcal{C}_{10}}: {{\tilde p}_{k,n}} \geq 0,\forall k \in \mathcal{K},n \in \mathcal{N}. \end{array} $$
(11d)
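To see why (11a)–(11d) enforce \({\tilde p_{k,n}} = {\rho _{k,n}}{p_{k,n}}\) whenever \({\rho _{k,n}}\) is binary, note that \({\rho _{k,n}} = 0\) forces \({\tilde p_{k,n}} = 0\) via (11a) and (11d), while \({\rho _{k,n}} = 1\) forces \({\tilde p_{k,n}} = {p_{k,n}}\) via (11b) and (11c). The following minimal numerical sketch (with an assumed illustrative power budget) verifies this:

```python
import numpy as np

# Minimal check (illustrative values assumed): for binary rho, the big-M
# constraints (11a)-(11d) pin p_tilde to the product rho * p exactly.
P_max = 1.0                                        # power budget, playing the role of "big M"
for rho in (0.0, 1.0):                             # binary subcarrier-assignment indicator
    for p in np.linspace(0.0, P_max, 5):           # any feasible transmit power
        lower = max(0.0, p - (1.0 - rho) * P_max)  # (11c) and (11d)
        upper = min(P_max * rho, p)                # (11a) and (11b)
        assert np.isclose(lower, rho * p) and np.isclose(upper, rho * p)
print("(11a)-(11d) reproduce rho * p for binary rho")
```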
Moreover, we can rewrite the binary constraint \(\mathcal {C}_{6}\) in its equivalent continuous form as follows:
$$\begin{array}{*{20}l} & {\mathcal{C}_{6a}}: 0 \leq {\rho_{k,n}} \leq 1,\forall k \in \mathcal{K},n \in \mathcal{N}, \end{array} $$
(12a)
$$\begin{array}{*{20}l} & {\mathcal{C}_{6b}}: \sum\limits_{k = 1}^{K} {\sum\limits_{n = 1}^{N} {\left({{\rho_{k,n}} - \rho_{k,n}^{2}} \right)}} \leq 0. \end{array} $$
(12b)
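This equivalence holds because \({\rho _{k,n}} - \rho _{k,n}^{2} \geq 0\) for all \({\rho _{k,n}} \in [0,1]\), with equality only at the endpoints 0 and 1, so the sum in (12b) can be non-positive only if every term vanishes. A quick numeric check of this fact (a sketch with an assumed grid of values):

```python
import numpy as np

# On [0, 1], rho - rho^2 is non-negative and vanishes only at the binary
# endpoints, so the sum constraint (12b) forces every rho_{k,n} into {0, 1}.
rho = np.linspace(0.0, 1.0, 101)
gap = rho - rho**2
assert np.all(gap >= 0.0)                          # non-negative everywhere
assert np.isclose(gap[0], 0.0) and np.isclose(gap[-1], 0.0)
assert np.all(gap[1:-1] > 0.0)                     # strictly positive in (0, 1)
```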
Therefore, we can reformulate (8) as the following equivalent problem:
$$ \begin{aligned} \mathop {\max }\limits_{\Theta} \quad & \mathop { \min }\limits_{k \in \mathcal{K}} \left\{ {{{\tilde \eta }_{k}}} \right\} \hfill \\ \mathrm{s.t.} \quad & {\tilde{\mathcal{C}}}_{1},{\tilde{\mathcal{C}}}_{2},{\mathcal{C}_{3}},{\mathcal{C}_{4}}, {\mathcal{C}_{5}}, {\mathcal{C}_{6a}}, {\mathcal{C}_{6b}}, {\mathcal{C}_{7}} \sim {\mathcal{C}_{10}}, \end{aligned} $$
(13)
where \(\Theta = \left \{ {{\rho _{k,n}},{p_{k,n}},{{\tilde p}_{k,n}},{f_{k}},\forall k \in \mathcal {K},n \in \mathcal {N}} \right \}\).
The big-M reformulation linearizes the product terms \({\rho_{k,n}}{p_{k,n}}\), separating \({\rho_{k,n}}\) from \({p_{k,n}}\), which facilitates the design of an efficient CE optimization algorithm. Moreover, if the constraints \(\mathcal {C}_{6a}\), \(\mathcal {C}_{6b}\), and \(\mathcal {C}_{7} \sim \mathcal {C}_{10}\) are satisfied, then \({\tilde {\mathcal {C}}}_{1}\) and \({\tilde {\mathcal {C}}}_{2}\) are equivalent to \(\mathcal {C}_{1}\) and \(\mathcal {C}_{2}\), respectively. At this point, the original MINLP problem in (8) has been equivalently transformed into the more tractable form in (13). However, the transformed problem in (13) still belongs to non-convex max-min fractional programming (MMFP) and cannot be addressed directly. To this end, we combine fractional programming theory with sequential convex programming (SCP) [26, 27].
Sequential convex programming
In particular, the core idea of SCP is to convert a non-convex optimization problem into a series of convex optimization problems and solve them iteratively, where the non-convex terms in each iteration will be substituted by appropriate inner convex terms [28].
We note that \({{\tilde R}_{k,n}}\) in (13) can be expressed as the difference of two concave functions \({{\tilde R}_{k,n,1}}\) and \({{\tilde R}_{k,n,2}}\), i.e.,
$$ \begin{aligned} {{\tilde R}_{k,n}} & = {{\tilde R}_{k,n,1}} - {{\tilde R}_{k,n,2}} \\ & = {W_{n}}{\log_{2}}\left({\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}{{\tilde p}_{i,n}}} + {g_{k,n}}{{\tilde p}_{k,n}} + {n_{0}}{W_{n}}} \right) \\ & \quad- {W_{n}}{\log_{2}}\left({\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}{{\tilde p}_{i,n}}} + {n_{0}}{W_{n}}} \right). \end{aligned} $$
(14)
At the tth iteration (t≥1) of SCP, we can obtain an upper bound \(\tilde R_{k,n,2}^{\left (t \right)}\) of \({{\tilde R}_{k,n,2}} \) by using its first-order Taylor expansion, which is expressed as
$$ \begin{aligned} {\tilde R_{k,n,2}} \leq \tilde R_{k,n,2}^{\left(t \right)} & = {W_{n}}{\log_{2}}\left({\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}\tilde p_{i,n}^{\left({t - 1} \right)}} + {n_{0}}{W_{n}}} \right) \\ & \quad+ \frac{{{W_{n}}}}{{\ln 2}}\frac{{\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}\left({{{\tilde p}_{i,n}} - \tilde p_{i,n}^{\left({t - 1} \right)}} \right)} }}{{\sum\limits_{i \in {S_{k,n}}} {{g_{i,n}}\tilde p_{i,n}^{\left({t - 1} \right)}} + {n_{0}}{W_{n}}}}, \end{aligned} $$
(15)
where \({\tilde p_{i,n}^{\left ({t - 1} \right)}}\) is the value of \({{{\tilde p}_{i,n}}}\) at the (t−1)th iteration of SCP. Similarly, we can obtain a lower bound \({\tilde \rho }_{k,n}^{\left ({t} \right)}\) of \(\rho _{k,n}^{2}\) by using its first-order Taylor expansion, which is expressed as
$$ \rho_{k,n}^{2} \geq \tilde \rho_{k,n}^{\left(t \right)} = {\left({\rho_{k,n}^{\left({t - 1} \right)}} \right)^{2}} + 2\rho_{k,n}^{\left({t - 1} \right)}\left({{\rho_{k,n}} - \rho_{k,n}^{\left({t - 1} \right)}} \right), $$
(16)
where \({\rho _{k,n}^{\left ({t - 1} \right)}}\) is the value of \({\rho_{k,n}}\) at the (t−1)th iteration of SCP. Furthermore, by replacing \({{\tilde R}_{k,n,2}}\) with its upper bound \(\tilde R_{k,n,2}^{\left(t \right)}\), we obtain a lower bound \(\tilde \eta _{k}^{\left (t \right)}\) of \({\tilde \eta }_{k}\) at the tth iteration of SCP; since the function \(\min \{\cdot \}\) is increasing, this also lower-bounds the max-min objective. That is,
$$ {{\tilde \eta }_{k}} \geq \tilde \eta_{k}^{\left(t \right)} = \frac{{\tilde R_{k}^{\left(t \right)}}}{{{{\tilde P}_{k}}}} = \frac{{\sum\limits_{n = 1}^{N} {\left({{{\tilde R}_{k,n,1}} - \tilde R_{k,n,2}^{\left(t \right)}} \right)} + \frac{{{f_{k}}}}{{{C_{k}}}}}}{{\sum\limits_{n = 1}^{N} {{\xi_{k}}{{\tilde p}_{k,n}}} + {\gamma_{k}}f_{k}^{3} + {p_{c,k}}}}. $$
(17)
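For concreteness, the two Taylor surrogates (15) and (16) can be evaluated as follows; this is a minimal sketch in which the channel gains, bandwidth, and expansion points are assumed placeholder values over the interfering set \(S_{k,n}\):

```python
import numpy as np

# Minimal sketch of the SCP surrogates; g, p, p_prev are arrays over the
# interference set S_{k,n}, and all numeric inputs are assumed placeholders.
def R2_upper(W, g, p, p_prev, n0):
    """First-order Taylor upper bound (15) of the concave term R_{k,n,2}."""
    I_prev = g @ p_prev + n0 * W                 # interference at the expansion point
    return W * np.log2(I_prev) + (W / np.log(2)) * (g @ (p - p_prev)) / I_prev

def rho_sq_lower(rho, rho_prev):
    """First-order Taylor lower bound (16) of the convex term rho^2."""
    return rho_prev**2 + 2.0 * rho_prev * (rho - rho_prev)
```

For instance, `rho_sq_lower(0.5, 0.4)` returns 0.24, which indeed lower-bounds \(0.5^{2} = 0.25\), consistent with (16).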
As a result, a convex approximation of (13) at the tth iteration of SCP can be cast as
$$ \begin{aligned} \mathop {\max }\limits_{\Theta} \quad & \mathop { \min }\limits_{k \in \mathcal{K}} \left\{ {\tilde \eta_{k}^{\left(t \right)}} \right\} \\ \mathrm{s.t.} \quad & {\bar{\mathcal{C}}}_{1}: \sum\limits_{n = 1}^{N} {\left({{{\tilde R}_{k,n,1}} - \tilde R_{k,n,2}^{\left(t \right)}} \right)} + \frac{{{f_{k}}}}{{{C_{k}}}} \geq R_{k}^{\min },\forall k \in \mathcal{K}, \\ & {\tilde{\mathcal{C}}_{2}},{\mathcal{C}_{3}},{\mathcal{C}_{4}},{\mathcal{C}_{5}},{\mathcal{C}_{6a}}, \\ & {\tilde{\mathcal{C}}_{6b}}: \sum\limits_{k = 1}^{K} {\sum\limits_{n = 1}^{N} {\left({{\rho_{k,n}} - \tilde \rho_{k,n}^{\left(t \right)}} \right)}} \leq 0, \\ & {\mathcal{C}_{7}} \sim {\mathcal{C}_{10}}. \end{aligned} $$
(18)
Generalized Dinkelbach’s method
It can be observed that all the constraints in (18) are convex, i.e., the feasible region of (18) is a convex set. In addition, for each \({\tilde {\eta }}_{k}\), \(\forall k \in \mathcal {K}\), its numerator is non-negative and concave, meanwhile its denominator is positive and convex. Therefore, the problem in (18) belongs to the convex MMFP, which can be efficiently addressed by the generalized Dinkelbach’s method [26].
Let λ∗ and Θ∗ denote the optimal objective value and the optimal solution of (18), respectively; then
$$ {\lambda^{*}} = \mathop {\max }\limits_{\Theta} \;\; \mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\frac{{\tilde R_{k}^{\left(t \right)}\left(\Theta \right)}}{{{{\tilde P}_{k}}\left(\Theta \right)}}} \right\} = \mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\frac{{\tilde R_{k}^{\left(t \right)}\left({{\Theta^{*}}} \right)}}{{{{\tilde P}_{k}}\left({{\Theta^{*}}} \right)}}} \right\}. $$
(19)
Moreover, Θ∗ is the optimal solution of (18) if and only if the following condition is satisfied [26]:
$$ \begin{aligned} & \mathop {\max }\limits_{\Theta} \;\; \mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\tilde R_{k}^{\left(t \right)}\left(\Theta \right) - {\lambda^{*}}{{\tilde P}_{k}}\left(\Theta \right)} \right\} \\ & = \mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\tilde R_{k}^{\left(t \right)}\left({{\Theta^{*}}} \right) - {\lambda^{*}}{{\tilde P}_{k}}\left({{\Theta^{*}}} \right)} \right\} \\ & = 0. \end{aligned} $$
(20)
Based on (20), we can transform (18) into its associated subtractive form as
$$ \begin{aligned} \mathop {\max }\limits_{\Theta} \quad & \mathop { \min }\limits_{k \in \mathcal{K}} \left\{ {\tilde R_{k}^{\left(t \right)} - \lambda^{\left(q - 1 \right)} {{\tilde P}_{k}}} \right\} \\ \mathrm{s.t.} \quad & {\bar{\mathcal{C}}}_{1}, {\tilde{\mathcal{C}}_{2}},{\mathcal{C}_{3}},{\mathcal{C}_{4}},{\mathcal{C}_{5}},{\mathcal{C}_{6a}}, {\tilde{\mathcal{C}}_{6b}}, {\mathcal{C}_{7}} \sim {\mathcal{C}_{10}}, \end{aligned} $$
(21)
where (21) is the problem to be solved at the qth iteration (q≥1) of the generalized Dinkelbach’s method, and \(\lambda ^{\left (q - 1 \right)}\) is the corresponding iterative parameter at the (q−1)th iteration. At the qth iteration, we compute the optimal solution \(\Theta ^{\left (q \right)}\) of (21) for fixed \(\lambda ^{\left ({q - 1} \right)}\), and then update \(\lambda ^{\left (q \right)}\) by using the obtained \(\Theta ^{\left (q \right)}\), i.e.,
$$ {\lambda^{\left({q} \right)}} = \mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\frac{{\tilde R_{k}^{\left(t \right)}\left({{\Theta^{\left(q \right)}}} \right)}}{{{{\tilde P}_{k}}\left({{\Theta^{\left(q \right)}}} \right)}}} \right\}. $$
(22)
By solving (21) iteratively with a small tolerance \(\epsilon > 0\), an \(\epsilon\)-optimal solution \(\Theta ^{*} = \Theta ^{\left (q \right)}\) of (18) is obtained once the following condition holds [26]:
$$ \left| {\mathop {\min }\limits_{k \in \mathcal{K}} \left\{ {\tilde R_{k}^{\left(t \right)}\left({{\Theta^{\left(q \right)}}} \right) - {\lambda^{\left(q - 1 \right)}}{{\tilde P}_{k}}\left({{\Theta^{\left(q \right)}}} \right)} \right\}} \right| \leq \epsilon. $$
(23)
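The inner loop (21)–(23) then takes the following generic shape. This is a hedged sketch in which `solve_subtractive`, `R`, and `P` are hypothetical callables that solve the convex problem (21) at a fixed λ and evaluate \(\tilde R_{k}^{(t)}\) and \({\tilde P}_{k}\) for user k, respectively:

```python
# Hedged sketch of the generalized Dinkelbach loop (21)-(23); the callables
# solve_subtractive, R, and P are hypothetical placeholders for solving (21)
# at fixed lambda and evaluating the surrogate rate and power of user k.
def dinkelbach(solve_subtractive, R, P, K, lam0=0.0, eps=1e-4, q_max=50):
    lam, theta = lam0, None
    for _ in range(q_max):
        theta = solve_subtractive(lam)                     # Theta^(q) from (21)
        vals = [(R(k, theta), P(k, theta)) for k in range(K)]
        if abs(min(r - lam * p for r, p in vals)) <= eps:  # stopping rule (23)
            break
        lam = min(r / p for r, p in vals)                  # update (22)
    return theta, lam
```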
Since the objective function in (21) is not smooth, we introduce a new auxiliary variable given as
$$ u = \mathop { \min }\limits_{k \in \mathcal{K}} \left\{ {\tilde R_{k}^{\left(t \right)} - \lambda^{\left(q - 1 \right)} {{\tilde P}_{k}}} \right\}. $$
(24)
Accordingly, the equivalent form of (21) can be expressed as:
$$ \begin{aligned} \mathop {\max }\limits_{\Theta, u} \quad & u \\ \mathrm{s.t.} \quad & {\bar{\mathcal{C}}}_{1}, {\tilde{\mathcal{C}}_{2}},{\mathcal{C}_{3}},{\mathcal{C}_{4}},{\mathcal{C}_{5}},{\mathcal{C}_{6a}}, {\tilde{\mathcal{C}}_{6b}}, {\mathcal{C}_{7}} \sim {\mathcal{C}_{10}}, \\ & {\mathcal{C}_{11}}: {\tilde R_{k}^{\left(t \right)} - \lambda^{\left(q - 1 \right)} {{\tilde P}_{k}}} \geq u, \forall k \in \mathcal{K}. \end{aligned} $$
(25)
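The epigraph trick in (24)–(25) maps directly onto disciplined convex programming. Below is a minimal CVXPY sketch with toy stand-ins for the concave surrogate rate and the affine power terms; all numeric values are assumed for illustration, not taken from the system model:

```python
import cvxpy as cp
import numpy as np

# Toy CVXPY sketch of the epigraph form (25); R_expr and P_expr are assumed
# stand-ins for the concave surrogate rate and the power of each user.
K, lam = 3, 0.5                                  # users and fixed Dinkelbach parameter
p = cp.Variable(K, nonneg=True)                  # transmit powers
u = cp.Variable()                                # epigraph variable from (24)
R_expr = cp.log(1.0 + p) / np.log(2.0)           # concave toy rate per user
P_expr = 0.2 * p + 0.1                           # affine toy power per user
cons = [p <= 1.0, R_expr - lam * P_expr >= u]    # power budget and C_11
prob = cp.Problem(cp.Maximize(u), cons)
prob.solve()
print(u.value, p.value)
```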
Based on the above analysis, a two-layer iterative algorithm, consisting of an outer iterative procedure based on SCP and an inner iterative procedure based on the generalized Dinkelbach’s method, can be employed to obtain a feasible suboptimal solution of (13). Nevertheless, this algorithm requires a feasible point of (13) as its initial point. If the algorithm starts from a feasible point, then all the generated points remain feasible; otherwise, it may fail at the first iteration due to infeasibility [28]. However, finding a feasible point in the non-convex feasible region of (13) is very hard. To overcome this, we integrate the penalized SCP [29] into the framework of the above algorithm, so that the iterations can start from an arbitrary point.
Penalized sequential convex programming
To ensure that a sequence of feasible solutions is generated, the abovementioned SCP needs a feasible initial point. In general, an infeasible initial point misleads the intermediate solutions produced by the iterative process, which often results in an incorrect local solution. However, it is usually NP-hard to find a feasible initial point for non-convex problems, e.g., the CE optimization problem in (13). To remove this initialization requirement, we adopt the penalized SCP. Specifically, auxiliary variables are introduced to relax all non-convex inequality constraints, and penalty terms are added to the objective function, so that the initial point can be generated randomly and feasible solutions are gradually obtained as the iterations proceed.
It can be found that only the constraints \({\tilde {\mathcal {C}}}_{1}\) and \({\mathcal {C}}_{6b}\) are non-convex in (13). To this end, we relax the constraint \({\tilde {\mathcal {C}}}_{1}\) via slack variables s1,k\((s_{1, k} \geq 0, \forall k \in \mathcal {K})\) and the constraint \({\mathcal {C}}_{6b}\) via a slack variable s2 (s2≥0). Then, the constraints \({\bar {\mathcal {C}}}_{1}\) and \({\tilde {\mathcal {C}}}_{6b}\) in (25) are, respectively, reformulated as
$$\begin{array}{*{20}l} & {\breve{\mathcal{C}}}_{1}: \sum\limits_{n = 1}^{N} {\left({{{\tilde R}_{k,n,1}} - \tilde R_{k,n,2}^{\left(t \right)}} \right)} + \frac{{{f_{k}}}}{{{C_{k}}}} + s_{1, k} \geq R_{k}^{\min },\forall k \in \mathcal{K}, \end{array} $$
(26a)
$$\begin{array}{*{20}l} & {\breve{\mathcal{C}}_{6b}}: \sum\limits_{k = 1}^{K} {\sum\limits_{n = 1}^{N} {\left({{\rho_{k,n}} - \tilde \rho_{k,n}^{\left(t \right)}} \right)}} \leq s_{2}. \end{array} $$
(26b)
In order to minimize the violations of the non-convex constraints, we subtract the weighted sum of the slack variables from the objective function of (25). Therefore, (25) can be reformulated as
$$ \begin{aligned} \mathop {\max }\limits_{\left\{\Theta, u, s_{1, k}, s_{2}\right\}} \quad & u - \tau^{\left(t - 1\right)} \left({\sum\limits_{k = 1}^{K} {{s_{1,k}}} + {s_{2}}} \right) \\ \mathrm{s.t.} \quad \quad & {\breve{\mathcal{C}}}_{1}, {\tilde{\mathcal{C}}_{2}},{\mathcal{C}_{3}},{\mathcal{C}_{4}},{\mathcal{C}_{5}},{\mathcal{C}_{6a}}, {\breve{\mathcal{C}}_{6b}}, {\mathcal{C}_{7}} \sim {\mathcal{C}_{11}}, \\ & {\mathcal{C}_{12}}: s_{1, k} \geq 0, \forall k \in \mathcal{K}, \\ & {\mathcal{C}_{13}}: s_{2} \geq 0, \end{aligned} $$
(27)
where (27) is the problem to be solved at the tth iteration of the penalized SCP, and \(\tau ^{\left (t - 1 \right)}\) is the corresponding penalty factor at the (t−1)th iteration, which is updated for the next iteration via
$$ \tau^{\left(t \right)} = \min\left\{ {\mu\tau^{\left(t - 1\right)}, \tau_{\max}} \right\}. $$
(28)
Here, \(\tau _{\max }\) denotes the upper bound of \(\tau ^{\left (t \right)}\), which avoids numerical problems when \(\tau ^{\left (t \right)}\) becomes too large and preserves convergence even if a feasible region is not found, and μ is the predefined increasing factor [29]. Moreover, existing optimization tools, such as YALMIP [30], CVX [31], and CVXQUAD [32], can be used to solve the convex problem in (27).
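As a concrete illustration of the penalized structure in (27), the following CVXPY sketch adds the slack variables and the penalty term to the toy epigraph problem above; the rate and power expressions remain assumed stand-ins rather than the full system model:

```python
import cvxpy as cp
import numpy as np

# Toy CVXPY sketch of the penalized problem (27): slacks s1 relax the rate
# constraint as in (26a), and the penalty tau * sum(s1) drives them to zero.
K, lam, tau, R_min = 3, 0.5, 10.0, 0.8
p = cp.Variable(K, nonneg=True)
u = cp.Variable()
s1 = cp.Variable(K, nonneg=True)                 # slack variables (C_12)
R_expr = cp.log(1.0 + p) / np.log(2.0)           # concave toy rate
P_expr = 0.2 * p + 0.1                           # affine toy power
cons = [p <= 1.0,
        R_expr + s1 >= R_min,                    # relaxed rate constraint (26a)
        R_expr - lam * P_expr >= u]              # epigraph constraint C_11
prob = cp.Problem(cp.Maximize(u - tau * cp.sum(s1)), cons)
prob.solve()
print(u.value, s1.value)
```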
In summary, we propose a CE optimization algorithm based on the big-M reformulation, the penalized SCP, and the generalized Dinkelbach’s method, which is presented in Algorithm 1.
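For readers who wish to reproduce the control flow, the two-layer structure of Algorithm 1 can be sketched as follows. Here, `solve_27` is a hypothetical callable that solves the convex problem (27) for fixed (λ, τ), e.g., via a CVXPY model, and returns the solution together with the min-ratio objective, the Dinkelbach gap in (23), and the sum of the slack variables:

```python
# Hedged sketch of Algorithm 1's two-layer structure; solve_27 is a
# hypothetical placeholder (e.g., a CVXPY model of (27)) returning
# (theta, min_ratio, gap, slack_sum) for fixed (lam, tau).
def algorithm1(solve_27, theta0, tau0=1.0, mu=2.0, tau_max=1e3,
               eps=1e-4, t_max=30, q_max=50):
    theta, tau, prev_obj = theta0, tau0, None
    for _ in range(t_max):                       # outer loop: penalized SCP
        lam, obj, slack = 0.0, None, None
        for _ in range(q_max):                   # inner loop: Dinkelbach
            theta, obj, gap, slack = solve_27(theta, lam, tau)
            if abs(gap) <= eps:                  # inner stopping rule (23)
                break
            lam = obj                            # lambda update as in (22)
        # Remark 1: stop once the objective stalls and the slacks vanish
        if prev_obj is not None and abs(obj - prev_obj) <= eps and slack <= eps:
            break
        prev_obj = obj
        tau = min(mu * tau, tau_max)             # penalty update (28)
    return theta
```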
Theorem 1
Algorithm 1 converges to a locally optimal solution of (13).
Proof
See Appendix A. □
Remark 1
The stopping criterion of Algorithm 1 consists of two parts: the objective value of (13) is nearly unchanged, and the sum of the slack variables is sufficiently small; the latter indicates that the solution obtained by Algorithm 1 is a feasible solution of (13).
Computational complexity analysis
In Algorithm 1, the computational complexity is mainly determined by solving problem (27) L1L2 times, where L1 and L2 denote the numbers of iterations required by the inner and outer loops, respectively. In addition, (27) is solved by standard optimization tools based on the interior-point method, whose complexity is \(\mathcal {O}\left ((4 K N + N + 2)^{3.5}\log (1/\varepsilon)\right)\), where \(\mathcal {O}\left (\cdot \right)\) denotes the big-O notation and ε represents the solution accuracy [33]. Thus, the proposed Algorithm 1 has an acceptable polynomial-time computational complexity, i.e., \(\mathcal {O}\left (L_{1} L_{2} (4 K N + N + 2)^{3.5}\log (1/\varepsilon)\right)\).
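As a rough sanity check of this bound, one can evaluate it numerically for assumed problem sizes (K = 10 users and N = 16 subchannels here, chosen purely for illustration):

```python
import math

# Back-of-envelope evaluation of the per-solve interior-point bound
# O((4KN + N + 2)^{3.5} log(1/eps)); K, N, eps are assumed for illustration.
K, N, eps = 10, 16, 1e-4
ops = (4 * K * N + N + 2) ** 3.5 * math.log(1.0 / eps)
print(f"approx {ops:.2e} basic operations per convex solve")
```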