Joint optimization of computing ratio and access points’ density for mixed mobile edge/cloud computing
EURASIP Journal on Wireless Communications and Networking volume 2021, Article number: 16 (2021)
Abstract
Cooperation between mobile edge computing (MEC) and mobile cloud computing (MCC) in offloading computing can improve the quality of service (QoS) of user equipments (UEs) with computation-intensive tasks. In this paper, in order to minimize the expected charge, we focus on the problem of how to offload a computation-intensive task from a resource-scarce UE to access points (APs) and the cloud, and on the density allocation of APs at the mobile edge. We consider three offloading computing modes and derive the coverage probability and corresponding ergodic rate of each mode. The resulting optimization problem is mixed-integer and non-convex in both the objective function and the constraints. We propose a low-complexity suboptimal algorithm called Iteration of Convex Optimization and Nonlinear Programming (ICONP) to solve it. Numerical results verify the better performance of the proposed algorithm: the optimal computing ratios and AP density allocation contribute to charge saving.
Introduction
With the rapid development of smart mobile user equipments (UEs), many applications with advanced features have emerged, such as augmented reality, facial recognition, and online gaming. UEs running computation-intensive applications may demand powerful computing capacity and large amounts of energy [1]. These demands conflict with the limited resources of the UEs, and this conflict has become a bottleneck for improving user experience. Mobile cloud computing (MCC) [2,3,4] has been proposed as a promising way to address the challenge by offloading computing tasks to the cloud, which has abundant computing resources and energy. However, for delay-sensitive applications, the delay of cloud computing is non-negligible because of the long distance between the terminal device and the cloud [5]. Meanwhile, the burden on the fronthaul is heavy, which may lead to severe congestion in data transmission and computing.
To solve the above problems, mobile edge computing (MEC) [2], or fog computing [6], has been proposed as a supplement to MCC [7]; it enables applications to run directly at the edge of the network. It extends the traditional cloud computing paradigm to the network edge [5] by placing a substantial amount of storage, communication, control, configuration, measurement, and management at the edge servers [8, 9]. With the help of MEC, low latency, location awareness, and high quality of service (QoS) for streaming media and real-time applications at resource-scarce UEs can be realized. To incorporate MEC in edge devices, some traditional access points (APs) are evolved into edge-computing-based access points by equipping them with certain caching and computing capabilities [10]; these are often called fog-computing-based access points (FAPs).
Some outstanding works have been dedicated to computation offloading. [11] introduced definitions of mobile edge computing as well as MEC platforms and architecture design. [12] and [13] discussed security threats of mobile edge computing, such as hacking. [14] illustrated the application of mobile edge computing in combination with the Internet of Things. In [15], the UEs, the APs, and the cloud form a three-layer structure and process a task collaboratively by offloading in a mixed MEC/MCC system. [16] and [17] thoroughly described the envisioned network architecture, proposed a resource management scheme, and analyzed its performance for mobile edge computing.
There are also many previous works that improve system performance through the optimization of offloading decisions and resource allocation, such as the allocation of transmit power, bandwidth, and computation resources. The improvements include reduction of delay or energy consumption [18,19,20], minimization of the system cost [21, 22], improvement of QoS [23], maximization of the server's revenue [24], and adaptive user access mode selection [25]. However, most of these works emphasize offloading decision making, resource allocation, or access mode selection in isolation, without considering them jointly.
Different from the above approaches, in this paper we study the joint optimization of offloading decision making and access mode selection in a mixed MEC/MCC system to minimize the expected charge. The joint design is embodied in the optimization of the computing ratios at each layer and the distribution density of APs. Studying the distribution density of APs in MEC is meaningful because edge servers are mobile and controllable. To the best of our knowledge, the joint design of offloading decision making and access mode selection in a mixed MEC/MCC system has not been addressed in previous works. The main contributions of this work are summarized as follows.

We analyze the selection probability and corresponding ergodic rate of each mode.

We formulate an optimization problem to minimize the expected charge of computing a task in the mixed MEC/MCC system. Owing to the multi-access modes, the expected charge takes the form of a sum over modes of the product of each mode's connection probability and its corresponding charge.

We devise a low-complexity algorithm called Iteration of Convex Optimization and Nonlinear Programming (ICONP) to solve the formulated NP-hard optimization problem. It first fixes the density variable and transforms the original problem into a convex problem via the arithmetic-geometric mean inequality; the convex problem is solved with the CVX tool to obtain the optimal values of the remaining variables. Those variables are then fixed, and the density variable is obtained by constrained nonlinear programming. The two steps iterate until convergence.

We prove the convergence of the proposed algorithm. Simulation results show the effectiveness of the proposed scheme with different system parameters.
The rest of this paper is organized as follows. The system model is described in Section 2. The mode selection and the corresponding ergodic rates are presented in Section 3. Section 4 formulates the original problem. Section 5 presents the design of the optimization algorithm. Simulation results are discussed in Section 6. Finally, we conclude this study in Section 7. The appendices are given in Section 8.
System model
We consider a three-layer mixed mobile edge/cloud uplink system, which consists of a user equipment (UE), a large number of APs, and a remote cloud, as illustrated in Fig. 1. In this paper, the UE and the APs are assumed to be equipped with a single antenna. APs that are capable of computing are also called FAPs. FAPs are deployed according to a two-dimensional Poisson point process (PPP) \(\Phi _f\) with density \(\lambda _1\) in a disc plane centered at the UE. Thus, the deployment of all the APs is an expanded homogeneous PPP \(\Phi _d\) with density \(\lambda _2=\lambda _1/{k}\), where \(k\in \left( 0,1 \right]\) denotes the probability that an AP supports computation.
Without loss of generality, only one computation-intensive task \({{\Gamma }_{u}}=\{N,\omega \}\) needs to be completed for the UE, where N is the size of the computing task and \(\omega\) denotes the number of CPU cycles required to compute one bit. In this paper, we assume that the computing task is divisible, i.e., it can be split into two or more parts.
Three computing modes are considered in this paper: FAP execution mode, AP relay mode, and local execution mode, denoted as mode \(i, i\in \Psi =\{1,2,3\}\), respectively. In mode 1, the UE computes the task collaboratively with the FAP and the cloud; in mode 2, the UE computes the task collaboratively with the cloud; and in mode 3, the UE executes the task locally using its own computation capacity. In mode \(i, i\in \Psi\), the UE first processes a fraction \(\alpha _i\) of the task, where \(\alpha _i\in \left[ 0,1\right] , i\in \Psi\), and \(\alpha _3=1\). Let \(\varvec{\alpha }=[\alpha _1,\alpha _2,1]\). Then, the UE transmits the remaining \((1-\alpha _{i})N\) bits to the selected AP. After that, the selected AP processes a fraction \(\varrho =\text {max}\{\beta (2-i),0\}\) of the received data and transmits \((1-\alpha _{i})(1-\beta (2-i))N\) bits to the cloud, where \(\beta \in \left[ 0,1\right]\). Finally, the cloud computes the received data.
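To make the splitting rule concrete, here is a small sketch (the function and variable names are ours, not the paper's) of how an N-bit task is divided among the UE, the selected AP, and the cloud in modes 1 and 2:

```python
def split_task(N, alpha, beta, mode):
    """Split an N-bit task across UE, AP, and cloud for mode 1 or 2.

    Sketch of the paper's splitting rule: the UE keeps alpha*N bits,
    the selected AP computes max(beta*(2 - mode), 0) of the forwarded
    (1 - alpha)*N bits, and the cloud receives the remainder.
    """
    ue = alpha * N
    forwarded = (1 - alpha) * N
    rho = max(beta * (2 - mode), 0.0)   # AP's share of the forwarded data
    ap = rho * forwarded
    cloud = (1 - rho) * forwarded
    return ue, ap, cloud

# Mode 1: the UE, the FAP, and the cloud all compute a share.
print(split_task(1000, 0.2, 0.5, mode=1))  # (200.0, 400.0, 400.0)
# Mode 2: the AP only relays (rho = 0), so the cloud gets the rest.
print(split_task(1000, 0.2, 0.5, mode=2))  # (200.0, 0.0, 800.0)
```

The three shares always sum to N, which is exactly why the charge in each mode can be written layer by layer in the next subsection.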
When computing data, the energy consumption \({E_c}\) and time consumption \({T_c}\) are given as [26, 27]
where \(\kappa\) denotes the effective capacitance coefficient, f is the computation capacity of the central processing unit (CPU), and \(D_1\) is the size of the computed data (in bits).
When transmitting data, the energy consumption \({E_t}\) and time consumption \({T_t}\) are given as
where p and r denote the transmit power and rate, respectively, and \(D_2\) is the size of the transmitted data (in bits).
The size of the computation outcome is much smaller than that of the computing task, so the charge due to downlink transmission of the result is negligible compared to the uplink [28, 29]. Combining Eqs. (1)–(4), the charge is the sum, over the layers, of the consumed energy times its price and the computing delay times its price. The charge in mode \(i, i\in \Psi\), can be computed as
where \(f_{\text {loc}}\) is the local computation capacity (in CPU cycles/s) of the UE, \(f_{\text {AP1}}\) is the computation capacity of the selected FAP, \(f_{\text {AP}i}, i\in \Upsilon =\{2,3\}\), is a nonzero constant kept only for the rigor of the formula, and \(f_\text {C}\) is the cloud's computation capacity. \(R_i\) is the ergodic rate from the UE to the selected AP, while \(r_i\) is the transmission rate from the selected AP to the cloud in mode \(i, i\in \Omega =\{1,2\}\); both are discussed in detail in the following section. \(V_{\text {loc}},V_{\text {AP}i}\), and \(V_{\text {C}i}\) are the prices per Joule (in yuan/J) at the UE, the selected AP, and the cloud, respectively. Each price rises in proportion to the corresponding amount of data to be computed or offloaded. For simplicity, we define \(V_{\text {loc}}=\upsilon _1\), \(V_{\text {AP}i}=\upsilon _2\cdot {\beta }^{2-i}(1-\alpha _i)N\), and \(V_{\text {C}i}=\upsilon _3\cdot \left( 1-\alpha _i\right) \left( 1+\beta (i-2)\right) N\). \(G_\mho , \mho \in \{\text {loc,AP,C}\}\), are the prices per second (in yuan/s) for the computing delay at the UE, the selected AP, and the cloud, respectively. All notations in this paper and their definitions are collected in Table 1.
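Since Eqs. (1)–(5) are not reproduced above, the following sketch assumes the common CPU model used in the MEC literature consistent with [26, 27] (delay \(\omega D/f\), energy \(\kappa \omega D f^2\)) and holds the prices constant, whereas the paper scales them with the data size; all numeric values are placeholders:

```python
KAPPA = 1e-27  # effective capacitance coefficient kappa (assumed value)

def compute_charge(bits, omega, f, v_energy, g_delay):
    """Charge for computing `bits` at a layer with CPU speed f (cycles/s):
    delay T_c = omega*bits/f and energy E_c = KAPPA*omega*bits*f**2,
    priced by g_delay (yuan/s) and v_energy (yuan/J)."""
    t_c = omega * bits / f
    e_c = KAPPA * omega * bits * f ** 2
    return v_energy * e_c + g_delay * t_c

def transmit_charge(bits, power, rate, v_energy, g_delay):
    """Charge for sending `bits` at `rate` bps with transmit power
    `power` W: T_t = bits/rate and E_t = power*T_t."""
    t_t = bits / rate
    return v_energy * power * t_t + g_delay * t_t

def mode1_charge(N, omega, alpha, beta, R1=1e6, r1=1e7):
    """Mode-1 charge: each layer computes its share and forwards the
    rest (CPU speeds, rates, and prices below are illustrative)."""
    ue = compute_charge(alpha * N, omega, 1e9, 1.0, 0.1) \
         + transmit_charge((1 - alpha) * N, 0.1, R1, 1.0, 0.1)
    ap = compute_charge(beta * (1 - alpha) * N, omega, 5e9, 1.0, 0.1) \
         + transmit_charge((1 - beta) * (1 - alpha) * N, 0.5, r1, 1.0, 0.1)
    cloud = compute_charge((1 - beta) * (1 - alpha) * N, omega, 1e10, 1.0, 0.1)
    return ue + ap + cloud
```

The structure mirrors the text: a sum over layers of energy-price and delay-price terms, with the data split determined by \(\alpha _i\) and \(\beta\).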
Mode selection and ergodic rate
The UE first tries to select the FAP nearest to it whose received signal-to-noise ratio (SNR) is larger than a preset SNR threshold \(T_1\). If the UE cannot find an FAP that meets this requirement, the UE selects as a relay the nearest AP whose SNR is larger than a preset SNR threshold \(T_2\). If neither can be achieved, the UE computes the data by itself. The probability of finding a nearest AP whose SNR is larger than \(T_i\) is expressed as \(F(\lambda _i), i\in \Omega\) [25].
where \(B_1\), \(\sigma ^2\), and \(p_1\) are the transmission bandwidth, the mean noise power per Hz, and the transmit power of the UE, respectively. The proof of \(F(\lambda _i)\) can be seen in “Appendix 1”.
The probability of selecting mode \(i, i\in \Psi\), is denoted as \(M_i\), and expressed as
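Without restating the displayed expression, the sequential selection rule above determines the three probabilities; the sketch below treats \(F(\lambda _1)\) and \(F(\lambda _2)\) as given numbers:

```python
def mode_probabilities(F1, F2):
    """Selection probabilities M1, M2, M3 of the three modes, given
    F1 = F(lambda_1) (a qualifying FAP exists, SNR > T1) and
    F2 = F(lambda_2) (a qualifying AP exists, SNR > T2).  Follows the
    sequential rule: try an FAP first, fall back to a relay AP,
    otherwise compute locally."""
    M1 = F1
    M2 = (1 - F1) * F2
    M3 = (1 - F1) * (1 - F2)
    return M1, M2, M3

M1, M2, M3 = mode_probabilities(0.6, 0.8)
print(M1 + M2 + M3)  # -> 1.0 (up to float rounding)
```

Because the three events partition the sample space, the probabilities sum to one for any \(F_1, F_2 \in [0,1]\).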
Next, we focus on the derivation of ergodic rate in mode \(i, i\in \Omega\). Since the APs are deployed according to PPP, the ergodic rate (in bps) is defined as [25]
where \({\mathbb {E}}(\cdot )\) is the expectation with respect to the channel fading distribution as well as the locations of the random receiver nodes. SNR and T are the real-time SNR and the preset SNR threshold of the wireless connection between the UE and the selected AP, respectively.
The ergodic rate from the UE to the selected AP in mode \(i, i\in \Omega\), can be derived as [25]
where \(\rho (\lambda _i)=\int _{{{\log }_{2}}(1+{{T}_{i}})}^{\infty }\frac{p_1\lambda _i\pi }{p_1\lambda _i\pi +2^\theta B_1\sigma ^2}{d\theta }\). More details about \(R_i\) can be seen in “Appendix 2”.
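The integral \(\rho (\lambda _i)\) has no simple closed form, but it is easy to evaluate numerically because the integrand decays like \(2^{-\theta }\). A sketch (all parameter values are placeholders, not the paper's simulation settings):

```python
import math

def rho(lam, p1=0.1, B1=1e6, sigma2=1e-9, T=1.0, theta_max=60.0, n=20000):
    """Trapezoidal evaluation of
    rho(lam) = int_{log2(1+T)}^inf  c / (c + 2**theta * B1 * sigma2) dtheta,
    with c = p1*lam*pi.  The integrand decays like 2**(-theta), so
    truncating at theta_max introduces negligible error."""
    a = math.log2(1 + T)
    h = (theta_max - a) / n
    c = p1 * lam * math.pi

    def f(theta):
        return c / (c + 2.0 ** theta * B1 * sigma2)

    s = 0.5 * (f(a) + f(theta_max))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

# A denser deployment (larger lambda) raises the coverage term rho:
print(rho(1.0) < rho(10.0))  # True
```

The monotonicity matches intuition: a denser PPP shortens the distance to the nearest AP, improving the SNR distribution and hence the ergodic rate.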
The transmission rate from the selected AP to the cloud is [25]
where \(B_2\), \(p_2\), and \(\left\| {{D}_{i}} \right\|\) are the transmit bandwidth, the transmit power of the selected AP, and the expected distance between the selected AP and the cloud in mode \(i, i\in \Omega\), respectively. \(\zeta _c\) is the path loss exponent between the AP and the cloud. Please see the details of \(r_i\) in “Appendix 3”.
Problem formulation
Since all three execution modes can occur, the overall charge in this paper is defined as the expected charge: the sum over modes of the product of each mode's selection probability and its corresponding charge, i.e.,
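The expected charge is thus a probability-weighted sum over the three modes; as a sketch with illustrative numbers:

```python
def expected_charge(probs, charges):
    """Expected charge P of the mixed MEC/MCC system: each mode's
    charge C_i weighted by its selection probability M_i."""
    assert abs(sum(probs) - 1.0) < 1e-9, "mode probabilities must sum to 1"
    return sum(m * c for m, c in zip(probs, charges))

# M = (M1, M2, M3), C = (C1, C2, C3) -- illustrative values only.
print(expected_charge([0.6, 0.32, 0.08], [1.0, 1.5, 3.0]))  # ~1.32
```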
In this paper, the objective is to minimize the expected charge of offloading computing, which is formulated as follows:
Constraint \(\text {C1}\) means that the data offloaded from the UE to the selected FAP should be no larger than \(n_1\), and \(\text {C2}\) guarantees that the data offloaded from the selected AP to the cloud is no larger than \(n_2\), where \(n_1\) is the maximum receive capacity the selected AP offers the UE and \(n_2\) is the maximum receive capacity the cloud offers the selected AP. Constraints \(\text {C3}\) and \(\text {C4}\) ensure that each computing ratio is no more than 1 and no smaller than 0. Constraint \(\text {C5}\) ensures multi-mode cooperation, where \({F}({{\lambda }_{1}})\) is the probability of choosing mode 1, and \({{F}_{\min }}\) and \({{F}_{\max }}\) are the lower and upper bounds on this probability, respectively. Owing to the relationship between \(\lambda _1\) and \(\lambda _2\), constraint \(\text {C5}\) implicitly constrains \(F(\lambda _2)\) as well. The problem is complex and non-convex because of the products of variables in the objective function and in constraint \(\text {C2}\). Thus, we reduce the complexity and obtain suboptimal values of the variables by transforming the problem into a convex form.
Design of optimization algorithm
Note that \(\lambda _1\) is related to the transmission rates and the selection probability of each mode. The coupling among \({{\lambda }_{1}}\), \(\beta\), and \(\varvec{\alpha }\) makes transforming the objective function into a convex form difficult. To overcome this difficulty, we propose to address problem \(\mathcal {P}_{\text {1}}\) in an alternating manner. Specifically, we first solve problem \(\mathcal {P}_{\text {1}}\) with respect to \(\varvec{\alpha }\) and \(\beta\) for fixed \(\lambda _{1}\). Then, we solve problem \(\mathcal {P}_{\text {1}}\) with respect to \(\lambda _{1}\) for fixed \(\varvec{\alpha }\) and \(\beta\). The two steps iterate until convergence.
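The alternation can be sketched as the following loop skeleton. The two callbacks stand in for the convex subproblem (solved by CVX in the paper) and the nonlinear density subproblem; both are assumptions of this sketch, not the paper's implementation:

```python
def iconp(solve_ratios, solve_density, lam0, tol=1e-6, max_iter=100):
    """Skeleton of the ICONP alternation.  `solve_ratios(lam)` stands in
    for the convex subproblem (returns the optimal ratios and the charge
    for fixed density lam); `solve_density(ratios)` stands in for the
    nonlinear density subproblem.  The loop alternates the two until the
    charge stops improving."""
    lam, prev = lam0, float("inf")
    ratios, charge = None, prev
    for _ in range(max_iter):
        ratios, charge = solve_ratios(lam)
        lam, charge = solve_density(ratios)
        if prev - charge < tol:
            break
        prev = charge
    return ratios, lam, charge
```

Because each subproblem can only keep or lower the objective, the charge sequence is non-increasing and bounded below, which is the convergence argument used later in this section.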
When the value of \(\lambda _1\) is given, the values of \(M_i, i\in \Psi\), and the transmission rates \(R_i\), \(r_i\), \(i\in \Omega\), are all known. Substituting the expressions derived above into the problem, the objective function and constraints of problem \(\mathcal {P}_{\text {2}}\) are shown below.
where constraints C6 and C7 come from C1 when i=1 and i=2, respectively; constraints C8 and C9 come from C2 when i=1 and i=2; and constraints C10 and C11 come from C4 when i=1 and i=2.
In problem \(\mathcal {P}_{\text {2}}\), the objective function and constraint C8 contain products of the variables \(\alpha _1\) and \(\beta\), so they are obviously not convex; the remaining constraints are linear. Before solving the problem, it is necessary to transform them into convex forms. By the arithmetic-geometric mean inequality, for real numbers a, b we have \({a^2} + {b^2} \ge 2ab\), so \(\frac{{{a^2} + {b^2}}}{2}\) is an upper bound on ab. Based on this inequality, problem \(\mathcal {P}_{\text {2}}\) is relaxed to problem \(\mathcal {P}_{\text {3}}\), whose objective and constraints are transformed with respect to the variables \(\varvec{\alpha }\) and \(\beta\).
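The relaxation can be checked numerically on the range of the ratios:

```python
import itertools

def bilinear_ub(a, b):
    """Convex upper bound used in the AM-GM relaxation: by the
    arithmetic-geometric mean inequality, a*b <= (a**2 + b**2)/2,
    with equality exactly when a == b."""
    return (a * a + b * b) / 2

# The bound holds over the whole range [0, 1] x [0, 1] of the ratios:
grid = [i / 10 for i in range(11)]
print(all(bilinear_ub(a, b) >= a * b
          for a, b in itertools.product(grid, grid)))  # True
```

Assuming the product enters the \(\le\) constraint with positive sign, replacing it by this larger convex term shrinks the feasible set, so a solution of the relaxed problem remains feasible for the original one; this is why the result is suboptimal rather than infeasible.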
The second derivatives of the objective and constraints of problem \(\mathcal {P}_{\text {3}}\) with respect to the variables \(\varvec{\alpha }\) and \(\beta\) are greater than or equal to 0. Thus, problem \(\mathcal {P}_{\text {3}}\) is convex and can be solved by the CVX tool easily and efficiently. When the values of \(\varvec{\alpha }\) and \(\beta\) are given, the optimal \(\lambda _1\) can be obtained by solving the following problem \(\mathcal {P}_{\text {4}}\). Its objective is the same as that of problem \(\mathcal {P}_{\text {3}}\), but the unknown variable is \({{\lambda }_{1}}\); hence the constraint of \(\mathcal {P}_{\text {4}}\) is the one related to \({{\lambda }_{1}}\), namely constraint C5.
where
\(\mathcal {P}_{\text {4}}\) is a nonlinearly constrained optimization problem containing only the variable \(\lambda _1\). From constraint \(\text {C5}\) we obtain the range of \(\lambda _1\), denoted \(\lambda _\text {min}\le \lambda _1\le \lambda _\text {max}\). Since \(\mathcal {P}_{\text {4}}\) has only one inequality constraint, we can obtain the optimal \(\lambda _1\) by the interior-point penalty function method [30], whose main idea is to transform a nonlinearly constrained optimization problem into an unconstrained one. First, define the barrier function
where r is a very small positive number. In this way, as \(\lambda _1\) approaches \(\lambda _\text {min}\) or \(\lambda _\text {max}\), \(G(\lambda _1,r)\) tends to infinity; otherwise, \(G(\lambda _1,r)\approx P(\lambda _1)\). Thus, we can equivalently solve \(\mathcal {P}_{\text {5}}\) to obtain the optimal \(\lambda _1\).
\(\mathcal {P}_{\text {5}}\) is an unconstrained nonlinear optimization problem and can be solved by a one-dimensional line search; the minimizer can be found by Newton's method [31]. The derivative of \(G({\lambda _1},r)\) with respect to \({\lambda _1}\) is denoted as \(g({\lambda _1},r)\). Suppose a point \(\delta ^0\) close to the minimum point has been given. Near \(\delta ^0\), we use a quadratic function \(\hbar (\delta ,r)\) to approximate the original function \(g(\delta ,r)\), obtained by Taylor expansion as
where \(g'(\delta ^0,r)=\frac{dg(\delta ,r)}{d\delta }_{\delta =\delta ^0}\), \(g''(\delta ^0,r)=\frac{d^2g(\delta ,r)}{d(\delta )^2}_{\delta =\delta ^0}\). The minimum point of the quadratic function \(\hbar (\delta ,r)\) is then used as the new approximation, denoted \(\delta ^1\), of the minimum point of \(G(\delta ,r)\). By the necessary condition for an extremum, \(\frac{d\hbar (\delta ,r)}{d\delta }=0\) yields \(\delta ^{1}=\delta ^0-\frac{g'(\delta ^0,r)}{g''(\delta ^0,r)}\), and the general update formula follows as \(\delta ^{m+1}=\delta ^m-\frac{g'(\delta ^m,r)}{g''(\delta ^m,r)}\). The algorithm is shown in Algorithm 1.
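A minimal numerical version of this Newton update on a barrier function, with finite-difference derivatives instead of the paper's analytic ones and purely illustrative numbers:

```python
def newton_minimize(G, x0, steps=50, h=1e-5, tol=1e-10):
    """Newton's method for a 1-D unconstrained problem: iterate
    x <- x - G'(x)/G''(x), with the derivatives approximated by
    central finite differences."""
    x = x0
    for _ in range(steps):
        g1 = (G(x + h) - G(x - h)) / (2 * h)            # ~ G'(x)
        g2 = (G(x + h) - 2 * G(x) + G(x - h)) / h ** 2  # ~ G''(x)
        step = g1 / g2
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy instance: P(l) = (l - 5)^2 on (0, 10) with an inverse barrier
# (the numbers are illustrative, not the paper's parameters).
r = 1e-4
G = lambda l: (l - 5.0) ** 2 + r * (1.0 / l + 1.0 / (10.0 - l))
print(newton_minimize(G, 4.0))  # ~5.0 (the barrier here is symmetric)
```

Starting close enough to the minimizer matters: a large Newton step could jump outside the barrier's domain, which is why the method is paired with a good initial point.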
According to the definition of \(G(\lambda _1,r)\), the smaller r is, the closer the solution of \(\mathcal {P}_{\text {5}}\) is to the solution of \(\mathcal {P}_{\text {4}}\). Thus, we adopt the Sequential Unconstrained Minimization Technique (SUMT) to bring the solution of \(\mathcal {P}_{\text {5}}\) closer to that of \(\mathcal {P}_{\text {4}}\) [30]. We set a penalty factor sequence {\(r_k\)} that is strictly monotonically decreasing and tends to zero, and then minimize \(G(\lambda _1,r_k)\) for each \(r_k\) until the iterative termination criterion is met. The complete algorithm for solving \(\mathcal {P}_{\text {4}}\) is shown in Algorithm 2.
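A sketch of the SUMT outer loop, with a ternary-search inner solve standing in for Algorithm 1; the barrier form and every parameter value here are illustrative assumptions:

```python
def sumt(P, lo, hi, r0=1.0, shrink=0.1, rounds=8, iters=200):
    """SUMT sketch: minimize the barrier function
    G(x, r_k) = P(x) + r_k*(1/(x - lo) + 1/(hi - x))
    for a strictly decreasing penalty sequence r_k -> 0.  Each inner
    problem is solved by ternary search, which is valid because G is
    unimodal whenever P is convex on (lo, hi)."""
    x = (lo + hi) / 2
    for k in range(rounds):
        r = r0 * shrink ** k

        def G(t):
            return P(t) + r * (1.0 / (t - lo) + 1.0 / (hi - t))

        a, b = lo, hi
        for _ in range(iters):
            m1 = a + (b - a) / 3
            m2 = b - (b - a) / 3
            if G(m1) < G(m2):
                b = m2
            else:
                a = m1
        x = (a + b) / 2
    return x
```

As \(r_k\) shrinks, the barrier's influence fades and the minimizer approaches the constrained optimum on \([\lambda _\text {min}, \lambda _\text {max}]\).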
Finally, the \({{\lambda }_{1}}\) obtained by Algorithm 2 is substituted back into problem \(\mathcal {P}_{\text {3}}\), where it is treated as a known value, and the optimized \(\varvec{\alpha }\) and \(\beta\) are derived by the CVX tool. After that, we solve problem \(\mathcal {P}_{\text {4}}\) with fixed \(\varvec{\alpha }\) and \(\beta\). In summary, the overall algorithm first solves problem \(\mathcal {P}_{\text {3}}\) with respect to \(\varvec{\alpha }\) and \(\beta\) for fixed \(\lambda _{1}\), and then solves problem \(\mathcal {P}_{\text {4}}\) with respect to \(\lambda _{1}\) for fixed \(\varvec{\alpha }\) and \(\beta\). When problem \(\mathcal {P}_{\text {3}}\) is solved with respect to \(\varvec{\alpha }\) and \(\beta\) for fixed \(\lambda _{1}\), the value of P with the optimized \(\varvec{\alpha }\) and \(\beta\) is no larger than before; similarly, when problem \(\mathcal {P}_{\text {4}}\) is solved with respect to \(\lambda _{1}\) for fixed \(\varvec{\alpha }\) and \(\beta\), the value of P with the optimized \(\lambda _{1}\) is no larger than before. Thus, the algorithm for solving \(\mathcal {P}_{\text {2}}\) converges; it is shown in Algorithm 3.
Simulation results and analysis
In this section, the impact of N, \(n_1\), and \(n_2\) on the latency, the computing ratios, and the expected charge is evaluated using MATLAB with the CVX tool. The simulation parameters are listed in Table 2.
Figure 2a, b shows the delay and charge of the offloading system with increasing data size N when \(n_1=1200, n_2=800\). The delay and the charge of the system increase with the data size. Compared with local computing, the proposed offloading strategy improves the QoS by saving about 4 s and 1.5 yuan for the same task size under the simulation parameters we set. This is because the objective function balances the energy consumption and delay at each layer; hence, offloading not only costs less but also takes less time than local computing.
Next, the computing ratios of each layer in mode 1 and mode 2 and the allocation of the FAPs' distribution density versus the data size are shown in Fig. 3a, b and Fig. 4, where \(n_1=1200, n_2=800\). In Fig. 3a, as the data size increases, the UE first computes nothing and then its computing ratio keeps increasing. The computing ratio of the FAP first stays unchanged, then increases, and finally decreases. The computing ratio of the cloud first stays unchanged and then keeps decreasing. The ratios stay unchanged when the data size is smaller than 1000 bits because the optimal data size computed at each layer to minimize the charge is smaller than that layer's receive capacity. When the data size is between 1000 and 1200 bits, the data that would optimally be offloaded to the cloud exceeds its receive capacity, so the computing ratio of the cloud decreases; meanwhile, the data that would optimally be offloaded to the FAP is still smaller than its receive capacity, which is why the computing ratio of the FAP increases with the data size. When the data size exceeds the FAP's receive capacity, the UE must compute the part exceeding \(n_1\); thus, for task sizes larger than \(n_1\), the larger the data size, the larger the share the UE needs to compute, while the computing ratios of both the FAP and the cloud decrease. In Fig. 3b, the cloud computes the whole task and the UE computes nothing when the data size is smaller than \(n_2\). When the data size is larger than \(n_2\), the computing ratio of the UE keeps increasing while that of the cloud keeps decreasing. This is because, as long as the data size optimally allocated to the cloud is smaller than its receive capacity, the computing ratios of the cloud and the UE stay unchanged.
When the data size that would optimally be offloaded exceeds \(n_2\), the amount of offloaded data is fixed at \(n_2\), so the computing ratio of the cloud decreases while that of the UE increases. The change of data size does not affect the optimal value of \(\lambda _1\), as shown in Fig. 4, because the data size has no bearing on the allocation of the FAPs' distribution density.
Figure 5a, b shows the expected charge and the distribution density of the FAPs versus the values of \(n_1\) and \(n_2\), where \(N=2000\). In Fig. 5a, the expected charge decreases as \(n_1\) and \(n_2\) increase. This is because the computing power of the upper layers is larger than that of the UE, so offloading saves charge. The optimized computing ratios at each layer are limited by the receive capacities of the upper layers, and larger \(n_1\) and \(n_2\) permit the UE to offload more data to the upper layers when optimizing the task allocation. Once the receive capacities exceed the optimized data sizes allocated to the corresponding layers, the computing ratios and the charge stay unchanged as \(n_1\) and \(n_2\) increase further. In Fig. 5b, the distribution density of the FAPs is hardly influenced by \(n_1\) and \(n_2\) unless both are small. This is because, when the receive capacity of the FAP is too small to accept offloaded data, the whole task is computed locally, and in that case the distribution density of the FAPs does not need optimization. When the receive capacities become larger, the UE can offload data to the upper layers, and the distribution density of the FAPs must be optimized to minimize the charge of the offloading system. From the simulation, we found that there is no direct connection between the distribution density of the FAPs and the receive capacities once the UE can offload.
Figure 6a–c shows the computing ratios at each layer in mode 1 versus the values of \(n_1\) and \(n_2\), where \(N=2000\). In Fig. 6a, the UE's computing ratio decreases as \(n_1\) and \(n_2\) increase, because after optimization the UE offloads part of the task to the upper layers to save charge. The offloaded data size is limited by the receive capacities: when the data size optimally allocated to the upper layers exceeds their receive capacities, the UE offloads as much as those capacities allow. In Fig. 6b, the FAP's computing ratio first increases and then decreases to a stable value as \(n_2\) keeps increasing. The increase occurs because a larger \(n_2\) permits the UE to offload more data to the cloud through the FAP, so more data can be computed at the FAP. However, as the receive capacity of the cloud keeps increasing, more data is offloaded to the cloud to save charge and the computing ratio of the FAP decreases. The FAP's computing ratio increases to a stable value as \(n_1\) keeps increasing: when the data size that would optimally be offloaded exceeds the FAP's receive capacity, the UE offloads as much as that capacity allows in order to save charge, which is why the ratio increases with \(n_1\). Once the receive capacity of an upper layer grows beyond the data size optimally allocated to that layer, the computing ratios at each layer stay unchanged as the receive capacities increase further. In Fig. 6c, the computing ratio of the cloud is the complement of the sum of the UE's and the FAP's.
Figure 7a, b shows the computing ratios at each layer in mode 2 versus the values of \(n_1\) and \(n_2\), where \(N=2000\). In Fig. 7a, the computing ratio of the UE decreases as the receive capacities become larger, and the computing ratio of the cloud complements the UE's as \(n_1\) and \(n_2\) increase, as shown in Fig. 7b. The reason is similar to that in mode 1: when the data size that would optimally be offloaded exceeds the cloud's receive capacity, the UE offloads as much as that capacity allows in order to save charge, and once the receive capacity grows beyond the data size optimally allocated to the cloud, the computing ratios stay unchanged as the receive capacities increase further. Compared with mode 1, the computing burden on the cloud is larger because there are no edge servers.
Conclusion
In this paper, a mixed MEC/MCC system based on offloading computing was investigated, in which the computing ratios at each layer and the distribution density of the FAPs were jointly optimized to minimize the expected charge. To address the non-convex problem, we proposed the ICONP algorithm. The suboptimal computing ratios of the computing task at each layer were obtained by fixing the density of the FAPs and using the arithmetic-geometric mean inequality to transform the problem into a convex form, while the density of the APs was obtained via unconstrained nonlinear programming; the computing ratios and the density of the FAPs were then solved iteratively. Our simulation results verified that the proposed system achieves better performance than computing the whole task locally in terms of both charge and delay. The research in this paper does not consider the inter-cell and inter-user interference that exists in real deployments, and the computational cost of solving the optimization problem is not taken into account. Thus, there are several future directions of interest based on our work. It is interesting to study multi-user and multi-AP coordinated communication under mobile edge computing to overcome the limitations of this paper; in this case, inter-user interference and the multi-user game over resources will be taken into consideration. It is also meaningful to account for the computational cost when dealing with the optimization problem. Meanwhile, machine learning is a hot research topic at present, and how to combine machine learning with mobile edge computing effectively is also worth studying in the future.
Abbreviations
MEC: Mobile edge computing
AP: Access point
UE: User equipment
ICONP: Iteration of convex optimization and nonlinear programming
MCC: Mobile cloud computing
FAP: Fog-computing-based access point
QoS: Quality of service
SNR: Signal-to-noise ratio
References
 1.
Cisco visual networking index: mobile data traffic forecast update, 2017–2022. Cisco White Paper, February (2018)
 2.
W. Shi, J. Cao, Q. Zhang et al., Edge computing: vision and challenges. IEEE Internet Things J. 3(5), 637–646 (2016)
 3.
M. Peng, Y. Li, Z. Zhao et al., System architecture and key technologies for 5g heterogeneous cloud radio access networks. IEEE Netw. 29(2), 6–14 (2015)
 4.
L. Zhou, Specific- versus diverse-computing in media cloud. IEEE Trans. Circuits Syst. Video Technol. 25(12), 1888–1899 (2015)
 5.
S. Wang, J. Xu, N. Zhang et al., A survey on service migration in mobile edge computing. IEEE Access 6, 23511–23528 (2018)
 6.
R. Naha, S. Garg, D. Georgakopoulos et al., Fog computing: survey of trends, architectures, requirements, and research directions. IEEE Access 6, 47980–48009 (2018)
 7.
T.X. Tran, A. Hajisami, P. Pandey et al., Collaborative mobile edge computing in 5g networks: new paradigms, scenarios, and challenges. IEEE Commun. Mag. 55(4), 54–61 (2017)
 8.
S. Sardellitti, G. Scutari, S. Barbarossa, Joint optimization of radio and computational resources for multicell mobileedge computing. IEEE Trans. Signal Inf. Process. Over Netw. 1(2), 89–103 (2015)
 9.
H. Viswanathan, P. Pandey, D. Pompili, Maestro: Orchestrating concurrent application workflows in mobile device clouds. 2016 IEEE International Conference on Autonomic Computing (ICAC), pp. 257–262 (2016)
 10.
M. Peng, S. Yan, K. Zhang, et al., Ieee network. Fogcomputingbased radio access networks: issues and challenges, (2016)
 11.
S. Yi, Z. Hao, Z. Qin, et al., Fog computing: platform and applications. 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), Washington, DC, pp. 73–78 (2015)
 12.
I. Stojmenovic, S. Wen, The Fog computing paradigm: scenarios and security issues. 2014 Federated Conference on Computer Science and Information Systems, Warsaw, 18 (2014)
 13.
R. Roman, J. Lopez, M. Mambo, Mobile edge computing, fog et al.: A survey and analysis of security threats and challenges. Future Gener. Comput. Syst. 2016: S0167739X16305635 (2016)
 14.
V. Pande, C. Marlecha, S. Kayte, A Reviewfog computing and its role in the internet of things. J. Eng. Res. Appl. (2016)
 15.
J. Du, L. Zhao, J. Feng et al., Computation offloading and resource allocation in mixed fog/cloud computing systems with minmax fairness guarantee. IEEE Trans. Commun. 66(4), 1594–1608 (2018)
 16.
B.P. Rimal, D.P. Van, M. Maier, Mobileedge computing versus centralized cloud computing over a converged FiWi access network. IEEE Trans. Network Serv. Manag. 14(3), 498–513 (2017)
 17.
B.P. Rimal, D.P. Van, M. Maier, Cloudlet enhanced fiberwireless access networks for mobileedge computing. IEEE Trans. Wirel. Commun. 16(6), 3601–3618 (2017)
 18.
Y. Lin, E. Chu, Y. Lai et al., Timeandenergyaware computation offloading in handheld devices to coprocessors and clouds. IEEE Syst. J. 9(2), 393–405 (2015)
 19.
K. Zhang, Y. Mao, S. Leng et al., Energyefficient offloading for mobile edge computing in 5g heterogeneous networks. IEEE Access 4, 5896–5907 (2016)
 20.
O. Munoz, A. PascualIserte, J. Vidal, Optimization of radio and computational resources for energy efficiency in latencyconstrained application offloading. IEEE Trans. Veh. Technol. 64(10), 4738–4755 (2015)
 21.
X. Chen, Decentralized computation offloading game for mobile cloud computing. IEEE Trans. Parallel Distrib. Syst. 26(4), 974–983 (2015)
 22.
Y. He, F.R. Yu, N. Zhao et al., Big data analytics in mobile cellular networks. IEEE Access 4, 1985–1996 (2016)
 23.
S. Deng, L. Huang, J. Taheri et al., Computation offloading for service workflow in mobile cloud computing. IEEE Trans. Parallel Distrib. Syst. 26, 3317–3329 (2015)
 24.
C. Wang, C. Liang, F.R. Yu et al., Computation offloading and resource allocation in wireless cellular networks with mobile edge computing. IEEE Trans. Wirel. Commun. 16(8), 4924–4938 (2017)
 25.
S. Yan, M. Peng, W. Wang, User access mode selection in fog computing based radio access networks. In: 2016 IEEE International Conference on Communications (ICC), 16 (2016)
 26.
T.D. Burd, R.W. Brodersen, Processor design for portable systems. J. Vlsi Signal Process. Syst. Signal Image Video Technol. 13(2–3), 203–221 (1996)
 27.
F. Wang, J. Xu, X. Wang et al., Joint offloading and computing optimization in wireless powered mobileedge computing systems. IEEE Trans. Wirel. Commun. 17(3), 1784–1797 (2018)
 28.
S.W. Ko, K. Huang, S.L. Kim et al., Live prefetching for mobile computation offloading. IEEE Trans. Wirel. Commun. 16(5), 3057–3071 (2017)
 29.
H. Guo, J. Liu, Collaborative computation offloading for multiaccess edge computing over fiberwireless networks. IEEE Trans. Veh. Technol. 1–1 (2018)
 30.
R. Liu, Mathematical Modeling Method and Mathematical Experiment (China Water Conservancy and Hydropower Press, Beijing, 2011)
 31.
B. Stephen, V. Lieven, Convex Optimization (Cambridge University Press, Britain, 2004)
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant 61471120, the Natural Science Foundation of Hunan Province under Grant 2020JJ4745, and the National Science and Technology Major Project of China under Grant 2018ZX03001002003.
Author information
Affiliations
Contributions
All the authors designed research, design the algorithm, performed research, analyzed data, and wrote the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A
The probability of choosing mode \(i, i\in \Omega\), can be derived by
where \(B_1\) and \(p_1\) are the transmission bandwidth and power of the UE, and \(\sigma ^2\) is the mean noise power per Hz. \(|\mathbf {h}_1|^2\sim \exp (1)\) characterizes the exponentially distributed fading power over the flat Rayleigh fading channel between the UE and the selected AP. \(\Vert L_i\Vert ^{\zeta _f}\) denotes the path loss of mode \(i\), where \(\zeta _f\) is the path loss exponent and \(L_i\) is the distance between the UE and the selected AP. \(f(l_i)=2\lambda _i\pi l_i e^{-\lambda _i\pi l_i^2}\) is the probability density function (PDF) of the distance between the UE and the nearest AP [25]. A closed-form expression can be obtained as \(F(\lambda _i)=\frac{1}{1+T_iB_1\sigma ^2/(p_1\lambda _i\pi )}\) when \(\zeta _f=1\). In this paper, we analyze the problem based on \(\zeta _f=1\).
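As a numerical sanity check (not from the paper), the closed form for \(F(\lambda_i)\) can be verified by integrating the Rayleigh-fading coverage probability against the nearest-AP distance PDF. The SNR model implicit in the closed form, with received SNR \(p_1|\mathbf{h}_1|^2 l^{-2\zeta_f}/(B_1\sigma^2)\), is assumed here, and all parameter values are illustrative:

```python
import math

def coverage_closed_form(lam, T, B1, sigma2, p1):
    # F(lambda) = 1 / (1 + T*B1*sigma^2/(p1*lambda*pi)), the zeta_f = 1 case
    return 1.0 / (1.0 + T * B1 * sigma2 / (p1 * lam * math.pi))

def coverage_numeric(lam, T, B1, sigma2, p1, upper=10.0, n=100000):
    # integrate Pr(|h1|^2 >= T*B1*sigma^2*l^2/p1) = exp(-a*l^2) against
    # the nearest-AP distance PDF f(l) = 2*lambda*pi*l*exp(-lambda*pi*l^2)
    a = T * B1 * sigma2 / p1
    h = upper / n
    total = 0.0
    for k in range(n):
        l = (k + 0.5) * h  # midpoint rule
        total += 2 * lam * math.pi * l * math.exp(-(lam * math.pi + a) * l * l) * h
    return total
```

Because both exponentials combine into a single Gaussian-type integrand, the integral collapses to \(\lambda\pi/(\lambda\pi + T B_1\sigma^2/p_1)\), which matches the stated closed form.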
Appendix B
For a positive continuous random variable \(A\), \({\mathbb {E}}[A\mid A\ge W]=W\Pr (A\ge W)+\int _{W}^{\infty }{\Pr (A\ge a)}\,da\) [25]. Thus, the transmission rate between the UE and the selected AP in mode \(i, i\in \Omega\), is derived as below.
where \(\rho (\lambda _i)=\int _{{{\log }_{2}}(1+{{T}_{i}})}^{\infty }\frac{p_1\lambda _i\pi }{p_1\lambda _i\pi +2^\theta B_1\sigma ^2}{d\theta }\).
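The integral defining \(\rho(\lambda_i)\) admits a closed form via the substitution \(u=2^\theta\) (a step not spelled out in the paper): \(\rho(\lambda_i)=\log_2\big(1+\frac{p_1\lambda_i\pi}{B_1\sigma^2(1+T_i)}\big)\). A quick numerical check, with illustrative parameter values:

```python
import math

def rho_numeric(lam, T, B1, sigma2, p1, theta_max=60.0, n=200000):
    # midpoint-rule quadrature of the integrand in rho(lambda_i); the
    # integrand decays like 2^(-theta), so truncation at theta_max is safe
    a = p1 * lam * math.pi
    b = B1 * sigma2
    lo = math.log2(1 + T)
    h = (theta_max - lo) / n
    total = 0.0
    for k in range(n):
        theta = lo + (k + 0.5) * h
        total += a / (a + (2.0 ** theta) * b) * h
    return total

def rho_closed(lam, T, B1, sigma2, p1):
    # closed form obtained by substituting u = 2^theta and using
    # partial fractions: a/(u*(a+b*u)) = 1/u - b/(a+b*u)
    return math.log2(1 + p1 * lam * math.pi / (B1 * sigma2 * (1 + T)))
```
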
Appendix C
The expected distance between the UE and the selected AP can be expressed as \({{X}_{i}}=\int _{0}^{\infty }{{{l}_{i}}f({{l}_{i}})}\,d{{l}_{i}},\ i\in \Omega .\) Suppose the distance between the UE and the cloud is \({{H}_{uc}}\); the schematic diagram of the three points is shown in Fig. 8:
where point R is the location of the selected AP and \(X_i,\ i\in \Omega\), is the expected distance between the selected AP and the UE. The expected distance between the AP and the cloud is calculated as the average distance between point R and the point \((H_{uc},0)\).
Thus, the transmission rate from the AP to the cloud is
where \(|\mathbf {h}_{2}|^{2}\sim \exp (1)\) characterizes the exponentially distributed fading power over the flat Rayleigh fading channel between the AP and the cloud, with \({\mathbb {E}}[|\mathbf {h}_{2}|^{2}]=1\). \(\Vert D_i\Vert ^{2\zeta _c}\) denotes the path loss, and \(B_{2}\sigma _{2}^{2}\) represents the noise power received by the cloud. Thus, \({{r}_{i}}={{B}_{2}}{{\log }_{2}}\big (1+\frac{{{p}_{2}}\Vert D_i\Vert ^{-2\zeta _c}}{{{B}_{2}}\sigma _{2}^{2}}\big ),\ i\in \Omega\).
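For the nearest-AP distance PDF \(f(l_i)\) of Appendix A, the expectation \(X_i=\int_0^\infty l_i f(l_i)\,dl_i\) evaluates to \(1/(2\sqrt{\lambda_i})\), a standard Poisson-point-process result stated here for completeness. A seeded Monte Carlo check via inverse-CDF sampling, with an illustrative density \(\lambda_i=1\):

```python
import math
import random

def expected_distance_closed(lam):
    # E[L] = 1/(2*sqrt(lambda)) for the nearest-neighbour distance of a
    # homogeneous PPP with density lambda
    return 1.0 / (2.0 * math.sqrt(lam))

def expected_distance_sampled(lam, n=200000, seed=1):
    # L has CDF 1 - exp(-lambda*pi*l^2); invert it to sample L directly
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += math.sqrt(-math.log(1.0 - u) / (lam * math.pi))
    return total / n
```
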
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Jing, T., He, S., Yu, F. et al. Joint optimization of computing ratio and access points’ density for mixed mobile edge/cloud computing. J Wireless Com Network 2021, 16 (2021). https://doi.org/10.1186/s13638-021-01891-w
Keywords
 Computation offloading
 Mobile edge computing
 Density allocation
 Mode selection