
Automated uplink power control optimization in LTE-Advanced relay networks

Abstract

Relaying is standardized in 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE)-Advanced Release 10 as a promising cost-efficient enhancement to existing radio access networks. Relay deployments promise to alleviate the limitations of conventional macrocell-only networks such as poor indoor penetration and coverage holes. However, to fully exploit the benefits of relaying, power control (PC) in the uplink should be readdressed. In this context, PC optimization should be performed jointly on all links, i.e., on the donor-evolved Node B (DeNB)–relay node (RN) link, the DeNB–user equipment (UE) link, and the RN–UE link. This ensures proper management of interference in the network and keeps the receiver dynamic range within limits that preserve the orthogonality of the single-carrier frequency-division multiple access (SC-FDMA) system. In this article, we propose an automated PC optimization scheme which jointly tunes PC parameters in relay deployments. The automated PC optimization can be based on either Taguchi’s method or a meta-heuristic optimization technique such as simulated annealing. To attain a more homogeneous user experience, the automated PC optimization scheme applies novel performance metrics which can be adapted according to the operator’s requirements. Moreover, the performance of the proposed scheme is compared with a reference study that assumes a scenario-specific manual learn-by-experience optimization. The evaluation of the optimization methods within the LTE-Advanced uplink framework is carried out in 3GPP-defined urban and suburban propagation scenarios by applying the standardized LTE Release 8 PC scheme. Comprehensive results show that the proposed automated PC optimization can provide performance similar to the reference manual optimization without requiring direct human intervention during the optimization process. Furthermore, various trade-offs can easily be achieved thanks to the new performance metrics.

1. Introduction

Relaying is considered an integral part of the Fourth Generation (4G) radio access networks, namely IEEE 802.16m and 3rd Generation Partnership Project (3GPP) LTE Release 10 and beyond (LTE-Advanced). Decode-and-forward relay nodes (RNs) are relatively small nodes with low power consumption, which connect to the core network via a wireless relay link through a donor-evolved Node B (donor eNB, DeNB). The wireless backhaul enables deployment flexibility and eliminates the high costs of a fixed backhaul. Thanks to their compact physical characteristics and low power consumption, RNs can be mounted on structures such as lamp posts with power supply facilities. Furthermore, RNs are not subject to strict installation guidelines with respect to radiation, visual disturbance, and planning regulation. Therefore, relaying is regarded as a cost-efficient technology[1]. Previous technical studies have further shown that RNs promise to increase the network capacity and to better distribute resources in the cell, or to extend the cell coverage area[2, 3].

The uplink received power of a user equipment (UE) depends on the path loss which can vary significantly among different UE locations in a cell. Accordingly, without uplink power control (PC), UEs would transmit with the same power level which could yield a high difference between the uplink received powers of different UEs. This would in turn cause a high receiver dynamic range which increases the susceptibility of single carrier-frequency division multiple access (SC-FDMA) to the loss of orthogonality and hence can cause intra-cell interference[4]. In this context, PC decreases the deviation between received power levels of different UEs in the same cell ensuring that the receiver dynamic range does not exceed a predetermined level. Besides, PC is a vital means in the uplink, not only to compensate for channel variations, but also to mitigate the inter-cell interference, and to increase the cell edge and system capacities.

Within the single-hop LTE Release 8 framework, simulation-based performance evaluation of PC has been well elaborated in the literature[5–9]. However, for LTE-Advanced relay deployments, PC parameter optimization has not been widely examined. Relay deployments require a more detailed dimensioning and planning than conventional single-hop networks. Furthermore, in contrast to eNB-only networks, PC is necessary for the relay link (DeNB-RN) because the end-to-end throughput (TP) of RN-served UEs (RUEs) depends on the qualities of both the access (RN-UE) and relay links. As the access and relay links have different propagation conditions than the direct link (DeNB-UE), the distribution of uplink transmit/received power in relay networks can be significantly different from that in macrocell-only networks.

1.1. Motivation for the work

In relay networks, PC parameters on the direct (DeNB-UE), access, and relay links should be optimized jointly in order to find the optimum solution which maximizes a target performance metric. Nonetheless, given the standardized PC parameter ranges in LTE, a brute-force approach, which tests all of the combinatorial possibilities of these parameters, is not feasible as it requires a very large number of network trial runs.a One approach for optimizing the PC parameters is provided in our previous work[10]. It is worth noting that to the best of the authors’ knowledge, Bulakci et al.[10] provide the first study which investigates the PC optimization in LTE-Advanced relay networks. Therein, two PC parameter optimization methods are suggested for urban and suburban scenarios, where the parameters on one of the three links, the direct (DeNB-UE), access, and relay links, are tuned in each step according to the results obtained in the preceding step. The described approaches of[10] can be classified as manual learn-by-experience optimization methods, since in each step a set of values is logically selected for each PC parameter and evaluated against target performance metrics. One difficulty that arises here is that extensive skilled human intervention is required after each optimization step. In addition, such a manual optimization relies on logical rules which are designed specifically for the considered scenario. That is, new rules have to be defined each time the scenario or performance metric is changed. Moreover, depending on the considered scenario, the manual optimization may not always achieve nearly optimum solutions because only a subset of the search space is explored. To tackle these challenges, automated optimization methods, which can work irrespective of the considered scenario and yet avoid a large number of network trial runs, need to be investigated.

In our previous work[11], a joint optimization strategy of PC parameters based on Taguchi’s method that exploits the mutual dependencies of the different links without a priori knowledge was proposed. Therein, the investigation was carried out in the urban scenario only, considering full compensation PC (FCPC), and two conventionally used performance metrics were employed, i.e., the 5%-ile UE TP and the harmonic mean (HM) of the UE TP levels. Investigations showed that similar or better performance was achieved relative to the manual optimization, depending on the considered performance metric.

1.2. Contributions

In this study, we build upon the concepts presented in[11] to provide a comprehensive framework for the automated PC optimization methodology in LTE-Advanced relay deployments. The main contributions of this study are then summarized as follows.

  • In order to avoid the extensive skilled human intervention required by manual optimizations[10], we propose an automated PC optimization scheme. The proposed scheme is thus expected to translate into cost reductions during the network planning phase and into a faster response to network and environment changes which require revisiting the PC settings. In this context, the proposed approach can be seen as yet another feature of network planning.

  • We investigate Taguchi’s method and simulated annealing as two viable options for the proposed automated PC optimization scheme. In this context, simulated annealing is a well-known optimization method which has been used extensively in many engineering problems[12–14]. This meta-heuristic search method significantly reduces the complexity and the number of network trial runs and still converges to a near-optimum system state. Taguchi’s method is another promising optimization method which was first developed for the optimization of manufacturing processes[15] and has recently been introduced in the wireless communication field[11, 16, 17]. Unlike simulated annealing, which heuristically explores the multi-dimensional parameter space of candidate solutions, Taguchi’s method uses a so-called orthogonal array (OA)[18], where a reduced set of representative parameter combinations from the full search space is tested. The number of selected parameter combinations determines the number of experiments being carried out and evaluated using a performance metric. After considering all the experiments’ results, a candidate solution is found and the process is repeated until a desired criterion is fulfilled. Herein, a nearly orthogonal array (NOA) is used instead of an OA, as in[19], to reduce the computational complexity significantly at the expense of a slight degradation in performance.

  • We propose the performance steering concept as a vital part of the automated PC scheme, which enables a flexible adaptation to changing optimization goals and target scenarios. This concept is a necessity to adapt to the different requirements of network operators for different scenarios, e.g., high TP regime is favored over low TP regime in hot-spot scenarios, whereas low TP regime is more vital in coverage-oriented scenarios. In particular, we introduce different novel performance metrics for the automated optimization to cover various optimization goals. We show that these new performance metrics can effectively be used as a means to tackle the increased heterogeneity due to relay deployments especially in scenarios with large inter-site distances (ISDs) such as suburban scenarios. We further show that conventionally used performance metrics cannot yield the expected enhancements in suburban scenarios, which proves the need for such performance metrics.

  • We compare the performance of the automated PC optimization based on Taguchi’s method and simulated annealing with that of the manual optimizations. A thorough evaluation of the optimization strategies within the LTE-Advanced uplink framework is carried out not only in the urban scenario but also in the suburban scenario by considering both fractional PC (FPC) and FCPC. Besides, the scope of the study includes a complexity analysis where we show that the automated PC optimization can significantly reduce the number of network trial runs required to optimize the system performance compared to a brute-force approach.

The remainder of this article is organized as follows. Section 2 provides the background discussion and definitions. In Section 3, the optimization problem and performance steering along with performance metrics are outlined. The optimization methods comprising manual and automated optimizations are presented in Section 4. The system model and simulation assumptions are given in Section 5. In Section 6, detailed performance evaluation and analysis are carried out. Finally, Section 7 concludes the article.

2. Background and definitions

In this section, we first briefly recall the framework of LTE uplink technology. Then, the open-loop FPC scheme of LTE Release 8 is outlined, which is followed by the discussion on the constraints of resource allocation.

2.1. LTE uplink technology and frame structure

LTE uplink has adopted SC-FDMA[20]. The bandwidth is divided into subbands which are called physical resource blocks (PRBs). The PRB defines the resource allocation granularity in LTE. Herein, a transmission bandwidth of 10 MHz is allocated for the uplink; thus, there are 48 PRBs available for data transmission on the physical uplink shared channel (PUSCH) and 2 PRBs are reserved for the uplink control channel.

In this study, we assume frequency-division duplex (FDD) mode. Given the expected frequency reuse one in LTE-Advanced networks, macrocell-served UEs (MUEs) and RUEs are served on the same frequency bands by DeNBs and RNs, respectively. Yet, considering the resource allocation strategy defined for inband Type 1 RNs in[21], relay and access link transmissions are time-division multiplexed. Moreover, users can be scheduled on a subset of the total available PRBs in each transmission time interval (TTI). A TTI (aka subframe) duration is 1 ms and an LTE frame consists of 10 subframes. During the backhaul subframes reserved for the relay link, RUEs are not scheduled and thus experience transmission gaps (Tx. gaps). An example frame structure is given in Figure 1, where two subframes are reserved for the relay link. In particular, a maximum of six backhaul subframes can be semi-statically allocated for the relay link[21].

Figure 1. FDD uplink LTE-Advanced frame structure for Type 1 RNs.

2.2. Uplink open-loop PC

The main task of PC mechanisms is to compensate for the long-term channel variations and to limit the amount of generated inter-cell interference. Yet, the receiver dynamic rangeb of DeNBs and RNs should also be adjusted via PC. A large dynamic range may lead to reduced orthogonality between time-frequency resources within a cell and cause intra-cell interference[4]. To fulfill the aforementioned objectives, FPC[22] is used for the PUSCH to determine the UE transmit power. In this study, FPC is also employed for the relay-specific PUSCH (R-PUSCH) between RNs and the DeNB. Accordingly, the transmit power of a node u (UE or RN) that employs open-loop FPC is given in dBm as

P_u = \min\left\{ P_{\max},\; P_0 + 10\log_{10}(M_u) + \alpha L \right\},
(1)

where:

  • Pmax is the maximum allowed transmit power, with an upper limit of 23 dBm for UE power class 3 and 30 dBm for RN transmissions (optionally 37 dBm for suburban scenarios)[21];

  • P0 is the power offset comprising cell-specific and node-specific components, which is used for controlling the received signal-to-noise ratio (SNR) target; it can be set from −126 dBm to Pmax with a step size of 1 dB;

  • M_u is the number of PRBs allocated to node u;

  • α is a 3-bit cell-specific path loss compensation factor that can be set to 0.0 and from 0.4 to 1.0 with a step size of 0.1; and

  • L is the downlink path loss estimate calculated at the receiving node.

Open-loop PC compensates slow channel variations, i.e., path loss changes including shadowing. If α is set to one in (1), the path loss is fully compensated and the resulting scheme is called FCPC. For a given P0 value, FCPC improves the cell-edge user performance at the cost of increased inter-cell interference due to higher transmit power levels. Yet, the inter-cell interference can be reduced by using α values smaller than one, which can increase the cell-center performance at the cost of penalizing the cell-edge performance[5, 6]. Moreover, one important motivation to study the applicability of the existing FPC for relay-enhanced cells is the desired backward compatibility between LTE Release 8 and LTE-Advanced terminals.
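To make the interplay of the terms in (1) concrete, the following minimal Python sketch evaluates the open-loop formula; the function name and the numerical inputs in the example call are illustrative placeholders, not values taken from this article.

```python
import math

def openloop_tx_power_dbm(p_max, p0, alpha, num_prbs, pathloss_db):
    """Open-loop FPC transmit power of (1) in dBm.

    p_max       : maximum allowed transmit power (e.g., 23 dBm for UE power class 3)
    p0          : power offset controlling the received SNR target
    alpha       : path loss compensation factor (1.0 -> FCPC, < 1.0 -> FPC)
    num_prbs    : number of allocated PRBs (M_u)
    pathloss_db : downlink path loss estimate L
    """
    return min(p_max, p0 + 10 * math.log10(num_prbs) + alpha * pathloss_db)

# Illustrative values only: a UE with 100 dB path loss scheduled on 4 PRBs
print(openloop_tx_power_dbm(p_max=23, p0=-70, alpha=0.8, num_prbs=4, pathloss_db=100))
```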

2.3. Constraints on resource allocation

The main difference of SC-FDMA compared to orthogonal frequency division multiple access (OFDMA) is the single-carrier constraint, where only a set of adjacent PRBs can be allocated to a user (LTE Release 8 constraint for backward compatibility). Furthermore, the maximum number of the UEs that can be scheduled in each TTI is limited by the possible number of scheduling grants which can be carried by the physical downlink control channel (PDCCH). Typically, eight to ten UEs can be scheduled per TTI because of PDCCH limitation (or PDCCH blocking)[23]. In this study, this number is set to eight. Besides, we assume that, in the considered relay scenario, there is no limitation on the relay-specific PDCCH (R-PDCCH) as the relay link experiences good channel conditions due to, e.g., higher elevation and antenna gains, and since a small number of RNs are expected to be deployed per cell; 4-RN deployments are considered herein.

In addition, the maximum number of PRBs, denoted by M_max,u, which can be assigned to a UE depends on the difference between Pmax and the per-PRB power spectral density (PSD) of that UE. The per-PRB PSD of a UE can be obtained via the open-loop component of (1) by setting M_u = 1 such that

\mathrm{PSD}_u = \min\left\{ P_{\max},\; P_0 + \alpha L \right\}.
(2)

The actual per-PRB PSD is given by (2) as long as the UE is not power limited; otherwise, Pmax is spread equally over the assigned PRBs, resulting in a decreased signal-to-interference-plus-noise ratio (SINR) per PRB. Such an assignment may result in outage, especially when the UE is experiencing poor channel conditions. Then, M_max,u is obtained as

M_{\max,u} = \mathrm{round}\left( 10^{\,0.1\,(P_{\max} - \mathrm{PSD}_u)} \right).
(3)

This will limit the number of assigned PRBs per UE, ensuring that power-limited UEs will not be scheduled on more resources than what they can afford, and as a consequence, unused resources can be better utilized by other users resulting in more efficient bandwidth usage. This functionality is called adaptive transmission bandwidth[24]. Moreover, in[25], it is shown that adaptive transmission bandwidth is particularly advantageous for suburban scenarios having large ISDs. Therefore, adaptive transmission bandwidth is applied only for suburban scenarios in this study. It is worth noting that although adaptive transmission bandwidth is necessary to prevent outage, it could yield a higher inhomogeneity within the cell as cell-center UEs are also scheduled on the resources left by power-limited UEs.
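As a minimal illustration of how (2) and (3) drive the adaptive transmission bandwidth, the sketch below derives the per-PRB PSD and the resulting PRB cap; the function names and numbers are our own illustrative choices.

```python
def per_prb_psd_dbm(p_max, p0, alpha, pathloss_db):
    """Per-PRB PSD of (2): the open-loop component of (1) with M_u = 1."""
    return min(p_max, p0 + alpha * pathloss_db)

def max_prbs_per_ue(p_max, psd_dbm):
    """Maximum number of PRBs per (3): how many PRBs fit under P_max at this PSD."""
    return round(10 ** (0.1 * (p_max - psd_dbm)))

# Illustrative example: a UE close to its power limit gets only a few PRBs
psd = per_prb_psd_dbm(p_max=23, p0=-60, alpha=0.8, pathloss_db=100)
print(psd, max_prbs_per_ue(23, psd))   # 20.0 dBm per PRB -> at most 2 PRBs
```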

3. Optimization problem and performance metrics

First, we introduce the relay scenario and the optimization problem to be addressed. Then, the performance steering concept along with different performance metrics is presented.

3.1. Optimization problem

The considered relay deployment along with the different link types is depicted in Figure 2. Cell selection for the UEs is based on the strongest downlink received signal power, whereas RNs are connected to the overlaying macrocell. As shown in Figure 2, RUEs are mostly those at the macrocell edge, whereas MUEs are generally located in the cell center. Accordingly, in order to enhance the overall system performance, the PC parameter optimization should be done on all links considering their interdependencies. Tuning Pmax and P0 on these links simultaneously is a challenging task given the possible parameter ranges discussed in Section 2.2. Therefore, a brute-force approach becomes infeasible due to high computational complexity. A reasonable optimization approach should take into account the mutual dependencies of the relay and access links, since the end-to-end performance is determined by the qualities of both links.

Figure 2. One-tier relay deployment shown in a sector; 4 RNs are deployed at the cell edge.

For the optimization problem, we define the following four configuration parameters:

x_1 = P_0^{\text{direct link}}, \quad x_2 = P_0^{\text{access link}}, \quad x_3 = P_0^{\text{relay link}}, \quad x_4 = P_{\max}^{\text{RUE}}.
(4)

Here, P_0^{direct link}, P_0^{access link}, and P_0^{relay link} are the values of P0 on the direct, access, and relay links, respectively. Note that the maximum RUE transmission power x_4 = P_max^{RUE} is added to the configuration parameter set since it is known from[10] that further gain can be achieved by tuning it. Concretely, it is observed in[10] that to achieve this gain the interference caused by RUEs should be reduced, which can be done by decreasing either P_0^{access link} or P_max^{RUE}. The optimization problem is given by

\left( \hat{x}_1, \hat{x}_2, \hat{x}_3, \hat{x}_4 \right) = \arg\max_{x_1, x_2, x_3, x_4} \; y\!\left( \mathrm{TP}_1, \ldots, \mathrm{TP}_C \right),
(5)

where y is the objective function defined by a system-level performance metric, and C is the total number of UEs in the network from which the statistics are collected.

3.2. Performance steering

By using a proper performance metric as objective function in (5), the system performance can flexibly be steered. In the following, we provide the utilized performance metrics along with their usage within the performance steering context.

  • Γq%: The q-th percentile (%-ile) level of the user TP cumulative distribution function (CDF) targets a certain level on the TP CDF, i.e., the objective function is given by

    y = \Gamma_{q\%} = F_s^{-1}\!\left( \frac{q}{100} \right),
    (6)

where F_s^{-1}(q/100) is the inverse of the CDF evaluated at the q/100 quantile, i.e., the q-th percentile (%-ile). For instance, Γ5% reflects the cell-edge bit rate, or equivalently the cell coverage performance, and Γ50% is the median TP level. This set of performance metrics can be adopted for propagation scenarios where targeting a specific q%-ile in the optimization does not harm other percentiles significantly.

  • Γ_HM^{Q%}: This is the HM of the user TP levels which are higher than the Q-th percentile of the user TP CDF. The corresponding objective function is defined as

    y = \Gamma_{\mathrm{HM}}^{Q\%} = \frac{C^*}{\sum_{c=1}^{C^*} \frac{1}{\mathrm{TP}_c}}, \quad \text{s.t. } \mathrm{TP}_c > F_s^{-1}\!\left( \frac{Q}{100} \right),
    (7)

where C* is the total number of users whose TP levels (TP_c) satisfy the given condition, which depends on the selected cut-off percentile Q. Furthermore, it obviously follows that C* ≤ C, where the description of C is given after (5). This metric prioritizes the performance of cell-edge UEs and thus an optimization using this metric leads to a more homogeneous user experience in the network. For the considered propagation scenario, the parameter Q should be adjusted in such a way that the TP levels of cell-edge UEs, which are in outage or experiencing too low TP levels for some applied parameter settings, can be omitted from the metric calculation. In particular, such low TP levels bias the HM in a negative way by giving much more priority to those users with low TP levels and neglecting other users. Nevertheless, Q should not be set to a large value, e.g., Q > 15%, because otherwise the performance of cell-edge UEs would be neglected. Accordingly, Q > 0 should especially be considered for propagation scenarios with large ISDs, e.g., the suburban scenario. Note that as a special case, if Q = 0 this performance metric corresponds to the conventional HM of the user TP levels, which is denoted by ΓHM.

  • Γ_AM(w1, w2, …, wL): This metric is the weighted arithmetic mean (AM) of the normalized Γ_{q_j%} values as defined in (6), for j = 1, 2,…, L. The objective function is

    y = \Gamma_{\mathrm{AM}}(w_1, w_2, \ldots, w_L) = \frac{\sum_{j=1}^{L} w_j \, \frac{\Gamma_{q_j\%}}{\kappa_{q_j\%}}}{\sum_{j=1}^{L} w_j},
    (8)

where κ_{q_j%} is the normalization factor which corresponds to the q_j%-ile of the UE TP CDF of the reference eNB-only deployment. This normalization is necessary to ensure a proper calculation of the weighted AM in relay deployments, as the TP levels at different UE TP CDF percentiles can be significantly different. That is, without normalization the low UE TP CDF percentiles would be drowned in the high UE TP CDF percentiles, and thus the impact of low UE TP CDF percentiles would be negligible. This performance metric provides a high parametric flexibility through the selection of the weights and percentiles, such that the priority of certain CDF percentiles can be increased by increasing the corresponding weights. As the cell coverage along with a more homogeneous user experience is the target of relay deployments, we focus on lower percentiles, i.e., we select (q1, q2, q3) = (5, 25, 50)%-ile for the performance metric. Then, the performance metric reads as

y = \Gamma_{\mathrm{AM}}(w_1, w_2, w_3) = \frac{w_1 \frac{\Gamma_{5\%}}{\kappa_{5\%}} + w_2 \frac{\Gamma_{25\%}}{\kappa_{25\%}} + w_3 \frac{\Gamma_{50\%}}{\kappa_{50\%}}}{\sum_{j=1}^{3} w_j}.
(9)

For instance, if the cell coverage is to be prioritized, w1 in (9) is selected to be larger than the other weights.

We will mainly utilize Γq% and ΓHM for urban scenarios, while Γ_AM(w1, w2, w3) and Γ_HM^{Q%} are more appropriate for suburban scenarios. In particular, Γ_AM(w1, w2, w3) is considered instead of Γq% for suburban scenarios to better cope with the inhomogeneity of user experience, i.e., diverse user TP levels in the cell, which is due to the larger ISD. Furthermore, as briefly explained above, Γ_HM^{Q%} is preferred over ΓHM in suburban scenarios as it avoids too low TP levels. Recall that Γq% is a conventional performance metric used in the literature for performance evaluation, whereas Γ_HM^{Q%} and Γ_AM(w1, w2, …, wL) are the performance metrics newly proposed in this study.
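All three metric families operate directly on a sample of per-UE TP values, so they can be prototyped in a few lines. The sketch below is a minimal NumPy rendering of (6), (7), and (8) under our own function names; the κ normalization factors and the synthetic TP samples in the example are placeholders.

```python
import numpy as np

def gamma_percentile(tp, q):
    """Gamma_q%: the q-th percentile of the UE TP samples, cf. (6)."""
    return np.percentile(tp, q)

def gamma_hm(tp, cutoff_q=0.0):
    """Gamma_HM^Q%: harmonic mean of the TP values above the Q-th percentile, cf. (7).
    cutoff_q = 0 yields the conventional harmonic mean Gamma_HM."""
    tp = np.asarray(tp, dtype=float)
    kept = tp[tp > np.percentile(tp, cutoff_q)] if cutoff_q > 0 else tp
    return len(kept) / np.sum(1.0 / kept)

def gamma_am(tp, weights, percentiles, kappas):
    """Gamma_AM(w_1,...,w_L): weighted AM of normalized percentile levels, cf. (8)."""
    terms = [w * gamma_percentile(tp, q) / k
             for w, q, k in zip(weights, percentiles, kappas)]
    return sum(terms) / sum(weights)

# Illustrative usage with synthetic TP samples and made-up normalization factors kappa
tp_samples = np.random.lognormal(mean=0.0, sigma=1.0, size=1000)
print(gamma_percentile(tp_samples, 5),
      gamma_hm(tp_samples, cutoff_q=10),
      gamma_am(tp_samples, weights=(2, 1, 1), percentiles=(5, 25, 50), kappas=(0.1, 0.3, 0.7)))
```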

4. Optimization methods

Herein, we describe PC optimization methods in detail.

4.1. Manual optimization

In manual optimization, the logical steps and parameter ranges are determined by using the network knowledge that is obtained through performance metrics and the targets, e.g., cell coverage enhancement and more homogeneous user TP distribution. In the following, according to[10] we describe two manual optimization approaches that differ in terms of adaptation to the considered deployment scenario. For further details on manual optimization, interested readers are referred to[10].

4.1.1. Urban scenario-specific optimization method

This method, referred to as 4 step optimization, is particularly considered for urban scenarios where inter-cell interference due to small ISD is the main limiting factor for the performance[10]. The parameter tuning in each step is based on the performance results from the preceding step. Namely, the four steps are given as follows.

  1. Simulations are carried out to optimize PC parameters of UEs in the eNB-only scenario. Basically, this step aims at obtaining a trade-off between cell-edge and cell-center UEs. Accordingly, two parameter settings are determined depending on the prioritization strategy. Namely, the cell coverage-oriented setting (based on FCPC) prioritizes the 5%-ile user TP and the cell capacity-oriented setting (based on FPC) prioritizes the aggregate user TP (aka cell capacity) at the cost of reduced 5%-ile user performance.

  2. The relay scenario is adopted and the parameters resulting from Step 1 are applied for both MUEs and RUEs. Simulation results of this step are used as a starting point for Steps 3 and 4.

  3. PC parameters of the RUEs are optimized. After RNs are deployed in the network, the inter-cell interference increases. More concretely, RUEs experience very high TP levels while causing interference to MUEs. As we focus on achieving a homogeneous user performance over the whole cell area, being a requirement for LTE-Advanced, the performance of MUEs is prioritized. In particular, the interference imposed by RUEs is reduced by adequately adjusting their PC parameters such that the impact of the interference due to RUEs on the performance of MUEs becomes minimal.

  4. Keeping the PC parameters of the RUEs fixed, the PC parameters of MUEs are optimized. This step aims at further improvement of the performance of MUEs by readjusting their PC parameters. Thus, the 50%-ile and 5%-ile UE TP CDF levels are taken as performance criteria. We note that these percentiles are considered because they mainly reflect MUE performance.

We note that due to stepwise optimization process in this method, the PC parameters for RUEs may not be optimal after Step 4. Nevertheless, simulation results in Section 6.2 will show that automated optimization methods, which jointly optimize PC parameters, and this manual optimization method can achieve similar performances.

Following these steps, the parameter configurations for MUEs and RUEs are tuned. Furthermore, the PC parameter setting for the relay link is optimized as well. The relatively better link towards the DeNB and the requirement for high capacity suggest the cell capacity-oriented setting found in Step 1 as an appropriate starting point. Then, the P0 value on the relay link, P_0^{relay link}, is optimized such that the 50%-ile user TP level is maximized. It is worth noting that the relay link performance is highly interference-limited and the fine tuning of the P0 value does not yield a significant performance enhancement.

4.1.2. Suburban scenario-specific optimization method

This method, referred to as 3 metric optimization, is particularly considered for suburban scenarios where, contrary to urban scenarios, the impact of inter-cell interference decreases drastically due to the larger ISD. On the other hand, the path loss of cell-edge UEs may not be compensated for, and they can easily be driven to power limitation even if only one PRB is assigned. The result is an inhomogeneous user experience over the coverage area, where cell-center UEs experience high TP levels because of reduced inter-cell interference and cell-edge UEs suffer from high path loss. In this regard, this optimization method takes into account the inhomogeneity of the user experience via jointly optimizing several performance metrics[10]. The method comprises three steps:

  1. Simulations are carried out to tune PC parameters of UEs in the eNB-only scenario via jointly optimizing the aggregate, 50%-ile, and 5%-ile user TP levels. In this step, a setting (based on FPC) which enables a good trade-off between the aforementioned performance metrics is attained. We note that these performance metrics are conventionally used in LTE-Advanced performance evaluation[21].

  2. The relay scenario is adopted and the parameters resulting from Step 1 are applied to both MUEs and RUEs. Simulation results of this step are used as a starting point for Step 3.

  3. PC parameters of MUEs and RUEs are reconfigured at DeNBs and RNs, respectively, in the relay scenario. In this step, a fine tuning of the parameter settings (P_0^{direct link}, P_0^{access link}, P_max^{RUE}) is done around the PC setting found in Step 2. Here, the 50%-ile and 5%-ile user TP levels are taken as the performance criteria and the obtained PC setting is referred to as the trade-off setting. Similar to the urban scenario-specific optimization method, these performance metrics are targeted in this step to achieve a homogeneous user performance over the whole cell area.

After having found the PC configurations for MUEs and RUEs, the optimum parameter setting for the relay link is determined. The trade-off setting found in[10] is used as the starting point. The P0 value on the relay link, P_0^{relay link}, is then optimized such that the 50%-ile user TP level is maximized. Furthermore, in contrast to urban scenarios, the fine tuning of P_0^{relay link} can yield a notable performance enhancement.

4.2. Simulated annealing

Simulated annealing is a local search-based optimization method that provides a means to escape local maxima[12]. The method starts by selecting an initial candidate solution vector x = (x1, x2, x3, x4) ∈ Ω, where Ω is the solution space of all candidate solutions and x1, x2, x3, x4 are the PC configuration parameters as defined in (4). Let f(x) be the value of the performance metric y evaluated for the parameters in x. We recall that the definitions of the performance metric y are outlined in Section 3.2. In each step, a new candidate solution x′ is generated randomly from the neighborhood N(x) of the current solution x. The neighborhood structure N(x) is typically defined as the set of candidates that slightly differ from x[12, 26]. In this study, a new candidate solution x′ is generated by adding a random displacement value between [−δ, +δ] to each configuration parameter in x[16]. If f(x′) ≥ f(x), x′ is accepted as the current solution in the next step; otherwise, it is accepted with some probability depending on the so-called temperature parameter T and the magnitude of decrease Δ_{x,x′} in the performance metric. More concretely, a random number n is generated from a uniform distribution between 0 and 1, and it is compared to exp(−Δ_{x,x′}/T). If n is lower than exp(−Δ_{x,x′}/T), x′ is accepted as the current solution; otherwise, it is rejected. The higher the temperature T, the higher the acceptance probability is. Moreover, the higher Δ_{x,x′}, the lower the probability of accepting the newly generated candidate solution is. During the search, the temperature T is decreased slowly until the algorithm converges into a steady state. The simulated annealing algorithm is described in Table 1.

Table 1 Pseudo code for the description of simulated annealing[12, 17]

As simulated annealing is a heuristic approach, decisions have to be made on the initial temperature T0 and the reduction function R(T)[13]. The initial temperature is set such that a candidate solution yielding a predefined decrease ∆max in the optimization function is accepted with a predefined probability p = exp(−∆max/T0)[14]. That is, having defined p and ∆max, T0 can be obtained as T0 = −∆max/ln(p), where ln(·) is the natural logarithm operator. To lower the temperature T, a geometric temperature reduction function is used[13], i.e., R(T) = λT, where λ is a constant less than one.
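As a summary of the loop structure just described (and of the pseudo-code in Table 1), a minimal Python sketch is given below; the parameter names, the integer rounding of candidates, and the treatment of the parameter bounds are our own assumptions, and `objective` stands for one network trial run returning the chosen performance metric.

```python
import math
import random

def simulated_annealing(objective, x0, bounds, delta, t0, lam, inner_g, outer_u):
    """Maximize `objective` over integer PC parameters (sketch of the Table 1 procedure).

    objective : performs one network trial run and returns the metric y for a vector
    x0        : initial candidate solution (x1, x2, x3, x4)
    bounds    : per-parameter (min, max) tuples
    delta     : maximum random displacement per parameter
    t0, lam   : initial temperature and geometric reduction factor, R(T) = lam * T
    inner_g   : evaluations per temperature level (G)
    outer_u   : number of temperature levels (U)
    """
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    temp = t0
    for _ in range(outer_u):              # outer loop: U temperature reductions
        for _ in range(inner_g):          # inner loop: G candidate evaluations
            cand = [min(max(xi + random.randint(-delta, delta), lo), hi)
                    for xi, (lo, hi) in zip(x, bounds)]
            f_cand = objective(cand)
            # accept improving moves always, worse moves with probability exp(-delta/T)
            if f_cand >= fx or random.random() < math.exp((f_cand - fx) / temp):
                x, fx = cand, f_cand
                if fx > best_f:
                    best_x, best_f = list(x), fx
        temp *= lam                       # geometric cooling
    return best_x, best_f
```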

4.3. The optimization procedure based on Taguchi’s method using NOA

Let us next introduce Taguchi’s method[16]. The optimization approach is depicted in Figure 3 and will be discussed in detail in the following.

Figure 3. The iterative optimization procedure based on Taguchi's method using NOA.

4.3.1. Construct the proper NOA

Originally, Taguchi’s method uses an OA which contains a reduced set of N parameter combinations from the full search space[18]. Every parameter x_t has a set of testing values corresponding to a set of levels, i.e., level 1 is mapped to the first testing value of a parameter, level 2 to the second value, and so on (see Section 4.3.2). Each of the N parameter combinations is tested in a corresponding experiment i where the objective function, equivalently the performance metric (see Section 3.2), y is evaluated, resulting in a measured response y_i. In an OA, each testing value of a parameter x_t is tested at least once with every value of every other parameter x_j, j ≠ t. This property of the OA accounts for the interactions that might exist between the configuration parameters. In order to reduce the number of experiments, the OA is replaced by an NOA[19]. In an NOA, each testing value of a parameter x_t is not necessarily tested with every value of every other parameter x_j, j ≠ t. Therefore, an NOA considers the interactions among the parameters only partially and is easier to construct. An NOA can be constructed for any number of parameters and any number N of experiments at the expense of considering the interactions among the configuration parameters only partially. Various algorithms exist for constructing an NOA. In this study, the NOA is built using the algorithm described in[27].

Therefore, the first step in the optimization procedure is to construct a proper NOA. For this purpose, the number of configuration parameters has to be determined. In our case, the total number of configuration parameters is k = 4 as given in (4). Thus, an NOA having four columns should be constructed with a predefined number of experiments N and levels s. In this study, we construct an NOA having N = 36 experiments and s = 9 levels (see Table2). Using this NOA, each parameter will be tested with nine different values at each iteration of the algorithm. Every column of the NOA corresponds to a configuration parameter. For example, the first column can be assigned to x1, the second to x2, and so on. Having constructed the required NOA, the levels of each parameter should be mapped to parameter values in order to conduct the experiments.

Table 2 An NOA having k = 4 configuration parameters, N  =  36 experiments, s  =  9 levels with the measured responses and their corresponding SN ratios

4.3.2. Map each level to a parameter value

Let min{x_t} and max{x_t} be the minimum and the maximum feasible values for parameter x_t, respectively. In the first iteration, m = 1, the center value of the optimization range for parameter x_t is defined as

V_t^{(m)} = \frac{\min\{x_t\} + \max\{x_t\}}{2}.
(10)

In any iteration m, the level ℓ = ⌈s/2⌉ is mapped to V_t^{(m)}. The other s − 1 levels are distributed around V_t^{(m)} by adding or subtracting an integer multiple of a step size β_t^{(m)}. For m = 1, the step size is defined as

\beta_t^{(m)} = \frac{\max\{x_t\} - \min\{x_t\}}{s + 1}.
(11)

In iteration m, the mapping function f_t^{(m)} from a level ℓ to a dedicated value of the parameter x_t can be described as

f_t^{(m)}(\ell) = \begin{cases} V_t^{(m)} - \left( \lceil s/2 \rceil - \ell \right) \beta_t^{(m)}, & 1 \le \ell \le \lceil s/2 \rceil - 1, \\ V_t^{(m)}, & \ell = \lceil s/2 \rceil, \\ V_t^{(m)} + \left( \ell - \lceil s/2 \rceil \right) \beta_t^{(m)}, & \lceil s/2 \rceil + 1 \le \ell \le s. \end{cases}
(12)

For instance, consider the parameter x_4 = P_max^{RUE} having a minimum value of min{x_4} = 7 dBm and a maximum of max{x_4} = 23 dBm. If x_4 is tested with 9 levels, i.e., s = 9, level ℓ = 5 is mapped in the first iteration to (7 + 23)/2 = 15 dBm, level 4 to 15 − β_4^{(1)} = 13.4 dBm, level 6 to 15 + β_4^{(1)} = 16.6 dBm, and so on. As the power setting x_t cannot be a decimal value, the mapped value f_t^{(m)}(ℓ) of a level ℓ is further quantized to the nearest integer. The values of V_t^{(m)} and β_t^{(m)} are updated at the end of each iteration if the termination criterion (see Section 4.3.5) is not met.
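For illustration, the level-to-value mapping of (10)-(12) together with the integer quantization can be written compactly as below; this reuses the P_max^{RUE} example from the text and is a sketch under our own naming.

```python
def map_level_to_value(level, center, step, s):
    """Map level 1..s to a parameter value per (12), then quantize to the nearest integer."""
    mid = (s + 1) // 2                  # the middle level, e.g., 5 for s = 9
    return round(center + (level - mid) * step)

# Example from the text: x4 = Pmax_RUE in [7, 23] dBm tested with s = 9 levels
s = 9
center = (7 + 23) / 2                   # V = 15 dBm, cf. (10)
step = (23 - 7) / (s + 1)               # beta = 1.6 dB, cf. (11)
print([map_level_to_value(l, center, step, s) for l in range(1, s + 1)])
# -> [9, 10, 12, 13, 15, 17, 18, 20, 21]
```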

4.3.3. Apply Taguchi’s method

To interpret the experimental results, Taguchi’s method converts the measured responses to the so-called signal-to-noise (SN) ratios[13] which are not to be confused with the SNRs of the received signals. The SN ratio is computed for each experiment i as

\mathrm{SN}_i = 10 \log_{10}\!\left( y_i^2 \right).
(13)

Then, the average SN ratio is computed for each parameter and level. In the example of Table 2, the average SN ratio of x_1 at level ℓ = 1 is computed by averaging the SN ratios of the experiments where x_1 is tested at level 1, i.e., SN_1, SN_2, SN_3, and SN_4. The best level of each parameter is the level having the highest average SN ratio. According to the mapping function f_t^{(m)}, the best setting of a parameter x_t in iteration m is found and denoted by V_t^{(best,m)}.

4.3.4. Shrink the optimization range

At the end of each iteration, the termination criterion is checked. If it is not met, the best values found in iteration m are used as center values for the parameters in the next iteration m + 1:

V_t^{(m+1)} = V_t^{(\mathrm{best},\, m)}.
(14)

It may happen that the best value of a parameter x_t found in iteration m is close to min{x_t} or max{x_t}. In this case, there is a need for a procedure to consistently check whether the mapped value of a level is within the optimization range. Moreover, the optimization range is reduced by multiplying the step size of each parameter x_t by a reduction factor ξ < 1:

\beta_t^{(m+1)} = \xi \, \beta_t^{(m)}.
(15)

4.3.5. Check the termination criterion

With every iteration, the optimization range is reduced and the possible values of a parameter get closer to each other. Hence, the set used to select a near-optimal value for a parameter becomes smaller. The optimization procedure terminates when all step sizes of the parameters are less than a predefined threshold ε, i.e.,

\beta_t^{(m)} < \varepsilon, \quad \forall t.
(16)

In this study, the algorithm ends when the mapped values of levels 1 and 9 do not differ by more than 1 dB for each parameter. To this end, ε is set as a rough approximation to 1/(s – 1) = 1/8.
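Putting Sections 4.3.2 through 4.3.5 together, one iteration of the procedure can be sketched as follows; the NOA is assumed to be given (e.g., constructed with the algorithm of [27]), `run_experiment` stands for one network trial run, and all names are our own.

```python
import math

def taguchi_iteration(noa, centers, steps, s, run_experiment):
    """One iteration of Taguchi's method over an N x k nearly orthogonal array (NOA).

    noa            : list of N rows, each a list of k levels in 1..s
    centers, steps : current center value V_t and step size beta_t per parameter
    run_experiment : maps a parameter vector to the metric y (one network trial run)
    Returns the best value found per parameter, used as the next centers, cf. (13)-(14).
    """
    k, mid = len(centers), (s + 1) // 2
    # Run the N experiments and convert the measured responses to SN ratios, cf. (13)
    sn = []
    for row in noa:
        params = [round(centers[t] + (row[t] - mid) * steps[t]) for t in range(k)]
        sn.append(10 * math.log10(run_experiment(params) ** 2))
    # For each parameter, pick the level with the highest average SN ratio
    best_values = []
    for t in range(k):
        avg = {}
        for lvl in range(1, s + 1):
            vals = [sn[i] for i, row in enumerate(noa) if row[t] == lvl]
            if vals:                      # NOA columns need not contain every level
                avg[lvl] = sum(vals) / len(vals)
        best_level = max(avg, key=avg.get)
        best_values.append(round(centers[t] + (best_level - mid) * steps[t]))
    return best_values
# The caller then shrinks the range, steps[t] *= xi, and repeats until (16) holds.
```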

5. System model

This section describes essential features of the system model such as applied resource sharing techniques and the channel models that pertain to the propagation scenarios. The simulation parameters complying with the latest 3GPP guidelines within the LTE-Advanced framework[21] are given at the end of the section.

5.1. Resource sharing

The performance of relay-enhanced networks depends significantly on the resource allocation strategy. For RUEs, the experienced end-to-end TP, TP_e, is defined as the minimum of the TPs on the relay and access links, as given in (17).

\mathrm{TP}_e = \min\left\{ \mathrm{TP}_{\text{eNB-RN}},\; \mathrm{TP}_{\text{RN-UE}} \right\}.
(17)

In this study, a resource fair round robin scheduling is applied for all UEs. We assume that subframes configured for the relay link are assigned exclusively to RNs at the DeNB, i.e., co-scheduling of MUEs and RNs is not considered. For the resource allocation on the backhaul subframes and access link, we utilize the scheme in[28], where resource shares of the RNs on the relay link are proportional to the number of attached RUEs. The available capacity on the relay link is then distributed among RUEs utilizing max–min fairness. Moreover, the number of backhaul subframes to be allocated to RNs is chosen such that the overall system performance is optimized; two and four subframes are configured in case of urban and suburban 4-RN deployments, respectively[28].
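The resource-sharing rule for RUEs can be condensed into a short sketch: the backhaul capacity granted to an RN is split among its RUEs with max-min fairness, and each RUE's end-to-end TP follows (17). The code below is our own simplification that takes the per-RUE access-link TP values as given.

```python
def rue_end_to_end_tp(relay_link_tp, access_tp_per_rue):
    """End-to-end TP of the RUEs attached to one RN, cf. (17).

    relay_link_tp     : backhaul capacity available to this RN (its share of the
                        backhaul subframes, proportional to its number of RUEs)
    access_tp_per_rue : access-link TP of each attached RUE
    The backhaul capacity is distributed with max-min fairness, each RUE being
    additionally capped by its own access-link TP, so the returned values are
    already the minimum of relay and access link TPs.
    """
    remaining, result = relay_link_tp, {}
    pending = sorted(enumerate(access_tp_per_rue), key=lambda kv: kv[1])
    while pending:
        fair_share = remaining / len(pending)
        idx, cap = pending[0]
        if cap <= fair_share:        # access-limited RUE: it only needs its cap
            result[idx] = cap
            remaining -= cap
            pending.pop(0)
        else:                        # all remaining RUEs can take the equal share
            for idx, cap in pending:
                result[idx] = fair_share
            pending = []
    return [result[i] for i in range(len(access_tp_per_rue))]

# Illustrative: a 10 Mbps backhaul share split among RUEs with access TPs 2, 6, 12 Mbps
print(rue_end_to_end_tp(10.0, [2.0, 6.0, 12.0]))   # -> [2.0, 4.0, 4.0]
```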

5.2. Simulation parameters

The simulated network is represented by a regular hexagonal cellular layout with 19 tri-sectored sites, i.e., 57 cells. RNs admit a regular outdoor deployment at the sector border and indoor users are assumed, where 25 uniformly distributed UEs are dropped per sector and the full buffer traffic model is applied. In total, 50 user drops (or snap-shots) are simulated using a Matlab-based system-level semi-static simulator, where results are collected from the innermost sector only, to ensure proper modeling of interference (two tiers of tri-sector sites). It is worth noting that for the final CDF plots 200 snap-shots are simulated. The number of snap-shots is selected to be large enough such that the difference between repeated tests is negligible. Simulation parameters follow the latest parameter settings agreed in 3GPP[21] and are summarized in Table 3. Moreover, all available resources in a cell are assumed to be used and hence a rather pessimistic interference modeling is considered on the access link.

R = S \cdot \begin{cases} 0, & \mathrm{SINR} < \mathrm{SINR}_{\min}, \\ \mathrm{BW} \cdot \mathrm{SE}_{\max}, & \mathrm{SINR} \ge \mathrm{SINR}_{\max}, \\ \mathrm{BW} \cdot B_{\mathrm{eff}} \log_2\!\left( 1 + A_{\mathrm{eff}} \, \mathrm{SINR} \right), & \text{otherwise}. \end{cases}
(18)
Table 3 Simulation parameters

The SINR-to-link-TP mapping is carried out by the approximation given in (18), where the bandwidth efficiency (Beff) and SINR efficiency (Aeff) given in Table 3 are utilized to adapt the mapping to LTE specifications. The approximation is based on Shannon’s capacity formula adjusted by the two parameters Beff and Aeff[29]. Further, a minimum SINR level SINRmin = −7 dB is considered, below which data detection is not possible. This limit is introduced due to control channel requirements. In (18), R is the per-PRB TP, BW is the bandwidth per PRB, SEmax is the maximum spectral efficiency corresponding to the highest modulation and coding scheme (MCS) for a given SINRmax, and S is the overhead scaling accounting for LTE uplink overhead. The TP is computed from the SINR using the Shannon approximation similarly as described in[30]. An overhead of 25% is assumed, which accounts for control symbols and pilots. Such a mapping of the SINR is used to model the adaptive MCS applied in the system.
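A direct transcription of (18), as reconstructed above, is given below; the numerical values in the example call (Beff, Aeff, SEmax, and the SINR limits) are placeholders rather than the actual Table 3 entries.

```python
import math

def prb_throughput(sinr_db, bw_hz, b_eff, a_eff, se_max, sinr_min_db, sinr_max_db,
                   overhead=0.25):
    """Per-PRB TP mapping of (18): overhead-scaled, truncated Shannon approximation."""
    s = 1.0 - overhead                           # overhead scaling S (25% control/pilots)
    if sinr_db < sinr_min_db:
        return 0.0                               # below the detection limit
    if sinr_db >= sinr_max_db:
        return s * bw_hz * se_max                # capped by the highest MCS
    sinr_lin = 10 ** (sinr_db / 10.0)
    return s * bw_hz * b_eff * math.log2(1 + a_eff * sinr_lin)

# Placeholder parameter values (stand-ins for the Table 3 entries), 180 kHz PRB bandwidth
print(prb_throughput(sinr_db=10, bw_hz=180e3, b_eff=0.6, a_eff=1.0,
                     se_max=4.3, sinr_min_db=-7, sinr_max_db=22))
```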

Relay site planning is assumed as modeled in[21]. In particular, an increased line-of-sight (LOS) probability is considered and a bonus of 5 dB is added on the relay link when experiencing non-LOS (NLOS) propagation conditions. Two antenna sets are considered for RNs. Directional antennas are assumed at the DeNBs and RNs for backhaul transmission, while omni-directional antennas are assumed for the RN access link transmission. Log-normal shadow fading is modeled as well and applied for NLOS propagation conditions only, while fast fading is not simulated.

6. Simulation results and analysis

6.1. Evaluation methodology

The automated PC optimization based on either Taguchi’s method or simulated annealing and manual optimization are compared with respect to achieved TP performances. Moreover, computational complexities are given in terms of the number of network trial runs necessary for the optimization methods. The complexity of the manual optimization is not addressed since extensive skilled human intervention is required during the optimization process. Furthermore, it is shown that a brute-force approach is not feasible and the proposed automated PC optimization achieves substantially lower complexity.

6.1.1. Performance evaluation

The performance evaluation is carried out assuming the 3GPP urban (ISD 500 m) and suburban (ISD 1732 m) scenarios where four RNs are deployed per cell. For the eNB-only deployment, the cell capacity-oriented setting with P0 = −55 dBm and α = 0.6 (FPC), i.e., the resultant setting of Step 1 in four-step manual optimization in urban scenario, and the trade-off setting with P0 = −63 dBm and α = 0.6 (FPC), i.e., the resultant setting of Step 1 in three-metric optimization in suburban scenario[10] are applied. Recall that the eNB-only deployment is taken as a benchmark to determine the relative gains of different optimization strategies.

The performances of the automated and manual optimization methods are compared via UE TP CDFs in terms of achieved TP levels at different CDF percentiles. Recall that the 5%-ile UE TP CDF reflects the cell-edge user performance, while the 50%-ile UE TP CDF is the median TP level and also gives an intuition about the mean UE TP. Besides, the impact of using FPC (α = 0.6) and FCPC (α = 1.0) on the system performance of urban scenarios is addressed, while only FPC is considered in suburban scenarios since it is shown that FCPC renders a suboptimum system performance in suburban scenarios[10]. Furthermore, the numerical results for both joint P0 and joint P0 and Pmax optimizations are analyzed. In case of joint P0 optimization, the P0 values on all links are jointly optimized while fixing Pmax of UEs and RNs to the upper bounds. Note that to perform joint P0 optimization for Taguchi’s method, it is enough to drop the column of x_4 = P_max^{RUE} from the NOA and follow the optimization algorithm as described in Section 4.3. Moreover, for FCPC and FPC the ranges P0 ∈ [−113, −83] dBm and P0 ∈ [−73, −43] dBm are considered, respectively, while the range P_max^{RUE} ∈ [7, 23] dBm is considered for both. Within these ranges, the PC parameters selected by Taguchi’s method and simulated annealing are rounded to the nearest integers before any network trial run, whereas different step sizes, e.g., 2 and 6 dB, depending on the logical step are used for manual optimization.

6.1.2. Evaluation of computational complexity

The criterion used for complexity is the number of times the performance metric y is evaluated[17]. In case of simulated annealing, the performance metric y is evaluated G times in the inner loop of the pseudo-code given in Table1, i.e., between lines 10 and 16, and this process is repeated U times in the outer loop defined in lines 8 and 19. Therefore, the total number of evaluations performed by simulated annealing is G • U. In case of Taguchi’s method, N experiments are performed at each iteration and the algorithm terminates after M iterations when the termination criterion in (16) is met. Note that Taguchi’s method decides on a new move after conducting N evaluations of the performance metric y in contrast to simulated annealing that decides on a move after each evaluation. The total number of evaluations performed by Taguchi’s method is then N • M.

In order to have a fair comparison between the two optimization methods in terms of TP performance, the same computational complexity is applied: Simulated annealing and Taguchi’s method are run for the same number of evaluations and the TP performances of their optimized configuration parameters are compared, i.e., G • U = N • M.

Then, to visualize the convergence rate of each algorithm, the value of the performance metric y is plotted as a function of the number of evaluations. The parameters used by Taguchi’s method and simulated annealing are summarized in Table4. Besides, considering the parameter ranges given in the previous section and the optimization problem in (5), a brute-force approach requires 29,791 and 506,447 network trials for joint P0 and joint P0 and Pmax optimizations, respectively. Moreover, each network trial run still requires significant time to collect reliable statistics. Therefore, such an approach is not feasible. In addition, according to the selected parameters for the automated PC optimization given in Table4, G • U = N • M = 540 network trials are required in the worst case, which implies less than 2 and 0.2% of the total network trial runs needed in the brute-force approach for joint P0 and joint P0 and Pmax optimizations, respectively.
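For concreteness, the quoted brute-force counts are consistent with a 1-dB grid over the ranges of Section 6.1.1, i.e., 31 integer P0 values per link and 17 integer P_max^{RUE} values; this is our own back-of-the-envelope check rather than a derivation from the article:

\underbrace{31 \times 31 \times 31}_{P_0 \text{ on the direct, access, and relay links}} = 29{,}791,
\qquad
29{,}791 \times \underbrace{17}_{P_{\max}^{\mathrm{RUE}} \in [7,\,23]\ \mathrm{dBm}} = 506{,}447.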

Table 4 Input parameters for simulated annealing and Taguchi's method

6.2. Simulation results

In this section, the simulation results are first analyzed for urban scenarios. Then, the simulation results of suburban scenarios are presented.

6.2.1. Urban scenario

First, the numerical results of joint P0 optimization are analyzed, where Γ5% and ΓHM are used as the performance metrics. In addition, the results are presented for joint P0 and Pmax optimization, where Γ5% is used as the performance metric to exemplify the impact of this optimization. A comparison of both optimizations is as well given. Second, it is illustrated how the resultant performance is impacted via different performance metrics within the performance steering context.

6.2.1.1. Joint P0 optimization versus joint P0 and Pmax optimization

In Figure 4, the joint P0 optimization is considered. The convergence curves of the optimization procedures are shown with respect to the number of evaluations for both performance metrics Γ5% and ΓHM. Besides, the corresponding UE TP CDFs are depicted in Figure 5 along with the curves of manual optimization.

Figure 4. The convergence of Taguchi's method and simulated annealing for joint P0 optimization in urban scenario; (a) using Γ5% and (b) using ΓHM as performance metrics.

Figure 5. The UE TP CDFs for Taguchi's method and simulated annealing for joint P0 optimization in urban scenario; the curves for manual optimization, which are the same in both figures, are given as well; (a) using Γ5% and (b) using ΓHM as performance metrics.

In Figure 4a, it is observed that, utilizing Γ5% as the performance metric, both Taguchi’s method and simulated annealing converge to similar 5%-ile UE TP CDF values when either FPC or FCPC is adopted. Nevertheless, as noticed in Figure 5a, FCPC has a slightly better performance for the TP CDF percentiles below the 5%-ile. On the other hand, FPC yields significant gains at the higher TP levels within a wide range of percentiles. Moreover, it can be observed that Taguchi’s method slightly outperforms both simulated annealing and manual optimization, where manual optimization lies between the two. In addition, Figure 4b depicts that for both FPC and FCPC Taguchi’s method and simulated annealing converge to the same value when ΓHM is used as the performance metric. Besides, using FPC a higher final HM of the UE TP values is reached, thanks to the higher TP levels achieved over a wide range of CDF percentiles, as shown in Figure 5b. Another important observation in Figure 5b is that for each PC strategy, namely FPC and FCPC, the curves of all three methods almost overlap. This justifies the statement that ΓHM provides a trade-off performance between higher and lower TP CDF percentiles for Taguchi’s method and simulated annealing, noting that the logical steps of manual optimization aim at such a trade-off.

The obtained parameter settings are tabulated in Table 5. It can be seen that different parameter combinations can yield similar performances. In particular, the parameter values for the access (in RN cells) and relay links can be noticeably different. This stems from two facts. First, the end-to-end TP values of the RUEs are mainly determined by the relay link capacity, as it is the bottleneck on the UE-DeNB end-to-end link. Thus, a PC parameter change on the access link does not affect the end-to-end TP provided that the access link capacity remains larger than that of the backhaul link. Second, in the urban scenario, due to the smaller ISD, the relay link performance is also interference-limited. Accordingly, an increase or decrease in the transmit power level of RNs on the relay link does not result in a significant performance change provided that the wanted signal power remains much higher than the thermal noise power. It is also worth noting that the relay deployment significantly outperforms the eNB-only deployment, e.g., using FPC along with ΓHM, the methods achieve 123 and 61% gains at the 5%-ile and 50%-ile TP CDF levels, respectively (see Figure 5b).

Table 5 Parameter configurations for joint P 0 optimization in urban scenario

Next, we adopt the Γ5% performance metric to illustrate the impact of the joint P0 and Pmax optimization. The convergence of the methods is depicted in Figure 6. Moreover, the corresponding UE TP CDFs are shown in Figure 7. Note that the CDF curves of manual optimization are also obtained by considering joint P0 and Pmax optimization. In addition, the optimized PC parameters are tabulated in Table 6.

Figure 6. The convergence of Taguchi's method and simulated annealing using Γ5% as performance metric for joint P0 and Pmax optimization in urban scenario.

Figure 7. The UE TP CDFs for Taguchi's method and simulated annealing using Γ5% as performance metric for joint P0 and Pmax optimization in urban scenario.

Table 6 Parameter configurations for joint P 0 and P max optimization in urban scenario

Comparing Figure 6 with Figure 4a, it can be seen that when Pmax optimization is taken into account, some improvement is obtained at the 5%-ile UE TP CDF level, where the improvement is more pronounced for FCPC (around 2.3% relative to joint P0 optimization) and marginal for FPC (around 1.3% relative to joint P0 optimization). However, joint P0 and Pmax optimization tends to have a slower convergence than joint P0 optimization. For instance, simulated annealing with FPC requires 40 and 60 evaluations to converge for joint P0 optimization and joint P0 and Pmax optimization, respectively. Besides, as observed in Figure 6, for FPC both Taguchi’s method and simulated annealing converge to the same 5%-ile UE TP CDF level, whereas for FCPC Taguchi’s method converges to a larger value. On the other hand, Figure 7 depicts that using FCPC, Taguchi’s method shows slightly poorer performance at high percentiles compared to manual optimization. Moreover, simulated annealing yields a substantial performance degradation compared to both Taguchi’s method and manual optimization. Accordingly, compared to joint P0 optimization, Taguchi’s method enables a trade-off between high and low percentiles if joint P0 and Pmax optimization along with FCPC is considered. However, joint P0 and Pmax optimization is not justified for simulated annealing compared to joint P0 optimization because of the aforementioned performance degradation (see Figure 7).

It is worth noting that, in the urban scenario, simulated annealing converges faster than Taguchi’s method within the context of PC parameter optimization in relay deployments, as observed in Figures 4 and 6.

6.2.1.2 Performance steering

Unlike in large-ISD scenarios, power limitation and user outage are less pronounced in the urban scenario, which translates into a more homogeneous user performance over the cell area. As a result, the performance metrics which target a specific UE TP CDF level, i.e., Γq%, can be utilized to steer the performance toward the desired performance level without causing significant performance degradation at other performance levels. This is illustrated in Figure 8, where joint P0 optimization along with simulated annealing is considered in case of FPC. Therein, the curve of manual optimization is utilized as a reference to compare achieved gains. It can be seen that via Γ50% the 50%-ile UE TP CDF can be further increased at the cost of a performance degradation below the 20%-ile, whereas via Γ5% the best 5%-ile UE TP CDF can be achieved with worse performance at higher percentiles. Furthermore, compared to ΓHM (see Figure 5b), using Γ_HM^{Q%} with Q > 0 the priority can be given to higher percentiles. For example, using Γ_HM^{10%} marginal gains can be achieved from the 30%-ile onwards; however, the performance degrades below the 18%-ile (see Figure 8). In addition, similar to ΓHM, a trade-off between lower and higher UE TP CDF percentiles is observed when Γ_AM(1,1,1) is used as performance metric. Besides, because of the homogeneity of the urban scenario, employing different weights, e.g., (w1, w2, w3) = {(2, 1, 1), (1, 1, 2)}, for Γ_AM(w1, w2, w3) still yields comparable performances (UE TP CDFs not shown herein for simplicity). As a result, Γq% is more appropriate for targeting a specific performance level without degrading other UE TP CDF percentiles significantly. In Table 7, the optimized PC parameter settings are tabulated for different performance metrics, also including those for which UE TP CDFs are not shown.

Figure 8. The UE TP CDFs for simulated annealing using FPC and different performance metrics for joint P0 optimization in urban scenario.

Table 7 Parameter configurations for different performance metrics for joint P0 optimization and FPC in the urban scenario

6.2.2. Suburban scenario

First, the numerical results for joint P0 optimization and for joint P0 and Pmax optimization are analyzed and compared. Second, the effectiveness of performance steering is demonstrated by means of different performance metrics. In particular, within the performance-steering context, it is shown how the proposed performance metrics differ from the conventional performance metrics, i.e., Γq%, and how they address the inhomogeneity of the suburban scenario, which is due to its large ISD.

6.2.2.1. Joint P0 optimization versus joint P0 and Pmax optimization

The performances of joint P0 optimization and joint P0 and Pmax optimization are compared in Figure 9 for Taguchi's method using ΓAM(1,1,1) as the performance metric. Note that setting all weights to one implies that the target UE TP CDF percentiles have the same priority level. According to the UE TP CDFs, it can be seen that joint P0 and Pmax optimization does not bring additional gain compared to joint P0 optimization. A similar result is obtained when simulated annealing is employed. Therefore, we focus on joint P0 optimization in what follows.

Figure 9

The UE TP CDFs for Taguchi's method using ΓAM(1,1,1) as the performance metric, comparing joint P0 and Pmax optimization with joint P0 optimization in the suburban scenario.

6.2.2.2. Performance steering

The suburban scenario is characterized by its large ISD and hence by a high inhomogeneity in user performance. In particular, cell edge UEs usually experience low received SNR values at the access node and can easily be driven into power limitation, which implies a limited resource allocation to prevent outage (see Section 2.3). On the other hand, cell center UEs can make use of the additional resources which cannot be used by the cell edge UEs, and at the same time they experience lower interference levels, thanks to the large ISD. Consequently, cell edge UEs experience low TP levels while cell center UEs can achieve very high TP levels. Performance steering takes such factors into account and increases the flexibility in achieving target performance levels. In the following, we illustrate this concept in detail. Specifically, we show the performance difference between conventional performance metrics, e.g., the 5%-ile and 50%-ile UE TP CDF levels, and the proposed performance metrics. We also exemplify how adjusting the proposed performance metrics can prioritize different UE TP CDF percentiles.

  • Using ΓHM,Q% as the performance metric:

For different propagation scenarios, the cut-off percentile Q can be adapted according to the target performance requirements. Therefore, the cut-off percentile can be considered a means of performance steering; however, its flexibility is rather limited by the considered scenario. The 10%-ile UE TP CDF is selected as the cut-off percentile for the suburban scenario because it provides a good trade-off between low and high percentiles. This trade-off is illustrated for both Taguchi's method and simulated annealing in Figure 10, where the results of manual optimization are also presented as a reference. In Figure 10, it is observed that, for both Taguchi's method and simulated annealing, using ΓHM instead of ΓHM,10% as the performance metric yields slightly better performance up to the 15%-ile UE TP CDF at the cost of reduced performance at higher percentiles. This behavior is more pronounced for Taguchi's method, where the performance degradation at higher percentiles becomes significant, as depicted in Figure 10a. Thus, the use of ΓHM,10% is preferred over ΓHM.

Figure 10

The UE TP CDFs using ΓHM and ΓHM,10% as performance metrics in the suburban scenario; (a) Taguchi's method and (b) simulated annealing.

In addition, the convergence curves for Taguchi's method and simulated annealing are depicted in Figure 11, where ΓHM,10% is used as the performance metric. It is observed that both optimization strategies converge to similar values. This observation is also supported by the similar PC parameter settings tabulated in Table 8. Notably, in contrast to the urban scenario, when the performances of the different optimization strategies are similar, the PC parameter settings also turn out to be similar, especially for the access link (RUEs) and the relay link. The reason is twofold. First, due to the increased RN cell range (see the different path loss models in Table 3), the access link capacity can be lower than that of the relay link and can thus determine the end-to-end TP level of the RUEs. Second, due to the increased ISD, the relay link performance is not strictly interference limited, and an adjustment in the transmit power level can increase or decrease the performance significantly.
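The first point can be illustrated with a simple back-of-the-envelope check: under decode-and-forward relaying, the end-to-end TP of an RUE cannot exceed the weaker of its access-link TP and the share of relay-link capacity available to it. The function and numbers below are illustrative assumptions only.

```python
def rue_end_to_end_tp(access_tp, relay_link_tp, relay_share=1.0):
    # End-to-end TP of an RN-served UE is capped by the weaker hop; relay_share
    # models the (assumed) fraction of relay-link resources granted to this UE.
    return min(access_tp, relay_share * relay_link_tp)

print(rue_end_to_end_tp(access_tp=3.0, relay_link_tp=20.0, relay_share=0.2))  # access-limited: 3.0
print(rue_end_to_end_tp(access_tp=6.0, relay_link_tp=20.0, relay_share=0.2))  # backhaul-limited: 4.0
```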

Figure 11

The convergence of Taguchi's method and simulated annealing using ΓHM,10% as the performance metric in the suburban scenario.

Table 8 Parameter configurations for different performance metrics in the suburban scenario

  • Using ΓAM(w1,w2,w3) as the performance metric:

We first compare the conventional performance metrics, namely Γ5%, Γ25%, and Γ50%, with the weighted AM. Among the different weight combinations, ΓAM(1,1,1) is used for illustration. Moreover, as Taguchi's method and simulated annealing exhibit similar performances (cf. Table 8), Taguchi's method is selected for the comparison. The UE TP CDFs are plotted in Figure 12, where the curve of manual optimization is also shown as a reference. It can be seen that the conventional performance metrics optimize their targeted UE TP CDF levels; however, performance degradation is observed at the other UE TP CDF levels. On the other hand, when ΓAM(1,1,1) is employed as the performance metric, the three UE TP CDF percentiles are equally prioritized and a trade-off can be reached. Moreover, the performance of manual optimization, which already targets a trade-off between different UE TP CDF percentiles, is similar to that of Taguchi's method using ΓAM(1,1,1).
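For concreteness, the sketch below contrasts a conventional single-percentile metric with a weighted-AM metric formed over the 5%, 25%, and 50% UE TP CDF levels. The normalization by the weight sum is our assumption, and the per-percentile scaling introduced in (9) is omitted, so absolute values should not be compared across weight choices (cf. the remark on Figure 14 below).

```python
import numpy as np

def gamma_am(ue_tp, w=(1.0, 1.0, 1.0), percentiles=(5, 25, 50)):
    # Weighted arithmetic mean of selected UE TP CDF levels (illustrative form).
    levels = np.percentile(ue_tp, percentiles)
    return float(np.dot(w, levels) / np.sum(w))

tp = [0.2, 0.5, 1.1, 2.3, 3.0, 4.8, 6.1, 7.9, 9.4, 12.0]
print(np.percentile(tp, 5))        # conventional metric: 5%-ile only
print(gamma_am(tp, w=(1, 1, 1)))   # equal priority for all three percentiles
print(gamma_am(tp, w=(2, 1, 1)))   # put extra weight on the cell edge
```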

Figure 12

The UE TP CDFs for Taguchi's method using ΓAM(1,1,1) and conventional metrics as performance metrics in the suburban scenario.

To demonstrate how performance steering is employed, the UE TP CDFs of Taguchi's method using ΓAM(w1,w2,w3) with different weights are plotted in Figure 13, along with the curve of Taguchi's method using ΓHM,10% as a reference. The weight combinations (w1, w2, w3) = {(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2)} are selected. Since Taguchi's method using ΓHM,10% and ΓAM(1,2,1) show similar performances, the latter is omitted for the clarity of the figure. Furthermore, Taguchi's method is selected for the illustration because it performs similarly to simulated annealing. It can be seen that, by increasing a given weight relative to the others, the performance at the corresponding UE TP CDF percentile can be increased at the cost of performance degradation at the other percentiles. Nevertheless, compared to the conventional metrics (cf. Figure 12), using ΓAM(w1,w2,w3) results in less performance degradation at the less prioritized percentiles, since their performance is also taken into account. Besides, Taguchi's method using ΓAM(1,1,2) marginally outperforms the case where ΓHM,10% is utilized. Note that the optimized PC parameter settings are given in Table 8.

Figure 13

The UE TP CDFs for Taguchi's method using ΓAM(w1,w2,w3) as the performance metric with different weights in the suburban scenario.

The convergence curves of Taguchi's method and simulated annealing using ΓAM(w1,w2,w3) with different weights are shown in Figure 14, where for each weight combination the converged values of both optimization methods are similar. Note that the converged values of different weight combinations cannot be compared with each other, as they have different scaling values, as introduced in (9). That is, a higher converged value compared to another weight combination does not necessarily mean better performance.

Figure 14

The convergence of Taguchi's method and simulated annealing using ΓAM(w1,w2,w3) as the performance metric with different weights in the suburban scenario.

We note that, as in the urban scenario, simulated annealing converges faster than Taguchi's method in the suburban scenario, as observed in Figures 11 and 14.

7. Conclusion

In this study, we have proposed an automated optimization of PC parameters and investigated Taguchi's method and simulated annealing as two viable options. In comparison with existing studies, which rely on manual learn-by-experience optimization, the automated PC scheme does not require extensive skilled human intervention during the optimization process and thus significantly reduces complexity, cost, and optimization time. More importantly, it enables flexible performance steering through novel performance metrics defined according to the operator's requirements and goals.

The evaluation of the optimization methods within the LTE-Advanced uplink framework was carried out in 3GPP-defined urban and suburban propagation scenarios and supported by a thorough simulation campaign. It is shown that using the HM or the equally weighted AM in the urban scenario, and the equally weighted AM in the suburban scenario, the automated optimization achieves performance similar to that of the manual optimization, which already aims at a trade-off between the worst and best performing users. Further, it is shown that, by using a cut-off percentile different from zero for the HM and different weights for the weighted AM, the resulting TP performance can be flexibly steered.

Comparing the two automated techniques, it is found that both simulated annealing and Taguchi's method converge to similar values of the considered performance metrics. Nevertheless, it is also observed that, in the urban scenario, Taguchi's method yields sets of PC parameters which provide a better overall performance in the user TP CDF than simulated annealing and manual optimization.

Endnotes

a. In this study, trial runs refer to simulations with different parameter configurations.

b. The receiver dynamic range is defined as the difference in dB between the 5%-ile and 95%-ile of the CDF of the total received power.

Abbreviations

3GPP: 3rd Generation Partnership Project

AM: Arithmetic mean

CDF: Cumulative distribution function

DeNB: Donor eNB

eNB: Evolved Node B (LTE base station)

FCPC: Full compensation power control

FPC: Fractional power control

HM: Harmonic mean

ISD: Inter-site distance

MUE: Macrocell-served UE

NOA: Nearly orthogonal array

OA: Orthogonal array

PC: Power control

PDCCH: Physical downlink control channel

PRB: Physical resource block

PSD: Power spectral density

PUCCH: Physical uplink control channel

PUSCH: Physical uplink shared channel

RN: Relay node

R-PDCCH: Relay-specific PDCCH

R-PUSCH: Relay-specific PUSCH

RUE: RN-served UE

TP: Throughput

TTI: Transmission time interval

UE: User equipment


Author information

Corresponding author

Correspondence to Ömer Bulakci.

Additional information

Competing interests

In the course of this study, the doctoral studies of Ö. Bulakci, A. Awada, and A. Bou Saleh were supported by Nokia Siemens Networks, a major telecommunications vendor that is involved in the development of concepts for next-generation mobile telecommunications systems. In addition, S. Redana is an employee of Nokia Siemens Networks. Nokia Siemens Networks has partially financed the research described in this article and has covered the article processing charge.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Bulakci, Ö., Awada, A., Saleh, A.B. et al. Automated uplink power control optimization in LTE-Advanced relay networks. J Wireless Com Network 2013, 8 (2013). https://doi.org/10.1186/1687-1499-2013-8


Keywords