Deployment and management of SDR cloud computing resources: problem definition and fundamental limits
© Gomez-Miguelez et al.; licensee Springer. 2013
- Received: 10 June 2012
- Accepted: 4 February 2013
- Published: 4 March 2013
Software-defined radio (SDR) describes radio transceivers implemented in software that executes on general-purpose hardware. SDR combined with cloud computing technology will reshape the wireless access infrastructure, enabling computing resource sharing and centralized digital signal processing (DSP). SDR clouds have different constraints than general-purpose grids or clouds: real-time response to user session requests and real-time execution of the corresponding DSP chains. This article addresses the SDR cloud computing resource management problem. We show that the maximum traffic load that a single resource allocator (RA) can handle is limited. It is a function of the RA complexity and of the call setup delay and user blocking probability constraints. We derive the RA capacity analytically and provide numerical examples. The analysis demonstrates the fundamental tradeoff between short call setup delays (few processors) and low blocking probabilities (many processors). The simulation results demonstrate the feasibility of distributed resource management and the necessity of adapting the processor assignment to RAs according to the given traffic load distribution. These results provide new insights and guidelines for designing data centers and distributed resource management methods for SDR clouds.
- Cloud Computing
- Traffic Load
- Blocking Probability
- Resource Allocation Algorithm
- Resource Allocator
Wireless communications technology continuously improves and already facilitates the provisioning of a wide variety of advanced communications services at competitive prices. Whereas current systems provide data rates of a few megabits per second (Mbps), 4G systems will offer up to 100 Mbps per user. A few seconds may be necessary today before a connection is established between the user equipment and the network. Long term evolution (LTE) and LTE-Advanced (LTE-A) promise connection establishment times of less than 50 and 10 ms, respectively [1, 2].
A software-defined radio (SDR) cloud comprises a set of distributed antenna sites that connect to one or several data centers through low-latency and high-bandwidth communication links. The antenna sites process the radio frequency (RF) signals and convert signals from analog to digital and vice versa. The digital data is processed entirely in the data center, employing SDR and cloud computing technology.
SDR describes wireless transceivers that implement a significant part of the physical layer digital signal processing (DSP) in software that executes on general-purpose hardware. SDR applications or waveforms define the transceiver functionality of future radio equipment. This facilitates the dynamic reconfiguration of radio transceivers, changing their transmission modes through changes in the software.
Cloud computing provides IT services to clients without reference to the infrastructure that hosts the services. The cloud is a generic platform for different business types, from small scale to very large scale. The upfront cost is minimal because the infrastructure is provided by the cloud operator, who rents resources to cloud clients. The elasticity of clouds permits business growth without long-term planning and resource preallocation. A pay-per-use business model on top of a virtualized computing resource pool enables resource sharing and on-demand resource provisioning. Computing resources (hardware and software) can then be dynamically allocated and efficiently used, ensuring faster amortization (CAPEX) and better scalability as well as savings in power consumption, security, maintenance, and software licensing, among others (OPEX).
The SDR cloud provides essentially the same benefits as a general purpose cloud. It inherits the resource-as-a-service and pay-per-use business concepts: computing power (infrastructure as a service—IaaS), system software (platform as a service—PaaS), and applications (software as a service—SaaS) will be provided on demand and without knowledge of the physical location and types of CPUs, discs, software repositories, and so forth.
A single data center is shared between several radio operators and thousands of end users. (Some 100,000 user sessions may be active at the same time in a city of one million or more inhabitants.) Virtualization is employed for ensuring secure and fair resource sharing, where one radio operator—the SDR cloud client—is not aware of others using the same physical machines. Different business models or agreements are possible. A minimum set of resources may be guaranteed to each radio operator, for instance. The remaining or unused resources can then be shared—fairly or competitively—as a function of the market, environment, or policy, among others. This requires a flexible yet efficient computing resource management framework as a basis for the SDR cloud business. Such a framework, in other words, plays an essential role in the deployment and operation of SDR clouds. In particular, it needs to ensure real-time resource allocation and execution in dynamic environments with different resource and service constraints. This is the topic of this article.
This article elaborates a relation between the wireless communications system requirements or constraints and the SDR cloud computing resource management before deriving optimal solutions for the high-level resource provisioning. Each service request requires loading the corresponding transceiver waveform. Real-time resource provisioning and hard real-time execution needs to be ensured for seamless service provisioning. The SDR cloud resource allocator (RA) will therefore determine the mapping of waveforms to the available computing resources on demand and under stringent timing and resource constraints. We show that the maximum traffic load that a single RA can handle is limited. It is a function of the complexity of the resource allocation algorithm, the call setup delay, and the user rejection or blocking probability. The radio access technology specifies the maximum call setup delay, whereas the radio operator determines a blocking probability target. We introduce a general execution time model for characterizing the complexity of different resource allocation algorithms and derive expressions for the average call setup delay and maximum traffic load. The results show that SDR cloud data centers can be efficiently managed in a distributed way. They provide guidelines for designing data centers and distributed resource management methods for SDR clouds.
The rest of the article is organized as follows: After providing some background on computing resource management methods and algorithms (Section 2), we identify the problem (Section 3) and elaborate a RA complexity model (Section 4). In the central part of the article, we define and solve an optimization problem for assigning computing resources to an RA as a function of the environmental parameters (Section 5). We finally apply our solution for managing the resources associated with a single radio cell (Section 6) and multiple cells (Sections 7 and 8) under different wireless communications traffic characteristics.
Massively parallel computing architectures will dominate the high-performance computing landscape. A platform with a large number of parallel processors is more suitable for executing many applications than a single powerful processor. The high and heterogeneous computing demands of SDR applications, in particular, are executed more efficiently on a multiprocessing execution environment [8, 9]. Empirical studies have shown that scheduling hard real-time tasks on many-core processors is challenging [10, 11]. Sophisticated resource allocation algorithms are consequently necessary for managing the real-time computing demands and the limited computing resources.
Distributed computing has a long research record. The multiprocessor mapping and scheduling problem, in particular, has been vastly investigated in the heterogeneous computing context [12–14]. Heterogeneous computing refers to the coordinated use of distributed and heterogeneous computing resources. It is similar to grid computing or metacomputing.
It is well known that the computing resource allocation problem is, in general, NP-complete. Heuristic approaches have therefore been proposed, presenting a polynomial relation between the problem size and the computing complexity. Grid or cloud computing RAs dispatch computing jobs or independent tasks for their distributed execution. Grid computing workloads exhibit little intra-job parallelism, the average job completion time is several hours, and typical job inter-arrival times are in the order of seconds or minutes. Many grid or cloud workloads are data-intensive.
Grids and clouds are accessed via the internet, which is relatively slow and has unpredictable delays. They were originally built for providing very high computing power for scientific or popular applications with no stringent real-time constraints. Rather than ensuring real-time allocation and execution, grid or cloud RAs therefore follow other objectives. Doulamis et al., for example, discuss the fair sharing of CPU rates and allocate resources to users as a function of resource availabilities, user demands, and socio-economic values. Liu et al. focus on the joint resource allocation of computing and network resources in federated computing and network systems. They present various resource allocation schemes that can provide performance and reliability guarantees for modern distributed computing applications. Entezari-Maleki and Movaghar develop a probabilistic task scheduling method for minimizing the mean response time of grid jobs.
The SDR cloud concept has been recently introduced and merges three fundamental technologies: centralized baseband processing, automatic computing resource allocation, and virtualization. Related work addresses centralized baseband processing [24, 25] and offline, that is, design-time resource allocation. We focus on automatic computing resource allocation, enabling runtime resource management and seamless real-time execution. Each wireless communications service request needs to be served in real time, providing sufficient computing resources for the continuous real-time data processing. Two general approaches exist for scheduling real-time tasks on multiprocessor platforms. Tasks can be statically assigned prior to execution or migrate between processors during execution. The former can be achieved through partitioned scheduling, where an application is partitioned among the processing elements (mapping) before being locally scheduled. The latter approach is typically associated with global or dynamic scheduling. The contention for the global scheduling queue and non-negligible migration overheads among processing elements can result in significant scheduling overheads in practice. The migration cost limits the number of cores that a global scheduler can manage [10, 11]. Non-preemptive static partitioned scheduling, on the other hand, is pertinent to high-performance many-core and multiprocessor platforms. It facilitates implementation and introduces low run-time resource overheads.
A constant execution period and practically deterministic and regular execution patterns characterize SDR applications. The real-time constraints of the DSP processing chains can then be given as minimum throughput and maximum latency constraints, and static schedulers can be employed. The mapping and scheduling can thus be calculated only once for each waveform as part of the session establishment process. The SDR cloud resource management performance is then limited by the RA’s execution time per invocation (user session request) and the session arrival rate. The derivation of this limit is the objective of this article.
Wireless subscribers access communications services anywhere, anytime, and under different circumstances. Measurements have shown that the average user establishes seven or eight voice sessions per day with a mean duration of 90 s. Data users establish a larger number of shorter sessions. The number of concurrent sessions in a large city may range between 10,000 and 120,000 as a function of place and time.
The SDR cloud RA needs to be able to handle the spatial and temporal variety in the traffic load. A single data center ideally executes all waveforms and centrally manages all session requests. The corresponding RA then needs to be able to dispatch thousands of requests per second.
Modern wireless communications standards, however, impose restrictions on the maximum session establishment time. The call setup delay t_s is the transition time from a dormant (camping or idle) state to the transmission or reception state. Each session establishment here consists of allocating sufficient computing resources to the corresponding transceiver waveform. The shorter the call setup time, the better the always-connected illusion. LTE-A therefore establishes 10 ms as the target call setup delay. Wireless operators moreover define a maximum blocking probability target, which should be satisfied in the mean. The blocking probability p_b denotes the probability of a user session request being rejected due to insufficient computing resources. Wireless communications systems need to be dimensioned accordingly.
Description of parameters:
- n: Number of nodes or processors
- m: Number of waveform modules or tasks
- t_s: Call setup delay
- Call setup delay constraint (on t_s)
- Blocking probability constraint (on p_b)
- t_RA(n, m): Resource allocator’s (RA’s) execution time model
- F: Scaling factor of the RA model
- α: Nodes’ exponent (n^α) of the RA model
- β: Modules’ exponent (m^β) of the RA model
- θ: Cost function’s weight
- ρ: Traffic load in Erlangs
- ρ_max: Maximum traffic load a single RA can manage
- λ: Average session initiation requests per second
- 1/μ: Average session duration in seconds
- c: Number of users that can be served with n processors for a given waveform model
The RA execution time is modeled as t_RA(n, m) = F · n^α · m^β (1). Parameters α and β specify the complexity order of an RA. The same expression also serves as a general execution time model of an RA implementation. Parameters F, α, and β can be found by measuring the RA execution time for different n and m and then performing model fitting. Although other models may be more accurate for certain RA algorithms, (1) is simple and general.
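As an illustration of the model-fitting step, the following sketch recovers F, α, and β from hypothetical execution time measurements by ordinary least squares in log space; all numerical values below are assumptions for the example, not measured RA data.

```python
import numpy as np

# Fit the RA execution time model t_RA(n, m) = F * n**alpha * m**beta.
# Taking logarithms makes the model linear in the unknowns:
#   log t = log F + alpha*log n + beta*log m.
def fit_ra_model(n, m, t):
    """Least-squares estimate of (F, alpha, beta) from measurements."""
    n, m, t = (np.asarray(x, dtype=float) for x in (n, m, t))
    A = np.column_stack([np.ones_like(t), np.log(n), np.log(m)])
    coef, *_ = np.linalg.lstsq(A, np.log(t), rcond=None)
    log_F, alpha, beta = coef
    return np.exp(log_F), alpha, beta

# Synthetic measurements generated with F = 2e-4, alpha = 1.2, beta = 0.8.
rng = np.random.default_rng(0)
n = rng.integers(2, 65, size=50)
m = rng.integers(2, 33, size=50)
t = 2e-4 * n**1.2 * m**0.8
F, alpha, beta = fit_ra_model(n, m, t)
```

With noiseless synthetic data the fit recovers the generating parameters; with real timing measurements the same procedure yields a least-squares estimate of the complexity order.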
Without loss of generality, we choose a simple and well-known algorithm for providing numerical examples for the analysis performed in this article. The g-mapping, or greedy mapping, is a baseline mapping algorithm. It maps one process after another, choosing the processor that leads to the minimum mapping cost. Cost metrics are computed based on a suitable cost function. The cost function we suggest manages the limited processing and interprocessor bandwidth resources, accordingly distributing the processing load while minimizing the data flows between processors.
Throughout this section, we will use the previously derived execution time model (2). The analysis is also valid for other RA complexity models provided that the complexity increases with the number of processors.
5.1 Optimization problem
We analyze the relation between the call setup delay, the blocking probability, and the RA capacity. To this aim, we derive the optimal number of resources for processing user signals as a function of the environmental conditions and constraints. The wireless communications traffic model is a stochastic birth-death process. Session requests arrive as a Poisson process; the time between consecutive session establishments is thus exponentially distributed with a mean of 1 / λ. That is, λ corresponds to the average number of new user session requests per second. The session duration follows an exponential distribution, where 1 / μ corresponds to the average session duration in seconds. The traffic load is then ρ = λ / μ Erlangs.
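In concrete numbers, the offered load follows directly from the two model parameters; the rates below are assumptions chosen for illustration (the 90 s mean duration echoes the voice-call figure cited earlier).

```python
import random

# Offered load of the birth-death traffic model: Poisson arrivals at
# rate lam (requests/s) and exponentially distributed session durations
# with mean 1/mu (s) yield rho = lam/mu Erlangs.
lam = 2.0       # assumed: 2 session requests per second
mu = 1 / 90.0   # assumed: 90 s average session duration
rho = lam / mu  # 180 Erlangs offered load

# Monte-Carlo sanity check: lam times the empirical mean duration
# converges to rho.
rng = random.Random(42)
durations = [rng.expovariate(mu) for _ in range(200_000)]
rho_hat = lam * sum(durations) / len(durations)
```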
Before solving this problem, we first need to model the call setup delay t s (n) and blocking probability p b (n) constraints.
The session establishment process can be modeled as a double-queuing process: New users enter an infinite queue whose service time is the execution time of the RA, that is, tRA(n, m). They leave this allocation queue and enter a second multi-server queue of size c. The service time of the active sessions queue is exponentially distributed with an average of 1 / μ, which corresponds to the average session duration.
This model can be represented by a two-dimensional state transition diagram, where state probability p_{i,j} indicates the probability that i users are waiting in the allocation queue while j users have active sessions. The model can be simplified if we consider that the mapping time is much shorter than the average session duration, that is, tRA(n, m) ≪ 1 / μ. This allows separating the two queues. Following Kendall’s notation, we model the allocation queue as an infinite-length M / D / 1 queue and the active sessions queue as a blocking, finite-size M / M / c / c queue with no wait states. For simplifying the mathematical analysis, we here consider waveforms of m = 10 tasks and analyze tRA as a function of n.
5.2.1 Call setup delay
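Under the M/D/1 model of the allocation queue, the mean call setup delay has a closed form given by the Pollaczek-Khinchine formula. The sketch below assumes a deterministic RA service time s = tRA(n, m) and arrival rate λ; the numbers are illustrative, not measured.

```python
# Mean call setup delay of the M/D/1 allocation queue: with a
# deterministic service time s = t_RA(n, m) and Poisson arrivals at
# rate lam, the Pollaczek-Khinchine formula gives the mean queueing
# delay; the total setup delay adds the RA execution time itself.
# Stability requires lam * s < 1.
def md1_setup_delay(lam, s):
    util = lam * s                          # allocation queue utilization
    if util >= 1.0:
        raise ValueError("unstable allocation queue: lam * t_RA >= 1")
    wait = util * s / (2.0 * (1.0 - util))  # mean time spent waiting
    return wait + s

# Illustrative numbers: 20 requests/s and a 5 ms RA execution time.
delay = md1_setup_delay(lam=20.0, s=0.005)  # about 5.3 ms
```

Note how the delay diverges as lam * s approaches 1, which is the stability limit discussed in Section 6.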
5.2.2 Blocking probability
The blocking probability of the active sessions queue (M / M / c / c queue) is the probability that c users are occupying all available resources. When this happens, a new user is rejected due to insufficient computing capacity. Parameter c therefore represents the maximum number of waveforms that can be loaded onto n processors. This number is difficult to characterize since it depends on many factors, including the computing capacity of each processor, the interprocessor communication network, the waveforms’ computing characteristics, and the performance of the RA algorithm.
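The blocking probability of the M/M/c/c queue is given by the Erlang-B formula; a numerically stable way to evaluate it is the standard recursion, sketched below.

```python
# Erlang-B blocking probability of an M/M/c/c queue with offered load
# rho (Erlangs) and c servers.  The recursion
#   B(rho, 0) = 1,  B(rho, k) = rho*B(rho, k-1) / (k + rho*B(rho, k-1))
# avoids the factorials of the closed-form expression.
def erlang_b(rho, c):
    b = 1.0
    for k in range(1, c + 1):
        b = rho * b / (k + rho * b)
    return b

# Example: 10 Erlangs offered to 10 servers blocks roughly 21% of requests.
p = erlang_b(10.0, 10)
```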
We need to solve the problem numerically for arbitrary θ. The first option is numerical optimization. Integer optimization problems are hard to solve, though. We therefore relax the integer nature of the optimization variable n and use a convex solver to find a non-integer solution. We then evaluate the objective function for the two closest integers, choosing the one that maximizes the objective while satisfying the constraints.
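The rounding step can be sketched as follows; here f and the feasibility predicate stand in for the actual objective and constraints of the optimization problem, and the convex solver producing the relaxed optimum is assumed.

```python
import math

# Integer-relaxation post-processing: given the non-integer optimum
# n_star returned by a convex solver, evaluate the objective f at the
# two nearest integers and keep the best one that satisfies the
# constraints (encoded here as a feasibility predicate).
def round_relaxed_solution(n_star, f, feasible):
    candidates = [c for c in {math.floor(n_star), math.ceil(n_star)}
                  if c >= 1 and feasible(c)]
    if not candidates:
        return None  # no feasible integer neighbor: problem infeasible
    return max(candidates, key=f)

# Toy example: concave objective peaking at 6.4, upper bound n <= 7.
best = round_relaxed_solution(6.4, f=lambda n: -(n - 6.4) ** 2,
                              feasible=lambda n: n <= 7)  # picks 6
```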
where i is a positive real number. If we are able to obtain B(ρ, z) for a real number z < 1, then we can compute B(ρ, i) for any i. Various approximations of B(ρ, z) based on parabolic interpolation have been published; we use one of these expressions for the numerical examples that follow.
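Alternatively, the Erlang loss function for a non-integer number of servers x can be evaluated directly from its integral representation, 1/B(ρ, x) = ρ ∫₀^∞ e^(−ρt) (1 + t)^x dt. Plain trapezoidal quadrature over a truncated range is accurate enough for a sketch.

```python
import math

# Continued Erlang loss function B(rho, x) for real x >= 0, using
#   1/B(rho, x) = rho * integral_0^inf exp(-rho*t) * (1+t)**x dt.
# Simple trapezoidal integration over [0, T]; the integrand's tail
# beyond T is negligible for the loads considered here.
def erlang_b_continued(rho, x, T=50.0, steps=200_000):
    h = T / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end weights
        total += w * math.exp(-rho * t) * (1.0 + t) ** x
    return 1.0 / (rho * total * h)

# At integer x it matches the classical formula, e.g. B(1, 1) = 0.5.
b = erlang_b_continued(1.0, 1.0)
```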
5.4 Numerical examples
Assigning n∗ processors to the RA maximizes the system efficiency f(n). The optimal number of processors n∗ is a function of θ. The curve corresponding to θ = 0 represents the solution that minimizes the blocking probability (n∗ = nmax) while meeting the call setup delay constraint. The curve corresponding to θ = 1, at the other extreme, indicates the solution that minimizes the use of processing resources (n∗ = nmin) while satisfying the blocking probability constraint. The intersection of the two curves provides the maximum system capacity ρmax. Parameter nmin becomes larger than nmax beyond that point and the problem has no solution.
The system capacity is almost 50 Erlangs for a call setup delay constraint of 50 ms, which corresponds to the LTE standard specification (Figure 6a,b). LTE-A indicates call setup times of 10 ms, reducing the RA capacity to some 25 Erlangs in this case (Figure 6c,d). The capacity can be improved by using more powerful processors. Assuming Φ(n) = n / 1.5, for example, leads to ρmax = 35 Erlangs for the LTE-A case (Figure 6e,f).
The previous section has indicated that the RA capacity ρmax is finite. Here we analytically derive this limit. The manageable number of processors is obtained from the tolerable execution time. The blocking probability then determines the RA capacity.
6.1 RA execution time limit
When λ · tRA(n, m) approaches one, the capacity is limited by the stability of the M / D / 1 mapping queue (see (6)). The call setup delay is then dominated by the time the user needs to wait before being served rather than by the RA execution time itself. When λ · tRA(n, m) ≪ 1, on the other hand, the capacity is limited by the call setup delay constraint. This is the case with modern communications standards, such as LTE and LTE-A, where the call setup delay is dominated by the RA execution time.
The general expression of the tolerable RA execution time is a function of λ (13). Therefore, nmax is also a function of λ.
6.2 Processor limit
The operator ⌊·⌋ denotes rounding down to the nearest integer.
6.3 Traffic limit
SDR clouds will provide wireless communications services to very wide service areas and will, consequently, need to manage huge traffic loads. Incoming user session requests will then need to be assigned to different RAs. Each RA will absorb only a portion of the total traffic demand, managing part of the data center resources. The assignment of processors to RAs should adapt to the traffic load distribution while satisfying the constraints of (5).
For θ = 0 all processors will be employed for maximizing the sum of U(n) (see (3)). The processors are distributed between the RAs depending on the slope of the Erlang-B function. For 0.1 ≤ ϵ ≤ 0.3 and 0.7 ≤ ϵ ≤ 0.9 more processors are assigned to the cell with the higher traffic load. This is different for 0.3 ≤ ϵ ≤ 0.7, because assigning more processors to the cell with lower service demand decreases the overall blocking probability. For ϵ ≤ 0.1 or ϵ ≥ 0.9, the traffic of one or the other cell exceeds the corresponding RA capacity and the problem becomes infeasible. The deployment of additional RAs is necessary for such traffic distributions.
When θ = 1, the number of processors is directly proportional to the traffic load, because the slope of the objective function is constant with n. For 0 < θ < 1, the resources are allocated as a function of the performance increment in relation to the amount of allocated resources. The number of allocated processors linearly increases with ϵ and (1-ϵ), respectively, until reaching the maximum number of processors nmax that still meets the session establishment delay constraint.
We simulate a non-homogeneous traffic demand, where user session initiations and terminations are modeled as a Poisson arrival and departure process. The user arrival rate is 4 times the departure rate, simulating an unstable situation to better analyze the performance of the different strategies. The adaptive strategy solves Equation (18) with 16 RAs instead of 2. The static strategy does not track the traffic load distribution, but rather assigns 16 of the 256 processors to each RA. The second variant of the static strategy randomly distributes the 256 processors among the RAs.
Each processor has a capacity of 12 giga-operations per second (GOPS). The waveforms offer 64, 128, 384, and 1024 kbps data rate services, which are solicited with probabilities of 0.5, 0.2, 0.2, and 0.1, respectively. The four waveform models are taken from the literature and require between 50 and 75 % of a processor’s capacity (k < 1). The users follow a two-dimensional Gaussian distribution, centered and with a variance of 0.25 relative to the service area.
The blocking probability constraint is dropped in order to enable a fair evaluation of the three strategies. The optimal strategy then maximizes the number of served users (θ = 0). The session initiation delay constraint is 50 ms and the average session duration is 40 s.
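A stripped-down version of this experiment, reduced to a single RA with illustrative parameters (the 4:1 arrival-to-departure ratio and 40 s mean duration mirror the setup above; the capacity of 64 session slots is an assumption), can be sketched as a blocking birth-death simulation:

```python
import heapq
import random

# Minimal blocking simulation for one RA: Poisson session arrivals,
# exponential session durations, and a hard limit of c concurrent
# sessions.  An arrival that finds all c slots busy is blocked.
def simulate_blocking(lam, mu, c, n_arrivals, seed=7):
    rng = random.Random(seed)
    t, blocked = 0.0, 0
    ends = []  # min-heap of end times of the active sessions
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)    # next session request
        while ends and ends[0] <= t:
            heapq.heappop(ends)      # drop sessions finished before t
        if len(ends) >= c:
            blocked += 1             # insufficient capacity: reject
        else:
            heapq.heappush(ends, t + rng.expovariate(mu))
    return blocked / n_arrivals

# Overload scenario: the offered load (4.0 / (1/40) = 160 Erlangs) far
# exceeds the c = 64 slots, so a large share of requests is blocked
# (Erlang-B predicts roughly 1 - c/rho in deep overload).
p_block = simulate_blocking(lam=4.0, mu=1 / 40.0, c=64, n_arrivals=50_000)
```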
SDR clouds provide an alternative concept for designing and managing wireless communications infrastructure. Higher resource efficiencies are achievable by merging the digital signal processing resources of today’s base stations into data centers and employing cloud computing technology. The limited resources need to be properly managed, though. This article has addressed the SDR cloud computing resource management problem. Defining the concept of a RA that manages a subset of computing resources facilitates separating the signal processing algorithms design from the infrastructure and enables using resources on a pay-per-use basis.
Based on the call setup delay and the blocking probability constraints, we have defined the RA processor allocation problem as a constrained convex optimization problem. The feasibility region provides the maximum traffic capacity that a single RA can manage. The results have shown that modern cellular communications standards, such as LTE and LTE-Advanced, considerably limit the RA capacity. Assuming that two processors or more are required to process a transceiver processing chain in real time, less than 50 Erlangs of traffic can be handled by a single RA employing a greedy mapping algorithm. A distributed resource management is therefore necessary.
The data center processors need to be distributed among several RAs subject to the call setup delay and blocking probability constraints. The simulation results moreover indicate that the number of accepted users is severely degraded if the processor distribution is not adapted to the traffic distribution. Our solution is optimal, but does not scale well with the problem size. More precisely, the complexity of problem (18) grows exponentially with the number of RAs. Future research will therefore develop sub-optimal solutions for dynamically recalculating the assignment of processors to RAs and so adapt to varying traffic loads in very large-scale computing systems. Different traffic patterns and empirical data sets will also be considered.
This study was supported by the Spanish Government (MICINN) and FEDER under project TEC2011-29126-C03-02.
- 3rd Generation Partnership Project: 3GPP specification 36.913 Rel-9; Requirements for Evolved UTRA (E-UTRA) and Evolved UTRAN (E-UTRAN). Tech. rep. (2009)
- 3rd Generation Partnership Project: 3GPP specification 36.913 Rel-10; Requirements for further advancements for Evolved Universal Terrestrial Radio Access (E-UTRA) (LTE-Advanced). Tech. rep. (2011)
- Akyildiz IF, Lee WY, Vuran MC, Mohanty S: NeXt generation/dynamic spectrum access/cognitive radio wireless networks: a survey. Comput. Netw. 2006, 50(13):2127-2159. 10.1016/j.comnet.2006.05.001
- Gomez Miguelez I, Marojevic V, Gelonch Bosch A: Resource management for software-defined radio clouds. IEEE Micro 2012, 32:44-53.
- Mitola J: The software radio architecture. IEEE Commun. Mag. 1995, 33(5):26-38. 10.1109/35.393001
- Buyya R, Yeo CS, Broberg J, Brandic I, Venugopal S: Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Future Gen. Comput. Syst. 2009, 25(6):599-616. 10.1016/j.future.2008.12.001
- Hill M, Marty M: Amdahl’s law in the multicore era. Computer 2008, 41(7):33-38.
- Marojevic V, Balleste XR, Gelonch A: A computing resource management framework for software-defined radios. IEEE Trans. Comput. 2008, 57:1399-1412.
- van Berkel C: Multi-core for mobile phones. In Design, Automation and Test in Europe Conference and Exhibition (DATE). Nice, France; 2009:1260-1265.
- Bastoni A, Brandenburg BB, Anderson JH: An empirical comparison of global, partitioned, and clustered multiprocessor EDF schedulers. In Proceedings of the 31st IEEE Real-Time Systems Symposium (RTSS’10). Washington, DC, USA: IEEE Computer Society; 2010:14-24. 10.1109/RTSS.2010.23
- Brandenburg BB, Calandrino JM, Anderson JH: On the scalability of real-time scheduling algorithms on multicore platforms: a case study. In Proceedings of the 2008 Real-Time Systems Symposium (RTSS’08). Washington, DC, USA: IEEE Computer Society; 2008:157-169. 10.1109/RTSS.2008.23
- Ramamritham K, Stankovic J, Zhao W: Distributed scheduling of tasks with deadlines and resource requirements. IEEE Trans. Comput. 1989, 38(8):1110-1123. 10.1109/12.30866
- Kwok YK, Ahmad I: Static scheduling algorithms for allocating directed task graphs to multiprocessors. ACM Comput. Surv. 1999, 31(4):406-471. 10.1145/344588.344618
- Lee YC, Zomaya A: A novel state transition method for metaheuristic-based scheduling in heterogeneous computing systems. IEEE Trans. Parallel Distrib. Syst. 2008, 19(9):1215-1223. 10.1109/TPDS.2007.70815
- Khokhar A, Prasanna V, Shaaban M, Wang CL: Heterogeneous computing: challenges and opportunities. Computer 1993, 26(6):18-27.
- Thain D, Livny M: Building reliable clients and servers. In The Grid: Blueprint for a New Computing Infrastructure. Edited by Foster I, Kesselman C. Morgan Kaufmann; 2003. ch. 19
- Smarr L, Catlett CE: Metacomputing. Commun. ACM 1992, 35(6):44-52. 10.1145/129888.129890
- Bokhari SH: On the mapping problem. IEEE Trans. Comput. 1981, 30(3):207-214. 10.1109/TC.1981.1675756
- Iosup A, Epema D: Grid computing workloads. IEEE Internet Comput. 2011, 15(2):19-26.
- Sakr S, Liu A, Batista D, Alomari M: A survey of large scale data management approaches in cloud environments. IEEE Commun. Surv. Tutor. 2011, 13(3):311-336.
- Doulamis N, Doulamis A, Varvarigos E, Varvarigou T: Fair scheduling algorithms in grids. IEEE Trans. Parallel Distrib. Syst. 2007, 18(11):1630-1648.
- Liu X, Qiao C, Yu D, Jiang T: Application-specific resource provisioning for wide-area distributed computing. IEEE Netw. 2010, 24(4):25-34.
- Entezari-Maleki R, Movaghar A: A probabilistic task scheduling method for grid environments. Future Gen. Comput. Syst. 2012, 28(3):513-524. 10.1016/j.future.2011.09.005
- Segel J, Weldon M: LightRadio: White paper 1. Tech. rep., Alcatel-Lucent. http://www.alcatel-lucent.com/lightradio/
- Liquid Radio: Let traffic waves flow most efficiently. Tech. rep., Nokia Siemens Networks. http://www.nokiasiemensnetworks.com/portfolio/liquidnet/liquidradio
- Zhu Z, Gupta P, Wang Q, Kalyanaraman S, Lin Y, Franke H, Sarangi S: Virtual base station pool: towards a wireless network cloud for radio access networks. In Proceedings of the 8th ACM International Conference on Computing Frontiers. New York: ACM; 2011:34:1-34:10. 10.1145/2016604.2016646
- Micheli GD: Synthesis and Optimization of Digital Circuits. New York: McGraw-Hill; 1994.
- Guo J, Liu F, Zhu Z: Estimate the call duration distribution parameters in GSM system based on K-L divergence method. In International Conference on Wireless Communications, Networking and Mobile Computing (WiCom). Shanghai, China; 2007:2988-2991.
- Flexible Wireless Communications Systems and Networks (FlexNets) web site. http://flexnets.upc.edu
- Gross D, Harris C: Fundamentals of Queueing Theory. New York: Wiley-Interscience; 1998.
- Jagers AA, van Doorn EA: On the continued Erlang loss function. Oper. Res. Lett. 1986, 5:43-46. 10.1016/0167-6377(86)90099-4
- Rapp Y: Planning of junction network in a multi-exchange area. Ericsson Tech. Rep. 1964, 4:77-130.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.