Call Admission Control Jointly with Resource Reservation in Cellular Wireless Networks



Introduction
In cellular wireless networks, efficient resource management techniques are needed to integrate multiple services with the desired QoS while a specified QoS level is guaranteed to the users of each service class [1]. In a wireless network, a maximum packet delay must be guaranteed for delay-intolerant services, error-free transmission for delay-tolerant services, and a bounded response delay for a seamless multimedia experience. Mobility, frequent handoffs, and limited bandwidth are the main constraints on QoS in wireless networks.
Service quality can be studied at three different levels. (1) Packet level: specified QoS parameters such as dropping probability, maximum packet delay, and jitter must be guaranteed to users. (2) Call level: users expect both the blocking probability of new calls and the dropping probability of handoff calls to be minimal. Dropping a handoff call is less desirable than blocking a new call; for this reason, the dropping probability of handoff calls is decreased at the expense of an increased blocking probability of new calls. (3) Class level: class-level QoS concerns how bandwidth is shared among the various classes of users. Common bandwidth-sharing techniques are complete sharing (CS), complete partitioning (CP), and restricted access (RA) [2]. In CS, any class of users can use the entire bandwidth as long as sufficient capacity exists. In CP, the bandwidth is partitioned in advance among the incoming classes of users.
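The difference between the CS and CP sharing techniques can be made concrete with a minimal admission check. The capacities and class partitions below are illustrative, not taken from the paper:

```python
def admit_complete_sharing(total_used, req, C):
    """CS: a call of any class is admitted whenever the system as a
    whole has enough free capacity for the call's requirement."""
    return total_used + req <= C

def admit_complete_partitioning(class_used, req, partition):
    """CP: each class owns a fixed partition of the bandwidth; a call
    is admitted only if its own partition has room, even when other
    partitions are idle."""
    return class_used + req <= partition
```

Under CS a burst from one class can starve the others, while under CP idle capacity in one partition cannot help an overloaded class; RA sits between the two extremes.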
Call admission control (CAC) schemes are among the most effective resource management techniques. CAC coupled with resource management provides both maximum utilization of the given bandwidth and the call-level QoS requirements [3]. When the total bandwidth is shared, higher priority is given to handoff calls to decrease their dropping probability. CAC has been studied widely in the literature, and several schemes have been proposed [4-12]. Priority-based CAC schemes have been proposed to give handoff calls a lower dropping probability than new calls [4-6]. Three widely known call admission schemes were studied in [4] for different channel holding times of new and handoff calls in a single-service network, and a new approximation approach was proposed to reduce the computational complexity. In [5], an exact product-form solution was derived to evaluate symmetric CAC schemes, such as the New Call Bounding scheme, in multiservice networks where the channel holding times of the call classes differ. In [6], an elastic threshold-based CAC was designed for multiple priorities; its performance was evaluated in terms of the maximum reward obtainable under QoS satisfaction, and the threshold values were determined by sequentially adjusting them based on reward and reject rates.
The CAC scheme proposed in [7] supports multiple admission priority classes. It adopts a dynamic guard-loading concept that adapts the threshold limits based on current estimates of the handoff requests of the multiple classes, derived from the number of ongoing calls in neighboring radio cells and the mobility pattern. Another priority-based scheme, based on resource preemption, was proposed and analyzed for integrated voice and data [8]. That scheme deploys the RA bandwidth-sharing technique, in which high-priority calls can use all of the bandwidth without restriction when there is enough capacity. If some bandwidth is left unoccupied by prioritized calls upon the arrival of a new or handoff data call, the arriving data call uses the bandwidth remaining from the prioritized calls. This improves the bandwidth available to data calls and yields better system resource utilization and performance. In [9, 10], an optimal CAC was proposed that adopts a semi-Markov decision process (SMDP) to model the call admission scheme and the bandwidth reallocation algorithm jointly for time-varying multimedia traffic. A dynamic-priority CAC was proposed in [11] to achieve a better balance between CS and CP by computing a dynamic priority level from predefined load partitions and the current carried load. In [12], two types of traffic are considered and partitioned into four priority classes, and bandwidth is reserved according to priority class. Although that scheme reserves different amounts of bandwidth for each prioritized class, the bandwidth reservation thresholds are not optimal.
In this paper, we propose a new call admission control scheme with adjusted capacity allocation to utilize the network resources efficiently. The main novelty of the proposed scheme is that at most K (Mbps) of adaptable bandwidth is allocated to nonprioritized calls, and this value is determined optimally by considering the call-level requirements E[Tn1] and BN1 to protect the nonprioritized calls from QoS degradation. Furthermore, a searching algorithm derives the admission region for prioritized and nonprioritized calls.
This paper is organized as follows. Section 2 describes the system model. In Section 3, we propose the new CAC policy, present an analytical Markov model, and obtain the optimal admission values with the developed algorithms. Section 4 compares the performance results of the analytical model with those of the New Call Bounding scheme. Section 5 concludes the paper.

System Model
We consider a wireless cellular network with a number of base stations, where the coverage of each base station forms a cell. The network carries two traffic types: prioritized calls and nonprioritized calls. A mobile with an ongoing prioritized or nonprioritized call that crosses the cell boundary toward the outside of the coverage area can still maintain seamless transmission through a handoff. The system is assumed to be in statistical equilibrium, where the mean arrival rate of handoff calls into a cell equals their mean departure rate, and the six surrounding cells have uniform traffic conditions. Under these assumptions, a single cell is taken as the reference, and system performance is evaluated from the performance of that single cell.
Calls arriving at the cell are new and handoff prioritized calls and new and handoff nonprioritized calls. Because nonprioritized calls (such as data) can tolerate delay, they share the same total bandwidth reserve, and new and handoff nonprioritized calls receive equal priority. Prioritized calls (such as voice) cannot tolerate delay; to maintain seamless transmission, new and handoff prioritized calls must be differentiated. Since dropping an ongoing handoff prioritized call is less desirable than blocking a new prioritized call, a certain amount of capacity is reserved as guard channels for handoff prioritized call arrivals only. New and handoff call arrivals are assumed to follow Poisson processes. Nonprioritized and prioritized call durations and the corresponding cell residence times are assumed to be exponentially distributed with means 1/μdr1, 1/μr1, 1/μdr2, and 1/μr2, respectively. The channel occupancy time of a prioritized call, 1/μ2, is therefore also exponentially distributed, with mean 1/(μdr2 + μr2) [13, 14]. Nonprioritized calls can adapt to varying bandwidth conditions; the call admission control scheme admits new and handoff nonprioritized calls without letting their bandwidth drop below the predetermined minimum level. The call duration of a nonprioritized call depends both on the bandwidth left over to it and on its file size. Although nonprioritized call file sizes are not exponentially distributed in practice, for tractability of the mathematical analysis [15, 16] they are assumed to be exponentially distributed with mean 1/μfn1. The channel occupancy time of a nonprioritized call, 1/μ1, is likewise exponentially distributed, with mean 1/(μdr1 + μr1).
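The channel occupancy time above is the minimum of the call duration and the cell residence time; for exponential variables this minimum is again exponential with the sum of the rates. A small Monte Carlo sketch (the rate values are illustrative, not from the paper) confirms the mean 1/(μdr + μr):

```python
import random

def sample_channel_occupancy(mu_dr, mu_r, n_samples=200_000, seed=1):
    """Estimate the mean channel occupancy time: each call holds the
    channel until it either finishes (rate mu_dr) or leaves the cell
    (rate mu_r), i.e. for min(duration, residence).  The estimate
    should approach 1/(mu_dr + mu_r)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        duration = rng.expovariate(mu_dr)   # call duration
        residence = rng.expovariate(mu_r)   # cell residence time
        total += min(duration, residence)   # occupancy ends at the first event
    return total / n_samples
```

With mu_dr = mu_r = 0.5 the estimated mean is close to 1/(0.5 + 0.5) = 1.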

Call Admission Scheme
The proposed CAC policy uses complete sharing (CS), in which both prioritized and nonprioritized calls can use all of the capacity, subject to the policy limits shown in Figure 1. However, because of their lower priority, the policy limits the admission of nonprioritized calls into the network and also limits the bandwidth that new and handoff nonprioritized calls can use. The number of nonprioritized calls admitted to the network is determined optimally according to the policy's QoS requirements on nonprioritized calls, such as the upper bound of the mean call response time and the blocking/dropping probability, under varying traffic load conditions. New prioritized calls can use up to a certain amount of bandwidth. Handoff prioritized calls can use the entire bandwidth, taking precedence over all nonprioritized calls (new or handoff). The thresholds T2 and N2, where N2 is the maximum number of new and handoff prioritized calls and T2 is the maximum number of new prioritized calls allowed, can be determined optimally by the CAC searching algorithm given in Algorithm 1.

Figure 1: Resource (total bandwidth) reservation scheme (threshold M, optimal with N1; total bandwidth C (Mbps)).
Since prioritized calls cannot tolerate delay, each requires a constant amount of bandwidth, c2req, to meet its QoS requirements. Nonprioritized calls, which can tolerate a certain amount of delay, can adapt their required bandwidth to the available capacity. The proposed CAC scheme reserves at most the optimal K (Mbps) of bandwidth, determined by the searching algorithm in Algorithm 1, for nonprioritized calls when the total number of prioritized calls in the system is less than N2 − M, where M is the optimal threshold number of nonprioritized calls, and reserves the remaining C − n2·c2req (Mbps) when the number of prioritized calls exceeds N2 − M. The admission rule itself follows the New Call Bounding scheme, which limits the number of new calls (N1) with a threshold (M): a new call is admitted if the number of new calls does not exceed the threshold and is otherwise blocked, while a handoff call is rejected only when there is no bandwidth left in the system. The New Call Bounding scheme, however, assumes that all prioritized and nonprioritized calls require constant bandwidth and reserves a fixed amount for the delay-tolerant (nonprioritized) calls. As a result, delay-tolerant calls cannot use capacity beyond the upper bound of their reserved bandwidth even when there is no prioritized call in the system. Without changing the optimal threshold M, the proposed CAC policy, in conjunction with bandwidth reservation, changes the area reserved for nonprioritized calls dynamically upon each new prioritized call arrival. The admission policy of the proposed CAC is given in Algorithm 2.
The optimal CAC parameters for prioritized and nonprioritized calls are obtained from Algorithm 1 as follows.
Steps (1)-(3) determine the largest number of prioritized calls, C/c2req, that the channel can accommodate with the minimum bandwidth requirement of prioritized calls; if the blocking probability is larger than the required level, the algorithm stops because the channel capacity is insufficient. N2 is found by increasing it in each search step until the prioritized-call blocking probability BN2 falls below the required blocking probability; N2 cannot exceed C/c2req. Steps (4)-(8) determine the maximum value of T2 by first setting T2 = N2 and then decreasing T2 in each search step until the prioritized-call dropping probability BH2 falls below the required dropping probability. Steps (9)-(10) start from N1 = 1 and compute the threshold number M, and steps (11)-(19) compute the bandwidth c1(n1, n2) reserved for nonprioritized calls jointly with the steady-state probabilities of the prioritized calls, N1, and M.
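The search over N2 and T2 can be sketched under simplifying assumptions: a single prioritized service in which each call uses one channel, modeled by a classic guard-channel birth-death chain (new plus handoff arrivals up to T2, handoff-only arrivals between T2 and N2). This is our simplified stand-in for steps (1)-(8); the paper's algorithm operates on the full multiservice model, and all rates below are illustrative:

```python
def guard_channel_probs(lam_new, lam_h, mu, T2, N2):
    """Birth-death chain for prioritized calls: both new and handoff
    calls arrive while n < T2; only handoff calls are admitted for
    T2 <= n < N2 (guard channels).  Returns steady-state probs."""
    w = [1.0]
    for n in range(1, N2 + 1):
        lam = (lam_new + lam_h) if n - 1 < T2 else lam_h
        w.append(w[-1] * lam / (n * mu))
    g = sum(w)
    return [x / g for x in w]

def search_admission_region(lam_new, lam_h, mu, n_max, QN2, QH2):
    """Sketch of the search in Algorithm 1: grow N2 until new-call
    blocking meets the bound QN2 (with T2 = N2 the chain is pure
    Erlang B), then shrink T2 from N2 until the handoff dropping
    probability meets QH2.  Returns (T2, N2) or None if infeasible."""
    for N2 in range(1, n_max + 1):
        p = guard_channel_probs(lam_new, lam_h, mu, N2, N2)
        if p[N2] <= QN2:          # with T2 = N2, blocking = p[N2]
            break
    else:
        return None
    T2 = N2
    while T2 > 0:
        p = guard_channel_probs(lam_new, lam_h, mu, T2, N2)
        if p[N2] <= QH2:          # dropping = prob. the system is full
            return T2, N2
        T2 -= 1
    return None
```

For example, with a total offered load of 6 Erlangs (5 new + 1 handoff, unit service rate) and bounds QN2 = 1% and QH2 = 0.1%, the search settles on (T2, N2) = (12, 13).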
BN1 is the blocking probability of new nonprioritized calls and E[Tn1] is the mean response time of nonprioritized calls. To determine the number of nonprioritized calls N1, the two restrictions E[Tn1] and BN1 are considered under an optimal tradeoff: admitting more nonprioritized calls to the system increases their response time but decreases their blocking probability. Steps (20)-(22) search for the maximum N1 and M in each search step until both restrictions are satisfied, and step (23) outputs the obtained results.
3.1. New Call Bounding Scheme. This scheme limits the admission of nonprioritized calls into the system to provide the call-level QoS requirements of handoff prioritized calls while still guaranteeing acceptable QoS to nonprioritized calls. M is the threshold number for the nonprioritized calls: if the number of nonprioritized calls exceeds M, they are blocked; otherwise they are admitted. K defines the maximum bandwidth reserved for nonprioritized calls when the number of prioritized calls in the system is less than N2 − M. New and handoff nonprioritized and prioritized call arrivals are assumed to follow Poisson processes with mean rates λn1, λh1, λn2, and λh2, respectively [4].
The offered nonprioritized and prioritized loads are ρ1 = λ1/μ1 and ρ2 = λ2/μ2, where λ1 = λn1 + λh1 and λ2 = λn2 + λh2 are the total mean arrival rates of nonprioritized and prioritized calls. c1req denotes the capacity required to maintain the QoS requirements of a nonprioritized call. When there are n1 nonprioritized calls and n2 prioritized calls in the system, the probability of this state is given by the product-form solution

π(n1, n2) = (1/G) · (ρ1^n1 / n1!) · (ρ2^n2 / n2!),

where G is the normalization constant over the admissible states. In Algorithm 1, step (1) initializes N2 = 1, and Td_upper, QN1 (QH1), and QN2 (QH2) denote the respective upper bounds.
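The product-form distribution can be evaluated numerically by enumerating the admissible states. The sketch below follows the standard New Call Bounding formulation (states with n1 ≤ M and n1·c1 + n2·c2 ≤ C); the blocking conditions and all parameter values are our illustrative reading, not the paper's exact equations:

```python
from math import factorial

def ncb_blocking(rho1, rho2, c1, c2, C, M):
    """Product-form steady state for the New Call Bounding scheme:
    pi(n1, n2) proportional to (rho1^n1 / n1!) * (rho2^n2 / n2!)
    over the admissible states n1 <= M, n1*c1 + n2*c2 <= C.
    Returns (BN1, BH2): the probability that a nonprioritized call is
    blocked (n1 at its bound, or no free capacity) and the probability
    that a handoff prioritized call is dropped (no free capacity)."""
    states, G = {}, 0.0
    for n1 in range(M + 1):
        for n2 in range(int((C - n1 * c1) / c2) + 1):
            p = rho1**n1 / factorial(n1) * rho2**n2 / factorial(n2)
            states[(n1, n2)] = p
            G += p
    bn1 = sum(p for (n1, n2), p in states.items()
              if n1 == M or (n1 + 1) * c1 + n2 * c2 > C) / G
    bh2 = sum(p for (n1, n2), p in states.items()
              if n1 * c1 + (n2 + 1) * c2 > C) / G
    return bn1, bh2
```

For a toy system with unit bandwidths (c1 = c2 = 1), C = 2, M = 1, and ρ1 = ρ2 = 1, the five admissible states give BN1 = 2.5/4.5 and BH2 = 1.5/4.5.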
The state space is defined over the admissible pairs (n1, n2). The nonprioritized-call blocking and prioritized-call dropping probabilities are then obtained by summing π(n1, n2) over the corresponding blocking states, where ⌊·⌋ denotes the floor function, which rounds its input down to the nearest integer.
The mean nonprioritized-call response time is obtained by dividing the total mean number of nonprioritized calls in the system, E[n1], by the mean admitted call arrival rate, which is Little's law [17]. In the New Call Bounding scheme, the capacity for delay-tolerant (nonprioritized) calls is constant and does not change with the number of other calls in the system; the purpose of this paper is therefore to show the impact of an adaptable capacity on system performance with the same number of users as the New Call Bounding scheme. E[Tn1] denotes the mean nonprioritized-call response time. The total channel utilization efficiency η is the ratio of used bandwidth to the total system bandwidth and is calculated from the channel occupancy probabilities of all users. The total mean throughput (calls/s) is the mean rate at which nonprioritized calls are served, where f_n1 is the mean file size of nonprioritized calls.
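The Little's-law step can be written out directly. One detail worth making explicit (our reading of the elided equation) is that the denominator should be the rate of admitted calls, i.e. the raw arrival rate thinned by the blocking probability:

```python
def little_response_time(pi, lam):
    """E[T] via Little's law: mean number in system divided by the
    effective (admitted) arrival rate.  pi[k] is the steady-state
    probability of k calls in a finite-capacity system whose last
    state pi[-1] is the blocking state, so the effective arrival
    rate is lam * (1 - pi[-1])."""
    mean_in_system = sum(k * p for k, p in enumerate(pi))
    return mean_in_system / (lam * (1 - pi[-1]))
```

Sanity check: for an M/M/1/1 system with λ = μ = 1 the distribution is [0.5, 0.5], so E[T] = 0.5 / (1 · 0.5) = 1, the mean service time, as expected.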
When a prioritized (new) call arrives:
  if (the total number of prioritized calls < T2), admit the call;
  else, reject the call.
When a prioritized (handoff) call arrives:
  if (the total number of prioritized calls < N2), admit the call;
  else, reject the call.
When a nonprioritized (new or handoff) call arrives:
  if (the total number of prioritized calls < N2 − M),
    allocate the K bandwidth to the nonprioritized calls and admit the call; else, reject the call;
  else, if (the total number of prioritized calls ≥ N2 − M),
    allocate the remaining bandwidth and admit the call; else, reject the call.
Algorithm 2: Proposed CAC policy.
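Algorithm 2 translates almost line for line into code. The n1 < M check on the nonprioritized branch is our reading of the scheme's threshold rule (the "reject" branches in the extracted pseudocode are elided), so this is a sketch rather than the paper's exact policy:

```python
def admit(call_type, is_handoff, n1, n2, T2, N2, M):
    """Sketch of Algorithm 2.  Returns (admitted, bandwidth_pool),
    where bandwidth_pool is 'K' when nonprioritized calls are served
    from the reserved K Mbps and 'remaining' when they share what the
    prioritized calls leave over.  n1 and n2 are the current counts of
    nonprioritized and prioritized calls."""
    if call_type == "prioritized":
        limit = N2 if is_handoff else T2   # handoff calls get the guard region
        return (n2 < limit, None)
    # nonprioritized: new and handoff calls are treated equally
    if n1 >= M:                            # threshold rule (our assumption)
        return (False, None)
    pool = "K" if n2 < N2 - M else "remaining"
    return (True, pool)
```

For instance, with T2 = 5, N2 = 8, M = 2, a new prioritized call is rejected once five prioritized calls are active, while a handoff prioritized call is still admitted up to eight.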

3.2. Proposed CAC Scheme.
The proposed CAC scheme handles the nonprioritized and prioritized calls separately. First, when admitting both traffic types into the system, the proposed scheme follows the same admission policy as the New Call Bounding scheme described above, except that it provides adaptable bandwidth reservation instead of the fixed bandwidth set in the system. Second, in the proposed policy, each type of call is analyzed by a one-dimensional Markov chain model based on its service type. Since nonprioritized calls can use the bandwidth amount determined by Algorithm 1, the steady-state probability π(n1) that n1 calls are in the system can be obtained from an M/G/1/K-PS queue model [18]. Prioritized calls require a fixed capacity because they cannot tolerate delay; their steady-state probabilities π(n2) that n2 calls are in the system can be obtained from an M/M/K/K queue model.
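Both one-dimensional distributions have simple closed forms, which the sketch below evaluates. For the processor-sharing queue we use the insensitivity property of PS: the queue-length distribution of M/G/1/K-PS depends on the service-time distribution only through its mean and so coincides with the truncated-geometric M/M/1/K distribution (loads are illustrative):

```python
from math import factorial

def mmkk_probs(rho, K):
    """M/M/K/K (Erlang loss) distribution used for prioritized calls:
    pi(n) = (rho^n / n!) / sum_k (rho^k / k!), n = 0..K."""
    w = [rho**n / factorial(n) for n in range(K + 1)]
    g = sum(w)
    return [x / g for x in w]

def mg1k_ps_probs(rho, K):
    """Queue-length distribution of an M/G/1/K processor-sharing
    queue.  By PS insensitivity it matches M/M/1/K:
    pi(n) = rho^n / sum_k rho^k, n = 0..K."""
    w = [rho**n for n in range(K + 1)]
    g = sum(w)
    return [x / g for x in w]
```

The last entry of `mmkk_probs` is the Erlang-B blocking probability; the last entry of `mg1k_ps_probs` is the probability that an arriving nonprioritized call finds the admission bound reached.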

Prioritized Calls Resource Allocation.
The prioritized traffic load when the system is in state n2 is ρn2 = ρ2. The steady-state probability π(n2) follows from the birth-death balance equations, where α is the fraction of the handoff prioritized traffic load, β is the threshold constant for admitting new prioritized calls when n2 = T2, and π(0) is the normalization constant.

Nonprioritized Calls Resource Allocation.
The varying capacity available to nonprioritized calls, c1(n1, n2), is the capacity left to nonprioritized traffic when the system is occupied by n2 prioritized calls, subject to the conditions on the total bandwidth shared between nonprioritized and prioritized calls. From the M/G/1/K-PS model, the mean nonprioritized-call response time can be determined under the nonprioritized traffic load by accounting for the variability in the service capacity available to nonprioritized calls. Stability requires the nonprioritized traffic load in state n2 to satisfy ρn1 < 1; for larger values of ρn1 the system becomes unstable, and the mean response time of nonprioritized calls exceeds its maximum acceptable value [19].
The mean offered traffic load of nonprioritized calls follows accordingly. The threshold number M is calculated numerically from the optimal number N1, with c1req ranging between its minimum and maximum capacity values. The steady-state probability π(n1) that n1 calls are in the system then yields the nonprioritized-call blocking and dropping probabilities, where BH1 is the dropping probability of handoff nonprioritized calls.
The mean response time of nonprioritized calls is calculated according to Little's law. To determine the bandwidth utilization efficiency, note that nonprioritized calls (new and handoff) use at most K (Mbps) of bandwidth, so the remaining C − K (Mbps) is unoccupied with probability π1(n1)π2(0) when no prioritized call (new or handoff) arrives. Conversely, if no nonprioritized call (new or handoff) arrives, the unoccupied bandwidth is C − n2·c2req (Mbps), with probability π2(n2)π1(0). The utilization efficiency follows, where Z denotes the sum over n2 = 1, ..., N2 of π2(n2)π1(0)·(C − n2·c2req). The mean total throughput is obtained analogously. The overload probability Pov is defined as the probability that the capacity used by a nonprioritized call drops below a threshold c1drop.

Fixed-Point Iterative Algorithm for the Calculation of the Nonprioritized and Prioritized Handoff Call Arrival Rates.
To compute the steady-state probabilities, we must know the handoff call arrival rates for both service types. In equilibrium, the handoff arrival rate of each call type must equal its handoff departure rate in the cell, which determines the mean handoff arrival rate [20]. Note, however, that the handoff arrival rate depends on the steady-state probabilities, which are unknown at the beginning. By setting initial values for the handoff call arrival rates and using an iterative approach [21], we can determine the actual handoff arrival rates. Initial values for λh1 and λh2 can be set as in [22], where H1 and H2 are the handoff probabilities of nonprioritized and prioritized calls. With these initial values, we can use the following iterative algorithm.
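The iteration (set an initial handoff rate, update it from the equilibrium condition, repeat until it stops changing) can be sketched for one call type. We use the common approximation λh = H·(λn + λh) with handoff probability H = μr/(μdr + μr); this closed-form update is our simplification, since the paper's exact update involves the steady-state probabilities:

```python
def fixed_point_handoff(lam_new, mu_dr, mu_r, tol=1e-9, max_iter=1000):
    """Fixed-point iteration for the handoff arrival rate of one call
    type.  H = mu_r / (mu_dr + mu_r) is the probability that a call
    hands off before completing; in equilibrium the handoff arrival
    rate satisfies lam_h = H * (lam_new + lam_h)."""
    H = mu_r / (mu_dr + mu_r)
    lam_h = H * lam_new                 # initial value, as in the text
    for _ in range(max_iter):
        new_lam_h = H * (lam_new + lam_h)
        if abs(new_lam_h - lam_h) < tol:
            return new_lam_h
        lam_h = new_lam_h
    return lam_h
```

The update is a contraction whenever H < 1, so the iteration converges to the fixed point λh = H·λn/(1 − H); with μdr = μr (H = 0.5) and λn = 1 this gives λh = 1.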
Step 5. Compute the performance measures, such as the blocking and dropping probabilities, response time, throughput, and utilization efficiency, according to (1)-(23).
Figure 2 shows the mean nonprioritized-call response time Tn1 as a function of the prioritized-call arrival rate. The mean nonprioritized response time decreases exponentially as the prioritized-call arrival rate increases. The reason is the varying capacity available to nonprioritized calls: a higher prioritized load increases the probability of a larger bandwidth reservation for nonprioritized calls, compared with the fixed capacity of the New Call Bounding scheme. We observed that the nonprioritized-call response time reaches its greatest value (Tn1 = 9.3879 s) at a particular load (in the study case, ρn2 = 10.3905 and λn2 = 0.0578 calls/s). Figure 3 shows the dropping probability of prioritized calls as a function of λn2. The dropping probability of prioritized calls remains very low in the proposed scheme. It can reach the upper bound (0.1%) as the prioritized-call arrival rate λn2 increases, whereas the dropping probability of the New Call Bounding scheme exceeds the upper bound.
In the proposed scheme, when prioritized calls are admitted to the system, the upper bounds on the blocking and dropping probabilities are treated as policy limits. A number of prioritized (nonprioritized) calls that would exceed the blocking (dropping) probability limit is not allowed into the system; hence, the blocking (dropping) probability never exceeds its upper bound.
Figure 4 shows that the blocking probability of prioritized calls, BN2, stays below 1% for all call arrival rates, while the New Call Bounding scheme cannot meet the required QoS. In the New Call Bounding scheme, BN2 = BH2, since that scheme uses no threshold for its handoff calls.
Figure 5 shows the blocking probabilities of nonprioritized calls as a function of the prioritized-call arrival rate. Even as the prioritized-call arrival rate increases, the blocking probability of nonprioritized calls remains below its upper bound.
Figure 6 shows the nonprioritized-call throughput (calls/s) as a function of the prioritized-call arrival rate. As the prioritized-call arrival rate increases, the throughput γ increases exponentially. Beyond the call arrival rate λn2 = 0.1011, the increase is faster, as the system cannot operate effectively under a heavy prioritized load. At the offered call arrival rate λn2 = 0.1011 calls/s, the scheme achieves a considerably higher throughput than the New Call Bounding scheme; the largest throughput is γ = 0.2969.
The probability of unoccupied bandwidth depends on the probability that no prioritized call is present, which takes very low values (π2(0) = 9.7205 × 10^−6 down to 8.3723 × 10^−43), and the utilization efficiency is better than that of the New Call Bounding scheme (0.6968-0.8664).
The overload probability of nonprioritized calls is defined as the probability that the bandwidth available to a nonprioritized call is less than 0.8·c1req. Figure 8 shows the overload performance. The overload probability decreases (from 0.1659 to 0) as the prioritized-call arrival rate λn2 increases, because the capacity reserved for nonprioritized calls increases.
Beyond a low call arrival rate (λn2 = 0.0722), the overload probability drops to zero, indicating that the required capacity for nonprioritized calls is maintained. In the New Call Bounding scheme, by contrast, overload never occurs, because the capacity reserved for nonprioritized calls is fixed, does not change with traffic load variation, and is set larger than 0.8·c1req.

Conclusion
In this paper, we proposed a new call admission scheme with resource management for nonprioritized and prioritized calls in a cellular network. The New Call Bounding scheme was chosen for comparison because the admission policy of the proposed CAC is derived from it. Before settling on the proposed scheme, we studied how the performance of the New Call Bounding scheme could be improved through proper and effective resource management without changing the number of users of each service type. We developed two iterative algorithms: one that obtains the optimal numbers of prioritized and nonprioritized calls under different traffic load conditions, dynamically searching for the optimal N1, N2, T2, and threshold M for each traffic load parameter in each search interval subject to the QoS requirements of the policy (such as E[Tn1], BN1, and BH1), and another, for bandwidth allocation, that works jointly with the first. It was shown that the admission scheme maintains all upper-bound QoS requirements in terms of throughput, nonprioritized-call response time, and blocking and dropping probabilities, and provides better system performance by effectively sharing the total bandwidth between prioritized and nonprioritized calls.

Figure 2: The mean response time of nonprioritized calls versus prioritized calls arrival rate.