The proposed CAC policy uses complete-sharing (CS) access, in which both prioritized and nonprioritized calls can use all the capacity subject to the CAC policy limitations, as shown in Figure 1. However, due to their lower priority, the policy limits the admission of nonprioritized calls into the network and also limits the bandwidth that can be used by new and handoff nonprioritized calls. The number of nonprioritized calls admitted to the network is determined optimally in accordance with the CAC policy's QoS considerations for nonprioritized calls, such as the upper bound on the mean call response time and the blocking/dropping probability, under varying traffic load conditions. New prioritized calls can use up to a certain amount of bandwidth in the system, whereas handoff prioritized calls can use the entire bandwidth ahead of all the nonprioritized calls (new or handoff). The minimum *T*₂ and the maximum *N*₂, where *N*₂ is the number of new and handoff prioritized calls and *T*₂ is the number of new prioritized calls allowed, can be determined optimally by the CAC search algorithm given in Algorithm 1.

**Algorithm 1:** Search algorithm for determining the optimal numbers of prioritized and nonprioritized calls.

(1) initialize *N*₂ and *T*₂; % both are bounded above by the channel capacity.

(2) while (the blocking probability of prioritized calls exceeds the required level)

(3) increase *N*₂ by one, subject to *N*₂ not exceeding the number of calls the capacity can carry

(4) recompute the blocking probability

(5) end

(6) while (the dropping probability of prioritized calls exceeds the required level) && (*T*₂ > 0)

(7) decrease *T*₂ by one, starting from *T*₂ = *N*₂, and recompute the dropping probability

(8) end

(9) start from *N*₁ = 1

(10) compute the threshold number *M*

(11) for each number of prioritized calls in the system

(12) for each number of nonprioritized calls in the system

(13) compute the steady-state probability of the prioritized calls

(14) for each candidate reserved bandwidth

(15) update the reserved bandwidth *K* for nonprioritized calls

(16) end

(17) update *K* jointly with *N*₁ and *M*

(18) end

(19) end

(20) while (the mean response time and the blocking probability of nonprioritized calls satisfy their bounds)

(21) increase *N*₁ and update *M*

(22) end

(23) Output (*N*₂, *T*₂, *N*₁, *M*, *K*)
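The search structure of Algorithm 1 can be sketched in Python. This is a hedged illustration, not the paper's exact procedure: `blocking_prob`, `dropping_prob`, `response_time`, and `block_prob_np` are placeholder callables standing in for the Markov-chain formulas of this section, and the coupling `M = N1` is an assumed simplification.

```python
def search_cac_parameters(C, b2, B_req, D_req, W_req, B1_req,
                          blocking_prob, dropping_prob,
                          response_time, block_prob_np):
    """Sketch of the Algorithm 1 searches; all QoS functions are injected."""
    # Steps (1)-(5): grow N2 until the prioritized blocking probability
    # meets its target, bounded by the capacity in units of bandwidth b2.
    N2_max = C // b2
    N2 = 1
    while N2 < N2_max and blocking_prob(N2) > B_req:
        N2 += 1
    # Steps (6)-(8): start from T2 = N2 and shrink T2 until the
    # prioritized dropping probability meets its target.
    T2 = N2
    while T2 > 1 and dropping_prob(T2) > D_req:
        T2 -= 1
    # Steps (20)-(22): grow N1 (with M coupled to it, an assumption)
    # while the nonprioritized response-time and blocking bounds hold.
    N1 = M = 1
    while response_time(N1 + 1) <= W_req and block_prob_np(N1 + 1) <= B1_req:
        N1 += 1
        M = N1  # placeholder: the paper derives M from N1 numerically
    return N2, T2, N1, M
```

With simple monotone stand-ins (e.g. `blocking_prob=lambda n: 0.5 / n`), each loop terminates at the first value meeting its bound.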

Since prioritized calls cannot tolerate delay, they require a constant amount of bandwidth to meet their QoS requirements, whereas nonprioritized calls can tolerate a certain amount of delay, so their bandwidth requirement can adapt to the varying available bandwidth. The proposed CAC scheme reserves at most the optimal *K* (Mbps) of bandwidth, determined by the search algorithm in Algorithm 1, for nonprioritized calls when the total number of prioritized calls in the system is below a threshold, where *M* is the optimal threshold number for nonprioritized calls, and reserves the remaining bandwidth (Mbps) when the number of prioritized calls exceeds that threshold. This admission scheme builds on the New Call Bounding admission scheme, which limits the number of new calls (*N*₁) with a threshold: if the number of new calls does not exceed the threshold, a call is admitted; otherwise, it is blocked, while handoff calls are rejected only when there is no bandwidth left in the system. That scheme, however, assumes that all prioritized and nonprioritized calls require constant bandwidth and reserves a constant bandwidth for the delay-tolerant (nonprioritized) calls. This leads to underutilization of the capacity reserved for delay-tolerant calls, up to its upper bound, when there is no prioritized call in the system. Without any change in the optimal threshold number *M*, the proposed CAC policy, in conjunction with bandwidth reservation, changes the area reserved for nonprioritized calls dynamically upon each new prioritized call arrival. The admission policy of the proposed CAC is given in Algorithm 2.

**Algorithm 2:** Proposed CAC policy.

When a prioritized (new) call arrives

if (the total number of prioritized calls < *T*₂)

admit the call

else reject the call

When a prioritized (handoff) call arrives

if (the total number of prioritized calls < *N*₂)

admit the call

else reject the call

When a nonprioritized (new or handoff) call arrives

if (the total number of prioritized calls < *T*₂)

allocate the reserved (*K*) bandwidth to the nonprioritized calls

compute the admission conditions

if (the number of nonprioritized calls < *M*) && (the required bandwidth is available)

admit the call

else reject the call

else if (the total number of prioritized calls ≥ *T*₂)

allocate the (remaining) bandwidth

compute the admission conditions

if (the number of nonprioritized calls < *M*) && (the required bandwidth is available)

admit the call

else reject the call
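The admission rules above can be summarized in a short sketch, with the thresholds passed in explicitly. The switch of the nonprioritized reserved area at *T*₂ and the simple bandwidth check `(n1 + 1) * b1 <= reserved` are illustrative assumptions; the paper evaluates the admission conditions through the queueing analysis that follows.

```python
def admit(call_type, handoff, n2, n1, T2, N2, M, K, C, b1, b2):
    """Decide admission for an arriving call.

    n2, n1 -- current numbers of prioritized / nonprioritized calls.
    """
    if call_type == "prioritized":
        # New prioritized calls are capped at T2; handoff prioritized
        # calls may occupy the full range up to N2.
        limit = N2 if handoff else T2
        return n2 < limit
    # Nonprioritized (new or handoff): the reserved area depends on n2
    # (assumed switch point at T2), plus the threshold M on call count.
    reserved = K if n2 < T2 else C - n2 * b2
    return n1 < M and (n1 + 1) * b1 <= reserved
```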

The optimal CAC parameters for prioritized and nonprioritized calls can be obtained from Algorithm 1 as follows. Steps (1)–(3) determine the largest number of prioritized calls (*N*₂) that the channel can accommodate with the minimum bandwidth requirement of prioritized calls; if the blocking probability is larger than the required level, the algorithm stops due to insufficient channel capacity. *N*₂ is found by increasing it in each search step until the prioritized-call blocking probability *BN*₂ is smaller than the required blocking probability. The maximum value of *N*₂ cannot exceed the number of calls the total capacity can carry. Steps (4)–(8) determine the maximum value of *T*₂ by first setting *T*₂ equal to *N*₂ and then decreasing *T*₂ in each search step until the prioritized-call dropping probability is smaller than the required dropping probability. Steps (9)-(10) start from *N*₁ = 1 and compute the threshold number *M*, and steps (11)–(19) compute the bandwidth *K* reserved for nonprioritized calls jointly with the steady-state probability of prioritized calls, *N*₁, and *M*.

The constraint quantities are the blocking probability of the new prioritized calls and the mean response time of nonprioritized calls. To determine the number of nonprioritized calls, two restrictions are considered under the control of an optimal tradeoff: increasing the number of admitted nonprioritized calls increases their response time but decreases their blocking probability. Steps (20)–(22) search for the maximum *N*₁ and *M* in each search step as long as the two restrictions remain satisfied, and step (23) outputs the obtained results.

### 3.1. New Call Bounding Scheme

This scheme limits the admission of nonprioritized calls into the system to provide the call-level QoS requirements for handoff prioritized calls while an acceptable QoS is still guaranteed to nonprioritized calls. *M* is the threshold number for the nonprioritized calls: if the number of nonprioritized calls exceeds *M*, they are blocked; otherwise they are admitted. *K* defines the maximum amount of bandwidth reserved for nonprioritized calls when the number of prioritized calls in the system is below the threshold. New and handoff prioritized and nonprioritized call arrivals are assumed to follow Poisson processes with their respective mean arrival rates [4].

The offered prioritized and nonprioritized loads are given by the ratios of the total mean arrival rates to the mean service rates of the two classes, where the total mean arrival rate of each class is the sum of its new-call and handoff rates; a per-call capacity is required to maintain the QoS requirements for nonprioritized calls. When there are *n*₁ nonprioritized calls and *n*₂ prioritized calls in the system, the probability of this state is given by a product-form solution as follows:

p(*n*₁, *n*₂) = (ρ₁^*n*₁ / *n*₁!) (ρ₂^*n*₂ / *n*₂!) / *G*,

where

*G* = Σ_{(*n*₁, *n*₂) ∈ *S*} (ρ₁^*n*₁ / *n*₁!) (ρ₂^*n*₂ / *n*₂!).

The state space is defined as *S* = {(*n*₁, *n*₂) : *n*₁*b*₁ + *n*₂*b*₂ ≤ *C*, *n*₁ ≤ *M*}, with *b*₁ and *b*₂ the per-call bandwidths of nonprioritized and prioritized calls and *C* the total capacity.

Thus, the nonprioritized-call blocking and prioritized-call dropping probabilities can be obtained as

*B*₁ = Σ p(*n*₁, *n*₂) over the states with *n*₁ = *M* or *n*₁*b*₁ + *n*₂*b*₂ + *b*₁ > *C*,

*D*₂ = Σ p(*n*₁, *n*₂) over the states with *n*₂ = ⌊(*C* − *n*₁*b*₁)/*b*₂⌋,

where ⌊·⌋ represents the floor function, which rounds its input to the nearest integer less than or equal to the input itself.
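As a numerical illustration of the product-form analysis, the following sketch enumerates the feasible states, normalizes the product-form weights, and sums the probabilities of the blocking states. The parameter names (`rho1`, `rho2`, `b1`, `b2`, `C`, `M`) and the blocking conditions follow the standard New Call Bounding construction and are assumptions insofar as the paper's exact expressions are not reproduced here.

```python
from math import factorial

def ncb_probabilities(rho1, rho2, b1, b2, C, M):
    """Product-form state probabilities and blocking/dropping sums."""
    # Feasible states: n1*b1 + n2*b2 <= C and n1 <= M.
    states = [(n1, n2)
              for n1 in range(M + 1)
              for n2 in range(int((C - n1 * b1) // b2) + 1)]
    weight = {(n1, n2): (rho1 ** n1 / factorial(n1)) * (rho2 ** n2 / factorial(n2))
              for (n1, n2) in states}
    G = sum(weight.values())              # normalization constant
    p = {s: w / G for s, w in weight.items()}
    # A new nonprioritized call is blocked at n1 = M or with no room for b1;
    # a prioritized call is dropped only when there is no room for b2.
    B1 = sum(pr for (n1, n2), pr in p.items()
             if n1 == M or n1 * b1 + n2 * b2 + b1 > C)
    D2 = sum(pr for (n1, n2), pr in p.items()
             if n1 * b1 + n2 * b2 + b2 > C)
    return p, B1, D2
```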

The mean nonprioritized-call response time is obtained by dividing the total mean number of nonprioritized calls in the system by their mean admitted call arrival rate, which is known as Little's law [17]. In the New Call Bounding scheme, the capacity for the delay-tolerant (nonprioritized) calls is constant and does not change as the number of calls of the other type increases or decreases; hence, the purpose of this paper is to show the impact of an adaptable capacity on system performance with the same number of users as in the New Call Bounding scheme. The mean nonprioritized-call response time is calculated as

Total channel utilization efficiency is the ratio of the used bandwidth to the total system bandwidth. From all the users' channel occupancy probabilities, it is calculated as

Total mean throughput (calls/s) is the mean rate at which all nonprioritized calls are served and is calculated as

where is the mean file size for nonprioritized calls.

### 3.2. Proposed CAC Scheme

The proposed CAC scheme handles nonprioritized and prioritized calls separately. First, when the proposed scheme admits both traffic types into the system, it follows the same admission policy as the New Call Bounding scheme described in Section 3, except that the proposed policy provides an adaptable bandwidth reservation instead of a fixed bandwidth set in the system. Second, in the proposed policy, each type of call is analyzed by a one-dimensional Markov chain model based on its service type. Since nonprioritized calls can use the bandwidth amount determined by Algorithm 1, their steady-state probability, for a given number of calls in the system, can be obtained from an M/G/1/K-PS queue model [18]. Prioritized calls require a fixed capacity because they cannot tolerate delay; their steady-state probabilities, for a given number of calls in the system, can be obtained from an M/M/K/K queue model.
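The M/M/K/K model named above is the classical Erlang loss system, whose blocking probability can be computed with the numerically stable Erlang-B recursion; a minimal sketch:

```python
def erlang_b(rho, K):
    """Blocking probability of an M/M/K/K loss system with offered load rho."""
    # Iterative form of the Erlang-B formula (avoids large factorials).
    B = 1.0
    for k in range(1, K + 1):
        B = rho * B / (k + rho * B)
    return B
```

For example, one Erlang of offered load on a single server blocks half of the arrivals.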

#### 3.2.1. Prioritized Calls Resource Allocation

The prioritized traffic load for a given system state, and the corresponding steady-state probability, can be obtained by

where the first parameter denotes the fraction of the handoff prioritized traffic load, *T*₂ is the threshold constant for admitting the new prioritized calls, and the normalization constant is given by
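The thresholded birth-death chain described here can be sketched as follows, assuming (as in standard guard-channel analyses) that below *T*₂ the full prioritized load enters and above it only the handoff fraction does; the parameter names `rho2` and `f` are illustrative.

```python
from math import factorial

def prioritized_steady_state(rho2, f, T2, N2):
    """Steady-state distribution of the thresholded prioritized-call chain.

    Below T2 the full load rho2 enters; above T2 only the handoff
    fraction f of the load is admitted (guard-channel construction).
    """
    w = []
    for n in range(N2 + 1):
        if n <= T2:
            w.append(rho2 ** n / factorial(n))
        else:
            w.append(rho2 ** T2 * (f * rho2) ** (n - T2) / factorial(n))
    G = sum(w)                    # normalization constant
    return [x / G for x in w]
```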

#### 3.2.2. Nonprioritized Calls Resource Allocation

The varying capacity for the nonprioritized calls is given by

where the capacity available to the nonprioritized traffic depends on the number of prioritized calls occupying the system. The total bandwidth-sharing conditions between nonprioritized and prioritized calls are given by

From the M/G/1/K-PS model, the mean nonprioritized-call response time can be determined under the nonprioritized traffic load by accounting for the variability in the service capacity of nonprioritized calls. The nonprioritized traffic load when the system is in a given state is given by

The nonprioritized traffic load must remain below one for the system to be stable; for greater values, the system becomes unstable and the mean response time of nonprioritized calls grows beyond its maximum acceptable value [19]. The mean offered traffic load of nonprioritized calls is given by

The threshold number *M* is calculated numerically from the optimal value of *N*₁. The nonprioritized calls receive capacity between the minimum and maximum values of the adaptable bandwidth. *M* is given by

The steady-state probability, for a given number of nonprioritized calls in the system, can be obtained as
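By the insensitivity of processor sharing, the M/G/1/K-PS queue-length distribution coincides with that of M/M/1/K, which gives a compact way to sketch the nonprioritized steady state, blocking probability, and Little's-law response time. The fixed load `rho` here ignores the state-dependent capacity of the proposed scheme, so this is a simplified sketch.

```python
def mg1k_ps(rho, K, lam):
    """M/G/1/K-PS sketch: distribution, blocking, mean response time.

    By insensitivity the queue-length law matches M/M/1/K with load rho;
    lam is the nonprioritized arrival rate used in Little's law.
    """
    if rho == 1.0:
        p = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1.0 - rho ** (K + 1)) / (1.0 - rho)
        p = [rho ** k / norm for k in range(K + 1)]
    blocking = p[K]                      # arrivals finding the queue full
    mean_n = sum(k * pk for k, pk in enumerate(p))
    response = mean_n / (lam * (1.0 - blocking))   # Little's law, admitted rate
    return p, blocking, response
```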

Nonprioritized calls blocking and dropping probabilities can be obtained as

where is the dropping probability of the handoff prioritized calls.

The mean response time of nonprioritized calls is calculated according to Little's law and given by

In the proposed scheme, to determine the bandwidth utilization efficiency, note that nonprioritized calls (new and handoff) use at most *K* (Mbps) of bandwidth, so the remaining bandwidth (Mbps) is unoccupied, with the corresponding probability, whenever no prioritized call (new or handoff) arrives to the system. On the other hand, if no nonprioritized call (new or handoff) arrives to the system, only the bandwidth reserved for nonprioritized calls (Mbps) is unoccupied, with the corresponding probability. The utilization efficiency is obtained as

where denotes .

The mean total throughput can be obtained as

Overload probability is defined as the probability that the capacity used by a nonprioritized call user drops below a threshold; it is obtained as

### 3.3. Fixed-Point Iterative Algorithm for Calculating the Handoff Call Arrival Rates of Nonprioritized and Prioritized Calls

To begin computing the steady-state probabilities, we must know the handoff call arrival rates for both types of service. The handoff arrival rate of each call type must equal its handoff departure rate in a cell. The mean handoff arrival rate can be determined as [20]

We note that the determination of the handoff arrival rate depends on the steady-state probability, which is unknown at the beginning. By setting initial values for the handoff call arrival rates and using the iterative approach [21], we can determine the actual handoff arrival rates. The initial values can be set as [22]

where the handoff probabilities of the prioritized and nonprioritized calls are given as

With these initial values, we can use the following iterative algorithm.

Step 1.

Set the initial values for and according to (25).

Step 2.

Calculate the steady-state probabilities according to (7), (16), (9), (10), (17), and (18).

Step 3.

Calculate the mean handoff arrival rates using (24).

Step 4.

Let *ε* (>0) be a predefined small value. If the difference between two successive iterates of the handoff arrival rates for either call type is larger than *ε*, the iteration continues: update the handoff arrival rates and go to Step 2.

Step 5.

Compute the performance measures, such as blocking and dropping probabilities, response time, throughput, and utilization efficiency, according to (1)–(23).
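The five steps above form a fixed-point iteration, which can be sketched generically. Here `update` stands in for the mapping from the current handoff rates through the steady-state solution back to new handoff rates (the paper's (24)); the contractive stand-in in the test merely demonstrates convergence.

```python
def fixed_point(update, x0, eps=1e-9, max_iter=1000):
    """Iterate x <- update(x) until successive iterates differ by < eps."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        # Stop once every component has moved by less than eps (Step 4).
        if all(abs(a - b) < eps for a, b in zip(x_new, x)):
            return x_new
        x = x_new
    return x
```

In the CAC setting, `x0` would be the initial handoff rates from (25), and the converged vector feeds the performance computations of Step 5.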