 Research
 Open Access
A resource block organization strategy for scheduling in carrier aggregated systems
Guillermo Galaviz^{1,2},
David H. Covarrubias^{1},
Angel G. Andrade^{2} and
Salvador Villarreal^{1}
https://doi.org/10.1186/1687-1499-2012-107
© Galaviz et al; licensee Springer. 2012
 Received: 15 October 2011
 Accepted: 14 March 2012
 Published: 14 March 2012
Abstract
Carrier aggregation (CA) is a promising technology that will allow IMT-Advanced system candidates to achieve spectrum bandwidths of up to 100 MHz using available system resource blocks (RB), even if these are fragmented. Implementation of CA functionality is managed through schedulers capable of assigning multiple RBs to a user. When each available RB is handled individually, the delay incurred in assigning multiple RBs to each user can affect the quality of service (QoS). In this article we develop an efficient scheduling strategy that reduces spectrum resource assignment delay in systems that make use of CA. The strategy is based on an a priori organization of available RBs into sets. Two different RB Organization Algorithms are compared. To evaluate the performance of the proposed strategy, numerical simulation was performed using a Round Robin scheduler for the downlink of a macrocellular environment. Results show that the proposed strategy reduces the delay required to assign resources to users without affecting downlink user capacity when compared to block-by-block scheduling strategies proposed in the literature. The benefits of the proposed strategy are discussed, as well as opportunities for improvement.
Keywords
 Schedule Strategy
 Packet Delay
 User Request
 Resource Block
 Channel Quality Indicator
1 Introduction
Wireless cellular communication systems have been part of our everyday life for more than 30 years. Currently, wireless cellular systems are evolving from voice-oriented solutions into broadband wireless access systems. Recent developments of next generation wireless cellular systems in the IMT-Advanced initiative specify a 1 Gbps downlink data rate for static users and 100 Mbps for high mobility users. In order to achieve such high data rates in a wireless system, spectrum efficiency has become a physical layer design priority. Technology developments such as MIMO-OFDM, together with high-order modulation schemes and efficient error correcting codes, allow for a spectrum efficiency of up to 15 bps/Hz [1]. However, even with such high spectrum efficiency (achieved only under optimum channel conditions) there is a large requirement of spectrum bandwidth. For the 1 Gbps transfer rate and a spectrum efficiency of 15 bps/Hz, approximately 67 MHz of bandwidth would be required by one user for the duration of a data transfer.
Current versions of broadband wireless systems make use of channel bandwidths of up to 20 MHz [2]. Therefore, a different spectrum management scheme is required for next generation wireless systems in order to provide the required bandwidth. Due to the fragmentation of the spectrum bands for next generation broadband wireless cellular systems, the expected growth of broadband wireless users, and the large bandwidths required to provide high data rate services [3], the spectrum available is considered to be scarce and fragmented.
Carrier aggregation (CA) has been defined as an enabling technology to overcome the spectrum scarcity and fragmentation problem. CA allows a system to aggregate multiple spectrum resources (resource blocks, or RBs) and assign them to a single user in order to provide sufficient bandwidth for a given service. CA works by allowing the system to assign RBs that may or may not be contiguous, possibly located in different frequency bands. This gives rise to three different types of CA [4]:

Contiguous CA: Aggregation of contiguous RBs within the same frequency band.

Non-contiguous intra-band CA: Aggregation of non-contiguous RBs available within the same frequency band.

Non-contiguous inter-band CA: Aggregation of non-contiguous RBs available in different frequency bands.
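The three CA types above can be illustrated with a small classifier over a user's RB assignment. This is a sketch: the tuple format and names are ours for illustration, not from the article.

```python
# Illustrative sketch: classify a carrier aggregation configuration from
# the bands of the assigned RBs and whether RB indices are contiguous.

def classify_ca(rbs):
    """rbs: list of (band, rb_index) tuples for one user's assignment."""
    bands = {band for band, _ in rbs}
    if len(bands) > 1:
        return "non-contiguous inter-band CA"
    indices = sorted(i for _, i in rbs)
    contiguous = all(b - a == 1 for a, b in zip(indices, indices[1:]))
    return "contiguous CA" if contiguous else "non-contiguous intra-band CA"

print(classify_ca([("2.3GHz", 4), ("2.3GHz", 5), ("2.3GHz", 6)]))  # contiguous CA
print(classify_ca([("2.3GHz", 4), ("2.3GHz", 9)]))                 # intra-band
print(classify_ca([("2.3GHz", 4), ("3.4GHz", 0)]))                 # inter-band
```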
In order to implement CA spectrum assignment to a single user, the system requires a scheduler with multiple-RB assignment capabilities. In general, the task of the scheduler is to optimize resource usage in a feasible amount of time. Resource usage can be measured as the throughput handled by the network. However, due to the quality of service (QoS) that has to be offered, every user must be able to make use of the available resources regardless of the achievable throughput. This characteristic is referred to as fairness. From the network perspective, there is always a tradeoff between throughput and fairness.
Some of the scheduling proposals for CA systems available in the literature adapt algorithms used in non-CA systems, such as Proportional Fair and Processor Sharing. They provide CA capabilities by scheduling each available RB individually; when all the resources required by a user are scheduled, the user has enough resources (bandwidth) to be serviced. Proposals in [5–7] deal with CA scheduling using this strategy, which we refer to as Block by Block Scheduling. A different strategy is presented in [8], where a water-filling algorithm is used to provide CA capabilities. A novel approach for CA is presented in [9], where a technique called separated burst-level scheduling (SBLS) is implemented using a two-level scheduler structure.
Since the number of RBs required by next generation wireless cellular systems can be quite large, the time required by a scheduler to assign all the resources needed by a user can become considerable if RBs are handled individually. This can yield an excessive delay in the scheduling tasks. This operation time actually depends on the organization of the spectrum resources that the scheduler handles. Delay is an important aspect that has to be considered within IMT-Advanced system candidates. In order to fulfill the QoS goals of high data rates, low latency and high user capacity for demanding applications such as high-definition real-time video, an achievable user plane packet delay of 2 ms is defined [1]. This packet delay spans from the epoch at which a packet arrives at the scheduler queue until it is completely transmitted in an unloaded system. Achieving this goal involves improving several subsystems. One important component of the packet delay comes from the scheduling process. Efficient scheduling of available resources is typically characterized by higher delays, while simple scheduling algorithms usually waste system resources. Given the delay restrictions established for IMT-Advanced system candidates such as LTE-Advanced, it is important to consider delay when designing schedulers with CA capabilities.
In this article we present a scheduling strategy based on the assignment of pre-organized RB sets, which we refer to as Set Scheduling. The main idea behind Set Scheduling was presented in [10] together with preliminary results. Here, Set Scheduling is further analyzed and evaluated in a macrocellular environment in order to fully understand its potential for reducing resource assignment delay when compared to Block by Block Scheduling. For this evaluation, two different algorithms for organizing RBs into sets are considered. Results obtained in a macrocellular environment show that it is possible to reduce scheduling delay due to resource assignment by up to four times and to obtain a user capacity improvement of up to 5% when compared to a Block by Block Scheduling strategy. We also discuss how the strategy provides a more efficient use of available resources compared to [8]. Considering that packet delay involves the scheduling process, packet fragmentation and reconstruction (when using CA), signaling and physical layer processes, an improvement in the delay caused by any of these processes eventually impacts packet delay. Therefore, depending on the delay contribution of each process, the reduction in scheduling delay achieved by our proposal may have an important impact on packet delay.
The article is organized as follows: Section 2 presents the scenario and the parameters used for evaluation; Section 3 presents the proposed scheduling strategy, together with the RB organization algorithm; Section 4 shows the delay and throughput analysis for the evaluation scenario and the simulation results; Section 5 presents our conclusions.
2 Scenario and evaluation parameters
2.1 Scenario
At each scheduling slot, user requests are received following a Poisson distribution with mean λ_{ u }. Each request specifies a data rate ${R}_{{b}_{{u}_{i}}}$ and a file size ${S}_{{b}_{{u}_{i}}}$ for the ith user. Each request is also associated with a channel quality indicator (CQI) report. The CQI determines the achievable data rate and the amount of data that can be transported on an RB. Therefore, the requested data rate eventually defines the bandwidth required by the user (number of required RBs), while the file size defines the number of packets to be transmitted (time slots) for a given CQI value.
We consider that the available bandwidth is organized in component carriers (CC) [4]. For evaluation we considered the characteristics of the Long Term Evolution (LTE) system as presented in [12]. There are a total of L CCs. Each CC l, where l ∈ {1, ..., L}, is composed of an integer number of RBs, where one RB is the minimum assignable resource to a user. Each RB is itself a set of OFDM subcarriers. For our evaluation, RBs are represented as binary vectors for each of the frequency bands used. The position within the vector specifies the center frequency of the RB; a value of "1" indicates that the RB is available, while a value of "0" indicates that it is not.
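The binary-vector representation can be sketched as follows (an illustrative helper, not from the article). Finding maximal runs of available RBs is the basic operation behind contiguous aggregation:

```python
# Sketch: RBs of one frequency band as a binary availability vector;
# "1" = available, "0" = in use.

def contiguous_runs(avail):
    """Return (start, length) of each maximal run of available RBs."""
    runs, start = [], None
    for i, bit in enumerate(avail + [0]):      # sentinel closes a final run
        if bit and start is None:
            start = i
        elif not bit and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

band = [1, 1, 0, 1, 1, 1, 0, 0, 1]
print(contiguous_runs(band))   # [(0, 2), (3, 3), (8, 1)]
```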
where SNRdB is the user SNR in dB. As presented in [12], this mapping guarantees decodability of the transmitted information with a block error rate (BLER) of at most 10%. Our evaluation considers that requests are from slow moving or fixed users, therefore the CQI is maintained until the transmission of data is completed. For illustration purposes our simulations are based on a single cell multiuser scenario. Interference is not taken into account.
The parameters for Equation (2) and the values used are taken as the defaults in [13] and are as follows: PL is the path loss in dB, W is the street width (20 m), h is the average building height (20 m), hbs is the base station height (25 m), hut is the user terminal height (1.5 m), d is the distance between user terminal and base station (variable with user position), and fc is the operating frequency in GHz (2.3 and 3.4 GHz).
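Equation (2) itself did not survive extraction; the parameter list above matches the ITU-R M.2135 urban macro (UMa) NLOS path-loss model, so the following is a sketch under that assumption:

```python
import math

# Assumption: Equation (2) is the ITU-R M.2135 UMa NLOS path-loss model,
# inferred from the parameter list (street width, building height, etc.).

def pl_uma_nlos(d, fc, W=20.0, h=20.0, h_bs=25.0, h_ut=1.5):
    """Path loss in dB; d in m, fc in GHz, heights and width in m."""
    return (161.04
            - 7.1 * math.log10(W)
            + 7.5 * math.log10(h)
            - (24.37 - 3.7 * (h / h_bs) ** 2) * math.log10(h_bs)
            + (43.42 - 3.1 * math.log10(h_bs)) * (math.log10(d) - 3)
            + 20 * math.log10(fc)
            - (3.2 * math.log10(11.75 * h_ut) ** 2 - 4.97))

print(round(pl_uma_nlos(d=500, fc=2.3), 1))   # ≈ 126.3 dB at 500 m, 2.3 GHz
```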
A thermal noise power spectral density of −174 dBm/Hz was considered in order to calculate the SNR per user. The 2.3 and 3.4 GHz frequency bands were used with equal transmission power. The transmission power of the eNodeB was adjusted in order to have a minimum CQI of 5 at the cell edge in the 2.3 GHz frequency band. Since both bands transmit at the same power, a lower CQI is expected in the 3.4 GHz band for a given user.
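A rough link-budget sketch of the SNR computation follows. The 180 kHz RB bandwidth is an LTE assumption, and the per-RB transmit power is illustrative, not the article's calibrated eNodeB power:

```python
import math

# Sketch: SNR over one RB from per-RB transmit power, path loss, and
# thermal noise integrated over the RB bandwidth (180 kHz assumed).

def snr_db(ptx_dbm_per_rb, path_loss_db, rb_bw_hz=180e3, n0_dbm_hz=-174.0):
    noise_dbm = n0_dbm_hz + 10 * math.log10(rb_bw_hz)   # thermal noise in RB
    return ptx_dbm_per_rb - path_loss_db - noise_dbm

print(round(snr_db(ptx_dbm_per_rb=20.0, path_loss_db=126.3), 1))
```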
CQI to S(CQI) and R(CQI) mapping

CQI       Modulation   S(CQI) [bits]   R(CQI) [kbps]
CQI 5     QPSK         377             188.5
CQI 8     QPSK         792             396.0
CQI 15    QPSK         3319            1659.5
CQI 22    16QAM        7168            3584.0
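Using the table values, the number of RBs a user needs for a requested rate can be estimated as in the constraint discussed around Equation (3). This is a sketch; the function name is ours:

```python
import math

# Per-RB achievable rates R(CQI) in kbps, taken from the mapping table.
R_CQI_KBPS = {5: 188.5, 8: 396.0, 15: 1659.5, 22: 3584.0}

def required_rbs(requested_kbps, cqi):
    """Smallest RB count whose summed per-RB rate meets the request."""
    return math.ceil(requested_kbps / R_CQI_KBPS[cqi])

print(required_rbs(3000, 5))    # 16 RBs at the cell-edge CQI
print(required_rbs(3000, 22))   # 1 RB at the best CQI
```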
where the term k_{ i−1 } represents the index of the last RB assigned to the previous user.
where $N_{b_i}^{l}$ is the lower bound on the number of RBs required by user i when all assigned RBs have the same, highest achievable CQI, termed R(CQI)_{ h }. Any possible value of N_{ i } falls within these two limits and depends specifically on the user channel conditions on the available carriers.
2.2 Evaluation parameters
Simulation parameters and values

Parameter                                   Value
Site layout                                 Single cell, omnidirectional antenna
Path loss                                   ITU-R urban macrocellular NLOS [13]
User location                               Uniformly dropped within cell
Operating frequency                         2.3 and 3.4 GHz
Thermal noise PSD                           −174 dBm/Hz
Minimum CQI at cell edge                    5 @ 2.3 GHz
Available resource blocks                   20 @ 2.3 GHz, 160 @ 3.4 GHz
Requests per slot λ_{ u }                   25, Poisson distributed
Requested data rate ${R}_{{b}_{{u}_{i}}}$   Uniformly distributed (1 kbps to R_{ b }max)
Requested file size ${S}_{{b}_{{u}_{i}}}$   Uniformly distributed (100 bits to S_{ b }max)
Simulated slots                             500
3 Set Scheduling and resource block organization algorithm
3.1 Set Scheduling
In Block by Block Scheduling, each available RB is handled individually. Depending on the scheduler used (e.g., Proportional Fair, Processor Sharing), the scheduling metrics are evaluated for each RB. The user who maximizes the specific scheduling metric obtains the RB assignment. This process is repeated until all RBs are assigned, at which point some users will have completed all of their N_{ i } required RBs (with N_{ i } subject to the constraint in Equation (3)).
We consider that a block by block scheduling strategy has an important drawback. For a user i who requires N_{ i } RBs, the time required to assign all of them is at least N_{ i } times that required to assign a single RB. This time can grow even further if the assigned RBs are not contiguous. There is also the possibility that, after all the available RBs are assigned, some users will not have completed the total number of required RBs. This may result in inefficient use of the available resources.
The main drawback of the proposed Set Scheduling strategy is the additional complexity required at the scheduler by the RB organization algorithm. However, depending on the algorithm itself, this complexity can be low compared to the rest of the scheduler components.
3.2 Resource block organization algorithm
Note that the presented algorithms perform contiguous CA. This simplifies the operation of the organization algorithm, but lacks the capacity to form sets from non-contiguous RBs. It should be remarked that the organization operation is based solely on RB availability.
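Algorithms 1 and 2 themselves appear only as figures in the article; the following is a hedged sketch of one plausible contiguous set-formation pass consistent with the description above: scan the availability vector and cut each maximal run of available RBs into sets of at most Nmax RBs, using only RB availability.

```python
# Sketch (not the article's exact Algorithm 1 or 2): build a Set Matrix
# of contiguous RB sets from a binary availability vector.

def build_set_matrix(avail, n_max):
    """Return a list of sets, each as (start_index, size) with size <= n_max."""
    sets, start = [], None
    for i, bit in enumerate(avail + [0]):       # sentinel closes a final run
        if bit and start is None:
            start = i
        elif not bit and start is not None:
            run = i - start
            while run > 0:                       # split long runs at Nmax
                size = min(run, n_max)
                sets.append((start, size))
                start += size
                run -= size
            start = None
    return sets

avail = [1] * 7 + [0] + [1] * 3
print(build_set_matrix(avail, n_max=4))   # [(0, 4), (4, 3), (8, 3)]
```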
3.3 Operation of Set Scheduling
Algorithms 1 and 2 were used for set construction in the Set Scheduler structure presented in Figure 3. At each scheduling slot, sets are formed using all the available RBs. The scheduler then assigns each available set to users according to the scheduling rules. Due to the dynamic nature of the resource use, each set can have a different size with a minimum size of one, and a maximum size of Nmax.
Once the Set Matrix is ready, the scheduler proceeds to the resource assignment operation. An important restriction of our Set Scheduling evaluation is that a set is assigned to a user if and only if the user has the same CQI for all the RBs in the set. This is a disadvantage in terms of resource assignment, but it reduces the delay involved in evaluating the constraint of Equation (3). It also guarantees decodability, since the CQI is adequate for all the RBs in the set. To understand the impact of this restriction, consider a user that has different CQI levels in contiguous frequencies spanning an available set. Such a user would be denied service until a set becomes available within a range of frequencies that share the same CQI level. In the best case this restriction results in delayed service, and in the worst case in an unattended user request. Although this restriction seems to severely affect QoS, in our interference-free evaluations the probability that a user shows different CQI levels within a frequency band (2.3 or 3.4 GHz) is less than 3%. In real-world deployments, adequate interference control mechanisms reduce the probability that a user experiences different CQI levels within a frequency band. This situation does not occur with Block by Block Scheduling, since each RB is handled individually.
In our implementation, during the assignment process the scheduler first looks for a set that matches the number of RBs required by a given user. If such a set is not available, the scheduler can assign a set with a larger number of RBs; only the required RBs are assigned, and the unused RBs from that set are used to form a new set. The scheduling slot ends when all sets are assigned or all user requests are attended. Unattended users are queued for the next scheduling slot.
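The find-or-split assignment step described above can be sketched as follows. The data structure (each set as a (start, size) pair) is ours for illustration, not the article's implementation:

```python
# Sketch: prefer an exact-size set; otherwise take N_i RBs from a larger
# set and return the remainder as a new set, as described in the text.

def assign_from_sets(sets, n_i):
    """sets: list of (start, size). Returns (assigned, updated_sets) or None."""
    exact = next((s for s in sets if s[1] == n_i), None)
    if exact is not None:
        return (exact, [s for s in sets if s is not exact])
    bigger = next((s for s in sets if s[1] > n_i), None)
    if bigger is None:
        return None                          # request stays queued
    start, size = bigger
    rest = [s for s in sets if s is not bigger]
    rest.append((start + n_i, size - n_i))   # leftover RBs form a new set
    return ((start, n_i), rest)

assigned, remaining = assign_from_sets([(0, 4), (8, 3)], n_i=2)
print(assigned, remaining)   # (0, 2) [(8, 3), (2, 2)]
```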
Compared to the scheduling presented in [8], Set Scheduling has an important advantage. In [8] resources are also assigned as sets, but each set corresponds to a complete CC. This means that if a CC is composed of 100 RBs and a user requires 101 RBs, a total of 200 RBs will be assigned, corresponding to two CCs. Set Scheduling assigns only as many RBs as required, allowing unassigned RBs to be used in a subsequent scheduling slot. Still, it is not possible to directly compare our proposal with that in [8].
4 Evaluation results and analysis
Using the parameters from Section 2, numerical evaluation was performed to assess the performance of Set Scheduling in comparison to Block by Block Scheduling. The scheduling strategy in Figure 2 was implemented considering the possibility of non-contiguous inter-band CA subject to the restriction of Equation (3). The scheduling strategy in Figure 3 was implemented as described in Section 3. A Round Robin scheduler was used for evaluation. Although the evaluation scenario is simple, it allows us to focus on assessing the capabilities of Set Scheduling.
The value of Nmax was evaluated at 15, 18, 20 and 22 RBs. The value of R_{ b }max was evaluated between 2,200 and 7,000 kbps and for a given CQI value it determines the average number of RBs that will be required per user. The value of S_{ b }max was evaluated at 2000, 3000, 4000 and 5000 bits, and for a given CQI value it determines the average number of time slots required to complete a user request transmission.
4.1 Delay analysis
E[Delay] = E[N] ⋅ τ_{ s },     (6)

where E[Delay] is the expected delay to assign all the required RBs to a given user, E[N] is the expected number of RBs per user, and τ_{ s } is the time required to assign a single RB.
which given the discretization of R(CQI) results in a value of $\frac{1}{188.5\mathsf{\text{kbps}}}$.
Given the uniform distribution of the data rate requested described in Section 2, we can assume that $E\left[{R}_{{b}_{u}}\right]={R}_{b}\text{max}/2$. Once the value of E[N] is obtained using (10), the expected delay for the assignment of resources in Block by Block Scheduling in terms of τ_{ s }can be calculated using Equation (6).
E[Delay_{set}] = E[τ_{ o }] / λ_{ u } + τ_{ s },     (14)

where E[Delay_{set}] represents the expected delay per user due to resource assignment using Set Scheduling; E[τ_{ o }] represents the time required by the RB organization algorithm to obtain the Set Matrix; λ_{ u } is the average number of user requests per scheduling slot.
For Algorithm 1 the maximum value of E[τ_{ o }] is obtained when all RBs are available; in the worst case, for a value of Nmax = 22, it corresponds to 49 ⋅ τ_{ s }. Using Equation (14), for this worst-case scenario the expected delay due to resource assignment using Set Scheduling corresponds to E[Delay_{set}] = 49 ⋅ τ_{ s }/ 25 + τ_{ s } = 2.96 ⋅ τ_{ s }. This delay calculation involves only the availability of RBs and the value of Nmax. For this calculation, it is considered that all user requests are attended.
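The worst-case comparison can be reproduced numerically in units of τ_s (a sketch of the delay expressions as stated in the text: Set Scheduling amortizes the organization cost over the λ_u requests of a slot):

```python
# Delays expressed in units of tau_s (single-RB assignment time).

def delay_block_by_block(e_n):
    """E[Delay] = E[N] * tau_s: one assignment per required RB."""
    return e_n * 1.0

def delay_set(e_tau_o, lambda_u):
    """E[Delay_set] = E[tau_o]/lambda_u + tau_s: amortized organization."""
    return e_tau_o / lambda_u + 1.0

print(delay_set(e_tau_o=49, lambda_u=25))   # 2.96, matching the text
print(delay_block_by_block(e_n=12))         # 12.0: one RB at a time
```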
As can be observed in Figure 6, there is an important difference in the behavior of the proposed algorithms. Algorithm 1 has a monotonically increasing response with respect to available RBs. Algorithm 2 has a parabolic behavior with its maximum at the point where 50% of RBs are available. When the percentage of available RBs is below 70%, Algorithm 1 outperforms Algorithm 2 in terms of E[τ_{ o }]. However, when a higher percentage of RBs is available for scheduling, Algorithm 2 shows a much lower delay. In other words, when resources are more fragmented, Algorithm 1 shows a lower delay than Algorithm 2. This information is valuable since it makes it possible to select an algorithm based on the expected availability of RBs. It is possible to have both algorithms in a system and switch between them depending on resource availability in order to reduce resource assignment delay.
It can also be observed in Figure 6 that the expected delay E[τ_{ o }] is dependent on the value of Nmax. For a larger value of Nmax, a higher E[τ_{ o }] can be expected. In Algorithm 1, the worst case of delay shows that for Nmax = 22, E[τ_{ o }] = 49, while for Nmax = 15, E[τ_{ o }] = 38. This is a significant difference that can also be observed for Algorithm 2. Given this behavior, in order to reduce delay as much as possible, the lowest possible value of Nmax has to be selected.
4.2 Complexity description
In order to compare the complexity of Block by Block Scheduling and Set Scheduling we present the general operation of both strategies. Only the general case for each process is described for comparison. The operations not included in each process are the same for each strategy, and involve frequency band distinction and restrictions such as the maximum value of RBs per user (Nmax).
Procedure 1 shows the general operation of Block by Block Scheduling. For each user request, this strategy finds and assigns as many RBs as required in order to meet the restriction in Equation (3). Therefore, for a total of N_{ i } RBs, each user needs a total of N_{ i } find operations, as well as N_{ i } assign and N_{ i } update operations. As previously discussed, the delay due to resource assignment using this strategy depends on the value of N_{ i }. Since LTE-Advanced systems allow up to 500 RBs to be assigned to a single user in order to exceed the 1 Gbps requirement for IMT-Advanced systems, the delay of Block by Block Scheduling can become considerable. However, it has the advantage that each available RB can be optimally used for a given CQI value: the achievable data rate is considered independently for each assigned RB.
Procedure 2 shows the general operation of Set Scheduling. In this procedure, each user request within the assignment process requires only one calculate, one find, and one assign operation. Since the number of required RBs is known, due to the restriction of equal CQI for the RBs in a set, no update operation is required. In Set Scheduling, the main cause of delay is the execution of the resource block organization algorithm at each scheduling slot. However, as presented in Section 4.1, the organization of available RBs into sets depends mainly on the availability of RBs and the implementation of the organization algorithm. Since the RB organization algorithm is executed once per scheduling slot, the delay due to its execution can be considered "distributed" among the attended users.
Comparison of the number of operations required per attended user considering Block by Block Scheduling and Set Scheduling

Operation    B by B Scheduling   Set Scheduling   RB org. Algorithm 1 (Algorithm 2)
Find         N_{ i }             1                1 (2)
Assign       N_{ i }             1                0 (0)
Update       N_{ i }             0                0 (0)
Calculate    0                   1                0 (0)
Check        0                   0                N_{ i } (0)
Total        3N_{ i }            N_{ i } + 4, (5) including the organization algorithm
4.3 User capacity analysis
PQ = 100 ⋅ (U_{rec} − U_{att}) / U_{rec},

where PQ is the percentage of user requests in queue; U_{att} represents the number of attended requests; U_{rec} corresponds to the number of received requests.
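The PQ metric defined above, expressed as code (the request counts are illustrative):

```python
# PQ: percentage of received user requests left in queue after a slot.

def pq(u_att, u_rec):
    """u_att: attended requests; u_rec: received requests."""
    return 100.0 * (u_rec - u_att) / u_rec

print(pq(u_att=23, u_rec=25))   # 8.0: 8% of requests remain queued
```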
For Figures 8 and 9 it is also possible to draw on the information provided by the Set Scheduling delay analysis. Using either of the proposed algorithms for set construction, the lowest possible delay is achieved with the lowest value of Nmax. Therefore, from the PQ metric analysis, when two or more performance curves overlap, the best selection of Nmax corresponds to the lowest value. As such, in Figure 8 a value of Nmax = 15 is preferred for values of R_{ b }max lower than 3,000 kbps, while for values of R_{ b }max between 3,000 and 3,500 kbps a value of Nmax = 18 is preferred. For R_{ b }max greater than 4,000 kbps, a value of Nmax = 20 performs best.
The strong fluctuations observed in Figures 8, 9 and 10 are due to the reduction of available resources and the randomness of the simulations performed. Once the average requested data rate increases to a point where it is not possible to attend all requests at every scheduling slot, the PQ metric starts to increase, indicating a reduction in system capacity. It can also be observed that the average data rate at which the system can no longer attend all requests decreases as the average user file size increases. This is particularly visible in Figure 10, where all curves show a very similar slope that starts to increase at a lower value of R_{ b }max as S_{ b }max increases. This behavior reflects the fact that as S_{ b }max increases, fewer RBs are available at each scheduling slot, reaching the point of system resource depletion at a lower average user requested data rate.
A lower PQ metric means that the cell capacity is increased. Statistically, overall attended throughput can be estimated from the PQ metric, the mean requested data rate, and the average number of users. Once the PQ metric is greater than zero, all of the available RBs are used at each scheduling slot, indicating that the throughput is at the maximum possible for the scheduling and traffic conditions.
4.4 Throughput evaluation
Since the PQ metric used to evaluate user capacity is not commonly used in literature, in this section we provide an evaluation of the throughput behavior of the proposed Set Scheduling strategy. The simulation parameters used to evaluate throughput are shown in Table 2. The maximum requested bit rate R_{ b }max was evaluated from 2,000 to 10,000 kbps, and S_{ b }max was evaluated for 2,000 and 6,000 bits. For Set Scheduling, a value of Nmax = 20 was used.
Equation (16) allows a fair comparison of Block by Block Scheduling and Set Scheduling. From Figure 11 it is possible to observe that, for both scheduling strategies, the Throughput Percentage decays for a larger value of S_{ b }max. This is because, as the file size increases, the number of time slots required by the user to complete a transfer also increases, thus reducing the number of available RBs at each scheduling slot. It is also possible to observe that for each value of S_{ b }max, Set Scheduling outperforms Block by Block Scheduling by up to 8%, observed at R_{ b }max = 6,000 kbps. However, this advantage is reduced as R_{ b }max increases, because at some point the maximum throughput that can be handled by the system is reached by both scheduling strategies. This point is reached at R_{ b }max = 10,000 kbps for S_{ b }max = 2,000 bits.
5 Conclusions
A scheduling strategy for CA using preorganized RB sets was presented and evaluated. We presented an analytical evaluation framework to determine the expected number of RBs required by users, based on a mapping of CQI values to data rates per RB and the statistical behavior of the CQI. This framework allowed us to evaluate a macrocellular environment in order to determine the potential delay advantage of using Set Scheduling.
Two different RB Organization Algorithms were implemented. A marked difference in the delay behavior of the evaluated algorithms was observed in terms of the percentage of available RBs. A dependence on the percentage of available RBs, as well as on the value of Nmax, was observed. This opens the possibility of designing a different RB organization algorithm with improved behavior and lower delay compared to the algorithms presented. The capacity to reduce resource assignment delay using Set Scheduling depends directly on the performance of the RB Organization Algorithm.
Although the RB organization algorithms used provide only contiguous CA functionality, Set Scheduling still outperformed a block by block scheduler that used non-contiguous inter-band CA. Improvements that can be made to the scheduling strategy presented in this article include the possibility of aggregating sets, which can improve throughput. It is also possible to design a different type of scheduler whose metrics are calculated for a whole set. Another improvement is an adaptive RB organization algorithm that takes into account the statistical behavior of user requests.
In general, we were able to show that Set Scheduling has the capacity of reducing the delay due to resource assignment when compared to Block by Block Scheduling without affecting user capacity measured with the PQ metric, throughput percentage and average user throughput.
Procedure 1 General Block by Block Scheduling process
i is the index for the user requests
j is the index for the RB vector
R(CQI)_{ j } is the achievable data rate for RB_{ j }

for Each scheduling slot do
  while RBs Available do
    if User requests in queue then
      i ← User Index
      Updated Sum Rate ← User i Requested Data Rate
      while Updated Sum Rate > 0 do
        find: Available RB_{ j }
        assign: RB_{ j } to User i
        update: Updated Sum Rate ← Updated Sum Rate − R(CQI)_{ j }
      end while
      increment: User Index
    else
      break: No more user requests, process completed
    end if
  end while
end for
Procedure 2 General Set Scheduling process
for Each scheduling slot do
  execute: Resource Block Organization Algorithm
  while RB Set Available do
    if User requests in queue then
      i ← User Index
      calculate: N_{ i } ← Number of required RBs for user i for the different CQI values
      find: Set with size ≥ N_{ i }
      assign: N_{ i } RBs from set to User i
      increment: User Index
    else
      break: No more user requests, process completed
    end if
  end while
end for
Declarations
Acknowledgements
This research work was possible thanks to scholarship number 92845 granted by the National Council of Science and Technology (CONACYT, Mexico) and to the support provided by the Wireless Communications Group at the CICESE Research Center and the Autonomous University of Baja California (UABC), Mexico.
References
Dötling M, Mohr W, Osseiran A: Radio Technologies and Concepts for IMT-Advanced. Wiley, NJ, USA; 2009.
Dahlman E, Parkvall S, Sköld J: 4G: LTE/LTE-Advanced for Mobile Broadband. Academic Press, Oxford, UK; 2011.
Lazarus M: The great spectrum famine. IEEE Spectr 2010, 47(10):26–31.
Yuan G, Zhang X, Yang Y: Carrier aggregation for LTE-Advanced mobile communication systems. IEEE Commun Mag 2010, 48(2):88–93.
Lei L, Zheng K: Performance evaluation of carrier aggregation for elastic traffic in LTE-Advanced systems. IEICE Trans Commun 2009, E92-B(11):3516–3519. doi:10.1587/transcom.E92.B.3516
Songsong S, Chunyan F, Caili G: A resource scheduling algorithm based on user grouping for LTE-Advanced system with carrier aggregation. In International Symposium on Computer Network and Multimedia Technology (CNMT 2009). Wuhan, China; 2009.
Chen L, Chen W, Zhang X, Yang D: Analysis and simulation for spectrum aggregation in LTE-Advanced system. In IEEE 70th Vehicular Technology Conference Fall (VTC 2009-Fall). Anchorage, Alaska, USA; 2009.
Chung Y, Tsai Z: A quantized water-filling packet scheduling scheme for downlink transmissions in LTE-Advanced systems with carrier aggregation. In International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2010). Split-Adriatic Islands, Croatia; 2010:275–279.
Zhang L, Zheng K, Wang W, Huang L: Performance analysis on carrier scheduling schemes in the long-term evolution-advanced system with carrier aggregation. IET Commun 2011, 5(5):612–619.
Galaviz G, Covarrubias DH, Andrade A: On a spectrum resource organization strategy for scheduling time reduction in carrier aggregated systems. IEEE Commun Lett 2011, 15(11):1202–1204.
Masashige S, Akihito M, Nobuhiko M: Performance evaluation of intercell interference coordination and cell range expansion in heterogeneous networks for LTE-Advanced downlink. In 8th International Symposium on Wireless Communication Systems (ISWCS 2011). Aachen, Germany; 2011:844–848.
Mehlführer C, Wrulich M, Ikuno JC, Bosanska D, Rupp M: Simulating the long term evolution physical layer. In 17th European Signal Processing Conference (EUSIPCO 2009). Glasgow, Scotland; 2009.
International Telecommunication Union: Guidelines for evaluation of radio interface technologies for IMT-Advanced. Report ITU-R M.2135-1; 2009.
Meucci F, Cabral O, Velez FJ, Mihovska A, Prasad NR: Spectrum aggregation with multi-band user allocation over two frequency bands. In IEEE Mobile WiMAX Symposium (MWS '09). Napa Valley, California, USA; 2009:81–86.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.