Individual packet deadline delay constrained opportunistic scheduling for large multiuser systems
- Muhammad Majid Butt^{1, 3},
- Ralf R Müller^{2, 3} and
- Kimmo Kansanen^{3}
https://doi.org/10.1186/1687-1499-2014-65
© Butt et al.; licensee Springer. 2014
- Received: 27 October 2013
- Accepted: 6 April 2014
- Published: 23 April 2014
Abstract
This work addresses opportunistic distributed multiuser scheduling in the presence of a fixed packet deadline delay constraint. A threshold-based scheduling scheme is proposed which uses the instantaneous channel gain and buffering time of the individual packets to schedule a group of users simultaneously in order to minimize the average system energy consumption while fulfilling the deadline delay constraint for every packet. The multiuser environment is modeled as a continuum of interference such that the optimization can be performed for each buffered packet separately by using a Markov chain where the states represent the waiting time of each buffered packet. We analyze the proposed scheme in the large user limit and demonstrate the delay-energy trade-off exhibited by the scheme. We show that the multiuser scheduling can be broken into a packet-based scheduling problem in the large user limit and the packet scheduling decisions are independent of the deadline delay distribution of the packets.
Keywords
- Opportunistic scheduling
- Large system analysis
- Delay constrained data
- Energy efficient communications
1 Introduction
We consider a wireless communication system with K users and a single central base station. Each user is subject to both time-varying frequency-selective fading and position-dependent path loss. This setting was addressed before in, e.g., [1], where proportional fair scheduling was compared to hard fair scheduling. While the proportional fair scheduler [2] does not guarantee any upper bound on the delay of a data packet, hard fair scheduling enforces that each data packet is scheduled instantaneously. Packet delay can further be classified into average tolerable delay and maximum tolerable delay. This work focuses on the latter, which is also called the packet deadline.
In a practical system, information becomes outdated after a certain delay has passed, and scheduling an outdated packet is pointless. There are two proper approaches to deal with packets becoming outdated: either drop them if they have not been scheduled in time, or force their transmission as they approach their deadline. Which approach is more appropriate depends on the particular application, i.e. on the potential damage caused by a lost packet. In both cases, there is a trade-off between delay, throughput, and power consumption.
Reference [3] deals with the trade-off between average delay and average power. Reference [4] uses multiuser diversity to provide statistical quality of service (QoS) in terms of data rate, delay bound, and delay bound violation probability. In [5], an exact solution for the average packet delay under the optimal offline scheduler is presented when an asymmetry property of packet inter-arrival times and packet inter-transmission times holds. Online scheduling algorithms that assume no future packet arrival information are discussed as well; their performance is comparable to that of the offline schedulers, which assume independently and identically distributed inter-arrival times. The results of [3] have been extended to the multiuser context in [6]. It is found that to achieve an average power within $\mathcal{O}(1/V)$ of the minimum power required for network stability, the average queuing delay must be at least $\Omega \left(\sqrt{V}\right)$, where V>0 is a control parameter.
In [7], the authors consider the energy minimization problem for packet deadline-constrained applications. The channel of each user is discretized to one of a finite number of states, and two cases of rate-power curves are considered. For both cases, dynamic programming-based optimal solutions are obtained. When the rate-power relation is linear, the result is a threshold-based scheduler which follows the optimal stopping theory formulation in [8]. For the case of a convex rate-power curve, a heuristic algorithm is proposed which gives a solution quite close to the optimum. A similar approach is applied in [9], where the authors consider the same problem for a point-to-point network. They consider a packet of B bits which has to be transmitted within a hard deadline of N time slots; during the transmission of this packet, no other packets are scheduled. The authors obtain closed-form expressions for the optimal policy only for the case N=2 using dynamic programming. For N>2, the optimal policy is determined numerically. It should be noted that the optimal solution is obtained only when either the rate-power curve is linear [7] or the scheduling of a single packet is considered [9], following the framework of optimal stopping theory.
The difficulty of finding optimum solutions and the need for dynamic programming both result from the interdependence of the users’ scheduling decisions. However, as the number of users becomes large, the instantaneous effect of the other users converges to its statistical average, and optimum scheduling decisions can be made by each user individually without considering the fading states and queue lengths of the other users. This principle was first reported in this context in [10]. It runs under various names in the literature, e.g., large-system limit, mean-field approach, self-averaging, etc. For a more general discussion on the range of its applicability, see, e.g., [11, 12]. The many-user limit was applied in [13], where an algorithm called opportunistic superpositioning (OSP) was proposed to provide all users their desired average data rates while guaranteeing a certain average delay. The average delay of the users is inversely proportional to the scheduling probability, and the scheduling threshold is used to control the delay. In the many-user limit, it is shown analytically that the required power can be made arbitrarily small at the expense of increased average delay.
In contrast to [13] and most other works discussed above, this paper addresses a system with a strict packet deadline (and not average) delay constraint. The packet deadline delay varies from packet to packet. The aim is to minimize the system energy while obeying the packet deadline delay constraint for each arriving packet. We first address the many-user limit, where scheduling decisions can be taken based on each user’s own queue without loss of optimality. In this context, scheduling is not restricted to scheduling one user at a time, but a finite fraction of the users, which experience favorable channel conditions and/or whose packets are close to their deadline, can be scheduled simultaneously. Though these users interfere with each other, they can be separated by means of superposition coding. Their effects on each other decouple in the many-user limit, and we can reformulate the multiuser scheduling problem as an equivalent single-user scheduling problem following the lines of thought in [14]. To the best of our knowledge, packet deadline-based scheduling has not been addressed in multiuser settings before. We apply the scheduling strategy which we find optimum in the many-user limit to the finite-user case and show that, though suboptimum there, it performs very well. We generalize the approach in [15], where an identical deadline is assumed for all arriving packets and the simplified multiuser scheduler is limited to the policy of either scheduling all buffered packets (simultaneously) or waiting for the next time slot. In this work, we provide a complete mathematical framework for energy-optimal packet-based scheduling and analyze the proposed scheme using a Markov chain in the many-user limit. We show analytically that the scheduling decisions are independent of the deadline distribution, but the system energy does depend on it.
We discuss stochastic optimization techniques for computing the thresholds and show that the computational complexity remains acceptable.
The remainder of this paper is organized as follows: Section 2 describes the system model and Section 3 addresses the many-user considerations used in this work. The proposed multiuser scheduling scheme is presented in Section 4. The steady-state analysis of the queue is discussed in Section 5. We discuss the optimization procedure for the proposed scheme in Section 6. In Section 7, implementation issues of the proposed scheme are considered while numerical results are presented in Section 8. Section 9 concludes with the main results and contributions of this paper.
2 System model
We consider a multiple-access system with K users randomly placed within a certain geographical area. Each user is provided a certain fraction of the resources available to the system. We consider a time-slotted system. Arrivals occur at the start of a time slot and are queued in a finite buffer before transmission. Scheduling is performed at the end of a time slot taking into account the new arrivals within the current time slot. We consider an uplink (reverse link) scenario but the results can be generalized to a downlink (forward link) scenario in a straightforward manner using the multiple-access broadcast duality of the Gaussian channel [16] and the fact that scheduling decisions decouple in the many-user limit.
2.1 Channel model
Note that the distribution of g_{ k }(t) differs from user to user. Let N_{0} denote the noise power spectral density. The channel state information is assumed to be known at both the transmitter and the receiver side. This can be accomplished by channel estimation on the opposite link (downlink) in time-division duplex systems or by communication of explicit side information within the coherence time of the channel.
2.2 Physical layer communication
It is mandatory to allow multiple users to be scheduled simultaneously in the same time slot and in the same frequency band. Otherwise, a packet deadline of a finite number of time slots could not be met without allowing a non-zero dropping probability^{a}, as the number of packets that have reached the deadline could exceed the number of available frequency bands.
In our settings, there is no limit on the number of users scheduled simultaneously thanks to many-user considerations (discussed in the following); and a theoretical framework with zero outage probability is considered without loss of generality.
This energy assignment results in the minimum total transmit energy per symbol for the scheduled users. On the receiver side, the data from the user with the worst channel is decoded first, treating the signals from all other users as noise. The data from the current user is decoded after decoding the data from the previous users, whose signals have been subtracted from the received signal. All users are decoded by repeating this step successively. This is the well-known successive interference cancellation (SIC). Collisions between simultaneous transmissions are avoided because, in a multiuser environment, superposition coding and successive decoding ensure that the data from multiple users are decoded without errors at the receiver^{b}.
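The decoding order above fixes the minimum transmit powers of the scheduled users. The following sketch computes them for a Gaussian multiple-access channel under SIC; the gains, rates (in bits per symbol), and noise level are hypothetical placeholders, not values from the paper.

```python
def sic_powers(gains, rates, n0=1.0):
    """Minimum transmit powers on a Gaussian MAC with successive
    interference cancellation (SIC).

    Users are decoded in order of increasing channel gain, and each
    decoded signal is subtracted, so later-decoded users see less
    interference; the last-decoded user sees only noise.
    """
    order = sorted(range(len(gains)), key=lambda k: gains[k])
    powers = [0.0] * len(gains)
    interference = 0.0
    # Walk the decode order backwards: start from the interference-free
    # last-decoded user and accumulate the interference seen earlier.
    for k in reversed(order):
        # Rate R is achievable iff gain * P / (n0 + interference) >= 2^R - 1.
        powers[k] = (2 ** rates[k] - 1) * (n0 + interference) / gains[k]
        interference += gains[k] * powers[k]
    return powers

# Two users at 1 bit/symbol each: the weaker channel is decoded first,
# treating the stronger user's signal as noise, and needs more power.
sic_powers([1.0, 4.0], [1, 1])
```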
2.3 Queuing model
At each time slot, none, one, or several packets arrive in the queue of each user. In general, an arriving packet is characterized by two parameters: its size and its deadline. Formally, the deadline is defined as the number of time slots available at the arrival time of the packet in the buffer before it has to be scheduled irrespective of channel conditions.
Without loss of generality, all packets are assumed to have a unit size. Note that larger packets can be modeled as being composed of multiple virtual packets of unit size. The deadlines of the packets are assumed to be finite and positive but otherwise arbitrary. We model the arrival process by the probabilities p_{ i } that an arriving virtual packet has deadline τ_{ i } with i∈{1…N}. The maximum size of the user’s buffer N is a system parameter and is given by the maximum of the deadlines τ_{ i }∀i of all the packets in the system.
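As a toy illustration of this arrival model, the sketch below draws deadlines from a hypothetical distribution with N=4; the probability values are assumptions for illustration only.

```python
import random

# Hypothetical deadline distribution: an arriving unit packet has
# deadline tau_i = i slots with probability p[i - 1]; here N = 4.
p = [0.4, 0.3, 0.2, 0.1]

def sample_deadline(rng=random):
    """Draw a packet deadline i in {1, ..., N} from p by inversion."""
    u, acc = rng.random(), 0.0
    for i, pi in enumerate(p, start=1):
        acc += pi
        if u < acc:
            return i
    return len(p)  # guard against floating-point rounding
```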
For each packet in a queue, a decision is made whether it is scheduled at the present time slot or not. There is no limit on the maximum number of (virtual) packets in the queue. The system considered in our settings is entirely driven by the demands of the users. Each user’s demand on rate and delay has to be met by the system. Packet drops or outage are strictly prohibited. Since data rate and energy can be freely exchanged against each other, the users’ demands can always be met with sufficient use of energy. However, the higher the demands of the users, the more energy the system will consume. The system has certain degrees of freedom to reduce the energy consumption: It can decide when a certain packet is transmitted within the time left to its delay deadline. The system can decide whether to split packets into sub-packets. These sub-packets can then be transmitted simultaneously, transmitted at different times, or combinations of these two options can be used. Furthermore, the system can decide which frequency bands to use for which user’s packets at which time. It may seem infeasible to build a system that can find the optimum strategy to schedule each packet at the right time. However, we will make two idealized assumptions that allow us to characterize the structure of the optimum scheduling policy up to a few parameters that can be optimized numerically. First, we assume that there exists a coding strategy that achieves the capacity region of the Gaussian multiple-access channel. State-of-the-art coding strategies for the Gaussian multiple-access channel are indeed very close to the capacity region [18, 19]. Second, we assume that the number of users and the available radio spectrum grow asymptotically large, with the ratio of the number of users to radio spectrum being constant. This assumption is a good approximation for a system where the individual user’s data rate is much lower than the total data rate of the system [20].
3 Large-system considerations
for all users. Note that due to (6), E_{ k }, the energy per symbol for user k in (2), is a linear function of R_{ k }, the rate of user k, in the many-user limit. Remarkably, this simplicity is inherent to the system (similar to multiuser diversity) due to the presence of a large number of users, and we quantify in Section 8 that a few hundred users are enough to achieve the asymptotic results. The linearity of the energy per symbol greatly simplifies the scheduling decisions. Based on this, we have
Lemma 1.
In the many-user limit, scheduling decisions in the queue of a user k can be made on a packet-by-packet basis without loss of optimality. Furthermore, the optimal scheduling decision does not depend on the properties of the other packets in the same queue.
The lemma implies that we cannot save energy by scheduling only some of several packets of a user that have the same number of remaining time slots before their deadline, as the energy costs of the packets are additive due to (6) (and not exponential as in (2)). Thus, making scheduling decisions independently for every packet remains optimal.
Additionally, we can decouple scheduling decisions among different users based on many-user assumptions and our discussion in Section 1 [10, 11, 15].
Lemma 2.
In the many-user limit, scheduling decisions can be made on a user-by-user basis without loss of optimality. Furthermore, the optimal scheduling decisions for a queue of a user do not depend on the properties of the queues of the other users.
By applying many-user assumptions, Lemma 2 breaks the joint multiuser scheduling problem into an equivalent single-user scheduling problem [15], while Lemma 1 decomposes the problem further into individual packet deadline-dependent scheduling decisions.
3.1 State space model
In the following, we develop a Markov decision process (MDP)-based model for the scheduling of deadline-dependent packets. We define the state of the MDP as the number of time slots remaining before a packet (virtual user) has to be scheduled irrespective of the fading conditions. The definition of the state appears very similar to the definition of the deadline in Section 2.3. However, the deadline is a system parameter associated with a packet at the time of arrival and is fixed, whereas the state of a packet varies over the time it spends in the buffer. At the start of the MDP, the state equals the deadline. In each subsequent time slot in which the packet is not scheduled, the state decreases by one until it reaches one. The system parameter N defined in Section 2.3 determines the size of the Markov chain.
where P_{g|d=1}(x) denotes the distribution of the fading of the scheduled virtual users. Remarkably, the rates of the users affect (7) only via their sum R.
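The per-packet state evolution described above can be sketched as follows; the extra state N+1, introduced in Section 4 for a packet that is not in the queue, is included here as a modeling convenience.

```python
def step_state(state, scheduled, N):
    """One slot of the per-packet Markov chain.

    States 1..N count the time slots remaining before the deadline;
    N + 1 models a packet outside the queue (not yet arrived or
    already transmitted).  An unscheduled packet moves i -> i - 1,
    and a packet in state 1 must be transmitted.
    """
    if state == N + 1:
        return state                 # nothing buffered
    if scheduled or state == 1:      # state 1 forces transmission
        return N + 1
    return state - 1
```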
4 Threshold based scheduling scheme
Scheduling is a decision process. We adopt a fading threshold-based policy which quantizes the fading into a finite number of intervals. These intervals depend on the state of the packet and the fading distribution. We introduce thresholds on the quantized fading states to determine whether a packet (virtual buffer) in state i is scheduled or not. In general, these thresholds may depend on all system variables. In the many-user limit, however, they depend only on each user’s own parameters, i.e. fading and state.
Definition 1(Transmission threshold)
A transmission threshold κ_{ i } is defined as the minimum short-term fading value allowing for scheduling a packet (virtual user) with state i.
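A minimal sketch of this rule, using the SA-computed thresholds for N=4 reported in Section 8 as example values:

```python
# Transmission thresholds kappa_i for N = 4 (SA values from Section 8);
# kappa_1 = 0, so a packet at its deadline is always scheduled.
KAPPA = {4: 0.76, 3: 0.59, 2: 0.22, 1: 0.0}

def schedule_packet(state, fading, thresholds=KAPPA):
    """A packet in state i is scheduled iff its short-term fading
    reaches the transmission threshold kappa_i."""
    return fading >= thresholds[state]
```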
Note that scheduling decisions only depend on the short-term fading. This is easily shown by contradiction. Suppose the scheduling decisions depended on the path loss. Due to the hard deadline constraint, this would not lead to unstable queues. However, it would cause a greater average delay for users with worse path loss compared to users with better path loss. In fact, the path loss would appear as a bias in the average queuing time of packets, and such a bias reduces the dynamics of the scheduling process. This is clearly an adverse effect.
Next, we state a few fundamental properties of these transmission thresholds.
Property 1.
This ensures that the hard deadline is met regardless of the channel quality^{c}.
Property 2.
It is evident from the construction of the problem that the probability of scheduling a packet must increase as the packet approaches its deadline; this is achieved by reducing the channel-dependent threshold with decreasing state i.
In order to ease notation, we introduce an additional state N+1. We model the packet being in that state when it is not in the queue, i.e. before it has arrived and after it has been scheduled.
where p_{ i } denotes the probability that an arriving packet has deadline τ_{ i }, cf. Section 2.3. A packet with deadline τ_{ i }<τ_{ N } is inserted directly into state i and treated as a packet that arrived in the buffer with deadline τ_{ N } but has not been scheduled for N−i time slots. This reduces the degrees of freedom available to the packet and results in a higher energy cost.
where f denotes the short-term fading as explained in Section 2.1. We drop the subscript k as all users have an identical short-term fading distribution.
5 Distribution of packet deadlines
Our modeling of the problem ensures that the scheduling decisions and the thresholds are independent of the deadline distribution of the packets. However, the average system energy expenditure does depend on the deadline distribution. In the limiting case K→∞, the empirical average of the arrival rate converges uniformly to its expectation λ=R/K. However, the buffer occupancy of the scheduled states is not uniform and depends on the deadline and fading distributions. A variable buffer occupancy model helps us understand the energy behavior of the system as a function of the deadline distribution of the arriving packets and the fading distribution. For example, a large value of p_{1} means more degrees of freedom in scheduling and a small energy expenditure, while a large value of p_{ N } implies strict latency requirements and a large energy expenditure.
is the ratio of packets in the queue to the number of packets arriving. This is the average delay of the system.
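Under the decoupled per-packet model, the buffer occupancy and this delay ratio can be sketched as follows. The scheduling probabilities alpha_i (the probability that a packet in state i is scheduled) and the deadline distribution are hypothetical inputs standing in for the quantities derived in the text.

```python
def occupancy_and_delay(p, alpha, lam=1.0):
    """Mean buffer occupancy per state and average delay in slots.

    p[i - 1]     : probability an arrival has deadline i (i = 1..N)
    alpha[i - 1] : probability a packet in state i is scheduled;
                   alpha[0] must be 1 (state 1 forces transmission)
    lam          : packet arrival rate per slot

    A packet entering at state i survives to state j < i with
    probability prod_{k = j+1..i} (1 - alpha_k); the delay is the
    ratio of buffered packets to arrivals, as in the text.
    """
    N = len(p)
    occ = [0.0] * N                    # occ[j - 1]: mean packets in state j
    for i in range(1, N + 1):
        survive = 1.0
        for j in range(i, 0, -1):      # states visited: i, i - 1, ..., 1
            occ[j - 1] += lam * p[i - 1] * survive
            survive *= 1.0 - alpha[j - 1]
    return occ, sum(occ) / lam
```

For example, with all arrivals having deadline 2 and a packet in state 2 scheduled half the time, every packet spends either one or two slots in the buffer.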
6 Threshold optimization
Next, we optimize the transmission thresholds. Our objective is to minimize the average transmitted energy per symbol given in (7) subject to the constraint that every packet is scheduled before reaching its deadline. The energy per symbol depends solely on the channel distribution P_{g|d=1}(.) of the scheduled virtual users (SVUs). The channel distribution of the SVUs is a function of the transmission thresholds or, interchangeably, the transition probabilities, and is computed in the following based on the MDP model developed in the previous section.
where $\overrightarrow{\alpha}=\phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}{\alpha}_{N\to N+1}\cdots {\alpha}_{1\to N+1}]$ and Ω defines the possible vector space for $\overrightarrow{\alpha}$, with $\overrightarrow{\alpha}$ containing all the transition probabilities representing scheduling of a packet (decision variables). ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$ follow from the properties of a homogeneous Markov chain, while ${\mathcal{C}}_{3}$ results from Property 1 of the transmission thresholds. For the optimized $\overrightarrow{{\alpha}^{\ast}}$, the corresponding transmission threshold vector $\overrightarrow{{\kappa}^{\ast}}=\phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}{\kappa}_{N}^{\ast}\dots {\kappa}_{1}^{\ast}]$ can be computed using (11) and vice versa.
To compute the solution of the optimization problem, we need to express the probability distribution of the fading of SVUs P_{g|d=1}(·) in (7) in terms of $\overrightarrow{\kappa}$.
Using standard methods for calculating the distribution of the product of two independent random variables, P_{g|d=1}(y) is calculated in the Appendix from (24) and the CDF of the path loss.
The energy in (7) is not a convex function of the transmission thresholds. In the following, we discuss two heuristic optimization techniques to compute transmission thresholds.
6.1 Optimization by simulated annealing
We use the simulated annealing (SA) algorithm to optimize the energy function for the transmission thresholds that result in the minimum energy for a given deadline delay parameter. The simulated annealing algorithm was proposed independently in [21] and [22]. It uses ideas from statistical mechanics to solve combinatorial problems and is believed to provide near-optimal (sometimes even optimal) solutions for many of them.
- 1.
Objective function
In this work, the objective function is the system energy as given in (7).
- 2.
Description of the configuration of the system
It is essential to provide a clear description of the configuration of the system. In our case, the vector $\overrightarrow{\alpha}$ is the parameter which represents the configuration of the system at a particular instant. The transmission thresholds are related to the transition probabilities for a given deadline and short-term fading.
- 3.
A random generator for the new configuration
At the start of the algorithm, any configuration can be provided. In each subsequent step, there must be a suitable method to produce a random change in the configuration. In this work, the transition probability vector $\overrightarrow{\alpha}$ is perturbed in each step to provide a new configuration at which (7) is evaluated.
- 4.
A cooling temperature schedule
The system is ‘heated’ at a high temperature T at the start of the algorithm. Afterwards, the temperature is decreased slowly up to the point where the system ‘freezes’. The terms heating and cooling originate in statistical thermodynamics, where freezing of the system represents a situation in which the system has reached a near-optimal solution and no further state^{e} transitions occur as the temperature parameter decreases further. The cooling schedule depends on the specific problem and can be developed after some experimentation. In our simulations, we tested both the Boltzmann annealing (BA) and fast annealing (FA) temperature cooling schedules, which have been proven to provide globally minimum solutions for a wide range of problems [23, 24]. In FA, it is sufficient to decrease the temperature inversely with the step index q such that

${T}_{q}=\frac{{T}_{0}}{q+1}$ (26)

where T_{0} is a suitable starting temperature. Similarly, in BA, global minima can be found (in many problems) if the temperature decreases logarithmically such that

${T}_{q}=\frac{{T}_{0}}{\ln(q+1)}$ (27)

- 5.
Acceptance probability
Any new configuration that results in a lower system energy is accepted with probability 1. The change in energy in each step is denoted by Δ E. A new configuration that results in a higher energy state is accepted with probability exp(−Δ E/T); this is referred to as muting. Muting occurs frequently at the start of the algorithm and becomes increasingly rare as the temperature T approaches zero.
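The five ingredients above can be combined into a generic SA loop, sketched below. The objective, initial configuration, and neighborhood move are placeholders standing in for (7) and the transition probability vector; the cooling follows the FA schedule of (26) and the acceptance is the standard Metropolis rule.

```python
import math
import random

def anneal(energy, x0, neighbor, t0=1.0, steps=2000, rng=random):
    """Generic simulated-annealing sketch.

    Cooling: fast annealing, T_q = T0 / (q + 1).
    Acceptance: downhill moves always; an uphill move with energy
    increase dE is accepted ('muting') with probability exp(-dE / T).
    """
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for q in range(steps):
        t = t0 / (q + 1)                       # FA cooling schedule
        cand = neighbor(x)
        e_cand = energy(cand)
        de = e_cand - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = cand, e_cand                # accept (possibly uphill)
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e
```

A quick sanity check on a one-dimensional quadratic shows the loop homing in on the minimizer.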
6.2 Optimization by recursion
- 1.
Start the optimization procedure with N=2, so that the optimization is a scalar problem and we only need to find the threshold κ _{ N }, since κ _{1}=0.
- 2.
Given the optimized threshold vector^{f} for N, i.e. $\overrightarrow{{\kappa}^{\ast}}\left(N\right)=\phantom{\rule{0.3em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}{\kappa}_{N}^{\ast}\right(N),{\kappa}_{N-1}^{\ast}(N),\dots ,{\kappa}_{2}^{\ast}(N),0]$, we find the threshold vector for the deadline N+1 by the heuristic postulate ${\overrightarrow{\kappa}}^{\ast}(N+1)=\phantom{\rule{0.3em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}{\kappa}_{N}^{\ast}\right(N+1),\overrightarrow{{\kappa}^{\ast}}(N\left)\right]$ and optimize over ${\kappa}_{N}^{\ast}(N+1)$. Again, this is a scalar optimization problem.
The postulate ${\overrightarrow{\kappa}}^{\ast}(N+1)=\phantom{\rule{0.3em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}{\kappa}_{N}^{\ast}\right(N+1),\overrightarrow{{\kappa}^{\ast}}(N\left)\right]$ reduces the complexity of computing the thresholds significantly. In SA, the complexity of computing the thresholds is $\mathcal{O}\left(N\right)$, while the recursive method requires optimizing just one additional threshold, since the other N−1 thresholds are already known. We show in Section 8 that the results produced by the two heuristic algorithms are indistinguishable.
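One step of this recursion can be sketched as a scalar grid search. The energy evaluator and the candidate grid are placeholders for (7) and a suitable discretization; only the new top threshold is optimized.

```python
def extend_thresholds(kappa, energy, grid):
    """Given the optimized vector kappa*(N) = [kappa_N, ..., kappa_2, 0],
    postulate kappa*(N+1) = [k_new] + kappa*(N) and pick k_new by a
    scalar search over `grid`, minimizing the energy functional."""
    k_new = min(grid, key=lambda k: energy([k] + kappa))
    return [k_new] + kappa
```

With a toy quadratic standing in for (7), the search recovers the grid point at the toy optimum while leaving the previously optimized thresholds untouched.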
7 Implementation considerations
The proposed scheme solves the optimization problem offline as a function of the channel statistics and the state of each buffered packet. The offline optimization task can be performed locally by the users and needs no centralized control, since it only involves the fading statistics, not the fading realizations. However, a centralized optimization would save complexity, since the outcome of the optimization is identical for all users. Similarly, the scheduling decisions can be taken fully by each user individually. However, the powers required by the users to transmit their packets depend on the ordering of the successive decoding. In a finite-user system, the users cannot know exactly the transmit power required to support their rate; therefore, they need to transmit with a power margin. The average excess power of the users should vanish in the many-user limit such that the system obeys (7). This does not happen if successive decoding is used. Joint decoding, however, does not suffer from this problem, as all users are decoded at the same time (without a specific order). Thus, for successive decoding, there is a need for a centralized assignment of the transmit powers. If the number of scheduled users is very large and joint decoding is employed, the users can calculate their transmit powers individually by closely approximating the empirical fading and rate distributions of the other scheduled users by their statistical averages, following the ideas of [10]. With joint decoding, the proposed scheme thus has the potential to be implemented in a distributed manner.
The simplicity in making the scheduling decisions based on comparing the offline computed thresholds with channel conditions is well-suited to delay sensitive applications and power-limited devices. By using the parameter τ_{ i } and the deadline distribution, we can control the energy-delay trade-off. A large value of τ_{ i } (and corresponding large p_{ i }) implies that the application data is more delay tolerant and the energy consumption will be closer to the energy consumption of the schemes without deadline delay guarantees.
8 Numerical results
We consider a multiple-access channel with M bands and assume that the short-term fading of the channels is statistically independent. Every user senses the M channels and selects its best channel as the candidate for transmission. Therefore, a specific user is scheduled if its best channel gain exceeds the transmission threshold. This is the optimal multi-band allocation for the hard fairness asymptotic case [1]. The spectral efficiency is normalized by M to obtain the spectral efficiency per channel C. We consider a system where users are placed uniformly at random in a cell, except for a forbidden region of radius δ=0.01 around the access point. The path loss is monomial with exponent 2. All users experience fast fading with an exponential power distribution of unit mean on each of the M channels. The details of the path loss model can be found in the Appendix.
Thresholds computed via SA

| N | κ _{4} | κ _{3} | κ _{2} | κ _{1} | (E_{ b }/N_{0})_{ s y s } |
|---|--------|--------|--------|--------|---------------------------|
| 2 | -      | -      | 0.24   | 0      | −1.42 dB                  |
| 3 | -      | 0.54   | 0.23   | 0      | −3.06 dB                  |
| 4 | 0.76   | 0.59   | 0.22   | 0      | −4.07 dB                  |
Recursively computed thresholds

| N | κ _{4} | κ _{3} | κ _{2} | κ _{1} | (E_{ b }/N_{0})_{ s y s } |
|---|--------|--------|--------|--------|---------------------------|
| 2 | -      | -      | 0.24   | 0      | −1.42 dB                  |
| 3 | -      | 0.52   | 0.24   | 0      | −3.05 dB                  |
| 4 | 0.75   | 0.52   | 0.24   | 0      | −4.08 dB                  |
9 Conclusion
We have proposed an energy-efficient opportunistic multiuser scheduling scheme in the presence of a hard deadline delay constraint for the individual packets. The proposed scheme schedules the data depending on the instantaneous short-term fading and the transmission deadline of the packets and exploits good channel conditions to make the system energy efficient. The many-user analysis and MDP modeling of the proposed scheme are the major contributions of this work. The many-user model helps to compute the solution for the case of a convex rate-power curve. Our system modeling ensures that the multiuser scheduling can be broken into a packet-based scheduling problem in the many-user limit. Though the threshold optimization for packet transmission is not a convex optimization problem, it can be solved within small margins of optimality with quite low complexity. We show that random arrivals can be modeled as constant arrivals of random size in the many-user limit and that the scheduling decisions are independent of the deadline distribution of the arriving packets. The numerical results demonstrate that the many-user considerations are applicable for a reasonable network size of a few hundred users. The hard deadline can be used as a tuning parameter by the system designer to control the trade-off between the energy efficiency of the system and the maximum latency tolerated by the application.
Endnotes
^{a} The dropping probability is defined as the probability that a packet cannot meet its deadline and is eventually dropped after being buffered for a number of time slots equal to the deadline.
^{b} The problem of error propagation in successive decoding can easily be overcome by means of iterative (soft) multiuser decoding [20].
^{c} It should be noted that it may not be feasible to guarantee the deadline for every packet, e.g., due to shadowing or power limitations. The scheme can easily be extended to a packet-dropping scenario with non-zero dropping probability [25], but this is avoided here to keep the focus on the main topic.
^{d} It should be noted that the computation of steady-state probabilities in an MDP requires solving the state equations subject to the condition $\sum _{i}{\pi}_{i}=1$. Thanks to the tree structure of the state diagram, we are able to compute the limiting probabilities in closed form via (17).
^{e} The state in SA refers to the configuration of the system, i.e. the current transmission thresholds. It has no relation to the state of the Markov process given by the buffering time of the packet.
^{f} Please note that the optimization can also be performed for the optimal $\overrightarrow{\alpha}$ as in Section 6.1, with the optimal thresholds then computed from $\overrightarrow{{\alpha}^{\ast}}$.
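The closed-form limiting probabilities mentioned in endnote d can always be cross-checked numerically. A minimal sketch for a generic finite Markov chain, using a hypothetical 3-state transition matrix (the states loosely stand in for packet buffering times; the matrix is not from the paper) and solving the state equations together with the normalization $\sum_i \pi_i = 1$:

```python
import numpy as np

# Hypothetical transition matrix over packet buffering times:
# each row sums to one; state 3 models the deadline slot, after which
# the packet leaves the buffer and a new packet starts in state 1.
P = np.array([
    [0.5, 0.5, 0.0],   # from buffering time 1
    [0.6, 0.0, 0.4],   # from buffering time 2
    [1.0, 0.0, 0.0],   # from buffering time 3 (deadline reached)
])

# Solve pi P = pi with sum_i pi_i = 1 by replacing one redundant
# balance equation with the normalization row.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)   # limiting probabilities, summing to one
```

For chains with the tree structure exploited in the paper, the same probabilities follow in closed form; the linear solve is only a generic verification tool.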
Appendix
In this work, the channel model of [1] is used. Signal propagation is characterized by a distance-dependent path loss factor and a frequency-selective short-term fading that depends on the scattering environment around the user terminal. As described in Section 2, these two effects are taken into account by letting ${g}_{k}^{m}={s}_{k}{f}_{k}^{m}$, where s_{ k } denotes the path loss of user k and ${f}_{k}^{m}$ is the short-term fading of user k in channel m.
The path loss at the cell border is normalized to one.
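The composite gain ${g}_{k}^{m}={s}_{k}{f}_{k}^{m}$ is easy to simulate. A minimal sketch under assumptions not taken from the paper: Rayleigh short-term fading (so the fading power is unit-mean exponential) and a simple power-law path loss $s_k = d_k^{-\beta}$ with distances normalized so that the cell border sits at $d_k = 1$, matching the normalization above:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 8        # users and parallel channels (illustrative sizes)
beta = 3.0         # hypothetical path-loss exponent

# Distances normalized to the cell radius; path loss is one at the border
# (d = 1) and larger than one inside the cell.
d = rng.uniform(0.1, 1.0, size=K)
s = d ** (-beta)

# Unit-mean exponential fading power per user and channel (Rayleigh
# amplitude assumption), independent across users and channels.
f = rng.exponential(1.0, size=(K, M))

g = s[:, None] * f     # channel gains g_k^m
print(g.shape)
```

The separable structure is what lets the path loss be treated as a per-user constant while only the short-term fading varies from slot to slot.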
Declarations
Acknowledgements
This work was supported by the Research Council of Norway (NFR) under the NORDITE/VERDIKT program (NFR contract no. 172177).
References
- Caire G, Müller RR, Knopp R: Hard fairness versus proportional fairness in wireless communications: the single-cell case. IEEE Trans. Inform. Theory 2007, 53(4):1366-1385.
- Viswanath P, Tse DNC, Laroia R: Opportunistic beamforming using dumb antennas. IEEE Trans. Inform. Theory 2002, 48(6):1277-1294.
- Berry RA, Gallager RG: Communication over fading channels with delay constraints. IEEE Trans. Inform. Theory 2002, 48(5):1135-1149. doi:10.1109/18.995554
- Wu D, Negi R: Utilizing multiuser diversity for efficient support of quality of service over a fading channel. IEEE Trans. Veh. Technol. 2005, 54(3):1198-1206. doi:10.1109/TVT.2005.844671
- Chan W, Neely MJ, Mitra U: Energy efficient scheduling with individual packet delay constraints: offline and online results. In IEEE Infocom. Piscataway: IEEE; 2007.
- Neely MJ: Optimal energy and delay tradeoffs for multiuser wireless downlinks. IEEE Trans. Inform. Theory 2007, 53(9):3095-3113.
- Tarello A, Sun J, Zafar M, Modiano E: Minimum energy transmission scheduling subject to deadline constraints. Wireless Networks 2008, 14(5):633-645. doi:10.1007/s11276-006-0005-6
- Bertsekas DP: Dynamic Programming and Optimal Control, Vol. 1. Nashua: Athena Scientific; 2007.
- Lee J, Jindal N: Energy-efficient scheduling of delay constrained traffic over fading channels. IEEE Trans. Wireless Commun. 2009, 8(4):1866-1875.
- Viswanath P, Tse DNC, Anantharam V: Asymptotically optimal water-filling in vector multiple-access channels. IEEE Trans. Inform. Theory 2001, 47(1):241-267. doi:10.1109/18.904525
- Benaïm M, Le Boudec J-Y: A class of mean field interaction models for computer and communication systems. Perform. Eval. 2008, 65(11-12):823-838.
- Butt MM, Jorswieck EA: Maximizing system energy efficiency by exploiting multiuser diversity and loss tolerance of the applications. IEEE Trans. Wireless Commun. 2013, 12(9):4392-4401.
- Chaporkar P, Kansanen K, Müller RR: On the delay-energy tradeoff in multiuser fading channels. EURASIP J. Wirel. Commun. Netw. 2009, 2009:1-14.
- Guo D, Verdu S: Randomly spread CDMA: asymptotics via statistical physics. IEEE Trans. Inform. Theory 2005, 51(6):1983-2010. doi:10.1109/TIT.2005.847700
- Butt MM: Energy-performance trade-offs in multiuser scheduling: large system analysis. IEEE Wireless Commun. Lett. 2012, 1(3):217-220.
- Jindal N, Vishwanath S, Goldsmith A: On the duality of Gaussian multiple-access and broadcast channels. IEEE Trans. Inform. Theory 2004, 50(5):768-783.
- Hanly S, Tse D: Multi-access fading channels - part II: delay-limited capacities. IEEE Trans. Inform. Theory 1998, 44(7):2816-2831. doi:10.1109/18.737514
- ten Brink S, Kramer G, Ashikhmin A: Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun. 2004, 52(4):670-678. doi:10.1109/TCOMM.2004.826370
- Sanderovich A, Peleg M, Shamai S: LDPC coded MIMO multiple access with iterative joint decoding. IEEE Trans. Inform. Theory 2005, 51(4):1437-1450. doi:10.1109/TIT.2005.844064
- Caire G, Müller R, Tanaka T: Iterative multiuser joint decoding: optimal power allocation and low-complexity implementation. IEEE Trans. Inform. Theory 2004, 50(8):1950-1973.
- Kirkpatrick S, Gelatt CD, Vecchi MP: Optimization by simulated annealing. Science 1983, 220(4598):671-680. doi:10.1126/science.220.4598.671
- Cerny V: Thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm. J. Optim. Theory Appl. 1985, 45(1):41-52. doi:10.1007/BF00940812
- Geman S, Geman D: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6(6):721-741.
- Szu H, Hartley R: Fast simulated annealing. Phys. Lett. A 1987, 122(3-4):157-162.
- Butt MM, Kansanen K, Müller RR: Hard deadline constrained multiuser scheduling for random arrivals. In IEEE WCNC. Piscataway: IEEE; 2011.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.