
Optimal resource allocation in wireless communication and networking

Abstract

Optimal design of wireless systems in the presence of fading involves the instantaneous allocation of resources such as power and frequency with the ultimate goal of maximizing long term system properties such as ergodic capacities and average power consumptions. This yields a distinctive problem structure where long term average variables are determined by the expectation of a not necessarily concave functional of the resource allocation functions. Despite their lack of concavity it can be proven that these problems have null duality gap under mild conditions permitting their solution in the dual domain. This affords a significant reduction in complexity due to the simpler structure of the dual function. The article discusses the problem simplifications that arise by working in the dual domain and reviews algorithms that can determine optimal operating points with relatively lightweight computations. Throughout the article concepts are illustrated with the optimal design of a frequency division broadcast channel.

Introduction

Operating variables of a wireless system can be separated into two types. Resource allocation variables p(h) determine the instantaneous allocation of resources like frequencies and transmitted powers as a function of the fading coefficient h. Average variables x capture the system's performance over a long period of time and are related to instantaneous resource allocations via ergodic averages. A generic representation of the relationship between instantaneous and average variables is

x = \mathbb{E}\left[ f_1\big(h, p(h)\big) \right],
(1)

where f1(h,p(h)) is a vector function that maps channel h and resource allocation p(h) to instantaneous performance f1(h,p(h)). The system’s design goal is to select resource allocations p(h) to maximize ergodic variables x in some sense.

An example of a relationship having the form in (1) is a code division multiple access channel in which case h denotes the vector of channel coefficients, p(h) the instantaneous transmitted power, f1(h,p(h)) the instantaneous communication rate determined by the signal to interference plus noise ratio, and x the ergodic rates determined by the expectation of the instantaneous rates. The design goal is to allocate instantaneous power p(h) subject to a power constraint so as to maximize a utility of the ergodic rate vector x. This interplay of instantaneous actions to optimize long term performance is pervasive in wireless systems. A brief list of examples includes optimization of orthogonal frequency division multiplexing[1], beamforming[2, 3], cognitive radio[4, 5], random access[6, 7], communication with imperfect channel state information (CSI)[8, 9], and various flavors of wireless network optimization[10–18].

In many cases of interest the functions f1(h,p(h)) are nonconcave and as a consequence finding the resource allocation distribution p(h) that maximizes x requires solution of a nonconvex optimization problem. This is further complicated by the fact that since fading channels h take on a continuum of values there is an infinite number of p(h) variables to be determined. A simple escape from this problem is to allow for time sharing in order to make the range of E[f1(h,p(h))] convex and permit solution in the dual domain without loss of optimality. While the nonconcave function f1(h,p(h)) still complicates matters, working in the dual domain makes solution, if not necessarily simple, at least substantially simpler. However, time sharing is not easy to implement in fading channels.

In this article, we review a general methodology that can be used to solve optimal resource allocation problems in wireless communications and networking without resorting to time sharing[19, 20]. The fundamental observation is that the range of E[f1(h,p(h))] is convex if the probability distribution of the channel h contains no points of positive probability (Section “Duality in wireless systems optimization”). This observation can be leveraged to show lack of duality gap of general optimal resource allocation problems (Theorem 1) making primal and dual problems equivalent. The dual problem is simpler to solve and its solution can be used to recover primal variables (Section “Recovery of optimal primal variables”) with reduced computational complexity due to the inherently separable structure of the problem Lagrangians (Section “Separability”). We emphasize that this reduction in complexity, as in the case of time sharing, just means that the problem becomes simpler to solve. In many cases it also becomes simple to solve, but this is not necessarily the case.

We also discuss a stochastic optimization algorithm to determine optimal dual variables that can operate without knowledge of the channel probability distribution (Section “Dual descent algorithms”). This algorithm is known to almost surely converge to optimal operating points in an ergodic sense (Theorem 5). Throughout the article concepts are illustrated with the optimal design of a frequency division broadcast channel (Section “Frequency division broadcast channel” in “Optimal wireless system design”, Section “Frequency division broadcast channel” in “Recovery of optimal primal variables”, and Section “Frequency division broadcast channel” in “Dual descent algorithms”).

One of the best known resource allocation problems in wireless communications concerns the distribution of power on a block fading channel using capacity-achieving codes. The solution to this problem is easy to derive and is well known to reduce to waterfilling across the fading gain, e.g.,[21, p. 245]. Since this article can be considered as an attempt to generalize this solution methodology to general wireless communication and networking problems it is instructive to close this introduction by reviewing the derivation of the waterfilling solution. This is pursued in the following section.

Power allocation in a point-to-point channel

Consider a transmitter having access to perfect CSI h that it uses to select a transmitted power p(h) to convey information to a receiver. Using a capacity achieving code the instantaneous channel rate for fading realization h is r(h) = log(1 + h p(h)/N₀) where N₀ denotes the noise power at the receiver end. A common goal is to maximize the average rate r := E[r(h)] with respect to the probability distribution m_h(h) of the channel gain h—which is an accurate approximation of the long term average rate—subject to an average power constraint q₀. We can formulate this problem as the optimization program

P = \max \; \mathbb{E}\left[ \log\left(1 + \frac{h\,p(h)}{N_0}\right) \right] \qquad \text{s.t.} \quad \mathbb{E}\left[ p(h) \right] \le q_0 .
(2)

In most cases the fading channel h takes on a continuum of values. Therefore, solving (2) requires the determination of a power allocation function p: ℝ₊ → ℝ₊ that maps nonnegative fading coefficients to nonnegative power allocations. This means that (2) is an infinite dimensional optimization problem which in principle could be difficult to solve. Nevertheless, the solution to this program is easy to derive and given by waterfilling as we already mentioned. The widespread knowledge of the waterfilling solution masks the fact that it is rather remarkable that (2) is easy to solve and begs the question of what are the properties that make it so. Let us then review the derivation of the waterfilling solution in order to pinpoint these properties.

To solve (2) we work in the dual domain. To work in the dual domain we need to introduce the Lagrangian, the dual function, and the dual problem. Introduce then the nonnegative dual variable λ ∈ ℝ₊ and define the Lagrangian associated with the optimization problem in (2) as

\mathcal{L}(p, \lambda) = \mathbb{E}\left[ \log\left(1 + \frac{h\,p(h)}{N_0}\right) \right] + \lambda\left( q_0 - \mathbb{E}\left[ p(h) \right] \right).
(3)

The dual function is defined as the maximum value of the Lagrangian across all functions p: ℝ₊ → ℝ₊, which upon defining 𝒫 := {p : p(h) ≥ 0} as the set of nonnegative functions can be defined as

g(\lambda) := \max_{p \in \mathcal{P}} \mathcal{L}(p, \lambda) = \max_{p \in \mathcal{P}} \; \mathbb{E}\left[ \log\left(1 + \frac{h\,p(h)}{N_0}\right) \right] + \lambda\left( q_0 - \mathbb{E}\left[ p(h) \right] \right).
(4)

The dual problem corresponds to the minimization of the dual function with respect to all nonnegative multipliers λ,

D = \min_{\lambda \ge 0} g(\lambda) = \min_{\lambda \ge 0} \max_{p \in \mathcal{P}} \mathcal{L}(p, \lambda).
(5)

Since the objective in (2) is concave with respect to variables p(h) and the constraint is linear in p(h) the optimization problem in (2) is convex and as such it has null duality gap in the sense that P=D.

An entity that is important for the upcoming discussion is the primal Lagrangian maximizer function p(λ): ℝ₊ → ℝ₊ whose values for given h are denoted as p(h,λ). This function is defined as the one that maximizes the Lagrangian for given dual variable λ

p(\lambda) = \operatorname*{arg\,max}_{p \in \mathcal{P}} \mathcal{L}(p, \lambda).
(6)

Using the definition of the Lagrangian maximizer function we can write the dual function as g(λ) = ℒ(p(λ), λ).

Computing values p(h,λ) of the Lagrangian maximizer function p(λ) is easy. To see that this is true rewrite the Lagrangian in (3) so that there is only one expectation

\mathcal{L}(p, \lambda) = \mathbb{E}\left[ \log\left(1 + \frac{h\,p(h)}{N_0}\right) - \lambda\, p(h) \right] + \lambda q_0 .
(7)

With the Lagrangian written in this form we can see that the maximization of ℒ(p,λ) required by (6) can be decomposed in maximizations for each individual channel realization,

\max_{p \in \mathcal{P}} \mathcal{L}(p, \lambda) = \mathbb{E}\left[ \max_{p(h) \ge 0} \; \log\left(1 + \frac{h\,p(h)}{N_0}\right) - \lambda\, p(h) \right] + \lambda q_0 .
(8)

That the equality in (8) is true is a consequence of the fact that the expectation operator is linear and that there are no constraints coupling the selection of values p(h₁) and p(h₂) for different channel realizations h₁ ≠ h₂. Functional values p(h) in both sides of (8) are required to be nonnegative but other than that we can select p(h₁) and p(h₂) independently of each other as indicated in the right hand side of (8).

Since the right hand side of (8) states that to maximize the Lagrangian we can select functional values p(h) independently of each other, values p(h,λ) of the Lagrangian maximizer function p(λ) defined in (6) are given by

p(h, \lambda) = \operatorname*{arg\,max}_{p(h) \ge 0} \; \log\left(1 + \frac{h\,p(h)}{N_0}\right) - \lambda\, p(h).
(9)

The similarity between (6) and (9) is deceiving as the latter is a much easier problem to solve that involves a single variable. To find the Lagrangian maximizer value p(h,λ) operating from (9) it suffices to solve for the null of the derivative with respect to p(h). Doing this yields the Lagrangian maximizer

p(h, \lambda) = \left[ \frac{1}{\lambda} - \frac{N_0}{h} \right]^{+},
(10)

where the operator [x]⁺ := max(x,0) denotes projection onto the nonnegative reals, which is needed because of the constraint p(h) ≥ 0.
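As a concrete check of the per-fading state maximization (9) and its closed form (10), the short Python sketch below evaluates the waterfilling maximizer for a few channel gains and verifies numerically that it attains the maximum of the per-state objective. The values of λ, N₀, and the channel gains are arbitrary choices for illustration, not parameters taken from the article.

```python
import numpy as np

def waterfilling_power(h, lam, N0):
    """Closed-form Lagrangian maximizer p(h, lambda) = [1/lambda - N0/h]^+ from (10)."""
    return max(1.0 / lam - N0 / h, 0.0)

def per_state_objective(p, h, lam, N0):
    """Per-fading state Lagrangian term log(1 + h p / N0) - lambda p from (9)."""
    return np.log(1.0 + h * p / N0) - lam * p

# Arbitrary illustrative values (not from the article).
lam, N0 = 0.5, 1.0
for h in [0.2, 1.0, 3.0]:
    p_star = waterfilling_power(h, lam, N0)
    # Brute-force check over a fine grid of candidate powers.
    grid = np.linspace(0.0, 10.0, 10001)
    p_grid = grid[np.argmax(per_state_objective(grid, h, lam, N0))]
    print(f"h={h:4.1f}  closed form p={p_star:.4f}  grid search p={p_grid:.4f}")
```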

Of particular interest is the Lagrangian maximizer function p(λ*) corresponding to the optimal Lagrange multiplier λ* := argmin_{λ≥0} g(λ). Returning to the definition of the dual function in (4) we can bound D = g(λ*) as

D = \max_{p \in \mathcal{P}} \mathcal{L}(p, \lambda^*) \ge \mathcal{L}(p^*, \lambda^*).
(11)

Indeed, since D is given by a Lagrangian maximization it must equal or exceed the value of the Lagrangian for any function p, and for the optimal function p* in particular. Considering the explicit Lagrangian expression in (3) we can write ℒ(p*, λ*) as

\mathcal{L}(p^*, \lambda^*) = \mathbb{E}\left[ \log\left(1 + \frac{h\,p^*(h)}{N_0}\right) \right] + \lambda^*\left( q_0 - \mathbb{E}\left[ p^*(h) \right] \right).
(12)

Observe that since p* is the optimal power allocation function it must satisfy the power constraint, implying that q₀ − E[p*(h)] ≥ 0. Since the optimal dual variable satisfies λ* ≥ 0 their product is also nonnegative, allowing us to transform (12) into the bound

\mathcal{L}(p^*, \lambda^*) \ge \mathbb{E}\left[ \log\left(1 + \frac{h\,p^*(h)}{N_0}\right) \right] = P,
(13)

where the equality is true because P and p* are the optimal value and argument of the primal optimization problem (2). Combining the bounds in (11) and (13) yields

D = \max_{p \in \mathcal{P}} \mathcal{L}(p, \lambda^*) \ge \mathcal{L}(p^*, \lambda^*) \ge P.
(14)

Using the equivalence P = D of primal and dual optimum values it follows that the inequalities in (14) must hold as equalities. The equality max_{p∈𝒫} ℒ(p, λ*) = ℒ(p*, λ*), in particular, implies that the function p* is the Lagrangian maximizing function corresponding to λ = λ*, i.e.,

p^* = p(\lambda^*) = \operatorname*{arg\,max}_{p \in \mathcal{P}} \mathcal{L}(p, \lambda^*).
(15)

The important consequence of (15) follows from the fact that the Lagrangian maximizer function p(λ) is easy to compute using the decomposition in (9). By extension, if the optimal multiplier λ* is available, computation of the optimal power allocation function p* also becomes easy. Indeed, making λ = λ* in (9) we can determine values p*(h) of p* as

p^*(h) = p(h, \lambda^*) = \operatorname*{arg\,max}_{p(h) \ge 0} \; \log\left(1 + \frac{h\,p(h)}{N_0}\right) - \lambda^*\, p(h),
(16)

which are explicitly given by (10) with λ = λ*.

To complete the problem solution we still need to determine the optimal multiplier λ*. We show a method for doing so in Section “Dual descent algorithms”, but it is important to recognize that this cannot be a difficult problem because the dual function is one dimensional and convex—dual functions of maximization problems are always convex. This has to be contrasted with the infinite dimensionality of the primal problem. By working in the dual domain we reduce the problem of determining the infinite dimensional optimal power allocation function to the determination of the one dimensional optimal Lagrange multiplier λ*.
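Since g(λ) is convex and one dimensional, λ* can indeed be found with a simple search. The sketch below is a minimal illustration, not the article's algorithm: it exploits the fact that the average power E[p(h,λ)] obtained from (10) is nonincreasing in λ and bisects on λ until the power budget is met with equality. The exponential (Rayleigh power) fading model and all numerical constants are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
h_samples = rng.exponential(scale=1.0, size=200_000)  # assumed Rayleigh-fading power gains
N0, q0 = 1.0, 1.0                                     # illustrative noise power and power budget

def avg_power(lam):
    """Monte Carlo estimate of E[p(h, lambda)] with p from the waterfilling form (10)."""
    p = np.maximum(1.0 / lam - N0 / h_samples, 0.0)
    return p.mean()

# Bisection: avg_power is nonincreasing in lambda, so find lambda with avg_power = q0.
lo, hi = 1e-6, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if avg_power(mid) > q0:
        lo = mid          # too much power, increase the price lambda
    else:
        hi = mid
lam_star = 0.5 * (lo + hi)
print(f"lambda* ~ {lam_star:.4f}, E[p(h, lambda*)] ~ {avg_power(lam_star):.4f}")
```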

Recapping the derivation of the optimal power allocation p*, we see that there are three conditions that make (2) simple:

(C1) Since the optimization problem is convex it is equivalent to its Lagrangian dual, implying that optimal primal variables can be determined as Lagrangian maximizers associated with the optimal multiplier λ* [cf. (11)–(15)].

(C2) Due to the separable structure of the Lagrangian, determination of the optimal power allocation function is carried out through the solution of per-fading state subproblems [cf. (6)–(9)].

(C3) Because there is a finite number of constraints the dual function is finite dimensional even though there is an infinite number of primal variables.

Most optimization problems in wireless systems are separable in the sense of (C2) and have a finite number of constraints [cf. (C3)] but are typically not convex [cf. (C1)].

To illustrate this latter point consider a simple variation of (2) where instead of using capacity achieving codes we use adaptive modulation and coding (AMC) relying on a set of L communication modes. The l-th mode supports a rate α_l and is used when the signal to noise ratio (SNR) at the receiver end is between β_l and β_{l+1}. Letting γ be the received SNR and 𝕀(E) denote the indicator function of the event E, the communication rate function C_AMC(γ) for AMC can be written as

C_{\mathrm{AMC}}(\gamma) = \sum_{l=1}^{L} \alpha_l \, \mathbb{I}\left( \beta_l \le \gamma < \beta_{l+1} \right).
(17)

The corresponding optimal power allocation problem subject to average power constraint q0 can now be formulated as

P = \max \; \mathbb{E}\left[ \sum_{l=1}^{L} \alpha_l \, \mathbb{I}\left( \beta_l \le \frac{h\,p(h)}{N_0} < \beta_{l+1} \right) \right] \qquad \text{s.t.} \quad \mathbb{E}\left[ p(h) \right] \le q_0 .
(18)

Similar to (2), the dual function of the optimization problem in (18) is one dimensional and its Lagrangian is separable in the sense that we can find Lagrangian maximizers by performing per-fading state maximizations. Alas, the problem is not convex because the AMC rate function C_AMC(γ) is not concave; in fact, it is not even continuous.

Since it does not satisfy (C1), solving (18) in the dual domain is not possible in principle. Nevertheless, the condition that allows determination of p*(h) as the Lagrangian maximizer p(h,λ*) [cf. (16)] is not the convexity of the primal problem but the lack of duality gap. We’ll see in Section “Duality in wireless systems optimization” that this problem does have null duality gap as long as the probability distribution of the channel h contains no points of strictly positive probability. Thus, the solution methodology in (3)–(16) can be applied to solve (18) despite the discontinuous AMC rate function C_AMC(γ). This is actually a quite generic property that holds true for optimization problems where nonconcave functions appear inside expectations. We introduce this generic problem formulation in the next section.
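Although the AMC objective in (18) is discontinuous, the per-fading state Lagrangian maximization remains tractable: for each mode l the cheapest power that activates it is p = β_l N₀/h, so the maximizer can be found by comparing finitely many candidates. The sketch below illustrates this; the mode rates, thresholds, and remaining constants are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative AMC parameters (assumed): rates alpha_l and SNR thresholds beta_l.
alpha = np.array([0.5, 1.0, 2.0, 3.0])          # rate of mode l
beta = np.array([0.5, 1.0, 2.5, 5.0])           # mode l active for SNR in [beta_l, beta_{l+1})
N0 = 1.0

def amc_lagrangian_maximizer(h, lam):
    """Per-fading state maximizer of C_AMC(h p / N0) - lam * p.

    The only candidate powers worth considering are p = 0 and the minimum power
    p = beta_l * N0 / h that reaches the threshold of each mode l.
    """
    candidates = [(0.0, 0.0)]                    # (objective value, power)
    for a_l, b_l in zip(alpha, beta):
        p_l = b_l * N0 / h
        candidates.append((a_l - lam * p_l, p_l))
    value, power = max(candidates)               # pick the candidate with the largest objective
    return power, value

for h in [0.3, 1.0, 4.0]:
    p, v = amc_lagrangian_maximizer(h, lam=0.6)
    print(f"h={h:3.1f}: p(h,lambda)={p:.3f}, objective={v:.3f}")
```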

Optimal wireless system design

Let us return to the relationship in (1) where h denotes the random fading state that ranges on a continuous space, p(h) the instantaneous resource allocation, f1(h,p(h)) a vector function mapping h and p(h) to instantaneous system performance, and x an ergodic average. The expectation in (1) is with respect to the joint probability distribution m_h(h) of the vector channel h.

It is convenient to think of f1(h,p(h)) as a family of functions indexed by the parameter h with p(h) denoting the corresponding variables. Notice that there is one vector p(h) per fading state h, which translates into an infinite number of resource allocation variables if h takes on an infinite number of values. Consequently, it is adequate to refer to the set p := {p(h)}_h of all resource allocations as the resource allocation function. The number of ergodic limits x of interest, on the other hand, is assumed finite.

Instantaneous resource allocations p(h) are further constrained to a given bounded set 𝒫(h). These restrictions define a set of admissible resource allocation functions that we denote as

\mathcal{P} := \left\{ p : p(h) \in \mathcal{P}(h), \ \text{for all } h \right\}.
(19)

Variables p(h) determine system performance over short periods of time. As such, they are of interest to the system designer but transparent to the end user except to the extent that they determine the value of the ergodic variables x. Therefore, we adopt as our design objective the maximization of a concave utility function f0(x) of the ergodic average x. Putting these preliminaries together we write the following program as an abstract formulation of optimal resource allocation problems in wireless systems

P = \max \; f_0(x) \qquad \text{s.t.} \quad x \le \mathbb{E}\left[ f_1\big(h, p(h)\big) \right], \quad f_2(x) \ge 0, \quad x \in \mathcal{X}, \quad p \in \mathcal{P},
(20)

where we added further constraints on the set of ergodic averages x. These constraints are in the form of a bounded convex set inclusion x ∈ 𝒳 and a concave function inequality f2(x) ≥ 0. In the problem formulation in (20) the set 𝒳 is convex and the functions f0(x) and f2(x) are concave. The family of functions f1(h,p(h)) is not necessarily concave with respect to p(h) and the set 𝒫 is not necessarily convex. The sets 𝒳 and 𝒫 are assumed compact to guarantee that x and p(h) are finite. For the expectation in (20) to exist we need to have f1(h,p(h)) integrable with respect to the probability distribution m_h(h) of the vector channel h. This imposes a (mild) restriction on the functions f1(h,p(h)) and the power allocation function p. Integrability is weaker than continuity.

For future reference define x* and p* as the arguments that solve (20)

(x^*, p^*) = \operatorname*{arg\,max} \; f_0(x) \qquad \text{s.t.} \quad x \le \mathbb{E}\left[ f_1\big(h, p(h)\big) \right], \quad f_2(x) \ge 0, \quad x \in \mathcal{X}, \quad p \in \mathcal{P}.
(21)

The configuration pair (x*, p*) attains the optimum value P = f0(x*) and satisfies the constraints x* ≤ E[f1(h, p*(h))] and f2(x*) ≥ 0 as well as the set constraints x* ∈ 𝒳 and p* ∈ 𝒫. Observe that the pair (x*, p*) need not be unique. It may be, and it is actually a common occurrence in practice, that more than one configuration is optimal. Thus, (21) does not define a pair of variables but a set of pairs of optimal configurations. As it does not lead to confusion we use (x*, p*) to represent both the set of optimal configurations and an arbitrary element of this set.

To write the Lagrangian we introduce a nonnegative Lagrange multiplier Λ = [λ₁ᵀ, λ₂ᵀ]ᵀ where λ₁ ≥ 0 is associated with the constraint x ≤ E[f1(h, p(h))] and λ₂ ≥ 0 with the constraint f2(x) ≥ 0. The Lagrangian of the primal optimization problem in (20) is then defined as

\mathcal{L}(x, p, \Lambda) := f_0(x) + \lambda_1^T \left( \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] - x \right) + \lambda_2^T f_2(x).
(22)

The corresponding dual function is the maximum of the Lagrangian with respect to the primal variables x ∈ 𝒳 and p ∈ 𝒫

g(\Lambda) := \max_{x \in \mathcal{X},\, p \in \mathcal{P}} \mathcal{L}(x, p, \Lambda).
(23)

The dual problem is defined as the minimum value of the dual function over all nonnegative dual variables

D = \min_{\Lambda \ge 0} g(\Lambda),
(24)

and the optimal dual variables are defined as the arguments that achieve the minimum in (24)

\Lambda^* := \operatorname*{arg\,min}_{\Lambda \ge 0} g(\Lambda).
(25)

Notice that the optimal dual argument Λ* is a set as in the case of the primal optimal arguments because there may be more than one vector that achieves the minimum in (24). As we do with the optimal primal variables (x*, p*), we use Λ* to denote the set of optimal dual variables and an arbitrary element of this set. A particular example of this generic problem formulation is presented next.

Frequency division broadcast channel

A common access point (AP) administers a power budget q₀ to communicate with a group of J nodes. The physical layer uses frequency division so that at most one terminal can be active at any given point in time. The goal is to design an algorithm that allocates power and frequency to maximize a given ergodic rate utility metric while ensuring that rates are at least r_min and not more than r_max.

Denote as h_i the channel to terminal i and define the vector h := [h₁,…,h_J]ᵀ grouping all channel realizations. In any time slot the AP observes the realization h of the fading channel and decides on suitable power allocations p_i(h) and frequency assignments α_i(h). Frequency assignments α_i(h) are indicator variables α_i(h) ∈ {0,1} that take the value α_i(h) = 1 when information is transmitted to node i and α_i(h) = 0 otherwise. If α_i(h) = 1 communication towards i ensues at power p_i(h) resulting in a communication rate C(h_i p_i(h)/N₀) determined by the SNR h_i p_i(h)/N₀. The specific form of the function C(h_i p_i(h)/N₀) mapping channels and powers to transmission rates depends on the type of modulation and codes used. One possibility is to select capacity achieving codes leading to C(h_i p_i(h)/N₀) = log(1 + h_i p_i(h)/N₀). A more practical choice is to use AMC, in which case C(h_i p_i(h)/N₀) = C_AMC(h_i p_i(h)/N₀) with C_AMC(h_i p_i(h)/N₀) as given in (17).

Regardless of the specific form of C(h i p i (h)/N0) we can write the ergodic rate of terminal i as

r_i = \mathbb{E}\left[ \alpha_i(h)\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) \right].
(26)

The factor C(h_i p_i(h)/N₀) is the instantaneous rate achieved if information is conveyed. The factor α_i(h) indicates whether this information is indeed conveyed or not. The expectation weights the instantaneous capacity across fading states and is equivalent to the consideration of an infinite horizon time average.

Similarly, p i (h) denotes the power allocated for communication with node i, but this communication is attempted only if α i (h)=1. Thus, the instantaneous power utilized to communicate with i for channel realization h is α i (h)p i (h). The total instantaneous power is the sum of these products for all i and the long term average power consumption can be approximated as the expectation

\mathbb{E}\left[ \sum_{i=1}^{J} \alpha_i(h)\, p_i(h) \right] \le q_0,
(27)

that according to the problem statement cannot exceed the budget q0.
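As a sanity check on the ergodic quantities in (26) and (27), the following sketch estimates the ergodic rates and the average power by Monte Carlo sampling of the channel vector under a deliberately simple policy (serve the strongest terminal with a fixed power). The policy, the Rayleigh fading model, and all constants are illustrative assumptions and have nothing to do with the optimal allocation derived later.

```python
import numpy as np

rng = np.random.default_rng(1)
J, N0, p_fixed = 3, 1.0, 1.0          # assumed number of terminals, noise power, fixed transmit power
T = 100_000                            # number of channel draws

H = rng.exponential(scale=1.0, size=(T, J))        # assumed Rayleigh-fading power gains
served = np.argmax(H, axis=1)                      # simple policy: serve the strongest terminal
alpha = np.zeros((T, J))
alpha[np.arange(T), served] = 1.0                  # frequency assignment indicators alpha_i(h)

rates = alpha * np.log(1.0 + H * p_fixed / N0)     # instantaneous rates alpha_i(h) C(h_i p_i(h)/N0)
ergodic_rates = rates.mean(axis=0)                 # Monte Carlo estimate of (26) for each terminal
avg_power = (alpha * p_fixed).sum(axis=1).mean()   # Monte Carlo estimate of the left side of (27)

print("ergodic rates:", np.round(ergodic_rates, 3))
print("average power:", round(avg_power, 3))
```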

To avoid collisions between communication attempts the indicator variables α_i(h) are restricted so that at most one of them is 1. Define the vector α(h) := [α₁(h),…,α_J(h)]ᵀ corresponding to values of the function α := [α₁,…,α_J]ᵀ and introduce the set of vector functions

\mathcal{A} := \left\{ \alpha : \alpha(h) \in \{0,1\}^J, \ \mathbf{1}^T \alpha(h) \le 1 \right\}.
(28)

We can now express the frequency exclusion constraints as α ∈ 𝒜.

We still need to model the restriction that the achieved capacity r_i needs to be between r_min and r_max, but this is easily modeled as the constraint r_min ≤ r_i ≤ r_max. Defining the vector r = [r₁,…,r_J]ᵀ this constraint can be written as r ∈ ℛ with the set ℛ defined as

\mathcal{R} := \left\{ r = [r_1, \ldots, r_J]^T : r_{\min} \le r_i \le r_{\max} \right\}.
(29)

We finally introduce a monotonic nondecreasing utility function U(r_i) to measure the value of rate r_i and formally state our design goal as the maximization of the aggregate utility Σ_{i=1}^J U(r_i). Using the definitions in (26)–(29) the operating point that maximizes this aggregate utility for a frequency division broadcast channel follows from the solution of the optimization problem

P = \max \; \sum_{i=1}^{J} U(r_i) \qquad \text{s.t.} \quad r_i \le \mathbb{E}\left[ \alpha_i(h)\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) \right], \quad q_0 \ge \mathbb{E}\left[ \sum_{i=1}^{J} \alpha_i(h)\, p_i(h) \right], \quad r \in \mathcal{R}, \quad \alpha \in \mathcal{A},
(30)

where we relaxed the rate expression in (26) to an inequality constraint, which we can do without loss of optimality because the utility U(r_i) is monotonic nondecreasing.

The problem formulation in (30) is of the form in (20). The ergodic rates r in (30) are represented by the ergodic variables x in (20), whereas the power and frequency allocation functions p and α of (30) correspond to the resource allocation function p of (20). The set ℛ maps to the set 𝒳 and the set 𝒜 to the set 𝒫. There are no functions in (30) taking the place of the function f2(x) of (20). The function f1(h,p(h)) in (20) is a placeholder for the stacking of the functions α_i(h)C(h_i p_i(h)/N₀) for different i and the negative of the power consumption Σ_{i=1}^J α_i(h) p_i(h) in (30). The power constraint q₀ ≥ E[Σ_{i=1}^J α_i(h) p_i(h)] is not exactly of the form in (1) because q₀ is a constant, not a variable, but this doesn’t alter the fundamentals of the problem. The functions α_i(h)C(h_i p_i(h)/N₀) are not concave and the set 𝒜 is not convex. This makes the program in (30) nonconvex but is consistent with the restrictions imposed in (20).

To write the Lagrangian corresponding to this optimization problem introduce multipliers λ := [λ₁,…,λ_J]ᵀ associated with the capacity constraints and μ associated with the power constraint. The Lagrangian is then given by

\mathcal{L}(r, p, \alpha, \lambda, \mu) = \sum_{i=1}^{J} \left[ U(r_i) + \lambda_i \left( \mathbb{E}\left[ \alpha_i(h)\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) \right] - r_i \right) \right] + \mu \left( q_0 - \mathbb{E}\left[ \sum_{i=1}^{J} \alpha_i(h)\, p_i(h) \right] \right).
(31)

The dual function, dual problem, and optimal dual arguments are defined as in (22)–(25) with (r, p, α) taking the place of (x, p) and (λ, μ) taking the place of Λ. Since (30) is a nonconvex program we do not know if the dual problem is equivalent to the primal problem. We explore this issue in the following section.

Duality in wireless systems optimization

For any optimization problem the dual minimum D provides an upper bound for the primal optimum value P. This is easy to see by combining the definitions of the dual function in (23) and the Lagrangian in (22) to write

g(\Lambda) = \max_{x \in \mathcal{X},\, p \in \mathcal{P}} \; f_0(x) + \lambda_1^T \left( \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] - x \right) + \lambda_2^T f_2(x).
(32)

Because the dual function value g(Λ) is obtained by maximizing the right hand side of (32), evaluating the maximand at arbitrary primal variables yields a lower bound on g(Λ). Using a pair (x*, p*) of optimal primal arguments as this arbitrary selection yields the inequality

g(\Lambda) \ge f_0(x^*) + \lambda_1^T \left( \mathbb{E}\left[ f_1\big(h, p^*(h)\big) \right] - x^* \right) + \lambda_2^T f_2(x^*).
(33)

Since the pair (x*, p*) is optimal, it is feasible, which means that we must have E[f1(h, p*(h))] − x* ≥ 0 and f2(x*) ≥ 0. Lagrange multipliers are also nonnegative by definition. Therefore, the last two summands in the right hand side of (33) are nonnegative, from which it follows that

g(\Lambda) \ge f_0(x^*) = P.
(34)

The inequality in (34) is true for any Λ and therefore true for Λ = Λ* in particular. It then follows that the dual optimum D upper bounds the primal optimum P,

D \ge P,
(35)

as we had claimed. The difference D-P is called the duality gap and provides an indication of the loss of optimality incurred by working in the dual domain.

For the problem in (20) the duality gap is null as long as the channel probability distribution m_h(h) contains no point of positive probability, as we claim in the following theorem, which is a simple generalization of a similar result in [20].

Theorem 1

Let P denote the optimum value of the primal problem (20) and D that of its dual in (24), and assume there exists a strictly feasible point (x₀, p₀) that satisfies the constraints in (20) with strict inequality. If the channel probability distribution m_h(h) contains no point of positive probability the duality gap is null, i.e.,

P = D.
(36)

The condition on the channel distribution not having points of positive probability is a mild requirement satisfied by practical fading channel models including Rayleigh, Rice, and Nakagami. The existence of a strictly feasible point (x0,p0) is a standard constraint qualification requirement which is also not stringent in practice.

In order to prove Theorem 1 we take a detour in Section “Lyapunov’s convexity theorem” to define atomic and nonatomic measures along with the presentation of Lyapunov’s Convexity Theorem. The proof itself is presented in Section “Proof of Theorem 1”. The implications of Theorem 1 are discussed in Sections “Recovery of optimal primal variables” and “Dual descent algorithms”.

Lyapunov’s convexity theorem

The proof of Theorem 1 uses a theorem by Lyapunov concerning the range of nonatomic measures [22]. Measures assign nonnegative values to the sets of a Borel field. When all points have zero measure the measure is called nonatomic, as we formally define next.

Definition 1 (Nonatomic measure)

Let w be a measure defined on the Borel field ℬ of subsets of a space 𝒳. The measure w is nonatomic if for any measurable set E₀ ∈ ℬ with w(E₀) > 0, there exists a subset E of E₀, i.e., E ⊂ E₀, such that w(E₀) > w(E) > 0.

Familiar measures are probability related, e.g., the probability of a set for a given channel distribution. To build intuition on the notion of nonatomic measure consider a random variable X taking values in [0,1] ∪ [2,3]. The probability of landing in each of these intervals is 1/2 and X is uniformly distributed inside each of them; see Figure 1. The space 𝒳 is the real line, and the Borel field ℬ comprises all subsets of real numbers. For every subset E ∈ ℬ define the measure of E as twice the integral of x, weighted by the probability distribution of X, over the set E, i.e.,

w_X(E) := 2 \int_E x \, dm_X(x).
(37)
Figure 1

Nonatomic measure. The random variable X is uniformly distributed in [0,1] ∪ [2,3]. The measure w_X(E) := 2∫_E x dm_X(x) is nonatomic because all sets of nonzero probability include a smaller set of nonzero probability. Lyapunov’s convexity theorem (Theorem 2) states that the measure range 𝒲 := {w_X(E) : E ∈ ℬ} is convex. The range of w_X is the, indeed convex, interval [0,3].

Note that, except for the factor 2, the value of w_X(E) represents the contribution of the set E to the expected value of X and that when E is the whole space 𝒳, it holds that w_X(𝒳) = 2E[X] = 3. According to Definition 1, w_X(E) is a nonatomic measure of elements of ℬ. Every subset E₀ with w_X(E₀) > 0 includes at least an interval (a,b). The measure of the set E := E₀ ∖ ((a+b)/2, b), formed by removing the upper half of (a,b) from E₀, is w_X(E) = w_X(E₀) − w_X(((a+b)/2, b)) < w_X(E₀). The measure of E also satisfies w_X(E) > 0, as required for w_X to be nonatomic.

To contrast this with an example of an atomic measure consider a random variable Y that with probability 1/2 is uniformly distributed in [0,1] and with probability 1/2 equals 5/2; see Figure 2. In this case, the measure w_Y(E) := 2∫_E y dm_Y(y) is atomic because the set E₀ = {5/2} has positive measure w_Y(E₀) = 2·(5/2)·(1/2) = 5/2. The only set E ⊂ E₀ is the empty set, whose measure is null.

Figure 2

Atomic measure. The random variable Y lands with probability 1/2 at Y = 5/2 and with probability 1/2 uniformly in the interval [0,1]. The measure w_Y(E) := 2∫_E y dm_Y(y) is atomic because the set {5/2} has strictly positive measure and no set other than the empty set is strictly included in {5/2}. Theorem 2 does not apply. The range of w_Y(E) is the nonconvex union of the intervals [0,1/2] and [5/2,3].

Theorem 2 (Lyapunov’s convexity theorem [22]).

Consider nonatomic measures w₁,…,w_n on the Borel field ℬ of subsets of a space 𝒳 and define the vector measure w(E) := [w₁(E),…,w_n(E)]ᵀ. The range 𝒲 := {w(E) : E ∈ ℬ} of the measure w is convex. I.e., if v₁ = w(E₁) and v₂ = w(E₂) for sets E₁, E₂ ∈ ℬ, then for any α ∈ [0,1] there exists E₀ ∈ ℬ such that w(E₀) = αv₁ + (1−α)v₂.

The difference between the distributions of X and Y is that Y contains a point of strictly positive probability, i.e., an atom. This implies the presence of a delta function in the probability density function of Y. Or, in a cleaner statement, the cumulative distribution function (cdf) of X is continuous whereas the cdf of Y is not.

Lyapunov’s convexity theorem, stated above as Theorem 2, refers to the range of values taken by (vector) nonatomic measures.

Returning to the probability measures defined in terms of the probability distributions of the random variables X and Y, Theorem 2 asserts that the range of w_X(E), i.e., the set of all possible values taken by w_X, is convex. In fact, it is not difficult to verify that the range of w_X is the convex interval [0,3] as shown in Figure 1. Theorem 2 does not claim anything about w_Y. In this case, it is easy to see that the range of w_Y is the (nonconvex) union of the intervals [0,1/2] and [5/2,3]; see Figure 2.
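A short computation, not part of the original figure captions but following directly from the densities described above, confirms these ranges:

w_X([0,1] \cup [2,3]) = 2\left( \int_0^1 x\, \tfrac{1}{2}\, dx + \int_2^3 x\, \tfrac{1}{2}\, dx \right) = \tfrac{1}{2} + \tfrac{5}{2} = 3, \qquad w_Y([0,1]) = 2 \int_0^1 y\, \tfrac{1}{2}\, dy = \tfrac{1}{2}, \qquad w_Y(\{5/2\}) = 2 \cdot \tfrac{5}{2} \cdot \tfrac{1}{2} = \tfrac{5}{2}.

Any set E that contains the atom therefore has w_Y(E) ∈ [5/2, 3] and any set that does not has w_Y(E) ∈ [0, 1/2], which is the nonconvex range shown in Figure 2; the range of w_X, by contrast, sweeps the full interval [0,3].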

Proof of Theorem 1

To establish zero duality gap we will consider a perturbed version of (20) obtained by perturbing the constraints used to define the Lagrangian in (22). The perturbation function P(Δ) assigns to each (perturbation) parameter set Δ := [δ₁ᵀ, δ₂ᵀ]ᵀ the solution of the (perturbed) optimization problem

P(\Delta) = \max \; f_0(x) \qquad \text{s.t.} \quad \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] - x \ge \delta_1, \quad f_2(x) \ge \delta_2, \quad x \in \mathcal{X}, \quad p \in \mathcal{P}.
(38)

The perturbed problem in (38) can be interpreted as a modified version of (20), where we allow the constraints to be violated by Δ amounts. To prove that the duality gap is zero, it suffices to show that P(Δ) is a concave function of Δ; see, e.g.,[23].

Let Δ = [δ₁ᵀ, δ₂ᵀ]ᵀ and Δ′ = [δ₁′ᵀ, δ₂′ᵀ]ᵀ be an arbitrary given pair of perturbations. Let (x, p) be a pair of ergodic limits and resource allocation variables achieving the optimum value P(Δ) corresponding to perturbation Δ. Likewise, denote as (x′, p′) a pair that achieves the optimum value P(Δ′) corresponding to perturbation Δ′. For arbitrary α ∈ [0,1], we are interested in the solution of (38) under the perturbation Δ_α := αΔ + (1−α)Δ′. In particular, to show that the perturbation function P(Δ) is concave we need to establish that

P(\Delta_\alpha) = P\big( \alpha \Delta + (1-\alpha) \Delta' \big) \ge \alpha P(\Delta) + (1-\alpha) P(\Delta').
(39)

The roadblock to establishing concavity of the perturbation function is the constraint x ≤ E[f1(h, p(h))]. More specifically, the difficulty with this constraint is the ergodic limit E[f1(h, p(h))]. Let us then isolate the challenge by defining the ergodic limit span

\mathcal{Y} := \left\{ y \; : \; \exists\, p \in \mathcal{P} \ \text{for which} \ y = \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] \right\}.
(40)

The set 𝒴 contains all the possible values that the expectation E[f1(h, p(h))] can take as the resource allocation function p varies over the admissible set 𝒫. When the channel distribution m_h(h) contains no points of positive probability, the set 𝒴 is convex as we claim in the following theorem.

Theorem 3.

Let 𝒴 be the ergodic limit span set in (40). If the channel probability distribution m_h(h) contains no point of positive probability the set 𝒴 is convex.

Before proving Theorem 3 let us apply it to complete the proof of Theorem 1. For doing that consider two arbitrary points y ∈ 𝒴 and y′ ∈ 𝒴. If these points belong to 𝒴 there exist respective resource allocation functions p ∈ 𝒫 and p′ ∈ 𝒫 such that

y = \mathbb{E}\left[ f_1\big(h, p(h)\big) \right], \qquad y' = \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right].
(41)

If 𝒴 is a convex set as claimed by Theorem 3, we must have that for any given α the point y_α := αy + (1−α)y′ also belongs to 𝒴. In turn, this means there exists a resource allocation function p_α ∈ 𝒫 for which

y_\alpha := \alpha\, \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] + (1-\alpha)\, \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right] = \mathbb{E}\left[ f_1\big(h, p_\alpha(h)\big) \right].
(42)

Further define the ergodic limit convex combination x_α := αx + (1−α)x′. We first show that the pair (x_α, p_α) is feasible for the problem with perturbation Δ_α.

The convex combination satisfies the constraint x_α ∈ 𝒳 because the set 𝒳 is convex. We also have f2(x_α) ≥ δ_{2,α} because the function f2(x) is concave. To see that this is true simply notice that concavity of f2(x) implies that f2(x_α) = f2(αx + (1−α)x′) ≥ αf2(x) + (1−α)f2(x′). But f2(x) ≥ δ₂ and f2(x′) ≥ δ₂′ because x and x′ are feasible in (38) with perturbations Δ and Δ′. Substituting these latter two inequalities into the previous one yields f2(x_α) ≥ αδ₂ + (1−α)δ₂′ = δ_{2,α} according to the definition of Δ_α.

We are left to show that the pair (x_α, p_α) satisfies the constraint E[f1(h, p_α(h))] − x_α ≥ δ_{1,α}. For doing so recall that since (x, p) is feasible for perturbation Δ and (x′, p′) is feasible for perturbation Δ′ we must have

\mathbb{E}\left[ f_1\big(h, p(h)\big) \right] - x \ge \delta_1, \qquad \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right] - x' \ge \delta_1' .
(43)

Perform a convex combination of the inequalities in (43) and use the definitions x_α := αx + (1−α)x′ and δ_{1,α} := αδ₁ + (1−α)δ₁′ to write

\alpha\, \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] + (1-\alpha)\, \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right] - x_\alpha \ge \delta_{1,\alpha} .
(44)

Combining (42) and (44) it follows that E[f1(h, p_α(h))] − x_α ≥ δ_{1,α}, completing the proof that the pair (x_α, p_α) is feasible for the problem with perturbation Δ_α.

The utility yield of this feasible pair is f0(x_α), which, by concavity of f0, we can bound as

f_0(x_\alpha) \ge \alpha f_0(x) + (1-\alpha) f_0(x').
(45)

Since (x, p) is optimal for perturbation Δ we have f0(x) = P(Δ) and, likewise, f0(x′) = P(Δ′). Further noting that the optimal yield P(Δ_α) for perturbation Δ_α must exceed the yield f0(x_α) of the feasible point x_α we conclude that

P(\Delta_\alpha) \ge \alpha P(\Delta) + (1-\alpha) P(\Delta').
(46)

The expression in (46) implies that P(Δ) is a concave function of the perturbation vector Δ. The duality gap is therefore null as stated in (36).

We proceed now to the proof of Theorem 3.

Proof.

(Theorem 3) Consider two arbitrary points y ∈ 𝒴 and y′ ∈ 𝒴. If these points belong to 𝒴 there exist respective resource allocation functions p ∈ 𝒫 and p′ ∈ 𝒫 such that

y = \mathbb{E}\left[ f_1\big(h, p(h)\big) \right], \qquad y' = \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right].
(47)

To prove that 𝒴 is a convex set we need to show that for any given α the point y_α := αy + (1−α)y′ also belongs to 𝒴. In turn, this means we need to find a resource allocation function p_α ∈ 𝒫 for which

y_\alpha := \alpha\, \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] + (1-\alpha)\, \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right] = \mathbb{E}\left[ f_1\big(h, p_\alpha(h)\big) \right].
(48)

For this we will use Theorem 2 (Lyapunov’s convexity theorem). Consider the space ℋ of all possible channel realizations, and the Borel field ℬ of all possible subsets of ℋ. For every set E ∈ ℬ define the vector measure

w(E) := \left[ \int_{E} f_1\big(h, p(h)\big)\, dm_h \, , \ \int_{E} f_1\big(h, p'(h)\big)\, dm_h \right],
(49)

where the integrals are over the set E with respect to the channel distribution m_h(h). A vector of channel realizations h is a point in the space ℋ. The set E is a collection of vectors h. Each of these sets is assigned the vector measure w(E) defined in terms of the power distributions p(h) and p′(h). The entries of w(E) represent the contribution of realizations h ∈ E to the ergodic limits in (48). The first group of entries measures such contributions when the resource allocation is p(h). Likewise, the second group of entries of w(E) denotes the contributions to the ergodic limits of the resource distribution p′(h).

Two particular sets that are important for this proof are the empty set, E = ∅, and the entire space E = ℋ. For E = ℋ, the integrals ∫_E(·)dm_h = ∫_ℋ(·)dm_h in (49) coincide with the expected value operators E[·] in (47). We write this explicitly as

w(\mathcal{H}) = \left[ \mathbb{E}\left[ f_1\big(h, p(h)\big) \right], \ \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right] \right] = \left[ y, \ y' \right].
(50)

For E = ∅, or any other zero-measure set for that matter, we have w(∅) = 0.

The measure w(E) is nonatomic. This follows from the fact that the channel distribution contains no points of positive probability combined with the requirement that the resource allocation values p(h) belong to the bounded sets 𝒫(h). Combining these two properties it is easy to see that there are no channel realizations with positive measure, i.e., w({h}) = 0 for all h ∈ ℋ. This is sufficient to ensure that w(E) is a nonatomic measure.

Since w(E) is nonatomic, it follows from Theorem 2 that the range of w is convex. Hence, the vector

w_0 = \alpha\, w(\mathcal{H}) + (1-\alpha)\, w(\emptyset) = \alpha\, w(\mathcal{H}),
(51)

belongs to the range of possible measures. Therefore, there must exist a set E₀ such that w(E₀) = w₀ = αw(ℋ). Focusing on the entries of w(E₀) that correspond to the resource allocation function p it follows that

\int_{E_0} f_1\big(h, p(h)\big)\, dm_h = \alpha\, \mathbb{E}\left[ f_1\big(h, p(h)\big) \right].
(52)

The analogous relation holds for the entries of w(E₀) corresponding to p′, i.e., (52) is valid if p(h) is replaced by p′(h), but this fact is inconsequential for this proof.

Consider now the complement set E₀ᶜ defined as the set for which E₀ ∩ E₀ᶜ = ∅ and E₀ ∪ E₀ᶜ = ℋ. Given this definition and the additivity property of measures, we arrive at w(E₀) + w(E₀ᶜ) = w(ℋ). Combining the latter with (51) yields

w(E_0^c) = w(\mathcal{H}) - w(E_0) = (1-\alpha)\, w(\mathcal{H}).
(53)

We mimic the reasoning leading from (51) to (52), but now we restrict the focus of (53) to the second group of entries of w(E₀ᶜ). It therefore follows that

\int_{E_0^c} f_1\big(h, p'(h)\big)\, dm_h = (1-\alpha)\, \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right].
(54)

Define now the power distribution p_α(h) coinciding with p(h) for channel realizations h ∈ E₀ and with p′(h) when h ∈ E₀ᶜ, i.e.,

p_\alpha(h) = \begin{cases} p(h) & h \in E_0, \\ p'(h) & h \in E_0^c . \end{cases}
(55)

The resource distribution p_α satisfies the set constraint p_α ∈ 𝒫 in (19). Indeed, to see that p_α(h) ∈ 𝒫(h) for all h note that p(h) and p′(h) are feasible in their respective problems and as such p(h) ∈ 𝒫(h) and p′(h) ∈ 𝒫(h) for all channels h. Because for a given channel realization h it holds that either p_α(h) = p(h) when h ∈ E₀ or p_α(h) = p′(h) when h ∈ E₀ᶜ, it follows that p_α(h) ∈ 𝒫(h) for all channel realizations h ∈ E₀ ∪ E₀ᶜ = ℋ. According to the definition in (19) this implies that p_α ∈ 𝒫.

Let us now ponder the ergodic limit E[f1(h, p_α(h))] associated with the power allocation p_α(h).

Using (52) and (54), the average link capacities for the power allocation p_α(h) can be expressed in terms of p(h) and p′(h) as

\mathbb{E}\left[ f_1\big(h, p_\alpha(h)\big) \right] = \int_{E_0} f_1\big(h, p_\alpha(h)\big)\, dm_h + \int_{E_0^c} f_1\big(h, p_\alpha(h)\big)\, dm_h = \int_{E_0} f_1\big(h, p(h)\big)\, dm_h + \int_{E_0^c} f_1\big(h, p'(h)\big)\, dm_h = \alpha\, \mathbb{E}\left[ f_1\big(h, p(h)\big) \right] + (1-\alpha)\, \mathbb{E}\left[ f_1\big(h, p'(h)\big) \right].
(56)

The first equality in (56) holds because the space ℋ is divided into E₀ and its complement E₀ᶜ. The second equality is true because when restricted to E₀, p_α(h) = p(h); and when restricted to E₀ᶜ, p_α(h) = p′(h). The third equality follows from (52) and (54).

Comparing (56) with (48) we see that the power allocation p_α yields the ergodic limit y_α. Therefore y_α ∈ 𝒴, implying convexity of 𝒴 as we wanted to show. □

Recovery of optimal primal variables

Having null duality gap means that we can work in the dual domain without loss of optimality. In particular, instead of finding the primal maximum P we can find the dual minimum D, which we know are the same according to Theorem 1. A not so simple matter is how to recover an optimal primal pair (x*, p*) given an optimal dual vector Λ*. Observe that recovering optimal variables is more important than knowing the maximum yield because optimal variables are needed for system implementation. In this section we study the recovery of optimal primal variables (x*, p*) from a given optimal multiplier Λ*.

Start with an arbitrary, not necessarily optimal, multiplier Λ and define the primal Lagrangian maximizer set as

\big\{ x(\Lambda),\, p(\Lambda) \big\} = \operatorname*{arg\,max}_{x \in \mathcal{X},\, p \in \mathcal{P}} \mathcal{L}(x, p, \Lambda).
(57)

The elements of (x(Λ),p(Λ)) yield the maximum possible Lagrangian value for given dual variable. Comparing the definition of the dual function in (23) and that of the Lagrangian maximizers in (57) it follows that we can write the dual function g(Λ) as

g(\Lambda) = \mathcal{L}\big( x(\Lambda),\, p(\Lambda),\, \Lambda \big).
(58)

Particularly important pairs of Lagrangian maximizers are those associated with a given optimal dual variable Λ*. As we show in the following theorem, these variables are related to the optimal primal variables (x*, p*).

Theorem 4.

For an optimization problem of the form in (20) let {x(Λ*), p(Λ*)} denote the Lagrangian maximizer set as defined in (57) corresponding to a given optimal multiplier Λ*. The optimal argument set {x*, p*} is included in this set of Lagrangian maximizers, i.e.,

\{ x^*, p^* \} \subseteq \big\{ x(\Lambda^*),\, p(\Lambda^*) \big\}.
(59)

Proof

Reinterpret (x*, p*) as a particular pair of optimal primal arguments. Start by noting that the value of the dual function g(Λ*) can be lower bounded by the Lagrangian evaluated at (x, p) = (x*, p*)

g(\Lambda^*) = \max_{x \in \mathcal{X},\, p \in \mathcal{P}} \mathcal{L}(x, p, \Lambda^*) \ge \mathcal{L}(x^*, p^*, \Lambda^*).
(60)

Indeed, the inequality would be true for any x ∈ 𝒳 and p ∈ 𝒫 because that is what being the maximum means.

Consider now the Lagrangian ℒ(x*, p*, Λ*) that according to (22) we can explicitly write as

\mathcal{L}(x^*, p^*, \Lambda^*) = f_0(x^*) + \lambda_1^{*T} \left( \mathbb{E}\left[ f_1\big(h, p^*(h)\big) \right] - x^* \right) + \lambda_2^{*T} f_2(x^*).
(61)

Since the pair (x*, p*) is an optimal argument of (20) we must have E[f1(h, p*(h))] − x* ≥ 0 and f2(x*) ≥ 0. The multipliers also satisfy λ₁* ≥ 0 and λ₂* ≥ 0 because they are required to be nonnegative. Thus, the last two summands in (61) are nonnegative, from where it follows that

\mathcal{L}(x^*, p^*, \Lambda^*) \ge f_0(x^*).
(62)

Combining (60) and (62) and using the definitions D = g(Λ*) and P = f0(x*) yields

D = g(\Lambda^*) \ge \mathcal{L}(x^*, p^*, \Lambda^*) \ge f_0(x^*) = P.
(63)

But since according to Theorem 1 the duality gap is null, D = P, and the inequalities in (63) must hold as equalities. In particular, we have

g(\Lambda^*) = \mathcal{L}(x^*, p^*, \Lambda^*),
(64)

which according to (58) means the pair (x*, p*) is a Lagrangian maximizer associated with Λ = Λ*. Since this is true for any pair of optimal variables (x*, p*) it follows that the set {x*, p*} is included in the set of Lagrangian maximizers {x(Λ*), p(Λ*)} as stated in (59). □

According to (59) optimal arguments (x*, p*) can be recovered from Lagrangian maximizers {x(Λ*), p(Λ*)} associated with optimal multipliers. One has to take care to interpret this set inclusion properly. Equation (59) does not mean that we can always compute (x*, p*) by finding Lagrangian maximizers and as such it may or may not be a useful result. If the Lagrangian maximizer pair is unique then the set {x(Λ*), p(Λ*)} is a singleton and by extension so is the (included) set {x*, p*} of optimal primal variables. In this case Lagrangian maximizers can be used as proxies for optimal arguments. When the set {x(Λ*), p(Λ*)} is not a singleton this is not possible and recovering optimal primal variables from optimal multipliers Λ* is somewhat more difficult.

In general, problems in optimal wireless networking are such that the Lagrangian maximizer resource allocation functions p(Λ) are unique to within a set of zero measure. The ergodic limit Lagrangian maximizers x(Λ), however, are not unique in many cases. This is more a nuisance than a fundamental problem. Algorithms that find primal optimal operating points regardless of the characteristics of the set of Lagrangian maximizers are studied in Section “Dual descent algorithms”.

Separability

To determine the primal Lagrangian maximizers in (57) it is convenient to reorder terms in the definition in (22) to write

\mathcal{L}(x, p, \Lambda) := f_0(x) - \lambda_1^T x + \lambda_2^T f_2(x) + \mathbb{E}\left[ \lambda_1^T f_1\big(h, p(h)\big) \right].
(65)

We say that in (22) the Lagrangian is grouped by dual variables because each dual variable appears in only one summand. The expression in (65) is grouped by primal variables because each summand contains a single primal variable if we interpret f0(x) − λ₁ᵀx + λ₂ᵀf2(x) as a single term and the expectation as a weighted sum—which is not true, but close enough for an intuitive interpretation.

Writing the Lagrangian as in (65) simplifies computation of Lagrangian maximizers. Since there are no constraints coupling the selection of optimal x and p in (65) we can separate the determination of the pair (x(Λ),p(Λ)) in (57) into the determination of the ergodic limits

x(\Lambda) = \operatorname*{arg\,max}_{x \in \mathcal{X}} \; f_0(x) - \lambda_1^T x + \lambda_2^T f_2(x)
(66)

and the resource allocation function

p(\Lambda) = \operatorname*{arg\,max}_{p \in \mathcal{P}} \; \mathbb{E}\left[ \lambda_1^T f_1\big(h, p(h)\big) \right].
(67)

The computation of the resource allocation function in (67) can be further separated. The set 𝒫 as defined in (19) constrains separate values p(h) of the function p but doesn’t couple the selection of p(h₁) and p(h₂) for different channel realizations h₁ ≠ h₂. Further observing that expectation is a linear operation we can separate (67) into per-fading state subproblems of the form

p(h, \Lambda) = \operatorname*{arg\,max}_{p(h) \in \mathcal{P}(h)} \; \lambda_1^T f_1\big(h, p(h)\big).
(68)

The absence of coupling constraints in the Lagrangian maximization, which permits separation into the ergodic limit maximization in (66) and the per-fading state maximizations in (68), is the fundamental difference between (57) and the primal problem in (20). In the latter, the selection of optimal x and p as well as the selection of p(h₁) and p(h₂) for different channel realizations h₁ ≠ h₂ are coupled by the constraint x ≤ E[f1(h, p(h))].

The decomposition in (68) is akin to the decomposition in (16) for the particular case of a point to point channel with capacity achieving codes. It implies that to determine the optimal power allocation p* it is convenient to first determine the optimal multiplier Λ*. We then proceed to exploit the lack of duality gap and the separable structure of the Lagrangian to compute values p*(h) of the optimal resource allocation independently of each other. The computation of optimal ergodic averages is also separated as stated in (66). This separation reduces computational complexity because of the reduced dimensionality of (66) and (68) with respect to that of (20).

For the separation in (66) and (68) to be possible we just need to have a nonatomic channel distribution and ensure existence of a strictly feasible point as stated in the hypotheses of Theorem 1. These two properties are true for most optimal wireless communication and networking problems. A particular example is discussed in the following section.

Frequency division broadcast channel

Consider the optimal frequency division broadcast channel problem in (30) whose Lagrangian ℒ(r,p,α,λ,μ) is given by the expression in (31). Terms in ℒ(r,p,α,λ,μ) can be rearranged to uncover the separable structure of the Lagrangian

\mathcal{L}(r, p, \alpha, \lambda, \mu) = \sum_{i=1}^{J} \left[ U(r_i) - \lambda_i r_i \right] + \mu q_0 + \mathbb{E}\left[ \sum_{i=1}^{J} \alpha_i(h) \left( \lambda_i\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) - \mu\, p_i(h) \right) \right].
(69)

This rearrangement is equivalent to the generic transformation leading from (22) to (65). As we observed after (65) the computation of Lagrangian maximizing ergodic limits and resource allocation functions can be separated as can the computation of resource allocation values corresponding to different fading channel realizations. This is the case in this particular example. In fact, there is more separability to be exploited in (69).

With regards to the primal ergodic variables r notice that each Lagrangian maximizing rate r_i(λ,μ) depends only on λ_i and that we can compute each r_i(λ,μ) = r_i(λ_i) separately as

r_i(\lambda_i) = \operatorname*{arg\,max}_{r_i \in [r_{\min},\, r_{\max}]} \; U(r_i) - \lambda_i r_i .
(70)

This can be easily computed because U(r_i) is a one dimensional concave function. As a particular case consider the identity utility U(r_i) = r_i. Since the Lagrangian becomes a linear function of r_i, the maximum occurs at either r_max or r_min depending on the sign of 1 − λ_i. When λ_i = 1 the Lagrangian becomes independent of r_i. In this case any value in the interval [r_min, r_max] is a Lagrangian maximizer. Putting these observations together we have

r_i(\lambda_i) = \begin{cases} r_{\max} & \text{if } \lambda_i < 1, \\ [\,r_{\min},\, r_{\max}\,] & \text{if } \lambda_i = 1, \\ r_{\min} & \text{if } \lambda_i > 1 . \end{cases}
(71)

Notice that the Lagrangian maximizer r_i(λ_i) is not unique if λ_i = 1. Therefore, if λ_i* = 1 it is not possible to recover the optimal rate r_i* from the Lagrangian maximizer r_i(λ_i*) corresponding to the optimal multiplier. In fact, an optimal multiplier λ_i* = 1 is uninformative with regards to the optimal ergodic rate as it just implies that r_i* ∈ [r_min, r_max], which we know is true because this is the feasible range of r_i. If you think this is an unlikely scenario because it is too much of a coincidence to have λ_i* = 1, think again. Having λ_i* = 1 is the most likely situation. If λ_i* ≠ 1 the optimal rate r_i* is either r_i* = r_max or r_i* = r_min. However, the capacity bounds r_min and r_max are selected independently of the remaining system parameters. It is quite unlikely—indeed, not true in most cases—that the optimal power and frequency allocation yields a rate r_i* determined by these arbitrarily selected parameters.
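Unlike the identity utility in (71), a strictly concave utility makes the maximizer of (70) unique. A minimal sketch, assuming the concave utility U(r) = log(1+r) and arbitrary rate bounds (neither prescribed by the article), solves the one dimensional problem (70) in closed form and double checks the answer on a grid:

```python
import numpy as np

r_min, r_max = 0.1, 4.0          # illustrative rate bounds (assumed)

def rate_maximizer(lam_i):
    """Solve (70) for the assumed utility U(r) = log(1 + r): maximize U(r) - lam_i * r over [r_min, r_max].

    The unconstrained stationary point is r = 1/lam_i - 1, which is then clipped to the interval.
    """
    return float(np.clip(1.0 / lam_i - 1.0, r_min, r_max))

for lam_i in [0.2, 0.5, 2.0]:
    r = rate_maximizer(lam_i)
    # Brute-force verification on a grid.
    grid = np.linspace(r_min, r_max, 100_001)
    r_grid = grid[np.argmax(np.log1p(grid) - lam_i * grid)]
    print(f"lambda_i={lam_i:3.1f}: closed form r={r:.4f}, grid search r={r_grid:.4f}")
```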

As we observed in going from (67) to (68), determination of the optimal power and frequency allocation functions requires maximization of the terms inside the expectation. This implies solution of the optimization problems

\big( p(h, \lambda, \mu),\, \alpha(h, \lambda, \mu) \big) = \operatorname*{arg\,max} \; \sum_{i=1}^{J} \alpha_i(h) \left( \lambda_i\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) - \mu\, p_i(h) \right) \qquad \text{s.t.} \quad \alpha_i(h) \in \{0,1\}, \quad \mathbf{1}^T \alpha(h) \le 1,
(72)

where we opened up the constraint α ∈ 𝒜 into its per-fading state components.

The maximization in (72) can be further simplified. Begin by noting that irrespective of the value of α(h) the best possible power allocation p_i(h,λ,μ) of terminal i is the one that maximizes its potential contribution to the sum in (72), i.e.,

p_i(h, \lambda, \mu) = \operatorname*{arg\,max}_{p_i(h)} \; \lambda_i\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) - \mu\, p_i(h).
(73)

If α_i(h) = 1 this contribution is added to the sum in (72). If it is multiplied by α_i(h) = 0 it is not added to the sum in (72). Either way p_i(h,λ,μ) as given by (73) is the optimal power allocation for terminal i.

To determine the frequency allocation α(h,λ,μ) define the discriminants

d_i(h, \lambda, \mu) := \max_{p_i(h)} \; \lambda_i\, C\!\left( \frac{h_i\, p_i(h)}{N_0} \right) - \mu\, p_i(h),
(74)

which we can use to rewrite (72) as

\alpha(h, \lambda, \mu) = \operatorname*{arg\,max} \; \sum_{i=1}^{J} \alpha_i(h)\, d_i(h, \lambda, \mu) \qquad \text{s.t.} \quad \alpha_i(h) \in \{0,1\}, \quad \mathbf{1}^T \alpha(h) \le 1 .
(75)

Since at most one α_i(h) = 1 in (75), the best we can do is to select the terminal with the largest discriminant when that discriminant is positive. If all discriminants are negative the best we can do is to make α_i(h) = 0 for all i.

The Lagrangian maximizers p_i(h,λ,μ) and α(h,λ,μ) in (73) and (75) are almost surely unique for all values of λ and μ. In particular, optimal allocations p_i*(h) and α*(h) can be obtained by making λ = λ* and μ = μ* in (73)–(75).
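The per-fading state maximizations (73)–(75) can be coded directly. The sketch below assumes capacity achieving codes, so that the inner maximization in (73) has the waterfilling-like closed form p_i = [λ_i/μ − N₀/h_i]⁺; the multipliers and the channel draw are arbitrary illustrative values rather than the optimal λ* and μ*.

```python
import numpy as np

def power_and_frequency_allocation(h, lam, mu, N0=1.0):
    """Per-fading state Lagrangian maximizers (73)-(75) with C(.) = log(1 + SNR).

    Returns the per-terminal powers p_i(h, lam, mu) and the 0/1 assignment alpha(h, lam, mu).
    """
    # (73): closed-form power for each terminal, analogous to waterfilling.
    p = np.maximum(lam / mu - N0 / h, 0.0)
    # (74): discriminants d_i, the best possible contribution of each terminal.
    d = lam * np.log(1.0 + h * p / N0) - mu * p
    # (75): give the frequency to the largest positive discriminant, or to nobody.
    alpha = np.zeros_like(h)
    if d.max() > 0.0:
        alpha[np.argmax(d)] = 1.0
    return p, alpha

# Arbitrary illustrative multipliers and one channel realization for J = 3 terminals.
lam = np.array([1.0, 0.8, 1.2])
mu = 0.9
h = np.array([0.4, 1.5, 0.9])
p, alpha = power_and_frequency_allocation(h, lam, mu)
print("p(h)    =", np.round(p, 3))
print("alpha(h)=", alpha)
```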

Dual descent algorithms

Determining optimal dual variables Λ* is easier than determining optimal primal pairs (x*, p*) because there is a finite number of multipliers and the dual function is convex. Since the dual function is convex, descent algorithms are guaranteed to converge towards the optimum value, which implies we just need to determine descent directions for the dual function.

Descent directions for the dual function can be constructed from the constraint slacks associated with the Lagrangian maximizers. To do so, consider a given Λ and use the definition of the Lagrangian maximizer pair (x(Λ),p(Λ)) in (57) to write the dual function as

g(\Lambda) = f_0\big( x(\Lambda) \big) + \lambda_1^T \left( \mathbb{E}\left[ f_1\big(h, p(h, \Lambda)\big) \right] - x(\Lambda) \right) + \lambda_2^T f_2\big( x(\Lambda) \big).
(76)

Further consider the Lagrangian ℒ(x(Λ), p(Λ), M) evaluated at an arbitrary multiplier M = [μ₁ᵀ, μ₂ᵀ]ᵀ and the primal Lagrangian maximizers corresponding to the given Λ. This Lagrangian lower bounds the dual function value g(M) = max_{x∈𝒳, p∈𝒫} ℒ(x, p, M), which allows us to write

g(M) \ge f_0\big( x(\Lambda) \big) + \mu_1^T \left( \mathbb{E}\left[ f_1\big(h, p(h, \Lambda)\big) \right] - x(\Lambda) \right) + \mu_2^T f_2\big( x(\Lambda) \big).
(77)

Subtracting (76) from (77) yields

(\mu_1 - \lambda_1)^T \left( \mathbb{E}\left[ f_1\big(h, p(h, \Lambda)\big) \right] - x(\Lambda) \right) + (\mu_2 - \lambda_2)^T f_2\big( x(\Lambda) \big) \le g(M) - g(\Lambda).
(78)

Defining the vector s̃(Λ) = [s̃₁ᵀ(Λ), s̃₂ᵀ(Λ)]ᵀ with components

\tilde{s}_1(\Lambda) := \mathbb{E}\left[ f_1\big(h, p(h, \Lambda)\big) \right] - x(\Lambda), \qquad \tilde{s}_2(\Lambda) := f_2\big( x(\Lambda) \big),
(79)

and recalling the multiplier definitions M = [μ₁ᵀ, μ₂ᵀ]ᵀ and Λ = [λ₁ᵀ, λ₂ᵀ]ᵀ we can write

\tilde{s}^T(\Lambda)\, (M - \Lambda) \le g(M) - g(\Lambda).
(80)

If the dual function g(Λ) is differentiable, the expression in (80) implies that s̃(Λ) = ∇g(Λ) is its gradient. If the dual function is nondifferentiable, s̃(Λ) defines a subgradient of the dual function. In either case −s̃(Λ) is a descent direction of the dual function. This can be verified by substituting M = Λ* in (80) to conclude that for any Λ ≠ Λ* it must be

\tilde{s}^T(\Lambda)\, (\Lambda^* - \Lambda) \le g(\Lambda^*) - g(\Lambda) < 0 .
(81)

Since the inner product of s̃(Λ) and (Λ* − Λ) is negative, the vectors −s̃(Λ) and (Λ* − Λ) form an angle smaller than π/2. This can be interpreted as meaning that standing at Λ, the vector −s̃(Λ) points in the direction of Λ*.

Having a descent direction available we can introduce a time index t and a stepsize ε_t to define the dual subgradient descent algorithm as the one with iterates Λ(t) obtained through recursive application of

\Lambda(t+1) = \Big[ \Lambda(t) - \varepsilon_t\, \tilde{s}\big( \Lambda(t) \big) \Big]^{+} .
(82)

This algorithm is known to converge to optimal dual variables if the stepsize vanishes at a nonsummable rate and to approach Λ* if the stepsize is constant; see e.g., [20], Section 6.
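As a concrete instance of (82), the sketch below runs the projected subgradient iteration on the point-to-point problem (2), approximating the subgradient q₀ − E[p(h,λ(t))] by an average over a fixed batch of channel samples. This batch average is precisely the costly expectation discussed next; the fading model and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
h_batch = rng.exponential(scale=1.0, size=100_000)  # assumed Rayleigh-fading power gains
N0, q0, eps = 1.0, 1.0, 0.05

lam = 1.0
for t in range(500):
    p = np.maximum(1.0 / lam - N0 / h_batch, 0.0)   # Lagrangian maximizers (10) for every sample
    subgrad = q0 - p.mean()                         # subgradient of g at lam [cf. (79)]
    lam = max(lam - eps * subgrad, 0.0)             # projected descent step (82)

print(f"lambda after descent: {lam:.4f}, "
      f"average power: {np.maximum(1.0 / lam - N0 / h_batch, 0.0).mean():.4f}")
```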

A problem in implementing (82) is that computing the subgradient component s̃₁(Λ) in (79) is costly. To compute s̃₁(Λ), we need to evaluate the expectation E[f1(h, p(h,Λ))] where each of the resource allocations p(h,Λ) follows from the solution of the optimization problem in (68). Therefore, to approximate the expectation E[f1(h, p(h,Λ))] we need to determine p(h,Λ) for a grid of channel values, which gets impractical if h has large dimension. A Monte Carlo approximation of E[f1(h, p(h,Λ))] could be computed but that is also costly. Furthermore, to compute E[f1(h, p(h,Λ))] we need to know the probability distribution m_h(h), which needs to be estimated from channel observations. To overcome these difficulties we replace the subgradient s̃(Λ(t)) in (82) by a stochastic subgradient as we discuss in the following section.

Stochastic subgradient descent

Consider a given channel realization h and given multiplier Λ and define the vector

s_1(h, \Lambda) := f_1\big(h, p(h, \Lambda)\big) - x(\Lambda).
(83)

This definition is made such that the expected value of s_1(h,Λ) with respect to the channel distribution is the subgradient component s̃_1(Λ) in (79). Thus, if we define the vector s(h,Λ) = [s_1^T(h,Λ), s_2^T(Λ)]^T with s_1(h,Λ) as in (83) and s_2(Λ) = s̃_2(Λ) = f_2(x(Λ)), we have

E[ s(h, Λ) ] = s̃(Λ).    (84)

Formally, (84) implies that s(h,Λ) is a stochastic subgradient of the dual function. Intuitively, it implies that −s(h,Λ) is an average descent direction of the dual function, because its expectation −s̃(Λ) is a descent direction. Thus, if we draw independent channel realizations h(t) and replace s̃(Λ(t)) with s(h(t),Λ(t)) in (82), we expect to observe some form of convergence towards the optimal multipliers.

The advantage of this substitution is that to compute the stochastic subgradient s(h,Λ) we do not need to evaluate an expectation, as is the case for the subgradient s̃(Λ). As a consequence, using stochastic subgradients as descent directions results in a computationally lighter algorithm. Perhaps more importantly, we can operate without knowledge of the channel probability distribution if we use the current channel realization h(t) as our channel sample. These observations motivate the introduction of the following dual stochastic subgradient descent algorithm. (S1) Primal iteration. Given multipliers Λ(t), observe the current channel realization h(t) and determine the primal Lagrangian maximizers [cf. (66) and (68)]

x(t) := x(Λ(t)) = argmax_{x ∈ X} f_0(x) − λ_1^T(t) x + λ_2^T(t) f_2(x),
p(t) := p(h(t), Λ(t)) = argmax_{p(h(t)) ∈ P(h(t))} λ_1^T(t) f_1(h(t), p(h(t))).    (85)

(S2) Dual stochastic subgradient. With the Lagrangian maximizers determined by (85), compute the stochastic subgradient s(h(t), Λ(t)) = [ s_1^T(h(t), Λ(t)), s_2^T(Λ(t)) ]^T of the dual function with components [cf. (83) and (79)]

s_1(h(t), Λ(t)) = f_1(h(t), p(h(t),Λ(t))) − x(Λ(t)),    s_2(Λ(t)) = f_2(x(Λ(t))).    (86)

(S3) Dual iteration. With stochastic subgradients as in (86) and given step size ε descend in the dual domain along the direction −s(t) [cf. (82)]

Λ(t+1) = [ Λ(t) − ε_t s(h(t), Λ(t)) ]^+ = [ λ_1(t) − ε_t ( f_1(h(t), p(h(t),Λ(t))) − x(Λ(t)) ) ; λ_2(t) − ε_t f_2(x(Λ(t))) ]^+.    (87)

The core of the dual stochastic subgradient descent algorithm is the dual iteration (S3). The purpose of the primal iteration (S1) is to compute the stochastic subgradients in (S2) that are needed to implement the dual descent update in (S3). We can think of the primal variables x(Λ(t)) and p(h(t),Λ(t)) as a byproduct of the descent implementation.
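
A minimal Python sketch of (S1)–(S3) follows. The functions solve_x and solve_p are assumed, user-supplied solvers of the two maximizations in (85), f1 and f2 are the constraint functions of (20), and observe_channel returns the channel realization seen in the current slot; all names are illustrative rather than part of the original algorithm description.

import numpy as np

def dual_stochastic_descent(solve_x, solve_p, f1, f2, observe_channel,
                            lmbda1, lmbda2, eps=0.1, num_iters=10000):
    # Dual stochastic subgradient descent (S1)-(S3) with constant step size eps.
    trajectory = []
    for t in range(num_iters):
        h = observe_channel()                          # current channel realization h(t)
        x = solve_x(lmbda1, lmbda2)                    # (S1): x(Lambda(t)) in (85)
        p = solve_p(h, lmbda1)                         # (S1): p(h(t), Lambda(t)) in (85)
        s1 = f1(h, p) - x                              # (S2): stochastic subgradient (86)
        s2 = f2(x)
        lmbda1 = np.maximum(lmbda1 - eps * s1, 0.0)    # (S3): projected dual update (87)
        lmbda2 = np.maximum(lmbda2 - eps * s2, 0.0)
        trajectory.append((x, p))                      # primal iterates are a byproduct
    return lmbda1, lmbda2, trajectory

Note that the resource allocation p computed in each pass of the loop is the one implemented in the corresponding time slot, which is the basis for the policy interpretation discussed below.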

Convergence properties depend on whether constant or time varying step sizes are used. If the step sizes ε_t form a nonsummable but square summable series, i.e., Σ_{t=0}^∞ ε_t = ∞ and Σ_{t=0}^∞ ε_t² < ∞, then a simple supermartingale argument shows that Λ(t) converges to Λ* almost surely [24]. If a constant step size ε_t = ε is used for all t, Λ(t) does not converge to Λ*, but it can be shown that Λ(t) visits a neighborhood of the set of optimal multipliers [19, Appendix]. Excursions away from this set are possible, but the set is visited infinitely often. The suboptimality of this neighborhood is controlled by the step size ε.

If Λ(t) approaches or converges to Λ*, it follows as a consequence of Theorem 4 that an optimal primal pair (x,p) can be computed from the Lagrangian maximizers if the latter are unique. Observe that this does not require a separate computation because the Lagrangian maximizers are computed in the primal iteration (S1). One may object that at time t we do not compute the Lagrangian maximizer function p(Λ(t)) but just the single value p(h(t),Λ(t)). However, h(t) is the channel realization at time t, which means that p(h(t),Λ(t)) is precisely the value we need to compute to adapt to the current channel realization.

This permits reinterpretation of (S1)–(S3) as a policy to determine the wireless system's operating points. At time t we observe the current channel realization h(t) and determine the resource allocation p(h(t),Λ(t)), which we proceed to implement in the current time slot. In this case the core of the algorithm is the primal iteration (S1) and the dual variable Λ(t) is an internal state that determines the operating point. Steps (S2) and (S3) are implemented to update this internal state so that, as time progresses, Λ(t) approaches Λ* and the policy becomes optimal because it chooses the best possible resource allocation adapted to the current channel realization h(t).

The reinterpretation of (S1)–(S3) as a policy to determine resource allocations p(t) = p(h(t),Λ(t)) associated with observed channel realizations h(t) motivates a redefinition of the concept of solution of the wireless optimization problem in (20). In principle, solving (20) entails finding the optimal resource allocation p and the optimal ergodic average x such that the problem constraints are satisfied and P = f_0(x) [cf. (21)]. Heeding the interpretation of dual stochastic subgradient descent as a policy, we are interested instead in the optimality of the sequences of power allocations p(ℕ) := {p(t)}_{t∈ℕ} and average variables x(ℕ) := {x(t)}_{t∈ℕ} generated by (S1)–(S3). Further notice that since (S1)–(S3) is a stochastic algorithm, the sequences p(ℕ) and x(ℕ) generated in a particular run are instantiations of respective random processes. We are therefore interested in the optimality of these random processes.

To be more specific, consider the channel stochastic process h(ℕ) whose instances are sequences of channel realizations h(ℕ) := {h(t)}_{t∈ℕ} drawn independently from the channel probability distribution m_h(h). Suppose we are also given a sequence of variables x(ℕ) := {x(t)}_{t∈ℕ} drawn from some stochastic process and a resource allocation function p(h) that dictates the allocation of resources p(h(t)) for channel realization h(t). Assuming that the process x(ℕ) is ergodic, the constraint x ≤ E[f_1(h, p(h))] is equivalent to

lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) ≤ lim_{t→∞} (1/t) Σ_{u=1}^{t} f_1(h(u), p(h(u)))    a.s.    (88)

Indeed, since x(ℕ) and h(ℕ) are ergodic processes, the limit lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) is a constant that we can denote by x, and the limit lim_{t→∞} (1/t) Σ_{u=1}^{t} f_1(h(u), p(h(u))) coincides with the expectation E[f_1(h, p(h))].

Writing the constraint x ≤ E[f_1(h, p(h))] in the more cumbersome form shown in (88) has the advantage that the latter generalizes to cases in which we are given stochastic processes p(ℕ) and x(ℕ) in which x(ℕ) is not necessarily ergodic and the realizations p(t) of the process p(ℕ) are more general than just functions of the channel state h(t). This concept of solution is formally defined next.

Definition 2

Consider the channel stochastic process h(ℕ) whose instances are sequences h(ℕ) := {h(t)}_{t∈ℕ} drawn independently from the channel probability distribution m_h(h). We say that stochastic processes p(ℕ) and x(ℕ), with realizations p(ℕ) := {p(t)}_{t∈ℕ} and x(ℕ) := {x(t)}_{t∈ℕ}, solve the problem in (20) if: (i) Instantaneous feasibility. The sequence values satisfy the set constraints x(t) ∈ X and p(t) ∈ P(h(t)) for all times t. (ii) Almost sure average feasibility. The ergodic limits of the sequences x(ℕ) and p(ℕ) are feasible with probability 1, i.e.,

lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) ≤ lim_{t→∞} (1/t) Σ_{u=1}^{t} f_1(h(u), p(u))    a.s.,    (89)
f_2( lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) ) ≥ 0    a.s.    (90)

(iii) Almost sure optimality. The yield of the ergodic limit of x(ℕ) is almost surely optimal, i.e.,

P = f_0( lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) )    a.s.    (91)

If the stochastic process x(ℕ) is ergodic and the process p(ℕ) is such that the realizations p(t) = p(h(t)) are functions of the current channel state, Definition 2 is equivalent to (21) with x = lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) and with p the resource allocation function that generates the values p(h(t)). Definition 2 is more general because it allows correlation between the values of x(ℕ) and lets p(t) be more complex than just a function of the current channel realization h(t). This added generality is needed because the processes p(ℕ) and x(ℕ) defined as per (S1)–(S3) are correlated and have realizations p(t) that depend on the current channel realization h(t) and on the current Lagrange multiplier Λ(t). The processes p(ℕ) and x(ℕ) generated by (S1)–(S3) are close to optimal in the sense of Definition 2, as we describe in the following theorem.

Theorem 5 (Ergodic stochastic optimization [[19]])

Consider the optimization problem in (20) as well as the processes p(ℕ) and x(ℕ) generated by the stochastic dual descent algorithm (S1)–(S3). Let Ŝ² ≥ E[ ‖s(h,Λ)‖² ] be a bound on the second moment of the norm of the stochastic subgradients s(h,Λ), and assume the same hypotheses of Theorem 1. The sequences p(ℕ) and x(ℕ) are such that: (i) Feasibility. Items (i) and (ii) of Definition 2 hold true. (ii) Near optimality. The ergodic average of x(t) almost surely converges to a value with optimality gap smaller than ε Ŝ²/2, i.e.,

P − f_0( lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) ) ≤ ε Ŝ² / 2    a.s.    (92)

The sequences p(ℕ) and x(ℕ) satisfy the constraints in (89) and (90) almost surely, and the objective function evaluated at the ergodic limit x̄ := lim_{t→∞} (1/t) Σ_{u=1}^{t} x(u) is within ε Ŝ²/2 of optimal. Since X and the sets P(h) are compact, the bound Ŝ² is finite. Therefore, by reducing ε it is possible to make f_0(x̄) arbitrarily close to P and, as a consequence, the sequences p(ℕ) and x(ℕ) arbitrarily close to optimal. It follows that the processes p(ℕ) and x(ℕ) generated by (S1)–(S3) are arbitrarily close to processes that are optimal in the sense of Definition 2.

Variables p and x optimal in the sense of (21) are not computed by (S1)–(S3). Rather, (89) implies that, asymptotically, (S1)–(S3) is drawing resource allocation realizations p(t)=p(h(t),Λ(t)) and variables x(t):=x(Λ(t)) that are close to optimal as per Definition 2. The important point here is that having a procedure to generate stochastic processes close to optimal in the sense of Definition 2 is sufficient for practical implementation.
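
In a simulation, the ergodic conditions in Definition 2 and Theorem 5 can be monitored with finite horizon running averages. The short sketch below assumes the iterates x(t) and the values f_1(h(t),p(t)) have been stored as arrays; function and argument names are illustrative.

import numpy as np

def ergodic_diagnostics(x_seq, f1_seq, f2, f0):
    # x_seq, f1_seq: arrays of shape (T, n) holding x(t) and f1(h(t), p(t)) for t = 1, ..., T.
    # Finite-horizon surrogates of the ergodic limits appearing in (89)-(91).
    x_bar = x_seq.mean(axis=0)
    f1_bar = f1_seq.mean(axis=0)
    rate_slack = f1_bar - x_bar      # should become nonnegative as T grows [cf. (89)]
    util_slack = f2(x_bar)           # should become nonnegative as T grows [cf. (90)]
    return f0(x_bar), rate_slack, util_slack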

An example application of the dual stochastic subgradient descent algorithm (S1)–(S3) is discussed in the next section.

Frequency division broadcast channel

To implement dual stochastic subgradient descent for the frequency division broadcast channel we need to specify the primal iteration (S1) and the dual iteration (S3). To specify the primal iteration (S1) we need to compute Lagrangian maximizers, for which it suffices to recall the expressions in Section "Frequency division broadcast channel" of "Recovery of optimal primal variables". For the ergodic rate r_i we set λ_i = λ_i(t) in (70) to conclude that the primal iterate r_i(t) = r_i(λ_i(t)) is

r_i(t) = argmax_{r_i ∈ [r_min, r_max]} U(r_i) − λ_i(t) r_i.    (93)
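
As an illustration, with the proportionally fair utility U_i(r_i) = log(r_i) used in the numerical results below, the maximization in (93) is concave in r_i with unconstrained stationary point r_i = 1/λ_i(t). Projecting onto the interval [r_min, r_max], the primal iterate reduces to the closed form (this expression is not given in the text and is included here only as a worked example)

r_i(t) = min{ max{ 1/λ_i(t), r_min }, r_max },

with r_i(t) = r_max whenever λ_i(t) = 0.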

For the power allocations p i (t)=p i (h(t),λ(t),μ(t)) and the frequency assignments α(t)=α(h(t),λ(t),μ(t)) we need to set the multipliers to λ=λ(t) and μ=μ(t) and also set the value of the channel to its current state h=h(t). This substitution in (73) yields the power allocation

p_i(t) = argmax_{p_i(h(t))} λ_i(t) C( h_i(t) p_i(h(t)) / N_0 ) − μ(t) p_i(h(t)).    (94)
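
The form of the maximizer in (94) depends on the rate function C. As a sketch, for the staircase AMC model used in the numerical section, where C jumps to the rate c_k once the SINR h_i(t)p_i/N_0 reaches the threshold s_k, the objective decreases in p_i between jumps, so the maximizer is either p_i = 0 or the smallest power N_0 s_k / h_i(t) that activates one of the modes, and a finite comparison suffices. The function below is an illustrative implementation of this observation, not part of the original algorithm description.

def solve_power(lmbda_i, mu, h_i, n0=1.0, rates=(1.0, 2.0, 3.0), thresholds=(1.0, 3.0, 7.0)):
    # Maximize lmbda_i * C(h_i * p / n0) - mu * p for a staircase rate function C [cf. (94)].
    # Candidate powers: p = 0 and the minimum power reaching each AMC threshold.
    best_p, best_val = 0.0, 0.0
    for c, s in zip(rates, thresholds):
        p = n0 * s / h_i
        val = lmbda_i * c - mu * p
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val      # best_val is the discriminant d_i(t) in (95)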

To determine the frequency assignments α(t) we first substitute λ=λ(t), μ=μ(t), and h=h(t) in (74) to compute the discriminants d i (t)=d i (h(t),λ(t),μ(t))

d_i(t) := max_{p_i(h(t))} λ_i(t) C( h_i(t) p_i(h(t)) / N_0 ) − μ(t) p_i(h(t)),    (95)

from where we conclude that the frequency assignment α(t)=α(h(t),λ(t),μ(t)) is given by the solution of [cf. (75)]

α(t) = argmax_{α(h(t))} Σ_{i=1}^{J} α_i(h(t)) d_i(t)    s.t. α_i(h(t)) ∈ {0, 1},  1^T α(h(t)) ≤ 1.    (96)

Recall that since at most one α i (h)=1 in (96), the optimal frequency allocation is to make α i (h)=1 for the terminal with the largest discriminant when that discriminant is positive. If all discriminants are negative we make α i (h)=0 for all i.

The ESO algorithm for optimal resource allocation in broadcast channels is completed with an iteration in the dual domain [cf. (83) and (87)]

λ_i(t+1) = [ λ_i(t) − ε ( α_i(t) C( h_i(t) p_i(t) / N_0 ) − r_i(t) ) ]^+,
μ(t+1) = [ μ(t) − ε ( q_0 − Σ_{i=1}^{J} α_i(t) p_i(t) ) ]^+.    (97)

As per Theorem 5, iterative application of (93)–(97) yields sequences r_i(t), α_i(t), and p_i(t) such that: (i) the sum utility evaluated at the ergodic limits of r_i(t) is almost surely within a small constant of optimal; (ii) the power constraint in (27) and the rate constraints in (26) are almost surely satisfied in an ergodic sense. This result holds despite the presence of the nonconvex integer constraint α ∈ A, the nonconcave function C(h_i p_i(t)/N_0), the lack of access to the channel's probability distribution, and the infinite dimensionality of the optimization problem.
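
Putting (93)–(97) together, the following sketch simulates the complete algorithm under the same modeling assumptions as the numerical section below (staircase AMC rate function, logarithmic utilities, Rayleigh fading). Parameter values and function names are illustrative; the power subproblem solver repeats the logic of the sketch after (94) so that the example is self contained.

import numpy as np

RATES, THRESHOLDS = (1.0, 2.0, 3.0), (1.0, 3.0, 7.0)   # AMC modes and SINR transitions

def amc_rate(sinr):
    # Staircase rate function C(sinr): highest mode whose threshold is met, else 0.
    rate = 0.0
    for c, s in zip(RATES, THRESHOLDS):
        if sinr >= s:
            rate = c
    return rate

def solve_power(lmbda_i, mu, h_i, n0):
    # Finite search over mode-activation powers, as in the sketch after (94).
    best_p, best_d = 0.0, 0.0
    for c, s in zip(RATES, THRESHOLDS):
        p = n0 * s / h_i
        d = lmbda_i * c - mu * p
        if d > best_d:
            best_p, best_d = p, d
    return best_p, best_d

def eso_broadcast(J=16, n0=1.0, q0=3.0, r_min=1e-3, r_max=2.0, eps=0.1, T=20000, seed=0):
    rng = np.random.default_rng(seed)
    avg_power = np.repeat([1.0, 2.0, 3.0, 4.0], 4)      # average channel gains per group of 4 nodes
    lmbda = np.ones(J)                                   # rate multipliers lambda_i(t)
    mu = 1.0                                             # power multiplier mu(t)
    r_bar = np.zeros(J)                                  # ergodic rates (1/t) sum_u r_i(u)
    for t in range(1, T + 1):
        h = rng.exponential(avg_power)                   # Rayleigh fading -> exponential power gains
        r = np.clip(1.0 / np.maximum(lmbda, 1e-12), r_min, r_max)     # (93) with U_i = log
        p = np.zeros(J)
        d = np.zeros(J)
        for i in range(J):
            p[i], d[i] = solve_power(lmbda[i], mu, h[i], n0)          # (94)-(95)
        alpha = np.zeros(J)
        winner = int(np.argmax(d))
        if d[winner] > 0:                                # (96): slot to the largest positive discriminant
            alpha[winner] = 1.0
        cap = alpha * np.array([amc_rate(h[i] * p[i] / n0) for i in range(J)])
        lmbda = np.maximum(lmbda - eps * (cap - r), 0.0)              # (97) rate multipliers
        mu = max(mu - eps * (q0 - float(alpha @ p)), 0.0)             # (97) power multiplier
        r_bar += (r - r_bar) / t                         # running ergodic averages
    return r_bar, lmbda, mu

Running eso_broadcast() and tracking the ergodic rates, the multipliers, and the constraint slacks over time would reproduce, qualitatively, the behavior reported in Figures 3, 4, and 5, although exact values depend on the random seed and on simulation details not specified here.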

Numerical results

The dual stochastic subgradient descent algorithm for optimal resource allocation in frequency division broadcast channels defined by (93)–(97) is simulated for a system with J=16 nodes. Three AMC modes corresponding to capacities 1, 2, and 3 bits/s/Hz are used, with transitions at SINRs 1, 3, and 7. Fading channels are generated as i.i.d. Rayleigh with average power 1 for the first four nodes, i.e., i=1,…,4, and average powers 2, 3, and 4 for the subsequent groups of 4 nodes. The noise power is N_0=1 and the average power available is q_0=3. The rate of packet acceptance is constrained to 0 ≤ r_i(t) ≤ 2 bits/s/Hz. The optimality criterion is proportional fair scheduling, i.e., U_i(r_i) = log(r_i) for all i. The step size is ε=0.1.

Figure 3 shows the evolution of the dual variables λ_i(t) and the corresponding rates r_i(t) for representative nodes i=1, with average channel E[h_1(t)]=1, and i=9, with E[h_9(t)]=3. The time average rate r̄_i(t) := (1/t) Σ_{u=1}^{t} r_i(u) is also shown. Neither the multipliers λ_i(t) nor the rates r_i(t) converge, but the ergodic rates r̄_i(t) do converge. The multiplier λ_1(t) associated with node 1 is larger than the multiplier λ_9(t) of node 9. This improves the fairness of the resource allocation by increasing the chances of scheduling user 1 even when the channel h_1(t) is smaller than h_9(t); recall that channel h_9(t) is stronger on average. Convergence of the algorithm is confirmed by Figures 4 and 5. Figure 4 shows the evolution of the objective Σ_{i=1}^{J} U_i(r̄_i(t)) and of the dual function value g(t) := g(λ(t),μ(t)). Notice that the objective value decreases towards the maximum objective. This is not a contradiction, because the variables r̄_i(t) are infeasible but approach feasibility as t grows. The dual function's value is an upper bound on the maximum utility and can be observed to approach the objective as t grows. Eventually, the objective value becomes smaller than the dual value, as expected. Figure 5 corroborates satisfaction of the power constraint in (27) and the rate constraints in (26). The amount by which the power constraint (27) is violated is shown at the top. At the bottom we show the corresponding figure for the rate constraints in (26). Since there are J of these constraints we show the minimum and maximum violations. All constraints are satisfied as t grows. The resulting power allocations appear in Figure 6 for a channel with average power E[h_i(t)]=1 and for a channel with E[h_i(t)]=2. The power allocation is opportunistic in that power is allocated only when channel realizations are above average.

Figure 3

Primal and dual iterates in dual stochastic gradient descent. Evolution of the dual variables λ_i(t) and rates r_i(t) for representative nodes with average channels E[h_1(t)]=1 and E[h_9(t)]=3 for the algorithm in (93)–(97). The multipliers λ_i(t) and rates r_i(t) do not converge, but the ergodic rates r̄_i(t) do.

Figure 4

Optimal frequency division broadcast channel. Objective value Σ_{i=1}^{J} U_i(r̄_i(t)) and dual function value g(t) := g(λ(t),μ(t)) for the algorithm in (93)–(97), shown along with lines marking the optimal utility and 90% of the optimal yield. The utility yield becomes optimal as time grows.

Figure 5

Power and capacity constraints. Feasibility as time grows is corroborated for the power constraint in (27) (top) and rate constraints in (26) (bottom). For the rate constraint we show the maximum and minimum value of constraint violation.

Figure 6

Power allocations. Power allocated as a function of the channel realization for channels with average power E[h_i(t)]=1 (top) and E[h_i(t)]=2 (bottom). The resulting power allocation is opportunistic in that power is allocated only when channel realizations are above average.

Conclusions

This article reviews recent results which state that problems of the form in (20) in which nonconcave functions appear inside expectations have null duality gap as long as the probability distribution of the fading coefficient h contains no points of strictly positive probability. Lack of duality gap permits solution in the dual domain leading to a substantial reduction in the computational cost of determining optimal operating points of the wireless system. Working in the dual domain leads to a solution methodology that can be interpreted as a generalization of the derivation of the waterfilling power allocation in point to point channels reviewed in Section “Power allocation in a point-to-point channel”.

Specifically, the problem of determining the optimal resource allocation function p in (20) is challenging due to its infinite dimensionality and lack of convexity. In the dual domain, however, we only need to determine the optimal multiplier Λ* that minimizes the dual function in (23). This is simpler because the dual function is convex and finite dimensional. Once we have found an optimal dual variable we can determine optimal operating points as Lagrangian maximizers. In doing so we can exploit the separable structure of the Lagrangian to decompose the optimization problem into the per fading state subproblems in (68). We emphasize that solving the optimization programs in (68) is not necessarily easy if the dimensionality of h is large. Nevertheless, solving (68) is always simpler than solving (20), and in some cases outright simple. Lack of duality gap and Lagrangian separability are further exploited to develop the dual stochastic subgradient descent algorithm (S1)–(S3), which converges to an optimal operating point with probability 1 in an ergodic sense.

There are three key points that permit the development of the solution methodology outlined in the previous paragraph:

Nonatomic fading distribution. A nonatomic fading distribution leads to the lack of duality gap. The fact that P = D, i.e., that the primal and dual optimal values are the same, is what allows us to work in the dual domain without loss of optimality. In formal terms, lack of duality gap is the tool that we used to recover the optimal primal variables (x,p) from the optimal dual variable Λ* by determining the maximizers of the Lagrangian L(x, p, Λ*) [cf. Theorem 4]. It is important to distinguish between convexity of the optimization problem and lack of duality gap. Null duality gap may follow from convexity, but convexity is rare in wireless communication systems. Lack of duality gap can also follow from a nonatomic fading distribution, which is a common occurrence in wireless systems.

Lagrangian separability. According to Theorem 4, null duality gap permits computation of the optimal pair (x,p) as the Lagrangian maximizers (x(Λ*), p(Λ*)). This is not a simplification per se, but it leads to one because the computation of the Lagrangian maximizer function p(Λ) can be separated into per fading state problems whose solutions determine the values p(h,Λ) of this function [cf. (66)–(68)]. The Lagrangian is separable in this sense because neither the constraints nor the objective function involve a nonlinear function coupling the selection of values p(h1) and p(h2) for different channel realizations h1 ≠ h2. Whenever p(h1) and p(h2) appear as part of the same constraint they appear as different terms of an expectation operation. This absence of coupling is what permits exchanging the order of maximization and expectation in going from (67) to (68).

Finite number of constraints. Working in the dual domain is simpler than working in the primal domain because the dual function is finite dimensional whereas the primal problem is infinite dimensional. We have a finite dimensional dual function as long as the original optimization problem has a finite number of constraints.

Nonatomic fading distributions, Lagrangian separability, and having a finite number of constraints are properties that appear in many, indeed most, problems in optimal design of wireless systems. In such cases the methodology described in this article can be applied to their solution.

Further reading

The use of dual problems as a shortcut to solve optimization problems in communications has a rich history [25–28]; see also [29] for a comprehensive treatment. Lack of duality gap in nonconvex optimization problems has also been observed in the context of asymmetric digital subscriber lines [30, 31]. In network optimization, lack of duality gap leads to the optimality of layered architectures, which renders the complexity of wireless networking essentially identical to the complexity of physical layer optimization [32–35]. For the use of the techniques discussed here in the solution of specific problems we refer the reader to [36–42]. For further details on dual stochastic subgradient descent, the literature on the convergence of subgradient descent algorithms [43–45] and stochastic subgradient descent [46–49] is of interest.

References

  1. Wang X, Giannakis GB: Resource allocation for wireless multiuser OFDM networks. IEEE Trans. Inf. Theory 2011, 57(7):4359-4372.


  2. Ntranos V, Sidiropoulos N, Tassiulas L: On multicast beamforming for minimum outage. IEEE Trans. Wirel. Commun 2009, 8(6):3172-3181.


  3. Sidiropoulos ND, Davidson TN, Luo ZQ: Transmit beamforming for physical-layer multicasting. IEEE Trans. Signal Process 2006, 54(6):2239-2251.


  4. Bazerque JA, Giannakis GB: Distributed scheduling and resource allocation for cognitive OFDMA radios. Mobile Nets. Apps 2008, 13(5):452-462. 10.1007/s11036-008-0083-z


  5. Quan Z, Cui S, Sayed AH: Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE J. Sel. Topics Signal Process 2008, 2: 28-40.


  6. Hu Y, Ribeiro A: Optimal wireless networks based on local channel state information. IEEE Trans. Signal Process 2012, 60(9): 4913-4929.


  7. Hu Y, Ribeiro A: Adaptive distributed algorithms for optimal random access channels. IEEE Trans. Wirel. Commun 2011, 10(8):2703-2715.


  8. Hu Y, Ribeiro A: Optimal wireless multiuser channels with imperfect channel state information. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1, (Kyoto, Japan, 2012), pp. 1–4


  9. Hu Y, Ribeiro A: Optimal transmission over a fading channel with imperfect channel state information. Global Telecommun. Conf., vol. 1, (Houston, TX, 2011), pp. 1–5


  10. Chen L, Low SH, Chiang M, Doyle JC: Cross-layer congestion control, routing and scheduling design in ad hoc wireless networks. Proc. IEEE INFOCOM, vol. 1, (Barcelona, Spain, 23–29 April 2005), pp. 1–13


  11. Chiang M, Low SH, Calderbank RA, Doyle JC: Layering as optimization decomposition. Proc IEEE 2007, 95: 255-312.


  12. Eryilmaz A, Srikant R: Joint congestion control, routing, and MAC for stability and fairness in wireless networks. IEEE J. Sel. Areas Commun 2006, 24(8):1514-1524.


  13. Georgiadis L, Neely MJ: Resource allocation and cross-layer control in wireless networks. Found Trends Netw 2006, 1: 1-144.


  14. Lee JW, Mazumdar RR, Shroff NB: Opportunistic power scheduling for dynamic multi-server wireless systems. IEEE Trans. Wirel. Commun 2006, 5(6):1506-1515.


  15. Lin X, Shroff NB, Srikant R: A tutorial on cross-layer optimization in wireless networks. IEEE J. Sel. Areas Commun 2006, 24(8):1452-1463.


  16. Neely MJ, Modiano E, Rohrs CE: Dynamic power allocation and routing for time-varying wireless networks. IEEE J. Sel. Areas Commun 2005, 23: 89-103.


  17. Wang X, Kar K: Cross-layer rate optimization for proportional fairness in multihop wireless networks with random access. IEEE J. Sel. Areas Commun 2006, 24(8):1548-1559.


  18. Yi Y, Shakkottai S: Hop-by-hop congestion control over a wireless multi-hop network. IEEE/ACM Trans. Netw 2007, 15(1): 133-144.


  19. Ribeiro A: Ergodic stochastic optimization algorithms for wireless communication and networking. IEEE Trans. Signal Process 2010, 58(12):6369-6386.


  20. Ribeiro A, Giannakis G: Separation principles in wireless networking. IEEE Trans. Inf. Theory 2010, 56(9):4488-4505.


  21. Boyd S, Vandenberghe L: Convex Optimization. (Cambridge University Press, Cambridge, 2004)


  22. Lyapunov AA: Sur les fonctions-vecteur complètement additives. Bull. Acad. Sci. URSS Sér. Math 1940, 4: 465-478.


  23. Rockafellar RT: Convex Analysis. (Princeton University Press, Princeton, NJ, 1970)


  24. Shor NZ: Minimization Methods for Non-Differentiable Functions. (Springer, Berlin, 1985)


  25. Kelly FP, Maulloo A, Tan D: Rate control for communication networks: shadow prices, proportional fairness and stability. J. Oper. Res. Soc 1998, 49(3):237-252.


  26. Low SH, Lapsley DE: Optimization flow control, I: basic algorithm and convergence. IEEE/ACM Trans. Netw 1998, 7(6):861-874.


  27. Low SH: A duality model of TCP and queue management algorithms. IEEE/ACM Trans. Netw 2003, 11(4):525-536. 10.1109/TNET.2003.815297


  28. Low SH, Paganini F, Doyle JC: Internet congestion control. IEEE Control Syst. Mag 2002, 22: 28-43.


  29. Srikant R: The Mathematics of Internet Congestion Control. (Birkhauser, 2004)


  30. Luo ZQ, Zhang S: Dynamic spectrum management: complexity and duality. IEEE J. Sel. Topics Signal Process 2008, 1(2):57-73.


  31. Yu W, Lui R: Dual methods for nonconvex spectrum optimization of multicarrier systems. IEEE Trans. Commun 2006, 54(7):1310-1322.


  32. Berry RA, Yeh EM: Cross-layer wireless resource allocation. IEEE Signal Process. Mag 2004, 21(5):59-68. 10.1109/MSP.2004.1328089


  33. Neely MJ: Energy optimal control for time-varying wireless networks. IEEE Trans. Inf. Theory 2006, 52(7):2915-2934.


  34. Neely MJ, Modiano E, Li CP: Fairness and optimal stochastic control for heterogeneous networks. Proc. IEEE INFOCOM, vol 3 (Miami, FL 13–17, March 2005), pp. 1723–1734


  35. Ribeiro A, Giannakis G: Layer separability of wireless networks. Proc. Conf. on Info. Sciences and Systems, vol. 1 (Princeton Univ. Princeton, NJ, 2008), pp. 821–826


  36. Gatsis N, Ribeiro A, Giannakis G: A class of convergent algorithms for resource allocation in wireless fading networks. IEEE Trans. Wirel. Commun 2010, 9(5):1808-1823.


  37. Gatsis N, Ribeiro A, Giannakis G: Cross-layer optimization of wireless fading ad-hoc networks. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 (Taipei, Taiwan, 2009), pp. 2353–2356


  38. Hu Y, Ribeiro A: Optimal wireless networks based on local channel state information. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 (Prague Czech Republic, 2011), pp. 3124–3127


  39. Hu Y, Ribeiro A: Adaptive distributed algorithms for optimal random access channels. Proc. Allerton Conf. on Commun. Control Computing, vol. 1 (Monticello, 2010), pp. 1474–1481


  40. Ribeiro A, Giannakis G: Optimal FDMA over wireless fading mobile ad-hoc networks. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 (Las Vegas, NV, 2008), pp. 2765–2768


  41. Ribeiro A, Luo T, Sidiropoulos N, Giannakis G: Modelling and optimization of stochastic routing for wireless multihop networks. Proc. IEEE Int. Conf. on Computer Commun, vol. 1 (Anchorage, AK, 2007), pp. 1748–1756


  42. Ribeiro A, Sidiropoulos N, Giannakis G: Optimal distributed stochastic routing algorithms for wireless multihop networks. IEEE Trans. Wirel. Commun 2008, 7(11):4261-4272.


  43. Juditsky A, Lan G, Nemirovski A, Shapiro A: Stochastic approximation approach to stochastic programming. SIAM J. Optim 2009, 19(4):1574-1609. 10.1137/070704277


  44. Larsson T, Patriksson M, Strömberg A: Ergodic primal convergence in dual subgradient schemes for convex programming. Math. Progr 1999, 86(2): 283-312. 10.1007/s101070050090


  45. Nedic A, Ozdaglar A: Approximate primal solutions and rate analysis for dual subgradient methods. SIAM J. Optim 2009, 19(4):1757-1780. 10.1137/070708111


  46. Polyak BT: New stochastic approximation type procedures. Autom. Remote Control 1990, 51: 937-946.


  47. Polyak BT, Juditsky AB: Acceleration of stochastic approximation by averaging. SIAM J. Control Optim 1992, 30(4):838-855. 10.1137/0330046


  48. Ribeiro A: Stochastic learning algorithms for optimal design of wireless fading networks. Proc. IEEE Workshop on Signal Process. Advances in Wireless Commun vol. 1 (Marakech, Morocco, 2010), pp. 1–5


  49. Ribeiro A: Ergodic stochastic optimization algorithms for wireless communication and networking. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1, (Dallas, TX, 2010), pp. 3326–3329



Acknowledgements

Work in this article is supported by the Army Research Office grant W911NF-10-1-0388 and the National Science Foundation award CAREER CCF-0952867. Part of the results in this article were derived while the author was at the University of Minnesota. The work presented here has benefited from discussions with Yichuan Hu, Dr. Nikolaos Gatsis, and Prof. Georgios B. Giannakis. The Associate Editor, Dr. Deniz Gunduz, provided valuable corrections to a draft version of this article.

Author information


Corresponding author

Correspondence to Alejandro Ribeiro.

Additional information

Competing interests

The author declares that he has no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ribeiro, A. Optimal resource allocation in wireless communication and networking. J Wireless Com Network 2012, 272 (2012). https://doi.org/10.1186/1687-1499-2012-272

