Resource allocation for secondary users with chance constraints based on primary links control feedback information
EURASIP Journal on Wireless Communications and Networking volume 2012, Article number: 346 (2012)
Abstract
Resource allocation for secondary users is an important issue in cognitive radio networks. In our article, we introduce a resource allocation scheme for secondary users to share spectrum in a cognitive radio network. Secondary users can exploit the spectrum owned by primary links when the interference level does not exceed certain requirements. Uncertainties in the channel gains have a great impact on the allocation scheme. Since there are uncertainties about the channel states, we apply chance constraints to represent the interference level requirements: secondary users may exceed the interference level with a predefined small probability. Since chance constraints are generally difficult to solve and full information about the uncertain variables is not available due to the fading effects of wireless channels, we reformulate the constraints into stochastic expectation constraints. Using the sample average approximation method, we propose stochastic distributed learning algorithms that help secondary users satisfy the constraints, based on feedback information from the primary links, while maximizing their utilities.
1 Introduction
Today’s wireless networks face increasing spectrum demand while radio spectrum is a limited and valuable resource [1, 2]. At the same time, some licensed frequency bands are unoccupied most of the time and others are only partially used, while unlicensed users need access to the spectrum [3]. Spectrum utilization can be improved significantly by allowing secondary users to exploit the spectrum unoccupied by primary users [4]. Cognitive radio has been proposed as a method to promote efficient utilization of the spectrum by allowing secondary users to access the spectrum when it is unoccupied [4, 5]. In cognitive radio networks, primary users may be given incentives to provide feedback information so that their own transmissions are not severely affected by secondary users sharing the spectrum [6]. Monitoring nodes in the network may also take on the task of providing feedback information in some practical scenarios [6, 7]. The spectrum can then be better utilized while primary users’ transmissions are not affected, which is one of the main motivations for cognitive radio networks. Moreover, resource allocation with time-varying channels is an interesting research area in wireless communication. Uncertainties in the channel states raise many important research issues, since the uncertainties affect resource allocation decisions [3, 4].
Power control with spectrum sharing is an important topic in cognitive radio networks. Some researchers propose a framework for dynamic spectrum sharing in cognitive radio networks to facilitate the usage of bandwidth only under certain channel states [8, 9]. Spectrum sharing for unlicensed bands is also taken into consideration, and an optimal spectrum allocation is given under the condition that all channel information is available to all users [10]. Moreover, perfect channel state information (CSI) is assumed to be known by secondary users when they have access to the spectrum [11, 12]. However, in most practical scenarios, there is uncertainty in obtaining the channel information. Uncertainties in the channel gains affect the power allocation decisions of secondary users when they try to exploit the spectrum owned by primary users [13]. Secondary users want to maximize their own utilities without impairing the primary users’ transmission quality, even in cases with imperfect CSI [13].
In some scenarios, secondary users can exceed the required interference level with some small probability instead of staying strictly below the required level. Chance constraints are widely applied in tackling uncertainties [6, 14]. Some researchers analyze the chance constraint with a known distribution by reformulating it into a deterministic form using some knowledge of the distributions of the random variables [14, 15]. Others approximate the chance constraint with a convex approximation, which still requires knowledge of the distribution even for Monte Carlo simulation [16]. Chance constraints are generally difficult to solve, especially when we do not have full information about the distributions of the random variables due to the fading effects of wireless channels [17, 18]. Furthermore, it is difficult to obtain exact information about the distributions in general. The sample average approximation method can be applied to help secondary users learn the uncertain channel information [18, 19]. However, direct measurement of channel gains is difficult, and transmitting such a large amount of information would occupy a lot of resources. Some apply the average approximation methods in a robust way [20]. Some consider subgradient learning methods while transforming chance constraints into stochastic programming problems [21–23]. Some researchers also consider long-term ergodic optimization problems with learning processes [24, 25].
In our article, we introduce a resource allocation scheme for secondary users to share spectrum in a cognitive radio network. Secondary users can exploit the spectrum owned by primary links while their interference levels can exceed the given requirements with a small predefined probability. To tackle the chance constraints, we demonstrate a stochastic approach with the sample average approximation method for secondary users to exploit the feedback information from primary links when channel gains are uncertain. Secondary users do not need full information about the channel gains; they only need the outage information from the primary links. We propose two stochastic distributed learning algorithms (SDLAs) to optimize the utilities with the given feedback information.
The rest of this article is organized as follows: Section 2 introduces the system model. Section 3 illustrates deterministic approaches with chance constraints when full information about the distributions is available. Section 4 illustrates our stochastic approaches with chance constraints when full information is not available and demonstrates our SDLAs. Section 5 gives our numerical results and discussion. The last section contains our conclusions.
2 System model
We consider a cognitive radio network with N primary links and M secondary users. We define the sets \mathcal{M}=\{1,2,\dots,M\} and \mathcal{N}=\{1,2,\dots,N\}. Secondary users would like to transmit to the secondary base station when accessing the spectrum. We assume that every primary link occupies its own channel and that the channels are mutually orthogonal. To keep the notation simple, we assume that the i th primary link denotes the i th channel. Secondary users can exploit the channels as long as the interference they cause does not exceed a level that would threaten the primary link transmission. We consider time-varying channel gains, so the gains are uncertain.
Without loss of generality, we consider the k th secondary user. We use u_k(\mathbf{P}_k) to represent the utility function of the k th secondary user. We define \mathbf{P}_k = \left[ P_k^1 \; P_k^2 \dots P_k^i \dots P_k^N \right] to represent the power vector of the k th secondary user. P_k^i is the power allocated by the k th secondary user to the i th channel. In our article, we assume that u_k(\mathbf{P}_k) is a concave and monotonically increasing function with respect to every element P_k^i. Utility functions with such properties are widely applied, e.g., u_k = \sum_{i=1}^{N} \log\left( 1 + \frac{P_k^i G_k^i}{N_0 + \sum_{j=1,j\ne k}^{M} P_j^i G_j^i} \right), where G_k^i is the channel gain between the k th secondary user and the secondary base station on the i th channel. The choice of utility functions is open and there are other examples of utility functions [26, 27].
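As a concrete illustration, the log-utility above can be evaluated as follows; this is a minimal sketch in which the gains, interference terms, and noise power are placeholder inputs rather than values fixed by the paper.

```python
import math

def utility(P_k, G_k, interference, N0=1.0):
    """u_k = sum_i log(1 + P_k^i * G_k^i / (N0 + I_i)), where I_i collects
    the other users' received powers on channel i. Concave and increasing
    in each P_k^i, as assumed in the text."""
    return sum(math.log(1.0 + p * g / (N0 + I))
               for p, g, I in zip(P_k, G_k, interference))
```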
The total power that can be allocated to the k th secondary user serves as a constraint and we have
\sum_{i=1}^{N} P_k^i \le P_T, \qquad P_k^i \ge 0, \quad \forall i. \qquad (1)
P_{ T } is the total power the secondary user can use. For simplicity, we assume that all secondary users have the same total power constraint.
We define the feasible set for power allocation as
X = \left\{ \mathbf{P}_k : \sum_{i=1}^{N} P_k^i \le P_T, \; P_k^i \ge 0, \; \forall i \right\}. \qquad (2)
Moreover, secondary users cannot exceed a certain interference level when exploiting the channel. In some practical scenarios, the primary links may allow secondary users to exceed the interference constraint with a certain probability α instead of requiring them to stay strictly below it. In this case, we have the chance constraints as follows:
\Pr\left\{ P_k^i G_{k,p}^i > V \right\} \le \alpha, \quad \forall i \in \mathcal{N}, \qquad (3)
where V represents the predefined interference level. G_{k,p}^i denotes the channel gain between the k th secondary user and the primary receiver with respect to the i th channel. To keep the analysis simple, we assume all primary links have the same interference level requirement.
In some practical scenarios, α represents the quality of service (QoS) that the primary links demand facing interference from secondary users, e.g., when α=0, the primary links cannot tolerate any exceeded interference; when α=0.2, the primary links allow some exceeded interference with a small probability.
The interference constraints on a single secondary user may also be used in the area of admission control. In some cases, when secondary users exceed the required interference level, the primary link may block the secondary user out. Due to the uncertainty of channel gain {G}_{k,p}^{i}, secondary users need to satisfy the constraints to share the spectrum. Secondary users also want to maximize their utilities while satisfying the constraints (3).
3 Power allocation with complete information about probabilistic constraints
In order to maintain the interference level below the requirement, the k th secondary user needs to adjust his or her power allocation under the uncertainties of the channel gains. The k th secondary user wants to maximize his or her utility, and the optimization problem is formulated as follows
\max_{\mathbf{P}_k \in X} u_k\left( \mathbf{P}_k \right) \quad \text{s.t.} \quad \Pr\left\{ P_k^i G_{k,p}^i > V \right\} \le \alpha, \; \forall i \in \mathcal{N}. \qquad (4)
We can see that there are probabilistic constraints in the optimization problem. To obtain an optimal solution, the chance constraints themselves need to be reformulated into some deterministic forms.
Chance constraints are generally nonconvex. However, in our system model, since the dimension in every chance constraint is one with respect to {P}_{k}^{i} and {G}_{k,p}^{i}, it can be shown that the constraints represent convex sets when full information about the distribution is available. Given full information about the distribution, (4) is a convex optimization problem.
Lemma 1
Suppose that the channel gains G_{k,p}^i, ∀i, are bounded and have cumulative distribution functions (CDF) F_k^i(.), respectively. Then the optimization problem is a convex optimization problem.
Proof
Based on the assumptions in our system model, the objective function is concave in every P_k^i. Thus it is the maximization of a concave function. For the constraint set, the probabilistic constraint is equivalent to the following
\Pr\left\{ G_{k,p}^i \le \frac{V}{P_k^i} \right\} \ge 1-\alpha, \quad \text{i.e.,} \quad F_k^i\!\left( \frac{V}{P_k^i} \right) \ge 1-\alpha,
where we define \frac{V}{P_k^i} = +\infty when P_k^i = 0. Moreover, since F_k^i(.) is monotonically nondecreasing, the above constraint is equivalent to
P_k^i \le P_{k,max}^i,
where we have
P_{k,max}^i = \frac{V}{\left( F_k^i \right)^{-1}\left( 1-\alpha \right)}.
Therefore, we can reformulate the constraint set as follows
\left\{ \mathbf{P}_k : \sum_{i=1}^{N} P_k^i \le P_T, \; 0 \le P_k^i \le P_{k,max}^i, \; \forall i \right\}.
The constraint set is clearly convex. Thus we are maximizing a concave function with the convex set. Our optimization problem is a convex optimization problem. □
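As an illustration of the CDF inversion in the proof, the per-channel power cap implied by a chance constraint can be computed in closed form when the gain is lognormal; the lognormal choice is an assumption made here for the example only, since the paper leaves the distribution open.

```python
import math
from statistics import NormalDist

def max_power(V, alpha, mu, sigma):
    """Largest P satisfying Pr{P * G > V} <= alpha when the gain G is
    lognormal, G = exp(mu + sigma * Z) with Z ~ N(0, 1) (an assumed
    distribution). The constraint holds iff P <= V / q, where q is the
    (1 - alpha)-quantile of G."""
    q = math.exp(mu + sigma * NormalDist().inv_cdf(1.0 - alpha))
    return V / q
```

For alpha = 0.5 the cap is V / exp(mu), since exp(mu) is the median of the lognormal gain; tightening alpha shrinks the cap.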
Given the complete information about the distribution, we define the feasible set for the k th secondary user as follows
X_{k,com} = \left\{ \mathbf{P}_k : \sum_{i=1}^{N} P_k^i \le P_T, \; 0 \le P_k^i \le P_{k,max}^i, \; \forall i \right\},
where {P}_{k,max}^{i} is the maximum power for the k th secondary user at the i th channel. It can be calculated based on the CDF of the respective distribution. We also define the set X_{com}=X_{1,com}×X_{2,com}×⋯×X_{k,com}×⋯×X_{M,com}.
The optimization problem can be rewritten as follows
\max_{\mathbf{P}_k \in X_{k,com}} u_k\left( \mathbf{P}_k \right).
As an example, when u_k = \sum_{i=1}^{N} \log\left( 1 + \frac{P_k^i G_k^i}{N_0 + \sum_{j=1,j\ne k}^{M} P_j^i G_j^i} \right), the optimal solution can be obtained as follows
P_k^i = \left[ \frac{1}{\eta_k} - \frac{N_0 + \sum_{j=1,j\ne k}^{M} P_j^i G_j^i}{G_k^i} \right]_0^{P_{k,max}^i}, \qquad (7)
where {\left[.\right]}_{0}^{{P}_{k,max}^{i}} is the projection into the interval [0,{P}_{k,max}^{i}] and η_{ k } is a Lagrangian variable.
Under this example, secondary users can optimize the power allocation given the others’ choices. It can be shown that a game \mathcal{G}=[M, u_k(.), \mathbf{P}_k] is formed. It can also be shown that, with feasible set X_{com} for all secondary users, the game is a potential game. Thus a Nash equilibrium (NE) exists in our game.
Proposition 1
With full information and for the game \mathcal{G}=[M,{u}_{k}(.),{\mathit{P}}_{\mathit{k}}], there exists a NE in our game model.
Proof
It is obvious that the set X_{com} is a convex, nonempty and compact set for all k\in \mathcal{M}. Noting that u_k is concave with respect to P_k^i, the existence of an NE then follows from the standard results in game theory in [28, 29]. □
Proposition 2
With full information and for the game \mathcal{G}=[M,{u}_{k}(.),{\mathit{P}}_{\mathit{k}}], it is a potential game.
Proof
We define a function as follows:
\Phi\left( \mathbf{P} \right) = \sum_{i=1}^{N} \log\left( N_0 + \sum_{j=1}^{M} P_j^i G_j^i \right).
It can be easily verified that
\Phi\left( \mathbf{P}_k, \mathbf{P}_{-k} \right) - \Phi\left( \mathbf{P}_k^\prime, \mathbf{P}_{-k} \right) = u_k\left( \mathbf{P}_k, \mathbf{P}_{-k} \right) - u_k\left( \mathbf{P}_k^\prime, \mathbf{P}_{-k} \right)
for any feasible solutions \mathbf{P}_k and \mathbf{P}_k^\prime and fixed \mathbf{P}_{-k}, where \mathbf{P}_{-k} represents all other secondary users’ strategies. Therefore, \Phi is a potential function for the game \mathcal{G}=[M, u_k(.), \mathbf{P}_k]. Due to the properties of potential games, our game is a potential game [30]. □
We propose an algorithm to help secondary users update the choices of power allocation (Algorithm 1).
3.1 Algorithm 1: Updating choice of power consumption for each user (complete information)
1: for k=1 to M do
2: Given other users’ information
3: The k th secondary user updates power allocation according to the result in (7)
4: end for
5: Repeat until all secondary users reach the NE.
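A single best-response step of Algorithm 1 can be sketched as a water-filling computation with the water level 1/η_k found by bisection; the helper `best_response` and its example gains are hypothetical, not the paper’s implementation.

```python
def best_response(k, P, G, P_max, P_T, N0=1.0):
    """Water-filling step for user k with the other users' powers fixed:
    P_k^i = clip(1/eta - (N0 + interference_i) / G[k][i], 0, P_max[i]),
    where eta is chosen by bisection so the total power meets P_T.
    P[j][i] and G[j][i] are hypothetical example inputs."""
    N = len(G[k])

    def alloc(eta):
        out = []
        for i in range(N):
            interf = N0 + sum(P[j][i] * G[j][i]
                              for j in range(len(P)) if j != k)
            p = 1.0 / eta - interf / G[k][i]
            out.append(min(max(p, 0.0), P_max[i]))
        return out

    lo, hi = 1e-9, 1e9
    for _ in range(200):      # bisection: a larger eta lowers the water level
        eta = (lo + hi) / 2.0
        if sum(alloc(eta)) > P_T:
            lo = eta
        else:
            hi = eta
    return alloc(hi)
```

Iterating this step over all users, as in Algorithm 1, is the usual IWFA-style dynamic.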
Remark 1
We can see that, with full information about the distribution of G_{k,p}^i, ∀k, ∀i, the result for a single secondary user is a typical water-filling solution. The algorithm is similar to the iterative water-filling algorithm (IWFA). Such solutions and algorithms are commonly found in multiple access channels and potential games [31, 32]. Therefore, with complete information, the chance constraints in our model can be reformulated as convex ones and the problem can be solved readily.
4 A stochastic approximation approach based on the outage event
The reformulation of probabilistic constraints into deterministic convex sets requires the full knowledge of the distributions of the channel gains. Even without exact information about the distributions, we still need to measure the channel such that approximations can be applied to reformulate the probabilistic constraints. However, measurement is difficult in some scenarios.
To reduce the complexity of measurement, instead of approximating the information of the distribution for the specified random variable, we can consider an outage event as follows
Definition 1
When {P}_{k}^{i}{G}_{k,p}^{i}>V, we define that an outage occurs for the k th user at the i th channel. When {P}_{k}^{i}{G}_{k,p}^{i}\le V, no outage occurs.
We then have the following
\Pr\left\{ P_k^i G_{k,p}^i > V \right\} = E\left( \zeta_k^i \right) \le \alpha, \quad \forall i,
where we have
\zeta_k^i = \chi\left\{ P_k^i G_{k,p}^i > V \right\}.
\zeta_k^i represents the random variable that equals one when an outage occurs and zero when there is no outage.
It can be seen that the probability constraints are also equivalent to the stochastic constraints.
Mathematically, the chance constraints are reformulated as expectation constraints. Since we do not have full information about the exact distribution of the outage event \zeta_k^i, we use the sample average to approximate the expectation. Given L samples of the events, we have
\hat{\zeta_k^i} = \frac{1}{L} \sum_{j=1}^{L} \left( \hat{\zeta_k^i} \right)_j = \frac{1}{L} \sum_{j=1}^{L} \chi\left\{ P_k^i \left( G_{k,p}^i \right)_j > V \right\},
where \left( G_{k,p}^i \right)_j denotes the j th sample of the channel gain. Equivalently, \left( \hat{\zeta_k^i} \right)_j = \chi\{ P_k^i ( G_{k,p}^i )_j > V \} denotes the j th sample of the outage event based on the j th sample of the channel gain. We assume that every sample \left( G_{k,p}^i \right)_j obtained is independent and identically distributed (iid).
We also have
E\left( \hat{\zeta_k^i} \right) = E\left( \zeta_k^i \right);
the expectation of the sample average estimator equals that of the original random variable.
4.1 Feasibility of the stochastic approximation method
When L is sufficiently large, the sample average value is used to approximate the expectation value. We have the following lemma.
Lemma 2
Given the values of P_k^i and V, when every sample of G_{k,p}^i obtained is iid, for any σ>0 we have
\Pr\left\{ \left| \hat{\zeta_k^i} - E\left( \zeta_k^i \right) \right| \ge \sigma \right\} \le 2\exp\left( -2L\sigma^2 \right).
Proof
By the Hoeffding inequality, we have
\Pr\left\{ \left| \frac{1}{L} \sum_{j=1}^{L} \left( \hat{\zeta_k^i} \right)_j - E\left( \zeta_k^i \right) \right| \ge \sigma \right\} \le 2\exp\left( -\frac{2L^2\sigma^2}{\sum_{j=1}^{L} \left( b_j - a_j \right)^2} \right),
where a_j and b_j represent the lower and upper bounds of the random variable \left( \hat{\zeta_k^i} \right)_j, respectively, with probability one. Since \zeta_k^i can only take the values 0 or 1, we assign a_j=0, ∀j and b_j=1, ∀j. We finally have
\Pr\left\{ \left| \hat{\zeta_k^i} - E\left( \zeta_k^i \right) \right| \ge \sigma \right\} \le 2\exp\left( -2L\sigma^2 \right).
□
This lemma says that if we take the average of L random variables to be our estimate of the expected value, then the probability of being far away from the true value is small so long as L is large.
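The concentration stated in Lemma 2 is easy to check numerically. In this sketch the gain distribution (lognormal) and the helper `outage_estimate` are assumptions made only for the experiment:

```python
import random

def outage_estimate(P, V, L, seed=0):
    """Sample-average estimate of E(zeta) = Pr{P * G > V} from L i.i.d.
    draws of an assumed lognormal channel gain G (the paper leaves the
    distribution open; lognormal is chosen here only for the experiment)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(L) if P * rng.lognormvariate(0.0, 1.0) > V)
    return hits / L
```

For P = V = 1 the true outage probability is exactly 0.5 (the median of this gain is 1); by the bound in Lemma 2, a 20000-sample estimate misses it by more than 0.02 with probability at most 2·exp(−16).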
Moreover, since all parameters are nonnegative, we can reformulate the probability constraint as follows
\Pr\left\{ G_{k,p}^i > \frac{V}{P_k^i} \right\} \le \alpha,
where when P_k^i = 0, we denote \frac{V}{P_k^i} = +\infty. We also define the feasible set of the original problem with respect to i as follows
X_\alpha^i = \left\{ \mathbf{P}_k \in X : \Pr\left\{ P_k^i G_{k,p}^i > V \right\} \le \alpha \right\}.
Thus the whole feasible set is defined as X_\alpha = \bigcap_{i=1}^{N} X_\alpha^i. The feasible region of the sample average approximation problem for γ∈[0,1) is
\hat{X_\gamma^i} = \left\{ \mathbf{P}_k \in X : \hat{\zeta_k^i} \le \gamma \right\}.
Similarly, the whole feasible set for the sample average approximation is defined as \hat{{X}_{\gamma}}=\bigcap _{i=1}^{N}\hat{{X}_{\gamma}^{i}}.
In particular, when taking the sample average approximation, the secondary users would like to be more conservative. Given τ>0, we define
\hat{X_{\gamma,\tau}^i} = \left\{ \mathbf{P}_k \in X : \frac{1}{L} \sum_{j=1}^{L} \chi\left\{ P_k^i \left( G_{k,p}^i \right)_j > V - \tau \right\} \le \gamma \right\},
and \hat{{X}_{\gamma ,\tau}}=\bigcap _{i=1}^{N}\hat{{X}_{\gamma ,\tau}^{i}}.
Since P_k^i G_{k,p}^i - V is Lipschitz continuous in P_k^i when the channel gain G_{k,p}^i is bounded, we have
\left| P_k^i G_{k,p}^i - \left( P_k^i \right)^\prime G_{k,p}^i \right| \le c_k^i \left| P_k^i - \left( P_k^i \right)^\prime \right|,
where we define the Lipschitz constant c_k^i > 0. It can be shown that this constant c_k^i can serve as the bound on the channel gain G_{k,p}^i. P_k^i and \left( P_k^i \right)^\prime are the i th elements of the vectors \mathbf{P}_k and \left( \mathbf{P}_k \right)^\prime. It can be shown that the strategy set X is bounded; let D be the diameter of X. Then we have the following lemma.
Lemma 3
Let γ∈[0,α), β∈(0,α−γ) and τ>0; we have
\Pr\left\{ \hat{X_{\gamma,\tau}} \subseteq X_\alpha \right\} \ge 1 - N \left\lceil \frac{2 c_{k,max} D}{\tau} \right\rceil^{m} \exp\left( -2L\left( \alpha - \gamma - \beta \right)^2 \right),
where m is a finite constant and c_{k,max} represents the largest Lipschitz constant among the c_k^i, ∀i.
Proof
See Appendix. □
The number of primary links N is finite in all practical scenarios. We can see that by adding τ>0 and applying the sample average approximation method, the new strategy set is still feasible with large probability when the sample number L is sufficiently large.
Remark 2
In some practical scenarios, secondary users want to guarantee that the constraints are satisfied even with the approximation. Thus they would have γ<α and introduce the variable τ>0 to make the requirements tighter than the original ones. With these guard measures, the new strategy set is feasible with a large probability as Lemma 3 states.
4.2 SDLAI
To solve the optimization problem with sample average approximation, we can apply the general stochastic Arrow–Hurwicz algorithm [33]. First of all, we form the Lagrangian with the probabilistic constraints as follows
L_k\left( \mathbf{P}_k, \boldsymbol{\lambda}_k \right) = u_k\left( \mathbf{P}_k \right) + \sum_{i=1}^{N} \lambda_k^i \left( \alpha - E\left( \zeta_k^i \right) \right),
and we define \Theta_k\left( \mathbf{P}_k \right) = \sum_{i=1}^{N} \left( \alpha - E\left( \zeta_k^i \right) \right). We have the updates of P_k^i and λ_k^i in vector form as follows
\mathbf{P}_k(n+1) = \pi_X\left[ \mathbf{P}_k(n) + \beta(n) \left( \nabla u_k\left( \mathbf{P}_k(n) \right) + \boldsymbol{\lambda}_k(n) \odot \nabla \Theta_k(n) \right) \right],
\boldsymbol{\lambda}_k(n+1) = \left[ \boldsymbol{\lambda}_k(n) - \beta(n) \Theta_k(n) \right]^+,
where x^+ = max(x,0) and ⊙ denotes the elementwise multiplication of two vectors. π_X[.] represents the Euclidean projection onto the set X. β(n) is the step size for the n th iteration. However, in our case, it is very difficult to obtain the gradient of E\left( \zeta_k^i \right) with respect to P_k^i; that is, it is difficult to obtain the exact ∇Θ_k. To apply this algorithm, we need to approximate the gradient of this indicator function. From [33], we can choose the following result as the approximate gradient
\nabla_{P_k^i} \Theta_k \approx -\widehat{\nabla_{P_k^i} E\left( \zeta_k^i \right)},
where we have
\widehat{\nabla_{P_k^i} E\left( \zeta_k^i \right)} = \frac{G_{k,p}^i}{r} \, h\!\left( \frac{V - P_k^i G_{k,p}^i}{r} \right),
where r>0 and the function h(.) has the following properties
h(x) \ge 0, \qquad \int_{-\infty}^{+\infty} h(x)\, dx = 1,
and h(x) has a unique maximum at x=0.
There are many choices about h(x) [33]. From [33], we choose the following function
where we define
Remark 3
We can see that, since r is a small positive constant as defined in [33], when P_k^i G_{k,p}^i is much smaller than V, the approximate gradient is zero, so the power increases through the update because ∇u_k is positive. When P_k^i G_{k,p}^i is very close to V, the approximate gradient is a large negative value, so the power may decrease. Therefore, the update scheme helps the power increase while P_k^i G_{k,p}^i is not close to V and helps it decrease when P_k^i G_{k,p}^i would exceed V.
Remark 4
Moreover, since the approximate gradient is not the exact value, the initial values of the power vector P_k for the algorithm should be small. Such initial values are common in gradient update schemes and are widely applied in gradient and subgradient methods [34].
It can be shown that this approximate gradient simply acts as an indicator of whether to subtract λ_k in the power update equations. When the expectation constraint is satisfied, the approximate gradient is zero, so the power is increased through the update. Otherwise the approximate gradient is negative, so the power is decreased. In some practical scenarios, it can be further simplified, as we will show in our simulation results.
We can see that, to update the power level, secondary users need the feedback information about λ and the approximate gradient. The secondary user updates the power levels as the primal update scheme, while the primary links observe the outage events and feed back the respective information through λ as the dual update scheme. The result is a primal-dual update scheme in a stochastic setting.
Moreover, we also need to apply the sample average method to evaluate the value of \alpha - E\left( \zeta_k^i \right). The algorithm scheme can be summarized as follows and we denote it as SDLAI:

Power level would be adjusted based on a gradient-like algorithm.

Step 1: Between the n th iteration and the (n + 1)th iteration, the k th secondary user obtains the feedback information of the outage event regarding the power allocation \mathbf{P}_k(n).

Step 2: The primary links calculate the outage with regard to all past history. When an outage occurred given the channel information of the j th iteration, ∀j≤n, we denote O_k^i(j)=1; otherwise it is zero. We obtain an outage probability \frac{N_k^i(n)}{n} for the n th iteration, where N_k^i(n) = \sum_{j=1}^{n} O_k^i(j) represents the total number of outages during all previous iterations and 0 \le N_k^i(n) \le n.

Step 3: Based on the observed outage probability, the primary link updates the respective λ based on the dual update scheme and calculates the approximate gradient. The primary link feeds back such information to the k th secondary user.

Step 4: The k th secondary user then updates the power level for the (n + 1)th iteration based on the primal update scheme.

Step 5: The power level and the respective dual variable λ are updated iteratively.
The update of our algorithm is as follows
\mathbf{P}_k(n+1) = \pi_X\left[ \mathbf{P}_k(n) + \beta(n) \left( \nabla u_k\left( \mathbf{P}_k(n) \right) + \boldsymbol{\lambda}_k(n) \odot \widehat{\nabla \Theta}_k(n) \right) \right],
\boldsymbol{\lambda}_k(n+1) = \left[ \boldsymbol{\lambda}_k(n) - \beta(n) \left( \boldsymbol{\alpha} - \frac{\mathbf{N}_k(n)}{n} \right) \right]^+,
where \widehat{\nabla \Theta}_k(n) is the approximate gradient and \mathbf{N}_k(n) = \left[ N_k^1(n) \dots N_k^N(n) \right].
We apply a learning algorithm to learn the event \zeta_k^i such that the constraint is satisfied in the long term. The secondary users obtain feedback to update their choices of power. Moreover, in the learning process, the secondary users only need to know the instantaneous outage events and do not need to know the distributions of the specific channel gains. Secondary users also do not need to handle measurements. Thus the k th secondary user can focus on the randomness of the term \zeta_k^i instead of the specific channel gains.
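The steps above can be sketched for a single user as follows. The log-utility, the lognormal gains, the crude scaling projection, and the constant −1 approximate gradient (the simplification used later in the simulations) are all illustrative assumptions rather than the paper’s exact implementation.

```python
import random

def sdla1(G_dir, V, alpha, P_T, steps=4000, seed=1):
    """SDLAI sketch for one user: primal ascent on the powers, dual update
    of lambda driven by the running outage frequency N_k^i(n)/n fed back
    by the primary links."""
    rng = random.Random(seed)
    N = len(G_dir)
    P = [0.01] * N            # small initial powers, as Remark 4 suggests
    lam = [0.0] * N
    outages = [0] * N
    for n in range(1, steps + 1):
        beta = 1.0 / n
        Gp = [rng.lognormvariate(0.0, 0.5) for _ in range(N)]  # unknown to the user
        for i in range(N):
            if P[i] * Gp[i] > V:
                outages[i] += 1
            grad_u = G_dir[i] / (1.0 + P[i] * G_dir[i])   # d/dP log(1 + P*G)
            approx_grad = -1.0 if outages[i] / n >= alpha else 0.0
            P[i] = max(P[i] + beta * (grad_u + lam[i] * approx_grad), 0.0)
            lam[i] = max(lam[i] - beta * (alpha - outages[i] / n), 0.0)
        total = sum(P)        # crude projection back onto the total-power set
        if total > P_T:
            P = [p * P_T / total for p in P]
    return P, [o / steps for o in outages]
```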
Remark 5
Chance constraints are generally difficult to tackle even when they are reformulated into expectation constraints, let alone when we do not have information about the distributions. The reformulated expectation constraints also differ from the general expectation constraints discussed in the stochastic programming literature, which poses great difficulties for an exact proof of convergence of the algorithms. In [35] and [36], researchers discuss the complexity of stochastic programming with chance constraints and illustrate that few analytical results can be provided even under sample average approximations, since chance constraints present some severe difficulties.
Remark 6
In our system model, for SDLAI, E\left( \zeta_k^i \right) is close to \hat{\zeta_k^i} with large probability by Lemma 2 given a large L. Suppose that there exists a large number n_0 such that for L ≥ n_0 we can treat E\left( \zeta_k^i \right) = \hat{\zeta_k^i} as a relaxation. Then, in this case, for n ≥ n_0, our SDLAI can be written as follows
\mathbf{P}_k(n+1) = \pi_X\left[ \mathbf{P}_k(n) + \beta(n) \left( \nabla u_k\left( \mathbf{P}_k(n) \right) + \boldsymbol{\lambda}_k(n) \odot \nabla \Theta_k(n) \right) \right],
\boldsymbol{\lambda}_k(n+1) = \left[ \boldsymbol{\lambda}_k(n) - \beta(n) \Theta_k(n) \right]^+,
where E\left( \zeta_k^i \right) = \hat{\zeta_k^i} can be obtained from previous iterations with approximations. Given a close deterministic approximation of the gradient of Θ_k(n), the above SDLAI can be treated as a deterministic primal-dual update algorithm. Such primal-dual algorithms have been discussed in some areas [25, 37].
4.3 SDLAII
In SDLAI, though secondary users do not need to measure the channel gain G_{k,p}^i, the calculation of λ_k and the approximate gradient is still required at the primary links. We can further reduce the burden of such calculation so that only simple computation and feedback are needed for both primary links and secondary users, which is desirable in practical scenarios.
We propose another scheme to adjust power levels with observed outage probability. Instead of updating power level with information about λ and the approximate gradient, the secondary user can update the power level just with the feedback information about the observed outage probability. In this case, only primal update is realized without dual variables.
We denote it as SDLAII. The algorithm scheme is similar to that in SDLAI but we change some steps as follows.

Changes of Step 3: No feedback of dual variables from the primary links. Secondary users check the feasibility of the constraints with the observed outage probability to see whether \alpha > \frac{N_k^i(n)}{n}.

Changes of Step 4: If \alpha > \frac{N_k^i(n)}{n}, secondary users regard the constraint as satisfied and increase the power level for the next iteration. Otherwise they regard the power as so high that it needs to be reduced to satisfy the constraint.

Changes of Step 5: There is no update of dual variables or approximate gradients. Secondary users only update the power levels iteratively.
Though the observed outage probability is not the exact value of the expectation, it approximates the true value when n is sufficiently large by the law of large numbers, that is, when the number of iterations is large enough.
We define
V_k^i(n) = \chi\left\{ \alpha > \frac{N_k^i(n)}{n} \right\},
and we define the vector form \mathbf{V}_k(n) = \left[ V_k^1(n) \; V_k^2(n) \dots V_k^i(n) \dots V_k^N(n) \right].
We have the gradient method as follows
P_k^i(n+1) = \left[ P_k^i(n) + \beta(n) \left( \frac{\partial u_k}{\partial P_k^i} V_k^i(n) - \left( 1 - V_k^i(n) \right) \right) \right]^+.
We can see that when the calculated outage level is smaller than α, {P}_{k}^{i} would increase according to the general gradient algorithm. When the calculated outage level is larger than α, we regard that {P}_{k}^{i} is so large that it should be decreased.
In vector form, we have
\mathbf{P}_k(n+1) = \pi_X\left[ \mathbf{P}_k(n) + \beta(n) \left( \nabla u_k\left( \mathbf{P}_k(n) \right) \odot \mathbf{V}_k(n) - \left( \mathbf{1} - \mathbf{V}_k(n) \right) \right) \right].
The secondary users update the power allocation based on the feedback of outage events such that in the long term, the probability constraints can be satisfied and the utility is maximized.
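A minimal single-user sketch of SDLAII, with a log-utility and lognormal gains to the primary receiver assumed purely for illustration:

```python
import random

def sdla2(G_dir, V, alpha, P_T, steps=4000, seed=2):
    """SDLAII sketch: primal-only updates, no dual variables. The user
    raises power on channel i while the fed-back outage frequency
    N_k^i(n)/n stays below alpha, and backs off otherwise."""
    rng = random.Random(seed)
    N = len(G_dir)
    P = [0.01] * N
    outages = [0] * N
    for n in range(1, steps + 1):
        beta = 1.0 / n
        Gp = [rng.lognormvariate(0.0, 0.5) for _ in range(N)]
        for i in range(N):
            if P[i] * Gp[i] > V:
                outages[i] += 1
            if alpha > outages[i] / n:   # constraint looks satisfied: ascend
                P[i] += beta * G_dir[i] / (1.0 + P[i] * G_dir[i])
            else:                        # too many outages: back off
                P[i] = max(P[i] - beta, 0.0)
        total = sum(P)                   # crude projection onto total power
        if total > P_T:
            P = [p * P_T / total for p in P]
    return P
```

Compared with the SDLAI scheme, only the running outage frequency has to be fed back; no λ or gradient information is exchanged.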
Remark 7
For SDLAII, similar to the remark discussion above about SDLAI, suppose that when n ≥ n_0, the constraint \alpha \ge \hat{\zeta_k^i} is satisfied with large probability such that we can treat \alpha \ge \hat{\zeta_k^i} as holding for future iterations. Our SDLAII for the k th secondary user can then be written as follows
\mathbf{P}_k(n+1) = \pi_X\left[ \mathbf{P}_k(n) + \beta(n) \nabla u_k\left( \mathbf{P}_k(n) \right) \right].
We can see that this is a deterministic gradient update algorithm, and such algorithms have been discussed in [34, 38, 39].
5 Numerical results and discussion
5.1 Examples of u_{ k }(.) for simulation
For the simulation, we first illustrate some examples of the objective function u_k(\mathbf{P}_k). We have one as the following
u_k = \sum_{i=1}^{N} E_{\mathbf{G}^i}\left( R_k^i \right),
where we have
R_k^i = \log\left( 1 + \frac{P_k^i G_k^i}{N_0 + \sum_{j=1,j\ne k}^{M} P_j^i G_j^i} \right)
and \mathbf{G}^i = \left[ G_1^i \; G_2^i \dots G_k^i \dots G_M^i \right]. Since G_k^i also changes every time slot, we want to maximize the expected value. E_{\mathbf{G}^i}\left( R_k^i \right) denotes the expected value with respect to \mathbf{G}^i. In the sequel we drop the subscript and write E\left( R_k^i \right) instead of E_{\mathbf{G}^i}\left( R_k^i \right) when this does not lead to confusion. It can be shown that this u_k(\mathbf{P}_k) is concave and monotonically increasing with respect to every element P_k^i.
In particular, when G_k^i does not change much during the learning process of G_{k,p}^i, that is, when the channel gain G_k^i remains roughly constant during the algorithm, we take the following example
u_k = \sum_{i=1}^{N} \log\left( 1 + \frac{P_k^i G_k^i}{N_0 + \sum_{j=1,j\ne k}^{M} P_j^i G_j^i} \right).
There are some practical scenarios in which the channel gain {G}_{k}^{i} would not change much. We regard that it would be a constant in the objective function. This kind of approximation is used in the stochastic modeling and algorithm in [6].
5.2 Simulation model
We consider a microcell area for wireless transmissions. We set two secondary users allocated N=20 channels. Due to the environment setting, shadowing effects play an important role in the uncertainties of the channel gains. We set that the channel gain G_k^i is generated according to the following
G_k^i = \kappa_k^i \, d_k^{-\omega} \, 10^{\epsilon_k^i / 10},
where we set ω=3.3. \kappa_k^i is generated uniformly from [20,25]. d_k is generated uniformly from [1,20] m, which represents that transmissions of secondary users occur in a small area. \epsilon_k^i, which represents the shadowing effect, is generated from a Gaussian distribution N(0, (\sigma_k^i)^2). We set \sigma_k^i to be generated uniformly from the interval [0.1,2] in our simulation. The value of \sigma_k^i can be varied in an interval, and what we choose here represents some common environmental conditions of a microcell with shadowing effects [40, 41].
Similarly, to model the uncertainty of G_{k,p}^i, the channel gain between the primary links and secondary users, we consider that G_{k,p}^i is generated as follows
G_{k,p}^i = \kappa_{k,p} \, d_{k,p}^{-\alpha_p} \, 10^{\epsilon_{k,p}^i / 10},
where α_p is set to 3.1. d_{k,p} is generated uniformly from [1,15] m. κ_{k,p}=20 and \epsilon_{k,p}^i is generated from a Gaussian distribution N(0, (\sigma_{k,p}^i)^2). \sigma_{k,p}^i is generated uniformly from the interval [1,8] for each i and k. The large interval for \sigma_{k,p}^i represents uncertainties in these channel gains, since secondary users may not be able to obtain exact statistical knowledge of the channel gains based on the transmission environment.
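The simulated gains can be generated as below. The exact composition of path loss and dB-domain shadowing is an assumption consistent with common microcell models; the parameter ranges follow the text.

```python
import random

def shadowed_gain(kappa, d, omega, sigma, rng):
    """Channel gain under log-distance path loss with lognormal shadowing:
    gain = kappa * d**(-omega) * 10**(eps/10), eps ~ N(0, sigma^2) in dB.
    The exact composition is an assumption made here for illustration."""
    eps_db = rng.gauss(0.0, sigma)
    return kappa * d ** (-omega) * 10.0 ** (eps_db / 10.0)

def draw_secondary_gain(rng):
    """One draw of G_k^i with the text's ranges: omega = 3.3, kappa uniform
    on [20, 25], d uniform on [1, 20] m, sigma uniform on [0.1, 2]."""
    kappa = rng.uniform(20.0, 25.0)
    d = rng.uniform(1.0, 20.0)
    sigma = rng.uniform(0.1, 2.0)
    return shadowed_gain(kappa, d, 3.3, sigma, rng)
```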
We also set α=γ in our simulation. The choice of the distribution of G_{k,p}^i remains open; we just use this distribution to show some examples in a microcell with shadowing effects. For the step size β(n) in the algorithm, when \beta(n)=\frac{1}{n}, the result may finally converge to a stable point. However, in our simulation model, to speed up the gradient-like SDLA algorithms, we set β(n) equal to one. The choice of step size is also open. In our simulation, we just want to show some examples and to speed up the learning process.
5.3 Simulation results and discussions
In Figure 1, we consider the example when {u}_{k}=\sum _{i=1}^{N}E\left({R}_{k}^{i}\right). For the data rates with full information, we obtain the results by averaging the optimization results of each iteration with full information about {G}_{k,p}^{i}. For the data rates under SDLA, we obtain the final power allocations through the iterations and use them to calculate the average value. To keep the simulation simple, we set the approximate gradient with respect to power to minus one.
It can be seen in Figure 1 that the data rates generated by SDLAI and SDLAII are very close to those with full information about {G}_{k,p}^{i}. For both users, when the data rate is 0.6 bps/Hz, the algorithm with full information has only a 0.7 dB gain over SDLA. When the data rate is 0.8 bps/Hz, the gain is only about 0.8 dB. When the total power ranges from 1 to 20 dB, the data rates under SDLAI are very close to those under SDLAII. When the data rate is 1 bps/Hz, for both users, SDLAI has only a 0.02 dB gain over SDLAII. Notice that the algorithm with full information must solve an optimization problem every time, while our SDLA can learn the result within a finite number of iterations and does not require optimization every time.
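The expected-rate utility {u}_{k}=\sum _{i}E\left({R}_{k}^{i}\right) used in this example can be evaluated by sample averaging over channel realizations. The sketch below assumes a Shannon-type rate R_{k}^{i}=\log_2(1+{p}_{k}^{i}{G}_{k}^{i}/\text{noise}) and the log-normal shadowing model from the simulation setup; both the rate expression and the function name are illustrative, not the article's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_rate_utility(p, kappa, d, omega, sigma, noise=1.0, n_samples=5000):
    """Sample-average estimate of u_k = sum_i E(R_k^i) (illustrative)."""
    # Draw n_samples shadowing realizations per channel (in dB).
    eps = rng.normal(0.0, sigma, size=(n_samples, len(p)))
    G = kappa * d ** (-omega) * 10 ** (eps / 10)      # assumed gain model
    rates = np.log2(1.0 + p * G / noise)              # assumed Shannon rate
    return rates.mean(axis=0).sum()                   # average over samples, sum over channels

N = 20
u = expected_rate_utility(
    p=np.full(N, 0.5),
    kappa=rng.uniform(20, 25, size=N),
    d=5.0,
    omega=3.3,
    sigma=rng.uniform(0.1, 2, size=N),
)
```

This is the same sample average approximation idea the algorithms rely on: the expectation is never computed in closed form, only estimated from realizations.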
In Figure 2, we consider the allocated power over iterations. It can be seen that the power allocated for different channels remains almost constant after about 400 iterations. For user one, the value for channel one remains about 0.5 and that for channel three remains about 1.4.
It can be seen in Figure 3 that, when V=1, the data rates generated by SDLAI and SDLAII are very close to those with full information. For user one, when the data rate is 0.8 bps/Hz, the algorithm with full information has only a 0.8 dB gain over SDLA. When the data rate is 1 bps/Hz, the gain is only about 0.7 dB. When the total power ranges from 1 to 20 dB, the data rates under SDLAI are very close to those under SDLAII. When the data rate is 1 bps/Hz, for user one, SDLAI has only a 0.02 dB gain over SDLAII.
In Figure 4, we set {u}_{k}=\sum _{i=1}^{N}{R}_{k}^{i}. It can be seen that the data rates generated by both SDLAI and SDLAII are very close to those with full information about {G}_{k,p}^{i}. For user one, when the data rate is 0.6 bps/Hz, the algorithm with full information has only 0.1 dB gain over SDLAI and SDLAII. For user two, when the data rate is 0.4 bps/Hz, the algorithm with full information has only 0.5 dB gain over SDLAI and SDLAII. Moreover, the data rates under SDLAI are very close to those under SDLAII when the total power ranges from 1 to 20 dB. For both users, when the data rate is 0.8 bps/Hz, SDLAI has about 0.03 dB gain over SDLAII.
In Figure 5, when α=0.02 and V=1, it can be seen that the data rates generated by both SDLAI and SDLAII are very close to those with complete information. For both users, when the data rate is 0.2 bps/Hz, the algorithm with full information has only 0.3 dB gain over SDLAI and SDLAII. Also, the data rates under SDLAI are very close to those under SDLAII when the total power ranges from 1 to 20 dB. Moreover, for both users, when total power is larger than 20 dB, the data rates would not improve due to maximum power constraints. Results under SDLAI and SDLAII are still close to those with complete information.
Moreover, comparing Figure 1 with Figure 4, our SDLA algorithms give good performance, especially when the variation of {G}_{k}^{i} over iterations is small. In Figure 4, the channel gains in the utility functions remain the same over the learning process. A small variation of {G}_{k}^{i} over the learning process is reasonable in some practical scenarios. For example, when the standard deviation of the shadowing effects of the channel gain is small, the distance between the transmitter and the receiver becomes the key factor, so the value of the channel gain does not change very much.
6 Conclusion
Resource allocation for secondary users with uncertainties is an important issue in cognitive radio networks. In our article, we introduce a resource allocation scheme for secondary users to share spectrum in a cognitive radio network. Secondary users can exceed the interference level with a predefined small probability in order to share the spectrum. There are uncertainties about the channel states between the secondary users and the primary links. We apply chance constraints to represent the interference level requirements under these uncertainties. Since chance constraints are generally difficult to solve and full information about the uncertain variables is not available, we reformulate them into stochastic expectation constraints. The secondary users can learn from the outage feedback information instead of measuring certain channel gains to satisfy the constraints. We propose two SDLAs to help secondary users adjust their power to maximize the utility with only feedback information from the primary links. Our simulation results show that the algorithms can give performance close to that with complete information.
Appendix
Proof of Lemma 3
According to the proof of Theorem 10 in [36], for β∈(0,α−γ), there exist finite sets {Z}_{i}^{\tau}\subseteq X with
where ⌈·⌉ denotes the upper integer part, and for any {\mathit{P}}_{\mathit{k}}\in \hat{{X}_{\gamma ,\tau}} and any i there exists \mathit{z}\in {Z}_{i}^{\tau} such that \left\Vert \mathit{z}-{\mathit{P}}_{\mathit{k}}\right\Vert \le \frac{\tau}{{c}_{k}^{i}}. Using the finite set {Z}_{i}^{\tau} we can define
Moreover, from the proof of Theorem 10 in [36], for all i it holds that {Z}_{\gamma}^{\tau ,i}\subseteq {Z}_{\alpha \beta}^{\tau ,i} implies \hat{{X}_{\gamma ,\tau}^{i}}\subseteq {X}_{\alpha}^{i}. From the proof in Section 3.2.2 in [42], for the finite set, we have
where we apply the Bonferroni inequality
\text{Pr}\left({\cup}_{i}{A}_{i}\right)\le \sum _{i}\text{Pr}\left({A}_{i}\right),
where each A_{i} is an event.
From the proof in Section 3.2.2 in [42], since {Z}_{\gamma}^{\tau}\subseteq {Z}_{\alpha \beta}^{\tau} implies \hat{{X}_{\gamma ,\tau}}\subseteq {X}_{\alpha}, we can obtain the result as follows
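The Bonferroni inequality invoked in the proof bounds the probability of a union by the sum of the individual probabilities, regardless of how the events are correlated. A quick numerical check on hypothetical correlated events:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three correlated events on a common sample space: A_i = {u_i < q_i},
# built from the same batch of uniform draws (thresholds are illustrative).
samples = rng.uniform(size=(100_000, 3))
thresholds = np.array([0.10, 0.05, 0.02])
events = samples < thresholds                 # boolean, shape (100000, 3)

p_union = np.mean(events.any(axis=1))         # Pr(A_1 ∪ A_2 ∪ A_3)
p_sum = events.mean(axis=0).sum()             # sum_i Pr(A_i)

assert p_union <= p_sum                       # Bonferroni (union) bound holds
```

In the proof, this is what lets the per-set confidence levels be summed over the finite covering sets {Z}_{i}^{\tau} to bound the overall failure probability.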
References
Rappaport TS: Wireless Communications: Principles and Practice. Prentice Hall, New York; 2002.
Proakis J: Digital Communications. McGraw-Hill, New York; 2001.
Tse D, Viswanath P: Fundamentals of Wireless Communication. Cambridge University Press, Cambridge; 2005.
Haykin S: Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun 2005, 23(2):201-220.
Grokop L, Tse DN: Spectrum sharing between wireless networks. In Proc. 27th IEEE International Conference on Computer Communications (INFOCOM’08). Phoenix, AZ, USA; Apr 2008:201-205.
Huang S, Liu X, Ding Z: Decentralized cognitive radio control based on inference from primary link control information. IEEE J. Sel. Areas Commun 2011, 29(2):394-406.
Zhao G, Li GY, Yang C: Proactive detection of spectrum opportunities in primary systems with power control. IEEE Trans. Wirel. Commun 2009, 8(9):4815-4823.
Acharya J, Yates RD: A framework for dynamic spectrum sharing between cognitive radios. In IEEE 2007 International Conference on Communications (ICC’07). Glasgow, UK; June 2007:5166-5171.
Cendrillon R, Yu W, Moonen M, Verlinden J: Optimal multiuser spectrum balancing for digital subscriber lines. IEEE Trans. Commun 2006, 54(5):922-933.
Etkin R, Parekh A, Tse D: Spectrum sharing for unlicensed bands. IEEE J. Sel. Areas Commun 2007, 25(3):517-528.
Pang JS, Scutari G, Palomar DP, Facchinei F: Design of cognitive radio systems under temperature-interference constraints: a variational inequality approach. IEEE Trans. Signal Process 2010, 58(6):3251-3271.
Hayashi S, Luo ZQ: Spectrum management for interference-limited multiuser communication systems. IEEE Trans. Inf. Theory 2009, 55(3):1153-1175.
Xing Y, Mathur CN, Haleem MA, Chandramouli R, Subbalakshmi KP: Dynamic spectrum access with QoS constraints and interference temperature. IEEE Trans. Mobile Comput 2007, 6(4):423-433.
Chung PJ, Du H, Gondzio J: A probabilistic constraint approach for robust transmit beamforming with imperfect channel information. IEEE Trans. Signal Process 2011, 59(6):2773-2782.
Zhang YJ, So AMC: Optimal spectrum sharing in MIMO cognitive radio networks via semidefinite programming. IEEE J. Sel. Areas Commun 2011, 29(2):362-373.
Nemirovski A, Shapiro A: Convex approximation of chance constrained programs. SIAM J. Opt 2006, 17(4):969-996.
Ruszczynski A, Shapiro A: Stochastic Programming. In Handbooks in Operations Research and Management Science. Edited by: Shapiro A. Elsevier, Amsterdam; 2003.
Robbins H, Monro S: A stochastic approximation method. Ann. Math. Stat 1951, 22:400-407. 10.1214/aoms/1177729586
Pagnoncelli BK, Ahmed S, Shapiro A: Sample average approximation method for chance constrained programming: theory and applications. J. Opt. Theory Appl 2009, 142(2):399-416. 10.1007/s10957-009-9523-6
Nemirovski A, Juditsky A, Lan G, Shapiro A: Robust stochastic approximation approach to stochastic programming. SIAM J. Opt 2009, 19(4):1574-1609. 10.1137/070704277
Yousefian F, Nedic A, Shanbhag UV: On stochastic gradient and subgradient methods with adaptive steplength sequences. Automatica 2012, 48(1):56-67. 10.1016/j.automatica.2011.09.043
Ram SS, Nedic A, Veeravalli VV: Distributed stochastic subgradient projection algorithms for convex optimization. J. Opt. Theory Appl 2010, 147(3):516-545. 10.1007/s10957-010-9737-7
Wang X, Gao N: Stochastic resource allocation over fading multiple access and broadcast channels. IEEE Trans. Inf. Theory 2010, 56(5):2382-2391.
Ribeiro A: Ergodic stochastic optimization algorithms for wireless communication and networking. IEEE Trans. Signal Process 2010, 58(12):6369-6386.
Zhang J, Zheng D, Chiang M: The impact of stochastic noisy feedback on distributed network utility maximization. IEEE Trans. Inf. Theory 2008, 54(2):645-665.
Saraydar CU, Mandayam NB, Goodman DJ: Pricing and power control in a multicell wireless data network. IEEE J. Sel. Areas Commun 2001, 19(10):1883-1892. 10.1109/49.957304
Xiao M, Shroff NB, Chong EKP: A utility-based power control scheme in wireless cellular systems. IEEE/ACM Trans. Netw 2003, 11(2):210-221. 10.1109/TNET.2003.810314
Fudenberg D, Tirole J: Game Theory. MIT Press, Cambridge; 1991.
Osborne MJ, Rubinstein A: A Course in Game Theory. MIT Press, Cambridge; 1994.
Monderer D, Shapley L: Potential games. Games Econ. Behav 1996, 14(1):124-143. 10.1006/game.1996.0044
Palomar DP, Chiang M: A tutorial on decomposition methods for network utility maximization. IEEE J. Sel. Areas Commun 2006, 24(8):1439-1451.
He G, Cottatellucci L, Debbah M: The waterfilling game-theoretical framework for distributed wireless network information flow. EURASIP J. Wirel. Commun. Netw 2010, 2010:13.
Andrieu L, Cohen G, Vazquez-Abad FJ: Stochastic programming with probabilistic constraints. In 10th International Conference on Stochastic Programming. Tucson, Arizona, USA; Oct 2004:493-501.
Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, Cambridge; 2004.
Nemirovski A, Shapiro A: Continuous Optimization: Current Trends and Applications. Springer, Berlin, Germany; 2005.
Luedtke J, Ahmed S: A sample approximation approach for optimization with probabilistic constraints. SIAM J. Opt 2007, 19:674-699.
Voice T: Stability of congestion control algorithms with multipath routing and linear stochastic modeling of congestion control. Ph.D. dissertation, University of Cambridge, 2006.
Bertsekas DP: Nonlinear Programming. Athena Scientific, Belmont; 2003.
Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs; 1989.
Mark JW, Zhuang W: Wireless Communications and Networking. Prentice Hall, Englewood Cliffs; 2003.
Sklar B: Rayleigh fading channels in mobile digital communication systems. Part I: characterization. IEEE Commun. Mag 1997, 35(7):90-100. 10.1109/35.601747
Branda M: Reformulation of general chance constrained problems using the penalty functions. Stochastic Programming E-Print Series, vol. 2, pp. 1-18, May 2010.
Competing interests
The authors declare that they have no competing interests.
Cite this article
Zhou, K., Lok, T.M. Resource allocation for secondary users with chance constraints based on primary links control feedback information. J Wireless Com Network 2012, 346 (2012). https://doi.org/10.1186/1687-1499-2012-346
Keywords
 Power control
 Cognitive radio networks
 Probabilistic constraints
 Sample average approximation