
Structure-based learning in wireless networks via sparse approximation

Abstract

A novel framework for the online learning of expected cost-to-go functions characterizing wireless network performance is proposed. The framework is based on the observation that wireless protocols induce structured and correlated behavior of the finite state machine (FSM) modeling the operations of the network. As a result, a significant dimension reduction can be achieved by projecting the cost-to-go function on a graph wavelet basis set capturing typical sub-structures in the graph associated with the FSM. Sparse approximation with random projection is then used to identify a concise set of coefficients representing the cost-to-go function in the wavelet domain. This Compressed Sensing (CS) approach enables a considerable reduction in the number of observations needed to achieve an accurate estimate of the cost-to-go function. The proposed method is characterized via stability analysis. In particular, we prove that the standard CS approach of the Least Absolute Shrinkage and Selection Operator (LASSO) will not provide stability. We also determine a connection between the structure of the FSM induced by the wireless protocols and the restricted isometry property of the effective projection matrix. Simulation results of our approximation method show that 15 wavelet functions can accurately represent a cost-to-go function defined on a state space of 2000 states. Moreover, the number of state-cost observations needed to estimate the cost-to-go function is orders of magnitude smaller than that required by traditional online learning techniques.

Introduction

Given the recent explosion in the number and types of wireless devices, new design and optimization paradigms are needed to effectively manage the complex and heterogeneous nature of modern wireless networks. We propose a novel approach for the online learning of cost-to-go functions in networks modeled via large finite state machines (FSMs). Typical cost functions measure performance metrics such as throughput, packet delivery probability and delay. Cost-to-go functions measure the expected long-term cost incurred by the network from any state of the FSM. Estimation of cost-to-go functions is instrumental for the optimization of network control strategies. Our estimation approach is based on the observation that wireless networking protocols induce a structured behavior of the FSM, enabling dimension reduction of its state space via wavelet-projection and compressed sensing-like techniques. The sparse approximation approach proposed herein considerably reduces the length of the trajectory of the FSM required to achieve an accurate estimate of the cost-to-go function compared to traditional learning techniques.

Markov models have been widely used for the analysis and optimization of wireless networks [1–9]. In one of the earliest works on protocol modeling [1], a Markov chain is proposed to analyze the saturation throughput of IEEE 802.11 medium access control. The FSM models the backoff countdown counter controlling channel sensing and access of a wireless terminal and the retransmission index of the packet under transmission. In general, the Markov chains defined in these models track the logical state of the wireless protocols (e.g., the retransmission index of the packet being transmitted, the number of packets in the buffer and the backoff counter) as well as environmental variables (e.g., the channel state).

The online optimization of control strategies based on these models requires the estimation of cost-to-go functions from a sample-path of state-cost observations [10–12]. However, the immense size of the state space of FSMs associated with practical wireless networks limits the applicability of traditional online learning techniques to toy networks and extremely simple case studies. In fact, the estimation of cost-to-go functions in traditional online learning (e.g., Q-learning and reinforcement learning [10–12]) requires sufficient observation of a sample-path such that it hits all the states of the FSM a large number of times. Approximations of cost-to-go functions [13, 14] are generally based on oversimplified models and thus cannot be accurately used in general practical networks. For instance, the fluid approximation proposed in [14] is based on the assumption that the cost-to-go function is smooth in the state space of the FSM, meaning that only small variations of its value computed in neighboring states are allowed. This assumption is suitable for simple cases (e.g., buffer models and cost functions modeling buffer congestion), but does not hold for more complex FSM models and general cost functions.

This work provides the following contributions: 1) we present a framework based on CS for the approximation of cost-to-go functions; 2) we analyze the structure of the FSMs modeling wireless networks based on their decomposition into fundamental components; 3) we connect the structure of the FSM to the Restricted Isometry properties of the effective projection matrix; 4) we analyze the stability of CS in this context via perturbation analysis; 5) we present a methodology for the use of Diffusion Wavelets (DW) in online learning; 6) we present numerical results illustrating the performance of the proposed approach.

The framework proposed herein is not tailored to a specific canonical network example, but is rather based on the inherent structure of the FSMs modeling the operations of general wireless networks. The fundamental observation behind our framework is that the directed graph associated with the temporal evolution of the state of the FSM is inherently regular and local. As a consequence, typical trajectories on the FSM can be described by a number of graph sub-structures considerably smaller than the number of possible edges between states. Figure 1 depicts a schematic of the proposed online learning algorithm. A trajectory of the FSM associated with the operations of the physical network is used to estimate the transition probability matrix and the cost function and to formulate the estimation problem. The cost-to-go function is projected onto a graph wavelet basis set capturing relevant and typical substructures in the graph. Sparse approximation (in particular, the least-squares CS (LS CS) algorithm [15]) is then employed to identify a concise set of substructures to represent the cost function of interest.

Figure 1

Schematic of the algorithm. Graphical representation of the proposed approach. The physical network is a collection of terminals (gray circles) connected by wireless links: data (solid lines) and interference (dashed lines) links. The state of the terminals is defined by a collection of variables whose values evolve over time. The temporal evolution of the state of the terminals and of the links is modeled by the logical graph of the network. A sample-path of the network on the logical graph generates a sequence of observations, which are used to estimate the transition matrix $\hat{P}(t)$ and the cost vector $\hat{c}(t)$. The cost function is projected onto a diffusion wavelet basis. A concise representation in the wavelet domain of the long-term cost function $\bar{c}$ is found as the solution of a sparse approximation problem.

We characterize the performance of sparse approximation applied to the estimation problem addressed herein in terms of the minimum number of states that need to be observed to achieve an accurate estimate of the cost-to-go function. Our analysis is based on the decomposition of the FSM into fundamental structures we refer to as sub-chains. The transition matrix associated with the individual sub-chains is analyzed to measure the incoherence^a of the overall transition matrix, which is exploited to determine the conditions under which the restricted isometry property (RIP) [16] holds for our effective random projection matrix.

Note that whereas most prior work on sparse approximation focuses on static scenarios, the framework considered in this article addresses the problem of learning in dynamical systems. The inclusion of states visited a small number of times in the sample-path of the FSM results in instability of the estimation algorithm. To reduce this effect, we use the LS CS algorithm proposed in [15], which correlates successive outputs of the sparse approximation algorithm by constraining variations in the support of the solution.

Relevant to the approach proposed herein, Mahadevan et al. [17] proposed the use of DWs [18] as a projection basis for the sparse approximation of cost-to-go functions. In [17], offline estimation of the cost-to-go function is considered; however, no performance analysis is undertaken. In contrast, we examine online learning and provide a detailed analysis to assess the performance of sparse approximation applied to Markov models of wireless networks. Compressed sensing-based techniques have been previously applied to estimation problems in networks [19–24]. These works address graphs related to the physical connectivity of the network, where nodes are terminals and links are specific wired or wireless links, or are modeled by undirected graphs. We address the fundamentally different problem of estimating functions defined on the state space of the FSM, i.e., the logical graph of the wireless network, modeling the temporal evolution of the network from a small number of state observations.

Numerical results for a case of interest show that a small number of graph wavelets (15) are sufficient to accurately approximate a cost-to-go function defined on a state space of approximately 2000 states. Moreover, the proposed algorithm can estimate the cost-to-go function by observing a trajectory of the state of the FSM visiting a small subset of states in the state space.

The rest of this article is organized as follows. Section ‘System model and problem formulation’ describes the model of the network and defines the estimation problem. The sparse learning algorithm is presented in Section ‘Sparse estimation of cost functions’. Section ‘Structure of the graph’ proposes the decomposition of the overall graph into sub-chains and analyzes the properties of the transition probability matrix. Section ‘Perturbation analysis and performance bounds’ discusses the stability of sparse approximation applied in our context and characterizes the performance of the learning algorithm in terms of how the number of state observations grows with the network size. Numerical results are presented in Section ‘Numerical results’. Section ‘Conclusions’ concludes the article. The proofs of the stated theorems are in Appendices 1 and 2.

System model and problem formulation

The network is modeled as an FSM whose state evolves within the state space $\mathcal{S}$, with $N = |\mathcal{S}|$. Define $S(t) \in \mathcal{S}$ as the state of the FSM at time t = 0,1,2,…. We assume that the sequence $\mathbf{S} = \{S(0), S(1), S(2), \ldots\}$ is a Markov process with transition probabilities

$p(s, s') = P\left( S(t+1) = s' \mid S(t) = s \right),$
(1)

where P(·) denotes the probability of an event. The performance of the network is measured by a function $c(s, s')$ that assigns a positive and bounded cost to the transition from state s to state s′. The average cost from state s is

$c(s) = E_{s' \in \mathcal{S}}\left[ c(s, s') \right] = \sum_{s' \in \mathcal{S}} p(s, s')\, c(s, s').$^b
(2)

The function

$\bar{c}(S(t)) = c(S(t)) + E\left[ \sum_{\tau=1}^{\infty} \gamma^{\tau}\, c\left( S(t+\tau) \right) \right],$
(3)

where E[·] denotes expectation and $\gamma \in (0,1)$ is the discount factor, is the expected discounted long-term cost. This function is also known as the cost-to-go function and is central to dynamic programming (DP) and optimal control [10].

For any fixed $S(t) = s \in \mathcal{S}$, the function $\bar{c}(\cdot)$ is independent of the time index t and can be rewritten as

$\bar{c}(s) = c(s) + \sum_{s' \in \mathcal{S}} \sum_{\tau=1}^{\infty} \gamma^{\tau}\, p_{\tau}(s, s')\, c(s'),$
(4)

where

$p_{\tau}(s, s') = P\left( S(t+\tau) = s' \mid S(t) = s \right)$
(5)

is the τ-step transition probability from state s to s′.^c Consider the graph associated with the FSM, where vertices are states in $\mathcal{S}$ and edges are state transitions with non-zero probability. The temporal distance τ in the evolution of the FSM translates into a number of hops in the graphical representation. Starting from a vertex s, $\bar{c}(s)$ is computed by sequentially summing the discounted and weighted cost of the reachable vertices for an increasing number of hops in the graph. The representation of the temporal evolution of the network as a graph is the key to the sparse learning algorithm proposed herein.
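As a concrete illustration, the following minimal Python sketch (with purely illustrative values) computes $\bar{c}$ exactly when P and c are known, by solving the linear system implied by Equation (4); in online learning, P and c must instead be estimated, which is the problem addressed below.

```python
import numpy as np

def cost_to_go(P, c, gamma):
    """Solve (I - gamma * P) c_bar = c, the linear system implied by
    Equation (4); gamma < 1 guarantees invertibility for stochastic P."""
    N = P.shape[0]
    return np.linalg.solve(np.eye(N) - gamma * P, c)

# Toy 3-state chain with purely illustrative numbers.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0]])
c = np.array([1.0, 0.2, 0.0])
print(cost_to_go(P, c, gamma=0.9))
```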

In online learning, the function $\bar{c}$ is estimated from a sample-path of state-cost observations. The sample-path $O_T$ of observations up to time T includes the sequence of states $\{S(0), S(1), \ldots, S(T)\}$ and the state transition costs $\{c(S(0), S(1)), \ldots, c(S(T-1), S(T))\}$. Denote by $\bar{c}$ the vector collecting the expected long-term costs $\bar{c}(s)$ for all $s \in \mathcal{S}$, i.e., $\bar{c} = [\bar{c}(1), \bar{c}(2), \ldots, \bar{c}(N)]$.^d The objective is to build an estimator of $\bar{c}$ based on the observations $O_T$ minimizing a distance metric such as $\|\hat{c}_T - \bar{c}\|_2^2$, where $\hat{c}_T$ is the estimate at time T.

The main challenge in achieving an accurate estimate of $\bar{c}$ in wireless networks is the enormous size of the state space $\mathcal{S}$ of the associated FSM. In traditional online learning, an accurate estimate of $\bar{c}$ requires a sample-path of observations in which each state in $\mathcal{S}$ is visited a considerable number of times; indeed, all the allowed state transitions need to be observed a sufficient number of times to estimate their probability.

Sparse estimation of cost functions

We now present an algorithm for the online learning of cost-to-go functions in wireless networks from the observation of a state-cost trajectory of the associated FSM. The baseline observation is that networking protocols induce a structured behavior of the network, which is reflected in a structured graph associated with the FSM. Thus, every state-cost observation conveys information about multiple states due to the correlated behavior of the network. As a result, we can estimate $\bar{c}$ by exploiting this structure from fewer observations than in traditional learning. The algorithm is composed of three elements:

  • observation: the transition probabilities and cost function c(·) are estimated by observing a state-cost sample-path;

  • projection: $\bar{c}$ is projected onto a graph wavelet basis set capturing typical structures in the graph;

  • sparse estimation of $\bar{c}$: a sparse estimation algorithm is used to identify a concise set of basis functions providing the best fit with the estimated transition probabilities and cost function.

We define the N × N matrix P to be the probability transition matrix, where $P[s, s'] = p(s, s')$ as in Equation (1).

The long-term cost $\bar{c}$ can be rewritten as

$\bar{c} = c + \sum_{\tau=1}^{\infty} \gamma^{\tau} P^{\tau} c = c + \gamma P \bar{c}.$
(6)

Thus, $\bar{c}$ can be computed as the fixed point solution $\bar{c} = \Omega(\bar{c})$ of the operator $\Omega(\bar{c}) = c + \gamma P \bar{c}$. The transition matrix P and cost vector c are not known a priori and need to be estimated from observation. At time T, the sample-path $O_T$ is used to compute the estimates $\hat{P}(T)$ and $\hat{c}(T)$ of P and c. We use the estimators

$\hat{P}(T)_{ij} = \begin{cases} \dfrac{\sum_{t=0}^{T-1} \mathbf{1}\left( S(t) = i,\, S(t+1) = j \right)}{\sum_{t=0}^{T-1} \mathbf{1}\left( S(t) = i \right)}, & \text{if } S(t) = i \text{ for some } t = 0, \ldots, T-1, \\ 0, & \text{otherwise}, \end{cases}$
(7)
$\hat{c}(T)_i = \begin{cases} \dfrac{\sum_{t=0}^{T-1} \mathbf{1}\left( S(t) = i \right) c\left( S(t), S(t+1) \right)}{\sum_{t=0}^{T-1} \mathbf{1}\left( S(t) = i \right)}, & \text{if } S(t) = i \text{ for some } t = 0, \ldots, T-1, \\ 0, & \text{otherwise}, \end{cases}$
(8)

where $\mathbf{1}(\cdot)$ is the indicator function. More refined estimators can be employed to reduce the sampling rate [25].
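The following is a minimal sketch of the empirical estimators in Equations (7)–(8); the function name and interface are illustrative.

```python
import numpy as np

def estimate_P_c(states, costs, N):
    """Empirical estimators of Equations (7)-(8).

    states : sequence S(0),...,S(T) of visited state indices in {0,...,N-1}
    costs  : costs c(S(t), S(t+1)) for t = 0,...,T-1
    """
    counts = np.zeros((N, N))
    cost_sum = np.zeros(N)
    for s, s_next, cst in zip(states[:-1], states[1:], costs):
        counts[s, s_next] += 1
        cost_sum[s] += cst
    visits = counts.sum(axis=1)
    P_hat = np.zeros((N, N))
    c_hat = np.zeros(N)
    seen = visits > 0                      # rows of unvisited states stay zero
    P_hat[seen] = counts[seen] / visits[seen][:, None]
    c_hat[seen] = cost_sum[seen] / visits[seen]
    return P_hat, c_hat
```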

The estimates $\hat{P}(T)$ and $\hat{c}(T)$ may be very noisy and incomplete estimates of P and c even for $T \gg N$. In fact, an accurate estimation of the transition probabilities from a state s in $\mathcal{S}$ and of the cost function c(s) may require a considerable number of visits to s. For asymptotically large T, the average number of times the FSM is in state s is $T\Pi(s)$, where $\Pi(s) = \lim_{t\to\infty} P\left( S(t) = s \mid S(0) = s_0 \right)$ is the steady-state probability of s. Note that the average steady-state probability of the states in $\mathcal{S}$ is 1/N, and thus the average number of visits to a state is T/N. However, in a finite-length sample-path, the trajectory of the state of the FSM may remain confined in a region of $\mathcal{S}$ even for lengths T larger than N, and the number of states visited may be much smaller than T. Thus, due to the large size of the state space of FSMs modeling wireless networks, the number of observations needed to achieve an accurate estimation of the cost-to-go function is generally enormous, and it may be larger than the coherence time of the network, meaning that the statistics of the stochastic process modeling the operation of the network may change before the learning process achieves a meaningful estimate of $\bar{c}$.

To cope with this issue, we exploit the fact that FSMs modeling the operations of wireless networks and their associated graphs present a very regular connectivity structure, and the transition probabilities are determined by a limited set of parameters, e.g., the packet arrival probability in the buffer of the nodes and the packet failure probability (see Section ‘Structure of the graph’). By regular, we mean that the connectivity structure from many nodes of the graph to their 1-hop neighbors is similar. Thus, the representation of the graph provided by the transition matrix is intrinsically redundant, and trajectories of the network on the graph can presumably be described by a small number of functions capturing typical substructures of the graph. We observe that these substructures involve neighborhoods of states at different numbers of hops, corresponding to different temporal distances between observations in the sample-path.

A fundamental element of the proposed framework is the projection of the cost-to-go function $\bar{c}$ on a set of basis functions capturing the typical substructures of the graph at various time scales. In fact, every new observation updates the estimate of all the substructures including the observed transition. We employ the recently proposed DWs [18] as a basis set for the projection. DWs are a multiresolution geometric construction for the multiscale analysis of operators on graphs. DW functions are computed by sequentially applying a diffusion operator (for instance, the transition matrix P) at the current scale k, compressing the range via a local orthonormalization procedure, representing the operator in the compressed range, and computing $P^{2^k}$ on this range. Functions defined on the support space are analyzed in multiresolution fashion, where dyadic powers of the diffusion operator correspond to dilations, and projections correspond to downsampling. Even if P is not known a priori, we assume that the location of the non-zero elements of P, that is, the connectivity structure of P, is known.^e Define $I(P) = \operatorname{sgn}(P + P^T)$. The basis set W is then computed on $P_{\text{symm}}$, where the i-th row of $P_{\text{symm}}$ is

$\left[ P_{\text{symm}} \right]_i = \left[ I(P) \right]_i \Big/ \sum_j \left[ I(P) \right]_{ij}.$
(9)

The symmetrization step is required as DWs presume symmetric diffusion operators. The design of wavelet functions tailored to the compression of directed graphs will further improve the performance of the algorithm proposed herein.
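The sketch below illustrates the symmetrization step of Equation (9) and, in place of the full DW construction of [18], a deliberately crude multiscale stand-in based on dyadic powers of $P_{\text{symm}}$ and QR orthonormalization; it conveys only the multiscale idea and is not the actual DW algorithm, which compresses the range of the operator at each scale.

```python
import numpy as np

def symmetrized_walk(P):
    """Build P_symm of Equation (9): natural random walk on sgn(P + P^T)."""
    I_P = ((np.abs(P) + np.abs(P.T)) > 0).astype(float)  # I(P) = sgn(P + P^T)
    return I_P / I_P.sum(axis=1, keepdims=True)

def crude_multiscale_basis(P_symm, levels=3, tol=1e-8):
    """A crude stand-in for the diffusion wavelet construction of [18]:
    orthonormalize the columns of the dyadic powers P_symm^(2^k) via QR
    and stack the resulting bases.  The true DW algorithm compresses the
    range at each scale; this sketch only conveys the multiscale idea."""
    blocks, T = [], P_symm.copy()
    for _ in range(levels):
        Q, R = np.linalg.qr(T)
        keep = np.abs(np.diag(R)) > tol    # numerically significant directions
        blocks.append(Q[:, keep])
        T = T @ T                          # move to the next dyadic scale
    return np.hstack(blocks)
```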

Define W as a diffusion wavelet basis set computed on $P_{\text{symm}}$, where the DW functions are the columns of W. We have then $\bar{c} \approx Wx$, where x is the representation vector collecting the coefficients of the wavelet functions in W. Given P and c, the representation vector $x^*$ providing the most accurate approximation of $\bar{c}$ on W minimizes the Bellman residual $\|\Omega(Wx) - Wx\|_2^2$. We have then

$x^* = \arg\min_x \left\| c - (I - \gamma P) W x \right\|_2^2.$
(10)

The main idea behind the estimation paradigm proposed herein is that the DW set of functions is a sparsifying basis for the cost-to-go function $\bar{c}$. Due to the structured behavior defined by networking protocols, a small number of functions can represent the evolution and, thus, the collected cost, from large groups of states. The Least Absolute Shrinkage and Selection Operator (LASSO) algorithm [26] minimizes the norm of the residual plus a regularization term. For the considered problem, the LASSO is formulated as

$x^*(T) = \arg\min_x \left\| R(T)\hat{c}(T) - \hat{B}(T) W x \right\|_2^2 + \lambda \|x\|_1,$
(11)

where $\hat{B}(T) = R(T)\left( I - \gamma \hat{P}(T) \right)$ and R(T) is a random projection matrix. The $\ell_1$ regularization term $\lambda \|x\|_1$ is a sparsity-promoting term, meaning that the least significant coefficients in x are pushed toward zero.
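As an illustration of how (11) can be solved numerically, the following sketch implements LASSO by iterative soft-thresholding (ISTA), one of several standard solvers; here A stands for $\hat{B}(T)W$ and y for $R(T)\hat{c}(T)$, and the names and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    """Solve min_x ||y - A x||_2^2 + lam * ||x||_1 by iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant (up to factor 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # half the gradient of the LS term
        x = soft_threshold(x - grad / L, lam / (2 * L))
    return x
```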

In the Compressed Sensing (CS) literature, the matrices $\hat{B}(T)$ and W in the above equation are generally referred to as the sensing and representation matrices, respectively. Note that the elements in the rows of $\hat{P}(T)$ and $\hat{c}(T)$ corresponding to states not visited in the trajectory $O_T$ are set to zero and can be eliminated in the projection.

Structure of the graph

Wireless networking protocols induce a very structured temporal evolution of the network and, thus, a very structured graph associated with the FSM. This structure is the key to showing some general properties of the transition matrix P that determine the performance of the sparse reconstruction in terms of the minimum number of states that need to be included in Equation (11) to achieve an accurate reconstruction. Our analysis is based on the decomposition of the overall graph into smaller graphs, which we refer to as sub-chains. The good incoherence properties of the transition matrices associated with the sub-chains are reflected in good incoherence of the overall transition matrix and, thus, result in good performance of the sparse reconstruction.

The decomposition into sub-chains of the complex graph associated with the FSM modeling the temporal evolution of wireless networks results from the observation that the state of the network is the collection of many individual descriptors tracking counters and variables associated with the functioning of protocols and the environment. The temporal evolution of each individual descriptor follows simple rules that can be easily analyzed to retrieve properties of the overall graph. We then define $S(t) = \{S_1(t), \ldots, S_D(t)\}$, where $S_d(t)$ is the state of the d-th sub-chain at time t. We denote by $\mathcal{S}_d$ the state space of the d-th sub-chain, with $|\mathcal{S}_d| = N_d$.

The sub-chains track the evolution of the individual components of the state space. Although in the overall FSM the transition probabilities are a function of the overall state of the network, the connectivity structure of the sub-chains is preserved in the overall FSM. In fact, the state transition from $s = \{s_1, \ldots, s_D\} \in \mathcal{S}$ to $s' = \{s_1', \ldots, s_D'\} \in \mathcal{S}$ in the overall FSM is allowed^f only if the state transition from $s_d$ to $s_d'$ is allowed in the corresponding sub-chain, for all d = 1,…,D. Thus, the properties of the connectivity structure of the sub-chains are inherited by the overall graph.
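A minimal sketch of this composition rule: the allowed-transition structure of the overall FSM is the Kronecker product of the 0/1 sub-chain structures (illustrative code; the actual transition probabilities depend on the overall state).

```python
import numpy as np

def compose_allowed(adjs):
    """Allowed transitions of the overall FSM from the sub-chain structures:
    a joint transition (s_1..s_D) -> (s'_1..s'_D) is allowed only if every
    per-sub-chain transition s_d -> s'_d is allowed (a Kronecker AND)."""
    A = np.array([[1.0]])
    for adj in adjs:                       # adj: 0/1 allowed-transition matrix
        A = np.kron(A, adj)
    return A
```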

In stochastic models for wireless networks, two classes of sub-chains can be identified:

  • Counter-like sub-chains (see Figure 2a,b): the FSM is associated with a counter. The value of the counter increments/decrements until it is reset to a given value. Examples of counter-like sub-chains are: the number of retransmissions of a packet in ARQ protocols, the backoff timers in DCF, and the transmission windows and timers in TCP. This class can be further divided into forward counters (Figure 2a) and backward counters (Figure 2b), depending on whether the counter is incremented or decremented until being reset to a predefined value;

Figure 2

Sub-chains. (a) Counter-like sub-chain, forward counter. (b) Counter-like sub-chain, backward counter. (c) Random walk sub-chain.

  • Random walk sub-chains (see Figure 2c): the value of the descriptor variable is subject to random, but constrained, increments and decrements. Examples of random walk sub-chains are channel state descriptors and variables tracking the number of packets in a buffer.

For instance, in the pioneering work [1], the Markov chain used to analyze the network is the composition of a random walk and a counter-like sub-chain. It can be observed that counter-like and random walk sub-chains present a very local and regular connectivity structure. By local, we mean that every state connects to a small neighborhood of states. Regularity implies that states connect to 1-hop neighbors in a similar fashion. For instance, in counter-like sub-chains, states connect to the state corresponding to a reset counter and the state associated with an incremented or decremented value (possibly plus a self-loop). As a result, the overall graph is regular and local. This property is instrumental to efficient compression in the wavelet domain, meaning that only a limited number of notable substructures is needed to model the temporal evolution of the state of the network.

Define an indexing in $\mathcal{S}$ and $\mathcal{S}_d$ assigning an integer in {1,2,…,N} to each state $s \in \mathcal{S}$ and an integer in $\{1, 2, \ldots, N_d\}$ to each state $s_d \in \mathcal{S}_d$. Define also $\mathcal{S}_d^{\leftarrow}(i_d) \subseteq \mathcal{S}_d$ and $\mathcal{S}_d^{\rightarrow}(i_d) \subseteq \mathcal{S}_d$ as the set of states of which $i_d$ is a 1-hop neighbor and the set of 1-hop neighbors of $i_d \in \mathcal{S}_d$, respectively. Note that the sets $\mathcal{S}_d^{\leftarrow}(i_d)$ and $\mathcal{S}_d^{\rightarrow}(i_d)$ do not coincide due to the directionality of the graph modeling the behavior of the network; in fact, Markov processes modeling the operations of wireless networks are not invertible. In the counter-like and random walk structures, the connections from most of the states are the repetition of the same (local) connectivity structure. Thus, except in a few particular states whose effect decreases as the dimension of the Markov chain increases, given two states $i_d, j_d \in \{1, \ldots, N_d\}$ with $i_d \neq j_d$, the number of states in $\mathcal{S}_d^{\leftarrow}(i_d) \cap \mathcal{S}_d^{\leftarrow}(j_d)$ is generally small, and so is the inner product $\sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d)$. Since the connectivity structure of the overall graph results from the composition of the connectivity structures of the sub-chains, the inner product of the columns of P is intuitively small. In order to perform a quantitative analysis of the inner products of P, we focus on the natural random walk defined on the sub-chains,^g that is, we assign equal probability to all the allowed transitions from a given state. We have, then,

$p_d(i_d, j_d) = 1 \big/ \left| \mathcal{S}_d^{\rightarrow}(i_d) \right|, \qquad i_d \in \mathcal{S}_d,\; j_d \in \mathcal{S}_d^{\rightarrow}(i_d),$
(12)

where $p_d(i_d, j_d)$ is the transition probability from state $i_d$ to state $j_d$ in the state space of the d-th sub-chain. Then, the inner product between the i-th and the j-th columns of P is

$\sum_{k=1}^{N} p(k, i)\, p(k, j) = \prod_{d=1}^{D} \sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d),$
(13)

where $i_d$, $j_d$, and $k_d$ are the states of the d-th sub-chain associated with the overall states i, j, and k, respectively.

We then need to compute the inner products of the columns of the transition matrices associated with the classes of sub-chains. The average inner products of the backward counter, forward counter and random walk sub-chains are, respectively,

$E_{i_d, j_d}\left[ \sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d) \right] = \frac{1}{N_d^2} + \frac{2}{4(N_d - 1)} \qquad \text{backward counter},$
(14)
$E_{i_d, j_d}\left[ \sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d) \right] = \frac{5 + 3N_d}{6 N_d (N_d - 1)} \qquad \text{forward counter},$
(15)
$E_{i_d, j_d}\left[ \sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d) \right] \leq \frac{1}{N_d - 1} \cdot \frac{\ell^2 (2\ell + 1)}{(1 + \ell)^2} \qquad \text{random walk},$
(16)

where in a random walk sub-chain the transition probability from state $i_d$ to state $j_d$ is larger than zero only if $|j_d - i_d| \leq \ell$, and we assume $N_d \gg 2\ell + 1$. We observe that all these mean inner products are of order $O\left( \frac{1}{N_d} \right)$. As an example of how these quantities are computed, consider the forward counter-like sub-chain. The associated transition matrix is

$\begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 & \cdots & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 & \cdots & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \cdots & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & \cdots & 0 \\ \vdots & \vdots & & & \ddots & \ddots & \vdots \\ \frac{1}{4} & \frac{1}{4} & 0 & \cdots & 0 & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{3} & 0 & \cdots & 0 & 0 & \frac{1}{3} \end{pmatrix}.$
(17)

We remark that the value of the transition probability $p_d(i_d, j_d)$ is the inverse of the number of outgoing links, i.e., allowed transitions, from $i_d$. The average inner product $E_{i_d, j_d}\left[ \sum_{k_d=1}^{N_d} p_d(k_d, i_d)\, p_d(k_d, j_d) \right]$ is calculated by sequentially considering all the columns indexed by $i_d = 1, \ldots, N_d - 1$ and computing the product with the columns indexed by $j_d > i_d$.
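The following sketch builds the natural random walk of Equation (17) for a forward counter (assuming, as in the matrix above, that the allowed moves from state i are the two reset states, a self-loop, and the increment i+1) and numerically checks that the average column inner product decays with the sub-chain size.

```python
import numpy as np

def forward_counter(N):
    """Natural random walk on the forward counter of Equation (17):
    from state i the allowed moves are reset-to-1, reset-to-2, a self-loop
    and the increment i+1 (fewer at the boundary states)."""
    A = np.zeros((N, N))
    for i in range(N):
        targets = {0, 1, i}
        if i + 1 < N:
            targets.add(i + 1)
        for j in targets:
            A[i, j] = 1.0
    return A / A.sum(axis=1, keepdims=True)

P = forward_counter(50)
G = P.T @ P                                # all pairwise column inner products
off = G[~np.eye(50, dtype=bool)]
print(off.mean(), np.diag(G).mean())       # off-diagonal mean is O(1/N_d)
```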

The average inner products of the sub-chains decrease on average with the number of states of the associated FSM. Although in the general case the probability of transition from a state to its neighbors may be much different from that provided by the natural random walk associated with the graph structure, the locality and regularity of the structure of the sub-chains cause the average overlap of the sets $\mathcal{S}_d^{\leftarrow}(i_d)$ to vanish as $N_d$ increases. Thus, sufficiently large sub-chains are associated with incoherent transition matrices.

The average inner product of a column with itself is also relevant to the performance of the sparse reconstruction (these means appear in the mean of the Gram matrix of the effective random projection). It is easy to compute that the average of this quantity for the backward counter-like sub-chain, forward counter-like sub-chain, and random walk sub-chain is, respectively,

$E_{i_d}\left[ \sum_{k_d=1}^{N_d} \left( p_d(k_d, i_d) \right)^2 \right] = \frac{1}{4} + \frac{5}{4 N_d}, \qquad \alpha_d = \frac{1}{4},$
(18)
$E_{i_d}\left[ \sum_{k_d=1}^{N_d} \left( p_d(k_d, i_d) \right)^2 \right] = \frac{2}{3} + \frac{1}{36 N_d} + \frac{1}{8 N_d}, \qquad \alpha_d = \frac{2}{3},$
(19)
$E_{i_d}\left[ \sum_{k_d=1}^{N_d} \left( p_d(k_d, i_d) \right)^2 \right] = \frac{1}{2\ell + 1} + \frac{C}{N_d}, \qquad \alpha_d = \frac{1}{2\ell + 1},$
(20)

where C is a positive constant smaller than 1. We observe that each of these means can be expressed as $O\left( \frac{1}{N_d} \right) + \alpha_d$, where $\alpha_d = 1/4,\; 2/3,\; 1/(2\ell + 1)$ for the backward counter, forward counter, and random walk, respectively. The fact that $\alpha_d < 1$ for all the sub-chains is critical for the performance analysis of the CS approach.

Perturbation analysis and performance bounds

In this section, we characterize the performance of the sparse approximation of the cost-to-go function proposed herein. We first discuss the stability of the solution of (11) and then determine how much compression is possible while ensuring good reconstruction of the value function $v = Wx$. The number of observations required for good reconstruction directly translates into the learning rate of our proposed algorithm. An exact analysis of the transition matrix is challenging; however, by exploiting the average behavior of several key structures, we can determine the relationship between the minimum number of observations for this compressed sensing problem and the size of the logical graph.

Perturbation analysis

We discuss in this section how estimation noise in the sensing matrix $\hat{B}(T) = I - \gamma \hat{P}(T)$ affects the stability of the reconstruction provided by (11) as new states are visited by the sample-path. We show that the inclusion in (11) of a row of $\hat{c}(T)$ and $\hat{P}(T)$ associated with a state hit by the FSM a small number of times may result in a dramatic change of $x^*(T)$ with respect to $x^*(T-1)$. As we are looking for the fixed point solution of the operator Ω, instability and large variations of the sparse solution are highly undesirable. In order to improve the stability of the algorithm, in the numerical results presented in Section ‘Numerical results’ we use the LS CS algorithm proposed in [15].

Define

$\hat{A}(T) = \hat{B}(T) W = \left( I - \gamma \hat{P}(T) \right) W.$
(21)

In [27], Xu et al. have shown that the regularized regression problem of LASSO:

$\min_x \left\| \hat{c}(T) - \hat{A}(T) x \right\|_2^2 + \lambda \|x\|_1$
(22)

is equivalent to the robust regression (RR) problem, stated as

$\min_x \max_{\Delta \hat{A}(T) \in \mathcal{U}} \left\| \hat{c} - \left( \hat{A}(T) + \Delta \hat{A}(T) \right) x \right\|_2,$
(23)

where $\Delta \hat{A}(T) = \left[ \Delta\hat{a}_1(T), \Delta\hat{a}_2(T), \ldots, \Delta\hat{a}_m(T) \right]$, $\mathcal{U}$ is the set of perturbation matrices $\mathcal{U} = \left\{ \Delta \hat{A}(T) : \| \Delta\hat{a}_i(T) \|_2 \leq \lambda,\; i = 1, \ldots, m \right\}$, and m is the number of columns of $\Delta \hat{A}(T)$. In the following, we drop the dependence on T of the estimated matrices and vectors. Thus, the vector x is optimized for a worst case perturbation whose range is determined by the parameter λ, that is, the algorithm is robust to perturbations. Interestingly, the larger the value of λ, the sparser the output vector x, but also the larger the set of perturbations considered. In [27], it was also shown that LASSO is not stable, meaning that small variations of the sensing matrix $\hat{A}$ may lead to significantly different output vectors.
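The following self-contained toy example illustrates this instability phenomenon numerically: appending a single badly estimated row to the sensing matrix can change the support of the LASSO solution. The data are synthetic and the construction is only an illustration, not the exact setting of Theorem 1 below.

```python
import numpy as np

rng = np.random.default_rng(0)

def lasso(A, y, lam, n_iter=2000):
    # Minimal ISTA solver for ||y - A x||_2^2 + lam ||x||_1 (as sketched above).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / (2 * L), 0.0)
    return x

A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.0, -1.5]
y = A @ x_true
support = lambda x: set(np.flatnonzero(np.abs(x) > 1e-3))
s0 = support(lasso(A, y, lam=0.5))
# Append one badly estimated row (a state visited only a few times).
a_new = 5.0 * rng.standard_normal(10)      # large estimation noise
A2 = np.vstack([A, a_new]); y2 = np.append(y, 10.0)
s1 = support(lasso(A2, y2, lam=0.5))
print(s0, s1)                              # the support may change entirely
```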

The following theorem addresses the instability issues in the solution to the RR problem. In particular, the theorem shows that the inclusion of a new sample may result in suboptimal solutions to the RR problem. Moreover, due to the equivalence between LASSO and the RR problem, the same instability result applies to LASSO as well.

Theorem 1

Let $x^*$ be the solution of the problem

$\min_x \max_{\Delta \hat{A} \in \mathcal{U}} \left\| \hat{c} - \left( \hat{A} + \Delta \hat{A} \right) x \right\|_2,$
(24)

where $\mathcal{U} = \left\{ \Delta \hat{A} : \| \Delta\hat{a}_i \|_2 \leq \lambda,\; i = 1, \ldots, m \right\}$, and assume $\hat{c} = \hat{A} x^*$. Define $y^*$ as the solution of

$\min_y \max_{\Delta \hat{A}' \in \mathcal{U}'} \left\| \hat{c}' - \left( \hat{A}' + \Delta \hat{A}' \right) y \right\|_2,$
(25)

with $\mathcal{U}' = \left\{ \Delta \hat{A}' : \| \Delta\hat{a}_i' \|_2 \leq \lambda + \lambda',\; i = 1, \ldots, m \right\}$ and

$\hat{c}' = \begin{bmatrix} \hat{c} \\ \check{c} \end{bmatrix}, \qquad \hat{A}' = \begin{bmatrix} \hat{A} \\ \check{a} \end{bmatrix}.$
(26)

Here $\check{c}$ and $\check{a} = [\check{a}_1, \ldots, \check{a}_m]$ denote the cost observation and the sensing-matrix row associated with the newly included state. Denote the supports of $y^*$ and $x^*$ as $I_{y^*}$ and $I_{x^*}$, respectively. If $\exists\, k \in I_{x^*}$ such that

$\lambda \geq \max\left\{ \max_{j \in I_{x^*}} \left| \check{a}_j \right|,\; \Gamma_k\left( \hat{A}', I_{x^*} \right) \right\},$
(27)

where $\Gamma_k\left( \hat{A}', I_{x^*} \right) = \sqrt{ \left( \hat{a}_k' \right)^T M \left( M^T M \right)^{-1} M^T \hat{a}_k' }$ and $M = \left[ \hat{A}'_{I_{x^*} \setminus \{k\}},\; \hat{c}' \right]$, then $I_{y^*} \neq I_{x^*}$. Thus, the support of the solution of LASSO changes if a new state meeting the hypothesis is added to the Bellman residual.

The proof of the theorem is in Appendix 1.

Minimum number of observations

In Section ‘Numerical results’, we will employ the LS CS residual algorithm [15] to minimize the Bellman residual subject to a sparsity constraint:

$\left\| \Omega(v) - v \right\|_2^2 = \left\| c - (I - \gamma P) W x \right\|_2^2.$^h
(28)

We will observe the temporal evolution of the Markov chain over multiple time-steps; as such, we will not observe the cost at every state. Thus, coupled with additional random mixing to exploit the benefits of compressed sensing, we will optimize the following modified Bellman residual:

$\left\| R(T)\, c - R(T)(I - \gamma P) W x \right\|_2^2,$
(29)

where R is a random matrix to be defined in the sequel and R(T) is the submatrix formed by retaining the columns of R indexed by the states hit in the observation interval T. If the matrix $R(T)(I - \gamma P)W$ satisfies the so-called restricted isometry property, defined below, then the squared error between x and $\hat{x}$ achieved by the Dantzig selector [28] can be bounded; in particular, there are guarantees on correctly estimating the locations of the non-zero components of x, and thus the squared error is proportional to the number of non-zero components and the noise variance. Comparable analysis can be made for LASSO [29, 30]. We focus on properties of $R(T)(I - \gamma P)$, recognizing that projecting onto an orthonormal W would be an isometric operation. We note that a negative result regarding RIP would call the use of our approach into question; however, a positive RIP result suggests that our method works. The analysis of LS CS also relies on RIP parameters.

Our proof exploits arguments from [31] with appropriate tailoring to our framework. We begin with the definition of the property we wish to show.

Definition 1

(Restricted Isometry Property): The observation matrix B is said to satisfy the restricted isometry property of order $S < N$ with parameter $\delta_S \in (0,1)$, i.e., RIP(S, $\delta_S$), if

$(1 - \delta_S) \|x\|_2^2 \leq \|Bx\|_2^2 \leq (1 + \delta_S) \|x\|_2^2$
(30)

holds for all $x \in \mathbb{R}^N$ having no more than S non-zero entries. Note that B is a K × N matrix. RIP implies that B is approximately an isometry for S-sparse signals.
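A direct verification of RIP is combinatorial; the following sketch only probes it empirically by sampling random S-sparse vectors, which yields a lower bound on the true $\delta_S$ (illustrative code, with assumed function names).

```python
import numpy as np

rng = np.random.default_rng(1)

def rip_ratio_samples(B, S, n_trials=2000):
    """Empirically probe the RIP of Definition 1: sample random S-sparse x
    and record ||B x||^2 / ||x||^2; RIP(S, delta_S) requires all ratios to
    stay within [1 - delta_S, 1 + delta_S]."""
    N = B.shape[1]
    ratios = np.empty(n_trials)
    for t in range(n_trials):
        x = np.zeros(N)
        idx = rng.choice(N, size=S, replace=False)
        x[idx] = rng.standard_normal(S)
        ratios[t] = np.sum((B @ x) ** 2) / np.sum(x ** 2)
    return ratios.min(), ratios.max()
```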

We have the following result.

Theorem 2

The matrix $R(T)(I - \gamma P)$ fails to satisfy RIP(S, $\delta_S$) with probability bounded as

$P\left[ R(T) B \text{ does not satisfy RIP}(\delta_S, S) \right] \leq \exp\left( -c_1 \frac{K^2}{n S^2} \right)$

if $K^2 \geq \frac{192\, n S^2 \log n}{\delta_S^2 - 64 c_1}$ and $c_1 < \frac{\delta_S^2}{64}$. Here P is the transition probability matrix of a Markov chain formed by concatenating forward and backward counter-like and random walk sub-chains, and γ < 1. The proof of Theorem 2 can be found in Appendix 2.

We observe that this result states that if the number of observations K is of order $O\left( \sqrt{S^2\, n \log n} \right)$, then RIP is satisfied with high probability as the network grows large. We contrast this with the more typical results seen in, say, channel estimation problems, where the order is $O(S^2 \log n)$. We remark that our result on the RIP is not limited to LASSO, but leads to the more general conclusion that sparse estimation algorithms can be used to approximate cost-to-go functions of wireless networks. Furthermore, the proof of Theorem 2 shows that an arbitrary concatenation of sub-chains does not affect the RIP property in the limit of large wireless networks.

Numerical results

In this section, we present numerical results for an example of a wireless network to demonstrate the potential of the compressed sensing approach. We consider a wireless network where terminals store packets in a finite buffer of size Q and employ Automatic Retransmission reQuest (ARQ) to improve the delivery rate of packets. Time is divided into slots of fixed duration. For the sake of simplicity, we assume that the transmission of a packet occurs within the duration of a time slot and that the channel coefficients in the various slots are i.i.d. Terminals with a non-empty buffer access the channel in a time slot with a fixed probability α. The failure probability of a packet transmitted by a terminal is a function of the set of terminals concurrently transmitting in the same slot. Packet arrivals in the buffer of the terminals are modeled according to a Poisson process of intensity σ.

The FSM tracking the state of each individual terminal (see Figure 3) is composed of two sub-chains: a random walk-like sub-chain tracking the number of packets in the buffer (state space {0,1,…,Q}) and a forward counter-like sub-chain tracking the retransmission index of the packet being transmitted (state space {0,1,…,F}, where F is the maximum number of transmissions of a packet). An additional binary variable is added to the FSM to track transmission/idleness of the terminal. The FSM tracking the state of the overall network is the composition of the FSMs of the individual terminals. The transition probabilities of the Markov process determining the trajectory of the state of the network in the state space of the FSM are a function of the packet arrival rate, of the failure probability function, and of the transmission probability α.

Figure 3

Example of composition of sub-chains. FSM of a terminal in the considered network: a forward counter sub-chain (packet retransmission) and a random walk sub-chain (number of packets in the buffer). On the right-hand side, the fundamental connectivity structure of most of the states in the state space $\mathcal{S}$.

The cost function c measures the normalized cost in terms of throughput loss with respect to the saturation throughput achieved by the terminals in the absence of interference. In particular, the cost function is defined as the sum over all the terminals of one minus the failure probability of the transmitted packets. Idleness is assigned a cost equal to 1.
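To give a concrete flavor of this setup, the following is a simplified sample-path generator for the network described above; the failure model, parameter values, and interface are assumptions for illustration, not the exact simulator used for the results below.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(T, Q=5, F=4, alpha=0.5, sigma=0.3, n_term=2, p_fail=(0.1, 0.9)):
    """Toy sample-path generator (illustrative parameter values).
    State per terminal: (buffer occupancy, retransmission index, tx flag);
    p_fail gives an assumed packet failure probability without/with
    a concurrent transmission (collision)."""
    buf = np.zeros(n_term, dtype=int)
    rtx = np.zeros(n_term, dtype=int)
    path = []
    for _ in range(T):
        tx = (buf > 0) & (rng.random(n_term) < alpha)
        collision = tx.sum() > 1
        for i in np.flatnonzero(tx):
            if rng.random() < p_fail[int(collision)]:  # transmission failed
                rtx[i] += 1
                if rtx[i] > F:                         # drop after F attempts
                    buf[i] -= 1; rtx[i] = 0
            else:                                      # success
                buf[i] -= 1; rtx[i] = 0
        buf = np.minimum(buf + rng.poisson(sigma, n_term), Q)
        path.append((buf.copy(), rtx.copy(), tx.copy()))
    return path
```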

For Q = 5, F = 4, and 2 terminals, the size of the state space is 1681. The transition matrix P is used to compute $P_{\text{symm}}$, defined in Equation (9), and the associated set of DW functions W [18]. DW basis sets are overcomplete; in order to keep complexity low, the columns of W are subsampled. In particular, we select 400 wavelet functions at different time scales.

Figures 4 and 5 depict the exact and reconstructed value function using LASSO, mapped on the state-action space, and the sorted magnitude of the coefficients of x, respectively. In these figures, the exact vector c and transition matrix P are used in order to show the properties of the sparse reconstruction based on DW. An accurate approximation of $\bar{c}$ is achieved with approximately 15 active coefficients in x. This result shows that the temporal evolution of complex wireless networks can be effectively represented by a small number of wavelet functions capturing typical substructures in the graph.

Figure 4

Value function and its approximation. Value function (green) and its approximation (blue).

Figure 5

Magnitude of the coefficients of x.

Figure 6 plots the reconstruction error (the 2-norm of the difference between the real and reconstructed value functions, weighted by the steady-state distribution) as a function of T achieved by LASSO for different values of the sampling rate. The estimated transition probability matrix $\hat{P}$ and cost vector $\hat{c}$ are used for the estimation of $\bar{c}$. States are sampled by randomly eliminating rows (among the states visited by the sample-path) of the cost vector $\hat{c}$ and transition matrix $\hat{P}$. The legend reports the maximum number of states included in the Bellman residual. As expected, if only a few states are included in the estimation, LASSO achieves very poor performance irrespective of the accuracy of the estimates of the cost vector and transition matrix. However, if all the states are included in the Bellman residual, then poorly estimated states introduce rows affected by large noise in both $\hat{P}$ and $\hat{c}$. As shown in Section ‘Perturbation analysis’, noisy rows of P may destabilize the support of x and lead to poor reconstruction. Moreover, the performance of LASSO is very sensitive to the sampling rate, and it is unclear how to compute its optimal value.

Figure 6

Reconstruction error as a function of the time slot achieved by LASSO with different sampling rates. The number in the legend corresponds to K, that is, the number of states used in the reconstruction.

Figure 7 depicts the reconstruction error achieved by the LS CS-based framework and by standard Q-learning [10] as a function of the length of the observed sample-path. All states visited by the process are included in the Bellman residual. In order to improve stability, we used the LS CS algorithm to generate this plot. LS CS correlates $x^*(T)$ with $x^*(T-1)$ by constraining changes in the support of the representation vector; a simplified sketch of this support-tracking step is given below. Interested readers are referred to [15] for a detailed description and performance characterization of the algorithm.
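The following sketch shows one such support-tracking update in simplified form (least squares on the previous support, sparse recovery on the measurement residual, then support addition/deletion); thresholds and parameters are illustrative, and the full LS CS-residual algorithm in [15] differs in important details.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ls_cs_step(A, y, prev_support, lam=0.1, n_iter=500, del_thr=1e-2):
    """One update in the spirit of the LS CS residual algorithm [15]
    (simplified sketch): least squares on the previous support, LASSO on
    the measurement residual to detect support changes, then least squares
    on the updated support."""
    n = A.shape[1]
    x = np.zeros(n)
    idx = sorted(prev_support)
    if idx:
        x[idx] = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
    r = y - A @ x                                   # residual measurements
    L = np.linalg.norm(A, 2) ** 2                   # ISTA on the residual
    dx = np.zeros(n)
    for _ in range(n_iter):
        dx = soft(dx - A.T @ (A @ dx - r) / L, lam / (2 * L))
    support = set(np.flatnonzero(np.abs(x + dx) > del_thr))
    x = np.zeros(n)
    idx = sorted(support)
    if idx:
        x[idx] = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
    return x, support
```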

Figure 7

Reconstruction error as a function of time. Comparison between the reconstruction error as a function of the time slot achieved by the proposed algorithm and Q-learning. All states visited by the sample path are used in the estimation.

The proposed algorithm achieves considerable accuracy in the estimation of $\bar{c}$ after a very small number of state-cost observations, whereas standard learning converges slowly to $\bar{c}$. Moreover, the solution is extremely stable, and the LS CS-based algorithm appears to be very robust to estimation noise.

Conclusions

A novel framework for the online estimation of cost-to-go functions in wireless networks was proposed. We showed that the inherent regular and local structure of the graph associated with the FSM modeling the operations of wireless networks enables the sparse representation of cost-to-go functions. Our analysis, based on the decomposition of the overall graph into fundamental smaller structures, connects the structure of the FSM to the RIP of the effective projection matrix. Numerical results show that sparse approximation and projection onto DW basis sets enable a considerable reduction in the number of observations needed to estimate cost-to-go functions in wireless networks, and have the potential to make online learning practical in this context.

Endnotes

^a The incoherence of the transition matrix is connected to the magnitude of the inner products of its columns.

^b Note that $c(s, s')$ can be generalized to be a random variable. In this case the expectation is over all the possible values of $c(s, s')$.

^c Control can be included in the model by defining statistics and cost functions conditioned on a control action.

^d The indexing in the vector is based on a one-to-one map between $\mathcal{S}$ and {1,2,3,…,N}.

^e Note that this assumption does not reduce the applicability of the proposed algorithm. In fact, the connectivity structure of the transition matrix is determined by standard protocols that are shared and known by all the nodes.

^f By allowed, we mean that the state transition has non-zero probability for some set of parameters.

^g We note that numerical evaluation of incoherence for many typical Markov chains has revealed that incoherence holds on average.

^h In this analysis we assume that W is an orthonormal set of basis functions. We are aware that DWs are overcomplete and, thus, W is not an orthonormal basis set. The design of orthonormal wavelet bases tailored to FSMs modeling wireless networks is an important research direction.

Appendix 1

Proof of Theorem 1

Fix the index k in the support $I_{x^*}$ of $x^*$ and define

$\hat{a}_i' = \begin{bmatrix} \hat{a}_i \\ \check{a}_i \end{bmatrix}, \qquad i = 1, \ldots, m,$
(31)

and the vector

$u' = \begin{bmatrix} u \\ u^{\circ} \end{bmatrix} \in \operatorname{span}\left\{ \hat{a}_i',\; i \in I_{x^*} \setminus \{k\},\; \hat{c}' \right\},$
(32)

with $\|u'\|_2 = 1$ and

$\hat{c}' = \begin{bmatrix} \hat{c} \\ \check{c} \end{bmatrix}.$
(33)

If

$\lambda \geq \max_{j \in I_{x^*}} \left| \check{a}_j \right|$
(34)

and

$\max_{u'} \left| u'^T \hat{a}_k' \right| \leq \lambda + \lambda',$
(35)

then

$\left| u'^T \hat{a}_j' \right| = \left| u^T \hat{a}_j + u^{\circ} \check{a}_j \right|$
(36)
$\leq \left| \frac{u^T}{\|u\|_2}\, \hat{a}_j \right| \|u\|_2 + \left| u^{\circ} \check{a}_j \right|$
(37)
$\leq \|u\|_2\, \lambda + \left| u^{\circ} \check{a}_j \right| \leq \lambda + \left| \check{a}_j \right| \leq \lambda + \lambda'.$
(38)

Therefore, if the conditions (34) and (35) hold, then due to Theorem 5 in [27] we have $y_k^* = 0$ and $I_{x^*} \neq I_{y^*}$.

Define

$u' = \frac{M d}{\sqrt{d^T M^T M d}},$
(39)

where

$M = \left[ \hat{c}',\; \hat{a}_i',\; i \in I_{x^*} \setminus \{k\} \right].$
(40)

Then, we find the $u'$ that maximizes the left-hand side of (35) as the solution of

$\max_d \frac{\left| d^T M^T \hat{a}_k' \right|^2}{d^T M^T M d}.$
(41)

Let $M = Q S Z^T$ be the singular value decomposition of M; then (41) is equal to

$\max_d \frac{\left| d^T Z S^T Q^T \hat{a}_k' \right|^2}{d^T Z S^T S Z^T d} = \max_{\tilde{d}} \frac{\left| \tilde{d}^T S^T \tilde{a} \right|^2}{\tilde{d}^T S^T S \tilde{d}},$
(42)

where $\tilde{d} = Z^T d$ and $\tilde{a} = Q^T \hat{a}_k'$. Note that

$\tilde{a} = \left( I - S \left( S^T S \right)^{-1} S^T \right) \tilde{a} + S \left( S^T S \right)^{-1} S^T \tilde{a}$
(43)
$= \left( I - S \left( S^T S \right)^{-1} S^T \right) \tilde{a} + S g,$
(44)

where $g = \left( S^T S \right)^{-1} S^T \tilde{a}$. Then, from

$\max_{\tilde{d}} \frac{\left| \tilde{d}^T S^T \tilde{a} \right|^2}{\tilde{d}^T S^T S \tilde{d}} = \max_{\tilde{d}} \frac{\left| \tilde{d}^T S^T S g \right|^2}{\tilde{d}^T S^T S \tilde{d}},$
(45)

and using the Schwarz inequality we obtain

$\frac{\left| \tilde{d}^T S^T \tilde{a} \right|^2}{\tilde{d}^T S^T S \tilde{d}} \leq \frac{\left( \tilde{d}^T S^T S \tilde{d} \right) \left( g^T S^T S g \right)}{\tilde{d}^T S^T S \tilde{d}} = g^T S^T S g,$
(46)

where the equality holds if $\tilde{d} = g$. Therefore,

$\max_{\tilde{d}} \frac{\left| \tilde{d}^T S^T \tilde{a} \right|^2}{\tilde{d}^T S^T S \tilde{d}} = \tilde{a}^T S \left( S^T S \right)^{-1} S^T \tilde{a}$
(47)
$= \left( \hat{a}_k' \right)^T M \left( M^T M \right)^{-1} M^T \hat{a}_k'.$
(48)

Appendix 2

Proof of Theorem 2

In this appendix, we prove the result on the minimum number of observed states needed for accurate reconstruction of $\bar{c}$. We first state the following lemma:

Lemma 1

(Geršgorin) The eigenvalues of an m × m matrix G all lie in the union of the m discs $d_i = d_i(c_i, r_i)$, i = 1,2,…,m, centered at $c_i = G_{ii}$ and with radius

$r_i = \sum_{j=1,\, j \neq i}^{m} \left| G_{ij} \right|.$
(49)

We will apply Geršgorin’s lemma to the following Gram matrix:

$G = \left( I - \gamma P \right)^T R(T)^T R(T) \left( I - \gamma P \right).$
(50)

For the sake of exposition, we assume that R(T) is a K × n matrix whose components are drawn i.i.d. from a binary distribution, i.e., $R_{ik} = \pm \frac{1}{\sqrt{K}}$ with probability $\frac{1}{2}$; thus we have that the $R(T)_{ik}$ are zero mean and $E\left[ R(T)^T R(T) \right] = I$. Other properly constrained distributions for R(T) can be handled. The other matrices specified in (50) are square and of dimension n × n.

We shall show that every element of the Gram matrix G is bounded as follows, where $m_{ij} = E[G_{ij}]$:

$\left| G_{ii} - m_{ii} \right| \leq \varepsilon_d, \qquad \left| G_{ij} - m_{ij} \right| \leq \frac{\varepsilon_o}{S}, \quad i \neq j.$
(51)

The dimension of the state space, $|\mathcal{S}| = N$, is approximately $\prod_d n_d$; thus, from the analysis of the inner products/norms of the columns of the transition matrix (see Equations (14)–(16)), we find that $E\left[ p_i^T p_j \right] \leq O\left( \frac{1}{N} \right)$, $i \neq j$, and $E\left[ \| p_i \|^2 \right] \leq O\left( \frac{1}{N} \right) + \alpha^D$, where $p_i$ is the i-th column of P. We note that for all three sub-chain structures examined α < 1, and thus $\lim_{D \to \infty} \alpha^D = 0$. In fact, if we concatenate several sub-chains we have that $E\left[ \| p_i \|^2 \right] \leq O(1/N) + \prod_{d=1}^{D} \alpha_d$, where $|\alpha_d| < 1$. Thus, $E[\| p_i \|^2]$ diminishes, and eventually vanishes, as the number of concatenated sub-chains increases. Additionally, $E\left[ P_{ij} \right] = \frac{1}{n}$. Thus we can show that

$E\left[ G_{ik} \right] \overset{(a)}{=} E\left[ b_i^T R^T R\, b_k \right] = E\left[ b_i^T b_k \right],$
(52)
$m_{ik} \leq O\left( \frac{1}{n} \right),\; i \neq k, \qquad \lim_{D \to \infty} m_{ik} = 0,$
(53)
$m_{ii} \leq 1 + O\left( \frac{1}{n} \right) + \prod_{d=1}^{D} \alpha_d, \qquad \lim_{D \to \infty} m_{ii} = 1,$
(54)

where $b_i$ is the i-th column of B and $B \triangleq I - \gamma P$. The equality (a) follows from the independence of the probability transition matrix P and the random projection matrix R. Equations (52) and (54) follow from Equations (13), (14)–(16), and (18)–(20). The next needed inequality is:

Lemma 2

(McDiarmid) Consider independent random variables $Y_1, \ldots, Y_m \in \mathcal{Y}$ and a function $\upsilon: \mathcal{Y}^m \to \mathbb{R}$. If for all $i \in \{1, \ldots, m\}$ and for all $y_1, \ldots, y_m, y_i' \in \mathcal{Y}$, the function υ satisfies

$\left| \upsilon(y_1, \ldots, y_i, \ldots, y_m) - \upsilon(y_1, \ldots, y_i', \ldots, y_m) \right| \leq c_i,$
(55)
then $P\left[ \left| \upsilon(\underline{Y}) - E\left[ \upsilon(\underline{Y}) \right] \right| \geq t \right] \leq 2 \exp\left( -\frac{2 t^2}{\sum_{i=1}^{m} c_i^2} \right).$
(56)

For our functions of interest, m = n². We let R′ = R + Δ, where $\Delta_{j,l} = 0$ except for $j = j_0$, $l = l_0$, i.e., $\Delta_{j_0 l_0} = R'_{j_0 l_0} - R_{j_0 l_0} = \pm \frac{2}{\sqrt{K}}$ with probability $\frac{1}{2}$, due to our assumptions on R. For clarity, we have dropped the subscript T on R(T). Thus we have, for the diagonal elements of the Gram matrix, $G_{ii} = \upsilon_{ii}(R)$ for some R,

$\left| \upsilon_{ii}(R') - \upsilon_{ii}(R) \right| = \left| b_i^T \left( R'^T R' - R^T R \right) b_i \right| = \left| 2 B_{l_0,i} \left( R'_{j_0,l_0} - R_{j_0,l_0} \right) \sum_{m \neq l_0}^{n} R_{j_0,m} B_{m,i} + B_{l_0,i}^2 \underbrace{\left( R'^2_{j_0,l_0} - R^2_{j_0,l_0} \right)}_{=0} \right| \leq \frac{4}{K} \left| B_{l_0,i} \right| \left| \sum_{m \neq l_0}^{n} B_{m,i} \right|$

We have that $\left| B_{l_0,i} \right| \leq \delta_K(l_0 - i) + \frac{\gamma}{n}$, where $\delta_K(\cdot)$ is the Kronecker delta function, and $\left| \sum_{m \neq l_0}^{n} B_{m,i} \right| \leq O(1)$. We set $c_{l_0,j_0} = \frac{4}{K}\left( \delta_K(l_0 - i) + \frac{\gamma}{n} \right)$, and observe from the statement of McDiarmid’s inequality that we need to evaluate

$\sum_{j_0=1}^{n} \sum_{l_0=1}^{n} c_{l_0,j_0}^2 = \frac{16}{K^2} \left( n^2\, \frac{\gamma + \gamma^2}{n} \right) \leq \frac{32 n}{K^2} \triangleq g_d.$

The same bound holds for the off-diagonal elements of the Gram matrix. Using these values of $c_{l_0,j_0}$, we invoke McDiarmid’s inequality to show the following, wherein (a) and (b) follow from union bound arguments and (c) follows from setting $\varepsilon_o = \varepsilon_d = \frac{\delta_S}{2}$:

$P\left[ \left| G_{ii} - m_{ii} \right| \geq \varepsilon_d \right] \leq 2 \exp\left( -\frac{2 \varepsilon_d^2}{g_d} \right)$
(57)
$P\left[ \bigcup_{i=1}^{n} \left\{ \left| G_{ii} - m_{ii} \right| \geq \varepsilon_d \right\} \right] \overset{(a)}{\leq} 2 n \exp\left( -\frac{2 \varepsilon_d^2}{g_d} \right)$
(58)
$P\left[ \bigcup_{i=1}^{n} \bigcup_{j=1,\, j \neq i}^{n} \left\{ \left| G_{ij} - m_{ij} \right| \geq \frac{\varepsilon_o}{S} \right\} \right] \overset{(b)}{\leq} 2 n^2 \exp\left( -\frac{2 \varepsilon_o^2}{S^2 g_d} \right)$
(59)

Then,

$P\left[ R(T) B \text{ does not satisfy RIP}(\delta_S, S) \right] \overset{(c)}{\leq} 3 n^2 \exp\left( -\frac{K^2 \delta_S^2}{64\, n S^2} \right).$
(60)

Equation (60) can be manipulated to yield the following relationship between the number of samples K and the size of the logical network, n: the RIP holds with high probability if $K^2 \geq \frac{192\, n S^2 \log n}{\delta_S^2 - 64 c_1}$, where $c_1$ is a constant selected to ensure that the denominator of the previous expression is positive, and the theorem is shown.

References

  1. Bianchi G: Performance analysis of the IEEE 802.11 distributed coordination function. IEEE J. Sel. Areas Commun 2000, 18(3):535-547.


  2. Konrad A, Zhao B, Joseph A, Ludwig R: A Markov-based channel model algorithm for wireless networks. Wirel. Netw 2003, 9(3):189-199. 10.1023/A:1022869025953


  3. Wu H, Peng Y, Long K, Cheng S, Ma J: Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. In proceedings of IEEE Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2002. New York, USA; 2002:599-607. vol. 2, June 23–27


  4. Zorzi M, Rao RR: On the use of renewal theory in the analysis of ARQ protocols. IEEE Trans. Commun 1996, 44(9):1077-1081. 10.1109/26.536913


  5. Badia L, Levorato M, Zorzi M: Markov analysis of selective repeat type II hybrid ARQ using block codes. IEEE Trans. Commun 2008, 56(9):1434-1441.


  6. Modiano E: An adaptive algorithm for optimizing the packet size used in wireless ARQ protocols. Wirel. Netw 1999, 5(4):279-286. 10.1023/A:1019111430288


  7. Zhai H, Kwon Y, Fang Y: Performance analysis of IEEE 802.11 MAC protocols in wireless LANs. Wirel. Commun. Mob. Comput 2004, 4(8):917-931. 10.1002/wcm.263


  8. Su H, Zhang X: Cross-layer based opportunistic MAC protocols for QoS provisionings over cognitive radio wireless networks. IEEE J. Sel. Areas Commun 2008, 26: 118-129.


  9. Dianati M, Ling X, Naik K, Shen X: A node-cooperative ARQ scheme for wireless ad hoc networks. IEEE Trans. Veh. Technol 2006, 55(3):1032-1044. 10.1109/TVT.2005.863426


  10. Bertsekas DP: Dynamic Programming and Optimal Control. Athena Scientific, Belmont, MA,; 2001.


  11. Mahadevan S: Average reward reinforcement learning: Foundations, algorithms, and empirical results. Mach. Learn 1996, 22: 159-195.


  12. Schwartz A: A reinforcement learning method for maximizing undiscounted rewards. In Proceedings of the Tenth International Conference on Machine Learning. Amherst, Massachusetts; 1993:305-305. vol. 298


  13. Fu F, Schaar MVD: Structure-aware stochastic control for transmission technology. ArXiv preprint 2010. arXiv:1003.2471


  14. Chen W, Huang D, Kulkarni A, Unnikrishnan J, Zhu Q, Mehta P, Meyn S, Wierman A: Approximate dynamic programming using fluid and diffusion approximations with applications to power management. In Proceedings of the 48th IEEE Conference on Decision and Control. Shanghai, China; 2009:3575-3580. Dec. 16-18, 2009


  15. Vaswani N: LS-CS-residual (LS-CS): compressive sensing on least squares residual. IEEE Trans. Signal Process 2010, 58(8):4108-4120.


  16. Candes E, Wakin MB: An introduction to compressive sampling. IEEE Signal Process. Mag 2008, 25(2):21-30.


  17. Maggioni M, Mahadevan S: A multiscale framework for Markov decision processes using diffusion wavelets. 2006.http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.8956%26rep=rep1%26type=pdf


  18. Coifman RR, Maggioni M: Diffusion wavelets. Appl. Comput. Harmonic Anal 2006, 21: 53-94. 10.1016/j.acha.2006.04.004


  19. Crovella M, Kolaczyk E: Graph wavelets for spatial traffic analysis. In INFOCOM 2003 Twenty-Second Annual Joint Conference of the IEEE Computer and Communications. San Francisco, CA, USA; 2003:1848-1857. vol. 3, Mar. 30–Apr. 3


  20. Firooz M, Roy S: Network tomography via compressed sensing. In proc. of IEEE Global Telecommunications Conference (GLOBECOM). Miami, Florida, USA; 2010:1-5. Dec. 6-10


  21. Chen Y, Bindel D, Katz RH: Tomography-based overlay network monitoring. In Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement. Miami Beach, FL, USA; 2003:216-231. Aug. 25-29, 2003


  22. Haupt J, Bajwa WU, Rabbat M, Nowak R: Compressed sensing for networked data. IEEE Signal Process. Mag 2008, 25(2):92-101.


  23. Wang M, Xu W, Mallada E, Tang A: Sparse recovery with graph constraints: fundamental limits and measurement construction. ArXiv preprint 2011. arXiv:1108.0443; to appear in Proceedings of IEEE INFOCOM 2012

  24. Xu W, Mallada E, Tang A: Compressive sensing over graphs. In Proceedings of the 30th IEEE International Conference on Computer Communications (IEEE INFOCOM). Shanghai, China; IEEE, 2011:2087-2095. Apr. 10-15


  25. Sherlaw-Johnson C, Gallivan S, Burridge J: Estimating a Markov transition matrix from observational data. J. Operat. Res. Soc 1995, 46(3):405-410.


  26. Tibshirani R: Regression shrinkage and selection via the Lasso. J. Royal Stat. Soc. Ser. B 1996, 58: 267-288.


  27. Xu H, Caramanis C, Mannor S: Robust regression and Lasso. IEEE Trans. Inf. Theory 2010, 56(7):3561-3574.


  28. Candes E, Tao T: The Dantzig selector: statistical estimation when p is much larger than n. Annals Stat 2007, 35(6):2313-2351. 10.1214/009053606000001523


  29. Zhang CH, Huang J: The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals Stat 2008, 36(4):1567-1594. 10.1214/07-AOS520


  30. Zhang T: Some sharp performance bounds for least squares regression with L1 regularization. Annals Stat 2009, 37(5A):2109-2144. 10.1214/08-AOS659


  31. Haupt J, Bajwa W, Raz G, Nowak R: Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans. Inf. Theory 2010, 56(11):5862-5875.



Acknowledgements

The study was supported by AFOSR under grants FA9550-08-0480 and FA9550-12-1-0215 and by the National Science Foundation (NSF) under Grant CCF-0917343.

Author information



Corresponding author

Correspondence to Marco Levorato.

Additional information

Competing interests

The authors declare that they have no competing interests.



Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Levorato, M., Mitra, U. & Goldsmith, A. Structure-based learning in wireless networks via sparse approximation. J Wireless Com Network 2012, 278 (2012). https://doi.org/10.1186/1687-1499-2012-278
