Open Access

Non-commutative large entries for cognitive radio applications

EURASIP Journal on Wireless Communications and Networking 2012, 2012:167

https://doi.org/10.1186/1687-1499-2012-167

Received: 24 May 2011

Accepted: 10 May 2012

Published: 10 May 2012

Abstract

Cognitive radio has been proposed as a solution to the problem of underutilization of the radio spectrum. Indeed, measurements have shown that large portions of the frequency bands are not efficiently assigned, since large pieces of bandwidth are unused or only partially used. In the last decade, studies in different areas, such as signal processing, random matrix theory, information theory, and game theory, have brought us to the current state of cognitive radio research. These theoretical advancements represent a solid base for practical applications and further developments. However, open questions still need to be answered. In this study, free probability theory, through the free deconvolution technique, is used to attack the challenging problem of retrieving useful information from the network with a finite number of observations. Free deconvolution, based on the moments method, has proved to be a helpful approach to this problem. After giving the general idea of free deconvolution for known models, we show how the moments method works in the case where scalar random variables are considered. Since, in general, more complex systems are involved, the parameters of interest are no longer scalar random variables but random vectors and random matrices. Random matrices are non-commutative operators with respect to the matrix product, and they can be considered elements of what is called a non-commutative probability space. Therefore, we focus on the case where random matrices are considered. Concepts from combinatorics, such as crossing and non-crossing partitions, are useful tools to express the moments of Gaussian and Vandermonde matrices, respectively. Our analysis and simulation results show that the free deconvolution framework can be used for studying relevant information in cognitive radio, such as power detection and user detection.

1 Introduction

In the last decade, studies [1] have shown that future communication systems should be designed to adapt to their environment in order to tackle the underutilization of a precious resource such as the radio spectrum. Measurements have shown that large portions of the frequency bands are not efficiently used, that is, for most of the time, large pieces of bandwidth are unoccupied or only partially occupied [2]. A possible solution, introduced by Mitola [3, 4], is represented by cognitive networks, which can be thought of as self-learning, adaptive, and intelligent networks. In cognitive networks, unlicensed (secondary) systems improve spectral efficiency by sensing the environment and opportunistically filling the discovered spectrum holes (or white spaces) of licensed (primary) systems, which have the exclusive right to operate in a certain spectrum band [5]. The current development of microelectronics allows us to suppose that these wireless systems, for which spectrum utilization will play a key role, will be realized in the near future. These systems provide an efficient utilization of the radio spectrum based on the understanding-by-building methodology to learn from the environment and adapt their parameters to statistical variations in the input stimuli [6].

The current development in cognitive radio research is the result of a multidisciplinary effort that allows us to analyze different aspects of cognitive radio. We identify signal processing, game theory, information theory, random matrix theory, etc., as enabling areas for the development of cognitive radio.

Signal processing plays a major role in designing cognitive wireless networks, especially in spectrum sensing to identify spectrum opportunities and in the design of cognitive spectrum access to exploit the identified spectrum holes. We refer to spectrum sensing as the process where devices look for a signal in the presence of noise for a given frequency band. Several digital signal processing techniques, such as matched filtering, energy detection, and cyclostationary feature detection, are analyzed [7, 8] to improve radio sensitivity and detect the presence of primary users. In [9], it is proved that the energy detector is an efficient spectrum sensing technique when the secondary user has limited information on the primary user's waveform, i.e., only the power of the local noise is known. The authors of [10] formulate the spectrum sensing problem as a nonlinear optimization problem, minimizing the interference to the primary user while meeting the requirement of opportunistic spectrum utilization. Cooperation between users follows as a consequence of the following constraints: (1) secondary users should not interfere with the primary transmissions, and they should be able to detect the primary signal even if decoding the signal may be impossible [9]; (2) secondary users are in general not aware of the exact transmission scheme used by primary users. Cooperation among all cognitive users operating in the same band reduces the detection time and increases the overall agility with which cognitive users are able to shift bands [11–13]. Cooperation is designed in [14] as joint detection among all the cooperating users and in [15] as a fusion center that makes the final decision about the occupancy of the band by fusing the decisions made by all cooperating users. In [16], cooperation is analyzed for the partial CSI (channel state information) scenario at the secondary users.

From a game-theoretic point of view, spectrum sharing may be considered as a competition. The importance of studying cognitive radio networks in a game-theoretic framework is multifold. By modeling dynamic spectrum sharing between users as a game, users' behaviors and actions can be analyzed in a formalized structure, where the theoretical results of game theory can be fully applied [17, 18]. The optimization of spectrum usage is generally a multi-objective optimization problem, which is very difficult to analyze and to solve. Moreover, game theory provides us with game models that predict the convergence and stability of networks [19]. In [20], a game-theoretic adaptive channel allocation scheme is proposed for cognitive radio networks. In particular, a game is formulated to analyze the selfish and cooperative behaviors of the players. The players of this game were the wireless nodes, and their strategies were defined in terms of channel selection. In [21], the convergence dynamics of the different types of games in cognitive radio systems is studied. Then, a game theory framework is proposed for distributed power control to achieve agility in spectrum usage in a cognitive radio network.

Information theory is used to characterize the achievable rates in a cognitive radio network under different assumptions on how the secondary systems interfere with the primary ones. Fundamental understanding of the capacity of cognitive systems is provided in [22–26]. Using recent results in random matrix theory, the authors of [27, 28] propose a new method for signal detection in cognitive radio, based on the eigenvalues of the covariance matrix of the received signal at the secondary users. In [29], a spectrum sensing technique is proposed that relies on the use of multiple receivers to infer the structure of the received signals using random matrix theory. The authors show that their technique is quite robust and does not require knowledge of signal or noise statistics. These methods do not require any prior information on the primary signal or on the noise power. In [30, 31], two hypothesis tests that allow detecting the presence of an unknown transmitter using several sensors are proposed, and random matrix theory is used to provide the error associated with both tests.

We recognize as a crucial point of cognitive radio development the understanding of how much it is possible to infer from the network with the knowledge of just a few observations. In the current study, we use free probability theory, through the concept of free deconvolution, to handle the problem of retrieving useful information from the network with a limited number of observations. Free deconvolution, based on the moments method, has proved to be an interesting tool to attack this problem.

In cognitive random networks, devices are autonomous and should take optimal decisions based on their sensing capabilities. We are particularly interested in measures such as capacity, signal to noise ratio, and estimation of the signal power. Such measures are usually related to the eigenvalues of the channel matrix and not to its specific structure, the eigenvectors. The fact that the spectrum of a stationary process is related to the information measure of the underlying process dates back to Kolmogorov [32]. The entropy rate of a stationary Gaussian stochastic process can be expressed as
\[ H = \log(\pi e) + \frac{1}{2\pi} \int_{-\pi}^{\pi} \log(S(f)) \, df \]
where S is the spectral density of the considered process. Therefore, a complete characterization of the information contained in the process is given when the autocorrelation of the process is known. Moreover, the authors of [33, 34] have shown that the entropy rate is also related to the minimum mean squared error (MMSE) of the best estimator of the process given the infinite past. In wireless communication, this means that it is possible to retrieve one quantity from the other, especially as many receivers incorporate an MMSE component. In the discrete case, the entropy rate per dimension (or differential entropy) of a Gaussian stochastic process x_i of size n is given by
\[ H = \log(\pi e) + \frac{1}{n} \log\det R = \log(\pi e) + \frac{1}{n} \sum_{i=1}^{n} \log \lambda_i \]
where R = E(x_i x_i^H) is the covariance matrix and λ_i are its eigenvalues. The knowledge of these eigenvalues provides us with the information on Gaussian networks. In fact, in order to estimate the rate, and by extension the capacity, which is the difference between two differential entropies, or any other measure involving performance criteria, one needs to compute the eigenvalues of the covariance. For a number K of observations of the vector x_i, i = 1,..., K, the covariance R is usually estimated by
\[ \hat{R} = \frac{1}{K} \sum_{i=1}^{K} x_i x_i^H = R^{1/2} S S^H R^{1/2}, \]
(1)

where S = [s_1,..., s_K] is an n × K matrix with i.i.d. zero-mean Gaussian entries of variance 1/K. In cognitive random networks, the number of samples K is of the same order as n, due to the high mobility of the network and to the fact that the statistics are considered to be constant only within a window of K samples. Because of this, classical asymptotic signal processing techniques are no longer efficient, since they require a number of samples K ≫ n. Therefore, our main problem consists in retrieving information within a window of limited samples. In this sense, free probability theory, through the concept of free deconvolution, is a very appealing framework for the study of cognitive networks. The main advantage of the free deconvolution framework is that it provides us with helpful techniques to obtain useful information from a finite number of observations. The term deconvolution comes here from the fact that we would like to invert Equation (1) and express R with respect to R̂, since we can only access the sample covariance matrix. This is not possible; however, one can compute the eigenvalues of R knowing only the eigenvalues of R̂.
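The finite-sample issue behind Equation (1) is easy to illustrate numerically. The sketch below (dimensions and true covariance are illustrative choices, not taken from the study) compares the eigenvalues of the sample covariance matrix with the true ones for K of the same order as n and for K ≫ n:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                # dimension of the observed vector x_i (illustrative)
true_eigs = np.linspace(1.0, 4.0, n)  # spectrum of an illustrative diagonal covariance R
R_sqrt = np.sqrt(true_eigs)           # R = diag(true_eigs), so R^{1/2} is diagonal

def eig_error(K):
    """Largest eigenvalue error of R_hat = R^{1/2} S S^H R^{1/2}, per Equation (1)."""
    # S: n x K i.i.d. zero-mean complex Gaussian entries of variance 1/K
    S = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2 * K)
    X = R_sqrt[:, None] * S
    R_hat = X @ X.conj().T
    return float(np.max(np.abs(np.sort(np.linalg.eigvalsh(R_hat)) - true_eigs)))

err_few, err_many = eig_error(64), eig_error(10000)
print(err_few, err_many)   # the K ~ n estimate is far less accurate
```

With K = n the empirical spectrum is strongly spread out, which is exactly the regime where deconvolution-based corrections become relevant.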

In the following, the general idea of free deconvolution is presented. We show how the moments method works in the case where scalar random variables are considered. However, since in practical situations systems are more complex, the parameters of interest are no longer scalar random variables and need to be represented by random matrices. Therefore, we analyze the case where random matrices are considered. We analyze the moments method for matrices which satisfy the freeness property, and we show that it can be used to propose algorithmic methods to compute moments of finite Gaussian random matrices. Moreover, we analyze the case of matrices for which freeness does not hold: Vandermonde, Hankel, Toeplitz. In the end, we present applications showing how the moments method approach can be used for studying cognitive radio: power detection, user detection, etc. In the last section, we discuss our results, presenting conclusions and open problems.

2 Information plus noise model

The example given in (1) is rarely met in practice in wireless communication, since the transmitted signal s_i is usually distorted by a medium, giving m_i = f(s_i) with f any function, and the received signal y_i is altered by some additive noise n_i. We consider a finite number K of observations of the following n × 1 received signal, known as the Information plus Noise model
\[ y_i = m_i + n_i, \qquad i = 1, \ldots, K \]
(2)
which can be rewritten in a matrix form stacking all observations as
Y = M + N
(3)

with M and N independent K × n random matrices. We are interested in retrieving information about the transmitted signal from the received signal, more explicitly in obtaining the eigenvalues of MM^H from the eigenvalues of YY^H and NN^H. This is exactly the goal of deconvolution.

In more general terms, the idea of deconvolution is related to the following problem [35]: Given A, B two n × n independent square complex Hermitian (or real symmetric) random matrices:

  1. Can one derive the eigenvalue distribution of A from those of A + B and B? If feasible in the large-n limit, this operation is named additive free deconvolution.

  2. Can one derive the eigenvalue distribution of A from those of AB and B? If feasible in the large-n limit, this operation is named multiplicative free deconvolution.

The techniques generally used to compute the operation of deconvolution in the large-n limit are the moments method [35] and the Stieltjes transform method [36]. Each of these methods has its advantages and its drawbacks. The moments method only works for measures with moments and characterizes the convolution only through its moments, but it is easily implementable and, in many applications, one needs only a subset of the moments, depending on the number of parameters to be estimated. The Stieltjes transform method, instead, works for any measure and allows, when computations are possible, the recovery of the densities. Unfortunately, it succeeds only in very few cases: the required operations are almost always impossible to carry out in practice, combining patterns of matrices naturally leads to more complex equations for the Stieltjes transform, and the method can only be applied in the large-n limit.

We analyze the concept of free deconvolution based on the moments method, which uses the empirical moments of the eigenvalue distribution of random matrices to obtain information about the eigenvalues. The moments method has shown itself to be a fruitful technique for computing deconvolution in both the asymptotic and the finite setting, for the simplest patterns (sums and products) as well as for products of many independent matrices.

3 Moments method

3.1 Scalar case

We start by showing how the moments method works in the scalar case. We consider two independent random variables X and Y and set Z = X + Y. We are interested in retrieving the distribution of X knowing the distributions of Z and Y. The idea is to consider the moment generating function
\[ M_Z(t) = E[e^{tZ}] = M_X(t) \, M_Y(t) \]
from which
\[ M_X(t) = \frac{M_Z(t)}{M_Y(t)}. \]
The knowledge of M_X(t) gives us the distribution of the random variable X. However, it is not always easy to recover the distribution of X from M_X(t). Another approach to the problem is to express independence in terms of moments or cumulants. The cumulants are given by the derivatives (at zero) of the function g_X(t) = log E[e^{tX}]. We denote by c_n the cumulant of order n:
\[ c_n(X) = \left. \frac{\partial^n}{\partial t^n} \right|_{t=0} \log E[e^{tX}]. \]
The main advantage of using cumulants is due to the fact that for independent variables X and Y
\[ g_{X+Y}(t) = \log\big[ E\big(e^{t(X+Y)}\big) \big] = \log\big[ E(e^{tX}) \, E(e^{tY}) \big] = \log\big[ E(e^{tX}) \big] + \log\big[ E(e^{tY}) \big] = g_X(t) + g_Y(t), \]
this means that
\[ c_n(X + Y) = c_n(X) + c_n(Y). \]
Moments, denoted by m_n(X) = E[X^n], and cumulants of a random variable can be deduced from each other by the moment-cumulant formula
\[ m_n(X) = \sum_{i=1}^{n} \frac{1}{i!} \sum_{\substack{k_1 + \cdots + k_i = n \\ k_j \ge 1}} \binom{n}{k_1, \ldots, k_i} c_{k_1}(X) \cdots c_{k_i}(X). \]
(4)

Therefore, to obtain the distribution of X from those of X + Y and Y, one can compute the cumulants of X by the formula c_n(X) = c_n(X + Y) − c_n(Y) and then deduce the moments of X from its cumulants.
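These two steps can be sketched numerically. The snippet below (a minimal illustration; the Gaussian cumulant values are arbitrary choices) converts between moments and classical cumulants via the standard recursion m_n = Σ_k C(n−1, k−1) c_k m_{n−k}, subtracts the cumulants of Y from those of Z = X + Y, and rebuilds the moments of X:

```python
from math import comb

def moments_from_cumulants(c):
    """m_n = sum_{k=1}^n C(n-1, k-1) c_k m_{n-k}, with m_0 = 1."""
    m = [1.0]
    for n in range(1, len(c) + 1):
        m.append(sum(comb(n - 1, k - 1) * c[k - 1] * m[n - k] for k in range(1, n + 1)))
    return m[1:]

def cumulants_from_moments(m):
    """Invert the same recursion to get c_1..c_n from m_1..m_n."""
    full, c = [1.0] + list(m), []
    for n in range(1, len(m) + 1):
        partial = sum(comb(n - 1, k - 1) * c[k - 1] * full[n - k] for k in range(1, n))
        c.append(full[n] - partial)   # the k = n term is c_n * m_0 = c_n
    return c

# Z = X + Y with X ~ N(1, 2) and Y ~ N(0, 1), so Z ~ N(1, 3) (illustrative values)
m_Z = moments_from_cumulants([1, 3, 0, 0])
m_Y = moments_from_cumulants([0, 1, 0, 0])
c_X = [a - b for a, b in zip(cumulants_from_moments(m_Z), cumulants_from_moments(m_Y))]
m_X = moments_from_cumulants(c_X)
print(m_X)   # moments of N(1, 2): [1.0, 3.0, 7.0, 25.0]
```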

In the multiplicative case, we consider X and Y independent random variables, and we are interested in retrieving the distribution of X from those of XY and Y. In this case, the problem can be easily solved, since
\[ E[(XY)^n] = E[X^n] \, E[Y^n], \]

from which we obtain E[X^n] = E[(XY)^n] / E[Y^n]. Therefore, using the moments approach, we can compute the moments of X.
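As a minimal illustration (the distributions are arbitrary choices with known moments), take X uniform on (0,1), so E[X^n] = 1/(n+1), and Y exponential with unit rate, so E[Y^n] = n!; dividing the moments of XY by those of Y recovers the moments of X exactly:

```python
from math import factorial

# E[(XY)^n] = E[X^n] E[Y^n] by independence, so division recovers E[X^n]
m_Y  = [factorial(p) for p in range(1, 5)]            # moments of Y ~ Exp(1)
m_XY = [factorial(p) / (p + 1) for p in range(1, 5)]  # moments of XY
m_X  = [mxy / my for mxy, my in zip(m_XY, m_Y)]
print(m_X)   # 1/(p+1) for p = 1..4: [0.5, 0.333..., 0.25, 0.2]
```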

The moments method for scalar random variables is quite straightforward; in general, however, we face more complex situations. The generalization to multi-user, multi-antenna communication systems has dramatically changed the nature of wireless communication problems. Furthermore, multi-dimensional stochastic problems need to be solved, since cognitive devices are required to be simultaneously smarter and able to collaborate with one another. The random parameters in these problems are no longer scalar random variables but potentially vectors and matrices. The computation of deconvolution for random matrices is more complex than in the scalar case, and it is explained in the following.

3.2 Historical perspective

The origin of the moments approach for the derivation of the eigenvalue distribution of random matrices dates back to the work of Wigner [37]. Wigner was interested in the energy levels of nuclei (the positively charged central core of an atom). These energy levels are linked to the Hamiltonian operator by the Schrödinger equation, and the fact that they can be represented as the eigenvalues of the matrix representation of this operator led Wigner to replace the exact matrix by a random matrix having the same properties. In most cases, one can consider the following Hermitian random matrix
\[ H = \frac{1}{\sqrt{n}} \begin{pmatrix} 0 & +1 & +1 & +1 & -1 & -1 \\ +1 & 0 & -1 & +1 & +1 & +1 \\ +1 & -1 & 0 & +1 & +1 & +1 \\ +1 & +1 & +1 & 0 & +1 & +1 \\ -1 & +1 & +1 & +1 & 0 & -1 \\ -1 & +1 & +1 & +1 & -1 & 0 \end{pmatrix} \]
where the upper-diagonal elements are i.i.d. and generated with a binomial distribution. His study revealed that, as the dimension of the matrix increases, the eigenvalues of the matrix become more and more predictable, irrespective of the exact realization of the matrix (see Figure 1). The idea used to show this is to compute, as the dimension increases, the moments of the matrix H, that is, the normalized traces of its successive powers. Consider
Figure 1

Semicircle law and simulation for a 512 × 512 Wigner matrix.

\[ dF_n(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \delta(\lambda - \lambda_i), \]
then the moments of the eigenvalue distribution of H are given by:
\[ \begin{aligned} m_1(H) &= \frac{1}{n} \mathrm{Tr}(H) = \frac{1}{n} \sum_{i=1}^{n} \lambda_i = \int \lambda \, dF_n(\lambda) \\ m_2(H) &= \frac{1}{n} \mathrm{Tr}(H^2) = \frac{1}{n} \sum_{i=1}^{n} \lambda_i^2 = \int \lambda^2 \, dF_n(\lambda) \\ &\;\;\vdots \\ m_k(H) &= \frac{1}{n} \mathrm{Tr}(H^k) = \frac{1}{n} \sum_{i=1}^{n} \lambda_i^k = \int \lambda^k \, dF_n(\lambda) \end{aligned} \]
The traces above can be computed, as the dimension increases, using combinatorial tools. It turns out that all odd moments converge to zero, whereas the even moments converge to what are known as the Catalan numbers. The only distribution which has all its odd moments null and all its even moments equal to the Catalan numbers is the semicircular law (see Figure 1), given by
\[ f(x) = \frac{1}{2\pi} \sqrt{4 - x^2} \]

with |x| ≤ 2. In this way, the moments approach is shown to be a useful method for computing the eigenvalue distribution of classical known matrices.
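Wigner's observation is easy to reproduce numerically. The sketch below (matrix size and seed are illustrative) builds one realization of the scaled ±1 matrix described above and compares its first moments with the predicted limits: odd moments near 0, even moments near the Catalan numbers 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512   # matches the dimension used in Figure 1

# Symmetric matrix: i.i.d. +/-1 entries above the diagonal, zero diagonal, 1/sqrt(n) scaling
A = rng.choice([-1.0, 1.0], size=(n, n))
H = np.triu(A, 1)
H = (H + H.T) / np.sqrt(n)

moments = [float(np.trace(np.linalg.matrix_power(H, k))) / n for k in range(1, 5)]
print(moments)   # approximately [0, 1, 0, 2]
```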

When more than one matrix is considered, the concept of asymptotic freeness [38] allows us to compute the eigenvalue distribution of sums and products of random matrices.

3.3 Free probability framework

Free probability theory [38] was introduced by Voiculescu in the 1980s in order to attack some problems related to operator algebras, and it can be considered a generalization of classical probability theory to non-commutative algebras. The analogy between the concept of freeness and independence in classical probability allows us to work with non-commutative operators such as matrices, which can be considered elements of what is called a non-commutative probability space. The algebra of Hermitian random matrices is a particular case of such a probability space, for which the random variables, i.e., the random matrices, do not commute with respect to the matrix product.

Definition 3.1 A non-commutative probability space \((\mathcal{A}, \varphi)\) consists of a unital^a non-commutative algebra \(\mathcal{A}\) over \(\mathbb{C}\) and a unital linear functional
\[ \varphi : \mathcal{A} \to \mathbb{C}, \qquad \varphi(1_{\mathcal{A}}) = 1. \]
The elements of \(\mathcal{A}\) are called non-commutative random variables. In our case, \(\mathcal{A}\) will consist of n × n matrices or random matrices. For matrices, φ will be the normalized trace defined for any \(A \in \mathcal{A}\) by
\[ \mathrm{tr}(A) = \frac{1}{n} \mathrm{Tr}(A) = \frac{1}{n} \sum_{i=1}^{n} A(i,i), \]
while for random matrices, φ will be the linear functional τ defined by
\[ \tau(A) = \frac{1}{n} E[\mathrm{Tr}(A)] = \frac{1}{n} \sum_{i=1}^{n} E[A(i,i)]. \]
Definition 3.2 Let A and B be n × n Hermitian random matrices and consider the functional \( \varphi(A) := \lim_{n \to \infty} \frac{1}{n} E[\mathrm{Tr}(A)] \). We say that A and B are asymptotically free if, whenever there exist polynomials p_i and q_j such that φ[p_i(A)] = 0 for all i and φ[q_j(B)] = 0 for all j, then necessarily
φ [ p 1 ( A ) q 1 ( B ) p 2 ( A ) q 2 ( B ) ] = 0 .

Given A, B n × n Hermitian and asymptotically free random matrices such that their eigenvalue distributions converge to some probability measures µ_A and µ_B, respectively, the eigenvalue distributions of A + B and AB converge to probability measures which depend only on µ_A and µ_B, called the additive and multiplicative free convolutions, and denoted by µ_A ⊞ µ_B and µ_A ⊠ µ_B, respectively.

Additive free deconvolution: The additive free deconvolution of a measure ρ by a measure ν is (when it exists) the only measure µ such that ρ = µ ⊞ ν. In this case, µ is denoted by µ = ρ ⊟ ν.

Multiplicative free deconvolution: The multiplicative free deconvolution of a measure ρ by a measure ν is (when it exists) the only measure µ such that ρ = µ ⊠ ν. In this case, µ is denoted by µ = ρ ⊘ ν.

For a given n × n random matrix A, the p-th moment of A is defined, if it exists, as:
\[ m_A^{(n,p)} = E[\mathrm{tr}(A^p)] = \int \lambda^p \, d\rho_n(\lambda) \]
(5)
where \( d\rho_n(\lambda) = E\big( \frac{1}{n} \sum_{i=1}^{n} \delta(\lambda - \lambda_i) \big) \) is the associated empirical mean measure and λ_i are the eigenvalues of A. The idea of additive and multiplicative free deconvolution stems from the fact that, in the asymptotic case,
\[ \begin{aligned} m_{A+B}^{(p)} &:= \lim_{n \to \infty} \frac{1}{n} E\big[ \mathrm{Tr}\big( (A+B)^p \big) \big] = f\big( m_A^{(1)}, \ldots, m_A^{(p)}, m_B^{(1)}, \ldots, m_B^{(p)} \big) \\ m_{AB}^{(p)} &:= \lim_{n \to \infty} \frac{1}{n} E\big[ \mathrm{Tr}\big( (AB)^p \big) \big] = g\big( m_A^{(1)}, \ldots, m_A^{(p)}, m_B^{(1)}, \ldots, m_B^{(p)} \big) \end{aligned} \]

which means that we can express the moments of A + B and the moments of AB as a function of the moments of A and the moments of B. In other words, the joint distribution of A + B and the joint distribution of AB depend only on the marginal distributions of A and B.

Even if matrices with finite dimensions are not free, the free probability framework, based on the moments, can still be used to propose an algorithmic method to compute these operations for finite size matrices. This means that
\[ \begin{aligned} m_{A+B}^{(n,p)} &= \frac{1}{n} E\big[ \mathrm{Tr}\big( (A+B)^p \big) \big] = f\big( m_A^{(1)}, \ldots, m_A^{(p)}, m_B^{(1)}, \ldots, m_B^{(p)} \big) \\ m_{AB}^{(n,p)} &= \frac{1}{n} E\big[ \mathrm{Tr}\big( (AB)^p \big) \big] = g\big( m_A^{(1)}, \ldots, m_A^{(p)}, m_B^{(1)}, \ldots, m_B^{(p)} \big) \end{aligned} \]

Note that, for n → ∞, the moment m_A^{(n,p)} converges almost surely to an analytical expression m_A^{(p)} that depends only on some specific parameters of A (such as the distribution of its entries).^b Therefore, in the finite setting one is still able, by recursion, to express all the moments of A with respect only to the moments of A + B and B, or of AB and B.

We will give a characterization of free deconvolution in terms of free cumulants, which are polynomials in the moments with a nice behaviour with respect to freeness. The nomenclature comes from classical probability theory, where the corresponding objects are well known. There exists a combinatorial description of these classical cumulants which depends on partitions of sets. In the same way, free cumulants can also be described combinatorially; the only difference with the classical case is the replacement of partitions by the so-called non-crossing partitions [39].

Definition 3.3 A partition π of a set {1, 2,..., n} is a decomposition into subsets V_i: π = {V_1,..., V_r} such that \( \bigcup_{i=1}^{r} V_i = \{1, \ldots, n\} \), with V_i ≠ ∅ and V_i ∩ V_j = ∅ for all i ≠ j.

The set of all partitions of {1, 2,..., n} is denoted by P(n), and the V_i are called the blocks of π.

For a given random variable X, the relationship between moments and cumulants given in (4) can be combinatorially expressed by
\[ E(X^n) = \sum_{\pi \in P(n)} c_\pi, \]

where \( c_\pi = \prod_{i=1}^{|\pi|} c_{|\pi_i|} \) when π = {π_1,..., π_{|π|}}.

Definition 3.4 A partition π of {1,..., n} is non-crossing if whenever we have four numbers 1 ≤ i < k < j < l ≤ n such that i and j are in the same block, k and l are in the same block, we also have that i, j, k, l belong to the same block.

We denote by NC(n) the set of non-crossing partitions of {1,..., n}; when the condition above fails, π is called a crossing partition. Examples of non-crossing and crossing partitions are given in Figures 2 and 3.
Figure 2

Non-crossing partition {{1, 3, 5}, {2}, {4}} of the set {1, 2, 3, 4, 5}.

Figure 3

Crossing partition {{1, 3, 5}, {2, 4}} of the set {1, 2, 3, 4, 5}.

The computation of free deconvolution by the moments method is based on the moment-cumulant formula, which gives a relation between the moments \( m_A^{(p)} \equiv m_{\mu_A}^{(p)} \) and the free cumulants \( \kappa_A^{(p)} \equiv \kappa_{\mu_A}^{(p)} \) of a matrix A, where µ_A is the associated measure. It turns out that the cumulants are much easier to compute, thanks also to the concept of non-crossing partitions. The moment-cumulant formula says that
\[ m_A^{(p)} = \sum_{\pi = \{V_1, \ldots, V_k\} \in NC(p)} \prod_{i=1}^{k} \kappa_A^{(|V_i|)}, \]
(6)

where |V_i| is the cardinality of the block V_i. From (6) it follows that the first p cumulants can be computed from the first p moments, and vice versa.
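Formula (6) can be checked by brute force for small p. The sketch below (purely illustrative) enumerates non-crossing partitions by filtering all set partitions, verifies that |NC(p)| equals the Catalan numbers, and recovers the semicircle moments from the free cumulant sequence κ_2 = 1, κ_p = 0 otherwise:

```python
from itertools import combinations

def partitions(s):
    """Generate all set partitions of the list s."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def is_noncrossing(pi):
    """No a < b < c < d with a, c in one block and b, d in a different one."""
    block_of = {x: i for i, blk in enumerate(pi) for x in blk}
    for a, b, c, d in combinations(sorted(block_of), 4):
        if block_of[a] == block_of[c] != block_of[b] == block_of[d]:
            return False
    return True

def nc_partitions(p):
    return [pi for pi in partitions(list(range(1, p + 1))) if is_noncrossing(pi)]

def moment_from_free_cumulants(p, kappa):
    """Equation (6): m_p = sum over NC(p) of products of kappa_{|V|}."""
    total = 0.0
    for pi in nc_partitions(p):
        prod = 1.0
        for block in pi:
            prod *= kappa[len(block) - 1]
        total += prod
    return total

print([len(nc_partitions(p)) for p in range(1, 6)])   # Catalan: [1, 2, 5, 14, 42]
kappa = [0, 1, 0, 0, 0, 0]                            # semicircle free cumulants
print([moment_from_free_cumulants(p, kappa) for p in [2, 4, 6]])   # [1.0, 2.0, 5.0]
```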

The following characterization allows us to compute easily the additive free convolution using free cumulants.

Theorem 3.5 [38] Given A and B asymptotically free random matrices, µ_A ⊞ µ_B is the only law such that for all p ≥ 1
\[ \kappa_{\mu_A \boxplus \mu_B}^{(p)} = \kappa_{\mu_A}^{(p)} + \kappa_{\mu_B}^{(p)} \]
(7)
Hence, the deconvolution of µ_{A+B} by µ_B, denoted by µ_{A+B} ⊟ µ_B, is characterized by the fact that for all p ≥ 1
\[ \kappa_{\mu_{A+B} \boxminus \mu_B}^{(p)} = \kappa_{\mu_{A+B}}^{(p)} - \kappa_{\mu_B}^{(p)}. \]
(8)

The implementation of additive free deconvolution is based on the following steps: for the two matrices A + B and B, we first compute the free cumulants; then, using the relation between the cumulants and the moments, we obtain information about the distribution of the eigenvalues of A.
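These steps can be sketched as follows. Instead of enumerating partitions, the snippet uses the equivalent recursion m_n = Σ_{k=1}^n κ_k Σ_{i_1+⋯+i_k=n−k} m_{i_1}⋯m_{i_k} to pass between moments and free cumulants; the two input moment sequences (semicircle and free Poisson) are illustrative choices:

```python
from itertools import product

def free_cumulants(m):
    """kappa_1..kappa_n from m_1..m_n via the free moment-cumulant recursion."""
    full, kappa = [1.0] + list(m), []
    for n in range(1, len(m) + 1):
        acc = 0.0
        for k in range(1, n):                      # terms with kappa_1..kappa_{n-1}
            for idx in product(range(n - k + 1), repeat=k):
                if sum(idx) == n - k:
                    term = kappa[k - 1]
                    for i in idx:
                        term *= full[i]
                    acc += term
        kappa.append(full[n] - acc)                # the kappa_n term has all i_j = 0
    return kappa

def free_moments(kappa):
    """Rebuild m_1..m_n from kappa_1..kappa_n with the same recursion."""
    full = [1.0]
    for n in range(1, len(kappa) + 1):
        acc = 0.0
        for k in range(1, n + 1):
            for idx in product(range(n - k + 1), repeat=k):
                if sum(idx) == n - k:
                    term = kappa[k - 1]
                    for i in idx:
                        term *= full[i]
                    acc += term
        full.append(acc)
    return full[1:]

m_A = [0.0, 1.0, 0.0, 2.0]    # semicircle moments (illustrative)
m_B = [1.0, 2.0, 5.0, 14.0]   # free Poisson moments (illustrative)
m_sum = free_moments([a + b for a, b in zip(free_cumulants(m_A), free_cumulants(m_B))])
# Deconvolution: subtract the free cumulants of B from those of A + B
k_rec = [s - b for s, b in zip(free_cumulants(m_sum), free_cumulants(m_B))]
print(free_moments(k_rec))    # recovers m_A
```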

The moments method, in the multiplicative case, is based on the relation between the moments \( m_A^{(p)} \equiv m_{\mu_A}^{(p)} \) and the coefficients \( s_A^{(p)} \equiv s_{\mu_A}^{(p)} \) of the S-transform^c of the measure associated to A. They can be deduced from each other through the following relations for all p ≥ 1
\[ m_A^{(1)} s_A^{(1)} = 1, \qquad s_A^{(p)} = \sum_{k=1}^{p+1} s_A^{(k)} \sum_{\substack{p_1 + \cdots + p_k = p+1 \\ p_i \ge 1}} m_A^{(p_1)} \cdots m_A^{(p_k)}. \]

Hence, we can compute multiplicative free convolution by the following characterization.

Theorem 3.6 [38] Given A and B asymptotically free random matrices, µ_A ⊠ µ_B is the only law such that:
\[ S_{\mu_A \boxtimes \mu_B} = S_{\mu_A} S_{\mu_B} \]
The multiplicative free deconvolution of µ_{AB} by µ_B, µ_{AB} ⊘ µ_B, is characterized by the fact that for all p ≥ 1
\[ s_{\mu_{AB} \oslash \mu_B}^{(p)} s_{\mu_B}^{(1)} = s_{\mu_{AB}}^{(p)} - \sum_{k=1}^{p-1} s_{\mu_{AB} \oslash \mu_B}^{(k)} s_{\mu_B}^{(p+1-k)}. \]
(9)
Even though freeness usually does not hold for finite matrices, the moments method can still be used to propose algorithmic methods to compute their moments. Focusing on the study of random matrices in the finite case, the authors of [40] were able to derive the explicit series expansion of the eigenvalue distribution of various models, namely the case of non-central Wishart distributions as well as one-sided correlated zero-mean Wishart distributions. In particular, they proposed a general finite-dimensional statistical inference framework based on the moments method, which takes a set of moments as input and produces sets of moments as output, with the dimensions of the matrices considered finite. They focus on the finite Gaussian case. The formulas for the moments presented in their contributions have been generated by iterations through partitions and permutations and by concepts from combinatorics. The first and simplest result concerns the moments of the product of a deterministic matrix and a Wishart matrix. Let n, N be positive integers, X an n × N standard complex Gaussian^d matrix, and D a (deterministic) n × n matrix. Denoting the moments D_p = tr(D^p) and \( M_p = E\big[ \mathrm{tr}\big( \big( D \tfrac{1}{N} X X^H \big)^p \big) \big] \) for any positive integer p, Theorem 1 in [40] allows us to express the moments M_p in terms of the moments D_p. In particular, the first three moments can be written as
\[ \begin{aligned} M_1 &= D_1 \\ M_2 &= D_2 + c D_1^2 \\ M_3 &= \left( 1 + \frac{1}{N^2} \right) D_3 + 3 c D_2 D_1 + c^2 D_1^3 \end{aligned} \]
where c = n/N. By a simple recursion, we can express D_p from M_p. For the first three moments, these recursions become
\[ \begin{aligned} D_1 &= M_1 \\ D_2 &= M_2 - c M_1^2 \\ D_3 &= \left( M_3 - 3 c \left( M_2 - c M_1^2 \right) M_1 - c^2 M_1^3 \right) \left( 1 + \frac{1}{N^2} \right)^{-1}. \end{aligned} \]
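This recursion can be checked algebraically: the sketch below (with illustrative values for D_1, D_2, D_3 and for the dimensions n, N; the helper names are ours) pushes a moment triple through the forward formulas and inverts them:

```python
def forward(D1, D2, D3, n, N):
    """First three moments of D * (1/N) X X^H in terms of those of D."""
    c = n / N
    M1 = D1
    M2 = D2 + c * D1**2
    M3 = (1 + 1 / N**2) * D3 + 3 * c * D2 * D1 + c**2 * D1**3
    return M1, M2, M3

def deconvolve(M1, M2, M3, n, N):
    """Invert the formulas above by recursion to recover D_1, D_2, D_3."""
    c = n / N
    D1 = M1
    D2 = M2 - c * M1**2
    D3 = (M3 - 3 * c * D2 * D1 - c**2 * D1**3) / (1 + 1 / N**2)
    return D1, D2, D3

print(deconvolve(*forward(1.0, 2.5, 7.0, n=8, N=16), n=8, N=16))   # ≈ (1.0, 2.5, 7.0)
```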
Considering the sum of a deterministic n × N matrix D and an n × N standard complex Gaussian matrix X, in accordance with [40, Theorem 2], for any positive integer p the moments \( M_p = E\big[ \mathrm{tr}\big( \big( \tfrac{1}{N} (D + X)(D + X)^H \big)^p \big) \big] \) can be expressed in terms of the moments \( D_p = \mathrm{tr}\big( \big( \tfrac{1}{N} D D^H \big)^p \big) \) by the following formulas:
\[ \begin{aligned} M_1 &= D_1 + 1 \\ M_2 &= D_2 + (2 + 2c) D_1 + (1 + c) \\ M_3 &= D_3 + (3 + 3c) D_2 + 3 c D_1^2 + \left( 3 + 9c + 3c^2 + \frac{3}{N^2} \right) D_1 + \left( 1 + 3c + c^2 + \frac{1}{N^2} \right) \end{aligned} \]

In this case also, by a simple recursion, one can express D_p from M_p. It is clear how the operation of deconvolution can be viewed as operating on the moments: explicit expressions for the moments of the Gram matrices associated to our models (the sum or product of a deterministic matrix and a complex standard Gaussian matrix) are found and are expressed in terms of the moments of the matrices involved. Hence, deconvolution means expressing the moments, in this case those of the deterministic matrices, as a function of the moments of the Gram matrices.
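The same recursive inversion applies to the information-plus-noise formulas above; again the input values and helper names are illustrative:

```python
def forward_ipn(D1, D2, D3, n, N):
    """First three moments of (1/N)(D+X)(D+X)^H in terms of D_p."""
    c = n / N
    M1 = D1 + 1
    M2 = D2 + (2 + 2 * c) * D1 + (1 + c)
    M3 = (D3 + (3 + 3 * c) * D2 + 3 * c * D1**2
          + (3 + 9 * c + 3 * c**2 + 3 / N**2) * D1
          + (1 + 3 * c + c**2 + 1 / N**2))
    return M1, M2, M3

def deconvolve_ipn(M1, M2, M3, n, N):
    """Recover D_1, D_2, D_3 by subtracting the known terms in order."""
    c = n / N
    D1 = M1 - 1
    D2 = M2 - (2 + 2 * c) * D1 - (1 + c)
    D3 = (M3 - (3 + 3 * c) * D2 - 3 * c * D1**2
          - (3 + 9 * c + 3 * c**2 + 3 / N**2) * D1
          - (1 + 3 * c + c**2 + 1 / N**2))
    return D1, D2, D3

print(deconvolve_ipn(*forward_ipn(0.5, 1.2, 3.4, n=4, N=8), n=4, N=8))   # ≈ (0.5, 1.2, 3.4)
```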

Similar results are found when the Gaussian matrices are assumed to be square and self-adjoint. The implementation of the results is also able to generate the moments of many types of combinations of independent Gaussian and Wishart random matrices.

The algorithms are based on iterations through partitions and permutations, and they give us rather complex expressions. However, the authors of [41] have generated Matlab code based on concepts such as partitions and permutations in order to implement the above results.

Once the moments are known, the Newton-Girard formulas [42] can be used to retrieve the eigenvalues from the moments. These formulas state a relationship between the elementary symmetric polynomials
\[ \begin{aligned} \Pi_1(\lambda_1, \ldots, \lambda_n) &= \lambda_1 + \cdots + \lambda_n \\ \Pi_2(\lambda_1, \ldots, \lambda_n) &= \sum_{1 \le i < j \le n} \lambda_i \lambda_j \\ &\;\;\vdots \\ \Pi_n(\lambda_1, \ldots, \lambda_n) &= \lambda_1 \cdots \lambda_n, \end{aligned} \]
and the sums of the powers of their variables
\[ S_p(\lambda_1, \ldots, \lambda_n) = \sum_{1 \le i \le n} \lambda_i^p = n \, t_p \]
(with t p being the p-th moment) through the recurrence relation
\[ (-1)^m m \, \Pi_m(\lambda_1, \ldots, \lambda_n) + \sum_{k=1}^{m} (-1)^{k+m} S_k(\lambda_1, \ldots, \lambda_n) \, \Pi_{m-k}(\lambda_1, \ldots, \lambda_n) = 0. \]
(10)
If the sums of powers S_p(λ_1,..., λ_n) are known for 1 ≤ p ≤ n, relation (10) can be used to compute the elementary symmetric polynomials Π_m(λ_1,..., λ_n) for 1 ≤ m ≤ n. Therefore, the characteristic polynomial
$$(\lambda - \lambda_1) \cdots (\lambda - \lambda_n)$$

(the roots of which are the eigenvalues of the associated matrix) can be fully characterized, since the coefficient of λ<sup>n−k</sup> is given by (−1)<sup>k</sup> Π k (λ1,..., λ n ). In this way the entire characteristic polynomial can be computed, and the eigenvalues can be found as its roots.
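A minimal Python sketch of this procedure (illustrative names; the inputs S_p are the unnormalized power sums n·t_p): Newton's identities build the elementary symmetric polynomials, and the eigenvalues are then recovered as the roots of the characteristic polynomial.

```python
import numpy as np

def eigenvalues_from_power_sums(S):
    """Recover lambda_1..lambda_n from the power sums S_p = sum_i lambda_i^p,
    p = 1..n, via the Newton-Girard recurrence (10)."""
    n = len(S)
    e = [1.0]                                   # e_0 = 1 by convention
    for m in range(1, n + 1):
        # rearranging (10): m * e_m = sum_{k=1}^m (-1)^(k-1) e_{m-k} S_k
        e.append(sum((-1)**(k - 1) * e[m - k] * S[k - 1]
                     for k in range(1, m + 1)) / m)
    # characteristic polynomial: coefficient of lambda^(n-k) is (-1)^k e_k
    coeffs = [(-1)**k * e[k] for k in range(n + 1)]
    return np.sort(np.roots(coeffs))
```

For example, for the eigenvalues 0.25 and 2.25 used in Section 5.1, the power sums are S = (2.5, 5.125), and the routine returns the eigenvalues back.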

4 Non-free case

In recent works, deconvolution based on the moments method has been analyzed as n → ∞ for some particular matrices A and B: for instance, when A is a random Vandermonde matrix and B is a deterministic diagonal matrix [43], or when A and B are two independent random Vandermonde matrices [44]. The authors of [43] developed analytical methods for finding the moments of random Vandermonde matrices with entries on the unit circle and provided explicit expressions for the moments of the Gram matrices associated to the models considered. These explicit expressions are useful for performing deconvolution. In these cases the moments technique has been shown to be very appealing and powerful for deriving the exact asymptotic moments of "non-free matrices". Matrices of this type occur in cognitive radio [45].

Definition 4.1 An N × M random Vandermonde matrix with entries on the unit circle has the form
$$V = \frac{1}{\sqrt{N}} \begin{pmatrix} 1 & \cdots & 1 \\ e^{-j\omega_1} & \cdots & e^{-j\omega_M} \\ \vdots & & \vdots \\ e^{-j(N-1)\omega_1} & \cdots & e^{-j(N-1)\omega_M} \end{pmatrix},$$

where the phases ω1,..., ω M are i.i.d. random variables in [0, 2π].

The asymptotic behaviour of random Vandermonde matrices is analyzed when N and M are large and both go to infinity at a given ratio, M/N → c with c constant. The scaling factor 1/√N and the assumption that the entries lie on the unit circle guarantee that the analysis yields a limiting asymptotic behaviour.
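As a quick numerical check of Definition 4.1 (a Python sketch with illustrative names): with the 1/√N scaling, every column of V has unit norm, so the first moment tr_M(V^H V) equals 1 exactly for any N and M.

```python
import numpy as np

def random_vandermonde(N, M, rng):
    """N x M Vandermonde matrix with i.i.d. phases uniform on [0, 2*pi),
    entries on the unit circle, scaled by 1/sqrt(N) (Definition 4.1)."""
    omega = rng.uniform(0.0, 2*np.pi, size=M)
    rows = np.arange(N)[:, None]                 # exponents 0, 1, ..., N-1
    return np.exp(-1j * rows * omega[None, :]) / np.sqrt(N)

def empirical_moment(V, n):
    """tr_M((V^H V)^n): normalized trace of the n-th power of the Gram matrix."""
    G = V.conj().T @ V
    return (np.trace(np.linalg.matrix_power(G, n)) / G.shape[0]).real
```

Higher moments fluctuate with the random phases, but averaging over realizations approaches the limits given by the expansion coefficients below.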

Definition 4.2 For $\rho = \{W_1, \ldots, W_r\} \in P(n)$, we define
$$K_{\rho,\omega,N} = \frac{1}{N^{n+1-|\rho|}} \int_{(0,2\pi)^{|\rho|}} \prod_{k=1}^{n} \frac{1 - e^{jN(\omega_{b(k-1)} - \omega_{b(k)})}}{1 - e^{j(\omega_{b(k-1)} - \omega_{b(k)})}}\, d\omega_1 \cdots d\omega_{|\rho|},$$
(11)
where $\omega_{W_1}, \ldots, \omega_{W_{|\rho|}}$ are i.i.d. (indexed by the blocks of ρ) with the same distribution as ω, and where b(k) is the block of ρ which contains k. If the limit
$$K_{\rho,\omega} = \lim_{N\to\infty} K_{\rho,\omega,N}$$
(12)

exists, we call it a Vandermonde mixed moment expansion coefficient.

These quantities do not behave exactly as cumulants, but rather as weights which tell us how a partition in the moment formula we present should be weighted. In this respect, the formulas presented for the moments differ from classical or free moment-cumulant formulas, since these do not perform this weighting. The limits K ρ,ω may not always exist, and necessary and sufficient conditions for their existence seem to be hard to find. In [43], it has been proved that the limit in (12) exists if the density of ω is continuous. The calculation is based on combinatorial computation using crossing partitions, since the matrices are not free.

Theorem 4.3 Let ω1,..., ω M be phases, independent and identically distributed on [0, 2π]. Then
$$m_n = \lim_{N\to\infty} E\big[\mathrm{tr}_M\big((P\, V^H V)^n\big)\big] = \sum_{\rho \in P(n)} K_{\rho,\omega}\, c^{|\rho|-1}\, P_\rho$$
(13)

exists when M/N → c > 0, where $P_n = \lim_{N\to\infty} \mathrm{tr}_M(P^n)$ and $P_\rho = \prod_{i=1}^{|\rho|} P_{|W_i|}$.

Remark 1. The fact that all moments exist is not enough to guarantee that there exists a limit probability measure having these moments. However, it is proved in [46] that Carleman's condition is satisfied.

The uniform phase distribution plays an important role for Vandermonde matrices.

Theorem 4.4 For Vandermonde matrices with uniform phase distribution u, the Vandermonde mixed moment expansion coefficient
$$K_{\rho,u} = \lim_{N\to\infty} K_{\rho,u,N}$$
(14)

exists for every ρ ∈ P(n). Moreover, K ρ,u satisfies the following properties:

  • 0 ≤ K ρ,u ≤ 1;

  • K ρ,u is a rational number for every ρ;

  • K ρ,u = 1 if and only if ρ is a non-crossing partition;

  • Let $V_{\omega,n} = \lim_{N\to\infty} E\big[\mathrm{tr}_M\big((V_\omega^H V_\omega)^n\big)\big]$ (with V ω a Vandermonde matrix with phase distribution ω); then V u,n ≤ V ω,n .

The importance of uniform phase distribution is also expressed by the following theorem.

Theorem 4.5 The Vandermonde mixed moment expansion coefficient $K_{\rho,\omega} = \lim_{N\to\infty} K_{\rho,\omega,N}$ exists whenever the density p ω of ω is continuous on [0, 2π], in which case
$$K_{\rho,\omega} = K_{\rho,u}\, (2\pi)^{|\rho|-1} \int_0^{2\pi} p_\omega(x)^{|\rho|}\, dx.$$
(15)

The behaviour of Vandermonde matrices is different when the density of ω has singularities, and it depends on the density growth rates near the singularity points. Indeed, for generalized Vandermonde matrices, whose columns do not consist of uniformly distributed powers, it is possible to define mixed moment expansion coefficients, but the formulas are more complex.

When many independent Vandermonde matrices are considered, the following relations hold.

Theorem 4.6 If V1, V2,... are independent Vandermonde matrices with the same phase distribution ω with continuous density, then
$$\lim_{N\to\infty} E\big[\mathrm{tr}_M\big(P_1(N)\, V_{i_1}^H V_{i_2} \cdots P_n(N)\, V_{i_n}^H V_{i_1}\big)\big] = \sum_{\substack{\rho \in P(n) \\ \rho \le \sigma}} K_{\rho,\omega}\, c^{|\rho|-1}\, P_\rho$$
(16)

exists when M/N → c, with σ = {σ1, σ2} = {{1, 3,...}, {2, 4,...}}, and where "≤" denotes the refinement order, i.e., any block of ρ is contained within a block of σ.

If V1, V2,... are independent Vandermonde matrices with different phase distributions ω i having continuous densities $p_{\omega_i}$, Equation (16) still holds with K ρ,ω replaced by
$$K_{\rho,u}\, (2\pi)^{|\rho|-1} \int_0^{2\pi} \prod_{i=1}^{n} p_{\omega_i}(x)\, dx.$$

In [44], the authors generalize the above results replacing convergence in distribution with almost sure convergence in distribution.

Such matrices are applied to cognitive radio in [45], where the authors consider a scenario in which a primary and a secondary user wish to communicate with their corresponding receivers simultaneously over frequency-selective channels. Under the realistic assumptions that the primary user is ignorant of the secondary user's presence and that the secondary transmitter has no side information about the primary's message, the authors propose a Vandermonde precoder that cancels the interference from the secondary user by exploiting the redundancy of a cyclic prefix.

4.1 Toeplitz and Hankel matrices

The same strategy used to compute the moments of Vandermonde matrices can be used for Hankel and Toeplitz matrices.

Definition 4.7 We define a Toeplitz matrix
$$T = \frac{1}{\sqrt{N}} \begin{pmatrix} X_0 & X_1 & X_2 & \cdots & X_{N-2} & X_{N-1} \\ X_1 & X_0 & X_1 & & & X_{N-2} \\ X_2 & X_1 & X_0 & & & \vdots \\ \vdots & & & \ddots & & \\ X_{N-1} & X_{N-2} & \cdots & & X_1 & X_0 \end{pmatrix},$$
(17)

where X i are i.i.d., real-valued random variables with unit variance.

Theorem 4.8 [44] Denote by M i the asymptotic moment of order 2i of a Toeplitz matrix T. Then
$$M_1 = 1, \qquad M_2 = \frac{8}{3}, \qquad M_3 = 11.$$

Similar results hold for Hankel matrices.

Definition 4.9 We define a Hankel matrix
$$H = \frac{1}{\sqrt{N}} \begin{pmatrix} X_1 & X_2 & \cdots & X_{N-1} & X_N \\ X_2 & X_3 & \cdots & X_N & X_{N+1} \\ \vdots & & & & \vdots \\ X_N & X_{N+1} & \cdots & X_{2N-2} & X_{2N-1} \end{pmatrix},$$
(18)

where X i are i.i.d., real-valued random variables with unit variance.

Theorem 4.10 [44] Denote by M i the asymptotic moment of order 2i of a Hankel matrix H. Then
$$M_1 = 1, \qquad M_2 = \frac{8}{3}, \qquad M_3 = 14.$$
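Both definitions can be realized directly by indexing; the sketch below (Python, illustrative names) builds the two matrices, and Monte Carlo averages of tr_N(T^{2i}) and tr_N(H^{2i}) then approach the values in Theorems 4.8 and 4.10 as N grows.

```python
import numpy as np

def toeplitz_matrix(x):
    """N x N Toeplitz matrix (17): entry (i, j) is X_{|i-j|} / sqrt(N)."""
    N = len(x)
    i, j = np.indices((N, N))
    return x[np.abs(i - j)] / np.sqrt(N)

def hankel_matrix(x):
    """N x N Hankel matrix (18) from X_1..X_{2N-1}: entry (i, j) is
    X_{i+j+1} / sqrt(N)  (the array x holds X_1 at index 0)."""
    N = (len(x) + 1) // 2
    i, j = np.indices((N, N))
    return x[i + j] / np.sqrt(N)
```

Both constructions produce symmetric matrices: the Toeplitz entries depend only on |i − j| and the Hankel entries only on i + j.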

Toeplitz and Hankel matrices are structured matrices used for compressive wide-band spectrum sensing schemes [47, 48] and for direction of arrival estimation [49].

5 Application

5.1 Power estimation

We consider a multi-user MIMO system where the received signal can be expressed by
$$y_i = W P^{1/2} s_i + \sigma n_i$$
(19)
where W, P, s i , and n i are, respectively, the N × K channel gain matrix, the K × K diagonal power matrix (accounting for the different distances from which the users emit), the K × 1 signal vector, and the N × 1 noise vector, with noise parameter σ. In particular, W, s i , n i are independent standard complex Gaussian matrices and vectors. We are interested in estimating the powers with which the users send information, from M observations (during which the channel gain matrix stays constant) of the vector y i . We consider the 2 × 2 matrix
$$P^{1/2} = \begin{pmatrix} 1.5 & 0 \\ 0 & 0.5 \end{pmatrix}$$
(20)
and we apply additive deconvolution first, and then multiplicative deconvolution twice (each application takes care of one Gaussian matrix). We estimate the eigenvalues of P from an increasing number L of observations of the matrix Y = [y1,..., y M ] representing the received signals (we average across several block-fading channels). Hence, we estimate the moments of the matrix P based only on the moments of the matrix YY H . Knowing the moments of P, we can estimate its eigenvalues using the Newton-Girard formulas. As L increases, the predicted eigenvalues get closer to the true eigenvalues of P. Figure 4 illustrates the estimation of the eigenvalues up to L = 1000 observations. The actual powers are 2.25 and 0.25, and the variance σ of the noise is assumed equal to 0.1.
Figure 4

Estimation of the powers for the model (19), as the number L of observations increases; the sizes of the matrices are K = N = M = 2 and σ = 0.1. The actual powers are 2.25 and 0.25.

5.2 Understanding the network in a finite time

In cognitive MIMO networks, one must learn and control the "black box" (for instance the wireless channel) with multiple inputs and multiple outputs within a fraction of time and with finite energy. The fraction of time constraint is due to the fact that the channel (black box) changes over time. Of particular interest is the estimation of the capacity within the window of observation.

Let y be the output vector, x and n respectively the input signal and the noise vector, so that
y = x + σ n .
(21)
In the Gaussian case, the rate is given by
$$C = H(y) - H(y|x) = \log_2 \det(\pi e R_Y) - \log_2 \det(\pi e R_N) = \log_2 \frac{\det(R_Y)}{\det(R_N)}$$

where R Y is the covariance of the output signal and R N is the covariance of the noise. Therefore, one can fully describe the information transfer in the system by knowing only the eigenvalues of R Y and R N . Unfortunately, the receiver only has access to a limited number L of observations of y and not to the covariance R Y . However, in the case where x and n are Gaussian vectors, y can be written as $y = R_Y^{1/2} u$, where u is an i.i.d. standard Gaussian vector. The problem therefore falls in the realm of inference with a correlated Wishart model: $\frac{1}{L}\sum_{i=1}^{L} y_i y_i^H = R_Y^{1/2} \big(\frac{1}{L}\sum_{i=1}^{L} u_i u_i^H\big) R_Y^{1/2}$.
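A small Python sketch of this estimation (illustrative names): the sample covariance of the stacked observations is plugged into the rate formula in place of R Y , and converges to it as L grows.

```python
import numpy as np

def rate_gaussian(Ry, Rn):
    """C = log2 det(Ry) - log2 det(Rn), the Gaussian rate formula above."""
    return np.log2(np.linalg.det(Ry).real) - np.log2(np.linalg.det(Rn).real)

def sample_covariance(Y):
    """(1/L) sum_i y_i y_i^H from the L observation columns of Y."""
    return (Y @ Y.conj().T) / Y.shape[1]
```

For example, with R Y = diag(1.2², 0.4²) and R N = σ² I with σ = 0.5, the true rate is log2(1.44 · 0.16 / 0.25²) ≈ 1.88 bits.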

In the simulation we have taken n as an i.i.d. standard Gaussian vector of dimension 2 and
$$R_Y = \begin{pmatrix} 1.2^2 & 0 \\ 0 & 0.4^2 \end{pmatrix}.$$
(22)
Considering L observations of (21), we stack the observations as columns in a compound matrix to get an unbiased estimate of the moments of R Y . In Figure 5, we can observe the convergence of the estimated capacity to the true one.
Figure 5

Estimation of the capacity for the model (21) up to L = 500 observations, with σ = 0.5.

In Figure 6, we estimate the eigenvalues of the matrix R Y versus the number of observations L. Once again, we observe the convergence of the estimated eigenvalues to the true eigenvalues.
Figure 6

Estimation of the eigenvalues of the 2×2 matrix R Y of (22) for various numbers of observations.

5.3 Users detection

We consider M mobile users, each with a single antenna, communicating with a base station equipped with N receiving antennas, arranged as a uniform linear array (ULA). The N × 1 received signal at the base station is given by
y = V P 1 2 x + n
(23)
where x is the M × 1 input signal transmitted by the M users, satisfying $E[x x^H] = I_M$, and n is the additive Gaussian noise with $E[n n^H] = \sigma^2 I_N$. We suppose that the components of x and n are independent. The matrix P = diag(p1,..., p M ) represents the powers with which the users send information. In the case of a line of sight between the mobile users and the base station, the N × M matrix V has the following form
$$V(\theta) = \frac{1}{\sqrt{N}} [v(\theta_1), \ldots, v(\theta_M)],$$
where
$$v(\theta) = \Big[1,\ e^{-j 2\pi \frac{d}{\lambda} \sin(\theta)},\ \ldots,\ e^{-j 2\pi \frac{d}{\lambda}(N-1)\sin(\theta)}\Big]^T,$$

and θ1,..., θ M are the angles of the users, supposed i.i.d. and uniform on [−α, α]; d is the spacing between the antennas of the ULA, and λ is the wavelength of the signal.
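Note that the steering matrix is itself a Vandermonde matrix in the variable e^{−j2π(d/λ) sin θ}. A Python sketch of its construction (illustrative names):

```python
import numpy as np

def steering_vector(theta, N, d, lam):
    """v(theta): entries exp(-j 2 pi (d/lam) k sin(theta)), k = 0..N-1."""
    k = np.arange(N)
    return np.exp(-2j * np.pi * (d / lam) * k * np.sin(theta))

def steering_matrix(thetas, N, d, lam):
    """V(theta) = (1/sqrt(N)) [v(theta_1), ..., v(theta_M)]."""
    cols = [steering_vector(t, N, d, lam) for t in thetas]
    return np.column_stack(cols) / np.sqrt(N)
```

At broadside (θ = 0) the steering vector is the all-ones vector, and with the 1/√N scaling every column of V has unit norm.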

We are interested in estimating the number of users from a finite number K of observations of the received signal. Stacking all observations in a matrix
$$Y = [y_1, \ldots, y_K] = V P^{1/2} [x_1, \ldots, x_K] + [n_1, \ldots, n_K]$$
(24)
we can have access to the sample covariance matrix
$$W = \frac{1}{K} Y Y^H.$$
(25)

To estimate the number of users M, we assume that the power distribution of P is known. Thanks to the moments method, it is then possible to express the moments of the sample covariance matrix in (25) in terms of the moments of the power matrix P.

Proposition 5.1 [43] Given the phase distribution ω with density function p ω , we define
$$I_n = (2\pi)^{n-1} \int_{-\pi}^{\pi} p_\omega(x)^n\, dx.$$
Denoting the moments of P and the moments of the sample covariance matrix W , respectively, by
$$P_i = \mathrm{tr}_M(P^i)$$
(26)
$$W_i = \mathrm{tr}_N(W^i)$$
(27)
then
$$W_1 = c_2 P_1 + \sigma^2$$
$$W_2 = c_2 P_2 + (c_2^2 I_2 + c_2 c_3)(P_1)^2 + 2\sigma^2 (c_2 + c_3) P_1 + \sigma^4 (1 + c_1)$$
$$\begin{aligned} W_3 ={} & c_2\Big(1 + \frac{1}{K^2}\Big) P_3 + \Big[3\Big(1 + \frac{1}{K^2}\Big) c_2^2 I_2 + 3 c_2 c_3\Big] P_1 P_2 \\ & + \Big[c_2^3 I_3\Big(1 + \frac{1}{K^2}\Big) + 3 c_2^2 c_3 I_2 + c_2 c_3^2\Big] (P_1)^3 \\ & + 3\sigma^2\Big[(1 + c_1) + \frac{c_1 c_2 K}{M}\Big] c_2 P_2 + 3\sigma^2\Big[(1 + c_1) + \frac{c_1 c_2 K}{M}\Big]\big[c_2^2 I_2 + c_3(c_3 + 2 c_2)\big](P_1)^2 \\ & + 3\sigma^4\Big[c_1^2 + 3 c_1 + 1 + \frac{1}{K^2}\Big] c_2 P_1 + \sigma^6\Big[c_1^2 + 3 c_1 + 1 + \frac{1}{K^2}\Big], \end{aligned}$$

where $c_1 = \lim_{N\to\infty} \frac{N}{K}$, $c_2 = \lim_{N\to\infty} \frac{M}{N}$, and $c_3 = \lim_{N\to\infty} \frac{M}{K}$.

Knowing the matrix P, we can compute the moments P i . From the moments P i , using the above expressions, it is possible to get the moments W i of the sample covariance matrix. We consider some candidate values of the number M of users. The estimate of M is chosen as the one which minimizes the sum of the squared errors between the moments W i and the moments of the observed sample covariance matrix.
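The moment-matching search can be sketched as follows (Python, illustrative names). For brevity, the toy predictor below uses only the first formula of Proposition 5.1, W 1 = c 2 P 1 + σ² with c 2 = M/N; the simulations reported here match the first three moments.

```python
import numpy as np

def estimate_num_users(W_obs, candidates, predict):
    """Return the candidate M whose predicted moments are closest
    (in squared error) to the observed moments W_obs."""
    errs = [np.sum((np.asarray(predict(M)) - np.asarray(W_obs))**2)
            for M in candidates]
    return candidates[int(np.argmin(errs))]

def make_first_moment_predictor(N, P1, sigma):
    """Toy predictor using only W_1 = (M/N) * P_1 + sigma^2."""
    return lambda M: [(M / N) * P1 + sigma**2]
```

In practice the predicted moments are compared with the empirical moments of W computed from the observations.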

In Figures 7 and 8 we have taken N = 100, M = 30, σ = 0.1, the distance d = 1, and the wavelength λ = 2d. The entries of the matrix P take the values 0.5, 1.5, 2 with equal probability. Therefore, just the first three moments are considered. We see that in Figure 8 the approximation is better even for a small number of observations.
Figure 7

Estimation of the number of users, for various numbers of observations. Actual values: M = 25 and K = 1. The powers are 0.5, 1.5, 2 with equal probability.

Figure 8

Estimation of the number of users, for various numbers of observations. Actual values: M = 25 and K = 10. The powers are 0.5, 1.5, 2 with equal probability.

5.4 Wavelength detection

We know that in cognitive radio, mobile users are interested in understanding which transmission band is occupied. Considering the model (23), we estimate the wavelength λ using the moments method. As before, we consider some realizations of the sample covariance matrix and estimate its moments as functions of the moments of the power matrix, supposed to be known. In Figure 9, we consider K = 10, L = 30, N = 100, and σ = 0.1, in addition to λ = 2, d = 1, α = π/4. Candidate values of the wavelength are taken in the interval [0.5, 4] with step 0.1, and the estimated value is chosen as the one which minimizes the sum of the squared errors of the first three moments between the moments W i and the moments of the observed sample covariance matrix.
Figure 9

Estimation of the wavelength, for various numbers of observations. Actual values: λ = 2 and K = 10. The powers are 0.5, 1, 1.5 with equal probability.

6 Conclusions and open problems

In the last decade, researchers and practitioners have devised cognitive radio as a possible solution for the problem of underutilization of the radio spectrum. These theoretical advancements in cognitive radio research have set up a solid base for practical applications and even further developments. However, open questions still need to be answered. In particular, in the current work, we use free probability theory, through the concept of free deconvolution, to attack the problem of retrieving useful information from the network with a limited number of observations. Free deconvolution, based on the moments method, has shown to be an interesting tool to tackle this problem. First, we show how the moments method works in the case where scalar random variables are considered. Then, since in practical situations systems are so complex that the parameters of interest need to be represented by vectors and matrices and cannot be modeled by scalar random variables, we analyze the case where random matrices are considered. We describe algorithms to compute the moments of various models, such as Gaussian and Vandermonde matrices, and Matlab code implementing these algorithms for cognitive radio has been developed. In the applications, the free deconvolution framework can be used for retrieving relevant information such as the powers with which users send information, the number of users, etc.

We have analyzed how the free deconvolution framework works for random matrices and how random matrices behave differently depending on their structure. Different directions of research can be followed in this framework. In the Vandermonde matrix model, the deconvolution techniques have been performed taking into account only diagonal matrices. It could be interesting to address the case of general deterministic matrices; in this way, correlation between users could be considered. The knowledge of the correlation could be a relevant element to improve the cooperation among the users in a cognitive system.

The extension of free deconvolution techniques to more general functions of matrices is a hard task. The difficulty is related to the fact that, up to now, there is no general hypothesis that guarantees the applicability of free deconvolution to an arbitrary random matrix. Such an extension could cover more general models that represent more realistic situations.

As a future perspective, we would also like to take into account a second-order analysis. The study of the covariance matrices can improve the accuracy of the estimations related to the free deconvolution framework.

Declarations

Acknowledgements

This study was supported by the Alcatel-Lucent within the Alcatel-Lucent Chair on Flexible Radio at Supélec. The work of the first author has been done during her PhD at Alcatel-Lucent Chair on Flexible Radio at Supélec supported by Microsoft Research through its PhD Scholarship Program.

Endnotes

a An algebra is unital if it contains a multiplicative identity element, i.e., an element 1 A with the property 1 A x = x 1 A = x for all elements x of the algebra. b Note that in the following, when speaking of moments of matrices, we refer to the moments of the associated measure. c Let X be a random variable and $M_X(z) := \sum_{n=1}^{\infty} \varphi(X^n) z^n$; we define the S-transform of X as
$$S_X(z) = \frac{1+z}{z} M_X^{\langle -1 \rangle}(z),$$
where $(\cdot)^{\langle -1 \rangle}$ denotes the inverse under composition (i.e., $M_X^{\langle -1 \rangle}(M_X(z)) = M_X(M_X^{\langle -1 \rangle}(z)) = z$). d A standard complex Gaussian matrix X has i.i.d. complex Gaussian entries with zero mean and unit variance (in particular, the real and imaginary parts of the entries are independent, each with zero mean and variance 1/2).

Authors’ Affiliations

(1)
ETIS/ENSEA - University of Cergy Pontoise - CNRS
(2)
Supélec

References

  1. SEW Group: Report of the Spectrum Efficiency Working Group. Technical report, FCC; 2002.
  2. Staple G, Werbach K: The end of spectrum scarcity [spectrum allocation and utilization]. IEEE Spectrum 2004, 41(3):48-52. 10.1109/MSPEC.2004.1270548
  3. Mitola J III: Cognitive radio for flexible mobile multimedia communications. In IEEE International Workshop on Mobile Multimedia Communications (MoMuC '99). San Diego, California, USA; 1999:3-10.
  4. Mitola J III: Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio. PhD thesis, Royal Institute of Technology (KTH), Stockholm, Sweden; 2000.
  5. Akyildiz I, Lee I, Vuran M, Mohanty S: Next generation/dynamic spectrum access/cognitive radio wireless networks: a survey. Computer Networks 2006, 50(6):2127-2159.
  6. Haykin S: Cognitive radio: brain empowered wireless communications. IEEE J Sel Areas Commun 2005, 23(2):201-220.
  7. Cabric D, Mishra S, Brodersen R: Implementation issues in spectrum sensing for cognitive radios. In Proc of 38th Asilomar Conference on Signals, Systems and Computers. Pacific Grove (CA); 2004:772-776.
  8. Cabric D, Brodersen R: Physical layer design issues unique to cognitive radio systems. In IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC); 2005:759-763.
  9. Sahai A, Hoven N, Tandra R: Some fundamental limits on cognitive radio. In Proc Allerton Conference on Communication, Control, and Computing; 2004.
  10. Quan Z, Cui S, Sayed A: Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE Journal of Selected Topics in Signal Processing 2008, 2(1):28-40.
  11. Ganesan G, Li Y: Cooperative spectrum sensing in cognitive radio, part I: two user networks. IEEE Transactions on Wireless Communications 2007, 6(6):2204-2213.
  12. Ganesan G, Li Y: Cooperative spectrum sensing in cognitive radio. In First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN). Baltimore, Maryland; 2005:137-143.
  13. Ghasemi A, Sousa E: Cooperative spectrum sensing in cognitive radio: the cooperation-processing tradeoff. Wireless Communications and Mobile Computing 2007, 7(9):1049-1060. 10.1002/wcm.480
  14. Mishra S, Sahai A, Brodersen RW: Cooperative sensing among cognitive radios. In Proc of IEEE International Conference on Communications (ICC). Volume 4. Istanbul, Turkey; 2006:1658-1663.
  15. Unnikrishnan J, Veeravalli V: Cooperative sensing for primary detection in cognitive radio. IEEE Journal of Selected Topics in Signal Processing 2008, 2(1):18-27.
  16. Nevat I, Peters G, Collings I, Yuan J: Cooperative spectrum sensing with partial CSI. In IEEE Statistical Signal Processing Workshop. Nice, France; 2011:373-376.
  17. Ji Z, Liu K: Cognitive radios for dynamic spectrum access - dynamic spectrum sharing: a game theoretical overview. IEEE Communications Magazine 2007, 45(5):88-94.
  18. Neel J, Buehrer R, Reed B, Gilles R: Game theoretic analysis of a network of cognitive radios. In Proceedings of 45th Midwest Symposium on Circuits and Systems. Tulsa, Oklahoma; 2002:409-412.
  19. Niyato D, Hossain E: Competitive pricing for spectrum sharing in cognitive radio networks: dynamic game, inefficiency of Nash equilibrium, and collusion. IEEE Journal on Selected Areas in Communications 2008, 26(1):192-202.
  20. Nie N, Comaniciu C: First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN). Baltimore, Maryland, USA; 2005:269-278.
  21. Neel J, Reed J, Gilles R: Convergence of cognitive radio networks. In Proc IEEE Wireless Communications and Networking Conference (WCNC'04). Atlanta (GA), USA; 2004:2250-2255.
  22. Devroye N, Mitran P, Tarokh V: Achievable rates in cognitive radio channels. IEEE Transactions on Information Theory 2006, 52(5):1813-1827.
  23. Jovicic A, Viswanath P: Cognitive radio: an information-theoretic perspective. IEEE Transactions on Information Theory 2009, 55(9):3945-3958.
  24. Jafar S, Srinivasa S: Capacity limits of cognitive radio with distributed and dynamic spectral activity. IEEE Journal on Selected Areas in Communications 2007, 25:529-537.
  25. Srinivasa S, Jafar S: Cognitive radios for dynamic spectrum access - the throughput potential of cognitive radio: a theoretical perspective. IEEE Communications Magazine 2007, 45(5):73-79.
  26. Goldsmith A, Jafar S, Maric I, Srinivasa S: Breaking spectrum gridlock with cognitive radios: an information theoretic perspective. Proceedings of the IEEE 2009, 97(5):894-914.
  27. Zeng Y, Liang Y: Maximum-minimum eigenvalue detection for cognitive radio. In IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Toronto, Canada; 2011:1-5.
  28. Zeng Y, Liang Y: Eigenvalue-based spectrum sensing algorithms for cognitive radio. IEEE Transactions on Communications 2009, 57(6):1784-1793.
  29. Cardoso L, Debbah M, Bianchi P, Najim J: Cooperative spectrum sensing using random matrix theory. In IEEE 3rd International Symposium on Wireless Pervasive Computing (ISWPC). Santorini, Greece; 2008:334-338.
  30. Bianchi P, Najim J, Maida M, Debbah M: Performance analysis of some eigen-based hypothesis tests for collaborative sensing. In IEEE/SP 15th Workshop on Statistical Signal Processing (SSP). Cardiff, United Kingdom; 2009:5-8.
  31. Bianchi P, Debbah M, Maida M, Najim J: Performance of statistical tests for single source detection using random matrix theory. IEEE Transactions on Information Theory 2011, 57(4):2400-2419.
  32. Cover T, Thomas J: Elements of Information Theory. Wiley, New York; 1991.
  33. Guo D, Shamai S, Verdú S: Mutual information and minimum mean-square error in Gaussian channels. IEEE Transactions on Information Theory 2005, 51(4):1261-1282.
  34. Palomar D, Verdú S: Gradient of mutual information in linear vector Gaussian channels. IEEE Transactions on Information Theory 2006, 52(1):141-154.
  35. Benaych-Georges F, Debbah M: Free deconvolution: from theory to practice. Submitted to IEEE Transactions on Information Theory 2008.
  36. Dozier R, Silverstein J: On the empirical distribution of eigenvalues of large dimensional information-plus-noise-type matrices. J Multivar Anal 2007, 98(4):678-694. 10.1016/j.jmva.2006.09.006
  37. Wigner E: On the distribution of roots of certain symmetric matrices. Ann Math 1958, 67(2):325-327. 10.2307/1970008
  38. Hiai F, Petz D: The Semicircle Law, Free Random Variables and Entropy. Mathematical Surveys and Monographs No. 77. American Mathematical Society, Providence, RI, USA; 2006.
  39. Speicher R: Free probability theory and non-crossing partitions. Lecture Notes, 39e Séminaire Lotharingien de Combinatoire, Thurnau; 1997.
  40. Ryan Ø, Masucci A, Yang S, Debbah M: Finite dimensional statistical inference. IEEE Transactions on Information Theory 2011, 57(4):2457-2473.
  41. Ryan Ø: Tools for convolution with finite Gaussian matrices. 2009. [http://ifi.uio.no/~oyvindry/finitegaussian/]
  42. Seroul R, O'Shea D: Programming for Mathematicians. Springer; 2000.
  43. Ryan Ø, Debbah M: Asymptotic behaviour of random Vandermonde matrices with entries on the unit circle. IEEE Transactions on Information Theory 2009, 55(7):3115-3148.
  44. Ryan Ø, Debbah M: Convolution operations arising from Vandermonde matrices. IEEE Transactions on Information Theory 2011, 57(7):4647-4659.
  45. Cardoso L, Kobayashi M, Ryan Ø, Debbah M: Vandermonde frequency division multiplexing for cognitive radio. In Proceedings of the 9th Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Recife, Brazil; 2008:421-425.
  46. Tucci G, Whiting P: Eigenvalue results for large scale random Vandermonde matrices with unit complex entries. IEEE Transactions on Information Theory 2011, 57(6):3938-3954.
  47. Polo Y, Wang Y, Pandharipande A, Leus G: Compressive wide-band spectrum sensing. In IEEE International Conference on Acoustics, Speech, and Signal Processing. Taipei, Taiwan; 2009:1-4.
  48. Wang Y, Pandharipande A, Polo Y, Leus G: Distributed compressive wide-band spectrum sensing. In IEEE Information Theory and Applications Workshop. San Diego (CA); 2009:178-183.
  49. He Y, Hueske K, Coersmeier E, Gotze J: Efficient computation of joint direction-of-arrival and frequency estimation. In IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Sarajevo, Bosnia and Herzegovina; 2008:144-149.

Copyright

© Masucci and Debbah; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.