# Cache-enabled small cell networks: modeling and tradeoffs

Ejder Baştuğ^{1}, Mehdi Bennis^{2}, Marios Kountouris^{1,3} and Mérouane Debbah^{1,4}

*EURASIP Journal on Wireless Communications and Networking* **2015**:41

https://doi.org/10.1186/s13638-015-0250-4

© Baştuğ et al.; licensee Springer. 2015

**Received: **14 August 2014

**Accepted: **6 January 2015

**Published: **26 February 2015

## Abstract

We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users’ demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.

### Keywords

Caching; Stochastic geometry; Small cell networks; Mobile cellular networks; Poisson point process

## 1 Introduction

Increasing traffic demand from mobile users due to rich media applications, video streaming, and social networks [1] is pushing mobile operators to evolve their cellular networks continuously (see long-term evolution [2]). Small cell networks [3,4] and their integration with WiFi [5], heterogeneous networks [6], together with many other ideas from both industry and academia, have now started being deployed and integrated into current cellular networks. In Europe, projects such as NewCom# [7] in the 7th Framework Programme of the European Commission are focusing on the design of next-generation cellular networks, and a new framework, called Horizon 2020 [8], is taking place to support these efforts.

At the same time, content providers are moving their users’ content to intermediate nodes in the network, namely caches, yielding lower access delays. Content delivery networks such as Akamai [9] serve this purpose. In this context, information-centric networks are emerging [10]. Combining these infrastructural concepts with cellular networks is also of interest [11,12]. Predicting users’ behavior and proactively caching users’ content at the edge of the network, namely at base stations and user terminals, also shows that further gains can be obtained in terms of backhaul savings and user satisfaction [13].

Even though the idea of caching in mobile cellular networks is somewhat recent, the origin of caching dates back to the 1960s, when caching mechanisms were proposed to boost the performance of operating systems [14]. Additionally, in past decades, many web caching schemes such as [15] have appeared to sustain the data flow of the Internet. In the context of mobile cellular networks, there have been recent attempts to design intelligent caching schemes that take the wireless environment into account. Due to the notorious intractability of the problem, these proposals are mainly based on approximate or heuristic solutions [16-18]. Besides these solutions, novel formulations and system models have been proposed to assess the performance of caching. For instance, an information-theoretic formulation of the caching problem is studied in [19]. The expected cost of uncoded and coded data allocation strategies is given in [20], where stochastically distributed cache-enabled nodes in a given area are assumed and the cost is defined as a function of distance. A game-theoretic formulation of the caching problem as a many-to-many game is studied in [21] by taking into account data dissemination in social networks. The performance of caching in wireless device-to-device networks is studied in [22] in a scenario where nodes are placed on a grid and cache content randomly. An alternative device-to-device caching scenario with randomly located nodes is given in [23], and relevant tradeoff curves are derived.

The contribution of this work is to formulate the caching problem in a scenario where stochastically distributed small base stations are equipped with storage units but have limited backhaul capacity. In particular, we build on a tractable system model and define its performance metrics (outage probability and average delivery rate) as functions of the signal-to-interference-plus-noise ratio, number of small base stations, target file bitrate, storage size, file length, and file popularity distribution. By coupling the caching problem with a physical layer in this way and relying on recent results from [24], we show that a certain outage probability can be achieved either by 1) increasing the number of small base stations while the total storage size budget is fixed or 2) increasing the total storage size while the number of small base stations is fixed. To the best of our knowledge, our work differs from the aforementioned works in terms of studying deployment aspects of cache-enabled small base stations. Similar lines of work in terms of analysis with stochastic geometry tools can be found in [20,23]. However, the system model and performance metrics are different from what is studied here^{a}.

The rest of this paper is structured as follows. We describe our system model in Section 2. The performance metrics and main results are given in Section 3. In the same section, much simpler expressions are obtained by making specific assumptions on the system model. We validate these results via numerical simulations in Section 4 and discuss the impact of parameters on the performance metrics. Then, a tradeoff between the number of deployed small base stations and the total storage size is given in Section 5. Finally, our conclusions and future perspectives are given in Section 6^{b}.

## 2 System model

The cellular network under consideration consists of small base stations, whose locations are modeled according to a Poisson point process *Φ* with density *λ*. The broadband connection to these small base stations is provided by a central scheduler via wired backhaul links. We assume that the total broadband capacity is finite and fixed; thus, the backhaul link capacity of each small base station is a decreasing function of *λ*. In practice, this means that deploying more small base stations in a given area entails sharing the total broadband capacity among more backhaul links. We will define this function more precisely in the next sections.

We suppose that every small base station has a storage unit with capacity *S* nats (1 bit = ln(2) = 0.693 nats); thus, each caches the users’ most popular files from a given catalog. Each file in the catalog has a length of *L* nats and a bitrate requirement of *T* nats/s/Hz. We note that the assumption of equal file lengths is for ease of analysis; alternatively, the files in the catalog can be divided into chunks of the same length. The file popularity distribution of this catalog is a right-continuous and monotonically decreasing probability distribution function, denoted as *f*_{pop}(*f*,*γ*). The parameter *f* here corresponds to a point in the support of a file, and *γ* is the shape parameter of the distribution. We assume that this distribution is identical among all users.

Every user equipped with a mobile terminal is associated with the nearest small base station; thus, its location falls into a cell of a Poisson-Voronoi tessellation of the plane. In this model, we neglect the overhead introduced by the file requests of users in the uplink and focus only on the downlink transmission. In the downlink, a tagged small base station transmits with constant transmit power 1/*μ* Watts, and the standard unbounded power-law pathloss propagation model with exponent *α*>2 is used for the environment. The tagged small base station and tagged user experience Rayleigh fading with mean 1. Hence, the received power at the tagged user, located *r* meters away from its tagged small base station, is given by *h* *r*^{−α}. The random variable *h* here follows an exponential distribution with mean 1/*μ*, represented as *h*∼Exponential(*μ*).
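As a numerical illustration of this propagation model, the sketch below draws the fading power *h*∼Exponential(*μ*) and evaluates the received power *h* *r*^{−α}; the function name and parameter values (*μ*=1, *α*=4, *r*=2) are illustrative assumptions, not values fixed by the paper.

```python
import random

def received_power(r, mu=1.0, alpha=4.0, rng=random):
    """Received power h * r**(-alpha), where the fading power h follows
    an exponential distribution with mean 1/mu (Rayleigh fading)."""
    h = rng.expovariate(mu)  # E[h] = 1/mu
    return h * r ** (-alpha)

# Averaging over many fading draws recovers E[h r^-alpha] = r^-alpha / mu.
rng = random.Random(0)
avg = sum(received_power(2.0, rng=rng) for _ in range(200_000)) / 200_000
print(avg)  # close to 2**-4 = 0.0625
```

With *μ*=1 the mean received power is simply *r*^{−α}, which the empirical average above recovers.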

Users request files from this catalog according to the popularity distribution *f*_{pop}(*f*,*γ*). When requests reach the small base station via the uplink, the users are served immediately, either by bringing the file from the Internet via the backhaul or by serving it from the local cache, depending on the availability of the file therein. If a requested file is available in the local cache of the small base station, a *cache hit* event occurs; otherwise, a *cache miss* event is said to have occurred. According to what we have explained so far, a sketch of the network model is given in Figure 1.

In general, the performance of our system depends on several factors. To meet the quality-of-experience requirement, the downlink rate provided to the requesting user has to be equal to or higher than the file bitrate *T*, so that the user does not observe any interruption during playback. Even if this requirement is achieved in the downlink, another bottleneck can be the rate of the backhaul in case of cache misses. In the following, we define performance metrics that take the aforementioned situations into account. We then present our main results in the same section.

## 3 Performance metrics and main results

Performance metrics of interest in our system model are the *outage probability* and the *average delivery rate*. We start by defining these metrics for the downlink. From now on, without loss of generality, we refer to the user *o*, located at the origin of the plane, as the *typical* user.

The downlink rate of the typical user *o*, which is located at a random distance *r* from its small base station *b*_{ o }, is given by:

\(\mathcal{R}(r) = \ln\left(1 + \frac{h\, r^{-\alpha}}{\sigma^{2} + I_{r}}\right) \quad \text{nats/s/Hz},\)

where \(I_{r} = \sum_{i \in \Phi \setminus \{b_{o}\}} g_{i} R_{i}^{-\alpha}\) is the cumulative interference from all small base stations located at distances *R*_{ i } from the typical user (except the connected small base station *b*_{ o }), each with fading value *g*_{ i }, and *σ*^{2} is the noise power. The *success probability* is the joint probability that the downlink rate exceeds the file bitrate *T* and that the requested file is in the local cache. Then, the outage probability can be given as the complement of the success probability as follows:

\(p_{\text{out}}\left(\lambda, T, \alpha, S, L, \gamma\right) = 1 - \mathbb{P}\left[\mathcal{R}(r) > T,\; f_{o} \in \Delta_{b_{o}}\right],\)

where \(\mathcal{R}(r)\) is the downlink rate, *f*_{ o } is the file requested by the typical user, and \(\Delta _{b_{o}}\) is the local cache of the serving small base station *b*_{ o }. Indeed, such a definition of the outage probability comes from a simple observation. Ideally, if a requested file is in the cache of the serving small base station (thus, the limited backhaul is not used) and if the downlink rate is higher than the file bitrate *T* (thus, the user does not observe any interruption during the playback of the file), we then expect the outage probability to be close to zero. Given this explanation and the assumptions made in the previous section, we state the following theorem for the outage probability.

### Theorem 1 (Outage probability).

The outage probability of the typical user admits a closed-form expression in terms of *β*(*T*,*α*), which is given by:

where \(\Gamma (a,x) = \int ^{\infty }_{x}{t^{a-1}e^{-t}\mathrm {d}t}\) is the upper incomplete Gamma function and \(\Gamma (x) = \int ^{\infty }_{0}{t^{x-1}e^{-t}\mathrm {d}t}\) is the Gamma function.

### *Proof*.

The proof is provided in Appendix A Proof of Theorem 1.

where *C*(*λ*) is the backhaul capacity provided to each small base station for a single frequency in the downlink^{c}. The definition above can be explained as follows. If the downlink rate is higher than the threshold *T* (namely, the bitrate of the requested file) and the requested file is available in the local cache, the rate *T* is delivered to the user by the tagged small base station, which is sufficient for quality-of-experience. On the other hand, if the downlink rate is higher than *T* but the requested file does not exist in the local cache of the tagged small base station, the delivery rate is limited by the backhaul link capacity *C*(*λ*), for which we assume *C*(*λ*)<*T*. Given this definition of the delivery rate, we state the following theorem.
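The case analysis above can be paraphrased as the following sketch; the function name and the convention that an outage contributes zero rate are our own illustrative assumptions:

```python
def delivery_rate(downlink_rate, target_bitrate, cache_hit, backhaul_capacity):
    """Sketch of the delivery rate logic described in the text:
    - downlink rate above T and file cached      -> serve at T
    - downlink rate above T but file not cached  -> limited by backhaul C(lambda) < T
    - downlink rate below T                      -> delivery fails (rate 0)."""
    if downlink_rate < target_bitrate:
        return 0.0
    if cache_hit:
        return target_bitrate
    return backhaul_capacity  # C(lambda), assumed smaller than T

print(delivery_rate(1.0, 0.5, True, 0.2))   # 0.5 (cache hit, served at T)
print(delivery_rate(1.0, 0.5, False, 0.2))  # 0.2 (cache miss, backhaul-limited)
print(delivery_rate(0.3, 0.5, True, 0.2))   # 0.0 (downlink below T)
```

Averaging this quantity over the fading, point process, and request statistics is exactly what Theorem 2 computes in closed form.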

### Theorem 2 (Average delivery rate).

where *β*(*T*,*α*) has the same definition as in Theorem 1.

### *Proof*.

The proof is deferred to Appendix B Proof of Theorem 2.

What we provided above are the general results. The exact values of outage probability and average delivery rate can be obtained by specifying the distribution of the interference, the backhaul link capacity *C*(*λ*), and the file popularity distribution *f*
_{pop}(*f*,*γ*). If this treatment does not yield closed form expressions, numerical integration can be done as a last resort for evaluating the functions. In the next section, as an example, we derive special cases of these results after some specific assumptions, which in turn yield much simpler expressions.

### 3.1 Special cases

### Assumption 1.

- 1. The noise power *σ*^{2} is higher than 0, and the pathloss exponent *α* is 4.
- 2. The interfering links experience Rayleigh fading; hence, *g*_{ i }∼Exponential(*μ*).
- 3. The capacity of the backhaul links is given by:
$$ C\left(\lambda\right) \triangleq \frac{C_{1}}{\lambda} + C_{2}, $$(8)
where *C*_{1}>0 and *C*_{2}≥0 are arbitrary coefficients such that *C*(*λ*)<*T* holds.
- 4. The file popularity distribution of users is characterized by a power law [25], namely:
$$ f_{\text{pop}}\left(f,\gamma\right) \triangleq\left\{ \begin{aligned} & \left(\gamma - 1\right)f^{-\gamma},\quad f \geq 1, \\ &0,\qquad\qquad\quad\,\,\, f < 1, \end{aligned}\right. $$(9)
where *γ*>1 is the shape parameter of the distribution.
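As a small sketch of the quantities fixed by Assumption 1, the code below evaluates the backhaul capacity (8), the popularity law (9), and the cache hit probability that results when each small base station stores the *S*/*L* most popular files (as the system model prescribes). The closed form 1 − (*S*/*L*)^{1−γ} follows from integrating (9) over [1, *S*/*L*] and is cross-checked numerically; function names and sample values are illustrative.

```python
def f_pop(f, gamma):
    """Power-law file popularity from (9): (gamma - 1) * f**(-gamma) for f >= 1."""
    return (gamma - 1.0) * f ** (-gamma) if f >= 1.0 else 0.0

def backhaul_capacity(lam, c1=1.0, c2=0.0):
    """Backhaul capacity per SBS from (8): C(lambda) = C1/lambda + C2."""
    return c1 / lam + c2

def cache_hit_probability(S, L, gamma):
    """Probability that the requested file is cached when each SBS stores
    the S/L most popular files: the integral of f_pop over [1, S/L],
    which evaluates in closed form to 1 - (S/L)**(1 - gamma)."""
    ratio = S / L
    if ratio < 1.0:
        return 0.0  # storage too small to hold even one file
    return 1.0 - ratio ** (1.0 - gamma)

# Cross-check the closed form against midpoint-rule integration of f_pop.
S, L, gamma = 8.0, 1.0, 2.0
n_steps = 70_000
step = (S / L - 1.0) / n_steps
numeric = sum(f_pop(1.0 + (k + 0.5) * step, gamma) * step for k in range(n_steps))
print(cache_hit_probability(S, L, gamma))  # 0.875
print(round(numeric, 3))                   # 0.875
```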

The assumption *C*(*λ*)<*T* comes from the observation that high-speed fiber-optic backhaul links might be very costly in densely deployed small base station scenarios. Therefore, we assume that *C*(*λ*) is lower than the bitrate of the file. On the other hand, we characterize the file popularity distribution with a power law. Indeed, this comes from the observation that many real-world phenomena can be characterized by power laws (e.g., the distribution of files in web proxies and the distribution of word counts in natural languages) [25]. Given our system model and the specific assumptions made in Assumption 1, we state the following results.

### Proposition 1 (Outage probability).

where \(\rho (T,4) = \sqrt {e^{T} - 1}\left (\frac {\pi }{2} - \text {arctan}\left (\frac {1}{\sqrt {e^{T}-1}}\right) \right)\) and the standard Gaussian tail probability is given as \(Q\left (x\right) = \frac {1}{\sqrt {2\pi }}\int _{x}^{\infty }{e^{-y^{2}/2}\mathrm {d}y}\).

### *Proof*.

The proof is given in Appendix C Proof of Proposition 1.

### Proposition 2 (Average delivery rate).

where *ρ*(*T*,4) and *Q*(*x*) have the same definition as in Proposition 1.

### *Proof*.

The proof is given in Appendix D Proof of Proposition 2.

The expressions obtained for special cases are cumbersome but fairly easy to compute and do not require any integration. Note that the *Q*(*x*) function given in the expressions is a well-known function and can be computed by using lookup tables or standard numerical packages.
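For instance, both auxiliary quantities appearing in Propositions 1 and 2 can be evaluated in a few lines (a sketch; the function names are ours):

```python
import math

def q_function(x):
    """Standard Gaussian tail probability, computed via the complementary
    error function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def rho(T, alpha=4):
    """rho(T, 4) as defined in Proposition 1 (valid for alpha = 4 only)."""
    assert alpha == 4, "closed form holds for alpha = 4"
    s = math.sqrt(math.exp(T) - 1.0)
    return s * (math.pi / 2.0 - math.atan(1.0 / s))

print(q_function(0.0))  # 0.5
print(rho(1.0))         # rho at T = 1 nat/s/Hz
```

Note that *ρ*(*T*,4) vanishes as *T*→0, consistent with the outage probability decreasing for looser bitrate targets.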

## 4 Validation of the proposed model

So far, we have provided the results for the outage probability and average delivery rate. In this section, we validate these results via Monte Carlo simulations. The numerical results shown here are obtained by averaging over 1,000 realizations. In each realization, the small base stations are distributed according to a Poisson point process. The file requests, signal, and interfering powers of the typical user are drawn randomly according to the corresponding probability distributions. The outage probability and average delivery rate are then calculated from the signal-to-interference-plus-noise ratio and cache hit statistics. We note that the simulation curves match the theoretical ones; a slight mismatch is observed because a finer discretization of continuous variables was avoided to keep simulation times affordable. As mentioned previously, the target file bitrate as well as the average delivery rate are in units of nats/s/Hz, while the storage size and file lengths are in units of nats.
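A minimal version of one such realization can be sketched as follows; the parameter values, the finite simulation window, and the helper names are illustrative assumptions rather than the exact setup used for the figures:

```python
import math
import random

def _poisson(rng, mean):
    """Knuth's method; adequate for the moderate means used here."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while p >= limit:
        p *= rng.random()
        k += 1
    return k - 1

def simulate_outage(lam, T, S, L, gamma, snr=10.0, alpha=4.0,
                    radius=20.0, trials=2000, seed=1):
    """Monte Carlo outage estimate: SBSs form a PPP of density lam in a disc,
    the typical user at the origin attaches to the nearest SBS, and success
    requires downlink rate > T *and* a cache hit (power-law popularity)."""
    rng = random.Random(seed)
    sigma2 = 1.0 / snr                          # noise power for unit transmit power
    hit_prob = 1.0 - (S / L) ** (1.0 - gamma)   # S/L most popular files cached
    outages = 0
    for _ in range(trials):
        n = _poisson(rng, lam * math.pi * radius ** 2)
        if n == 0:
            outages += 1
            continue
        dists = [radius * math.sqrt(rng.random()) for _ in range(n)]
        r = min(dists)                          # serving SBS distance
        h = rng.expovariate(1.0)                # desired-link fading, mu = 1
        interference = sum(rng.expovariate(1.0) * d ** (-alpha)
                           for d in dists if d != r)
        rate = math.log(1.0 + h * r ** (-alpha) / (sigma2 + interference))
        if not (rate > T and rng.random() < hit_prob):
            outages += 1
    return outages / trials

print(simulate_outage(lam=0.1, T=0.1, S=4.0, L=1.0, gamma=2.0))
```

With a shared seed, two runs differ only through the cache hit threshold, so increasing *S* can only reduce the estimated outage, mirroring the storage-size discussion below.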

### 4.1 Impact of storage size

### 4.2 Impact of the number of base stations

### 4.3 Impact of target file bitrate

We now focus on the impact of the target file bitrate *T*. Figure 5 shows its impact on the outage probability for different values of the storage size. Clearly, increasing the target file bitrate results in a higher outage probability. However, this performance reduction can be compensated by increasing the storage size of the small base stations. The impact of the storage size diminishes as *T* increases.

### 4.4 Impact of file popularity shape

We now examine the impact of the file popularity shape parameter *γ*. The impact of *γ* on the outage probability, for different storage sizes, is given in Figure 6. Generally, a higher value of *γ* means that only a small portion of the files is highly popular compared to the rest. On the contrary, lower values of *γ* correspond to a more uniform popularity distribution. Therefore, as *γ* increases, the outage probability decreases due to the reduced storage requirement. However, at very low and very high values of *γ*, the impact on the outage probability is smaller than at intermediate values.

## 5 David vs. Goliath: more SBSs with less storage or fewer SBSs with more storage?

In the previous section, we validated our results via numerical simulations and discussed the impact of several parameters on the outage probability and average delivery rate. On top of those, we are interested in finding a tradeoff between the small base station density and the total storage size for a fixed set of parameters. We start by making an analogy with the well-known David and Goliath story to examine this tradeoff.^{d} More precisely, we aim to answer the following question: should we increase the storage size of existing small base stations (*David*) or deploy more small base stations with less storage (*Goliath*) in order to achieve a certain success probability? The answer is indeed useful for the realization of such a scenario. Deploying more small base stations in a given area may not be desirable due to increased deployment and operating costs (*Evil*). Therefore, increasing the storage size of already deployed small base stations may incur less cost (*Good*). To characterize this tradeoff, we first define the optimal region as follows:

### Definition 1 (Optimal region).

A target outage probability *p*^{ † } is said to be achievable if there exist some parameters *λ*, *T*, *α*, *S*, *L*, *γ* satisfying the following condition:

The set of all achievable *p*^{ † } forms the optimal region.

The optimal region can be tightened by restricting the parameters *λ*, *T*, *α*, *S*, *L*, *γ* to some intervals. A detailed analysis of this is left for future work. Hereafter, we restrict ourselves to finding the optimal small base station density for a fixed set of parameters. In such a case, the optimal small base station density can be readily obtained by plugging these fixed parameters into *p*_{out} and solving the resulting equation either analytically or numerically (e.g., with the bisection method [26]). In the following, we obtain a tradeoff curve between the small base station density and total storage size by solving these equations systematically in the form of an optimization problem.
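The numerical root-finding step can be sketched generically as follows; `p_out_fn` stands for the outage probability evaluated at the fixed parameters, and we assume (as the discussion above suggests) that it is continuous and monotonically decreasing in *λ* on the bracketing interval. The toy model in the usage example is purely illustrative.

```python
def bisect_density(p_out_fn, target, lam_lo, lam_hi, tol=1e-9, max_iter=200):
    """Find the SBS density lam* with p_out_fn(lam*) = target by bisection,
    assuming p_out_fn is continuous and decreasing in lambda on the bracket
    (deploying more SBSs lowers the outage probability)."""
    f_lo = p_out_fn(lam_lo) - target
    f_hi = p_out_fn(lam_hi) - target
    assert f_lo * f_hi <= 0, "target not bracketed by [lam_lo, lam_hi]"
    for _ in range(max_iter):
        mid = 0.5 * (lam_lo + lam_hi)
        f_mid = p_out_fn(mid) - target
        if abs(f_mid) < tol or (lam_hi - lam_lo) < tol:
            return mid
        if f_lo * f_mid <= 0:
            lam_hi = mid            # root lies in the lower half
        else:
            lam_lo, f_lo = mid, f_mid
    return 0.5 * (lam_lo + lam_hi)

# Toy outage model (illustrative only): p_out decreasing in lambda.
toy_p_out = lambda lam: 1.0 / (1.0 + lam)
lam_star = bisect_density(toy_p_out, target=0.3, lam_lo=0.01, lam_hi=100.0)
print(round(lam_star, 4))  # 2.3333, since 1/(1 + lam) = 0.3 at lam = 7/3
```

Sweeping *S*_{total} and re-solving for *λ*^{⋆} at each point yields exactly the tradeoff curve constructed next.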

### Definition 2 (Small base station density vs. total storage size tradeoff).

Let *S*_{total}=*λ* *S*, and fix *T*, *α*, *L*, and *γ* to some values in the optimal region given in Definition 1. Denote by *λ*^{⋆} the optimal small base station density for a given *S*_{total}. Then, *λ*^{⋆} is obtained by solving the following optimization problem:

The set of all achievable pairs (*λ*^{⋆},*S*_{total}) characterizes a tradeoff between the small base station density and total storage size.

Examining the resulting tradeoff curve for a given target outage probability (i.e., *p*^{ † }=0.3), we see that it is sufficient to decrease the number of small base stations by increasing the total storage size. Alternatively, the total storage size can be decreased by increasing the number of small base stations. Moreover, for different values of the parameter of interest (i.e., *T*∈{0.1,0.2} or *L*∈{1,2}), the tradeoff curve is scaled and shifted. Regardless of this scaling and shifting, we see that David prevails against Goliath.

## 6 Conclusions

We have studied the caching problem in a scenario where small base stations are stochastically distributed and have finite-rate backhaul links. We derived expressions for the outage probability and average delivery rate and validated these results via numerical simulations. The results showed that significant gains in terms of outage probability and average delivery rate are possible with cache-enabled small base stations. We also showed that telecom operators can either deploy more base stations or increase the storage size of an existing deployment in order to achieve a certain quality-of-experience level.

## 7 Endnotes

^{a} Additionally, the related work [27] was made public after the submission of this work.

^{b} Compared to [28], this work contains more comprehensive mathematical treatment, proofs, and the trade-off analysis conducted in Section 5.

^{c} Without loss of generality, more realistic values of delivery rate can be obtained by making a proper signal-to-interference-plus-noise ratio gap approximation and considering the total wireless bandwidth instead of 1 Hz.

^{d} David vs. Goliath refers to the underlying resource sharing problem which arises in a variety of scenarios including massive MIMO vs. Small Cells [29].

## 8 A Proof of Theorem 1

Conditioning on the nearest small base station being at a distance *r* from the typical user, the outage probability can be written as:

where the term (*i*) can be expanded as:

where \(f_{r}(r) = e^{-\pi \lambda r^{2}}2\pi \lambda r\) is the probability distribution function of the distance *r* to the nearest point of a Poisson point process [24]; hence, (*a*) follows from its substitution. The expression in (*b*) is obtained by plugging in the signal-to-interference-plus-noise ratio formula and moving it to the left-hand side of the inequality, and (*c*) is the result of algebraic manipulations that isolate the fading variable *h*.
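As a quick numerical sanity check on this density (a sketch with an illustrative value *λ*=0.25): it should integrate to 1, and the resulting mean nearest-neighbor distance should equal \(1/(2\sqrt{\lambda})\).

```python
import math

# Midpoint-rule integration of f_r(r) = 2*pi*lam*r*exp(-pi*lam*r^2)
# over [0, 50], far past where the density has decayed to zero.
lam = 0.25
step = 1e-4
total = 0.0   # should accumulate to 1 (valid pdf)
mean = 0.0    # should accumulate to 1/(2*sqrt(lam))
r = 0.5 * step
while r < 50.0:
    f = 2.0 * math.pi * lam * r * math.exp(-math.pi * lam * r ** 2)
    total += f * step
    mean += r * f * step
    r += step
print(round(total, 6))  # 1.0
print(round(mean, 4))   # 1.0 = 1/(2*sqrt(0.25))
```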

Conditioning on the interference *I*_{ r } and using the fact that *h*∼Exponential(*μ*), the probability of the random variable *h* exceeding *r*^{ α }(*e*^{ T }−1)(*σ*^{2}+*I*_{ r }) can be written as:

where \(\mathcal{L}_{I_{r}}(s)\) is the Laplace transform of *I*_{ r } evaluated at *s*, conditioned on the distance of the nearest base station from the origin. Substituting (16) into (15) yields the following:

Treating *g*_{ i } as random variables with an arbitrary but identical distribution for all *i*, and *R*_{ i } as the distance from the *i*-th base station to the tagged receiver, the Laplace transform is written as:

where (*a*) comes from the independence of *g*_{ i } from the point process *Φ*, and (*b*) follows from the i.i.d. assumption on *g*_{ i }. The last step comes from the probability-generating functional of the Poisson point process, which states that for some function *f*(*x*), \(\mathbb {E}\left [\prod _{x \in \Phi }{f(x)}\right ]=\text {exp}\left (-\lambda \int _{\mathbb {R}^{2}}{(1 - f(x))\mathrm {d}x} \right)\). Since the nearest interfering base station is at least at a distance *r*, the integration limits run from *r* to infinity. Denoting by *f*(*g*) the probability distribution function of *g*, plugging in *s*=*μ* *r*^{ α }(*e*^{ T }−1), and switching the integration order yields:

Employing the change of variables *v*^{−α }→ *y*, the Laplace transform can be rewritten as:

Using the change of variables *r*^{2}→ *v* and after some algebraic manipulations, the expression becomes:

where *β*(*T*,*α*) is given as:

This completes the derivation of the term (*i*) of (13). The term (*ii*) is straightforward to derive. In the system model, since every small base station caches the same popular files and has the same storage size, the cache hit probability is independent of the distance *r*. This yields:

Plugging both (19) and (20) into (13) and rearranging the terms, we conclude the proof. ■

## 9 B Proof of Theorem 2

where (*a*) is obtained by plugging the delivery rate as defined in (6), and (*b*) follows from independence of the events and linearity of the expectation operator.

Deconditioning on the distance *r*, \(\mathbb {E}_{r}[\tau _{2}]\) can be expressed as:

Substituting these expressions into (21) concludes the proof. ■

## 10 C Proof of Proposition 1

For the term (*i*), the proof of Theorem 1 can be followed from (14) to (17). Then, the Laplace transform is written as:

where (*a*) comes from the assumption that *g*∼Exponential(*μ*). Then, plugging in *s*=*μ* *r*^{ α }(*e*^{ T }−1) yields:

Using the change of variables *r*^{2}→ *v* gives (25). Since *α*=4 in our special case, (25) simplifies to:

Letting *a*=*π* *λ*(1+*ρ*(*T*,4)) and *b*=*μ*(*e*^{ T }−1)*σ*^{2}=(*e*^{ T }−1)/SNR gives:

This completes the derivation of the term (*i*) of (22). The term (*ii*) of (22) can be obtained by using arguments similar to those for (20) in the proof of Theorem 1, meaning that the cache hit probability is independent of the distance *r*. Thus:

where (*a*) follows from plugging in the definition of *C*(*λ*) given in Assumption 1 and changing the integration limits accordingly. The last term is the result of the integral. Therefore, we conclude the proof by plugging (27) and (28) into (22). ■

## 11 D Proof of Proposition 2

The first term of the average delivery rate corresponds to the term (*i*) of (22). Thus, following the steps from (23) to (27), we obtain:

*T*out of the expectation and plugging (28) into the formula, for example:

where definition of *C*(*λ*) follows from Assumption 1. Substituting (30), (31), and (32) into (29) concludes the proof. ■

## Declarations

### Acknowledgements

This research has been supported by the ERC Starting Grant 305123 MORE (Advanced Mathematical Tools for Complex Network Engineering), the SHARING project under the Finland grant 128010 and the project BESTCOM.

## References

- Cisco, Cisco visual networking index: global mobile data traffic forecast update, 2013–2018. White Paper, [Online] http://goo.gl/l77HAJ (2014).
- 3GPP, Overview of 3GPP Release 13. [Online] http://www.3gpp.org/release-13 (2014).
- J Hoydis, M Kobayashi, M Debbah, Green small-cell networks. IEEE Vehicular Technol. Mag. **6**(1), 37–43 (2011).
- TQ Quek, G de la Roche, I Güvenç, M Kountouris, *Small cell networks: deployment, PHY techniques, and resource management* (Cambridge University Press, UK, 2013).
- M Bennis, M Simsek, W Saad, S Valentin, M Debbah, A Czylwik, When cellular meets WiFi in wireless small cell networks. IEEE Commun. Mag. Spec. Issue HetNets. **51**(6), 44–50 (2013).
- JG Andrews, Seven ways that HetNets are a cellular paradigm shift. IEEE Commun. Mag. **51**(3), 136–144 (2013).
- Newcom#, Network of excellence in wireless communications. [Online] http://www.newcom-project.eu (2014).
- Horizon 2020, The EU framework programme for research and innovation. [Online] http://ec.europa.eu/programmes/horizon2020 (2014).
- E Nygren, RK Sitaraman, J Sun, The Akamai network: a platform for high-performance internet applications. ACM SIGOPS Oper. Syst. Rev. **44**(3), 2–19 (2010).
- B Ahlgren, C Dannewitz, C Imbrenda, D Kutscher, B Ohlman, A survey of information-centric networking. IEEE Commun. Mag. **50**(7), 26–36 (2012).
- S Spagna, M Liebsch, R Baldessari, S Niccolini, S Schmid, R Garroppo, K Ozawa, J Awano, Design principles of an operator-owned highly distributed content delivery network. IEEE Commun. Mag. **51**(4), 132–140 (2013).
- X Wang, M Chen, T Taleb, A Ksentini, VCM Leung, Cache in the air: exploiting content caching and delivery techniques for 5G systems. IEEE Commun. Mag. **52**(2), 131–139 (2014).
- E Baştuğ, M Bennis, M Debbah, Living on the edge: the role of proactive caching in 5G wireless networks. IEEE Commun. Mag. **52**(8), 82–89 (2014).
- LA Belady, A study of replacement algorithms for a virtual-storage computer. IBM Syst. J. **5**(2), 78–101 (1966).
- S Borst, V Gupta, A Walid, in IEEE INFOCOM. Distributed caching algorithms for content distribution networks (San Diego, USA, 2010), pp. 1–9.
- E Baştuğ, J-L Guénégo, M Debbah, in 20th International Conference on Telecommunications (ICT’13). Proactive small cell networks (Casablanca, Morocco, 2013).
- K Poularakis, G Iosifidis, V Sourlas, L Tassiulas, in IEEE Wireless Communications and Networking Conference (WCNC’14). Multicast-aware caching for small cell networks (Istanbul, Turkey, 2014).
- P Blasco, D Gunduz, in IEEE International Conference on Communications (ICC’14). Learning-based optimization of cache content in a small cell base station (Sydney, Australia, 2014).
- MA Maddah-Ali, U Niesen, Fundamental limits of caching. IEEE Trans. Inf. Theory. **60**(5), 2856–2867 (2014).
- E Altman, K Avrachenkov, J Goseling, Coding for caches in the plane. arXiv preprint arXiv:1309.0604 (2013).
- K Hamidouche, W Saad, M Debbah, in 12th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). Many-to-many matching games for proactive social-caching in wireless small cell networks (Hammamet, Tunisia, 2014), pp. 569–574.
- M Ji, G Caire, AF Molisch, Fundamental limits of caching in wireless D2D networks. arXiv preprint arXiv:1405.5336 (2014).
- A Altieri, P Piantanida, LR Vega, C Galarza, On fundamental trade-offs of device-to-device communications in large wireless networks. arXiv preprint arXiv:1405.2295 (2014).
- JG Andrews, F Baccelli, RK Ganti, A tractable approach to coverage and rate in cellular networks. IEEE Trans. Commun. **59**(11), 3122–3134 (2011).
- ME Newman, Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. **46**(5), 323–351 (2005).
- WH Press, *Numerical recipes 3rd Edition: the art of scientific computing* (Cambridge University Press, UK, 2007).
- B Blaszczyszyn, A Giovanidis, Optimal geographic caching in cellular networks. arXiv preprint arXiv:1409.7626 (2014).
- E Baştuğ, M Bennis, M Debbah, in International Symposium on Wireless Communication Systems (ISWCS’14). Cache-enabled small cell networks: modeling and tradeoffs (Barcelona, Spain, 2014).
- J Hoydis, M Debbah, David vs Goliath or small cells vs massive MIMO. [Online] http://goo.gl/isfya5 (2011).

## Copyright

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.