Cache-enabled small cell networks: modeling and tradeoffs

Abstract

We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users’ demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.

1 Introduction

Increasing traffic demand from mobile users, driven by rich media applications, video streaming, and social networks [1], is pushing mobile operators to evolve their cellular networks continuously (see long-term evolution [2]). Small cell networks [3,4] and their integration with WiFi [5], heterogeneous networks [6], together with many other ideas from both industry and academia, have now started being deployed and integrated into current cellular networks. In Europe, projects such as NewCom# [7] in the 7th Framework Programme of the European Commission focus on the design of next-generation cellular networks, and a new framework, called Horizon 2020 [8], is being launched to support these efforts.

At the same time, content providers are moving users’ content to intermediate nodes in the network, namely caches, thereby reducing access delays. Content delivery networks such as Akamai [9] serve exactly this purpose. In this context, information-centric networking is emerging [10]. Combining these infrastructural concepts with cellular networks is also of interest [11,12]. Predicting users’ behavior and proactively caching users’ content at the edge of the network, namely at base stations and user terminals, has also been shown to yield further gains in terms of backhaul savings and user satisfaction [13].

Even though the idea of caching in mobile cellular networks is relatively recent, the origins of caching date back to the 1960s, when caching mechanisms were proposed to boost the performance of operating systems [14]. Additionally, over the past decades, many web caching schemes such as [15] have appeared to sustain the data flow of the Internet. In the context of mobile cellular networks, there have been recent attempts to design intelligent caching schemes that take the wireless environment into account. Due to the notorious intractability of the problem, these proposals are mainly based on approximate or heuristic solutions [16-18]. Besides these solutions, novel formulations and system models have been proposed to assess the performance of caching. For instance, an information-theoretic formulation of the caching problem is studied in [19]. The expected cost of uncoded and coded data allocation strategies is given in [20], where stochastically distributed cache-enabled nodes in a given area are assumed and the cost is defined as a function of distance. A game-theoretic formulation of the caching problem as a many-to-many game is studied in [21], taking into account data dissemination in social networks. The performance of caching in wireless device-to-device networks is studied in [22] in a scenario where nodes are placed on a grid and cache content randomly. An alternative device-to-device caching scenario with randomly located nodes is given in [23], and relevant tradeoff curves are derived.

The contribution of this work is to formulate the caching problem in a scenario where stochastically distributed small base stations (SBSs) are equipped with storage units but have limited backhaul capacity. In particular, we build on a tractable system model and define its performance metrics (outage probability and average delivery rate) as functions of the signal-to-interference-plus-noise ratio (SINR), the number of SBSs, the target file bitrate, the storage size, the file length, and the file popularity distribution. By coupling the caching problem with the physical layer in this way and relying on recent results from [24], we show that a certain outage probability can be achieved either by 1) increasing the number of SBSs while the total storage budget is fixed or 2) increasing the total storage size while the number of SBSs is fixed. To the best of our knowledge, our work differs from the aforementioned works in studying deployment aspects of cache-enabled SBSs. Similar lines of work in terms of analysis with stochastic geometry tools can be found in [20,23]. However, the system model and performance metrics are different from what is studied here^a.

The rest of this paper is structured as follows. We describe our system model in Section 2. The performance metrics and main results are given in Section 3; in the same section, much simpler expressions are obtained by making specific assumptions on the system model. We validate these results via numerical simulations in Section 4 and discuss the impact of the parameters on the performance metrics. Then, a tradeoff between the number of deployed SBSs and the total storage size is given in Section 5. Finally, our conclusions and future perspectives are given in Section 6^b.

2 System model

The cellular network under consideration consists of small base stations (SBSs) whose locations are modeled according to a Poisson point process (PPP) Φ with density λ. The broadband connection to these SBSs is provided by a central scheduler via wired backhaul links. We assume that the total broadband capacity is finite and fixed; thus, the backhaul link capacity of each SBS is a decreasing function of λ. In practice, this means that deploying more SBSs in a certain area amounts to sharing the total broadband capacity among more backhaul links. We define this function more precisely in the next sections.

We suppose that every SBS has a storage unit with capacity S nats (1 bit = ln(2) ≈ 0.693 nats) in which it caches the users’ most popular files from a given catalog. Each file in the catalog has length L nats and bitrate requirement T nats/s/Hz. We note that the assumption of equal file lengths is made for ease of analysis; alternatively, the files in the catalog can be divided into chunks of the same length. The file popularity distribution of this catalog is a right-continuous and monotonically decreasing probability density function, denoted by f_pop(f,γ), where f corresponds to a point in the support of the distribution and γ is its shape parameter. We assume that this distribution is identical among all users.

Every user equipped with a mobile terminal is associated with the nearest SBS, so that the coverage regions form a Poisson-Voronoi tessellation of the plane. In this model, we neglect the overhead introduced by the file requests of users in the uplink and focus only on the downlink transmission. In the downlink, a tagged SBS transmits with constant transmit power 1/μ Watts, and the standard unbounded power-law pathloss propagation model with exponent α > 2 is used for the environment. The tagged SBS and tagged user experience Rayleigh fading with mean 1. Hence, the received power at the tagged user, located r meters away from its tagged SBS, is given by hr^{-α}, where the random variable h follows an exponential distribution with mean 1/μ, denoted h ∼ Exponential(μ).

Once users are associated with their closest SBSs, we assume that they request files (or chunks) randomly according to the file popularity distribution f_pop(f,γ). When a request reaches the SBS via the uplink, the user is served immediately, either by fetching the file from the Internet via the backhaul or from the local cache, depending on the availability of the file therein. If a requested file is available in the local cache of the SBS, a cache hit event occurs; otherwise, a cache miss event is said to have occurred. A sketch of the network model described so far is given in Figure 1.

Figure 1

An illustration of the considered network model. The top right of the figure shows a snapshot of the PPP over a unit area, where the SBSs are randomly located. A closer look at the communication structure of a cache-enabled SBS is shown in the main figure.

In general, the performance of our system depends on several factors. To meet the quality-of-experience requirement, the downlink rate provided to the requesting user has to be equal to or higher than the file bitrate T so that the user does not observe any interruption. Even when this requirement is met in the downlink, another bottleneck can be the rate of the backhaul in case of cache misses. In the following, we define performance metrics that take these situations into account and then present our main results in the same section.

3 Performance metrics and main results

Performance metrics of interest in our system model are the outage probability and the average delivery rate. We start by defining these metrics for the downlink. From now on, without loss of generality, we refer to user o, located at the origin of the plane, as the typical user.

We know that the downlink rate depends on the SINR. The SINR of user o, located at a random distance r from its serving SBS b_o, is given by:

$$\begin{array}{@{}rcl@{}} \textrm{SINR} &\triangleq \frac{hr^{-\alpha}}{\sigma^{2} + I_{r}}, \end{array} $$
(1)

where

$$\begin{array}{@{}rcl@{}} I_{r} &\triangleq \sum_{i \in \Phi / b_{o}}{g_{i}{R^{-\alpha}_{i}}}, \end{array} $$
(2)

is the total interference experienced from all other SBSs (except the serving SBS b_o), located at distances R_i from the typical user and with fading values g_i. We define the success probability as the joint probability that the downlink rate exceeds the file bitrate T and that the requested file is in the local cache. The outage probability is then the complement of the success probability:

$$\begin{array}{*{20}l}{} p_{\text{out}}(\lambda,T,\alpha,S, L, \gamma) &\triangleq 1 - \underbrace{\mathbb{P}\Big[\text{ln}(1 \,+\, \text{SINR}) >\! T, f_{o} \!\in\! \Delta_{b_{o}}\Big]}_{\text{success probability}}, \end{array} $$
(3)

where f_o is the file requested by the typical user and \(\Delta _{b_{o}}\) is the local cache of the serving SBS b_o. Such a definition of the outage probability comes from a simple observation: if a requested file is in the cache of the serving SBS (so the limited backhaul is not used) and the downlink rate is higher than the file bitrate T (so the user does not observe any interruption during playback), we expect the outage probability to be close to zero. Given this explanation and the assumptions made in the previous section, we state the following theorem for the outage probability.

Theorem 1 (Outage probability).

The typical user has an outage probability from its tagged base station which can be expressed as:

$$ \begin{aligned} p_{\text{out}}(\lambda,T,\alpha,S, L, \gamma) &=\!1 -\pi\lambda \int^{\infty}_{0} \int^{S/L}_{0}\\ &\quad\times e^{-\pi\lambda v\beta(T,\alpha) - \mu(e^{T} - 1)\sigma^{2}v^{\alpha/2}} f_{\text{pop}}(\,f,\gamma\!) \mathrm{d}\,f \mathrm{d}v, \end{aligned} $$
(4)

where β(T,α) is given by:

$$ \begin{aligned} \beta(T,\alpha) &= \frac{2\left(\mu(e^{T} - 1)\right)^{\frac{2}{\alpha}}}{\alpha} \mathbb{E}_{g}\left[ g^{\frac{2}{\alpha}} \left(\Gamma\left(-\frac{2}{\alpha},\mu\left(e^{T} - 1\right)g\right) \right.\right.\\ &\quad\left.\left.-\Gamma\left(-\frac{2}{\alpha}\right)\right) \right], \end{aligned} $$
(5)

where \(\Gamma (a,x) = \int ^{\infty }_{x}{t^{a-1}e^{-t}\mathrm {d}t}\) is the upper incomplete Gamma function and \(\Gamma (x) = \int ^{\infty }_{0}{t^{x-1}e^{-t}\mathrm {d}t}\) is the Gamma function.

Proof.

The proof is provided in Appendix A Proof of Theorem 1.

Yet another useful metric in our system model is the delivery rate, which we define as follows:

$${} \begin{aligned} \tau\triangleq\left\{ \begin{aligned} &T,\qquad \text{if ln}(1 + \text{SINR}) > T \mathrm{\;and\;} f_{o} \in \Delta_{b_{o}}, \\ &C(\lambda),\quad\!\! \text{if ln}(1 + \text{SINR}) > T \mathrm{\;and\;} f_{o} \not\in \Delta_{b_{o}},\quad \text{nats/s/Hz}\\ &0,\qquad\, \text{otherwise}, \end{aligned}\right. \end{aligned} $$
(6)

where C(λ) is the backhaul capacity provided to each SBS over a single frequency in the downlink^c. The definition above can be explained as follows. If the downlink rate is higher than the threshold T (namely, the bitrate of the requested file) and the requested file is available in the local cache, the rate T is delivered to the user by the tagged SBS, which is sufficient for quality-of-experience. On the other hand, if the downlink rate is higher than T but the requested file does not exist in the local cache of the tagged SBS, the delivery rate is limited by the backhaul link capacity C(λ), for which we assume C(λ) < T. Given this definition of the delivery rate, we state the following theorem.
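
As a small illustration of this case distinction, the following Python sketch (our own, with illustrative names; not from the paper) returns the delivery rate of a single request according to (6):

```python
import numpy as np

def delivery_rate(sinr, cache_hit, T, C_lam):
    """Instantaneous delivery rate tau of (6), in nats/s/Hz.

    sinr      : SINR of the typical user (linear scale)
    cache_hit : True if the requested file is in the local cache
    T         : target file bitrate (nats/s/Hz)
    C_lam     : backhaul capacity C(lambda) of the SBS, assumed < T
    """
    if np.log(1.0 + sinr) > T:          # downlink supports the file bitrate
        return T if cache_hit else C_lam
    return 0.0                          # otherwise the user is in outage
```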

Theorem 2 (Average delivery rate).

The typical user has an average delivery rate from its tagged base station which can be expressed as:

$$ \begin{aligned} {\bar \tau}(\lambda,T,\alpha,S, L, \gamma) &= \pi\lambda \int^{\infty}_{0} e^{-\pi\lambda v\beta(T,\alpha) - \mu(e^{T} - 1)\sigma^{2}v^{\alpha/2}}\mathrm{d}v \\ &\quad\times\left(C(\lambda)+(T - C(\lambda))\int^{S/L}_{0}{f_{\text{pop}}(\,f,\gamma)\mathrm{d}\,f} \right), \end{aligned} $$
(7)

where β(T,α) has the same definition as in Theorem 1.

Proof.

The proof is deferred to Appendix B Proof of Theorem 2.

What we provided above are the general results. The exact values of the outage probability and average delivery rate can be obtained by specifying the distribution of the interference, the backhaul link capacity C(λ), and the file popularity distribution f_pop(f,γ). If this treatment does not yield closed-form expressions, the functions can be evaluated by numerical integration as a last resort. In the following subsection, as an example, we derive special cases of these results under specific assumptions, which in turn yield much simpler expressions.
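
As an illustration of such a numerical evaluation, the sketch below computes (4), (5), and (7) with SciPy quadrature. It is only a sketch under additional assumptions: the interference fades are taken as g ∼ Exponential(μ), and the popularity follows the power law introduced in Assumption 1 below, so the cache-hit integral reduces to 1 − (L/(L+S))^(γ−1); all parameter values are illustrative.

```python
import numpy as np
from scipy import integrate, special

def upper_gamma(a, x):
    # Upper incomplete Gamma(a, x) for -1 < a < 0, via the recurrence
    # Gamma(a, x) = (Gamma(a + 1, x) - x**a * exp(-x)) / a.
    return (special.gamma(a + 1.0) * special.gammaincc(a + 1.0, x)
            - x**a * np.exp(-x)) / a

def beta(T, alpha, mu=1.0):
    # beta(T, alpha) of (5), with the expectation taken over g ~ Exponential(mu).
    a = -2.0 / alpha
    k = mu * (np.exp(T) - 1.0)
    gamma_a = special.gamma(a + 1.0) / a          # complete Gamma(-2/alpha)
    integrand = lambda g: (g**(2.0 / alpha)
                           * (upper_gamma(a, k * g) - gamma_a)
                           * mu * np.exp(-mu * g))
    e_g, _ = integrate.quad(integrand, 0.0, np.inf)
    return 2.0 * k**(2.0 / alpha) / alpha * e_g

def coverage(lam, T, alpha, snr, mu=1.0):
    # pi*lam * int_0^inf exp(-pi*lam*v*beta - mu*(e^T - 1)*sigma^2*v^(alpha/2)) dv,
    # where SNR = 1/(mu*sigma^2), i.e. mu*(e^T - 1)*sigma^2 = (e^T - 1)/SNR.
    b, noise = beta(T, alpha, mu), (np.exp(T) - 1.0) / snr
    integrand = lambda v: np.exp(-np.pi * lam * v * b - noise * v**(alpha / 2.0))
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return np.pi * lam * val

def cache_hit_prob(S, L, gam):
    # int_0^{S/L} f_pop(f, gamma) df for the power-law popularity of Assumption 1.
    return 1.0 - (L / (L + S))**(gam - 1.0)

def p_out(lam, T, alpha, S, L, gam, snr):
    # Theorem 1, Eq. (4).
    return 1.0 - coverage(lam, T, alpha, snr) * cache_hit_prob(S, L, gam)

def avg_rate(lam, T, alpha, S, L, gam, snr, C1=5e-4, C2=0.0):
    # Theorem 2, Eq. (7), with the backhaul capacity C(lambda) of Assumption 1.
    C_lam = C1 / lam + C2
    return coverage(lam, T, alpha, snr) * (C_lam + (T - C_lam) * cache_hit_prob(S, L, gam))

print(p_out(lam=0.2, T=0.2, alpha=4.0, S=10.0, L=1.0, gam=2.0, snr=10.0))
print(avg_rate(lam=0.2, T=0.2, alpha=4.0, S=10.0, L=1.0, gam=2.0, snr=10.0))
```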

3.1 Special cases

Assumption 1.

The following assumptions are given for the system model:

  1. The noise power σ^2 is strictly positive, and the pathloss exponent α is 4.

  2. The interfering signals experience Rayleigh fading, so that g_i ∼ Exponential(μ).

  3. The capacity of the backhaul links is given by:

    $$ C\left(\lambda\right) \triangleq \frac{C_{1}}{\lambda} + C_{2}, $$
    (8)

    where C_1 > 0 and C_2 ≥ 0 are arbitrary coefficients such that C(λ) < T holds.

  4. The file popularity distribution of the users is characterized by a power law [25]:

    $$ f_{\text{pop}}\left(f,\gamma\right) \triangleq\left\{ \begin{aligned} & \left(\gamma - 1\right)f^{-\gamma},\quad f \geq 1, \\ &0,\qquad\qquad\quad\,\,\, f < 1, \end{aligned}\right. $$
    (9)

    where γ>1 is the shape parameter of the distribution.

The assumption C(λ) < T reflects the observation that high-speed fiber-optic backhaul links might be very costly in densely deployed SBS scenarios; therefore, we assume that C(λ) is lower than the bitrate of the file. On the other hand, we characterize the file popularity distribution by a power law. This choice stems from the observation that many real-world phenomena are well described by power laws (e.g., the distribution of file requests at web proxies or of word counts in natural languages) [25]. Given our system model and the specific assumptions made in Assumption 1, we state the following results.
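
As a quick sanity check on the power-law model, the short sketch below draws file requests from the density (9) and compares the empirical cache-hit frequency, when the most popular files occupying the storage S are cached, against the closed form 1 − (L/(L+S))^(γ−1) derived later in (28); the sample size and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
S, L, gamma_ = 10.0, 1.0, 2.0                     # illustrative values

# Draw file indices f >= 1 with density (gamma-1)*f**(-gamma): adding 1 to a
# Pareto(shape = gamma-1) sample gives exactly this distribution.
f = 1.0 + rng.pareto(gamma_ - 1.0, size=200_000)

p_hit_mc = np.mean(f <= 1.0 + S / L)              # cache holds the files f in [1, 1+S/L]
p_hit_cf = 1.0 - (L / (L + S))**(gamma_ - 1.0)    # closed form, cf. (28)
print(p_hit_mc, p_hit_cf)                         # the two values should be close
```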

Proposition 1 (Outage probability).

The typical user has an outage probability from its tagged base station which can be expressed as:

$$ \begin{aligned} p_{\text{out}}(\lambda,T,4,S, L,\gamma)&=1-\frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^{T}-1}{\text{SNR}}}} \text{exp}\left(\!\frac{\left(\lambda\pi(1 + \rho(T,4))\right)^{2}}{4(e^{T}\,-\,1)/\text{SNR}}\! \right) \\ &\quad\times Q\left(\!\frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^{T}\,-\,1)/\text{SNR}}}\! \right)\!\left(\! 1 -\! \left(\!\frac{L}{L\,+\,S}\!\right)^{\gamma - 1}\!\right), \end{aligned} $$
(10)

where \(\rho (T,4) = \sqrt {e^{T} - 1}\left (\frac {\pi }{2} - \text {arctan}\left (\frac {1}{\sqrt {e^{T}-1}}\right) \right)\) and the standard Gaussian tail probability is given as \(Q\left (x\right) = \frac {1}{\sqrt {2\pi }}\int _{x}^{\infty }{e^{-y^{2}/2}\mathrm {d}y}\).

Proof.

The proof is given in Appendix C Proof of Proposition 1.

Proposition 2 (Average delivery rate).

The typical user has an average delivery rate from its tagged base station which can be expressed as:

$$ \begin{aligned} {\bar \tau}(\lambda,T,4,S, L, \gamma)&=\frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^{T}-1}{\text{SNR}}}} \text{exp}\left(\frac{\left(\lambda\pi(1 + \rho(T,4))\right)^{2}}{4(e^{T}-1)/\text{SNR}} \right) \\ &\quad\times Q\left(\!\frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^{T}-1)/\text{SNR}}}\!\right)\!\left(\!T \,+\, \left(\!\frac{C_{1}}{\lambda} + C_{2} -\! T\!\right)\right.\\ &\quad\left.\times\left(\!\frac{L}{L+S}\!\right)^{\gamma - 1}\right), \end{aligned} $$
(11)

where ρ(T,4) and Q(x) have the same definition as in Proposition 1.

Proof.

The proof is given in Appendix D Proof of Proposition 2.

The expressions obtained for the special cases are cumbersome but easy to compute, as they require no numerical integration. Note that the Q(x) function appearing in the expressions is well known and can be computed using lookup tables or standard numerical packages.
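
For instance, a minimal Python sketch of such a computation is given below. It evaluates (10) and (11) using Q(x) = erfc(x/√2)/2 and the scaled complementary error function erfcx, so that the product of the exponential and Q terms does not overflow for large λ or SNR; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(y) = exp(y**2) * erfc(y)

def rho(T):
    # rho(T, 4) as defined in Proposition 1.
    t = np.exp(T) - 1.0
    return np.sqrt(t) * (np.pi / 2.0 - np.arctan(1.0 / np.sqrt(t)))

def coverage_cf(lam, T, snr):
    # Common factor of (10) and (11): pi^(3/2)*lam/sqrt(b) * exp(a^2/(4b)) * Q(a/sqrt(2b)),
    # with a = pi*lam*(1 + rho(T,4)) and b = (e^T - 1)/SNR; since Q(x) = erfc(x/sqrt(2))/2,
    # the exp-times-Q product equals erfcx(a/(2*sqrt(b)))/2, which is numerically stable.
    b = (np.exp(T) - 1.0) / snr
    a = np.pi * lam * (1.0 + rho(T))
    return np.pi**1.5 * lam / np.sqrt(b) * 0.5 * erfcx(a / (2.0 * np.sqrt(b)))

def p_out_cf(lam, T, S, L, gam, snr):
    # Proposition 1, Eq. (10).
    return 1.0 - coverage_cf(lam, T, snr) * (1.0 - (L / (L + S))**(gam - 1.0))

def avg_rate_cf(lam, T, S, L, gam, snr, C1, C2=0.0):
    # Proposition 2, Eq. (11).
    C_lam = C1 / lam + C2
    return coverage_cf(lam, T, snr) * (T + (C_lam - T) * (L / (L + S))**(gam - 1.0))

# Example values in the spirit of Figures 2 and 3 (SNR = 10 dB, i.e. 10 in linear scale).
print(p_out_cf(lam=0.2, T=0.5, S=5.0, L=1.0, gam=2.0, snr=10.0))
print(avg_rate_cf(lam=0.2, T=0.5, S=5.0, L=1.0, gam=2.0, snr=10.0, C1=5e-4))
```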

4 Validation of the proposed model

So far, we have provided the results for the outage probability and average delivery rate. In this section, we validate these results via Monte Carlo simulations. The numerical results shown here are obtained by averaging over 1,000 realizations. In each realization, the SBSs are distributed according to a PPP, and the file requests, signal, and interfering powers of the typical user are drawn randomly according to the corresponding probability distributions. The outage probability and average delivery rate are then calculated from the SINR and cache hit statistics. The simulation curves closely match the theoretical ones; the slight mismatch that remains is due to the coarse discretization of continuous variables, chosen to keep simulation times affordable. As mentioned previously, the target file bitrate and the average delivery rate are in units of nats/s/Hz, whereas the storage size and file length are in units of nats.
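
A minimal sketch of one such realization is given below, assuming α = 4, exponential power fades, and a cache-hit probability of 1 − (L/(L+S))^(γ−1); the simulation window radius and parameter values are illustrative choices rather than those of the original experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_realization(lam, T, S, L, gam, snr, mu=1.0, radius=30.0):
    # Draw one PPP snapshot in a disk of the given radius around the typical user.
    n = rng.poisson(lam * np.pi * radius**2)           # number of SBSs in the window
    if n == 0:
        return True                                    # no SBS at all -> outage
    r = radius * np.sqrt(rng.uniform(size=n))          # distances to the origin
    fades = rng.exponential(1.0 / mu, size=n)          # exponential power fades, mean 1/mu
    rx = fades * r**(-4.0)                             # received powers, alpha = 4
    serving = np.argmin(r)                             # nearest SBS serves the user
    sigma2 = 1.0 / (mu * snr)                          # SNR = 1/(mu*sigma^2)
    sinr = rx[serving] / (sigma2 + rx.sum() - rx[serving])
    cache_hit = rng.uniform() < 1.0 - (L / (L + S))**(gam - 1.0)
    return not (np.log(1.0 + sinr) > T and cache_hit)  # True if the user is in outage

outages = [one_realization(lam=0.2, T=0.2, S=10.0, L=1.0, gam=2.0, snr=10.0)
           for _ in range(1000)]
print(np.mean(outages))   # compare against Proposition 1 with the same parameters
```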

4.1 Impact of storage size

The storage size of the SBSs is one of the critical parameters in our system model. The effect of the storage size on the outage probability and the average delivery rate is plotted in Figures 2 and 3, respectively. Each curve corresponds to a different value of the target file bitrate. We observe that the outage probability decreases and the average delivery rate increases as the storage size grows. This behavior, observed in both the theoretical and simulation curves, confirms our initial intuition.

Figure 2

The evolution of the outage probability with respect to the storage size. SNR = 10 dB, λ = 0.2, γ = 2, L = 1 nat, α = 4, C_1 = 0.0005, C_2 = 0.

Figure 3

The evolution of the average delivery rate with respect to the storage size. SNR = 10 dB, λ = 0.2, γ = 2, L = 1 nat, α = 4, C_1 = 0.0005, C_2 = 0.

4.2 Impact of the number of base stations

The evolution of the outage probability with respect to the base station density is depicted in Figure 4. As the base station density increases, the outage probability decreases. This reduction in outage probability can be improved further by increasing the storage size of the SBSs.

Figure 4

The evolution of the outage probability with respect to the base station density. SNR = 10 dB, T = 0.2, γ = 2, L = 1 nat, α = 4, C_1 = 0.0005, C_2 = 0.

4.3 Impact of target file bitrate

Yet another important parameter in our setup is the target file bitrate T. Figure 5 shows its impact on the outage probability for different values of the storage size. Clearly, increasing the target file bitrate results in a higher outage probability. However, this performance loss can be compensated by increasing the storage size of the SBSs, although the impact of the storage size diminishes as T increases.

Figure 5

The evolution of the outage probability with respect to the target file bitrate. SNR = 10 dB, λ = 0.2, γ = 2, L = 1 nat, α = 4, C_1 = 0.0005, C_2 = 0.

4.4 Impact of file popularity shape

Another crucial parameter in our setup is the shape of the file popularity distribution, parameterized by γ. The impact of γ on the outage probability, for different storage sizes, is given in Figure 6. Generally, a higher value of γ means that only a small portion of the files is highly popular, whereas lower values of γ correspond to a more uniform popularity distribution. Therefore, as γ increases, the outage probability decreases, because less storage is required to capture most of the requests. However, at very low and very high values of γ, the impact on the outage probability is smaller than at intermediate values.

Figure 6

The evolution of the outage probability with respect to the popularity shape parameter γ. SNR = 10 dB, λ = 0.2, L = 1 nat, α = 4, C_1 = 0.0005, C_2 = 0.

5 David vs. Goliath: more SBSs with less storage or fewer SBSs with more storage?

In the previous section, we validated our results via numerical simulations and discussed the impact of several parameters on the outage probability and average delivery rate. Beyond that, we are interested in the tradeoff between the SBS density and the total storage size for a fixed set of parameters. We make an analogy with the well-known David and Goliath story to examine this tradeoff^d. More precisely, we aim to answer the following question: should we increase the storage size of the current SBSs (David) or deploy more SBSs with less storage (Goliath) in order to achieve a certain success probability? The answer is useful for the realization of such a scenario. Deploying more SBSs in a given area may not be desirable due to increased deployment and operation costs (Evil), whereas increasing the storage size of already deployed SBSs may incur less cost (Good). To characterize this tradeoff, we first define the optimal region as follows:

Definition 1 (Optimal region).

An outage probability p† is said to be achievable if there exist parameters λ, T, α, S, L, γ satisfying the following condition:

$$\begin{array}{*{20}l} p_{\text{out}}(\lambda,T,\alpha,S, L, \gamma) \leq p^{\dagger}. \end{array} $$

The set of all achievable p† forms the optimal region.

The optimal region can be tightened by restricting the parameters λ, T, α, S, L, γ to certain intervals; a detailed analysis of this is left for future work. Hereafter, we restrict ourselves to finding the optimal SBS density for a fixed set of parameters. In this case, the optimal SBS density can be readily obtained by plugging the fixed parameters into p_out and solving the resulting equation either analytically or numerically (e.g., via the bisection method [26]). In the following, we obtain a tradeoff curve between the SBS density and the total storage size by solving these equations systematically in the form of an optimization problem.
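
A minimal sketch of this numerical solution for fixed T, S, L, and γ is given below. It reuses p_out_cf from the sketch in Section 3.1 (hence α = 4) and applies Brent's bracketing root finder, a refinement of bisection, relying on p_out being decreasing in λ when S is fixed; the bracket and parameter values are illustrative and must satisfy p_out(λ_hi) < p† ≤ p_out(λ_lo).

```python
from scipy.optimize import brentq

def density_for_outage(p_dagger, T, S, L, gam, snr, lam_lo=1e-3, lam_hi=5.0):
    # Smallest SBS density achieving p_out <= p_dagger for a fixed storage size S.
    # Reuses p_out_cf() defined in the sketch of Section 3.1.
    g = lambda lam: p_out_cf(lam, T, S, L, gam, snr) - p_dagger
    return brentq(g, lam_lo, lam_hi)

print(density_for_outage(p_dagger=0.3, T=0.2, S=2.0, L=1.0, gam=3.0, snr=10.0))
```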

Definition 2 (SBS density vs. total storage size tradeoff).

Define the average total storage as S_total = λS, and fix T, α, L, and γ to values in the optimal region given in Definition 1. Denote by λ* the optimal SBS density for a given S_total. Then, λ* is obtained by solving the following optimization problem:

$${} \underset{\lambda}{\text{minimize}}\qquad\qquad\, \lambda $$
(12)
$${} \mathrm{subject\ to} \qquad\qquad p_{\text{out}}(\lambda,T,\alpha,S_{\text{total}}/\lambda,L,\gamma) \leq p^{\dagger}. $$
(12a)

The set of all achievable pairs (λ*, S_total) characterizes the tradeoff between the SBS density and the total storage size.
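
The following sketch traces such a tradeoff curve. For each total storage budget S_total, it scans a grid of densities and keeps the smallest λ meeting the outage constraint; a grid scan is used because, with S = S_total/λ, p_out is not necessarily monotone in λ. It reuses p_out_cf from the sketch in Section 3.1, and the sweep range is an illustrative choice.

```python
import numpy as np

def min_density(S_total, p_dagger, T, L, gam, snr,
                lam_grid=np.linspace(1e-3, 5.0, 2000)):
    # Smallest density on the grid satisfying constraint (12a) with S = S_total/lam.
    feasible = [lam for lam in lam_grid
                if p_out_cf(lam, T, S_total / lam, L, gam, snr) <= p_dagger]
    return min(feasible) if feasible else np.nan   # nan: p_dagger not achievable

for S_total in np.linspace(0.5, 10.0, 5):
    lam_star = min_density(S_total, p_dagger=0.3, T=0.2, L=1.0, gam=3.0, snr=10.0)
    print(f"S_total = {S_total:5.2f}  ->  lam* = {lam_star:.3f}")
```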

Figures 7 and 8 show two configurations of this tradeoff. In these plots, to achieve a certain outage probability (here, p† = 0.3), the number of SBSs can be decreased by increasing the total storage size; alternatively, the total storage size can be decreased by increasing the number of SBSs. Moreover, changing the parameter of interest (i.e., T ∈ {0.1, 0.2} or L ∈ {1, 2}) scales and shifts the tradeoff curve. Regardless of this scaling and shifting, we see that David prevails against Goliath.

Figure 7

The tradeoff between SBS density and total storage size for different target file bitrates. SNR = 10 dB, α = 4, L = 1 nat, γ = 3, and p† = 0.3.

Figure 8

The tradeoff between SBS density and total storage size for different file lengths. SNR = 10 dB, α = 4, T = 0.2 nats/s/Hz, γ = 3, and p† = 0.3.

6 Conclusions

We have studied the caching problem in a scenario where small base stations (SBSs) are stochastically distributed and have finite-rate backhaul links. We derived expressions for the outage probability and average delivery rate and validated these results via numerical simulations. The results showed that significant gains in outage probability and average delivery rate are possible with cache-enabled SBSs. We also showed that telecom operators can either deploy more base stations or increase the storage size of the existing deployment in order to achieve a certain quality-of-experience level.

7 Endnotes

a Additionally, the related work [27] was made public after the submission of this work.

b Compared to [28], this work contains a more comprehensive mathematical treatment, the proofs, and the tradeoff analysis conducted in Section 5.

c Without loss of generality, more realistic values of delivery rate can be obtained by making a proper signal-to-interference-plus-noise ratio gap approximation and considering the total wireless bandwidth instead of 1 Hz.

d David vs. Goliath refers to the underlying resource sharing problem which arises in a variety of scenarios including massive MIMO vs. Small Cells [29].

8 A Proof of Theorem 1

In order to prove Theorem 1, we adapt some useful results from [24]. Conditioning on the nearest base station being at a distance r from the typical user, the outage probability can be written as:

$$\begin{aligned} p_{\text{out}}(\lambda,T,\alpha,S,\gamma) = \mathbb{E}_{r}\left[1 \!- \mathbb{P}\left[\text{ln}(1\! + \text{SINR}) >\! T, f_{o} \in \Delta_{b_{o}} \!\mid\! r\right] \right]. \end{aligned} $$

Since expectation is a linear operator and these two events are independent, the above expression can be decomposed as:

$${} \begin{aligned} p_{\text{out}}(\lambda,T,\alpha,S,\gamma) &= 1 - \underbrace{\mathbb{E}_{r}\left[ \mathbb{P}\left[\text{ln}(1 + \text{SINR}) > T \mid r \right] \right]}_{(i)}\\ &\qquad\quad\underbrace{\mathbb{E}_{r}\left[ \mathbb{P}\left[f_{o} \in \Delta_{b_{o}} \mid r\right] \right]}_{(ii)}. \end{aligned} $$
(13)

Proceeding term by term, we first write (i) as:

$${} \begin{aligned} \mathbb{E}_{r}[\mathbb{P}[\text{ln}(1 + \text{SINR})&> T \mid r ]] \\ &=\! \int_{r>0}{\mathbb{P}\left[\text{ln}(1 \,+\, \text{SINR}) \!>\! T \mid\! r \right] f_{r}(r)\mathrm{d}r} \end{aligned} $$
(14)
$$ \begin{aligned} &\qquad\qquad\qquad\stackrel{(a)}{=}\! \int_{r>0}{\mathbb{P}\left[\text{ln}(1 \,+\, \text{SINR}) \!>\! T \mid\! r \right] e^{-\pi\lambda r^{2}}2\pi\lambda r\mathrm{d}r} \\ &\qquad\qquad\qquad\stackrel{(b)}{=}\! \int_{r>0}{\mathbb{P}\left[\!\frac{hr^{-\alpha}}{\sigma^{2}+I_{r}} \!>\! e^{T}\,-\,1 \!\mid\! r \!\right]\! e^{-\pi\lambda r^{2}}2\pi\lambda r\mathrm{d}r} \\ &\qquad\qquad\qquad\stackrel{(c)}{=}\! \int_{r>0} \mathbb{P}\left[\!h \!>\! r^{\alpha}\!\!\left(\!e^{T}\!\,-\,1\!\right)\!(\sigma^{2}\! + I_{r})\! \mid\! r\! \right]\\ &\qquad\qquad\qquad\quad\times e^{-\pi\lambda r^{2}}2\pi\lambda r \mathrm{d}r, \end{aligned} $$
(15)

where \(f_{r}(r) = e^{-\pi \lambda r^{2}}2\pi \lambda r\) is the probability density function of the distance r to the nearest point of a PPP [24]; (a) follows from its substitution. The expression in (b) is obtained by plugging in the SINR formula and moving it to the left-hand side of the inequality, and (c) results from algebraic manipulations that isolate the fading variable h.

Conditioning on I_r and using the fact that h ∼ Exponential(μ), the probability that the random variable h exceeds r^α(e^T−1)(σ^2+I_r) can be written as:

$${} \begin{aligned} &\mathbb{P}[h \!>\! r^{\alpha}\left(e^{T}\!\,-\,1\right)\left(\sigma^{2} \,+\, I_{r}\right) \mid r ] \\ &\qquad\qquad\qquad\quad\; = \mathbb{E}_{I_{r}}\!\left[ \mathbb{P}\left[h >\! r^{\alpha}\left(e^{T}\!\,-\,1\!\right)\!\left(\sigma^{2} \,+\, I_{r}\right)\! \mid\! r, I_{r} \right] \right] \\ &\qquad\qquad\qquad\quad\; = \mathbb{E}_{I_{r}}\!\left[ \text{exp}\left(\!-\mu r^{\alpha}\!\left(e^{T}\,-\,1\!\right)\left(\sigma^{2} \,+\, I_{r}\right)\!\right)\! \mid\! r \right] \\ &\qquad\qquad\qquad\quad\; = e^{-\mu r^{\alpha}(e^{T}-1)\sigma^{2}} \mathcal{L}_{I_{r}}\!\left(\!\mu r^{\alpha}\left(e^{T}\!\,-\,1\right)\! \right), \end{aligned} $$
(16)

where \(\mathcal {L}_{I_{r}}(s)\) is the Laplace transform of the random variable I_r evaluated at s, conditioned on the distance to the nearest base station from the origin. Substituting (16) into (15) yields the following:

$$ \begin{aligned} \mathbb{E}_{r}\left[\mathbb{P}\left[\text{ln}(1 + \text{SINR}) \!>\! T \!\mid\! r \right]\right] &=\! \int_{r>0} \!\!e^{-\mu r^{\alpha}\left(e^{T}-1\right)\sigma^{2}} \mathcal{L}_{I_{r}}\\ &\quad\times\!\left(\!\mu r^{\alpha}\!\left(\!e^{T}\,-\,1\right) \!\right) \!e^{-\pi\lambda r^{2}}2\pi\lambda r \mathrm{d}r. \end{aligned} $$
(17)

Defining g_i as random variables with an arbitrary but identical distribution for all i, and R_i as the distance from the i-th base station to the tagged receiver, the Laplace transform is written as:

$${} \begin{aligned} \mathcal{L}_{I_{r}}(s) &= \mathbb{E}_{I_{r}}\left[e^{-sI_{r}}\right] = \mathbb{E}_{\Phi,\{g_{i}\}} \left[ \text{exp}\left(-s\sum_{i \in \Phi \backslash\{b_{o}\}}{g_{i}R^{-\alpha}_{i}} \right) \right] \\ &= \mathbb{E}_{\Phi,\{g_{i}\}} \left[ \prod_{i \in \Phi \backslash\{b_{o}\}}{ \text{exp}\left(-sg_{i}R^{-\alpha}_{i}\right)} \right] \\ &\stackrel{(a)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_{o}\}}{ \mathbb{E}_{\{g_{i}\}}\left[\text{exp}\left(-sg_{i}R^{-\alpha}_{i}\right)\right]} \right] \\ &\stackrel{(b)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_{o}\}}{ \mathbb{E}_{g}\left[\text{exp}\left(-sgR^{-\alpha}_{i}\right)\right]} \right] \\ &= \text{exp}\left(-2\pi\lambda \int^{\infty}_{r}{\left(1 - \mathbb{E}_{g} \left[ \text{exp}\left(-sgv^{-\alpha}\right) \right] \right)v\mathrm{d}v} \right), \end{aligned} $$

where (a) comes from the independence of the g_i from the point process Φ, and (b) follows from the i.i.d. assumption on the g_i. The last step comes from the probability-generating functional of the PPP, which states that for some function f(x), \(\mathbb {E}\left [\prod _{x \in \Phi }{f(x)}\right ]=\text {exp}\left (-\lambda \int _{\mathbb {R}^{2}}{(1 - f(x))\mathrm {d}x} \right)\). Since the nearest interfering base station is at least at distance r, the integration limits run from r to infinity. Denoting by f(g) the probability density function of g, plugging in s = μr^α(e^T−1), and switching the integration order yields:

$${} \begin{aligned} \mathcal{L}_{I_{r}}\!\left(\!\mu r^{\alpha}\!\left(\!e^{T}\!\,-\,1\!\right) \!\right) &= \text{exp}\left(\!-2\pi\lambda\! \int^{\infty}_{0}\!\!\left(\int^{\infty}_{r}{\!\!\left(\!1 \,-\, e^{-\mu r^{\alpha}\left(e^{T} \!- 1\right)v^{-\alpha}g}\! \right)\!v\mathrm{d}v}\! \right)\right.\\ &\quad\left.\times\; f(g) \mathrm{d}g {\vphantom{\int^{\infty}_{0}}} \right). \end{aligned} $$

By the change of variables $v^{-\alpha} \to y$, the Laplace transform can be rewritten as:

$${} \begin{aligned} &\mathcal{L}_{I_{r}}\left(\mu r^{\alpha}\left(e^{T}\,-\,1\right) \right) = \\ &\text{exp}\left(\lambda\pi r^{2} - \frac{2\pi \lambda \left(\mu (e^{T} \,-\, 1)\right)^{\frac{2}{\alpha}}r^{2}}{\alpha} \int^{\infty}_{0} g^{\frac{2}{\alpha}} \left[ \Gamma\left(\!-\frac{2}{\alpha},\mu\left(e^{T} \!- 1\right)g\right)\right.\right.\\ &\left.\left.\quad\qquad-\,\Gamma\left(-\frac{2}{\alpha}\right) \right] f(g)\mathrm{d}g{\vphantom{\frac{2\pi \lambda \left(\mu (e^{T} \,-\, 1)\right)^{\frac{2}{\alpha}}r^{2}}{\alpha}}} \right). \end{aligned} $$
(18)

Plugging (18) into (17), using the substitution $r^{2} \to v$, and after some algebraic manipulations, the expression becomes:

$${} \begin{aligned} \mathbb{E}_{r}\left[\mathbb{P}\left[\text{ln}(1 \!+ \text{SINR}) \!>\! \!T \!\mid\! r \right]\right] = \pi\lambda\! \int^{\infty}_{0}{\!\!e^{-\pi\lambda v\beta(T,\alpha) - \mu\left(e^{T} \!- 1\right)\sigma^{2}v^{\alpha/2}}\mathrm{d}v}, \end{aligned} $$
(19)

where β(T,α) is given as:

$${} \beta(T,\alpha) = \frac{2\left(\mu\left(e^{T} - 1\right)\right)^{\frac{2}{\alpha}}}{\alpha} \mathbb{E}_{g}\left[ g^{\frac{2}{\alpha}} \left(\Gamma\left(-\frac{2}{\alpha},\mu\left(e^{T} - 1\right)g\right) - \Gamma\left(-\frac{2}{\alpha}\right) \right) \right]. $$

So far, we have obtained term (i) of (13). The term (ii) is straightforward to derive: since every SBS in the system model caches the same popular files and has the same storage size, the cache hit probability is independent of the distance r. This yields:

$$ \mathbb{E}_{r}\left[\mathbb{P}\left[\,f_{o} \in \Delta_{b_{o}} \mid r\right]\right] = \int^{S/L}_{0}{ f_{\text{pop}}(\,f,\gamma)\mathrm{d}\,f }. $$
(20)

Plugging both (19) and (20) into (13) and rearranging the terms, we conclude the proof. ■

9 B Proof of Theorem 2

The average delivery rate is \({\bar \tau } = \mathbb {E}\left [ \tau \right ]\), where the average is taken over the point process and the fading distribution. It can be shown that:

$$ \begin{aligned} {\bar \tau} &= \mathbb{E}[ \tau] \\ &\stackrel{(a)}{=} \mathbb{E} \left[ {\mathbb{P}\left[\text{ln}(1 \!+ \text{SINR}) \!>\! T\right]} \left(T\mathbb{P}\left[\,f_{o} \in \Delta_{b_{o}}\right] \,+\, C\left(\lambda \right)\mathbb{P}\left[\,f_{o} \not\in \Delta_{b_{o}}\right] \right) \right] \\ &\stackrel{(b)}{=} \mathbb{E}\left[ \underbrace{\mathbb{P}\left[\text{ln}(1\! + \text{SINR})\! >\! T \!\mid r \right]}_{\tau_{1}} \right] \\ &\quad\times \left(\mathbb{E}\left[ \underbrace{T\mathbb{P}\left[\,f_{o} \in \Delta_{b_{o}} \mid r \right]}_{\tau_{2}} \right] + \mathbb{E}\left[ \underbrace{C\left(\lambda \right)\mathbb{P}\left[\,f_{o} \not\in \Delta_{b_{o}} \mid r \right]}_{\tau_{3}} \right] \right) \\ &= \mathbb{E}\left[\tau_{1}\right] \left(\mathbb{E}\left[\tau_{2}\right] + \mathbb{E}\left[\tau_{3}\right] \right), \end{aligned} $$
(21)

where (a) is obtained by plugging the delivery rate as defined in (6), and (b) follows from independence of the events and linearity of the expectation operator.

The term \(\mathbb {E}[\tau _{1}]\) can be obtained from the proof of Theorem 1 by following the steps from (14) to (19). On the other hand, since the cache hit probability is independent of r, \(\mathbb {E}_{r}[\tau _{2}]\) can be expressed as:

$$\mathbb{E}_{r}[\!\tau_{2}] = T \int^{S/L}_{0}{ f_{\text{pop}}(\,f,\gamma)\mathrm{d}f }. $$

Using similar arguments, \(\mathbb {E}_{r}[\tau _{3}]\) is written as:

$$ \mathbb{E}_{r}[\!\tau_{3}] = C(\lambda)\left(1 - \int^{S/L}_{0}{ f_{\text{pop}}(f,\gamma)\mathrm{d}f } \right). $$

Substituting these expressions into (21) concludes the proof. ■

10 C Proof of Proposition 1

Since Proposition 1 is a special case of Theorem 1, we follow similar steps. We first rewrite (13) as:

$${} \begin{aligned} p_{\text{out}}(\lambda,T,\alpha,S,\gamma) &= 1 - \underbrace{\mathbb{E}_{r}\left[ \mathbb{P}\left[\text{ln}(1 + \text{SINR}) > T \mid r \right] \right]}_{(i)}\\ &\quad\qquad\underbrace{\mathbb{E}_{r}\left[ \mathbb{P}\left[f_{o} \in \Delta_{b_{o}} \mid r\right] \right]}_{(ii)}. \end{aligned} $$
(22)

To compute (i), the proof of Theorem 1 can be followed from (14) to (17). The Laplace transform is then written as:

$${} \begin{aligned} \mathcal{L}_{I_{r}}(s) &= \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_{o}\}}{\mathbb{E}_{g}\left[\text{exp}\left(-sgR^{-\alpha}_{i}\right)\right]} \right] \\ &\stackrel{(a)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_{o}\}}{\frac{\mu}{\mu + sR_{i}^{-\alpha}}} \right] \\ &= \text{exp}\left(-2\pi\lambda \int^{\infty}_{r}{\left(1 - \frac{\mu}{\mu + sv^{-\alpha}} \right)v\mathrm{d}v} \right), \end{aligned} $$
(23)

where (a) comes from the assumption that g ∼ Exponential(μ). Plugging in s = μr^α(e^T−1) then yields:

$${} \mathcal{L}_{I_{r}}\!\left(\mu r^{\alpha}\!\left(e^{T} \,-\, 1 \right)\!\right) = \text{exp}\!\left(\!-2\pi\lambda\! \int^{\infty}_{r}\!\!{\frac{e^{T} \!\,-\, 1}{e^{T} \,-\, 1 + \left(\frac{v}{r}\right)^{\alpha}} v\mathrm{d}v} \right). $$

Using the change of variables \(u = \left (\frac {v}{r(e^{T}-1)^{1/\alpha }}\right)^{2}\) results in:

$$ \mathcal{L}_{I_{r}}\left(\mu r^{\alpha}\left(e^{T} - 1 \right)\right) = \text{exp}\left(-\pi r^{2}\lambda\rho(T,\alpha) \right), $$
(24)

where:

$$ \rho(T,\alpha) = (e^{T} - 1)^{2/\alpha} \int^{\infty}_{(e^{T} - 1)^{-2/\alpha}}{ \frac{1}{1 + u^{\alpha/2}} \mathrm{d}u }. \notag $$

Substituting (24) into (17) and using the substitution $r^{2} \to v$ gives:

$$ \pi\lambda\int_{0}^{\infty}{e^{-\pi \lambda v(1 + \rho(T,\alpha)) - \mu(e^{T}-1)\sigma^{2}v^{\alpha/2}}\mathrm{d}v}. $$
(25)

Since α=4 in our special case, (25) simplifies to:

$$ \pi\lambda\int_{0}^{\infty}{e^{-\pi \lambda v(1 + \rho(T,4)) - \mu(e^{T}-1)\sigma^{2}v^{2}}\mathrm{d}v}, $$
(26)

where:

$$\begin{aligned} \rho(T,4) &= (e^{T} - 1)^{2/\alpha} \int^{\infty}_{(e^{T} - 1)^{-2/\alpha}}{ \frac{1}{1 + u^{2}} \mathrm{d}u } \\ &= (e^{T} - 1)^{2/\alpha}\left(\frac{\pi}{2} - \text{arctan}\left((e^{T}-1)^{-2/\alpha} \right) \right) \\ &= \sqrt{e^{T} - 1}\left(\frac{\pi}{2} - \text{arctan}\left(\frac{1}{\sqrt{e^{T}-1}}\right) \right). \end{aligned} $$

From this point, (26) can be further simplified since it has a form similar to:

$$ \int_{0}^{\infty}{e^{-ax}e^{-bx^{2}}\mathrm{d}x} = \sqrt{\frac{\pi}{b}} \text{exp}\left(\frac{a^{2}}{4b} \right) Q\left(\frac{a}{\sqrt{2b} }\right), \notag $$

where \(Q\left (x\right) = \frac {1}{\sqrt {2\pi }}\int _{x}^{\infty }{e^{-y^{2}/2}\mathrm {d}y}\) is the standard Gaussian tail probability. Setting a=π λ(1+ρ(T,4)) and b=μ(e T−1)σ 2=(e T−1)/SNR gives:

$${} \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^{T}-1}{\text{SNR}}}} \text{exp} \left(\frac{\left(\lambda\pi(1 \,+\, \rho(T,4))\right)^{2}}{4\left(e^{T}\,-\,1\right)\!/\text{SNR}} \right) Q\! \left(\! \frac{\lambda\pi(1 \,+\, \rho(T,4))}{\sqrt{2\left(e^{T}\,-\,1\right)\!/\text{SNR}}} \right). $$
(27)

This is the final expression for term (i) of (22). The term (ii) of (22) can be obtained using the same argument as for (20) in the proof of Theorem 1, namely that the cache hit probability is independent of the distance r. Thus:

$$ \begin{aligned} \mathbb{E}_{r}\left[\mathbb{P}\left[f_{o} \in \Delta_{b_{o}} \mid r\right]\right] &= \int^{S/L}_{0}{ f_{\text{pop}}\left(f,\gamma\right) \mathrm{d}f } \\ &\stackrel{(a)}{=} \int^{1 + S/L}_{1}{ \left(\gamma - 1\right)f^{-\gamma} \mathrm{d}f } \\ &= 1 - \left(\frac{L}{L+S}\right)^{\gamma - 1}, \end{aligned} $$
(28)

where (a) follows from plugging in the power-law density f_pop(f,γ) defined in Assumption 1 and shifting the integration limits accordingly; the last line evaluates the resulting integral, whose antiderivative is −f^{1−γ}. Therefore, we conclude the proof by plugging (27) and (28) into (22). ■
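
As an aside, the last simplification can be cross-checked numerically by comparing direct quadrature of (26) with the closed form (27), reusing rho and coverage_cf from the sketch in Section 3.1; the parameter values below are illustrative.

```python
import numpy as np
from scipy import integrate

lam, T, snr = 0.2, 0.2, 10.0                       # illustrative values
b = (np.exp(T) - 1.0) / snr                        # mu*(e^T - 1)*sigma^2
a = np.pi * lam * (1.0 + rho(T))                   # pi*lam*(1 + rho(T,4))
direct, _ = integrate.quad(lambda v: np.pi * lam * np.exp(-a * v - b * v**2),
                           0.0, np.inf)            # numerical value of (26)
print(direct, coverage_cf(lam, T, snr))            # the two values should agree
```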

11 D Proof of Proposition 2

The proposition is a special case of Theorem 2; thus, we follow similar steps. We start by rewriting (21) as:

$${} \begin{aligned} {\bar \tau} &= \mathbb{E}\left[ \underbrace{\mathbb{P}\left[\text{ln}(1 + \text{SINR}) >\! T \mid r \right]}_{\tau_{1}} \right]\! \left(\! \mathbb{E}\!\left[ \underbrace{T\mathbb{P}\left[\,f_{o} \in \Delta_{b_{o}} \!\mid r \right]}_{\tau_{2}} \right]\right.\\ &\left.\quad+ \mathbb{E}\left[ \underbrace{C\left(\lambda \right)\mathbb{P}\left[\,f_{o} \not\in \Delta_{b_{o}} \mid r \right]}_{\tau_{3}} \right] \right) \\ & = \mathbb{E}\left[\tau_{1}\right] \left(\mathbb{E}\left[\tau_{2}\right] + \mathbb{E}\left[\tau_{3}\right] \right). \\[-12pt] \end{aligned} $$
(29)

In this expression, the term \(\mathbb {E}\left [\tau _{1}\right ]\) can be obtained from the proof of Proposition 1. More precisely, observe that \(\mathbb {E}\left [\tau _{1}\right ]\) is identical to (i) of (22). Thus, following the steps from (23) to (27), we obtain:

$${} \begin{aligned} \mathbb{E}\left[\tau_{1}\right] &= \mathbb{E}\left[ \mathbb{P}\left[\text{ln}(1 + \text{SINR}) \!>\! T \mid r \right] \right] \\ &= \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^{T}-1}{\text{SNR}}}} \text{exp} \left(\frac{\left(\lambda\pi(1 \,+\, \rho(T,4))\right)^{2}}{4(e^{T}\,-\,1)/\text{SNR}} \right) Q\! \left(\frac{\lambda\pi(1 \,+\, \rho(T,4))}{\sqrt{2(e^{T}\,-\,1)/\text{SNR}}} \right). \end{aligned} $$
(30)

On the other hand, \(\mathbb {E}\left [\tau _{2}\right ]\) can be obtained by taking T out of the expectation and plugging (28) into the formula, which yields:

$$ \begin{aligned} \mathbb{E}\left[\tau_{2}\right] &= \mathbb{E}\left[ T\mathbb{P}\left[f_{o} \in \Delta_{b_{o}} \mid r \right] \right] \\ &= T\left(1 - \left(\frac{L}{L+S}\right)^{\gamma - 1}\right). \end{aligned} $$
(31)

Finally, \(\mathbb {E}\left [\tau _{3}\right ]\) is easy to derive as:

$$ \begin{aligned} \mathbb{E}\left[\tau_{3}\right] &= \mathbb{E}\left[ C\left(\lambda \right)\mathbb{P}\left[f_{o} \not\in \Delta_{b_{o}} \mid r \right] \right] \\ &= C\left(\lambda \right)\left(\frac{L}{L+S}\right)^{\gamma - 1} \\ &= \left(\frac{C_{1}}{\lambda} + C_{2}\right) \left(\frac{L}{L+S}\right)^{\gamma - 1}, \end{aligned} $$
(32)

where definition of C(λ) follows from Assumption 1. Substituting (30), (31), and (32) into (29) concludes the proof. ■

References

  1. Cisco, Cisco visual networking index: global mobile data traffic forecast update, 2013–2018. White Paper, [Online] http://goo.gl/l77HAJ (2014).

  2. 3GPP, Overview of 3GPP Release 13. [Online] http://www.3gpp.org/release-13 (2014).

  3. J Hoydis, M Kobayashi, M Debbah, Green small-cell networks. IEEE Vehicular Technol. Mag.6(1), 37–43 (2011).

  4. TQ Quek, G de la Roche, I Güvenç, M Kountouris, Small cell networks: deployment, PHY techniques, and resource management (Cambridge University Press, UK, 2013).

  5. M Bennis, M Simsek, W Saad, S Valentin, M Debbah, A Czylwik, When cellular meets wifi in wireless small cell networks. IEEE Commun. Mag. Spec. Issue HetNets. 51(6), 44–50 (2013).

  6. JG Andrews, Seven ways that HetNets are a cellular paradigm shift. IEEE Commun. Mag. 51(3), 136–144 (2013).

  7. Newcom#, Network of excellence in wireless communications. [Online] http://www.newcom-project.eu (2014).

  8. Horizon 2020, The EU framework programme for research and innovation. [Online] http://ec.europa.eu/programmes/horizon2020 (2014).

  9. E Nygren, RK Sitaraman, J Sun, The Akamai network: a platform for high-performance internet applications. ACM SIGOPS Oper. Syst. Rev. 44(3), 2–19 (2010).

  10. B Ahlgren, C Dannewitz, C Imbrenda, D Kutscher, B Ohlman, A survey of information-centric networking. IEEE Commun. Mag. 50(7), 26–36 (2012).

  11. S Spagna, M Liebsch, R Baldessari, S Niccolini, S Schmid, R Garroppo, K Ozawa, J Awano, Design principles of an operator-owned highly distributed content delivery network. IEEE Commun. Mag. 51(4), 132–140 (2013).

  12. X Wang, M Chen, T Taleb, A Ksentini, VCM Leung, Cache in the air: exploiting content caching and delivery techniques for 5G systems. IEEE Commun. Mag. 52(2), 131–139 (2014).

  13. E Baştuğ, M Bennis, M Debbah, Living on the edge: the role of proactive caching in 5G wireless networks. IEEE Commun. Mag. 52(8), 82–89 (2014).

  14. LA Belady, A study of replacement algorithms for a virtual-storage computer. IBM Syst. J. 5(2), 78–101 (1966).

  15. S Borst, V Gupta, A Walid, in IEEE INFOCOM. Distributed caching algorithms for content distribution networks (San Diego, USA, 2010), pp. 1–9.

  16. E Baştuğ, J-L Guénégo, M Debbah, in 20th International Conference on Telecommunications (ICT’13). Proactive small cell networks (Casablanca, Morocco, 2013).

  17. K Poularakis, G Iosifidis, V Sourlas, L Tassiulas, in IEEE Wireless Communications and Networking Conference (WCNC’14). Multicast-aware caching for small cell networks (Istanbul, Turkey, 2014).

  18. P Blasco, D Gunduz, in IEEE International Conference on Communications (ICC’14). Learning-based optimization of cache content in a small cell base station (Sydney, Australia, 2014).

  19. MA Maddah-Ali, U Niesen, Fundamental limits of caching. IEEE Trans. Inf. Theory. 60(5), 2856–2867 (2014).

  20. E Altman, K Avrachenkov, J Goseling, Coding for caches in the plane. arXiv preprint arXiv:1309.0604 (2013).

  21. K Hamidouche, W Saad, M Debbah, in 12th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). Many-to-many matching games for proactive social-caching in wireless small cell networks (Hammamet, Tunisia, 2014), pp. 569–574.

  22. M Ji, G Caire, AF Molisch, Fundamental limits of caching in wireless D2D networks. arXiv preprint arXiv:1405.5336 (2014).

  23. A Altieri, P Piantanida, LR Vega, C Galarza, On fundamental trade-offs of device-to-device communications in large wireless networks. arXiv preprint arXiv:1405.2295 (2014).

  24. JG Andrews, F Baccelli, RK Ganti, A tractable approach to coverage and rate in cellular networks. IEEE Trans. Commun. 59(11), 3122–3134 (2011).

  25. ME Newman, Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. 46(5), 323–351 (2005).

  26. WH Press, Numerical recipes 3rd Edition: the art of scientific computing (Cambridge University Press, UK, 2007).

  27. B Blaszczyszyn, A Giovanidis, Optimal geographic caching in cellular networks. arXiv preprint arXiv:1409.7626 (2014).

  28. E Baştuğ, M Bennis, M Debbah, in International Symposium on Wireless Communication Systems (ISWCS’14). Cache-enabled small cell networks: modeling and tradeoffs (Barcelona, Spain, 2014).

  29. J Hoydis, M Debbah, David vs Goliath or small cells vs massive mimo. [Online] http://goo.gl/isfya5 (2011).

Acknowledgements

This research has been supported by the ERC Starting Grant 305123 MORE (Advanced Mathematical Tools for Complex Network Engineering), the SHARING project under the Finland grant 128010 and the project BESTCOM.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Baştuǧ, E., Bennis, M., Kountouris, M. et al. Cache-enabled small cell networks: modeling and tradeoffs. J Wireless Com Network 2015, 41 (2015). https://doi.org/10.1186/s13638-015-0250-4
