Backhaul-Aware Caching Placement for Wireless Networks

Xi Peng, Juei-Chin Shen, Jun Zhang and Khaled B. Letaief, Fellow, IEEE
Department of ECE, The Hong Kong University of Science and Technology
Mediatek Inc., Hsinchu
E-mail: {xpengab, eejzhang, eekhaled}@ust.hk, jc.shen@mediatek.com
Abstract

As the capacity demand of mobile applications keeps increasing, the backhaul network is becoming a bottleneck to support high quality of experience (QoE) in next-generation wireless networks. Content caching at base stations (BSs) is a promising approach to alleviate the backhaul burden and reduce user-perceived latency. In this paper, we consider a wireless caching network where all the BSs are connected to a central controller via backhaul links. In such a network, users can obtain the required data from candidate BSs if the data are pre-cached. Otherwise, the user data need to be first retrieved from the central controller to local BSs, which introduces extra delay over the backhaul. In order to reduce the download delay, the caching placement strategy needs to be optimized. We formulate such a design problem as the minimization of the average download delay over user requests, subject to the caching capacity constraint of each BS. Different from existing works, our model takes BS cooperation in the radio access into consideration and is fully aware of the propagation delay on the backhaul links. The design problem is a mixed integer programming problem and is highly complicated, and thus we relax the problem and propose a low-complexity algorithm. Simulation results show that the proposed algorithm can effectively determine the near-optimal caching placement and provide significant performance gains over conventional caching placement strategies.

Index Terms: Wireless caching networks, caching placement, download delay

I Introduction

The explosive growth of global mobile data traffic [1], especially mobile video streaming, has led to significant increases in user latency and imposed a heavy burden on backhaul links that connect local BSs to the core network. The congestion in the backhaul may cause excessively long delays to the content delivery, and thus degrades the overall quality of experience (QoE). In order to support the increasing mobile data traffic, one promising approach to reduce delivery time and backhaul traffic is to deploy caches at BSs, thereby bringing frequently requested bulky data (such as videos) close to the users [2] in the hope of satisfying user requests without increasing the burden over backhaul links. In this way, the backhaul is used only to refresh the caches when the user request distribution evolves over time. Usually, the refreshing process does not require high-speed transmission and can work at off-peak times.

Caching capacity of local BSs can be regarded as a new type of resources of wireless networks besides time, frequency and space. However, the caching capacity is limited compared with the total amount of mobile traffic. Thus sophisticated caching placement strategies will be needed to fully exploit the benefit of caching. With the knowledge of channel statistics and file popularity distribution, the central controller is able to determine an optimal caching strategy to cater for user requests with locally cached content. Once caches are fully utilized, the requirements for backhaul can be greatly reduced and the download delay can be shortened, especially when the backhaul links are in poor conditions.

So far, the design of caching placement has not been fully addressed. Most of the previous studies fail to take physical layer features into consideration. For example, it was assumed in [3] and [4] that the wireless transmission was error-free. In [5], delays of D2D and cellular transmissions were simply set as constants and the proposed caching strategy was to store as many different files as possible. However, when taking multipath fading into consideration, storing the same content at multiple BSs can actually provide channel diversity gains and is perhaps more advantageous. The authors in [6] analyzed both uncoded and coded femto-caching in order to minimize the total average delay of all users. In their work, coded femto-caching was obtained as the convex relaxation of the uncoded problem. Nevertheless, it not only ignored physical layer features, but also imposed a certain network topology requirement which cannot be fulfilled in practice. Physical-layer operation including data assignment and coordinated beamforming in caching networks was considered in [7], but the caching placement was assumed to be given a priori.

There are also works studying the dynamic caching placement and update. In [8], the authors studied video caching in the radio access network (RAN) and proposed caching policies based on the user preference profile. Nevertheless, they considered neither the variation of the wireless channel during the transmission of a file nor the actual delay of wireless transmission and backhaul delivery. The authors of [9] concentrated on the caching content optimization in a single BS. The file popularity was assumed unknown and their strategy was optimized based on the observation of user request history over time. However, this work did not consider the effect of backhaul delays and assumed that the cache replacement was of negligible duration and operated frequently.

In this paper, we present a wireless caching network model to determine the optimal caching placement strategy for managing random user requests. In particular, we aim at minimizing the average download delay, which is one of the key QoE metrics, by taking wireless channel statistics into account. Moreover, to the best of our knowledge, the impact of backhaul delays on caching placement is studied in this paper for the first time. The design of the caching placement strategy is formulated as a mixed-integer nonlinear programming (MINLP) problem, which is computationally difficult to solve. Thus we resort to a relaxed formulation of the problem and provide a low-complexity algorithm. Simulation results show that the strategy derived from our proposed algorithm outperforms other well-known caching placement strategies. Furthermore, we provide some insights into the caching placement design. Specifically, in the case where the backhaul delay is very small, the most popular content has a higher priority to be cached. On the other hand, when the backhaul delay is relatively large, it is preferable to maximize the caching content diversity, i.e., to cache as much distinct content as possible, so as to reduce the chance of invoking backhaul transmission.

II System Model

In this work, we consider the downlink of a wireless caching network, in which single-antenna BSs and single-antenna mobile users are uniformly distributed in the considered region. Through backhaul links, the BSs are connected to a central controller, which also acts as a file library server. The library contains files and each of them can be divided into segments of equal size (with bits). Each BS is equipped with a storage unit of limited capacity. For simplicity, we assume that all users see channels with the same distribution. Without loss of generality, we focus on one user for calculating the performance metric of interest. A BS will be regarded as a candidate BS for user if it holds the content requested by user , no matter whether such content was previously cached or is retrieved via the backhaul. User has its cluster of candidate BSs, denoted as , which consists of BS indices and has cardinality . The system model and data retrieval process are illustrated in Fig. 1. The user will choose the BS with the best wireless channel in to communicate with, as shown in Fig. 1 (a). In the case where no BS holds the requested content, as shown in Fig. 1 (b), the central controller passes such content to all BSs and we shall have .

Figure 1: System model and content retrieval process. (a) Content retrieval from only local BSs, ; (b) Content retrieval from the central controller with extra backhaul delay, .

II-A File Transmission Model

We consider segmented file transfer [10] (also known as multi-source file transfer), which has the advantage of allowing a requested file to be sourced and downloaded from different BSs in various time slots. Different segments of a requested file, when reaching the user, will be decoded and assembled into a complete file. Each segment can be as small as a single packet or as big as a stream lasting for several minutes. We will first discuss how to calculate the delay of downloading a segment, which is typically quantified as the average number of required time slots for a segment to be successfully decoded. Under the assumption that the segments of a file are sequentially sent over homogeneous and ergodic channels, the download delay of this file is indeed the sum of all segment-download delays.

We shall assume that time is partitioned into -second-wide time slots indexed by . We will also assume that at each time slot, user communicates with the BS in that provides the highest signal-to-interference-plus-noise ratio (SINR), and this BS is able to transmit complex symbols. Let denote the channel coefficient between the selected BS and user , following the same distribution for all and . The channel is assumed to be block-fading with block length , i.e., the channel is constant for a duration of seconds and different channel block realizations are i.i.d. At time slot , the transmit signal from the selected BS to user is denoted as

(1)

and the transmit power constraint is for all , and .

The received signal of user for time slot can be written as

(2)

where and . The noise is complex white Gaussian, i.e., .

The transmission of a segment to the intended user, in accordance with an incremental redundancy (IR) hybrid-ARQ protocol as adopted in LTE [11], will be presented below. Given a well-designed Gaussian codebook, the -bit file segment is encoded into a channel code , where is a given positive integer, which can be made arbitrarily large, and each subcode , containing complex symbols, can be individually used to recover the original file segment. These subcodes are further modulated into a series of signal bursts. At each time slot, one signal burst is allowed to be sent. If this burst is not decoded correctly, the user feeds back a negative acknowledgment (NACK) message over an error-free and low-latency channel. Once the BS receives this NACK message, the next burst is sent at the next time slot. This process continues until the BS receives an acknowledgement (ACK) message. If the transmission starts from the first time slot and ends at the -th time slot, the effective coding rate is , where the coding rate is defined as bits/sec/Hz as a complex symbol can be transmitted approximately in 1 s and 1 Hz [12].
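The IR hybrid-ARQ stopping rule described above can be sketched in a few lines: signal bursts are sent slot by slot until the mutual information accumulated across the received bursts reaches the coding rate, at which point decoding succeeds. This is a minimal sketch assuming an unlimited decoding buffer and an exponentially distributed (Rayleigh-faded) per-slot SNR; the function name and parameter values are ours, not the paper's.

```python
import math
import random

def harq_rounds(rate, avg_snr, max_rounds=100, rng=random):
    """Simulate one IR hybrid-ARQ transmission: one burst per time slot,
    repeated until the mutual information accumulated over all received
    bursts exceeds the coding rate `rate` (bits/sec/Hz).
    Returns the number of time slots used."""
    acc_info = 0.0
    for t in range(1, max_rounds + 1):
        # Rayleigh fading -> per-slot SNR is exponential with mean avg_snr.
        snr = rng.expovariate(1.0 / avg_snr)
        acc_info += math.log2(1.0 + snr)  # information gained this slot
        if acc_info >= rate:
            return t  # ACK: segment decoded at slot t
    return max_rounds  # give up after max_rounds (truncated simulation)
```

Averaging `harq_rounds` over many trials gives a Monte Carlo estimate of the per-segment download delay in time slots.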

II-B Caching Placement Strategy

In practice, the caching capacity at BSs cannot be arbitrarily large. Thus, it is crucial to optimize the caching strategy to exploit the limited caching capacity in order to maximize the benefits brought by caching.

We denote the caching placement strategy as a three-dimensional matrix consisting of binary entries, where , indicating the caching placement of BS , is defined as

(3)

in which means that segment of file is cached at BS in advance and means the opposite. Since each BS has a limited caching capacity , we have

(4)

When user requests segment of file , the number of candidate BSs holding the segment is

(5)

Note that there should be a constraint for in order to avoid duplicate caching for the same segment at a BS.
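The binary placement variable and the constraints around (3)-(5) translate directly into code. Below is a small sketch with hypothetical sizes: `feasible` checks the per-BS capacity constraint and `candidates` counts the candidate BSs for a given segment; the binary entries themselves rule out duplicate caching of a segment at one BS.

```python
def empty_placement(num_bs, num_files, num_segments):
    """All-zero binary caching placement c[l][f][s] (cf. (3)):
    c[l][f][s] = 1 iff BS l caches segment s of file f."""
    return [[[0] * num_segments for _ in range(num_files)]
            for _ in range(num_bs)]

def feasible(c, capacity):
    """Per-BS caching capacity constraint: each BS stores at most
    `capacity` segments in total (cf. (4))."""
    return all(sum(sum(row) for row in bs) <= capacity for bs in c)

def candidates(c, f, s):
    """Number of candidate BSs holding segment s of file f (cf. (5))."""
    return sum(bs[f][s] for bs in c)
```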

III Backhaul-Aware Caching Placement

The average download delay is a representative metric for the system performance in wireless caching networks. In this section, we will first derive an analytical expression of the download delay. We will then formulate the problem of minimizing the download delay subject to constraints on caching capacities.

III-A Download Delay

We assume Rayleigh fading channels and no interference among different users, e.g., they can be served by different subcarriers with orthogonal frequency-division multiple access (OFDMA). Denote the signal-to-noise ratio (SNR) from BS to user as , and the probability density function (PDF) of is given by

f_{\gamma_l}(x) = \frac{1}{\bar{\gamma}} e^{-x/\bar{\gamma}}, \quad x \ge 0, \qquad (6)

where is the average received SNR and we have . According to our previous assumption, at time slot , the received SNR of user can be obtained as

\gamma(t) = \max_{l = 1, \ldots, n} \gamma_l(t), \qquad (7)

Hence, the PDF of is

f_{\gamma}(x) = \frac{n}{\bar{\gamma}} \left(1 - e^{-x/\bar{\gamma}}\right)^{n-1} e^{-x/\bar{\gamma}}, \quad x \ge 0. \qquad (8)

For simplicity, we omit the subscript in the following derivation.

In the IR scheme, each user has a buffer with size . Hence, up to of the most recent signal bursts can be stored and used to decode the information. In practice, is chosen to reach a compromise between the decoding performance and the implementation cost. If the buffer is big enough, can be regarded as infinity. If only the current burst is used for decoding, we will have . It has been indicated in [12] that the mutual information across multiple time slots can be written as

I_t = \sum_{s=\max(1,\, t-K+1)}^{t} \log_2\!\left(1 + \gamma(s)\right), \qquad (9)

where is the received SNR at time slot . When employing typical set decoding, the probability of decoding error for user at time slot can be expressed as

P_e(t) = \Pr\left\{ I_t < R \right\}, \qquad (10)

It is difficult to obtain a closed-form expression for the average download delay. Hence, we shall derive a lower bound.

Theorem 1.

For the IR hybrid-ARQ protocol, the average download delay of a segment in this system model is lower bounded by

Proof:

For a given user, the probability that its download delay of a segment ( bits) is larger than time slots is given by

(11)

The expected download delay for a segment can be obtained as

\mathbb{E}[T] = \sum_{t=0}^{\infty} \Pr\{T > t\}, \qquad (12)

From (9) and (10), we can get

(13)

Note that if for , we have . As a result, we can get

(14)

With (8), we can obtain the lower bound of the error probability as

(15)

Substituting (15) into (12) yields the desired result. ∎

Theorem 1 implies the download delay for a segment is a function of , , and . We will adopt the lower bound when calculating the average download delay of a segment, which is given by

(16)

Note that if the requested segment has not been cached at any BS, this segment will first be delivered to all BSs via the backhaul, and then the wireless transmission will follow the same scheme as for a cached segment. In that case, the number of candidate BSs equals the total number of BSs, and an extra backhaul delay, denoted as , will be incurred.
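Putting the pieces together, the average segment download delay under BS selection diversity can also be estimated by Monte Carlo simulation rather than the closed-form lower bound: in each slot the user decodes using the best of its candidate BSs, and a cache miss routes the segment through the backhaul first. This is an illustrative sketch; the network size, trial count, and function name are our assumptions.

```python
import math
import random

TOTAL_BS = 4  # hypothetical total number of BSs in the network

def segment_delay(n, rate, avg_snr, backhaul_delay=0.0, trials=2000, seed=0):
    """Monte Carlo estimate of the average download delay (in time slots)
    of one segment cached at n candidate BSs. Each slot the user decodes
    from the best of the n candidates (selection diversity over i.i.d.
    Rayleigh channels). A cache miss (n == 0) makes all TOTAL_BS BSs
    candidates but adds a fixed backhaul delay."""
    rng = random.Random(seed)
    extra = 0.0
    if n == 0:
        n, extra = TOTAL_BS, backhaul_delay
    total_slots = 0
    for _ in range(trials):
        acc, t = 0.0, 0
        while acc < rate:
            t += 1
            # best-of-n SNR: max of n i.i.d. exponentials with mean avg_snr
            snr = max(rng.expovariate(1.0 / avg_snr) for _ in range(n))
            acc += math.log2(1.0 + snr)
        total_slots += t
    return total_slots / trials + extra
```

Comparing `segment_delay(1, ...)` with `segment_delay(2, ...)` exhibits the channel diversity gain of caching the same segment at multiple BSs discussed in the introduction.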

III-B Problem Formulation and Relaxation

There are two cases when determining the candidate BSs. If , we have . Otherwise, when , based on our assumption, we have , and the segment download delay additionally includes the backhaul delay. Accordingly, the download delay of file can be calculated as

(17)

We suppose that the file is requested with probability and thus . The requests for segments of the same file are of equal probability. Therefore, the average download delay of all the files can be written as

(18)

Our goal is to minimize the average download delay by arranging the placement of segments at each BS subject to the caching capacity limit, given physical layer constraints, including the coding rate, the buffer size at users and the received SNR target. With fixed , and , the caching placement problem is formulated as

(19)
subject to

where constraint C1 stands for the caching capacity limit of each BS and constraint C2 indicates that each segment can be cached by at most BSs. It turns out that problem is an MINLP problem and thus it is highly complicated to find the optimal solution. As a result, we will focus on developing effective sub-optimal algorithms.

By further examining problem , we find that the optimization of caching placement boils down to the determination of the number of candidate BSs for each segment. In order to simplify the notation, we define it as a vector , where . We shall take two caching strategies as an example. That is,

If is the optimal caching strategy, then is also optimal. This is because both and correspond to the same vector , which determines the average download delay of the two strategies. Therefore, (16) can be written as

(20)

with

(21)

For , we can find that is convex w.r.t. for .

For each segment , its download delay is

(22)

The indicator functions in (22) will cause the major difficulty in designing the caching placement strategy. To resolve this issue, we adopt an exponential function with to approximate the indicator function . Then, we can obtain an approximated function to represent the average download delay of a file as

(23)

where and are given by

(24)

and

(25)

We can find that is convex w.r.t. and is concave w.r.t. , which means that is a difference of convex (DC) functions.
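The smoothing step can be illustrated numerically. The paper's exact exponential surrogate is not shown in this copy, so the form below and the parameter `beta` are our assumptions: it vanishes at zero, saturates at one, and tightens toward the indicator as `beta` grows, which is the behaviour the approximation needs.

```python
import math

def indicator_pos(n):
    """Exact indicator: 1 if at least one candidate BS caches the segment."""
    return 1.0 if n > 0 else 0.0

def smooth_indicator(n, beta=5.0):
    """Smooth exponential surrogate 1 - exp(-beta * n) for the indicator.
    The exact form and the value of beta are assumptions for illustration,
    not necessarily the paper's choice."""
    return 1.0 - math.exp(-beta * n)
```

Unlike the indicator, the surrogate is differentiable in `n`, which is what makes the DC structure of the relaxed objective usable.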

With a fixed , we consider an approximated problem instead of . That is,

(26)
subject to

If we relax the integer constraint first, problem turns out to be a DC programming problem, which is not easy to solve directly due to the non-convex smooth objective function . The successive convex approximation (SCA) algorithm [13] can circumvent such a difficulty by replacing the non-convex objective function with a sequence of convex ones. Specifically, by starting from a feasible point , the algorithm generates a sequence according to the update rule

(27)

where is the point generated by the algorithm at the -th iteration, is the step size for the -th iteration, and is the solution of a convex optimization problem ,

(28)
subject to

is an approximation of at the -th iteration, which is defined as

(29)

and is a tight convex upper-bound of . The main steps of the algorithm are presented in Algorithm 1. The solution obtained from Algorithm 1 should then be rounded to satisfy the integer constraint .

Initialization: Find a feasible point and set .

Repeat

Solve problem and obtain the solution ;

Update ;

Set ;

Until stopping criterion is met.

Algorithm 1 : The SCA Algorithm
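Algorithm 1 can be sketched on a toy one-dimensional DC objective: the concave part is linearized at the current iterate, the resulting convex surrogate is minimized (here by brute-force grid search standing in for a convex solver), and the iterate is updated with a step size, mirroring update rule (27). This is a didactic sketch under our own naming, not the paper's implementation.

```python
def sca_minimize(g, h, dh, x0, lo, hi, iters=50, step=1.0):
    """Successive convex approximation for a 1-D DC objective
    f(x) = g(x) - h(x), with g and h convex and dh the derivative of h.
    At each iteration, h is replaced by its linearization at the current
    point, yielding a convex surrogate that upper-bounds f; the surrogate
    is minimized over [lo, hi] by grid search, and the iterate moves
    toward that minimizer with step size `step`."""
    grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
    x = x0
    for _ in range(iters):
        # Convex surrogate: keep g, linearize h at the current point x.
        surrogate = lambda y: g(y) - (h(x) + dh(x) * (y - x))
        x_hat = min(grid, key=surrogate)
        x = x + step * (x_hat - x)  # update rule in the spirit of (27)
    return x
```

For instance, minimizing f(x) = x^2 - 4x on [0, 5] (g = x^2, h = 4x) converges to x = 2, the true minimizer.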

IV Simulation Results

In this section, we present numerical results to examine the performance of the proposed caching placement strategy and to investigate the impact of backhaul delay.

Some previous studies have shown that in practical networks the request probability of content can be fitted with some popularity distributions. In this work, we assume that the popularity of files follows a Zipf distribution with parameter (see [14]) and the files are sorted in descending order of popularity. We set the rate target as bits/sec/Hz and the average received SNR as . We consider the case where only the current burst is used for decoding at the user side, i.e., . The range of the backhaul delivery delay is selected based on measurements conducted on a practical network, as was done in [8]. Their experiment implied that the backhaul delay of a piece of content approximately ranges from 30% to 125% of its wireless transmission delay. To investigate the impact of such delay, we choose .
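The Zipf popularity model used in the simulations can be generated as follows; the exponent value in the default is a placeholder, since the paper's parameter is not shown in this copy.

```python
def zipf_popularity(num_files, s=0.8):
    """Zipf request probabilities: p_f proportional to f^(-s), with files
    already sorted in descending order of popularity. The default exponent
    s = 0.8 is an assumed placeholder, not the paper's value."""
    weights = [f ** (-s) for f in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```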

First we compare the performance of the proposed algorithm with exhaustive search. The file library has three files and each of them is divided into three segments. There are four BSs and each has a capacity of two segments. We adopt a fixed step size and stopping criterion. Fig. 2 shows that the results given by the proposed efficient algorithm are very close to those obtained using exhaustive search. It can also be observed that a slight increase in cache size can significantly reduce the download delay, which confirms that caching has great potential to enhance future wireless networks.

Figure 2: Average download delay versus backhaul delivery delay.

Next, we compare the proposed algorithm with two standard caching placement strategies on a large-scale system, where there are 50 BSs, each with a capacity of segments. The file library has files, each of which is divided into segments. One standard strategy stores segments of the most popular content (MPC) [8] in the cache memory of each BS. The other one always places different segments in total at all BS caches, which aims at the largest content diversity (LCD) [5]. For the MPC policy, and ; and for the LCD policy, and . The MPC policy is often adopted when multiple users access a few pieces of content very frequently, such as popular movies and TV shows. The LCD policy ensures that for most requests, part of the segments can be served by the local caches, instead of having to be fetched from the remote central controller through the backhaul. The step size and the stopping criterion are the same as in Fig. 2. Fig. 3 demonstrates that our proposed strategy outperforms the other schemes, and important insights are revealed. When the backhaul delay is small enough, it can be regarded as equivalent to the case of infinite caching capacity. In this case, the best way to save download time is to maximize the channel diversity for each segment. As a result, the proposed strategy and the MPC policy coincide at the point where the backhaul delay is zero. When the backhaul delay increases, the advantage of caching diversity emerges, and the LCD policy will surpass the MPC policy. Our proposed strategy and the LCD policy converge when backhaul links suffer from severe delivery delays. This is because when the backhaul delay is large, even a single delivery via the backhaul will lead to a huge delay, and thus backhaul transmission should be avoided as much as possible. Therefore, the strategy that provides maximum caching content diversity is favorable.

Figure 3: Performance comparison of various caching placement strategies.
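For concreteness, the two baseline placements can be constructed as below, treating each cached item as one segment: MPC replicates the most popular items at every BS, while LCD spreads distinct items across BSs round-robin. Function names and example sizes are ours, for illustration only.

```python
def mpc_placement(num_bs, capacity, popularity):
    """Most Popular Content (MPC): every BS caches the `capacity` most
    popular items, maximizing channel diversity for popular requests."""
    top = sorted(range(len(popularity)), key=lambda f: -popularity[f])[:capacity]
    return [set(top) for _ in range(num_bs)]

def lcd_placement(num_bs, capacity, num_items):
    """Largest Content Diversity (LCD): BSs cache distinct items
    round-robin, so up to num_bs * capacity different items are cached
    in total, minimizing the chance of a backhaul fetch."""
    caches = [set() for _ in range(num_bs)]
    item = 0
    for b in range(num_bs):
        for _ in range(capacity):
            caches[b].add(item % num_items)
            item += 1
    return caches
```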

V Conclusions

This paper presented a framework to minimize the average download delay of wireless caching networks. A caching placement problem which takes into account physical layer processing as well as backhaul delays was formulated to fully exploit the benefit of caching. As the design problem is an MINLP problem, we relaxed it into a DC optimization problem and adopted the SCA algorithm to solve it efficiently. Simulation results showed that our strategy can significantly reduce the average download delay compared to conventional strategies, and the proposed low-complexity algorithm can achieve comparable performance to exhaustive search. Moreover, we demonstrated that the backhaul propagation delay will greatly influence the caching placement. Specifically, when the backhaul delay becomes very small or very large, our proposed strategy will gradually evolve to the MPC and the LCD strategy, respectively. In particular, for a practical value of the backhaul delay, the proposed caching placement serves as the best strategy. Therefore, it can be concluded that our work provides a promising model to formulate the download delay for wireless caching networks, and important insights are given for determining the optimal caching placement strategy under different backhaul conditions.

References

  • [1] Cisco Systems Inc., “Cisco visual networking index: Global mobile data traffic forecast update, 2014-2019,” White Paper, Feb. 2015.
  • [2] N. Golrezaei, A. Molisch, A. Dimakis, and G. Caire, “Femtocaching and device-to-device collaboration: A new architecture for wireless video distribution,” IEEE Commun. Mag., vol. 51, no. 4, pp. 142–149, Apr. 2013.
  • [3] J. Gu, W. Wang, A. Huang, and H. Shan, “Proactive storage at caching-enable base stations in cellular networks,” in Proc. IEEE Int. Symp. on Personal Indoor and Mobile Radio Comm. (PIMRC), London, UK, Sept. 2013, pp. 1543–1547.
  • [4] M. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [5] N. Golrezaei, P. Mansourifard, A. Molisch, and A. Dimakis, “Base-station assisted device-to-device communications for high-throughput wireless video networks,” IEEE Trans. Wireless Commun., vol. 13, no. 7, pp. 3665–3676, Jul. 2014.
  • [6] K. Shanmugam, N. Golrezaei, A. Dimakis, A. Molisch, and G. Caire, “Femtocaching: Wireless content delivery through distributed caching helpers,” IEEE Trans. Inf. Theory, vol. 59, no. 12, pp. 8402–8413, Dec. 2013.
  • [7] X. Peng, J.-C. Shen, J. Zhang, and K. B. Letaief, “Joint data assignment and beamforming for backhaul limited caching networks,” in Proc. IEEE Int. Symp. on Personal Indoor and Mobile Radio Comm. (PIMRC), Washington, DC, Sept. 2014.
  • [8] H. Ahlehagh and S. Dey, “Video-aware scheduling and caching in the radio access network,” IEEE/ACM Trans. Netw., vol. 22, no. 5, pp. 1444–1462, Oct. 2014.
  • [9] P. Blasco and D. Gunduz, “Learning-based optimization of cache content in a small cell base station,” in Proc. IEEE Int. Conf. Commun. (ICC), Sydney, Australia, Jun. 2014, pp. 1897–1903.
  • [10] R. Rejaie, M. Handley, H. Yu, and D. Estrin, “Proxy caching mechanism for multimedia playback streams in the Internet,” in Proc. Int. Web Caching Workshop, San Diego, CA, Mar. 1999.
  • [11] A. Ghosh, J. Zhang, J. G. Andrews, and R. Muhamed, Fundamentals of LTE.   Prentice-Hall, 2010.
  • [12] G. Caire and D. Tuninetti, “The throughput of hybrid-ARQ protocols for the Gaussian collision channel,” IEEE Trans. Inf. Theory, vol. 47, no. 5, pp. 1971–1988, Jul. 2001.
  • [13] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “A unified convergence analysis of block successive minimization methods for nonsmooth optimization,” SIAM J. Optim., vol. 23, no. 2, pp. 1126–1153, 2013.
  • [14] M. Zink, K. Suh, Y. Gu, and J. Kurose, “Characteristics of YouTube network traffic at a campus network-measurements, models, and implications,” Comput. Netw., vol. 53, no. 4, pp. 501–514, 2009.