Cache Placement Optimization for Coded Caching with Arbitrary Cache Size


Yong Deng and Min Dong
Dept. of Electrical, Computer and Software Engineering
University of Ontario Institute of Technology, Ontario, Canada
Abstract

We consider content caching between a service provider and multiple cache-enabled users, using the recently proposed modified coded caching scheme (MCCS) with an improved delivery strategy. We develop the optimal cache placement scheme by formulating a cache placement optimization problem, aiming to minimize the average load in the delivery phase under arbitrary cache size and user requests. Through reformulation, we show that the problem is a linear programming problem. By exploring the properties of the caching constraints, we derive the optimal cache placement solution in closed form, providing the complete optimal cache placement for any cache size and user population. We verify that the existing optimal scheme obtained at specific cache sizes is a special case of our solution. Through numerical studies, we show an interesting pattern of the optimal cache placement as the cache size varies, as well as the pattern of the caching gain under the optimal caching as the number of users varies.

I Introduction

The last decade has witnessed a dramatic surge of wireless data due to the proliferation of mobile devices [1]. Increasingly data-intensive applications have shifted wireless traffic toward content-based data access and sharing. With the growing data traffic and the requirement of timely content delivery in many wireless services, using network storage resources for content caching has emerged as a compelling technology to alleviate the network traffic load and reduce the content access latency for users [2, 3]. The availability of local caches at the network edge, e.g., at base stations or users, creates new network resources and opportunities to increase the user service capacity. Cache-aided communication technologies are expected to bring promising solutions for content delivery in future wireless networks.

Caching design and analysis have attracted increasing research interest. Many studies have investigated cache placement and delivery strategies to understand the impact of caching on wireless networks [4, 5, 6, 7]. Conventional uncoded caching allows users to pre-fetch part of the database to improve the hit rate [4, 5]. However, it is only optimal for the case of a single cache and is suboptimal for multiple caches [8]. Coded caching was first introduced in [6], where the authors proposed a coded caching scheme (CCS) that combines an (uncoded) cache placement scheme specifying the cached content with a coded multicast transmission strategy, assuming uniform file and cache characteristics. With the use of coding, it provides a caching gain that is shown to substantially reduce the traffic load as the network size increases. Coded caching has since attracted considerable attention, with extensions to decentralized cache placement [7], transmitter caching in mobile edge networks [9, 10], and joint transmitter and receiver caching in wireless interference networks [11]. Instead of proposing specific cache placement schemes or modifying the original scheme in the CCS for improvement, the optimization of cache placement in the CCS was separately studied in [12, 13], where, using different approaches, the authors obtained the optimal cache placement strategy to minimize the peak traffic load.

The CCS is designed assuming the number of users is less than the number of files in the database. It adopts the peak load as the performance metric, targeting the worst-case scenario of users requesting distinct files. In general, when user requests overlap, the delivery phase contains redundancy that increases the load [14]. To address this limitation, a recent study [14] proposed a modified coded caching scheme (MCCS) with a new delivery strategy that removes the redundancy introduced by the CCS in the delivery phase, reducing both the average and the peak load. The cache placement of [14] is further shown to be optimal for the MCCS in [15]. However, these state-of-the-art results hold only for specific cache memory sizes [14, 15]. The general optimal caching solution for the MCCS with an arbitrary cache size remains unknown.

In this paper, considering content caching and delivery between a service provider and multiple cache-enabled users, we develop the optimal cache placement solution for the MCCS with an arbitrary cache size. Specifically, we propose an optimization framework to formulate the cache placement problem, aiming to minimize the average load in the delivery phase under arbitrary cache size, accounting for the randomness of user requests. Through reformulation, we show that the optimization problem is in fact a linear programming problem. By exploring the properties of the constraints, we solve the problem to obtain the optimal placement solution in closed form. We verify that the existing optimal cache placement scheme [14, 15] for specific cache sizes is a special case of our solution. To the best of our knowledge, this is the first work to provide the complete optimal cache placement scheme for the MCCS, regardless of the number of users and their cache memory size. Note that our optimization approach can also be used to derive the optimal cache placement for the original CCS with arbitrary cache size, for peak load minimization. The same solution for the CCS has been obtained through optimization in the independent work [12]. However, the approach used there cannot be applied to solve the cache placement problem for the MCCS.

Through simulation studies, we analyze the performance of the proposed optimal caching scheme as compared with existing schemes, including the centralized and decentralized CCS. We show that the optimal cache placement has some interesting characteristics. As the cache size ranges from zero to the total size of all files, the cache placement shows a symmetric pattern. Furthermore, when the number of files and the cache size are fixed, the load increases with the user population in an interesting pattern, suggesting a changing caching gain achieved at different cache sizes.

II System Model

Consider a cache-aided transmission system with a server connecting to $K$ users, each with a local cache, over a shared error-free link. As shown in Fig. 1, examples of the described scenario include mobile edge computing networks, where network edge nodes (e.g., base stations) with cache storage are connected through backhaul to a service provider residing in the cloud, or a server in a base station serving its in-cell users with local caches. The server has a database consisting of $N$ files, $W_1, \ldots, W_N$, each of size $F$ bits. Denote $\mathcal{N} \triangleq \{1, \ldots, N\}$. We assume a uniform popularity distribution of these files, i.e., each file is requested with probability $1/N$. Denote the set of users by $\mathcal{K} \triangleq \{1, \ldots, K\}$. Each user $k$ has a local cache of capacity $C$ bits, for $k \in \mathcal{K}$, and we denote its cache size (normalized by the file size) by $M \triangleq C/F$.

Fig. 1: An example of cache-aided systems, where edge nodes are connected to the central service provider through a backhaul link. Each edge node has a local cache to alleviate the burden of the backhaul.

The system operates in two phases: the cache placement phase and the content delivery phase. The cache placement is performed in advance during off-peak hours without knowing the user file requests, and is changed on a longer time scale. During this phase, for a given cache placement scheme, each user $k$ uses a caching function $\phi_k$ to map the $N$ files into its cached content: $Z_k \triangleq \phi_k(W_1, \ldots, W_N)$.

During file requests, each user independently requests one file from the server, with the index of the requested file denoted by $d_k \in \mathcal{N}$, $k \in \mathcal{K}$. Denote $\mathbf{d} \triangleq [d_1, \ldots, d_K]$ as the demand vector containing the indices of the files requested by all users. In the content delivery phase, based on the demand vector and the cache placement, the server generates coded messages and transmits them to the users over the shared link. Denote the codeword as $X_{\mathbf{d}} \triangleq \psi(\mathbf{d}; W_1, \ldots, W_N)$, where $\psi$ is the encoding function for demand $\mathbf{d}$. Upon receiving the codeword, each user $k$ applies a decoding function $\varphi_k$ to obtain the (estimated) requested file $\hat{W}_{d_k} \triangleq \varphi_k(X_{\mathbf{d}}, Z_k)$ from the received signal and its cached content.

Thus, an entire coded caching scheme can be represented by the caching, encoding and decoding functions. The following defines a valid coded caching scheme.

Definition 1.

A caching scheme is valid if each user can reconstruct the file it requested, i.e., $\hat{W}_{d_k} = W_{d_k}$, $\forall k \in \mathcal{K}$, for any demand vector $\mathbf{d}$.

Coded Caching: In the CCS [6, 7] and the MCCS [14], each file is partitioned into non-overlapping subfiles of equal size, one for each specified user subset. During the cache placement phase, user $k$ caches those subfiles designated for the user subsets containing user $k$. In the delivery phase, the server delivers the missing subfiles of each requested file that are not in the requesting user's local cache, using a coded multicast delivery scheme.
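To make the subset-indexed placement concrete, the following Python sketch (ours, for illustration only; the file content and parameters are arbitrary) partitions a toy file for $K = 3$ users with subsets of size $t = 1$ and builds each user's cache:

```python
# Illustrative sketch (not from the paper): subset-indexed file partitioning
# for K = 3 users and subset size t = 1, so each file is split into
# C(K, t) = 3 equal subfiles, one per user subset of size t.
from itertools import combinations
from math import comb

K, t = 3, 1
users = range(1, K + 1)
subsets = list(combinations(users, t))  # [(1,), (2,), (3,)]
assert len(subsets) == comb(K, t)

def partition(file_bits: bytes):
    """Split a file into equal subfiles, one per size-t user subset."""
    m = len(file_bits) // len(subsets)
    return {S: file_bits[i * m:(i + 1) * m] for i, S in enumerate(subsets)}

subfiles = partition(b"abcdef")  # {(1,): b'ab', (2,): b'cd', (3,): b'ef'}
# User k caches every subfile whose user subset contains k.
cache = {k: {S: W for S, W in subfiles.items() if k in S} for k in users}
print(cache[1])  # {(1,): b'ab'}
```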

III Problem Formulation

A key design issue in a coded caching scheme is the cache placement. Existing coded caching schemes describe specific ways of file partitioning for the cache placement, assuming the cache size $M$ is an integer multiple of $N/K$. Instead of this design approach, we formulate the coded caching problem as a cache placement optimization problem for any given cache size $M$, to minimize the average rate (load) over the shared link, where we adopt the delivery strategy specified in the MCCS.

III-A Cache Placement

To formulate the problem, each file is partitioned into $2^K$ non-overlapping subfiles, one for each unique user subset $\mathcal{S} \subseteq \mathcal{K}$ (including the empty set). Since the file lengths, file popularity, and cache sizes are all uniform, a symmetric cache placement is adopted that treats all files equally. Thus, all files are partitioned in the same way. That is, let $W_{n,\mathcal{S}}$ denote the subfile of $W_n$ for user subset $\mathcal{S}$. Its size satisfies $|W_{n,\mathcal{S}}| = |W_{n',\mathcal{S}}|$, for all $n, n' \in \mathcal{N}$. In addition, the size of these subfiles only depends on the size of the user subset $|\mathcal{S}|$ under the symmetric cache placement.

Note that there are $\binom{K}{l}$ different user subsets with the same size $l$, for $l = 0, \ldots, K$. Let $\mathcal{S}^l_j$ denote the $j$-th user subset of size $l$, i.e., $|\mathcal{S}^l_j| = l$, for $j = 1, \ldots, \binom{K}{l}$. Let $\mathcal{G}^l \triangleq \{\mathcal{S}^l_1, \ldots, \mathcal{S}^l_{\binom{K}{l}}\}$ denote the cache subgroup containing all user subsets of size $l$, for $l = 0, \ldots, K$. Thus, all user subsets are partitioned into $K+1$ cache subgroups based on the subset size. Accordingly, all subfiles are partitioned into $K+1$ subgroups: $\{W_{n,\mathcal{S}} : \mathcal{S} \in \mathcal{G}^l,\ n \in \mathcal{N}\}$, $l = 0, \ldots, K$, where the subfiles in the same subgroup have the same size $a_l F$ (with $a_l$ denoting the subfile size normalized by $F$), for all $\mathcal{S} \in \mathcal{G}^l$, $n \in \mathcal{N}$.

Let $\mathbf{a} \triangleq [a_0, a_1, \ldots, a_K]$ denote the cache placement vector (common to all files) describing the size of the subfiles to be cached in each cache subgroup. In the cache placement phase, user $k$ caches all the subfiles in $\mathcal{G}^l$, $l = 1, \ldots, K$, that are for user subsets containing user $k$. In other words, user $k$ caches $\{W_{n,\mathcal{S}} : k \in \mathcal{S},\ n \in \mathcal{N}\}$.

For a given caching scheme, each original file should be reconstructable by combining all of its subfiles. For each file, among the $2^K$ partitioned subfiles, there are $\binom{K}{l}$ subfiles of size $a_l F$ (one for each user subset $\mathcal{S}$ with $|\mathcal{S}| = l$). Thus, we have the file partitioning constraint

$\sum_{l=0}^{K} \binom{K}{l} a_l = 1.$   (1)

For the local cache at each user, note that among all user subsets of size $l$, there are in total $\binom{K-1}{l-1}$ different user subsets containing the same user, for $l = 1, \ldots, K$. Since each file is partitioned based on the user subsets, for each file, the total number of subfiles a user can possibly cache is $\sum_{l=1}^{K} \binom{K-1}{l-1}$; considering the subfile size $a_l F$ for each cache subgroup $\mathcal{G}^l$, this amounts to $\sum_{l=1}^{K} \binom{K-1}{l-1} a_l F$ bits cached by the user for each file. Since the placement is symmetric across the $N$ files and the total cache capacity is $MF$ bits, each file can occupy at most a fraction $M/N$ of the cache. We have the local cache size constraint at each user as

$\sum_{l=1}^{K} \binom{K-1}{l-1} a_l \le \frac{M}{N}.$   (2)
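As a quick illustration, the sketch below (ours; the helper name `is_feasible` is not from the paper) checks constraints (1) and (2) for a candidate placement vector:

```python
# Sketch: feasibility check of a placement vector a = [a_0, ..., a_K]
# against the partition constraint (1) and cache constraint (2).
from math import comb

def is_feasible(a, K, M, N, tol=1e-9):
    total = sum(comb(K, l) * a[l] for l in range(K + 1))              # (1)
    cached = sum(comb(K - 1, l - 1) * a[l] for l in range(1, K + 1))  # (2)
    return abs(total - 1.0) <= tol and cached <= M / N + tol

# Example: K = 7, N = 10, M = 1, with a_0 = 0.3 and a_1 = 0.1 (cf. Table I).
a = [0.3, 0.1, 0, 0, 0, 0, 0, 0]
print(is_feasible(a, K=7, M=1, N=10))  # True
```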

III-B Content Delivery under the MCCS

The delivery scheme in the CCS multicasts a unique coded message to each user subset $\mathcal{S} \in \mathcal{G}^{l+1}$, $l = 0, \ldots, K-1$, formed by the bitwise XOR of subfiles (of the same size $a_l F$) as

$X_{\mathcal{S}} = \bigoplus_{k \in \mathcal{S}} W_{d_k, \mathcal{S} \setminus \{k\}}.$   (3)

Each user $k$ in subset $\mathcal{S}$ can retrieve the subfile $W_{d_k, \mathcal{S} \setminus \{k\}}$ of its requested file, since it has cached all the other subfiles in the XOR. Assuming the worst case of distinct file requests, coded messages as in (3) are delivered for all user subsets. There are $\binom{K}{l+1}$ user subsets in $\mathcal{G}^{l+1}$, to which coded messages of size $a_l F$ are delivered. The overall peak load (normalized by $F$) is $R = \sum_{l=0}^{K-1} \binom{K}{l+1} a_l$.
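A toy example (ours, with one-byte subfiles and distinct demands) of forming the coded messages in (3) for $K = 3$ and $t = 1$:

```python
# Toy delivery sketch: for each user subset S of size t + 1 = 2, the server
# multicasts X_S = XOR_{k in S} W_{d_k, S\{k}}; each user in S XORs out the
# subfiles it already caches to recover its own missing subfile.
from itertools import combinations

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# subfiles[n][S]: subfile of file n for user subset S (one byte each here)
subfiles = {n: {(1,): bytes([n]), (2,): bytes([n + 10]), (3,): bytes([n + 20])}
            for n in (1, 2, 3)}
d = {1: 1, 2: 2, 3: 3}  # demand vector: user k requests file d[k]

for S in combinations((1, 2, 3), 2):  # all user subsets of size t + 1
    parts = [subfiles[d[k]][tuple(u for u in S if u != k)] for k in S]
    X = parts[0]
    for p in parts[1:]:
        X = xor(X, p)
    print(S, X)  # coded message delivered to subset S
```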

When the file requests are not all distinct, the coded delivery in the original CCS contains some redundant subfiles. The recently proposed MCCS [14] provides a new delivery strategy that removes this redundancy in the CCS for further load reduction. Let $\tilde{N}(\mathbf{d})$ denote the number of distinct requests in demand vector $\mathbf{d}$. Based on the MCCS, we have the following Definitions 2 and 3.

Definition 2.

Leader group: For a demand vector $\mathbf{d}$ with $\tilde{N}(\mathbf{d})$ distinct requests, a leader group $\mathcal{U} \subseteq \mathcal{K}$ is chosen for the delivery phase, where $\mathcal{U}$ satisfies $|\mathcal{U}| = \tilde{N}(\mathbf{d})$ and the users in $\mathcal{U}$ have exactly $\tilde{N}(\mathbf{d})$ distinct requests.

Definition 3.

Redundant group: Any user subset $\mathcal{S} \subseteq \mathcal{K}$ is called a redundant group if it does not intersect with the leader group, i.e., $\mathcal{S} \cap \mathcal{U} = \emptyset$; otherwise, it is a non-redundant group.

We have the following proposition, which can be derived straightforwardly from the decentralized MCCS described in [14].

Proposition 1.

A caching strategy is valid as long as all the coded messages formed by the non-redundant groups are delivered.

III-C Cache Placement Optimization for the MCCS

Our objective is to minimize the expected load $\bar{R}$ by optimizing the cache placement $\mathbf{a}$. From Proposition 1, we know that $\bar{R}$ is equal to the expected total size of the coded messages of all the non-redundant groups. For each $l$, there are $\binom{K}{l+1}$ user subsets of size $l+1$, and among them, $\binom{K - \tilde{N}(\mathbf{d})}{l+1}$ are redundant groups, whose coded messages are not needed for the users to recover the subfiles of their requested files. Removing these redundant transmissions, the expected load is given by

$\bar{R} = \mathbb{E}_{\mathbf{d}}\!\left[ \sum_{l=0}^{K-1} \left( \binom{K}{l+1} - \binom{K - \tilde{N}(\mathbf{d})}{l+1} \right) a_l \right],$   (4)

where the expectation is taken with respect to the demand vector $\mathbf{d}$. Following common practice, we define $\binom{x}{y} \triangleq 0$ when $x < y$ or $y < 0$. The cache placement optimization problem is formulated as

P1: $\min_{\mathbf{a}} \ \bar{R}$
s.t. (1), (2),
$a_l \ge 0, \quad l = 0, \ldots, K,$   (5)
$a_l \le 1, \quad l = 0, \ldots, K,$   (6)

where constraints (5) and (6) are the requirements on the subfile sizes.

IV Optimal Cache Placement for Average Load Minimization

For the uniform popularity among files, the probability of having $n$ distinct requests among the $K$ users, for $n = 1, \ldots, \min\{K, N\}$, is

$\mathbb{P}(\tilde{N} = n) = \binom{N}{n} \frac{n! \, S(K, n)}{N^K},$   (7)

where $S(K, n)$ is the Stirling number of the second kind [16]. Based on this, we can express the expected load in (4) as

$\bar{R} = \sum_{l=0}^{K-1} \left[ \sum_{n=1}^{\min\{K,N\}} \mathbb{P}(\tilde{N} = n) \left( \binom{K}{l+1} - \binom{K-n}{l+1} \right) \right] a_l.$   (8)
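For concreteness, the following sketch (ours; helper names are illustrative) evaluates $\mathbb{P}(\tilde{N} = n)$ in (7) and the coefficient multiplying each $a_l$ in (8):

```python
# Sketch: probability of n distinct requests (7) via Stirling numbers of the
# second kind, and the coefficient of a_l in the expected load (8).
from math import comb, factorial

def stirling2(k: int, n: int) -> int:
    """S(k, n) by the inclusion-exclusion formula."""
    return sum((-1) ** j * comb(n, j) * (n - j) ** k
               for j in range(n + 1)) // factorial(n)

def p_distinct(K: int, N: int) -> dict:
    """P(Ntilde = n), n = 1, ..., min(K, N), under uniform popularity."""
    return {n: comb(N, n) * factorial(n) * stirling2(K, n) / N ** K
            for n in range(1, min(K, N) + 1)}

def load_coeffs(K: int, N: int) -> list:
    """Coefficient of a_l in (8), for l = 0, ..., K - 1."""
    p = p_distinct(K, N)
    return [sum(p[n] * (comb(K, l + 1) - comb(K - n, l + 1)) for n in p)
            for l in range(K)]

print(sum(p_distinct(7, 10).values()))  # ~1.0, sanity check
```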

It is clear that $\bar{R}$ is linear in the $a_l$'s. In addition, all the constraints in P1 are also linear in the $a_l$'s. Thus, P1 is a linear programming problem with respect to $\mathbf{a}$, and we can solve it to obtain the optimal cache placement solution, which is given below.
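Accordingly, P1 can be handed to any LP solver. A minimal sketch (ours, assuming scipy is available and reusing `load_coeffs` from the previous snippet) is:

```python
# Sketch: solve P1 as a linear program; useful as a numerical cross-check
# of the closed-form solution in Theorem 1 below.
import numpy as np
from math import comb
from scipy.optimize import linprog

def optimal_placement_lp(K, N, M):
    c = np.array(load_coeffs(K, N) + [0.0])  # a_K is never transmitted
    A_eq = [[comb(K, l) for l in range(K + 1)]]                       # (1)
    A_ub = [[comb(K - 1, l - 1) if l >= 1 else 0
             for l in range(K + 1)]]                                  # (2)
    res = linprog(c, A_ub=A_ub, b_ub=[M / N], A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * (K + 1))                          # (5), (6)
    return res.x, res.fun

a_star, R_star = optimal_placement_lp(K=7, N=10, M=2)
print(np.round(a_star, 3))  # two non-zero entries, cf. Table I
```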

Theorem 1.

For any number of users $K$ and any cache size $M \in [0, N]$, the optimal cache placement to minimize the expected load in P1 is $\mathbf{a}^* = [a^*_0, \ldots, a^*_K]$, where $\hat{l} \triangleq \lfloor KM/N \rfloor$, and

$a^*_l = \begin{cases} \dfrac{\hat{l} + 1 - KM/N}{\binom{K}{\hat{l}}}, & l = \hat{l}, \\ \dfrac{KM/N - \hat{l}}{\binom{K}{\hat{l}+1}}, & l = \hat{l} + 1, \\ 0, & \text{otherwise}. \end{cases}$   (9)

The minimum expected load is

$\bar{R}^* = \sum_{n=1}^{\min\{K,N\}} \mathbb{P}(\tilde{N} = n) \left[ \left( \binom{K}{\hat{l}+1} - \binom{K-n}{\hat{l}+1} \right) a^*_{\hat{l}} + \left( \binom{K}{\hat{l}+2} - \binom{K-n}{\hat{l}+2} \right) a^*_{\hat{l}+1} \right],$   (10)

where any term with index exceeding $K$ is taken as zero.

We will detail the proof of Theorem 1 in Section IV-A.
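A direct implementation of the closed-form placement (9), with the edge cases of Lemma 2 below, may read as follows (a sketch in our notation):

```python
# Sketch: closed-form optimal placement of Theorem 1.
from math import comb, floor

def placement_closed_form(K: int, N: int, M: float) -> list:
    a = [0.0] * (K + 1)
    if M <= 0:
        a[0] = 1.0          # no cache: the whole file is uncached
    elif M >= N:
        a[K] = 1.0          # full cache: everything cached everywhere
    else:
        t = K * M / N
        lhat = floor(t)
        a[lhat] = (lhat + 1 - t) / comb(K, lhat)
        if t > lhat:        # non-integer t: two non-zero elements
            a[lhat + 1] = (t - lhat) / comb(K, lhat + 1)
    return a

print([round(x, 3) for x in placement_closed_form(K=7, N=10, M=2)])
# [0, 0.086, 0.019, 0, ...]: matches the LP solution and Table I
```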

Remark: Note that the minimization of the expected rate has also been considered recently in [15] using a different approach, where the authors show that the placements proposed in [14] for cache sizes at the points $M = tN/K$, $t = 0, 1, \ldots, K$, are optimal. However, the optimal cache placement remains unknown for arbitrary cache sizes between those points. Our result in Theorem 1 provides the complete optimal cache placement solution for the MCCS, regardless of the relationship among $N$, $K$, and $M$. In Section IV-A, we show that the solution in [15] is a special case of subfile partitioning in our proof of Theorem 1.

IV-A Proof of Theorem 1

We first reformulate problem P1, and then solve it using the KKT conditions [17].

IV-A1 Problem Reformulation

Define $\mathbf{c} \triangleq [c_0, \ldots, c_K]^T$, where $c_l \triangleq \sum_{n=1}^{\min\{K,N\}} \mathbb{P}(\tilde{N} = n) \left[ \binom{K}{l+1} - \binom{K-n}{l+1} \right]$ (note that $c_K = 0$), $\mathbf{g} \triangleq [g_0, \ldots, g_K]^T$, where $g_l \triangleq \binom{K}{l}$, and $\mathbf{m} \triangleq [m_0, \ldots, m_K]^T$, where $m_l \triangleq \binom{K-1}{l-1}$. Then, we can rewrite P1 as

P2: $\min_{\mathbf{a}} \ \mathbf{c}^T \mathbf{a}$
s.t. $\mathbf{g}^T \mathbf{a} = 1,$   (11)
$\mathbf{m}^T \mathbf{a} \le M/N,$   (12)
$-a_l \le 0, \quad l = 0, \ldots, K,$   (13)
$a_l - 1 \le 0, \quad l = 0, \ldots, K.$   (14)

To solve P2, define the Lagrange multipliers $\lambda$, $\{\mu_l\}$, and $\{\eta_l\}$ for constraints (12), (13), and (14), respectively, and $\nu$ for constraint (11). Then, we have the KKT conditions as follows:

$c_l + \nu g_l + \lambda m_l - \mu_l + \eta_l = 0, \quad l = 0, \ldots, K,$   (15)
$\lambda \left( \mathbf{m}^T \mathbf{a} - M/N \right) = 0,$   (16)
$\mu_l a_l = 0, \quad l = 0, \ldots, K,$   (17)
$\eta_l (a_l - 1) = 0, \quad l = 0, \ldots, K,$   (18)
$\lambda \ge 0, \quad \mu_l \ge 0, \quad \eta_l \ge 0, \quad l = 0, \ldots, K,$   (19)

together with the primal feasibility conditions (11)-(14).

IV-A2 Optimal File Partitioning Strategy

We now introduce three lemmas (Lemmas 1-3) that help reduce the complexity of finding the solution. The corresponding proofs are omitted due to space limitation.

Lemma 1.

At optimality, inequality (12) is attained with equality, i.e., the cache storage is always fully utilized under the optimal cache placement vector for P1.

Lemma 2.

When $M = 0$, the optimal placement is $\mathbf{a}^* = [1, 0, \ldots, 0]$ with the minimum expected load $\bar{R}^* = \mathbb{E}_{\mathbf{d}}[\tilde{N}(\mathbf{d})]$; when $M = N$, the optimal placement is $\mathbf{a}^* = [0, \ldots, 0, 1]$ with the minimum expected load $\bar{R}^* = 0$.

Lemma 2 describes the two extreme cases of having no cache memory ($M = 0$) and having sufficient cache size to hold all files ($M = N$). In the following, we only need to discuss the case of $0 < M < N$.

By exploring the properties of the KKT conditions (16)-(18), we can show that the multipliers $\{\mu_l\}$ and $\{\eta_l\}$ have feasible solutions only if $\mathbf{a}$ has fewer than three nonzero elements. This is stated in the following lemma.

Lemma 3.

For $0 < M < N$, the optimal cache placement vector $\mathbf{a}^*$ has at most two non-zero elements.

Lemma 3 implies that the number of nonzero elements of an optimal caching vector can only be one or two (it cannot be zero due to constraint (11)). We now derive the solution in these two cases separately.

Case 1) One non-zero element: In this case, $a_t \ne 0$ for some $t \in \{1, \ldots, K\}$, and $a_l = 0$ for $l \ne t$. From (11), we have $\binom{K}{t} a_t = 1$. Thus, $a_t = 1/\binom{K}{t}$. To find the cache size that leads to this solution, note that since there is only one non-zero subfile size, from Lemma 1, we have $\binom{K-1}{t-1} a_t = M/N$. Thus, the relation between the normalized cache size and the index $t$ is given by $M/N = \binom{K-1}{t-1}/\binom{K}{t} = t/K$.

Thus, if the cache size $M$ satisfies $KM/N = t$ for some integer $t \in \{1, \ldots, K\}$, the optimal cache placement is $\mathbf{a}^*$ with $a^*_t = 1/\binom{K}{t}$ and all other elements zero. For a given demand $\mathbf{d}$ with $\tilde{N}(\mathbf{d}) = n$ distinct requests, the corresponding load can be computed based on the redundancy removed in the delivery phase, and we have

$R = \frac{\binom{K}{t+1} - \binom{K-n}{t+1}}{\binom{K}{t}}.$   (20)

The expected load can be obtained as

$\bar{R}^* = \sum_{n=1}^{\min\{K,N\}} \mathbb{P}(\tilde{N} = n) \, \frac{\binom{K}{t+1} - \binom{K-n}{t+1}}{\binom{K}{t}}.$   (21)

Remark: Note that the optimal solution with one non-zero element in $\mathbf{a}^*$ corresponds to equal file partitioning, where all subfiles have equal size. The optimal $\mathbf{a}^*$ obtained above exactly matches the cache placement scheme proposed in [14] for cache sizes at the points $M = tN/K$, $t \in \{1, \ldots, K\}$, which is shown to be optimal in [15] using a different approach than ours. Here we see that it is a special case of our general cache placement optimization problem.

Case 2) Two non-zero elements: In this case, there exist some $i$ and $j$, $i \ne j$, such that $a_i \ne 0$, $a_j \ne 0$, and $a_l = 0$ for $l \ne i, j$. With only two non-zero variables $a_i$ and $a_j$, from Lemma 1, the cache size constraint (12) is attained with equality and can be written as

$\binom{K-1}{i-1} a_i + \binom{K-1}{j-1} a_j = \frac{M}{N}.$   (22)

Also from (11), we have

$\binom{K}{i} a_i + \binom{K}{j} a_j = 1.$   (23)

From (22) and (23), we have $a_i = \frac{j - KM/N}{(j - i)\binom{K}{i}}$ and $a_j = \frac{KM/N - i}{(j - i)\binom{K}{j}}$. Since $a_i$ and $a_j$ are both non-zero (and non-negative), the two solutions only exist when

$\min\{i, j\} < \frac{KM}{N} < \max\{i, j\}.$   (24)

Assume, without loss of generality, $i < j$, so that (24) reads $i < KM/N < j$. Since $i$ and $j$ are integers, (24) implies $i \le \hat{l}$ and $j \ge \hat{l} + 1$, where $\hat{l} \triangleq \lfloor KM/N \rfloor$; that is, $i$ and $j$ should satisfy $i \le \hat{l} < \hat{l} + 1 \le j$.

Lemma 4.

For $i$ and $j$ satisfying $i \le \hat{l} < \hat{l} + 1 \le j$, the expected load $\bar{R}$ is a decreasing function of $i$ and an increasing function of $j$.

From Lemma 4, we conclude that the minimum expected load can only be attained with $i = \hat{l}$ and $j = \hat{l} + 1$; any other choice of $(i, j)$ would result in a larger $\bar{R}$. Following this, for $M$ satisfying $\hat{l} < KM/N < \hat{l} + 1$, we have the optimal $a^*_i$ and $a^*_j$ as

$a^*_{\hat{l}} = \frac{\hat{l} + 1 - KM/N}{\binom{K}{\hat{l}}}, \qquad a^*_{\hat{l}+1} = \frac{KM/N - \hat{l}}{\binom{K}{\hat{l}+1}}.$

The corresponding expected load is obtained by substituting these values into (8). Summarizing the above, we have the following conclusion: for $\hat{l} < KM/N < \hat{l} + 1$, the optimal cache placement is $\mathbf{a}^*$ with the two non-zero elements $a^*_{\hat{l}}$ and $a^*_{\hat{l}+1}$ given above, and all other elements equal to zero.

Remark: The optimal cache placement indicates that each file is split into two parts, of sizes $\binom{K}{\hat{l}} a^*_{\hat{l}} F$ and $\binom{K}{\hat{l}+1} a^*_{\hat{l}+1} F$ bits. Each part is then further partitioned into subfiles of equal size: the first into $\binom{K}{\hat{l}}$ subfiles (one for each user subset in cache subgroup $\mathcal{G}^{\hat{l}}$), and the second into $\binom{K}{\hat{l}+1}$ subfiles (one for each user subset in $\mathcal{G}^{\hat{l}+1}$). User $k$ caches these two types of subfiles for all user subsets containing $k$.

Given any demand $\mathbf{d}$ with $\tilde{N}(\mathbf{d}) = n$ distinct requests, the resulting load depends on the amount of redundancy removed in the delivery phase of the MCCS:

  • For $n \le K - \hat{l} - 2$: There are redundant coded messages for user subsets of both sizes $\hat{l}+1$ and $\hat{l}+2$, and the load can be derived as

    $R = \left[ \binom{K}{\hat{l}+1} - \binom{K-n}{\hat{l}+1} \right] a^*_{\hat{l}} + \left[ \binom{K}{\hat{l}+2} - \binom{K-n}{\hat{l}+2} \right] a^*_{\hat{l}+1}.$   (25)
  • For $n = K - \hat{l} - 1$: Redundant coded messages can only be found for user subsets of size $\hat{l}+1$, and the load is

    $R = \left[ \binom{K}{\hat{l}+1} - \binom{K-n}{\hat{l}+1} \right] a^*_{\hat{l}} + \binom{K}{\hat{l}+2} \, a^*_{\hat{l}+1}.$   (26)
  • For $n \ge K - \hat{l}$: There is no redundant message for any user subset, and the load is

    $R = \binom{K}{\hat{l}+1} \, a^*_{\hat{l}} + \binom{K}{\hat{l}+2} \, a^*_{\hat{l}+1}.$   (27)

The minimum expected load in this case can be computed similarly to (21), and is given by

$\bar{R}^* = \sum_{n=1}^{\min\{K,N\}} \mathbb{P}(\tilde{N} = n) \left\{ \left[ \binom{K}{\hat{l}+1} - \binom{K-n}{\hat{l}+1} \right] a^*_{\hat{l}} + \left[ \binom{K}{\hat{l}+2} - \binom{K-n}{\hat{l}+2} \right] a^*_{\hat{l}+1} \right\}.$   (28)
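As a sanity check, (28) can be validated by Monte Carlo simulation over uniform random demands, counting only the non-redundant coded messages as in Proposition 1 (a sketch reusing `placement_closed_form` from above):

```python
# Sketch: Monte Carlo estimate of the expected load under the optimal
# placement, to be compared with the closed-form value in (28).
import random
from math import comb

def load_for_demand(a, K, n_distinct):
    # a coded message of size a_l goes to each non-redundant (l+1)-subset
    return sum((comb(K, l + 1) - comb(K - n_distinct, l + 1)) * a[l]
               for l in range(K))

def expected_load_mc(a, K, N, trials=100_000, seed=0):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        d = [rng.randrange(N) for _ in range(K)]  # uniform demands
        acc += load_for_demand(a, K, len(set(d)))
    return acc / trials

a = placement_closed_form(K=7, N=10, M=2)
print(expected_load_mc(a, K=7, N=10))  # ~ the value given by (28)
```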

Cases 1 and 2 together give the optimal placement for any $0 < M < N$: Case 1 gives the optimal placement solution when $KM/N$ is an integer $t$, and Case 2 gives the optimal solution when $\hat{l} < KM/N < \hat{l} + 1$. Combining these with the solutions for $M = 0$ and $M = N$ in Lemma 2, we have the results in Theorem 1.

V Numerical Results

We present numerical results to evaluate the performance of the optimized cache placement scheme. Consider a system with $N$ files of equal size and $K$ users, each with the same cache size $M$.

First, we show in Table I the values of the optimal cache placement vector $\mathbf{a}^*$ for different cache sizes $M$, for $K = 7$ and $N = 10$. Besides the two extreme cases of $M = 0$ and $M = N$, where all the files are kept entirely at the server or stored entirely in the local caches, respectively, for $M$ in between we observe that $\mathbf{a}^*$ always has two non-zero elements. It is also interesting to observe that the non-zero elements of $\mathbf{a}^*$ shift to cache subgroups of larger size as $M$ increases, and that the optimal cache placement is symmetric as the cache size moves within the interval $[0, N]$.
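As a cross-check, the rows of Table I can be reproduced from Theorem 1 with a short loop (reusing `placement_closed_form` from the sketch in Section IV):

```python
# Sketch: regenerate Table I (K = 7, N = 10, M = 0, ..., 10).
for M in range(11):
    row = placement_closed_form(K=7, N=10, M=M)
    print(M, [round(x, 3) for x in row])  # a_0, ..., a_7
```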

Fig. 2 shows the trade-off of expected load vs. cache size $M$. We compare the performance of the optimal cache placement scheme for the MCCS obtained in Theorem 1 with the state-of-the-art schemes, including the uncoded caching scheme, the centralized CCS [6, 12], the decentralized CCS [7], and the decentralized MCCS [14]. We see that the optimal placement solution for the MCCS always outperforms all the other schemes regardless of the cache size $M$. We also observe that the expected load monotonically decreases with $M$. Moreover, at all the points where $KM/N$ is an integer, $\mathbf{a}^*$ has one non-zero element (i.e., equal file partitioning), and for $M$ in between, $\mathbf{a}^*$ has two non-zero elements (i.e., two different subfile sizes for two cache subgroups).

For a fixed number of files $N$ and fixed cache size $M$, Fig. 3 shows how the expected load changes as the number of users $K$ increases. The obtained optimal cache placement solution outperforms all the other schemes for all values of $K$. Under the optimal placement for both the CCS and the MCCS, the load increases with $K$, but the rate of increase exhibits a certain pattern: it slows down as $K$ approaches the points where $KM/N$ is an integer $t$, and becomes higher after $K$ passes those points. This suggests that the caching gain grows as $K$ increases toward each such point, with the highest gain achieved at $K = tN/M$, for $t = 1, 2, \ldots$ Note that, for a given normalized cache size $M/N$, $K$ determines the cache subgroup sizes and the number of user subsets for coded multicasting under the optimal cache placement. For $K = tN/M$, the optimal $\mathbf{a}^*$ uses only one cache subgroup (i.e., equal file partitioning), while at the two sides of this point different cache subgroups are used.

M    a_0    a_1    a_2    a_3    a_4    a_5    a_6    a_7
0    1.0    0      0      0      0      0      0      0
1    0.3    0.1    0      0      0      0      0      0
2    0      0.086  0.019  0      0      0      0      0
3    0      0      0.043  0.003  0      0      0      0
4    0      0      0.01   0.023  0      0      0      0
5    0      0      0      0.014  0.014  0      0      0
6    0      0      0      0      0.01   0.023  0      0
7    0      0      0      0      0.003  0.043  0      0
8    0      0      0      0      0      0.019  0.086  0
9    0      0      0      0      0      0      0.1    0.3
10   0      0      0      0      0      0      0      1.0
TABLE I: The optimal cache placement $\mathbf{a}^*$ ($K = 7$, $N = 10$).
Fig. 2: The expected load versus the cache size $M$.
Fig. 3: The expected load versus the number of users $K$.

VI Conclusion

In this paper, we used an optimization approach to formulate the general cache placement design for the MCCS with arbitrary cache size as a cache placement optimization problem, minimizing the expected load for file delivery from the server to the users. By showing that the resulting problem is a linear programming problem, we obtained the optimal solution in closed form. Our result provides a complete optimal cache placement solution that holds for any number of users, number of files, and cache size. Numerical studies showed the characteristics of the optimal cache placement as the cache size increases, and revealed how the expected load grows as the number of users increases.

References

  • [1] Cisco, “Global mobile data traffic forecast update, 2016-2021,” 2017.
  • [2] E. Bastug, M. Bennis, and M. Debbah, “Living on the edge: The role of proactive caching in 5G wireless networks,” IEEE Commun. Mag., vol. 52, no. 8, pp. 82–89, 2014.
  • [3] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. Leung, “Cache in the air: Exploiting content caching and delivery techniques for 5G systems,” IEEE Commun. Mag., vol. 52, no. 2, pp. 131–139, 2014.
  • [4] S. Borst, V. Gupta, and A. Walid, “Distributed caching algorithms for content distribution networks,” in Proc. IEEE Conf. on Computer Communications (INFOCOM), 2010, pp. 1–9.
  • [5] S.-H. Park, O. Simeone, and S. Shamai Shitz, “Joint optimization of cloud and edge processing for fog radio access networks,” IEEE Trans. Wireless Commun., vol. 15, no. 11, pp. 7621–7632, 2016.
  • [6] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, 2014.
  • [7] ——, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Netw., vol. 23, no. 4, pp. 1029–1040, 2015.
  • [8] U. Niesen and M. A. Maddah-Ali, “Coded caching with nonuniform demands,” IEEE Trans. Inf. Theory, vol. 63, no. 2, pp. 1146–1158, 2017.
  • [9] A. Sengupta, R. Tandon, and O. Simeone, “Fog-aided wireless networks for content delivery: Fundamental latency tradeoffs,” IEEE Trans. Inf. Theory, vol. 63, no. 10, pp. 6650–6678, 2017.
  • [10] M. A. Maddah-Ali and U. Niesen, “Cache-aided interference channels,” in Proc. IEEE Int. Symp. on Infor. Theory (ISIT), 2015, pp. 809–813.
  • [11] F. Xu, M. Tao, and K. Liu, “Fundamental tradeoff between storage and latency in cache-aided wireless interference networks,” IEEE Trans. Inf. Theory, vol. 63, no. 11, pp. 7464–7491, 2017.
  • [12] A. M. Daniel and W. Yu, “Optimization of heterogeneous coded caching,” arXiv preprint arXiv:1708.04322, 2017.
  • [13] S. Jin, Y. Cui, H. Liu, and G. Caire, “Structural properties of uncoded placement optimization for coded delivery,” arXiv preprint arXiv:1707.07146, 2017.
  • [14] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” IEEE Trans. Inf. Theory, vol. 64, no. 2, pp. 1281–1296, 2018.
  • [15] S. Jin, Y. Cui, H. Liu, and G. Caire, “Uncoded placement optimization for coded delivery,” arXiv preprint arXiv:1709.06462, 2018.
  • [16] J. Riordan, Introduction to Combinatorial Analysis.   Courier Corporation, 2012.
  • [17] S. Boyd and L. Vandenberghe, Convex Optimization.   Cambridge University Press, March 2004.