Centralized Caching with Unequal Cache Sizes

Abstract

We address the centralized caching problem with unequal cache sizes. We consider a system with a server of files connected through a shared error-free link to a group of cache-enabled users, where one subgroup has a larger cache size than the rest. We investigate caching schemes with uncoded cache placement which minimize the load of worst-case demands over the shared link. We propose a caching scheme which improves upon existing schemes by either achieving a lower worst-case load, or decreasing the complexity of the scheme while performing within a multiplicative factor of 1.1 of the best-performing existing scheme, as suggested by our numerical simulations.

Index Terms: Centralized Caching, Unequal Cache Sizes

1 Introduction

Content traffic, which is the dominant form of traffic in data communication networks, is not uniformly distributed over the day. This makes caching an integral part of data networks as a means of tackling the non-uniformity of traffic. Caching schemes consist of two phases. In the first phase, called the placement phase, content is partly placed in caches close to users. This phase takes place during off-peak hours when the requests of users are still unknown. In the second phase, called the delivery phase, each user requests a file while having access to a cache of pre-fetched content. This phase takes place during peak hours when we need to minimize the load over the network.

The information-theoretic study of a network of caches was initiated by Maddah-Ali and Niesen [1]. They considered a centralized multicast set-up where there is a server of files connected via a shared error-free link to a group of users, each equipped with a dedicated cache of equal size. In this work, they introduced a new caching gain called the global caching gain. This gain is achieved by designing the placement phase so as to provide coding opportunities over the shared link in the delivery phase. It comes in addition to the local caching gain, which was traditionally known as the caching gain and results from the fact that each user has access to part of its requested file.

This information-theoretic study has since been extended to address other variations of the problem which arise in practice, such as decentralized caching [2], where the identity or the number of users is not known in the placement phase; caching with non-uniform file popularity [3], where some of the files in the server are more popular than others; and hierarchical coded caching [4], where there are multiple layers of caches. Also, while most existing works consider uncoded cache placement, where the cache of each user is populated by directly placing parts of the server files, it has been shown for some special cases that coded cache placement can outperform uncoded placement [1, 5, 6, 7].

Figure 1: System model with a server storing $N$ files of size $F$ bits connected through a shared error-free link to $K$ users. User $k$ is equipped with a cache of size $M_kF$ bits where $M_k = M + M'$, $k \in \{1, \dots, K_1\}$, and $M_k = M$, $k \in \{K_1+1, \dots, K\}$, for some $M' > 0$.

1.1 Existing works and Contributions

In this work, we address caching problems where there is a server connected through a shared error-free link to a group of users with caches of possibly different sizes. The objective is to minimize the load of worst-case demands over the shared link. In decentralized caching with unequal cache sizes, the placement phase is the same as in the equal-cache case, where a random part of each file is assigned to the cache of each user; the main challenge is to exploit all the coding opportunities in the delivery phase [8, 9].

However, in centralized caching with unequal cache sizes, the challenge also involves designing the placement phase. For the two-user case, Cao et al. [10] proposed a caching scheme which is optimal and shows that coded cache placement outperforms uncoded placement. For a system with an arbitrary number of users, Saeedi Bidokhti et al. [11] proposed a scheme constructed based on memory sharing and the scheme for centralized caching with equal cache sizes [1]. Also, Ibrahim et al. [12] formulated this problem as a linear optimisation problem in which the number of parameters grows exponentially with the number of users. Both works [11] and [12] utilised uncoded cache placement. For systems with more than two users, the scheme by Saeedi Bidokhti et al. [11] is simple but sacrifices performance, while the optimisation problem by Ibrahim et al. [12] becomes computationally demanding as the network grows. Motivated by this gap, we revisit centralized caching with unequal cache sizes, also under uncoded cache placement.

We propose a caching scheme for centralized caching with unequal cache sizes where there are two subgroups of users, one with a larger cache size than the other. Our caching scheme outperforms the caching scheme proposed by Saeedi Bidokhti et al. [11]. Also, in comparison to the work by Ibrahim et al. [12], our scheme is explicit and therefore avoids the complexity of solving an optimisation problem in which the number of parameters grows exponentially with the number of users, while performing within a multiplicative factor of 1.1 of that scheme, as suggested by our numerical simulations.

2 System Model

We consider centralized caching where there is a server storing $N$ independent files $W_1, W_2, \dots, W_N$, connected through a shared error-free link to $K$ cache-enabled users, as shown in Fig. 1. We assume that the number of files in the server is at least as large as the number of users, i.e., $N \geq K$. Each file in the server is of size $F$ bits ($F \in \mathbb{N}$, where $\mathbb{N}$ is the set of natural numbers), and is uniformly distributed over the set $\{1, 2, \dots, 2^F\}$. User $k$, $k \in \{1, 2, \dots, K\}$, is equipped with a cache of size $M_kF$ bits where $0 \leq M_k \leq N$. We represent all the cache sizes by the vector $\mathbf{M} = (M_1, M_2, \dots, M_K)$. In this work, we assume that there are two subgroups of users, one with a larger cache size than the other, i.e.,

$$M_k = \begin{cases} M + M', & k \in \{1, \dots, K_1\}, \\ M, & k \in \{K_1 + 1, \dots, K\}, \end{cases}$$

for some $M' > 0$. Each user requests a file from the server. The file requested by user $k$ is denoted by $W_{d_k}$. We then represent the request of all the users by the demand vector $\mathbf{d} = (d_1, d_2, \dots, d_K)$ where $d_k \in \{1, \dots, N\}$.

As mentioned earlier, each caching scheme consists of two phases, the placement phase and the delivery phase. The placement phase consists of $K$ caching functions

$$\phi_k, \quad k \in \{1, \dots, K\},$$

where

$$\phi_k : \{1, \dots, 2^F\}^N \to \{1, \dots, 2^{\lfloor M_kF \rfloor}\},$$

i.e.,

$$Z_k = \phi_k(W_1, W_2, \dots, W_N)$$

is the cache content of user $k$.

The delivery phase consists of encoding functions

$$\psi_{\mathbf{d}}, \quad \mathbf{d} \in \{1, \dots, N\}^K,$$

where

$$\psi_{\mathbf{d}} : \{1, \dots, 2^F\}^N \to \{1, \dots, 2^{\lfloor RF \rfloor}\},$$

i.e.,

$$X_{\mathbf{d}} = \psi_{\mathbf{d}}(W_1, W_2, \dots, W_N)$$

is the signal transmitted over the shared link when the demand vector is $\mathbf{d}$. We refer to $RF$ as the load of the transmission and $R$ as the rate of the transmission over the shared link.

The delivery phase also consists of decoding functions

$$\mu_{\mathbf{d},k}, \quad \mathbf{d} \in \{1, \dots, N\}^K,\ k \in \{1, \dots, K\},$$

i.e.,

$$\hat{W}_{\mathbf{d},k} = \mu_{\mathbf{d},k}(X_{\mathbf{d}}, Z_k),$$

where $\hat{W}_{\mathbf{d},k}$ is the decoded version of $W_{d_k}$ at user $k$ when the demand vector is $\mathbf{d}$.

The probability of error for the caching scheme is defined as

$$P_e = \max_{\mathbf{d}} \max_{k \in \{1, \dots, K\}} P\big(\hat{W}_{\mathbf{d},k} \neq W_{d_k}\big).$$

Definition 1

For a given $\mathbf{M}$, we say that the rate $R$ is achievable if, for every $\varepsilon > 0$ and large enough $F$, there exists a caching scheme with rate $R$ such that its probability of error is less than $\varepsilon$. For a given $\mathbf{M}$, we also define $R^*(\mathbf{M})$ as the infimum of all achievable rates.

3 Background

In this section, we first consider centralized caching with equal cache sizes, i.e., $M_1 = M_2 = \cdots = M_K = M$, and present the optimum caching scheme among those with uncoded placement [13]. We then present existing works on centralized caching with unequal cache sizes where there are more than two users [11, 12].

3.1 Equal Cache Sizes

Here, we present the optimum caching scheme for centralized caching with equal cache sizes when the cache placement is uncoded and $N \geq K$ [1]. In this scheme, a parameter denoted by $t$ is defined at the beginning as

$$t = \frac{KM}{N}.$$

First, assume that $t$ is an integer. As $0 \leq M \leq N$, we have $t \in \{0, 1, \dots, K\}$. In the placement phase, $W_n$, $n \in \{1, \dots, N\}$, is divided into $\binom{K}{t}$ non-overlapping parts denoted by $W_{n,S}$ where $S \subseteq \{1, \dots, K\}$ and $|S| = t$ ($|S|$ denotes the cardinality of the set $S$). $W_{n,S}$ is then placed in the cache of user $k$ if $k \in S$. This means that the size of each part is $F/\binom{K}{t}$ bits, and we place $\binom{K-1}{t-1}$ parts from each file in the cache of user $k$. Therefore, we satisfy the cache size constraint as we have

$$N\binom{K-1}{t-1}\frac{F}{\binom{K}{t}} = \frac{NtF}{K} = MF.$$

In the delivery phase, the server transmits

$$\bigoplus_{k \in S} W_{d_k, S \setminus \{k\}}$$

for every $S \subseteq \{1, \dots, K\}$ where $|S| = t + 1$. This results in the transmission rate of

$$R_{\mathrm{eq}}(M) = \frac{\binom{K}{t+1}}{\binom{K}{t}} = \frac{K - t}{t + 1}.$$

Maddah-Ali and Niesen [1] proved that this delivery scheme satisfies the demands of all the users.
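To make the subfile bookkeeping concrete, the following Python sketch enumerates the placement and the coded delivery messages for integer $t$. The function names and the representation of subfiles as (file, subset) index pairs are our own illustrative choices, not part of the scheme's original description.

```python
from itertools import combinations
from math import comb

def mn_placement(K, N, t):
    """Place subfile W_{n,S} (|S| = t) in the cache of every user in S."""
    users = range(1, K + 1)
    subsets = list(combinations(users, t))
    # cache[k] holds the (n, S) index pairs of the subfiles stored at user k
    return {k: [(n, S) for n in range(1, N + 1) for S in subsets if k in S]
            for k in users}

def mn_delivery(K, t, d):
    """For each (t+1)-subset S of users, XOR the subfiles W_{d_k, S\\{k}},
    k in S.  Each user in S caches every term except its own, so it can
    cancel the others and decode the missing part of its request."""
    msgs = []
    for S in combinations(range(1, K + 1), t + 1):
        msgs.append([(d[k], tuple(u for u in S if u != k)) for k in S])
    return msgs

K, N, t = 4, 4, 2                        # t = KM/N with M = 2, N = K = 4
cache = mn_placement(K, N, t)
msgs = mn_delivery(K, t, {1: 1, 2: 2, 3: 3, 4: 4})
print(len(cache[1]))                     # 12 subfiles of size F/6 -> 2F = MF
print(len(msgs) / comb(K, t))            # rate (K-t)/(t+1) = 4/6
```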

Now, assume that $t$ is not an integer. In this case, memory sharing is utilized where $t_1$ is defined as

$$t_1 = \left\lfloor \frac{KM}{N} \right\rfloor,$$

and $\alpha \in (0, 1]$ is computed using the following equation

$$M = \alpha\frac{Nt_1}{K} + (1 - \alpha)\frac{Nt_2}{K},$$

where $t_2 = t_1 + 1$. Based on the $\alpha$, the caching problem is divided into two independent problems. In the first one, the cache size is $\alpha\frac{Nt_1}{K}F$ bits, and we cache the first $\alpha F$ bits of the files, denoted by $W_n^{(1)}$, $n \in \{1, \dots, N\}$. In the delivery phase, the server transmits

$$X_S^{(1)} = \bigoplus_{k \in S} W^{(1)}_{d_k, S \setminus \{k\}} \qquad (1)$$

for every $S \subseteq \{1, \dots, K\}$ where $|S| = t_1 + 1$.

In the second one, the cache size is $(1 - \alpha)\frac{Nt_2}{K}F$ bits, and we cache the last $(1 - \alpha)F$ bits of the files, denoted by $W_n^{(2)}$, $n \in \{1, \dots, N\}$. In the delivery phase, the server transmits

$$X_S^{(2)} = \bigoplus_{k \in S} W^{(2)}_{d_k, S \setminus \{k\}} \qquad (2)$$

for every $S \subseteq \{1, \dots, K\}$ where $|S| = t_2 + 1$.

Consequently, the rate

$$R_{\mathrm{eq}}(M) = \alpha\frac{K - t_1}{t_1 + 1} + (1 - \alpha)\frac{K - t_2}{t_2 + 1} \qquad (3)$$

is achieved, where the second term is considered to be zero if $t_2 = K$.
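The rate in (3) is simple to evaluate numerically. The helper below is a minimal sketch of this memory-sharing computation; the function name `r_eq` is ours, and $\alpha$ is obtained as $\alpha = t_2 - t$, which solves the defining equation above.

```python
from math import floor

def r_eq(K, M, N):
    """Equal-cache rate (3): memory sharing between t1 = floor(KM/N)
    and t2 = t1 + 1 (no sharing needed when t is an integer)."""
    if K == 0 or M >= N:
        return 0.0
    t = K * M / N
    t1 = floor(t)
    if t1 == t:
        return (K - t1) / (t1 + 1)
    t2 = t1 + 1
    alpha = t2 - t                        # solves M = a*N*t1/K + (1-a)*N*t2/K
    r2 = (K - t2) / (t2 + 1) if t2 < K else 0.0
    return alpha * (K - t1) / (t1 + 1) + (1 - alpha) * r2

print(r_eq(K=4, M=2.0, N=4))              # t = 2 (integer): 2/3
print(r_eq(K=4, M=1.5, N=4))              # t = 1.5: 0.5*1.5 + 0.5*(2/3) = 1.0833...
```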

Figure 2: An existing scheme for centralized caching with unequal cache sizes

3.2 Unequal Cache Sizes

Here, we present existing works on centralized caching with unequal cache sizes where there are more than two users [11, 12].

Scheme 1 [11]

In this scheme, assuming without loss of generality that $M_1 \geq M_2 \geq \cdots \geq M_K$, the problem is divided into $K$ caching problems. In problem $i$, $i \in \{1, \dots, K-1\}$, there are two groups of users: the first group is composed of users 1 to $i$, all with equal cache size of $(M_i - M_{i+1})F$ bits; the second group is composed of users $i+1$ to $K$, all without cache. In problem $K$, $M_{K+1}$ is considered as zero, and there is only one group consisting of all $K$ users with equal cache size of $M_KF$ bits. In problem $i$, we only consider $\gamma_iF$ bits of the files where $\sum_{i=1}^{K}\gamma_i = 1$. This scheme is schematically shown in Fig. 2 for the three-user case. Based on the equal-cache results, the transmission rate for caching problem $i$ is

$$R_i = \gamma_i\, R_{\mathrm{eq}}^{\,i}\!\left(\frac{M_i - M_{i+1}}{\gamma_i}\right) + (K - i)\,\gamma_i, \qquad (4)$$

where $R_{\mathrm{eq}}^{\,i}(\cdot)$ denotes the equal-cache rate in (3) evaluated for a system with $i$ users.

The first term on the right-hand side of (4) corresponds to the transmission rate for the first group of users, and the second term corresponds to the transmission rate for the second group of users, which are without cache.

Therefore, by optimising the sum rate over the parameters $\gamma_1, \dots, \gamma_K$, we achieve the following transmission rate

$$R_1 = \min_{\substack{\gamma_1, \dots, \gamma_K \geq 0 \\ \sum_{i=1}^{K}\gamma_i = 1}} \sum_{i=1}^{K} R_i. \qquad (5)$$
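As a rough illustration of (4) and (5), the sketch below minimises the sum rate by brute-force grid search over the file split $\gamma_1, \dots, \gamma_K$. It reuses `r_eq` from the sketch in Section 3.1; the objective is our reconstruction of the layered scheme, not the authors' own code.

```python
from itertools import product

def scheme1_rate(M_sorted, N, steps=50):
    """Grid-search the split (gamma_1, ..., gamma_K) in (5).
    M_sorted must be non-increasing; M_{K+1} is taken as zero.
    Requires r_eq from the Section 3.1 sketch."""
    K = len(M_sorted)
    M_ext = list(M_sorted) + [0.0]
    grid = [i / steps for i in range(steps + 1)]
    best = float("inf")
    for g in product(grid, repeat=K - 1):
        gammas = list(g) + [1.0 - sum(g)]     # last split takes the remainder
        if gammas[-1] < 0:
            continue
        rate = 0.0
        for i, gi in enumerate(gammas, start=1):
            if gi == 0.0:
                continue
            # problem i: i equal-cache users with cache (M_i - M_{i+1})/gi
            # over a gi-fraction of the library, plus K - i cache-less
            # users served at rate (K - i) * gi
            rate += gi * r_eq(i, (M_ext[i - 1] - M_ext[i]) / gi, N)
            rate += (K - i) * gi
        best = min(best, rate)
    return best

print(scheme1_rate([2.0, 1.0, 1.0], N=3))     # three-user toy system
```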

Scheme 2 [12]

In this scheme, the problem of centralized caching with unequal cache sizes is formulated as an optimisation problem where it is assumed that the cache placement is uncoded, and the delivery phase uses linear coding. To characterize all possible uncoded placement policies, the parameter $x_{n,S}$, $S \subseteq \{1, \dots, K\}$, is defined, where $x_{n,S}$ represents the length of subfile $W_{n,S}$ as the fraction of $W_n$ stored in the cache of the users in $S$. Hence, these parameters must satisfy

$$\sum_{S \subseteq \{1, \dots, K\}} x_{n,S} = 1, \quad n \in \{1, \dots, N\},$$

and

$$\sum_{n=1}^{N} \sum_{S : k \in S} x_{n,S} \leq M_k, \quad k \in \{1, \dots, K\}.$$

In the delivery phase, the server transmits

$$X_T = \bigoplus_{k \in T} Y_{T,k}$$

to the users in $T$, where $T$ is a non-empty subset of $\{1, \dots, K\}$ and the $Y_{T,k}$ are zero-padded to a common length. $Y_{T,k}$, which is a part of $X_T$, needs to be decoded at user $k$, and cancelled by all the users in $T \setminus \{k\}$. Therefore, $Y_{T,k}$ is constructed from subfiles $W_{d_k,S}$ where $T \setminus \{k\} \subseteq S$ and $k \notin S$. To characterize all the possible linear delivery policies, two sets of parameters are defined: (i) $v_{T,k}$, which represents the length of $Y_{T,k}$; consequently, the length of $X_T$ is $\max_{k \in T} v_{T,k}$. (ii) $u_{T,k}^{S}$, which is the length of $Y_{T,k}^{S}$, the fraction of $W_{d_k,S}$ used in the construction of $Y_{T,k}$. These parameters need to satisfy some conditions which can be found in the work of Ibrahim et al. [12, equations (25)–(30)]. By considering $\{x_{n,S}, v_{T,k}, u_{T,k}^{S}\}$ as all the optimisation parameters, and $\mathcal{C}$ as all the conditions that need to be met in both the placement and delivery phases, we achieve the following transmission rate

$$R_2 = \min_{\{x,\, v,\, u\} \in \mathcal{C}} \sum_{\emptyset \neq T \subseteq \{1, \dots, K\}} \max_{k \in T} v_{T,k}. \qquad (6)$$
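The practical obstacle with this formulation is its size: the placement alone introduces one variable $x_{n,S}$ per file and per subset of users, i.e., $N \cdot 2^K$ variables, before counting the delivery-side parameters $v$ and $u$. A short count (with the illustrative choice $N = K$) makes this exponential growth concrete:

```python
# Back-of-envelope count of the placement variables x_{n,S} alone;
# the delivery-side variables in [12, (25)-(30)] only add to this.
for K in (2, 3, 4, 8, 16):
    N = K                                  # illustrative choice with N >= K
    print(f"K = {K:2d}: {N * 2**K:>9d} placement variables")
```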

4 Proposed Caching Scheme

In this section, we first provide some insights into our proposed scheme using an example. We then propose a scheme for a system with two subgroups of users, one with a larger cache size than the other, i.e.,

$$M_k = \begin{cases} M + M', & k \in \{1, \dots, K_1\}, \\ M, & k \in \{K_1 + 1, \dots, K\}, \end{cases}$$

for some $M' > 0$.

4.1 An Example

In our example, as shown in Fig. 3, we consider the case where the number of files in the server is four, denoted for simplicity by $A$, $B$, $C$, and $D$, and the number of users is also four. The first three users have a cache of size $\frac{8}{3}F$ bits, and the fourth one has a cache of size $2F$ bits. First, we ignore the extra $\frac{2}{3}F$ bits of cache available at the first three users, and use the equal-cache scheme with $M = 2$. This divides each file into six parts, and places $W_{n,S}$, $|S| = 2$, in the cache of user $k$ if $k \in S$. Therefore, assuming without loss of generality that users 1, 2, 3 and 4 request $A$, $B$, $C$, and $D$ respectively, the server needs to transmit $A_{\{2,3\}} \oplus B_{\{1,3\}} \oplus C_{\{1,2\}}$, $A_{\{2,4\}} \oplus B_{\{1,4\}} \oplus D_{\{1,2\}}$, $A_{\{3,4\}} \oplus C_{\{1,4\}} \oplus D_{\{1,3\}}$, and $B_{\{3,4\}} \oplus C_{\{2,4\}} \oplus D_{\{2,3\}}$, and we achieve the rate of $\frac{4}{6} = \frac{2}{3}$ by ignoring the extra cache available at the first three users. Now, we try to utilize the extra cache available at users 1, 2, and 3. To do this, we put $A_{\{2,3\}}, B_{\{2,3\}}, C_{\{2,3\}}, D_{\{2,3\}}$ in the extra cache of user 1, $A_{\{1,3\}}, B_{\{1,3\}}, C_{\{1,3\}}, D_{\{1,3\}}$ in the extra cache of user 2, and $A_{\{1,2\}}, B_{\{1,2\}}, C_{\{1,2\}}, D_{\{1,2\}}$ in the extra cache of user 3. This removes $A_{\{2,3\}} \oplus B_{\{1,3\}} \oplus C_{\{1,2\}}$ from the transmission of the equal-cache scheme, and we achieve the rate of $\frac{3}{6} = \frac{1}{2}$.

Let us also consider the cases where the extra cache available at each of the first three users is less or more than $\frac{2}{3}F$ bits. First, let us assume it is less than $\frac{2}{3}F$ bits, say $\beta F$ bits for some $0 < \beta < \frac{2}{3}$. In this case, we can remove a $\frac{3\beta}{2}$ portion of $A_{\{2,3\}} \oplus B_{\{1,3\}} \oplus C_{\{1,2\}}$ from the transmission of the equal-cache scheme, and we achieve the rate of $\frac{2}{3} - \frac{\beta}{4}$. Now, let us assume that the extra cache available at the first three users is more than $\frac{2}{3}F$ bits. This additional extra cache cannot decrease the transmission rate in this example. This is because, for the case where the first three users can store all the files, we also achieve the optimum rate of $\frac{1}{2}$ by putting all the four files in the caches of the first three users, and half of each file in the cache of the last user.
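The rates in this example can be checked with a few lines of arithmetic. The sketch below follows our reading of the example: $\beta F$ denotes the extra cache at each of the first three users, and the saving comes from removing (a portion of) the single coded message aimed only at users 1, 2, and 3.

```python
from math import comb

K, N, t = 4, 4, 2
r_equal = comb(K, t + 1) / comb(K, t)      # 4/6 = 2/3 with extra caches unused

def rate_with_extra(beta):
    """Extra cache of beta*F bits per larger-cache user, 0 <= beta <= 2/3.
    User j stores a (3*beta/2)-fraction of each of the four subfiles
    indexed by {1,2,3}\\{j}, removing that fraction of the message for
    S = {1,2,3} from the transmission."""
    removed = min(3 * beta / 2, 1.0)       # fraction of that message removed
    return r_equal - removed / comb(K, t)

print(rate_with_extra(0.0))                # 2/3
print(rate_with_extra(1 / 3))              # 2/3 - 1/12 = 0.5833...
print(rate_with_extra(2 / 3))              # 1/2, matching the example
```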

Figure 3: An example for our proposed scheme

4.2 Scheme with Two Levels of Caches

In this subsection, we explain our proposed scheme for the system where the first $K_1$ users have a cache of size $(M + M')F$ bits, and the last $K_2 = K - K_1$ users have a cache of size $MF$ bits, for some $M' > 0$.

4.2.1 On the Equal-Cache Scheme

We first address an equal-cache problem which is used later in our proposed scheme for the unequal-cache problem. Suppose that we initially have a system with $N$ files and $K$ users, each having a cache of size $MF$ bits. We use the equal-cache scheme described in Section 3.1 to fill the caches. We later increase the cache size of each user by $M'F$ bits for some $M' > 0$. The difficulty is that we are not allowed to change the content of the first $MF$ bits that we have already filled, yet we want to arrive at the equal-cache placement described in Section 3.1 for the new system with $N$ files and $K$ users, each having a cache of size $(M + M')F$ bits.

We present our solution when $t = \frac{KM}{N}$ and $t' = \frac{K(M + M')}{N}$ are integers. The solution can be easily extended to arbitrary $M$ and $M'$. In the cache placement for the system with the parameters $(N, K, M)$, we divide $W_n$, $n \in \{1, \dots, N\}$, into $\binom{K}{t}$ subfiles denoted by $W_{n,S}$, and place the ones with $k \in S$ in the cache of user $k$. This means that we put $\binom{K-1}{t-1}$ subfiles of each file in the cache of each user. After increasing the cache of each user to $(M + M')F$ bits, we further divide each subfile $W_{n,S}$ into $\binom{K-t}{t'-t}$ parts denoted by $W_{n,S,T}$, $T \subseteq \{1, \dots, K\} \setminus S$, $|T| = t' - t$, and place $W_{n,S,T}$ in the cache of user $k$ if $k \in T$. This adds $W_{n,S,T}$ with $k \notin S$, $k \in T$, to the cache of user $k$ while keeping the existing content of the first $MF$ bits of user $k$'s cache, i.e., $W_{n,S,T}$ with $k \in S$. This means that we add

$$N\binom{K-1}{t}\binom{K-t-1}{t'-t-1}\frac{F}{\binom{K}{t}\binom{K-t}{t'-t}} = \frac{N(t'-t)F}{K} = M'F$$

to the cache of each user, which satisfies the cache size constraint. Our cache placement for the system with the parameters $(N, K, M + M')$ becomes the same as the one described in Section 3.1 by merging all the parts $W_{n,S,T}$ which have the same $S \cup T$ as a single subfile $W_{n,S'}$, where $S' = S \cup T$ with $|S'| = t'$.
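A small sketch may help visualise this two-step placement; representing the parts by their $(S, T)$ index pairs is our own choice. Merging the parts that share the same $S \cup T$ recovers the standard placement with parameter $t'$:

```python
from itertools import combinations
from collections import Counter

def refine(K, t, t_prime):
    """Split each subfile W_{n,S} (|S| = t) into parts W_{n,S,T} indexed by
    the (t'-t)-subsets T of the remaining users; part (S, T) is cached by
    every user in S (already there) and every user in T (newly added)."""
    users = range(1, K + 1)
    return {(S, T): frozenset(S) | frozenset(T)
            for S in combinations(users, t)
            for T in combinations([u for u in users if u not in S],
                                  t_prime - t)}

parts = refine(K=4, t=1, t_prime=2)
groups = Counter(parts.values())           # group the parts by S U T
print(len(parts), len(groups))             # 12 parts merge into 6 subfiles
print(set(groups.values()))                # each t'-subset appears C(t',t) times
```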

4.2.2 Proposed Scheme

We here present our proposed scheme for the system where $N \geq K$, $M_k = M + M'$, $k \in \{1, \dots, K_1\}$, and $M_k = M$, $k \in \{K_1 + 1, \dots, K\}$, for some $M' > 0$.

Our placement phase is composed of two stages. In the first stage, we ignore the extra cache available at the first $K_1$ users, and use the equal-cache placement for the system with the parameters $(N, K, M)$. Hence, at the end of this stage, we can achieve the rate in (3) by transmitting $X_S^{(1)}$, defined in (1), for any $S$ where $|S| = t_1 + 1$, and $X_S^{(2)}$, defined in (2), for any $S$ where $|S| = t_2 + 1$.

In the second stage of our placement phase, we fill the extra cache available at the first $K_1$ users by looking at what is going to be transmitted when ignoring these extra caches. To do so, we try to reduce the load of the transmissions which are intended only for the users with a larger cache size, i.e., $X_S^{(1)}$ for any $S \subseteq \{1, \dots, K_1\}$ ($|S| = t_1 + 1$), and $X_S^{(2)}$ for any $S \subseteq \{1, \dots, K_1\}$ ($|S| = t_2 + 1$). These transmissions are constructed from the subfiles $W^{(1)}_{n,S}$, $S \subseteq \{1, \dots, K_1\}$, $|S| = t_1$, and $W^{(2)}_{n,S}$, $S \subseteq \{1, \dots, K_1\}$, $|S| = t_2$. These subfiles occupy

$$N\left[\binom{K_1-1}{t_1-1}\frac{\alpha}{\binom{K}{t_1}} + \binom{K_1-1}{t_2-1}\frac{1-\alpha}{\binom{K}{t_2}}\right]F \qquad (7)$$

bits of each user's cache, and the sum-length of these subfiles for any $n$ is

$$\hat{F} = \left[\binom{K_1}{t_1}\frac{\alpha}{\binom{K}{t_1}} + \binom{K_1}{t_2}\frac{1-\alpha}{\binom{K}{t_2}}\right]F.$$

Considering our aim in designing the second stage of our placement phase, we again use the equal-cache placement for the subfiles $W^{(1)}_{n,S}$, $|S| = t_1$, and $W^{(2)}_{n,S}$, $|S| = t_2$, with $S \subseteq \{1, \dots, K_1\}$, while considering the extra cache available at the first $K_1$ users. This means that we use the equal-cache scheme for a system with $N$ files of size $\hat{F}$ bits, and $K_1$ users each having a cache of size $\hat{M}\hat{F}$ bits where

$$\hat{M}\hat{F} = N\left[\binom{K_1-1}{t_1-1}\frac{\alpha}{\binom{K}{t_1}} + \binom{K_1-1}{t_2-1}\frac{1-\alpha}{\binom{K}{t_2}}\right]F + M'F. \qquad (8)$$

Note that we are not allowed to change what we have already placed in the cache of the first $K_1$ users in the first stage. Otherwise, we cannot assume that, from the delivery phase when ignoring the extra caches, the transmissions $X_S^{(1)}$ where $S \not\subseteq \{1, \dots, K_1\}$, $|S| = t_1 + 1$, and $X_S^{(2)}$ where $S \not\subseteq \{1, \dots, K_1\}$, $|S| = t_2 + 1$, can still be decoded by their target users. Therefore, we employ our proposed solution in Section 4.2.1 for using the equal-cache scheme for the second time.

Two scenarios can happen in the second stage.

Scenario 1, where $\hat{M} \leq N$: In this scenario, we achieve the rate

$$R = R_{\mathrm{eq}}(M) - R' + \hat{R}\frac{\hat{F}}{F},$$

where

$$R' = \frac{\alpha\binom{K_1}{t_1+1}}{\binom{K}{t_1}} + \frac{(1-\alpha)\binom{K_1}{t_2+1}}{\binom{K}{t_2}}$$

is the load (normalized by $F$) of the transmissions intended only for the users with a larger cache size if we ignore their extra caches (or equivalently, if we just utilize the first stage of our placement phase). $\hat{R}\hat{F}/F$, where $\hat{R}$ is the equal-cache rate (3) for the system with $N$ files, $K_1$ users, and cache size $\hat{M}$, is the new load of the transmissions intended only for the users with a larger cache size at the end of the second stage.

Scenario 2, where $\hat{M} > N$: In this scenario, we also use memory sharing between the case with $M' = M'_{\max}$, where $M'_{\max}$ is the extra cache size for which (8) gives $\hat{M} = N$, and the case with $M' = N - M$. In the system with $M' = M'_{\max}$, according to (8), we have $\hat{M} = N$, and we achieve the rate $R_{\max}$ of Scenario 1 evaluated at $\hat{M} = N$. In the system with $M' = N - M$, we can simply remove the first $K_1$ users as they can cache all the files in the server, and we achieve the equal-cache rate of (3) for the remaining system with $K_2$ users, denoted $R_{\mathrm{eq}}^{K_2}(M)$. Therefore, in this scenario, we achieve the rate

$$R = \lambda R_{\max} + (1 - \lambda)R_{\mathrm{eq}}^{K_2}(M),$$

where $\lambda$ takes a value between zero and one, and is calculated using $M' = \lambda M'_{\max} + (1 - \lambda)(N - M)$.
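Putting the pieces together, the sketch below evaluates the proposed scheme's rate in the simplified case where $t = KM/N$ is an integer (so $\alpha = 1$ and the second memory-sharing layer disappears). It reuses `r_eq` from Section 3.1; the expressions for $\hat{F}$, $\hat{M}$, and the removed load follow our reconstruction of (7), (8), and Scenario 1, and the full memory sharing of Scenario 2 is omitted for brevity.

```python
from math import comb

def proposed_rate(K, K1, M, M_extra, N):
    """Rate of the proposed scheme for integer t = K*M/N, with extra cache
    M_extra*F at each of the first K1 users.  Requires r_eq (Section 3.1)."""
    t = K * M // N                        # assumed integer here
    r_stage1 = (K - t) / (t + 1)          # rate (3) after the first stage
    if t + 1 > K1:                        # no message targets only users 1..K1
        return r_stage1
    f_hat = comb(K1, t) / comb(K, t)      # sum-length of relevant subfiles / F
    occupied = N * comb(K1 - 1, t - 1) / comb(K, t) if t > 0 else 0.0
    m_hat = (occupied + M_extra) / f_hat  # cache in units of files of size f_hat*F
    r_old = comb(K1, t + 1) / comb(K, t)  # load of messages for users 1..K1 only
    if m_hat <= N:                        # Scenario 1
        return r_stage1 - r_old + r_eq(K1, m_hat, N) * f_hat
    return r_stage1 - r_old               # Scenario 2 boundary (m_hat = N)

# The example of Section 4.1: K = 4 users, N = 4 files, M = 2, and an
# extra 2F/3 bits at the first three users gives rate 1/2.
print(proposed_rate(K=4, K1=3, M=2, M_extra=2 / 3, N=4))
```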

5 Comparison with existing works

In this section, we present our simulation results comparing our proposed scheme with the existing works described in Section 3.2. Our simulation results, characterizing the trade-off between the worst-case transmission rate and cache size for systems with two levels of cache sizes, suggest that our scheme outperforms the scheme by Saeedi Bidokhti et al. [11]. Considering the work by Ibrahim et al. [12], as the complexity of the solution grows exponentially with the number of users, we simulated that work for systems with up to four users. Our simulation results suggest that our scheme performs within a multiplicative factor of 1.1 of that scheme, i.e., $R \leq 1.1\,R_2$. As an example, this comparison is shown in Fig. 4 for a four-user system. For the parameters considered there, our scheme performs the same as the work by Ibrahim et al. [12].

Figure 4: Comparing the worst-case transmission rate of the proposed scheme with the existing ones.

6 Conclusion

We addressed the problem of centralized caching with unequal cache sizes. We proposed an explicit scheme for a system with a server of files connected through a shared error-free link to a group of users where one subgroup is equipped with a larger cache size than the rest. Simulation results comparing our scheme with existing works showed that our solution improves upon existing works by either improving the worst-case transmission rate over the shared link or decreasing the complexity while performing within a multiplicative factor of 1.1.

References

  1. M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  2. ——, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Netw., vol. 23, no. 4, pp. 1029–1040, Aug. 2015.
  3. U. Niesen and M. A. Maddah-Ali, “Coded caching with nonuniform demands,” IEEE Trans. Inf. Theory, vol. 63, no. 2, pp. 1146–1158, Feb. 2017.
  4. N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. N. Diggavi, “Hierarchical coded caching,” IEEE Trans. Inf. Theory, vol. 62, no. 6, pp. 3212–3229, June 2016.
  5. Z. Chen, P. Fan, and K. B. Letaief, “Fundamental limits of caching: improved bounds for users with small buffers,” IET Commun., vol. 10, no. 17, pp. 2315–2318, Nov. 2016.
  6. J. Gómez-Vilardebó. (2017, May 23) Fundamental limits of caching: improved bounds with coded prefetching. [Online]. Available: https://arxiv.org/abs/1612.09071v4
  7. C. Tian and K. Zhang. (2017, Apr. 25) From uncoded prefetching to coded prefetching in coded caching. [Online]. Available: https://arxiv.org/abs/1704.07901v1
  8. S. Wang, W. Li, X. Tian, and H. Liu. (2015, Aug. 29) Coded caching with heterogenous cache sizes. [Online]. Available: https://arxiv.org/abs/1504.01123v3
  9. M. Mohammadi Amiri and D. Gündüz, “Decentralized coded caching with distinct cache capacities,” in Proc. 50th Asilomar Conf. Signals Syst. Comput., Pacific Grove, CA, Nov. 2016, pp. 734–738.
  10. D. Cao, D. Zhang, P. Chen, N. Liu, W. Kang, and D. Gündüz. (2018, Feb. 8) Coded caching with heterogeneous cache sizes and link qualities: The two-user case. [Online]. Available: https://arxiv.org/abs/1802.02706v1
  11. S. Saeedi Bidokhti, M. Wigger, and R. Timo. (2016, May 8) Noisy broadcast channels with receiver caching. [Online]. Available: https://arxiv.org/abs/1605.02317v1
  12. A. M. Ibrahim, A. A. Zewail, and A. Yener, “Centralized coded caching with heterogeneous cache sizes,” in Proc. IEEE Wirel. Commun. Netw. Conf. (WCNC), San Francisco, CA, Mar. 2017.
  13. Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017, pp. 1613–1617.