On Combination Networks with Cache-aided Relays and Users

Abstract

Caching is an efficient way to reduce peak-hour network traffic congestion by storing some contents in the users' caches without knowledge of later demands. The coded caching strategy was originally proposed by Maddah-Ali and Niesen to provide an additional coded caching gain compared to the conventional uncoded scheme. Under practical considerations, the caching model was recently considered in relay networks, in particular the combination network, where the central server communicates with users (each with a cache of files) through intermediate relays, and each user is connected to a different subset of relays. Several inner and outer bounds were proposed for combination networks with end-user caches. This paper extends the recent work by the authors on centralized combination networks with end-user caches to a more general setting, where both relays and users have caches. In contrast to the existing schemes, in which the packets transmitted from the server are independent of the cached contents of the relays, we propose a novel caching scheme that creates an additional coded caching gain on the load transmitted from the server with the help of the contents cached in the relays. We also show that the proposed scheme outperforms the state-of-the-art approaches.


1 Introduction

1.1 Shared Link Networks

Caching content in the end-users' memories mitigates peak-hour network traffic congestion. The seminal paper [dvbt2fundamental] by Maddah-Ali and Niesen (MAN) proposed an information-theoretic model for cache-aided shared-link networks. Such a model comprises a server with files of bits each, users with a local memory of size files, and a single error-free broadcast “bottleneck” link. A caching scheme comprises two phases. (i) Placement phase: during off-peak hours, the server places parts of its library into the users' caches without knowledge of what the users will later demand. Centralized caching systems allow for coordination among users during the placement phase, while decentralized ones do not. When pieces of files are simply copied into the caches, the cache placement is said to be uncoded; otherwise, it is coded. (ii) Delivery phase: each user requests one file during peak-hour time. According to the user demands and the cache contents, the server transmits bits in order to satisfy all user demands. The goal is to determine , the minimum load that satisfies arbitrary (worst-case) user demands.

The coded caching strategy (with coded delivery) originally proposed in [dvbt2fundamental] gives an additional multiplicative global caching gain compared to uncoded caching schemes. For centralized systems, each file is split into a number of non-overlapping, equal-size, uncoded pieces that are strategically placed into the user caches. During the delivery phase, coded multicast messages are sent through the shared link so that a single transmission simultaneously serves several users. In [ontheoptimality], we showed that the MAN scheme is optimal under the constraint of uncoded cache placement when . In [exactrateuncoded], the MAN scheme was shown to have redundant multicast transmissions when . The load achieved in [exactrateuncoded] was proved to be optimal under the constraint of uncoded placement, and to be optimal to within a factor of 2 [yas2].
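The placement and multicast structure of the MAN scheme described above can be sketched in a few lines. The parameters `N`, `K`, `M` below are illustrative assumptions for this sketch, not values from the paper:

```python
from itertools import combinations
from math import comb

# Assumed example parameters: N files, K users, cache size M (in files).
N, K, M = 4, 4, 1
t = K * M // N               # here t = 1; each subfile is cached by t users

users = range(K)
# Placement: split each file into C(K, t) subfiles, one per t-subset of
# users; subfile (n, T) is cached by exactly the users in T.
subfile_index = list(combinations(users, t))
cache = {k: {(n, T) for n in range(N) for T in subfile_index if k in T}
         for k in users}

# Delivery: one coded multicast message per (t+1)-subset S of users;
# each message simultaneously serves the t + 1 users in S.
multicast_groups = list(combinations(users, t + 1))

# Load: C(K, t+1) messages, each of size 1 / C(K, t) of a file.
load = comb(K, t + 1) / comb(K, t)
```

Each user caches `N * comb(K - 1, t - 1)` subfiles, i.e., exactly `M` files' worth of bits, matching the memory constraint.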

Figure 1: A combination network with end-user caches, with relays and users, i.e., .

1.2 Combination Networks

Since users may communicate with the central server through intermediate relays, caching was recently considered in relay networks. The caching problem in general relay networks was originally considered in [multiserver], where a caching scheme combining uncoded cache placement and linear network coding was proposed. In [Naderializadeh2017onthoptimality], it was proved that, under the constraint of uncoded cache placement and of the separation between caching and multicast message generation on the one hand, and message delivery on the other (i.e., the generation of the multicast messages is independent of the communication network topology), the proposed scheme is order optimal to within a factor of among the separation schemes.

Since it is difficult to analyze general relay networks, a symmetric network, known as the combination network, has received significant attention [cachingJi2015]. In this network, illustrated in Fig. 1, a server equipped with files is connected to relays. Each of the users, with a memory of files each, is connected to a distinct -subset of relays. The goal is to minimize the maximum load among all links, which are assumed to be error-free and orthogonal.
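The topology just described can be enumerated directly; a minimal sketch, assuming H = 4 relays and r = 2 (so there are K = C(4, 2) = 6 users, each identified with the r-subset of relays it sees):

```python
from itertools import combinations
from math import comb

# Assumed example parameters: H relays, each user connected to r relays.
H, r = 4, 2
relays = range(H)

# One user per distinct r-subset of relays: K = C(H, r) users in total.
users = list(combinations(relays, r))
K = len(users)

# U[h]: the subset of users connected to relay h; each relay serves
# C(H - 1, r - 1) users.
U = {h: [u for u in users if h in u] for h in relays}
```

This enumeration also makes explicit the counts used later: the number of users per relay is C(H - 1, r - 1), and more generally the number of users simultaneously connected to a given set of i relays is C(H - i, r - i).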

The existing achievable schemes for centralized combination networks can be divided into two classes, based on uncoded cache placement [cachingincom, novelwan2017, wan2017novelmulticase, PDA2017yan] and coded cache placement [Zewail2017codedcaching, asymmetric2018wan, Wan2018ITA], respectively. The authors in [cachingincom, novelwan2017] proposed delivery schemes that deliver the MAN multicast messages. With the MAN placement, we proposed a delivery scheme in [wan2017novelmulticase] that generates multicast messages based on the network topology. The caching scheme proposed in [PDA2017yan] used a Placement Delivery Array (PDA) to reduce the sub-packetization of the schemes in [cachingincom, Zewail2017codedcaching] for the case where divides . The combination network was treated as a set of uncoordinated shared-link models in [Zewail2017codedcaching] by using an MDS precoding. By leveraging the connectivity of users and relays, respectively, two asymmetric coded placements were proposed in [asymmetric2018wan, Wan2018ITA] which lead to a symmetric delivery phase. Outer bounds (based on the cut-set bound or on acyclic directed graphs for the corresponding index coding problem) were proposed in [cachingJi2015, novelwan2017]. Some existing schemes are known to be optimal under the constraint of uncoded placement for some system parameters [novelwan2017, wan2017novelmulticase, asymmetric2018wan, PDA2017yan].

1.3 Beyond Combination Networks

The existing inner and outer bounds for combination networks with end-user caches have been extended to more general settings:

  1. Combination networks with cache-aided relays and users were considered in [Zewail2017codedcaching, wan2017novelmulticase], where the main idea is to divide each file into two parts: the part cached only in relays and the remaining part. The first parts of the demanded files are directly transmitted from relays to users, and the delivery of the second parts is equivalent to that in a combination network with end-user caches.

  2. The proposed scheme for centralized systems in [wan2017novelmulticase] was extended to decentralized systems with dMAN cache placement.

  3. In [wan2017novelmulticase], we extended the proposed inner bound to more general relay networks than combination networks, where each user is connected to an arbitrary subset of relays.

1.4 Contributions

In this paper, we consider combination networks with cache-aided relays and users. Based on the asymmetric coded placement in [Wan2018ITA], we propose a cache placement strategy in which the contents cached in the relays are treated as additional side information for the connected users; this side information also helps the users decode the coded messages transmitted from the server, and thus further reduces the load transmitted from the server to the relays. We also show that the proposed scheme outperforms the state-of-the-art schemes.

2 System Model and Related Results

In Section 2.1, we introduce the notation convention used in this paper. In Section 2.2, we introduce the system model for combination networks with cache-aided relays and users. Finally, in Section 2.3, we review the asymmetric coded placement we proposed in [Wan2018ITA], which will be used in our proposed scheme for combination networks with cache-aided relays and users.

2.1 Notation Convention

A collection is a set of sets, e.g., . Calligraphic symbols denote sets or collections, bold symbols denote vectors, and sans-serif symbols denote system parameters. We use to represent the cardinality of a set or the absolute value of a real number; and ; represents bit-wise XOR. We define the set

(1)

We define that

(2)

where is the number of users in the system, is the number of users connected to each relay, and represents the number of users that are simultaneously connected to relays. Our convention is that if or or .

2.2 System Model for Combination Networks with Cache-aided Relays and Users

In a combination network, a server has files, denoted by , each composed of i.i.d. uniformly distributed bits. The server is connected to relays through error-free orthogonal links. The relays are connected to users through error-free orthogonal links. Each user is connected to a distinct subset of relays. Each relay can store bits and each user can store bits, for . The subset of users connected to relay is denoted by . The subset of relays connected to user is denoted by . For each subset of users , the set of relays each of which is connected to all the users in is denoted by

(3)

For the network in Fig. 1, for example, , , and .
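The set in (3) is simply the intersection of the relay subsets of the users in the given user subset; a sketch under the same assumed example parameters (H = 4, r = 2), where each user is identified with the r-subset of relays it connects to:

```python
from itertools import combinations

# Assumed example parameters.
H, r = 4, 2
users = list(combinations(range(H), r))

def H_J(J):
    """Relays connected to every user in the user subset J, as in (3):
    the intersection of the users' relay subsets."""
    result = set(range(H))
    for u in J:
        result &= set(u)          # u is the r-subset of relays of that user
    return result
```

For instance, two users sharing one relay give a singleton intersection, while two users with disjoint relay subsets give the empty set.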

In the placement phase, relay and user store information about the files in their caches of size and bits, respectively. The cache content of relay is denoted by and that of user is denoted by ; let . During the delivery phase, user requests file ; the demand vector is revealed to all nodes. Given , the server sends a message of bits to relay . Then, relay transmits a message of bits to user . User must recover its desired file from and with high probability when . The objective is to determine the load pairs (numbers of transmitted bits in the delivery phase)

for the worst-case demands for a given placement .

In practice, the throughput of the transmission from the server to the relays may be much lower than the throughput from the relays to their locally connected users. For example, in wireless networks, the throughput from small-cell base stations to users is much higher than that from the macro base station to the small-cell base stations if all links use sub-6GHz wireless communications. In this paper, for combination networks with cache-aided relays and users, we mainly aim to minimize the max-link load from the server to the relays, i.e., .

For a caching scheme with max-link load among all the links from the server to relays , we say it attains a coded caching gain of if

(4)
(5)

where is achieved by routing. By the cut-set bound [cachingincom] we have (recall that is the number of users connected to each relay).

2.3 Asymmetric Coded Placement in [Wan2018ITA]

In this part, we introduce the caching scheme based on an asymmetric coded placement for the case proposed in [Wan2018ITA], which treats the combination network as coordinated shared-link models and leverages the connectivity among the divided models.

We aim to achieve coded caching gain , that is, every coded multicast message is simultaneously useful for users. So each subfile should be cached by at least users. We consider each subset of users with cardinality for which there exists at least one relay connected to all the users in , that is, we define the collection

(6)

where is defined in (3). For example, for the combination network in Fig. 1, we have

Each subfile corresponds to one set in .

Placement phase

We define

(7)
(8)

We divide each file into non-overlapping and equal-length pieces, which are then encoded by an MDS code; denote the MDS coded symbols/subfiles as . For each , is cached by the users in . Each MDS coded symbol comprises bits, and thus, by the inclusion-exclusion principle [combinatorics, Theorem 10.1], we can compute that the needed memory size is

(9)
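As a sanity check on this inclusion-exclusion computation, one can compare it against brute-force enumeration on a toy instance. Here H = 4, r = 2, and the subset size s = 2 are assumptions for illustration; we count, for one user, the subsets in the collection (6) that contain it:

```python
from itertools import combinations
from math import comb

# Assumed example parameters: H relays, r relays per user, subsets of size s.
H, r, s = 4, 2, 2
users = list(combinations(range(H), r))   # each user = its r-subset of relays

def H_J(J):
    """Relays connected to every user in J (intersection of relay subsets)."""
    relays = set(range(H))
    for u in J:
        relays &= set(u)
    return relays

u0 = users[0]                             # the user whose cached subfiles we count
# Brute force: size-s user subsets J containing u0 with H_J(J) nonempty.
brute = sum(1 for J in combinations(users, s) if u0 in J and H_J(J))

# Inclusion-exclusion over the relays connected to u0: with A_h the set of
# such J contained in U_h, |A_S| = C(|U_S| - 1, s - 1), where U_S is the set
# of users connected to all relays in S.
total = 0
for i in range(1, r + 1):
    for S in combinations(u0, i):
        U_S = [v for v in users if set(S) <= set(v)]
        total += (-1) ** (i + 1) * comb(len(U_S) - 1, s - 1)

assert brute == total
```

Both counts agree, illustrating why the memory size in (9) can be evaluated in closed form via inclusion-exclusion.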

Delivery phase

Each user needs to recover all the MDS coded symbols where , and (but not those for which ). For those MDS coded symbols needed by user , we divide into non-overlapping and equal-length pieces, . After considering all the MDS coded symbols demanded by all the users, for each relay and each set where , we create the multicast message

(10)

which will be sent to relay who will then forward it to the users in .
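The indexing of the multicast messages in (10) can be sketched as follows: one message per relay and per qualifying subset of the users connected to that relay. The group size `g` (the number of users served per message) and the topology parameters are assumed illustrative values:

```python
from itertools import combinations
from math import comb

# Assumed example parameters: H relays, r relays per user, g users per message.
H, r, g = 4, 2, 2
users = list(combinations(range(H), r))
U = {h: [u for u in users if h in u] for h in range(H)}

# One multicast message per pair (relay h, g-subset S of U[h]); each message
# is an XOR of subfile pieces, simultaneously useful to the g users in S.
messages = [(h, S) for h in range(H) for S in combinations(U[h], g)]
```

With these parameters each relay serves C(3, 2) = 3 groups, for 12 messages in total; the coded caching gain corresponds to each message serving g users at once.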

Hence, from the placement and delivery phases, each user obtains the MDS coded symbols where and . By the inclusion-exclusion principle [combinatorics, Theorem 10.1], user obtains a total of MDS coded symbols of , such that it can recover its desired file .

Max-link load

It can be proved that each demanded MDS coded symbol is multicast together with other demanded MDS coded symbols of the same length; thus the coded caching gain is and the max link-load is .

It was also shown in [Wan2018ITA] that when , the max link-load achieved by the proposed approach is strictly lower than that of [Zewail2017codedcaching]. However, when , we have for each set , and thus we do not leverage the coordination among relays. In this case the scheme is equivalent to the one in [Zewail2017codedcaching].

3 Combination Networks with Cache-aided Relays and Users

In Section 3.1, we review the caching scheme in [Zewail2017codedcaching], which divides each file into two parts, and in which the packets transmitted from the server are independent of the cached contents of the relays. In Section 3.2, we propose a novel caching scheme in which the users can leverage the cached contents of the connected relays to decode the coded messages transmitted from the server.

3.1 Caching in [Zewail2017codedcaching] for Combination Networks with Cache-aided Relays and Users

The memory-load tradeoff of the scheme in [Zewail2017codedcaching] is the lower convex envelope of the following two groups of points.

  1. where . For each point in this group, the relays have no memory and the scheme is equivalent to the one for combination networks with end-user caches. The combination network is treated as a set of uncoordinated shared-link models.

    Placement Phase

    Each file , where , is divided into non-overlapping and equal-length pieces, which are then encoded using an MDS code; the -th MDS coded symbol is denoted by , of size , for . For each , is divided into non-overlapping and equal-length pieces, i.e., . Each user caches if .

    Delivery Phase

    For each relay , the MAN-like multicast messages (11) are delivered from the server to relay , and relay then forwards to the users in . It can be seen that each user connected to relay can recover , and eventually .

    Max-link load

    Each demanded subfile is transmitted together with other subfiles from the server, so . Each user receives the uncached part of its demanded file, a total of bits, from its connected relays, so .

  2. where . In this case, each relay directly caches , such that the server need not transmit any packets to the relays.

    If , each user does not cache any bits. In the delivery phase, each relay transmits to each user . So we have .

    If , in the placement phase, each user caches for and . So it caches all the files such that .

Notice that in each of the above points, is always equal to .

3.2 Proposed Scheme for Combination Networks with Cache-aided Relays and Users

We start with an example of the scheme in [Wan2018ITA] for combination networks with end-user caches, which will be used later to derive our proposed method for combination networks with cache-aided relays and users.

Example 1 (, , , and ).

In this example, we have

We aim to achieve coded caching gain , that is, every multicast message is simultaneously useful for users. So each subfile should be known by at least users.

Placement phase

We divide each into non-overlapping and equal-length pieces, which are then encoded by an MDS code. The length of each MDS symbol is . For each , there is one MDS coded symbol/subfile, denoted by (composed of bits), which is cached by all the users in . Thus the memory size is .

Delivery phase

We let each user recover where , and . For each such , we divide it into non-overlapping and equal-length pieces, . After considering all the MDS coded symbols demanded by all the users, for each relay and each set where , we create the multicast message in (10), which is sent to relay and then forwarded to the users in . Hence each demanded MDS coded symbol is transmitted in one linear combination which also includes other demanded MDS coded symbols of identical length, and thus the coded caching gain is . In conclusion, the minimum memory size needed to achieve with the proposed scheme is , while that of [Zewail2017codedcaching] is .

Our proposed scheme for combination networks with cache-aided relays and users, illustrated in the next example, is based on the caching scheme in Example 1.

Example 2 (, , , and , ).

The network topology is the same as in Example 1. In this example, we also impose that each demanded subfile of each user which is stored neither in its own memory nor in the memories of its connected relays is transmitted from the server in one linear combination including other subfiles. We aim to let each user benefit from the cached contents of its connected relays as if they were its own.

Placement phase

As in Example 1, we divide each into non-overlapping and equal-length pieces, which are then encoded by an MDS code. The length of each MDS symbol is . For each , there is one MDS symbol denoted by . However, differently from Example 1, not the whole symbol is stored in the cache of each user in . Instead, we divide into non-overlapping parts (not necessarily of identical length), . For each , is cached by relay , where . In addition, is cached by each user in , where . Hence, each relay caches for each and each where . Thus the number of cached bits of relay is

For example, consider , which is divided into non-overlapping pieces. Each relay caches , with bits. So for the last piece we have , and thus no user caches any bits of .

Consider now , which is divided into non-overlapping pieces. Each relay caches , with bits. So . Thus each user in caches , with bits.

We then focus on user . For each set , we have and . For each set , we have and . So user caches bits.

Delivery phase

We let each user recover where and . There are three steps in the delivery phase. In the first step, for each relay and each user , relay delivers all the cached bits of to user . More precisely, for each set where , relay delivers to user . By this step and the placement phase, each user can recover where and . User can also recover where , , and . In the second step, we again focus on each relay and each user . For each set and each where , and , relay delivers to user . This additional side information will help user decode the multicast messages transmitted from the server in the last step. In the last step, as in Example 1, we let each user recover where , and . More precisely, we let represent the unknown bits in of user . We divide into non-overlapping and equal-length pieces, . After considering all the MDS coded symbols demanded by all the users, for each relay and each set where , we create the multicast message (12), which is sent to relay and then forwarded to the users in , where we use the same convention as in the literature when it comes to ‘summing’ sets. For example, consider relay and set . It can be seen that . Similarly, . So . Consider now relay and set . It can be seen that . Since , we do not further partition , which includes bits. Similarly, each of and has bits. Hence, including is transmitted from the server to relay , which then forwards it to the users in .

Hence, we achieve and , while the scheme in [Zewail2017codedcaching] described in Section 3.1 gives . It can be seen that the max link-load from the server to the relays achieved by the proposed method is less than half of that achieved by the scheme in [Zewail2017codedcaching].

Comparing the proposed scheme with the scheme in [Zewail2017codedcaching], there are two main advantages. On the one hand, the cached contents of the relays help the users decode the packets transmitted from the server, which leads to an additional coded caching gain. For example, is cached by relay and is cached by relay . In the first step of the delivery, is transmitted from relay to user and is transmitted from relay to user , such that user knows both. In the last step of the delivery, the server transmits to relay , and user can use and to decode . On the other hand, our proposed scheme is based on the asymmetric coded placement in [Wan2018ITA], which was proved to perform better than the scheme in [Zewail2017codedcaching].
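The decoding step just described boils down to cancelling XORed side information; a toy illustration, where byte strings stand in for subfile bits and all names are hypothetical:

```python
# A user holding subfiles a and b (its own cache plus what the relays
# forwarded) recovers c from the XOR a ^ b ^ c multicast by the server.
def xor(x, y):
    """Bit-wise XOR of two equal-length byte strings."""
    return bytes(p ^ q for p, q in zip(x, y))

a, b, c = b"\x01\x02", b"\x0f\x0f", b"\xa0\x0a"
coded = xor(xor(a, b), c)           # what the server multicasts

# The user cancels its side information a and b to recover c.
recovered = xor(xor(coded, a), b)
assert recovered == c
```

A single coded transmission thus serves several users at once, each cancelling a different part of the combination with its own side information.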

We now generalize the proposed scheme of Example 2. Notice that in this example, , with bits, is divided into non-overlapping pieces, where has bits and . It can be seen that if we increase by a small value and still desire to achieve , we have , and thus these two pieces overlap, which leads to redundancy. In other words, not all the bits of cached in relays and are useful to user . So in this paper, we only consider the case

where is the length of each MDS symbol generated by the scheme in [Wan2018ITA] (described in Section 2.3).

The memory-load tradeoff of the proposed scheme is the lower convex envelope of the following three groups of points.

  1. for each . For each point in this group, the relays have no memory and the scheme is equivalent to the one for combination networks with end-user caches. We use the scheme in Section 2.3, which leads to

  2. where the coded caching gain . In this case, for each user , since each relay caches where , and , we have

    (13)

    In addition, since each user caches where , we have

    (14)

    Hence, from (13) and (14) we have

    Hence, we can use the proposed scheme in Example 2.

    Placement phase

    We divide each into non-overlapping and equal-length pieces, which are then encoded by an MDS code. The length of each MDS symbol is . For each , there is one MDS symbol denoted by , and we divide into non-overlapping parts, . For each , is cached by relay , where . In addition, is cached by each user in , where . Hence, for each , if , we have ; otherwise, .

    Delivery phase

    We let each user recover where and . There are two steps in the delivery phase:

    1. For each relay and each user , relay delivers all the cached bits of to user . More precisely, for each set where , relay delivers to user .

      In addition, for each set where , and , relay delivers to user where and .

    2. We let each user recover where , and . More precisely, we let represent the unknown bits in of user . We divide into non-overlapping and equal-length pieces, . After considering all the MDS coded symbols demanded by all the users, for each relay and each set where , we create the multicast message in (12), which is sent to relay and then forwarded to the users in . It is also easy to check that each subfile in the multicast message in (12) has the same length.

    We can compute that