On the Optimality of D2D Coded Caching with Uncoded Cache Placement and One-shot Delivery

Abstract

We consider a cache-aided wireless device-to-device (D2D) network under the constraint of one-shot delivery, where the placement phase is orchestrated by a central server. We assume that the devices’ caches are filled with uncoded data and that the whole file database at the server is made available in the collection of caches. Following this phase, the files requested by the users are delivered through inter-device multicast communication. For such a system setting, we provide the exact characterization of the load-memory trade-off by deriving both the minimum average and the minimum peak sum-loads of the links between devices, for a given memory size at the disposal of each user. Capitalizing on the one-shot delivery property, we also propose an extension of the presented scheme that provides robustness against random user inactivity.

I Introduction

The killer application of wireless networks has evolved from real-time voice communication to on-demand multimedia content delivery (e.g., video), which requires a manyfold increase in the per-user throughput, from tens of kb/s to several Mb/s. Luckily, the pre-availability of such content allows for leveraging storage opportunities at the users in a proactive manner, thereby reducing the amount of necessary data transmission during periods of high network utilization.

A caching scheme is composed of two phases. The placement phase refers to the operation during periods of low network utilization, when users are not requesting any content. During this phase, the cache memories of the users are proactively filled by a central server. When each user directly stores a subset of bits, the placement phase is uncoded. The placement phase is called centralized if the server knows the identities of the users in the system and coordinates the placement of the content based on this information. On the other hand, placement without such coordination among the users is called decentralized placement.

The transmission stage in which users request and receive their desired content is termed the delivery phase. By utilizing the content stored in their caches during the placement phase, users aim to reconstruct their desired content from the signals they receive. The sources of these signals may differ depending on the context and network topology. In this work, we focus on the device-to-device (D2D) caching scenario, in which the signals available during the delivery phase are generated solely by the users themselves, while the central server remains inactive.

A coded caching strategy was proposed by Maddah-Ali and Niesen (MAN) [1]. Their model consists of users with caches and of a server in charge of distributing the content to the users through an error-free shared link during both the placement and delivery phases. This seminal work showed that a global caching gain is possible by utilizing multicast linear combinations during the delivery phase, whereas the previous work on caching [2, 3, 4, 5, 6, 7] aimed to benefit from the local caching gain, omitting the multicasting opportunities.

By observing that some MAN linear combinations are redundant, the authors of [8] proposed an improved scheme, which is optimal under the constraint of uncoded cache placement. It was proved in [9] that the uncoded caching scheme is optimal in general within a factor of , i.e., even when more involved (coded) cache placement schemes are allowed.

The work [1] has attracted a lot of attention and led to numerous extensions, e.g., decentralized caching [10], device-to-device (D2D) caching [11, 12, 13], caching on file selection networks [14], caching with nonuniform demands [15, 16, 17], multi-server caching [18], and online caching [19], to name a few.

The D2D caching problem was originally considered in [11, 12, 13], where users are allowed to communicate with each other. By extending the caching scheme in [1] to the D2D scenario, the global caching gain can also be attained. It was proved in [11, 12] that the proposed D2D caching scheme is order optimal within a constant factor when the memory size is not small.

In particular, the D2D caching setting with uncoded placement considered in this work is closely related to the distributed computing [20, 21, 22, 23, 24, 25, 26, 27] and data-shuffling [28, 29] problems. The coded distributed computing setting can be interpreted as a symmetric D2D caching setting with multiple requests, whereas the coded data-shuffling problem can be viewed as a D2D caching problem with additional constraints on the placement.

I-A Our Contributions

Our main contributions in this paper are:

  1. Based on the D2D achievable caching scheme in [12] (with denoting the number of users and the number of files) for , and on the shared-link caching scheme in [8] for , we propose a novel achievable scheme for the D2D caching problem, which is shown to be order optimal within a factor of under the constraint of uncoded placement, in terms of both the average transmitted load under uniformly distributed file requests and the worst-case transmitted load among all possible demands.

  2. For each user, if any bit of its demanded file not already in its cache can be recovered from its cache content and a transmitted packet of a single other user, we say that the delivery phase is one-shot. Under the constraint of uncoded placement and one-shot delivery, we can divide the D2D caching problem into shared-link models. Under the above constraints, we then use the index coding acyclic converse bound in [30, Corollary 1] to lower bound the total load transmitted in the shared-link models. By leveraging the connection among the shared-link models, we propose a novel way to use the index coding acyclic converse bound, compared to the method used for a single shared-link model in [31, 32, 8]. With this converse bound, we prove that the proposed achievable scheme is exactly optimal under the constraint of uncoded placement and one-shot delivery, in terms of both the average transmitted load and the worst-case transmitted load among all possible demands.

  3. Lastly, inspired by the distributed computing problem with stragglers (see, e.g., [33] for a distributed linear computation scenario), where straggling servers fail to finish their computational tasks on time, we focus on a novel D2D caching system in which, during the delivery phase, each user may be inactive with some probability, and the inactivity event of a user is not known to the other users. User inactivity may occur for several reasons, such as broken communication links, users moving out of the network, or users going off-line to save power. For this setting, it is hard to design a non-one-shot delivery scheme: if some packets fail to reach a user, the joint decoding over all packets received from the other users may become unsuccessful. Instead, we can directly extend the proposed optimal one-shot delivery phase to this problem by using the MDS precoding proposed in [33], which provides robustness against random unidentified user inactivity.

The rest of this paper is organized as follows. We give a precise definition of our model and an overview of previous results on the D2D and shared-link caching scenarios in Section II. We formally define the load-memory trade-off problem and summarize our results in Section III. The proposed caching scheme is presented in Section IV. We demonstrate its optimality under the constraint of one-shot delivery through a matching converse in Section V. We treat the problem of random user inactivity by proposing an extension of the presented scheme in Section VI. Finally, we corroborate our results with numerical evaluations and comparisons with the existing bounds in Section VII.

II Problem Setting and Related Results

In this section, we define our notation and network model, and present previous results closely related to the problem considered in the current work.

II-A Notation

|·| is used to represent the cardinality of a set or the length of a file in bits; we let [n] := {1, …, n}; the bit-wise XOR operation between binary vectors is indicated by ⊕; for two integers x and y, we let the binomial coefficient C(x, y) = 0 if x < y or y < 0.

II-B D2D Caching Problem Setting

We consider a D2D network composed of users, each able to receive all the other users’ transmissions (see Fig. 1). Users make requests from a fixed file database of files, each with a length of bits. Every user has a memory of bits, , at its disposal. The system operation can be divided into two phases, namely the placement phase and the delivery phase.

During the placement phase, users have access to a central server containing the database . In this work, we only consider the caching problem with uncoded cache placement, where each user directly stores bits of files in its memory. For the sake of simplicity, we do not repeat this constraint in the rest of the paper. Since the placement is uncoded, we can divide each file into subfiles, , where represents the set of bits exclusively cached by the users in .

We denote the indices of the stored bits at user by . For convenience, we denote the cache placement of the whole system by . We assume that, at the end of this phase, each bit of the database is available in at least one of the users’ caches, implying that must hold, i.e., we have .
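As a concrete (hypothetical) illustration of this bookkeeping, the sketch below groups a file's bit indices into subfiles keyed by the exact set of users caching them; the function name and the `cache_sets` data layout are our own, not from the paper.

```python
def subfile_partition(file_bits, cache_sets):
    """Group a file's bit indices into subfiles, keyed by the exact set
    of users that cache them (uncoded placement). `cache_sets[k]` is the
    set of bit indices stored by user k (hypothetical data layout)."""
    K = len(cache_sets)
    subfiles = {}
    for b in range(file_bits):
        T = frozenset(k for k in range(K) if b in cache_sets[k])
        # every bit of the database must be cached by at least one user
        assert T, "each database bit must appear in some cache"
        subfiles.setdefault(T, []).append(b)
    return subfiles
```

For example, with three users whose caches pairwise overlap, every bit ends up in a subfile indexed by the two users storing it.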

During the delivery phase, each user demands one file. We define demand vector , with denoting user ’s requested file index. The set of all possible demands is denoted by , so that . Given the demand information, each user generates a codeword of length bits and broadcasts it to other users, where indicates the load of user . For a given subset of users , we let denote the ensemble of codewords broadcasted by these users. From the stored bits in and the received codewords , each user attempts to recover its desired file .

In this work we concentrate on the special case of one-shot delivery, which we formally define in the following.

Definition 1 (One-shot delivery):

If each user can decode any bit of its requested file not already in its own cache from its cache and the transmission of a single other user, we say that the delivery phase is one-shot. Mathematically, we indicate by the block of bits needed by user and recovered from the transmission of user i, i.e.,

indicating that is a deterministic function of and . Then, a one-shot scheme implies that

In addition, we also define as the block of bits needed by user and recovered from the transmission of user , which are exclusively cached by users in . Hence, we have for each user

Remark 1:

The one-shot terminology is often used in settings related to the interference channel. To the best of our knowledge, the only work that explicitly emphasized one-shot delivery in the caching setting before the present work is [34].

Fig. 1: System model for cache-aided D2D network where users broadcast to all the other users using the bits in their memories stored from the central server during the placement phase. Solid and dotted lines indicate operation during placement and delivery phases, respectively.

Letting , we say that a communication load is achievable for a demand and placement , with , if and only if there exists an ensemble of codewords of size such that each user can reconstruct its requested file . We let indicate the minimum achievable load given and . We also define as the minimum achievable load given and under the constraint of one-shot delivery.

We consider independent and equally likely user demands, i.e., is uniformly distributed on . Given a placement , the average load is defined as the expected minimum achievable load under this distribution of requests:

We define as the minimum achievable average load:

Similarly, we define as the minimum average load under the constraint of one-shot delivery.

Furthermore, for a given placement , the peak load is defined as

In addition, we define as the minimum achievable peak load:

Correspondingly, we define as the minimum peak load under the constraint of one-shot delivery.

Further, for a demand , we let denote the number of distinct indices in . In addition, we let and stand for the demand vector of users and the number of distinct files requested by all users but user , respectively.

As in [35, 8], we group the demand vectors in according to the frequency of the common entries that they have. Towards this end, for a demand , we stack in a vector of length the number of appearances of each request in descending order, and denote it by . We refer to this vector as the composition of . Clearly, . By we denote the set of all possible compositions. We denote the set of demand vectors sharing the same composition by , and refer to these subsets as types. They are obviously disjoint, and . For instance, for and , one has and when .
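The grouping of demands into types can be sketched by a toy enumeration (function names and the zero-padding convention for compositions are our own):

```python
from collections import Counter
from itertools import product

def composition(d, N):
    """Multiplicities of the requested files in demand vector d, sorted
    in descending order and zero-padded (our own padding convention)."""
    counts = sorted(Counter(d).values(), reverse=True)
    return tuple(counts + [0] * (N - len(counts)))

def demand_types(K, N):
    """Partition all N**K demand vectors into types sharing a composition."""
    types = {}
    for d in product(range(N), repeat=K):
        types.setdefault(composition(d, N), []).append(d)
    return types
```

For instance, with 3 users and 2 files there are two types: the composition (3, 0) covers the 2 all-equal demands and (2, 1) the remaining 6, partitioning all 8 demand vectors.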

II-C Previous Results on the Device-to-Device Coded Caching Problem

The seminal work on D2D coded caching [12] showed that for a demand the load

(1)

is achievable for . Moreover, for non-integer with , the lower convex envelope of these points is achievable.

By cut-set arguments, the authors also showed that the minimum peak load is lower bounded as

(2)

Later in [36], the lower bound was tightened with the help of Han’s Inequality (cf. [37, Theorem 17.6.1]) to:

(3)

with , .

These lower bounds are more general than our lower bound presented in Section V, in the sense that they are neither restricted to uncoded placement nor to one-shot delivery.

II-D Previous Results on the Shared-link Coded Caching Problem

In this subsection, we briefly sketch the shared-link model [1] and state the capacity results for the case of uncoded cache placement [8, 31], which are essential for appreciating our results for the D2D model.

In the shared-link model (or bottleneck model, as it is frequently called), a server with files is connected to users through an error-free channel. Each file is composed of bits, and each user is provided with a local cache of size bits.

For uncoded placement, the minimum average and worst-case loads are given as follows [8]:

Theorem 1:

For a server-based shared-link coded caching scenario with a database of files and users each with a cache of size , the following average load under the constraint of uncoded cache placement is optimal

(4)

with , where is uniformly distributed over and indicates the number of distinct demands in . When , corresponds to the lower convex envelope of its values at .

Corollary 1:

For a server-based shared-link coded caching scenario with a database of files and users each with a cache of size , the following peak load under the constraint of uncoded cache placement is optimal

(5)

with . When , corresponds to the lower convex envelope of its values at .
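In the usual notation with K users, N files, and integer t = KM/N, the peak load in (5) from [8] has the well-known closed form (C(K, t+1) − C(K − min(K, N), t+1)) / C(K, t). The sketch below evaluates it; spelling out the symbols K, N, t here is our own choice for concreteness.

```python
from math import comb

def shared_link_peak_load(K, N, t):
    """Optimal worst-case shared-link load under uncoded placement at
    integer t = K*M/N, following the closed form attributed to [8].
    The subtracted term vanishes when K <= N (all demands distinct)."""
    assert 0 <= t <= K
    return (comb(K, t + 1) - comb(K - min(K, N), t + 1)) / comb(K, t)
```

For K = N = 3 and t = 1 this gives 1.0, matching the MAN worst-case load K(1 − M/N)/(1 + t) for distinct demands; for K = 4, N = 2, t = 1 the subtracted term is active.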

Notice that for the case of , i.e., when every user demands a distinct file, the negative terms in the above expressions disappear. The achievability for this case was in fact already presented in the seminal paper by Maddah-Ali and Niesen [1], and its optimality was proven in [31].

The above-mentioned loads are achieved by applying the caching scheme in [8] for each demand . The load achieved by this scheme for a given demand is given as

(6)

where refers to the symmetric placement which was originally presented in [1].

II-E Graphical Converse Bound for the Shared-link Coded Caching Problem

As shown in [31, 32], the acyclic index coding converse bound proposed in [30, Corollary 1] can be used to lower bound the broadcast load for the shared-link caching problem with uncoded cache placement. In the delivery phase, with the knowledge of the uncoded cache placement and the demand vector , we can generate a directed graph. For each sub-file demanded by each user, we generate a node in the graph. There is a directed edge from node to node if and only if the user demanding the sub-file represented by node caches the sub-file represented by node . If the subgraph over a set of nodes does not contain a directed cycle, and assuming the set of sub-files corresponding to this set of nodes is and the length of each sub-file is , the broadcast load (denoted by ) is lower bounded by

(7)

The authors in [31, 32] proposed a way to choose maximal acyclic sets in the graph. First, we choose users with different demands; the chosen user set is denoted by , where . Each time, we consider a permutation of , denoted by . It was proved in [31, Lemma 1] that the following set of sub-files is acyclic: . By using (7), we have

(8)

Considering all possible sets of users with different demands and all permutations, we sum all the inequalities of the form (8) to derive a converse bound on in terms of the lengths of the sub-files. The next step is to consider all possible demands and use Fourier–Motzkin elimination to remove all the sub-file lengths under the cache-size and file-size constraints. Finally, we obtain the converse bound for the shared-link model.
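The permutation-based bound (8) can be sketched as follows: given a dictionary of subfile lengths, a demand, and an ordering of the chosen leaders, we sum the lengths of the subfiles in the acyclic set of [31, Lemma 1] (the data layout and names are ours, not the paper's):

```python
from itertools import combinations

def acyclic_bound(x, d, leader_perm, K):
    """Lower bound on the broadcast load for one permutation of leaders:
    sum, over the i-th leader u_i, of the lengths of subfiles W_{d_{u_i}, T}
    with T disjoint from {u_1, ..., u_i}. `x` maps (file, frozenset(T))
    to a subfile length; absent keys count as zero."""
    total = 0.0
    used = set()
    for u in leader_perm:
        used.add(u)
        rest = [k for k in range(K) if k not in used]
        for r in range(len(rest) + 1):
            for T in combinations(rest, r):
                total += x.get((d[u], frozenset(T)), 0.0)
    return total
```

As a sanity check, for K = 2 with the symmetric t = 1 placement (each half of each file cached by exactly one user) and distinct demands, the bound evaluates to 0.5, which matches the MAN load C(2, 2)/C(2, 1).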

III Main Results

In the following theorem, we characterize the exact memory-average load trade-off under the constraint of one-shot delivery. The achievable scheme is introduced in Section IV and the converse bound is proved in Section V.

Theorem 2 (Average load):

For a D2D caching scenario with a database of files and users each with a cache of size , the following average load under the constraint of uncoded placement and one-shot delivery, with uniform demand distribution, is optimal

(9)

with , where is uniformly distributed over . Additionally, corresponds to the lower convex envelope of its values at , when .

We can also extend the above result to the worst-case transmitted load in the following corollary, whose proof is also given in Section V.

Corollary 2 (Worst-case load):

For a D2D caching scenario with a database of files and users each with a cache of size , the following peak load under the constraint of uncoded placement and one-shot delivery is optimal

(10)

with . Additionally, corresponds to the lower convex envelope of its values at , when .

Proof:

First, recall that the binomial coefficient is strictly increasing in its first argument.

For , if every file is demanded by at least users, every user has non-leading demanders, which is the maximum possible value of . Hence, such a demand maximizes the load.

For , however, it is not possible to have for all users. Depending on whether a user is the unique demander of a file or not, notice that or , respectively. By the monotonicity of the binomial coefficient, a worst-case demand must have the maximum possible number of distinct demands, i.e., . Hence, for and for must hold.

For , a demand that minimizes the number of unique demanders cannot contain user sets of size greater than two requesting the same file, because moving any user from such a subset to a subset of unique demanders would decrease the number of unique demanders. Hence, the composition of such a demand vector consists only of 2s and 1s. Precisely, the first entries of such a composition must be 2 and the remaining entries 1, so as to satisfy . Thus, there are users with and users with .
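For concreteness, this construction can be sketched as follows, assuming the regime N ≤ K < 2N in which the composition mixes 2s and 1s (the helper function and its name are our own illustration):

```python
def worst_case_demand(K, N):
    """Build a demand with min(K, N) = N distinct files and as few unique
    demanders as possible: K - N files are requested by exactly two users
    each, and the remaining 2N - K files by a unique demander."""
    assert N <= K < 2 * N, "sketch covers only the mixed 2s-and-1s regime"
    d = []
    for n in range(K - N):
        d += [n, n]             # files requested by exactly two users
    d += list(range(K - N, N))  # files with a unique demander
    return d
```

For example, worst_case_demand(5, 3) yields [0, 0, 1, 1, 2], whose composition is (2, 2, 1): two files with two demanders and one unique demander, consistent with the counting above.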

Remark 2:

As we will present in Section IV and discuss in Remark 4, our achievable scheme is in fact composed of shared-link sub-systems, where each sub-system has the parameter . Our scheme is symmetric in the placement phase and in the file-splitting step of the delivery phase. The optimality of the symmetric placement [1] was already shown for the shared-link model in [32, 31, 8] under the constraint of uncoded placement. This symmetry is intuitively plausible, as the placement phase takes place before the users reveal their demands, so any asymmetry in the placement would certainly not lead to a better peak load.

The file-splitting step, however, occurs after the users make their demands known to the other users. Interestingly, it turns out that the proposed caching scheme with a symmetric file-splitting step achieves the lower bound shown in Section V, even though the shared-link sub-systems may not all have the same .

Remark 3:

There are two main differences between the graphical converse bounds in [31, 32] for the shared-link model and the ones in Theorem 2 and Corollary 2 for the D2D model. On the one hand, D2D caching with one-shot delivery can be divided into shared-link models. The converse for the D2D model leverages the connection between these shared-link models, while in [31, 32] only a single shared-link model needs to be considered. As will be explained in Remark 6, not leveraging this connection may loosen the converse bound. On the other hand, in the shared-link caching problem, a sub-file demanded by multiple users is treated as one sub-file. In the D2D caching problem, however, if and , we do not treat and as one sub-piece for each .

By comparing the load achieved by our proposed scheme with the minimum achievable load for the shared-link model, we obtain the following order optimality result.

Theorem 3 (Order optimality):

For a D2D caching scenario with a database of files and users each with a cache of size , the proposed achievable average and worst-case transmitted loads in (9) and (10) are order optimal within a factor of .

Proof:

We only show the order optimality for the average case. The same result can be proven for the worst case by following similar steps as we present in the following.

First, we note that the load of a transmission satisfying the users’ demands from a server holding the whole library cannot be higher than the sum-load of the transmissions from the users’ caches. That is to say, we have . Furthermore, we observe that by the following:

(11)

where (11) is due to for all .

Therefore, we see that , which concludes the proof.

IV A Novel Achievable D2D Coded Caching Scheme

In this section, we present a caching scheme that achieves the loads stated in Theorem 2 and Corollary 2. To this end, we show that for any demand vector the proposed scheme achieves the load

(12)

where refers to the symmetric placement which was originally presented in [1]. This immediately proves the achievability of the average and worst case loads given in Theorem 2 and Corollary 2, respectively. In Subsection IV-A, we will present our achievable scheme and provide a simple example, illustrating how the idea of exploiting common demands [8] is incorporated in the D2D setting. In Remark 4, we will discuss our approach of decomposing the D2D model into shared-link models.

IV-A Achievability of

In the following, we present the proposed caching scheme for integer values of . For non-integer values of , resource sharing schemes [1, 10, 12] can be used to achieve the lower convex envelope of the achievable points for integer.

Placement phase

Our placement phase is based on the MAN placement [1], where each file is divided into disjoint sub-files denoted by where and . During the placement phase, each user caches all bits in each sub-file if . As there are sub-files for each file where and each sub-file is composed of bits, each user caches bits, fulfilling the memory constraint.
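At integer t = KM/N, the placement above can be sketched as follows (the bookkeeping is ours: subfiles are identified by (file, t-subset) pairs):

```python
from itertools import combinations
from math import comb

def man_placement(K, N, t):
    """MAN placement sketch: each file n is split into C(K, t) subfiles
    W_{n,T}, one per t-subset T of the K users; user k caches W_{n,T}
    if and only if k is in T."""
    caches = {k: set() for k in range(K)}
    for n in range(N):
        for T in combinations(range(K), t):
            for k in T:
                caches[k].add((n, T))
    return caches
```

Each cache then holds N·C(K−1, t−1) subfiles of F/C(K, t) bits each, i.e., N·t·F/K = M·F bits, so the memory constraint is met with equality.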

Delivery phase

The delivery phase starts with the file-splitting step: Each sub-file is divided into equal length disjoint sub-pieces of bits which are denoted by , where .

Subsequently, each user selects any subset of users from , denoted by , which request distinct files. Extending the nomenclature in [8], we refer to these users as leading demanders of user .

Let us now fix a user and consider an arbitrary subset of users. Each user needs the sub-piece , which is cached by all the other users in and by user . Precisely, all the users in a set want to exchange these sub-pieces via the transmissions of user . By letting user broadcast the codeword

(13)

this sub-piece exchange can be accomplished, as each user has all the sub-pieces on the RHS of (13), except for .

We let each user broadcast only the binary sums that are useful for at least one of its leading demanders. That is, each user broadcasts all for all subsets that satisfy , i.e., . For each user , the size of the broadcasted codewords amounts to times the size of a sub-piece; summing this over all users results in the load stated in (12).

We now show that each user is able to recover all of its desired sub-pieces. When is a leading demander of a user , i.e., , it can decode any sub-piece , for any , , from , which is broadcasted by user , by performing

(14)

as can be seen from (13).

However, when , not all of the codewords corresponding to its required sub-pieces are directly broadcasted by user . The user can still decode its desired sub-pieces by generating the missing codewords from the codewords it receives from user . To show this, we first restate Lemma 1 from [8], applied to the codewords broadcasted by a user .

Lemma 1 (Lemma 1 in [8]):

Given a user , the demand vector of the remaining users , and a set of leading demanders , for any subset that includes , let be the family of all subsets of such that each requested file in is requested by exactly one user in .

The following equation holds:

Let us now consider any subset of non-leading demanders of user . Lemma 1 implies that the codeword can be directly computed from the broadcasted codewords by the following equation:

(15)

where , because all the codewords on the RHS of the above equation are directly broadcasted by user . Thus, each user can obtain the value for any subset of users, and is able to decode its requested sub-pieces.

For each , user decodes its desired sub-piece by following either one of the above strategies, depending on whether it is a leading demander of or not.

In the following, we provide a short demonstration of the above presented ideas.

An example: Let us consider the case where and . Notice that and . Each file is divided into sub-files, and the users cache the following sub-files for each :

and need the following missing sub-files:

After splitting the sub-files into equal length sub-pieces, users transmit the following codewords, as can be seen from (13):

Notice that for these users, there exists no subset s.t. , which satisfies . However, depending on the choice of , user 2 can find subset with . Such an can be determined as for the cases of , , , respectively.

Picking user as its leading demander, i.e., , user only transmits

sparing the codeword . As mentioned before, the choice of the leading demanders is arbitrary, and any one of the codewords can be designated as the superfluous one. In fact, any one of these codewords can be obtained by summing the other two, since (cf. (15)).

From the broadcasted codewords, all users can decode all their missing sub-pieces by using the sub-pieces in their caches as side-information, by performing (14).

As each sub-piece is composed of bits and codewords of this size are broadcasted, our scheme achieves a load of , which can also be calculated directly from (12).
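To make the one-shot decoding concrete, the following sketch simulates the delivery of (13)–(14) for distinct demands (d_k = k) with random byte-valued sub-pieces, and checks that every user recovers each missing sub-piece from a single transmitter plus its own cache. All names and the byte representation are our own illustration, not the paper's notation.

```python
import random
from itertools import combinations

def d2d_one_shot_demo(K=4, t=2, piece_len=4, seed=0):
    """Each subfile W_{n,T} (|T| = t) is split into t sub-pieces, one per
    transmitter i in T. User i broadcasts, for every t-subset S of the
    other users, the XOR over k in S of the sub-piece of W_{d_k} indexed
    by (S \\ {k}) union {i}. Here d_k = k, so all demands are distinct."""
    rng = random.Random(seed)
    piece = {(n, T, i): tuple(rng.randrange(256) for _ in range(piece_len))
             for n in range(K) for T in combinations(range(K), t) for i in T}
    xor = lambda a, b: tuple(p ^ q for p, q in zip(a, b))
    d = list(range(K))
    for k in range(K):                       # receiver
        for T in combinations(range(K), t):
            if k in T:
                continue                     # these sub-pieces are cached
            for i in T:                      # one-shot: piece (i) from user i only
                S = sorted((set(T) - {i}) | {k})
                cw = (0,) * piece_len        # codeword (13) sent by user i for S
                for j in S:
                    cw = xor(cw, piece[(d[j], tuple(sorted((set(S) - {j}) | {i})), i)])
                for j in S:                  # cancel side information cached at user k
                    if j != k:
                        cw = xor(cw, piece[(d[j], tuple(sorted((set(S) - {j}) | {i})), i)])
                assert cw == piece[(d[k], T, i)]   # decoded as in (14)
    return True
```

Every term XORed back in the cancellation step carries an index set containing k, so it sits in user k's cache; the only surviving term is the desired sub-piece.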

Remark 4:

Notice that a user generates its codewords exclusively from the sub-pieces and there exist such sub-pieces in its cache.

In addition, for any , we have for any , , , . That is to say, users generate their codewords based on non-overlapping libraries of size bits.

Also, observe that the cache of a user contains such sub-pieces, which amounts to bits. Recall that a sub-piece is shared among users other than .

Therefore, the proposed scheme is in fact composed of shared-link models, each with files of size bits and users with caches of size units each. The corresponding parameter for each model is found to be . Summing the loads (6) of the shared-link sub-systems with parameters yields (12).

Remark 5:

When each user requests a distinct file (), our proposed scheme reduces to the one presented in [12]. The potential improvement of our scheme when hinges on identifying the possible linear dependencies among the codewords generated by each user.

V Converse Bound under the Constraint of One-Shot Delivery

In this section we propose the converse bound under the constraint of one-shot delivery given in Theorem 2. Under the constraint of one-shot delivery, we can divide each sub-file into sub-pieces. Recall that represents the bits of decoded by user from . Under the constraint of one-shot delivery, we can divide the D2D caching problem into shared-link models. In the shared-link model where , user transmits such that each user can recover