
Cache-Aided Radio Access Networks with Partial Connectivity

Ahmed Roushdy1,2, Abolfazl Seyed Motahari3, Mohammed Nafie4 and Deniz Gunduz2
Email: {ahmed.elkordy17, d.gunduz}@imperial.ac.uk, motahari@sharif.edu, mnafie@ieee.org
1 Wireless Intelligent Networks Center (WINC), Nile University
2 Information Processing and Communication Lab, Imperial College London
3 Department of Computer Engineering, Sharif University of Technology
4 Electronics and Communications Department, Cairo University
Abstract

Centralized coded caching and delivery is studied for a partially connected radio access network (RAN), whereby a set of K edge nodes (ENs), without caches, connected to a cloud server via orthogonal fronthaul links with limited capacity, serve a total of $\binom{K}{r}$ user equipments (UEs) over wireless links. The cloud server is assumed to hold a library of N files, each of size F bits; and each user, equipped with a cache of size MF bits, is connected to a distinct set of r ENs. The objective is to minimize the normalized delivery time (NDT), which refers to the worst-case delivery latency when each user requests a single file from the library. Two coded caching and transmission schemes are proposed, called the MDS-IA and soft-transfer schemes. MDS-IA utilizes maximum distance separable (MDS) codes in the placement phase and real interference alignment (IA) in the delivery phase. The achievable NDT for this scheme is presented for r = 2 and an arbitrary cache size M, and also for an arbitrary value of r when the cache capacity is above a certain threshold. The soft-transfer scheme utilizes soft-transfer of quantized coded symbols to the ENs, which implement zero forcing (ZF) over the edge links. The achievable NDT for this scheme is presented for arbitrary r and cache size M. The results indicate that the fronthaul capacity determines which scheme achieves a better performance in terms of the NDT, and the soft-transfer scheme becomes favorable as the fronthaul capacity increases.

Keywords— Coded caching, Partially connected interference networks, Interference management, Delivery latency.

footnotetext: This work was supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie action TactileNET (grant agreement No 690893), by the European Research Council (ERC) Starting Grant BEACON (grant agreement No 725731), and by a grant from the Egyptian Telecommunications Regulatory Authority.

I Introduction

Proactively caching popular contents into user devices during off-peak traffic periods, by exploiting the increasingly abundant storage resources in wireless terminals, is a promising approach to cope with the growing network traffic and latency in future communication networks [1, 2, 3, 4]. A centralized coded proactive caching scheme was introduced by Maddah-Ali and Niesen in [5], where a single server serves multiple cache-enabled users over an error-free shared link; it is shown to provide significant coding gains with respect to classical uncoded caching. Decentralized coded caching is considered in [6] and [7], where each user randomly stores some bits of each file independently of the other users.

More recently, the idea of coded caching has been extended to wireless radio access networks (RANs), where transmitters and/or receivers are equipped with cache memories. Cache-aided delivery over a noisy broadcast channel is considered in [8] and [9]. Cache-aided delivery from multiple transmitters is considered in [10, 11, 12, 13, 14, 15, 16, 17]. It is shown in [10] that caches at the transmitters can improve the sum degrees of freedom (DoF) by allowing cooperation among transmitters for interference mitigation. In [11] and [18], this model is extended to a network in which both the transmitters and the receivers are equipped with cache memories. An achievable scheme exploiting real interference alignment (IA) for the general network, which also allows decentralized caching at the users, is proposed in [12].

While the above works assume that the transmitter caches are large enough to store the entire database, the fog-aided RAN (F-RAN) model [13] allows the delivery of contents from the cloud server to the edge nodes (ENs) through dedicated fronthaul links. Coded caching for the F-RAN scenario with cache-enabled ENs is studied in [13]. The authors propose a centralized coded caching scheme to minimize the normalized delivery time (NDT), which measures the worst-case delivery latency with respect to an interference-free baseline system in the high signal-to-noise ratio (SNR) regime. In [14], the authors consider a wireless fronthaul that enables coded multicasting. In [15], decentralized coded caching is studied for a RAN architecture with two ENs, in which both the ENs and the users have caches. In [16], this model is extended to an arbitrary number of ENs and users. We note that the models in [13, 14, 15, 16] assume a fully connected interference network between the ENs and the users. A partially connected RAN is studied in [17] from an online caching perspective.

If each EN is connected to a subset of the users through dedicated error-free orthogonal links, the corresponding architecture is known as a combination network. Coded caching in a combination network is studied in [19, 20, 21]. In such networks, the server is connected to a set of relay nodes, which communicate with the users, such that each user is connected to a distinct set of r relay nodes, where r is referred to as the receiver connectivity. The links are assumed to be error- and interference-free. The objective is to determine the min-max link load, defined as the minimum achievable value of the maximum load among all the links (proportional to the download time), over all possible demand combinations. Note that, although the delivery from the ENs to the users takes place over orthogonal links, that is, there are no multicasting opportunities as in [5], the fact that the messages for multiple users are delivered from the server to each relay through a single link allows coded delivery to offer gains similar to those in [5]. The authors of [20] consider a class of combination networks that satisfy the resolvability property, which requires K to be divisible by r. A combination network in which both the relays and the users are equipped with caches is presented in [21]. For the case when there are no caches at the relays, the authors are able to achieve the same performance as in [20] without requiring the resolvability property.

In this paper, we study the centralized caching problem in a RAN with cache-enabled user equipments (UEs), as depicted in Fig. 1. Our work differs from the aforementioned prior works [13, 14, 15, 16] in that we consider a partially connected interference channel from the ENs to the UEs, instead of a fully connected RAN architecture. Partial connectivity may be due to physical obstructions blocking the signals, or to the long distances between some of the EN-UE pairs.

The considered network topology from the server to the UEs, where the ENs act as relay nodes for the UEs they serve, is similar to the combination network architecture; however, we consider interfering wireless links from the ENs to the UEs instead of dedicated links, and study the normalized delivery time in the high SNR regime. The authors in [22] study the NDT for a partially connected interference channel with caches at both the transmitters and the receivers, where each receiver is connected to a set of consecutive transmitters. Our work differs from [22], since we take into account the fronthaul links from the server to the ENs, and consider a network topology in which the number of transmitters (ENs in our model) is less than or equal to the number of receivers.

We formulate the minimum NDT problem for a given receiver connectivity r. Then, we propose two centralized caching and delivery schemes: the MDS-IA scheme that we proposed in our previous work [23], and the soft-transfer scheme. The MDS-IA scheme exploits real IA to minimize the NDT for a receiver connectivity of r = 2. We then extend this scheme to an arbitrary receiver connectivity r assuming a certain cache capacity. For this scheme, we show that increasing the receiver connectivity for the same number of ENs and UEs decreases the NDT for the specific cache capacity region studied, while the reduction in the NDT depends on the fronthaul capacity. On the other hand, in the soft-transfer scheme, the server delivers quantized channel input symbols to the ENs in order to enable them to implement zero-forcing transmission to the UEs, minimizing the NDT for an arbitrary receiver connectivity and cache capacity. Our results show that the scheme achieving a smaller NDT depends on the fronthaul capacity: the MDS-IA scheme achieves a smaller NDT when the fronthaul capacity is relatively limited, while the soft-transfer scheme performs better as the fronthaul capacity increases.

The rest of the paper is organized as follows. In Section II, we introduce the system model and the performance measure. In Section III, the main results of the paper are presented. The MDS-IA scheme is presented in Section IV, while the soft-transfer scheme is introduced in Section V. The numerical results are presented in Section VI. Finally, the paper is concluded in Section VII.

A Notation

We denote sets with calligraphic symbols and vectors with bold symbols. The set of integers $\{1, \ldots, n\}$ is denoted by $[n]$. The cardinality of set $\mathcal{A}$ is denoted by $|\mathcal{A}|$.

II System Model and Performance Measure

Fig. 1: RAN architecture with receiver connectivity r = 2, where K = 4 ENs serve $\binom{4}{2} = 6$ UEs.

A System Model

We consider the RAN architecture illustrated in Fig. 1, which consists of a cloud server and a set of K ENs, denoted by $EN_1, \ldots, EN_K$, that help the cloud server serve the requests from a set of UEs. The cloud is connected to each EN via an orthogonal fronthaul link of capacity $C_F$ bits per symbol, where a symbol refers to a single use of the edge channel from the ENs to the UEs. The edge network from the ENs to the users is a partially connected interference channel, where each UE is connected to a distinct set of r ENs, and r is referred to as the receiver connectivity. The number of UEs is $\binom{K}{r}$, which means that there is exactly one UE for each r-element subset of the ENs. In this architecture, $EN_i$, $i \in [K]$, is connected to $L \triangleq \binom{K-1}{r-1}$ UEs.

The cloud server holds a library of N files, $\mathcal{W} = \{W_1, \ldots, W_N\}$, each of size F bits. We assume that the UEs request files from this library only. Each UE is equipped with a cache memory of size MF bits, $0 \le M \le N$, while the ENs have no caches. We define two parameters, $t_{tot} \triangleq \binom{K}{r} M / N$ and $t \triangleq LM/N$, where the former is the normalized cache capacity (per file) available across all the UEs, while the latter is the normalized cache capacity of the UEs connected to a particular edge node. We denote the set of UEs connected to $EN_i$ by $\mathcal{U}_i$, where $|\mathcal{U}_i| = L$, and the set of ENs connected to $UE_k$ by $\mathcal{E}_k$, where $|\mathcal{E}_k| = r$. We will use a function that returns 0 if $UE_k$ is not served by $EN_i$, and otherwise returns the relative order of $UE_k$ among the L UEs served by $EN_i$.
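To make the indexing concrete, the following sketch (Python, illustrative only) enumerates the topology under the assumption that UEs are labeled by the lexicographic order of their r-element EN subsets; relative_order mirrors the function above that returns 0 when a UE is not served by an EN.

from itertools import combinations

def build_topology(K, r):
    # each UE is identified with a distinct r-subset of the K ENs, so
    # there are C(K, r) UEs and each EN serves L = C(K-1, r-1) UEs
    ens = list(range(1, K + 1))
    ue_to_ens = {u + 1: set(s) for u, s in enumerate(combinations(ens, r))}
    en_to_ues = {e: sorted(u for u, s in ue_to_ens.items() if e in s) for e in ens}
    return ue_to_ens, en_to_ues

def relative_order(en_to_ues, e, k):
    # 0 if UE k is not served by EN e; otherwise the 1-based position
    # of k among the L UEs served by e, taken in ascending order
    served = en_to_ues[e]
    return served.index(k) + 1 if k in served else 0

ue_to_ens, en_to_ues = build_topology(K=4, r=2)
print(en_to_ues[1])                     # EN 1 serves L = 3 UEs: [1, 2, 3]
print(relative_order(en_to_ues, 1, 3))  # UE 3 is 3rd among EN 1's UEs
print(relative_order(en_to_ues, 1, 6))  # 0: UE 6 is not served by EN 1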

The system operates in two phases: a placement phase and a delivery phase. The placement phase takes place when the traffic load is low, and the UEs are given access to the entire library $\mathcal{W}$. Each UE is then able to fill its cache using the library without any prior knowledge of the future demands or the channel coefficients. Let $Z_k$ denote the cache contents of $UE_k$ at the end of the placement phase. We consider centralized placement; that is, the cache contents of the UEs are coordinated jointly.

In the delivery phase, $UE_k$ requests file $W_{d_k}$ from the library, $d_k \in [N]$. We define $\mathbf{d}$ as the demand vector. Once the demands are received, the cloud server sends a message $V_i$ of blocklength $T_F$ to $EN_i$, $i \in [K]$, via the fronthaul link. This message is limited to $T_F C_F$ bits to guarantee correct decoding at $EN_i$ with high probability. In this paper, we consider half-duplex ENs; that is, the ENs start transmitting only after receiving their messages from the cloud server. This is called serial transmission in [13], and the overall latency is the sum of the latencies in the fronthaul and the edge connections. $EN_i$ has an encoding function that maps the fronthaul message $V_i$, the demand vector $\mathbf{d}$, and the channel coefficients $\mathbf{H} = \{h_{kj}\}$, where $h_{kj}$ denotes the complex channel gain from $EN_j$ to $UE_k$, to a channel input vector $\mathbf{x}_i$ of blocklength $T_E$, which must satisfy an average power constraint of P, i.e., $\frac{1}{T_E}\mathbb{E}[\|\mathbf{x}_i\|^2] \le P$. $UE_k$ decodes its requested file as $\hat{W}_{d_k}$ by using its cache contents $Z_k$, the received signal $\mathbf{y}_k$, as well as its knowledge of the channel gain matrix $\mathbf{H}$ and the demand vector $\mathbf{d}$. We have

$y_k(t) = \sum_{i \in \mathcal{E}_k} h_{ki} x_i(t) + n_k(t), \quad t \in [T_E],$   (1)

where $n_k(t)$ denotes the independent additive complex Gaussian noise at the k-th user. The channel gains are independent and identically distributed (i.i.d.) according to a continuous distribution, and remain constant within each transmission interval. Similarly to [10, 11, 12, 13, 14], we assume that perfect channel state information is available at all the terminals of the network. The probability of error for a coding scheme, consisting of the cache placement, cloud encoding, EN encoding, and user decoding functions, is defined as

$P_e = \max_{\mathbf{d}} \max_{k} \Pr\big(\hat{W}_{d_k} \neq W_{d_k}\big),$   (2)

which is the worst-case probability of error over all possible demand vectors and all the users. We say that a coding scheme is feasible if $P_e \to 0$ as $F \to \infty$, for almost all realizations of the channel matrix $\mathbf{H}$.

B Performance Measure

We will consider the normalized delivery time (NDT) in the high SNR regime [24] as the performance measure. Note that the capacity of the edge network scales with the SNR. Hence, to make sure that the fronthaul links do not constitute a bottleneck, we let $C_F = r_F \log P$, where $r_F$ is called the fronthaul multiplexing gain. For cache capacity M and fronthaul multiplexing gain $r_F$, we say that $\tau(M, r_F)$ is an achievable NDT if there exists a sequence of feasible codes that satisfy

$\tau(M, r_F) = \lim_{P \to \infty} \limsup_{F \to \infty} \frac{T_F + T_E}{F / \log P}.$   (3)

We additionally define the fronthaul NDT as

$\tau_F \triangleq \lim_{P \to \infty} \limsup_{F \to \infty} \frac{T_F}{F / \log P},$   (4)

and the edge NDT as

$\tau_E \triangleq \lim_{P \to \infty} \limsup_{F \to \infty} \frac{T_E}{F / \log P},$   (5)

such that the end-to-end NDT is the sum of the fronthaul and edge NDTs, i.e., $\tau = \tau_F + \tau_E$. We define the minimum NDT for a given tuple $(M, r_F)$ as $\tau^*(M, r_F) \triangleq \inf \{\tau(M, r_F) : \tau(M, r_F) \text{ is achievable}\}$.
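To make the normalization concrete, the following sketch (Python; all numerical values are illustrative placeholders) evaluates the NDT ratio at finite SNR, with the end-to-end NDT obtained as the sum of the fronthaul and edge terms; the definitions in (3)-(5) take the limits of large F and P.

import math

def ndt(T, F, P):
    # latency T (in channel uses) normalized by the time F / log2(P)
    # an interference-free link of capacity log2(P) needs for one file
    return T / (F / math.log2(P))

F, P = 1e6, 1e3               # file size in bits and SNR (placeholders)
T_F, T_E = 2.0e5, 1.5e5       # fronthaul and edge latencies (placeholders)
tau = ndt(T_F, F, P) + ndt(T_E, F, P)   # end-to-end NDT = tau_F + tau_E
print(round(tau, 2))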

III Main Results

The main results of the paper are stated in the following two theorems.

Theorem 1.

For the partially connected RAN architecture outlined above, with user cache capacity M, fronthaul multiplexing gain $r_F$, and N files, and considering centralized cache placement, the following NDT is achievable by the MDS-IA scheme for integer values of $t = LM/N$:

(6)

for a receiver connectivity of r = 2, or for an arbitrary receiver connectivity r when the cache capacity is above a certain threshold.

The NDT for non-integer values of t can be obtained as a linear combination of the NDTs corresponding to the nearest integer values through memory-sharing.

Theorem 2.

For the same partially connected RAN architecture, the following NDT is achievable by the soft-transfer scheme for integer values of $t = LM/N$:

(7)
Fig. 2: Comparison of the achievable NDT for a RAN architecture with a library of N files, for different receiver connectivities and fronthaul multiplexing gains.

The NDT for non-integer values of t can be obtained as a linear combination of the NDTs corresponding to the nearest integer values through memory-sharing.

Remark.

Recall the NDT achieved by the MDS-IA scheme in Eqn. (6) of Theorem 1 for cache capacities above the threshold.

Consider two RAN architectures with K ENs each, denoted by RAN-A and RAN-B, with receiver connectivities $r_A$ and $r_B$, respectively, where $r_A > r_B$ and $r_A = K - r_B$. The two networks then have the same number of UEs, $\binom{K}{r_A} = \binom{K}{r_B}$, but the number of UEs each EN connects to is different, given by $L_A = \binom{K-1}{r_A - 1}$ and $L_B = \binom{K-1}{r_B - 1}$, respectively. We illustrate the achievable NDT performance of the MDS-IA scheme in Fig. 2 for different fronthaul multiplexing gains. We observe from the figure that, for the same cache capacity, the achievable NDT of RAN-A is less than or equal to that of RAN-B, and the gap between the two increases as the fronthaul multiplexing gain decreases. This suggests that the increased connectivity helps in reducing the NDT despite potentially increasing the interference as well, while the gap between the two achievable NDTs for RAN-A and RAN-B becomes negligible as the fronthaul multiplexing gain increases.

IV MDS-IA Scheme

In this section, we present the MDS-IA scheme for the partially-connected RAN architecture.

A Cache Placement Phase

We use the cache placement algorithm proposed in [21], where the cloud server divides each file into r equal-size non-overlapping subfiles. Then, it encodes the subfiles using a (K, r) maximum distance separable (MDS) code [25]. The resulting K coded chunks, each of size F/r bits, are denoted by $C_n^i$, where $n \in [N]$ is the file index, and $i \in [K]$ is the index of the coded chunk. $EN_i$ will act as an edge server for the encoded chunks $C_n^i$, $n \in [N]$. Note that, thanks to the MDS code, any r encoded chunks are sufficient to reconstruct the whole file.

Each encoded chunk is further divided into $\binom{L}{t}$ equal-size non-overlapping pieces, each of which is denoted by $C_{n,\mathcal{T}}^i$, where $\mathcal{T} \subseteq \mathcal{U}_i$ and $|\mathcal{T}| = t$. The pieces $C_{n,\mathcal{T}}^i$, $n \in [N]$, are stored in the cache memory of $UE_k$ if $k \in \mathcal{U}_i$ and $k \in \mathcal{T}$; that is, the pieces of chunk $C_n^i$, $n \in [N]$, are stored by the UEs connected to $EN_i$. At the end of the placement phase, each user stores $rN\binom{L-1}{t-1}$ pieces, each of size $\frac{F}{r\binom{L}{t}}$ bits, which sum up to MF bits, satisfying the memory constraint with equality. We will next illustrate the placement phase through an example.
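Before the example, the MDS encoding step can be illustrated in a few lines (Python/NumPy; a real-valued Vandermonde generator is one concrete MDS instance, chosen here for brevity, since the scheme only requires that any r of the K coded chunks suffice to recover the file [25]; the subsequent division into cached pieces is omitted).

import numpy as np

def mds_encode(subfiles, K):
    # subfiles: r x m array (r subfiles of m symbols each); returns K
    # coded chunks; any r rows of the Vandermonde generator are invertible
    r = subfiles.shape[0]
    G = np.vander(np.arange(1, K + 1), r, increasing=True)   # K x r
    return G @ subfiles

def mds_decode(chunks, idx, r):
    # recover the r subfiles from any r chunks, with 1-based indices idx
    G = np.vander(np.array(idx), r, increasing=True)
    return np.linalg.solve(G, chunks)

rng = np.random.default_rng(0)
subfiles = rng.standard_normal((2, 4))           # r = 2 subfiles
chunks = mds_encode(subfiles, K=4)               # K = 4 coded chunks
rec = mds_decode(chunks[[0, 3]], [1, 4], r=2)    # any 2 chunks suffice
assert np.allclose(rec, subfiles)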

Example 1. Consider the partially connected RAN depicted in Fig. 1, where K = 4, r = 2, and N = 10. The cloud server divides each file into r = 2 subfiles. These subfiles are then encoded using a (4, 2) MDS code. As a result, there are 4 coded chunks, denoted by $C_n^1, C_n^2, C_n^3, C_n^4$, $n \in [10]$, each of size F/2 bits. For t = 1, i.e., M = 10/3, each encoded chunk is further divided into $\binom{L}{t} = 3$ pieces $C_{n,\{k\}}^i$, where $k \in \mathcal{U}_i$ and $i \in [4]$. Cache contents of each user are listed in TABLE I. Observe that each user stores two pieces of the encoded chunks of each file, for a total of 10 files, i.e., $2 \times 10 \times \frac{F}{6} = MF$ bits, which satisfies the memory constraint.

TABLE I: Cache contents after the placement phase for the RAN scenario considered in Example 1, where K = 4, r = 2, N = 10, and t = 1.
TABLE II: The data delivered from the cloud server to each EN for Example 1.

B Delivery Phase

The delivery phase is carried out in two steps. The first step is the delivery from the cloud server to the ENs, and the second step is the delivery from the ENs to the UEs.

B1 Delivery from the cloud server to the ENs

For each (t+1)-element subset $\mathcal{S}$ of $\mathcal{U}_i$, i.e., $\mathcal{S} \subseteq \mathcal{U}_i$ and $|\mathcal{S}| = t+1$, the cloud server will deliver the following message to $EN_i$:

$V_{i,\mathcal{S}} = \bigoplus_{k \in \mathcal{S}} C^i_{d_k, \mathcal{S} \setminus \{k\}}.$   (8)

Overall, for a given demand vector, the following set of messages will be delivered to $EN_i$:

$V_i = \big\{V_{i,\mathcal{S}} : \mathcal{S} \subseteq \mathcal{U}_i, |\mathcal{S}| = t+1\big\},$   (9)

which makes a total of $\binom{L}{t+1} \frac{F}{r\binom{L}{t}}$ bits per EN. The fronthaul NDT from the cloud server to the ENs is then given by

$\tau_F = \frac{\binom{L}{t+1}}{r \binom{L}{t} r_F} = \frac{L - t}{r (t+1) r_F}.$   (10)

The messages to be delivered to each EN in Example 1 are given in TABLE II, and we have $\tau_F = \frac{1}{2 r_F}$.
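The structure of (8) and (9) can be sketched as follows (Python; piece(u, S) is a hypothetical lookup, standing in for the placement of Section IV-A, that returns the piece $UE_u$ requests and the other UEs in $\mathcal{S}$ already cache; byte strings stand in for the $\frac{F}{r\binom{L}{t}}$-bit pieces).

from itertools import combinations

def xor_bytes(pieces):
    # XOR equal-length byte strings into one coded fronthaul message
    out = bytearray(len(pieces[0]))
    for p in pieces:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def fronthaul_messages(served_ues, t, piece):
    # one XOR-coded message per (t+1)-subset S of the L UEs an EN serves;
    # each UE in S caches what the others need, so it can strip them off
    return {S: xor_bytes([piece(u, S) for u in S])
            for S in combinations(sorted(served_ues), t + 1)}

toy = lambda u, S: bytes([u * 17 % 251]) * 4      # stand-in piece content
msgs = fronthaul_messages({1, 2, 3}, t=1, piece=toy)
print(len(msgs))      # C(3, 2) = 3 coded messages, as for one EN of Example 1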

1 , , ,
2 FOR
3     FOR
4     
5       FOR
6         
7          Find the set of other UEs receiving the same interference signal, and sort the UEs in this set in ascending order.
8         For each user in this set, find the corresponding interference vector.
9          Add these vectors to the set of interference vectors.
10       END FOR
11            If
12                FOR
13                   FOR
14                      FOR
15                           IF
16                              
17                              Go to 21, i.e., next iteration of .
18                           END IF
19                       END FOR
20                   END FOR
21                END FOR
22             END IF
23     for , where
24                                FOR
25                                   FOR
26                                  
27                                  END FOR
28                               END FOR
29     Remove interference signals in from
30     for
31    END FOR
32 END FOR
Algorithm 1: Generator for the basis, data, and user matrices

B2 Delivery from the ENs to the UEs

$UE_k$ is interested in the following set of messages:

$\mathcal{M}_k = \big\{V_{i,\mathcal{S}} : i \in \mathcal{E}_k,\ \mathcal{S} \subseteq \mathcal{U}_i,\ |\mathcal{S}| = t+1,\ k \in \mathcal{S}\big\},$   (11)

where $V_{i,\mathcal{S}}$ is defined in (8). On the other hand, the transmission of the following messages interferes with the transmission of the messages in $\mathcal{M}_k$:

$\mathcal{I}_k = \big\{V_{i,\mathcal{S}} : i \in \mathcal{E}_k,\ \mathcal{S} \subseteq \mathcal{U}_i,\ |\mathcal{S}| = t+1,\ k \notin \mathcal{S}\big\}.$   (12)

Each $V_{i,\mathcal{S}} \in \mathcal{I}_k$ causes interference at $L - (t+1)$ UEs, including $UE_k$. Hence, the total number of interfering signals at $UE_k$ from the ENs in $\mathcal{E}_k$ is $r\binom{L-1}{t+1}$, where $\binom{L-1}{t+1}$ is the number of interfering signals from each EN connected to $UE_k$.
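A quick numerical check of this count for the parameters of Example 1 (Python; the identity subtracts the messages containing the UE from all the messages an EN sends):

from math import comb

L, t, r = 3, 1, 2                       # parameters of Example 1
per_en = comb(L - 1, t + 1)             # interfering messages per connected EN
assert per_en == comb(L, t + 1) - comb(L - 1, t)
print(r * per_en)                       # total interfering signals at a UE: 2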

TABLE III: The interference matrices at the UEs of Example 1.

We enumerate the ENs in $\mathcal{E}_k$ such that $e_q^k$ denotes the q-th element of $\mathcal{E}_k$ when the ENs are ordered in ascending order. At $UE_k$, we define the interference matrix to be a $\binom{L-1}{t+1} \times r$ matrix whose q-th column represents the interference caused by $EN_{e_q^k}$. For each column vector, we sort the corresponding set of interfering signals in ascending order. The interference matrices for Example 1 are shown in TABLE III.
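These matrices can be assembled mechanically (Python sketch; interferers(e, k) is a hypothetical helper returning the labels of the messages in (12) that $EN_e$ transmits but $UE_k$ did not request, and ue_to_ens follows the earlier topology sketch).

def interference_matrix(k, ue_to_ens, interferers):
    # q-th column: the interfering signals at UE k caused by the q-th EN
    # serving it (ENs taken in ascending order), each column sorted
    cols = [sorted(interferers(e, k)) for e in sorted(ue_to_ens[k])]
    return [list(row) for row in zip(*cols)]   # rows of the matrix

labels = {(1, 1): ["V1{2,3}"], (2, 1): ["V2{4,5}"]}
print(interference_matrix(1, {1: {1, 2}}, lambda e, k: labels[(e, k)]))
# [['V1{2,3}', 'V2{4,5}']] -- a C(L-1, t+1) x r = 1 x 2 matrix for UE 1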

We will use real IA, presented in [26] and extended to complex channels in [27], for the delivery from the ENs to the UEs, in order to align each row of interfering signals, one from each EN, in the same subspace. We define $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{U}$ to be the basis matrix (a function of the channel coefficients), the data matrix, and the user matrix, respectively, and denote their rows by $\mathbf{b}_s$, $\mathbf{d}_s$ and $\mathbf{u}_s$. The row vectors of the basis matrix are used to generate the set of monomials that serves as the transmission directions for the modulation constellation [10] for the whole network. In other words, each row data vector uses this monomial set as the transmission directions of all its data, so that all the interfering signals from the corresponding ENs are aligned in the same subspace at the intended UE.
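The direction set can be sketched as follows (Python, illustrative only; in the actual construction of [26] the exponent range grows so as to achieve the DoF in the limit, and the relevant gains are the cross-link channel coefficients).

from itertools import product

def ia_directions(gains, n):
    # all monomials prod_j h_j^{s_j} with integer exponents 1 <= s_j <= n
    # over the cross-link gains; used as common transmission directions so
    # that interference from different ENs lands in the same subspace
    dirs = []
    for exps in product(range(1, n + 1), repeat=len(gains)):
        m = 1.0
        for h, s in zip(gains, exps):
            m *= h ** s
        dirs.append(m)
    return dirs

print(len(ia_directions([0.7, 1.3], 3)))   # 3 ** 2 = 9 directions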

We next explain the data matrix in more detail. For each set of signals that are to be aligned, there is a user at which these data streams must occupy the same subspace. Each row of the data matrix collects the signals that are aligned at that user.
We employ Algorithm 1 to obtain the matrices $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{U}$ for a receiver connectivity of r = 2, and for an arbitrary receiver connectivity when the cache capacity is above the threshold. Algorithm 1 yields the three matrices for Example 1 used in the sequel.

Then, for each signal in the data matrix, we construct a constellation that is scaled by the monomial set, i.e., the signals use the monomials as their transmission directions, resulting in the signal constellation

(13)

Focusing on Example 1, we want to verify that the interfering signals are aligned, and that the requested subfiles arrive with linearly independent channel coefficients, so that decodability is guaranteed. Starting with $UE_1$, the received constellation corresponding to the desired signals is given as follows