Cache-Aided Fog Radio Access Networks with Partial Connectivity

Ahmed Roushdy1,2, Abolfazl Seyed Motahari3, Mohammed Nafie1 and Deniz Gündüz2
Email: {ahmed.elkordy17, d.gunduz}@imperial.ac.uk, motahari@sharif.edu, mnafie@ieee.org
1Wireless Intelligent Networks Center (WINC), Nile University, Egypt
2Information Processing and Communication Lab, Imperial College London, UK
3Department of Computer Engineering, Sharif University of Technology, Iran
Abstract

Centralized coded caching and delivery is studied for a partially connected fog radio access network (F-RAN), whereby a set of edge nodes (ENs), without caches, connected to a cloud server via orthogonal fronthaul links, serve users over the wireless edge. The cloud server is assumed to hold a library of $N$ files, each of size $F$ bits; and each user, equipped with a cache of size $MF$ bits, is connected to a distinct set of $r$ ENs; or equivalently, the wireless edge from the ENs to the users is modeled as a partial interference channel. The objective is to minimize the normalized delivery time (NDT), which refers to the worst-case delivery latency, when each user requests a single file from the library. An achievable coded caching and transmission scheme is proposed, which utilizes maximum distance separable (MDS) codes in the placement phase and real interference alignment (IA) in the delivery phase. Its achievable NDT is presented for receiver connectivity $r = 2$ and an arbitrary cache size $M$, and also for arbitrary values of $r$ when the cache capacity is sufficiently large.

footnotetext: This work was supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie action TactileNET (grant agreement No. 690893), by the European Research Council (ERC) Starting Grant BEACON (grant agreement No. 725731), and by a grant from the Egyptian Telecommunications Regulatory Authority.

I Introduction

Proactively caching popular contents in user devices during off-peak traffic periods, exploiting the increasingly abundant storage resources in mobile terminals, has been receiving significant attention as a promising solution to reduce network traffic and latency in 5G and future communication networks. A centralized coded proactive caching scheme was first studied by Maddah-Ali and Niesen in [1], where a single server serves multiple cache-enabled users through an error-free shared link; it is shown to provide significant coding gains with respect to classical uncoded caching. More recently, the idea of coded caching has been extended to multi-terminal wireless networks, where transmitters and/or receivers are equipped with cache memories [2, 3, 4]. It is shown in [2] that caches at the transmitters can improve the sum degrees of freedom (DoF) by allowing cooperation between transmitters for interference mitigation. In [3] and [5], this model is extended to a network in which both the transmitters and the receivers are equipped with cache memories. An achievable scheme exploiting real interference alignment (IA) for a general network with caches at both ends is proposed in [4], which also considers decentralized caching at the users.

While the aforementioned papers assume that the transmitter caches are large enough to store the entire database, the fog-aided radio access network (F-RAN) model, introduced in [6], relaxes this requirement and allows the delivery of contents from the cloud server to the edge nodes (ENs) through dedicated fronthaul links. Coded caching for the F-RAN scenario with cache-enabled ENs is studied in [6]. The authors propose a centralized coded caching scheme to minimize the normalized delivery time (NDT), which measures the worst-case delivery latency with respect to an interference-free baseline system in the high signal-to-noise ratio (SNR) regime. In [7], the authors consider a wireless fronthaul that enables coded multicasting. In [8], decentralized coded caching is studied for an F-RAN architecture with two ENs, in which both the ENs and the users have caches. We note that the models in [6, 7, 8] assume a fully connected interference network between the ENs and the users. A partially connected F-RAN is studied in [9] from an online caching perspective.

If each EN is connected to a subset of the users through dedicated orthogonal links, the corresponding architecture is called a combination network [10, 11, 12]. In combination networks, the server is connected to a set of relay nodes (i.e., ENs), which communicate with the users, such that each user is connected to a distinct set of relays. The links are assumed to be error- and interference-free. The objective is to determine the minimal max-link load, defined as the smallest maximum rate among all the links (which is proportional to the download time) for the worst-case demand. Note that, although the delivery from the ENs to the users takes place over orthogonal links, that is, there are no multicasting opportunities as in the Maddah-Ali and Niesen model in [1], the fact that the contents for multiple users are delivered from the server to each relay through a single link allows coded delivery to offer similar gains. The authors of [11] consider networks that satisfy the resolvability property, which imposes a divisibility condition between the network parameters. Combination networks with caches at both the relays and the users are studied in [12]. For the case in which there are no caches at the relays, the authors are able to achieve the same performance as in [11] without requiring the resolvability property. A partially connected cache-aided network model is studied in [13], which assumes a random topology during the delivery phase that is unknown during placement.

In this paper, we study the centralized caching problem in an F-RAN with cache memories at the users, as depicted in Fig. 1. Our work differs from the aforementioned prior works on F-RANs in that we consider a partially connected interference channel from the ENs to the users, instead of a fully connected F-RAN architecture. Partial connectivity may be due to physical constraints that block the signals, or to the long distance between some of the EN-user pairs.

Note that the considered network topology, in which the ENs act as relay nodes for the users they serve, is similar to a combination network; however, we consider interfering wireless links from the ENs to the users instead of dedicated links, and study the normalized delivery time (NDT) in the high SNR regime. The authors in [14] study the NDT for a partially connected interference channel with caches at both the transmitters and the receivers, where each receiver is connected to a fixed number of consecutive transmitters. Our work differs from [14] in that we also take into consideration the fronthaul links from the server to the ENs, and consider a network topology in which the number of transmitters (ENs in our model) is less than or equal to the number of receivers.

We formulate the minimum NDT problem for an arbitrary receiver connectivity $r$, which denotes the number of ENs each user is connected to. Then, we propose a centralized caching and delivery scheme that exploits real interference alignment (IA) to minimize the NDT for a receiver connectivity of $r = 2$. We then extend this scheme to arbitrary receiver connectivity for certain cache capacities. For the proposed scheme, we show that increasing the receiver connectivity $r$, for the same number of ENs and users, reduces the total NDT in the specific cache capacity region studied, while the amount of reduction depends on the fronthaul capacity.

Notation: We denote sets with calligraphic symbols, and vectors with bold symbols. The set of integers $\{1, \ldots, n\}$ is denoted by $[n]$. The cardinality of set $\mathcal{S}$ is denoted by $|\mathcal{S}|$.

II System Model and Performance Measure

Fig. 1: F-RAN architecture with receiver connectivity $r = 2$, where $K_T = 5$ ENs serve $K_R = 10$ users.

A System Model

We consider an F-RAN, illustrated in Fig. 1, which consists of a cloud server that holds a library of $N$ files, $\mathcal{W} = \{W_1, \ldots, W_N\}$, each of size $F$ bits, and a set of $K_T$ ENs, $EN_1, \ldots, EN_{K_T}$, that help the cloud server serve the requests of a set of $K_R$ users. The edge network from the ENs to the users is a partially connected interference channel, where each user is connected to a distinct set of $r$ ENs, where $r$ is referred to as the receiver connectivity. The number of users is $K_R = \binom{K_T}{r}$, which means that each distinct $r$-element subset of ENs serves exactly one user. In this F-RAN architecture, $EN_s$, $s \in [K_T]$, is connected to $\eta = \binom{K_T - 1}{r - 1}$ users. Each user is equipped with a cache memory of size $MF$ bits, while the ENs have no caches. We define the normalized cache capacity of the users as $\mu \triangleq M/N$. We denote the set of users connected to $EN_s$ by $\mathcal{K}(s)$, where $|\mathcal{K}(s)| = \eta$, and the set of ENs connected to user $k$ by $\mathcal{N}(k)$, where $|\mathcal{N}(k)| = r$. We will use the function $\pi(s, k)$, defined in [12], which returns $0$ if user $k$ is not served by $EN_s$, and otherwise returns the relative order of user $k$ among the users served by $EN_s$. For example, if user $k$ is the second user, in ascending order of user index, served by $EN_s$, then $\pi(s, k) = 2$.
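To make the connectivity pattern concrete, the following minimal sketch generates $\mathcal{K}(s)$, $\mathcal{N}(k)$, and $\pi(s, k)$. It assumes, as in combination networks, that the $K_R$ users are indexed by the distinct $r$-element subsets of ENs; the parameter values mirror Example 1 below.

```python
from itertools import combinations

# Topology sketch: users indexed by the distinct r-subsets of the K_T ENs
# (an assumed labeling; any bijection between users and r-subsets works).
K_T, r = 5, 2
ens = list(range(1, K_T + 1))
users = {k: frozenset(subset)          # N(k): the r ENs serving user k
         for k, subset in enumerate(combinations(ens, r), start=1)}
K_R = len(users)                       # equals C(K_T, r)

def K(s):
    """K(s): the users connected to EN s, in ascending order of index."""
    return [k for k in sorted(users) if s in users[k]]

def pi(s, k):
    """pi(s, k): 0 if EN s does not serve user k, else the relative
    order of user k among the users served by EN s."""
    served = K(s)
    return served.index(k) + 1 if k in served else 0

assert K_R == 10                                  # C(5, 2) users
assert all(len(K(s)) == 4 for s in ens)           # eta = C(K_T-1, r-1) = 4
```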

The system operates in two phases: a placement phase and a delivery phase. The placement phase takes place when the traffic load is low, and the users are given access to the entire library $\mathcal{W}$. Each user $k$ is then able to fill its cache, denoted by $Z_k$, using the library without any prior knowledge of the future demands or the channel coefficients. In the delivery phase, user $k$ requests a file $W_{d_k}$ from the library. We define $\mathbf{d} \triangleq (d_1, \ldots, d_{K_R})$ as the demand vector. The cloud is connected to each EN via a fronthaul link of capacity $C_F$ bits per symbol, where a symbol refers to a single use of the edge channel from the ENs to the users.

Once the demands are received, the cloud server sends a message $U_s$ of blocklength $T_F$ to $EN_s$, $s \in [K_T]$, via its fronthaul link. This message is limited to $T_F C_F$ bits to guarantee correct decoding at $EN_s$ with high probability. In this paper, we consider half-duplex ENs; that is, the ENs start transmitting only after receiving their messages from the cloud server. This is called serial transmission in [6], and the overall latency is the sum of the latencies of the fronthaul and the edge connections. $EN_s$ has an encoding function that maps the fronthaul message $U_s$, the demand vector $\mathbf{d}$, and the channel coefficients $\mathbf{H} = \{h_{ks}\}$, where $h_{ks}$ denotes the complex channel gain from $EN_s$ to user $k$, to a codeword $\mathbf{x}_s = (x_s(1), \ldots, x_s(T_E))$ of blocklength $T_E$, which must satisfy an average power constraint of $P$. User $k$ decodes its requested file as $\hat{W}_{d_k}$ by using its cache contents $Z_k$, the received signal $\mathbf{y}_k$, as well as its knowledge of the channel gains and the demand vector. We have

$y_k(t) = \sum_{s \in \mathcal{N}(k)} h_{ks} x_s(t) + z_k(t), \quad t \in [T_E],$ (1)

where $z_k(t)$ denotes the complex Gaussian noise at the $k$-th user. The channel gains are independent and identically distributed (i.i.d.) according to a continuous distribution, and remain constant within each transmission interval. The probability of error for a coding scheme, consisting of the caching, cloud encoding, EN encoding, and user decoding functions, is defined as

$P_e = \max_{\mathbf{d}} \max_{k \in [K_R]} \Pr\left( \hat{W}_{d_k} \neq W_{d_k} \right),$ (2)

which is the worst-case probability of error over all possible demand vectors and over all users. We say that a coding scheme is feasible if and only if $P_e \to 0$ as $F \to \infty$, for almost all realizations of the channel matrix $\mathbf{H}$.
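As a toy illustration of the edge channel in (1), the snippet below generates one received symbol for a user with the hypothetical connectivity $\mathcal{N}(k) = \{1, 2\}$; note that only the connected ENs contribute to $y_k$.

```python
import random

# Toy instance of the edge channel (1): user k hears only the ENs in N(k).
random.seed(0)
cgauss = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))

K_T = 5
N_k = {1, 2}                                     # hypothetical N(k)
h = {s: cgauss() for s in N_k}                   # i.i.d. continuous channel gains
x = {s: cgauss() for s in range(1, K_T + 1)}     # EN transmit symbols
z_k = cgauss()                                   # complex Gaussian noise
y_k = sum(h[s] * x[s] for s in N_k) + z_k        # received symbol at user k
```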

User $k$, $k \in [10]$ | Cache contents $Z_k = \left\{ f^s_{n, \pi(s,k)} : s \in \mathcal{N}(k),\ n \in [10] \right\}$
TABLE I: Cache contents after the placement phase for the F-RAN scenario considered in Example 1, where $K_T = 5$, $r = 2$, $N = 10$, and $M = 5/2$.
TABLE II: The data delivered from the cloud server to each EN for Example 1.

B Performance Measure

We will consider the normalized delivery time (NDT) in the high SNR regime [15] as the performance measure. For cache capacity $\mu$ and fronthaul rate $r_F$, $\tau(\mu, r_F)$ is an achievable NDT if there exists a sequence of feasible codes that satisfy

$\tau(\mu, r_F) = \lim_{P \to \infty} \limsup_{F \to \infty} \frac{T_F + T_E}{F / \log P}.$ (3)

We define the minimum NDT for a given tuple $(\mu, r_F)$ as $\tau^*(\mu, r_F) = \inf \left\{ \tau(\mu, r_F) : \tau(\mu, r_F) \text{ is achievable} \right\}$.

Let $R_F$ denote the worst-case traffic load from the cloud server to an EN, and $R_E$ the worst-case traffic load per user, both normalized by the file size $F$. The per-user capacity in the high SNR regime can be approximated by $d \log P + o(\log P)$, where $d$ is the per-user DoF, while the capacity of the fronthaul link is given by $C_F = r_F \log P + o(\log P)$, where $r_F$ is called the fronthaul rate. Then, the NDT can be expressed more conveniently as [5]

$\tau = \tau_F + \tau_E = \frac{R_F}{r_F} + \frac{R_E}{d},$ (4)

where $\tau_F \triangleq R_F / r_F$ represents the fronthaul NDT, and $\tau_E \triangleq R_E / d$ represents the edge NDT. This expression shows that the NDT characterizes the delivery time of the actual traffic load at a transmission rate specified by the DoF $d$.
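The decomposition in (4) is straightforward to evaluate numerically; the following helper computes the total NDT. The numeric values are illustrative placeholders, not results from the paper.

```python
# Helper reflecting (4): total NDT = fronthaul NDT + edge NDT.
def ndt(R_F: float, r_F: float, R_E: float, d: float) -> float:
    """Total NDT as in (4)."""
    tau_F = R_F / r_F    # fronthaul NDT: load R_F delivered at rate r_F
    tau_E = R_E / d      # edge NDT: per-user load R_E delivered at DoF d
    return tau_F + tau_E

# Illustrative placeholder values:
print(ndt(R_F=0.75, r_F=1.0, R_E=0.75, d=2/3))   # 0.75 + 1.125 = 1.875
```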

III Main Result

The main result of the paper is presented next.

Theorem 1.

For a partially connected F-RAN architecture with user cache capacity $M$, fronthaul rate $r_F$, $N$ files, and centralized cache placement, the following NDT is achievable for integer values of $\eta\mu$:

(5)

for a receiver connectivity of $r = 2$, or for arbitrary receiver connectivity when the cache capacity is sufficiently large. The NDT for non-integer values of $\eta\mu$ can be obtained as a linear combination of the NDTs at integer values through memory-sharing.

Remark: From Theorem 1, when $r_F \to \infty$, the fronthaul NDT vanishes, and the total NDT reduces to the edge NDT $\tau_E$.

Consider two different F-RAN architectures, F-RAN A and F-RAN B, with receiver connectivities $r_A$ and $r_B$, respectively, where $r_A > r_B$, and with the numbers of ENs chosen such that the two networks have the same number of users $K_R$. One can then show that the achievable NDT in F-RAN A is lower, showing that the increased connectivity helps in reducing the NDT despite the increased interference, and the gap between the two achievable NDTs becomes negligible as the fronthaul rate increases, i.e., as $r_F \to \infty$. We illustrate the achievable NDT performance of two such networks in Fig. 2 for different fronthaul rates. We observe from the figure that, for the same cache capacity, the achievable NDT of network A is less than or equal to that of network B, and the gap between the two increases as the fronthaul rate decreases. This suggests that the achievable NDT for a given F-RAN architecture decreases as the connectivity increases, where the amount of decrease depends on the fronthaul rate.

Fig. 2: Comparison of the achievable NDT for two F-RAN architectures with the same number of users and the same library size, for different receiver connectivities and fronthaul rates.

IV Centralized Coded Caching

In this section, we present a centralized coded caching scheme for the partially connected F-RAN architecture with a receiver connectivity of $r = 2$, and also for any receiver connectivity when the cache capacity is sufficiently large.

A Cache Placement Phase

We use the cache placement algorithm proposed in [12]: the cloud server divides each file into $r$ equal-size subfiles, and then encodes them using a $(K_T, r)$ maximum distance separable (MDS) code [16]. The resulting coded chunks, each of size $F/r$ bits, are denoted by $f_n^s$, where $n$ is the file index and $s \in [K_T]$. Each $EN_s$ acts as a virtual server for the encoded chunks $f_n^s$, $n \in [N]$. Note that any $r$ encoded chunks are sufficient to reconstruct the whole file.

Each encoded chunk is further divided into $\eta$ equal-size non-overlapping pieces, denoted by $f_{n,j}^s$, where $n \in [N]$ and $j \in [\eta]$. The pieces $f_{n,j}^s$, $n \in [N]$, are stored in the cache memory of user $k$ if $s \in \mathcal{N}(k)$ and $j = \pi(s, k)$; that is, the pieces of chunk $f_n^s$, $j \in [\eta]$, are stored by the users connected to $EN_s$. At the end of the placement phase, each user stores $rN$ pieces, each of size $\frac{F}{r\eta}$ bits, which sum up to $\frac{NF}{\eta} = MF$ bits, satisfying the memory constraint. We will explain the placement phase through an example.

Example 1. Consider the partially connected F-RAN depicted in Fig. 1, where $K_T = 5$, $r = 2$, and $K_R = 10$. For $N = 10$ and $M = 5/2$, i.e., $\mu = 1/4$, the cloud server first divides each file into $2$ subfiles. These subfiles are then encoded using a $(5, 2)$ MDS code. As a result, there are $5$ encoded chunks per file, denoted by $f_n^s$, $n \in [10]$, $s \in [5]$, each of size $F/2$ bits. Each encoded chunk is further divided into $\eta = 4$ pieces $f_{n,j}^s$, $j \in [4]$. The cache contents of each user are listed in TABLE I. Observe that each user stores two pieces of the encoded chunks of each of the $10$ files, i.e., $20 \cdot \frac{F}{8} = MF$ bits, which satisfies the memory constraint.
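The placement rule and the memory constraint of Example 1 can be checked with a short script. The piece indexing below follows the rule above and assumes the subset labeling of users used in the earlier sketch.

```python
from fractions import Fraction
from itertools import combinations

# Placement sketch for Example 1 under the assumed subset labeling.
K_T, r, N = 5, 2, 10
ens = list(range(1, K_T + 1))
users = {k: frozenset(c) for k, c in enumerate(combinations(ens, r), start=1)}
eta = len([k for k in users if 1 in users[k]])        # users per EN: C(K_T-1, r-1)

def pi(s, k):
    served = [u for u in sorted(users) if s in users[u]]
    return served.index(k) + 1 if k in served else 0

# Cache of user k: for every file, one piece index per connected EN.
cache = {k: {(n, s, pi(s, k)) for n in range(1, N + 1) for s in users[k]}
         for k in users}

piece_size = Fraction(1, r * eta)                     # in units of F bits
assert all(len(cache[k]) == r * N for k in users)     # r*N = 20 pieces cached
mem = len(cache[1]) * piece_size                      # total cache, in units of F
print(mem)                                            # 5/2, i.e., M = 5/2
```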

B Delivery Phase

The delivery phase is carried out in two steps. The first step is the delivery from the cloud server to the ENs, and the second step is the delivery from the ENs to the users.

B1 Delivery from the cloud server to the ENs

For each $2$-element subset $\{k, k'\}$ of $\mathcal{K}(s)$, i.e., $k, k' \in \mathcal{K}(s)$ and $k \neq k'$, the cloud server will deliver the following message to $EN_s$:

$V^s_{\{k, k'\}} = f^s_{d_k, \pi(s, k')} \oplus f^s_{d_{k'}, \pi(s, k)}.$ (6)

Overall, for a given demand vector $\mathbf{d}$, the following set of messages will be delivered to $EN_s$:

$U_s = \left\{ V^s_{\{k, k'\}} : \{k, k'\} \subseteq \mathcal{K}(s) \right\},$ (7)

which is of size $\binom{\eta}{2} \frac{F}{r\eta} = \frac{(\eta - 1)F}{2r}$ bits. The messages to be delivered to each EN in Example 1 are given in TABLE II, which results in a normalized fronthaul traffic load of $3/4$. Hence, the achievable NDT from the cloud server to the ENs is $\frac{3}{4 r_F}$. In general, the NDT from the cloud server to the ENs is

$\tau_F = \frac{\eta - 1}{2 r r_F}.$ (8)
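A sketch of the fronthaul message construction in (6) and (7) for a single EN follows, with hypothetical user indices and demands, and 1-byte toy pieces standing in for the $F/(r\eta)$-bit pieces.

```python
from itertools import combinations

# Fronthaul messages (6)-(7) for one EN, under the pairwise delivery rule.
eta, r = 4, 2
K_s = [1, 2, 3, 4]                        # hypothetical K(s) for some EN s
d = {k: k for k in K_s}                   # hypothetical demands d_k
piece = lambda n, j: bytes([n * 16 + j])  # toy 1-byte piece f^s_{n,j}

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pi = {k: i + 1 for i, k in enumerate(K_s)}       # relative order within K(s)
U_s = {frozenset(T): xor(piece(d[T[0]], pi[T[1]]),
                         piece(d[T[1]], pi[T[0]]))
       for T in combinations(K_s, 2)}            # one XOR per 2-subset of K(s)

# C(eta, 2) messages, each the size of one piece (F/(r*eta) bits),
# giving the normalized per-EN load (eta - 1)/(2r) = 3/4 here.
assert len(U_s) == eta * (eta - 1) // 2
```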
TABLE III: The interference matrices at the users of Example 1.
1  Initialize $\mathbf{B}$, $\mathbf{D}$, and $\mathbf{U}$
2  FOR each user $k$
3    FOR each EN $s \in \mathcal{N}(k)$
4
5      FOR each unassigned interference signal $v$ in $\boldsymbol{\lambda}_k^s$
6
7        Find $\mathcal{A}$: the set of other users receiving the same interference signal $v$. Sort the users in $\mathcal{A}$ in ascending order.
8        For each user in $\mathcal{A}$, find the interference vector containing $v$.
9        Collect the set of these vectors.
10     END FOR
11     IF the signals cannot be added to an existing alignment group
12       FOR
13         FOR
14           FOR
15             IF a conflict is found
16
17               Go to 21, i.e., the next iteration.
18             END IF
19           END FOR
20         END FOR
21       END FOR
22     END IF
23     Form a new alignment group as a new row of $\mathbf{B}$, $\mathbf{D}$, and $\mathbf{U}$:
24       FOR
25         FOR
26
27         END FOR
28       END FOR
29     Remove the assigned interference signals from $\boldsymbol{\lambda}_k^s$
30
31   END FOR
32 END FOR
Algorithm 1 Generator for the $\mathbf{B}$, $\mathbf{D}$, and $\mathbf{U}$ matrices

B2 Delivery from the ENs to the users

User $k$, $k \in [K_R]$, is interested in the following set of messages:

$\mathcal{V}_k = \left\{ V^s_{\{k, k'\}} : s \in \mathcal{N}(k),\ k' \in \mathcal{K}(s) \setminus \{k\} \right\}.$ (9)

On the other hand, the transmission of the following set of messages interferes with the transmission of the messages in $\mathcal{V}_k$:

$\mathcal{I}_k = \left\{ V^s_{\mathcal{T}} : s \in \mathcal{N}(k),\ \mathcal{T} \subseteq \mathcal{K}(s) \setminus \{k\},\ |\mathcal{T}| = 2 \right\}.$ (10)

Each message $V^s_{\mathcal{T}}$ causes interference at the $\eta - 2$ users in $\mathcal{K}(s) \setminus \mathcal{T}$. Hence, the total number of interfering signals at user $k$ from the ENs in $\mathcal{N}(k)$ is $r \binom{\eta - 1}{2}$, where $\binom{\eta - 1}{2}$ is the number of interfering signals from each EN connected to user $k$.

At each user $k$, $k \in [K_R]$, we define the interference matrix $\mathbf{\Lambda}_k$ to be a matrix with $r$ columns, denoted by $\boldsymbol{\lambda}_k^s$, $s \in \mathcal{N}(k)$, each column representing the interference caused by the corresponding EN, and $\binom{\eta - 1}{2}$ rows. For each column vector $\boldsymbol{\lambda}_k^s$, we sort the corresponding set of interfering signals in ascending order, so that the $q$-th entry of $\boldsymbol{\lambda}_k^s$ is the $q$-th element of the set in this order. For Example 1, the interference matrices are shown in TABLE III.
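The counts above, and the resulting per-user DoF derived later in this section, can be verified by enumerating the desired and interfering messages at a user; the sketch below reuses the subset labeling assumed earlier.

```python
from itertools import combinations
from math import comb

# Enumerate the desired and interfering messages of (9) and (10) at user 1.
K_T, r = 5, 2
ens = range(1, K_T + 1)
users = {k: frozenset(c) for k, c in enumerate(combinations(ens, r), start=1)}

def K(s):
    return [u for u in sorted(users) if s in users[u]]

k = 1                                              # focus on user 1
desired = [(s, T) for s in users[k]
           for T in combinations(K(s), 2) if k in T]
interf = [(s, T) for s in users[k]
          for T in combinations(K(s), 2) if k not in T]

eta = len(K(1))
assert len(desired) == r * (eta - 1)               # 6 desired signals
assert len(interf) == r * comb(eta - 1, 2)         # 6 interfering signals
# With the r interfering signals (one per EN) aligned together, the
# interference occupies comb(eta-1, 2) subspaces, giving the per-user DoF:
dof = (r * (eta - 1)) / (r * (eta - 1) + comb(eta - 1, 2))
print(dof)                                         # 6 / 9 = 2/3
```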

We will use real IA, presented in [17] and [18], for the delivery phase from the ENs to the users, in order to align, at each user, every set of $r$ interfering signals, one from each connected EN, into the same subspace. We define $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{U}$ to be the basis matrix (i.e., a function of the channel coefficients), the data matrix, and the user matrix, respectively, all having the same number of rows, one row per alignment group. We denote the rows of these matrices by $\mathbf{b}_i$, $\mathbf{d}_i$ and $\mathbf{u}_i$, respectively. Each row vector $\mathbf{b}_i$ is used to generate a set of monomials $\mathcal{M}_i$, following the monomial construction defined in [2]. The sets $\mathcal{M}_i$ are used as the transmission directions for the modulation constellation [2] for the whole network. In other words, the signals in each row data vector $\mathbf{d}_i$ use the set $\mathcal{M}_i$ as the transmission directions of all their data, so as to align all the interfering signals of the group into the same subspace at the user indicated by $\mathbf{u}_i$.

We next explain the data matrix $\mathbf{D}$ more clearly. For each set of $r$ interfering signals to be aligned, one from each EN, there is a user at which these data signals collapse into the same dimension, namely the user indicated by the corresponding entry of $\mathbf{U}$. The row $\mathbf{d}_i$ is given as follows,

(11)

We employ Algorithm 1 to obtain the matrices $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{U}$ for a receiver connectivity of $r = 2$, and for arbitrary receiver connectivity when the cache capacity is sufficiently large. For Example 1, the three matrices are given as follows:

Then, for each signal in the data matrix, we construct a constellation that is scaled by the corresponding monomial set, i.e., the signals in row $\mathbf{d}_i$ use the monomials in $\mathcal{M}_i$, resulting in the signal constellation

(12)

Focusing on the users of Example 1, we want to verify that the interfering signals are indeed aligned, and that the requested subfiles arrive with independent channel coefficients, so that decodability is guaranteed. Starting with user 1, the received constellation for the six desired signals in $\mathcal{V}_1$ is given as follows:

(13)

The received constellation for the six interfering signals in $\mathcal{I}_1$ is given by

(14)

Equation (14) shows that every two interfering signals, one from each EN, e.g., the first two terms in (14), have collapsed into the same constellation subspace. Moreover, since the monomial sets of different alignment groups do not overlap and linear independence is obtained, distinct groups of aligned interfering signals occupy different subspaces.

We can also see in (13) that the monomials of the desired signals do not align; rational independence is guaranteed (with high probability), and the desired signals are received over 6 different subspaces, while the aligned interference occupies 3. Since the monomials form different constellations, whose terms are functions of different channel coefficients, these monomials do not overlap. Hence, we can claim that real IA is achieved, and each user can achieve a DoF of $\frac{6}{6 + 3} = \frac{2}{3}$. In general, the achievable DoF per user is given by

$d = \frac{r(\eta - 1)}{r(\eta - 1) + \binom{\eta - 1}{2}}.$ (15)

Thus, our scheme guarantees that the desired signals at each user are received over different subspaces, while every $r$ interfering signals, one from each EN, are aligned into the same subspace, resulting in a total of $\binom{\eta - 1}{2}$ interference subspaces.

When the cache capacity is sufficiently large, the number of interfering signals at each user is zero. Hence, we simply transmit the constellation points corresponding to each signal. Decodability is guaranteed since all channel coefficients are i.i.d. according to a continuous distribution. As a result, each user is able to achieve a DoF of $d = 1$.

User $k$ utilizes its cached pieces to extract the requested pieces $f^s_{d_k, j}$ from the received messages, for $s \in \mathcal{N}(k)$ and $j \in [\eta] \setminus \{\pi(s, k)\}$. Therefore, user $k$ reconstructs the $r$ chunks $f^s_{d_k}$, $s \in \mathcal{N}(k)$, and decodes its requested file $W_{d_k}$. In Example 1, user 1 utilizes its cache contents in TABLE I to extract the missing pieces of the two chunks delivered through the ENs in $\mathcal{N}(1)$; hence, user 1 reconstructs both chunks and decodes its requested file $W_{d_1}$; and similarly for the remaining users. We have $R_E = \frac{\eta - 1}{\eta} = \frac{3}{4}$. Thus, the edge NDT from the ENs to the users is equal to $\frac{3/4}{2/3} = \frac{9}{8}$, while the total NDT is $\frac{9}{8} + \frac{3}{4 r_F}$. In the general case, the NDT from the ENs to the users is given by

$\tau_E = \frac{R_E}{d} = \frac{\eta - 1}{\eta} \cdot \frac{r(\eta - 1) + \binom{\eta - 1}{2}}{r(\eta - 1)}.$ (16)
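The end-to-end operation at a single user can be illustrated with a toy example. In the sketch below, a Reed-Solomon-style code over GF(257) stands in for the MDS code, addition modulo 257 stands in for the XOR (so that a cached piece can be subtracted out), and, for brevity, all users request the same file; topology and labels are assumed as in the earlier sketches.

```python
from itertools import combinations

# Toy end-to-end run at one user, under the assumed subset labeling.
p, K_T, r, eta = 257, 5, 2, 4
ens = range(1, K_T + 1)
users = {k: tuple(c) for k, c in enumerate(combinations(ens, r), start=1)}
K = lambda s: [u for u in sorted(users) if s in users[u]]
pi = lambda s, k: K(s).index(k) + 1

# One file as 2*eta field symbols: subfiles a and b, eta symbols each.
a, b = [3, 1, 4, 1], [5, 9, 2, 6]
chunk = lambda s: [(a[j] + s * b[j]) % p for j in range(eta)]  # MDS chunk f^s
piece = lambda s, j: chunk(s)[j - 1]                           # piece f^s_j

k = 1                                       # user 1, connected to ENs 1 and 2
recovered = {}
for s in users[k]:
    got = {pi(s, k): piece(s, pi(s, k))}    # the piece user 1 has cached
    for kp in K(s):
        if kp != k:
            # "XOR" V^s_{k,kp}; all users request the same file here.
            v = (piece(s, pi(s, kp)) + piece(s, pi(s, k))) % p
            got[pi(s, kp)] = (v - got[pi(s, k)]) % p   # subtract cached piece
    recovered[s] = [got[j] for j in sorted(got)]       # full chunk f^s

# Any r = 2 chunks suffice: interpolate a and b from the chunks at s1, s2.
s1, s2 = users[k]
inv = pow(s1 - s2, -1, p)
b_hat = [((recovered[s1][j] - recovered[s2][j]) * inv) % p for j in range(eta)]
a_hat = [(recovered[s1][j] - s1 * b_hat[j]) % p for j in range(eta)]
assert a_hat == a and b_hat == b            # requested file reconstructed
```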

V Conclusions

We have studied centralized caching and delivery over a partially connected F-RAN with a specified network topology between the ENs and the users. We have proposed a coded caching and delivery scheme that exploits real IA for a receiver connectivity of $r = 2$, that is, when each user can be served by two ENs, or for any receiver connectivity when the user cache capacities are sufficiently large. We have derived the achievable NDT for this scheme, and showed that increasing the receiver connectivity for the same number of ENs and users reduces the NDT for the specific cache capacity values considered, while the amount of reduction depends on the fronthaul rate. The former result follows from the real IA scheme employed, which carefully manages the interference; thus, the additional connectivity improves delivery over the edge network rather than increasing the interference. The latter result is due to the fact that the amount of data transmitted through each fronthaul link is smaller for the network with higher connectivity than for the network with lower connectivity; hence, increasing the fronthaul rate improves the performance of the latter network more, resulting in a relatively smaller gap between the two.

References

  • [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. on Inform. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [2] ——, “Cache-aided interference channels,” in IEEE Int’l Symp. on Inform. Theory, June 2015, pp. 809–813.
  • [3] N. Naderializadeh, M. A. Maddah-Ali, and A. S. Avestimehr, “Fundamental limits of cache-aided interference management,” IEEE Trans. on Inform. Theory, vol. 63, no. 5, pp. 3092–3107, May 2017.
  • [4] J. P. Roig, D. Gunduz, and F. Tosato, “Interference networks with caches at both ends,” in IEEE Int’l Conf. on Comms., May 2017.
  • [5] F. Xu, M. Tao, and K. Liu, “Fundamental tradeoff between storage and latency in cache-aided wireless interference networks,” IEEE Trans. on Inform. Theory, vol. PP, no. 99, pp. 1–28, 2017.
  • [6] A. Sengupta, R. Tandon, and O. Simeone, “Fog-aided wireless networks for content delivery: Fundamental latency tradeoffs,” IEEE Trans. on Inform. Theory, vol. 63, no. 10, pp. 6650–6678, Oct 2017.
  • [7] J. Koh, O. Simeone, R. Tandon, and J. Kang, “Cloud-aided edge caching with wireless multicast fronthauling in fog radio access networks,” in IEEE Wireless Comms. and Networking Conf., Mar. 2017, pp. 1–6.
  • [8] A. M. Girgis, O. Ercetin, M. Nafie, and T. ElBatt, “Decentralized coded caching in wireless networks: Trade-off between storage and latency,” in IEEE Int’l Symp. on Inform. Theory, Jun. 2017, pp. 2443–2447.
  • [9] S. M. Azimi, O. Simeone, and R. Tandon, “Content delivery in fog-aided small-cell systems with offline and online caching: An information theoretic analysis,” Entropy, vol. 19, no. 7, 2017.
  • [10] M. Ji, M. F. Wong, A. M. Tulino, J. Llorca, G. Caire, M. Effros, and M. Langberg, “On the fundamental limits of caching in combination networks,” in IEEE Int’l Workshop on Signal Proc. Advances in Wireless Comms., Jun. 2015, pp. 695–699.
  • [11] L. Tang and A. Ramamoorthy, “Coded caching for networks with the resolvability property,” in IEEE Int’l Symp. on Inform. Theory, July 2016, pp. 420–424.
  • [12] A. A. Zewail and A. Yener, “Coded caching for combination networks with cache-aided relays,” in IEEE Int’l Symp. on Inform. Theory, Jun. 2017, pp. 2433–2437.
  • [13] N. Mital, D. Gunduz, and C. Ling, “Coded caching in a multi-server system with random topology,” in IEEE Wireless Communications and Networking Conference (WCNC), Apr. 2018.
  • [14] F. Xu and M. Tao, “Cache-aided interference management in partially connected wireless networks,” ArXiv e-prints, Aug. 2017.
  • [15] J. Zhang and P. Elia, “Fundamental limits of cache-aided wireless bc: Interplay of coded-caching and csit feedback,” IEEE Trans. on Inform. Theory, vol. 63, no. 5, pp. 3142–3160, May 2017.
  • [16] S. Lin and D. J. Costello, Error Control Coding, 2nd ed. Prentice Hall, 2004.
  • [17] A. S. Motahari, S. Oveis-Gharan, M. A. Maddah-Ali, and A. K. Khandani, “Real interference alignment: Exploiting the potential of single antenna systems,” IEEE Trans. on Inform. Theory, vol. 60, no. 8, pp. 4799–4810, Aug 2014.
  • [18] M. A. Maddah-Ali, “On the degrees of freedom of the compound MISO broadcast channels with finite states,” in IEEE Int’l Symp. on Inform. Theory, Jun. 2010, pp. 2273–2277.