Enhanced VIP Algorithms for Forwarding, Caching, and Congestion Control in Named Data Networks

Abstract

Emerging Information-Centric Networking (ICN) architectures seek to optimally utilize both bandwidth and storage for efficient content distribution over the network. The Virtual Interest Packet (VIP) framework has been proposed to enable joint design of forwarding, caching, and congestion control strategies within the Named Data Networking (NDN) architecture. While the existing VIP algorithms exhibit good performance, they are primarily focused on maximizing network throughput and utility, and do not explicitly consider user delay. In this paper, we develop a new class of enhanced algorithms for joint dynamic forwarding, caching and congestion control within the VIP framework. These enhanced VIP algorithms adaptively stabilize the network and maximize network utility, while improving the delay performance by intelligently making use of VIP information beyond one hop. Generalizing Lyapunov drift techniques, we prove the throughput optimality and characterize the utility-delay tradeoff of the enhanced VIP algorithms. Numerical experiments demonstrate the superior performance of the resulting enhanced algorithms for handling Interest Packets and Data Packets within the actual plane, in terms of low network delay and high network utility.

1 Introduction

It is increasingly clear that traditional connection-based networking architectures are ill suited for the prevailing user demands for network content [1]. Emerging Information-Centric Networking (ICN) architectures aim to remedy this fundamental mismatch so as to dramatically improve the efficiency of content dissemination over the Internet. In particular, Named Data Networking (NDN) [1], or Content-Centric Networking (CCN)[2], is a proposed network architecture for the Internet that replaces the traditional client-server communication model with one based on the identity of data or content.

Content delivery in NDN is accomplished using Interest Packets and Data Packets, along with specific data structures in nodes such as the Forwarding Information Base (FIB), the Pending Interest Table (PIT), and the Content Store (cache). Communication is initiated by a data consumer or requester sending a request for the data using an Interest Packet. Interest Packets are forwarded along routes determined by the FIB at each node. Repeated requests for the same object can be suppressed at each node according to its PIT. The Data Packet is subsequently transmitted back along the path taken by the corresponding Interest Packet, as recorded by the PIT at each node. A node may optionally cache the data objects contained in the received Data Packets in its local Content Store. Consequently, a request for a data object can be fulfilled not only by the content source but also by any node with a copy of that object in its cache. Please see [1] for details.

NDN seeks to optimally utilize both bandwidth and storage for efficient content distribution, which highlights the need for joint design of traffic engineering and caching strategies, in order to optimize network performance. To address this fundamental problem, in our previous work [3], we proposed the VIP framework for the design of high-performing NDN networks. Within this VIP framework, we developed joint dynamic forwarding, caching and congestion control algorithms operating on virtual interest packets (VIPs) to maximize network utility subject to network stability in the virtual plane, using Lyapunov drift techniques [3]. Then, using the resulting flow rates and queue lengths of the VIPs in the virtual plane, we developed the corresponding joint dynamic algorithms in the actual plane, which have been shown to achieve superior performance in terms of network throughput, user utility, user delay, and cache hit rates, relative to several baseline policies.

While the VIP algorithms in [3] exhibit excellent performance, they are primarily focused on maximizing network throughput and do not explicitly consider user delay. In this paper, we aim to further improve the delay performance of the existing VIP algorithms in [3] by leveraging VIP information beyond one hop. There are several potential challenges in pursuing this. First, it is not clear how one should improve the delay performance of the existing VIP algorithms by jointly modifying forwarding, caching and congestion control in a tractable manner. Second, it is not clear how to maintain the desired throughput optimality and utility-delay tradeoff of the existing VIP algorithms when the Lyapunov-drift-based control structure is modified for improving the delay performance.

In the following, we shall address the above questions and challenges. We first develop a new class of enhanced distributed forwarding and caching algorithms operating on VIPs to stabilize the network in the virtual plane. We then extend the algorithm to include congestion control, thus achieving a favorable utility-delay tradeoff. These enhanced VIP algorithms reduce the delay of the existing VIP algorithms by 1) exploiting the margin between the VIP arrival rate vector and the boundary of the VIP network stability region, and 2) making use of VIP information beyond one hop in a simple and flexible manner. Generalizing Lyapunov drift techniques, we demonstrate the throughput optimality and characterize the utility-delay tradeoff of the enhanced VIP algorithms. These enhanced VIP algorithms generalize the VIP algorithms in [3] in the sense that they maintain network stability and maximize network utility while improving delay performance. In addition, these enhanced VIP algorithms (designed for NDN networks) extend the enhanced dynamic backpressure algorithms in [5] (designed for traditional source-destination networks) in the sense that they incorporate caching into the joint design of dynamic forwarding (routing) and congestion control. Numerical experiments demonstrate the superior performance of the resulting enhanced algorithms for handling Interest Packets and Data Packets within the actual plane, in terms of low network delay and high network utility.

Although there is now a rapidly growing literature in ICN, the problem of optimal joint forwarding and caching for content-oriented networks remains challenging. In [6], the authors demonstrate the gains of joint forwarding and caching in ICNs. In [7], a potential-based forwarding scheme with random caching is proposed for ICNs. The results in [7] are heuristic in the sense that it remains unknown how to choose proper potential values to ensure good performance. In [8], the authors propose throughput-optimal one-hop routing and caching to support the maximum number of requests in a single-hop Content Distribution Network (CDN) setting. In [9], assuming the path between any two nodes is predetermined, the authors consider single-path routing (equivalently cache node selection) and caching to minimize link utilization for a general multi-hop content-oriented network. The benefits of selective caching based on the concept of betweenness centrality, relative to ubiquitous caching, are shown in [10]. In [11], cooperative caching schemes have been heuristically designed without being jointly optimized with forwarding strategies. Finally, adaptive multi-path forwarding in NDN has been examined in [12], but has not been jointly optimized with caching strategies.

2 Network Model

We consider the same network model as in [3], which we describe for completeness.

Consider a connected multi-hop (wireline) network modeled by a directed graph , where and denote the sets of nodes and directed links, respectively. Assume that whenever . Let be the transmission capacity (in bits/second) of link . Let be the cache size (in bits) at node .

Assume that content in the network is identified as data objects, each consisting of multiple data chunks. Content delivery in NDN operates at the level of data chunks. That is, each Interest Packet requests a particular data chunk, and a matching Data Packet consists of the requested data chunk, the data chunk name, and a signature. A request for a data object consists of a sequence of Interest Packets which request all the data chunks of the object. We consider a set of data objects, which may be determined by the amount of control state that the network is able to maintain, and may include only the most popular data objects in the network, typically responsible for most of the network congestion [4]. For simplicity, we assume that all data objects have the same size (in bits). The results in the paper can be extended to the more general case where object sizes differ. We consider the scenario where for all . Thus, no node can cache all data objects. For each data object , assume that there is a unique node which serves as the content source for the object. Interest Packets for chunks of a given data object can enter the network at any node, and exit the network upon being satisfied by matching Data Packets at the content source for the object, or at the nodes which decide to cache the object.

3 VIP Framework

Figure 1: VIP framework . IP (DP) stands for Interest Packet (Data Packet).

We adopt the VIP framework proposed in [3]. In the following, we briefly introduce the VIP framework to facilitate the discussion of the algorithms developed in later sections. Please refer to [3] for the details on the motivation and utility of this framework. As illustrated in Figure 1, the VIP framework relies on virtual interest packets (VIPs), which capture the measured demand for the respective data objects in the network, and represent content popularity which is empirically measured, rather than being given a priori. The VIP framework employs a virtual control plane operating on VIPs at the data object level, and an actual plane handling Interest Packets and Data Packets at the data chunk level. The virtual plane facilitates the design of distributed control algorithms operating on VIPs, aimed at yielding desirable performance in terms of network metrics of concern, by taking advantage of local information on network demand (as represented by the VIP counts). The flow rates and queue lengths of the VIPs resulting from the control algorithm in the virtual plane are then used to specify the control algorithms in the actual plane [3].

We now specify the dynamics of the VIPs within the virtual plane. Consider time slots of length 1 (without loss of generality) indexed by . Specifically, time slot refers to the time interval . Within the virtual plane, each node maintains a separate VIP queue for each data object . Note that no data is contained in these VIPs. Thus, the VIP queue size for each node and data object at the beginning of slot is represented by a counter. An exogenous request for data object is considered to have arrived at node if the Interest Packet requesting the starting chunk of data object has arrived at node . Let be the number of exogenous data object request arrivals at node for object during slot . For every arriving request for data object at node , a corresponding VIP for object is generated at . The long-term exogenous VIP arrival rate at node for object is the corresponding time average. Let be the allocated transmission rate of VIPs for data object over link during time slot . Note that a single message between node and node can summarize all the VIP transmissions during each slot. Data Packets for the requested data object must travel on the reverse path taken by the Interest Packets. Thus, in determining the transmission of the VIPs, we consider the link capacities on the reverse path as follows:

where is the capacity of “reverse” link and is the set of links which are allowed to transmit the VIPs of object . Let .

In the virtual plane, we may assume that at each slot , each node can gain access to any data object for which there is interest at , and potentially cache the object locally. Let represent the caching state for object at node during slot , where if object is cached at node during slot , and otherwise. Note that even if , the content store at node can satisfy only a limited number of VIPs during one time slot. This is because there is a maximum rate (in objects per slot) at which node can produce copies of cached object [3].

Initially, all VIP counters are set to 0, i.e., . The time evolution of the VIP count at node for object is as follows:

where . Furthermore, for all if . The detailed explanation of can be found in [3]. Physically, the VIP count can be interpreted as a potential. For any data object, there is a downward “gradient” from entry points of the data object requests to the content source and caching nodes.
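To make the time evolution above concrete, the following is a minimal sketch of a one-slot VIP counter update at a single node for a single object, written from the textual description in this section. All names (`V`, `mu_out`, `mu_in`, `A`, `r_n`, `s`) are illustrative, not the paper's notation, and the exact ordering of the drain and arrival terms is an assumption consistent with the description.

```python
def vip_update(V, mu_out, mu_in, A, r_n, s):
    """One-slot VIP counter update at a node for a single object.

    V      : current VIP count
    mu_out : total VIP transmission rate allocated to outgoing links
    mu_in  : total VIP arrivals from incoming links
    A      : exogenous request (VIP) arrivals during this slot
    r_n    : maximum rate (objects/slot) at which the local cache serves VIPs
    s      : caching state (1 if the object is cached locally, else 0)
    """
    # Drain by forwarding first (a queue cannot go negative), then add
    # exogenous and endogenous arrivals.
    V = max(V - mu_out, 0) + A + mu_in
    # If the object is cached locally, the content store satisfies VIPs
    # at a bounded rate r_n per slot.
    V = max(V - r_n * s, 0)
    return V
```

Under this sketch, caching an object creates the downward "gradient" described above: the local drain term keeps the counter at a caching node small relative to its neighbors.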

The VIP network stability region is the closure of the set of all VIP arrival rates for which there exists some feasible (i.e., satisfying - and the cache size limits ) joint forwarding and caching policy which can guarantee that all VIP queues are stable [3]. Assume (i) the VIP arrival processes are mutually independent with respect to and ; (ii) for all and , are i.i.d. with respect to ; and (iii) for all and , for all . Under these assumptions, Theorem 1 in [4] characterizes the VIP stability region in the virtual plane (or equivalently the Interest Packet stability region in the actual plane when there is no collapsing or suppression at the PITs). Note that the theoretical results in this paper also hold under these assumptions.

In the following, with the aim of improving the delay performance of the VIP algorithms in [3], we focus on developing a new class of enhanced VIP algorithms within the virtual plane of the VIP framework, for the cases where and in Sections 4 and 5, respectively.

4 Enhanced Throughput Optimal VIP Control

In this section, we consider the case where , and develop a new class of enhanced joint dynamic forwarding and caching algorithms, within the virtual plane of the VIP framework.

4.1 Bias Function

The VIP algorithm, i.e., Algorithm 1 in [3], focuses primarily on maximizing network throughput, and uses one-hop VIP count differences for forwarding and per-node VIP counts for caching. This leads to a simple distributed implementation. On the other hand, by incorporating VIP count information beyond one hop in a tractable manner, one can potentially improve the delay performance of the VIP algorithm while retaining the desirable throughput optimality and distributed implementation. Toward this end, we consider a general nonnegative VIP count-dependent bias function for each node and object [5]:

Here, represents the VIP counts at a particular time slot and is the weight associated with VIP count at node for object , representing the relative importance of in the bias at node for object . The parameter is designed to guarantee network stability and will be discussed below in Theorem ? and Theorem ?. We can treat as a normalized version of . While the bias function in is generally written as a function of the global VIP counts, one can choose the bias function to depend only on the local VIP counts within one hop [5]. For example, as in [5], we can choose the minimum next-hop VIP count bias function, i.e.,

where .
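As a concrete illustration, the minimum next-hop VIP count bias can be computed locally from the VIP counts of a node's allowed next hops, scaled by the stability parameter. The sketch below is an assumption-laden rendering of this idea: the names `V`, `theta`, and `out_neighbors` are illustrative, and the convention of returning zero when a node has no allowed next hops is our choice.

```python
def min_next_hop_bias(V, theta, out_neighbors, node, obj):
    """Minimum next-hop VIP count bias at `node` for object `obj`.

    V             : dict mapping (node, object) -> VIP count
    theta         : positive stability parameter (larger theta shrinks the bias)
    out_neighbors : dict mapping node -> list of allowed next-hop nodes
    """
    counts = [V[(b, obj)] for b in out_neighbors[node]]
    # Bias is the smallest downstream VIP count, normalized by theta;
    # zero if the node has no allowed next hop for this object.
    return min(counts) / theta if counts else 0.0
```

Because the bias depends only on the VIP counts of one-hop neighbors, it preserves the distributed character of the original algorithm while injecting downstream congestion information.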

Each specific choice of a bias function corresponds to one enhanced VIP algorithm, and the number of VIP counts contributing to the bias function determines the implementation complexity of the corresponding enhanced VIP algorithm. The form of the bias function is carefully chosen to stabilize the network or maximize the network utility, while at the same time offering a high degree of flexibility in choosing specific enhanced VIP algorithms with manageable complexity, distributed implementation, and significantly better delay performance[5].

4.2 Enhanced Forwarding and Caching Algorithm

We now present a new class of enhanced joint dynamic forwarding and caching algorithms for VIPs in the virtual plane by incorporating the general VIP count-dependent bias function in into Algorithm 1 in [3].

At each slot and for each link , the enhanced backpressure-based forwarding algorithm allocates the entire normalized "reverse" link capacity to transmit the VIPs for the data object which maximizes the enhanced backpressure . The enhanced max-weight caching algorithm implements the optimal solution to the max-weight knapsack problem in , i.e., allocate cache space at node to the objects with the largest enhanced caching weights . The enhanced forwarding and caching algorithm maximally balances out the VIP counts by joint forwarding and caching, to prevent congestion from building up in any part of the network, thereby reducing delay.
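The two decisions above can be sketched as follows. This is a simplified illustration under stated assumptions: all names are illustrative, biases are supplied as precomputed values, and, since all objects have equal size in this paper's model, the max-weight knapsack reduces to selecting the top objects by enhanced caching weight.

```python
def enhanced_forwarding(link, V, bias, objects, allowed):
    """Pick the object with the maximum enhanced backpressure on a link (a, b).

    Backpressure for object k is the bias-augmented VIP count difference
    between the link's endpoints; the full reverse-link capacity is then
    allocated to the winning object (only if its backpressure is positive).
    """
    a, b = link
    best_obj, best_w = None, 0.0
    for k in objects:
        if (a, b) not in allowed[k]:
            continue  # link not allowed to carry VIPs of object k
        w = (V[(a, k)] + bias[(a, k)]) - (V[(b, k)] + bias[(b, k)])
        if w > best_w:
            best_obj, best_w = k, w
    return best_obj, best_w

def enhanced_caching(node, V, bias, r_n, objects, cache_slots):
    """Cache the objects with the largest enhanced caching weights.

    With equal-size objects, the max-weight knapsack is solved exactly by
    ranking objects by r_n * (VIP count + bias) and keeping the top ones.
    """
    ranked = sorted(objects,
                    key=lambda k: r_n * (V[(node, k)] + bias[(node, k)]),
                    reverse=True)
    return set(ranked[:cache_slots])
```

Both routines use only per-node and per-neighbor state, which is why local bias functions keep the enhanced algorithm distributed.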

It is important to note that with local VIP count-bias functions, such as the minimum next-hop VIP count bias function in , both the enhanced backpressure-based forwarding algorithm and the enhanced max-weight caching algorithm are distributed. Following the complexity analysis in [5], we know that the enhanced VIP algorithm in Algorithm ? with the minimum next-hop VIP bias function in has the same order of implementation complexity as Algorithm 1 in [3] (without any bias functions). In general, for ease of implementation, one should intelligently choose the VIP counts which contribute to the bias function, leading to enhanced VIP algorithms with distributed implementation and good delay performance.

4.3 Throughput Optimality

We now show that Algorithm ? adaptively stabilizes all VIP queues for any , without knowing .

Similar to Theorem 1 in [5], Theorem ? should be interpreted as follows. When it is given that is bounded away from the boundary of by at least , i.e., , one can choose a finite such that . In this case, Algorithm ? can improve the delay performance of Algorithm 1 in [3] (which will be demonstrated numerically in Section 6) while maintaining a generalized notion of throughput optimality, by exploiting the margin to incorporate VIP counts beyond one hop [5]. When it is only known that and no extra margin is given (), then by Theorem ?, must be chosen to be infinity for all and (i.e., ). In this case, Algorithm ? reduces to Algorithm 1 in [3], and Theorem ? reduces to Theorem 2 in [4]. Theorem ? can be seen as the generalization of the throughput optimal results in Theorem 1 of [5] and Theorem 2 of [4].

5 Enhanced VIP Congestion Control

The VIP forwarding and caching algorithms first described in [3] were extended to incorporate congestion control in [4]. Here, we develop a new class of enhanced algorithms which generalize the enhanced forwarding and caching algorithm (Algorithm 1) described above to incorporate congestion control.

5.1 Transport Layer and Network Layer VIP Dynamics

Even with throughput optimal forwarding and caching, excessively large request rates () can overwhelm an NDN network with limited resources. When , newly arriving Interest Packets (equivalently VIPs) first enter transport-layer storage reservoirs before being admitted to network-layer queues. Let and denote the transport layer VIP buffer size and VIP count for object at node at the beginning of slot , respectively. Let denote the number of VIPs admitted to the network layer VIP queue of object at node from the transport layer VIP queue at slot . Assume , where is a positive constant which limits the burstiness of the admitted VIPs to the network layer. We have the following time evolutions of the transport and network layer VIP counts [13]:
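The admission step described above can be sketched as a simple bounded transfer between the two layers. This is an illustrative rendering, not the paper's exact admission rule: the names `transport_Q`, `network_V`, and `mu_max`, and the greedy "admit as much as allowed" policy, are assumptions (the actual admitted amount is chosen by the congestion control algorithm of Section 5.2).

```python
def admit(transport_Q, network_V, mu_max):
    """Move up to mu_max VIPs for one object at one node, in one slot,
    from the transport-layer reservoir into the network-layer queue.

    mu_max is the positive constant bounding the burstiness of admitted VIPs.
    Returns the updated (transport_Q, network_V) and the amount admitted.
    """
    admitted = min(transport_Q, mu_max)  # cannot admit more than is buffered
    return transport_Q - admitted, network_V + admitted, admitted
```

Buffering excess requests at the transport layer keeps the network-layer VIP queues, and hence the forwarding and caching decisions driven by them, stable even when exogenous demand exceeds what the network can carry.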

5.2 Enhanced Congestion Control Algorithm

The goal of congestion control is to support a portion of the VIPs which maximizes the sum utility when . Let be the utility function associated with the VIPs admitted into the network layer for object at node . Assume is non-decreasing, concave, continuously differentiable and non-negative. Define a -optimal admitted VIP rate [13]:

where , and . Due to the non-decreasing property of the utility functions, the maximum sum utility over all is achieved at when .
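For concreteness, a standard family of utility functions satisfying the non-decreasing, concave, continuously differentiable requirements above is the alpha-fair family used in the experiments of Section 6. The sketch below is illustrative; note that for alpha > 1 the standard form is negative, so a shifted variant would be needed to also satisfy non-negativity.

```python
import math

def alpha_utility(x, alpha):
    """Alpha-fair utility of an admitted VIP rate x > 0.

    alpha = 1 gives proportional fairness (log utility); other alpha values
    give the usual x^(1-alpha) / (1-alpha) form. Non-decreasing and concave
    in x for all alpha >= 0.
    """
    if alpha == 1.0:
        return math.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)
```

Summing such utilities over all nodes and objects yields the sum-utility objective that the congestion control algorithm drives toward its optimum as the control parameter grows.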

We now develop a new class of enhanced joint congestion control, forwarding and caching algorithms that yield a throughput vector which can be arbitrarily close to the optimal solution . We introduce auxiliary variables and the virtual VIP count for all and . Set for all and .

5.3 Utility-Delay Tradeoff

We now show that by tuning the control parameter , Algorithm ? adaptively achieves a utility-delay tradeoff for VIP queues, for any , without knowing . In addition, Algorithm ? yields a throughput vector which can be arbitrarily close to by letting .

Similar to Theorem 2 in [5], Theorem ? should be interpreted as follows. When , one can choose a finite such that . In this case, Algorithm ? can improve the utility-delay tradeoff of Algorithm 3 in [4] (which will be demonstrated numerically in Section 6), by exploiting the margin to incorporate VIP counts beyond one hop [5]. When , i.e., no margin is given, is chosen to be infinity for all and (i.e., ). In this case, Algorithm ? reduces to Algorithm 3 in [4], and Theorem ? reduces to Theorem 3 in [4]. Theorem ? can be seen as the generalization of the utility-delay tradeoff results in Theorem 2 of [5] and Theorem 3 of [4].

6 Experimental Evaluation

Based on the enhanced VIP algorithms in Algorithm ? and Algorithm ? operating on VIPs in the virtual plane, we can develop the corresponding algorithms for handling Interest Packets and Data Packets in the actual plane using a mapping similar to that in [3]. We omit the details due to the page limitation. In this section, we first compare the delay performance of the enhanced VIP algorithm for the actual plane resulting from Algorithm ? (using the minimum next-hop bias function in with ), denoted by EVIP, with the VIP algorithm for the actual plane resulting from Algorithm 1 in [3], denoted by VIP, as well as with six other baseline algorithms. In particular, these baseline algorithms use popular caching algorithms (LFU, LCE-UNIF, LCE-LRU, LCD-LRU, and LCE-BIAS) in conjunction with shortest path forwarding and a potential-based forwarding algorithm. The detailed descriptions of these baseline algorithms can be found in [4]. We then compare the utility-delay tradeoff of the enhanced VIP algorithm for the actual plane resulting from Algorithm ? (using the minimum next-hop bias function in with ), also denoted by EVIP, with the VIP algorithm for the actual plane resulting from Algorithm 3 in [4], also denoted by VIP, with some abuse of notation. To our knowledge, there are no other congestion control algorithms for NDN networks which can easily control the tradeoff between utility and delay. In evaluating delay and utility-delay tradeoff, as in [3], we also consider the constant shortest path bias versions of EVIP and VIP, with being the per-link cost.

Experimental scenarios are carried out on two network topologies: the GEANT Topology and the DTelekom Topology, as shown in Fig. ?. In the two topologies, object requests can be generated by any node, and the content source for each data object is independently and uniformly distributed among all nodes. At each node requesting data, object requests arrive according to a Poisson process with an overall rate (in requests/node/slot). Each arriving request requests data object (independently) with probability , where follows a (normalized) Zipf distribution with parameter 0.75. We choose , Mb/slot, GB, MB, the Interest Packet size is 125 B, and the Data Packet size is 50 KB. Each simulation generates requests for time slots. Each curve is obtained by averaging over 10 simulation runs. The delay for an Interest Packet request is the difference (in time slots) between the fulfillment time (i.e., time of arrival of the requested Data Packet) and the creation time of the Interest Packet request. We use the total delay for all the Interest Packets generated over time slots as the delay measure. We consider the -utility function with , i.e., , and use the sum utility over all nodes and all objects as the utility measure.
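The request workload described above (Poisson arrivals per node, object popularity drawn from a normalized Zipf distribution) can be reproduced with a short generator. This sketch is illustrative of the setup, not the authors' simulation code; the function names and the use of Knuth's method for Poisson sampling are our choices.

```python
import math
import random

def zipf_probs(num_objects, s=0.75):
    """Normalized Zipf popularity over object ranks 1..num_objects."""
    weights = [1.0 / (rank ** s) for rank in range(1, num_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def draw_requests(num_objects, lam, s=0.75, rng=random):
    """One slot of request arrivals at a single node.

    The number of arrivals is Poisson(lam); each arrival independently
    requests an object drawn from the Zipf popularity distribution.
    """
    probs = zipf_probs(num_objects, s)
    # Poisson sampling via Knuth's method (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k += 1
    return [rng.choices(range(num_objects), weights=probs)[0] for _ in range(k)]
```

With the Zipf parameter of 0.75 used here, popularity is skewed enough that caching the top-ranked objects captures a large share of requests, which is what the caching algorithms exploit.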

Fig. ? illustrates the delay performance. We can observe that the VIP algorithms in [3] and the proposed enhanced VIP algorithms achieve much better delay performance than the six baseline schemes, especially in DTelekom. In addition, the proposed enhanced VIP algorithms significantly improve the delay performance of the VIP algorithms (e.g., about at in GEANT and at in DTelekom). Fig. ? illustrates the utility-delay tradeoff. We can observe that the proposed enhanced VIP algorithms achieve significantly better utility-delay tradeoff than the VIP algorithms in [4]. In summary, the proposed enhanced VIP algorithms improve the delay performance of the VIP algorithms in [3] by intelligently exploiting the VIP counts beyond one hop for forwarding, caching and congestion control.

7 Conclusion

In this paper, we develop a new class of enhanced distributed forwarding, caching and congestion control algorithms within the VIP framework, which adaptively stabilize the network and maximize network utility, while improving delay performance. We prove the throughput optimality and characterize the utility-delay tradeoff of the enhanced VIP algorithms in the virtual plane. Numerical experiments demonstrate the superior performance of the resulting algorithms for handling Interest Packets and Data Packets in the actual plane, in terms of low network delay and high network utility, relative to a number of baseline alternatives.

Appendix A: Proof of Theorem

Define the quadratic Lyapunov function . The Lyapunov drift at slot is given by . First, we calculate . Squaring both sides of , we have

Summing over all , we have

where (a) is due to the following:

and (b) is due to the following:

Taking conditional expectations on both sides of , we have

where (c) is due to the fact that Algorithm ? minimizes the R.H.S. of (c) over all feasible and . Since , according to the proof of Theorem 1 in [4], there exists a stationary randomized forwarding and caching policy that makes decisions independent of such that

On the other hand, similar to , we can show

By , and for all and , we have . By Lemma 4.1 of [13], we complete the proof.

Appendix B: Proof of Theorem

Define the Lyapunov function , where . The Lyapunov drift at slot is . First, we calculate . Similar to Appendix A, squaring both sides of , we have

In addition, squaring both sides of , we have

Therefore, similarly, we have

Taking conditional expectations and subtracting

from both sides of , we have