Towards a Queueing-Based Framework for In-Network Function Computation


Siddhartha Banerjee, Piyush Gupta and Sanjay Shakkottai

Siddhartha Banerjee is with the Department of ECE, The University of Texas at Austin. Email: sbanerjee@mail.utexas.edu. Piyush Gupta is with the Mathematics of Networks and Communications group, Bell Labs, Alcatel-Lucent. Email: pgupta@research.bell-labs.com. Sanjay Shakkottai is with the Department of ECE, The University of Texas at Austin. Email: shakkott@mail.utexas.edu.
Abstract

We seek to develop network algorithms for function computation in sensor networks. Specifically, we want dynamic joint aggregation, routing, and scheduling algorithms that have analytically provable performance benefits due to in-network computation as compared to simple data forwarding. To this end, we define a class of functions, the Fully-Multiplexible functions, which includes several functions such as parity, MAX, and k-th order statistics. For such functions we characterize the maximum achievable refresh rate of the network in terms of an underlying graph primitive, the min-mincut. In acyclic wireline networks, we show that the maximum refresh rate is achievable by a simple algorithm that is dynamic, distributed, and only dependent on local information. In the case of wireless networks, we provide a MaxWeight-like algorithm with dynamic flow splitting, which is shown to be throughput-optimal.

Keywords: in-network function computation, wireless sensor networks, dynamic routing and scheduling algorithms

I Introduction

In-network function computation is one of the fundamental paradigms that increases the efficiency of sensor networks vis-à-vis conventional data networks. Sensor nodes, in addition to sensing and communication capabilities, are often equipped with basic computational capabilities. Depending on the task for which it is deployed, a sensor network can be viewed as a distributed platform for collecting and computing a specific function of the sensor data. For example, a sensor network for environment monitoring may only be concerned with keeping track of the average temperature and humidity in a region. Similarly, ‘alarm’ networks, such as those for detecting forest fires, require only the maximum temperature. The baseline approach for performing such tasks is to aggregate all the data at a central node and then perform offline computations; the premise of in-network computation is that distributed computation schemes can provide sizable improvement in the performance of the network. However, from the perspective of designing network algorithms, in-network function computation poses a greater challenge than data networks, as the freedom to combine and compress packets, as long as the desired information is preserved, destroys the flow conservation laws central to data networks. The network has a lot more flexibility, so much so as to make quantifying its performance much more challenging [1].

Our focus in this paper is to develop a queue-based framework for such systems, and use it to design and analyze network algorithms. By network algorithms, we refer to cross layer algorithms that jointly perform the following tasks:

  1. Aggregating the data at nodes via in-network computation,

  2. Routing packets between nodes, and

  3. Scheduling links between nodes for packet transmission.

Cross-layer algorithms for data networks, although very successful in both theory and increasingly in real-system implementation, are concerned only with the scheduling and routing aspects. Hence, there is a need for a new framework and new algorithms for in-network function computation in sensor networks. Keeping in mind the lessons learnt from the success of data networks, our aim is to design network algorithms that are dynamic (i.e., the algorithm should not be designed assuming static network parameters, but rather, use the network state to adaptively learn the network parameters), robust (i.e., the algorithm adapts to temporal changes in traffic and network topology), capable of dealing with a large class of functions (i.e., if the function being computed by the network changes, then one should only need to make minor changes to the scheduling and routing algorithms), and generalizable to all network topologies.

Due to the wide range of potential applications, there are many existing models for such networks, and many different perspectives from which they are analyzed. Some representative works in this regard are as follows:

  • The pioneering work of Giridhar and Kumar [1] considers the function computation problem from the point of view of the capacity scaling framework of Gupta and Kumar [2]. In particular, they quantify scaling bounds for certain classes of functions (symmetric, divisible, type-sensitive, and type-threshold) under the protocol model of wireless communications and for collocated graphs and random geometric graphs.

  • Other papers consider the function computation problem from the point of view of information theory [3, 4] and communication complexity [5, 7], characterizing various metrics (refresh rate, number of messages, etc.) for different functions in terms of certain properties of the graph, the function to be computed, and the underlying data correlation. All the above works take an idealized ‘bottom-up’ approach to determine the fundamental limits of the problem, and hence are not directly suitable for designing practical network algorithms.

  • Similar in spirit to the above papers, another approach is to study function computation from the perspective of source coding [6, 9]. These works characterize bounds and show the existence of coding schemes for noiseless, wireline networks. As with the works above, these policies tend to be idealized, using more complex coding-based schemes instead of simple routing and aggregation (we later show that such simple strategies suffice for optimal in-network computation of a number of functions of interest); further, these papers do not give explicitly defined policies, but rather existence results for such policies.

  • In contrast to the ‘bottom-up’ approach of all the above works, Krishnamachari et al. [8] adopt a more ‘top-down’ approach whereby they formulate network models that abstract out some of the complexity while allowing quantification of performance gains (in their case, energy and delay). Their models do not, however, achieve the optimal throughput and also do not allow for the design of dynamic network algorithms.

  • An alternate model of sensor networks is to assume that nodes are capable of in-network compression, wherein only the compression (and not merging) of flows is permitted. For example, Baek et al. [10] consider routing algorithms for power savings in hierarchical sensor networks. Similarly, Sharma et al. [11] design energy-efficient queue-based algorithms under the assumption that the only operation allowed in addition to routing and scheduling is compression of packets at the source node.

The queue-based model for data networks has proved to be an essential tool in designing provably-efficient algorithms for such systems. This model has provided a common framework for understanding various aspects of data network performance such as throughput [12, 13], delay [14, 15], flow utility maximization [16, 17, 18], network utility maximization [19, 20], and distributed algorithms [21], among others (for an overview, refer to [22]). In addition, these algorithms have been implemented in real systems [23, 24], including in sensor networks [25], with good results. However, these algorithms are designed for data networks, and cannot exploit any potential benefit from in-network computation. More recently, this framework has been extended to fork-and-join networks with fixed routing [26], and to resource allocation in processing networks [27].

Using fixed routing in a network usually leads to suboptimal operations as the routes may not be designed to optimize the network performance; in general, even choosing the single best fixed route can perform arbitrarily worse than with dynamic routing (see example in Section III). Further, static routing is not robust to temporal changes in the network. However, introducing dynamic routing with in-network computation destroys the flow conservation equations that exist in data networks and networks with fixed flows, as the flow out of a node depends both on inflow as well as (dynamic) packet aggregation at that node. Thus, there is a need to come up with a new queue-based framework and algorithms for efficient function computation in sensor networks, and our paper is a step in this direction.

I-A Main Contributions

Our main contributions in this paper are as follows:

  • We identify a class of functions, the Fully-Multiplexible or FMux functions, for which we provide a tight characterization of the maximum refresh rate with in-network computation, i.e., the maximum rate at which the sensors can generate data such that the computation can be performed in a stable manner (by stability, we refer to the standard notion of the existence of a stationary regime for the queueing process [13, 22]). More formally, we show that for these functions, if the refresh rate exceeds a certain graph parameter (the stochastic min-mincut, which we define formally in Section III), then the system is transient under any algorithm, whereas for any rate lower than this parameter, we construct a policy that can stabilize the system.

  • Leveraging the results of Massoulié et al. [28] on broadcasting, we obtain a wireline routing algorithm for aggregation via in-network computation of FMux functions in directed acyclic graphs. Our approach is based on the observation that broadcasting and aggregation are, in some sense, duals of each other. More technically, the duality is between ‘isolation’ of packets in aggregation (i.e., a packet does not have neighboring packets to aggregate with) and multiple receptions of the same packet (from different neighbors) in broadcasting. By suitably modifying the approach in [28], we are able to develop an in-network aggregation algorithm for which routing is completely decentralized, and simple random packet forwarding and aggregation suffices for throughput-optimality.

  • For general wireless networks we develop dynamic algorithms based on a centralized allocation of routes (dynamic flow splitting) and MaxWeight-type scheduling. In particular, we show that loading rounds on trees in a greedy manner (whereby an incoming round is loaded on the least weighted aggregation tree), coupled with an appropriate scheduling rule, is throughput-optimal for computing FMux functions. The analysis of this algorithm is unique in that in addition to an appropriate Lyapunov function, it requires the construction of appropriate tree-packings of the network graph in order to show the throughput-optimality of this routing scheme.

Notation: Throughout the paper, we use calligraphic fonts (, etc.) to denote sets and the corresponding capital letter (, etc.) to denote their cardinality. We interchangeably use or for adding elements to sets, and for deleting elements from sets, and sometimes for brevity of exposition, use the element to denote the singleton set when the meaning is clear from the context (in particular, for a set and element , ). We also use the shorthand notation .

II System Model and Function Classes

In this section we describe the system model we study in the rest of the paper. At a high level, the system consists of a network of nodes, one of which is the data aggregator and the rest are sensors. Sensor nodes are capable of three tasks: sensing the environment, transmitting to and receiving data from other nodes, and performing computations on the data. The sensors are assumed to sense the environment in a synchronous manner, and the overall purpose of the system is to compute a specific function of the synchronously generated sensor data and forward it to the aggregator. Further, the function computation is assumed to be done in a repeated manner, and the metric used to quantify the efficacy of an algorithm is the maximum synchronous rate at which the sensors can generate data such that the required function of the data can be forwarded to the aggregator in a stable manner. This rate is henceforth referred to as the maximum refresh rate of the network.

Before we describe the queueing framework for function computation, we first outline the general communication model that we consider in this work. This model is the same as that considered for studying data networks [13]. In the next section, we will outline the modifications we make in order to capture the in-network computation aspect of a sensor network.

Communication Graph: We model the topology of the sensor network as a directed graph , where is a set of nodes, and is a set of directed links which determine the connectivity between nodes. There is a special node, , referred to as the aggregator, and the rest of the nodes in are sensor nodes. Directed link represents that there exists a communication channel from node to node (in wireline this corresponds to a physical channel, while in wireless it represents the fact that the nodes are in radio range).

Transmission Model: Following the convention in the literature [13, 28], we consider a continuous-time system in the case of wireline networks, whereas in the case of wireless networks, we assume that time is slotted, and state all rates in bits per slot. In wireline networks, we define a vector of link rates ; one bit is assumed to traverse a link with a random transit time with an Exponential distribution. The transit times are independent across links and across packets crossing the same link.

For wireless networks, we make the following assumptions/definitions [19]:

  • We assume that the channels between nodes are constant (the model can be extended to time-varying channels with additional notation; see [13]). The wireless nature of the network is reflected in the interference constraints.

  • For transmission schedule , denotes the link-rate vector of transmission rates over the links under the chosen schedule.

  • is the set of valid schedules that obey the interference constraints (henceforth referred to as independent sets). is said to be admissible if the link-rates can be achieved simultaneously in a time slot. is the set of all admissible and is assumed to be time invariant as stated above. Further, we assume that .

  • is said to be obtainable if , the convex hull of . An obtainable link-rate vector can be achieved by time sharing over admissible link-rate vectors.

  • From the definition of the convex hull, we have that for every obtainable rate vector , there exists a probability measure over such that . The vectors are called Static Service Split (or SSS) rules[13], and represent time sharing fractions between different independent sets in order to achieve the rate .
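As a concrete illustration of an SSS rule, the following minimal sketch (with hypothetical link-rate vectors and time-sharing fractions, not taken from the paper) shows how an obtainable rate vector arises as a convex combination of admissible schedules:

```python
# admissible link-rate vectors mu(pi), one per independent set (toy numbers)
admissible = [
    (1.0, 0.0),   # schedule serving only link 1
    (0.0, 1.0),   # schedule serving only link 2
    (0.5, 0.5),   # a schedule serving both links at half rate
]
alpha = [0.4, 0.4, 0.2]   # SSS rule: fraction of time devoted to each schedule
assert abs(sum(alpha) - 1.0) < 1e-9

# the obtainable rate vector is the convex combination sum_pi alpha(pi) mu(pi)
c = tuple(sum(a * mu[i] for a, mu in zip(alpha, admissible)) for i in range(2))
print(c)  # (0.5, 0.5)
```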

Up to this point, the system is identical to one considered for data networks. To highlight the unique features of a physical sensor network performing function computation (and how they affect the modeling of such a system), we consider the following example. In the process, we also indicate the gains achievable via in-network computation versus data download and processing at the aggregator.

Example : Consider a grid of temperature sensors, with a single aggregator at the center, engaged in recording the maximum temperature over these sensor readings. Each node is connected to its four immediate neighbors in the grid via links with a fixed capacity . Every node senses the temperature synchronously, and the aggregator desires the MAX of these synchronous measurements. Suppose the network operates by transferring all the data to the aggregator, and then calculating the MAX offline; the maximum rate at which the measurements can be made is then , as all the packets must pass through one of the links entering the aggregator. On the other hand, if we allow in-network computation, wherein nodes on receiving multiple packets can discard all but the one with highest value, then the network can operate at a rate of , as the bottleneck is now the minimum-cut of the graph (again the links entering the aggregator). In subsequent sections, we show that for certain functions like MAX, and any network, the maximum possible refresh rate can be related thus to minimum-cuts in the network. Further, there are dynamic algorithms that support rates up to the maximum refresh rate.
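The specific rates in the example depend on constants elided in this version of the text. As an illustrative sketch under assumed unit link capacities (C = 1) and a 3x3 grid, the following stdlib-only code compares the two regimes: the data-download rate is bounded by the aggregator's in-capacity divided by the number of sensors, while the in-network rate is the worst sensor-to-aggregator min-cut, computed here with a hand-rolled Edmonds-Karp max-flow:

```python
from collections import defaultdict, deque
from copy import deepcopy

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a residual-capacity dict-of-dicts."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                         # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][w] for u, w in path)   # bottleneck capacity
        for u, w in path:
            cap[u][w] -= aug
            cap[w][u] = cap[w].get(u, 0) + aug  # residual edge
        flow += aug

C, n = 1, 3                                     # assumed unit capacity, 3x3 grid
agg = (1, 1)                                    # aggregator at the center
node_set = {(i, j) for i in range(n) for j in range(n)}
cap = defaultdict(dict)
for i, j in node_set:
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        v = (i + di, j + dj)
        if v in node_set:
            cap[(i, j)][v] = C                  # directed link of capacity C

sensors = [u for u in node_set if u != agg]
# data download: every raw packet crosses one of the links into the aggregator
download_rate = sum(cap[u][agg] for u in node_set if agg in cap[u]) / len(sensors)
# in-network MAX: the bottleneck is the worst sensor-to-aggregator min-cut
in_network_rate = min(max_flow(deepcopy(cap), u, agg) for u in sensors)
print(download_rate, in_network_rate)  # 0.5 2
```

Under these toy assumptions the corner nodes (with only two links) determine the min-mincut, and in-network computation still yields a fourfold gain over data download.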

Keeping this example in mind, we now outline the rest of our system model.

Traffic Model: We consider a symmetric arrival rate model, where each sensor node senses the environment synchronously at a rate (the refresh rate of the network). The aim of a network algorithm is to support the maximum possible while ensuring that the network is stable.

In the case of wireline networks, packets are generated synchronously at all nodes following a Poisson process with rate . In the case of wireless networks, the arrival process in time slot consists of a random number of packets generated in a synchronous manner, and is further i.i.d. across time. In this case, we define the refresh rate as

and also assume that has a finite second moment, which we denote as (this assumption is not the most general possible restriction on the input process, but one that we choose for convenience of exposition; for more general conditions on the arrival process, refer to [13]).

We associate all simultaneously generated packets with a unique identifier called the round number, which represents the time when the packets were generated. In particular, we follow a scheme whereby we number the packets sequentially in ascending order of their generation times, and update the round numbers when packets complete being aggregated (thus the oldest unaggregated packet in the network always has round number 1, and so forth). This scheme of round number allocation is henceforth referred to as the generation-time ordering.
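The generation-time ordering can be sketched as follows (a minimal toy under the assumption that rounds are renumbered from 1 whenever an aggregated round is removed):

```python
active_rounds = [1, 2, 3, 4]   # round numbers of unaggregated rounds, oldest first

def complete_round(active, r):
    """Drop a fully aggregated round, then renumber the survivors from 1."""
    active = [x for x in active if x != r]
    return list(range(1, len(active) + 1))

active_rounds = complete_round(active_rounds, 1)
print(active_rounds)  # [1, 2, 3] -- the oldest unaggregated round is again round 1
```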

Now in order to develop a queueing model, we need a framework to capture the data aggregation operations. As mentioned before, our primary goal is to explore the benefits of in-network computation versus data-download. To this end, we restrict our attention to a specific class of functions, which we define as the FMux functions, and for which we can exactly quantify the gains from in-network computation. The intuition behind the FMux class is that these functions support maximum compression upon aggregation; when two (or more) packets combine at a node, the resultant packet has the same size as the original packets. We now define it formally.

Computation Model: We assume that the function is divisible[1]. Formally, we assume that each sensor records a value belonging to a finite set , and we have a function of the sensor values that needs to be computed at the aggregator . We use to denote the function operating on inputs, i.e., , where denotes the range of function when it takes inputs. Then the function is said to be divisible if:

  1. is non-decreasing in , and

  2. Given any partition of , there exists a function such that for any :

Intuitively, for any partition of the nodes, can be computed by performing a local computation over each set in the partition, and then aggregating them.

We define a function to be Fully-Multiplexible or FMux if for all . In other words, the output of an FMux function lies in the same set independent of the number of inputs. Some important examples of FMux functions are MAX, k-th order statistics, and parity. As mentioned before, in this work we focus on FMux functions as they most clearly exhibit the effects of in-network computation (in that we have tight bounds for their refresh rate).
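The divisibility and FMux properties can be illustrated with MAX (a hedged toy example; the readings and the partition below are arbitrary choices of ours):

```python
values = {1: 7, 2: 3, 3: 9, 4: 5}     # sensor id -> reading (arbitrary)
partition = [{1, 2}, {3, 4}]          # any partition of the sensor set

# divisibility: computing MAX over each block of the partition and then
# aggregating the block results equals MAX over all readings
direct = max(values.values())
via_partition = max(max(values[i] for i in block) for block in partition)
assert direct == via_partition == 9

# FMux: the output of MAX lies in the same set regardless of the number of
# inputs, so an aggregated packet is no larger than an original packet
```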

As a representative example of FMux functions for defining the queueing model and algorithms, consider computation of the parity of the sensor readings: $f(x_1, \ldots, x_n) = x_1 \oplus x_2 \oplus \cdots \oplus x_n$, where $\oplus$ represents the binary XOR operator. Upon sensing, node $i$ stores the value $x_i$ as a packet of size 1 bit. Next, when two or more packets of the same round arrive at a node, they are combined using the XOR operation. Finally, the aggregator obtains the parity by taking the XOR of all the packets of a given round that it receives. We now develop a queueing model for FMux functions.

Queueing Model: Each node maintains a queue of packets corresponding to different rounds. For node is a subset of representing the round numbers of all packets queued up at that node. We also define .

When a packet corresponding to round arrives at node from any neighboring node, it is combined with node ’s own packet corresponding to round to result in a single packet of the same size (using the FMux property in general, e.g., by taking XOR for parity). In the case where node does not have a packet of round in its queue, it stores the new packet. Formally, upon arrival of a packet of round in time slot (and ignoring other arrivals and departures), the queue is updated as follows:

where we use as shorthand for ‘ if , else ’. The complete queue update in a time slot is obtained by extending this definition for all arrivals, and by removing any departing packets from the queue.
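Ignoring departures, the arrival-side update can be sketched for parity as follows (the representation is an assumption: the queue maps round numbers to 1-bit payloads):

```python
def on_arrival(queue, k, bit):
    """Queue update for one arriving round-k packet (departures ignored)."""
    if k in queue:
        queue[k] ^= bit   # aggregate: two packets merge into one of equal size
    else:
        queue[k] = bit    # store: no round-k packet present to combine with
    return queue

q = {}
on_arrival(q, 1, 1)
on_arrival(q, 1, 1)       # XOR-combines with the queued round-1 packet
on_arrival(q, 2, 1)
print(q)  # {1: 0, 2: 1}
```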

If the round number allocation is done according to the generation-time ordering scheme described before, then the system described above forms a Markov chain under any stationary scheduling and routing algorithm. Further, it can be shown that this chain is irreducible and aperiodic. We now focus on the above queueing dynamics for the design of scheduling and routing algorithms.

We should note here that the queueing model described above accounts only for routing and aggregation of packets belonging to the same round. We have not allowed packets from different rounds to be combined in any way, thereby precluding block coding and network coding. In the case of parity, however, it is known that no improvement is possible by using schemes with block/network coding [1, 29].

III Maximum Refresh Rate and Tree Packing

Given the above queueing model, it is unclear what routing structures are required for efficient in-network computation. Existing routing-based approaches for function computation [8, 26, 30] often assume that routing is done on a single aggregation tree, where each node aggregates data from its children before relaying it to its parent. However, it is not a priori evident that a single optimal tree or a collection of optimal trees exists (or indeed that acyclic aggregation structures are sufficient), and if it does, how it can be found dynamically.

In this section, we derive an algorithm-independent upper bound on the refresh rate for FMux computation. By focusing on the flow of information from sensor nodes to the aggregator, we are able to express the bound in terms of an underlying graph primitive: the min-mincut of the graph. Next, we construct a class of throughput-optimal randomized policies, thereby obtaining a tight characterization of the maximum refresh rate. In the process, we show the existence of an optimal collection of aggregation trees. To understand the import of this result, consider the following example.

Example : Consider a wired network consisting of the complete graph on nodes, with every edge having capacity . If we use a single aggregation tree for routing, then the maximum possible refresh rate for computing the parity function is , as every edge is a bottleneck. However, by using a collection of aggregation trees (in fact, it can be shown that a particular set of trees is sufficient), one can achieve a refresh rate of . As we show below, this is optimal, as it matches the min-mincut of the graph.

Keeping this in mind, we now characterize the maximum refresh rate for FMux computation in general graphs.

III-A An upper bound on refresh rate for FMux computation

We now state an upper bound on the refresh rate, beyond which the network cannot be stabilized by any algorithm. We state this theorem for wireless networks; an equivalent theorem for wireline networks is obtained as a special case.

Given a rate vector and any node , we define the min-cut between the node and the aggregator as:

Further, we define the min-mincut of the network under rate vector as

Then we have the following lemma.

Lemma 1.

Upper Bound on refresh rate: Consider a network performing FMux computation. A refresh rate of can not be stabilized by any routing and scheduling algorithm if

We note here that the capacities of the links are given in bits per time slot, while the refresh rate is in terms of packets per time slot. The factor is to convert link capacities into packets per time slot, and is henceforth present in all bounds for the refresh rate.

Proof.

The proof follows from tracing the steady-state flow of packets from any sensor node to the aggregator. More specifically, for a refresh rate , suppose the network is stabilized by some algorithm. Then the Markov chain described by the packets in the network (under the generation-time ordering round number allocation, as described above) has a stationary regime. Further, due to the network constraints, the average service rate on each edge of the network in the stationary regime is given by some (in bits per slot).

Next under the stationary regime, for a sensor node , we can trace the packets as they travel from node to the aggregator (in order to do so, we start tracing a packet when generated at , and subsequently whenever that packet is aggregated, we trace the aggregated packet). Now for every directed path from to , we obtain an average flow of packets which travel along that path. This gives us a set of flows from to . Due to the unchanging packet size (due to the FMux assumption), the sum of these flows is equal to . However, due to the network constraints, the sum of flows on an edge is less than or equal to , and thus by the max-flow-min-cut theorem, is less than or equal to the minimum cut with edge capacities given by . Now since this is true for any node , we have that . Maximizing over all , we get our result by contradiction. ∎

III-B An optimal class of randomized scheduling/routing policies

From Lemma 1, it is evident that the min-mincut of the graph (under an appropriate SSS rule) is the bottleneck for computing an FMux function. We can now use a classical theorem of Edmonds to simplify the space of policies we need to consider. We state the theorem in its original form, in which it applies to a one-to-all network broadcast scenario (informally, a directed graph with a special source node, where the aim is to transmit the same packets from the source to all the nodes in the network). However, given a sensor network, we can apply Edmonds’ Theorem to it by reversing the directions of all the edges while keeping their capacities the same.

Consider a directed graph with a distinguished source node , and suppose each edge of the graph is associated with a capacity . As before, the min-mincut of the graph is defined as:

Let be the set of all spanning trees of rooted at (i.e., every is a spanning tree with as the first element in its topological order). The max-spanning-tree-packing number is defined as the value of the following optimization problem, where $\lambda_T \ge 0$ denotes the weight assigned to spanning tree $T$ and $c_e$ the capacity of edge $e$:
Maximize              $\sum_{T} \lambda_T$,
subject to              $\sum_{T : e \in T} \lambda_T \le c_e$ for every edge $e$.

Then we have the following theorem.

Theorem 1.

(Edmonds, [31]) For a directed graph with distinguished source vertex and edge capacities , the min-mincut is equal to the max-spanning-tree-packing .
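As a sanity check of Edmonds' Theorem on a toy directed graph (a hypothetical 3-node example; the edge capacities, tree choices, and weights are ours, not from the paper), a feasible spanning-tree packing attains the min-mincut:

```python
# directed edges with capacities; the source is node 0 (broadcast orientation)
cap = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (2, 1): 1}

# the two spanning trees of this graph rooted at 0, as frozen edge sets
T1 = frozenset({(0, 1), (1, 2)})
T2 = frozenset({(0, 2), (2, 1)})
packing = {T1: 1.0, T2: 1.0}        # weights lambda_T

# feasibility: the total weight of trees using edge e must not exceed cap[e]
for e, c in cap.items():
    load = sum(w for T, w in packing.items() if e in T)
    assert load <= c + 1e-9

value = sum(packing.values())
print(value)  # 2.0, matching the min-mincut: each non-source node is reached
              # by two edge-disjoint paths of capacity 1 from node 0
```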

Edmonds’ Theorem guarantees the existence of a tree packing which has the same weight as the min-mincut of the graph. In the case of one-to-all broadcast in networks, wherein a node can transmit copies of any packet it has received, it is clear that the subgraph traced out by a packet in reaching all nodes forms a tree. Returning to the wireless setting, we now sketch how to construct a randomized routing and scheduling algorithm that is throughput-optimal, using the technique developed by Andrews et al. [13]. Suppose we know the point in the obtainable rate region which maximizes the min-mincut (such an optimal rate point exists because the min-mincut is a continuous function of the rates, which lie in a compact set); then we can schedule according to the corresponding SSS rule to achieve an ergodic rate of across any link. The network is thereby converted into a wired network, i.e., one with edges having fixed capacities. Next, we can use Edmonds’ Theorem to obtain a tree packing for this fixed-capacity network, which determines how the input flow should be balanced between spanning trees. Combining these two steps, we obtain a scheme whereby we split the incoming flow according to the tree packing, and schedule using the SSS rule corresponding to to stabilize the system. By a similar argument, we can obtain a tree packing given the optimal SSS rule for the FMux function computation problem. Here, each round is associated with a spanning tree such that the total incoming flow (which is equal to the refresh rate) is split according to the above tree packing. This tree is henceforth referred to as the aggregation tree of the round, and determines the route followed by the packets in that round. With routing thus taken care of, scheduling is done according to the optimal SSS rule, and in combination they stabilize the network.
Combined with Lemma 1, this gives a tight characterization of the maximum refresh rate of the network, which we state in the following theorem.
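One plausible way to realize the split of incoming rounds across aggregation trees (the deficit-counter policy below is our illustrative assumption, not the paper's algorithm, and the packing weights are hypothetical) is:

```python
lam = {"T1": 2.0, "T2": 1.0}        # hypothetical tree-packing weights lambda_T
total = sum(lam.values())
credit = {t: 0.0 for t in lam}      # deficit counters, one per tree

def assign_round(credit):
    for t in credit:
        credit[t] += lam[t] / total     # each tree accrues its share of a round
    tree = max(credit, key=credit.get)  # route the round on the most 'owed' tree
    credit[tree] -= 1.0
    return tree

counts = {"T1": 0, "T2": 0}
for _ in range(300):
    counts[assign_round(credit)] += 1
print(counts)  # close to {'T1': 200, 'T2': 100}
```

The long-run fractions of rounds sent on each tree then match the packing weights, which is all the static construction above requires.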

Theorem 2.

Consider a network performing in-network computation for an FMux function . The maximum refresh rate is defined as:

Then a refresh rate of can not be stabilized by any algorithm if , and there exists a static, randomized algorithm to stabilize it if .

We note that this bound, and the definition of FMux functions, is similar in spirit to the results in [6]. However, Theorem 2 differs from (and is more general than) the results obtained in [6], in both scope and technique. More generally, there is a fundamental difference in the level of abstraction with which we view the problem vis-à-vis other similar works such as [5, 7, 6], where the focus is on the physical/link layers, and further, only on wired networks. Our result concerns network-layer algorithms for a more general class of networks (wired and wireless); furthermore, the algorithm based on SSS rules is an explicit (albeit static) algorithm, and uses only routing and packet aggregation at nodes. In contrast, Appuswamy et al. [6] use results that show the existence of source-coding-based schemes (which are more complex than the routing-based schemes we use) that achieve the min-mincut in noiseless, wired networks.

The problem with such a static algorithm is that it needs prior computation of the min-mincut and the associated optimal rate point (to obtain the optimal packing of aggregation trees). A better alternative is to use the queues as a proxy for learning these through dynamic algorithms based on the current system state (similar to the Backpressure algorithm [12, 13] for data networks). The rest of the paper deals with the development of such algorithms.

IV Routing with Random Packet Forwarding in Wired Networks

In this section we give a routing algorithm for acyclic wired networks based on random packet forwarding with aggregation. This algorithm is based on an algorithm for one-to-all network broadcast in wireline networks by Massoulié et al. [28], which demonstrates that random ‘useful’-packet forwarding achieves the min-mincut bound. We modify their approach to obtain a dual version applicable to FMux computation in wireline networks.

In in-network FMux computation, as described before, a new round of packets arrives at all sensor nodes in a synchronous manner and needs to be routed to the aggregator. For the broadcast problem (where packets arrive at the source and need to be routed to all other nodes), an optimal algorithm [28] is as follows: for any idle link in the network, the transmitting node randomly picks a packet that the receiver does not have (defined as a ‘useful’ packet) and transmits it on that link. We now define an analogous notion of a useful packet for in-network aggregation, and show how it can be used to derive an optimal random routing algorithm for FMux function computation.

A natural invariant in broadcast is that the trace of a round of packets always follows a spanning tree. This is not in general true for aggregation; however, in acyclic networks one can impose additional constraints to ensure that a transmission never leads to an isolated packet, i.e., a packet at a node none of whose neighbors has a packet from the same round, which would prevent its aggregation. This is ensured by defining an appropriate notion of a 'useful' packet and transmitting only useful packets. We define a packet at a node to be useful to a neighbor if (a) the neighbor has a packet of the same round (hence ensuring aggregation); and (b) transferring the packet does not leave any neighbor of the transmitting node isolated. The routing algorithm then consists of randomly forwarding useful packets whenever a link is idle. In Appendix A, we prove that this definition leads to packets being routed on spanning trees.

Formally, the algorithm is a work-conserving policy whereby each node ensures that an outgoing edge is engaged in a packet transfer if and only if the node has packets that are useful to the corresponding neighbor. For each node, we define its 'in-neighborhood' and 'out-neighborhood' as the sets of nodes with edges into and out of it, respectively. At a given time, the packets of a round can be in one of three states under the algorithm (analogous to the notation of Massoulié et al. [28]):

  1. Successfully aggregated, i.e., present only at the aggregator.

  2. Idle, i.e., not being transmitted on any edge. Packets of an idle round are present at the nodes of some set, henceforth called the footprint-set of the round. We define a valid footprint-set to be one where the subgraph induced by the set contains a spanning tree rooted at the aggregator (equivalently, each node in the footprint-set has a directed path to the aggregator). Finally, for each valid footprint-set we maintain a count of the idle rounds located at it.

  3. Active, i.e., being transmitted on at least one edge. Each active round has an associated pair consisting of its footprint-set and the set of edges on which its packets are being transmitted.

This pair of quantities forms a complete description of the system; we henceforth consider the Markov chain on this system description when describing and analyzing the algorithm. Further, for ease of exposition, we suppress the dependence on time whenever clear from context.

We can now formalize the notion of a useful packet for transmission. We define an edge to be idle if no packet is being transmitted on it. For a given idle edge at a given time, a packet of a round (idle or active) is said to be useful if:

  1. (Aggregation Condition) Both endpoints of the edge are in the round's footprint-set.

  2. (Non-isolation Condition) After the transfer, every node in the footprint-set retains an alternate route for aggregation, i.e., a directed path to the aggregator within the footprint-set.

Figure 1 illustrates the above conditions for determining whether a packet is useful with respect to a link. Note that the definition of valid footprint-sets is consistent with the definition of useful packets: by transmitting only useful packets, we ensure that the footprint of any round is always a valid footprint-set (i.e., one containing a spanning tree rooted at the aggregator).
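To make the two conditions concrete, the following sketch checks whether transmitting a round's packet on a given directed edge is useful. The representations are our own assumptions (a successor-adjacency dict for the acyclic network, a set of nodes for the round's footprint); this is an illustration of the definition, not the paper's formal statement.

```python
def reaches_aggregator(node, allowed, succ, agg):
    """Depth-first search: can `node` reach `agg` using only nodes in `allowed`?"""
    stack, seen = [node], {node}
    while stack:
        u = stack.pop()
        if u == agg:
            return True
        for v in succ.get(u, ()):
            if (v in allowed or v == agg) and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def is_valid_footprint(footprint, succ, agg):
    """A footprint-set is valid if every node holding a packet of the round
    still has a directed path to the aggregator within the footprint."""
    return all(reaches_aggregator(u, footprint, succ, agg) for u in footprint)

def is_useful(u, v, footprint, succ, agg):
    """Transmitting on edge (u, v): the receiver must hold a same-round packet
    (aggregation condition; we treat the aggregator as always eligible), and
    the post-transfer footprint, with u's packet merged into v's, must remain
    valid (non-isolation condition)."""
    if not (u in footprint and (v in footprint or v == agg)):
        return False
    return is_valid_footprint(footprint - {u}, succ, agg)
```

For instance, on the chain 3 → 1 → 2 → aggregator with all three sensors holding a packet, transmitting from 1 to 2 is not useful because node 3's packet would be isolated, while transmitting from 3 to 1 is useful.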

Fig. 1: Illustration of the notion of a 'useful' packet: for the single round in the above network, the packet is not useful for one link because the aggregation condition is violated (the receiving node has no corresponding packet), and not useful for another link because the non-isolation condition is violated (a neighboring node's packet would become isolated). The packet is useful for the remaining link.

Next, we impose a work-conservation requirement on the system in the following manner. For each edge, let one counter track the number of useful idle packets across it, and another the number of active packets at its tail that are useful to its head. We then impose the following activity condition on the network: one of the following is true,

or, in words, an edge is active as long as there is at least one useful packet across it. We now describe the routing algorithm, which performs random useful-packet forwarding with aggregation while maintaining the activity condition. A routing decision is made whenever a link is idle.

Input: An idle link , i.e., a link with no packet transmitting on it currently.
Output: A routing decision of which packet to transmit on .
Step 1: If there are no useful packets across the link, leave it idle.
Step 2: Otherwise, pick a useful packet uniformly at random and start transmitting it.
Algorithm 1 Random useful packet forwarding with aggregation for FMux computation in wireline networks.
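A minimal sketch of Algorithm 1's per-link decision, with the usefulness test supplied as a caller-provided predicate (all names here are illustrative, not the paper's notation):

```python
import random

def route_idle_link(link, candidate_rounds, useful, rng=random):
    """One decision of Algorithm 1 for an idle link.

    candidate_rounds: rounds with a packet at the tail of `link`.
    useful: predicate useful(round_id, link) -> bool, encoding the
    aggregation and non-isolation conditions of Section IV.
    Returns the round to transmit, or None to leave the link idle.
    """
    useful_rounds = [r for r in candidate_rounds if useful(r, link)]
    if not useful_rounds:
        return None                      # Step 1: no useful packet, stay idle
    return rng.choice(useful_rounds)     # Step 2: uniform random useful packet
```

Invoking this whenever a link becomes idle yields the work-conserving behavior of the activity condition: a link stays idle only when no useful packet is available across it.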

Finally, we have the main theorem for the stability of the algorithm.

Theorem 3.

For a directed acyclic network operating under Algorithm 1, the network is stable whenever the refresh rate is strictly less than the min-mincut of the network.

The proof closely follows that of Massoulié et al. [28], with appropriate modifications to perform aggregation rather than broadcast. As in [28], it proceeds in three stages:

  1. Defining the fluid limit of the Markov chain, and associated convergence results.

  2. Defining a Lyapunov function for the fluid system, and showing negative drift.

  3. Using the fluid Lyapunov and convergence results to show stability of the original system.

The critical additions that we make are the appropriate definition of a useful packet, and the identification of suitable counter variables that capture FMux aggregation in networks. Further, in Lemma 4, we derive a crucial combinatorial relation between these counter variables, parallel to the main lemma in [28]. The details of the proof are provided in Appendix A.

V Scheduling With Aggregation-Tree Routing in Wireless Networks

The presence of interference in wireless networks necessitates efficient scheduling of independent sets in addition to routing. Given an SSS rule, we can modify Theorem 3 to show that random packet aggregation supports any rate up to the min-mincut under the corresponding SSS rule. However, dynamically scheduling so as to achieve the optimal SSS rule (i.e., the one with the largest min-mincut) requires an alternate routing technique.

We now describe an alternate approach to throughput-optimal dynamic scheduling and routing for in-network FMux computation over wireless networks. Unlike in wired networks, where routing was performed via random packet forwarding, we now focus on schemes that pre-allocate the route to be followed by the packets of each round, and then schedule under these routing constraints. Building on the intuition that the "correct" routing structures for FMux computation are spanning trees rooted at the aggregator (henceforth referred to as aggregation trees), we split the algorithm into two components:

  • A routing component that maps incoming rounds to aggregation trees. Once a round is assigned to a tree, its packets follow the edges of the tree to the aggregator.

  • A scheduling component that uses the knowledge of the next hop of each packet to determine an optimal independent set for transmission.

The main result of this section is that there exists a dynamic algorithm of this type that is throughput-optimal for wireless networks. More specifically, we present a throughput-optimal algorithm based on 'greedy' routing (whereby the aggregation tree is chosen greedily) and 'MaxWeight'-type scheduling (whereby links are scheduled by solving a maximum-weight independent set problem, with link weights determined by the queues).

Before presenting the general algorithm, we consider some specific example networks to give intuition for how the algorithm is constructed; in particular, we illustrate the scheduling and routing components separately. Finally, in Section V-B, we present the complete algorithm for general graphs and prove its throughput-optimality.

V-a Scheduling With Aggregation-Tree Routing for FMux Computation: Preliminaries and Some Examples

In this section we give some examples to build intuition for the general algorithm we present in Section V-B. Suppose the network is a tree rooted at the aggregator. For each node, we define its (unique) parent node and its set of immediate children in the aggregation tree. Before specifying the queueing dynamics for this system, we first need a lemma that reduces the space of all possible scheduling policies to a smaller set of policies for which we can write the dynamics in a convenient manner.

A scheduling policy for tree aggregation is said to be of type aggregate-and-transmit, or Type-AT, if for every node and every round, a packet of that round is transmitted to the node's parent only after the corresponding round's packet has been received from every child. A Type-AT policy thus prevents a round from being transmitted to the parent until all corresponding packets from the children have been aggregated; this is analogous to the non-isolation requirement in Section IV. Further, it ensures that the flow on each edge of the tree equals the input rate of rounds on that tree. Henceforth, we restrict attention to Type-AT policies, which are sufficient by the following lemma.

Lemma 2.

For an aggregation tree and a scheduling policy that stabilizes the system for a given refresh rate, there exists a scheduling policy that stabilizes the system for the same refresh rate and, in addition, is of Type-AT.

Proof.

Given any stabilizing policy, we can use a standard coupling argument to obtain a stabilizing policy of Type-AT. Whenever the original policy transfers a packet in violation of non-isolation, the modified policy instead stores the packet at the same node. This continues until the node has received all packets of that round from its children. The next time the original policy transmits a packet of the same round from that node (which must happen, since the original policy is stable), the modified policy transmits the aggregated packet. Since each round starts off with a fixed number of packets, the number of packets under the modified policy is at most a constant multiple of the number under the original policy. Hence, since the original policy is stable, the modified policy is also stable, and it is of Type-AT. ∎

We now consider some example networks in which the aggregator node desires a function of the sensed data. Assume that each sensor node records a value from an ordered, finite set, and the aggregator wants the MAX of these values (an FMux function). The computation at a node consists of taking all available packets of a given round and retaining the one with the largest value. In the following examples, we focus on the routing and scheduling aspects of the problem: first we study how to schedule links to deal with interference under a single aggregation tree; next we allow collections of aggregation trees with fixed flows and show how to split flows across these trees; finally, we give a simple example of how dynamic routing over many trees can be achieved. In the next section, we combine these ingredients to obtain a dynamic scheduling and routing algorithm for general network topologies.

Example (Single Aggregation Tree): Consider a sensor network where the MAX is computed by combining data on a single aggregation tree. We now modify the queueing model of Section II to ensure that a policy is Type-AT. Each node maintains two queues: a queue of 'not-useful' packets, which are awaiting same-round packets from the node's children, and a queue of 'useful' packets, which are ready for transmission to the parent, having received and computed the MAX of all corresponding packets from the children. We use corresponding queue-length variables to denote the cardinalities of these queues.

Packets entering the network at a node are stored in its not-useful queue, except at leaf nodes, where they are stored directly in the useful queue. A node only transmits packets from its useful queue, ensuring that the policy is of Type-AT. When a node has received the packets corresponding to a round from all of its children, it retains the packet with the maximum value and moves it to the useful queue. Formally, we can write the queue dynamics as:

Here the first control variable represents the packets transmitted from a node to its parent in a time slot, and the second denotes the internal transfer of packets at a node from unaggregated to aggregated (which occurs once the packets of a round have arrived from all children). The cardinality of the former represents the number of packets transmitted over the link in that time slot.
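As a concrete sketch of these dynamics (with hypothetical container names of our own), the two-queue state of a non-leaf node under a Type-AT policy with MAX aggregation can be maintained as:

```python
from collections import defaultdict

class NodeQueues:
    """Two-queue state at a non-leaf node under a Type-AT policy.

    `pending` plays the role of the not-useful queue: it collects, per round,
    the node's own sensed value plus the values received from its children.
    `useful` holds (round, MAX value) pairs ready for transmission upward.
    """
    def __init__(self, num_children):
        self.num_children = num_children
        self.pending = defaultdict(list)   # round -> values collected so far
        self.useful = []                   # aggregated packets, FIFO order

    def add_packet(self, rnd, value):
        """Record the node's own sensed value or a packet from a child."""
        self.pending[rnd].append(value)
        # Internal transfer: once own value + one packet per child are in,
        # retain only the MAX and move the round to the useful queue.
        if len(self.pending[rnd]) == self.num_children + 1:
            self.useful.append((rnd, max(self.pending.pop(rnd))))
```

For example, a node with two children that senses 5 and then receives 7 and 3 for round 0 ends up holding the single useful packet (0, 7), illustrating that aggregation does not grow the parent's queues.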

One observation regarding these dynamics is that, unlike in data networks, under Type-AT policies a packet transmission by a node does not change the total size of its parent's queues (this is a consequence of the FMux property). Further, each unaggregated round in the network has a useful packet at some node. Thus, we obtain the following scheduling algorithm, a modified version of the Backpressure policy [12] that accounts for these facts:

Input: Time slot , queue states , incoming packets , admissible rate region
Step 1: Place incoming sensor packets in the not-useful queue at non-leaf nodes, and in the useful queue at leaf nodes.
Step 2: Compute as:
.
Step 3: Consider node . If and , then transmit the first packets, where

The above example indicates how the algorithm chooses independent sets for a single class of packets. Next we consider a network that uses a collection of aggregation trees for routing, which requires the algorithm to make an additional decision: which packet to transmit on a scheduled link.

Example (Multiple aggregation trees): Consider a network modeled by a directed graph in which routing is restricted to a specified collection of aggregation trees. We assume that each tree has a pre-determined arrival rate of rounds. Each new round is associated with a tree in accordance with these arrival rates, thereby completely specifying the routing. In each time slot, flows from different trees can be scheduled for transmission. We first need some additional notation.

Routing is performed over a given set of spanning trees. Each incoming round is tagged with a specific aggregation tree, which specifies the route to be followed by the packets of that round while the MAX is computed at each node. For each node and tree, we thus have corresponding parent and children nodes, as well as a given splitting of the input traffic among the aggregation trees. The queueing model extends the previous one: for each tree, each node maintains a queue of unaggregated (not-useful) packets awaiting same-round packets from the node's children on that tree, and a queue of aggregated (useful) packets ready for transmission to the parent. The queue update equations are similar to before.

The scheduling algorithm for this network is similar to that for the single aggregation tree, with the added step that the weight of a link is now given by the maximum queue backlog over all queues competing for that link. Formally, we have:

Input: Time slot , queues , incoming packets , admissible rate region .
Step 1: Place incoming packets as before.
Step 2: Calculate . Also define as the tree which maximizes .
Step 3: Compute schedule as:
Step 4: Consider link . If , then transmit the first packets of queue , where:

The above two examples indicate how the scheduling algorithm works when the routing is specified. As mentioned before, the routing component of the algorithm assigns incoming packets to aggregation trees. The challenge is to do so dynamically, i.e., to route packets based on the network state alone, without using pre-computed rates for each tree. This routing decision is made in a 'greedy' manner; in the next example, we consider a simple network to illustrate this.

Fig. 2: Decomposition of a complete graph on nodes into edge-disjoint aggregation trees.

Example (Complete Graph): Consider a network in the form of a directed complete graph whose nodes are labelled so that one node is the aggregator (which again wants to compute the MAX of the data at all the other nodes), with each link having unit capacity. As we claimed earlier, the min-mincut of this network can be achieved by packing aggregation trees. In particular, consider the set of depth-two trees in which the tree associated with a given non-aggregator node connects all remaining sensor nodes to that node, which in turn connects to the aggregator (see the decomposition of a complete graph in Figure 2). These trees are edge-disjoint, so each can support a unit load: every edge of the graph is traversed by at most one such tree, and since all edges have equal capacity, putting unit load on each tree gives a feasible packing. Hence this packing is optimal.

There are two ways to route packets on these trees. Since we know that the optimal load on each tree is one unit, we can associate each incoming round of packets with a tree chosen uniformly at random. Alternately, when a new round of packets arrives, we can load it on the tree that currently has the least total number of packets. Intuitively, this latter scheme also asymptotically achieves the appropriate load balancing. In the next section, we formalize this notion of 'greedy' tree-loading for general graphs, and show that it indeed achieves the optimal tree-packing. A more subtle point is that we may not know a priori the correct trees to route on (unlike in this example); a surprising result is that it suffices to perform greedy tree-loading over all aggregation trees while still remaining throughput-optimal.
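The depth-two construction for the directed complete graph can be sketched as follows; the labelling of nodes as 0..n-1 with aggregator 0 is our own assumption for illustration:

```python
def depth_two_trees(n, agg=0):
    """For a directed complete graph on nodes 0..n-1, build one aggregation
    tree per non-aggregator node i: every other sensor sends to i, and i
    sends to the aggregator. Returns {tree_root: [(tail, head), ...]}."""
    sensors = [v for v in range(n) if v != agg]
    return {i: [(j, i) for j in sensors if j != i] + [(i, agg)]
            for i in sensors}

def edge_disjoint(trees):
    """Check that no directed edge appears in more than one tree."""
    seen = set()
    for edges in trees.values():
        for e in edges:
            if e in seen:
                return False
            seen.add(e)
    return True
```

Since the trees are edge-disjoint and every directed edge has unit capacity, placing unit load on each tree is feasible, matching the packing argument above.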

V-B Scheduling With Aggregation-Tree Routing for FMux Computation: The General Algorithm

Finally, we present the complete dynamic algorithm for FMux computation. The algorithm separates the routing and scheduling components as follows: when a round of packets arrives in the network, we first 'load' all packets of the round onto an aggregation tree (thereby fixing the routing); then, in each time slot, scheduling is done according to a modified MaxWeight policy.

The routing is performed using a greedy tree-loading policy, wherein all rounds arriving in a time slot are loaded onto the tree with the smallest total useful-queue backlog, i.e., the least number of useful packets. Formally, we have:

Input: Time slot , queues , incoming rounds.
Output: A routing decision associating each incoming round with a tree .
Step 1: Calculate for all .
Step 2: Find the minimum loaded tree as:
Step 3: Assign all incoming rounds to aggregation tree .
Algorithm 2 Greedy tree-loading algorithm for FMux computation.
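Greedy tree-loading reduces to a join-the-shortest-queue rule over trees; a minimal sketch, with illustrative names:

```python
def greedy_tree_load(useful_totals):
    """Algorithm 2 sketch: return the tree with the smallest total number
    of useful packets; all rounds arriving this slot are assigned to it.

    useful_totals: dict tree_id -> sum of useful-queue lengths on that tree.
    Ties are broken arbitrarily (here by Python's `min` over dict order).
    """
    return min(useful_totals, key=useful_totals.get)
```

Note that, per the discussion above, this rule needs no pre-computed per-tree rates: the queue lengths alone steer rounds toward an optimal packing.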

The scheduling algorithm is similar to the MaxWeight policy [13], in that it picks a maximum-weight independent set, with the weight of each edge given by the product of its rate and the maximum queue backlog across it. Formally, we have the following algorithm:

Input: Time slot , queues , incoming packets , admissible rate region .
Output: A scheduling decision .
Step 1: Place packets arriving on a tree at a node in the corresponding not-useful queue for non-leaf nodes, and in the useful queue for leaf nodes.
Step 2: Calculate . Also define as the tree which maximizes .
Step 3: Compute schedule as:
Step 4: Consider link . If , then transmit the first packets from .
Algorithm 3 MaxWeight scheduling algorithm.
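A sketch of Algorithm 3's weight computation and activation choice, assuming unit link rates and a dict-of-dicts representation of the useful queues (both assumptions ours):

```python
def maxweight_schedule(independent_sets, useful_qlen):
    """Algorithm 3 sketch (unit link rates assumed). Each link's weight is
    its largest useful-queue backlog over all trees competing for it; the
    activation (independent set) with the largest total weight is scheduled,
    and each scheduled link serves the tree attaining its maximum.

    independent_sets: list of sets of links (the admissible activations).
    useful_qlen: dict link -> {tree: backlog}.
    Returns (chosen activation, {link: tree to serve}).
    """
    def link_weight(link):
        per_tree = useful_qlen.get(link, {})
        return max(per_tree.values(), default=0)

    best = max(independent_sets,
               key=lambda s: sum(link_weight(e) for e in s))
    serve = {}
    for e in best:
        per_tree = useful_qlen.get(e)
        if per_tree:                      # only serve links with backlog
            serve[e] = max(per_tree, key=per_tree.get)
    return best, serve
```

The maximization over independent sets is, as in standard MaxWeight scheduling, a maximum-weight independent set problem; the sketch simply enumerates a given list of admissible activations.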

For completeness, we note that in all the above algorithms, tie-breaking rules and the service discipline (i.e., among a set of multiple packets suitable for transmission, which one gets priority) are assumed to be random; this is for convenience, and many other tie-breaking rules and service disciplines would suffice.

We can now state and prove the throughput optimality of this algorithm.

Theorem 4.

The dynamic queue-based policy consisting of greedy tree-loading (Algorithm 2) and MaxWeight scheduling (Algorithm 3) stabilizes the system for any refresh rate that is less than the maximum refresh rate.

Before proceeding further, we point out a novel aspect of the proof of this theorem. As in previous papers [13, 12], we use a quadratic Lyapunov function to show stability; however, our technique for bounding the Lyapunov drift is quite different from those used for point-to-point data. The difficulty arises because, although Edmonds' theorem guarantees the existence of an optimal tree-packing for the network, the trees in this optimal packing are unknown to the algorithm; consequently, it is unclear whether routing over all trees could lead to instability via packet accumulation on trees not involved in the optimal packing. We circumvent this by showing the existence of intermediate tree-packings between the optimal and the desired refresh rates, which allow uniform bounding of the Lyapunov drift. We now present the complete proof.

Proof.

We define a candidate Lyapunov function as

with corresponding Lyapunov drift given by

Similar to before, we have that for all states of the system, and that . We now need to show that given , there exists such that if for some , then . Now we have

and defining to be arrival of useful packets on tree to node , we have

and thus (due to external arrivals plus inter-node transmissions). Let . Then we have

From the definition of the maximum refresh rate, we know that there exists an optimal rate point and a corresponding optimal SSS rule that maximizes the min-mincut. Consider now a refresh rate strictly smaller than this maximum. Note that the algorithm can potentially split the incoming flow over every spanning tree of the network in order to dynamically arrive at the optimal packing. To uniformly bound the Lyapunov drift, we first construct two tree-packings: an 'achievable' packing that serves as a proxy for the flow-splitting, and a 'near-optimal' packing that dominates the refresh rate and places a uniformly positive mass on every spanning tree (for a constant defined below). We do so as follows.

Assume that there exists a strictly positive rate such that whenever any edge is scheduled alone, it is served at least at that rate (this is simply a formal statement of the existence of the link). We can now perturb the optimal SSS rule to obtain a new rate point with the following two properties:

  1. Every edge has strictly positive capacity.

  2. The min-mincut of the network at the perturbed rate point remains close to the optimum.

This ensures that the 'near-optimal' tree-packing can place some mass on each edge of the graph.

To construct the perturbed SSS rule , consider the optimal SSS rule . We define (i.e., the set of independent sets that have some mass under ) and (which is as the cardinality of is finite). Now we reduce each by . This reduces the min-mincut by at most . To see this, note that the capacity of any edge reduces from to where:

Further, the maximum number of edges across the min-mincut is bounded by . Thus the min-mincut of the network at the rate point is .

Next, suppose is the set of edges with zero flow under . We now complete the definition of the perturbed SSS rule (using the fact that singleton edges are valid independent sets) as follows:

To see that this is a valid SSS rule, note that , which is the weight we have distributed equally over all links in . The rate point under this SSS rule is henceforth denoted as . Then for edges in we have . Now since there are only edges, each with positive capacity , therefore there exists some such that every edge has capacity under SSS rule . Finally, applying Edmonds’ Theorem (Theorem 1) on the network under , we get a packing such that we have

Before proceeding further, we need the following definitions:

  • .

  • .

  • .

  • .

Note that as and the packing is not tight on the finite set . Similarly, .

Finally we can construct the tree packings (on the network under SSS rule ) that we need to bound the Lyapunov drift:

  1. The ‘achievable’ tree packing, is defined as:

    Then clearly is a packing (as we are only removing mass from a valid packing) and further:

  2. The ‘near-optimal’ tree packing, is defined as:

    First we need to show that this is a valid tree packing. To see this, note that the maximum load added on any edge is bounded by (since in the worst case, all the trees in can contain some edge). For any edge in , this is less than the slack ( by definition) that was already present. For an edge in , we know at least one tree in contained it (as every edge in the graph has positive capacity under the SSS rule ), and hence we subtract a load of at least , which is again greater than the amount of load we add. Thus is a valid packing.

    Further we have that .

In addition, defining

we get that .

Thus we have constructed the two tree packings we need. We now return to bounding the Lyapunov drift. From above, we have

Now, let be the rate for packets on aggregation tree on link allocated by the policy in time slot (thus ). Then we have

Further, from the definition of the policy, we know that