Mind Your Own Bandwidth: An Edge Solution to Peak-hour Broadband Congestion

Felix Ming Fai Wong¹, Carlee Joe-Wong¹, Sangtae Ha², Zhenming Liu¹, Mung Chiang¹
¹Princeton University, Princeton, NJ, USA  ²University of Colorado, Boulder, CO, USA
{mwthree, cjoe, chiangm}@princeton.edu, sangtae.ha@colorado.edu, zhenming@cs.princeton.edu

Motivated by recent increases in network traffic, we propose a decentralized network edge-based solution to peak-hour broadband congestion that incentivizes users to moderate their bandwidth demands to their actual needs. Our solution is centered on smart home gateways that allocate bandwidth in a two-level hierarchy: first, each gateway purchases guaranteed bandwidth from the Internet Service Provider (ISP) with virtual credits; it then self-limits its bandwidth usage and distributes the purchased bandwidth among its apps and devices according to their relative priorities. To this end, we design a credit allocation and redistribution mechanism for the first level, and implement our gateways on commodity wireless routers for the second level. We demonstrate our system’s effectiveness and practicality with theoretical analysis, simulations, and experiments on real traffic. Compared to a baseline equal sharing algorithm, our solution significantly improves users’ overall satisfaction and yields a fair allocation of bandwidth across users.

I Introduction

I-A Motivation: Demand in Cable Networks

In recent years, ISPs have seen a large, sustained increase in traffic demand on their wired and wireless networks, driven by the increasing popularity of media streaming services such as Netflix and cloud services such as Dropbox [1]. While many strategies for managing this demand have been proposed recently [2], few works explicitly consider wired cable networks. Cable providers have themselves considered two measures to manage network congestion: 1) protocol/content-agnostic fair bandwidth distribution, where fairness can account for a household’s recent usage history [3], and 2) using deep packet inspection to detect and throttle “abusive” users, e.g., those running BitTorrent. However, these existing solutions cannot simultaneously address the following two challenges:

Incentivizing responsible usage. Fair sharing does not directly translate to user satisfaction, e.g., when everyone overloads the network by streaming HD videos at the same time. The crux of the current problem is that users have no incentive to moderate their bandwidth consumption to only the amount they actually need. We need a solution that accounts for peak-hour usage over longer timespans, e.g., one week, so that users can plan their usage patterns accordingly.

Addressing different bandwidth needs. When bandwidth is a limited resource it is important to prioritize certain types of traffic while maintaining privacy and net neutrality. Moreover, users should have a way to express their usage preferences to the ISP. Consider, for instance, two neighbors who both use a substantial amount of bandwidth in the evening. One of them watches Netflix, and the other backs up large files from work. The optimal solution would then be to prioritize the first neighbor during the evening, and provide an incentive to the second neighbor to back up his files a few hours later at night. Yet with current network infrastructure, the cable provider might sub-optimally allocate equal bandwidth to both neighbors when congestion is present.

I-B A Home Solution to a Home Problem

We argue that the difficulties faced by cable providers can be solved by pushing congestion management to the network edge, at home gateways. Thus, we empower users to improve their own satisfaction from using the network.

We propose to allocate bandwidth using a two-level hierarchy, as shown in Figure 1. On the first level, bandwidth is allocated among home gateways. At a second level, each gateway’s bandwidth is allocated among the users and devices connected to the gateway.

Central to our solution on Level 1 is the notion of virtual credits, inspired by [4], distributed to the gateways. Each gateway uses its credits to “purchase” guaranteed bandwidth rates at congested times, and thus has an incentive to moderate network usage due to its limited credit budget. We limit the total bandwidth demand to the network capacity by fixing the total number of credits available to spend and recirculating credits to the gateways as they are spent. Using credits enables us to meet the following requirements:

Fairness: Credits are circulated back to each gateway in a way that depends on other gateways’ behavior. Over time, every gateway will be able to use a fair portion of the bandwidth, as gateways that spend a lot of credits in one time period will have fewer to spend later.

Social welfare optimization: At the equilibrium, each gateway chooses its credit spending so as to selfishly maximize its own satisfaction. We show that the credit redistribution mechanism ensures that these choices also optimize the collective social welfare, i.e., all gateways’ satisfaction, over time.

Decentralization: The circulation of credits in our system naturally allows for a distributed solution to the equilibrium, since each gateway decides how to spend its credits. We design an algorithm for making these spending decisions that runs at individual gateways and utilizes only the information on how the credits are circulated to find the optimal spending.

Privacy preservation: Since gateways’ spending decisions use only their knowledge of the credit recirculation, they need not reveal their individual preferences to other gateways or to the ISP, thus preserving their privacy.

Incremental deployability: Since the number of total credits is fixed, we can incrementally deploy the solution by starting with a small number of credits, and introducing more as more gateways begin to participate.

Once each gateway has chosen a bandwidth rate, we perform a second-level allocation, dividing this gateway’s total bandwidth among the devices and applications. Different apps and different users can then receive more or less bandwidth depending on their relative priorities (e.g., video streaming over software updates).

I-C System Architecture

The architecture of our proposed solution is shown in Figure 1. We consider a series of discrete time periods (e.g., each lasting one hour) and allocate bandwidth among gateways and users in each period. At the start of each period, the algorithm performs a two-level allocation: first, each gateway decides how many of its credits to spend, i.e., how much guaranteed bandwidth to purchase in this period (Level 1). In practice, an automated agent acting on behalf of the gateway’s users will make this decision, though some users may wish to manually override the gateway’s decision. Once this decision is made, the gateway performs the second-level allocation, dividing the purchased bandwidth among its apps and devices.

In each time period, a central server in the ISP’s network records the total credits spent by each gateway and redistributes the appropriate number of credits to each gateway in the next time period. Each gateway updates its budget by deducting the credits spent in this time period and adding the number of credits redistributed to it. In the next time period, each gateway then knows its updated budget and can again choose how many credits to spend.

Fig. 1: Hierarchical bandwidth allocation.

Level 1. When congestion is present, the ISP divides traffic into two classes: a first-tier, higher priority class, which gateways must purchase with credits, and a second-tier class that costs no credits but is always of lower priority. The first traffic class provides users with a guaranteed minimum bandwidth, which can be utilized by other gateways’ second-tier traffic only when there is spare capacity. This scheme ensures that the network is fully utilized if there is sufficient demand, yet still eases congestion by encouraging gateways to spend credits at different times. Moreover, our scheme allows an infinite variety of service classes for first-tier traffic, each defined by its guaranteed bandwidth rate. Second-tier traffic is the lowest tier, as it has no guaranteed rate. By using credits, our algorithm accomplishes this multi-class allocation without introducing substantial overhead for the cable provider.

We discuss the Level 1 credit allocation in Sections II and III of the paper. Section II introduces the credit redistribution algorithm and characterizes the optimal solution, i.e., the spending pattern that maximizes the gateways’ total satisfaction. Section III then presents a practical online algorithm for each gateway to independently decide the amount of credits to spend at each time, subject to predictions about the number of credits redistributed in the future.

Level 2. Our level 2 allocation mechanism distributes each gateway’s bandwidth share among its applications according to their different priorities. In Section III, we give an algorithm for optimally distributing bandwidth, and we discuss its implementation in Section IV. In performing this allocation, we emphasize two properties that ensure its practicality:

User-specified device/app prioritization: Each user has different priorities for their devices: one user, for instance, might prioritize streaming music, while another might prioritize file transfers. Given these priorities, we automate the bandwidth allocation among the different devices and apps. (Should explicit prioritization prove too complex for average users, we can introduce default priorities for different types of apps and devices.) We focus on elephant traffic, which tends to be non-bursty and amenable to bandwidth throttling.

Network neutrality: The level 2 allocation runs locally at each gateway. Thus, the user has full control of these decisions, maintaining ISP neutrality.

We make the following contributions in this paper:

  • A virtual pricing mechanism to fairly allocate limited network capacity among gateways and incentivize users to limit their bandwidth demand, easing congestion.

  • A distributed algorithm that allows users to optimally choose their bandwidth usage.

  • A gateway implementation that accordingly limits the total bandwidth used.

  • A practical method for classifying traffic at the gateway and enforcing bandwidth limits on different apps.

After discussing our bandwidth allocation algorithms in Sections II and III, we describe our implementation in Section IV and show simulation and implementation results of an example scenario in Sections V and VI. We briefly discuss related works in Section VII and conclude in Section VIII.

II Credit Distribution and Optimal Spending

In this section, we describe the bandwidth allocation at the higher level of Figure 1. We first describe our system of credits for purchasing bandwidth (Section II-A) and show that it satisfies several fairness properties. We then show that even if each gateway selfishly maximizes its own satisfaction, the total satisfaction across all gateways can be maximized (Section II-B). All proofs are in Appendices A–F.

II-A Credit Distribution

We divide congested times of the day into several discrete time periods, e.g., of a half-hour duration, and allow gateways to “purchase” bandwidth in each time period. At the end of the period, spent credits are redistributed to the gateways.

We suppose that a fixed number $K = \gamma C$ of credits is shared by the $n$ different gateways, where $C$ is the network capacity in Gbps and $\gamma$ an over-provisioning factor chosen by the ISP. We consider time periods indexed by $t = 0, 1, 2, \ldots$, e.g., $T$ periods per day. We use $B_i(t)$ to denote the budget of each gateway $i$ at time $t$, and we suppose that the total credits are initially distributed equally across gateways, i.e., $B_i(0) = K/n$ for all $i$. We then update each gateway $i$’s budget as

$$B_i(t+1) = B_i(t) - s_i(t) + \frac{1}{n-1} \sum_{j \neq i} s_j(t), \qquad (1)$$

where we sum over all gateways except gateway $i$ and $s_i(t)$ denotes the number of credits used by gateway $i$ in time period $t$. Each gateway is constrained by $0 \le s_i(t) \le B_i(t)$: it cannot spend negative credits, and the number of credits spent cannot exceed its budget. This credit redistribution scheme conserves the total number of credits for all times $t$:

Lemma 1

At any time $t$, the number of credits distributed among gateways is fixed, i.e., $\sum_{i=1}^{n} B_i(t) = K$.
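As a sanity check on Lemma 1, the following minimal Python sketch simulates the credit update with hypothetical random spending; the even split of each gateway’s spent credits among the other $n-1$ gateways is our reading of the redistribution rule.

```python
import random

def redistribute(budgets, spending):
    """One round of the credit update: each gateway's spent credits
    are split evenly among the other n - 1 gateways (our assumption)."""
    n = len(budgets)
    return [budgets[i] - spending[i]
            + sum(s for j, s in enumerate(spending) if j != i) / (n - 1)
            for i in range(n)]

K, n = 120.0, 4
budgets = [K / n] * n               # equal initial split
random.seed(0)
for _ in range(50):
    # each gateway spends a random feasible amount 0 <= s_i <= B_i
    spending = [random.uniform(0, b) for b in budgets]
    budgets = redistribute(budgets, spending)

print(abs(sum(budgets) - K) < 1e-9)  # → True: total credits conserved
```

Whatever the spending sequence, the total returned to the pool equals the total spent, so the invariant holds exactly (up to floating-point error).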

Heavy gateways are prevented from hogging the network (helping to enforce a form of fairness across gateways), as a large $s_j(t)$ simply means that the other gateways will receive larger budgets in time period $t+1$. In fact, if this redistribution leads back to a previous budget allocation, then all gateways spend the same number of credits:

Lemma 2

Suppose that for some times $t_1$ and $t_2 > t_1$, $B_i(t_1) = B_i(t_2)$ for all gateways $i$, e.g., $t_1 = 0$ and $t_2 = T$. Then each gateway spends the same number of credits between times $t_1$ and $t_2$: for all gateways $i$ and $j$, $\sum_{t=t_1}^{t_2-1} s_i(t) = \sum_{t=t_1}^{t_2-1} s_j(t)$.

Using this result, we can more generally bound the difference in the number of credits gateways can spend:

Proposition 1

At any time $t$, for any two gateways $i$ and $j$, $\left| \sum_{\tau \le t} s_i(\tau) - \sum_{\tau \le t} s_j(\tau) \right| \le \frac{n-1}{n} K$. Thus, the time-averaged difference in spending

$$\frac{1}{t} \left| \sum_{\tau \le t} s_i(\tau) - \sum_{\tau \le t} s_j(\tau) \right| \to 0 \quad \text{as } t \to \infty.$$
Over time, fairness is enforced in the sense that all gateways can spend approximately the same number of credits.
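This spending-fairness property is easy to probe numerically. The sketch below assumes the even-split redistribution rule we use throughout (our reconstruction) and checks, for random feasible spending, that the gap in cumulative spending between the heaviest and lightest spenders never exceeds $(n-1)K/n$:

```python
import random

K, n = 100.0, 4
budgets = [K / n] * n
cum = [0.0] * n                      # cumulative credits spent per gateway
random.seed(1)
for _ in range(200):
    spending = [random.uniform(0, b) for b in budgets]
    inflow = [sum(spending[j] for j in range(n) if j != i) / (n - 1)
              for i in range(n)]
    for i in range(n):
        cum[i] += spending[i]
        budgets[i] += inflow[i] - spending[i]
    # gap in cumulative spending stays below (n-1)K/n at every step
    assert max(cum) - min(cum) <= (n - 1) * K / n + 1e-9
print("spending gap bounded")
```

The bound follows because, with equal initial budgets, the difference in cumulative spending between two gateways is proportional to the difference in their current budgets, each of which lies in $[0, K]$.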

Though these fairness results to some extent limit heavy gateways’ hogging of the network, lighter gateways may conversely “hoard” credits, thus hurting other gateways’ budgets. Yet no single gateway can hoard all of the available credits:

Proposition 2

Suppose that a given gateway $i$ spends at least $\sigma$ credits every $P$ periods, where the $P$ periods may represent, e.g., one day. Then at any time $t$, gateway $i$’s budget

$$B_i(t) \le K - (n-1)\bar{s} + o(1)$$

as $t \to \infty$, where $\bar{s} = \sigma / P$. In particular, if $\bar{s} \ge K/(n-1)$, $B_i(t) \to 0$. Moreover, at any fixed time $t$, at most one gateway can have a budget of zero credits.

For instance, if a gateway spends $s$ credits at each time, then as $t \to \infty$, $B_i(t) \to K - (n-1)s$: if $s$ is relatively large, a gateway hoards fewer credits, since these are redistributed among others once spent. Conversely, a gateway that spends very little can asymptotically hoard almost $K$ credits.
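This asymptote can be reproduced in simulation. In the sketch below (again assuming the even-split redistribution rule, with hypothetical parameters), gateway 0 spends a constant $s$ per period while all other gateways spend their full budgets, which is the regime in which the frugal gateway’s budget is largest:

```python
def step(budgets, spending):
    """Even-split credit redistribution (our assumed rule)."""
    n = len(budgets)
    return [budgets[i] - spending[i]
            + sum(s for j, s in enumerate(spending) if j != i) / (n - 1)
            for i in range(n)]

K, n, s = 100.0, 5, 3.0
budgets = [K / n] * n
for _ in range(500):
    # gateway 0 spends a fixed s; all others spend their full budgets
    budgets = step(budgets, [s] + budgets[1:])

print(round(budgets[0], 6))  # → 88.0, i.e., K - (n-1)*s
```

The fixed point follows from setting $B_0 = B_0 - s + (K - B_0)/(n-1)$, which gives $B_0 = K - (n-1)s$; convergence is geometric with factor $1 - 1/(n-1)$.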

More broadly, if $m$ gateways are inactive in a network for $\tau$ time periods, then we can bound the number of credits these gateways accumulate:

Proposition 3

Suppose that $m$ gateways are inactive for $\tau$ time periods. Then the number of credits that these gateways can accumulate in these times is given by

$$\sum_{i=1}^{m} B_i(\tau) = K - \frac{(n-m)K}{n} \left( 1 - \frac{m}{n-1} \right)^{\tau},$$

where we index the inactive gateways by $i = 1, \ldots, m$ and suppose they are inactive from time 0 to time $\tau$.

Thus, gateways can only accumulate all $K$ credits asymptotically, as $\tau \to \infty$. In practice, however, a gateway might stay inactive for a long period of time, hoarding credits and decreasing other gateways’ utilities. We therefore cap each gateway’s budget at a maximal value $B^{\max}$ to limit hoarding.

To ensure that gateways can still save some credits for future use, we choose $B^{\max} \ge K/n$. Since $\sum_i B_i(t) = K$ at each time $t$, this lower bound ensures that there is a feasible set of budgets with each $B_i(t) \le B^{\max}$. (If $B^{\max} = K/n$, then we would have $B_i(t) = K/n$ for all gateways $i$ at all times $t$, so in practice we take $B^{\max} > K/n$.) For instance, the ISP might choose $B^{\max} = K/n_{\min}$, where $n_{\min}$ equals the minimum number of gateways on the network at any given time. The $m$ inactive gateways at that time can then hoard at most $m K / n_{\min}$ credits, leaving the remaining credits for the active gateways.

To enforce this budget cap, the excess budget of any gateway exceeding the cap is evenly distributed among all gateways below the cap. Should these excess budgets push any gateway over the cap, the resulting excess is evenly redistributed to the remaining gateways until all budgets are below the cap. Since we choose $B^{\max} > K/n$ and reallocate to fewer gateways at each successive iteration, this process converges after at most $n$ iterations.
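The iterative cap enforcement just described can be sketched as follows (a minimal illustration; the concrete budgets are hypothetical):

```python
def enforce_cap(budgets, cap):
    """Evenly redistribute any budget above the cap to the gateways
    still below it, iterating until no budget exceeds the cap."""
    b = list(budgets)
    while True:
        over = [i for i, x in enumerate(b) if x > cap]
        under = [i for i, x in enumerate(b) if x < cap]
        if not over or not under:
            break
        excess = sum(b[i] - cap for i in over)
        for i in over:
            b[i] = cap
        share = excess / len(under)
        for i in under:
            b[i] += share    # may push a gateway over the cap; handled next pass
    return b

print(enforce_cap([50.0, 30.0, 10.0, 10.0], cap=35.0))  # → [35.0, 35.0, 15.0, 15.0]
```

Note that the total is preserved at every pass: excess removed from capped gateways is exactly the amount added to the others.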

II-B Optimal Credit Spending

Given the above credit distribution scheme, each gateway must decide how many credits to spend in each period. To formalize this mathematically, let $U_i^t$ denote gateway $i$’s utility as a function of the guaranteed bandwidth purchased in time interval $t$. Though gateways may increase their utilities with second-tier traffic, we do not consider this traffic in our formulation. Second-tier bandwidth is difficult to predict, as most gateways would not have consistent historical information on its availability. Gateways could only learn this information by sending such traffic, which they might not do regularly.

We consider a finite time horizon $T$ to keep the problem tractable and because the utility functions cannot be reliably known indefinitely far into the future. Each gateway $i$ then optimizes its total utility from the current time $t$ to $t + T$:

$$\max_{s_i(t), \ldots, s_i(t+T)} \; \sum_{\tau=t}^{t+T} U_i^\tau\big(s_i(\tau)\big) \quad \text{s.t.} \quad 0 \le s_i(\tau) \le B_i(\tau). \qquad (5)$$
Here the budgets are calculated using the credit redistribution scheme (1), with appropriate adjustments to enforce a budget limit. For ease of analysis, we do not model these budget limits here. In practice, the ISP can cap gateways’ budgets for each time period during the credit redistribution.

We first note that the budget expressions (1) can be used to rewrite the inequality $s_i(t) \le B_i(t)$ as the linear constraint

$$s_i(t) + \sum_{\tau < t} s_i(\tau) - \frac{1}{n-1} \sum_{\tau < t} \sum_{j \neq i} s_j(\tau) \le \frac{K}{n}. \qquad (6)$$

Thus, if the $U_i^t$ are strictly concave functions, then given the amounts spent by the other gateways $j \neq i$, (5) is a convex optimization problem with linear constraints. (The assumption of concavity may be justified with the economic principle of diminishing marginal utility as bandwidth increases.)
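Concavity is what makes spreading spending across periods attractive. A toy one-gateway instance (our own illustration, with a hypothetical per-period utility $U(s) = \log(1+s)$ and a fixed total budget over the horizon) shows that even spending beats front-loading:

```python
import math

def total_utility(spending):
    # hypothetical strictly concave per-period utility U(s) = log(1 + s)
    return sum(math.log(1 + s) for s in spending)

B, T = 12.0, 4                      # total credits and horizon (made-up values)
even = [B / T] * T                  # spread spending across periods
front = [B, 0.0, 0.0, 0.0]          # spend everything immediately
print(total_utility(even) > total_utility(front))  # → True
```

By Jensen’s inequality, any strictly concave $U$ gives the same conclusion, which is why the full problem (5) rewards gateways for smoothing their demand over time.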

Since each gateway chooses its own $s_i(\tau)$ to solve (5), these joint optimization problems may be viewed in a game-theoretic sense: each gateway is making a decision that affects the utilities of the other gateways. From this perspective, the game has a Nash equilibrium at the system optimum:

Proposition 4

Consider the global optimization problem

$$\max_{\{s_i(\tau)\}} \; \sum_{i=1}^{n} \sum_{\tau=t}^{t+T} U_i^\tau\big(s_i(\tau)\big) \quad \text{s.t.} \quad 0 \le s_i(\tau) \le B_i(\tau), \qquad (7)$$

with the credit redistribution (1) and strictly concave $U_i^\tau$. Then an optimal solution to (7) is a Nash equilibrium.

While Prop. 4’s result is encouraging from a system standpoint, in practice this Nash equilibrium may never be achieved. Since the gateways do not know each other’s utility functions, they do not know how many credits will be spent and redistributed at future times, making the future credit budgets unknown parameters in each gateway’s optimization problem. These must be estimated based on historical observations, which we discuss in the next section.

III An Online Bandwidth Allocation Algorithm

In this section, we consider a gateway’s actions at both levels of bandwidth allocation. We first give an algorithm to decide its credit spending (Level 1), and then show how the purchased bandwidth can be divided among apps at the gateway (Level 2). Using Algorithm 1, each gateway iteratively estimates the future credits redistributed, decides how many credits to spend, prioritizes apps, and updates its credit estimates. We assume throughout that the gateway’s automated agent knows the utility functions for users at that gateway.

   $t \gets$ current time. {$t$ tracks the current time.}
   while the gateway is active do
      if $t > 0$ then
         Update estimates of the future amounts redistributed using Algorithm 2.
      end if
      Calculate the expected credits redistributed for $\tau = t, \ldots, t + T$.
      Solve (5) with budget constraints (9) given these estimates.
      Choose the application priorities by solving (10).
      $t \gets t + 1$.
   end while
Algorithm 1 Gateway spending decisions.

III-A Estimating Other Gateways’ Spending

To be consistent with (5)’s finite time horizon, we suppose that gateways employ a sliding window optimization. At any given time $t$, gateway $i$ chooses rates for the next $T$ periods so as to maximize its utility for those periods. At time $t + 1$, the gateway updates its estimates of future credits redistributed and optimizes over the next $T$ periods, etc.

Estimating the future credits spent by other gateways is difficult: since relatively few gateways share each cable link, fluctuations in a single gateway’s behavior can significantly affect the number of credits redistributed. Thus, we propose the method of scenario optimization to estimate the number of credits each gateway will receive in the future. This technique is often used in finance to solve optimization problems with stochastic constraints that are not easily predicted, e.g., due to market dynamics [5].

Scenario optimization considers a finite set $\Omega$ of possible scenarios for each gateway and computes the optimal spending in each scenario. Each scenario $\omega \in \Omega$ is associated with a probability $p_\omega$ that the scenario will take place. Evaluating the credit redistribution under each $\omega$ then yields a probability distribution over the possible credits spent.

In our case, a “scenario” can be defined as a set of utility functions for other gateways. We can parameterize these scenarios by noting that gateways’ utilities depend on the application used, e.g., streaming versus downloading files. We incorporate this dependence by taking

$$U_j^t(x) = \mu_j \sum_{a} \pi_{j,a}^t \, V_a(x) \qquad (8)$$

for each gateway $j$, where $\mu_j$ is a scaling factor and each $V_a$ is the utility received from an application of type $a$ (e.g., $a = 1$ corresponds to streaming, $a = 2$ to file downloads, etc.). The utilities $V_a$ are assumed to be pre-determined functions consistent across gateways, and the $\mu_j$ are specified by individual gateways. The variable $\pi_{j,a}^t$ corresponds to the (estimated) probability that gateway $j$ optimizes its usage at time $t$ using the utility function $V_a$, e.g., if application $a$ is used the most at time $t$. This probabilistic approach accounts for a gateway’s incomplete knowledge of its future utility functions.

With this utility definition, we can define a scenario $\omega$ by the coefficients $\mu_j$ and $\pi_{j,a}^t$ of gateways’ utility functions. Since gateway $i$ has no way to distinguish between other gateways, it can group them together as one “gateway” by simply adding the utility functions and budget constraints. This aggregated gateway then maximizes

$$\sum_{\tau=t}^{t+T} \sum_{a} \bar{\lambda}_a^\tau \, V_a\big(s_{-i}(\tau)\big)$$

subject to the budget constraints $0 \le s_{-i}(\tau) \le \sum_{j \neq i} B_j(\tau)$, where the coefficients $\bar{\lambda}_a^\tau = \sum_{j \neq i} \mu_j \pi_{j,a}^\tau$ represent the added coefficients for all gateways $j \neq i$ and $s_{-i}(\tau)$ denotes their total spending. We suppose that gateway $i$ correctly estimates the other gateways’ future usage, and that the other gateways correctly estimate gateway $i$’s. Thus, following Prop. 4, all gateways choose their usage so as to maximize the collective utility subject to the budget constraints. This optimization may be solved to calculate the credits redistributed at each time in scenario $\omega$.

To improve our credit estimates, at each time $t$ we update the scenario probabilities by comparing the observed number of credits redistributed at time $t$, denoted by $r(t)$, with the estimated amount $\hat{r}_\omega(t)$ redistributed under each scenario $\omega$. We suppose that gateways’ behavior is sufficiently periodic that the scenario probabilities at times $t$ and $t + T$ are the same.

We use $p\big(\omega \mid r(t)\big)$ to denote the probability that, given $r(t)$ at time $t$, gateways use scenario $\omega$’s utility functions at time $t + T$. We calculate these probabilities by measuring the discrepancy between the estimated and observed credits redistributed, update the scenario probabilities using Bayes’ rule, and use the new $p_\omega$ in Algorithm 2.
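The Bayesian update can be sketched as follows. The Gaussian discrepancy model below is our own assumption (the text does not pin down the likelihood), but any likelihood that decreases in the gap between predicted and observed credits behaves similarly:

```python
import math

def update_scenario_probs(priors, predicted, observed, sigma=5.0):
    """One Bayes'-rule update of scenario probabilities from the observed
    credits redistributed; the Gaussian discrepancy model is an assumption."""
    likelihood = [math.exp(-((p - observed) ** 2) / (2 * sigma ** 2))
                  for p in predicted]
    post = [pr * lk for pr, lk in zip(priors, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# two hypothetical scenarios predicting 40 vs. 70 redistributed credits
probs = update_scenario_probs([0.5, 0.5], predicted=[40.0, 70.0], observed=45.0)
print(probs[0] > probs[1])  # → True: the closer prediction gains probability
```

Scenarios whose predictions repeatedly track the observed redistribution thus dominate the posterior over time, sharpening each gateway’s budget forecasts.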

   $t \gets$ current time. {$t$ tracks the current time.}
   while the gateway is active do
      for all other gateways $j \neq i$ do {this loop may be run in parallel}
         Choose scenarios $\omega \in \Omega$.
         for each scenario $\omega$ do
            Calculate the predicted amount redistributed, $\hat{r}_\omega(\tau)$, for $\tau = t, \ldots, t + T$.
            if $t > 0$ then
               Update probability $p_\omega$ using Bayes’ Rule.
            end if
         end for
      end for
      $t \gets t + 1$.
   end while
Algorithm 2 Estimating credit redistribution.

III-B Online Spending Decisions and App Prioritization

Algorithm 1 shows how the credits spent in different scenarios are incorporated into choosing a gateway’s rates and application priorities. Each gateway constrains its spending depending on the estimated redistributed credits: for instance, a conservative gateway might choose the $s_i(\tau)$ so that the budget constraints hold in all scenarios. In the discussion below, we suppose that gateways ensure that the constraints hold for the expected number of credits redistributed; the budget constraint (6) then becomes

$$s_i(\tau) \le \mathbb{E}_\omega\big[B_i^\omega(\tau)\big], \qquad (9)$$

where $B_i^\omega(\tau)$ denotes gateway $i$’s budget at time $\tau$ under scenario $\omega$. We additionally introduce the constraints $\mathbb{E}_\omega[B_i^\omega(\tau)] \le B^{\max}$, i.e., the expected budget at a given time cannot exceed the budget cap. This constraint ensures that gateways are not forced to give credits to other gateways due to the cap.

Each gateway can further improve its own experience through its Level 2 allocation, dividing the purchased bandwidth among its apps. It does so by assigning priorities to different devices and applications, so that higher-priority apps receive more bandwidth. Since users cannot be expected to manually specify priorities in each time period, we introduce an automated algorithm that leverages the gateway’s known utility functions (8) to optimally set application priorities.

We consider the four application categories in (8) and use $w_a$ to represent each category $a$’s priority. Since the particular applications active at a given time may change within a given period, e.g., if a user starts or stops watching a video, we define an application’s priority in relative terms: for any apps $a$ and $b$, $x_a / x_b = w_a / w_b$, where $x_a$ is the amount of bandwidth allocated to category $a$ and $\sum_a x_a = s$, the purchased bandwidth, ensuring that all the purchased bandwidth is used. We normalize the priorities to sum to 1, i.e., $\sum_a w_a = 1$.

Since it is nearly impossible to predict which apps will be active at a given instant of time, we choose the app priorities according to a “worst-case scenario,” in which all apps are simultaneously active. In this case, each app receives bandwidth $w_a s$, and we choose the $w_a$ to maximize total utility:

$$\max_{w} \; \sum_{a} V_a(w_a s) \quad \text{s.t.} \quad \sum_a w_a = 1, \; w_a \ge 0. \qquad (10)$$
Since each function $V_a$ is assumed to be concave and the constraint is linear in the $w_a$, (10) is a convex optimization problem and may be solved rapidly with standard methods.
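For two categories, the solution of a problem of this form can be checked by brute force. With hypothetical concave utilities $V_1(x) = 2\log(1+x)$ and $V_2(x) = \log(1+x)$ and purchased bandwidth $s = 10$ (all values our own, for illustration), the optimum gives the higher-utility category the larger share; the analytic optimizer here is $w_1 = (1+2s)/(3s) = 0.7$:

```python
import math

def objective(w, s):
    # hypothetical concave category utilities: V1 = 2*log(1+x), V2 = log(1+x)
    return 2 * math.log(1 + w * s) + math.log(1 + (1 - w) * s)

s = 10.0                                    # purchased bandwidth this period
best_w = max((i / 1000 for i in range(1001)), key=lambda w: objective(w, s))
print(best_w)  # → 0.7
```

A general-purpose convex solver would of course scale better to four categories, but the grid search makes the shape of the optimum easy to see.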

IV Design and Implementation

IV-A System Architecture

The architecture of our system is summarized in Figure 2. It consists of four modules: 1) When traffic goes through the gateway for forwarding, it is passed to a device classifier and an application classifier to identify its traffic type and priority. 2) All traffic is redirected through a proxy process, which forwards traffic between client devices and the Internet. The proxy’s data forwarding rate is determined by the optimizer (L2 Allocator) in each gateway by considering an application’s priority; the rate is enforced by a rate limiter. 3) The bandwidth (credit spending) for each gateway is computed by the optimizer (L1 Allocator) in the ISP. 4) A user can access the gateway through a web interface to view its usage (at either the aggregate or the joint device-app level) and update its preferences, i.e., when to spend more credits and traffic priorities, so as to adjust the optimizer’s decisions. We show screenshots of these user interfaces in Figures 3(b) and 3(c). In this section we discuss how we implement the classification, rate limiting, and prioritization in a commodity router.

Fig. 2: System architecture.

IV-B Hardware and Software

We implement our system on a commodity wireless router, a Cisco E2100L with an Atheros 9130 MIPS-based 400MHz processor, 64MB memory, and 8MB flash storage (Figure 3(a)). We replaced the factory default firmware with OpenWrt, a Linux distribution commonly used for embedded devices. Although OpenWrt has rich functionality in terms of traffic classification and control, we still find it insufficient for our needs. In particular, we modify OpenWrt to enable: 1) low-overhead traffic and device classification using HTTP connections, i.e., not port-based, and 2) back-pressure based gentle rate limiting that does not require parameter tuning.

(a) Cisco E2100L board.
(b) Usage tracking.
(c) Traffic prioritization and device/OS classification.
Fig. 3: Router and web interface screenshots.

IV-C Traffic and Device Classification

To build a low-overhead classifier, we integrate a kernel-level module that inspects the first several packets of a connection for application matching. If a match is found, the classifier marks the connection with a tag that our proxy processes can query from userspace.

Our classifier module performs traffic classification above layer 7, i.e., it can differentiate YouTube from Netflix, through a combination of content matching, byte tracking, and protocol fingerprinting. For device and OS classification, we use the same module to monitor HTTP traffic and inspect user-agent header strings for device information. In practice we find this approach effective because of the prevalence of devices using HTTP traffic.

IV-D Rate Limiting

Our goal is to: 1) enforce an aggregate rate limit over multiple connections, and 2) enforce prioritization among the connections, i.e., which gets higher bandwidth, given the aggregate limit. Achieving this is challenging because we are throttling incoming traffic (throttling outgoing traffic is easy with token bucket-based traffic shaping mechanisms), and at the packet level this is possible only by dropping packets when the incoming rate exceeds a threshold (known as traffic policing), or by delaying packet forwarding to match a specified rate limit through ACK clocking. Since both approaches unnecessarily increase RTT and degrade user experience, we instead take advantage of TCP’s flow control mechanism and implement our own rate limiting system in the application layer, which is both easier to implement and more graceful in throttling. The module consists of two components.

Transparent Proxy. When a connection between a client device and a server is being established, it is intercepted at the gateway and redirected to the proxy process running in the gateway. The proxy then establishes a new connection to the server on behalf of the client and forwards traffic between the two (proxy-server and client-proxy) connections. We use Linux’s zero-copy support for this forwarding, i.e., all data are handled in kernel space.

Implicit Receive Window Control. TCP’s flow control mechanism allows the receiver of a connection to advertise a receive window to the sender so that incoming traffic does not overwhelm the receiver’s buffer. While originally set to match the available receiver buffer space, the receive window can be artificially lowered to limit bandwidth using the relation $\text{rate} = \text{rwnd} / \text{RTT}$: given a maximum rate and a measured round trip time (RTT), we set the receive window to be no greater than $\text{rate} \times \text{RTT}$. Although it is possible to set the receive window directly by modifying TCP headers, we opt for a more elegant adaptive approach in which the proxy does not need to know the RTT or compute the exact window size.
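As a back-of-the-envelope instance of the rate = rwnd/RTT relation (our own example values):

```python
def receive_window_cap(rate_bps, rtt_s):
    """Largest receive window (in bytes) that keeps a TCP flow at or
    below rate_bps over a path with round-trip time rtt_s."""
    return rate_bps * rtt_s / 8

# e.g., capping a flow at 8 Mbit/s over a 100 ms path
print(int(receive_window_cap(8e6, 0.1)))  # → 100000 bytes
```

The dependence on RTT is exactly why explicit window control is brittle, and why the adaptive approach below avoids measuring RTT at all.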

To illustrate our approach, we first consider the single-connection case. As data from the server arrive at the proxy, they are queued in the proxy’s receive buffer until the proxy issues a read on the proxy-server socket to process and clear them (at the same time, the proxy issues a write on the client-proxy socket to forward the data to the client). Note that by modulating the frequency and the size of these reads, we modulate the occupancy of the receive buffer and effectively the sending rate. Further explanation is given in Appendix G.

IV-E Traffic Prioritization

When there are multiple connections, the proxy spawns multiple threads such that each thread serves one connection, and we aim to limit the aggregate rate over all connections. To allocate bandwidth fairly among the connections, we coordinate the socket reads of these threads through a time division multiplexing scheme: using a thread mutex, we create a virtual time resource such that each socket read is associated with an exclusively held time slot of length proportional to the number of bytes read. Although more complicated socket read scheduling mechanisms can be considered, for simplicity we leave the scheduling to the operating system, and from experiments we observe the sharing of time slots to be fair.

For traffic prioritization, we assign a relative priority parameter $\phi_k$ to every connection $k$ such that for busy connections, i.e., those that each have a sufficiently large backlog, the sum of their rates equals the aggregate limit and $r_k / r_j = \phi_k / \phi_j$ for any two busy connections $k \neq j$.

We achieve the desired prioritization through truncated reads. When the proxy issues a socket read, it must specify a maximum block size $b$ to read (we set it to the page size of the processor architecture), and for a busy connection this limit is always reached. If connection $k$ is of lower priority with $\phi_k < 1$, we truncate this block limit by setting it to $\phi_k b$. Since each access to a time slot is associated with a server socket read (equivalently, a client socket write) of $\phi_k b$ bytes and time slots are fairly distributed across connections, the achieved client rate $r_k$ scales with $\phi_k$.

By virtue of statistical multiplexing, our rate allocation mechanism does not require knowing the number of busy connections, which is difficult to track in practice; hence it readily accommodates new connections. To accommodate bursty connections, the proxy first queries the receive buffer for the number of pending bytes. If this number exceeds the truncated block limit, the proxy performs a truncated read as described above; otherwise it reads the pending bytes without truncation. The pseudocode of a proxy thread is shown in Algorithm 3.

Require: p_i, B, S_srv: socket of server connection, S_cli: socket of client connection, M: thread mutex shared by all connections
  while connection open do
     n ← number of pending bytes on S_srv
     acquire M; data ← read at most min(n, p_i · B) bytes from S_srv; release M
     while not all of data written to client do
        write remainder of data to S_cli
     end while
  end while
Algorithm 3 Pseudocode of incoming rate control.
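A minimal Python rendering of one loop iteration of Algorithm 3 (a sketch, not the router implementation: the mutex/virtual-time coordination is omitted, the pending-bytes query is approximated with select, and PAGE and the function name are illustrative):

```python
import select
import socket

PAGE = 4096  # assumed block limit: the processor page size

def forward_once(server_sock, client_sock, priority):
    """One iteration of a proxy thread: check for pending bytes, perform a
    read truncated to priority * PAGE, then forward everything to the client."""
    limit = max(1, int(priority * PAGE))             # truncated block limit
    readable, _, _ = select.select([server_sock], [], [], 0)
    if not readable:
        return 0                                     # nothing pending yet
    data = server_sock.recv(limit)                   # truncated read
    sent = 0
    while sent < len(data):                          # write all bytes onward
        sent += client_sock.send(data[sent:])
    return len(data)
```

Emulating the two split connections with socket pairs, a priority-0.5 connection reads at most 2048 bytes per slot even when far more data is pending.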

V Experimental Results

(a) Throughput.
(b) Retransmission counts.
(c) Jitter.
Fig. 4: Comparison between our rate limiting algorithm and tc policing. Results shown are averaged over 10 runs of 60 seconds each, with 95% confidence intervals.

V-A Rate Limiting

To justify our decision to develop our own rate control module, we compare it with the standard Linux traffic policing approach using the tc command. We perform two experiments. In the first, we fix the network RTT at 100ms and vary the rate limit from 1 to 15Mbps to observe the actual rate achieved. We also try two choices of tc's burst parameter for completeness. Figure 4(a) shows that our approach results in more accurate rate limiting (less than 4% error in each setting). While it appears that increasing the burst parameter improves rate limiting accuracy, we note that the values chosen are rather large (a typical value is 10k, while we use 50k and 200k) and may harm network stability. The sensitivity of tc's results to its parameters also highlights the need for careful parameter tuning, which is undesirable given the diversity of network environments.

The first experiment hints that traffic policing, i.e., using packet drops to signal the sender to reduce its rate, is too drastic a rate control mechanism. Our second experiment confirms this. We fix the rate limit at 8Mbps and the burst parameter at 50k, and vary the network RTT from 20 to 100ms. Figures 4(b) and 4(c) show that tc policing results in significantly more packet retransmissions and higher jitter. This shows our approach is indeed more graceful in rate limiting.

V-B Traffic Prioritization

Fig. 5: YouTube playback performance with different prioritization settings.

Consider a scenario with two users, one watching a 720p YouTube video stream and the other downloading a large file, competing for a limited bandwidth of 2Mbps. We vary the priorities of the two traffic types and observe the effect on video playback.

Let p_Y and p_D be the priorities of the YouTube and download traffic, respectively. With p_Y fixed, we vary p_D and measure the amount of video played against time elapsed (we create a video-embedded webpage with a Javascript snippet that periodically queries the YouTube API for playback progress). Note there are two base cases: p_D = 0 corresponds to YouTube traffic with no interference from the download and is the best possible result we can expect, while p_D = p_Y is equivalent to having no prioritization. Figure 5 shows the results. When p_D < p_Y, i.e., YouTube has higher priority, playback performance (inversely related to the duration of pauses, the flat regions in a curve) is strictly better than in the no-prioritization case. Also, performance improves with a decreasing p_D/p_Y ratio.

Not only is our system able to perform fine-grained traffic classification with two types of traffic both running over HTTP, but our traffic prioritization algorithm also produces a noticeable improvement in user experience.

VI Gateway Sharing Simulation

To demonstrate the efficacy of our gateway sharing framework, we consider sixteen gateways sharing a cable link. After explaining the simulation setup, we compare our credit-based allocation to equal sharing, in which the ISP reduces all gateways’ bandwidth to a minimal but acceptable level, e.g., 1 Mbps. With equal sharing, each gateway is assigned a slot that receives this bandwidth until the network capacity is reached. This approach, which is similar to cable operators’ current practices in that gateways are all treated equally, has two problems. First, it risks inefficiency: gateways may occupy a slot without needing the slot’s full bandwidth. Second, gateways that need more bandwidth cannot receive any from gateways that do not. Our credit-based approach addresses both disadvantages, and we show in our simulations that it significantly improves gateway utilities. Moreover, all gateways receive a fair rate allocation, with no one gateway receiving significantly more bandwidth than the others.
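A hedged sketch of the equal-sharing baseline (the function name and the 1 Mbps slot rate are illustrative) makes the two problems concrete: slots are handed out regardless of need, so light gateways strand bandwidth that heavy gateways cannot claim.

```python
def equal_share(demands, capacity, slot_rate=1.0):
    """Each gateway occupies one slot_rate slot, in order, until capacity is
    exhausted; a gateway's usable rate is capped both by its slot and by its
    own demand. Returns (allocations, stranded slot bandwidth)."""
    slots = int(capacity // slot_rate)
    granted = min(slots, len(demands))
    alloc = []
    for i, d in enumerate(demands):
        rate = slot_rate if i < granted else 0.0  # slot granted or not
        alloc.append(min(d, rate))                # can't use more than needed
    stranded = granted * slot_rate - sum(alloc)   # slot bandwidth left unused
    return alloc, stranded
```

With demands of 0.2, 2.0, and 1.0 Mbps on a 2 Mbps link, the light gateway strands 0.8 Mbps of its slot while the third gateway gets nothing.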

After comparing the credit-based and equal-sharing solutions, we consider the solution obtained with the online Algorithm 1 in Section III. Despite the gateways not fully knowing their future credit budgets, the achieved utility is close to optimal. Moreover, all gateways receive a fair share of bandwidth, and gateways actively save and spend credits at different times.

VI-A Gateway Utilities and Simulation Parameters

(a) Jain’s Index for gateway rates.
(b) Credit budgets for representative gateways.
(c) Utility comparison.
Fig. 6: Rate variation, optimal credit budgets, and utilities for the gateways over one week.

We suppose that credit-based sharing is enforced during the congested hours between 6pm and midnight, with half-hour timeslots. Users at each gateway are assumed to make their credit spending decisions based on their probability of using four types of applications: streaming, social networking, file downloads, and web browsing. We use the utility functions in (8) to model the utility received from each application. The probabilities of using each application are adapted from a recent measurement study of per-app usage over time for iOS, Android, Windows, and Mac smartphones and computers [6]. Table I shows the devices at each gateway.

Gateway iPhones Androids Windows laptops Mac laptops
1,5,9,13 1 1 1 1
2,6,10,14 2 0 2 1
3,7,11,15 1 1 1 2
4,8,12,16 2 0 1 1
TABLE I: Devices at each gateway.

We choose the coefficients to be larger in the evening, consistent with the usage measurements in [6], and add random fluctuations to model heterogeneity between users and day-to-day variations in each gateway's behavior.

We assume a fixed total credit budget, with each credit representing 1Mbps. Each gateway's budget is capped at 32 credits at any given time. In addition to the purchased bandwidth, we suppose that gateways send a random amount of traffic over the second tier, with total usage capped at the network capacity. We simulate one week of credit redistributions and bandwidth allocations.

VI-B Bandwidth Allocations

Globally Optimal Solution: We first compute the globally optimal rates, i.e., those that maximize (7), and show that the overall rate allocation is fair. To see this, we compute Jain's Index over the gateways' rates at each time, including second-tier traffic, in Figure 6(a). Jain's Index is relatively low at some times, indicating a large variation in gateways' rates: some gateways use little bandwidth in order to save credits, while others spend many credits and receive large bandwidth rates. Yet if we compute Jain's Index for all gateways' cumulative usage over time, we see in Figure 6(a) that the index quickly converges to 1. The gateways receive comparable cumulative rates, consistent with the fairness property of Prop. 1.
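For reference, Jain's Index over rates x_1, ..., x_n is (sum of x_i)^2 / (n · sum of x_i^2), which equals 1 exactly when all rates are equal:

```python
def jain_index(rates):
    """Jain's fairness index: 1.0 for a perfectly equal allocation,
    approaching 1/n when one of n gateways takes everything."""
    n = len(rates)
    s = sum(rates)
    return (s * s) / (n * sum(x * x for x in rates))
```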

The large variability in gateway allocations at a given time can be seen more clearly in Figure 6(b), which shows the budgets of four representative gateways over time. All four save credits at some times in order to spend at other times. This time flexibility significantly improves the overall utility over equal sharing: the gateways’ total achieved utility increases by 29.7% relative to the equal sharing allocation of each gateway receiving 1.25Mbps at all times.

Figure 6(c) shows the cumulative distribution function (CDF) of the ratio of gateway utilities under credit-based allocation versus equal sharing at different times. We plot the CDF over all gateways and times, as well as individual CDFs for the gateways shown in Figure 6(b). All of the ratio distributions are comparable, indicating that credit-based allocation benefits all gateways' utilities. While the gateways lose some utility nearly half of the time, the utility more than doubles in some periods.

Online Solution: We next compare the globally optimal utilities with those obtained when the gateways follow Algorithm 1. To perform the credit estimation, we use four scenarios, in which all other gateways are assumed to use only streaming, only social networking, etc. Each gateway assumes (falsely) that the other gateways' coefficients are the same as its own, and the probabilities of each scenario are initialized to be uniform. After learning the scenario distribution over the simulation's first four days, the algorithm recovers 84.7% of the optimal utility over the remaining three days. Since this result is achieved with only four scenarios, Algorithm 1 is practically effective in achieving near-optimal rates.
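The paper does not spell out the scenario-learning rule here; one natural (assumed) realization is a Bayesian reweighting of the four scenario probabilities by how well each scenario explains the observed usage:

```python
def update_scenarios(probs, likelihoods):
    """Reweight scenario probabilities by the observed likelihoods, then
    renormalize so they again sum to 1 (a standard Bayesian update)."""
    post = [p * l for p, l in zip(probs, likelihoods)]
    total = sum(post)
    return [p / total for p in post]
```

Starting from a uniform prior, a scenario that explains the observations twice as well ends up twice as probable, while an incompatible scenario is eliminated.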

As with the optimal solution, at any given time gateways’ rates can be very different: Jain’s indices in Figure 7(a) for all gateways’ usage at instantaneous times can be quite low. However, all gateways achieve similar cumulative rates: Jain’s index of the cumulative rates quickly converges to 1. Indeed, Figure 7(b) shows the budgets of four representative gateways, indicating that, as with the optimal solution (Figure 6(b)), the gateways vary their spending with time. Thus, incentivizing gateways to delay some of their usage significantly improves users’ overall satisfaction and utility.

(a) Jain’s Index for gateway rates.
(b) Credit budgets for representative gateways.
Fig. 7: Rate variation and optimal credit budgets with the online algorithm for the gateways over one week.

VII Related Work

Using pricing to manage network congestion is a long-studied research area [2]. Our work differs in that we target broadband users on flat-fee service plans, prompting us to use virtual pricing instead of having users pay extra fees for prioritized access. Related to our work is [7], which proposes a scheme for users to access higher-quality service by spending tokens, but their work is mostly theoretical. We present a complete solution, from algorithms to implementation, for a specific problem of peak-hour broadband access.

From a systems perspective, there has been much recent work on developing smart home gateways with plain Linux/Windows or open-source router software such as OpenWrt. Smart home gateways have been used for network measurement [8, 9], providing intuitive interfaces for home network management [10, 11], and better QoS provisioning [12, 13]. However, we are not aware of any work on coordinating bandwidth usage across households. We also develop our own incoming rate limiting tool, as off-the-shelf tools (e.g., Linux tc) are insufficient for our application. The Congestion Manager (CM) project [14] shares our goal of reducing congestion at the network edge, but we approach the problem by incentivizing users to reduce unneeded usage with a virtual currency scheme, while CM provides an API for client and server applications to adapt to varying network conditions. CM thus requires sender-side support.

Receiver-side rate control is mostly done by explicitly controlling the receive window [15] or the receive socket buffer [16]. These techniques have been applied to implementing low-priority transfers [17] and prioritizing traffic [18, 19]. Compared to these approaches, our solution does not require modifying client devices or tracking both the RTT and the number of active connections. It also avoids interfering with Linux's own buffer autotuning mechanism [20], which is crucial for connections with high bandwidth-delay products. Our approach of implicit receive window control is most similar to that of Trickle [21], but the goals are different: Trickle is designed for users without administrative privileges to voluntarily rate limit their applications, while we are interested in imposing mandatory rate limits transparently to users. For rate limiting across multiple devices, [21] proposes a distributed architecture in which a centralized scheduler (e.g., the gateway) coordinates rate limiting over a home network. Our solution, using a split-connection proxy, allows rate limiting to be done solely inside the home gateway, avoiding the overhead of distributed coordination.

VIII Concluding Remarks

In this paper, we propose to solve peak-hour broadband network congestion problems by pushing congestion management to the network edge. Our solution performs a two-level bandwidth allocation: in Level 1, gateways purchase bandwidth on a shared link using virtual credits, and in Level 2 they divide the purchased bandwidth among their apps and devices. We show analytically that our credit distribution scheme yields a fair bandwidth allocation across gateways and describe our implementation of the bandwidth purchasing and app prioritization on commodity wireless routers. Our implementation can successfully enforce app priorities and increase users’ satisfaction. Finally, we simulate the behavior of sixteen gateways sharing a single link. We show that our algorithm yields a fair bandwidth allocation that significantly improves user utility relative to a baseline equal-sharing scheme.

By implementing congestion management at the network edge, we obtain a decentralized, personalized solution that respects user privacy and requires minimal support from ISP infrastructure and user devices. Since users make their own decisions regarding credit spending and bandwidth allocation, our incentive mechanisms empower users to moderate their demand so as to limit network congestion. While we implement this solution for cable networks, our methodology is applicable to other access technologies, e.g., cellular, that involve shared medium access. Such technologies, wireless and wired, will increasingly need new congestion management mechanisms as user demand for bandwidth continues to grow.


  • [1] Cisco Systems, “Cisco visual networking index: Forecast and methodology, 2011–2016,” May 2012, http://tinyurl.com/VNI2011.
  • [2] S. Sen, C. Joe-Wong, S. Ha, and M. Chiang, “A survey of smart data pricing: Past proposals, current plans, and future trends,” arXiv, September 2013, http://arxiv.org/abs/1201.4197.
  • [3] Comcast, “Frequently asked questions about network management,” 2013, http://customer.comcast.com/Pages/FAQViewer.aspx?seoid=frequently-asked-questions-about-network-management.
  • [4] F. Kelly, A. K. Maulloo, and D. H. K. Tan, “Rate control for communication networks: Shadow prices, proportional fairness, and stability,” Journal of the Operational Research Society, vol. 49, pp. 237–252, 1998.
  • [5] A. Consiglio, F. Cocco, and S. A. Zenios, “Scenario optimization asset and liability modelling for individual investors,” Annals of Operations Research, vol. 152, no. 1, pp. 167–191, 2007.
  • [6] J. Y. Chung, Y. Choi, B. Park, and J.-K. Hong, “Measurement analysis of mobile traffic in enterprise networks,” in Proc. APNOMS, 2011.
  • [7] D. Lee, J. Mo, J. Walrand, and J. Park, “A token pricing scheme for internet services,” in Economics of Converged, Internet-Based Networks. Springer, 2011, pp. 26–37.
  • [8] S. Sundaresan, W. de Donato, N. Feamster, R. Teixeira, S. Crawford, and A. Pescapè, “Broadband Internet performance: A view from the gateway,” in Proc. ACM SIGCOMM, 2011.
  • [9] A. Patro, S. Govindan, and S. Banerjee, “Observing home wireless experience through WiFi APs,” in Proc. ACM MobiCom, 2013.
  • [10] R. Mortier, T. Rodden, P. Tolmie, T. Lodge, R. Spencer, A. Crabtree, J. Sventek, and A. Koliousis, “Homework: Putting interaction into the infrastructure,” in Proc. ACM UIST, 2012.
  • [11] J. Yang, W. Edwards, and D. Haslem, “Eden: Supporting home network management through interactive visual tools,” in Proc. ACM UIST, 2010.
  • [12] C. E. Palazzi, M. Brunati, and M. Roccetti, “An OpenWRT solution for future wireless homes,” in Proc. IEEE ICME, 2010.
  • [13] C. Gkantsidis, T. Karagiannis, P. Key, B. Radunovi, E. Raftopoulos, and D. Manjunath, “Traffic management and resource allocation in small wired/wireless networks,” in Proc. ACM CoNEXT, 2009.
  • [14] H. Balakrishnan, H. S. Rahul, and S. Seshan, “An integrated congestion management architecture for Internet hosts,” in Proc. ACM SIGCOMM, 1999.
  • [15] L. Kalampoukas, A. Varma, and K. K. Ramakrishnan, “Explicit window adaptation: A method to enhance TCP performance,” in Proc. IEEE INFOCOM, 1998.
  • [16] J. Semke, J. Mahdavi, and M. Mathis, “Automatic TCP buffer tuning,” in Proc. ACM SIGCOMM, 1998.
  • [17] P. Key, L. Massoulié, and B. Wang, “Emulating low-priority transport at the application layer: A background transfer service,” in Proc. ACM SIGMETRICS/Performance, 2004.
  • [18] N. T. Spring, M. Chesire, M. Berryman, V. Sahasranaman, T. Anderson, and B. Bershad, “Receiver based management of low bandwidth access links,” in Proc. IEEE INFOCOM, 2000.
  • [19] Y. Im, C. Joe-Wong, S. Ha, S. Sen, T. T. Kwon, and M. Chiang, “AMUSE: Empowering users for cost-aware offloading with throughput-delay tradeoffs,” in Proc. IEEE INFOCOM, 2013.
  • [20] M. Fisk and W.-c. Feng, “Dynamic right-sizing in TCP,” in Proc. LACSI Symposium, 2001.
  • [21] M. A. Eriksen, “Trickle: A userland bandwidth shaper for Unix-like systems,” in Proc. USENIX Annual Technical Conference, 2005.

Appendix A Proof of Lemma 1

We proceed by induction. At time t = 0, the claim on the sum of the gateways’ budgets clearly holds from the budget initialization. Supposing that it holds at time t, we then calculate

Appendix B Proof of Lemma 2

We first note that (1) is equivalent to the statement that

Then if for all gateways , we obtain the system of equations


It suffices to show that (11) implies the proposition.

We proceed by induction on . If , then clearly (11) is exactly our desired result, since . We now suppose that the proposition holds for and show that it holds for . From (11), we have

Substituting this equality into (11) for , we have for all such ,

Thus, we have upon rearranging that

Simplifying, we obtain

for all . By induction, this implies that for all , and the proposition follows upon solving for .

Appendix C Proof of Proposition 1

We first show that given a distribution of budgets at a fixed time , there exists a set of gateway spending decisions such that for all gateways . Suppose that each gateway spends credits at time . Then Lemma 1’s budget conservation allows us to conclude that gateway ’s budget at time is

We now observe that since each , we can apply Lemma 2 to conclude that

for all gateways and . We then rearrange this equation to find the first part of the proposition:

The time average follows immediately upon dividing by t and taking the limit as t → ∞.

Appendix D Proof of Proposition 2

To prove the first part of the proposition, we note that if each , then (1) yields

where the inequality comes from each gateway’s budget constraint . Thus, at time , we have

as desired, using the fact that at any time . We obtain (3) by taking , substituting for , and simplifying.

To prove the second part of the proposition, suppose that gateways and both have zero budgets at time , i.e., , but that . Since each , such a time must exist. But then from (1), , and since each , we have . But then , and since , we have , which is a contradiction. Thus, at most one gateway can have zero budget in any given time period.

Appendix E Proof of Proposition 3

We first note that at each time ,

An inductive argument then shows that

Expanding the sums and subtracting then yields the proposition.

Appendix F Proof of Proposition 4

Suppose that solve (7), and let denote the corresponding Lagrange multiplier for the constraint , with the multiplier for the constraint . Since the are strictly concave, it suffices to show that these multipliers satisfy the Karush-Kuhn-Tucker conditions for (5), augmented by all gateways’ constraints:

Since the budget constraints are identical to those of (7), it suffices to show that


where we use (6) to sum over the appropriate multipliers . However, this equation is just one of the KKT conditions for (7): the only change between (7) and (5) is the addition of utility terms , which are additively decoupled from gateway ’s spending decisions . Thus, (12) must be satisfied by the and multipliers , . Each gateway is thus optimizing its own utility, given other gateways’ credit spending decisions .

Appendix G Rate Control by Controlling Buffer Reads

Consider the model in Figure 8: the queue is the proxy’s receive buffer, B is the receive buffer size (it can change over time due to Linux’s buffer autotuning mechanism, but this does not concern us here), and at time t, x(t) is the fill rate (the sending rate, which the proxy cannot directly control), d(t) is the drain rate (how frequently the proxy issues reads), q(t) is the queue length, and w(t) = B − q(t) is the advertised window size.

Suppose window updates happen at intervals of δ. Then the window update equation is

w(t + δ) = B − q(t + δ) = w(t) − (x(t) − d(t)) δ, (13)

and taking a fluid approximation by letting δ → 0, we have

dw/dt = d(t) − x(t). (14)

Our goal of rate limiting is equivalent to making x(t) = r for large enough t by controlling d(t). By setting d(t) = r for all t, it is not difficult to verify from Eq. (14) that at equilibrium we have x = r and w = r · RTT. (Note that if we throttle a connection through TCP flow control, a static equilibrium can indeed be achieved: the rate is now limited by the receive window rather than by self-induced congestion, so the usual sawtooth time evolution no longer occurs.)
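Under the additional (assumed) model that the window-limited sender transmits at x(t) = w(t)/RTT, Eq. (14) can be integrated numerically to confirm that the window settles at the equilibrium w = r · RTT:

```python
def window_equilibrium(r, rtt, w0, steps=200_000, dt=1e-4):
    """Euler-integrate dw/dt = d(t) - x(t) with d(t) = r and x(t) = w(t)/RTT.
    The window decays exponentially toward the equilibrium w* = r * RTT."""
    w = w0
    for _ in range(steps):
        w += (r - w / rtt) * dt
    return w
```

For a 1 MB/s target rate and 100 ms RTT, the window converges to 100 kB from any starting value.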

Fig. 8: Receive buffer model.