Achieving Fair Network Equilibria with Delay-based Congestion Control Algorithms

Miguel Rodríguez-Pérez, Sergio Herrería-Alonso, Manuel Fernández-Veiga, 
Andrés Suárez-González and Cándido López-García
The authors are with the Telematics Engineering Dept., Univ. of Vigo, 36310 Vigo, Spain. Tel.:+34 986 813459; fax:+34 986 812116; email: miguel@det.uvigo.es (M. Rodríguez-Pérez). This work was supported by the “Ministerio de Educación y Ciencia” through the project TSI2006-12507-C03-02 of the “Plan Nacional de I+D+I” (partly financed with FEDER funds).
Abstract

Delay-based congestion control algorithms provide higher throughput and stability than traditional loss-based AIMD algorithms, but they are inherently unfair against older connections when the queuing and the propagation delay cannot be measured accurately and independently. This paper presents a novel measurement algorithm whereby fairness between old and new connections is preserved. The algorithm does not modify the dynamics of congestion control, and runs entirely in the server host using locally available information.

©2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/LCOMM.2008.080372

Index Terms

Delay-based congestion control, FAST TCP, persistent congestion, fairness.

I Introduction

Delay-based congestion avoidance (DCA) algorithms, such as FAST or Vegas, achieve high throughput in high-speed, long-latency networks [1, 2]. But it is also well known that their equilibrium transmission rates are very sensitive both to the accuracy of the estimated round-trip propagation delay and to the estimated queuing delay. Measurement errors in either of these quantities may lead to severe unfairness. A situation like that arises, for instance, when a new flow encounters a state where the queue ahead of the bottleneck link never becomes empty, thus preventing it from correctly estimating the propagation delay along its network path. This harmful, self-sustained condition, termed persistent congestion, was identified as early as [3].

In [4], a mathematical analysis is provided for a scenario where persistent congestion is due to the successive arrival of a set of everlasting flows to an empty router queue. Although it has been argued that such a scenario is quite unlikely, the arrival of just a single new flow to a saturated link suffices to trigger unfairness, as long as some of the older flows do not depart. Such a configuration, where a small group of newborn flows finds a link in equilibrium (bandwidth equally distributed) shared by preexisting long-lived flows, was precisely the setting analyzed in [5], and in fact includes [4] as a particular case. As a possible solution to the persistent congestion problem, [5] suggests briefly throttling down each newly started flow to allow queues to empty, and thus obtain a reliable estimate of the propagation delay. We have found, though, that this approach is not always effective.

We show that such a cautious source can fail to measure a correct propagation delay under quite general circumstances, and we present a novel solution able to remove the undesired effects of persistent congestion in arbitrary conditions. As in [5], our proposal only requires modifying the sending end host, and it attains a throughput as high as (and a buffer utilization as low as) FAST does.

II Equilibrium Rate of Recent Arrivals

Despite their differences at the packet level, all congestion control algorithms can be mathematically described, at the flow level, by the dynamical equation

$\dot{w}_i(t) = \kappa_i(t)\left(1 - \dfrac{q_i(t)}{u_i(t)}\right)$    (1)

where $w_i(t)$ denotes the congestion window at time $t$ for flow $i$, $\kappa_i(t)$ is a gain function, $u_i(t)$ is a suitable utility function, and $q_i(t)$ is the congestion signal [1]. The transmission rate is then given by $x_i(t) = w_i(t)/T_i(t)$, where $T_i(t)$ is the round-trip time (RTT). For DCA algorithms, $q_i(t)$ is the queuing delay. TCP Vegas uses $\kappa_i(t) = 1/T_i(t)$, whereas FAST takes $\kappa_i(t) = \gamma\alpha_i$, where $\gamma$ and $\alpha_i$ are protocol parameters. Both instances, FAST and Vegas, use $u_i(t) = \alpha_i/x_i(t)$ and have therefore equal equilibrium structure, determined by (1), namely

$x_i = \dfrac{\alpha_i}{q_i} = \dfrac{\alpha_i}{T_i - \hat{d}_i}$    (2)

where $\hat{d}_i$ is the propagation delay as estimated by flow $i$, so that $T_i - \hat{d}_i$ is the queuing delay as perceived by that flow.
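To make this flow-level model concrete, the short Python sketch below numerically integrates (1) for FAST over a single bottleneck. It is only a toy: the Euler discretization, the fluid queue model $q(t) = b(t)/c$, and every parameter value (capacity, delay, $\alpha$, $\gamma$) are our own illustrative assumptions, not the settings of the experiments reported later.

# Toy flow-level integration of model (1) for FAST over a single bottleneck.
# All parameter values and the fluid queue q(t) = b(t)/c are illustrative
# assumptions, not the settings of the simulations reported below.

def fast_fluid(n=3, c=12500.0, d=0.05, alpha=200.0, gamma=0.5,
               dt=1e-3, t_end=30.0):
    """n flows, capacity c [pkt/s], propagation delay d [s], alpha [pkt]."""
    w = [2.0] * n                  # congestion windows [pkt]
    b = 0.0                        # bottleneck backlog [pkt]
    for _ in range(int(t_end / dt)):
        q = b / c                              # queuing delay [s]
        x = [wi / (d + q) for wi in w]         # rates x_i = w_i / T_i
        b = max(0.0, b + (sum(x) - c) * dt)    # fluid queue dynamics
        # eq. (1) with kappa_i = gamma*alpha and u_i = alpha/x_i
        w = [wi + gamma * alpha * (1.0 - q * xi / alpha) * dt
             for wi, xi in zip(w, x)]
    return x, q

rates, q = fast_fluid()
print(rates)   # each rate settles near c/n = alpha/q, i.e. about 4167 pkt/s
print(q)       # queuing delay settles near n*alpha/c = 0.048 s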

We consider in this paper the arrival of a single new flow, indexed by $n+1$, at a bottleneck link of capacity $c$ shared by a set of $n$ FAST flows. We also assume that each existing connection knows its true round-trip propagation delay, $\hat{d}_i = d_i$ (that is, that a working algorithm is in place to account for the persistent congestion bias; Section IV presents such an algorithm). Hence, each of them is receiving $c/n$ units of bandwidth. Following the model in [5], every flow contributes $\alpha$ packets to the router queues (we take a common $\alpha_i = \alpha$), so flow $n+1$ sees a propagation delay of $\hat{d}_{n+1} = d_{n+1} + n\alpha/c$. As a result of this overestimated value, in the equilibrium it grabs a rate

$x_{n+1} = \dfrac{2c}{1 + \sqrt{4n+1}}$    (3)

while the new common equilibrium rate for the older flows is

$x_i = \dfrac{2c}{2n + 1 + \sqrt{4n+1}}, \qquad i = 1, \dots, n.$    (4)

Since each older flow still keeps $\alpha$ packets queued at the bottleneck, that is, $x_i = \alpha/q$, we obtain the queuing delay in this biased equilibrium,

$q = \dfrac{\alpha}{2c}\left(2n + 1 + \sqrt{4n+1}\right)$    (5)

The transmission rates given by (3) and (4) are clearly unfair, in that the recent arrival obtains far more bandwidth than the rest. Moreover, the unfairness worsens with the number of flows $n$: the ratio $x_{n+1}/x_i = \bigl(2n+1+\sqrt{4n+1}\bigr)\big/\bigl(1+\sqrt{4n+1}\bigr)$ grows as $\sqrt{n}$.
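For a quick numerical feel of how skewed this equilibrium is, the sketch below evaluates (3)–(5) as written above. The function name, units and example figures (a hypothetical link of 12500 pkt/s, roughly 100 Mb/s with 1 kB packets) are ours.

from math import sqrt

def biased_equilibrium(n, c, alpha):
    """Equilibrium reached when one new flow joins n FAST flows that already
    know their true propagation delays (eqs. (3)-(5)).
    n: preexisting flows, c: capacity [pkt/s], alpha: per-flow backlog [pkt].
    Returns (x_new, x_old, q)."""
    s = sqrt(4 * n + 1)
    x_new = 2 * c / (1 + s)                 # eq. (3), rate of the newcomer
    x_old = 2 * c / (2 * n + 1 + s)         # eq. (4), rate of each old flow
    q = alpha * (2 * n + 1 + s) / (2 * c)   # eq. (5), queuing delay [s]
    return x_new, x_old, q

x_new, x_old, q = biased_equilibrium(n=10, c=12500.0, alpha=200.0)
print(x_new / x_old)        # ~3.7: the newcomer gets almost four old shares
print(x_new + 10 * x_old)   # equals c, a quick consistency check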

We claim that the fair equilibrium, $x_i = c/(n+1)$ for every flow, is achievable using a slightly modified procedure to measure the propagation delay (see Section IV). Hence, since the onset of persistent congestion can be completely avoided, any new flow will find the bottleneck link capacity fully and equally shared among the older ones, as long as their rates have stabilized since the last arrival. Consequently, there is no need to consider the case of successive flow arrivals, as in [5], and the assumption of a single recent arrival entails no loss of generality.

III The Rate Reduction Approach

The solution presented in [5] consists in transiently restraining the transmission rate of a new flow by a given factor so that router queues can eventually empty, thus giving new connections a chance to directly measure the true round-trip propagation delay. Unfortunately, despite the reduction of its rate, the new connection is not always able to detect queue emptiness. Note that, as the new flow drains the queues by reducing its own rate, competing flows respond by increasing theirs. Hence, the new flow will only obtain the true propagation delay if the queues empty before the existing flows become aware of this event, that is, if the time required to empty the queues is less than the RTT of the existing flows.

Let $B = cq$ be the total backlog buffered at the core of the network in equilibrium. This backlog is drained from the queue at a rate equal to the bottleneck link capacity minus the sum of the transmission rates of all active flows. In the most favorable case, the new connection completely pauses its transmission ($x_{n+1} = 0$). Then, if all the existing flows experience the same propagation delay ($d_i = d$), and so the same RTT ($T_i = d + q$), the fairness condition becomes

$\dfrac{B}{c - \sum_{i=1}^{n} x_i} \leq d + q$    (6)

Finally, substituting (5), (4) and $B = cq$ into (6), it follows that

$d \geq \dfrac{n\alpha}{c}\,\dfrac{2n + 1 + \sqrt{4n+1}}{1 + \sqrt{4n+1}}$    (7)

Thus, the rate reduction method is only effective when the round-trip propagation delay of the competing flows exceeds the lower bound calculated in (7). Since this bound scales as $\Theta\bigl(n^{3/2}\bigr)$ with the number of active flows, no sensible default duration for the rate reduction can be chosen in advance.
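A quick way to see why no fixed pause works is to evaluate the bound (7) numerically, as in the sketch below; once more, the capacity and $\alpha$ values are illustrative assumptions of ours.

from math import sqrt

def rr_min_delay(n, c, alpha):
    """Lower bound (7): the smallest common propagation delay for which the
    rate reduction approach can still empty the bottleneck queue before the
    n existing flows react. c in [pkt/s], alpha in [pkt], result in [s]."""
    s = sqrt(4 * n + 1)
    return n * alpha * (2 * n + 1 + s) / (c * (1 + s))

# The bound grows roughly as n**1.5, so any fixed rate-reduction setting
# eventually fails once enough flows share the link.
for n in (2, 8, 32, 128):
    print(n, rr_min_delay(n, c=12500.0, alpha=200.0))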

IV A Novel Solution

We noticed that, once the newly arriving flow stabilizes, it can indirectly obtain a good estimate of its actual round-trip propagation delay. As already pointed out in Section II, the new flow overestimates its propagation delay as $\hat{d}_{n+1} = d_{n+1} + n\alpha/c$. Since $\alpha$ is known, it suffices to estimate $n$ and $c$ to get the real $d_{n+1}$.

A good estimate of $n$ and $c$ can be obtained even if the router queues never get completely empty. In fact, as we will show, it suffices to indirectly measure the queue length variation after a short change of the transmission rate. Let $T$ be the RTT of the tagged flow once it reaches a stable throughput. If this connection modifies its transmission rate to $(1-\beta)x_{n+1}$, with $0 < |\beta| < 1$, for a brief time $\tau$ (of the same order as $T$, so that the rest of the flows do not adjust their own transmission rates), it will measure a new RTT $T'$ when it resumes its transmission. Let $\Delta = T - T'$. Under such circumstances,

$\Delta = \dfrac{\tau}{c}\left[c - (1-\beta)\,x_{n+1} - \sum_{i=1}^{n} x_i\right] = \dfrac{\beta\, x_{n+1}\, \tau}{c}$    (8)

Substituting (3) and (4) in (8), and solving for $n$, yields

$n = \dfrac{\beta\tau}{\Delta}\left(\dfrac{\beta\tau}{\Delta} - 1\right)$    (9)

Now, using (9) and (3), the correct propagation delay can be adjusted as

$d_{n+1} = \hat{d}_{n+1} - \dfrac{n\alpha}{c} = \hat{d}_{n+1} - \dfrac{2 n \alpha}{x_{n+1}\bigl(1 + \sqrt{4n+1}\bigr)}$

where $x_{n+1}$ is the equilibrium rate measured by the flow itself, so the bottleneck capacity $c$ need not be known explicitly.

Note that using positive values for $\beta$ causes the queue to drain, and it is then possible to exhaust the backlog before the end of the measurement; in that case (8) no longer holds and the number of flows is overestimated. To avoid this, it suffices to use small negative values for $\beta$, causing the queuing delay to increase instead. Although with insufficiently dimensioned buffers this may cause some packet drops, such a condition is easily detected and can be avoided by using smaller values of $|\beta|$ in subsequent measurements.
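Putting the pieces together, a sender could implement the correction roughly as follows. This is a hedged sketch of our reading of the procedure: the function and argument names are hypothetical, $\tau$ is the chosen length of the rate perturbation, and every input is measurable locally at the sender.

from math import sqrt

def corrected_propagation_delay(d_hat, rtt_before, rtt_after,
                                rate, alpha, beta, tau):
    """Remove the persistent-congestion bias from a flow's delay estimate.

    d_hat       current (over)estimated propagation delay [s]
    rtt_before  stable RTT T measured before the perturbation [s]
    rtt_after   RTT T' measured right after resuming the nominal rate [s]
    rate        the flow's stable equilibrium rate x_{n+1} [pkt/s]
    alpha       FAST's per-flow target backlog [pkt]
    beta        relative rate change during the perturbation; a small
                negative value (brief rate increase) keeps eq. (8) valid
    tau         duration of the perturbation [s]
    """
    delta = rtt_before - rtt_after        # RTT change caused by the perturbation
    r = beta * tau / delta
    n_est = r * (r - 1.0)                 # eq. (9): number of competing flows
    # eq. (3) gives the capacity from the flow's own measured rate:
    # c = rate * (1 + sqrt(4n + 1)) / 2
    c_est = rate * (1.0 + sqrt(4.0 * n_est + 1.0)) / 2.0
    return d_hat - n_est * alpha / c_est  # subtract the n*alpha/c bias

Under the fluid model above the returned value equals $d_{n+1}$ exactly; in practice it would replace the flow's current propagation delay estimate before transmission continues.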

V Performance Analysis

(a) Single-bottleneck topology.
(b) Multiple-bottleneck topology; every link has the same capacity and propagation delay.
Figure 1: Network topologies used in the simulation experiments.
(a) Throughput comparison (FAST).
(b) Throughput comparison (modified FAST).
(c) Core queue length comparison.
Figure 2: Throughput and core queue length comparisons.
(a) Impact of round-trip propagation delay.
(b) Impact of the number of preexisting flows.
(c) Impact of background traffic.
Figure 3: Simulation experiment results.

To verify these claims, we report several ns-2 simulation experiments. In the first one, five FAST connections share the bottleneck link of Fig. 1(a), starting one after another at fixed intervals. Routers' buffers are large enough to avoid packet losses, and sources always have data to send.

Fig. 2(a) shows the instantaneous throughputs of the FAST flows with the original congestion avoidance mechanism. As expected, FAST strongly favors new sources, and recent connections get larger throughput than older flows. With the modified measurement method this bias disappears and the network bandwidth is shared fairly (Fig. 2(b)); for the estimation of $n$ we employed a small negative $\beta$, so that the bottleneck queue could not become empty during the measurement. Also, the average queue length at the bottleneck (Fig. 2(c)) is consistently lower because, due to persistent congestion, the backlog of FAST exceeds the target value of $\alpha$ packets per source, whereas our proposal does not exceed it.

A second test was run over the same network to compare the proposed algorithm with the original FAST protocol and the rate reduction (RR) variant. Consider a set of existing FAST flows, aware of their true propagation delays and sharing the bandwidth uniformly. Once their rates stabilize, a new flow starts. The delay of the link was set appropriately so as to obtain the desired RTT. Following customary practice, we measured the fairness between the new and the existing connections as the ratio $F = \bar{x}_{n+1} \big/ \bigl(\tfrac{1}{n}\sum_{i=1}^{n} \bar{x}_i\bigr)$, where $\bar{x}_{n+1}$ is the average transmission rate of the new flow and $\bar{x}_i$ denotes the average rate of flow $i$. Fig. 3(a) compares the performance of the three protocols as a function of the round-trip propagation delay. As expected, with FAST the new connection obtains a higher throughput. With the RR method, the bandwidth sharing depends on the experienced propagation delay: for delays below the threshold given by (7), the sharing becomes increasingly unfair. In contrast, fairness is preserved if the novel solution is used. Further, for any given RTT, the unfairness worsens with the number of flows, as Fig. 3(b) clearly shows, both for FAST and for the RR method; only the modified version allocates bandwidth equally.

A more realistic and stringent topology was also considered. In Fig. 1(b), the network (a variant of the classic parking-lot topology) has multiple bottlenecks, with five flows running from the source nodes to a common destination node. One of the flows starts its transmission only after the rest of the flows have stabilized. Additionally, in a similar way as in [1], background traffic was simulated with a Pareto on/off flow (fixed shape factor and average burst/idle times) whose peak rate was swept over a range of values. Fig. 3(c) shows the results. Not surprisingly, with both FAST and the RR method, fairness improves as the peak rate of the background traffic increases. The reason is that, during active periods, FAST flows reduce their rates as router queues fill with background traffic, so in the idle periods the new flow can obtain a better estimate of its propagation delay before the queues fill again. In any case, our solution ensures fairness irrespective of the amount of background traffic introduced.

VI Conclusions

This paper has demonstrated that the rate reduction approach fails to solve persistent congestion in networks shared by many flows, as it cannot always completely drain the bottleneck queues, and thus is unable to obtain an accurate measure of the propagation delay.

We have presented a novel solution that does not rely on a direct measurement of the propagation delay. Instead, by carefully modulating its own transmission rate, the source is able to calculate the error in its estimate of the round-trip propagation delay and thus share the link evenly with the other FAST flows from then on.

References

  • [1] D. X. Wei, C. Jin, S. H. Low, and S. Hegde, “FAST TCP: Motivation, architecture, algorithms, performance,” IEEE/ACM Trans. Networking, vol. 14, no. 6, pp. 1246–1259, Dec. 2006.
  • [2] L. Brakmo, S. O’Malley, and L. Peterson, “TCP Vegas: New techniques for congestion detection and avoidance,” in Proc. SIGCOMM’94, 1994, pp. 24–35.
  • [3] J. Mo, R. La, V. Anantharam, and J. Walrand, “Analysis and comparison of TCP Reno and Vegas,” in Proc. INFOCOM’99, 1999, pp. 1556–1563.
  • [4] S. H. Low, L. Peterson, and L. Wang, “Understanding Vegas: a duality model,” J. ACM, vol. 49, no. 2, pp. 207–235, Mar. 2002.
  • [5] T. Cui, L. Andrew, M. Zukerman, and L. Tan, “Improving the fairness of FAST TCP to new flows,” IEEE Commun. Lett., vol. 10, no. 5, pp. 414–416, May 2006.