Constant Factor Approximation for ATSP with Two Edge Weights

Ola Svensson
École Polytechnique Fédérale de Lausanne. Supported by ERC Starting Grant 335288-OptApprox.
   Jakub Tarnawski
École Polytechnique Fédérale de Lausanne.
   László A. Végh
London School of Economics. Supported by EPSRC First Grant EP/M02797X/1.

We give a constant factor approximation algorithm for the Asymmetric Traveling Salesman Problem on shortest path metrics of directed graphs with two different edge weights. For the case of unit edge weights, the first constant factor approximation was given recently by Svensson. This was accomplished by introducing an easier problem called Local-Connectivity ATSP and showing that a good solution to this problem can be used to obtain a constant factor approximation for ATSP. In this paper, we solve Local-Connectivity ATSP for two different edge weights. The solution is based on a flow decomposition theorem for solutions of the Held-Karp relaxation, which may be of independent interest.

1 Introduction

The traveling salesman problem — one of finding the shortest tour of cities — is one of the most classical optimization problems. Its definition dates back to the 19th century and since then a large body of work has been devoted to designing “good” algorithms using heuristics, mathematical programming techniques, and approximation algorithms. The focus of this work is on approximation algorithms. A natural and necessary assumption in this line of work, which we also make throughout this paper, is that the distances satisfy the triangle inequality: for any triple $u, v, w$ of cities, we have $d(u, w) \le d(u, v) + d(v, w)$, where $d$ denotes the pairwise distance between cities. In other words, it is not more expensive to take the direct path compared to a path that makes a detour.

With this assumption, the approximability of TSP turns out to be a very delicate question that has attracted significant research efforts. Specifically, one of the first approximation algorithms (Christofides’ heuristic [Christofides76]) was designed for the symmetric traveling salesman problem (STSP), where we assume symmetric distances ($d(u, v) = d(v, u)$). Several works (see e.g. [FriezeGM82, AsadpourGMGS10, Oveis11, Anari14, Svensson15]) have addressed the more general asymmetric traveling salesman problem (ATSP) where we make no such assumption.

However, there are still large gaps in our understanding of both STSP and ATSP. In fact, for STSP, the best approximation algorithm remains Christofides’ $3/2$-approximation algorithm from the 70’s [Christofides76]. For the harder ATSP, the state of the art is an $O(\log n / \log\log n)$-approximation algorithm by Asadpour et al. [AsadpourGMGS10] and a recent $O(\operatorname{poly}\log\log n)$-estimation algorithm (an estimation algorithm is a polynomial-time algorithm for approximating/estimating the optimal value without necessarily finding a solution to the problem) by Anari and Oveis Gharan [Anari14]. On the negative side, the best inapproximability results only say that STSP and ATSP are hard to approximate within factors $123/122$ and $75/74$, respectively [KarpinskiLS15]. Closing these gaps is a major open problem in the field of approximation algorithms (see e.g. “Problem 1” and “Problem 2” in the list of open problems in the recent book by Williamson and Shmoys [WSbook]). What is perhaps even more intriguing about these questions is that we expect that a standard linear programming (LP) relaxation, often referred to as the Held-Karp relaxation, already gives better guarantees. Indeed, it is conjectured to give a guarantee of $4/3$ for STSP and a constant guarantee (conjecturally as small as $2$) for ATSP.

An equivalent formulation of STSP and ATSP from a more graph-theoretic point of view is the following. For STSP, we are given a weighted undirected graph $G = (V, E, w)$, where $w : E \to \mathbb{R}_{\ge 0}$, and we wish to find a multisubset $F$ of edges of minimum total weight such that $(V, F)$ is connected and Eulerian. Recall that an undirected graph is Eulerian if every vertex has even degree. We also remark that we use the term multisubset as the solution may use the same edge several times. An intuitive point of view on this definition is that $G$ represents a road network, and a solution is a tour that visits each vertex at least once (and may use a single edge/road several times). The definition of ATSP is similar, with the differences that the input graph is directed and the output is Eulerian in the directed sense: the in-degree of each vertex equals its out-degree. Having defined the traveling salesman problem in this way, there are several natural special cases to consider. For example, what if $G$ is planar? Or, what if all the edges/roads have the same length, i.e., if $G$ is unweighted?
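As a concrete illustration of this graph-theoretic definition, the following sketch (ours, not from the paper; names and vertex labels are assumptions for illustration) checks whether a multiset of directed edges is a valid ATSP tour in this sense, i.e., Eulerian and connected while touching every vertex:

```python
from collections import Counter, defaultdict

def is_atsp_tour(n, edges):
    """Check that a multiset of directed edges over vertices {0,...,n-1}
    is Eulerian (in-degree equals out-degree at every vertex, all
    positive) and that its support is connected."""
    indeg, outdeg = Counter(), Counter()
    adj = defaultdict(set)
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        adj[u].add(v)
        adj[v].add(u)  # for Eulerian digraphs, weak connectivity suffices
    if any(indeg[v] != outdeg[v] or outdeg[v] == 0 for v in range(n)):
        return False
    # BFS/DFS over the undirected support
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

# A closed walk through 3 vertices that reuses the edge (0, 1):
print(is_atsp_tour(3, [(0, 1), (1, 2), (2, 0), (0, 1), (1, 0)]))  # True
```

Note that for a directed graph in which every vertex is balanced, weak connectivity of the support already implies the existence of a single closed Eulerian walk, which is why the check above is enough.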

For planar graphs, we have much better algorithms than in general. Grigni, Koutsoupias and Papadimitriou [GrigniKP95] first obtained a polynomial-time approximation scheme for STSP restricted to unweighted planar graphs, which was later generalized to edge-weighted planar graphs by Arora et al. [AroraGKKW98]. More recently, ATSP on planar graphs (and more generally bounded genus graphs) was shown to admit constant factor approximation algorithms (first by Oveis Gharan and Saberi [Oveis11] and later by Erickson and Sidiropoulos [EricksonS14] who improved the dependency on the genus).

In contrast to planar graphs, STSP and ATSP remain APX-hard for unweighted graphs (ones where all edges have identical weight) and, until recently, there were no better algorithms for these cases. Then, in a recent series of papers, the approximation guarantee of $3/2$ was finally improved for STSP restricted to unweighted graphs. Specifically, Oveis Gharan, Saberi and Singh [GharanSS11] first gave an approximation guarantee of $3/2 - \epsilon_0$ for a small constant $\epsilon_0 > 0$; Mömke and Svensson [MomkeS11] proposed a different approach yielding a $1.461$-approximation guarantee; Mucha [Mucha12] gave a tighter analysis of this algorithm, yielding a guarantee of $13/9$; and Sebő and Vygen [SeboV14] significantly developed the approach to give the currently best approximation guarantee of $7/5$. Similarly, for ATSP, it was only very recently that the restriction to unweighted graphs could be leveraged: the first constant approximation guarantee for unweighted graphs was given by Svensson [Svensson15]. In this paper we make progress towards the general problem by taking the logical next step and addressing a simple case left unresolved by [Svensson15]: graphs with two different edge weights.


There is an $O(1)$-approximation algorithm for ATSP on graphs with two different edge weights.

The paper [Svensson15] introduces an “easier” problem named Local-Connectivity ATSP, where one needs to find an Eulerian multiset of edges crossing only sets in a given partition rather than all possible sets (see next section for definitions). It is shown that an “$\alpha$-light” algorithm for this problem yields an $O(\alpha)$-factor approximation for ATSP. For unweighted graphs (and slightly more generally, for node-induced weight functions, where the weight of an edge $(u, v)$ equals the node weight of $u$) it is fairly easy to obtain a 3-light algorithm for Local-Connectivity ATSP; the difficult part in [Svensson15] is the black-box reduction of ATSP to this problem. Note that [Svensson15] easily gives an $O(w_{\max}/w_{\min})$-approximation algorithm in general, where $w_{\max}$ and $w_{\min}$ denote the largest and smallest edge weight, respectively. However, obtaining a constant factor approximation even for two different weights requires substantial further work.

In Local-Connectivity ATSP we need a lower bound function $lb$ on the vertices. The natural choice for node-induced weights is $lb_v = \sum_{e \in \delta^+(v)} w_e x_e$, the fractional weight that the Held-Karp solution $x$ spends on the edges leaving $v$. With this weight function, every vertex is able to “pay” for the incident edges in the Eulerian subgraph we are looking for. This choice of $lb$ does not seem to work for more general weight functions, and we need to define $lb$ more “globally”, using a new flow theorem for Eulerian graphs (Theorem 2). In Section 1.2, after the preliminaries, we give a more detailed overview of these techniques and the proof of the theorem. Our argument is somewhat technical, but it demonstrates the potential of the Local-Connectivity ATSP problem as a tool for attacking general ATSP.

Finally, let us remark that both STSP [PapYan93, BermanK06] and ATSP [Blaser04] have been studied in the case when all distances are either $1$ or $2$. That restriction is very different from our setting, as in those cases the input graph is complete. In particular, it is trivial to get a $2$-approximation algorithm there, whereas in our setting – where the input graph is not complete – a constant factor approximation guarantee already requires non-trivial algorithms. (In our setting, we can still think about the metric completion, but it will usually have more than two different edge weights.)

1.1 Notation and preliminaries

We consider an edge-weighted directed graph $G = (V, E, w)$ with $w : E \to \mathbb{R}_{\ge 0}$. For a vertex subset $S \subseteq V$ we let $\delta^+(S)$ and $\delta^-(S)$ denote the sets of outgoing and incoming edges, respectively. For two vertex subsets $S_1, S_2 \subseteq V$, we let $\delta(S_1, S_2) = \delta^+(S_1) \cap \delta^-(S_2)$ be the set of edges from $S_1$ to $S_2$. For a subset of edges $E' \subseteq E$, we use $w(E') = \sum_{e \in E'} w_e$ and, for a vector $x \in \mathbb{R}^E$, $x(E') = \sum_{e \in E'} x_e$. We also let $\mathrm{comp}(E')$ denote the set of weakly connected components of the graph $(V, E')$; the vertex set $V$ will always be clear from the context. For a directed graph $G'$ we use $V(G')$ to denote its vertex set and $E(G')$ the edge set. For brevity, we denote the singleton set $\{v\}$ by $v$ (e.g. $\delta^+(v) = \delta^+(\{v\})$). For a multiset $F$, we let $\mathbb{1}_F$ denote the indicator vector of $F$, which has a coordinate for each edge $e$ with value equal to the number of copies of $e$ in $F$. For the case of two edge weights, we use $0 < w_1 \le w_2$ to denote the two possible values, and partition $E = E_1 \cup E_2$ so that $w_e = w_1$ if $e \in E_1$ and $w_e = w_2$ if $e \in E_2$. We will refer to edges in $E_1$ and $E_2$ as cheap and expensive edges, respectively.

We define ATSP as the problem of finding a connected Eulerian subgraph of minimum weight. As already mentioned in the introduction, this definition is equivalent to that of visiting each city exactly once (in the metric completion) since we assume the triangle inequality. The formal definition is as follows.



Given: An edge-weighted (strongly connected) digraph $G = (V, E, w)$.

Find: A multisubset $F$ of $E$ of minimum total weight such that $(V, F)$ is Eulerian and connected.

Held-Karp Relaxation.

The Held-Karp relaxation has a variable $x_e$ for every edge $e$ in $G$. The intended meaning is that $x_e$ should equal the number of times $e$ is used in the solution. The relaxation is defined as follows:

$$\min \sum_{e \in E} w_e x_e \quad \text{s.t.} \quad x(\delta^+(v)) = x(\delta^-(v)) \ \ \forall v \in V, \qquad x(\delta^+(S)) \ge 1 \ \ \forall \emptyset \ne S \subsetneq V, \qquad x \ge 0.$$

The first set of constraints says that the in-degree should equal the out-degree for each vertex, i.e., the solution should be Eulerian. The second set of constraints enforces that the solution is connected; they are sometimes referred to as subtour elimination constraints. Finally, we remark that although the Held-Karp relaxation has exponentially many constraints, it is well-known that we can solve it in polynomial time either by using the ellipsoid method with a separation oracle or by formulating an equivalent compact (polynomial-size) linear program. We will use $x$ to denote an optimal solution of value $\mathrm{OPT}_{HK} = w(x)$, which is a lower bound on the value of an optimal solution to ATSP on $G$.
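To illustrate how the exponentially many subtour elimination constraints can be checked in polynomial time, here is a minimal separation-oracle sketch (our own illustration, not the paper's implementation; the encoding of $x$ as a dictionary is an assumption). It uses the standard fact that, when $x$ is Eulerian at every vertex, all constraints $x(\delta^+(S)) \ge 1$ hold iff the minimum cut from one fixed root to every other vertex is at least $1$:

```python
from collections import defaultdict, deque

def min_cut_value(n, cap, s, t):
    """Max-flow = min-cut via BFS augmenting paths (Edmonds-Karp),
    on float capacities given as a dict {(u, v): c} over vertices 0..n-1."""
    residual = defaultdict(float)
    for (u, v), c in cap.items():
        residual[(u, v)] += c
    total = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and residual[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:          # recover the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[e] for e in path)  # bottleneck capacity
        for (u, v) in path:
            residual[(u, v)] -= aug
            residual[(v, u)] += aug
        total += aug

def violated_subtour_cut(n, x, root=0):
    """Return a vertex t whose min cut from `root` is below 1 (witnessing
    a violated subtour constraint), or None if all constraints hold."""
    for t in range(n):
        if t != root and min_cut_value(n, x, root, t) < 1 - 1e-9:
            return t
    return None
```

For example, a fractional solution supported on one directed 4-cycle passes the check, while one supported on two disjoint 2-cycles fails it.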

Local-Connectivity ATSP.

The Local-Connectivity ATSP problem can be seen as a two-stage procedure. In the first stage, the input is an edge-weighted digraph $G = (V, E, w)$ and the output is a “lower bound” function $lb : V \to \mathbb{R}_{\ge 0}$ on the vertices such that $lb(V) = \mathrm{OPT}_{HK}$. In the second stage, the input is a partition of the vertices, and the output is an Eulerian multisubset of edges which crosses each set in the partition and where the ratio of weight to $lb$ of every connected component is as small as possible. We now give the formal description of the second stage, assuming the function $lb$ is already computed.

Local-Connectivity ATSP


An edge-weighted digraph $G = (V, E, w)$, a function $lb : V \to \mathbb{R}_{\ge 0}$ with $lb(V) = \mathrm{OPT}_{HK}$, and a partitioning $\mathcal{P}$ of the vertices.


An Eulerian multisubset $F$ of $E$ such that $\delta^+(U) \cap F \ne \emptyset$ for every $U \in \mathcal{P}$, minimizing $\max_{G' \in \mathrm{comp}(F)} w(G') / lb(G')$.

Here we used the notation that for a connected component $G'$ of $(V, F)$, $w(G') = \sum_{e \in E(G')} w_e$ (summation over the edges) and $lb(G') = \sum_{v \in V(G')} lb_v$ (summation over the vertices). We say that an algorithm for Local-Connectivity ATSP is $\alpha$-light on $G$ if it is guaranteed, for any partition, to find a solution $F$ such that for every component $G' \in \mathrm{comp}(F)$, $w(G') \le \alpha \cdot lb(G')$.

In [Svensson15], $lb$ is defined as $lb_v = \sum_{e \in \delta^+(v)} w_e x_e$; note that $lb(V) = w(x) = \mathrm{OPT}_{HK}$ in this case. We remark that we use the “$\alpha$-light” terminology to avoid any ambiguities with the concept of approximation algorithms (an $\alpha$-light algorithm does not compare its solution to an optimal solution to the given instance of Local-Connectivity ATSP).

Perhaps the main difficulty of ATSP is to satisfy the connectivity requirement, i.e., to select an Eulerian subset of edges which connects the whole graph. Local-Connectivity ATSP relaxes this condition – we only need to find an Eulerian set that crosses the cuts defined by the partition. This makes it intuitively an “easier” problem than ATSP. Indeed, an $\alpha$-approximation algorithm for ATSP (with respect to the Held-Karp relaxation) is trivially an $\alpha$-light algorithm for Local-Connectivity ATSP for an arbitrary function $lb$ with $lb(V) = \mathrm{OPT}_{HK}$: just return the same Eulerian subset $F$ as the algorithm for ATSP; since the set connects the graph, we have $w(F) \le \alpha \cdot \mathrm{OPT}_{HK} = \alpha \cdot lb(V)$. Perhaps more surprisingly, the main technical theorem of [Svensson15] shows that the two problems are equivalent up to small constant factors.

Theorem ([Svensson15])

Let $\mathcal{A}$ be an algorithm for Local-Connectivity ATSP. Consider an ATSP instance $G$, and let $\mathrm{OPT}_{HK}$ denote the optimum value of its Held-Karp relaxation. If $\mathcal{A}$ is $\alpha$-light on $G$, then there exists a tour of $G$ with value at most $O(\alpha) \cdot \mathrm{OPT}_{HK}$. Moreover, for any $\varepsilon > 0$, a tour of value at most $(O(\alpha) + \varepsilon) \cdot \mathrm{OPT}_{HK}$ can be found in time polynomial in the number of vertices, in $1/\varepsilon$, and in the running time of $\mathcal{A}$.

In other words, the above theorem says that in order to approximate an ATSP instance $G$, it is sufficient to devise a polynomial-time algorithm to calculate a lower bound function $lb$ and a polynomial-time algorithm for Local-Connectivity ATSP that is $O(1)$-light on $G$ with respect to this function. Our main result is proved using this framework.

1.2 Technical overview

Singleton partition.

Let us start by outlining the fundamental ideas of our algorithm and comparing it to [Svensson15] for the special case of Local-Connectivity ATSP when all partition classes are singletons. For unit weights, the choice $lb_v = x(\delta^+(v))$ in [Svensson15] is a natural one: intuitively, every node is able to pay for its outgoing edges. We can thus immediately give an algorithm for this case: just select an arbitrary integral solution $F$ to the circulation problem that routes at least $1$ and at most $\lceil x(\delta^+(v)) \rceil$ units through every vertex $v$. Then for any $v$ we have $|F \cap \delta^+(v)| \le \lceil x(\delta^+(v)) \rceil \le 2\, x(\delta^+(v))$ (using $x(\delta^+(v)) \ge 1$) and hence $w(F \cap \delta^+(v)) \le 2\, lb_v$, showing that $F$ is a 2-light solution.

The same choice of $lb$ does not seem to work in the presence of two different edge costs. Consider a case when every expensive edge carries only a small fractional amount of flow. Then $lb_v$ can be much smaller than the expensive edge cost $w_2$, and thus the vertex $v$ would not be able to “afford” even a single outgoing expensive edge. To resolve this problem, we bundle small fractional amounts of expensive flow, channelling them to reach a small set of terminals. This is achieved via Theorem 2, a flow result which might be of independent interest. It shows that within the fractional Held-Karp solution $x$, we can send the flow from an arbitrary edge set to a sink set $T$ whose size is proportional to the total flow value; in fact, $T$ can be any set minimal for inclusion such that it can receive the total flow. We apply this theorem for $E_2$, the set of expensive edges; let $f$ be the flow from $E_2$ to $T$, and call elements of $T$ terminals. Now, whenever an expensive edge is used, we will “force” it to follow $f$ to a terminal in $T$, where it can be paid for. Enforcement is technically done by splitting the vertices into two copies, one carrying the flow $f$ and the other the rest. Thus we obtain the split graph $G^{sp}$ and split fractional optimal solution $x^{sp}$.

The design of the split graph is such that every walk in it which starts with an expensive edge must proceed through cheap edges until it reaches a terminal before visiting another expensive edge. In our terminology, expensive edges create “debt”, which must be paid off at a terminal. Starting from an expensive edge, the debt must be carried until a terminal is reached, and no further debt can be taken in the meantime. The bound on the number of terminals guarantees that we can assign a lower bound function $lb$ with $lb(V) = \mathrm{OPT}_{HK}$ such that (up to a constant factor) cheap edges are paid for locally, at their heads, whereas expensive edges are paid for at the terminals they are routed to. Such a splitting easily solves Local-Connectivity ATSP for the singleton partition: find an arbitrary integral circulation in the split graph with a suitable upper bound on every node, and a lower bound of $1$ on whichever copy of $v$ transmits more flow. Note that $x^{sp}$ is a feasible fractional solution to this problem. We map the circulation back to an integral circulation in the original graph by merging the split nodes, thus obtaining a constant-light solution.
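The debt-tracking mechanism can be illustrated by the following simplified sketch (ours, ignoring the flow-based capacities $x^{sp}$ of the actual construction; `carries_flow`, marking the cheap edges routed by $f$, and the copy labels are hypothetical):

```python
def build_split_graph(cheap, expensive, carries_flow, terminals):
    """Each vertex v gets a "no debt" copy (v, 0) and an "in debt" copy
    (v, 1).  Expensive edges move into debt, designated cheap edges carry
    debt toward terminals, and only at a terminal is debt discharged."""
    split = []
    for (u, v) in cheap:
        if (u, v) in carries_flow:
            split.append(((u, 1), (v, 1)))  # debt travels along routed cheap edges
        split.append(((u, 0), (v, 0)))      # debt-free use of a cheap edge
    for (u, v) in expensive:
        split.append(((u, 0), (v, 1)))      # an expensive edge creates debt
    for t in terminals:
        split.append(((t, 1), (t, 0)))      # debt is paid off at a terminal
    return split
```

By construction, any walk that uses an expensive edge is stuck in the copy-1 layer, which contains no tails of expensive edges, until it passes a terminal; so it cannot pick up a second debt in between.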

Arbitrary partitions.

Let us now turn to the general case of Local-Connectivity ATSP, where the input is an arbitrary partition $\mathcal{P}$ of the vertices. For unit weights this is solved in [Svensson15] via an integer circulation problem on a modified graph. Namely, an auxiliary node $A_U$ is added to represent each partition class $U \in \mathcal{P}$, and one unit of in- and outgoing flow of $U$ is rerouted through $A_U$. In the circulation problem, we require exactly one in- and one outgoing edge incident to $A_U$ to be selected. When we map the solution back to the original graph, there will be one incoming and one outgoing arc from every set $U$ (thus satisfying the connectivity requirement) whose endpoints inside $U$ may violate the Eulerian condition. In [Svensson15] every $G[U]$ is assumed to be strongly connected, and therefore we can “patch up” the circulation by connecting the loose endpoints by an arbitrary path inside $U$. This argument easily gives a 3-light solution.

Let us observe that the strong connectivity assumption is in fact not needed for the result in [Svensson15]. Indeed, given a partition class $U$ whose induced subgraph is not strongly connected, consider its decomposition into strongly connected (sub)components, and pick a component $U' \subseteq U$ which is a sink (i.e. it has no edges outgoing to $U \setminus U'$). We proceed by rerouting one unit of flow through a new auxiliary vertex just as in that algorithm, but we do this for $U'$ instead of $U$. This guarantees that $U'$ has at least one outgoing edge in our solution, and that edge must leave $U$ as well.
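Finding such a sink strongly connected component is a standard computation; a possible sketch (our illustration, with hypothetical names) uses Kosaraju-style two passes:

```python
def sink_scc(vertices, edges):
    """Return a sink strongly connected component of a digraph (an SCC
    with no edges leaving it).  A DFS on the reversed graph orders the
    vertices so that the last-finished vertex lies in a sink SCC of the
    original graph; a forward DFS from it then collects that SCC."""
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Pass 1: iterative DFS on the reversed graph, recording finish order.
    order, seen = [], set()
    for root in vertices:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(radj[root]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(radj[nxt])))

    # Pass 2: a forward DFS from the last-finished vertex stays inside
    # its SCC, because a sink component has no outgoing edges.
    start, comp, seen2 = order[-1], set(), {order[-1]}
    stack = [start]
    while stack:
        u = stack.pop()
        comp.add(u)
        for w in adj[u]:
            if w not in seen2:
                seen2.add(w)
                stack.append(w)
    return comp
```

Applied to a class $U$, every vertex of the returned component can reach every other vertex of it, and any edge leaving it also leaves $U$, which is exactly the property used above.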

Our result for two different edge weights takes this observation as the starting point, but the argument is much more complicated. We will find an integer circulation in a graph based on the split graph $G^{sp}$, and for every $U \in \mathcal{P}$, there will be an auxiliary vertex representing a certain subset of $U$. These sets will be obtained as sink components in certain auxiliary graphs we construct inside each $U$. This construction is presented in Section 3.2; we provide a roadmap to the construction at the beginning of that section.

2 The Flow Theorem

In this section we prove our main flow decomposition result. As indicated in Section 1.2, we will use it to channel the flow from the expensive edges to a small set of terminals $T$ (where $|T| = O(x(E_2))$). We will use the theorem stated below by moving the tail of every edge in $E_2$ to a new vertex $s$. If $x$ is a feasible Held-Karp solution, then its constraints guarantee condition (1). The details of the reduction are given in Lemma 3.1.


Let $G = (V, E)$ be a directed graph, let $x \in \mathbb{R}^E_{\ge 0}$ be a nonnegative capacity vector, and let $s \in V$ be a source node with no incoming edges, i.e., $\delta^-(s) = \emptyset$. Assume that for all $\emptyset \ne S \subseteq V \setminus \{s\}$ we have

$x(\delta^-(S)) \ge 1/2. \qquad (1)$

Consider a set $T \subseteq V \setminus \{s\}$ such that there exists a flow of value $F = x(\delta^+(s))$ from the source $s$ to the sink set $T$, and $T$ is minimal subject to this property. (That is, the maximum flow value from $s$ to any proper subset of $T$ is smaller than $F$.) Then $|T| = O(F)$.

The proof of this theorem can be skipped on first reading, as the algorithm in Section 3 only uses it in a black-box manner.


Fix a minimal set $T$ and denote $t = |T|$. Our goal is to prove that $t = O(F)$. We know that there exists a flow of value $F$ from $s$ to $T$. For any such flow $f$ we define its imbalance sequence to be the sequence of values $f(\delta^-(v)) - f(\delta^+(v))$ for all $v \in T$, sorted in non-increasing order. We select the flow $f$ which maximizes the imbalance sequence (lexicographically). We write $T = \{v_1, \ldots, v_t\}$ so that the imbalances appear in this order; denote $b_i = f(\delta^-(v_i)) - f(\delta^+(v_i))$ for brevity. By minimality of $T$ we have $b_i > 0$ for all $i$. The following is our main technical lemma.


Let $t_0$ be the number of $i$ with $b_i < 1/4$, i.e., the number of terminals with small imbalance. Then we have $t_0 = O\big(\sum_{i :\, b_i \ge 1/4} b_i\big)$.

In other words, the number of terminals with small imbalance is not much more than the sum of large imbalances.

Assuming this lemma, the main theorem follows immediately, since $\sum_i b_i = F$ and hence the number of large imbalances is at most $4F$, i.e., $t \le t_0 + 4F = O(F)$.

The remainder of this section is devoted to the proof of the technical lemma. Let us first give an outline. We analyze the residual capacity (with respect to ) of certain cuts that must be present due to the lexicographic property. First of all, there must be a saturated cut (that is, one of residual in-capacity) containing all large terminals (i.e., those with imbalance at least ) but no small ones (Claim 2). Next, consider an arbitrary small terminal . Also by the maximality property, it is not possible to increase the value of to by rerouting flow from other small terminals to . Hence there must be a cut , disjoint from , which contains as the only terminal and has residual in-capacity less than in (Claim 2). As an illustration of the argument, let us assume that these sets are pairwise disjoint. It follows from (1) that the residual in-capacity of is at least (Claim 2). Hence every set must receive units of its residual in-capacity from . On the other hand, (1) upper-bounds the residual out-capacity of by (Claim 2). These together give a bound . Recall however that we assumed that all sets are disjoint. Since these sets may in fact overlap, the proof needs to be more careful: instead of sets , we argue with the sets (nonempty as containing ), and the union of pairwise intersections ; thus instead of , we get a slightly worse constant .

Proof (of Lemma 2)

First note that the claim is trivial if $t_0 = 0$, so assume $t_0 \ge 1$. For an arc $e = (u, v)$, we let $\overleftarrow{e} = (v, u)$ denote the reverse arc. We define the residual graph $G_f$ with arc set $E \cup \{\overleftarrow{e} : e \in E\}$. The residual capacity for the first set of arcs is defined as $x_e - f_e$, and for the second set as $f_e$. For any set $S$ and a disjoint set $R$, we write $\mathrm{in}(S)$, $\mathrm{out}(S)$ and $\mathrm{in}_R(S)$ for the total residual capacity of the arcs entering $S$, leaving $S$, and entering $S$ from $R$, respectively, in the residual graph of $f$.

The next two claims derive simple bounds from (1) on the residual in- and out-capacities of cuts.


If , then .


The equality is by flow conservation. The inequality is by (1). The claim follows.


Consider such that for some . Then .


Here we used (1) and the flow conservation (as the single sink contained in is ).

The next claim shows that the large terminals can be separated from the small ones by a cut of small residual in-capacity. This follows from the lexicographically maximal choice of $f$, and is not a particular property of the threshold $1/4$ (the statement remains true if we replace $1/4$ by any other positive value).


There exists a set with (i.e., contains exactly the large terminals) such that , and .


If , then we can choose . So assume . Consider the maximum flow in the residual graph from the source set to the sink set . If its value is positive, then there exists a path in from to for some and (without loss of generality it contains no other terminals). Set . Then the - flow has a lexicographically larger imbalance sequence than because is increased without decreasing any other of the large imbalances, a contradiction. So there must be a cut with and .444Note that since contains a path from to . The second part follows by the first via Claim 2.


For any (i.e., is a small terminal) there exists a set such that and .


If , then we can choose ; we have since all arcs leaving the source are saturated. So assume . Consider the maximum flow from the source set to the sink in the graph .

If its value is at least , then we will get a contradiction by increasing the imbalance of to at least without changing any of the large imbalances. Namely, let be a flow from to of value . Consider the vector . There are two possible cases:

  • If is still an - flow, i.e., if for all we have , then it has a lexicographically larger imbalance sequence than , a contradiction.

  • Otherwise pick the maximum such that is still an - flow, i.e., for all we have , with equality for some . This means that is an - flow where at least one terminal has zero imbalance, i.e., it can be removed from the set , contradicting its minimality.

So there must be a cut such that and

The claim follows by , which holds since all edges in are saturated in .

The argument uses the bound and the fact that all the ’s must receive a large part of their residual in-degrees from . Since the sets overlap, we have to take their intersections into account. Let us therefore define

as the set of vertices contained in at least two sets . Let , , and for each .


For each we have .


Note that , and thus by Claim 2. From this we can see

where the second inequality follows because an edge entering either enters from outside of , or enters from , or enters from .

For the residual in-degree of the set , we apply the trivial bound

The last estimate is by the choice of the sets in Claim 2. For the residual out-degree, we get

using Claim 2. Applying Claim 2 to and noting that gives . Putting Claim 2 and the above two bounds together, we conclude that


Lemma 2 now follows.

3 Algorithm for Local-Connectivity ATSP

We prove our main result in this section. Our claim for ATSP follows from solving Local-Connectivity ATSP:


There is a polynomial-time $O(1)$-light algorithm for Local-Connectivity ATSP on graphs with two edge weights.

Together with Theorem 1.1, this implies our main result:


For any graph with two edge weights, the integrality gap of its Held-Karp relaxation is $O(1)$. Moreover, we can find an $O(1)$-approximate tour in polynomial time.

The constant factor follows by combining the lightness guarantee of Theorem 3 with Theorem 1.1, in which we select $\varepsilon$ to be a sufficiently small constant. Our proof of Theorem 3 proceeds as outlined in Section 1.2. In Section 3.1, we give an algorithm for calculating $lb$ and define the split graph which will be central for finding light solutions. In Section 3.2, we then show how to use these concepts to solve Local-Connectivity ATSP for any given partitioning of the vertices.

Recall that the edges are partitioned into the set $E_1$ of cheap edges and the set $E_2$ of expensive edges. Set $x$ to be an optimal solution to the Held-Karp relaxation. We start by noting that the problem is easy if $x$ assigns very small total fractional value to expensive edges. In that case, we can easily reduce the problem to the unweighted case which was solved in [Svensson15].


There is a polynomial-time $O(1)$-light algorithm for Local-Connectivity ATSP for graphs where $x(E_2) < 1/2$.


If $E_2 = \emptyset$, then just apply the standard $3$-light polynomial-time algorithm for unweighted graphs [Svensson15]. So suppose that $0 < x(E_2) < 1/2$. Then clearly the graph $(V, E_1)$ is strongly connected: every cut $\emptyset \ne S \subsetneq V$ satisfies $x(\delta^+(S)) \ge 1$, of which less than $1/2$ can come from expensive edges. Thus every pair of vertices is connected by a directed path of cheap edges (of length at most $n - 1$), and each expensive edge $e = (u, v)$ can be replaced by such a cheap $u$-$v$-path $P_e$. Let us obtain a new circulation $\tilde{x}$ from $x$ by replacing all expensive edges in this way, i.e., rerouting the $x_e$ units of flow of every expensive edge $e$ along $P_e$.

To bound the cost of $\tilde{x}$, note that $w(P_e) \le (n - 1) w_1$ for every expensive edge $e$, while $w(x) \ge n w_1$ (every vertex has fractional out-degree at least $1$), and thus

$w(\tilde{x}) \le w(x) + x(E_2) \cdot (n - 1) w_1 \le w(x) + \tfrac{1}{2}\, n w_1 \le \tfrac{3}{2}\, w(x).$

By construction, $\tilde{x}$ is a feasible solution for the Held-Karp relaxation which uses cheap edges only. Therefore we can use it in the standard $3$-light polynomial-time algorithm for the unweighted graph $(V, E_1)$. Together with the bound $w(\tilde{x}) \le \tfrac{3}{2}\, w(x)$ this gives an $O(1)$-light algorithm.
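The rerouting step in the proof above can be sketched as follows (our illustration; it assumes, as in the proof, that the cheap subgraph is strongly connected, and the dictionary encoding of $x$ is hypothetical):

```python
from collections import deque, defaultdict

def reroute_expensive(x, cheap, expensive):
    """Replace each expensive edge (u, v) by a shortest path of cheap
    edges and push its fractional value x_e along that path.  Flow
    conservation at every vertex is preserved, so the result is again
    a circulation, now supported on cheap edges only."""
    adj = defaultdict(list)
    for (u, v) in cheap:
        adj[u].append(v)

    def cheap_path(s, t):
        # BFS for a shortest s-t path using cheap edges only
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                break
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    q.append(w)
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return path

    new_x = defaultdict(float)
    for (u, v) in cheap:
        new_x[(u, v)] += x.get((u, v), 0.0)
    for (u, v) in expensive:
        for edge in cheap_path(u, v):
            new_x[edge] += x.get((u, v), 0.0)
    return dict(new_x)
```

Since each rerouted path has at most $n - 1$ cheap edges, the cost increase is at most $x(E_2) \cdot (n - 1) w_1$, matching the bound in the proof.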

For the rest of this section, we thus assume $x(E_2) \ge 1/2$. Our objective is to define a function $lb$ with $lb(V) = \mathrm{OPT}_{HK}$ and then show how to, given a partition $\mathcal{P}$, find an Eulerian set of edges which crosses all cuts defined by $\mathcal{P}$ and is $O(1)$-light with respect to the defined function.

3.1 Calculating $lb$ and constructing the split graph

First, we use our flow decomposition technique to find a small set $T$ of terminals such that it is possible to route a certain flow $f$ from the endpoints of all expensive edges to $T$. Next, we use $T$ and $f$ to calculate the function $lb$ and to construct a split graph $G^{sp}$, where each vertex of $G$ is split into two.

Finding terminals and the flow $f$.

We use Theorem 2 to obtain a small-enough set $T$ of terminals and a flow $f$ which takes all flow on expensive edges to this set $T$. More precisely, we have the following corollary of Theorem 2.


There exist a vertex set $T \subseteq V$ and a flow $f$ from the expensive edges to the sink set $T$ of value $x(E_2)$ such that:

  • $|T| = O(x(E_2))$,

  • $f \le x$,

  • $f$ saturates all expensive edges, i.e., $f_e = x_e$ for all $e \in E_2$,

  • for each $t \in T$, $f(\delta^+(t)) = 0$ and $f(\delta^-(t)) > 0$.

Moreover, and can be computed in polynomial time.


We construct $G'$ to be $G$ with a new vertex $s$, where the tail of every expensive edge is redirected to be $s$. Formally, $V(G') = V \cup \{s\}$ and $E(G') = E_1 \cup \{(s, v) : (u, v) \in E_2\}$. The capacity vector $x'$ is obtained from $x$ by just following this redirection, i.e., for any edge $e' \in E(G')$ we define $x'_{e'} = x_e$, where $e$ is taken to be the preimage of $e'$ in $E$.

Clearly $x'(\delta^+(s)) = x(E_2)$, and $s$ has no incoming edges. To see that condition (1) of Theorem 2 is satisfied, recall that for every $\emptyset \ne S \subsetneq V$ we have $x(\delta^-(S)) \ge 1$; redirecting the tail of some edges to $s$ can only reduce the out-degree or increase the in-degree of $S$, i.e., $x'(\delta^-(S)) \ge x(\delta^-(S)) \ge 1$. This gives condition (1) for all sets $\emptyset \ne S \subsetneq V$; for $S = V$, note that $x'(\delta^-(V)) = x(E_2) \ge 1/2$ since we assumed that $x(E_2) \ge 1/2$.

From Theorem 2 we obtain a vertex set $T$ with $|T| = O(x(E_2))$ and a flow $f'$ from $s$ to $T$ of value $x(E_2)$ with $f' \le x'$. We can assume $f'(\delta^+(t)) = 0$ for all $t \in T$: in a path-cycle decomposition of $f'$ we can remove all cycles and terminate every path at the first terminal it reaches. The flow $f$ is obtained by mapping $f'$ back to $G$, i.e., taking each $f_e$ to be $f'_{e'}$, where $e'$ is the image of $e$. Note that $f'$ must saturate all outgoing edges of $s$, so $f$ saturates all expensive edges. For the last condition, the part $f(\delta^+(t)) = 0$ is implied by the same property of $f'$, and for the part $f(\delta^-(t)) > 0$, note that if $f(\delta^-(t)) = 0$, then we could have removed $t$ from $T$.

Note that such a set $T$ can be found in polynomial time: starting from $T = V$ (for which the required flow exists, as witnessed by a path decomposition of $x'$), we remove vertices from $T$ one by one until we obtain a minimal set such that there still exists a flow of value $x(E_2)$ from $s$ to $T$.
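This greedy shrinking procedure can be sketched as follows (our illustration; `max_flow` is a plain Edmonds-Karp, flow into the set $T$ is modeled with a super-sink, and the instance encoding is hypothetical):

```python
from collections import defaultdict, deque

def max_flow(nodes, cap, s, t):
    """Edmonds-Karp max flow on float capacities {(u, v): c}."""
    res = defaultdict(float)
    for e, c in cap.items():
        res[e] += c
    value = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        value += aug

def minimal_sink_set(nodes, cap, s, F):
    """Greedily shrink T, starting from all of V minus the source, while
    a flow of value F from s into T still exists.  The result is minimal:
    removing any further vertex drops the achievable flow below F."""
    T = [v for v in nodes if v != s]
    sink = max(nodes) + 1  # super-sink receiving the flow entering T
    def feasible(T):
        c = dict(cap)
        for t in T:
            c[(t, sink)] = float('inf')
        return max_flow(nodes + [sink], c, s, sink) >= F - 1e-9
    for v in list(T):
        smaller = [u for u in T if u != v]
        if smaller and feasible(smaller):
            T = smaller
    return T
```

Each step is one max-flow computation, so the whole procedure runs in polynomial time, matching the claim above.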

Definition of $lb$.

We set to be a scaled-down variant of which is defined as follows:

The definition of is now simply . The scaling-down is done so as to satisfy (see Lemma 3.1). Clearly we have for all and for terminals .

The intuition behind this setting of $lb$ is that we want to pay for each expensive edge at the terminal which the flow $f$ “assigns” to it. Indeed, in the split graph we will reroute flow (using $f$) so as to ensure that any path which traverses an expensive edge must then visit such a terminal to offset the cost of that edge. As for the total cost of $lb$, note that if we removed the rounding from its definition, then we would get $lb(V) = O(w(x))$ (here we used that $f$ is of value $x(E_2)$). So, similarly to the light algorithm for unweighted metrics in [Svensson15], the key is to argue that rounding does not increase this cost too much. For this, we will take advantage of the small size of $T$. Details are found in the proof of the following lemma.



The bound follows from elementary calculations:

(recall that by Lemma 3.1).

Construction of the split graph.

The next step is to reroute flow so as to ensure that all expensive edges are “paid for” by the $lb$ values at terminals. To this end, we define a new split graph $G^{sp}$ and a split circulation $x^{sp}$ on it (see Fig. 1 for an example).


The split graph $G^{sp}$ is defined as follows. For every $v \in V$ we create two copies $v^0$ and $v^1$ in $G^{sp}$. For every cheap edge $e = (u, v)$:

  • if $f_e < x_e$, create an edge $(u^0, v^0)$ in $G^{sp}$ with $x^{sp}_{(u^0, v^0)} = x_e - f_e$,

  • if $f_e > 0$, create an edge $(u^1, v^1)$ in $G^{sp}$ with $x^{sp}_{(u^1, v^1)} = f_e$.

For every expensive edge $e = (u, v)$ we create one edge $(u^0, v^1)$ in $G^{sp}$ with $x^{sp}_{(u^0, v^1)} = x_e$. Finally, for each