# Bayesian Truthful *Mechanisms* for Job Scheduling from Bi-criterion Approximation *Algorithms*

## Abstract

We provide polynomial-time approximately optimal Bayesian mechanisms for makespan minimization on unrelated machines as well as for max-min fair allocation of indivisible goods, with approximation factors matching those of the best known polynomial-time *algorithms* (for max-min fairness, the latter claim holds for certain ratios of the number of goods to the number of people). Our mechanisms are obtained by establishing a polynomial-time approximation-sensitive reduction from the problem of designing approximately optimal *mechanisms* for an arbitrary objective to that of designing bi-criterion approximation *algorithms* for the same objective plus a linear allocation cost term. Our reduction is itself enabled by extending the celebrated “equivalence of separation and optimization” [26] to also accommodate bi-criterion approximations. Moreover, to apply the reduction to the specific problems of makespan minimization and max-min fairness, we develop polynomial-time bi-criterion approximation algorithms for makespan minimization with costs and max-min fairness with costs, adapting the algorithms of [44], [9], and [3] to the type of bi-criterion approximation required by the reduction.

## 1 Introduction

Job scheduling is a fundamental problem that has been intensively studied in operations research and computer science in several different flavors. The specific one that we consider in this paper, called *scheduling unrelated machines*, pertains to the allocation of indivisible jobs to machines so as to minimize the time needed for the last job to be completed, called the *makespan* of the schedule. The input is the processing time $p_{ij}$ of each machine $i$ for each job $j$. The problem is NP-hard to $(3/2-\epsilon)$-approximate, for any $\epsilon > 0$, but a polynomial-time $2$-approximation algorithm is known [35]. An overview of algorithmic work on this problem can be found in [30].
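As a toy illustration (ours, not the paper's), the makespan of a fixed schedule is just the maximum machine load under the processing times:

```python
# Toy sketch: computing the makespan of an assignment of jobs to
# unrelated machines. p[i][j] is the processing time of job j on
# machine i; assign[j] is the machine receiving job j.

def makespan(p, assign):
    loads = [0.0] * len(p)
    for j, i in enumerate(assign):
        loads[i] += p[i][j]
    return max(loads)

# Two machines, three jobs.
p = [[1.0, 2.0, 4.0],
     [3.0, 1.0, 2.0]]
print(makespan(p, [0, 1, 1]))  # machine 0 has load 1, machine 1 has load 3 -> 3.0
```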

Starting with the seminal work of Nisan and Ronen [41], scheduling unrelated machines has also become paradigmatic for investigating the relation between the complexity of mechanism and algorithm design. Mechanism design can be viewed as the task of optimizing an objective over “strategic inputs.” In comparison to algorithm design, where the inputs are known, in mechanism design the inputs are owned by rational agents who must be incentivized to share enough information about their input so that the desired objective can be optimized. The question raised by [41] is how much this extra challenge degrades our ability to optimize objectives:

*How much more difficult is mechanism design for a certain objective compared to algorithm design for that same objective?*

In the context of scheduling unrelated machines, suppose that the machines are rational agents who know their own processing times for the jobs, but want to minimize the sum of processing times of the jobs assigned to them minus the payment made to them by the mechanism. If the machines are rational, is it still possible to (approximately) minimize makespan?

Indeed, there are two questions pertaining to the relation of algorithm and mechanism design that are important to answer. The first compares the performance of the optimal mechanism to that of the optimal algorithm. In our setting, the question is whether there are mechanisms whose makespan is (approximately) optimal with respect to the real $p_{ij}$'s, which (at least a priori) are known only to the machines. Nisan and Ronen show that the classical VCG mechanism achieves a factor $n$ approximation to the optimal makespan, where $n$ is the number of machines [41], but since their work no constant factor approximation has been obtained. We overview known upper and lower bounds in Section ?.

The second question pertaining to the relation of algorithm and mechanism design is of computational nature. The question is whether polynomial-time (approximately) optimal mechanisms exist for objectives for which polynomial-time (approximately) optimal algorithms exist. In our context, there exist polynomial-time algorithms whose makespan is approximately optimal with respect to the optimal makespan of any feasible schedule [35], so the question is whether there exist polynomial-time mechanisms whose makespan is approximately optimal with respect to that of any mechanism. This is the question that we study in this paper.

Before proceeding, it is worth mentioning that (outside of makespan minimization) this question has been intensively studied, and the results are discouraging. In particular, a sequence of recent results [42] has identified welfare maximization problems for which polynomial-time constant factor approximation algorithms exist, but where no polynomial-time mechanism is better than a polynomial-factor approximation, subject to well-believed complexity-theoretic assumptions. At the same time, we have also witnessed a recent surge in the study of mechanisms in Bayesian settings, where the participants of the mechanism (in our case, machines) have types (in our case, processing times for jobs) drawn from a prior distribution that is common knowledge. The existence of priors has been shown [29] to sidestep several intractability results, including the ones for welfare maximization referenced above. In view of this experience, it is natural to ask:

*Are there approximately optimal, computationally efficient mechanisms for makespan minimization in Bayesian settings?*

We provide a positive answer to this question, namely (see Theorem ? for a formal statement)

In particular, the approximation factor achieved by our mechanism exactly matches the best known approximation factor achieved by polynomial-time algorithms [35]. In fact, our proof establishes a polynomial-time, approximation-sensitive, black-box reduction from the problem of designing a mechanism for makespan minimization to the problem of designing a bi-criterion approximation algorithm for the generalized assignment problem [44]. We explain our reduction and the type of bi-criterion approximation that is required in Section 1.1. We discuss prior work on mechanisms for makespan minimization in Section ?, noting here that the best known approximation factors prior to our work were polynomial, in general.

A problem related to makespan minimization is that of *max-min fair allocation of indivisible goods*, abbreviated *max-min fairness*. In the language of job scheduling, this can be described as looking for an assignment of jobs to machines that maximizes the minimum load—rather than minimizing the maximum load, which is the goal in makespan minimization. While the two problems are related, the best known polynomial-time approximation algorithms for max-min fairness achieve factors that are polynomial in the number of jobs or machines. We overview algorithmic work on the problem in Section ?, noting here that there are several, mutually undominated approximation algorithms whose approximation guarantees have different dependences on the number of jobs, the number of machines, and other parameters of the problem. Our contribution here is to obtain polynomial-time Bayesian mechanisms matching the approximation factors of some of those algorithms, namely (see Theorem ? for a formal statement)

In particular, our approximation guarantees match those of the approximation algorithms of [3] and [9], which both lie on the Pareto boundary of what is achievable by polynomial-time algorithms. Our contribution here, too, can be viewed as bringing mechanism design up to speed with algorithm design for the important objective of max-min fairness. Our proof is enabled by a polynomial-time, approximation-sensitive, black-box reduction from mechanism design for max-min fairness to the design of bi-criterion approximation algorithms *for max-min fairness with allocation costs*, for which we recover approximation guarantees matching those of [3] in Section ?.

Our mechanism-to-algorithm reduction, which enables Theorems ? and ?, is discussed next.

### 1.1 Black-Box Reductions in Mechanism Design

A natural approach towards Theorems ? and ? is establishing a polynomial-time reduction from (approximately) optimizing over mechanisms to (approximately) optimizing over algorithms. This approach has already proven fruitful for welfare maximization. Indeed, recent work establishes such a reduction for welfare maximization in Bayesian settings [29]. Roughly speaking, it is shown that black-box access to an $\alpha$-approximation algorithm for an arbitrary welfare maximization problem can be leveraged to obtain an $\alpha$-approximately optimal mechanism for the same welfare maximization problem.

Unfortunately, recent work has ruled out such a black-box reduction for makespan minimization [18]. This impossibility result motivated recent work by the authors, where it is shown that adding a linear allocation cost term to the algorithmic objective can bypass this impossibility [12]. Specifically, it is shown in [15] that finding an ($\alpha$-approximately) optimal mechanism for an arbitrary objective can be reduced to polynomially many black-box calls to an ($\alpha$-approximately) optimal algorithm for the same objective, perturbed by an additive allocation cost term.^{1}

On the other hand, adding a (possibly negative) allocation cost term may turn an objective that can be (approximately) optimized in polynomial time into one that cannot be optimized to within any finite factor. This is precisely what happens if we try to carry out the reduction of [15] for makespan minimization or max-min fairness with indivisible goods. More precisely:

- To find a polynomial-time $\alpha$-approximately optimal mechanism for makespan minimization, the reduction of [15] requires a polynomial-time $\alpha$-approximately optimal algorithm for the problem of *scheduling unrelated machines with costs*. This is similar to scheduling unrelated machines, except that now it also costs $c_{ij}$ (which may be positive, negative, or zero) to assign job $j$ to machine $i$, and we are looking for an allocation $x$ of jobs to machines that minimizes $M(x) + \sum_{i,j} c_{ij} x_{ij}$, where $M(x)$ is the makespan of the allocation $x$ and $x_{ij}$ indicates whether job $j$ is assigned to machine $i$. In words, we want to find a schedule that minimizes the sum of the makespan and the cost of the allocation. However, it is easy to see that it is NP-hard to optimize this objective to within any finite factor, even when restricted to instances whose optimum is guaranteed to be positive.^{2}

- Similarly, to find a polynomial-time $\alpha$-approximately optimal mechanism for max-min fairness, the reduction of [15] requires a polynomial-time $\alpha$-approximately optimal algorithm for the problem of *max-min fairness with allocation costs*. In the notation of the previous bullet, we are looking for an allocation $x$ of jobs to machines that maximizes $F(x) + \sum_{i,j} c_{ij} x_{ij}$, where $F(x)$ is the load of the least loaded machine under allocation $x$. Again, it is easy to see that it is NP-hard to optimize this objective to within any finite factor.^{3}
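On small instances, the modified objective can be evaluated by brute force; the following sketch (illustrative only, with hypothetical toy data) makes the "makespan plus cost" objective concrete:

```python
from itertools import product

# Toy sketch of the "scheduling unrelated machines with costs" objective.
# p[i][j] is the processing time and c[i][j] the (possibly negative)
# cost of assigning job j to machine i; assign[j] is job j's machine.

def objective(p, c, assign):
    loads = [0.0] * len(p)
    cost = 0.0
    for j, i in enumerate(assign):
        loads[i] += p[i][j]
        cost += c[i][j]
    return max(loads) + cost  # makespan plus total allocation cost

def brute_force_opt(p, c):
    m, n = len(p), len(p[0])
    return min(objective(p, c, a) for a in product(range(m), repeat=n))

p = [[1.0, 2.0], [2.0, 1.0]]
c = [[0.0, -1.0], [0.0, 0.0]]
print(brute_force_opt(p, c))  # the best schedule trades makespan against cost
```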

#### A single-criterion to bi-criterion approximation-sensitive reduction

The inapproximability results identified above motivate us to develop a novel reduction that is more robust to adding the allocation cost term to the mechanism design objective. We expect our new reduction to reach a much broader family of mechanism design objectives; indeed, as a corollary, we obtain Theorems ? and ? for the important objectives of makespan and max-min fairness, where the reduction of [15] fails.

Our new approach is based on the concept of $(\alpha,\beta)$-approximation of objectives modified by allocation costs, defined in Section ?. Instead of presenting the concept in full generality here, let us describe it in the context of the makespan minimization objective and the resulting problem of scheduling unrelated machines with costs. For $\alpha, \beta \ge 1$, we will say that an allocation $x$ of jobs to machines is an $(\alpha,\beta)$-approximation to a scheduling unrelated machines with costs instance iff

$$\frac{M(x)}{\beta} + \sum_{i,j} c_{ij} x_{ij} \;\le\; \alpha \cdot \min_{y}\Big(M(y) + \sum_{i,j} c_{ij} y_{ij}\Big);$$

that is, we discount the makespan term in the objective before comparing to the optimum.

Setting $\beta = 1$ recovers the familiar notion of $\alpha$-approximation, but taking $\beta > 1$ might make the problem easier. Indeed, we argued earlier that it is NP-hard to achieve any finite $\alpha$ when $\beta = 1$. On the other hand, we can exploit the bi-criterion result of Shmoys and Tardos for the generalized assignment problem [44] to get a polynomial-time algorithm achieving $\alpha = 1$ and $\beta = 2$ [44]. The proof of the following proposition is presented in Section ?.
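The discounting just described is easy to state programmatically; this small check (a sketch of ours, under the discounting convention described above) is how the notion is used throughout:

```python
def is_bicriterion_approx(makespan_x, cost_x, opt, alpha, beta):
    # x is an (alpha, beta)-approximation if discounting its makespan
    # term by beta brings the objective within a factor alpha of the
    # optimal (undiscounted) makespan-plus-cost value `opt`.
    return makespan_x / beta + cost_x <= alpha * opt

# A schedule with makespan 4 and cost 1, against an optimum of 3:
print(is_bicriterion_approx(4.0, 1.0, 3.0, alpha=1.0, beta=2.0))  # True
print(is_bicriterion_approx(4.0, 1.0, 3.0, alpha=1.0, beta=1.0))  # False
```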

Given such $(\alpha,\beta)$-approximation algorithms for objectives modified by allocation costs, we show how to obtain approximately optimal mechanisms, by establishing an appropriate mechanism-to-algorithm reduction described informally below.

See Theorem ? in Section ? for a formal statement. The main technical challenge in establishing our reduction is extending the celebrated “equivalence of separation and optimization” [26] to also accommodate $(\alpha,\beta)$-approximations—see Theorem ?. Theorem ? is obtained by combining Proposition ? and Theorem ?.

To apply our mechanism-to-algorithm reduction to max-min fairness, we need $(\alpha,\beta)$-approximation algorithms for max-min fairness with allocation costs. Since we now have a maximization objective, we are looking to compute allocations $x$ such that

$$\beta \cdot F(x) + \sum_{i,j} c_{ij} x_{ij} \;\ge\; \frac{1}{\alpha} \cdot \max_{y}\Big(F(y) + \sum_{i,j} c_{ij} y_{ij}\Big),$$

for some $\alpha, \beta \ge 1$. In this case, we are allowed to boost the fairness part of the objective before comparing to the optimum. Again, even though no finite $\alpha$ is achievable in polynomial time when $\beta = 1$, we can adapt the algorithms of [3] to obtain finite $(\alpha,\beta)$-approximation algorithms for max-min fairness with costs.

The proof of Proposition ? is given in Section ?. It extends the algorithms of [3] to the presence of allocation costs, showing that the natural linear programming relaxation can be rounded so that the cost term does not increase, while the fairness term decreases by only a bounded factor. Combining Proposition ? with Theorem ? gives Theorem ?.

## A Proof of Theorems ? and ?

### A.1 Proof of Theorem ?

Here we prove Theorem ? for minimization algorithms, noting that the proof for maximization algorithms is nearly identical after swapping the directions of the relevant inequalities where appropriate. Much of the proof is similar to that of Theorem H.1 in [14]. We include it here for completeness; however, we refer the reader to [14] for the proofs of some technical lemmas. We begin by defining the weird separation oracle in Figure ?. This is identical to the weird separation oracle used in [14], except that we use the modified objective instead of the original one.

The weird separation oracle: output “**Yes**” if the ellipsoid algorithm with $N$ iterations^{4} outputs “infeasible” on the associated feasibility problem, whose separation oracle answers “yes” if the queried point satisfies the required guarantee,^{5} and returns the violated hyperplane otherwise.^{6} If a feasible point is found, output the violated hyperplane witnessing this.

If outputs a halfspace , then we must have , implying that . Because is an -approximation, we know that . Therefore, every satisfies , and the halfspace contains .

It is clear that is queried at most times, where is the bit complexity of . Each execution of makes one call to . So as long as and the bit complexity of are both polynomial in , the lemma holds. This is shown in Corollary 5.1 of Section 5 and Lemma 5.1 of Section 5 in [14], and we omit further details here.

When the Ellipsoid algorithm tries to minimize , it does a binary search over possible values , and checks whether or not there is a point satisfying , , and . If there is a point satisfying , , and , then clearly every halfspace output by the separation oracle for contains , and so does the halfspace . Furthermore, by Lemma ?, every halfspace output by contains as well. Therefore, if , the Ellipsoid algorithm using will find a feasible point and continue its binary search. Therefore, the algorithm must conclude with a point satisfying .
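The binary search driving the Ellipsoid algorithm's outer loop follows a standard pattern; the sketch below (ours, with a stand-in feasibility predicate replacing the actual ellipsoid run) illustrates it:

```python
def binary_search_min(feasible, lo, hi, eps=1e-6):
    # Assuming feasible(t) is monotone (once true, true for all larger t),
    # return (within eps) the smallest t for which feasible(t) holds.
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Stand-in predicate: "a point with objective value at most t exists",
# for a problem whose true optimum is 1.0.
t_star = binary_search_min(lambda t: t >= 1.0, 0.0, 10.0)
print(round(t_star, 4))  # 1.0
```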

Consider the following intersection of halfspaces:

If , there exists some weight vector such that . And for appropriately chosen , we also have .

So if , consider the point , with . By the reasoning in the previous paragraph, it’s clear that is in the above intersection of halfspaces.

So consider an execution of that accepts the point . Then taking to be the set of directions queried by the Ellipsoid algorithm during the execution of , the Ellipsoid algorithm deemed the intersection of halfspaces above to be empty. This necessarily means that , as otherwise the previous paragraphs prove that the above intersection of halfspaces wouldn’t be empty.

So we may take to be (an appropriately chosen subset of) the directions queried by the Ellipsoid algorithm during the execution of and complete the proof of the lemma.

It is clear that each lemma proves one guarantee of Theorem ?, completing the proof.

### A.2 Theorem ?

We begin by stating the linear program used in our algorithm in Figure ?, and its modification to use instead in Figure ? (both taken directly from [15]).

**Variables:**

, for all bidders and types , denoting the expected value obtained by bidder when their true type is but they report instead.

, for all bidders and types , denoting the expected price paid by bidder when they report type .

, denoting the expected value of .

**Constraints:**

, for all bidders , and types , guaranteeing that the implicit form is BIC.

, for all bidders , and types , guaranteeing that the implicit form is individually rational.

, guaranteeing that the implicit form is feasible.

**Minimizing:**

, the expected value of when played truthfully by bidders sampled from .

**Variables:**

, for all bidders and types , denoting the expected value obtained by bidder when their true type is but they report instead.

, for all bidders and types , denoting the expected price paid by bidder when they report type .

, denoting the expected value of .

**Constraints:**

, for all bidders , and types , guaranteeing that the implicit form is BIC.

, for all bidders , and types , guaranteeing that the implicit form is individually rational.

, guaranteeing that the implicit form is (almost) feasible.

**Minimizing:**

, (almost) the expected value of when played truthfully by bidders sampled from .
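The inequalities in the two programs above were stripped in extraction. A plausible rendering of the first program, where the names $\pi_i(t,t')$ for expected values, $p_i(t)$ for expected prices, $\mathcal{O}$ for the objective, and $F$ for the feasible set are our hypothetical reconstructions from the verbal descriptions, is:

```latex
\begin{align*}
\text{minimize}\quad
  & \mathbb{E}_{\vec{t}\sim D}\!\left[\mathcal{O}\right]
  && \text{(expected objective under truthful play)}\\
\text{subject to}\quad
  & \pi_i(t,t) - p_i(t) \;\ge\; \pi_i(t,t') - p_i(t')
  && \forall i,\ t,t' \quad \text{(BIC)}\\
  & \pi_i(t,t) - p_i(t) \;\ge\; 0
  && \forall i,\ t \qquad \text{(individual rationality)}\\
  & (\vec{\pi},\vec{p}) \in F
  && \text{(feasibility of the implicit form)}
\end{align*}
```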

We now proceed by proving Propositions ? through ?.

Theorem ? guarantees that the linear program can be solved in the desired runtime, and that the desired directions will be output. It is clear that any implicit form satisfying the constraints is truthful.

Let now denote the value of the LP in Figure ?, denote the value of the LP in Figure ? using a real separation oracle for , and denote the value of the LP in Figure ? using a real separation oracle for . Theorem ? also guarantees that . So we just need to show that with the desired probability.

To see this, first observe that the origin satisfies every constraint in the linear program not due to (i.e. the truthfulness constraints) with equality. Therefore, if any implicit form is truthful, so is the implicit form . This immediately implies that .

By Proposition ?, we know that with the desired probability, the implicit form (with respect to ) of whatever mechanism implements (with respect to ) is -close to , and therefore with the desired probability as well.

In order to prove Proposition ?, we make use of a technical lemma from [15] (specifically, combining Propositions 1 and 7).

Let’s first consider the case that . The case will be handled with one technical modification. Consider first that for any fixed , the problem of finding that minimizes is an instance of . Simply let , , and .

So with black-box access to an -approximation algorithm, , for , let be the mechanism that on profile simply runs on input , , . We therefore get that the mechanism satisfies the following inequality:

By Proposition ?, this then implies that:

This exactly states that is an -approximation. It is also clear that we can compute efficiently: has polynomially many profiles in its support, so we can just run on every profile and see what it outputs, then take an expectation to compute the necessary quantities of the implicit form. Note that this computation is the reason we bother using at all, as we cannot compute these expectations exactly in polynomial time for as the support is exponential.

Now we state the technical modification to accommodate . Recall that for any feasible implicit form , the implicit form is also feasible for any . So if , simply find any feasible implicit form, then set the component to . This yields a feasible implicit form with , which is clearly an -approximation (in fact, it is a -approximation). If instead the problem has a maximization objective, we may w.l.o.g. set in the implicit form we output, which means that the contribution of is completely ignored. So we can use the exact same approach as in the case and just set .

So let be the algorithm that runs on every profile as described, and computes the implicit form of this mechanism with respect to . clearly terminates in the desired runtime. Finally, to implement a mechanism whose implicit form matches , simply run with the required parameters on every profile.

By Proposition ?, the implicit form output by the linear program of Figure ? is in the convex hull of . Therefore, the implicit form is in the convex hull of . Consequently, we can implement with respect to by randomly sampling a direction according to the convex combination, and then implementing the corresponding . Call this mechanism . By Proposition ?, this can be done in time polynomial in the desired quantities. Finally, we just need to show that the guarantees hold with the desired probability.

If our target was just an interim individually rational mechanism, it would be trivial to match the prices exactly: just charge each bidder the desired prices. But if we want an ex-post IR mechanism, we need to employ a simple reduction used in Appendix D of [22], which causes the prices to possibly err with the rest of the implicit form. To see that all guarantees hold with the desired probability, consider that the implicit form of with respect to is -close to with the desired probability. In the event that this happens, it’s obvious that the desired properties hold.

## B Omitted Proofs from Section ?

In this section we provide a proof of Theorems ?, ?, and ?. We begin with Theorem ?. Shmoys and Tardos show that if the linear program of Figure ? outputs a feasible fractional solution, then it can be rounded to a feasible integral solution without much loss. We will refer to this linear program as for various values of .

**Variables:**

, for all machines and jobs denoting the fractional assignment of job to machine .

, denoting the maximum of the makespan and the processing time of the largest single job used.

**Constraints:**

, for all , guaranteeing that every job is assigned.

, for all , guaranteeing that the makespan is at most .

, for all .

for all such that , guaranteeing that no single job has processing time larger than .

.

**Minimizing:**

, (almost) the makespan plus cost of the fractional solution.
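The inequalities in the figure above were stripped in extraction. Under the notation $x_{ij}$, $p_{ij}$, $c_{ij}$ used earlier, a plausible rendering of the linear program (with an auxiliary variable $s$ for the makespan bound, our reconstruction; the figure's final threshold constraint is omitted as unrecoverable) is:

```latex
\begin{align*}
\text{minimize}\quad & s + \sum_{i,j} c_{ij}\, x_{ij}\\
\text{subject to}\quad
  & \textstyle\sum_i x_{ij} = 1 && \forall j && \text{(every job assigned)}\\
  & \textstyle\sum_j p_{ij}\, x_{ij} \le s && \forall i && \text{(makespan at most } s\text{)}\\
  & x_{ij} \ge 0 && \forall i, j\\
  & x_{ij} = 0 && \forall i, j \ \text{such that}\ p_{ij} > s && \text{(no oversized job)}
\end{align*}
```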

With Theorem ? in hand, we can now design a -approximation algorithm. Define the modified makespan of an assignment to be the larger of its makespan and the processing time of the largest single job that is fractionally assigned. Note that for any integral assignment, the modified makespan coincides with the makespan. Now consider solving for all possible values of , and let denote the best among all feasible solutions output. The following lemma states that performs at least as well as the integral optimum.
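The "solve the LP for every candidate threshold and keep the best feasible solution" step looks as follows in outline (the `solve_lp` callback is a hypothetical stand-in for an actual LP solver):

```python
def best_over_thresholds(candidate_ts, solve_lp):
    # solve_lp(t) is assumed to return (feasible, value, solution) for
    # the threshold-t relaxation; we keep the best feasible solution.
    best = None
    for t in candidate_ts:
        feasible, value, solution = solve_lp(t)
        if feasible and (best is None or value < best[0]):
            best = (value, solution)
    return best

# Toy stand-in: thresholds below 2 are infeasible; above that, the
# objective degrades linearly, so t = 2 is the best choice.
toy = lambda t: (t >= 2, t + 1.0, {"t": t})
print(best_over_thresholds([1, 2, 3, 4], toy))  # (3.0, {'t': 2})
```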

Some job assigned in has the largest processing time, say it is . Then is a feasible solution to , and will have value . therefore satisfies . As is an integral solution, we have , proving the lemma.

Combining Lemma ? with Theorem ? proves Theorem ?.

Consider the algorithm that solves for all values of and outputs the fractional solution that is optimal among all feasible solutions found. By Lemma ?, is at least as good as the optimal integral solution. By Theorem ?, we can continue by rounding in polynomial time to an integral solution satisfying .

We next prove Theorem ?, as it will be used in the proof of Theorem ?.

We can break the cost of into , where denotes the portion of the cost due to jobs assigned to machines with positive cost, and denotes the portion of the cost due to jobs assigned to machines with negative cost. As assigns all jobs to the machine with largest positive cost, we clearly have and (but may have ). Furthermore, as for all , we clearly have (but may have ).

There are two cases to consider. If , then we clearly have , and the first possibility holds. Otherwise , in which case we clearly have , and the second possibility holds.

With Theorem ?, we may now prove Theorem ?. We begin by describing our algorithm, which modifies that of Asadpour and Saberi and starts by solving a linear program known as the configuration LP. We modify the LP slightly to maximize fairness plus cost, but this does not affect the ability to solve this LP in polynomial time via the same approach used by Bansal and Sviridenko [5].^{7}

**Variables:**

, for all machines and configurations denoting the fractional assignment of configuration to machine .

**Constraints:**

, for all , guaranteeing that every machine is fractionally assigned a valid configuration with weight .

, for all , guaranteeing that no job is fractionally assigned with weight more than .

, for all .

**Maximizing:**

, the cost of the fractional solution .
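The symbols in the configuration LP above were stripped in extraction. A plausible rendering for a fixed fairness threshold $t$, where the variables $y_{i,C}$ (fractional assignment of configuration $C$ to machine $i$) are our hypothetical reconstruction from the verbal descriptions, is:

```latex
\begin{align*}
\text{maximize}\quad & \sum_{i}\sum_{C} y_{i,C} \sum_{j \in C} c_{ij}\\
\text{subject to}\quad
  & \sum_{C \,:\, \sum_{j \in C} p_{ij} \ge t} y_{i,C} = 1 && \forall i\\
  & \sum_{i}\ \sum_{C \ni j} y_{i,C} \le 1 && \forall j\\
  & y_{i,C} \ge 0 && \forall i, C
\end{align*}
```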

Step one of the algorithm solves for all for which the fairness of the optimal solution could possibly be between and . It's clear that there are only polynomially many such (polynomial in the bit complexities of the processing times and the costs). Let denote the solution found by solving (if one was found at all). Then define . We first claim that is a good fractional solution.

Whatever the optimal integral allocation is, it has some fairness . For satisfying , is clearly a feasible solution to , and therefore we must have . As we also clearly have by choice of , we necessarily have . As maximizes over all , it satisfies the same inequality as well.

From here, we will make use of Theorem ?: either we will choose the allocation that assigns every job to the machine with the highest non-negative cost, or we’ll round to via the procedure used in [3]. We first state the rounding algorithm of [3].

Make a bipartite graph with nodes (one for each machine) on the left and nodes (one for each job) on the right.

For each machine and job , compute . If , put an edge of weight between machine and job . Call the resulting graph .

For each node , denote by the sum of weights of edges incident to .

Update the weights in to remove all cycles. This can be done without decreasing or changing for any , and is proved in Lemma ?.

Pick a random matching in according to Algorithm 2 of [3]. Each edge will be included in with probability exactly , and each machine will be matched with probability exactly .

For all machines that were unmatched in , select a small configuration with probability .

For all jobs that were selected both in the matching stage and the latter stage, award them just to whatever machine received them in the matching. For all jobs that were selected only in the latter stage, choose a machine uniformly at random among those who selected it. Throw away all unselected jobs.

Before continuing, let’s prove that we can efficiently remove cycles without decreasing the cost or changing any .

Consider any cycle . For , denote by and . Call the odd edges those with odd subscripts and the even edges those with even subscripts. W.l.o.g. assume that the odd edges have higher total cost. That is, . Let also . Now consider decreasing the weight of all even edges by and increasing the weight of all odd edges by . Clearly, we have not decreased the cost. It is also clear that we have not changed for any . And finally, it is also clear that we’ve removed a cycle (by removing an edge). So we can repeat this procedure a polynomial number of times and result in an acyclic graph.
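The cycle-cancelling step just described can be sketched as follows (our illustrative dictionaries, not the paper's code): shift weight from one alternating class of the cycle's edges to the other until some edge hits zero, preserving every vertex's incident weight.

```python
# cycle_edges lists the edges of a cycle in cyclic order. Because
# consecutive edges share a vertex, every vertex of the cycle touches
# exactly one odd-indexed and one even-indexed edge, so shifting weight
# between the two classes preserves each vertex's incident weight X_v.

def cancel_cycle(w, cost, cycle_edges):
    odd, even = cycle_edges[0::2], cycle_edges[1::2]
    # W.l.o.g. shift weight toward the class with the higher total cost.
    if sum(cost[e] for e in odd) < sum(cost[e] for e in even):
        odd, even = even, odd
    delta = min(w[e] for e in even)   # largest feasible shift
    for e in odd:
        w[e] += delta
    for e in even:
        w[e] -= delta                 # at least one edge drops to zero
    return w
```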

Now, let denote the fractional assignment obtained after removing cycles in . Then it's clear that . If we let denote the randomized allocation output at the end of the procedure, it's also clear that for all . This is because if there were never any conflicts (jobs being awarded multiple times), we would have exactly . But because of potential conflicts, can only decrease. Asadpour and Saberi show the following theorem about the quality of :

And now we are ready to make use of Theorem ?.

After removing cycles, we have a fractional solution that is a -approximation. By using the randomized procedure of Asadpour and Saberi, we get a randomized satisfying for all and . Therefore, taking , Theorem ? tells us that either assigning every job to the machine with highest non-negative cost yields a -approximation, or is a -approximation.

We conclude this section by proving that the -approximation algorithm of Bezakova and Dani for fairness can be modified to be a -approximation for fairness plus costs. The algorithm is fairly simple: for a fixed , make the following bipartite graph. Put nodes on the left, one for each machine, and nodes on the right, one for each job. Put an additional nodes on the left for dummy machines. Put an edge from every job node to every dummy machine node of weight , and an edge from every job node to every real machine node of weight *only if* . Then find the maximum weight matching in this graph. For every job that is matched to a real machine, assign it there. For every job that is assigned to a dummy machine, assign it to the machine with the maximum non-negative cost (or nowhere if they’re all negative). Call this assignment . Denote . Finally, let denote the allocation that just assigns every job to the machine of highest cost. If , output . Otherwise, output .
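The construction above can be sketched in code. Caveat: the edge weights stripped from the text are guesses here; we assume a real edge $(i,j)$ exists only if job $j$ alone meets the threshold $t$ and carries weight $t + c_{ij}$, while a job's dummy edge carries the best non-negative cost of parking that job. With that caveat, and brute force in place of a maximum-weight-matching subroutine:

```python
from itertools import permutations

def max_weight_assignment(p, c, t):
    # Machines 0..m-1 are real; machine m+j is job j's dummy machine.
    m, n = len(p), len(p[0])

    def weight(i, j):
        if i < m:  # real edge only if the single job meets threshold t
            return t + c[i][j] if p[i][j] >= t else None
        if i == m + j:  # each job connects only to its own dummy machine
            return max([c[k][j] for k in range(m)] + [0.0])
        return None

    best = None
    for perm in permutations(range(m + n), n):  # job j -> machine perm[j]
        total, ok = 0.0, True
        for j, i in enumerate(perm):
            wij = weight(i, j)
            if wij is None:
                ok = False
                break
            total += wij
        if ok and (best is None or total > best[0]):
            best = (total, perm)
    return best

p = [[2.0, 1.0], [1.0, 2.0]]
c = [[0.0, 0.0], [0.0, 0.0]]
print(max_weight_assignment(p, c, t=2.0))  # (4.0, (0, 1))
```

For real instances, the brute-force loop would of course be replaced by a polynomial-time maximum-weight bipartite matching algorithm.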

Consider the optimal assignment . We either have , or . If , then clearly . If , then every machine is awarded at least one job, but at most . For each machine , define to be the job assigned to with the highest processing time. Except for , reassign all other jobs to the machine with the highest non-negative cost. This can only increase the cost, and will not hurt the fairness by more than a factor of . So this solution, , clearly has . Furthermore, corresponds to a feasible matching when . Whatever solution is found instead clearly has fairness at least and cost at least . So , and therefore also , is a -approximation.

So in conclusion, either , in which case is a -approximation, or , in which case . So if we ever output , we actually have . If we output , then either , or . In both cases, is a -approximation.

Part 1) is proved in Proposition ?, and part 2) is proved in Proposition ?.

### Footnotes

- Technically, their result holds for maximization objectives, but our work here provides the necessary modifications for minimization objectives, as well as the important generalization to $(\alpha,\beta)$-approximations discussed below.
- This can be seen via a simple modification of an inapproximability result given in [35]. For the problem of scheduling unrelated machines, they construct instances with integer-valued makespan that is always and such that it is NP-hard to decide whether the makespan is or . We can modify their instances to scheduling unrelated machines with costs instances by giving each job a cost of on every machine for an arbitrary . Then the total cost of any feasible solution is exactly . So their proof immediately shows that it is NP-hard to determine if these instances have optimal makespan + cost that is or . Since was arbitrary, this shows that no finite approximation factor is possible.
- Indeed, Bezakova and Dani [9] present a family of max-min fairness instances such that it is NP-hard to distinguish between and . To each of these instances add a special machine and a special job such that the processing-time and cost of the special machine for the special job are and respectively, while the processing-time and cost of the special machine for any non-special job or of any non-special machine for the special job are and respectively. Also, assign cost to any non-special machine non-special job pair. In the resulting max-min fairness with costs instances it is NP-hard to distinguish between and , hence no finite approximation is possible.
- The appropriate choice of for our use of is provided in Corollary 5.1 of Section 5 in [15]. is polynomial in the appropriate quantities.
- The appropriate choice of for our use of is provided in Lemma 5.1 of Section 5 in [14]. The bit complexity of is polynomial in the appropriate quantities.
- Notice that the set “Yes” is not necessarily convex or even connected.
- Note that this is non-trivial, as the LP has exponentially many variables. The approach of Bansal and Sviridenko is to solve the dual LP via a separation oracle, which requires solving a knapsack problem.
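The uniform-cost modification of hardness instances described in the footnotes above is mechanical; the following is a minimal sketch with hypothetical names (`p` for the processing-time matrix, `c` for the arbitrary per-job cost). The point is only that every feasible schedule pays total cost exactly `c` times the number of jobs, so the makespan gap of the original instance carries over to the makespan-plus-cost objective.

```python
def add_uniform_costs(p, c):
    """Augment a scheduling instance (processing times p[i][j] of machine i
    for job j) with a cost of c for every (machine, job) pair.
    Hypothetical sketch of the footnote's reduction."""
    cost = [[c] * len(p[0]) for _ in p]
    return p, cost

def total_cost(cost, assignment):
    """Total cost of a schedule assigning job j to machine assignment[j].
    Since every entry of cost equals c and each job is assigned exactly
    once, this is always c * (number of jobs), independent of the
    schedule chosen."""
    return sum(cost[assignment[j]][j] for j in range(len(assignment)))
```

For instance, with two jobs and `c = 7`, all four possible schedules on two machines have total cost 14.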

### References

**Truthful Mechanisms for One-Parameter Agents.** A. Archer and É. Tardos. In *the 42nd Annual Symposium on Foundations of Computer Science (FOCS)*, 2001.

**Santa claus meets hypergraph matchings.** A. Asadpour, U. Feige, and A. Saberi. In *the 12th International Workshop on Approximation, Randomization, and Combinatorial Optimization (APPROX-RANDOM)*, 2008.

**An approximation algorithm for max-min fair allocation of indivisible goods.** A. Asadpour and A. Saberi. In *the 39th Annual ACM Symposium on Theory of Computing (STOC)*, 2007.

**Optimal lower bounds for anonymous scheduling mechanisms.** I. Ashlagi, S. Dobzinski, and R. Lavi. *Mathematics of Operations Research*, 37(2):244–258, 2012.

**The santa claus problem.** N. Bansal and M. Sviridenko. In *the 38th Annual ACM Symposium on Theory of Computing (STOC)*, pages 31–40, 2006.

**Maxmin allocation via degree lower-bounded arborescences.** M. Bateni, M. Charikar, and V. Guruswami. In *the 41st Annual ACM Symposium on Theory of Computing (STOC)*, pages 543–552, 2009.

**Revenue maximization with nonexcludable goods.** M. Bateni, N. Haghpanah, B. Sivan, and M. Zadimoghaddam. In *the 9th International Conference on Web and Internet Economics (WINE)*, 2013.

**Bayesian Incentive Compatibility via Fractional Assignments.** X. Bei and Z. Huang. In *the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2011.

**Allocating indivisible goods.** I. Bezáková and V. Dani. *SIGecom Exchanges*, 5(3):11–18, 2005.

*Fair Division: From cake-cutting to dispute resolution*. S. J. Brams and A. D. Taylor. Cambridge University Press, 1996.

**Inapproximability for VCG-Based Combinatorial Auctions.** D. Buchfuhrer, S. Dughmi, H. Fu, R. Kleinberg, E. Mossel, C. H. Papadimitriou, M. Schapira, Y. Singer, and C. Umans. In *Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2010.

**An Algorithmic Characterization of Multi-Dimensional Mechanisms.** Y. Cai, C. Daskalakis, and S. M. Weinberg. In *the 44th Annual ACM Symposium on Theory of Computing (STOC)*, 2012.

**Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization.** Y. Cai, C. Daskalakis, and S. M. Weinberg. In *the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 2012.

**Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations.** Y. Cai, C. Daskalakis, and S. M. Weinberg. In *the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2013.

**Understanding Incentives: Mechanism Design becomes Algorithm Design.** Y. Cai, C. Daskalakis, and S. M. Weinberg. In *the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 2013.

**On allocating goods to maximize fairness.** D. Chakrabarty, J. Chuzhoy, and S. Khanna. In *the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 2009.

**Prior-Independent Mechanisms for Scheduling.** S. Chawla, J. Hartline, D. Malec, and B. Sivan. In *Proceedings of the 45th ACM Symposium on Theory of Computing (STOC)*, 2013.

**On the limits of black-box reductions in mechanism design.** S. Chawla, N. Immorlica, and B. Lucier. In *Proceedings of the 44th Symposium on Theory of Computing (STOC)*, 2012.

**Mechanism Design for Fractional Scheduling on Unrelated Machines.** G. Christodoulou, E. Koutsoupias, and A. Kovács. In *the 34th International Colloquium on Automata, Languages and Programming (ICALP)*, 2007.

**A Lower Bound for Scheduling Mechanisms.** G. Christodoulou, E. Koutsoupias, and A. Vidali. In *the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2007.

**A Deterministic Truthful PTAS for Scheduling Related Machines.** G. Christodoulou and A. Kovács. In *the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2010.

**Symmetries and Optimal Multi-Dimensional Mechanism Design.** C. Daskalakis and S. M. Weinberg. In *the 13th ACM Conference on Electronic Commerce (EC)*, 2012.

**Truthful Approximation Schemes for Single-Parameter Agents.** P. Dhangwatnotai, S. Dobzinski, S. Dughmi, and T. Roughgarden. In *the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 2008.

**An Impossibility Result for Truthful Combinatorial Auctions with Submodular Valuations.** S. Dobzinski. In *Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC)*, 2011.

**The Computational Complexity of Truthfulness in Combinatorial Auctions.** S. Dobzinski and J. Vondrak. In *Proceedings of the ACM Conference on Electronic Commerce (EC)*, 2012.

**The Ellipsoid Method and its Consequences in Combinatorial Optimization.** M. Grötschel, L. Lovász, and A. Schrijver. *Combinatorica*, 1(2):169–197, 1981.

**Optimal auctions with positive network externalities.** N. Haghpanah, N. Immorlica, V. S. Mirrokni, and K. Munagala. In *the 12th ACM Conference on Electronic Commerce (EC)*, 2011.

**Bayesian Incentive Compatibility via Matchings.** J. D. Hartline, R. Kleinberg, and A. Malekian. In *the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2011.

**Bayesian Algorithmic Mechanism Design.** J. D. Hartline and B. Lucier. In *the 42nd ACM Symposium on Theory of Computing (STOC)*, 2010.

*Approximation algorithms for NP-hard problems*. D. S. Hochbaum. PWS Publishing Co., 1996.

**On Linear Characterizations of Combinatorial Optimization Problems.** R. M. Karp and C. H. Papadimitriou. In *the 21st Annual Symposium on Foundations of Computer Science (FOCS)*, 1980.

**Approximation algorithms for the max-min allocation problem.** S. Khot and A. K. Ponnuswami. In *the 11th International Workshop on Approximation, Randomization, and Combinatorial Optimization (APPROX-RANDOM)*, 2007.

**Sur le problème du partage pragmatique de H. Steinhaus.** B. Knaster. In *Annales de la Société Polonaise de Mathématique*, volume 19, pages 228–230, 1946.

**A Lower Bound of for Truthful Scheduling Mechanisms.** E. Koutsoupias and A. Vidali. In *the 32nd International Symposium on the Mathematical Foundations of Computer Science (MFCS)*, 2007.

**Approximation algorithms for scheduling unrelated parallel machines.** J. K. Lenstra, D. B. Shmoys, and É. Tardos. In *FOCS*, 1987.

**On 2-Player Randomized Mechanisms for Scheduling.** P. Lu. In *the 5th International Workshop on Internet and Network Economics (WINE)*, 2009.

**An Improved Randomized Truthful Mechanism for Scheduling Unrelated Machines.** P. Lu and C. Yu. In *the 25th Annual Symposium on Theoretical Aspects of Computer Science (STACS)*, 2008.

**Randomized Truthful Mechanisms for Scheduling Unrelated Machines.** P. Lu and C. Yu. In *the 4th International Workshop on Internet and Network Economics (WINE)*, 2008.

**Setting lower bounds on truthfulness: extended abstract.** A. Mu’alem and M. Schapira. In *the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, 2007.

**Optimal Auction Design.** R. B. Myerson. *Mathematics of Operations Research*, 6(1):58–73, 1981.

**Algorithmic Mechanism Design (Extended Abstract).** N. Nisan and A. Ronen. In *Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing (STOC)*, 1999.

**On the hardness of being truthful.** C. H. Papadimitriou, M. Schapira, and Y. Singer. In *Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS)*, 2008.

**An approximation algorithm for the generalized assignment problem.** D. B. Shmoys and É. Tardos. *Mathematical Programming*, 62(1-3):461–474, 1993.

**Scheduling Unrelated Machines with Costs.** D. B. Shmoys and É. Tardos. In *the 4th Symposium on Discrete Algorithms (SODA)*, 1993.

**The problem of fair division.** H. Steinhaus. *Econometrica*, 16(1), 1948.