On the Impossibility of Black-Box Transformations in Mechanism Design
Abstract
We consider the problem of converting an arbitrary approximation algorithm for a single-parameter optimization problem into a computationally efficient truthful mechanism. We ask for reductions that are black-box, meaning that they require only oracle access to the given algorithm and in particular do not require explicit knowledge of the problem constraints. Such a reduction is known to be possible, for example, for the social welfare objective when the goal is to achieve Bayesian truthfulness and preserve social welfare in expectation. We show that a black-box reduction for the social welfare objective is not possible if the resulting mechanism is required to be truthful in expectation and to preserve the worst-case approximation ratio of the algorithm to within a subpolynomial factor. Further, we prove that for other objectives such as makespan, no black-box reduction is possible even if we only require Bayesian truthfulness and an average-case performance guarantee.
1 Introduction
Mechanism design studies optimization problems arising in settings involving selfish agents, with the goal of designing a system or protocol whereby agents’ individual selfish optimization leads to global optimization of a desired objective. A central theme in algorithmic mechanism design is to reconcile the incentive constraints of selfish participants with the requirement of computational tractability and to understand whether the combination of these two considerations limits algorithm design in a way that each one alone does not.
In the best-case scenario, one might hope for a sort of equivalence between the considerations of algorithm design and mechanism design. In particular, recent research has explored general reductions that convert an arbitrary approximation algorithm into an incentive compatible mechanism. Ideally, these reductions have an arbitrarily small loss in performance, and are black-box in the sense that they need not understand the underlying structure of the given algorithm or problem constraints. A big benefit of this approach is that it allows a practitioner to ignore incentive constraints while fine-tuning his algorithm to the observed workload. Of course the feasibility of the approach depends heavily on the objective at hand as well as the incentive requirements. The goal of this paper is to understand what scenarios enable such black-box reductions.
The classic result of Vickrey, Clarke and Groves provides a positive result along these lines for social welfare maximization. Social welfare maximization is a standard objective in mechanism design. Here, a central authority wishes to assist a group of individuals in choosing from among a set of outcomes with the goal of maximizing the total outcome value to all participants. The Vickrey, Clarke and Groves result demonstrates that for any such problem, there exists a mechanism that maximizes social welfare with very robust incentive properties (namely, it is ex post incentive compatible). This construction requires that the mechanism optimize social welfare precisely, and so can be thought of as a reduction from incentive compatible mechanism design to exact algorithm design. This result can be extended to more general scenarios beyond social welfare. In the single-parameter setting, where the preferences of every selfish agent can be described by a single scalar parameter, (Bayesian or ex post) incentive compatibility is essentially equivalent to a per-agent monotonicity condition on the allocation returned by the mechanism. Therefore, for objective functions that are “monotone” in the sense that exact optimization of the objective leads to a monotone allocation function, there is a reduction from mechanism design to exact algorithm design along the lines of the VCG mechanism for social welfare.
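The single-item special case makes the VCG construction concrete: there it reduces to the familiar second-price auction. The following sketch (illustrative Python, not taken from the paper) implements it and brute-force-verifies ex post incentive compatibility on a small grid of values.

```python
from itertools import product

def second_price_auction(bids):
    """Single-item VCG: allocate to the highest bidder and charge the winner
    the externality it imposes on the others (the second-highest bid)."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    second = max(b for i, b in enumerate(bids) if i != winner)
    alloc = [0] * len(bids)
    pay = [0] * len(bids)
    alloc[winner] = 1
    pay[winner] = second
    return alloc, pay

def utility(i, value, bids):
    alloc, pay = second_price_auction(bids)
    return value * alloc[i] - pay[i]

# Brute-force check of ex post incentive compatibility on a value grid:
# no unilateral misreport ever increases an agent's utility.
grid = [0, 1, 2, 3]
for values in product(grid, repeat=3):
    for i in range(3):
        truthful = utility(i, values[i], list(values))
        for lie in grid:
            bids = list(values)
            bids[i] = lie
            assert utility(i, values[i], bids) <= truthful
```

The key design point, which generalizes to VCG, is that the winner's payment does not depend on the winner's own bid; this is what makes truthful bidding a dominant strategy.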
However, many settings of interest involve constraints which are computationally infeasible to optimize precisely, and so exact algorithms are not known to exist. Recent work [12, 11, 4] shows that in Bayesian settings of partial information, the reduction from mechanism design to algorithm design for social welfare can be extended to encompass arbitrary approximation algorithms with arbitrarily small loss in expected performance. These reductions work even in multi-parameter settings. Moreover, these reductions are black-box, meaning that they require only oracle access to the prior type distributions and the algorithm, and proceed without knowledge of the feasibility constraints of the problem to be solved.
In light of this positive result, two natural questions arise:

1. Are there black-box reductions transforming approximation algorithms for social welfare into ex post incentive compatible mechanisms with little loss in worst-case approximation ratio?

2. Does every monotone objective admit a black-box reduction that transforms approximation algorithms into (Bayesian or ex post) incentive compatible mechanisms with little loss in (worst-case or average) approximation ratio?
In this paper we answer both of these questions in the negative. Our impossibility results apply to the simpler class of single-parameter optimization problems.
The first question strengthens the demands of the reduction beyond those of the aforementioned positive results [12, 11, 4] in two significant ways. First, it requires the stronger solution concept of ex post incentive compatibility, rather than Bayesian incentive compatibility. Second, it requires that the approximation factor of the original algorithm be preserved in the worst case over all possible inputs, rather than in expectation. It is already known that such reductions are not possible for general multi-parameter social welfare problems. For some multi-parameter problems, no ex post incentive compatible mechanism can match the worst-case approximation factors achievable by algorithms without game-theoretic constraints [16, 8]. Thus, for general social welfare problems, the relaxation from an ex post, worst-case setting to a Bayesian setting provably improves one’s ability to implement algorithms as mechanisms. However, prior to our work, it was not known whether a lossless black-box reduction could exist for the important special case of single-parameter problems. We show that any such reduction for single-parameter problems must sometimes degrade an algorithm’s worst-case performance by a factor that is polynomial in the problem size.
The second question asks whether there are properties specific to social welfare that enable the computationally efficient reductions of the above results [12, 11, 4]. One property that appears to be crucial in their analysis is the linearity of the objective in both the agents’ values and the algorithm’s allocations. For our second impossibility result we consider the (highly nonlinear but monotone) makespan objective for scheduling problems. In a scheduling problem we are given a number of (selfish) machines and jobs; our goal is to schedule the jobs on the machines in such a way that the load on the most loaded machine (namely, the makespan of the schedule) is minimized. The sizes of jobs on machines are private information that the machines possess and must be incentivized to share with the algorithm.
Ashlagi et al. [2] showed that for the makespan objective in multi-parameter settings (that is, when the sizes of jobs on different machines are unrelated), ex post incentive compatibility imposes a huge cost: while constant factor approximations can be obtained in the absence of incentive constraints, no “anonymous” mechanism can obtain a sublinear approximation ratio under the requirement of incentive compatibility. The situation for single-parameter settings is quite different. In single-parameter (a.k.a. related) settings, each machine has a single private parameter, namely its speed, and each job has a known intrinsic size; the load that a job places on a machine is its size divided by the speed of the machine. In such settings in the absence of feasibility constraints, deterministic PTASes are known both with and without incentives. Given this positive result one might expect that for single-parameter makespan minimization there is no gap between algorithm design and mechanism design, at least in the weaker Bayesian setting where the goal is to achieve Bayesian incentive compatibility and match the algorithm’s expected performance. We show that this is not true—any black-box reduction that achieves Bayesian incentive compatibility must sometimes make the expected makespan worse by a factor polynomial in the problem size.
Finally, while makespan is quite different from social welfare, one might ask whether there exist objectives that share some of the nice properties of social welfare that enable reductions in the style of [11] and others. At a high level, the black-box reductions for social welfare perform “ironing” operations for each agent independently, fixing nonmonotonicities in the algorithm’s output in a local fashion without hurting the overall social welfare. One property of social welfare that enables such an approach is that it is additive across agents. In our final result we show that even restricting attention to objectives that are additive across agents, for almost any objective other than social welfare no per-agent ironing procedure can simultaneously ensure Bayesian incentive compatibility as well as a bounded loss in performance. The implication for mechanism design is that any successful reduction must take a holistic approach over agents and look very different from those known for social welfare.
Our results and techniques.
As mentioned earlier, the existence of a black-box reduction from mechanism design to algorithm design can depend on the objective function we are optimizing, the incentive requirements, as well as whether we are interested in a worst-case or average-case performance guarantee. We distinguish between two kinds of incentive requirements (see formal definitions in Section 2). Bayesian incentive compatibility (BIC) implies that truth-telling forms a Bayes-Nash equilibrium under the assumption that the agents’ value distributions are common knowledge. The stronger notion of ex post incentive compatibility (EPIC), a.k.a. universal truthfulness, implies that every agent maximizes her utility by truth-telling regardless of others’ actions and the mechanism’s coin flips. For randomized mechanisms there is a weaker notion of truthfulness called truthfulness in expectation (TIE) which implies that every agent maximizes her utility in expectation over the randomness in the mechanism by truth-telling regardless of others’ actions. We further distinguish between social welfare and arbitrary monotone objectives, and between the average performance of the algorithm and its worst-case performance.
Table 1 below summarizes our findings as well as known results along these three dimensions. Essentially, we find that there is a dichotomy of settings: some allow for essentially lossless transformations whereas others suffer an unbounded loss in performance.
[Table 1: possibility and impossibility results, with one row per objective: social welfare, and arbitrary monotone (e.g. makespan).]
One way to establish our impossibility results would be to demonstrate the existence of single-parameter optimization problems for which there is a gap in the approximating power of arbitrary algorithms and ex post incentive compatible algorithms. This is an important open problem which has resisted much effort by the algorithmic mechanism design community, and is beyond the scope of our work. Instead, we focus upon the black-box nature of the reductions with respect to, in particular, the feasibility constraint that they face. Note that for single-parameter problems, (Bayesian or ex post) incentive compatibility is essentially equivalent to a per-agent monotonicity condition on the allocation returned by the mechanism. We construct instances that contain “hidden” nonmonotonicities and yet provide good approximations. In order for the transformation to be incentive compatible while also preserving the algorithm’s approximation factor, it must fix these nonmonotonicities by replacing the algorithm’s original allocation with very specific kinds of “good” allocations. However, in order to determine which of these good allocations are also feasible the transformation must query the original algorithm at multiple inputs with the hope of finding a good allocation. We construct the algorithm in such a way that any single query of the transformation is exponentially unlikely to find a good allocation.
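The role of hidden nonmonotonicities can be illustrated with a toy one-agent example (the numbers below are our own, not the construction of Section 3): an allocation rule whose allocation drops as the reported value rises admits no truthful payments, and the obvious monotone fix that avoids guessing at unseen feasible allocations loses a large welfare factor.

```python
# Feasible allocations (known to the algorithm, hidden from any
# black-box transformation): 0.0, 0.2, or 1.0.
def alg(v):
    """A one-agent allocation rule with a hidden nonmonotonicity:
    the allocation drops from 1.0 to 0.2 as the value rises past 1."""
    return 1.0 if v <= 1 else 0.2

values = [0.5, 1.0, 1.5, 2.0]
allocs = [alg(v) for v in values]
nonmonotone = any(later < earlier for earlier, later in zip(allocs, allocs[1:]))
assert nonmonotone  # so alg admits no truthful payment rule

# Fix 1: lower the allocation to 0.2 everywhere. This is monotone and
# surely feasible, but welfare at v = 1.0 falls from 1.0 to 0.2, a
# factor-5 loss.  Fix 2 would raise the allocation at v > 1 to 1.0,
# but a transformation that has only observed the allocation 0.2 there
# cannot know whether 1.0 is feasible without further lucky queries.
welfare_original = 1.0 * alg(1.0)
welfare_fix1 = 1.0 * 0.2
assert welfare_original / welfare_fix1 == 5.0
```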
Related Work.
Reductions from mechanism design to algorithm design in the Bayesian setting were first studied by Hartline and Lucier [12], who showed that any approximation algorithm for a single-parameter social welfare problem can be converted into a Bayesian incentive compatible mechanism with arbitrarily small loss in expected performance. This was extended to multi-parameter settings by Hartline, Kleinberg and Malekian [11] and Bei and Huang [4].
Some reductions from mechanism design to algorithm design are known for prior-free settings, for certain restricted classes of algorithms. Lavi and Swamy [15] consider mechanisms for multi-parameter packing problems and show how to construct a (randomized) approximation mechanism that is truthful in expectation, from any approximation algorithm that verifies an integrality gap. Dughmi, Roughgarden and Yan [10] extend the notion of designing mechanisms based upon randomized rounding algorithms, and obtain truthful-in-expectation mechanisms for a broad class of submodular combinatorial auctions. Dughmi and Roughgarden [9] give a construction that converts any FPTAS algorithm for a social welfare problem into a mechanism that is truthful in expectation, by way of a variation on smoothed analysis.
Babaioff et al. [3] provide a technique for turning an algorithm for a single-valued combinatorial auction problem into a truthful approximation mechanism, when agent values are restricted to lie in a known range. This reduction applies to single-parameter problems with downward-closed feasibility constraints and binary allocations (each agent’s allocation can be either 0 or 1).
Many recent papers have explored limitations on the power of deterministic ex post incentive compatible mechanisms to approximate social welfare. Papadimitriou, Schapira and Singer [16] gave an example of a social welfare problem for which constant-factor approximation algorithms exist, but any poly-time ex post incentive compatible mechanism attains at best a polynomial approximation factor. A similar gap for the submodular combinatorial auction problem was established by Dobzinski [8]. For the general combinatorial auction problem, such gaps have been established for the restricted class of max-in-range mechanisms by Buchfuhrer et al. [5].
Truthful scheduling on related machines to minimize makespan was studied by Archer and Tardos [1], who designed a truthful-in-expectation 3-approximation. Dhangwatnotai et al. [7] gave a randomized PTAS that is truthful in expectation, which was then improved to a deterministic truthful PTAS by Christodoulou and Kovács [6], matching the performance of the best possible approximation algorithm [13]. Our work on makespan minimization differs in that we consider the goal of minimizing makespan subject to an arbitrary feasibility constraint.
A preliminary version of this work [14] proved an impossibility result for EPIC black-box reductions for single-parameter social welfare problems. In this paper we extend that result to apply to (the broader class of) TIE reductions.
2 Preliminaries
Optimization Problems.
In a single-parameter real-valued optimization problem we are given an input vector $v = (v_1, \ldots, v_n)$. Each $v_i$ is assumed to be drawn from a known set $V_i \subseteq \mathbb{R}$, so that $V = V_1 \times \cdots \times V_n$ is the set of possible input vectors. The goal is to choose some allocation $x$ from among a set $\mathcal{F}$ of feasible allocations such that a given objective function $\mu(x, v)$ is optimized (i.e. either maximized or minimized, depending on the nature of the problem). We think of the feasibility set $\mathcal{F}$ and the objective function $\mu$ as defining an instance of the optimization problem. We will write $x = (x_1, \ldots, x_n)$, where each $x_i \in \mathbb{R}$.
An algorithm $\mathcal{A}$ defines a mapping from input vectors $v \in V$ to outcomes $x \in \mathcal{F}$. We will write $\mathcal{A}(v)$ for the allocation returned by $\mathcal{A}$ as well as the value it obtains; the intended meaning should be clear from the context. In general an algorithm can be randomized, in which case $\mathcal{A}(v)$ is a random variable.
Given an instance of the social welfare problem, we will write $\mathrm{OPT}(v)$ for the allocation in $\mathcal{F}$ that maximizes the objective, as well as the value it obtains. Given algorithm $\mathcal{A}$, let $\mathrm{ratio}(\mathcal{A})$ denote the worst-case approximation ratio of $\mathcal{A}$ for problem $\mathcal{F}$. That is, $\mathrm{ratio}(\mathcal{A}) = \sup_{v \in V} \mathrm{OPT}(v) / \mathbb{E}[\mathcal{A}(v)]$ for a maximization problem; here $\mathcal{F}$ is implicit and should be clear from the context. Note that $\mathrm{ratio}(\mathcal{A}) \geq 1$ for all $\mathcal{A}$ and $\mathcal{F}$.
We also consider a Bayesian version of our optimization problem, in which there is a publicly-known product distribution $\mathbf{F} = F_1 \times \cdots \times F_n$ on input vectors. That is, $v \sim \mathbf{F}$ and each $v_i$ is distributed according to $F_i$. Given $\mathbf{F}$, the expected objective value of a given algorithm $\mathcal{A}$ is given by $\mathbb{E}_{v \sim \mathbf{F}}[\mathcal{A}(v)]$. The goal of the optimization problem in this setting is to optimize the expected objective value.
Mechanisms.
We will consider our optimization problems in a mechanism design setting with rational agents, where each agent $i$ possesses one value $v_i$ from the input vector as private information. We think of an outcome $x$ as representing an allocation to the agents, where $x_i$ is the allocation to agent $i$. A (direct-revelation) mechanism for our optimization problem then proceeds by eliciting declared values $b = (b_1, \ldots, b_n)$ from the agents, then applying an allocation algorithm that maps $b$ to an allocation $x(b) \in \mathcal{F}$, and a payment rule that maps $b$ to a payment vector $p(b) = (p_1(b), \ldots, p_n(b))$. We will write $x(b)$ and $p(b)$ for the allocations and payments that result on input $b$. The utility of agent $i$, given that the agents declare $b$ and his true private value is $v_i$, is taken to be $v_i x_i(b) - p_i(b)$.
A (possibly randomized) mechanism is truthful in expectation (TIE) if each agent maximizes its expected utility by reporting its value truthfully, regardless of the reports of the other agents, where expectation is taken over any randomness in the mechanism. That is, $\mathbb{E}[v_i x_i(v_i, b_{-i}) - p_i(v_i, b_{-i})] \geq \mathbb{E}[v_i x_i(b_i, b_{-i}) - p_i(b_i, b_{-i})]$ for all $i$, all $v_i, b_i \in V_i$, and all $b_{-i}$. We say that an algorithm $\mathcal{A}$ is TIE if there exists a payment rule $p$ such that the resulting mechanism is TIE. It is known that an algorithm is TIE if and only if, for all $i$ and all $b_{-i}$, $\mathbb{E}[x_i(v_i, b_{-i})]$ is monotone nondecreasing as a function of $v_i$, where the expectation is over the randomness in the mechanism.
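For a single agent, the monotonicity characterization pairs with Myerson's payment formula, $p(v) = v \cdot x(v) - \int_0^v x(t)\,dt$. A minimal sketch (illustrative Python; the step allocation rule is our own example, and the integral is computed in closed form for it) charges these payments and checks numerically that no misreport on a grid beats truth-telling.

```python
def alloc(v):
    """A monotone (nondecreasing) single-agent allocation rule."""
    if v < 1:
        return 0.0
    if v < 2:
        return 0.5
    return 1.0

def integral_alloc(v):
    """Integral of alloc from 0 to v, exact for this particular step rule:
    alloc is 0 on [0,1), 0.5 on [1,2), and 1.0 from 2 onward."""
    return 0.5 * max(0.0, min(v, 2.0) - 1.0) + 1.0 * max(0.0, v - 2.0)

def payment(v):
    """Myerson's payment rule: p(v) = v * x(v) - integral_0^v x(t) dt."""
    return v * alloc(v) - integral_alloc(v)

def utility(value, report):
    return value * alloc(report) - payment(report)

# Truth-telling is (weakly) optimal at every grid point.
grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
for value in grid:
    truthful = utility(value, value)
    for report in grid:
        assert utility(value, report) <= truthful + 1e-9
```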
We say that a (possibly randomized) mechanism is Bayesian incentive compatible (BIC) for distribution $\mathbf{F}$ if each agent maximizes its expected utility by reporting its value truthfully, given that the other agents’ values are distributed according to $\mathbf{F}$ (and given any randomness in the mechanism). That is, $\mathbb{E}[v_i x_i(v_i, v_{-i}) - p_i(v_i, v_{-i})] \geq \mathbb{E}[v_i x_i(b_i, v_{-i}) - p_i(b_i, v_{-i})]$ for all $i$ and all $v_i, b_i \in V_i$, where the expectation is over the distribution of others’ values and the randomness in the mechanism. We say that an algorithm $\mathcal{A}$ is BIC if there exists a payment rule $p$ such that the resulting mechanism is BIC. It is known that an algorithm is BIC if and only if, for all $i$, $\mathbb{E}_{v_{-i}}[x_i(v_i, v_{-i})]$ is monotone nondecreasing as a function of $v_i$.
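The gap between the two conditions shows up already in a toy example (ours, not the paper's): a deterministic rule that is nonmonotone for every fixed opponent report, and hence not TIE, can still have a monotone interim allocation once the opponent's value is averaged out, and hence be BIC.

```python
import random

def x1(v1, v2):
    """Ex post allocation to agent 1: depends on parity, so for any
    fixed v2 it is NOT monotone in v1."""
    return 1.0 if (v1 + v2) % 2 == 0 else 0.0

def interim(v1, trials=20000, seed=1):
    """Monte Carlo estimate of E[x1(v1, v2)] with v2 uniform on {1, 2}."""
    rng = random.Random(seed)
    return sum(x1(v1, rng.choice([1, 2])) for _ in range(trials)) / trials

# Ex post: nonmonotone for fixed v2 = 1 (allocations 1, 0, 1 at v1 = 1, 2, 3).
assert [x1(v, 1) for v in [1, 2, 3]] == [1.0, 0.0, 1.0]

# Interim: with v2 uniform on {1, 2}, the expected allocation is 1/2 for
# every v1, hence weakly monotone, so the rule is BIC but not TIE.
estimates = [interim(v) for v in [1, 2, 3]]
assert all(abs(e - 0.5) < 0.02 for e in estimates)
```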
Transformations.
A poly-time transformation $\mathcal{T}$ is an algorithm that is given black-box access to an algorithm $\mathcal{A}$. We will write $\mathcal{T}^{\mathcal{A}}(v)$ for the allocation returned by $\mathcal{T}$ on input $v$, given that its black-box access is to algorithm $\mathcal{A}$. Then, for any $\mathcal{A}$, we can think of $\mathcal{T}^{\mathcal{A}}$ as an algorithm that maps value vectors to allocations; we think of this as the algorithm $\mathcal{A}$ transformed by $\mathcal{T}$, and write $\mathcal{T}^{\mathcal{A}}$ for the allocation rule that results. Note that $\mathcal{T}$ is not parameterized by $\mathcal{F}$; informally speaking, $\mathcal{T}$ has no knowledge of the feasibility constraint being optimized by a given algorithm $\mathcal{A}$. However, we do assume that $\mathcal{T}$ is aware of the objective function $\mu$, the domain $V_i$ of values for each agent $i$, and (in Bayesian settings) the distribution $\mathbf{F}$ over values.
We say that a transformation $\mathcal{T}$ is truthful in expectation (TIE) if, for all $\mathcal{A}$, $\mathcal{T}^{\mathcal{A}}$ is a TIE algorithm. In a Bayesian setting with distribution $\mathbf{F}$, we say that transformation $\mathcal{T}$ is Bayesian incentive compatible (BIC) for $\mathbf{F}$ if, for all $\mathcal{A}$, $\mathcal{T}^{\mathcal{A}}$ is a BIC algorithm. Note that whether or not $\mathcal{T}$ is TIE or BIC is independent of the objective function $\mu$ and feasibility constraint $\mathcal{F}$.
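In code, the black-box restriction amounts to giving the transformation only a query oracle for the algorithm, with no representation of the feasibility constraint. A minimal sketch of this interface (our own illustrative Python; the toy "transformation" shown preserves feasibility but makes no incentive guarantee, which is precisely the hard part):

```python
class Oracle:
    """Wraps an algorithm A, exposing only input -> allocation queries
    and counting how many queries the transformation makes."""
    def __init__(self, alg):
        self._alg = alg
        self.queries = 0

    def __call__(self, v):
        self.queries += 1
        return self._alg(v)

def transform(oracle, v, probes):
    """A toy 'transformation': query the oracle on the real input plus a
    few probe inputs, and output the observed allocation of highest
    welfare on v.  Every allocation it outputs was returned by the
    oracle, so feasibility is preserved; nothing here, however,
    guarantees monotonicity of the resulting allocation rule."""
    candidates = [oracle(b) for b in [v] + probes]
    return max(candidates, key=lambda x: sum(vi * xi for vi, xi in zip(v, x)))

# Example: an algorithm that always allocates everything to agent 0.
oracle = Oracle(lambda v: [1.0] + [0.0] * (len(v) - 1))
out = transform(oracle, [0.0, 5.0], probes=[[1.0, 1.0]])
assert out == [1.0, 0.0] and oracle.queries == 2
```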
3 A Lower Bound for TIE Transformations for Social Welfare
In this section we consider the problem of maximizing social welfare. For this problem, BIC transformations that approximately preserve expected performance are known to exist. We prove that if we strengthen our solution concept to truthfulness in expectation and our performance metric to worst-case approximation, then such black-box transformations are not possible.
3.1 Problem definition and main theorem
The social welfare objective is defined as $\mu(x, v) = \sum_i v_i x_i$.
Our main result is that, for any TIE transformation $\mathcal{T}$, there is a problem instance and algorithm $\mathcal{A}$ such that $\mathcal{T}$ degrades the worst-case performance of $\mathcal{A}$ by a polynomially large factor.
Theorem 3.1.
There is a constant $c > 0$ such that, for any poly-time TIE transformation $\mathcal{T}$, there is an algorithm $\mathcal{A}$ and problem instance such that $\mathrm{ratio}(\mathcal{T}^{\mathcal{A}}) \geq \Omega(n^c) \cdot \mathrm{ratio}(\mathcal{A})$.
The high-level idea behind our proof of Theorem 3.1 is as follows. We will construct an algorithm and input vectors and such that, for each agent in some large subset of the players, but . This does not immediately imply that is nontruthful, but we will show that it does imply nontruthfulness under a certain feasibility condition , namely that any allocation is constant on the players with . Thus, any TIE transformation must alter the allocation of either on input or on input . However, we will craft our algorithm in such a way that, on input , the only allocations that the transformation will observe given polynomially many queries of will be , plus allocations that have significantly worse social welfare than , with high probability. Similarly, on input , with high probability the transformation will only observe allocation plus allocations that have significantly worse social welfare than . Furthermore, we ensure that the transformation cannot even find the magnitude of the allocation to players in when presented with input , thereby preventing the transformation from randomizing between the high allocation of and an essentially empty allocation to simulate the allocation without directly observing it. Instead, in order to guarantee that it generates a TIE allocation rule, the transformation will be forced to assume the worst case and offer players the smallest possible allocation on input . This significantly worsens the worst-case performance of the algorithm .
3.2 Construction
In the instances we consider, each private value is chosen from , where is a parameter that we set below. That is, we will set for all . We can therefore interpret an input vector as a subset , corresponding to those agents with value (the remaining agents have value ). Accordingly we define , , etc., for a given subset . Also, for and , we will write for the allocation in which each agent is allocated , and each agent is allocated .
Feasible Allocations.
We now define a family of feasibility constraints. Roughly speaking, we will choose with and sets of agents. The feasible allocations will be , , and . That is, we can allocate to every agent, to all agents in , or to all agents in . We will also require that and satisfy certain properties, which essentially state that and are sufficiently large and have a sufficiently large intersection.
More formally, define parameters , , , and (which we will fix later to be functions of ), such that , , and . We think of as a bound on the size of “small” sets, and we think of as a ratio between the sizes of “small” and “large” sets.
Suppose that , , and are subsets of . We say that the triple , , is admissible if four conditions hold; in terms of the parameters above, these state that the sets are sufficiently large and that their pairwise intersections are sufficiently large.
In general, for a given admissible , , and , we will tend to write for notational convenience. See Figure 1(a) for an illustration of the relationship between the sets in an admissible triple. In order to hide the feasibility constraint from the transformation, we will pick the sets , , and uniformly at random from all admissible triples, and the value from an appropriate range. For each admissible tuple , , , and value , we define a corresponding feasibility constraint
Note that does not depend on ; we include set purely for notational convenience. We remark that all of the feasible allocations allocate the same amount to agents in .
Recall that agents have values chosen from . We will choose , where we recall that our parameters have been chosen so that .
The Algorithm.
We now define the algorithm corresponding to an admissible tuple and value . We think of as an approximation algorithm for the social welfare problem and later show that there is no TIE transformation of without a significant loss in worstcase approximation for some value of .
Given , we define
and
That is, is the number of elements of that lie in , with elements of counted twice. Likewise, is the number of elements of that lie in , with elements of counted thrice.
The algorithm is then described as Algorithm 1.
3.3 Analysis
In this section, we derive the key lemmas for the proof of Theorem 3.1. First, we bound the approximation factor of algorithm for problem .
Lemma 3.2.
.
Proof.
Choose and consider the three cases for the output of .
Case 1: , , and . Our algorithm returns allocation and obtains welfare at least . Note that
and
The allocation obtains welfare at most . Note that here we used , which follows since .
The allocation obtains welfare at most . So we obtain at least a approximation in this case.
Case 2: , , and . Our algorithm returns allocation and obtains welfare at least . The same argument as case 1 shows that our approximation factor is at least in this case.
Case 3: and . Our algorithm returns allocation for a welfare of at least . The allocation obtains welfare at most , and allocation obtains welfare at most . So our approximation factor is at least in this case.
Case 4: and . Our algorithm returns allocation for a welfare of at least . The allocation obtains welfare at most , and allocation obtains welfare at most . So our approximation factor is at least in this case. ∎
Suppose now that is any algorithm for problem that is TIE. We will show that is then very restricted in the allocations it can return on inputs and . Furthermore, we note that if has a good enough approximation ratio, then its allocations on inputs and are restricted further still. In particular, the optimal allocation on both and is ; so to obtain a good approximation factor, on both and , the algorithm should allocate a large enough amount to agents in . As any TIE transformation of is itself an algorithm for problem , these observations will later play a key role in our impossibility result.
Claim 3.3.
Suppose is a truthfulinexpectation algorithm for problem . Then the expected allocation to each agent in must be at least as large in as in .
Proof.
Take any set with , . Then, on input , the expected allocation to the agent in must not decrease. Since all allocations are constant on , this means that the expected allocation to each agent in must not decrease. By the same argument, returns an allocation at least this large for all such that , and in particular for . ∎
In light of these claims, our strategy for proving Theorem 3.1 will be to show that a poly-time transformation is unlikely to encounter the allocation during its sampling when the input is , given that the sets , , and are chosen uniformly at random over all admissible tuples. This means the transformation will be unable to learn the value of . This is key since it prevents the transformation from using the value of to appropriately randomize between the allocation of and the essentially empty allocation of to achieve an effective allocation of for agents in on input , thereby satisfying the conditions of Claim 3.3. Similarly, a transformation is unlikely to encounter the allocation during its sampling on input , and therefore cannot satisfy Claim 3.3 by allocating to agents in on input .
Lemma 3.4.
Fix and satisfying the requirements of admissibility. Then for any , , with probability taken over all choices of that are admissible given and .
Proof.
Fix any . Write , , and . Note that . Define the random variables and by and .
The event occurs precisely if the following are true:
(1) 
(2) 
(3) 
We will show that the probability of these three inequalities being true is exponentially small. To see this, note that (3) implies that . Thus, (1) implies that , and hence . Now each element of counted in will count toward with probability , and each element of counted in will count toward with probability . Since , Chernoff bounds imply that with probability at least , we will have . Then
contradicting (2). ∎
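The concentration step can be sanity-checked numerically. The snippet below (illustrative Python with generic parameters of our own choosing, not those of the construction) compares the empirical deviation probability of a binomial count against the multiplicative Chernoff bound $\Pr[|X - \mu| \geq \delta\mu] \leq 2\exp(-\delta^2\mu/3)$:

```python
import math
import random

random.seed(0)
n, p, delta, trials = 400, 0.5, 0.25, 2000
mean = n * p

# Empirical frequency of a relative deviation larger than delta.
deviations = 0
for _ in range(trials):
    count = sum(random.random() < p for _ in range(n))
    if abs(count - mean) > delta * mean:
        deviations += 1
empirical = deviations / trials

# Multiplicative Chernoff bound: P(|X - mu| >= delta * mu) <= 2 exp(-delta^2 mu / 3).
chernoff = 2 * math.exp(-delta ** 2 * mean / 3)
assert empirical <= chernoff  # bound of roughly 0.03 holds with room to spare
```

The bound decays exponentially in the mean, which is what makes a union bound over polynomially many queries affordable in proofs like the one above.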
Lemma 3.5.
Fix and satisfying the requirements of admissibility. Then for any , , with probability taken over all choices of and that are admissible given and .
Proof.
Fix any . Write , , and . Note that . Define the random variables and by and .
The event occurs precisely if the following are true:
(4) 
(5) 
(6) 
We will show that the probability of these three inequalities being true is exponentially small. To see this, first note that we can assume , as this only loosens the requirements of the inequalities. We then have that (6) implies . Thus, (4) implies that , and hence . Now each element of counted in will count toward with probability , and each element of counted in will count toward with probability . Since , Chernoff bounds imply that with probability at least , we will have . Then
contradicting (5). ∎
3.4 Proof of Main Theorem
We can now set our parameters , , , and . We will choose , , and . The values of we will be considering are and . Note that for each choice of . Note also that .
Our idea now for proving Theorem 3.1 is that since the transformation cannot determine the value of on input (by Lemma 3.4), and since it cannot find the “good” allocation of on input (by Lemma 3.5), it must be pessimistic and allocate the minimum possible value of to agents in on input in order to guarantee that the resulting allocation rule is TIE (by Claim 3.3). This implies a bad approximation on input and hence a bad worst-case approximation.
Proof of Theorem 3.1: For each admissible and , write for . Lemma 3.5 implies that, with all but exponentially small probability, will not encounter allocation on input . Thus, on input , it can allocate at most to each agent in in expectation (using the fact that ). Then, since is incentive compatible, Claim 3.3 implies that must allocate at most to each agent in on input .
Now Lemma 3.4 implies that, with all but exponentially small probability, will not encounter allocation on input , and thus is unaware of the value of on input . Thus, to ensure incentive compatibility, must allocate at most to each agent in . It therefore obtains a welfare of , whereas a total of is possible with allocation . Thus has a worst-case approximation of , whereas has an approximation factor of . ∎
We conclude with a remark about extending our impossibility result to TIE transformations under the weaker goal of preserving the expected social welfare under a given distribution . We would like to prove that, when agents’ values are drawn according to a distribution , any TIE transformation necessarily degrades the average welfare of some algorithm by a large factor. The difficulty with extending our techniques to this setting is that a transformation may use the distribution to “guess” the relevant sets and (i.e. if the distribution is concentrated around the sets and in our construction). One might hope to overcome this difficulty in our construction by hiding a “true” set (that generates a nonmonotonicity) in a large sea of sets that could potentially take the role of . Then, if the transformation is unlikely to find a good allocation on input , and unlikely to determine the value of on any of these potential sets, and is further unable to determine which set is the “true” , then it must be pessimistic and allocate the minimum potential value of on any of these potential sets in order to guarantee truthfulness. Unfortunately, our construction assumes that all allocations are constant on , and this makes it difficult to hide a set while simultaneously making it difficult to discover a good allocation on input . We feel that, in order to make progress on this interesting open question, it is necessary to remove the assumption that all allocations are constant on which, in turn, seems to make it much more difficult to derive necessary conditions for a transformation to be TIE.
4 An Impossibility Result for Makespan
We now consider an objective function, namely makespan, that differs from the social welfare objective in that it is not linear in agent values or allocations. Informally, we show that black-box reductions for approximation algorithms for makespan are not possible even if we relax the notion of truthfulness to Bayesian incentive compatibility and relax the measure of performance to expected makespan, where both notions are defined with respect to a fixed and known distribution over values. As in the previous section, our impossibility result hinges on the fact that the transformation is not aware of the feasibility constraint that an allocation must satisfy, and can learn this constraint only by querying the algorithm at different inputs.
4.1 Problem definition and main theorem
We consider the following minimization problem in a Bayesian setting. In this problem, $n$ selfish machines (a.k.a. agents) are allocated jobs. Each agent $i$ has a private value $v_i$ representing its speed. If machine $i$ is allocated jobs with total length $\ell_i$, the load of machine $i$ is $\ell_i / v_i$. The makespan of an allocation to machines with speeds $v_1, \ldots, v_n$ is the maximum load of any machine, $\max_i \ell_i / v_i$.
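To make the definition concrete, here is a minimal sketch of the makespan computation (the helper name `makespan` and the list-based encoding are our own illustrative assumptions, not the paper's notation):

```python
def makespan(job_lengths, assignment, speeds):
    """Makespan of an assignment of jobs to machines.

    job_lengths: list of job lengths
    assignment: assignment[j] = index of the machine receiving job j
    speeds: speeds[i] = speed of machine i

    The load of machine i is (total length assigned to i) / speeds[i];
    the makespan is the maximum load over all machines.
    """
    loads = [0.0] * len(speeds)
    for j, length in enumerate(job_lengths):
        loads[assignment[j]] += length
    return max(load / speed for load, speed in zip(loads, speeds))
```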
An instance of the (Bayesian) makespan problem is given by a feasibility constraint on allocations and a distribution over value vectors; the goal is to map every value vector to a feasible allocation so as to minimize the expected makespan, where the expectation is over value vectors drawn from the distribution.
Given an algorithm for this problem, its expected makespan is measured with respect to this distribution.
Our main result is the following:
Theorem 4.1.
Let the number of agents be large enough, and let the transformation be any black-box BIC transformation that makes at most polynomially many black-box queries to the given algorithm on each input. There exists an instance of the makespan problem and a deterministic algorithm such that either the transformed mechanism returns an infeasible allocation with positive probability, or its expected makespan is polynomially larger than that of the algorithm. Here the value distribution is the uniform product distribution over two possible speeds per agent, for an appropriate choice of parameters, and it is known to the transformation.
We note that the algorithm in the statement of Theorem 4.1 is deterministic. A BIC transformation must therefore degrade the makespan of some algorithms by a polynomially large factor even when we limit ourselves to deterministic algorithms. For simplicity of exposition we prove a fixed polynomial gap; however, our construction can be tweaked to obtain a larger polynomial gap.
Problem Instance.
We now describe the problem instance in more detail. The construction is governed by a parameter to be determined later. As mentioned earlier, the value distribution is a uniform product distribution: every value (i.e. speed) independently takes one of two fixed values with equal probability. There are jobs of two lengths, short and long. Our feasibility constraint will have the property that each machine can be assigned at most one job, so a valid allocation gives each machine either no job, a short job, or a long job.^{1} 1 A makespan assignment must allocate each job to a machine, but we will sometimes wish to specify an allocation in which not all jobs are allocated. For ease of exposition, we will therefore assume that there is an extra agent with a very high speed; this agent is always allocated all jobs not allocated to any other agent. Note that the load of this machine is always small.
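As a rough illustration of the sampling and feasibility structure described above (the concrete speed values `low`/`high` and the `None`-for-the-sink encoding are placeholder assumptions; the paper's exact parameters are omitted here):

```python
import random

def sample_speeds(n, low=1, high=2, rng=random):
    # Each machine's speed is one of two values, chosen independently and
    # uniformly (low/high stand in for the paper's actual parameters).
    return [rng.choice([low, high]) for _ in range(n)]

def is_valid(assignment, n):
    # Feasibility sketch: each of the n real machines receives at most one
    # job; jobs marked None implicitly go to the extra high-speed sink machine.
    seen = set()
    for machine in assignment:
        if machine is None:
            continue
        if machine >= n or machine in seen:
            return False
        seen.add(machine)
    return True
```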
Of all such allocations, all but one will be feasible. This single infeasible allocation is thought of as a parameter of the problem instance: given a candidate allocation, we write the corresponding problem instance as the one in which that allocation is infeasible. We refer to it as the forbidden allocation in the remainder of this section.
The algorithm.
We first describe a randomized algorithm (Algorithm 2 below), which we think of as an approximation algorithm for the problem instance.
We begin by noting that this algorithm must terminate.
Claim 4.2.
For every input, the algorithm terminates with probability 1.
Proof.
This follows from noting that at least two distinct allocations can be chosen on each branch of the conditional in Algorithm 2, so the algorithm must eventually choose an allocation that is not the forbidden one. ∎
We now use this randomized algorithm to define a set of deterministic algorithms.^{2} 2 More precisely, we will define a set of deterministic allocation rules that map type profiles to allocations; in particular, we will not be concerned with implementations of these allocation rules. Let this set consist of the deterministic algorithms in the support of the randomized algorithm: each member, on every input, returns an allocation that the randomized algorithm returns on that input with positive probability. Moreover, for every combination of allocations that can be returned on each input profile, there is a corresponding deterministic algorithm in the set.
For any value vector, consider the set of high-speed agents. We define a concentration event: the event, over the randomness in the value vector, that the number of high-speed agents is close to its expectation. We note the following immediate consequence of Chernoff bounds.
Observation 4.3.
The concentration event fails to occur with probability exponentially small in the number of agents. This allows us to bound the expected makespan of each deterministic algorithm in the set.
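Observation 4.3 is a standard Chernoff-bound statement; the following small simulation (all names and parameters are illustrative, not from the paper) shows empirically how quickly the deviation probability vanishes:

```python
import random

def deviation_probability(n, slack, trials=20000, seed=0):
    # Estimate the probability that the number of high-speed machines
    # (each machine is high-speed independently with probability 1/2)
    # deviates from its mean n/2 by more than `slack`.
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        high = sum(rng.randrange(2) for _ in range(n))
        if abs(high - n / 2) > slack:
            bad += 1
    return bad / trials
```

Chernoff bounds guarantee that this probability is exponentially small in slack²/n.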
Lemma 4.4.
For each deterministic algorithm in the set, the expected makespan is small, where the expectation is taken over the value distribution.
Proof.
If the concentration event occurs, then the algorithm returns an allocation in which each high-speed agent receives a bounded load. Since each low-speed agent also receives a bounded load, we conclude that in this case the makespan is small. Otherwise, the makespan is trivially bounded by the total length of all jobs divided by the slowest speed. Since Observation 4.3 implies that this latter case occurs with exponentially small probability, the result follows. ∎
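The proof of Lemma 4.4 is a standard conditioning argument; the following trivial helper (our own name and encoding) captures the arithmetic:

```python
def expected_upper_bound(good_case_bound, worst_case_bound, p_bad):
    # Conditioning as in Lemma 4.4: E[makespan] is at most the good-case
    # bound plus the worst-case bound weighted by the failure probability,
    # E[M] <= good * P(good) + worst * P(bad) <= good + worst * P(bad).
    return good_case_bound + worst_case_bound * p_bad
```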
4.2 Transformation Analysis
We now present a proof of Theorem 4.1. Fix a BIC transformation that can make at most polynomially many black-box queries to an algorithm for makespan, and consider the mechanism induced when the transformation is given black-box access to a particular algorithm.
We first note that if the transformed mechanism returns only feasible allocations with probability 1, then it can only return an allocation that it observed during its black-box queries to the algorithm. This is true even if we consider only the deterministic algorithms defined above, under some choice of the forbidden allocation.
Claim 4.5.
Suppose that for some deterministic algorithm in our set, the transformed mechanism with positive probability returns an allocation not returned by any of its black-box queries. Then there exists an algorithm in the set for which the transformed mechanism returns an infeasible allocation with positive probability.
Proof.
Suppose that, with positive probability, the mechanism returns some allocation without encountering it in a black-box query to the given algorithm. Then there exists another deterministic algorithm in our set that agrees with the given one on each input queried in some execution where the mechanism returns that allocation, and that furthermore never returns that allocation on any input. Note, then, that the mechanism run with this second algorithm also returns the allocation with positive probability. But this allocation is never output by the second algorithm, so if we choose it as the forbidden allocation, we conclude that the mechanism returns the infeasible allocation with positive probability. ∎
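The argument in Claim 4.5 is about the transcript of black-box queries. The following sketch (class and method names are hypothetical, not from the paper) shows one way such a transcript could be recorded when auditing a transformation:

```python
class QueryRecorder:
    """Wraps an algorithm so that every black-box query made by a
    transformation is logged; Claim 4.5 says a feasibility-safe
    transformation may only output allocations appearing in this log."""

    def __init__(self, algorithm):
        self.algorithm = algorithm
        self.observed = []  # list of (values, allocation) pairs seen so far

    def query(self, values):
        allocation = self.algorithm(values)
        self.observed.append((tuple(values), tuple(allocation)))
        return allocation
```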
In the remainder of the analysis we assume that the mechanism only returns observed allocations. We will now think of the transformation as fixed, and of the algorithm as drawn uniformly at random from our set of deterministic algorithms. Consider the (bad) event, over the randomness in the values, the mechanism, and the choice of algorithm, that the mechanism returns an allocation with large makespan. Our goal is to show that if the mechanism is BIC for every algorithm in the set, then this bad event must have large probability; this will imply that the expected makespan of the mechanism is large for some algorithm.
Intuitively, the reason that an algorithm may fail to be truthful is that low-speed agents are often allocated jobs while high-speed agents are often allocated nothing. In order to fix such non-monotonicities, the mechanism must either increase the probability with which jobs are allocated to the high-speed agents, or increase the probability with which nothing is allocated to the low-speed agents. To this end, consider the event that, on a given input, the mechanism returns an allocation in which a large number of agents are low-speed yet receive no job.
As the following lemma shows, this event is unlikely to occur unless the bad event also occurs. Then, to fix the non-monotonicity while avoiding the bad event, the mechanism must rely on allocating more often to the high-speed agents. However, this would require the mechanism to query the algorithm on speed vectors that are near-complements of the input, which in turn implies a large enough probability of allocating jobs to low-speed agents, i.e. the bad event. We now make this intuition precise.
Lemma 4.6.
For each input, the probability that the above event occurs is at most the probability of the bad event plus an exponentially small term.
Proof.
Fix an input and suppose that the event occurs. Recall that the mechanism returns only allocations that the algorithm outputs on some query. We will call a query successful if it returns an allocation witnessing the event. Let us bound the probability that a single query is successful; the bound will depend on the number of high-speed agents in the queried vector.
First, suppose that the queried vector does not satisfy the concentration event; that is, the number of high-speed agents in it is far from its mean. Then each of the relevant agents has a constant probability of being allocated a job, taken over the random choice of the deterministic algorithm, and the probability that the query returns an allocation witnessing the event is exponentially small.
On the other hand, suppose that the queried vector does satisfy the concentration event. Then the event requires that many agents exhibit the relevant combination of speed and allocation; each such combination occurs with only constant probability under the random choice of the deterministic algorithm, and again the probability that a single query is successful is exponentially small.
In either case, the probability that a single query is successful is exponentially small. The transformation can make at most polynomially many queries on a given input; we will now bound the probability that any one of them is successful. First note that since the algorithm is deterministic, we may assume that the transformation does not query it more than once on the same input. Furthermore, we can think of the random choice of the deterministic algorithm as independently selecting its behaviour on each input profile, so that the allocations returned on different input profiles are independent with respect to this choice. We can therefore treat the queries as independent trials, each successful with exponentially small probability. By a union bound, the probability that at least one of these queries is successful is at most the number of queries times the per-query bound, which remains exponentially small, as required. ∎
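The final step above is a plain union bound: with q independent trials, each succeeding with probability at most p, some trial succeeds with probability at most q·p. A quick numeric check (the function name is ours):

```python
def at_least_one_success(q, p):
    # Exact probability that at least one of q independent trials succeeds,
    # each with success probability p, alongside the union-bound estimate q*p.
    exact = 1 - (1 - p) ** q
    return exact, q * p
```

For small p the union bound is nearly tight, which is why polynomially many queries cannot overcome an exponentially small per-query success probability.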
We now consider the specific probabilities with which the mechanism returns high or low allocations on high or low values. For each agent, value, and allocation, consider the probability that, conditioned on the agent having that value, the mechanism assigns that allocation to the agent; here the probability is over the values of the other agents, any randomness in the mechanism, and the random choice of the deterministic algorithm.
Observation 4.7.
For each agent and each value, these probabilities over the possible allocations sum to 1.
We can express the fact that the mechanism satisfies BIC in terms of conditions on these probabilities (Lemma 4.8 below): either the probability that low-speed agents get a low allocation should be large, or the probability that high-speed agents get a high allocation should be large. On the other hand, in Lemmas 4.9, 4.10, and 4.11 we show that, on average over all agents, each of these probabilities is small if the probability of the bad event is small. The proofs of these lemmas are deferred to the end of this section. In Lemma 4.12 we put these results together to argue that the bad event occurs with large probability.
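For orientation, the generic Bayesian incentive constraint for a machine reporting its speed can be written as follows (this is our own standard single-parameter notation, not the paper's):

```latex
\mathbb{E}_{v_{-i}}\!\left[p_i(v_i, v_{-i}) - \frac{\ell_i(v_i, v_{-i})}{v_i}\right]
\;\ge\;
\mathbb{E}_{v_{-i}}\!\left[p_i(v_i', v_{-i}) - \frac{\ell_i(v_i', v_{-i})}{v_i}\right]
\qquad \text{for all } v_i, v_i',
```

where $p_i$ is the interim payment, $\ell_i$ the assigned work, and the expectation is over the other agents' values and any randomness of the mechanism. With two possible speeds, this reduces to inequalities on the interim allocation probabilities of exactly the kind analyzed below.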
Lemma 4.8.
Consider the allocation rule of the mechanism. If, for some agent, the probability of receiving a low allocation on a low value is small, and the probabilities of receiving a high allocation on a high value are also small, then the BIC constraint for that agent is violated.
Proof.
The BIC constraint compares an agent's expected utility under its two possible values. Combining the assumed upper bounds on the allocation probabilities with this constraint yields a chain of inequalities that cannot all hold simultaneously, which implies the desired result. ∎
Lemma 4.9.