Who to Trust for Truthfully Maximizing Welfare?

Dimitris Fotakis, National Technical University of Athens, 157 80 Athens, Greece. fotakis@cs.ntua.gr
Christos Tzamos, Massachusetts Institute of Technology, Cambridge, MA 02139. tzamos@mit.edu
Emmanouil Zampetakis, Massachusetts Institute of Technology, Cambridge, MA 02139. mzampet@mit.edu
Abstract

We introduce a general approach based on selective verification and obtain approximate mechanisms without money for maximizing the social welfare in the general domain of utilitarian voting. Having a good allocation in mind, a mechanism with verification selects few critical agents and detects, using a verification oracle, whether they have reported truthfully. If yes, the mechanism produces the desired allocation. Otherwise, the mechanism ignores any misreports and proceeds with the remaining agents. We obtain randomized truthful (or almost truthful) mechanisms without money that verify only $O(\ln m/\epsilon)$ agents, where $m$ is the number of outcomes, independently of the total number of agents, and are $(1-\epsilon)$-approximate for the social welfare. We also show that any truthful mechanism with a constant approximation ratio needs to verify $\Omega(\log m)$ agents. A remarkable property of our mechanisms is robustness, namely that their outcome depends only on the reports of the truthful agents.

1 Introduction

Let us start with a simple mechanism design setting where we place a facility on the line based on the preferred locations of strategic agents. Each agent aims to minimize the distance of her preferred location to the facility and may misreport her location, if she finds it profitable. Our objective is to minimize the maximum distance of any agent to the facility, and we insist that the facility allocation should be truthful, i.e., that no agent can improve her distance by misreporting her location. The optimal solution is to place the facility at the average of the two extreme locations. However, if we cannot incentivize truthfulness through monetary transfers (e.g., due to ethical or practical reasons, see also [27]), the optimal solution is not truthful. E.g., the leftmost agent has an incentive to declare a location further to the left so that the facility moves closer to her preferred location. In fact, for the infinite real line, the optimal solution leads to no equilibrium declarations for the leftmost and the rightmost agent. The fact that, in this simple setting, the optimal solution is not truthful was part of the motivation for the research agenda of approximate mechanism design without money, introduced by Procaccia and Tennenholtz [27]. They proved that the best deterministic (resp. randomized) truthful mechanism achieves an approximation ratio of $2$ (resp. $3/2$) for this problem.

Our work is motivated by the simple observation that the optimal facility allocation can be implemented truthfully if we inspect the declared locations of the two extreme agents and verify that they coincide with their preferred locations (e.g., for their home address, we may mail something there, visit them or ask for a certificate). Inspection of the two extreme locations takes place before we place the facility. If both agents are truthful, we place the facility at their average. Otherwise, we ignore any false declarations and recurse on the remaining agents. This simple modification of the optimal solution is truthful, because non-extreme agents do not affect the facility allocation, while the two extreme agents cannot change the facility location in their favor, due to the verification step. Interestingly, the Greedy algorithm for $k$-Facility Location (see e.g., [30, Sec. 2.2]) also becomes truthful if we verify the agents allocated a facility and ignore any liars among them (see Section 4). Greedy is $2$-approximate for minimizing the maximum agent-facility distance, in any metric space, while [14] shows that there are no deterministic truthful mechanisms (without verification) that place multiple facilities in tree metrics and achieve a bounded (in terms of $n$ and $k$) approximation ratio.
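
For intuition, here is a minimal Python sketch of the average mechanism with selective verification just described. It is an illustration under our own conventions: agents report locations as a dict, and `is_truthful` stands in for the verification oracle.

```python
def average_mechanism(declared, is_truthful):
    """Place one facility on the line, verifying only the two extreme agents.

    declared    -- dict: agent id -> reported location on the line
    is_truthful -- verification oracle: agent id -> True iff the report is honest
    """
    agents = dict(declared)
    while agents:
        leftmost = min(agents, key=agents.get)
        rightmost = max(agents, key=agents.get)
        # Selective verification: inspect only the two critical agents.
        liars = [i for i in {leftmost, rightmost} if not is_truthful(i)]
        if not liars:
            # Optimal rule for the maximum cost: average of the two extremes.
            return (agents[leftmost] + agents[rightmost]) / 2.0
        # Ignore the revealed misreports and recurse on the remaining agents.
        for i in liars:
            del agents[i]
    return None  # no truthful agents remain
```

Non-extreme agents never affect the outcome, and an extreme agent who lies is simply removed, so no agent can move the facility in her favor.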

Selective Verification: Motivation and Justification. Verifying the declarations of most (or all) agents and imposing large penalties on liars should suffice for the truthful implementation of socially efficient solutions (see e.g., [6]). But in the facility location examples above, we truthfully implement the optimal (or an almost optimal) solution by verifying a very small number of agents (independent of $n$) and by using a mild and reasonable penalty. Apparently, verification is successful in these examples because it is selective, in the sense that we verify only the agents critical for the facility allocation and fully trust the remaining agents.

Motivated by this observation, we investigate the power of selective verification in approximate mechanism design without money in general domains. We consider the general setting of utilitarian voting with $m$ outcomes and $n$ strategic agents, where each agent has a nonnegative utility for each outcome. We aim at truthful mechanisms that verify few critical agents and approximate the maximum social welfare, i.e., the total utility of the agents for the selected outcome. Our goal is to determine the best approximation guarantee achievable by such mechanisms with limited selective verification, so that we obtain a better understanding of the power of limited verification in mechanism design without money. Our main result is a smooth and essentially best possible tradeoff between the approximation ratio and the number of agents verified by randomized truthful (or almost truthful) mechanisms with selective verification.

Our general approach is to start from a (non-truthful) allocation rule $f$ with a good approximation guarantee for the social welfare and to devise a mechanism $F$ without money that incentivizes truthful reporting by selective verification. The mechanism first selects an outcome $o$ and an (ideally small) verification set $S$ of agents according to $f$ (e.g., for facility location on the line, the allocation rule is to take the average of the two extreme locations, the selected outcome is the average for the particular instance and the verification set consists of the two extreme agents). Next, $F$ detects, through the use of a verification oracle, whether the selected agents are truthful. If yes, the mechanism outputs $o$. Otherwise, $F$ excludes any misreporting agents and continues with the remaining agents. We note that $F$ asks the verification oracle for a single bit of information about each agent verified: whether she has reported truthfully or not. $F$ excludes misreporting agents from the allocation rule, so it does not need to know anything else about their true utilities.

Instead of imposing some explicit (i.e., monetary) penalty on the agents caught lying by verification, the mechanism just ignores their reports, a reasonable reaction to their revealed attempt at manipulating the mechanism. We underline that liars still get utility from the selected outcome. It just happens that their preferences are not taken into account in the allocation. For these reasons, the penalty of exclusion from the mechanism is mild and compatible with the spirit of mechanisms without money.

Selective verification allows for an explicit quantification of the amount of verification and is applicable to essentially any domain. From a theoretical viewpoint, we believe that it can lead to a deep and delicate understanding of the power of limited verification in approximate mechanism design without money. From a practical viewpoint, the extent to which selective verification and the penalty of ignoring false declarations are natural very much depends on the particular domain / application. E.g., for applications of facility location, where utility is usually determined by the home address of each agent, public authorities have simple ways of verifying it. For instance, registration to a public service usually requires a certificate of address; failure to provide such a certificate usually implies that the application is ignored, with no penalties attached.

Technical Approach and Results. A (randomized) mechanism with selective verification is truthful (in expectation) if, no matter the reports of the other agents and whether they are truthful or not, truthful reporting maximizes the (expected) utility of each agent from the mechanism. Two nice features of our allocation rules (and mechanisms) are that they are strongly anonymous and scale invariant. The former means that the allocation only depends on the total agents' utility for each outcome (and not on each agent's contribution) and the latter means that multiplying all valuations by a positive factor does not change the allocation.

For mechanisms with selective verification, truthfulness is an immediate consequence of two natural (and desirable) properties: robustness and voluntary participation. Robustness is a strong property made possible by selective verification. A mechanism $F$ with verification is robust if $F$ completely ignores any misreports and the resulting probability distribution is determined by the reports of the truthful agents only. So, if $F$ is robust, no misreporting agent can change the resulting allocation whatsoever. We achieve robustness through obliviousness of $F$ to the declarations of misreporting agents not verified (see also [13, Sec. 5]). Specifically, a randomized mechanism $F$ is oblivious if the probability distribution of $F$ over the outcomes, conditional on the event that no misreporting agents are included in the verification set, is identical to the probability distribution of $F$ if all misreporting agents are excluded from the mechanism. By induction on the number of agents, we show that obliviousness is a sufficient condition for robustness (Lemma 3). To the best of our knowledge, this is the first time that robustness (or a similar) property is considered in mechanism design. We defer the discussion about robustness and its comparison to truthfulness to Section A.2.

Robustness leaves each agent with essentially two strategies: either she reports truthfully and participates in the mechanism, or she lies and is excluded from the mechanism. An allocation rule satisfies voluntary participation (or simply, participation) if each agent's utility when she is truthful is no less than her utility when she is excluded from the mechanism. Robustness and participation immediately imply truthfulness (Lemma 4). (The reader is invited to verify that the average mechanism for facility location on the line is scale invariant, not strongly anonymous, oblivious (and thus, robust) and satisfies participation; robustness and participation then imply that the mechanism is truthful.) We prove that strongly anonymous randomized allocation rules that satisfy participation are closely related to maximal in distributional range rules (see e.g., [9, 21]), i.e., allocation rules that maximize the expected social welfare over a (not necessarily proper) subset of probability distributions over outcomes. Specifically, we show that maximizing the social welfare is sufficient for participation (Lemma 1), while for scale invariant and continuous allocation rules, it is also necessary (Lemma 2).

As a proof of concept, we apply selective verification to $k$-Facility Location problems (Section 4), which have served as benchmarks in approximate mechanism design without money (see e.g., [27, 1, 22, 12] and the references therein). We show that Greedy ([30, Section 2.2]) and Proportional [22] satisfy participation and are robust and truthful, if we verify the agents allocated the facilities (Theorems 4.1 and 4.2).

For the general setting of utilitarian voting, we aim at strongly anonymous randomized allocation rules that are maximal in distributional range, so that they satisfy participation, and oblivious, so that they achieve robustness. In Section 5, we present the Power mechanism, which selects each outcome $o$ with probability proportional to the $k$-th power of the total utility for $o$, where $k \geq 1$ is a parameter. Intuitively, Power provides a smooth transition from the (robust and truthful) uniform allocation, where each outcome is selected with probability $1/m$, for $k = 0$, to the optimal solution, for $k \to \infty$. Power approximately maximizes the social welfare and approximately satisfies participation. It is also scale invariant and, due to the proportional nature of its probability distribution, is oblivious and robust. Power can be implemented with selective verification of at most $k$ agents. Using $k = \lceil \ln m/\epsilon \rceil$, we obtain that for any $\epsilon \in (0,1)$, Power with selective verification of $O(\ln m/\epsilon)$ agents is robust, $\epsilon$-truthful and $(1-\epsilon)$-approximate for the social welfare (Theorem 5.1).

To quantify the improvement, we show that without verification, in the general setting of utilitarian voting, the best possible approximation ratio of any randomized truthful mechanism is $\Theta(1/m)$ (see Section A.11). In a slightly more restricted setting with injective valuations [10], the best known randomized truthful mechanism has an approximation ratio of $\Omega(m^{-3/4})$ and the best possible approximation ratio is $O(m^{-2/3})$. Moreover, the amount of verification is essentially best possible, since we prove that any truthful mechanism with a constant approximation ratio needs to verify $\Omega(\log m)$ agents (Theorem 6.1). We essentially match this lower bound, which applies to all mechanisms, by strongly anonymous and scale invariant mechanisms.

In Section 7, we characterize the class of scale invariant and strongly anonymous truthful mechanisms that verify $o(n)$ agents and achieve full allocation, i.e., they result in some outcome with probability $1$. We prove that any such mechanism must employ a constant allocation rule, i.e., a probability distribution that does not depend on the agent declarations. Therefore, such mechanisms cannot achieve nontrivial approximation guarantees. Our characterization reveals an interesting and deep connection between continuity (which is necessary for low verification), full allocation, and maximal in distributional range mechanisms.

Relaxing some of the properties in the characterization, we can obtain (fully) truthful mechanisms with low verification. Relaxing full allocation, we obtain the Partial Power mechanism (Section 8), and relaxing scale invariance, we obtain the Exponential mechanism (Section 9). Both are truthful and robust. For any $\epsilon \in (0,1)$, Partial Power verifies, in the worst case, a number of agents that depends only on $m$ and $1/\epsilon$ and is $(1-\epsilon)$-approximate, while Exponential, with parameter $\lambda > 0$, has an additive error of $(\ln m)/\lambda$ and expected verification $\lambda$ times the optimal welfare. For Exponential, we can also obtain an approximation ratio of $1-\epsilon$ with expected verification $O(\ln m/\epsilon)$, given a constant factor estimation of the maximum social welfare. All the mechanisms can be implemented in polynomial or expected polynomial time in $n$ and $m$.

Power: $\epsilon$-truthful (*) | full allocation | scale invariant | robust | $(1-\epsilon)$-approximation | $O(\ln m/\epsilon)$ verification
Partial Power: truthful | partial allocation (*) | scale invariant | robust | $(1-\epsilon)$-approximation | worst-case verification independent of $n$
Exponential: truthful | full allocation | not scale invariant (*) | robust | additive error $(\ln m)/\lambda$ | expected verification $\lambda \max_o w_o$

Figure 1: The main properties of our mechanisms. Partial allocation means that the mechanism may result in an artificial outcome of valuation $0$ for all agents (e.g., we may refuse to allocate anything, for private goods, or to set up the service, for public goods). We mark with (*) the property whose relaxation allows each mechanism to escape the characterization of Theorem 7.1.

The properties of our mechanisms are summarized in Fig. 1. In all cases, we achieve a smooth tradeoff between the number of agents verified and the quality of approximation. Rather surprisingly, the verification depends on $m$, the number of outcomes, but not on $n$, the number of agents. Also, we discuss (Section A.3) an application to the Combinatorial Public Project problem (see e.g., [28, 24]).

Related Work. Due to lack of space, we restrict our attention to the most relevant previous work (see also Section A.1). Previous work [2, 6, 15] demonstrated that partial verification is essentially useless in the design of truthful mechanisms. Therefore, verification should be exact, i.e., it should forbid even negligible deviations from the truth, at least for some types of misreports. Thus, recent research has focused on the power of exact verification schemes that use either limited or costly verification and mild penalties.

In this direction, [6] introduces probabilistic verification as a general framework for the use of verification in mechanism design. They show that almost any allocation rule can be implemented with a truthful mechanism with money and probabilistic verification, provided that (i) the detection probability is positive for all agents and for negligible deviations from the truth; and that (ii) each liar incurs a sufficiently large penalty. Here, we instead use selective verification and the reasonable penalty of ignoring misreports, and we verify only a small subset of agents instead of almost all of them.

Our approach of selective verification is conceptually similar to the setting of [5], which considers truthful allocation of an indivisible good without money and with costly selective verification and seeks to maximize the social welfare minus the verification cost. Nevertheless, our setting and our mechanisms are much more general; we resort to approximate mechanisms (rather than exact ones) and treat the verification cost as a different efficiency criterion (instead of incorporating it in the social objective). Moreover, selective verification bears some resemblance to [17], which considers truthful mechanisms with money for single-unit and multi-unit auctions and aims at a good tradeoff between the social welfare and the payments charged.

There is a significant amount of work on mechanism design with verification where either the structure of the optimal mechanism is characterized (see e.g., [29]), or mechanisms with money and verification are shown to achieve better approximation guarantees than mechanisms without verification (see e.g., [4, 20]). To the best of our knowledge, our work is the first where truthful mechanisms with limited selective verification (instead of partial or “one-sided” verification applied to all agents with positive utility) are shown to achieve best possible approximation guarantees for the general domain of utilitarian voting.

From a technical viewpoint, the idea of partial allocation in approximate mechanism design without money has been employed with remarkable success in [7]. However, this technique can achieve only very restricted results for maximizing the social welfare without verification in a setting as general as utilitarian voting (see Section A.11). Moreover, our motivation for using the exponential mechanism with selective verification came from [23, 18], due to their tradeoffs between the approximation guarantee and the probability of invoking the gap mechanism (resp. the amount of payments) required for truthfulness.

2 Notation and Preliminaries

For any integer $k \geq 1$, we let $[k] = \{1, \ldots, k\}$. For an event $E$, $\Pr[E]$ denotes the probability of $E$. For a random variable $X$, $\mathbb{E}[X]$ denotes the expectation of $X$. For a finite set $O$, $\Delta(O)$ is the unit simplex over $O$, which includes all probability distributions over $O$. For a vector $\boldsymbol{x} = (x_1, \ldots, x_n)$ and some $i \in [n]$, $\boldsymbol{x}_{-i}$ is $\boldsymbol{x}$ without $x_i$. For a nonempty $S \subseteq [n]$, $\boldsymbol{x}_S$ is the projection of $\boldsymbol{x}$ to $S$. For vectors $\boldsymbol{x}$ and $\boldsymbol{y}$, $\boldsymbol{x} + \boldsymbol{y}$ denotes their coordinate-wise sum. For a vector $\boldsymbol{x}$ and an integer $k \geq 1$, $\boldsymbol{x}^k$ is the coordinate-wise $k$-th power of $\boldsymbol{x}$ and $\|\boldsymbol{x}\|_k = (\sum_i x_i^k)^{1/k}$ is the $k$-norm of $\boldsymbol{x}$. For convenience, we let $\|\boldsymbol{x}\|_k^k = \sum_i x_i^k$. Moreover, $\|\boldsymbol{x}\|_\infty = \max_i |x_i|$ is the infinity norm of $\boldsymbol{x}$.

Agent Valuations. We consider a set $N$ of $n$ strategic agents with private preferences over a set $O$ of outcomes. We focus on combinatorial problems, assume that $O$ is finite and let $m = |O|$ be the number of different outcomes. The preferences of each agent $i$ are given by a valuation function or type $v_i$ that $i$ seeks to maximize. The set of possible valuations is the domain $V$. We usually regard each valuation as a vector $\boldsymbol{v}_i \in \mathbb{R}_{\geq 0}^m$, where $v_{i,o}$ is $i$'s valuation for outcome $o$. A valuation profile is a tuple $\boldsymbol{v} = (\boldsymbol{v}_1, \ldots, \boldsymbol{v}_n)$ consisting of the agents' valuations. Given a valuation profile $\boldsymbol{v}$, $\boldsymbol{w}(\boldsymbol{v}) = \sum_{i \in N} \boldsymbol{v}_i$ is the vector of the total valuation, or simply, of the weight, for each outcome. We usually write $\boldsymbol{w}$, instead of $\boldsymbol{w}(\boldsymbol{v})$, when $\boldsymbol{v}$ is clear from the context.

Allocation Rules. A (randomized) allocation rule $f$ maps each valuation profile $\boldsymbol{v}$ to a probability distribution over $O$. To allow for exclusion of some agents from $f$, we always assume that $f$ is well defined for any number of agents $n' \leq n$. We regard the probability distribution of $f$ on input $\boldsymbol{v}$ as a vector $f(\boldsymbol{v}) = (f_o(\boldsymbol{v}))_{o \in O}$, where $f_o(\boldsymbol{v})$ is the probability of outcome $o$. Then, the expected utility of agent $i$ from $f$ is equal to the dot product $\boldsymbol{v}_i \cdot f(\boldsymbol{v})$. An allocation rule is constant if for all valuation profiles $\boldsymbol{v}$ and $\boldsymbol{v}'$, $f(\boldsymbol{v}) = f(\boldsymbol{v}')$, i.e., the probability distribution of $f$ is independent of the valuation profile. E.g., the uniform allocation rule, which selects each outcome with probability $1/m$, is constant.
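
For concreteness, the vector conventions above translate directly into code; this tiny helper (our own, matching the dict-of-lists representation used in the code sketches throughout the paper) computes an agent's expected utility under an allocation:

```python
def expected_utility(valuation, allocation):
    """Dot product v_i . f(v); both arguments are length-m lists."""
    return sum(v * p for v, p in zip(valuation, allocation))
```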

A rule achieves full allocation if for all $\boldsymbol{v}$, $\sum_{o \in O} f_o(\boldsymbol{v}) = 1$, and partial allocation if $\sum_{o \in O} f_o(\boldsymbol{v}) < 1$, for some $\boldsymbol{v}$. A full allocation rule always outputs an outcome $o \in O$, while a partial allocation rule may also output an artificial (or null) outcome not in $O$. We assume that all agents have valuation $0$ for the null outcome.

Two nice properties of our allocation rules are that they are strongly anonymous and (most of them) scale invariant. An allocation rule $f$ is scale invariant if for any valuation profile $\boldsymbol{v}$ and any $\lambda > 0$, $f(\lambda \boldsymbol{v}) = f(\boldsymbol{v})$, i.e., scaling all valuations in $\boldsymbol{v}$ by $\lambda$ does not change the allocation. An allocation rule $f$ is strongly anonymous if $f(\boldsymbol{v})$ depends only on the vector $\boldsymbol{w}$ of outcome weights. Formally, for all valuation profiles $\boldsymbol{v}$ and $\boldsymbol{v}'$ (possibly with a different number of agents) with $\boldsymbol{w}(\boldsymbol{v}) = \boldsymbol{w}(\boldsymbol{v}')$, $f(\boldsymbol{v}) = f(\boldsymbol{v}')$. Hence, a strongly anonymous rule can be regarded as a one-agent allocation rule $f(\boldsymbol{w})$.

Approximation Guarantee. The social efficiency of an allocation rule is evaluated by a social objective function. We mostly consider the objective of social welfare, where we seek to maximize $\sum_{i \in N} \boldsymbol{v}_i \cdot f(\boldsymbol{v}) = \boldsymbol{w} \cdot f(\boldsymbol{v})$. The optimal social welfare of a valuation profile $\boldsymbol{v}$ is $\max_{o \in O} w_o = \|\boldsymbol{w}\|_\infty$. An allocation rule $f$ has approximation ratio $\rho \in (0,1]$ (resp. additive error $\alpha$) if for all valuation profiles $\boldsymbol{v}$, $\boldsymbol{w} \cdot f(\boldsymbol{v}) \geq \rho\, \|\boldsymbol{w}\|_\infty$ (resp. $\boldsymbol{w} \cdot f(\boldsymbol{v}) \geq \|\boldsymbol{w}\|_\infty - \alpha$).

Voluntary Participation and MIDR. An allocation rule $f$ satisfies voluntary participation (or simply, participation) if for any agent $i$ and any valuation profile $\boldsymbol{v}$, $\boldsymbol{v}_i \cdot f(\boldsymbol{v}) \geq \boldsymbol{v}_i \cdot f(\boldsymbol{v}_{-i})$, i.e., $i$'s utility does not decrease if she participates in the mechanism. For some $\epsilon \in [0,1)$, $f$ satisfies $\epsilon$-participation if for all agents $i$ and valuation profiles $\boldsymbol{v}$, $\boldsymbol{v}_i \cdot f(\boldsymbol{v}) \geq (1-\epsilon)\, \boldsymbol{v}_i \cdot f(\boldsymbol{v}_{-i})$. An allocation rule $f$ is maximal in distributional range (MIDR) if there exist a range $\mathcal{R}$ of (possibly partial) allocations and a function $c : \mathcal{R} \to \mathbb{R}$ such that for all valuation profiles $\boldsymbol{v}$, $f(\boldsymbol{v}) \in \arg\max_{\boldsymbol{x} \in \mathcal{R}} \{ \boldsymbol{w}(\boldsymbol{v}) \cdot \boldsymbol{x} + c(\boldsymbol{x}) \}$ (see e.g., [9, 21]). The following lemmas show that MIDR is a sufficient condition for participation and that, for scale invariant and strongly anonymous continuous allocation rules, it is also necessary (the proofs can be found in Section A.4 and Section A.5).

Lemma 1

Let $f$ be any MIDR allocation rule. Then, $f$ satisfies participation.

Lemma 2

For any scale invariant and strongly anonymous continuous allocation rule $f$ that satisfies participation, there is a range $\mathcal{R}$ of (possibly partial) allocations such that $f(\boldsymbol{w}) \in \arg\max_{\boldsymbol{x} \in \mathcal{R}} \boldsymbol{w} \cdot \boldsymbol{x}$.

3 Mechanisms with Selective Verification and Basic Properties

A mechanism $F$ with selective verification takes as input a reported valuation profile $\boldsymbol{v}$ and has oracle access to a binary verification vector $\boldsymbol{t} \in \{0,1\}^n$, with $t_i = 1$ if agent $i$ has truthfully reported $\boldsymbol{v}_i$, and $t_i = 0$ otherwise. In fact, we assume that $F$ verifies an agent $i$ through a verification oracle that on input $i$, returns $t_i$. So, we regard a mechanism with verification as a function $F(\boldsymbol{v}, \boldsymbol{t})$. We highlight that although the entire vector $\boldsymbol{t}$ appears as a parameter of $F$, for notational convenience, the outcome of $F$ actually depends on few selected coordinates of $\boldsymbol{t}$. We denote $S_F(\boldsymbol{v})$, or simply $S$, the set of agents verified by $F$ on input $\boldsymbol{v}$. As for allocation rules, we treat the probability distribution of $F$ over outcomes as an $m$-dimensional vector and assume that $F$ is well defined for any number of agents $n' \leq n$.

Our approach is to start from an allocation rule $f$ and to devise a mechanism $F$ that motivates truthful reporting by selective verification. We say that a mechanism $F$ with selective verification is recursive if there is an allocation rule $f$ such that $F$ operates as follows: on a valuation profile $\boldsymbol{v}$, $F$ selects an outcome $o$, with probability $f_o(\boldsymbol{v})$, and a verification set $S$, and computes the set $L$ of misreporting agents in $S$. If $L = \emptyset$, $F$ returns $o$. Otherwise, $F$ recurses on $\boldsymbol{v}_{-L}$. Our mechanisms are recursive, except for Partial Power (Section 8), which adopts a slightly different reaction to a nonempty $L$.
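
In code, the recursive template just described might look as follows; this is a sketch under our own conventions, where `select` stands for the allocation rule's joint draw of an outcome and a verification set.

```python
def recursive_mechanism(profile, select, is_truthful):
    """Generic recursive mechanism with selective verification.

    profile     -- dict: agent id -> valuation vector over the m outcomes
    select      -- allocation rule: profile -> (outcome, agents_to_verify),
                   with the outcome drawn according to the rule's distribution
    is_truthful -- verification oracle: agent id -> True iff the report is honest
    """
    while profile:
        outcome, to_verify = select(profile)
        liars = {i for i in to_verify if not is_truthful(i)}
        if not liars:
            return outcome
        # Ignore the misreports and rerun the rule on the remaining agents.
        profile = {i: v for i, v in profile.items() if i not in liars}
    return None
```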

Given an allocation rule $f$, we say that a mechanism $F$ with verification is an extension of $f$ if for all valuation profiles $\boldsymbol{v}$, $F(\boldsymbol{v}, \boldsymbol{1}) = f(\boldsymbol{v})$. Namely, $F$ behaves exactly as $f$ given that all agents report truthfully. For the converse, given a mechanism $F$, we say that $F$ induces an allocation rule $f$ if for all $\boldsymbol{v}$, $f(\boldsymbol{v}) = F(\boldsymbol{v}, \boldsymbol{1})$. For clarity, we refer to mechanisms with selective verification simply as mechanisms, and denote them by uppercase letters, and to allocation rules simply as rules or algorithms, and denote them by lowercase letters. A mechanism has a property of an allocation rule (e.g., scale invariance, partial or full allocation, participation, approximation ratio) iff the induced rule has this property.

A mechanism $F$ is $\epsilon$-truthful, for some $\epsilon \in [0,1)$, if for any agent $i$, for any valuation pair $\boldsymbol{v}_i$ and $\boldsymbol{v}_i'$ and for all reported valuations $\boldsymbol{v}_{-i}$ and verification vectors $\boldsymbol{t}_{-i}$,

\[ \boldsymbol{v}_i \cdot F((\boldsymbol{v}_i, \boldsymbol{v}_{-i}), (1, \boldsymbol{t}_{-i})) \;\geq\; (1-\epsilon)\, \boldsymbol{v}_i \cdot F((\boldsymbol{v}_i', \boldsymbol{v}_{-i}), (0, \boldsymbol{t}_{-i})) \]

A mechanism is truthful if it is $0$-truthful. Namely, no matter the reported valuations of the other agents and whether they report truthfully or not, the expected utility of agent $i$ is maximized if she reports truthfully.

Robustness and Obliviousness. A remarkable property of our mechanisms is robustness, namely that they ignore the valuations of misreporting agents and let their outcome be determined by the valuations of truthful agents only. Formally, a mechanism $F$ is robust if for all reported valuations $\boldsymbol{v}$ and verification vectors $\boldsymbol{t}$, $F(\boldsymbol{v}, \boldsymbol{t}) = F(\boldsymbol{v}_T, \boldsymbol{1})$, with the equality referring to the probability distribution of $F$, where $T = \{ i : t_i = 1 \}$ is the set of truthful agents. Next, we simply use $F(\boldsymbol{v}_T)$, instead of $F(\boldsymbol{v}_T, \boldsymbol{1})$.

A mechanism $F$ with selective verification is oblivious (to the declarations of misreporting agents not verified) if for all valuation profiles $\boldsymbol{v}$ and verification vectors $\boldsymbol{t}$, with set of truthful agents $T$ and set of misreporting agents $L = N \setminus T$, and any outcome $o$,

\[ \Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o \mid S \cap L = \emptyset] \;=\; \Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o] \qquad (1) \]

I.e., if the misreporting agents are not caught, they do not affect the probability distribution of $F$ (see also [13]). By induction on the number of misreports, we show that obliviousness is sufficient for robustness.

Lemma 3

Let $F$ be any oblivious recursive mechanism with selective verification. Then, $F$ is robust.

Proof

We fix a valuation profile $\boldsymbol{v}$ and a verification vector $\boldsymbol{t}$, and prove that for any outcome $o$, $\Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o] = \Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o]$, where $T$ is the set of truthful agents in $\boldsymbol{v}$. Clearly, this implies that $F$ is robust. The proof is by induction on the number of agents $n$.

If all agents in $\boldsymbol{v}$ are truthful, the statement is obvious. So, we assume inductively that the statement holds for every valuation profile on a proper subset of the agents in $\boldsymbol{v}$. Let $L$ be the set of misreporting agents in $\boldsymbol{v}$ and let $S$ be the verification set of $F$ on input $\boldsymbol{v}$. Then,

\[ \Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o] \;=\; \sum_{M \subseteq L} \Pr[S \cap L = M] \cdot \Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o \mid S \cap L = M] \qquad (2) \]

We have that $\Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o \mid S \cap L = \emptyset] = \Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o]$, by (1), since $F$ is oblivious. If $S$ includes a non-empty set $M$ of misreporting agents, since $F$ is recursive, it ignores their declarations and recurses on $\boldsymbol{v}_{-M}$. Therefore, for all $M \neq \emptyset$, $\Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o \mid S \cap L = M] = \Pr[F(\boldsymbol{v}_{-M}, \boldsymbol{t}_{-M}) = o] = \Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o]$, where the last equality follows from the induction hypothesis, because the agents in $\boldsymbol{v}_{-M}$ are a proper subset of the agents in $\boldsymbol{v}$. Therefore, using that all the conditional probabilities in (2) are equal to $\Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o]$, we obtain that $\Pr[F(\boldsymbol{v}, \boldsymbol{t}) = o] = \Pr[F(\boldsymbol{v}_T, \boldsymbol{1}) = o]$, i.e., that $F$ is robust. ∎

Robustness, Participation and Truthfulness. In the Appendix, Section A.6, we show that robustness and participation imply truthfulness (note that the converse may not be true, since a truthful mechanism with verification does not need to be robust). Then, by Lemma 1 and Lemma 3, we can focus on MIDR allocation rules for which the outcome and the verification set can be selected in an oblivious way.

Lemma 4

For any $\epsilon \in [0,1)$, if a mechanism $F$ with selective verification is robust and satisfies $\epsilon$-participation, then $F$ is $\epsilon$-truthful.

Quantifying Verification. Focusing on truthful mechanisms with verification, where the agents do not have any incentive to misreport, we bound the amount of verification when the agents are truthful (similarly to the definition of the approximation ratio of $F$ as the approximation ratio of the induced allocation rule $f$). For a truthful mechanism $F$, this is exactly the amount of verification required so that $F$ motivates truthfulness.

Given a mechanism with verification $F$, its worst-case verification is $\max_{\boldsymbol{v}} |S_F(\boldsymbol{v})|$, i.e., the maximum number of agents verified by $F$ in any truthful valuation profile. If $F$ is randomized, its expected verification is $\max_{\boldsymbol{v}} \mathbb{E}[|S_F(\boldsymbol{v})|]$, where the expectation is over the random strings used by $F$.

4 Motivating Example: Facility Location Mechanisms with Selective Verification

As a proof of concept, we apply mechanisms with verification to $k$-Facility Location. In such problems, we have a metric space $(X, d)$, where $X$ is a finite set of points and $d$ is a metric distance function. The outcomes are all subsets $C \subseteq X$ of $k$ locations. Each agent $i$ has a preferred location $x_i \in X$ and her "valuation" for outcome $C$ is $-d(x_i, C)$, i.e., minus the distance of her preferred location to the nearest facility in $C$. So, each agent aims at minimizing $d(x_i, C)$. The mechanism gets a profile $\boldsymbol{x}$ of reported locations. Using access to a verification oracle, it maps $\boldsymbol{x}$ to a set $C$ of $k$ facility locations.

Maximum Cost. To minimize $\max_i d(x_i, C)$, i.e., the maximum agent-facility distance, we use the $2$-approximate Greedy algorithm for $k$-Center (see e.g., [30, Sec. 2.2]). On input $\boldsymbol{x}$, Greedy first allocates a facility to an arbitrary agent. As long as fewer than $k$ facilities are placed, the next facility is allocated to the agent maximizing the distance to the facilities placed so far. We extend Greedy to a mechanism with selective verification by inspecting the reported location of every agent allocated a facility. If all of them are truthful, we place the facilities at their reported locations. Otherwise, we exclude any liars and recurse on the remaining agents. In the Appendix, Section A.7, we establish the properties of Greedy with verification; a code sketch follows Theorem 4.1. To quantify the improvement due to the use of verification, we highlight that there are no deterministic truthful mechanisms (without verification) that place multiple facilities in tree metrics and achieve a bounded (in terms of $n$ and $k$) approximation ratio (see [14]).

Theorem 4.1

The Greedy mechanism with verification for $k$-Facility Location is truthful and robust, is $2$-approximate for the maximum cost and verifies at most $k$ agents.
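
A brief Python sketch of Greedy with verification (Theorem 4.1); the distance function `d` and the boolean oracle are our own illustration conventions, not part of the paper's formal model.

```python
def greedy_with_verification(locations, k, d, is_truthful):
    """Greedy k-Center, verifying only the agents allocated a facility.

    locations   -- dict: agent id -> reported point in the metric space
    d           -- metric distance function on points
    is_truthful -- verification oracle: agent id -> True iff the report is honest
    """
    agents = dict(locations)
    while agents:
        ids = list(agents)
        centers = [ids[0]]  # first facility: an arbitrary agent
        while len(centers) < min(k, len(ids)):
            # Next facility: the agent farthest from the current facilities.
            far = max(ids, key=lambda i: min(d(agents[i], agents[c]) for c in centers))
            centers.append(far)
        liars = [c for c in centers if not is_truthful(c)]
        if not liars:
            return [agents[c] for c in centers]
        for c in liars:  # exclude the liars and recurse on the rest
            del agents[c]
    return []
```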

Social Cost. To minimize $\sum_i d(x_i, C)$, i.e., the total cost of the agents, we use the Proportional mechanism [22], which is $O(\log k)$-approximate [3]. Proportional first allocates a facility to an agent chosen uniformly at random. As long as fewer than $k$ facilities are placed, agent $i$ is allocated the next facility with probability proportional to her distance to the facilities placed so far. Verifying the reported location of every agent allocated a facility, we obtain that (see Section A.8; a code sketch follows the theorem):

Theorem 4.2

The Proportional mechanism with verification for $k$-Facility Location is truthful and robust, is $O(\log k)$-approximate for the social cost and verifies at most $k$ agents.
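
Similarly, a hedged sketch of Proportional with verification (Theorem 4.2); we use the plain distance as the sampling weight here, and the exact weighting does not affect the verification logic.

```python
import random

def proportional_facilities(locations, k, d, is_truthful):
    """Proportional mechanism, verifying only the agents allocated a facility."""
    agents = dict(locations)
    while agents:
        ids = list(agents)
        centers = [random.choice(ids)]  # first facility: uniformly random agent
        while len(centers) < min(k, len(ids)):
            # Next facility: agent i with probability proportional to her
            # distance to the nearest facility chosen so far.
            weights = [min(d(agents[i], agents[c]) for c in centers) for i in ids]
            if sum(weights) == 0:
                break  # every remaining agent coincides with a facility
            centers.append(random.choices(ids, weights=weights)[0])
        liars = [c for c in centers if not is_truthful(c)]
        if not liars:
            return [agents[c] for c in centers]
        for c in liars:
            del agents[c]
    return []
```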

5 The Power Mechanism with Selective Verification

let $A$ be the set of the remaining agents and let $w_o = \sum_{i \in A} v_{i,o}$, for each outcome $o$
pick an outcome $o$ and a tuple $(i_1, \ldots, i_k) \in A^k$
      with probability proportional to the value of the term $v_{i_1,o}\, v_{i_2,o} \cdots v_{i_k,o}$
for each agent $i \in \{i_1, \ldots, i_k\}$ do
     if $t_i = 0$ then mark $i$ as a liar
if some agent is marked as a liar then return the outcome of POW$^k$ on the remaining agents, excluding the liars
else return outcome $o$
Mechanism 1 The Power Mechanism

In this section, we present the Power mechanism, a recursive mechanism with verification that approximates the social welfare in the general domain of utilitarian voting. Power with parameter $k \geq 1$ (or POW$^k$, for brevity, see also Mechanism 1) is based on a strongly anonymous and scale invariant allocation that assigns probability proportional to the weight of each outcome raised to the power $k$. Hence, for each valuation profile $\boldsymbol{v}$, the outcome of POW$^k$ depends on the weight vector $\boldsymbol{w}$. If all agents are truthful, POW$^k$ results in each outcome $o$ with probability $w_o^k / \sum_{a \in O} w_a^k$, i.e., proportional to $w_o^k$ (note that for $k = 0$, we get the uniform allocation, while as $k \to \infty$, the outcome of maximum weight gets probability $1$). To implement this allocation with low verification, we observe that each term $w_o^k$ can be expanded in $n^k$ terms as follows. (For example, let $k = 2$ and consider a single outcome and two agents with valuations $v_1 = 1$ and $v_2 = 2$ for it; we omit the outcome index from the $v$'s for clarity. In (3), we expand $w^2 = (v_1 + v_2)^2 = 9$ in $4$ terms as $v_1 v_1 + v_1 v_2 + v_2 v_1 + v_2 v_2 = 1 + 2 + 2 + 4$. Given that this outcome is chosen, each of these terms, and the corresponding tuple, is selected with probability proportional to its value; e.g., the tuples $(1,2)$ and $(2,1)$ are each selected with probability $2/9$.)

\[ w_o^k \;=\; \Big( \sum_{i \in N} v_{i,o} \Big)^{k} \;=\; \sum_{(i_1, \ldots, i_k) \in N^k} v_{i_1,o}\, v_{i_2,o} \cdots v_{i_k,o} \qquad (3) \]

Hence, choosing an outcome $o$ and a tuple $(i_1, \ldots, i_k)$ with probability proportional to the term $v_{i_1,o} \cdots v_{i_k,o}$, we end up with each outcome $o$ with probability proportional to $w_o^k$. (To sample from (3) efficiently, we select outcome $o$ with probability $w_o^k / \sum_a w_a^k$, and then, conditional on $o$, we select the agent in each position of the tuple independently with probability $v_{i,o}/w_o$; each tuple is then picked with probability $v_{i_1,o} \cdots v_{i_k,o} / w_o^k$. A code sketch of this sampling step follows Theorem 5.1.) The verification set of POW$^k$ consists of the agents in the selected tuple. Since at most $k$ agents contribute to each term, we can make POW$^k$ robust and almost truthful by verifying at most $k$ agents. In Section A.9, we show that POW$^k$ is oblivious, due to its proportional nature, and thus robust, and satisfies $\epsilon$-participation for a suitable $\epsilon$ that vanishes as $k$ grows. Thus, we obtain that:

Theorem 5.1

For any $\epsilon \in (0,1)$, POW$^k$ with $k = \lceil \ln m/\epsilon \rceil$ is robust and $\epsilon$-truthful, has worst-case verification $O(\ln m/\epsilon)$, and achieves an approximation ratio of $1-\epsilon$ for the social welfare.
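
A sketch of the sampling step of POW$^k$, following the two-stage procedure above (outcome first, then the tuple positions independently); the helper name and the dict conventions are ours, and the sketch assumes at least one positive reported valuation.

```python
import random

def power_select(profile, k):
    """Sample an outcome and the verification set for POW^k.

    profile -- dict: agent id -> list of m nonnegative valuations
    Returns (outcome index, distinct agents of the sampled k-tuple).
    """
    ids = list(profile)
    m = len(profile[ids[0]])
    weights = [sum(profile[i][o] for i in ids) for o in range(m)]
    # Stage 1: outcome o with probability proportional to w_o^k.
    o = random.choices(range(m), weights=[w ** k for w in weights])[0]
    # Stage 2: fill each of the k tuple positions independently, picking
    # agent i with probability v_{i,o} / w_o; the tuple (i_1, ..., i_k) is
    # then selected with probability proportional to its term in (3).
    tuple_agents = random.choices(ids, weights=[profile[i][o] for i in ids], k=k)
    return o, set(tuple_agents)
```

Plugged into the recursive skeleton of Section 3 (e.g., via `functools.partial(power_select, k=5)`), this yields the complete Power mechanism.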

6 Logarithmic Verification is Best Possible

Next, we describe a random family of instances where truthfulness requires a logarithmic expected verification. Below, we only sketch the main idea of the proof. The full proof can be found in Section A.10.

Theorem 6.1

Let $F$ be a randomized truthful mechanism that achieves a constant approximation ratio for any number of agents $n$ and any number of outcomes $m$. Then, $F$ needs expected verification $\Omega(\log m)$.

Proof sketch

We consider $m$ outcomes and $m$ disjoint groups of agents, one group for each outcome. Each group has a large number of agents. An agent in group $j$ has valuation $0$ for any other outcome and valuation either $\delta$ or $1$ for outcome $j$, where $\delta$ is tiny (e.g., exponentially small in $m$). In each group $j$, the probability that $2^\ell$ agents, $\ell \in \{0, 1, \ldots, \log_2 m\}$, have valuation $1$ for outcome $j$ is proportional to $2^{-\ell}$. The expected maximum social welfare of such instances is $\Theta(\log m)$.

We next focus on a group $j$ of agents and fix $\boldsymbol{v}_{-j}$, i.e., the declarations of the agents in all other groups. Using a simple argument, we can assume wlog. that the probability of outcome $j$ depends only on the number of agents in group $j$ that declare $1$ for $j$. Thus, the mechanism induces a sequence of probabilities $(p_x)_{x \geq 0}$, where $p_x$ is the probability of outcome $j$, given that the number of agents that declare $1$ for $j$ is $x$. Since the mechanism is truthful, if $x$ agents declare $1$ for outcome $j$, we need to verify each of them with probability at least $(p_x - p_{x-1})/p_x$. Otherwise, an agent with valuation $\delta$ can declare $1$ and improve her expected utility. Therefore, for any fixed $\boldsymbol{v}_{-j}$, when $x$ agents declare $1$ for outcome $j$, we need an expected verification of at least $x (p_x - p_{x-1})/p_x$ for the agents in group $j$.

Assuming truthful reporting and taking the expectation over the number of agents in group $j$ with valuation $1$, we find that the expected verification for the agents in group $j$ is at least half the expected social welfare of the mechanism from group $j$, conditional on $\boldsymbol{v}_{-j}$, minus half the probability of outcome $j$, conditional on $\boldsymbol{v}_{-j}$. Removing the conditioning on $\boldsymbol{v}_{-j}$ and summing up over all groups $j$, we find that the expected verification is at least half the expected welfare of the mechanism minus $1/2$. Since the mechanism has a constant approximation ratio, there are instances where the expected verification is $\Omega(\log m)$. ∎

7 Characterization of Strongly Anonymous Mechanisms

Next, we characterize the class of scale invariant and strongly anonymous truthful mechanisms that verify $o(n)$ agents. The characterization is technically involved and consists of four main steps. We first prove that these rules are continuous (for the full proof see Section A.12).

Lemma 5

Let $f$ be any scale invariant and strongly anonymous allocation rule. If $f$ is discontinuous, every truthful extension of $f$ needs to verify $\Omega(n)$ agents in expectation, for arbitrarily large $n$.

Proof sketch

First, we prove that if $f$ has a discontinuity, there are $\Omega(n)$ agents that have a very small valuation and can each change the allocation by a constant factor, independent of $n$ and $m$. Next, we focus on any truthful extension $F$ of $f$ and show that for every agent that has the ability to change the allocation by a constant factor, the probability that $F$ verifies her should be at least a constant, say $p$, due to truthfulness. Therefore, the expected verification of $F$ is at least $p \cdot \Omega(n) = \Omega(n)$. ∎

Therefore, if a truthful mechanism verifies $o(n)$ agents and induces a scale invariant and strongly anonymous allocation rule $f$, then $f$ needs to be continuous. In Section A.13, we prove that such an allocation rule satisfies participation. Then, by Lemma 2, we obtain the characterization that such an allocation rule is MIDR. Finally, in Section A.14, we show that any full allocation MIDR rule is either constant, i.e., its probability distribution does not depend on the valuation profile $\boldsymbol{v}$, or has a discontinuity. Thus, we obtain the following characterization:

Theorem 7.1

Let $F$ be any truthful mechanism that verifies $o(n)$ agents, is scale invariant and strongly anonymous and achieves full allocation. Then, $F$ induces a constant allocation rule.

1: pick tuples $\tau_1, \ldots, \tau_r$
2:      each with probability proportional to the value of the corresponding term in the expansion (3)
3: for each tuple $\tau_j$ and agent $i \in \tau_j$ do
4:      if $t_i = 0$ then return the corrective action
5: with probability $p_0$ (the null probability of the Partial Power allocation) return null
6: pick an outcome $o$ and a tuple $(i_1, \ldots, i_k)$
7:      with probability proportional to the value of the term $v_{i_1,o} \cdots v_{i_k,o}$
8: for each agent $i \in \{i_1, \ldots, i_k\}$ do
9:      if $t_i = 0$ then return the corrective action
10: return outcome $o$
Mechanism 2 The Partial Power Mechanism

8 The Partial Power Mechanism with Selective Verification

The Power mechanism, in Section 5, escapes the characterization of Theorem 7.1 by relaxing participation (and thus, truthfulness). In this section, we present Partial Power, which escapes the characterization by relaxing full allocation. Thus, Partial Power results in some outcome in $O$ with probability less than $1$, and with the remaining probability, it results in an artificial null outcome for which all agents have valuation $0$.

Lemma 2 implies that social welfare maximization is essentially necessary for participation. The proof of Theorem 7.1 implies that maximizing the social welfare over the simplex results in discontinuous mechanisms that need much verification (e.g., welfare maximization is discontinuous at weight vectors where two outcomes tie for the maximum weight; see also Lemma 9). Hence, we optimize over a smooth surface that is close to the simplex, but slightly curved towards the corners, so that the resulting welfare maximizers are continuous. Precisely, we consider welfare maximization over a family of such smooth sets, indexed by the integers. Welfare maximization over these sets results in the Partial Power allocation (Lemma 10), a continuous allocation that is MIDR and satisfies participation. Lemma 11 shows that, with an appropriate choice of the index, the partial allocation has approximation ratio $1-\epsilon$ for the social welfare, for any $\epsilon \in (0,1)$.

We next show that there exists a robust extension of the Partial Power allocation with reasonable verification; thus, we establish that Partial Power is truthful. To this end, we introduce Mechanism 2. Since the allocation is strongly anonymous, we consider below the weights $\boldsymbol{w}$ instead of the valuations $\boldsymbol{v}$. If all agents are truthful, Mechanism 2 samples exactly from the Partial Power allocation. In particular, steps 1-4 never trigger the corrective action, step 5 outputs null with the allocation's null probability $p_0$, and steps 6-10 work identically to Power, since given that the null outcome is not selected, each outcome $o$ is chosen with probability proportional to $w_o^k$.

The most interesting case is when some agents misreport their valuations. To achieve robustness, we need to ensure that the probability distribution is identical to the case where the misreporting agents are excluded from the mechanism. Similarly to Power, misreporting agents cannot affect the relative probabilities of the outcomes. In Partial Power, however, they may affect the probability of the null outcome. Thus, Partial Power is not oblivious and we cannot establish robustness through Lemma 3 or some variant of it.

Robustness of Partial Power is obtained through the special corrective action, triggered when verification reveals some misreporting agents. Then, the mechanism needs to allocate appropriate probabilities to each outcome and to the null outcome so that the unconditional probability distribution of Partial Power is identical to the intended allocation on $\boldsymbol{v}_T$, where $T$ is the set of truthful agents. Therefore, whenever the corrective action is triggered, we verify all agents, compute the weight vector $\boldsymbol{w}(\boldsymbol{v}_T)$ of the truthful agents, and return each outcome $o$ with an appropriately chosen probability $q_o$ (the exact formula is given in Section A.16).

The null outcome is returned with the remaining probability. We emphasize that these probabilities are chosen so that we cancel the effect of the misreporting agents on the unconditional probability distribution of the mechanism and achieve exactly the intended distribution on $\boldsymbol{v}_T$. Moreover, if the mechanism triggers the corrective action, we verify all agents. So, it is always possible to compute these probabilities correctly.

The crucial and most technical part of the analysis is to show that the $q_o$'s are always non-negative and their sum is at most $1$. To this end, we employ steps 1-4. These steps implement additional verification and ensure that the probability of triggering the corrective action is large enough for this property to hold (see Section A.16 for the details).

Theorem 8.1

For every $\epsilon \in (0,1)$, there exist parameter values for which Partial Power is truthful, robust, and $(1-\epsilon)$-approximate for the social welfare, and verifies, in the worst case, a number of agents that depends only on $m$ and $1/\epsilon$.

9 The Exponential Mechanism with Selective Verification

Next, we consider the well known Exponential mechanism and show that it escapes the characterization of Section 7 by relaxing scale invariance. The Exponential mechanism (or EXP, for brevity) is strongly anonymous and assigns a probability proportional to the exponential of the weight of each outcome. For each valuation profile $\boldsymbol{v}$, the outcome of EXP depends on $\boldsymbol{w}$. If all agents are truthful, EXP results in outcome $o$ with probability $e^{\lambda w_o} / \sum_{a \in O} e^{\lambda w_a}$, i.e., proportional to $e^{\lambda w_o}$, where $\lambda > 0$ is a parameter. As in Section 5, we expand every term $e^{\lambda w_o}$ and verify only the agents in the tuple corresponding to each term in the expansion below (the sampling can be implemented as described in Section 5; a code sketch appears at the end of this section):

\[ e^{\lambda w_o} \;=\; \sum_{\ell = 0}^{\infty} \frac{(\lambda w_o)^{\ell}}{\ell!} \;=\; \sum_{\ell = 0}^{\infty} \frac{\lambda^{\ell}}{\ell!} \sum_{(i_1, \ldots, i_\ell) \in N^\ell} v_{i_1,o} \cdots v_{i_\ell,o} \qquad (4) \]

The detailed description of EXP is similar to Mechanism 1, with the only difference that, in the second step, we pick an outcome $o$, an integer $\ell \geq 0$ and a tuple $(i_1, \ldots, i_\ell) \in N^\ell$ with probability proportional to the value of the term $\frac{\lambda^{\ell}}{\ell!}\, v_{i_1,o} \cdots v_{i_\ell,o}$ (see also Mechanism 5 in the Appendix). The following theorem summarizes the properties of EXP.

Theorem 9.1

For any $\lambda > 0$, EXP is robust and truthful, achieves an additive error of $(\ln m)/\lambda$ wrt. the maximum social welfare and has expected verification at most $\lambda \max_{o \in O} w_o$.

Proof (sketch)

Using an argument similar to that used for Power (see Section A.9), we can show that EXP is oblivious (note that the allocation of POW$^\ell$ is obtained from the allocation of EXP if we condition on a particular exponent $\ell$). Then, robustness follows from Lemma 3, because EXP is a recursive mechanism. As for participation, the Exponential allocation is known to be MIDR with range $\Delta(O)$ and function $c(\boldsymbol{x}) = H(\boldsymbol{x})/\lambda$, i.e., $1/\lambda$ times the entropy of the resulting allocation (see e.g., [18]). Therefore, by Lemma 1, EXP satisfies participation. Since it is also robust, Lemma 4 implies that EXP is truthful.

For the verification, (4) implies that when all agents are truthful, the number of agents verified, given that the selected outcome is $o$, follows a Poisson distribution with parameter $\lambda w_o$. Therefore, the expected verification is at most $\lambda \max_{o \in O} w_o$.

As for the approximation guarantee, the optimal social welfare and the objective maximized by EXP differ by $1/\lambda$ times the entropy of the allocation, which is at most $(\ln m)/\lambda$ (see also Section A.17). ∎

In many settings, we know (or can obtain in a truthful way, e.g., by random sampling) an estimation $\tilde{W}$ of the optimal social welfare $W = \|\boldsymbol{w}\|_\infty$ with $\tilde{W} \leq W \leq c\tilde{W}$, for some constant $c \geq 1$. Then, we can choose $\lambda = \ln m/(\epsilon \tilde{W})$ and obtain an approximation ratio of $1-\epsilon$ with expected verification $O(c \ln m/\epsilon)$, for any $\epsilon \in (0,1)$. E.g., if for all agents $i$, $\boldsymbol{v}_i \in [0,1]^m$, then $W \leq n$. Then, using $\lambda = \ln m/(\epsilon n)$, we have an additive error of $\epsilon n$ with verification $\ln m/\epsilon$. Moreover, if additionally $W = \Omega(n)$, the same choice of $\lambda$ gives an approximation ratio of $1-O(\epsilon)$ with verification $\ln m/\epsilon$. Finally, note that, since the number of agents verified follows a Poisson distribution, by Chernoff bounds, the verification bounds hold with high probability in addition to holding in expectation.
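
To make the sampling in (4) concrete, here is a hedged Python sketch (our own helper names): outcome $o$ is drawn with probability proportional to $e^{\lambda w_o}$, the tuple length is Poisson($\lambda w_o$), and the positions are filled independently as in Power.

```python
import math
import random

def exponential_select(profile, lam):
    """Sample an outcome and the verification set for the Exponential mechanism."""
    ids = list(profile)
    m = len(profile[ids[0]])
    weights = [sum(profile[i][o] for i in ids) for o in range(m)]
    wmax = max(weights)
    # Subtracting wmax before exponentiating is for numerical stability only;
    # it leaves the (proportional) outcome probabilities unchanged.
    o = random.choices(range(m), weights=[math.exp(lam * (w - wmax)) for w in weights])[0]
    ell = poisson_sample(lam * weights[o])  # tuple length ~ Poisson(lam * w_o)
    if ell == 0 or weights[o] == 0:
        return o, set()
    agents = random.choices(ids, weights=[profile[i][o] for i in ids], k=ell)
    return o, set(agents)

def poisson_sample(rate):
    """Knuth's simple Poisson sampler; adequate for moderate rates."""
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1
```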

References

  • [1] N. Alon, M. Feldman, A.D. Procaccia, and M. Tennenholtz. Strategyproof approximation of the minimax on networks. Mathematics of Operations Research, 35(3):513-526, 2010.
  • [2] A. Archer and R. Kleinberg. Truthful germs are contagious: A local-to-global characterization of truthfulness. In Proc. of the 9th ACM Conference on Electronic Commerce (EC ’08), pp. 21-30, 2008.
  • [3] D. Arthur and S. Vassilvitskii. k-means++: the advantages of careful seeding. In Proc. of the 18th ACM-SIAM Symposium on Discrete Algorithms (SODA ’07), pp. 1027-1035, 2007.
  • [4] V. Auletta, R. De Prisco, P. Penna, and G. Persiano. The power of verification for one-parameter agents. Journal of Computer and System Sciences, 75:190-211, 2009.
  • [5] E. Ben-Porath, E. Dekel, and B.L. Lipman. Optimal allocation with costly verification. American Economic Review, 104(12):3779-3813, 2014.
  • [6] I. Caragiannis, E. Elkind, M. Szegedy, and L. Yu. Mechanism design: from partial to probabilistic verification. In Proc. of the 13th ACM Conference on Electronic Commerce (EC ’12), pp. 266-283, 2012.
  • [7] R. Cole, V. Gkatzelis, and G. Goel. Mechanism design for fair division: allocating divisible items without payments. In Proc. of the 14th ACM Conference on Electronic Commerce (EC ’13), pp. 251-268, 2013.
  • [8] S. Dobzinski. An impossibility result for truthful combinatorial auctions with submodular valuations. In Proc. of the 43rd ACM Symposium on Theory of Computing (STOC ’11), pp. 139-148, 2011.
  • [9] S. Dobzinski and S. Dughmi. On the power of randomization in algorithmic mechanism design. SIAM Journal on Computing, 42(6):2287-2304, 2013.
  • [10] A. Filos-Ratsikas and P.B. Miltersen. Truthful approximations to range voting. In Proc. of the 10th Workshop on Internet and Network Economics (WINE ’14), LNCS 8877, pp. 175-188, 2014.
  • [11] D. Fotakis, P. Krysta, and C. Ventre. Combinatorial auctions without money. In Proc. of the 13th Conference on Autonomous Agents and Multi-Agent Systems (AAMAS ’14), pp. 1029-1036, 2014.
  • [12] D. Fotakis and C. Tzamos. Strategyproof Facility Location with concave costs. In Proc. of the 14th ACM Conference on Electronic Commerce (EC ’13), pp. 435-452, 2013.
  • [13] D. Fotakis and C. Tzamos. Winner-imposing strategyproof mechanisms for multiple Facility Location games. Theoretical Computer Science, 472:90-103, 2013.
  • [14] D. Fotakis and C. Tzamos. On the power of deterministic mechanisms for facility location games. ACM Transactions on Economics and Computation, 2(4):15:1-37, 2014.
  • [15] D. Fotakis and E. Zampetakis. Truthfulness flooded domains and the power of verification for mechanism design. In Proc. of the 9th Workshop on Internet and Network Economics (WINE ’13), LNCS 8289, pp. 202-215, 2013.
  • [16] T. Gneiting and A.E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378, 2007.
  • [17] J.D. Hartline and T. Roughgarden. Optimal mechanism design and money burning. In Proc. of the 40th ACM Symposium on Theory of Computing (STOC ’08), pp. 75-84, 2008.
  • [18] Z. Huang and S. Kannan. The exponential mechanism for social welfare: Private, truthful, and nearly optimal. In Proc. of the 53rd IEEE Symposium on Foundations of Computer Science (FOCS ’12), pp. 140-149, 2012.
  • [19] E. Koutsoupias. Scheduling without payments. In Proc. of the 4th International Symposium on Algorithmic Game Theory (SAGT ’11), LNCS 6982, pp. 143-153, 2011.
  • [20] P. Krysta and C. Ventre. Combinatorial auctions with verification are tractable. In Proc. of the 18th European Symposium on Algorithms (ESA ’10), LNCS 6347, pp. 39-50, 2010.
  • [21] R. Lavi and C. Swamy. Truthful and near-optimal mechanism design via linear programming. Journal of the ACM, 58(6):25, 2011.
  • [22] P. Lu, X. Sun, Y. Wang, and Z.A. Zhu. Asymptotically Optimal Strategy-Proof Mechanisms for Two-Facility Games. In Proc. of the 11th ACM Conference on Electronic Commerce (EC ’10), pp. 315-324, 2010.
  • [23] K. Nissim, R. Smorodinsky, and M. Tennenholtz. Approximately optimal mechanism design via Differential Privacy. In Proc. of the 3rd Conference on Innovations in Theoretical Computer Science (ITCS ’12), pp. 203-213, 2012.
  • [24] C.H. Papadimitriou, M. Schapira, and Y. Singer. On the hardness of being truthful. In Proc. of the 49th IEEE Symposium on Foundations of Computer Science (FOCS ’08), pp. 250-259, 2008.
  • [25] E. Pountourakis and G. Schäfer. Mechanisms for hiring a matroid base without money. In Proc. of the 7th International Symposium on Algorithmic Game Theory (SAGT ’14), LNCS 8768, pp. 255-266, 2014.
  • [26] A.D. Procaccia. Can Approximation Circumvent Gibbard-Satterthwaite? In Proc. of the 24th AAAI Conference on Artificial Intelligence (AAAI 10), pp. 836-841, 2010.
  • [27] A.D. Procaccia and M. Tennenholtz. Approximate mechanism design without money. In Proc. of the 10th ACM Conference on Electronic Commerce (EC ’09), pp. 177-186, 2009.
  • [28] M. Schapira and Y. Singer. Inapproximability of combinatorial public projects. In Proc. of the 4th Workshop on Internet and Network Economics (WINE ’08), LNCS 5385, pp. 351-361, 2008.
  • [29] I. Sher and R. Vohra. Price discrimination through communication. Theoretical Economics, (to appear), 2014.
  • [30] D.P. Williamson and D.B. Shmoys. The Design of Approximation Algorithms. Cambridge University Press, 2011.

Appendix A Appendix

A.1 Other Related Previous Work

The extensive use of monetary transfers in mechanism design is principally because, in the absence of money, very little can be done to enforce truthfulness. However, there are settings where monetary transfers might be unacceptable, infeasible, or undesirable (see e.g., [27] for examples). To circumvent the impossibility result of Gibbard-Satterthwaite in such settings, [27] suggested trading off social efficiency for truthfulness and introduced the framework of approximate mechanism design without money. The idea is to consider truthful mechanisms without money in a particular domain and determine the best approximation ratio achievable for an appropriate social objective.

In principle, the notion of approximate mechanisms provides the designer with more flexibility. Nevertheless, there have been only a few examples of truthful mechanisms with good approximation guarantees that are not based on additional assumptions. All of them concern some simple and restricted domains (see e.g., [1, 12, 22, 27] for placing one or two facilities in a metric space and [26] for voting with positional scoring rules). For less restricted domains, there are strong lower bounds on the best possible approximation ratio achievable by truthful mechanisms (see e.g., [14] for deterministic facility location mechanisms). Therefore, for nontrivial approximation guarantees, we need either some assumptions on the direction or the extent of agent misreports, i.e., to use verification, or a way to implicitly penalize misreports, a.k.a. imposition.

Probably the most natural and practically applicable notion of verification is symmetric partial verification (or $\delta$-verification), which explicitly forbids any false declaration at distance larger than $\delta$ from the true type. Interestingly, [2, 6, 15] prove that symmetric partial verification does not help in the design of truthful mechanisms (with or without money)! Hence, in order to make some difference in approximate mechanism design without money, verification should be exact, in the sense that it forbids even negligible deviations from the truth, at least for some types of misreports. Many interesting positive results in approximate mechanism design without money use either “one-sided” verification or imposition (see e.g., [11, 13, 19, 23, 25]). However, the use of imposition depends very much on the particular application (see e.g., [13, 19, 25]), while “one-sided” verification explicitly forbids a particular type of false declarations for all agents with positive utility (see e.g., [11]). So, though theoretically interesting, “one-sided” verification is difficult to apply in practice. Thus, starting from [6], recent research has focused on the power of exact verification schemes that use either limited or costly verification and mild (or at least bounded) penalties for the liars.

Working in this direction, we seek a better and more delicate understanding of the power of verification in approximate mechanism design without money. Significantly departing from most of the previous work, we develop a general approach to the use of verification in mechanism design without money that is applicable to essentially any domain and does not resort to any explicit (e.g., monetary) penalties that decrease the utility of misreporting agents.

A.2 Conclusions and Discussion

In this work, we introduce a general approach to approximate mechanism design without money and with selective verification and apply it to the general domain of utilitarian voting (and to Combinatorial Public Project and to $k$-Facility Location). We focus on strongly anonymous randomized mechanisms and characterize such mechanisms that are truthful in expectation, scale invariant, achieve full allocation and have reasonable verification. By relaxing truthfulness, full allocation and scale invariance, we obtain three mechanisms, namely Power, Partial Power, and Exponential, that are truthful (or almost truthful, for Power), achieve an approximation ratio of $1-\epsilon$ for the social welfare, and verify $O(\ln m/\epsilon)$ agents, or a number of agents depending only on $m$ and $1/\epsilon$ for Partial Power, where $m$ is the number of outcomes. Hence, we obtain a smooth tradeoff between the number of agents verified and the quality of approximation. From a technical viewpoint, our mechanisms are based on smooth proportional-like randomized allocation rules. Truthfulness is a consequence of participation, which is closely related to being maximal in distributional range, and of robustness, which is closely related to obliviousness to the misreporting agents not included in the verification set.

The property of robustness, namely that the probability distribution of the mechanism does not depend on misreporting agents, seems quite remarkable. To the best of our knowledge, this is the first time that robustness (or a similar) property is considered in mechanism design. Actually, with the possible exception of constant mechanisms, whose probability distribution over outcomes is independent of the agent declarations, a mechanism can be robust only if it uses exact verification.

To see that robustness is a strong property, recall that truthfulness means that a misreporting agent cannot change the allocation in her favor, while robustness means that a misreporting agent cannot change the allocation whatsoever. Hence, the definition (and the proof) of truthfulness assumes a utility function that each agent maximizes by truthful reporting. Robustness, on the other hand, does not refer to the utility function of the agents. Any misreport that can be caught by the verification oracle does not affect the probability distribution of a robust mechanism, no matter the incentives or the utility function of misreporting agents.

We believe that robustness can be very useful when the agent valuations are not declared explicitly to the mechanism, but they are deduced from their declarations on some observable types. E.g., this happens in the Facility Location domain, where the agents declare their locations to the mechanism, and the definition of truthfulness assumes that each agent wants a facility as close as possible to her declared location and that her disutility increases linearly with the distance (see also [12]). On the other hand, robustness only depends on whether each agent declares her true location (e.g., her true home address) to the mechanism, not on whether she wants a facility close, not so close, or far away from her declared location.

A.3 An Application to Combinatorial Public Project

The Combinatorial Public Project Problem (CPPP) was introduced in [28, 24] and is a well-studied problem in algorithmic mechanism design. An instance of CPPP consists of a set $R$ of $r$ resources, a parameter $s$, $1 \leq s \leq r$, and $n$ strategic agents, where each agent $i$ has a function $v_i$ that assigns a non-negative valuation to each resource subset $S \subseteq R$. The objective is to find a set $S$ of $s$ resources that maximizes $\sum_{i} v_i(S)$, i.e., the social welfare of the agents from $S$. We assume that all valuations are normalized, i.e., $v_i(\emptyset) = 0$, and monotone, i.e., $v_i(S) \leq v_i(T)$ for all $S \subseteq T \subseteq R$.

The valuation functions are implicitly represented through a value oracle, which returns the valuation of any resource subset in a single oracle call. Then, CPPP is NP-hard and practically inapproximable in polynomial time, under standard computational complexity assumptions (see [28] for the details). If the valuation functions are submodular, i.e., each $v_i$ satisfies $v_i(S \cup T) + v_i(S \cap T) \leq v_i(S) + v_i(T)$, for all $S, T \subseteq R$, CPPP can be approximated in polynomial time within a factor of $1 - 1/e$. If the valuation functions are subadditive, i.e., each $v_i$ satisfies $v_i(S \cup T) \leq v_i(S) + v_i(T)$, for all $S, T \subseteq R$, CPPP can be approximated in polynomial time within a factor of $1/\sqrt{r}$, while approximating it within any factor better than $r^{\epsilon - 1/2}$, for any constant $\epsilon > 0$, requires exponential communication [28].

In [24], it was shown that for submodular valuations, CPPP cannot be approximated in polynomial time (or with polynomial communication) by deterministic truthful mechanisms (with money) within any factor better than $r^{\epsilon - 1/2}$, for any constant $\epsilon > 0$. A similar communication complexity lower bound was shown in [8] for randomized truthful in expectation mechanisms with money. So, the polynomial-time approximability of CPPP with submodular valuations is dramatically better than its approximability by polynomial-time truthful mechanisms with money. Although the approximability of CPPP by polynomial time truthful mechanisms with money has received considerable attention, to the best of our knowledge, this is the first time that the approximability of CPPP by truthful mechanisms without money is considered.

CPPP, with general valuation functions, can be naturally cast in our framework of utilitarian voting, with the outcome set consisting of all resource subsets of size $s$ (hence, we have $m = \binom{r}{s}$ outcomes). Then, our mechanisms imply the following results on the approximability of CPPP with general valuation functions by mechanisms without money and with selective verification (a small code illustration follows the list):

Power.

For any $\epsilon \in (0,1)$, the Power mechanism always allocates a set of $s$ resources, is robust, $\epsilon$-truthful, achieves an approximation ratio of $1-\epsilon$ and verifies at most $O(\ln m/\epsilon) = O(s \ln r/\epsilon)$ agents.

Partial Power.

For any $\epsilon \in (0,1)$, the Partial Power mechanism allocates a set of $s$ resources with probability at least $1-\epsilon$, is robust, truthful, achieves an approximation ratio of $1-\epsilon$ and verifies a number of agents that depends only on $m$ and $1/\epsilon$. Note that the empty set can naturally play the role of the null outcome for Partial Power.

Exponential.

Since Exponential is not scale invariant, we need to assume that the valuations are bounded, e.g., $v_i(S) \in [0,1]$, for every agent $i$ and every set $S$. Then, for any $\epsilon \in (0,1)$, the Exponential mechanism always allocates a set of $s$ resources, is robust, truthful, and achieves an additive error of $\epsilon n$ with verification of $O(s \ln r/\epsilon)$ agents, or achieves an approximation ratio of $1-\epsilon$ with verification of $O(s \ln r/\epsilon)$ agents, given a constant factor estimation of the optimal social welfare (where the verification bounds hold with high probability).
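
As a small illustration of the casting described above (our own variable names; the valuation functions are toy coverage functions), the outcome set can be enumerated explicitly for small instances and the resulting profile fed to any of the mechanisms above:

```python
from itertools import combinations

def cppp_profile(resources, s, agent_valuations):
    """Cast a CPPP instance into the utilitarian voting framework.

    agent_valuations -- dict: agent id -> set function v_i
    Returns (outcomes, profile): all s-subsets of resources, and each
    agent's valuation vector over those outcomes.
    """
    outcomes = [frozenset(S) for S in combinations(resources, s)]
    profile = {i: [v(S) for S in outcomes] for i, v in agent_valuations.items()}
    return outcomes, profile

# Example: 4 resources, projects of size 2, two agents with coverage values.
vals = {1: lambda S: len(S & {"a", "b"}), 2: lambda S: len(S & {"b", "c"})}
outcomes, profile = cppp_profile(["a", "b", "c", "d"], 2, vals)
```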

These guarantees are very strong and rather surprising, especially if the number of agents is significantly larger than $m$, which is the case in many practical settings. We almost reach the optimal social welfare of the famous VCG mechanism, which achieves truthfulness through (potentially very large) payments, using truthful mechanisms without money that verify a small number of agents independent of $n$. It becomes even more interesting if we recall that the penalty for a misreporting agent, through which we enforce truthfulness, is just the exclusion of the agent’s preferences from the decision making process.

The mechanisms above run in time polynomial in the total number $m = \binom{r}{s}$ of outcomes and in the number $n$ of agents. So, if the valuation functions are implicitly represented by value oracles, they are not computationally efficient. However, we still need to resort to approximate solutions, because, in the absence of money, the optimal solution is not truthful. We underline that computational inefficiency is unavoidable, since our approximation ratio of $1-\epsilon$, for any constant $\epsilon > 0$, is dramatically better than the known impossibility results on the polynomial time approximability of CPPP.

If we insist on computationally efficient mechanisms without money for CPPP, we can combine our mechanisms with existing maximal-in-range mechanisms so that everything runs in polynomial time. E.g., for CPPP with subadditive valuation functions, we can use the maximal-in-range mechanism of [28, Sec. 3.2] and obtain randomized polynomial-time truthful mechanisms without money that match its approximation guarantee for the social welfare, up to a factor of $1-\epsilon$, with selective verification of $O(s \ln r/\epsilon)$ agents.

A.4 The Proof of Lemma 1

Let $i$ be any agent. Since the allocation rule $f$ is MIDR, we obtain the following inequalities:

\[ \boldsymbol{w}(\boldsymbol{v}) \cdot f(\boldsymbol{v}) + c(f(\boldsymbol{v})) \;\geq\; \boldsymbol{w}(\boldsymbol{v}) \cdot f(\boldsymbol{v}_{-i}) + c(f(\boldsymbol{v}_{-i})) \]
\[ \boldsymbol{w}(\boldsymbol{v}_{-i}) \cdot f(\boldsymbol{v}_{-i}) + c(f(\boldsymbol{v}_{-i})) \;\geq\; \boldsymbol{w}(\boldsymbol{v}_{-i}) \cdot f(\boldsymbol{v}) + c(f(\boldsymbol{v})) \]

We apply the MIDR condition to $\boldsymbol{v}$, for the first inequality, and to $\boldsymbol{v}_{-i}$, for the second inequality. Summing up the two inequalities and using that $\boldsymbol{w}(\boldsymbol{v}) = \boldsymbol{w}(\boldsymbol{v}_{-i}) + \boldsymbol{v}_i$, we obtain that $\boldsymbol{v}_i \cdot f(\boldsymbol{v}) \geq \boldsymbol{v}_i \cdot f(\boldsymbol{v}_{-i})$, i.e., the participation condition. ∎

A.5 Continuous Allocation Rules with Participation: The Proof of Lemma 2

Recall that for any valuation profile $\boldsymbol{v}$, the probability distribution of a strongly anonymous allocation rule $f$ depends only on the weight vector $\boldsymbol{w}$ of the outcomes. Hence, we fix a pair of arbitrary weight vectors $\boldsymbol{w}$ and $\boldsymbol{w}'$. Since $f$ satisfies participation, we have that