# Combinatorial Auctions without Money

Dimitris Fotakis (National Technical University of Athens, Greece, fotakis@cs.ntua.gr) · Piotr Krysta (University of Liverpool, UK, pkrysta@liverpool.ac.uk; supported by EPSRC grant EP/K01000X/1) · Carmine Ventre (Teesside University, UK, c.ventre@tees.ac.uk)
###### Abstract

Algorithmic Mechanism Design attempts to marry computation and incentives, mainly by leveraging monetary transfers between the designer and the selfish agents involved. This is principally because, in the absence of money, very little can be done to enforce truthfulness. However, in certain applications money is unavailable, morally unacceptable, or simply at odds with the objective of the mechanism. For example, in Combinatorial Auctions (CAs), the paradigmatic problem of the area, we aim at solutions of maximum social welfare but still charge the society to ensure truthfulness. Additionally, truthfulness of CAs is poorly understood already in the case in which bidders happen to be interested in only two different sets of goods.

We focus on the design of incentive-compatible CAs without money in the general setting of k-minded bidders. We trade monetary transfers for the observation that the mechanism can detect certain lies of the bidders: i.e., we study truthful CAs with verification and without money. We prove a characterization of truthful mechanisms, which makes an interesting parallel with the well-understood case of CAs with money for single-minded bidders. We then give a host of upper bounds on the approximation ratio obtained by either deterministic or randomized truthful mechanisms when the sets and valuations are private knowledge of the bidders. (Most of these mechanisms run in polynomial time and return solutions with (nearly) best possible approximation guarantees.) We complement these positive results with a number of lower bounds (some of which are essentially tight) that hold in the easier case of public sets. We thus provide an almost complete picture of truthfully approximating CAs in this general setting with multi-dimensional bidders.

## 1 Introduction

Algorithmic Mechanism Design has as its main goal the realignment of the objective of the designer with the selfish interests of the agents involved in the computation. Since the Internet, the principal computing platform nowadays, is perhaps the main motivation to study problems in which these objectives differ, one would expect truthful mechanisms to have concrete and widespread practical applications. However, one of the principal obstacles to this is the assumption that the mechanisms use monetary transfers. On one hand, money may provoke (unreasonably) large payments [9]; on the other hand, while money might be reasonable in some applications, such as sponsored search auctions, in others little justification can be found for either the presence of a digital currency or the use of money at all. There are contexts in which money is morally unacceptable (such as to support certain political decisions) or even illegal (as, for example, in organ donations). Additionally, there are applications in which the objective of the computation collides with the presence of money.

Consider Combinatorial Auctions (CAs, for short), the paradigmatic problem in Algorithmic Mechanism Design. In a combinatorial auction we have a set U of m goods and n bidders. Each bidder i has a private valuation function v_i that maps subsets of goods to nonnegative real numbers (v_i(∅) is normalized to be 0). Agents' valuations are monotone, i.e., for S ⊆ T we have v_i(S) ≤ v_i(T). The goal is to find a partition S_1, …, S_n of U such that the social welfare ∑_i v_i(S_i) is maximized. For this problem, we are in a paradoxical situation: whilst, on one hand, we pursue the noble goal of maximizing the happiness of the society (i.e., the bidders), on the other, we consider it acceptable to charge the society itself (and thus "reduce" its total happiness) to ensure truthfulness. CAs without money would avoid this paradox, automatically guarantee budget-balance (a property which cannot, in general, be achieved together with social welfare maximization), and deal with budgeted bidders (a case which is generally hard to handle in the presence of money).

In this paper, we focus on k-minded bidders, i.e., bidders interested in obtaining one out of a collection S_i of k subsets of U. In this general setting, we want to study the feasibility of designing truthful CAs without money, returning (ideally, in polynomial time) reasonable approximations of the optimal social welfare. This is, however, an impossible task in general: it is indeed pretty easy to show that there is no better than n-approximate mechanism without money, even in the case of single-item auctions and truthful-in-expectation mechanisms [8]. We therefore focus on the model of CAs with verification, introduced in [17]. In this model, which is motivated by a number of real-life applications and has also been considered by economists [6], bidders do not overbid their valuations on the set that they are awarded. The hope is that money can be traded for the verification assumption so as to be able to design "good" (possibly, polynomial-time) mechanisms which are truthful without money in a well-motivated – still challenging – model.

### 1.1 Our contribution

The model of CAs with verification is perhaps best illustrated by means of the following motivating scenario, discussed first in [17]. Consider a government (auctioneer) auctioning business licenses for a set of cities under its administration. A business company (bidder) wants to get a license for some subset S of cities (a subset of U) to sell her product stock to the market. Consider the bidder's profit for a subset of cities to be equal to a unitary publicly known product price (e.g., for some products, such as drugs, the government could fix a social price) times the number of product items available in the stocks that the company possesses in the cities comprising S. (Note that bidders will sell products already in stock, i.e., no production costs are involved, as these have been sustained before the auction is run. This is conceivable when a government runs an auction for urgent needs, e.g., salt provision for icy roads or vaccines for pandemic diseases.) In this scenario, the bidder is strategic about her stock availability. As noted in the Economics literature [6], a simple inspection of the stock consistency implies that bidders cannot overbid their profits: the concealment of existing product items in stock is costless, but disclosure of unavailable ones is prohibitively costly. The assumption is verification a posteriori: the inspection is carried out for the solutions actually implemented, and then each bidder cannot overstate her valuation for the set she gets allocated, if any. (A stronger model of verification would require bidders to be unable to overbid at all, and not just on the awarded set. However, there appear to be weaker motivations for this model: the investment required in inspections would be considerable and rather unrealistic.) It is important to notice that bidders can misreport sets and valuations for unassigned sets in an unrestricted way. A formal definition of the model of CAs with verification and without money can be found in Section 2.

In this model, we first give a complete characterization of algorithms that are incentive-compatible, in both the case in which the collections of sets each bidder is interested in are public (also referred to as known bidders) and the case in which they are private (also known as unknown bidders); valuations are always assumed to be private. We prove that truthfulness is characterized in this context in terms of k-monotone algorithms: in the case of known bidders, if a bidder is awarded a set S and augments her declaration for S, then a k-monotone algorithm must, in this new instance, grant her a set in her collection which is not worse than S (i.e., a set with a valuation not smaller than her valuation for S). (This generalizes neatly to the case of unknown bidders.) There are two important facts we wish to emphasize about our characterizations. First, their significance stems from the fact that the corresponding problem of characterizing truthfulness for CAs with money and k-minded bidders is poorly understood: this is a long-standing open problem already for k = 2, see, e.g., [25, Chapter 12]. Second, it is pretty easy to see that these notions generalize the properties of monotonicity shown to characterize truthfulness with money for single-minded bidders in [22, 19], for known and unknown bidders respectively. More generally, these properties of monotonicity are also proved to be sufficient to get truthful mechanisms for so-called generalized single-minded bidders [3]. This is an interesting development as, to the best of our knowledge, it is the first case in which a truthful mechanism with money can be "translated" into a truthful mechanism without money. The price to pay is "only" to perform verification to prevent certain lies of the bidders, while algorithms (and hence their approximation guarantees) remain unchanged.
Thus, in light of our results, previously known algorithms presented in, e.g., [19, 3, 11] assume a double relevance: they are truthful not only when money can be used, but also in the absence of money when verification can be implemented. This equivalence also gives a strong motivation for our model. Naturally, the picture for the multi-dimensional case of k ≥ 2 is more blurry since, as we mention above, truthfulness with money is not yet well understood in these cases.

Armed with the characterization of truthfulness, we provide a number of upper and lower bounds on the approximation guarantee to the optimal social welfare of truthful CAs without money and with verification. The upper bounds hold for the harder case of unknown bidders. We give a deterministic upper bound for the case in which each good in U is available in a supply of b copies; this algorithm runs in polynomial time and adapts an idea of multiplicative update of good prices from [18]. Following similar ideas, we also obtain randomized universally truthful mechanisms whose approximation guarantees are expressed in terms of d, the maximum size of the sets in the bidders' collections. Our most significant deterministic polynomial-time upper bound is obtained, in the case of unit supply, by a simple greedy mechanism that exploits the characteristics of the model without money; this algorithm returns an O(√m)-approximate solution. These upper bounds are complemented by two simple randomized universally truthful CAs without money, the first achieving its guarantee in exponential time, the second running instead in polynomial time. We note here that all our polynomial-time upper bounds are computationally (nearly) best possible, even when the algorithm has full knowledge of the bidders' data. We also note that all but one of the upper bounds given can be obtained in the setting in which bidders declare so-called demand oracles, see, e.g., [25, Chapter 11]. We complete this study by proving a host of lower bounds on the approximation guarantee of truthful CAs without money for known bidders, without any computational assumption. (Note that the class of incentive-compatible algorithms for known bidders is larger than the class for unknown bidders.) We prove lower bounds for deterministic, for universally truthful and, finally, for truthful-in-expectation mechanisms. These imply that optimal mechanisms are not truthful in our model.
Additionally, stronger lower bounds are proved for deterministic truthful mechanisms that use priority algorithms [1]. These algorithms process (and take decisions on) one elementary item at a time, from a list of ordered items; the ordering can also change adaptively after each item is considered. (Note that our greedy mechanism falls into the category of non-adaptive priority algorithms, since it processes bids as items, ordered once at the beginning.) We give a lower bound for priority algorithms that process bids as elementary items (thus essentially matching the upper bound of the greedy algorithm) and a lower bound for the case in which the algorithm processes bidders as items.
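To make the priority-algorithm viewpoint concrete, here is a sketch of a non-adaptive priority allocation that processes bids as items: all (bidder, set, value) triples are ordered once up front and then accepted greedily. The ranking key (value divided by the square root of the set size, as in classic greedy CAs) and the bid representation are assumptions made for illustration; the text above does not pin down the exact ordering used by the paper's greedy mechanism.

```python
import math

def greedy_allocation(bids):
    """Non-adaptive priority allocation: flatten all (bidder, set, value)
    bids, order them once up front, then accept greedily subject to
    disjointness and to each bidder winning at most one set. The ranking
    key value / sqrt(|set|) is an assumption borrowed from classic greedy
    CAs. `bids[i]` is a list of (frozenset, value) pairs for bidder i."""
    items = [(v / math.sqrt(len(s)), i, s, v)
             for i, bid in enumerate(bids) for (s, v) in bid if s]
    items.sort(key=lambda t: -t[0])  # fixed (non-adaptive) priority order
    used, winners = set(), {}
    for _, i, s, v in items:
        if i not in winners and not (used & s):
            winners[i] = (s, v)
            used |= s
    return winners
```

The order is computed once before any decision is taken, which is exactly what makes the rule a non-adaptive priority algorithm.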

Our bounds give a rather surprising picture of the relative power of verification versus money, suggesting that the two models are somehow incomparable. For example, we have a universally truthful mechanism without money which matches the approximation guarantee of the universally truthful mechanism with money given by [7]. (It is worth mentioning, however, that the latter mechanism does not always guarantee its approximation ratio, since it has an error probability which cannot be reduced by, e.g., repeating the auction, for otherwise truthfulness would be lost.) On the other hand, because of our lower bounds, we know that it is not possible to implement the optimal outcome without money; with exponential computational time, instead, we can truthfully implement the optimal solution using VCG payments. However, if we restrict ourselves to polynomial-time mechanisms, then we have a deterministic truthful approximation mechanism without money, based on the aforementioned greedy algorithm; with money, instead, it is not known how to obtain any polynomial-time deterministic truthful mechanism with an approximation ratio better than the one given in [14]. Moreover, [1, Theorem 2] proved a lower bound on the approximation ratio of any truthful greedy mechanism with money for instances with demanded sets of cardinality at most d. Our greedy mechanism achieves an approximation ratio below this bound for such instances, which implies that the lower bound does not carry over to our model without money. Additionally, we show that the greedy mechanism cannot be made truthful with money, which suggests that the model without money couples better with greedy selection rules. A general lower bound in terms of d for CAs without money would shed further light on this debate on the power of verification versus the power of money. In this regard, we offer an interesting conjecture in Section 5.2.

### 1.2 Related work

CAs as an optimization problem (without strategic considerations) are known to be NP-hard to solve optimally, and even to approximate: an approximation ratio of m^{1/2−ε}, for any constant ε > 0, cannot be obtained in polynomial time, and an analogous hardness holds in terms of the maximum set size d [23, 19, 13]. As a consequence, a large body of literature has focused on the design of polynomial-time truthful CAs that return as good an approximate solution as possible, under assumptions (i.e., restrictions) on bidders' valuation domains. For single-minded domains, a host of truthful CAs have been designed (see, e.g., [19, 22, 3]). A more complete picture of what is known about truthful CAs under different restrictions of bidders' domains can be found in Figure 11.2 of [25].

The authors of [17], instead of restricting the domains of the bidders, proposed to restrict the way bidders lie. We adopt their model here, adapting it to the case without money. The definition of CAs with verification is inspired by the literature on mechanisms with verification (see, e.g., [24, 27, 28] and references therein). Mechanism design problems where players are restricted in the way they can lie are also considered in theoretical economics. We next discuss some of the work most relevant to this paper. Green and Laffont [12] define and motivate a model of partial verification wherein bidders can only report bids from a type-dependent set of allowed messages; they characterize bidding domains for which the Revelation Principle holds in the presence of this notion of restricted bidding. This model has been further studied by Singh and Wittman [31] and later extended in [4] to allow probabilistic verification of bids outside the set of allowed messages. The economic model that is closest to ours is the one studied in [6]; therein, verification is supposed to take place for every outcome and not just for the implemented solution, and it is therefore stronger and less realistic than ours. Another related line of work tries to establish when a subset of incentive-compatibility constraints is sufficient to obtain full incentive-compatibility. [21] considers a single-good, single-buyer optimal auction design problem and studies conditions under which no-overbidding constraints also imply the full incentive-compatibility of the underlying auction. Other papers studying this kind of question are [30, 5]. In particular, the results in [5] (and to some extent in [4]) seem to suggest that one has to focus only on "one-sided" verification, for otherwise a mechanism is truthful if and only if it satisfies a subset of incentive-compatibility constraints.

Our work fits in the framework of approximate mechanism design without money, initiated by [29]. The idea is that, for optimization problems where the optimal solution cannot be truthfully implemented without money, one may resort to the notion of approximation and seek the best approximation ratio achievable by truthful algorithms. Approximate mechanisms without money have been obtained for various problems, among them locating one or two facilities in metric spaces (see, e.g., [29, 20]). Due to the apparent difficulty of truthfully locating three or more facilities with a reasonable approximation guarantee, notions conceptually similar to our notion of verification have been proposed [26, 10]. [15] considers truthful mechanisms without money for scheduling selfish machines whose execution times can be (strongly) verified. The authors of [8] consider the design of mechanisms without money for what they call the Generalized Assignment problem: selfish jobs compete to be processed by unrelated machines, and the only private data of each job is the set of machines by which it can actually be processed. This problem can be modeled via maximum-weight bipartite matching, and the latter can be cast as a special case of CAs with demanded sets of cardinality 1; then [8, Algorithm 1] can be regarded as a special case of our greedy algorithm.

## 2 Model and preliminaries

In a combinatorial auction we have a set U of m goods and n agents, a.k.a. bidders. Each k-minded XOR-bidder i has a private valuation function v_i and is interested in obtaining only one set in a private collection S_i of subsets of U, k being the size of S_i. The valuation function maps subsets of goods to nonnegative real numbers (v_i(∅) is normalized to be 0). Agents' valuations are monotone: for S ⊆ T we have v_i(S) ≤ v_i(T).

The goal is to find a partition S_1, …, S_n of U such that the social welfare ∑_{i=1}^n v_i(S_i) is maximized. As an example (with illustrative numbers), consider U = {1, 2, 3} and the first bidder to be interested in S_1 = {{1}, {2, 3}}, with v_1({1}) = 1 and v_1({2, 3}) = 2. The valuation function of bidder i for any S ⊆ U is

$$v_i(S)=\begin{cases}\max_{S'\in S_i:\, S\supseteq S'} v_i(S') & \text{if } \exists S'\in S_i \text{ such that } S\supseteq S',\\ 0 & \text{otherwise.}\end{cases} \qquad (1)$$

Accordingly, we say that v_i(S) is defined by an inclusion-maximal set S' ∈ S_i such that S' ⊆ S and v_i(S) = v_i(S'). If S ∈ S_i then we say that S defines itself. So, in the example above, v_1({1, 2}) = 1 is defined by {1}.
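A minimal sketch of how an XOR bid induces a valuation on arbitrary bundles via (1); representing a bid as a list of (set, value) pairs is an assumption made here for illustration.

```python
def valuation(bid, S):
    """Valuation induced by an XOR (k-minded) bid on an arbitrary bundle S,
    following Eq. (1): the value of the best demanded set contained in S,
    and 0 if none fits. `bid` is a list of (frozenset, value) pairs."""
    return max((v for (T, v) in bid if T <= S), default=0.0)
```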

Throughout the paper we assume that bidders are interested in sets of cardinality at most d, i.e., |S| ≤ d for every bidder i and every S ∈ S_i.

Assume that the sets in S_i and the values v_i(S), for S ∈ S_i, are private knowledge of the bidders. Then, we want to design an allocation algorithm (auction) A that, for a given input of bids from the bidders, outputs a feasible assignment (i.e., at most one of the requested sets is allocated to each bidder, and allocated sets are pairwise disjoint). The auction should guarantee that no bidder has an incentive to misreport her preferences, and it should maximize the social welfare (i.e., the sum of the valuations of the winning bidders).
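The welfare-maximization objective can be made concrete by exhaustive search on tiny instances; the sketch below is an illustration of the optimization problem only, not a mechanism from the paper.

```python
from itertools import product

def optimal_welfare(bids):
    """Exhaustive search for the maximum social welfare: each bidder is
    assigned one of her demanded sets or nothing, and allocated sets must
    be pairwise disjoint. `bids[i]` is a list of (frozenset, value) pairs.
    Exponential time -- a reference point for tiny instances only."""
    best = 0.0
    # index len(bids[i]) stands for "bidder i gets nothing"
    for choice in product(*[range(len(bid) + 1) for bid in bids]):
        used, welfare, feasible = set(), 0.0, True
        for i, c in enumerate(choice):
            if c == len(bids[i]):
                continue
            s, v = bids[i][c]
            if used & s:        # overlaps an already allocated set
                feasible = False
                break
            used |= s
            welfare += v
        if feasible:
            best = max(best, welfare)
    return best
```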

More formally, we let D_i be a set of non-empty subsets of U and z_i be the corresponding valuation function of agent i, i.e., z_i : D_i → ℝ≥0 (extended to arbitrary bundles as in (1)). We call b_i = (z_i, D_i) a declaration (or bid) of bidder i. We let t_i = (v_i, S_i) be the true type of agent i. We also let Δ_i denote the set of all the possible declarations of agent i and call Δ_i the declaration domain of bidder i. Fix the declarations b_{-i} of all the agents but i. For any declaration b_i in Δ_i, we let A_i(b_i, b_{-i}) be the set that an auction A on input b = (b_i, b_{-i}) allocates to bidder i. If no set is allocated to i then we naturally set A_i(b) = ∅. Observe that, according to (1), the valuation of bidder i for A_i(b) is well defined even when A_i(b) is not among her demanded sets. We say that A is a truthful auction without money if the following holds for any i, b_{-i} and b_i:

$$v_i(A_i(t_i, b_{-i})) \ge v_i(A_i(b)). \qquad (2)$$

We also define notions of truthfulness in the case of randomization: we either have universally truthful CAs, in which case the mechanism is a probability distribution over deterministic truthful mechanisms, or truthful-in-expectation CAs, where in (2) we use the expected values, over the random coin tosses of the algorithm, of the valuations. We also say that a mechanism A is an α-approximation for CAs with k-minded bidders if, for all b, ∑_i v_i(A_i(b)) ≥ OPT(b)/α, OPT(b) being the value of a solution with maximum social welfare for the instance b.

Recall that A_i(b) may not belong to the set of demanded sets D_i declared in b_i. In particular, there can be several sets in D_i (or none) that are subsets of A_i(b). However, as observed above (cf. (1)), the valuation z_i(A_i(b)) is defined by a set in D_i which is an inclusion-maximal subset of A_i(b) maximizing the valuation of agent i. We denote such a set as σ(A_i(b)|b_i), i.e., z_i(σ(A_i(b)|b_i)) = z_i(A_i(b)). In our running example above, it can be, for some algorithm A and some b, that A_1(b) = {1, 2}, whose valuation is defined, as observed above, by {1}; the set {1} is then denoted as σ(A_1(b)|b_1). (Similarly, we define σ(A_i(b)|b'_i) w.r.t. any other declaration b'_i.) Following the same reasoning, we let σ(A_i(b)|t_i) denote the set in S_i such that v_i(σ(A_i(b)|t_i)) = v_i(A_i(b)).
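The set σ(S|b) can be computed directly from this definition; the helper below (with the illustrative list-of-(set, value)-pairs bid representation) returns a value-maximizing demanded subset of S, breaking ties toward inclusion-maximal sets.

```python
def sigma(bid, S):
    """sigma(S | bid): a demanded set in `bid` contained in S whose value
    attains the valuation of S under that bid, per Eq. (1); None when no
    demanded set fits inside S. Among value-maximizing candidates an
    inclusion-maximal (here: largest) one is returned."""
    cands = [(T, v) for (T, v) in bid if T <= S]
    if not cands:
        return None
    vmax = max(v for (_, v) in cands)
    tops = [T for (T, v) in cands if v == vmax]
    tops.sort(key=len, reverse=True)  # prefer inclusion-maximal sets
    return tops[0]
```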

We focus on exact algorithms (an algorithm is exact if each bidder is awarded either one of her declared sets or nothing) in the sense of [19]. This means that A_i(b) ∈ D_i ∪ {∅}. This implies, by the monotonicity and normalization of the valuations, that z_i(σ(A_i(b)|b_i)) = z_i(A_i(b)), and then the definition of σ yields the following for any b_i:

$$\sigma(A_i(b_i, b_{-i})\,|\,t_i) \subseteq A_i(b_i, b_{-i}) = \sigma(A_i(b_i, b_{-i})\,|\,b_i). \qquad (3)$$

In the verification model each bidder can only declare lower valuations for the set she is awarded. More formally, a bidder whose type is t_i can declare a type b_i if and only if, whenever A_i(b_i, b_{-i}) ≠ ∅:

$$z_i(\sigma(A_i(b_i, b_{-i})\,|\,b_i)) \le v_i(\sigma(A_i(b_i, b_{-i})\,|\,t_i)). \qquad (4)$$

In particular, bidder i evaluates the assigned set A_i(b) as v_i(σ(A_i(b)|t_i)), i.e., v_i(A_i(b)) = v_i(σ(A_i(b)|t_i)). Thus the set σ(A_i(b)|b_i) can be used to verify a posteriori whether bidder i has overbid when declaring b_i. To be more concrete, consider the motivating scenario for CAs with verification above. The set of cities for which the government assigns licenses to bidder i when she declares b_i can be used a posteriori to verify overbidding, by simply counting the product items available in the stocks of the cities for which licenses were granted to bidder i.

When (4) is not satisfied, the bidder is caught lying by the verification step. We assume that this outcome is highly undesirable for the bidder (for simplicity, we can assume that in such a case the bidder loses prestige and the possibility to participate in future auctions). This way, (2) is satisfied directly when (4) does not hold (as in such a case a lying bidder would have an infinitely bad utility, by the assumption above). Thus, in our model, truthfulness with verification and without money of an auction is fully captured by (2) holding for any i, b_{-i} and b_i such that (4) is fulfilled. Since our main focus is on this class of truthful mechanisms with verification and no money, we sometimes omit these qualifiers and simply refer to truthful mechanisms/algorithms.
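For an awarded bundle, condition (4) reduces to comparing the declared and true valuations it induces. A sketch of the a posteriori check, with bids again represented (illustratively) as lists of (set, value) pairs:

```python
def verification_allows(true_bid, decl_bid, awarded_set):
    """A posteriori verification in the spirit of Eq. (4): the declared
    valuation of the awarded bundle must not exceed its true valuation.
    An empty award always passes, since nothing is inspected."""
    if not awarded_set:
        return True
    declared = max((v for (T, v) in decl_bid if T <= awarded_set), default=0.0)
    true_val = max((v for (T, v) in true_bid if T <= awarded_set), default=0.0)
    return declared <= true_val
```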

##### A graph-theoretic approach

The technique we will use to derive truthful auctions for multi-minded XOR bidders is a straightforward variation of the so-called cycle-monotonicity technique. Consider an algorithm A. We will set up a weighted graph for each bidder i, depending on A, the declaration domain of bidder i and the declarations b_{-i} of all the bidders but i, in which the non-existence of negative-weight edges guarantees the truthfulness of the algorithm. This is a well-known technique. More formally, fix algorithm A, bidder i and declarations b_{-i}. The declaration graph associated to algorithm A has a vertex for each possible declaration in the domain Δ_i. We add an arc from a to b whenever a bidder of type a can declare to be of type b obeying (4); that is, edge (a, b) belongs to the graph if and only if z_b(σ(A_i(b, b_{-i})|b)) ≤ z_a(σ(A_i(b, b_{-i})|a)), where z_a and z_b denote the valuation functions of declarations a and b, respectively. (Strictly speaking, this inequality needs to hold only when A_i(b, b_{-i}) ≠ ∅, as the awarded set is needed to verify; by the monotonicity and normalization of valuations, it holds trivially when A_i(b, b_{-i}) = ∅.) The weight of edge (a, b) is defined as z_a(σ(A_i(a, b_{-i})|a)) − z_a(σ(A_i(b, b_{-i})|a)), and it thus encodes the loss that a bidder whose type is a incurs by declaring b. The following result (whose proof is straightforward) relates the weights of the edges of the declaration graph to the truthfulness of the algorithm.

###### Proposition 1

An algorithm A is a truthful auction with verification and without money for CAs with k-minded bidders if and only if no declaration graph associated to algorithm A has negative-weight edges.

In the case of mechanisms without verification, the graph above is complete. Such a graph can be used to check whether an algorithm can be augmented with payments so as to ensure truthfulness, both in the scenario with verification and without: incentive-compatibility with money is known to coincide with the case in which each graph has no negative-weight cycles [32]. We will use this fact to show that certain algorithms cannot be made truthful with money.
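For small, finite declaration domains the declaration graph can be built explicitly and Proposition 1 checked mechanically. The sketch below treats the known-bidder, exact-algorithm case, representing a declaration as a dict from demanded sets to values; it is an illustration of the technique, not code from the paper.

```python
def declaration_graph(domain, allocate):
    """Known-bidder declaration graph for one bidder, with b_{-i} fixed.
    `domain` is a list of declarations (dicts {frozenset: value} over the
    bidder's public collection); `allocate(d)` returns the awarded set (a
    key of d) or None. Edge a->b exists when a type-a bidder may declare b
    under a posteriori verification; its weight is a's loss from doing so."""
    def val(d, s):
        return d.get(s, 0.0) if s else 0.0
    edges = []
    for ia, a in enumerate(domain):
        wa = allocate(a)
        for ib, b in enumerate(domain):
            if ia == ib:
                continue
            wb = allocate(b)
            if wb is not None and val(b, wb) > val(a, wb):
                continue  # verification forbids a type-a bidder declaring b
            edges.append((ia, ib, val(a, wa) - val(a, wb)))
    return edges

def truthful_without_money(edges):
    """Proposition 1: truthful with verification and without money iff the
    declaration graph has no negative-weight edge."""
    return all(w >= 0 for (_, _, w) in edges)
```

A monotone threshold rule passes the check, while a rule that awards the set only to low declarations produces a negative-weight edge.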

##### Known vs Unknown k-minded bidders

In the discussion above, we considered the case in which the collection of sets each bidder is interested in is private knowledge. In this case, we refer to the problem of designing truthful auctions that maximize the social welfare as CAs with unknown k-minded bidders (or, simply, unknown bidders). An easier scenario is the setting in which the sets are public knowledge and bidders are only strategic about their valuations. In this case, we instead talk about CAs with known k-minded bidders (or, simply, known bidders). Our upper bounds hold for the more general case of unknown bidders, while our lower bounds apply to the larger class of mechanisms that are truthful for known bidders.

## 3 Characterization of truthful mechanisms

In this section we characterize the algorithms that are truthful in our setting, in both the scenarios of known and unknown bidders. Interestingly, the characterizing property is purely algorithmic, and it turns out to be a generalization of the properties used for the design of truthful CAs with money and no verification for single-minded bidders.

### 3.1 Characterization for known bidders

In this case, for each k-minded bidder i we know the collection S_i. The following property generalizes the monotonicity of [22] and characterizes truthful auctions without money and with verification.

###### Definition 1

An algorithm A is k-monotone if the following holds for any bidder i, any b_{-i} and any declaration z_i: if A_i(z_i, b_{-i}) = S then, for every declaration z'_i such that z'_i(S) ≥ z_i(S), it holds that z'_i(A_i(z'_i, b_{-i})) ≥ z'_i(S).

###### Theorem 1

An algorithm A is truthful without money and with verification for known k-minded bidders if and only if A is k-monotone.

Proof. Fix i and b_{-i}, and consider the declaration graph associated to algorithm A. Take any edge (z'_i, z_i) of the graph and let S denote A_i(z_i, b_{-i}). By definition, the edge exists if and only if z_i(S) ≤ z'_i(S).

Now if the algorithm is k-monotone then we also have that z'_i(A_i(z'_i, b_{-i})) ≥ z'_i(S), and then the weight of edge (z'_i, z_i), namely z'_i(A_i(z'_i, b_{-i})) − z'_i(S), is non-negative. Vice versa, assume that the weight of (z'_i, z_i) is non-negative: this means that whenever z_i(S) ≤ z'_i(S) it must be that z'_i(A_i(z'_i, b_{-i})) ≥ z'_i(S), and therefore A is k-monotone. The theorem follows from Proposition 1.

Similarly to [22], k-monotonicity implies the existence of thresholds (critical values/prices). To this end, it is important to consider the sets in S_i in decreasing order of (true) valuations. Accordingly, we denote S_i = {S_{i,1}, …, S_{i,k}}, with j < l if and only if v_i(S_{i,j}) > v_i(S_{i,l}).

###### Lemma 1

An algorithm A is k-monotone if and only if for any i and any b_{-i} there exist threshold values θ_1, …, θ_k such that: if z_i(S_{i,j}) > θ_j and z_i(S_{i,l}) < θ_l for all l < j, then A_i(b) = S_{i,j}. Moreover, if z_i(S_{i,l}) < θ_l for all l, then A_i(b) = ∅.

The lemma above assumes that bidders have different valuations for each of their minds. This is a rather unrestrictive way to model CAs with k-minded bidders. In the more general case in which bidders are allowed to have ties in their valuations, one can prove that k-monotonicity implies the existence of thresholds, while the other direction is not true in general but only under some assumptions on A.

### 3.2 Characterization for unknown bidders

The following property generalizes the property of monotonicity of algorithms defined by [19] and characterizes truthful auctions without money and with verification.

###### Definition 2

An algorithm A is k-set monotone if the following holds for any i, any b_{-i} and any b_i: if A_i(b_i, b_{-i}) = S ≠ ∅ then, for all b'_i such that z'_i(σ(S|b'_i)) ≥ z_i(σ(S|b_i)), we have A_i(b'_i, b_{-i}) = S'' with z'_i(S'') ≥ z'_i(σ(S|b'_i)).

To see how this notion generalizes [19], it is important to understand the role of σ(S|b'_i). In the definition above, σ(S|b'_i) should be read as indicating that, for a bidder going from declaration b_i to declaration b'_i, the awarded set S is substituted with σ(S|b'_i). This is because σ(S|b'_i) denotes the set in the collection declared in b'_i which defines the valuation of S. Specifically, σ(S|b'_i) is an inclusion-maximal subset of S in that collection such that z'_i(σ(S|b'_i)) = z'_i(S). (Note that if S belonged to the declared collection then σ(S|b'_i) would be S itself.)

###### Theorem 2

An algorithm A is truthful without money and with verification for unknown k-minded bidders if and only if A is k-set monotone.

Proof. Fix i and b_{-i}, and consider the declaration graph associated to algorithm A. Take any edge (b'_i, b_i) of the graph and let S denote A_i(b_i, b_{-i}). By definition, the edge exists if and only if z_i(σ(S|b_i)) ≤ z'_i(σ(S|b'_i)).

Now if the algorithm is k-set monotone then we also have that z'_i(A_i(b'_i, b_{-i})) ≥ z'_i(σ(S|b'_i)), and then the weight of edge (b'_i, b_i) is non-negative. Vice versa, assume that the weight of (b'_i, b_i) is non-negative: this means that whenever z_i(σ(S|b_i)) ≤ z'_i(σ(S|b'_i)) it must be that z'_i(A_i(b'_i, b_{-i})) ≥ z'_i(σ(S|b'_i)), and therefore A is k-set monotone. The theorem follows from Proposition 1.

Observe that our characterization of Theorem 2, applied to unknown single-minded bidders, implies the existence of a threshold for any set. Namely, let A be a given k-set monotone algorithm, and let i be a fixed bidder with declaration b_i = (z_i, {S}) (here, k = 1). Then, for the set S, algorithm A is monotone with respect to the declared value z_i(S), and thus there exists a critical threshold. It is not hard to see that thresholds exist also for unknown k-minded bidders with k > 1.
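When an allocation rule is monotone in a single declared value, the critical threshold can be located numerically; the sketch below (purely illustrative, with assumed search bounds and iteration count) approximates it by bisection.

```python
def critical_value(wins, lo=0.0, hi=1e6, iters=60):
    """Approximate the critical threshold of a monotone single-parameter
    rule: `wins(v)` is True iff the bidder is awarded her set when she
    declares value v, and is assumed monotone (losing below the threshold,
    winning above). Bisection over the assumed range [lo, hi]."""
    if wins(lo):
        return lo               # wins everywhere in range: threshold at lo
    if not wins(hi):
        return float('inf')     # never wins in range
    for _ in range(iters):
        mid = (lo + hi) / 2
        if wins(mid):
            hi = mid
        else:
            lo = mid
    return hi
```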

The result in Theorem 2 also relates to the characterization of truthful CAs with money and no verification (see, e.g., Proposition 9.27 in [25]). While the two characterizations look pretty similar, there is an important difference: in the setting with money and no verification, each bidder optimizes her valuation minus the critical price over all her demanded sets; in the setting without money and with verification, each bidder optimizes only her valuation over all her demanded sets among those that are bounded from below by the threshold.

### 3.3 Implications of our characterizations

We discuss here two conceptually relevant consequences of our results above. In a nutshell, a reasonably large class of truthful mechanisms with money can be turned into truthful mechanisms without money, by using the verification paradigm.

#### 3.3.1 Single-minded versus multi-minded bidders

Observe that our characterization of truthful mechanisms without money for CAs with single-minded bidders, both known and unknown, is exactly the same as the characterization of truthful mechanisms with money in this setting; see, e.g., pages 274–275 in [25]. This means that the two classes of truthful mechanisms in fact coincide. More formally, we have:

###### Proposition 2

Any (deterministic) truthful α-approximation mechanism with money for single-minded CAs can be turned into a (deterministic) truthful α-approximation mechanism without money and with verification for the same problem, and vice versa. This holds for single-minded CAs with either known or unknown bidders.

#### 3.3.2 Beyond CAs

It is known that a slight generalization of the monotonicity of [19] is a sufficient property for obtaining truthful mechanisms with money also for problems involving generalized single-minded bidders [3]. Intuitively, generalized single-minded bidders have several private numbers associated with their type: their valuation for a solution is equal to the first of these values or to minus infinity, depending on whether the solution asks the agent to "over-perform" on one of the other parameters; see [3] for details. By Theorem 2, all the truthful mechanisms with money designed for this quite general type of bidders can be turned into truthful mechanisms without money, when the verification paradigm is justifiable. As a direct corollary of our characterization, we then obtain a host of truthful mechanisms without money and with verification for the (multi-objective optimization) problems studied in [3, 11].

## 4 Upper bounds for unknown bidders

In this section we present our upper bounds for CAs with unknown k-minded bidders.

### 4.1 CAs with arbitrary supply of goods

In this section, we consider the more general case in which each good in U is available in multiple copies. Note that the characterizations above also hold in this multi-unit case. We present three polynomial-time algorithms that are truthful for CAs with unknown bidders: the first is deterministic, the remaining two are randomized and give rise to universally truthful CAs.

#### 4.1.1 Deterministic truthful CAs

We adapt here the overselling multiplicative price update algorithm and its analysis from [18] to our setting without money. The algorithm considers bidders in an arbitrary given order and is given a parameter that upper bounds the maximum valuation of any bidder. We first assume that this parameter is known to the mechanism; afterwards we modify the mechanism and show how to guess the parameter truthfully.

Algorithm 1 processes the bidders in the given order, starting with some relatively small, uniform price for each item. When considering a bidder, the algorithm uses the current prices as thresholds and allocates to her a set in her demand that has the maximum valuation among all her sets with valuations above the respective thresholds. Then the prices of the elements in the allocated set are increased by a fixed factor, and the next bidder is considered.

For each good, we consider the number of its copies allocated to the bidders preceding a given bidder, the total number of its copies allocated to all bidders, and its price at the end of the algorithm.

We claim now that if the initial price and the update factor are chosen so that the price of a good exceeds the valuation bound once all its copies are sold, then the allocation output by Algorithm 1 is feasible, that is, it assigns at most the available number of copies of each good to the bidders. The argument is as follows. Consider any good. When its last available copy is sold to some bidder, its price is updated to a value above the maximum valuation of any bidder. Thus, this good alone makes any set containing it too expensive, and so no further copy will be sold.
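The pricing scheme just described can be sketched as follows. This is an illustrative sketch under assumptions, not the paper's exact Algorithm 1: the names (`overselling_auction`, `p0`, `r`) and the input format are ours, and verification is not modeled.

```python
def overselling_auction(bidders, universe, p0, r):
    """Sketch of a multiplicative price update allocation.
    bidders: list of demand lists [(frozenset_of_goods, valuation), ...]
    p0: initial uniform item price; r: price update factor.
    """
    price = {e: p0 for e in universe}
    allocation = {}
    for i, demand in enumerate(bidders):
        # A set is admissible if its valuation is above its price threshold.
        admissible = [(S, v) for (S, v) in demand
                      if v >= sum(price[e] for e in S)]
        if not admissible:
            continue
        # Allocate the admissible set of maximum valuation ...
        S, v = max(admissible, key=lambda sv: sv[1])
        allocation[i] = S
        # ... and raise the price of each of its elements by factor r.
        for e in S:
            price[e] *= r
    return allocation, price
```

With `p0` and `r` chosen so that `p0 * r**b` exceeds the valuation bound (for supply `b`), the feasibility argument above applies: a sold-out good prices out every set containing it.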

Next we prove two lower bounds on the social welfare of the sets chosen by Algorithm 1, in terms of the optimal social welfare and of the final prices of the goods.

###### Lemma 2

It holds and .

Combining the above bounds yields the following result for the algorithm.

###### Theorem 3

Algorithm 1, with the initial price and update factor chosen as discussed above, produces a feasible allocation with the claimed approximation guarantee.

Proof. Feasibility follows from the choice of the parameters, which satisfies the condition discussed above. Chaining the first bound of Lemma 2 with the second, and using the choice of the update factor, yields the claimed bound on the social welfare of the output allocation.

###### Theorem 4

Algorithm 1 is a truthful mechanism without money and with verification for CAs with unknown k-minded bidders.

Proof. Fix a bidder and the declarations of the other bidders. As in Definition 2, take two declarations of this bidder that are related as prescribed there. (In this proof, the mechanism under consideration is Algorithm 1.)

Note that the ordering of the bidders is independent of the bids, so when our bidder is considered the prices of the elements of her demanded sets are the same under both declarations. Hence, when the algorithm executes line 1, the set allocated under the first declaration is still taken into consideration under the second, and the bidder's allocated valuation cannot decrease. This shows that the algorithm is k-set monotone and then, by Theorem 2, truthful.

We now modify Algorithm 1 in order to remove the assumption that the valuation bound is known. The modified algorithm is presented as Algorithm 2. We have the following result.

###### Theorem 5

Algorithm 2 is a truthful mechanism without money and with verification for CAs with unknown k-minded bidders, and its approximation ratio is of the same order as that of Algorithm 1.

Proof. Approximation ratio and feasibility of the produced solution follow from the choice of the parameters: we can reuse the previous analysis of Algorithm 1, which did not make any assumption on the order in which bidders are processed and only required the guessed parameter to upper bound the bidders' valuations.

We now argue about the truthfulness of the modified algorithm. Let us call the bidder selected first by Algorithm 2 the max bidder. We first observe that the max bidder is allocated the set in her (reported) demand with the highest (reported) valuation; indeed, her declaration for her best set is at least the threshold used by the algorithm. Now, fix a bidder and the declarations of the others and, as in Definition 2, take two declarations of this bidder related as prescribed there. (In this proof, the mechanism under consideration is Algorithm 2.) Consider the max bidder with respect to each of the two resulting bid vectors. We distinguish three cases.

Case 1: our bidder is the max bidder under the first declaration. In this case, her top valuation is larger than all the valuations of the other bidders which, being unchanged, implies that she is the max bidder under the second declaration as well. But then, as observed above, she gets her best set under the second declaration, and the monotonicity condition of Definition 2 is satisfied.

Case 2: our bidder is the max bidder under the second declaration only. Since she is the max bidder there, we can argue, as above, that she gets her best set, and the condition is again satisfied.

Case 3: our bidder is the max bidder under neither declaration. Since the other bids are unchanged, the max bidder is the same in both bid vectors. This implies that the ordering in which bidders are considered is the same under both declarations, which in turn implies that the prices considered by the algorithm in line 2 are the same in both instances. We can then use the same arguments as in the proof of Theorem 4 to conclude.

In all three cases we have shown that the algorithm is k-set monotone, and the claim follows from Theorem 2.

#### 4.1.2 Randomized truthful CAs

We show here how to use Algorithm 2 to obtain randomized universally truthful mechanisms with improved expected approximation ratios.

Observe first that if we execute Algorithm 1 with a smaller update factor, then the output solution may allocate more than the available number of copies of each good, but only by a bounded factor [18, Lemma 1]. This simply follows from the fact that if too many copies of a good were sold, its price would already exceed the maximum valuation. This infeasible solution has an improved approximation guarantee: plugging the smaller update factor into Theorem 3 indeed implies a better ratio (see also [18, Theorem 1]). This idea leads to the following randomized algorithm in [18]: use the smaller update factor, explicitly maintain feasibility of the produced solution, and allocate the best admissible set to each bidder only with a suitably chosen probability. (See Algorithm 3 for a precise description.) We now introduce the same randomization idea into our Algorithm 2. The resulting algorithm is Algorithm 4.
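The randomized idea can be sketched as follows, again under assumptions: names, the input format, and the detail of whether prices rise on an unserved (tails) bidder are our illustrative choices, not the exact Algorithms 3/4.

```python
import random

def randomized_auction(bidders, universe, supply, p0, r, q, seed=0):
    """Sketch: price-update allocation with a smaller factor r, serving
    each admissible bidder only with probability q while explicitly
    maintaining feasibility (never exceeding the supply of any good)."""
    rng = rng_obj = random.Random(seed)
    price = {e: p0 for e in universe}
    sold = {e: 0 for e in universe}
    allocation = {}
    for i, demand in enumerate(bidders):
        # Admissible: valuation above threshold AND copies still available.
        admissible = [(S, v) for (S, v) in demand
                      if v >= sum(price[e] for e in S)
                      and all(sold[e] < supply for e in S)]
        if not admissible:
            continue
        S, v = max(admissible, key=lambda sv: sv[1])
        if rng.random() < q:          # serve with probability q
            allocation[i] = S
            for e in S:
                sold[e] += 1
        for e in S:                   # illustrative: prices rise either way
            price[e] *= r
    return allocation
```

Fixing the coin flips (one per bidder) turns this into a deterministic algorithm, which is the view used in the universal-truthfulness argument below.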

###### Theorem 6

Algorithm 4 is a universally truthful mechanism without money and with verification for CAs with unknown k-minded bidders, with an improved expected approximation ratio.

Proof. Approximation guarantee and feasibility of the output solution follow from essentially the same arguments as in [18]. (For completeness, we give this proof in the appendix.) We now argue about the universal truthfulness of Algorithm 4. This algorithm can be viewed as a probability distribution over deterministic algorithms, each defined by a vector of coin flips, one per bidder. Such a deterministic algorithm first selects and serves the max bidder and then serves the remaining bidders, deterministically allocating the chosen set to a bidder if and only if her coin flip is one. Thus, each such algorithm is Algorithm 2 restricted to the bidders whose coin flip is one. So, to show that each of them is (deterministically) truthful we use the same argument as in the proof of Theorem 5, together with the additional observation that bidders whose corresponding bit in the vector is zero have no incentive to lie, since they are not served anyway.

Finally we can also obtain a universally truthful mechanism in case demanded sets have unbounded sizes.

###### Theorem 7

There exists a universally truthful mechanism without money and with verification for CAs with unknown k-minded bidders whose demanded sets have unbounded sizes.

### 4.2 CAs with single supply

We now go back to the case in which the goods in are provided with single supply. We present three incentive-compatible CAs: the first is deterministic, the remaining two are randomized. Among these three mechanisms, only two run in polynomial time.

#### 4.2.1 Greedy algorithm

We now present a simple greedy algorithm for CAs with single supply; see Algorithm 5. (Note that for goods with arbitrary supply, the greedy algorithm cannot do better than Algorithm 2 because of the lower bound in [16].) Recall that each bidder declares a collection of demanded sets together with a valuation for each of them. The sets considered by the greedy algorithm are all the sets demanded by all bidders (with non-zero bids).
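A greedy allocation of this kind can be sketched as follows. The scan order used here (non-increasing bid scaled by the square root of the set size, the classic order for greedy CAs) is an assumption on our part, as are all names; the precise order is part of Algorithm 5.

```python
import math

def greedy_cas(bidders):
    """Sketch of a greedy allocation for single-supply CAs.
    Bids (i, S, b) are scanned in non-increasing order of b/sqrt(|S|)
    (an assumed order). A set is skipped if one of its elements is
    already taken (an element 'witness' of its rejection) or if its
    bidder was already served by an earlier set."""
    bids = [(i, S, b)
            for i, demand in enumerate(bidders) for (S, b) in demand]
    bids.sort(key=lambda t: t[2] / math.sqrt(len(t[1])), reverse=True)
    taken, served, solution = set(), set(), {}
    for i, S, b in bids:
        if i in served or taken & S:
            continue
        solution[i] = S
        served.add(i)
        taken |= S
    return solution
```

Note how each rejected set has either a conflicting element or an earlier winning set of the same bidder; this is exactly the case distinction used in the witness argument of the analysis.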

We will use linear programming duality to prove the approximation guarantee of our algorithm. Consider the family of all demanded sets, where bidder i demands the sets in S_i; for a set S ∈ S_i, we denote by b_i(S) the bid of bidder i for S, and we let U denote the set of goods. The LP relaxation of our problem is:

 max   ∑_{i=1}^{n} ∑_{S∈S_i} b_i(S)·x_i(S)                           (5)
 s.t.  ∑_{i=1}^{n} ∑_{S∈S_i: e∈S} x_i(S) ≤ 1       ∀ e∈U             (6)
       ∑_{S∈S_i} x_i(S) ≤ 1                        ∀ i∈[n]           (7)
       x_i(S) ≥ 0                                  ∀ i∈[n], S∈S_i.   (8)

The corresponding dual linear program is then the following:

 min   ∑_{e∈U} y_e + ∑_{i=1}^{n} z_i                                 (9)
 s.t.  z_i + ∑_{e∈S} y_e ≥ b_i(S)                  ∀ i∈[n], S∈S_i    (10)
       z_i, y_e ≥ 0                                ∀ i∈[n], e∈U.     (11)

In this dual linear program, the dual variable z_i corresponds to constraint (7), and the dual variable y_e corresponds to constraint (6).
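The role of the dual in the analysis is weak LP duality: any feasible dual solution upper bounds the welfare of every feasible allocation, so exhibiting a cheap feasible dual bounds the optimum. A minimal sketch on a hand-picked toy instance (all numbers and names are ours, for illustration only):

```python
from itertools import product

# Toy instance: 2 bidders over goods U = {a, b}.
demands = [[(frozenset({'a'}), 3.0), (frozenset({'a', 'b'}), 4.0)],
           [(frozenset({'b'}), 2.0)]]

def primal_feasible(choice):
    """choice[i] is None or an index into demands[i]; constraints (6)-(7):
    each good used at most once, each bidder gets at most one set."""
    used = []
    for i, c in enumerate(choice):
        if c is not None:
            used.extend(demands[i][c][0])
    return len(used) == len(set(used))

def welfare(choice):
    return sum(demands[i][c][1]
               for i, c in enumerate(choice) if c is not None)

# A hand-picked feasible dual solution for (9)-(11):
# z_i + sum_{e in S} y_e >= b_i(S) for every demanded set S.
y = {'a': 3.0, 'b': 2.0}
z = [0.0, 0.0]
assert all(z[i] + sum(y[e] for e in S) >= b
           for i, d in enumerate(demands) for (S, b) in d)

# Weak duality: the dual objective bounds every feasible welfare.
dual_obj = sum(y.values()) + sum(z)
best = max(welfare(c)
           for c in product(*[[None, *range(len(d))] for d in demands])
           if primal_feasible(c))
assert best <= dual_obj
```

The proof of Theorem 8 follows this template: it constructs a feasible dual solution alongside the greedy run and compares its objective to the greedy welfare.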

###### Theorem 8

Algorithm 5 is a -approximation algorithm for CAs with k-minded bidders.

Proof. Suppose that Algorithm 5 has terminated and output its solution. Notice that for each demanded set that was not chosen for the final solution, either there is an element that was the witness of that event during the execution of the algorithm, or the set's bidder was already allocated another of her sets. For each rejected set we keep exactly one witness; in case there is more than one, we keep an (arbitrary) witness that belongs to the chosen set considered first by the greedy order, and we discard the remaining elements.

Let us also denote if and if .

Observe first that in the degenerate case where any feasible solution has just a single set assigned to a single bidder, the algorithm outputs an optimal solution, as required.

We then assume that this is not the case. We now define a dual solution during the execution of Algorithm 5; its definition requires knowledge of the output solution, but it is needed only for the analysis. In line 5 of Algorithm 5 we initialize the dual variables y_e, for all e ∈ U, and z_i, for all i ∈ [n]. We add the following in line 5(a) of Algorithm 5: