Thinking Twice about Second-Price Ad Auctions
Abstract
A number of recent papers have addressed the algorithmic problem of allocating advertisement space for keywords in sponsored search auctions so as to maximize revenue, most of which assume that pricing is done via a first-price auction. This does not realistically model the Generalized Second Price (GSP) auction used in practice, in which bidders pay the next-highest bid for keywords that they are allocated. Towards the goal of modelling these auctions more realistically, we introduce the Second-Price Ad Auctions problem, in which bidders’ payments are determined by the GSP mechanism.
We show that the complexity of the Second-Price Ad Auctions problem is quite different from that of the more studied First-Price Ad Auctions problem. First, unlike the first-price variant, for which small constant-factor approximations are known, we show that it is NP-hard to approximate the Second-Price Ad Auctions problem to any nontrivial factor, even when the bids are small compared to the budgets. Second, we show that this discrepancy extends even to the unit-bid, unit-budget special case that we call the Second-Price Matching problem (2PM). In particular, offline 2PM is APX-hard, and for online 2PM there is no deterministic algorithm achieving a nontrivial competitive ratio and no randomized algorithm achieving a competitive ratio better than 2. This stands in contrast to the results for the analogous special case in the first-price model, the standard bipartite matching problem, which is solvable in polynomial time and which has deterministic and randomized online algorithms achieving better competitive ratios. On the positive side, we provide a 2-approximation for offline 2PM and a 5.083-competitive randomized algorithm for online 2PM. The latter result makes use of a new generalization that we prove of a classic result on the performance of the “Ranking” algorithm for online bipartite matching.
1 Introduction
The rising economic importance of online sponsored search advertising has led to a great deal of research focused on developing its theoretical underpinnings. (See, e.g., [18] for a survey.) Since search engines such as Google, Yahoo! and MSN depend on sponsored search for a significant fraction of their revenue, a key problem is how to optimally allocate ads to keywords (user searches) so as to maximize search engine revenue [1, 2, 3, 5, 6, 14, 15, 20, 21, 24]. Most of the research on the dynamic version of this problem assumes that once the participants in each keyword auction are determined, the pricing is done via a first-price auction; in other words, bidders pay what they bid. This does not realistically model the standard mechanism used by search engines, called the Generalized Second Price mechanism (GSP) [10, 25].
In an attempt to model reality more closely, we study the Second-Price Ad Auctions problem, which is the analogue of the above allocation problem when bidders’ payments are determined by the GSP mechanism and there is only one slot for each keyword. The GSP mechanism for a given keyword auction reduces to a second-price auction when there is one slot per keyword – given the participants in the auction, it allocates the advertisement slot to the highest bidder, charging that bidder the bid of the second-highest bidder.
In the Second-Price Ad Auctions problem, we assume that there is a set of m keywords and a set of n bidders, where each bidder i has a known daily budget B_i and a nonnegative bid b_ij for every keyword j. The keywords are ordered by their arrival time, and as each keyword arrives, the algorithm (i.e., the search engine) chooses the bidders to participate in that particular auction and runs a second-price auction with respect to those participants. Thus, instead of selecting one bidder for each keyword, two bidders need to be selected by the algorithm. Of these two bidders, the bidder with the higher bid (where bids are always reduced to the minimum of the actual bid and the bidder’s remaining budget) is allocated that keyword’s advertisement slot at the price of the other bid.
This process results in an allocation and pricing of the advertisement slots associated with each of the keywords. The goal is to select the bidders participating in each auction to maximize the total profit extracted by the algorithm. For an example instance of this problem, see Figure 1.
We note that for simplicity of presentation we have chosen to present the model, the algorithms and the results as if the bidders are competing for a single slot. Obviously, our hardness results hold for the multi-slot problem as well.
1.1 Our Results
We begin by considering the offline version of the Second-Price Ad Auctions problem, in which the algorithm knows all of the original bids of the bidders (Section 3). Our main result here is that it is NP-hard to approximate the optimal solution to this problem to within a factor better than εm for any constant ε > 0, where m is the number of keywords and k is a constant independent of m such that no bidder bids more than a 1/k fraction of its initial budget on any keyword. Thus, even when bids are small compared to budgets, it is not possible in the worst case to get a good approximation to the optimal revenue. (We show that it is trivial to get a matching approximation algorithm.) This result stands in sharp contrast to the standard First-Price Ad Auctions problem, for which there is a 4/3-approximation to the offline problem [6, 24] (even for k = 1), and an e/(e-1)-competitive algorithm for the online problem when bids are small compared to budgets [5, 21] (i.e., as k → ∞).
We then turn our attention to a theoretically appealing special case that we call Second-Price Matching. In this version of the problem, all bids are either 0 or 1 and all budgets are 1. As before, the keywords are ordered by arrival time, and a keyword j can be matched to a bidder i for a profit of 1 only if b_ij = 1, there is a distinct bidder i' with b_i'j = 1, and neither i nor i' has been matched to a keyword that arrived before j. For an example instance of this problem, see Figure 2.
Recall that for First-Price Matching or, as we know and love it, maximum bipartite matching, the offline problem can of course be solved optimally in polynomial time, whereas for the online problem we have the trivial 2-competitive deterministic greedy algorithm and the celebrated e/(e-1)-competitive randomized algorithm due to Karp, Vazirani and Vazirani [16], both of which are best possible.
In contrast, we show that the Second-Price Matching problem is APX-hard (Section 4.1). We also give a 2-approximation algorithm for the offline problem (Section 4.2). We then turn to the online version of the problem. Here, we show that no deterministic online algorithm can achieve a competitive ratio better than m, where m is the number of keywords in the instance, and that no randomized online algorithm can achieve a competitive ratio better than 2 (Section 5.1). On the other hand, we present a randomized online algorithm that achieves a competitive ratio of 5.083 (Section 5.2). To obtain this competitive ratio, we prove a generalization of the result due to Karp, Vazirani, and Vazirani [16] and Goel and Mehta [15] that the Ranking algorithm for online bipartite matching achieves a competitive ratio of e/(e-1).
1.2 Related Work
As discussed above, the related First-Price Ad Auctions problem (also called the Adwords problem [21] and the Maximum Budgeted Allocation problem [3, 6, 24]) has received a fair amount of attention. It is an important special case of submodular welfare maximization (SMW) [9, 11, 17, 19, 22, 26], the problem of maximizing utility in a combinatorial auction in which the utility functions are submodular, and is also related to the Generalized Assignment Problem (GAP) [7, 11, 12, 23]. Mehta et al. [21] presented an algorithm for the online version that achieves an optimal competitive ratio of e/(e-1) for the case when the bids are much smaller than the budgets (i.e., when k → ∞), a result also proved by Buchbinder et al. [5]. When there is no restriction on the values of the bids relative to the budgets (i.e., when k can be as low as 1), the best known competitive ratio is 2 [19]. For the offline version of the problem, a sequence of papers [19, 2, 11, 3, 24, 6] culminating in a paper by Chakrabarty and Goel and, independently, a paper by Srinivasan, shows that the offline problem can be approximated to within a factor of 4/3 and that there is no polynomial-time approximation algorithm that achieves a ratio better than 16/15 unless P = NP [6].
The papers most closely related to this one are the works of Abrams, Mendelevitch and Tomlin [1] and of Goel, Mahdian, Nazerzadeh and Saberi [14]. The latter looks at the online allocation problem when the search engine is committed to charging under the GSP scheme, with multiple slots per keyword. They study two models, both of which differ from the model studied in this paper, even for one slot. Their first model, called the strict model, allows a bidder’s bid to be above his remaining budget, as long as the remaining budget is strictly positive. However, as in our model, when a bidder is allocated a slot, that bidder is never charged more than his remaining budget. Thus, in the strict model, a bidder with a negligible amount of remaining budget can keep his bids high indefinitely, and as long as that bidder is never allocated another slot, this high bid can determine the prices other bidders pay on many keywords. (In terms of gaming the system, this would be a great way for a bidder to force his competitors to pay a lot for slots without backing those payments up with budget; this effect is even worse in the non-strict model.) Their second model, called the non-strict model, differs from the strict model in that bidders can keep their bids positive even after their budget is depleted. Thus, even after a bidder’s budget is depleted, that bidder can determine the prices that other bidders pay on keywords indefinitely. However, a bidder is never charged more than his remaining budget for a slot. Therefore, if a bidder is allocated a slot after his budget is fully depleted, he gets the slot for free.
Under the assumption that bids are small compared to budgets, Goel et al. present an e/(e-1)-competitive algorithm for the non-strict model and a 3-competitive algorithm for the strict model. They also show that OPT_s ≤ OPT_ns, where these quantities refer to the optimal offline revenue of the search engine in their strict and non-strict models, respectively. Their algorithms build on the linear programming formulation of Abrams et al. [1] for the offline version of the strict model.
Interestingly, neither of their models nor our model completely dominates the others in terms of the optimal offline revenue. However, it is fairly easy to show that neither OPT_s nor OPT_ns is ever more than a constant factor larger than OPT, the optimal offline revenue in our model (see Appendix A for a proof of this). On the other hand, the optimal offline revenue in our model can be Ω(m) times as big as OPT_s and OPT_ns, where m is the number of keywords, and this holds even when k is a large constant (see Figure 3). This is not surprising, given the strong hardness of approximation in our model, versus the fact that constant-factor approximations are available, even online, in theirs.
Notice also that for the special case of Second-Price Matching, the strict model and our model are identical, whereas the non-strict model reduces to standard maximum matching (on those keywords that have at least two bidders bidding on them).
We feel that our model, in which bidders are not allowed to bid more than their remaining budget, is more natural than the strict and non-strict models. It seems inherently unfair that a bidder with negligible or no budget should be able to indefinitely set high prices for other bidders. We recognize, of course, that there may be technical issues involving the scale and distributed nature of search engines that could make it difficult to implement our model precisely.
2 Model and Notation
We define the Second-Price Ad Auctions (2PAA) problem formally as follows. The input is a set of ordered keywords {1, ..., m} and a set of bidders {1, ..., n}. Each bidder i has a budget B_i and a nonnegative bid b_ij for every keyword j. We assume that all of bidder i’s bids are less than or equal to B_i.
Let B_i(j) be the remaining budget of bidder i immediately after the j-th keyword is processed (so B_i(0) = B_i for all i), and let b'_ij = min(b_ij, B_i(j-1)) be the effective bid of bidder i on keyword j. (Both quantities are defined inductively.) A solution (or second-price matching) to 2PAA chooses for the j-th keyword a pair of bidders x and y such that b'_xj ≥ b'_yj, allocates the slot for keyword j to bidder x, and charges x a price of b'_yj, the effective bid of y. (We say that x acts as the first-price bidder for j and y acts as the second-price bidder for j.) The budget of x is then reduced by this price, so B_x(j) = B_x(j-1) - b'_yj. For all other bidders i ≠ x, B_i(j) = B_i(j-1). The final value of the solution is the sum of the prices charged over all keywords, and the goal is to find a feasible solution of maximum value.
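The budget dynamics defined above can be made concrete with a short Python sketch. All names are ours, not the paper’s; `choices` plays the role of the solution’s chosen pair of participants for each keyword, in arrival order:

```python
def run_2paa(budgets, bids, choices):
    """Simulate the 2PAA process and return the total revenue.

    budgets: dict bidder -> initial budget B_i.
    bids:    dict (bidder, keyword) -> original bid b_ij (missing = 0).
    choices: list of (keyword, bidder_a, bidder_b) triples in arrival
             order, the pair selected to participate in each auction.

    Effective bids are capped at the bidder's remaining budget; the
    higher effective bidder wins and pays the other effective bid.
    """
    remaining = dict(budgets)
    revenue = 0.0
    for keyword, a, b in choices:
        # Effective bid: min(original bid, remaining budget).
        ba = min(bids.get((a, keyword), 0.0), remaining[a])
        bb = min(bids.get((b, keyword), 0.0), remaining[b])
        winner, price = (a, bb) if ba >= bb else (b, ba)
        remaining[winner] -= price  # winner is charged the other bid
        revenue += price
    return revenue
```

For example, with two bidders bidding 5 and 3 on a single keyword, selecting both yields revenue 3: the higher bidder wins and pays the second price.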
In the offline version of the problem, all of the bids are known to the algorithm beforehand, whereas in the online version of the problem, keyword j and the bids b_ij for each bidder i are revealed only when keyword j arrives, at which point the algorithm must irrevocably map j to a pair of bidders without knowing the bids for the keywords that will arrive later.
Second-Price Matching (2PM) is the special case in which b_ij is either 0 or 1 for all pairs i, j and B_i = 1 for all i. Perhaps, however, it is more intuitive to think of it as a variant of maximum bipartite matching. Viewing it this way, the input is a bipartite graph G = (Q, B, E) on keywords Q and bidders B, and the vertices in Q must be matched, in order, to the vertices in B such that the profit of matching keyword q to bidder b is 1 if and only if there is at least one additional vertex b' that is a neighbor of q and is unmatched at that time.
Note that in 2PM, a keyword can be allocated for profit only if its degree is at least two. Therefore, we assume without loss of generality that in every input to 2PM, the degree of every keyword is at least two.
For an input to 2PAA, let k be the largest value such that every bid b_ij is at most B_i/k, and let m be the number of keywords.
3 Hardness of Approximation of 2PAA
In this section, we show that it is NP-hard to approximate 2PAA to a factor better than εm for any constant ε > 0, even when k is a constant independent of m (Theorem 1), and provide a trivial algorithm with an approximation guarantee that matches this factor.
For a constant k, let 2PAA(k) be the version of 2PAA in which we are promised that b_ij ≤ B_i/k for all i and j. The proof of the following theorem (presented in full in Appendix C.1) uses a gadget similar to the instance presented in Figure 3. This gadget exploits the sensitivity of the problem to small changes in budget. In particular, if the budget of a key bidder is reduced by a small amount, then the optimal revenue is much higher than if it is not reduced. We prove our inapproximability result by setting up the reduction so that the budget of this bidder is reduced if and only if the answer to the problem we reduce from is “yes.”
Theorem 1.
Let k be a constant integer. For any constant ε > 0, it is NP-hard to approximate 2PAA(k) to a factor of εm.
Proof.
See Appendix C.1. ∎
Theorem 2.
Let k be a constant integer. There is an m/k-approximation to 2PAA(k).
Proof.
For each keyword j, let p_j be the second-highest bid for j. Consider the algorithm that selects the k keywords with the highest values of p_j and then allocates each of these keywords to its two highest bidders, earning p_j for each of them. Since no bidder bids more than a 1/k fraction of its budget for any keyword, no bids are reduced from their original values during this allocation. Hence, the profit of this allocation is at least (k/m) Σ_j p_j. Since the value of the optimal solution cannot be larger than Σ_j p_j, it follows that this is an m/k-approximation to 2PAA(k). ∎
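The trivial allocation in this proof can be sketched in Python. This is a hedged reconstruction: the instance encoding (a dict from each keyword to its bid dictionary) and all names are ours, and we rely, as in the proof, on the promise that bids are at most a 1/k fraction of budgets, so no bid reductions occur:

```python
import heapq

def trivial_allocation(bids, k):
    """Select the k keywords with the largest second-highest bid p_j and
    allocate each to its top two bidders.

    bids: dict keyword -> dict bidder -> bid.
    Returns (profit, allocation), where allocation maps each selected
    keyword to its (winner, second_price_bidder) pair.
    """
    second_prices = {}
    pairs = {}
    for j, bj in bids.items():
        if len(bj) < 2:
            continue  # a second-price auction needs two participants
        (w, _), (s, p_j) = heapq.nlargest(2, bj.items(), key=lambda t: t[1])
        second_prices[j] = p_j
        pairs[j] = (w, s)
    chosen = heapq.nlargest(k, second_prices, key=second_prices.get)
    profit = sum(second_prices[j] for j in chosen)
    return profit, {j: pairs[j] for j in chosen}
```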
4 Offline SecondPrice Matching
In this section, we turn our attention to the offline version of the special case of Second-Price Matching (2PM). First, in Section 4.1, we show that 2PM is APX-hard. This stands in contrast to the maximum matching problem, the corresponding special case of the First-Price Ad Auctions problem, which is solvable in polynomial time. Second, in Section 4.2, we give a 2-approximation for 2PM.
4.1 Hardness of Approximation
To prove our hardness result for 2PM, we use the following result due to Chlebík and Chlebíková.
Theorem 3 (Chlebík and Chlebíková [8]).
It is NP-hard to approximate Vertex Cover on 4-regular graphs to within a factor of 53/52.
Our hardness result is the following theorem.
Theorem 4.
There is a constant δ > 0 such that it is NP-hard to approximate 2PM to within a factor of 1 + δ.
Proof.
See Appendix C.2. ∎
4.2 A 2Approximation Algorithm
Consider an instance of the 2PM problem on a bipartite graph G. We provide an algorithm that first finds a maximum matching M and then uses M to return a second-price matching that contains at least half of the keywords matched by M. (Note that M is a partial function from keywords to bidders.) Given a matching M, call an edge (q, b) with b ≠ M(q) an up-edge if b is matched by M to a keyword that arrives before q, and a down-edge otherwise. Recall that we have assumed without loss of generality that the degree of every keyword in G is at least two. Therefore, every keyword that is matched by M must have at least one up-edge or down-edge. Theorem 5 shows that the following algorithm, called ReverseMatch, is a 2-approximation for 2PM.
ReverseMatch Algorithm:
Initialization: Find an arbitrary maximum matching M on G.
Constructing a second-price matching: Consider the matched keywords in reverse order of their arrival. For each such keyword q:
If keyword q is adjacent to a down-edge (q, b): Assign keyword q to bidder M(q) (with b acting as the second-price bidder).
Else: Choose an arbitrary bidder b ≠ M(q) that is adjacent to keyword q. Remove b’s matching edge from M. Assign keyword q to bidder M(q) (with b acting as the second-price bidder).
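A Python sketch of ReverseMatch under our reading of the pseudocode (names hypothetical; the maximum matching M is assumed given, keywords are indexed by arrival order, and the down-edge test is made against the current working copy of M):

```python
def reverse_match(neighbors, matching):
    """neighbors: keyword -> set of adjacent bidders, keywords keyed by
    arrival order (0, 1, 2, ...).  matching: keyword -> bidder, an
    arbitrary maximum matching.  Returns (keyword, winner, second)
    triples forming a second-price matching."""
    M = dict(matching)                  # working copy; edges may be removed
    inv = {b: q for q, b in M.items()}  # bidder -> its matched keyword
    result = []
    for q in sorted(M, reverse=True):   # matched keywords, latest first
        if q not in M:
            continue                    # this keyword's edge was removed
        w = M[q]
        # down-edge partner: a neighbor b != w that is unmatched, or
        # matched to a keyword arriving after q (hence free at time q)
        down = [b for b in neighbors[q]
                if b != w and (b not in inv or inv[b] > q)]
        if down:
            result.append((q, w, down[0]))
        else:                           # only up-edges: sacrifice one
            b = next(b for b in neighbors[q] if b != w)
            q2 = inv.pop(b)             # b's earlier match is dropped
            del M[q2]
            result.append((q, w, b))
    return result
```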
Theorem 5.
The ReverseMatch algorithm is a 2-approximation for 2PM.
Proof.
See Appendix C.3. ∎
5 Online SecondPrice Matching
In this section, we consider the online 2PM problem, in which the keywords arrive one-by-one and must be matched by the algorithm as they arrive. We start, in Section 5.1, by giving a simple lower bound showing that no deterministic algorithm can achieve a competitive ratio better than m, the number of keywords. Then we move to randomized online algorithms and show that no randomized algorithm can achieve a competitive ratio better than 2. In Section 5.2, we provide a randomized online algorithm that achieves a competitive ratio of 5.083.
5.1 Lower Bounds
The following theorem establishes our lower bound for deterministic algorithms, which is matched by the trivial algorithm that arbitrarily allocates the first keyword to arrive and refuses to allocate any of the remaining keywords.
Theorem 6.
For any m, there is an adversary that creates a graph with m keywords that forces any deterministic algorithm to achieve a competitive ratio no better than m.
Proof.
The adversary shows the algorithm a single keyword (keyword 1) that has two adjacent bidders, b_1 and b_1'. If the algorithm does not match keyword 1 at all, a new keyword 2 arrives that is adjacent to two new bidders b_2 and b_2'. The adversary continues in this way until either m keywords arrive or the algorithm matches some keyword j. In the first case, the algorithm’s performance is at most 1 (because it might match keyword m), whereas the adversary can match all m keywords. Hence, the ratio is no better than m.
In the second case, the adversary continues as follows. Suppose without loss of generality that the algorithm matches keyword j to b_j. Then each keyword l, for j < l ≤ m, has one edge to b_j and one edge to a new bidder c_l. Since b_j is already matched, the algorithm cannot match any of these keywords for a profit, so its performance is 1. The adversary, on the other hand, can match each keyword i for profit, for i < j, and if it matches keyword j to b_j', then it can use the still-unmatched b_j as a second-price bidder for the remaining keywords to match them all to the c_l’s for profit. Hence, the adversary can construct a second-price matching of size m. ∎
The following theorem establishes our lower bound of 2 for any (randomized) online algorithm for 2PM.
Theorem 7.
The competitive ratio of any randomized algorithm for 2PM must be at least 2.
Proof.
See Appendix C.4. ∎
5.2 A Randomized Competitive Algorithm
In this section, we provide an algorithm that achieves a competitive ratio of 5.083. The result builds on a new generalization of the result that the Ranking algorithm for online bipartite matching achieves a competitive ratio of e/(e-1). This was originally shown by Karp, Vazirani, and Vazirani [16], though a mistake was recently found in their proof by Krohn and Varadarajan and corrected by Goel and Mehta [15].
The online bipartite matching problem is simply the first-price version of 2PM, i.e., the problem in which there is no requirement that a second-price bidder exist in order to earn a profit of 1 for a match. The Ranking algorithm chooses a random permutation on the bidders and uses it to choose matches for the keywords as they arrive. This is described more precisely below.
Ranking Algorithm:
Initialization: Choose a random permutation (ranking) σ of the bidders.
Online Matching: Upon arrival of keyword q: Let N(q) be the set of neighbors of q that have not been matched yet. If N(q) ≠ ∅, match q to the bidder b in N(q) that minimizes σ(b).
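Ranking is short enough to transcribe directly (graph as a dict from keyword to its neighbor set, keywords arriving in sorted key order; names are ours):

```python
import random

def ranking(neighbors, bidders, seed=None):
    """Ranking for online bipartite matching: fix a random permutation
    of the bidders, then match each arriving keyword to its unmatched
    neighbor of lowest rank.  Returns the matching keyword -> bidder."""
    rng = random.Random(seed)
    perm = list(bidders)
    rng.shuffle(perm)                       # the random ranking sigma
    rank = {b: r for r, b in enumerate(perm)}
    matched = set()
    matching = {}
    for q in sorted(neighbors):             # keywords in arrival order
        avail = [b for b in neighbors[q] if b not in matched]
        if avail:
            b = min(avail, key=rank.get)    # lowest-ranked free neighbor
            matching[q] = b
            matched.add(b)
    return matching
```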
Karp, Vazirani, and Vazirani, and Goel and Mehta prove the following result.
Theorem 8 (Karp, Vazirani, and Vazirani [16] and Goel and Mehta [15]).
The Ranking algorithm for online bipartite matching achieves a competitive ratio of e/(e-1).
In order to state our generalization of this result, we define the notion of a left copy of a bipartite graph G = (Q, B, E). Intuitively, a left copy of G makes two copies of each keyword such that the neighborhood of each copy of a keyword q is the same as the neighborhood of q. More precisely, we have the following definition.
Definition 9.
Given a bipartite graph G = (Q, B, E), define a left copy of G to be a graph H = (Q', B, E') for which there exists a map f : Q' → Q such that

(i) for each q in Q there are exactly two vertices q' in Q' such that f(q') = q, and

(ii) for all q' in Q' and b in B, (q', b) is an edge of H if and only if (f(q'), b) is an edge of G.
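With two copies per keyword, as used in Section 5.2, a left copy can be built mechanically. A small sketch with hypothetical names; copies of keyword q are keyed by (q, t) so that, in sorted order, the copies of each keyword arrive consecutively:

```python
def left_copy(neighbors, copies=2):
    """Build a left copy H of G: `copies` copies of each keyword, each
    copy inheriting the original keyword's neighborhood unchanged.

    neighbors: keyword -> set of adjacent bidders."""
    return {(q, t): set(nbrs)
            for q, nbrs in neighbors.items()
            for t in range(copies)}
```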
Our generalization of Theorem 8 describes the performance of Ranking on a graph H that is a left copy of G. Its proof, presented in Appendix B, is a modification of the proof of Theorem 8 presented by Birnbaum and Mathieu [4].
Theorem 10.
Let G be a bipartite graph that has a maximum matching of size OPT, and let H be a left copy of G. Then the expected size of the matching returned by Ranking on H is at least a constant fraction of OPT; the exact constant, which is what drives the 5.083-competitive ratio below, is derived in Appendix B.
Proof.
See Appendix B. ∎
Using this result, we are able to prove that the following algorithm, called RankingSimulate, achieves a competitive ratio of 5.083.
RankingSimulate Algorithm:
Initialization: Set M, the set of matched bidders, to ∅. Set R, the set of reserved bidders, to ∅. Choose a random permutation (ranking) σ of the bidders.
Online Matching: Upon arrival of keyword q: Let A be the set of neighbors of q that are not in M or R.
If |A| = 0, do nothing.
If |A| = 1, let b be the single bidder in A. With probability 1/2, match q to b and add b to M; and with probability 1/2, add b to R.
If |A| ≥ 2, let b and b' be the two distinct bidders in A that minimize σ. With probability 1/2, match q to b, add b to M, and add b' to R; and with probability 1/2, match q to b', add b' to M, and add b to R.
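The following Python sketch transcribes RankingSimulate (names hypothetical). One detail is our own reading: when |A| = 1 and the coin comes up “match,” we credit a profit only if some other neighbor of the keyword is already reserved, and hence still unmatched, so that it can serve as the second-price bidder:

```python
import random

def ranking_simulate(neighbors, bidders, seed=None):
    """Run RankingSimulate on a 2PM instance; returns the total profit.

    neighbors: keyword -> set of adjacent bidders, keywords arriving in
    sorted key order.  `matched` is the set M, `reserved` the set R."""
    rng = random.Random(seed)
    perm = list(bidders)
    rng.shuffle(perm)                            # random ranking sigma
    rank = {b: r for r, b in enumerate(perm)}
    matched, reserved = set(), set()
    profit = 0
    for q in sorted(neighbors):
        avail = [b for b in neighbors[q]
                 if b not in matched and b not in reserved]
        if not avail:
            continue
        if len(avail) == 1:
            b = avail[0]
            if rng.random() < 0.5:
                matched.add(b)
                # profit only if another neighbor is reserved (and so
                # still unmatched): it acts as the second-price bidder
                if any(x in reserved for x in neighbors[q] if x != b):
                    profit += 1
            else:
                reserved.add(b)
        else:
            b1, b2 = sorted(avail, key=rank.get)[:2]  # two lowest ranks
            if rng.random() < 0.5:
                matched.add(b1); reserved.add(b2)     # b1 wins, b2 is 2nd
            else:
                matched.add(b2); reserved.add(b1)
            profit += 1
    return profit
```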
Let G be the bipartite input graph to 2PM, and let H be a left copy of G. In the arrival order for H, the two copies of each keyword arrive consecutively.
Lemma 11.
Fix a ranking σ on the bidders. For each bidder b, let X_b(σ) be the indicator variable for the event that b is matched by Ranking on H when the ranking is σ. (Note that once σ is fixed, X_b(σ) is deterministic.) Let Y_b(σ) be the indicator variable for the event that b is matched by RankingSimulate on G when the ranking is σ. Then E[Y_b(σ)] = X_b(σ)/2.
Proof.
It is easy to establish the invariant that for all σ, X_b(σ) = 1 if and only if RankingSimulate puts b in either M or R. Furthermore, each bidder is put in M or R at most once by RankingSimulate. The lemma follows because each time RankingSimulate adds a bidder to M or R, it matches it with probability 1/2. ∎
Theorem 12.
The competitive ratio of RankingSimulate is 5.083.
Proof.
For a permutation σ of the bidders, let μ'(σ) be the matching of G returned by RankingSimulate, and let μ(σ) be the matching of H returned by Ranking. Lemma 11 implies that, conditioned on σ, E[|μ'(σ)|] = |μ(σ)|/2. Taking the expectation over σ, Theorem 10 therefore bounds E[|μ'|] from below by half of the guarantee it gives for Ranking on H.
Fix a bidder b. Let P_b be the profit from b obtained by RankingSimulate. Suppose that b is matched by RankingSimulate to keyword q. Recall that we have assumed without loss of generality that the degree of q is at least 2, so there is another bidder b' adjacent to q. Then, given that b is matched to q, the probability that b' is matched to any keyword is no greater than 1/2, so b’s match earns its profit of 1 with probability at least 1/2. Hence, the expected value of the second-price matching returned by RankingSimulate is at least E[|μ'|]/2 ≥ OPT/5.083,
where OPT is the size of the optimal second-price matching on G. ∎
6 Conclusion
In this paper, we have shown that the complexity of the Second-Price Ad Auctions problem is quite different from that of the more studied First-Price Ad Auctions problem, and that this discrepancy extends to the special case of 2PM, whose first-price analogue is bipartite matching. On the positive side, we have given a 2-approximation for offline 2PM and a 5.083-competitive algorithm for online 2PM.
Some open questions remain. Closing the gap between our 2-approximation and the APX-hardness bound for offline 2PM is one clear direction for future research, as is closing the gap between the lower bound of 2 and the upper bound of 5.083 in the competitive ratio for online 2PM. Another question we leave open is whether the analysis of RankingSimulate is tight, though we expect that it is not.
References
 [1] Zoe Abrams, Ofer Mendelevitch, and John Tomlin. Optimal delivery of sponsored search advertisements subject to budget constraints. In EC ’07: Proceedings of the 8th ACM conference on Electronic commerce, New York, NY, USA, 2007. ACM.
 [2] Nir Andelman and Yishay Mansour. Auctions with budget constraints. In Proceedings of SWAT ’04 (Lecture Notes in Computer Science 3111), pages 26–38. Springer, 2004.
 [3] Yossi Azar, Benjamin Birnbaum, Anna R. Karlin, Claire Mathieu, and C. Thach Nguyen. Improved approximation algorithms for budgeted allocations. In ICALP ’08 (to appear), 2008.
 [4] Benjamin Birnbaum and Claire Mathieu. Online bipartite matching made simple. SIGACT News, 39(1):80–87, 2008.
 [5] Niv Buchbinder, Kamal Jain, and Joseph (Seffi) Naor. Online primal-dual algorithms for maximizing ad-auctions revenue. In Proceedings of ESA ’07 (Lecture Notes in Computer Science 4698), pages 253–264. Springer, 2007.
 [6] D. Chakrabarty and G. Goel. On the approximability of budgeted allocations and improved lower bounds for submodular welfare maximization and GAP. Manuscript, 2008.
 [7] Chandra Chekuri and Sanjeev Khanna. A PTAS for the multiple knapsack problem. In SODA ’00: Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms, pages 213–222, Philadelphia, PA, USA, 2000. Society for Industrial and Applied Mathematics.
 [8] Miroslav Chlebík and Janka Chlebíková. Complexity of approximating bounded variants of optimization problems. Theoretical Computer Science, 354(3):320–338, 2006.
 [9] Shahar Dobzinski and Michael Schapira. An improved approximation algorithm for combinatorial auctions with submodular bidders. In SODA ’06: Proceedings of the seventeenth annual ACMSIAM symposium on Discrete algorithm, pages 1064–1073, New York, NY, USA, 2006. ACM.
 [10] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. American Economic Review, 97:242–259, 2007.
 [11] Uriel Feige and Jan Vondrak. Approximation algorithms for allocation problems: Improving the factor of 1 - 1/e. In FOCS ’06, pages 667–676, 2006.
 [12] Lisa Fleischer, Michel X. Goemans, Vahab S. Mirrokni, and Maxim Sviridenko. Tight approximation algorithms for maximum general assignment problems. In SODA ’06: Proceedings of the seventeenth annual ACMSIAM symposium on Discrete algorithm, pages 611–620, New York, NY, USA, 2006. ACM.
 [13] Michael R. Garey and David S. Johnson. Computers and Intractability. W. H. Freeman and Company, 1979.
 [14] Ashish Goel, Mohammad Mahdian, Hamid Nazerzadeh, and Amin Saberi. Advertisement allocation for generalized second pricing schemes. In Workshop on Sponsored Search Auctions (to appear), 2008.
 [15] Gagan Goel and Aranyak Mehta. Online budgeted matching in random input models with applications to adwords. In SODA ’08: Proceedings of the nineteenth annual ACMSIAM symposium on discrete algorithms, pages 982–991, Philadelphia, PA, USA, 2008. Society for Industrial and Applied Mathematics.
 [16] R. M. Karp, U. V. Vazirani, and V. V. Vazirani. An optimal algorithm for on-line bipartite matching. In STOC ’90: Proceedings of the twenty-second annual ACM symposium on Theory of computing, pages 352–358, New York, NY, USA, 1990. ACM Press.
 [17] Subhash Khot, Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Inapproximability results for combinatorial auctions with submodular utility functions. In Proceedings of WINE ’05 (Lecture Notes in Computer Science 3828), pages 92–101. Springer, 2005.
 [18] Sebastien Lahaie, David M. Pennock, Amin Saberi, and Rakesh V. Vohra. Sponsored search auctions. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory, pages 699–716. Cambridge University Press, 2007.
 [19] Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55(2):270–296, 2006.
 [20] Mohammad Mahdian, Hamid Nazerzadeh, and Amin Saberi. Allocating online advertisement space with unreliable estimates. In EC ’07: Proceedings of the 8th ACM conference on Electronic commerce, pages 288–294, New York, NY, USA, 2007. ACM.
 [21] Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani. Adwords and generalized online matching. J. ACM, 54(5):22, 2007.
 [22] Vahab Mirrokni, Michael Schapira, and Jan Vondrak. Tight informationtheoretic lower bounds for welfare maximization in combinatorial auctions. In EC ’08, 2008.
 [23] David Shmoys and Eva Tardos. An approximation algorithm for the generalized assignment problem. Mathematical Programming, 62:461–474, 1993.
 [24] Aravind Srinivasan. Budgeted allocations in the fullinformation setting. In APPROX ’08 (to appear), 2008.
 [25] Hal R. Varian. Position auctions. International Journal of Industrial Organization, 25:1163–1178, 2007.
 [26] Jan Vondrak. Optimal approximation for the submodular welfare problem in the value oracle model. In STOC ’08: Proceedings of the 40th annual ACM symposium on Theory of computing, pages 67–74, New York, NY, USA, 2008. ACM.
 [27] Andrew Yao. Probabilistic computations: Toward a unified measure of complexity. In FOCS ’77: Proceedings of the 18th IEEE Symposium on Foundations of Computer Science, pages 222–227, 1977.
Appendix A Discussion of Related Models
In this section, we prove the statements claimed in the introduction regarding the relationships between the optimal solutions of our model and the strict and non-strict models of Goel et al. [14]. Given an instance I, let OPT(I) be the optimal solution value in our model; let OPT_s(I) be the optimal solution value under the strict model; and let OPT_ns(I) be the optimal solution value under the non-strict model. We prove the following theorem.
Theorem 13.
For any instance I, OPT_s(I) ≤ OPT_ns(I) ≤ 4·OPT(I).
The first inequality follows from the work of Goel et al. [14], so we need only prove that OPT_ns(I) ≤ 4·OPT(I).
The core of our argument is a reduction from 2PAA to the First-Price Ad Auctions problem (1PAA) — recall that this problem has also been called the Adwords problem [21] and the Maximum Budgeted Allocation problem [3, 6, 24] — in which only one bidder is chosen for each keyword and that bidder pays the minimum of its bid and its remaining budget. Given an instance I of 2PAA, we construct an instance I' of 1PAA by replacing each bid b_ij by b'_ij = min{ b_ij, max_{i' ≠ i} b_i'j }, i.e., each bid is capped at the highest bid of the other bidders on that keyword.
Denote by OPT_1p(I') the optimal value of the first-price model on I'. The following two lemmas prove Theorem 13 by relating both OPT_ns(I) and OPT(I) to OPT_1p(I').
Lemma 14.
OPT_ns(I) ≤ OPT_1p(I').
Proof.
For an instance I, we can view a non-strict second-price allocation of I as a pair of (partial) functions f and g from the keywords to the bidders, where f maps each keyword to the bidder to which it is allocated and g maps each keyword to the bidder acting as its second-price bidder. Thus, if f(q) = x and g(q) = y, then q is allocated to x and the price x pays is at most b_yq, the bid of y. We have, for all such q, x, and y, that b_yq ≤ b'_xq.
We construct the first-price allocation on I' defined by f and claim that the value of this first-price allocation is at least the value of the non-strict allocation defined by f and g. It suffices to show that for any bidder x, the profit that the non-strict allocation gets from x is at most the profit that the first-price allocation gets from x, or in other words, that min(B_x, Σ_{q: f(q)=x} b_{g(q)q}) ≤ min(B_x, Σ_{q: f(q)=x} b'_xq).
This inequality follows trivially from the fact that for all allocated keywords , and hence the lemma follows. ∎
Lemma 15.
.
Proof.
Given an optimal first-price allocation of , we can assume without loss of generality that each bidder's budget can only be exhausted by the last keyword allocated to it; more formally, if are the keywords allocated to a bidder, in that order, then we can assume that . The reason is that if for some , and , then we can ignore the allocation of to without losing any profit.
With this assumption, we design a randomized algorithm that constructs a second-price allocation on whose expected value in our model is at least of the first-price allocation's value. Viewing the first-price allocation of as a (partial) function from the keywords to the bidders and denoting by the bidder for which , the algorithm is as follows.
Random Construction:
  Randomly mark each bidder with probability .
  For each unmarked bidder :
    Let .
    For each keyword such that :
      If is marked: .
      Assume that , where come in that order.
      If : let and for all .
      Else:
        If : let and for all .
        Else: let and .
We claim that for the and defined by this construction, whenever is set to , the profit from that allocation is . This is not trivial because in our model, if a bidder’s remaining budget is smaller than its bid for a keyword, it changes its bid for that keyword to its remaining budget. However, one can easily verify that in all cases, if we set and , the remaining budget of is at least . Thus, the (modified) bid of for is still at least the original bid of for .
We claim that the expected value of the second-price allocation defined by and is at least . For each bidder , let be the random variable denoting the profit that and get from , and let be the profit that gets from . We have , so it suffices to show that for all .
Consider any that is unmarked. Let . If , then . If , then . Thus, in both cases, we have
which implies
∎
Appendix B Proof of Theorem 10
In this appendix, we provide a full proof of Theorem 10. The proof presented here is quite similar to the simplified proof of Theorem 8 presented by Birnbaum and Mathieu [4]. For intuition into the proof presented here, the interested reader is referred to that work. (For those familiar with the proof in [4], the main difference between the proof of Theorem 10 presented here and the proof of Theorem 8 presented in [4] appears in Lemma 21. Instead of letting be the single vertex that is matched to by the perfect matching, as is done in [4], we choose uniformly at random from one of the vertices that correspond to the vertex that is matched to by the perfect matching. The rest of the proof is essentially the same, but we present it in its entirety here for completeness.)
Let be a bipartite graph and let be a left copy of . Let be a map that satisfies the conditions of Definition 9. Let be a maximum matching of .
Let denote the matching constructed on for arrival order , when the ranking is . Consider another process in which the vertices in arrive in the order given by and are matched to the available vertex that minimizes . Call the matching constructed by this process . It is not hard to see that these matchings are identical, a fact that is proved in [16].
Lemma 16 (Karp, Vazirani, and Vazirani [16]).
For any permutations and , .
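As a concrete illustration of this equivalence, the following is a minimal sketch in which offline vertices carry a ranking π and online vertices arrive in order σ. All names (`ranking`, `by_rank`, the graph encoding) are ours, not the paper's; this is only a sanity check of the classic fact, not an implementation from the paper.

```python
import random

def ranking(adj, pi, sigma):
    """Online process: vertices arrive in order sigma; each is matched
    greedily to its unmatched offline neighbor of minimum rank pi."""
    match, used = {}, set()
    for v in sigma:
        free = [u for u in adj[v] if u not in used]
        if free:
            u = min(free, key=pi.__getitem__)
            match[v] = u
            used.add(u)
    return match

def by_rank(adj, pi, sigma):
    """Equivalent process: offline vertices are processed in order of pi;
    each is matched greedily to its unmatched neighbor of earliest arrival."""
    pos = {v: i for i, v in enumerate(sigma)}
    nbrs = {}
    for v, us in adj.items():
        for u in us:
            nbrs.setdefault(u, []).append(v)
    match, used = {}, set()
    for u in sorted(nbrs, key=pi.__getitem__):
        free = [v for v in nbrs[u] if v not in used]
        if free:
            v = min(free, key=pos.__getitem__)
            match[v] = u
            used.add(v)
    return match

# Randomized check of the equivalence on small random bipartite graphs.
rng = random.Random(0)
for _ in range(200):
    L, R = list(range(5)), list(range(5, 10))
    adj = {v: [u for u in L if rng.random() < 0.5] for v in R}
    pi = {u: r for r, u in enumerate(rng.sample(L, len(L)))}
    sigma = rng.sample(R, len(R))
    assert ranking(adj, pi, sigma) == by_rank(adj, pi, sigma)
print("both processes produce identical matchings")
```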
The following monotonicity lemma shows that removing vertices in can only decrease the size of the matching returned by Ranking.
Lemma 17.
Let be an arrival order for the vertices in , and let be a ranking on the vertices in . Suppose that is a vertex in , and let . Let and be the orderings of and induced by and , respectively. Then .
Proof.
Suppose first that . In this case, and . Let be the set of vertices matched to vertices in that arrive at or before time (under arrival order and ranking ), and let be the set of vertices matched to vertices in that arrive at or before time (under arrival order and ranking ). We prove by induction on that , which by substituting is sufficient to prove the claim. The statement holds when , since . Now supposing we have , we prove . Suppose that is at or before the time that arrives in . Then clearly . Now suppose that is after the time that arrives in . Let be the vertex that arrives at time in . If is not matched by , then . Now suppose that is matched by , say to vertex . We show that , which by the induction hypothesis, is enough to prove that . Note that arrives at time in . Let be the vertex to which is matched by . If , we are done, so suppose that . Since , it follows by the induction hypothesis that . Therefore, vertex is available to be matched to when it arrives in . Since matched to instead, must have a lower rank than in . Since chose , vertex must have already been matched when vertex arrived at time in , or, in other words, .
Now suppose that . In this case, and . Let be the set of vertices matched to vertices in that are ranked less than or equal to (under arrival order and ranking ), and let be the set of vertices matched to vertices in that are ranked less than or equal to (under arrival order and ranking ). Then by Lemma 16, we can apply the same argument as before to show that for all , which by substituting , is sufficient to prove the claim. ∎
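The monotonicity asserted by Lemma 17 can be sanity-checked numerically. The sketch below (our notation, not the paper's) removes one offline vertex at a time, keeping the induced relative ranking on the rest, and verifies that the greedy-by-rank matching never grows; the case of removing an online vertex is symmetric.

```python
import random

def ranking_size(adj, pi, sigma, removed=None):
    """Size of the matching when online vertices arrive in order sigma and are
    matched to the unmatched offline neighbor of minimum rank pi. Offline
    vertices listed in `removed` are treated as absent from the graph."""
    removed = removed or set()
    used, size = set(), 0
    for v in sigma:
        free = [u for u in adj[v] if u not in used and u not in removed]
        if free:
            used.add(min(free, key=pi.__getitem__))
            size += 1
    return size

rng = random.Random(1)
for _ in range(200):
    L, R = list(range(6)), list(range(6, 12))
    adj = {v: [u for u in L if rng.random() < 0.4] for v in R}
    pi = {u: r for r, u in enumerate(rng.sample(L, len(L)))}
    sigma = rng.sample(R, len(R))
    full = ranking_size(adj, pi, sigma)
    for x in L:
        # Deleting x leaves the relative order of the remaining ranks intact.
        assert ranking_size(adj, pi, sigma, removed={x}) <= full
print("removal never increased the matching size")
```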
We define the following notation. For all , let be the set of all such that , and for any subset , let be the set of all such that . The following lemma shows that we can assume without loss of generality that is a perfect matching.
Lemma 18.
Let and be the subset of vertices that are in . Let be the subgraph of induced by , and let be the subgraph of induced by . Then the expected size of the matching produced by Ranking on is no greater than the expected size of the matching produced by Ranking on .
Proof.
The proof follows by repeated application of Lemma 17 for all that are not in . ∎
In light of Lemma 18, to prove Theorem 10, it is sufficient to show that the expected size of the matching produced by Ranking on is at least . To simplify notation, we instead assume without loss of generality that , and hence has a perfect matching. Let . Henceforth, fix an arrival order . To simplify notation, we write to mean .
Let be a map such that for all , there are exactly vertices such that . The existence of such a map follows from the assumption that has a perfect matching. For any vertex let be the set of such that . We proceed with the following two lemmas.
Lemma 19.
Let , and let . For any ranking , if is not matched by , then is matched to a vertex whose rank is less than the rank of in .
Proof.
If is not matched by , then since there is an edge between and , it was available to be matched to when it arrived. Therefore, by the behavior of , must have been matched to a vertex of lower rank. ∎
Lemma 20.
Let , and let . Fix an integer such that . Let be a permutation, and let be the permutation obtained from by removing vertex and putting it back in so its rank is . If is not matched by , then must be matched by to a vertex whose rank in is less than or equal to .
Proof.
For the proof, it is convenient to invoke Lemma 16 and consider and instead of and . In the process by which constructs its matching, call the moment that the vertex in arrives time . For any , let (resp., ) be the set of vertices in matched by time in (resp., ). By Lemma 19, if is not matched by , then must be matched to a vertex in such that . Hence . We prove the lemma by showing that . Let be the time that arrives in . Then if , the two orders and are identical through time , which implies that .
Now, in the case that , we prove that for , . The proof, which is similar to the proof of Lemma 17, proceeds by induction on . When , the claim clearly holds, since . Now, supposing that , we prove that . If , then the two orders and are identical through time , so . Now suppose that . Then the vertex that arrives at time in is the same as the vertex that arrives at time in . Call this vertex . If is not matched by , then , and we are done by the induction hypothesis. Now suppose that is matched to vertex by and to vertex by . If , then again we are done by the induction hypothesis, so suppose that . Since was available at time in , we have , and by the induction hypothesis . Hence, was available at time in . Since matched to , it must be that . This implies that must be matched when arrives at time in , or in other words, . By the induction hypothesis, we are done. ∎
Lemma 21.
For , let denote the probability over that the vertex ranked in is matched by . Then
(1) 
Proof.
Let be a permutation chosen uniformly at random, and let be a permutation obtained from by choosing a vertex uniformly at random, taking it out of , and putting it back so that its rank is . Note that both and are distributed uniformly at random among all permutations. Let be a vertex chosen uniformly at random from . Note that, conditioned on , is equally likely to be any of the vertices in . Let be the set of vertices in that are matched by to a vertex of rank or lower in . Lemma 20 states that if is not matched by , then . The expected size of is . Hence, the probability that , conditioned on , is . The lemma follows because the probability that is not matched by is . ∎
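In the classic (ungeneralized) setting of [4, 16], the bound corresponding to (1) reads $1 - x_t \le \frac{1}{n}\sum_{s \le t} x_s$, and summing it over $t$ yields the familiar $(1 - 1/e)$ guarantee for Ranking. As a quick numeric illustration of that standard result (not of this paper's generalization), one can solve the tight version of the recurrence forward:

```python
import math

# Tight case of the classic recurrence 1 - x_t = (1/n) * sum_{s<=t} x_s.
# Solving for x_t gives x_t = (1 - S_{t-1}/n) / (1 + 1/n), where
# S_t = x_1 + ... + x_t is the running sum of matching probabilities.
n = 1000
S = 0.0
for t in range(n):
    x = (1.0 - S / n) / (1.0 + 1.0 / n)
    S += x

ratio = S / n  # expected matched fraction in the tight case
print(ratio)   # approaches 1 - 1/e as n grows
assert abs(ratio - (1 - 1 / math.e)) < 1e-3
```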
We are now ready to prove Theorem 10.
Appendix C Other Missing Proofs
In this appendix, we provide the missing proofs from the body of the paper.
C.1 Proof of Theorem 1
Fix a constant , and let be the smallest integer such that for all ,
(2) 
and
(3) 
Note that since depends only on , it is a constant.
We reduce from PARTITION, in which the input is a set of items, and the weight of the th item is given by . If , then the question is whether there is a partition of the items into two subsets of size such that the sum of the 's in each subset is . It is known that this problem (even when the subsets must both have size ) is NP-hard [13].
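For concreteness, the equal-cardinality PARTITION variant being reduced from can be stated as a small brute-force check. This sketch is illustrative only (the reduction itself of course does not perform exponential search), and the function name is ours:

```python
from itertools import combinations

def equal_partition(weights):
    """Brute-force check for the PARTITION variant used in the reduction:
    split 2n items into two subsets of exactly n items each, both summing
    to half the total weight. Returns one such subset of indices, or None."""
    m, total = len(weights), sum(weights)
    if m % 2 or total % 2:
        return None
    n, half = m // 2, total // 2
    for subset in combinations(range(m), n):
        if sum(weights[i] for i in subset) == half:
            return subset
    return None

print(equal_partition([3, 1, 1, 3]))  # a yes-instance
print(equal_partition([5, 1, 1, 1]))  # a no-instance
```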
Suppose that there is an approximation algorithm for 2PAA(); we will show that constructing the following instance of 2PAA() (illustrated in Figure 4) allows us to use the approximation to solve the PARTITION instance:

First, create keywords . Second, create an additional set of keywords. The keywords arrive in the order