Dueling algorithms

Nicole Immorlica Department of Electrical Engineering and Computer Science, Northwestern University. Part of this work was performed while the author was at Microsoft Research    Adam Tauman Kalai Microsoft Research New England    Brendan Lucier Department of Computer Science, University of Toronto    Ankur Moitra Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. Supported in part by a Fannie and John Hertz Foundation Fellowship.    Andrew Postlewaite Department of Economics, University of Pennsylvania    Moshe Tennenholtz Microsoft R&D Israel and the Technion, Israel

We revisit classic algorithmic search and optimization problems from the perspective of competition. Rather than a single optimizer minimizing expected cost, we consider a zero-sum game in which an optimization problem is presented to two players, whose only goal is to outperform the opponent. Such games are typically exponentially large zero-sum games, but they often have a rich combinatorial structure. We provide general techniques by which such structure can be leveraged to find minmax-optimal and approximate minmax-optimal strategies. We give examples of ranking, hiring, compression, and binary search duels, among others. We give bounds on how often one can beat the classic optimization algorithms in such duels.

1 Introduction

Many natural optimization problems have two-player competitive analogs. For example, consider the ranking problem of selecting an order on items, where the cost of searching for a single item is its rank in the list. Given a fixed probability distribution over desired items, the trivial greedy algorithm, which orders items in decreasing probability, is optimal.

Next consider the following natural two-player version of the problem, which models a user choosing between two search engines. The user thinks of a desired web page and a query and executes the query on both search engines. The engine that ranks the desired page higher is chosen by the user as the “winner.” If the greedy algorithm outputs the ranking of pages 1, 2, …, n, then the ranking 2, 3, …, n, 1 beats the greedy ranking on every item except item 1. We say the greedy algorithm is beatable because there is a probability distribution over pages for which the greedy algorithm loses an (n − 1)/n fraction of the time. Thus, in a competitive setting, an “optimal” search engine can perform poorly against a clever opponent.

This ranking duel can be modeled as a symmetric constant-sum game, with n! strategies per player, in which the player with the higher ranking of the target page receives a payoff of 1 and the other receives a payoff of 0 (in the case of a tie, say they both receive a payoff of 1/2). As in all symmetric constant-sum games, there must be (mixed) strategies that guarantee expected payoff of at least 1/2 against any opponent. Put another way, there must be a (randomized) algorithm that takes as input the probability distribution and outputs a ranking, and that is guaranteed to achieve expected payoff of at least 1/2 against any opposing algorithm.

This conversion can be applied to any optimization problem with an element of uncertainty. Such problems are of the form min_{x ∈ X} E_{s∼D}[c(x, s)], where D is a probability distribution over the state of nature s ∈ S, X is a feasible set, and c : X × S → ℝ is an objective function. The dueling analog has two players simultaneously choose x, y ∈ X; player 1 receives payoff 1 if c(x, s) < c(y, s), payoff 0 if c(x, s) > c(y, s), and payoff 1/2 otherwise, and similarly for player 2. (Our techniques will also apply to asymmetric payoff functions; see Appendix D.)
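To make the payoff definition concrete, here is a minimal sketch in Python. The cost function, strategies, and distribution are toy stand-ins of our own, not any particular duel from the paper:

```python
def duel_payoff(cost, D, x, y):
    """Expected payoff to player 1: 1 for a win (strictly lower cost),
    1/2 for a tie, 0 for a loss, averaged over states s drawn from D."""
    total = 0.0
    for s, p in D.items():
        cx, cy = cost(x, s), cost(y, s)
        total += p * (1.0 if cx < cy else 0.5 if cx == cy else 0.0)
    return total

# Toy "guessing" duel: strategies and states are numbers, cost is distance.
cost = lambda x, s: abs(x - s)
D = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
print(duel_payoff(cost, D, 1, 0))  # the central guess beats an extreme one
```

Note that `duel_payoff(x, y) + duel_payoff(y, x) = 1` for any pair of strategies, which is the constant-sum property used throughout.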

There are many natural examples of this setting beyond the ranking duel mentioned above. For example, for shortest-path routing under a distribution over edge times, the corresponding racing duel is simply a race, and the state of nature encodes uncertain edge delays. (We also refer to this as the primal duel because any other duel can be represented as a race with an appropriate graph and probability distribution, though there may be an exponential blowup in representation size.) For the classic secretary problem, in the corresponding hiring duel two employers must each select a candidate from a pool of n candidates (though, as is standard, they must decide whether or not to choose a candidate before interviewing the next one), and the winner is the one that hires the better candidate. This could model, for example, two competing companies attempting to hire CEOs or two opposing political parties selecting politicians to run in an election; the absolute quality of the candidate may be less important than being better than the opponent’s selection. In a compression duel, a user with a (randomly chosen) sample string chooses between two compression schemes based on which one compresses that string better. This setting can also model a user searching for a file in two competing, hierarchical storage systems and choosing the system that finds the file first. In a binary search duel, a user searches for a random element in a list using two different search trees, and chooses whichever tree finds the element faster.

Our contribution.

For each of these problems, we consider a number of questions related to how vulnerable a classic algorithm is to competition, what algorithms will be selected at equilibrium, and how well these strategies at equilibrium solve the original optimization problem.

Question 1.

Will players use the classic optimization solution in the dueling setting?

Intuitively, the answer to this question should depend on how much an opponent can game the classic optimization solution. For example, in the ranking duel an opponent can beat the greedy algorithm on almost all pages – and even the most oblivious player would quickly realize the need to change strategies. In contrast, we demonstrate that many classic optimization solutions – such as the secretary algorithm for hiring, Huffman coding for compression, and standard binary search – are substantially less vulnerable. We say an algorithm is β-beatable (over distribution D) if there exists a response which achieves payoff β against that algorithm (over distribution D). We summarize our results on the beatability of the standard optimization algorithm in each of our example optimization problems in the table below:

Optimization Problem Upper Bound Lower Bound
Question 2.

What strategies do players play at equilibrium?

We say an algorithm efficiently solves the duel if it takes as input a representation of the game and the probability distribution D, and outputs an action distributed according to some minmax optimal (i.e., Nash equilibrium) strategy. As our main result, we give a general method for solving duels that can be represented in a certain bilinear form. We also show how to convert an approximate best-response oracle for a dueling game into an approximate minmax optimal algorithm, using techniques from low-regret learning. We demonstrate the generality of these methods by showing how to apply them to the numerous examples described above. For many problems we consider, the problem of computing minmax optimal strategies reduces to finding a simple description of the space of feasible mixed strategies (i.e., expressing this set as the projection of a polytope with polynomially many variables and constraints). See [18] for a thorough treatment of such problems.

Question 3.

Are these equilibrium strategies still good at solving the optimization problem?

As an example, consider the ranking duel. How much more time does a web surfer need to spend browsing to find the page he is interested in, because more than one search engine is competing for his attention? In fact, the surfer may be better off due to competition, depending on the model of comparison. For example, the cost to the web surfer may be the minimum of the ranks assigned by each search engine. And we leave open the tantalizing possibility that this quantity could in general be smaller at equilibrium for two competing search engines than for just one search engine playing the greedy algorithm.

Related work.

The work most relevant to ours is the study of ranking games [4], and more generally the study of social context games [1]. In these settings, players’ payoffs are translated into utilities based on social contexts, defined by a graph and an aggregation function. For example, a player’s utility can be the sum/max/min of his neighbors’ payoffs. This work studies the effect of social contexts on the existence and computation of game-theoretic solution concepts, but does not revisit optimization algorithms in competitive settings.

For the hiring problem, several competitive variants and their algorithmic implications have been considered (see, e.g., [10] and the references therein). A typical competitive setting is a (general sum) game where a player achieves payoff of 1 if she hires the very best applicant and zero otherwise. But, to the best of our knowledge, no one has considered the natural model of a duel where the objective is simply to hire a better candidate than the opponent. Also related to our algorithmic results are succinct zero-sum games, where a game has exponentially many strategies but the payoff function can be computed by a succinct circuit. This general class has been shown to be EXP-hard to solve [6], and also difficult to approximate [7].

Finally, we note the line of research on competition among mechanisms, such as the study of competing auctions (see e.g. [5, 15, 16, 17]) or schedulers [2]. In such settings, each player selects a mechanism and then bidders select the auction to participate in and how much to bid there, where both designers and bidders are strategic. This work is largely concerned with the existence of sub-game perfect equilibrium.


In Section 2 we define our model formally and provide a general framework for solving dueling problems as well as the warmup example of the ranking duel. We then use these tools to analyze the more intricate settings of the hiring duel (Section 3), the compression duel (Section 4), and the search duel (Section 5). We describe avenues of future research in Section 6.

2 Preliminaries

A problem of optimization under uncertainty, (X, S, D, c), is specified by a feasible set X, a commonly-known distribution D over states of nature s, chosen from a set S, and an objective function c : X × S → ℝ. For simplicity we assume all these sets are finite. When D is clear from context, we write the expected cost of x ∈ X as c(x) = E_{s∼D}[c(x, s)]. The one-player optimum is OPT = min_{x∈X} c(x). Algorithm A takes as input D and randomness r, and outputs A(D, r) ∈ X. We define c(A) = E_r[c(A(D, r))], and an algorithm is one-player optimal if c(A) = OPT.

In the two-person constant-sum duel game induced by (X, S, D, c), players simultaneously choose x, y ∈ X, and player 1’s payoff is

P_D(x, y) = Pr_{s∼D}[c(x, s) < c(y, s)] + (1/2) Pr_{s∼D}[c(x, s) = c(y, s)].

When D is understood from context we write P(x, y). Player 2’s payoff is 1 − P(x, y). This models a tie, c(x, s) = c(y, s), as a half point for each. We define the value of a strategy x to be how much that strategy guarantees, v_D(x) = min_{y∈X} P_D(x, y). Again, when D is understood from context we write simply v(x).

The set of probability distributions over a set X is denoted Δ(X). A mixed strategy is an element of Δ(X). As is standard, we extend the domain of P to mixed strategies bilinearly by expectation. A best response to a mixed strategy q ∈ Δ(X) is a strategy which yields maximal payoff against q, i.e., p is a best response to q if it maximizes P(p, q). A minmax strategy is a (possibly mixed) strategy that guarantees the safety value, in this case 1/2, against any opponent play. The best response to such a strategy yields a payoff of 1/2. A basic fact about constant-sum games is that the set of Nash equilibria is the cross product of the set of minmax strategies for player 1 and that of player 2.

2.1 Bilinear duels

In a bilinear duel, the feasible sets of strategies are sets of points in n-dimensional Euclidean space, i.e., X, Y ⊆ ℝ^n, and the payoff to player 1 is x^T M y for some matrix M ∈ ℝ^{n×n}. In bimatrix games, X and Y are just the sets of standard basis vectors, whose convex hulls are simplices. Let X̄ be the convex hull of X. Any point in X̄ is achievable (in expectation) as a mixed strategy. Similarly define Ȳ. As we will point out in this section, solving these games reduces to linear programming with a number of constraints proportional to the number of constraints necessary to define the feasible sets X̄ and Ȳ. (In typical applications, X̄ and Ȳ have a polynomial number of facets but an exponential number of vertices.)

Let X̄ = {x ∈ ℝ^n : Fx ≤ f} be a polytope defined by the intersection of m₁ halfspaces. Similarly, let Ȳ = {y ∈ ℝ^n : Gy ≤ g} be the intersection of m₂ halfspaces. The typical way to reduce a constant-sum game to an LP is to maximize v subject to x ∈ X̄ and x^T M y ≥ v for every y ∈ Y.

The above program has a number of constraints which is m₁ + |Y| (the |Y| constraints guaranteeing that x^T M y ≥ v for each pure strategy y ∈ Y), and |Y| is typically exponential. Instead, one can write a linear program (1) with only n + m₁ + m₂ constraints, in variables x, v, and a vector λ of dual multipliers certifying that x earns at least v against every point of Ȳ; hence a minmax strategy can be found in time polynomial in n, m₁, m₂, and the bit-size representation of M and of the constraints in X̄ and Ȳ.
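Concretely, writing X̄ = {x : Fx ≤ f} and Ȳ = {y : Gy ≤ g}, one standard way to obtain the compact program (a sketch by dualizing the inner minimization min_{y∈Ȳ} x^T M y; the paper’s exact formulation may differ in notation) is:

```latex
\begin{align*}
\max_{x,\,\lambda,\,v}\quad & v\\
\text{s.t.}\quad & Fx \le f,\\
& G^{\mathsf{T}}\lambda = -M^{\mathsf{T}}x,\qquad \lambda \ge 0,\\
& v \le -g^{\mathsf{T}}\lambda.
\end{align*}
```

By LP duality, −g^T λ equals min_{y∈Ȳ} x^T M y for the best certificate λ, so any feasible (x, λ, v) guarantees player 1 payoff at least v.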

Lemma 1.

For any constant-sum game with strategies X, Y ⊆ ℝ^n and payoffs x^T M y, the maximum of the above linear program is the value of the game to player 1, and any maximizing x is a minmax optimal strategy.


First we argue that the value of the above LP is at most the value of the game to player 1. Let (x, λ, v) maximize the above LP and let the maximum be v. For any y ∈ Ȳ, the constraints G^T λ = −M^T x, λ ≥ 0, and Gy ≤ g give

x^T M y = −λ^T G y ≥ −λ^T g ≥ v.

Hence strategy x guarantees player 1 at least v against any opponent response y ∈ Ȳ, so v is at most the value of the game, with equality iff x is minmax optimal. Next, let x* be any minmax optimal strategy, and let v* be the value of the constant-sum game. This means that x*^T M y ≥ v* for all y ∈ Ȳ, with equality for some point. In particular, the minmax theorem (equivalently, LP duality) means that the program min_{y∈Ȳ} x*^T M y has minimum value v*, and that there is a vector λ ≥ 0 such that G^T λ = −M^T x* and −g^T λ = v*. Hence (x*, λ, v*) is feasible for the LP, so its maximum is at least v*. ∎

2.2 Reduction to bilinear duels

The sets in a duel are typically objects such as paths, trees, rankings, etc., which are not themselves points in Euclidean space. In order to use the above approach to reduce a given duel to a bilinear duel in a computationally efficient manner, one needs the following:

  1. An efficiently computable function φ which maps any strategy x ∈ X to a feasible point φ(x) ∈ ℝ^n.

  2. A payoff matrix M such that the duel payoff of x versus y equals φ(x)^T M φ(y), demonstrating that the problem is indeed bilinear.

  3. A set of polynomially many constraints defining the convex hull of φ(X).

  4. A “randomized rounding algorithm” which takes as input a point in this convex hull and outputs a random object in X whose image under φ equals the given point in expectation.

In many cases, parts (1) and (2) are straightforward. Parts (3) and (4) may be more challenging. For example, for the binary trees used in the compression duel, it is easy to map a tree to a vector of node depths. However, we do not know how to efficiently determine whether a given vector of node depths is indeed a mixture over trees (except for certain types of trees which are in sorted order, like the binary search trees in the binary search duel). In the next subsection, we show how computing approximate best responses suffices.

2.3 Approximating best responses and approximating minmax

In some cases, the polytopes X̄ and Ȳ may have exponentially or infinitely many facets, in which case the above linear program is not very useful. In this section, we show that if one can compute approximate best responses for a bilinear duel, then one can approximate minmax strategies.

For any ε > 0, an ε-best response to a player 2 strategy y is any x ∈ X̄ such that x^T M y ≥ max_{x′∈X̄} x′^T M y − ε. Similarly for player 1. An ε-minmax strategy for player 1 is one that guarantees player 1 an expected payoff no worse than the value of the game minus ε, i.e., min_{y∈Ȳ} x^T M y ≥ v − ε.

Best response oracles are functions from Ȳ to X̄ and vice versa. However, for many applications (and in particular the ones in this paper) where all feasible points are nonnegative, one can define a best response oracle on the entire positive orthant. (With additional effort, one can remove this assumption using Awerbuch and Kleinberg’s elegant notion of a barycentric spanner [3].) For scaling purposes, we assume that for some R, the convex sets X̄ and Ȳ are contained in [0, R]^n and that the entries of the matrix M are bounded by R as well.

Fix any ε > 0. We suppose that we are given an ε-approximate best response oracle in the following sense. For player 1, this is an oracle which returns an ε-best response to any point y in the positive orthant. Similarly for player 2. Hence, one is able to potentially respond to points which are not feasible strategies of the opponent. As can be seen in a number of applications, this does not impose a significant additional burden.

Lemma 2.

For any ε > 0 and R, any bilinear duel with convex X̄, Ȳ ⊆ [0, R]^n and bounded M, and any ε-best response oracles, there is an algorithm for finding approximately minmax strategies, with approximation error vanishing with ε. The algorithm uses runtime polynomial in n, R, and 1/ε, and makes polynomially many oracle calls.

The reduction and proof are deferred to Appendix A. It uses Hannan-type algorithms, namely “Follow the expected leader” [11].
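The flavor of this reduction can be seen in a small self-contained sketch: two no-regret learners (here multiplicative weights, a stand-in for the Follow-the-expected-leader algorithm used in the appendix) play the game against each other, and their time-averaged strategies are approximately minmax. The 2×2 payoff matrix is an illustrative toy of our own, not one of the paper’s duels; its value is 53/90 ≈ 0.589.

```python
import math

# Payoff matrix for the row player (the maximizer) in a toy constant-sum game.
M = [[0.5, 0.9],
     [0.7, 0.2]]

T, eta = 5000, 0.05
wx, wy = [1.0, 1.0], [1.0, 1.0]          # multiplicative weights
avg_x, avg_y = [0.0, 0.0], [0.0, 0.0]    # time-averaged strategies

for _ in range(T):
    x = [w / sum(wx) for w in wx]
    y = [w / sum(wy) for w in wy]
    for i in range(2):
        avg_x[i] += x[i] / T
        avg_y[i] += y[i] / T
    row_pay = [M[i][0] * y[0] + M[i][1] * y[1] for i in range(2)]
    col_pay = [M[0][j] * x[0] + M[1][j] * x[1] for j in range(2)]
    # Row player reinforces high-payoff actions, column player low-payoff ones.
    wx = [wx[i] * math.exp(eta * row_pay[i]) for i in range(2)]
    wy = [wy[j] * math.exp(-eta * col_pay[j]) for j in range(2)]
    wx = [w / sum(wx) for w in wx]       # renormalize to avoid overflow
    wy = [w / sum(wy) for w in wy]

# The averaged row strategy guarantees nearly the game value against any column.
guarantee = min(M[0][j] * avg_x[0] + M[1][j] * avg_x[1] for j in range(2))
print(round(guarantee, 3))
```

The standard no-regret analysis gives a guarantee within O(1/√T) of the game value; with exact best-response dynamics in place of multiplicative weights, the same averaging idea underlies the reduction of Lemma 2.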

We reduce the compression duel, where the base objects are trees, to a bilinear duel and use the approximate best response oracle. To perform such a reduction, one needs the following.

  1. An efficiently computable function φ which maps any tree T ∈ X to a feasible point φ(T) ∈ ℝ^n.

  2. A bounded payoff matrix M such that the duel payoff of T versus T′ equals φ(T)^T M φ(T′), demonstrating that the problem is indeed bilinear.

  3. ε-best response oracles for players 1 and 2. Here, the input to an ε-best response oracle for player 1 is a point in the positive orthant representing a (scaled) mixture of opponent strategies.

2.4 Beatability

One interesting quantity to examine is how well a one-player optimization algorithm performs in the two-player game. In other words, if a single player were a monopolist solving the one-player optimization problem, how badly could it be beaten if a second player suddenly entered? For a particular one-player-optimal algorithm A, we define its beatability over distribution D to be the maximum expected payoff an opponent can achieve against A when the state of nature is drawn from D, and we define its beatability to be the supremum of this quantity over distributions D.

2.5 A warmup: the ranking duel

In the ranking duel, X is the set of permutations over n items, S is the set of items, and c(π, s) is the position of item s in the ranking π (rank 1 is the “best” rank). The greedy algorithm, which outputs a permutation ordering the items in decreasing probability, is optimal in the one-player version of the problem. (In some cases, such as a model of competing search engines, one could have the agents rank only the top k items, but the algorithmic results would be similar.)

This game can be represented as a bilinear duel as follows. Let X̄ and Ȳ be the set of doubly stochastic matrices. Here x_{s,j} indicates the probability that item s is placed in position j, in some distribution over rankings. The Birkhoff–von Neumann Theorem states that this set is precisely the set of probability distributions over rankings (where each ranking is represented as a permutation matrix), and moreover any such x can be implemented efficiently via a form of randomized rounding. See, for example, Corollary 1.4.15 of [14]. Note X̄ is a polytope in n² dimensions with O(n²) facets. In this representation, the expected payoff of x versus y is

Σ_s p_s ( Σ_{j < k} x_{s,j} y_{s,k} + (1/2) Σ_j x_{s,j} y_{s,j} ).

The above is clearly bilinear in x and y and can be written as x^T M y for some matrix M with bounded coefficients (treating x and y as vectors in ℝ^{n²}). Hence, we can solve the bilinear duel by the linear program (1) and round the solution to a (randomized) minmax optimal algorithm for ranking.
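For intuition, the rounding step can be sketched as follows. This is a brute-force Birkhoff–von Neumann decomposition, suitable only for tiny n; the helper name `birkhoff` is ours, and an efficient implementation would extract each permutation via bipartite matching rather than enumeration:

```python
from itertools import permutations

def birkhoff(M, tol=1e-9):
    """Decompose a doubly stochastic matrix into a convex combination of
    permutation matrices; sampling a piece with probability equal to its
    weight then yields a random ranking with the prescribed marginals."""
    n = len(M)
    M = [row[:] for row in M]          # work on a copy
    pieces = []                        # list of (weight, permutation)
    while True:
        perm = next((p for p in permutations(range(n))
                     if all(M[i][p[i]] > tol for i in range(n))), None)
        if perm is None:
            break
        theta = min(M[i][perm[i]] for i in range(n))
        pieces.append((theta, perm))
        for i in range(n):
            M[i][perm[i]] -= theta
    return pieces

M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
pieces = birkhoff(M)
print(pieces)
```

Each loop iteration removes at least one nonzero entry, so at most n² permutations are produced.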

We next examine the beatability of the greedy algorithm. Note that for the uniform probability distribution, the greedy algorithm outputting, say, 1, 2, …, n can be beaten with probability (n − 1)/n by the strategy 2, 3, …, n, 1. One can make greedy’s selection unique by slightly perturbing the uniform distribution, e.g., setting p_i proportional to 1 + (n − i)ε, and for sufficiently small ε > 0 greedy can be beaten a fraction of the time arbitrarily close to (n − 1)/n.
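This calculation is easy to check numerically (a small sketch of our own; payoff convention as in Section 2):

```python
def payoff(perm_a, perm_b, probs):
    """Expected duel payoff to perm_a: lower position wins, ties count 1/2."""
    total = 0.0
    for s, p in enumerate(probs):
        a, b = perm_a.index(s), perm_b.index(s)
        total += p * (1.0 if a < b else 0.5 if a == b else 0.0)
    return total

n = 10
greedy = list(range(n))                # items indexed in decreasing probability
shifted = list(range(1, n)) + [0]      # the cyclic shift 2, 3, ..., n, 1
uniform = [1.0 / n] * n
print(payoff(shifted, greedy, uniform))  # (n - 1)/n of the duels go to shifted
```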

3 Hiring Duel

In a hiring duel, there are two employers A and B and two corresponding sets of workers with n workers each. The i’th worker of each set has a common value v_i, where v_i > v_{i+1} for all i. Thus there is a total ranking of each employer’s workers where a rank of 1 indicates the best worker, and workers are labeled according to rank. The goal of the employers is to hire a worker whose value (equivalently rank) beats that of his competitor’s worker. Workers are interviewed by employers one-by-one in a random order. The relative ranks of workers are revealed to employers only at the time of the interview. That is, at time t, each employer has seen a prefix of the interview order consisting of t of the n workers and knows only the projection of the total ranking on this prefix. (In some cases, an employer also knows when and whom his opponent hired, and may condition his strategy on this information as well. Only one of the settings described below needs this knowledge; hence we defer our discussion of this point for now and explicitly mention the necessary assumptions where appropriate.) Hiring decisions must be made at the time of the interview, and only one worker may be hired. Thus the employers’ pure strategies are mappings from any prefix and permutation of workers’ ranks in that prefix to a binary hiring decision. We note that the permutation of ranks in a prefix does not affect the distribution of the rank of the just-interviewed worker, and hence without loss of generality we may assume the strategies are mappings from the round number and the current candidate’s projected rank to a hiring decision.

In dueling notation, our game is (X, S, D, c), where the elements of X are functions indicating, for any round t and projected rank i of the current interviewee, the hiring decision (the projected rank of the t’th candidate is its rank among the first t candidates according to the relevant permutation); S is the set of all pairs of permutations of the two worker sets; c is the value of the first candidate that received an offer; and (as is typical in the secretary problem) D is the uniform distribution over S. The mixed strategies are simply mappings from rounds and projected ranks to a probability of a hiring decision.

The values v_i may be chosen adversarially, and hence in the one-player setting the optimal algorithm against a worst-case choice is the one that maximizes the probability of hiring the best worker (the worst-case values set v_1 = 1 and v_i = 0 for i > 1). In the literature on secretary problems, the following classical algorithm is known to hire the best worker with probability approaching 1/e: interview n/e workers and hire the next one that beats all the previous ones. Furthermore, there is no other algorithm that hires the best worker with higher probability.
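The optimality of the n/e cutoff is easy to verify numerically from the standard success-probability formula for cutoff rules (a sketch; `success_prob` is our name, and the formula is the textbook one):

```python
def success_prob(n, r):
    """P(cutoff rule hires the overall best): observe r candidates, then
    hire the first one that beats all previous.  Standard formula:
    (r/n) * sum_{j=r+1}^{n} 1/(j-1)."""
    if r == 0:
        return 1.0 / n        # degenerate rule: hire the first candidate
    return (r / n) * sum(1.0 / (j - 1) for j in range(r + 1, n + 1))

n = 100
best_r = max(range(n), key=lambda r: success_prob(n, r))
print(best_r, round(success_prob(n, best_r), 4))  # cutoff near n/e ≈ 37
```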

3.1 Common pools of workers

In this section, we study the common hiring duel in which employers see the same candidates in the same order, so that the two interview orders coincide, and each employer observes when the other hires. In this case, the following strategy S is a symmetric equilibrium: if the opponent has already hired, then hire anyone who beats his employee; otherwise hire as soon as the current candidate has at least a 1/2 chance of being the best of the remaining candidates.

Lemma 3.

Strategy S is efficiently computable and constitutes a symmetric equilibrium of the common hiring duel.

The computability follows from a derivation of probabilities in terms of binomials, and the equilibrium claim follows by observing that there can be no profitable deviation. This strategy also beats the classical algorithm, enabling us to provide non-trivial lower and upper bounds for its beatability.


For a round t, we compute a threshold k(t) such that S hires if and only if the projected rank of the current candidate is at most k(t). Note that if t candidates are observed, the probability that the i’th best among them is better than all remaining candidates is precisely C(t, i)/C(n, i). The numerator is the number of ways to place the 1st through i’th best candidates overall among the first t, and the denominator is the number of ways to place the 1st through i’th best among the whole order. Hence to efficiently compute k(t) we just need to compute, or equivalently estimate, these ratios of binomial coefficients and hire whenever, on round t and observing the i’th best so far, C(t, i)/C(n, i) ≥ 1/2.

We further note S is a symmetric equilibrium: if an employer deviates and hires early, then by definition the opponent has a better than 1/2 chance of getting a better candidate. Similarly, if an employer deviates and hires late, then by definition his candidate has at most a 1/2 chance of being a better candidate than that of his opponent. ∎
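The threshold form of S is easy to compute explicitly (a sketch; `threshold` is our name for k(t), implementing the binomial-ratio rule from the proof):

```python
from math import comb

def threshold(n, t):
    """Largest projected rank k such that C(t, k)/C(n, k) >= 1/2; on round
    t, S hires iff the current candidate's projected rank is at most k.
    The ratio is decreasing in the rank, so a simple scan suffices."""
    k = 0
    while k < t and comb(t, k + 1) / comb(n, k + 1) >= 0.5:
        k += 1
    return k

n = 20
print([threshold(n, t) for t in range(1, n + 1)])
```

For example, at t = n every candidate is hired (the ratio is 1), while at t = n/2 only the best-so-far qualifies.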

Lemma 4.

The beatability of the classical algorithm is at least 1/2 + (1 − e^{−1/2})²/2 ≈ 0.577 and at most 1 − 1/(2e) ≈ 0.816.

The lower bound follows from the fact that S beats the classical algorithm with probability strictly above 1/2 when the classical algorithm hires early (i.e., before round cn for a constant c < 1), and the upper bound follows from the fact that the classical algorithm guarantees a probability of 1/e of hiring the best candidate, in which case no algorithm can beat it.


For the lower bound, note that in any event, S guarantees a payoff of at least 1/2 against the classical algorithm. We next argue that for a constant fraction of the probability space, S guarantees a payoff of strictly better than 1/2. In particular, for some c ∈ (1/e, 1), consider the event that the classical algorithm hires in the interval (n/e, cn]. This event happens whenever the best among the first cn candidates is not among the first n/e candidates, and hence has a probability of 1 − 1/(ce). Conditioned on this event, S beats the classical algorithm whenever the best candidate overall is in the last (1 − c)n candidates (this is a loose lower bound; there are many other instances where S also wins, e.g., if the second-best candidate is in the last (1 − c)n candidates and the best occurs after the third best among the first candidates), which happens with probability 1 − c (the conditioning does not change this probability since the event is only a property of the permutation projected onto the first cn elements). Hence the overall payoff of S against the classical algorithm is at least 1/2 + (1/2)(1 − 1/(ce))(1 − c). Optimizing for c yields the result.

For the upper bound, note as mentioned above that the classical algorithm has a probability approaching 1/e of hiring the best candidate. From here, we see that 1 − 1/(2e) is an upper bound on the beatability of the classical algorithm, since the best an opponent can do is always hire the best worker (and thus tie, for payoff 1/2) when the classical algorithm hires the best worker, and always hire a better worker when the classical algorithm does not hire the best worker. ∎

3.2 Independent pools of workers

In this section, we study the independent hiring duel in which the employers see different candidates. Thus the two interview orders are drawn independently, and the employers do not see when the opponent hires. We use the bilinear duel framework introduced in Section 2.1 to compute an equilibrium for this setting, yielding the following theorem.

Theorem 1.

The equilibrium strategies of the independent hiring duel are efficiently computable.

The main idea is to represent strategies by vectors q, where q_{t,i} is the (total) probability of hiring the i’th best candidate seen so far on round t. Let r_t be the probability of reaching round t, and note it can be computed from the q_{t,i}. Recall p_{t,i} is the probability of hiring the i’th best so far at round t conditional on seeing the i’th best so far at round t. Thus using Bayes’ rule we can derive an efficiently-computable bijective mapping (with an efficiently computable inverse) between p and q which simply sets q_{t,i} = r_t · (1/t) · p_{t,i}. It only remains to show that one can find a matrix M such that the payoff of a strategy p versus a strategy p′ is q^T M q′, where q and q′ are the corresponding vectors. This is done by calculating the appropriate binomial coefficients.

We show how to apply the bilinear duel framework to compute the equilibrium of the independent hiring duel. This requires the following steps: define a subset Q of Euclidean space to represent strategies, define a bijective mapping between Q and feasible (mixed) strategies, and show how to represent the payoffs of strategies in the bilinear duel space. We discuss each step in order.

Defining Q. For each t and i ≤ t we define q_{t,i} to be the (total) probability of seeing and hiring the i’th best candidate seen so far at round t. Our subspace Q consists of the collection of probabilities {q_{t,i}}. To derive constraints on this space, we introduce a new variable r_t representing the probability of reaching round t. We note that the probability of reaching round t + 1 must equal the probability of reaching round t and not hiring, so that r_{t+1} = r_t − Σ_{i ≤ t} q_{t,i}. Furthermore, the probability q_{t,i} cannot exceed the probability of reaching round t and interviewing the i’th best candidate seen so far. The probability of reaching round t is r_t by definition, and the probability that the projected rank of the t’th candidate is i is 1/t by our choice of a uniformly random permutation. Thus q_{t,i} ≤ r_t/t. Together with the initial condition that r_1 = 1, these constraints completely characterize Q.

Mapping. Recall a strategy p indicates for each t and i the conditional probability p_{t,i} of making an offer given that the employer is interviewing the t’th candidate and his projected rank is i, whereas q_{t,i} is the total probability of interviewing the t’th candidate with a projected rank of i and making an offer. Thus q_{t,i} = r_t · (1/t) · p_{t,i}, and so p_{t,i} = t · q_{t,i}/r_t. Together with the equalities derived above, that r_{t+1} = r_t − Σ_{i ≤ t} q_{t,i} and r_1 = 1, we can recursively map any strategy p to a point q ∈ Q efficiently. To map back we just take the inverse of this bijection: given a point q ∈ Q, we compute the (unique) r satisfying the constraints, and define p_{t,i} = t · q_{t,i}/r_t.
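The recursion is short enough to state in code (a sketch with 0-indexed lists; `p_to_q` and `q_to_p` are our names):

```python
def p_to_q(p):
    """Map conditional offer probabilities p[t][i] to total probabilities
    q[t][i] via q_{t,i} = r_t * p_{t,i} / t, with r_1 = 1 and
    r_{t+1} = r_t - sum_i q_{t,i}."""
    n = len(p)
    q, r = [], 1.0
    for t in range(1, n + 1):
        row = [r * p[t - 1][i] / t for i in range(t)]
        q.append(row)
        r -= sum(row)
    return q

def q_to_p(q):
    """Inverse mapping: recover r from the constraints, then divide."""
    n = len(q)
    p, r = [], 1.0
    for t in range(1, n + 1):
        p.append([t * q[t - 1][i] / r for i in range(t)])
        r -= sum(q[t - 1])
    return p

# Round-trip check on an arbitrary strategy for n = 4.
p = [[0.1], [0.3, 0.0], [0.5, 0.2, 0.0], [1.0, 1.0, 1.0, 1.0]]
q = p_to_q(p)
back = q_to_p(q)
```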

Payoff Matrix. By the above definitions, for any strategy p and corresponding mapping q, the probability that the strategy hires the i’th best so far on round t is q_{t,i}. Given that employer A hires the i’th best so far on round t and employer B hires the j’th best so far on round u, we define the matrix entry M_{(t,i),(u,j)} to be the probability that the overall rank of employer A’s hire beats that of employer B’s hire, plus one-half times the probability that their ranks are equal. We can derive the entries of this matrix as follows. Let R_k be the event that, with respect to a random permutation, the overall rank of a fixed candidate is k, and let E_{t,i} be the event that the projected rank of the last candidate in a random prefix of size t is i. Then M_{(t,i),(u,j)} can be computed from the conditional probabilities Pr[R_k | E_{t,i}].

Furthermore, by Bayes’ rule, Pr[R_k | E_{t,i}] = Pr[E_{t,i} | R_k] · Pr[R_k] / Pr[E_{t,i}], where Pr[R_k] = 1/n and Pr[E_{t,i}] = 1/t. To compute Pr[E_{t,i} | R_k], we select the ranks of the other t − 1 candidates in the prefix of size t. There are C(k − 1, i − 1) ways to pick the ranks of the i − 1 better candidates and C(n − k, t − i) ways to pick the ranks of the t − i worse candidates. As there are C(n − 1, t − 1) ways overall to pick the ranks of the other candidates, we see:

Pr[E_{t,i} | R_k] = C(k − 1, i − 1) · C(n − k, t − i) / C(n − 1, t − 1).

Letting q be the mapping of employer A’s strategy p and q′ be the mapping of employer B’s strategy p′, we see that the expected payoff of p versus p′ is q^T M q′, as required.

By the above arguments, and the machinery from Section 2.1, we have proven Theorem 1, which claims that the equilibrium strategies of the independent hiring duel are efficiently computable.

4 Compression Duel

In a compression duel, two competitors each choose a binary tree with a common leaf set. An element of the leaf set is then chosen according to distribution D, and whichever player’s tree has that element closest to the root is the winner. This game can be thought of as a competition between prefix-free compression schemes for a base set of words. The Huffman algorithm, which repeatedly pairs the two nodes with lowest probability, is known to be optimal for single-player compression.
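For reference, the single-player optimum can be computed with a standard heap implementation that tracks only codeword depths (a sketch; `huffman_depths` is our name):

```python
import heapq

def huffman_depths(probs):
    """Huffman's algorithm: repeatedly merge the two least probable nodes;
    each merge pushes every symbol in the merged pair one level deeper."""
    depths = [0] * len(probs)
    # Heap entries: (probability, tiebreak counter, list of leaf indices).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            depths[leaf] += 1
        heapq.heappush(heap, (p1 + p2, counter, leaves1 + leaves2))
        counter += 1
    return depths

print(huffman_depths([0.4, 0.3, 0.2, 0.1]))  # → [1, 2, 3, 3]
```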

The compression duel is (X, S, D, c), where S is the set of words and X is the set of binary trees whose leaf set is S. For a tree T ∈ X and word s ∈ S, c(T, s) is the depth of s in T. In Section 4.3 we consider a variant in which not every element of S must appear in the tree.

4.1 Computing an equilibrium

The compression duel can be represented as a bilinear game. In this case, X̄ and Ȳ will be sets of stochastic matrices, where a matrix entry x_{s,j} indicates the probability that item s is placed at depth j. The set X̄ is precisely the set of probability distributions over node depths that are consistent with probability distributions over binary trees. We would like to compute minmax optimal algorithms as in Section 2.2, but we do not have a randomized rounding scheme that maps elements of X̄ to binary trees. Instead, following Section 2.3, we will find approximate minmax strategies by constructing an ε-best response oracle.

The mapping φ is straightforward: it maps a binary tree to its depth profile. Also, the expected payoff of x versus y is Σ_s p_s ( Σ_{j < k} x_{s,j} y_{s,k} + (1/2) Σ_j x_{s,j} y_{s,j} ), which can be written as x^T M y where the matrix M has bounded entries. To apply Lemma 2, we must now provide an ε-best response oracle, which we implement by reducing to a knapsack problem.

Fix ε > 0 and an opponent strategy y. We will reduce the problem of finding a best response to y to the multiple-choice knapsack problem (MCKP), for which there is an FPTAS [13]. In the MCKP, there are lists of items, with each item having a value and a weight. The problem is to choose exactly one item from each list with total weight at most the capacity, with the goal of maximizing total value. Our reduction is as follows. For each word s and each candidate depth j, we create an item in list s whose value is the payoff earned on word s when player 1 places it at depth j, namely p_s ( Σ_{k > j} y_{s,k} + (1/2) y_{s,j} ), and whose weight is 2^{−j}; the capacity is 1. For any binary tree with leaf depths d_1, d_2, …, we have Σ_s 2^{−d_s} ≤ 1 by the Kraft inequality. Thus, any strategy for the compression duel can be mapped to a solution to the MCKP. Likewise, a solution to the MCKP can be mapped in a value-preserving way to a binary tree with the required leaf set, again by the Kraft inequality. This completes the reduction.
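The tree-rebuilding direction of the reduction rests on the constructive half of the Kraft inequality, which can be sketched via the canonical-code construction (`canonical_code` is our helper name; depths in, prefix-free codewords out, and a prefix-free code corresponds directly to a binary tree):

```python
def canonical_code(depths):
    """Turn a depth profile satisfying the Kraft inequality
    sum 2^{-d_i} <= 1 into an explicit prefix-free binary code by
    assigning codewords in order of increasing depth."""
    assert sum(2 ** -d for d in depths) <= 1 + 1e-12   # Kraft inequality
    order = sorted(range(len(depths)), key=lambda i: depths[i])
    codes, c, prev = [None] * len(depths), 0, None
    for i in order:
        if prev is not None:
            c = (c + 1) << (depths[i] - prev)   # next codeword, padded deeper
        codes[i] = format(c, "0%db" % depths[i])
        prev = depths[i]
    return codes

print(canonical_code([1, 2, 3, 3]))  # → ['0', '10', '110', '111']
```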

4.2 Beatability

We will obtain an upper bound on the beatability of the Huffman algorithm. The high-level idea is to choose an arbitrary tree and consider the leaves for which it beats the Huffman tree and vice-versa. We then apply structural properties of trees to limit the relative sizes of these sets of leaves, then use properties of Huffman trees to bound the relative probability that a sampled leaf falls in one set or the other.

Before bounding the beatability of the Huffman algorithm in the No Fail compression model, we review some facts about Huffman trees: nodes with lower probability occur deeper in the tree, and siblings are always paired in order of probability (see, for example, page 402 of Gersting [9]). In what follows, we will suppose that is a Huffman tree.

Fact 1.

If then .

Fact 2.

If and are siblings with , then for every node either or .
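Fact 1, restricted to leaves, is easy to check empirically. The sketch below is a minimal Huffman construction of our own (using Python's heapq), which verifies that a strictly less likely symbol never ends up strictly shallower.

```python
import heapq
from itertools import count

def huffman_depths(probs):
    """Build a Huffman tree over probs; return each symbol's leaf depth."""
    tie = count()  # tie-breaker so heapq never compares the leaf tuples
    heap = [(p, next(tie), (i,)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        for i in left + right:
            depths[i] += 1  # every leaf under a merge sinks one level
        heapq.heappush(heap, (p1 + p2, next(tie), left + right))
    return depths

probs = [0.4, 0.25, 0.2, 0.1, 0.05]
d = huffman_depths(probs)
# Fact 1 (leaf form): a strictly less likely leaf is never strictly shallower.
for i in range(len(probs)):
    for j in range(len(probs)):
        if probs[i] < probs[j]:
            assert d[i] >= d[j]
```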

We next give a bound on the relative probabilities of nodes on any given level of a Huffman tree, subject to the tree not being too “sparse” at the subsequent (deeper) level. Let and .

Lemma 5.

Choose any and nodes such that . If is not the common ancestor of all nodes of depth greater than , then .


Let . By assumption there exists a non-leaf node with , say with children and . Then and by Fact 1, so . This implies that ’s sibling has probability at most by Fact 2, so the parent of has probability at most . Fact 1 then implies that as required. ∎

For any and set of nodes we define the weight of to be . The Kraft inequality for binary trees is . In fact, we have since we can assume each interior node of has two children.
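The equality case of the Kraft inequality can be checked directly on depth profiles: for a full binary tree (every interior node has two children) the leaf weights 2^{-depth} sum to exactly 1, while a tree with a one-child interior node falls strictly short. A small sketch using exact rational arithmetic:

```python
from fractions import Fraction

def kraft_sum(depths):
    """Sum of 2^{-d} over a leaf-depth profile, computed exactly."""
    return sum(Fraction(1, 2 ** d) for d in depths)

# Full binary trees attain the Kraft bound with equality...
assert kraft_sum([1, 2, 2]) == 1       # root with a leaf child and an internal child
assert kraft_sum([2, 2, 2, 2]) == 1    # complete tree of depth 2
# ...while a one-child interior node leaves slack.
assert kraft_sum([1, 2]) < 1
```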

Lemma 6.

Choose such that no node of is a descendant of any other, and suppose for some . Then .


We will show ; the argument for the other inequality is similar. We proceed by induction on . If the result is trivial (since where ). Otherwise, since , there must be at least two nodes of the maximum depth present in . Let and be the two such nodes with smallest probability, say with . Let be the parent of . Then , since the sibling of has weight at least by Fact 2. Also, since and no node of is a descendant of any other. Let . Then , , and no node of is a descendant of any other. Thus, by induction, as required. ∎

We are now ready to show that the beatability of the Huffman algorithm is at most .

Proposition 2.

The beatability of the Huffman algorithm is at most .

Fix and . Let denote the Huffman tree and choose any other tree . Define , . That is, is the set of elements of for which beats , and is the set of elements for which beats . Our goal is to show that , which would imply that .

We first claim that . To see this, write and note that, by the Kraft inequality,


Moreover, , , and (since for all ). Applying these inequalities to (2) implies , completing the claim.

Our approach will be to express and as disjoint unions and such that for all . To this end, we express the quantities and in binary: choose and from such that and . Since is a sum of element weights that are inverse powers of two, we can partition the elements of into disjoint subsets such that for all . Similarly, we can partition into disjoint subsets such that for all .

Let . Note that, since , we must have and .

We first show that for each . Since , we either have or else . In the latter case, suppose first that . Then, since consists of a single leaf and is not the maximum depth of tree , we can apply Lemma 6 and Lemma 5 to conclude . Next suppose that . We would again like to apply Lemma 5, but we must first verify that its conditions are met. Suppose for contradiction that all nodes of depth greater than share a common ancestor of depth . Then, since and , it must be that contains all such nodes, which contradicts the fact that contains at least one node of depth greater than . We conclude that the conditions of Lemma 5 are satisfied for all and at depth , and therefore as required.

We next consider . Let and . We claim that . If then this is certainly true, so suppose otherwise. Then , so contains elements of depth greater than . As in the case , this implies that either contains only a single node (and cannot be the common ancestor of all nodes of depth greater than ), or else not all nodes of depth greater than have a common ancestor of depth . We can therefore apply Lemma 6 and Lemma 5 to conclude .

Since and are disjoint partitions, we conclude that as required. ∎

We now give an example to demonstrate that the Huffman algorithm is at least -beatable for every . For any , consider the probability distribution given by , for all , and . For this distribution, the Huffman tree satisfies for each and . Consider the alternative tree in which and for all . Then will win if any of are chosen, and will tie on . Thus , and hence the Huffman algorithm is -beatable for every .
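The accounting in such examples can be replayed numerically. The sketch below uses an illustrative four-symbol distribution of our own (not the elided one from the text): the rival profile demotes the most likely symbol and promotes the rest, beating the Huffman depths on half the probability mass.

```python
def duel(probs, depths_a, depths_b):
    """P[A encodes the drawn symbol strictly shallower than B], and vice
    versa; ties contribute to neither side."""
    a_wins = sum(p for p, da, db in zip(probs, depths_a, depths_b) if da < db)
    b_wins = sum(p for p, da, db in zip(probs, depths_a, depths_b) if da > db)
    return a_wins, b_wins

probs   = [0.4, 0.3, 0.2, 0.1]   # illustrative distribution (ours)
huffman = [1, 2, 3, 3]           # Huffman leaf depths for probs
rival   = [3, 1, 2, 3]           # demote the likeliest symbol, promote the rest
assert sum(2.0 ** -d for d in rival) == 1.0  # rival is a valid full tree (Kraft)
r_wins, h_wins = duel(probs, rival, huffman)
# rival wins on symbols 2 and 3 (mass 0.5), Huffman only on symbol 1 (mass 0.4)
assert (r_wins, h_wins) == (0.5, 0.4)
```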

We conclude the section by noting that if all probabilities are inverse powers of , the Huffman algorithm is minmax optimal.

Proposition 3.

Suppose there exist integers such that for each . Then the value of the Huffman tree is .


We suppose that there exist integers such that for each . Our goal is to show that the value of the Huffman tree is .

For this set of probabilities, the Huffman tree will set for all . In this case, for all . Choose any other tree , and define sets and as in the proof of Proposition 2. That is, is the set of elements of for which beats , and is the set of elements for which beats . Then, as in Proposition 2, we must have , and hence . Thus . We conclude that the best response to the Huffman tree must be itself, and thus strategy has a value of . ∎

4.3 Variant: allowed failures

We consider a variant of the compression duel in which an algorithm can fail to encode certain elements. If we write for the set of leaves of binary tree , then in the (original) model of compression we require that for all , whereas in the “Fail” model we require only that . If , we will take . The Huffman algorithm is optimal for single-player compression in the Fail model.

We note that our method of computing approximate minmax algorithms carries over to this variant; we need only change our best-response reduction to use a Multiple-Choice Knapsack Problem in which at most one element is chosen from each list. What is different, however, is that the Huffman algorithm is completely beatable in the Fail model. If we take with and , the Huffman tree places each of the elements of at depth . If is the singleton tree that consists of as the root, then .

5 Binary Search Duel

In a binary search duel, and is the set of binary search trees on (i.e. binary trees in which nodes are labeled with elements of in such a way that an in-order traversal visits the elements of in sorted order). Let be a distribution on . Then for and , is the depth of the node labeled by “” in the tree . For single-player binary search with uniform , selecting the median element of as the root node and recursing on the left and right subsets to construct the subtrees is known to be optimal.
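The classic single-player strategy can be sketched directly. The code below (function name ours) builds the median-splitting tree and computes its expected number of queries under the uniform distribution.

```python
def median_bst_depths(lo, hi, depth=0, out=None):
    """Depths assigned by the classic strategy on {lo, ..., hi}:
    query the median, then recurse on the left and right halves."""
    if out is None:
        out = {}
    if lo > hi:
        return out
    mid = (lo + hi) // 2
    out[mid] = depth
    median_bst_depths(lo, mid - 1, depth + 1, out)
    median_bst_depths(mid + 1, hi, depth + 1, out)
    return out

# Expected number of queries (depth + 1) under the uniform distribution on 7 items:
d = median_bst_depths(0, 6)
expected = sum(v + 1 for v in d.values()) / 7
assert expected == 17 / 7   # 1 + 2*2 + 3*4 = 17 total queries over 7 items
```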

The binary search game can be represented as a bilinear duel. In this case, and will be sets of stochastic matrices (as in the case of the compression game) and the entry will represent the probability that item is placed at depth . Of course, not every stochastic matrix is realizable as a distribution on binary search trees (i.e. such that the probability is placed at depth is ). In order to define linear constraints on so that any matrix in is realizable, we will introduce an auxiliary data structure in Section 5.1 called the State-Action Structure that captures the decisions made by a binary search tree. Using these ideas, we will be able to fit the binary search game into the bilinear duel framework introduced in Section 2.2 and hence be able to efficiently compute a Nash equilibrium strategy for each player.

Given a binary search tree , we will write for the depth of in . We will also refer to as the time that finds .

5.1 Computing an equilibrium

In this subsection, we give an algorithm for computing a Nash equilibrium for the binary search game, based on the bilinear duel framework introduced in Section 2.2. We will do this by defining a structure called the State-Action Structure that we can use to represent the decisions made by a binary search tree using only polynomially many variables. The set of valid variable assignments in a State-Action Structure will also be defined by only polynomially many linear constraints, and so these structures will naturally be closed under taking convex combinations. We will demonstrate that the value of playing against any value matrix (see Definition 1) is a linear function of the variables in the State-Action Structure corresponding to . Furthermore, all valid State-Action Structures can be efficiently realized as a distribution on binary search trees which achieves the same expected value.

To apply the bilinear duel framework, we must give a mapping from the space of binary search trees to a convex set defined explicitly by a polynomial number of linear constraints (on a polynomial number of variables). We now give an informal description of : the idea is to represent a binary search tree as a layered graph, in which the nodes (at each depth) alternate in type. One layer represents the current knowledge state of the binary search tree. After making some number of queries (and not yet finding the token), all the information that the binary search tree knows is an interval of values to which the token is confined; we refer to this as the live interval. The next layer of nodes represents an action, i.e. a query to some item in the live interval. Correspondingly, there will be three outgoing edges from an action node, representing the possible replies: the item is to the left, to the right, or at the query location (in which case the outgoing edge will exit to a terminal state).

We will define a flow on this layered graph based on and the distribution on . Flow will represent total probability: the total flow into a state node will represent the probability (under a random choice of according to ) that reaches this state of knowledge (in exactly the corresponding number of queries). The flow out of a state node then represents a decision of which item to query next. Lastly, the flow out of an action node splits according to Bayes’ rule: if all the information revealed so far is that the token is confined to some interval, we can express the probability that (say) our next query to a particular item finds the token as a conditional probability. We can then take convex combinations of these “basic” flows in order to form flows corresponding to distributions on binary search trees.

We give a randomized rounding algorithm to select a random binary search tree based on a flow, in such a way that the marginal probabilities of finding a token at time are exactly what the flow specifies they should be. The idea is that if we choose an outgoing edge for each state node (with probability proportional to the flow), then we have fixed a binary search tree, because we have specified a decision rule for each possible internal state of knowledge. Suppose we were to now select an edge out of each action node (again with probability proportional to the flow) and follow the unique path from the start node to a terminal node. This procedure would be equivalent to searching for a random token chosen according to , using this token to choose outgoing edges from action nodes. It generates a random path from the start node to a terminal node, and is in fact equivalent to sampling a random path in the path decomposition of the flow, proportionally to the flow along the path. Because these two rounding procedures are equivalent, the marginal distribution that results from generating a binary search tree (and choosing a random element to look for) will exactly match the corresponding values of the flow.
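The reason this layered graph has only polynomially many variables is that every knowledge state of a binary search is a contiguous live interval, so on n items there are only O(n^2) state nodes, each with one action per item it contains. A counting sketch (function names ours):

```python
def live_intervals(n):
    """All contiguous intervals [lo, hi] of {0, ..., n-1}: the possible
    knowledge states of a binary search before the token is found."""
    return [(lo, hi) for lo in range(n) for hi in range(lo, n)]

def actions(interval):
    """Queries available in a state: any item inside the live interval."""
    lo, hi = interval
    return list(range(lo, hi + 1))

n = 10
states = live_intervals(n)
assert len(states) == n * (n + 1) // 2   # O(n^2) states, not 2^n
# Total number of action nodes: intervals of length k contribute k actions each.
assert sum(len(actions(s)) for s in states) == \
    sum(k * (n - k + 1) for k in range(1, n + 1))
```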

5.2 Notation

The natural description of the strategy space of the binary search game is exponential (in ), so we will assume that the value of playing any binary search tree against an opponent’s mixed strategy is given to us in a compact form, which we will refer to as a value matrix:

Definition 1.

A value matrix is an matrix in which the entry is interpreted to be the value of finding item at time .

Given any binary search tree , we can define a value matrix so that the expected value of playing any binary search tree against in the binary search game can be written as :

Definition 2.