Power of d Choices with Simple Tabulation^{1}

^{1} This research is supported by Mikkel Thorup's Advanced Grant DFF-0602-02499B from the Danish Council for Independent Research and by his Villum Investigator grant 16582.
Abstract
Suppose that we are to place n balls into n bins sequentially using the d-choice paradigm: for each ball we are given d choices of bins, according to hash functions h_1, …, h_d, and we place the ball in the least loaded of these bins, breaking ties arbitrarily. Our interest is in the number of balls in the fullest bin after all balls have been placed.
Azar et al. [STOC'94] proved that when d ≥ 2 and the hash functions are fully random, the maximum load is at most lg lg n/lg d + O(1) whp (i.e. with probability 1 − O(n^{−γ}) for any choice of γ).
In this paper we suppose that h_1, …, h_d are simple tabulation hash functions, which are simple to implement and can be evaluated in constant time. Generalising a result by Dahlgaard et al. [SODA'16], we show that for an arbitrary constant d ≥ 2 the maximum load is O(lg lg n) whp, and that the expected maximum load is at most lg lg n/lg d + O(1). We further show that by using a simple tie-breaking algorithm introduced by Vöcking [J.ACM'03] the expected maximum load drops to lg lg n/(d lg φ_d) + O(1), where φ_d is the rate of growth of the d-ary Fibonacci numbers. Both of these expected bounds match those of the fully random setting.
The analysis by Dahlgaard et al. relies on a proof by Pătraşcu and Thorup [J.ACM'11] concerning the use of simple tabulation for cuckoo hashing. We require a generalisation to d ≥ 2 hash functions, but the original proof is an 8-page tour de force of ad hoc arguments that do not appear to generalise. Our main technical contribution is a shorter, simpler and more accessible proof of the result by Pătraşcu and Thorup, where the relevant parts generalise nicely to the analysis of d choices.
1 Introduction
Suppose that we are to place n balls sequentially into n bins. If the positions of the balls are chosen independently and uniformly at random, it is well known that the maximum load of any bin is (1 + o(1)) lg n/lg lg n whp^{4} (i.e. with probability 1 − O(n^{−γ}) for arbitrarily large fixed γ). See for example [10] for a precise analysis.

^{4} All logarithms in this paper are binary.
Another allocation scheme is the d-choice paradigm (also called the d-choice balanced allocation scheme) first studied by Azar et al. [1]: the balls are inserted sequentially by, for each ball, choosing d bins according to hash functions h_1, …, h_d and placing the ball in the one of these bins with the least load, breaking ties arbitrarily. Azar et al. [1] showed that using d ≥ 2 independent and fully random hash functions the maximum load surprisingly drops to at most lg lg n/lg d + O(1) whp. This result triggered an extensive study of this and related types of load balancing schemes. Currently the paper by Azar et al. has more than 700 citations by theoreticians and practitioners alike. The reader is referred to the textbook [13] or the recent survey [21] for thorough discussions. Applications are numerous and are surveyed in [11, 12].
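To make the drop in maximum load concrete, here is a small simulation of the d-choice scheme. This is a self-contained sketch using fully random choices (not simple tabulation); all parameter values are illustrative.

```python
import random

def max_load(n, d, seed=0):
    """Place n balls into n bins; each ball draws d uniform bins and
    goes to the least loaded one, breaking ties arbitrarily."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(n):
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: load[b])
        load[best] += 1
    return max(load)
```

For n = 100000, a single choice typically gives a maximum load around lg n/lg lg n, while two choices already bring it down to roughly lg lg n.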
An interesting variant was introduced by Vöcking [20]. Here the bins are divided into d groups, each of size n/d, and for each ball we choose a single bin from each group. The balls are inserted using the d-choice paradigm, but in case of ties we always choose the leftmost of the relevant bins, i.e. the one in the group of the smallest index. Vöcking proved that in this case the maximum load drops further to lg lg n/(d lg φ_d) + O(1) whp.
In this paper we study the use of simple tabulation hashing in the load balancing schemes by Azar et al. and by Vöcking.
1.1 Simple tabulation hashing
Recall that a hash function is a map h : U → R from a key universe U to a range R, chosen with respect to some probability distribution on R^U. If the distribution is uniform we say that h is fully random, but we may impose any probability distribution on R^U.
Simple tabulation hashing was first introduced by Zobrist [23]. In simple tabulation hashing U = Σ^c and R = [2^r] for some r. We identify R with the vector space (Z_2)^r. The keys x ∈ U are viewed as vectors consisting of c characters x = (x_1, …, x_c) with each x_i ∈ Σ. We always assume that c = O(1). The simple tabulation hash function h is defined by

    h(x) = h_1(x_1) ⊕ ⋯ ⊕ h_c(x_c),

where h_1, …, h_c : Σ → R are chosen independently and uniformly at random from R^Σ. Here ⊕ denotes the addition in (Z_2)^r, which can in turn be interpreted as the bitwise XOR of the elements when viewed as bitstrings of length r.
Simple tabulation is trivial to implement, and very efficient as the character tables h_1, …, h_c fit in fast cache. Pătraşcu and Thorup [15] considered the hashing of 32-bit keys divided into 4 8-bit characters, and found it to be as fast as two 64-bit multiplications. On computers with larger cache, it may be faster to use 16-bit characters. We note that the character table lookups can be done in parallel and that the character tables are never changed once initialised.
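As an illustration, a minimal implementation of simple tabulation for 32-bit keys with 4 8-bit characters might look as follows (a sketch; the table sizes and bit widths are the illustrative parameters from above):

```python
import random

def make_simple_tabulation(c=4, char_bits=8, out_bits=32, seed=0):
    """h(x) = h_1(x_1) XOR ... XOR h_c(x_c), where each h_i is a table
    of 2^char_bits random out_bits-bit values, filled once at
    initialisation and never changed afterwards."""
    rng = random.Random(seed)
    tables = [[rng.randrange(1 << out_bits) for _ in range(1 << char_bits)]
              for _ in range(c)]
    mask = (1 << char_bits) - 1

    def h(x):
        v = 0
        for i in range(c):                 # one table lookup per character
            v ^= tables[i][(x >> (char_bits * i)) & mask]
        return v

    return h
```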
In the d-choice paradigm, it is very convenient that all the output bits of simple tabulation are completely independent (the j'th bit of h(x) is the XOR of the j'th bit of each h_i(x_i)). Using (d·r)-bit hash values, h can therefore be viewed as d independent r-bit hash values, and the d choices can thus be computed using a single simple tabulation hash function and therefore only c lookups.
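Concretely, the independence of the output bits means that one tabulation function with d·r output bits yields all d bin choices with c lookups in total. A sketch (the bit-slicing layout is one arbitrary choice):

```python
import random

def make_d_choices(d=2, r=10, c=4, char_bits=8, seed=0):
    """One simple tabulation function with d*r output bits, sliced
    into d independent r-bit values, i.e. d bin indices in [0, 2^r)."""
    rng = random.Random(seed)
    out_bits = d * r
    tables = [[rng.randrange(1 << out_bits) for _ in range(1 << char_bits)]
              for _ in range(c)]
    cmask, rmask = (1 << char_bits) - 1, (1 << r) - 1

    def choices(x):
        v = 0
        for i in range(c):        # only c lookups for all d choices
            v ^= tables[i][(x >> (char_bits * i)) & cmask]
        return [(v >> (r * j)) & rmask for j in range(d)]

    return choices
```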
1.2 Main results
We will study the maximum load when the elements of a fixed set S ⊆ U with |S| = n are distributed into d groups of bins, each of size n/d, using the d-choice paradigm with d independent simple tabulation hash functions h_1, …, h_d. The d choices thus consist of a single bin from each group as in the scheme by Vöcking, but we may identify the codomain of each h_i with [n/d] and think of all h_i as mapping to the same set of bins as in the scheme by Azar et al.
Dahlgaard et al. [7] analysed the case d = 2. They proved that if n balls are distributed into two tables, each consisting of n/2 bins, according to the two-choice paradigm using two independently chosen simple tabulation hash functions, the maximum load of any bin is O(lg lg n) whp. For d = 2 they further provided an example where the maximum load is much larger with probability polynomial in 1/n. Their example generalises to arbitrary fixed d ≥ 2, so we cannot hope for a maximum load of lg lg n/lg d + O(1), or even o(lg lg n), whp when d is constant. However, as we show in Appendix D, their result implies that even with d choices the maximum load is O(lg lg n) whp.
Dahlgaard et al. also proved that the expected maximum load is at most lg lg n + O(1) when d = 2. We prove the following result, which generalises this to arbitrary d ≥ 2.
Theorem 1.
Let d ≥ 2 be a fixed constant. Assume that n balls are distributed into d tables, each of size n/d, according to the d-choice paradigm using d independent simple tabulation hash functions h_1, …, h_d. Then the expected maximum load is at most lg lg n/lg d + O(1).
When d ≥ 2, in the d-choice paradigm we sometimes encounter ties when placing a ball: several bins among the d choices may have the same minimum load. As observed by Vöcking [20], the choice of tie-breaking algorithm is of subtle importance to the maximum load. In the fully random setting, he showed that if we use the Always-Go-Left algorithm, which in case of ties places the ball in the leftmost of the relevant bins, i.e. in the bin in the group of the smallest index, the maximum load drops to lg lg n/(d lg φ_d) + O(1) whp. Here φ_d is the unique positive real solution to the equation x^d = x^{d−1} + ⋯ + x + 1. We prove that his result holds in expectation when using simple tabulation hashing.
Theorem 2.
Suppose that, in the setting of Theorem 1, we use the Always-Go-Left algorithm for tie-breaking. Then the expected maximum load of any bin is at most lg lg n/(d lg φ_d) + O(1).
Note that φ_d is the rate of growth of the so-called d-ary Fibonacci numbers, for example defined by F_d(j) = 0 for j ≤ 0, F_d(1) = 1, and finally F_d(j) = Σ_{i=1}^{d} F_d(j − i) when j > 1. With this definition φ_d = lim_{j→∞} F_d(j)^{1/j}. It is easy to check that (φ_d)_{d≥2} is an increasing sequence converging to 2.
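The quantity φ_d is easy to compute numerically by iterating the recurrence above and taking the ratio of consecutive d-ary Fibonacci numbers (a sketch; the iteration count is an arbitrary choice that comfortably reaches double precision):

```python
def phi(d, iters=300):
    """Growth rate of the d-ary Fibonacci numbers: the ratio of
    consecutive terms converges to the unique positive root of
    x^d = x^(d-1) + ... + x + 1."""
    f = [0] * (d - 1) + [1]   # F(j) = 0 for j <= 0 and F(1) = 1
    for _ in range(iters):
        f.append(sum(f[-d:]))
    return f[-1] / f[-2]
```

For d = 2 this gives the golden ratio φ_2 ≈ 1.618, and the values increase towards 2 as d grows.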
1.3 Technical contributions
In proving Theorem 1 we would ideally like to follow the approach by Dahlgaard et al. [7] for the case d = 2 as closely as possible. They show that if some bin gets load k, then either the hash graph (informally, the 2-uniform hypergraph with an edge {h_1(x), h_2(x)} for each key x) contains a subgraph of bounded size with more edges than nodes, or it contains a certain kind of "witness tree". They then bound the probability that either of these events occurs when k = lg lg n + C for some sufficiently large constant C. Putting k = lg lg n/lg d + C for a sufficiently large constant C, we similarly have three tasks:

Define the d-ary witness trees and argue that if some bin gets load k then either (A): the hash graph contains such a witness tree, or (B): it contains a tight subgraph of bounded size.

Bound the probability of (A).

Bound the probability of (B).
Steps (1) and (2) require intricate arguments, but the techniques are reminiscent of those used by Dahlgaard et al. in [7]. It is not surprising that their arguments generalise to our setting, and we postpone our work on steps (1) and (2) to the appendices.
Our main technical contribution is our work on step (3), as we now describe. Dealing with step (3) in the case d = 2, Dahlgaard et al. used the proof by Pătraşcu and Thorup [15] of the result below concerning the use of simple tabulation for cuckoo hashing^{5}.

^{5} Recall that in cuckoo hashing, as introduced by Pagh and Rodler [14], we are in the 2-choice paradigm but we require that no two balls collide. However, we are allowed to rearrange the balls at any point, and so the feasibility depends only on the choices of the balls.
Theorem 3 (Pătraşcu and Thorup [15]).
Fix ε > 0. Let S ⊆ U be any set of n keys and let m be such that m ≥ (1 + ε)n. With probability 1 − O(n^{−1/3}) the keys of S can be placed in two tables of size m with cuckoo hashing using two independent simple tabulation hash functions h_1 and h_2.
Unfortunately for us, the original proof of Theorem 3 consists of 8 pages of intricate ad hoc arguments that do not seem to generalise to the d-choice setting. Thus we have had to develop an alternative technique for dealing with step (3). As an extra reward, this technique gives a new proof of Theorem 3 which is shorter, simpler and more readable, and we believe it to be our main contribution and of independent interest^{6}.

^{6} We mention in passing that Theorem 3 is best possible: there exists a set of n keys such that with probability Ω(n^{−1/3}) cuckoo hashing is forced to rehash (see [15]).
1.4 Alternatives
We have shown that balanced allocation with d choices with simple tabulation gives the same expected maximum load as with fully random hashing. Simple tabulation uses c lookups in tables of size |Σ| and c − 1 bitwise XORs. The experiments from [15], with 32-bit keys and 8-bit characters, indicate this to be about as fast as two multiplications.
Before comparing with alternative hash functions, we note that we may assume that the universe has size polynomial in n. If it is larger, we can first apply a universal hash function [3] from U to a polynomially sized domain, e.g. [n^3]. This yields an expected number of collisions of o(1). We can now apply any hash function, e.g. simple tabulation, to the reduced keys. Each of the duplicate keys can increase the maximum load by at most one, so the expected maximum load increases by at most o(1). If the universe is [2^w], we can use the extremely simple universal hash function from [8], multiplying the key by a random odd w-bit number and performing a right-shift.
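The universal hash function from [8] mentioned above is the classic multiply-shift scheme. A minimal sketch of it (parameter values are illustrative):

```python
import random

def multiply_shift(w=64, l=20, seed=0):
    """Multiply-shift hashing: multiply a w-bit key by a random odd
    w-bit number, truncate to w bits, and right-shift so that the
    top l bits of the product become the hash value."""
    rng = random.Random(seed)
    a = rng.randrange(1 << w) | 1       # random odd w-bit multiplier
    mask = (1 << w) - 1
    return lambda x: ((a * x) & mask) >> (w - l)
```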
Looking for alternative hash functions, it can be checked that O(lg n)-independence suffices to get the same maximum load bounds as with full randomness, even with high probability. Highly independent hash functions were pioneered by Siegel [17], and the most efficient construction is the double tabulation of Thorup [18]. It gives high independence with constant evaluation time, and with a constant number of characters this would suffice for our purposes. However, looking into the constants suggested in [18], with 16-bit characters for 32-bit keys, we have 11 times as many character table lookups with double tabulation as with simple tabulation and we lose the same factor in space, so this is not nearly as efficient.
Another approach was given by Woelfel [22] using the hash functions he earlier developed with Dietzfelbinger [9]. He analysed Vöcking's Always-Go-Left algorithm, bounding the probability that the maximum load exceeds lg lg n/(d lg φ_d) + O(1). Slightly simplified and translated to match our notation, using k-independent hash functions and lookups in tables of polynomial size, the error probability decreases polynomially with the independence k. Recall that we may assume a universe of polynomial size, so this matches the space of simple tabulation with c = O(1) characters. He needs at least 5-independent hashing to get any nontrivial bound, but the fastest 5-independent hashing is the tabulation scheme of Thorup and Zhang [19], which according to the experiments in [15] is at least twice as slow as simple tabulation, and much more complicated to implement.
A final alternative is to compromise on the constant evaluation time. Reingold et al. [16] have shown that using the hash functions from [4] yields a maximum load of lg lg n + O(1) whp; the functions use few random bits but have superconstant evaluation time. Very recently Chen [5] used a refinement of the hash family from [4], giving a maximum load of at most lg lg n/lg d + O(1) whp, and lg lg n/(d lg φ_d) + O(1) whp using the Always-Go-Left algorithm, again with few random bits but superconstant evaluation time. We are not so concerned with the number of random bits. Our main interest in simple tabulation is in the constant evaluation time with a very low constant.
1.5 Structure of the paper
In Section 2 we provide a few preliminaries for the proofs of our main results. In Section 3 we deal with step (3) described under Technical contributions. To provide some intuition we first give the new proof of Theorem 3. Afterwards, we show how to proceed for general d. In Appendix A we show how to complete step (1). In Appendix B and Appendix C we complete step (2). Finally, we show how to complete the proofs of Theorem 1 and Theorem 2 in Appendix D. In Appendix E we mention a few open problems.
2 Preliminaries
First, recall the definition of a hypergraph:
Definition 4.
A hypergraph is a pair G = (V, E) where V is a set and E is a multiset consisting of subsets of V. The elements of V are called vertices and the elements of E are called edges. We say that G is r-uniform if |e| = r for all e ∈ E.
When using the d-choice paradigm to distribute a set of keys S, there is a natural d-uniform hypergraph associated with the keys of S.
Definition 5.
Given a set S of keys, the hash graph is the d-uniform hypergraph on the set of bins with an edge e_x = {h_1(x), …, h_d(x)} for each x ∈ S.
When working with the hash graph we will hardly ever distinguish between a key x and the corresponding edge e_x, since it is tedious to write e_x = {h_1(x), …, h_d(x)}. Statements such as "(x_1, …, x_k) is a path" or "the keys x and y are adjacent in the hash graph" are examples of this abuse of notation.
Now we discuss the independence of simple tabulation. First recall that a position character is an element (i, a) ∈ [c] × Σ. With this definition a key x = (x_1, …, x_c) can be viewed as the set of position characters {(1, x_1), …, (c, x_c)}, but it is sensible to define h(y) = ⊕_{(i,a) ∈ y} h_i(a) for any set y of position characters.
In the classical notion of independence of Carter and Wegman [3], simple tabulation is not even 4-independent. In fact, the keys (a_1, b_1), (a_1, b_2), (a_2, b_1) and (a_2, b_2) are dependent, the issue being that each position character appears an even number of times, and so the bitwise XOR of the four hash values will be the zero string. As proved by Thorup and Zhang [19], this property in a sense characterises dependence of keys.
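This dependence is easy to observe directly: for any simple tabulation function on two-character keys, the hash values of such a quadruple XOR to zero. A self-contained sketch (the characters a_1, a_2, b_1, b_2 are arbitrary):

```python
import random

def make_tab2(seed=0):
    """Simple tabulation on keys of c = 2 8-bit characters."""
    rng = random.Random(seed)
    t = [[rng.randrange(1 << 32) for _ in range(256)] for _ in range(2)]
    return lambda x: t[0][x & 0xFF] ^ t[1][(x >> 8) & 0xFF]

h = make_tab2()
a1, a2, b1, b2 = 3, 7, 11, 13
quad = [a1 | (b1 << 8), a1 | (b2 << 8), a2 | (b1 << 8), a2 | (b2 << 8)]
x = 0
for key in quad:
    x ^= h(key)
# Each position character occurs exactly twice in the quadruple,
# so x == 0 no matter how the tables were filled.
```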
Lemma 6 (Thorup and Zhang [19]).
The keys x_1, …, x_k are dependent if and only if there exists a nonempty subset I ⊆ {1, …, k} such that each position character in (x_i)_{i ∈ I} appears an even number of times. In this case we have that ⊕_{i ∈ I} h(x_i) = 0.
When each position character appears an even number of times in (x_i)_{i ∈ I} we will write ⊕_{i ∈ I} x_i = ∅, which is natural when we think of a key as a set of position characters and ⊕ as the symmetric difference. As shown by Dahlgaard et al. [6], the characterisation in Lemma 6 can be used to bound the independence of simple tabulation.
Lemma 7 (Dahlgaard et al. [6]).
Let A_1, …, A_{2t} ⊆ U. The number of tuples (x_1, …, x_{2t}) ∈ A_1 × ⋯ × A_{2t} such that x_1 ⊕ ⋯ ⊕ x_{2t} = ∅ is at most^{7} ((2t − 1)!!)^c ∏_{i=1}^{2t} |A_i|^{1/2}.

^{7} Recall the double factorial notation: if k is a positive integer we write k!! for the product of all the positive integers between 1 and k that have the same parity as k.
3 Cuckoo hashing and generalisations
Theorem 8.
Suppose that we are in the setting of Theorem 1, i.e. d ≥ 2 is a fixed constant, S ⊆ U with |S| = n, and h_1, …, h_d are independent simple tabulation hash functions. The probability that the hash graph contains a tight subgraph of bounded size is at most O(n^{−1/3}).
Before giving the full proof, however, we provide the new proof of Theorem 3, which is more readable and illustrates nearly all the main ideas.
Proof of Theorem 3.
It is well known that cuckoo hashing is possible if and only if the hash graph contains no subgraph with more edges than nodes. A minimal such subgraph is called a double cycle and consists of either two cycles connected by a path or two vertices connected by three disjoint paths (see Figure 1). Hence, it suffices to bound the probability that the hash graph contains a double cycle by O(n^{−1/3}).
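The characterisation in the first sentence is easy to check algorithmically: cuckoo hashing is feasible iff every connected component of the (ordinary, 2-uniform) hash graph has at most as many edges as vertices, i.e. at most one cycle. A sketch using union-find (the function name and the edge representation are our own):

```python
def cuckoo_feasible(n_bins, edges):
    """edges: list of pairs (h1(x), h2(x)) over bins 0..n_bins-1.
    Feasible iff no component has more edges than vertices."""
    parent = list(range(n_bins))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    verts, eds = {}, {}
    for x in range(n_bins):
        r = find(x)
        verts[r] = verts.get(r, 0) + 1
    for u, v in edges:
        r = find(u)
        eds[r] = eds.get(r, 0) + 1
    return all(eds.get(r, 0) <= verts[r] for r in verts)
```

A component with a double cycle has one more edge than it has vertices and is rejected; trees and unicyclic components pass.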
We denote by m the number of bins in each of the two groups; thus in this setting m ≥ (1 + ε)n. First of all, we argue that we may assume that the hash graph contains no trail of logarithmic length consisting of independent keys. Indeed, the keys of such a trail of length ℓ can be chosen in at most n^ℓ ways, and since we require ℓ equations of the form h_i(x) = v to be satisfied, and since these events are independent, the probability that the hash graph contains such a trail is by a union bound at most
Now we return to the double cycles. Let D_s denote the event that the hash graph contains a double cycle of size s consisting of independent keys. The graph structure of such a double cycle can be chosen in at most s^2 ways and the keys (including their positions) in at most n^s ways. A double cycle with s edges has s − 1 vertices, so there are 2s equations of the form h_i(x) = v to be satisfied once the vertex placement is fixed, and the probability that the hash graph contains a double cycle consisting of independent keys is at most
The argument above is the same as in the fully random setting. We now turn to the issue of dependencies in the double cycle, starting with the following definition.
Definition 9.
We say that a graph is a trident if it consists of three paths of nonzero lengths meeting at a single vertex (see the non-black part of Figure 2).
We say that a graph is a lasso if it consists of a path that has one end attached to a cycle (see the non-black part of Figure 2).
We claim that in any double cycle D consisting of dependent keys we can find one of the following structures (see Figure 2):

S1: A lasso L consisting of independent keys together with a key x not on L and incident to the degree-1 vertex of L, such that x is dependent on the keys of L.

S2: A trident T consisting of independent keys together with three (not necessarily distinct) keys not on T, each dependent on the keys of T and incident to the vertices of degree 1 on T.
To see this, suppose first that one of the cycles of D consists of independent keys. In this case any maximal lasso of independent keys in D containing the edges of this cycle is an S1.
On the other hand, if all cycles contained in D consist of dependent keys, we pick a vertex of D of degree at least 3 and three incident edges. These 3 edges form an independent trident (simple tabulation is 3-independent), and any maximal independent trident contained in D and containing these edges forms an S2.
Our final step is thus to show that the probability that these structures appear in the hash graph is O(n^{−1/3}).
The lasso (S1):
Since the edges of the lasso form an independent trail, by the initial observation it suffices to bound the probability that the hash graph contains an S1 of bounded size. Fix the size of the lasso. The number of ways to choose the structure of the lasso is linear in its size. Denote the set of independent keys of the lasso by L and let x be the dependent key in S1. By Lemma 6 we may write x as the XOR of keys of L, and fix the number of summands (which is necessarily odd). By Lemma 7, the number of ways to choose these summands (including their order) is bounded, and so is the number of ways to choose their positions in the lasso. The number of ways to choose the remaining keys of L is trivially bounded, as is the probability that the chosen independent keys hash to the correct positions in the lasso. By a union bound, the probability that the hash graph contains an S1 of a fixed size and with a fixed number of summands is maximised for a single summand, and summing over the sizes bounds the probability that the hash graph contains an S1 as required.

The trident (S2):
Fix the size of the trident. The number of ways to choose the structure of the trident is bounded by the square of its size (once we choose the lengths of two of the paths, the length of the third becomes fixed). Let P_1, P_2 and P_3 be the three paths of the trident meeting in v. As before we may assume that each path has bounded length. Let T denote the set of keys of the trident, and write each of the three dependent keys as an XOR of keys of T as provided by Lemma 6. By a proof almost identical to that given for the lasso we may assume that each of these XORs has few summands. Indeed, if one of them has several summands, by Lemma 7 we save a factor when choosing its keys, and this makes up for the fact that the trident contains no cycles and hence that the probability of a fixed set of independent keys hashing to it is larger. The next observation is that we may assume that the three XORs are disjoint. Again the argument is of the same flavour as the one given above: if two of them share a summand, then Lemma 7 bounds the number of ways to choose the keys of the first XOR, and conditioned on this, another application of Lemma 7 with one of the sets a singleton bounds the number of ways to choose the keys of the second, so we again save a sufficient factor; the bound gets even better when more summands are shared. Suppose now that the key of the trident incident to a dependent key is not a summand of the corresponding XOR. Let A be the event that the independent keys of T hash to the trident. Then A is a conjunction of events of the form h_i(x) = v, none of them involving that key^{8}, and a union bound gives that the probability that this can happen is sufficiently small. Thus we may assume that each of the three trident keys incident to the dependent keys is a summand of the corresponding XOR.

^{8} If the relevant keys are adjacent in the trident we do not necessarily get this probability; the event might actually be included in A. This can of course only happen if the keys are adjacent in the trident, so we could impose even further restrictions on the dependencies.
To complete the proof we need one final observation. We can define an equivalence relation on the keys under which two keys are related if their symmetric difference is determined by the summands already chosen, and one of the equivalence classes consists of the summands themselves. We will say that an equivalence class is large if it contains many keys and small otherwise; by Lemma 7, the number of large equivalence classes is small. If g is a simple tabulation hash function, we can well-define a map on the equivalence classes. Since the number of large equivalence classes is small, the probability that a vertex of the trident is hit by the image of a large class is small, and we may thus assume this does not happen. In particular, we may assume that the three keys incident to the dependent keys each represent small equivalence classes, as they are adjacent in the hash graph. Now suppose that one of these keys is not a summand in one of the other XORs. The number of ways to pick the summands of that XOR is bounded by Lemma 7. By doing so we fix the equivalence class of the key but not the key itself, so conditioned on this, the number of ways to pick it is small. The number of ways to choose the remaining keys is bounded trivially, and a union bound gives that the probability of having such a trident is small enough, which suffices. We may thus assume that each of the three keys is a summand in each of the XORs. But then the same arguments apply symmetrically to the remaining keys, reducing to a case which is clearly impossible. ∎

3.1 Proving Theorem 8
Now we will explain how to prove Theorem 8, proceeding much as we did for Theorem 3. Let us say that a d-uniform hypergraph G is tight if |V(G)| ≤ (d − 1)|E(G)| − 1. With this terminology, Theorem 8 states that the probability that the hash graph contains a tight subgraph of bounded size is at most O(n^{−1/3}). It clearly suffices to bound the probability of the existence of a connected tight subgraph.
We start with the following two lemmas. Their counterparts in the proof of Theorem 3 are the bounds on the probability of, respectively, an independent double cycle and an independent lasso with a dependent key attached.
Lemma 10.
Let A_s denote the event that the hash graph contains a tight subgraph of size s consisting of independent keys. Then P(⋃_s A_s) = O(1/n).
Proof.
Let s be fixed. The number of ways to choose the keys of such a subgraph is trivially bounded by n^s, and the number of ways to choose its set of nodes in the hash graph is likewise bounded. For such a choice of nodes, let n_i denote the number of nodes of the subgraph in the i'th group. The probability that one of the keys hashes to its prescribed nodes is then
By the independence of the keys and a union bound we thus have that
as desired. ∎
Lemma 11.
Let B be the event that the hash graph contains a subgraph H of bounded size such that the keys of H are independent but such that there exists a key dependent on the keys of H. Then P(B) = O(n^{−1/3}).
Proof.
Let the size of H be fixed. We want to bound the number of ways to choose the keys of H. By Lemma 6, the dependent key can be written as the XOR of some odd number of keys of H; let this number be fixed for now. Using Lemma 7, we see that the number of ways to choose the keys of H is reduced accordingly. For fixed sizes the probability is thus bounded by
and a union bound over all sizes suffices. ∎
We now generalise the notion of a double cycle starting with the following definition.
Definition 12.
Let G be a d-uniform hypergraph. We say that a sequence e_1, …, e_k of edges of G is a path if |e_i ∩ e_{i+1}| = 1 for 1 ≤ i < k and e_i ∩ e_j = ∅ when |i − j| ≥ 2.
We say that (e_1, …, e_k) is a cycle if k ≥ 3, |e_i ∩ e_{i+1}| = 1 for all i (indices taken modulo k), and e_i ∩ e_j = ∅ whenever e_i and e_j are non-consecutive modulo k.
Next comes the natural extension of the definition of double cycles to uniform hypergraphs.
Definition 13.
A d-uniform hypergraph is called a double cycle if it has either of the following forms (see Figure 3).

D1: It consists of two vertex-disjoint cycles C_1 and C_2 connected by a path P meeting each cycle in a single vertex. We also allow P to have zero length, with C_1 and C_2 sharing a single vertex.

D2: It consists of a cycle C and a path P of nonzero length meeting C exactly in the two end vertices of P. We also allow degenerate versions of this structure.
Note that a double cycle G always has |V(G)| = (d − 1)|E(G)| − 1, so it is tight.
Now assume that the hash graph contains a connected tight subgraph G of bounded size, but that neither of the events of Lemma 10 and Lemma 11 has occurred. In particular, no two edges of G share more than one vertex and no cycle of G consists of independent keys.
It is easy to check that under this assumption G contains at least two cycles. Now pick a cycle C_1 of least possible length. Since simple tabulation is 3-independent, the cycle consists of at least 4 edges. If there exists an edge e not part of C_1 meeting C_1 in at least two vertices, we get a double cycle of type D2. Otherwise we could use e to obtain a shorter cycle than C_1, which is a contradiction^{9}. Using this observation we see that if there is a cycle meeting C_1 in at least two vertices then we can find a D2 in the hash graph. Thus we may assume that any cycle meets C_1 in at most one vertex.

^{9} Here we use that the length of C_1 is at least 4. If C_1 had length 3, the fact that e contains three nodes of C_1 would only guarantee a cycle of length at most 3.
Now pick a cycle C_2 different from C_1 of least possible length. As before we may argue that any edge not part of C_2 meets it in at most one vertex. Picking a shortest path connecting C_1 and C_2 (possibly of length zero) gives a double cycle of type D1.
Next we define tridents in d-uniform hypergraphs (see the non-grey part of Figure 4).
Definition 14.
We call a d-uniform hypergraph a trident if it consists of paths P_1, P_2 and P_3 of nonzero length such that either:

There is a vertex v contained in the first edge of each of the three paths, v is contained in no other edge, and no vertex different from v is contained in more than one of the three paths.

P_1, P_2 and P_3 are vertex disjoint and there is an edge meeting each of them in exactly one end vertex.
Like in the proof of Theorem 3, the existence of a double cycle not containing a cycle of independent keys implies the existence of the following structure (see Figure 4):

S1: A trident consisting of three paths P_1, P_2 and P_3 such that the keys of the trident are independent, together with three, not necessarily distinct, keys not in the trident, extending the paths P_1, P_2 and P_3 away from their common meeting point, each dependent on the keys in the trident.
We can bound the probability of this event almost identically to how we proceeded in the proof of Theorem 3. The only difference arises when making the ultimate reduction, where the corresponding event is in fact possible (see Figure 4). In this case, however, there are three different hash functions witnessing the three dependencies. What is the probability that this can happen? The number of ways to choose the keys is bounded by Lemma 7, and the number of ways to choose the three hash functions is upper bounded by d^3. Since the hash functions are independent, the probability that this can happen in the hash graph is by a union bound at most
which suffices to complete the proof of Theorem 8.
Summary
For now we have spent most of our energy proving Theorem 8. At this point it is perhaps not clear to the reader why it is important, so let us again highlight the steps towards Theorem 1. First of all, let k = lg lg n/lg d + C for a sufficiently large constant C. The steps are:

Show that if some bin has load at least k, then the hash graph contains either a tight subgraph of bounded size or a certain kind of witness tree.

Bound the probability that the hash graph contains such a witness tree.

Bound the probability that the hash graph contains a tight subgraph of bounded size.
We can now cross (3) off the list. In fact, we have a much stronger bound. The remaining steps are dealt with in the appendices as described under Structure of the paper.
As already mentioned the proofs of all the above steps (except step (3)) are intricate but straightforward generalisations of the methods in [7].
References
 [1] Yossi Azar, Andrei Z. Broder, Anna R. Karlin, and Eli Upfal. Balanced allocations. SIAM Journal on Computing, 29(1):180–200, 1999. See also STOC'94.
 [2] Petra Berenbrink, Artur Czumaj, Angelika Steger, and Berthold Vöcking. Balanced allocations: The heavily loaded case. In Proc. 32nd ACM Symposium on Theory of Computing, STOC, pages 745–754, 2000.
 [3] Larry Carter and Mark N. Wegman. Universal classes of hash functions. Journal of Computer and System Sciences, 18(2):143–154, 1979. See also STOC’77.
 [4] L. Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder. Balls and bins: Smaller hash families and faster evaluation. In IEEE 52nd Symposium on Foundations of Computer Science, FOCS, pages 599–608, 2011.
 [5] Xue Chen. Derandomized balanced allocation. CoRR, abs/1702.03375, 2017. Preprint.
 [6] Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg, and Mikkel Thorup. Hashing for statistics over partitions. In Proc. 56th Symposium on Foundations of Computer Science, FOCS, pages 1292–1310, 2015.
 [7] Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg, and Mikkel Thorup. The power of two choices with simple tabulation. In Proc. 27th ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1631–1642, 2016.
 [8] Martin Dietzfelbinger, Torben Hagerup, Jyrki Katajainen, and Martti Penttonen. A reliable randomized algorithm for the closest-pair problem. Journal of Algorithms, 25(1):19–51, 1997.
 [9] Martin Dietzfelbinger and Philipp Woelfel. Almost random graphs with simple hash functions. In Proc. 35th ACM Symposium on Theory of Computing, STOC, pages 629–638, 2003.
 [10] Gaston H. Gonnet. Expected length of the longest probe sequence in hash code searching. Journal of the ACM, 28(2):289–304, April 1981.
 [11] Michael Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094–1104, October 2001.
 [12] Michael Mitzenmacher, Andrea W. Richa, and Ramesh Sitaraman. The power of two random choices: A survey of techniques and results. Handbook of Randomized Computing: volume 1, pages 255–312, 2001.
 [13] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York, NY, USA, 2005.
 [14] Rasmus Pagh and Flemming F. Rodler. Cuckoo hashing. Journal of Algorithms, 51(2):122–144, May 2004. See also ESA’01.
 [15] Mihai Pǎtraşcu and Mikkel Thorup. The power of simple tabulation hashing. Journal of the ACM, 59(3):14:1–14:50, June 2012. Announced at STOC’11.
 [16] Omer Reingold, Ron D. Rothblum, and Udi Wieder. Pseudorandom graphs in data structures. In Proc. 41st International Colloquium on Automata, Languages and Programming, ICALP, pages 943–954, 2014.
 [17] Alan Siegel. On universal classes of extremely random constant-time hash functions. SIAM Journal on Computing, 33(3):505–543, March 2004. See also FOCS'89.
 [18] Mikkel Thorup. Simple tabulation, fast expanders, double tabulation, and high independence. In Proc. 54th Symposium on Foundations of Computer Science, FOCS, pages 90–99, 2013.
 [19] Mikkel Thorup and Yin Zhang. Tabulation-based 5-independent hashing with applications to linear probing and second moment estimation. SIAM Journal on Computing, 41(2):293–331, April 2012. Announced at SODA'04 and ALENEX'10.
 [20] Berthold Vöcking. How asymmetry helps load balancing. Journal of the ACM, 50(4):568–589, July 2003. See also FOCS’99.
 [21] Udi Wieder. Hashing, load balancing and multiple choice. Foundations and Trends in Theoretical Computer Science, 12(3–4):275–379, 2017.
 [22] Philipp Woelfel. Asymmetric balanced allocation with simple hash functions. In Proc. 17th ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 424–433, 2006.
 [23] Albert L. Zobrist. A new hashing method with application for game playing. Tech. Report 88, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, 1970.
Appendix
Appendix A Implications of having a bin of large load
Before we start we will introduce some definitions concerning d-uniform hypergraphs. We say that a d-uniform hypergraph G is a tree if G is connected and |V(G)| = (d − 1)|E(G)| + 1. We say that G is a forest if the connected components of G are trees. The following result and its corollary are easily proven.
Lemma 15.
Let G be a connected d-uniform hypergraph. Then G is a tree if and only if G does not contain a cycle or a pair of distinct edges sharing more than one vertex.
Corollary 16.
A connected subgraph of a forest is a tree.
We define a rooted tree to be a hypertree where we have fixed a root r. We define the depth of a node to be the length of the shortest path from this vertex to the root. Any edge in a rooted tree can be written e = {v, v_1, …, v_{d−1}} such that for some j the vertex v has depth j and each v_i has depth j + 1. With this notation we say that v_1, …, v_{d−1} are children of v. We say that a node is internal if it has at least one child and that it is a leaf if it has no children. Note finally that for each vertex u we have an induced subtree T_u rooted at u. If u has depth j, this tree can be described as the maximal connected subgraph containing u in which each node has depth at least j. If v is a node of T_u we say that u is an ancestor of v, or that v is a descendant of u.
In the next two subsections we introduce the witness trees in the settings of Theorem 1 and Theorem 2 respectively, and show that if some bin has load at least k then the hash graph contains either a tight subgraph of bounded size or such a witness tree.
A.1 The d-nomial trees
To define the witness tree we will need the notion of the ℓ'th load graph of a vertex in the hash graph. It is intuitively a subgraph of the hash graph witnessing how the bin corresponding to the vertex obtained its first ℓ balls.
Definition 17.
Suppose v is a vertex of the hash graph corresponding to a bin of load at least ℓ. We recursively define the ℓ'th load graph of v to be the following d-uniform hypergraph.

If we let .

If ℓ > 0, we let e be the edge corresponding to the ℓ'th key landing in v. Write e = {v, v_1, …, v_{d−1}}. Then the ℓ'th load graph is the graph with
As we are distributing the balls according to the d-choice paradigm, the definition is sensible.
It should be no surprise that if we know that the ℓ'th load graph of a vertex is a tree, then we can actually describe the structure of that tree. We now describe that tree.
Definition 18.
A d-nomial tree of rank ℓ is the rooted d-uniform hypertree defined recursively as follows:

The d-nomial tree of rank 0 is a single node
