Balanced Allocations and Double Hashing

Michael Mitzenmacher (Supported in part by NSF grants CCF-0915922, IIS-0964473, and CNS-1011840.)
Harvard University
School of Engineering and Applied Sciences
michaelm@eecs.harvard.edu
Abstract

With double hashing, for an item $x$, one generates two hash values $f(x)$ and $g(x)$, and then uses combinations $(f(x) + i\,g(x)) \bmod n$ for $i = 0, 1, \ldots$ to generate multiple hash values from the initial two. We show that the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of $d$ choices, as well as in several related variants. We perform an empirical study, and consider multiple theoretical approaches. While several techniques can be used to show asymptotic results for the maximum load, we demonstrate how fluid limit methods explain why the behavior of double hashing and fully random hashing are essentially indistinguishable in this context.


1 Introduction

The standard balanced allocation paradigm works as follows: suppose $n$ balls are sequentially placed into $n$ bins, where each ball is placed in the least loaded of $d \geq 2$ uniform independent choices of the bins. Then the maximum load (that is, the maximum number of balls in a bin) is $\frac{\log \log n}{\log d} + O(1)$, much lower than the $(1+o(1))\frac{\log n}{\log \log n}$ obtained where each ball is placed according to a single uniform choice [3].

The assumption that each ball obtains independent uniform choices is a strong one, and a reasonable question, tackled by several other works, is how much randomness is needed for these types of results (see related work below). Here we consider a novel approach, examining balanced allocations in conjunction with double hashing. In the well-known technique of standard double hashing for open-addressed hash tables, the $i$th ball (with key $x_i$) obtains two hash values, $f(x_i)$ and $g(x_i)$. For a hash table of size $n$, $f(x_i)$ is uniform over $[0, n-1]$ and $g(x_i)$ is relatively prime to $n$. Successive locations $f(x_i)$, $f(x_i) + g(x_i)$, $f(x_i) + 2g(x_i)$, and so on, all taken modulo $n$, are tried until an empty slot is found. As discussed later in this introduction, double hashing is extremely conducive to both hardware and software implementations and is used in many deployed systems.

In our context, we use the double hashing approach somewhat differently. The $i$th ball again obtains two hash values $f(x_i)$ and $g(x_i)$. The $d$ choices for the $i$th ball are then given by $h_j(x_i) = (f(x_i) + j\,g(x_i)) \bmod n$ for $j = 0, 1, \ldots, d-1$, and the ball is placed in the least loaded of these bins. We generally assume that $f(x_i)$ is uniform over $[0, n-1]$, $g(x_i)$ is uniform over all numbers in $[1, n-1]$ relatively prime to $n$, and all hash values are independent. (It is convenient to consider $n$ a prime, or take $n$ to be a power of 2 so that the $g(x_i)$ are uniformly chosen random odd numbers, to ensure the $d$ values are distinct.)
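To make the scheme concrete, here is a minimal sketch of the choice generation in Python (the function name and interface are ours, not from the paper); it assumes $n$ is prime, so that any $g$ in $[1, n-1]$ is relatively prime to $n$ and the $d$ choices are distinct.

```python
import random

def double_hash_choices(n, d, rng=random):
    """Return the d bin choices for one ball under double hashing.

    Assumes n is prime, so every stride g in [1, n-1] is relatively
    prime to n, which guarantees the d choices are distinct.
    """
    f = rng.randrange(n)     # first hash value: uniform over [0, n-1]
    g = rng.randrange(1, n)  # second hash value: uniform stride in [1, n-1]
    return [(f + j * g) % n for j in range(d)]
```

For $n$ a power of 2, one would instead draw $g$ as a uniformly random odd number, e.g. `g = 2 * rng.randrange(n // 2) + 1`.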

It might appear that limiting the space of random choices available to the balls in this way could change the behavior of this random process significantly. We show that this is not the case, both in theory and in practice. Specifically, by “essentially indistinguishable” we mean that, empirically, for any constant load $k$ and sufficiently large $n$, the fraction of bins of load $k$ is well within the difference expected from experimental variance for the two methods; in practice, even for reasonable $n$, one cannot readily distinguish the two methods. By “vanishing” we mean that, analytically, for any constant $k$ the asymptotic fraction of bins of load $k$ for double hashing differs only by $o(1)$ terms from that with fully independent choices, with high probability. A related key result is that $O(\log \log n)$ bounds on the maximum load hold for double hashing as well. Surprisingly, the difference between fully independent choices and choices using double hashing is essentially indistinguishable for sufficiently large $n$ and vanishing asymptotically. (To be clear, we do not mean that there is no difference between double hashing and fully random hashing in this setting; there clearly is, and we note a simple example further in the paper. As we show, analytically in the limit for large $n$ the difference is vanishing (Theorem 8 and Corollary 9), and for finite $n$ the results from our experiments demonstrate the difference is essentially indistinguishable (Appendix A).)

As an initial example of empirical results, Table 1 below shows the fraction of bins of load $k$ for various $k$ taken over 10000 trials, with $n$ balls thrown into $n$ bins using 3 and 4 choices, using both double hashing and fully random hash values (where for our proxy for “random” we utilize the standard approach of simply generating successive pseudorandom values using the drand48 function in C, initially seeded by the time). Most values are given to five decimal places. The performance difference is essentially indistinguishable, well within what one would expect simply from the variance of the sampling process.

Load Fully Random Double Hashing
0 0.17693 0.17691
1 0.64664 0.64670
2 0.17592 0.17589
3 0.00051 0.00051
(a) 3 choices, $n$ balls and $n$ bins
Load Fully Random Double Hashing
0 0.14081 0.14081
1 0.71840 0.71841
2 0.14077 0.14076
3
(b) 4 choices, $n$ balls and $n$ bins
Table 1: An initial example showing the performance of double hashing compared to fully random hashing. In our tables, the row with load $k$ gives the fraction of the bins that have load $k$, averaged over all trials. So over 10000 trials of throwing $n$ balls into $n$ bins using 3 choices and double hashing, the fraction of bins with load 0 was 0.17691.
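A bare-bones version of the experiment behind Table 1 can be written as follows (a sketch under our own conventions: `double_hash_choices` is the helper above, the fully random choices are drawn without replacement, and ties here are broken by lowest index rather than randomly):

```python
import random
from collections import Counter

def run_trial(n, d, use_double_hashing):
    """Throw n balls into n bins, each placed in the least loaded of d choices."""
    loads = [0] * n
    for _ in range(n):
        if use_double_hashing:
            choices = double_hash_choices(n, d)
        else:
            choices = random.sample(range(n), d)  # d distinct uniform bins
        best = min(choices, key=lambda b: loads[b])  # least loaded choice
        loads[best] += 1
    return Counter(loads)  # maps load -> number of bins with that load

def load_fractions(n, d, trials, use_double_hashing):
    """Average, over trials, the fraction of bins with each load."""
    totals = Counter()
    for _ in range(trials):
        totals.update(run_trial(n, d, use_double_hashing))
    return {load: count / (n * trials) for load, count in sorted(totals.items())}
```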

More extensive empirical results appear in Appendix A. In particular, we also consider two extensions to the standard paradigm: Vöcking’s extension (sometimes called $d$-left hashing), where the bins are split into $d$ subtables of size $n/d$ laid out left to right, the $d$ choices consist of one uniform independent choice in each subtable, and ties for the least loaded bin are broken to the left [39]; and the continuous variation, where the bins represent queues, and the balls represent customers that arrive as a Poisson process and have exponentially distributed service requirements [27]. We again find empirically that replacing fully random choices with double hashing leads to essentially indistinguishable results in practice. (We encourage the reader to examine these experimental results. However, because we recognize some readers are as a rule uninterested in experimental results, we have moved them to an appendix.)

In this paper, we provide theoretical results explaining why this would be the case. There are multiple methods available that can yield $O(\log \log n)$ bounds on the maximum load when $n$ balls are thrown into $n$ bins in the setting of fully random choices. We therefore first demonstrate how some previously used methods, including the layered induction approach of [3] and the witness tree approach of [39], readily yield $O(\log \log n)$ bounds; this asymptotic behavior is, arguably, unsurprising (at least in hindsight). We then examine the key question of why the difference in empirical results is vanishing, a much stronger requirement. For the case of fully random choices, the asymptotic fraction of bins of each possible load can be determined using fluid limit methods that yield a family of differential equations describing the process behavior [27]. It is not a priori clear, however, why the method of differential equations should necessarily apply when using double hashing, and the primary result of this paper is to explain why it in fact applies. The argument depends technically on the idea that the “history” engendered by double hashing in place of fully random hash functions has only a vanishing (that is, $o(1)$) effect on the differential equations that correspond to the limiting behavior of the bin loads. We believe this resolution suggests that double hashing will be found to obtain the same results as fully random hashing in additional hash-based structures, which may be important in practical settings.

We argue these results are important for multiple reasons. First, we believe the fact that moving from fully random hashing to double hashing does not change performance for these particular balls and bins problems is interesting in its own right. But it also has practical applications; multiple-choice hashing is used in several hardware systems (such as routers), and double hashing both requires less (pseudo-)randomness and is extremely conducive to implementation in hardware [11, 17]. (As we discuss below, it may also be useful in software systems.) Both the fact that double hashing does not change performance, and the fact that one can very precisely determine the performance of double hashing for load balancing simply using the same fluid limit equations as have been used under the assumption of fully random hashing, are therefore of major importance for designing systems that use multiple-choice methods (and convincing system designers to use them). Finally, as mentioned, these results suggest that using double hashing in place of fully random choices may similarly yield the same performance in other settings that make use of multiple hash functions, such as for cuckoo hashing or in error-correcting codes, offering the same potential benefits for these problems. We have explored this issue further in a subsequent (albeit already published) paper [30], where there remain further open questions. In particular, we have not yet found how to use the fluid limit analysis used here for these other problems.

Finally, it has been remarked to us that all of our arguments here apply beyond double hashing; any hashing scheme where the choices for a ball are made so that they are pairwise independent and uniform would yield the same results by the same arguments. That is, if for a given ball with choices $h_1, \ldots, h_d$, for any distinct bins $u$ and $v$ we have, for all $i \neq j$:
$$\Pr[h_i = u \text{ and } h_j = v] = \frac{1 + o(1)}{n^2},$$
then our results apply. Unfortunately, we do not know of any actual scheme besides double hashing in practical use with these properties; hence we focus on double hashing throughout.
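For a small prime $n$, the pairwise property of double hashing can be checked exhaustively: for any fixed pair of positions $i \neq j$, each ordered pair of distinct bins $(u, v)$ arises from exactly one seed pair $(f, g)$. The sketch below is our own check, not part of the paper's argument.

```python
from collections import Counter

def position_pair_counts(n, i, j):
    """For prime n, count how often each ordered pair of distinct bins (u, v)
    appears as (choice i, choice j) over all n*(n-1) seed pairs (f, g)."""
    counts = Counter()
    for f in range(n):
        for g in range(1, n):  # n prime: every g in [1, n-1] is a valid stride
            u = (f + i * g) % n
            v = (f + j * g) % n
            counts[(u, v)] += 1
    return counts

# Every ordered pair of distinct bins appears exactly once, so each pair of
# positions is uniform over distinct bins: probability 1/(n*(n-1)).
assert set(position_pair_counts(11, 0, 2).values()) == {1}
```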

1.1 Related Work

The balanced allocations paradigm, or the power of two choices, has been the subject of a great deal of work, both in the discrete balls and bins setting and in the queueing theoretic setting. See, for example, the survey articles [21, 29] for references and applications.

Several recent works have considered hashing variations that utilize less randomness in place of assuming perfectly random hash functions; indeed, there is a long history of work on universal hash functions [9], and more recently min-wise independent hashing [8]. Specific recent related works include results on standard one-choice balls and bins problems [10], hashing with linear probing under limited independence [34], and tabulation hashing [35]; other works involving balls and bins with less randomness include [15, 36]. As another example, Woelfel shows that a variation of Vöcking’s results holds using simple hash functions built from a collection of $k$-wise independent hash functions for small $k$ together with a random vector requiring sublinear space [41].

Another related work in the balls and bins setting is the paper of Kenthapadi and Panigrahy [19], who consider a setting where balls are not allowed to choose any two bins, but are instead forced to choose two bins corresponding to an edge of an underlying random graph. In the same paper, they also show that two random choices that together yield $d$ bins are sufficient for bounds on the maximum load similar to those one obtains with $d$ fully random choices, where in their case each random choice gives a contiguous block of $d/2$ bins.

Interestingly, the classical question regarding the average length of an unsuccessful search sequence for standard double hashing in an open-address hash table, when the table load $\alpha$ is a constant, has been shown to have the answer $\frac{1}{1-\alpha}$ up to lower order terms, showing that double hashing has essentially the same performance as random probing (where each ball would have its own random permutation of the bins to examine, in order, until finding an empty bin) when using traditional hash tables [6, 16, 24]. These results appear to have been derived using different techniques than we utilize here; it could be worthwhile to construct a general analysis that applies to both schemes.

A few papers have recently suggested using double hashing in schemes where one would ordinarily use multiple hash functions, and have shown little or no loss in performance. For Bloom filters, Kirsch and Mitzenmacher [20], starting from the empirical analysis by Dillinger and Manolios [13], prove that using double hashing has negligible effects on Bloom filter performance. This result is closest in spirit to our current work; indeed, the type of analysis here can be used to provide an alternative argument for this phenomenon, although the case of Bloom filters is inherently simpler. Several available online implementations of Bloom filters now use this approach, suggesting that the double hashing approach can be significantly beneficial in software as well as hardware implementations. (See, for example, http://leveldb.googlecode.com/svn/trunk/util/bloom.cc, https://github.com/armon/bloomd, and http://hackage.haskell.org/packages/archive/bloomfilter/1.0/doc/html/bloomfilter.txt.) Bachrach and Porat use double hashing in a variant of min-wise independent sketches [4]. The reduction in randomness stemming from using double hashing to generate multiple hash values can be useful in other contexts. For example, it is used in [33] to improve results where pairwise independent hash functions are sufficient for suitably random data; using double hashing requires fewer hash values to be generated (two in place of a larger number), which means less randomness in the data is required. Finally, in work subsequent to the original draft of this paper [30], we have empirically examined double hashing for other algorithms such as cuckoo hashing, and again found essentially no empirical difference between fully random hashing and double hashing in this and other contexts. However, theoretical results for these settings that prove this lack of difference are as of yet very limited.

Arguably, the main difference between our work and other related work is that in our setting with double hashing we find the empirical results are essentially indistinguishable in practice, and we focus on examining this phenomenon.

2 Initial Theoretical Results

We now consider formal arguments for the excellent behavior of double hashing. We begin with some simpler but coarser arguments that have been previously used in multiple-choice hashing settings, based on majorization and witness trees. While our witness tree argument dominates our majorization argument, we present both, as they may be useful in considering future variations, and they highlight how these techniques apply in these settings. In the following section, we then consider the fluid limit methodology, which best captures the result we desire here, namely that the load distributions are essentially the same with fully random hashing and double hashing. However, the fluid limit methodology captures results about the fraction of bins with load $k$ for every constant value $k$, and does not readily provide $O(\log \log n)$ maximum load bounds (without specialized additional work, which often depends on the techniques used below). The reader conversant with balanced allocation results utilizing majorization and witness trees may choose to skip this section.

2.1 A Majorization Argument

We first note that using double hashing with two choices and using random hashing with two distinct hash values per ball are equivalent. With this we can provide a simple argument, showing the seemingly obvious fact that using double hashing with $d \geq 2$ choices is at least as good as using 2 fully random choices. This in turn shows that double hashing maintains an $O(\log \log n)$ maximum load in the standard balls and bins setting.

Our approach uses a standard majorization and coupling argument, where the coupling links the random choices made by the processes when using double hashing and using random hashing while maintaining the fidelity of both individual processes. (See, e.g., [3, 5], or [26] for more background on majorization.) Let $x = (x_1, \ldots, x_n)$ be a vector with elements in non-increasing order, so $x_1 \geq x_2 \geq \cdots \geq x_n$, and similarly for $y$. We say that $x$ majorizes $y$ if $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$ and, for $1 \leq j \leq n$, $\sum_{i=1}^{j} x_i \geq \sum_{i=1}^{j} y_i$. For two Markovian processes $X$ and $Y$, we say that $X$ stochastically majorizes $Y$ if there is a coupling of the processes $X$ and $Y$ so that at each step under the coupling the vector representing the state of $X$ majorizes the vector representing the state of $Y$. We note that because we use the (sorted) loads of the bins as the state, the balls and bins processes we consider are Markovian.
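The definition translates directly into a small helper (ours, for intuition only), which makes the equal-sum and prefix-sum conditions explicit:

```python
def majorizes(x, y):
    """Return True if x majorizes y: the sorted (non-increasing) prefix sums
    of x dominate those of y, and the total sums are equal."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if sum(xs) != sum(ys):
        return False
    prefix_x = prefix_y = 0
    for a, b in zip(xs, ys):
        prefix_x += a
        prefix_y += b
        if prefix_x < prefix_y:
            return False
    return True

# (3, 1, 0) majorizes (2, 1, 1): prefix sums (3, 4, 4) vs. (2, 3, 4).
assert majorizes([3, 1, 0], [2, 1, 1])
```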

We make use of the following simple and standard lemma. (See, for example, [3, Lemma 3.4].)

Lemma 1

If $x$ majorizes $y$ for vectors $x$, $y$ of positive integers sorted in non-increasing order, and $e_i$ represents a unit vector with a 1 in the $i$th entry and 0 elsewhere, then $x + e_i$ majorizes $y + e_j$ for $i \leq j$.

Theorem 2

Let process $P$ be the process where $n$ balls are placed into $n$ bins, each ball using two distinct random choices, and let process $Q$ be the corresponding scheme with $d$ choices using double hashing. Then $P$ stochastically majorizes $Q$.

Proof.

At each time step, we let $p$ and $q$ be the vectors corresponding to the loads in the two processes, sorted in decreasing order. We inductively claim that $p$ majorizes $q$ at all time steps under the following coupling of the processes: if the $a$th and $b$th bins in the sorted order for $P$ are chosen, then the $a$th and $b$th bins in the sorted order for $Q$ are chosen as the first two choices for $Q$, and the remaining choices are determined by double hashing. That is, the hash choices for $Q$ are such that the gap between successive choices equals the gap $g$ between the first two choices, so the choices are $h_1$, $h_1 + g$, $h_1 + 2g$, $h_1 + 3g$, and so on (modulo the size of the table). Clearly $p$ majorizes $q$ at the start, as the vectors are equal. It is simple to check that this coupling maintains the majorization using Lemma 1, as the coordinate that increases in $q$ at each step is at least as deep in the sorted order as the coordinate that increases in $p$.

As the process with two random choices stochastically majorizes the process with $d$ choices from double hashing under this coupling, we see that
$$\Pr\left[\max_i q_i \geq c\right] \leq \Pr\left[\max_i p_i \geq c\right]$$
for any value $c$. Since the seminal result of [3] shows that using two choices gives a maximum load of $\frac{\log \log n}{\log 2} + O(1)$ with high probability, we therefore have this corollary.

Corollary 3

The maximum load using $d$ choices and double hashing for $n$ balls and $n$ bins is at most $\frac{\log \log n}{\log 2} + O(1)$ with high probability.

We note that similarly, when using double hashing, we can show that using $d$ choices stochastically majorizes using $d + 1$ choices.

2.2 A Witness Tree Argument

It is well known that $d$ choices performs better than 2 choices for multiple-choice hashing; while the maximum load remains $O(\log \log n)$, the constant factor depends on $d$, and this can be important in practice. Our simple majorization argument does not provide this type of bound, so to achieve it, we next utilize the witness tree approach, following closely the work of Vöcking [39]. (See also [38] for related arguments.) While we discuss the case of insertions only, the arguments apply in settings with deletions as well; see [39] for more details. Similarly, here we consider only the standard balls and bins setting of $n$ balls and $n$ bins with $d$ being a constant, but similar results for $cn$ balls for some constant $c$ can also be derived by simply changing the “base case” at the leaves of the witness tree accordingly, and similar results for Vöcking’s scheme can be derived by using the “unbalanced” witness tree used by Vöcking [39] in place of the balanced one.

These methods allow us to prove statements of the following form:

Theorem 4

Suppose $n$ balls are placed into $n$ bins using the balanced allocation scheme with double hashing as described above. Then with $d$ choices the maximum load is $\frac{\log \log n}{\log d} + O(d)$ with high probability.

We note that, while Vöcking obtains a bound of $\frac{\log \log n}{\log d} + O(1)$, we have an additive $O(d)$ term that appears necessary to handle the leaves in our witness tree. (A similar issue appears to arise in [41].) For constant $d$ these are asymptotically the same; however, an additive $O(1)$ term is more pleasing both theoretically and potentially in practice. How we deviate from Vöcking’s argument is explained below.

Proof.

Following [39], we define a witness tree, which is a tree-ordered (multi)set of balls. Each node in the tree represents a ball, inserted at a certain time; the $t$th inserted ball corresponds to time $t$ in the natural way. The ball represented by the root is placed last among the balls in the tree, and a child node must have been inserted at a time previous to its parent. A leaf node in Vöcking’s argument is activated if each of the $d$ locations of the corresponding ball contains at least three balls when it is inserted. An edge $(u, v)$ is activated if, when $v$ is the $j$th child of $u$, the $j$th location of $v$’s ball is the same as one of the locations of $u$’s ball. A witness tree is activated if all of its leaf nodes and edges are activated.

Following Vöcking’s approach, we first bound the probability that a witness tree is activated for the simpler case where the nodes of the witness trees represent distinct balls. The argument then can be generalized to deal with witness trees where the same ball may appear multiple times. As this follows straightforwardly using the technical approach in [39], we do not provide the full argument here.

We now explain where we must deviate from Vöcking’s argument. The original argument utilizes the fact that, deterministically, at most $n/3$ bins have load at least 3 at any point. As leaf nodes in Vöcking’s argument are required to have all choices of bins have load at least 3 to be activated, a leaf node corresponding to a ball with $d$ choices of bins is activated with probability at most $3^{-d}$, and a collection of $\ell$ leaf nodes are all activated with probability at most $3^{-d\ell}$. However, this argument will not apply in our case, because the choices of bins are not independent when using double hashing, and depending on which bins are loaded, we can obtain very different results. For example, consider a case where the first $n/3$ bins have load at least 3. The fraction of choices using double hashing where all bins have load at least 3 is then significantly more than $3^{-d}$, which would be the probability if bins with load 3 were randomly distributed. Indeed, for a newly placed ball $x$, if $f(x)$ and $g(x)$ are both less than $n/(3d)$, all $d$ choices will have load at least 3, and this occurs with probability roughly $1/(9d^2)$. While such a configuration is unlikely, the deterministic argument used by Vöcking no longer applies.

We modify the argument to deal with this issue. In our double hashing setting, let us call a leaf active if either

  • Some ball in the past has two or more of the $d$ bins at this leaf among its choices.

  • All the bins chosen by this ball have previously been chosen by at least $\beta$ previous balls each, for a threshold $\beta = O(d)$ determined below.

The probability that any previous ball has hit two or more of the bins at the leaf is $O(d^4/n)$: there are $\binom{d}{2}$ pairs of bins from the $d$ choices at the leaf; at most $d^2$ pairs of positions within the choices where that pair could occur in any previous ball; at most $n$ possible previous balls; and each bad choice that leads that previous ball to have a specific pair of bins in a specific pair of positions occurs with probability $O(1/n^2)$. Once we exclude this case, we can consider only balls that hit at most one of the bins associated with the leaf.

For any time corresponding to a leaf, we bound the probability that any specific bin has been chosen by $\beta$ or more previous balls. We note by symmetry that the probability any specific ball chooses a specific bin is at most $d/n$. The probability in question is then at most
$$\binom{n}{\beta}\left(\frac{d}{n}\right)^{\beta} \leq \left(\frac{ed}{\beta}\right)^{\beta},$$
which is less than $1/4$ whenever $\beta \geq 2ed$. Further, once we consider the case of previous balls that choose two or more bins at this leaf separately, the events that the bins chosen by this ball have previously been chosen by $\beta$ previous balls each are negatively correlated. Hence, we find the probability a specific leaf node is activated is less than $4^{-d}$.

However, following [39], we need to consider a collection of $\ell$ leaves and show the probability that they are all active is at most $c^{-d\ell}$ for a constant $c > 1$. We will do this below by using Azuma’s inequality to show that the fraction of choices of hash values from double hashing that lead to an activated leaf is less than $2 \cdot 4^{-d}$ with high probability. As balls corresponding to leaves independently choose their hash values, this result suffices.

Let $B_t$ be the set of pairs of hash values $(f, g)$ that generate values that would activate a leaf at time $t$. From the above, we have $E[|B_t|] \leq 4^{-d} n \varphi(n) \leq 4^{-d} n^2$, where $\varphi(n)$ counts the valid strides. Consider the Doob martingale obtained by revealing the bins for the balls one at a time. Each ball can change the final value of $|B_t|$ by at most $2dn$, since the bin where any ball is placed is involved in fewer than $dn$ choices of pairs. Azuma’s inequality (e.g., [31, Section 12.5]) then yields
$$\Pr\left[|B_t| \geq E[|B_t|] + \epsilon n^2\right] \leq e^{-\gamma n}$$
for a constant $\gamma$ that depends on $\epsilon$ and $d$. It follows readily that the fraction of pairs of hash values that activate a leaf is at most $2 \cdot 4^{-d}$ with very high probability throughout the process; by conditioning on this event, we can continue with Vöcking’s argument. (The conditioning only adds an exponentially small additional probability to the probability the maximum load exceeds our bound.)

Specifically, we note that for there to be a bin of load at least $h + \beta + 2$, there must be an activated witness tree of depth $h$. We can bound the probability that some witness tree (with distinct balls) of depth $h$ is activated. The probability an edge is activated is the probability a ball chooses a specific bin, which as previously noted is at most $d/n$. As all balls are distinct, the probability that a witness tree of $k$ balls has all edges activated is at most $(d/n)^{k-1}$, and as we have shown, the probability of all leaves being activated is bounded above by $\left(2 \cdot 4^{-d}\right)^{\ell}$, where $\ell$ is the number of leaves. Following [39], as there are at most $n^k$ ways of choosing the balls for the witness tree, the probability that there exists an active witness tree is at most
$$n^k \left(\frac{d}{n}\right)^{k-1} \left(2 \cdot 4^{-d}\right)^{\ell} = n\, d^{k-1} \left(2 \cdot 4^{-d}\right)^{\ell}.$$
Hence choosing $h = \frac{\log \log n}{\log d} + O(1)$ guarantees a maximum load of $\frac{\log \log n}{\log d} + O(d)$ with probability $1 - o(1)$.

3 The Fluid Limit Argument

We now consider the fluid limit approach of [28]. (A useful survey of this approach appears in [12].) The fluid limit approach gives equations that describe the asymptotic fraction of bins with each possible integer load, and concentration around these values follows from martingale bounds (e.g., [14, 22, 42]). Values can easily be determined numerically, and prove highly accurate even for small numbers of balls and bins. We show that the same equations apply even in the setting of double hashing, giving a theoretical justification for our empirical findings in Appendix A. This approach can be easily extended to other multiple choice processes (such as Vöcking’s scheme and the queueing setting). We emphasize that the fluid limit approach does not, in itself, naturally yield bounds of the type that the maximum load is $O(\log \log n)$ with high probability; rather, it says that for any constant integer $k$, the fraction of bins of load $k$ is concentrated around the value obtained by the fluid limit. One generally has to do additional work – generally similar in nature to the arguments in the preceding sections – to obtain such bounds. As we already have an $O(\log \log n)$ bound from alternative techniques, here our focus is on showing the fluid limits are the same under double hashing and fully random hashing, which explains our empirical findings. (We show one could achieve an $O(\log \log n)$ bound from the results of this section – actually a bound of $\frac{\log \log n}{\log d} + O(1)$ – in Appendix B.)

The standard balls and bins fluid limit argument runs as follows. Let $Y_j(m)$ be a random variable denoting the number of bins with load at least $j$ after $m$ balls have been thrown; hence $Y_0(m) = n$ for all $m$ and $Y_j(0) = 0$ for all $j \geq 1$. For $Y_j$ to increase when a ball is thrown, all of its choices must have load at least $j - 1$, but not all of them can have load at least $j$. Hence for $j \geq 1$
$$E[Y_j(m+1) - Y_j(m)] = \left(\frac{Y_{j-1}(m)}{n}\right)^d - \left(\frac{Y_j(m)}{n}\right)^d.$$

Let $t = m/n$ and $s_j(t) = Y_j(tn)/n$. Then the above can be written as:
$$E\left[s_j\!\left(t + \tfrac{1}{n}\right) - s_j(t)\right] = \frac{1}{n}\left(s_{j-1}(t)^d - s_j(t)^d\right).$$

In the limit as $n$ grows, we can view the limiting version of the above equation as
$$\frac{ds_j}{dt} = s_{j-1}^d - s_j^d,$$
where we remove the explicit dependence on $t$ on the right hand side as the meaning is clear.

Again, previous works [14, 22, 42] justify how the Markovian load balancing process converges to the solution of the differential equations. (In particular, the technical conditions corresponding to Wormald’s result [42, Theorem 1] hold, and this theorem gives the appropriate convergence; we explain further with our Theorem 8.) Specifically, it follows from Wormald’s theorem [42, Theorem 1] that
$$\left|\frac{Y_j(tn)}{n} - s_j(t)\right| = o(1)$$
with probability $1 - o(1)$; that is, the fraction of bins of load at least $j$ is within $o(1)$ of the result of the limiting differential equations with probability $1 - o(1)$. These equations allow us to compute the limiting fraction of bins of each load numerically, and these results closely match our simulations, as for example shown in Table 2.

Tail load Fluid Limit Fully Random Double Hashing
≥ 1 0.8231 0.8231 0.8231
≥ 2 0.1765 0.1764 0.1764
≥ 3 0.00051 0.00051 0.00051
Table 2: 3 choices, fluid limit ($n \to \infty$) vs. $n$ balls and $n$ bins
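The fluid limit column of Table 2 can be reproduced by integrating the differential equations numerically; a simple Euler scheme suffices (a sketch; the step size and load truncation are arbitrary choices of ours):

```python
def fluid_limit_tails(d, t_max=1.0, max_load=12, dt=1e-5):
    """Integrate ds_j/dt = s_{j-1}^d - s_j^d with s_0 = 1 and s_j(0) = 0 for
    j >= 1; s[j] approximates the limiting fraction of bins with load >= j."""
    s = [1.0] + [0.0] * max_load
    for _ in range(int(t_max / dt)):
        for j in range(max_load, 0, -1):  # s[j-1] is still the previous-step value
            s[j] += dt * (s[j - 1] ** d - s[j] ** d)
    return s

s = fluid_limit_tails(d=3)
print(s[1], s[2], s[3])  # should be close to 0.8231, 0.1765, 0.00051 (Table 2)
```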

Given our empirical results, it is natural to conclude that these differential equations must also describe the behavior of the process when we use double hashing in place of fully random hashing. The question is how to justify this, as the equations were derived utilizing the independence of the choices, which does not hold for double hashing.

We now prove that, for a constant number of choices $d$, constant load values $k$, and a constant time $T$ (corresponding to $Tn$ total balls), the loads of the bins chosen by double hashing behave essentially the same as though the choices were independent, in that, with high probability over the entire course of the process,
$$\Pr[\text{all $d$ choices have load at least $k$}] = s_k^d + o(1);$$
that is, the gap is only in $o(1)$ terms. This suffices for [42, Theorem 1] (specifically, condition (ii) of [42, Theorem 1] allows such $o(1)$ differences). The result is that double hashing has no effect on the fluid limit analysis. (Again, we emphasize our restriction to constant choices $d$, constant load values $k$, and constant time parameter $T$.) Our approach is inspired by the work of Bramson, Lu, and Prabhakar [7], who use a similar approach to obtain asymptotic independence results in the queueing setting. However, there the concern was with limiting independence in equilibrium with general service time distributions, and the choices of queues were assumed to be purely random. We show that this methodology can be applied to the double hashing setting.

Lemma 5

When using double hashing, with high probability over the entire course of the process,
$$\Pr[\text{all $d$ choices have load at least $k$}] = s_k^d + o(1).$$

Proof.

We refer to the ancestry list of a bin $b$ at time $t$ as follows. The list begins with the balls $x_1, x_2, \ldots, x_r$ that have had bin $b$ as one of their choices, where $r$ is the number of balls that have chosen bin $b$ up to time $t$. Note that each such ball is associated with a corresponding time and with $d - 1$ other bin choices. For each such ball, we recursively add the list of balls that have chosen each of those other bins up to the corresponding time, and so on recursively. We also think of the bins associated with these balls as being part of the ancestry list, where the meaning is clear. It is clear that the ancestry list gives all the necessary information to determine the load of the bin $b$ at time $t$ (assuming the information regarding choices is presented in such a way as to include how placement will occur in case of ties; e.g., the bin choices are ordered by priority). We note that the ancestry list holds more information (and more balls and bins) than the witness trees used by Vöcking (and by us in Section 2.2).

In what follows, let us assume $n$ is prime for convenience (we explain the differences if $n$ is not prime in parenthetical remarks below). We claim that for asymptotic independence of the load among a collection of $d$ bins at a specific time when a new ball is placed, it suffices to show that these ancestry lists are small. Specifically, we start by showing in Lemma 6 that all ancestry lists contain only $O(\log n)$ associated bins with high probability. We then show as a consequence in Lemma 7 that the ancestry lists of the bins associated with a newly placed ball have no bins in common with high probability. This last fact allows us to complete the main lemma, Lemma 5.

Lemma 6

The number of bins in the ancestry list of every bin after the first $Tn$ steps is at most $O(\log n)$ with high probability.

Proof.

We view the growth of the ancestry list as a variation of a standard branching process, by going backward in time. Let $Z_0$ correspond to the size of the initial ancestry list of a bin $b$, consisting of the bin itself, so $Z_0 = 1$. If the last ball thrown has $b$ as one of its choices, then $d - 1$ additional bins are added to the ancestry list, and we then have $Z_1 = Z_0 + d - 1$; otherwise we have no change and $Z_1 = Z_0$. (Note that when measuring the size of the ancestry list in bins, each bin is counted only once, even if it is associated with multiple balls.) If the second-to-last ball thrown has a bin in the ancestry list as one of its choices, then (at most) $d - 1$ bins are added to the ancestry list, and we set $Z_2 = Z_1 + d - 1$; otherwise, we have $Z_2 = Z_1$. We continue to add to the ancestry list with, at each step, $Z_{j+1} = Z_j + d - 1$ or $Z_{j+1} = Z_j$, depending on whether the $(j+1)$st ball from the end has one of its choices among the bins on the ancestry list, or not.

This process is almost equivalent to a Galton-Watson branching process where in each generation, each existing element produces 1 offspring with probability $1 - d/n$ (or equivalently, moves itself into the next generation), or produces $d$ offspring (adding $d - 1$ new elements) with probability $d/n$. The one issue is that the productions of offspring are not independent events; at most $d - 1$ elements are added at each step in the process. (There is also the issue that perhaps fewer than $d - 1$ elements are added when elements are added to the ancestry list; for our purposes, it is pessimistic to assume $d - 1$ offspring are produced.) Without this dependence concern, standard results on branching processes would give that $E[Z_{Tn}] = (1 + d(d-1)/n)^{Tn} \approx e^{d(d-1)T}$, which is a constant. Further, we could apply (Chernoff-like) tail bounds from Karp and Zhang [18, Theorem 1], which state the following: for a supercritical finite time branching process over $t$ time steps starting with $Z_0 = 1$ and with mean population size $\mu_t = E[Z_t]$, there exist constants $c$ and $\alpha > 0$ such that
$$\Pr[Z_t \geq a \mu_t] \leq c e^{-\alpha a}.$$
In our setting, that would give that there exist constants $c$ and $\alpha$ such that
$$\Pr\left[Z_{Tn} \geq a e^{d(d-1)T}\right] \leq c e^{-\alpha a}.$$
This would give our desired high probability bound on the size of the ancestry list.

To deal with this small deviation, it suffices to consider a modified Galton-Watson process where each element produces $d$ offspring with probability $q$; we shall see that $q = 2d/n$ suffices. Let $Z'_j$ be the resulting size of this Galton-Watson process. From the above we have that $Z'_{Tn} = O(\log n)$ with high probability for a suitable constant in the $O(\log n)$ term.

Our original desired ancestry list process is dominated by a process where $S_{j+1} = S_j + d - 1$ with probability $dS_j/n$ and $S_{j+1} = S_j$ otherwise, and this process is in turn dominated, for values of $S_j$ up to $n/(2d)$, by a Galton-Watson branching process whose offspring probability $q$ satisfies
$$1 - (1 - q)^{S} \geq \frac{dS}{n}$$
for all $S \leq n/(2d)$, so that at every stage the Galton-Watson process is more likely to have at least $d - 1$ new offspring (and may have more). We see $q = 2d/n$ suffices, as
$$1 - (1 - q)^{S} \geq qS - \binom{S}{2}q^2 \geq qS\left(1 - \frac{qS}{2}\right) \geq \frac{qS}{2} = \frac{dS}{n},$$
which holds whenever $qS \leq 1$, and in particular when $S$ is $O(\log n)$ for sufficiently large $n$. The straightforward step by step coupling of the processes yields that
$$\Pr[S_{Tn} \geq a] \leq \Pr[Z'_{Tn} \geq a],$$
giving our desired bound.

We also suggest a slightly cleaner alternative, which may prove useful for other variations: embed the branching process in a continuous time branching process. We scale time so that balls are thrown as a Poisson process of rate $n$ per unit time over $T$ time units. Each element therefore generates $d - 1$ new offspring at time instants that are exponentially distributed with mean $1/d$ (the average time before a ball hits a specific bin on the ancestry list). Again, assuming $d - 1$ new offspring is a pessimistic bound. If we let $Z(t)$ be the number of elements at time $t$ (starting from 1 element at time 0), it is well known (see, e.g., [2, p.108 eq. (4)], and note that generating $d - 1$ new offspring is equivalent to “dying” and generating $d$ offspring) that for such a process,
$$E[Z(t)] = e^{d(d-1)t}.$$
In our case, we run to a fixed time $T$, and $E[Z(T)] = e^{d(d-1)T}$, a constant. Indeed, in this specific case, the generating function for the distribution of the number of elements is known (see, e.g., [2, p.109]), allowing us to directly apply a Chernoff bound. Specifically, the distribution of $Z(T)$ has an exponentially decreasing tail. Hence we have
$$\Pr[Z(T) \geq c_1 \log n] \leq n^{-c_2}$$
for constants $c_1$ and $c_2$ that depend on $d$ and $T$. Hence, this gives that the size of the ancestry list as viewed from the setting of the continuous branching process is $O(\log n)$ with high probability.

The last concern is that running the continuous process for time $T$ does not guarantee that $Tn$ balls are thrown; this can be dealt with by thinking of the process as running for a slightly longer time $T'$. That is, choose $T' = (1 + \epsilon)T$ for a small constant $\epsilon$. Standard Chernoff bounds on Poisson random variables then guarantee that at least $Tn$ balls are thrown in $T'$ time units with high probability, and the sizes of the ancestry lists are stochastically monotonically increasing in the number of balls thrown. Changing to $T'$ time units maintains that each ancestry list is $O(\log n)$ with high probability.

Finally, by choosing the constant in the $O(\log n)$ term appropriately, we can achieve a high enough probability to apply a union bound, so that this holds for all ancestry lists simultaneously with high probability.
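The dominating process is easy to simulate directly, which gives a quick sanity check on the constants (a sketch; the parameters are our own choices): scanning the $Tn$ balls backward in time, a list of size $s$ is hit with probability at most $ds/n$, and each hit pessimistically adds $d - 1$ bins.

```python
import random

def ancestry_list_size(n, d, T=1.0, rng=random):
    """Pessimistic simulation of ancestry-list growth for a single bin."""
    size = 1  # the list starts with the bin itself
    for _ in range(int(T * n)):
        if rng.random() < min(1.0, d * size / n):  # some ball hits the list
            size += d - 1                          # pessimistically add d-1 bins
    return size

sizes = [ancestry_list_size(n=10**4, d=2) for _ in range(1000)]
# The mean is near e^{d(d-1)T} (about 7.4 here), and the maximum stays small.
print(sum(sizes) / len(sizes), max(sizes))
```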

We now use Lemma 6 to show the following.

Lemma 7

The bins in the ancestry lists of the $d$ choices of a newly placed ball are disjoint with probability $1 - O(\log^2 n / n)$ for constant $d$ and $T$.

Proof.

Let $F$ be the event that the ancestry lists of the $d$ chosen bins are disjoint, and let $E$ be the event that no pair of the $d$ choices were previously chosen by the same ball. If $E$ does not occur, the ancestry lists are clearly not disjoint. Hence we wish to bound
$$\Pr[\neg F] \leq \Pr[\neg E] + \Pr[\neg F \mid E].$$

Consider any two of the $d$ bins chosen by the ball being placed. Each of the up to $Tn$ previous balls has at most $d^2$ pairs of choice positions in which it could have chosen those two bins (picking them as its 2nd and 4th choices, for example), and the probability of choosing those two bins in each possible pair of choice positions is $\frac{1}{n(n-1)}$. (If $n$ is not prime, this probability is $\frac{1}{n \varphi(n)}$, where $\varphi$ is the Euler totient function counting the number of numbers less than $n$ that are relatively prime to $n$. We note $\varphi(n)$ is always $\Omega(n/\log \log n)$, so this does not affect our argument substantially.) There are $\binom{d}{2}$ pairs of bins, so by a union bound $\Pr[\neg E]$ is $O(d^4 T / n)$.

Now suppose that no pair of the $d$ bins were previously chosen by the same ball. Suppose the bins for each of the ancestry lists of the $d$ choices are ordered in some fixed fashion (say according to decreasing ball time, randomly permuted for each ball). We consider the probability that the $a$th bin in the ancestry list of one bin matches the $b$th bin in another. Since the lists do not share any ball in common, the $b$th bin in the second list matches the $a$th bin in the first list with probability only $\frac{1}{n-1}$, as even conditioned on the value of the $a$th bin on the first list, the $b$th bin on the second list is uniform over $n - 1$ possibilities. (Again, for $n$ not prime, we may use $\varphi(n)$ possibilities.) We now condition on all of the ancestry lists being of size $O(\log n)$; from Lemma 6, this can be made to occur with any inverse polynomial probability by choosing the constant factor in the $O(\log n)$ term, so we assume this bound on ancestry list sizes. In this case, the probability of a match among any of the bins is only $O(d^2 \log^2 n / n)$ in total, where the $d^2$ factor is from the possible ways of choosing two of the lists, and the $\log^2 n$ term follows from the bound on the size of the ancestry lists. Hence $\Pr[\neg F \mid E]$ is $O(d^2 \log^2 n / n)$, and the total probability that the ancestry lists of the $d$ choices are not disjoint is $O(\log^2 n / n)$ for constant $d$ and $T$.

We now show that this yields Lemma 5. To clarify this, consider $d$ bins that were chosen by a ball at some time $t \leq T$. (Recall our scaling of time.) The probability that all $d$ bins have load at least $k$ at that time is equivalent to the probability that each bin has a corresponding ancestry list showing that it has load at least $k$ at that time. Fix a collection of ancestry lists $A_1, \ldots, A_d$, and let $E_i$ be the event defined by “bin $i$ has ancestry list $A_i$”. If these ancestry lists have disjoint sets of bins, then the corresponding balls in each ancestry list occur at different times and have no intersecting bins, and as such
$$\Pr\left[\bigcap_{i=1}^{d} E_i\right] = \prod_{i=1}^{d} \Pr[E_i].$$

For constant $d$, $k$, and $T$, the probability that all $d$ bins have load at least $k$ is a constant. Hence, if the probability that the ancestry lists for the $d$ bins intersect at any bin is $o(1)$, we have asymptotic independence. Specifically, let $\mathcal{A}$ be the set of collections of ancestry lists that yield that each bin has load at least $k$ at time $t$, let $\mathcal{A}'$ be the subset of collections in $\mathcal{A}$ where the ancestry lists have no bins in common, and for a collection $A = (A_1, \ldots, A_d)$ in $\mathcal{A}$ let $E_A$ be the corresponding event defined by “bin $i$ has ancestry list $A_i$ in collection $A$, for each $i$”. Then
$$\Pr[\text{all $d$ bins have load at least $k$}] = \sum_{A \in \mathcal{A}} \Pr[E_A] = \sum_{A \in \mathcal{A}'} \Pr[E_A] + o(1) = \sum_{A \in \mathcal{A}'} \prod_{i=1}^{d} \Pr[E_i] + o(1) = \prod_{i=1}^{d} \Pr[\text{bin $i$ has load at least $k$}] + o(1).$$
Here the second equality uses that the ancestry lists intersect somewhere with probability $o(1)$; the third equality uses that for ancestry lists in $\mathcal{A}'$ the probability of the intersection is the product of the probabilities; and the final equality is again because the collections in $\mathcal{A} \setminus \mathcal{A}'$ have total probability $o(1)$. Hence up to an $o(1)$ term, the behavior is the same as if the choices were independent (with respect to all bins having load at least $k$). Thus
$$\Pr[\text{all $d$ choices have load at least $k$}] = s_k^d + o(1),$$
as needed.

As a result of Lemma 5, we have the following theorem, generalizing the differential equations approach for balanced allocations to the setting of double hashing.

Theorem 8

Let $d$, $k$, and $T$ be constants. Suppose $Tn$ balls are sequentially thrown into $n$ bins, with each ball having $d$ choices obtained from double hashing and each ball being placed in the least loaded of its choices (ties broken randomly). Let $Y_k(Tn)$ be the number of bins of load at least $k$ after the balls are thrown. Let $s_k(t)$ be determined by the family of differential equations
$$\frac{ds_k}{dt} = s_{k-1}^d - s_k^d,$$
where $s_0(t) = 1$ for all times $t$ and $s_k(0) = 0$ for $k \geq 1$. Then with probability $1 - o(1)$,
$$\frac{Y_k(Tn)}{n} = s_k(T) + o(1).$$

Proof.

This follows from the fact that, by Lemma 5, with high probability over the course of the process
$$E[Y_k(m+1) - Y_k(m) \mid \text{history}] = s_{k-1}(m/n)^d - s_k(m/n)^d + o(1),$$
and applying Wormald’s result [42, Theorem 1].

We remark that Theorem 1 of [42] includes other technical conditions that we briefly consider here. The first condition is that $|Y_k(m+1) - Y_k(m)|$ is bounded by a constant; all such values here are bounded by 1. The second (and only challenging) condition exactly corresponds to our statement that the expected change in $Y_k$ at each step is $s_{k-1}^d - s_k^d + o(1)$ over the course of the process. The third condition is that our functions on the right hand side, that is, the functions $s_{k-1}^d - s_k^d$, are continuous and satisfy a Lipschitz condition on an open neighborhood containing the path of the process. These functions are continuous on the domain where all of the $s_j$ values lie in $[0, 1]$, and they satisfy the Lipschitz condition as
$$\left|x^d - y^d\right| \leq d\,|x - y| \qquad \text{for } x, y \in [0, 1],$$
taking note that all the $s_j$ values are in the interval $[0, 1]$. Hence the conditions for Wormald’s theorem are met.

The following corollary, based on the known fact that the result of Theorem 8 also holds in the setting of fully random hashing [28], states that the difference between fully random hashing and double hashing is vanishing.

Corollary 9

Let $d$, $k$, and $T$ be constants. Consider two processes, where in each $Tn$ balls are sequentially thrown into $n$ bins, with each ball having $d$ choices and each ball being placed in the least loaded of its choices (ties broken randomly). In one process, the choices are fully random; in the other, the choices are made by double hashing. Then with probability $1 - o(1)$, the fractions of bins with load at least $k$ in the two processes differ by an additive $o(1)$ term.

Given the results for the differential equations, it is perhaps unsurprising that one can use these methods to obtain, for example, a maximum load of $\frac{\log \log n}{\log d} + O(1)$ for $n$ balls in $n$ bins, using the related layered induction approach of [3]. While we suggest this is not the main point (given Theorem 4), we provide further details in Appendix B.

4 Conclusion

We have first demonstrated empirically that using double hashing with balanced allocation processes (e.g., the power of (more than) two choices), surprisingly, does not noticeably change performance when compared with fully random hashing. We have then shown that previous methods can readily provide $O(\log \log n)$ maximum load bounds for this approach. However, explaining why the fraction of bins of load $k$, for each constant $k$, appears the same requires revisiting the fluid limit model for such processes. We have shown, interestingly, that the same family of differential equations applies for the limiting process. Our argument should extend naturally to other similar processes; for example, the analysis can similarly be made to apply in a straightforward fashion to the differential equations for Vöcking’s $d$-left scheme [32].

This opens the door to the interesting possibility that double hashing can be suitable for other problems or analyses where this type of fluid limit analysis applies, such as low-density parity-check codes [25]. Here, however, the asymptotic independence required was aided by the fact that we were looking at the history of the process, allowing us to tie the ancestry lists to a corresponding branching process. Whether similar asymptotic independence can be derived for other problems remains to be seen. For other problems, such as cuckoo hashing, the fluid limit analysis, while an important step, may not offer a complete analysis. Even for load balancing problems, fluid limits do not straightforwardly apply in the heavily loaded case where the number of balls is superlinear in the number of bins [5], and it is unclear how double hashing performs in that setting. So again, determining more generally where double hashing can be used in place of fully random hashing without significantly changing performance may offer challenging future questions.

Acknowledgments

The author thanks George Varghese for the discussions which led to the formulation of this problem, and thanks Justin Thaler for both helpful conversations and offering several suggestions for improving the presentation of results.

References

  • [1] N. Alon and J. Spencer. The Probabilistic Method, John Wiley & Sons, 1992.
  • [2] K. Athreya and P. Ney. Branching Processes. Springer-Verlag, 1972.
  • [3] Y. Azar, A. Broder, A. Karlin, and E. Upfal. Balanced allocations. SIAM Journal on Computing, 29(1):180-200, 1999.
  • [4] Y. Bachrach and E. Porat. Fast pseudo-random fingerprints. Arxiv preprint arXiv:1009.5791, 2010.
  • [5] P. Berenbrink, A. Czumaj, A. Steger, and B. Vöcking. Balanced allocations: The heavily loaded case. SIAM Journal on Computing, 35(6):1350-1385, 2006.
  • [6] P. Bradford and M. Katehakis. A probabilistic study on combinatorial expanders and hashing. SIAM Journal on Computing, 37(1):83-111, 2007.
  • [7] M. Bramson, Y. Lu, and B. Prabhakar. Asymptotic independence of queues under randomized load balancing. Queueing Systems, 71(3):247-292, 2012.
  • [8] A. Broder, M. Charikar, A. Frieze, and M. Mitzenmacher. Min-wise independent permutations. Journal of Computer and System Sciences, 60(3):630-659, 2000.
  • [9] J. L. Carter and M. N. Wegman. Universal classes of hash functions. Journal of Computer and System Sciences, 18(2):143–154, 1979.
  • [10] L. Celis, O. Reingold, G. Segev, and U. Wieder. Balls and Bins: Smaller Hash Families and Faster Evaluation. In Proc. of the 52nd Annual Symposium on Foundations of Computer Science, pp. 599-608, 2011.
  • [11] B. Debnath, S. Sengupta, and J. Li. ChunkStash: speeding up inline storage deduplication using flash memory. In Proc. of the USENIX Annual Technical Conference, p. 16, 2010.
  • [12] J. Díaz and D. Mitsche. The cook-book approach to the differential equations method. Computer Science Review, 4(3):129-151, 2010.
  • [13] P.C. Dillinger and P. Manolios. Bloom Filters in Probabilistic Verification. In Proc. of the 5th International Conference on Formal Methods in Computer-Aided Design, pp. 367-381, 2004.
  • [14] S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. John Wiley and Sons, 1986.
  • [15] P. Godfrey. Balls and bins with structure: balanced allocations on hypergraphs. In Proc. of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 511-517, 2008.
  • [16] L. Guibas and E. Szemeredi. The analysis of double hashing. Journal of Computer and System Sciences, 16(2):226-274, 1978.
  • [17] G. Heileman and W. Luo. How caching affects hashing. In Proc. of ALENEX/ANALCO, pp. 141–154, 2005.
  • [18] R. Karp and Y. Zhang. Finite branching processes and AND/OR tree evaluation. ICSI Berkeley Technical Report TR-93-043. See also: Bounded branching process and AND/OR tree evaluation. Random Structures and Algorithms, 7:97-116, 1995.
  • [19] K. Kenthapadi and R. Panigrahy. Balanced allocation on graphs. In Proc. of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 434-443, 2006.
  • [20] A. Kirsch and M. Mitzenmacher. Less hashing, same performance: Building a better Bloom filter. Random Structures & Algorithms, 33(2):187-218, 2008.
  • [21] A. Kirsch, M. Mitzenmacher and G. Varghese. Hash-Based Techniques for High-Speed Packet Processing. In Algorithms for Next Generation Networks, (G. Cormode and M. Thottan, eds.), pp. 181-218, Springer London, 2010.
  • [22] T. G. Kurtz. Solutions of Ordinary Differential Equations as Limits of Pure Jump Markov Processes. Journal of Applied Probability Vol. 7, 1970, pp. 49-58.
  • [23] L. Le Cam. An approximation theorem for the Poisson binomial distribution. Pacific J. Math, 10(4):1181-1197, 1960.
  • [24] G. Lueker and M. Molodowitch. More analysis of double hashing. Combinatorica, 13(1):83-96, 1993.
  • [25] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D. Spielman. Efficient erasure correcting codes. IEEE Transactions on Information Theory, 47(2):569-584, 2001.
  • [26] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications (2nd Edition). Springer-Verlag, 2010.
  • [27] M. Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094-1104, 2001.
  • [28] M. Mitzenmacher. The power of two choices in randomized load balancing. Ph.D. thesis, 1996.
  • [29] M. Mitzenmacher, A. Richa, and R. Sitaraman. The Power of Two Choices: A Survey of Techniques and Results. In Handbook of Randomized Computing, (P. Pardalos, S. Rajasekaran, J. Reif, and J. Rolim, eds.), pp. 255-312, Kluwer Academic Publishers, Norwell, MA, 2001.
  • [30] M. Mitzenmacher and J. Thaler. Peeling Arguments and Double Hashing. In Proc. of Allerton 2012, pp. 1118-1125.
  • [31] M. Mitzenmacher and E. Upfal. Probability and computing: Randomized algorithms and probabilistic analysis, 2005, Cambridge University Press.
  • [32] M. Mitzenmacher and B. Vöcking. The Asymptotics of Selecting the Shortest of Two, Improved. In Proc. of Allerton 1999, pp. 326-327.
  • [33] M. Mitzenmacher and S. Vadhan. Why Simple Hash Functions Work: Exploiting the Entropy in a Data Stream. In Proc. of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 746-755, 2008.
  • [34] A. Pagh, R. Pagh, and M. Ruzic. Linear Probing with 5-wise Independence. SIAM Review, 53(3):547-558, 2011.
  • [35] M. Patrascu and M. Thorup. The power of simple tabulation hashing. In Proc. of the 43rd Annual ACM Symposium on Theory of Computing, pp.1-10, 2011.
  • [36] Y. Peres, K. Talwar, and U. Wieder. The (1+β)-choice process and weighted balls-into-bins. In Proc. of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1613-1619, 2010.
  • [37] J. Schmidt-Pruzan and E. Shamir. Component structure in the evolution of random hypergraphs. Combinatorica, vol. 5, pp. 81-94, 1985.
  • [38] T. Schickinger and A. Steger. Simplified witness tree arguments. SOFSEM 2000: Theory and Practice of Informatics, pp. 71–87, 2000.
  • [39] B. Vöcking. How asymmetry helps load balancing. Journal of the ACM, 50(4):568-589, 2003.
  • [40] N.D. Vvedenskaya, R.L. Dobrushin, and F.I. Karpelevich. Queueing system with selection of the shortest of two queues: an asymptotic approach. Problems of Information Transmission, 32:15–27, 1996.
  • [41] P. Woelfel. Asymmetric balanced allocation with simple hash functions. In Proc. of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 424–433, 2006.
  • [42] N.C. Wormald. Differential equations for random processes and random graphs. The Annals of Applied Probability, 5(1995), pp. 1217–1235.

Appendix A Empirical Results

We have done extensive simulations to test whether using double hashing in place of idealized random hashing makes a difference for several multiple choice schemes. Theoretically, of course, there is some difference; for example, the probability that two balls choose the same specified set of $d$ bins is $\Theta(n^{-2d})$ with fully random choices, but $\Theta(n^{-4})$ with double hashing when the set forms a suitable arithmetic progression (where the order notation may hide factors that depend on $d$). Hence, to be clear, the best we can hope for are differences on events of probability up to $\Theta(n^{-4})$. Empirically, however, our experiments suggest the effects on the distribution of the loads, or in particular on the probability the maximum load exceeds some value, are all found deeply in the lower order terms. Unless especially rare events are of special concern, we expect the two schemes to perform similarly.

A.1 The Standard $d$-Choice Scheme

We first consider $n$ balls and $n$ bins using $d$ choices without replacement, comparing fully random choices with double hashing. (We also considered choices with replacement, but the difference was not apparent except for very small $n$, so we present only results without replacement. However, we note that conversations with George Varghese regarding hardware settings with small $n$ originally motivated our examination of this approach.) When using double hashing we choose an odd stride value as explained previously. All results presented are over 10000 trials. Table 3 shows the distributions of bin loads for 3 and 4 choices, averaged over all 10000 trials, for two further values of $n$. (Recall one configuration was shown in Table 1.) As can be seen, the deviations are all very small, within standard sampling error.

Load Fully Random Double Hashing
0 0.17695 0.17693
1 0.64661 0.64664
2 0.17593 0.17592
3 0.00051 0.00051
(c) 3 choices, $n$ balls and $n$ bins
Load Fully Random Double Hashing
0 0.14081 0.14083
1 0.71841 0.71835
2 0.14076 0.14079
3
(d) 4 choices, $n$ balls and $n$ bins
Load Fully Random Double Hashing
0 0.17696 0.17696
1 0.64658 0.64648
2 0.17595 0.17595
3 0.00051 0.00051
(e) 3 choices, $n$ balls and $n$ bins
Load Fully Random Double Hashing
0 0.14083 0.14082
1 0.71837 0.71838
2 0.14078 0.14078
3
(f) 4 choices, $n$ balls and $n$ bins
Table 3: Essentially indistinguishable differences in simulation between double hashing and fully random hashing.
Fully Random Double Hashing
39.78 39.40
64.71 65.15
86.90 87.05
98.37 98.63
100.00 99.99
100.00 100.00
(g) 3 choices, fraction with maximum load 3
Fully Random Double Hashing
2.24 2.23
8.91 8.52
30.75 31.42
78.23 77.72
99.77 99.79
100.00 100.00
(h) 4 choices, fraction with maximum load 3
Table 4: Comparing maximum loads over 10000 trials; rows correspond to increasing values of $n$. The fraction of runs with maximum load 3 is similar for the two schemes.
Load min avg max std.dev.
0 36522 36913.75 37308 111.06
1 187533 188322.55 189103 222.02
2 36516 36901.67 37298 110.96
3 1 6.04 17 2.42
(i) Fully random, load distribution over 10000 trials
Load min avg max std.dev.
0 36535 36916.57 37301 109.89
1 187544 188316.93 189078 219.71
2 36524 36904.45 37297 109.85
3 1 6.06 18 2.44
(j) Double hashing, load distribution over 10000 trials
Table 5: Viewing the sample standard deviation, 4 choices, $2^{18}$ balls and bins.
Load Fully Random Double Hashing
9
10
11
12
13 0.00076 0.00076
14 0.01254 0.01254
15 0.16885 0.16877
16 0.62220 0.62234
17 0.19482 0.19475
18 0.00079 0.00079
(k) 3 choices, $16n$ balls and $n$ bins
Load Fully Random Double Hashing
11
12
13
14 0.00349 0.00349
15 0.13908 0.13906
16 0.71110 0.71114
17 0.14622 0.14620
18
(l) 4 choices, $16n$ balls and $n$ bins
Table 6: The similarity in performance persists under higher loads.
Load Fully Random Double Hashing
0 0.12420 0.12421
1 0.75160 0.75158
2 0.12420 0.12421
(m) 4 choices, $n$ balls and $n$ bins
Load Fully Random Double Hashing
0 0.12421 0.12421
1 0.75159 0.75158
2 0.12421 0.12421
3
(n) 4 choices, $n$ balls and $n$ bins
Table 7: Double hashing performance with Vöcking’s $d$-left scheme.

We may also consider the maximum load. In Table 4, we consider values of $n$ where the maximum load is at most 3, and examine the fraction of trials in which a maximum load of 3 is achieved over the 10000 trials. Again, the difference between the two schemes appears small, to the point where it would be a challenge to differentiate between the two approaches.

We focus on the case of 4 choices with $2^{18}$ balls and bins to examine the sample standard deviation (across 10000 trials) in Table 5. This example is representative of behavior in our other experiments. By looking at the number of bins of each load over several trials, we see the sample standard deviation is very small compared to the number of bins of a given load, whether using double hashing or fully random hashing, and again performance is similar for both.

A reasonable question is whether the same behavior occurs if the average load is larger than 1. We have tested this for several cases, and again found that empirically the behavior is essentially indistinguishable. As an example, Table 6 gives results in the case of $16n$ balls being thrown into $n$ bins, for an average load of 16. Again, the differences are at the level of sampling deviations.

$\lambda$ Choices Fully Random Double Hashing
0.9 3 2.02805 2.02813
0.9 4 1.77788 1.77792
0.99 3 3.85967 3.86073
0.99 4 3.24347 3.24410
Table 8: $n$ queues, average time in system

We note that we obtain similar results under variations of the standard $d$-choice scheme. For example, using Vöcking’s approach of splitting into $d$ subtables and breaking ties to the left, we obtain essentially indistinguishable load distributions with fully random hashing and double hashing; a sketch of how double hashing can be adapted to this setting appears below. Table 7 shows results from two representative cases, again averaging over 10000 trials. The second case is instructive; it appears very close to the threshold where bins with load 3 can appear. While there appears to be a deviation, with double hashing having some small fraction of bins with load 3, this corresponds to exactly 2 bins over the 10000 trials. Further simulations suggest that this apparent gap is less significant than it might appear; over 100000 trials, for fully random hashing, the maximum load was 3 in three trials, while for double hashing, it was 3 in four trials.
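For the $d$-left experiments, double hashing has to be adapted to pick one bin per subtable; the paper does not spell out the exact mapping, so the sketch below reflects one natural reading (an assumption of ours): reduce the $j$th generated value into the $j$th subtable.

```python
import random

def d_left_insert(loads, n, d, rng=random):
    """Place one ball into d subtables of size n // d, one choice per subtable,
    breaking ties for the least load to the left, as in Voecking's scheme.
    Assumption: the j-th double-hashing value is reduced into subtable j."""
    m = n // d
    f = rng.randrange(n)
    g = 2 * rng.randrange(n // 2) + 1  # odd stride (n a power of two here)
    best = None
    for j in range(d):
        b = j * m + (f + j * g) % m    # j-th choice lands in subtable j
        if best is None or loads[b] < loads[best]:
            best = b                   # strict '<' breaks ties to the left
    loads[best] += 1
```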

In the standard queueing setting, balls (jobs) arrive as a Poisson process of rate $\lambda n$ for $\lambda < 1$ to a bank of $n$ first-in first-out queues, and have exponentially distributed service times with mean 1. Jobs are placed by choosing $d$ queues and going to the queue with the fewest jobs. The asymptotic equilibrium distributions for such systems with independent, uniform choices can be found via fluid limit models [27, 40]. We ran 100 simulations of 10000 seconds each, recording the average time in the system over all jobs arriving after time 1000 (allowing the system to “burn in”). An example appears in Table 8. While double hashing performs slightly worse in these trials, the gap is far less than 0.1% in all cases.
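A compact event-driven sketch of this queueing experiment is below (our own simplified implementation: jobs are generated one arrival at a time, departures are processed lazily from a heap, and the parameters are illustrative). Calling, say, `average_system_time(n=1024, lam=0.9, d=3)` should land near the corresponding Table 8 entries.

```python
import heapq
import random

def average_system_time(n, lam, d, t_end=10000.0, burn_in=1000.0):
    """Bank of n FIFO queues, Poisson(lam * n) arrivals, unit-mean exponential
    service; each job joins the shortest of d queues picked by double hashing.
    Returns the average time in system for jobs arriving after burn_in."""
    queue_len = [0] * n
    arrivals = [[] for _ in range(n)]  # FIFO arrival times per queue
    departures = []                    # heap of (departure_time, queue)
    times, t = [], 0.0
    while t < t_end:
        t += random.expovariate(lam * n)          # next arrival epoch
        while departures and departures[0][0] <= t:
            dep_t, q = heapq.heappop(departures)  # finish head-of-line job
            queue_len[q] -= 1
            a = arrivals[q].pop(0)
            if a > burn_in:
                times.append(dep_t - a)
            if queue_len[q] > 0:                  # start the next waiting job
                heapq.heappush(departures, (dep_t + random.expovariate(1.0), q))
        f = random.randrange(n)
        g = 2 * random.randrange(n // 2) + 1      # odd stride (n a power of two)
        q = min(((f + j * g) % n for j in range(d)), key=lambda c: queue_len[c])
        arrivals[q].append(t)
        queue_len[q] += 1
        if queue_len[q] == 1:                     # queue was idle: begin service
            heapq.heappush(departures, (t + random.expovariate(1.0), q))
    return sum(times) / len(times)
```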

Appendix B Extending the Fluid Limit

We sketch an approach to extend the fluid limit result to provide an $O(\log \log n)$ result. In fact, we show here that for $n$ balls being thrown into $n$ bins via double hashing, we obtain a maximum load of $\frac{\log \log n}{\log d} + O(1)$, avoiding the $O(d)$ term of Section 2.2. While this is a technicality for the case of $d$ constant, this approach could be used to obtain bounds for super-constant values of $d$.

The basic approach is not new, and has been used in other settings, such as [3, 28]. Essentially, we can repeat the “layered induction” approach of [3] in the setting of double hashing, making use of the results of Section 3 that the deviations from the fully random setting are at most $\gamma = O(\log^2 n / n)$ for a suitable number of levels.

This allows us to state the following theorem:

Theorem 10

Suppose $n$ balls are placed into $n$ bins using the balanced allocation scheme with double hashing. Then with $d$ choices (for $d$ constant) the maximum load is $\frac{\log \log n}{\log d} + O(1)$ with high probability.

Proof.

Let $\nu_i$ be the number of bins of load at least $i$ after all $n$ balls have been thrown. We will follow the framework of the original balanced allocations paper [3], and start by noting that $\nu_6 \leq n/6$. Now from the argument of Section 3, the probability that the $t$th ball chooses bins all with load at least $i$ is bounded above by $(\nu_i/n)^d + \gamma$, where $\gamma = O(\log^2 n / n)$ was determined in Lemma 5, as long as, up to that point, we can condition on all the ancestry lists being suitably small, which is a high probability event. We will denote the event that the ancestry lists are suitably small throughout the process by $\mathcal{A}$.

Finally, let $\beta_6 = n/6$ and $\beta_{i+1} = 4n(\beta_i/n)^d$ for $i \geq 6$. Let $\mathcal{E}_i$ be the event that $\mathcal{A}$ occurs and that $\nu_i \leq \beta_i$. (We choose values similarly to [3] for convenience, but use the constant 4 on the right hand side where [3] uses a smaller constant, to account for the extra $\gamma$ in our probability bound over just the value $(\beta_i/n)^d$.) A simple induction using the formula for $\beta_i$ yields $\beta_i \leq n \cdot 4^{-c d^i}$ for some constant $c > 0$ depending on $d$, for $i \geq 7$.

Now we fix some $i$ and consider random variables $X_1, \ldots, X_n$, where $X_t = 1$ if the following conditions all hold: all $d$ choices for the $t$th ball have load at least $i$; the number of bins with load at least $i$ before the ball is thrown is at most $\beta_i$; and the ancestry lists are all suitably small when the ball is thrown, so that the polylogarithmic bound on the “extra probability” $\gamma$ that a ball ends up with all $d$ choices having load at least $i$ holds. Let $X_t = 0$ otherwise. We note that the number of bins with load at least $i + 1$ is at most the sum of the $X_t$. Let $p_i = (\beta_i/n)^d + \gamma$. Conditioned on $\mathcal{E}_i$, we have
$$\Pr[X_t = 1 \mid X_1, \ldots, X_{t-1}] \leq p_i.$$

Now the sum $\sum_t X_t$ is dominated by a binomial random variable $B(n, p_i)$ of $n$ trials, each with probability $p_i$ of success, because of the definition of the $X_t$.

As in [3], we can use the simple Chernoff bound from [1]:
$$\Pr[B(n, p) \geq 2np] \leq e^{-np/3}.$$

Note that, for large enough $n$ and $\beta_{i+1} \geq \log^3 n$, we have $\gamma \leq (\beta_i/n)^d$, as $\gamma n = O(\log^2 n)$ will be a lower order term. Hence for such values, $np_i \leq 2n(\beta_i/n)^d = \beta_{i+1}/2$.

With these choices, we see that as long as $\beta_{i+1} \geq \log^3 n$ (note that for this value of $\beta_{i+1}$, $\gamma n$ is indeed a lower order term),
$$\Pr[\nu_{i+1} \geq \beta_{i+1} \mid \mathcal{E}_i] \leq e^{-\beta_{i+1}/12},$$
and using
$$\Pr[\neg \mathcal{E}_{i+1}] \leq \Pr[\neg \mathcal{E}_{i+1} \mid \mathcal{E}_i] + \Pr[\neg \mathcal{E}_i],$$
we have
$$\Pr[\neg \mathcal{E}_{i+1}] \leq \Pr[\neg \mathcal{E}_i] + e^{-\beta_{i+1}/12}.$$
Recall that $\mathcal{E}_6$ depends on $\mathcal{A}$ and on the event $\nu_6 \leq \beta_6$, and the latter holds with certainty.

Note that we only require $O(\log \log n)$ inductive rounds before $\beta_i$ drops below $\log^3 n$, based on the bound for the $\beta_i$. Hence the total probability that the required events do not hold up to this point is bounded by $\Pr[\neg \mathcal{A}] + \sum_i e^{-\beta_i/12} = o(1)$. Hence, as long as $\Pr[\neg \mathcal{A}]$ is $o(1)$ (which we argued in Section 3), we are good for loads up to $\frac{\log \log n}{\log d} + O(1)$. After only one more round, using the same argument, we can get to the point where $\nu_i = O(\log^2 n)$, using the same Chernoff bound argument, since the expected number of bins with load at least $i$ would be dominated by $2\gamma n = O(\log^2 n)$.

From this point, one can show that the maximum load grows by only an additive constant more, with high probability, by continuing with a variation of the layered induction argument as used in [3]. If we condition on there being at most $b = O(\log^2 n)$ bins with load at least $i$, then for a ball to have all $d$ choices with load at least $i$, it must have at least two of its bin choices with load at least $i$. Even when using double hashing, for any ball, any pair of its choices of bins is chosen uniformly from all possible pairs of distinct bins (here we again assume $n$ is prime; if not, we need to take into account the issue that the offset is relatively prime to $n$); hence, by a union bound, the probability any ball causes a bin to reach load at least $i + 1$ is at most $d^2 b^2 / n^2$, giving an expected number of bins of load at least $i + 1$ of at most $d^2 b^2 / n$. (Here this step is slightly different than the corresponding step in [3]; because of the use of double hashing in place of independent hashes, we use a union bound over the pairs of bins. This avoids the issue of the ancestry lists completely at this point of the argument, which we take advantage of once we have gotten down to a small enough number of loaded bins to complete the argument.)

Applying the same Chernoff bounds as previously, we find $\nu_{i+1} \leq \log n$ with high probability. By a union bound, the probability of any ball having at least 2 choices with load at least $i + 1$ is then at most $d^2 \log^2 n / n$, and hence $\nu_{i+2} = 0$ with probability $1 - O(d^2 \log^2 n / n)$. (One can make the failure probability smaller by taking larger constant factors in these bounds.) This gives that the maximum load is $\frac{\log \log n}{\log d} + O(1)$ with high probability under double hashing.
