Asymmetric Minwise Hashing
Abstract
Minwise hashing (minhash) is a widely popular indexing scheme in practice. Minhash is designed for estimating set resemblance and is known to be suboptimal in many applications where the desired measure is set overlap (i.e., inner product between binary vectors) or set containment. Minhash has an inherent bias towards smaller sets, which adversely affects its performance in applications where such a penalization is not desirable. In this paper, we propose asymmetric minwise hashing (MH-ALSH) to provide a solution to this problem. The new scheme utilizes asymmetric transformations to cancel the bias of traditional minhash towards smaller sets, making the final “collision probability” monotonic in the inner product. Our theoretical comparisons show that, for the task of retrieval with binary inner products, asymmetric minhash is provably better than traditional minhash and other recently proposed hashing algorithms for general inner products. Thus, we obtain an algorithmic improvement over existing approaches in the literature. Experimental evaluations on four publicly available high-dimensional datasets validate our claims, and the proposed scheme outperforms, often significantly, other hashing algorithms on the task of near neighbor retrieval with set containment. Our proposal is simple and easy to implement in practice.
1 Introduction
Record matching (or linkage), data cleansing, and plagiarism detection are among the most frequent operations in many large-scale data processing systems over the web. Minwise hashing (or minhash) [6, 7] is a popular technique deployed by big data industries for these tasks. Minhash was originally developed for economically estimating the resemblance similarity between sets (which can be equivalently viewed as binary vectors). Later, because of its locality sensitive property [21], minhash became a widely used hash function for creating hash buckets, leading to efficient algorithms for numerous applications including spam detection [6], collaborative filtering [4], news personalization [15], compressing social networks [13], graph sampling [14], record linkage [24], duplicate detection [20], all-pair similarity [5], etc.
1.1 Sparse Binary Data, Set Resemblance, and Set Containment
Binary representations for web documents are common, largely due to the wide adoption of the “Bag of Words” (BoW) representations for documents and images. In BoW representations, the word frequencies within a document follow a power law. A significant number of words (or combinations of words) occur rarely in a document, and most of the higher-order shingles in the document occur only once. It is often the case that just the presence or absence information suffices in practice [9, 19, 23, 28]. Leading search companies routinely use sparse binary representations in their large data systems [8].
The underlying similarity measure of interest with minhash is the resemblance, also known as the Jaccard similarity. The resemblance similarity between two sets $S_1, S_2 \subseteq \Omega = \{1, 2, ..., D\}$ is

(1)    $R = \dfrac{|S_1 \cap S_2|}{|S_1 \cup S_2|} = \dfrac{a}{f_1 + f_2 - a},$

where $f_1 = |S_1|$, $f_2 = |S_2|$, and $a = |S_1 \cap S_2|$.
Sets can be equivalently viewed as binary vectors, with each component indicating the presence or absence of an attribute. The cardinality (e.g., $f_1$, $f_2$) is the number of nonzeros in the binary vector.
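As a concrete illustration (our own sketch, not code from the paper), the resemblance can be computed directly on sets:

```python
def resemblance(s1, s2):
    """Resemblance (Jaccard): R = |S1 ∩ S2| / |S1 ∪ S2| = a / (f1 + f2 - a)."""
    s1, s2 = set(s1), set(s2)
    a = len(s1 & s2)            # intersection size a
    f1, f2 = len(s1), len(s2)   # cardinalities (numbers of nonzeros)
    return a / (f1 + f2 - a)

print(resemblance({1, 2, 3, 4}, {2, 3, 4, 5}))  # 3 / 5 = 0.6
```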
While the resemblance similarity is convenient and useful in numerous applications, there are also many scenarios where the resemblance is not the desirable similarity measure [1, 11]. For instance, consider text descriptions of two restaurants:

“Five Guys Burgers and Fries Downtown Brooklyn New York”

“Five Kitchen Berkley”
Shingle-based representations for strings are common in practice. Typical (first-order) shingle-based representations of these names will be (i) {five, guys, burgers, and, fries, downtown, brooklyn, new, york} and (ii) {five, kitchen, berkley}. Now suppose the query is “Five Guys”, which in shingle representation is {five, guys}. We would like to match and search the records, for this query “Five Guys”, based on resemblance. Observe that the resemblance between the query and record (i) is $R = 2/9 \approx 0.22$, while that with record (ii) is $R = 1/4 = 0.25$. Thus, simply based on resemblance, record (ii) is a better match for the query “Five Guys” than record (i), which should not be correct in this context.
Clearly the issue here is that the resemblance penalizes the sizes of the sets involved. Shorter sets are unnecessarily favored over longer ones, which hurts the performance in record matching [1] and other applications. There are many other scenarios where such penalization is undesirable. For instance, in plagiarism detection, it is typically immaterial whether the text is plagiarized from a big or a small document.
To counter the often unnecessary penalization of the sizes of the sets with resemblance, a modified measure, the set containment (or Jaccard containment), was adopted [6, 1, 11]. The Jaccard containment of sets $S_1$ and $S_2$ with respect to $S_1$ is defined as

(2)    $J_C = \dfrac{|S_1 \cap S_2|}{|S_1|} = \dfrac{a}{f_1}.$
In the above example with query “Five Guys”, the Jaccard containment with respect to the query for record (i) will be $2/2 = 1$ and with respect to record (ii) it will be $1/2 = 0.5$, leading to the desired ordering. It should be noted that for any fixed query $q$, the ordering under Jaccard containment with respect to the query is the same as the ordering with respect to the intersection $a$ (or binary inner product). Thus, the near neighbor search problem with respect to $J_C$ is equivalent to the near neighbor search problem with respect to $a$.
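The ordering claimed above can be checked with a small script (our own sketch; the shingle sets follow the example, with containment taken with respect to the query):

```python
def containment(record, query):
    """Jaccard containment with respect to the query: |R ∩ Q| / |Q|."""
    record, query = set(record), set(query)
    return len(record & query) / len(query)

q  = {"five", "guys"}
r1 = {"five", "guys", "burgers", "and", "fries", "brooklyn", "new", "york"}
r2 = {"five", "kitchen", "berkley"}

# Containment with respect to the query ranks record (i) above record (ii),
# which is the desired ordering.
print(containment(r1, q), containment(r2, q))  # 1.0 0.5
```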
1.2 Maximum Inner Product Search (MIPS) and Maximum Containment Search (MCS)
Formally, we state our problem of interest. We are given a collection $\mathcal{C}$ containing $n$ sets (or binary vectors) over a universe $\Omega$ with $|\Omega| = D$ (i.e., binary vectors in $\{0,1\}^D$). Given a query $q \subseteq \Omega$, we are interested in the problem of finding $x \in \mathcal{C}$ such that

(3)    $x = \arg\max_{x \in \mathcal{C}} \; q^T x = \arg\max_{x \in \mathcal{C}} \; |q \cap x|,$

where $|q \cap x|$ is the cardinality of the set intersection. This is the so-called maximum inner product search (MIPS) problem.
For binary data, the MIPS problem is equivalent to searching with Jaccard containment with respect to the query, because the cardinality $f_q = |q|$ of the query does not affect the ordering and hence the $\arg\max$:

(4)    $x = \arg\max_{x \in \mathcal{C}} \; \dfrac{|q \cap x|}{|q|} = \arg\max_{x \in \mathcal{C}} \; J_C,$

which is also referred to as the maximum containment search (MCS) problem.
1.3 Shortcomings of Inverted Index Based Approaches for MIPS (and MCS)
Owing to its practical significance, there have been many existing heuristics for solving the MIPS (or MCS) problem [30, 33, 12]. A notable recent work among them made use of the inverted index based approach [1]. Inverted indexes might be suitable when the sizes of documents are small and each record contains only a few words. This situation, however, is not commonly observed in practice. Documents over the web are large, with a huge vocabulary. Moreover, the vocabulary blows up very quickly once we start using higher-order shingles. In addition, there is an increasing interest in enriching the text with extra synonyms to make the search more effective and robust to semantic meaning [1], at the cost of a significant increase in the sizes of the documents. Furthermore, if the query contains many words then the inverted index is not very useful. To mitigate this issue several additional heuristics were proposed, for instance, the heuristic based on minimal infrequent sets [1]. Computing minimal infrequent sets is similar to the set cover problem, which is hard in general, and thus [1] resorted to greedy heuristics. The number of minimal infrequent sets could be huge in general, so these heuristics can be very costly. Also, such heuristics require knowledge of the entire dataset beforehand, which is usually not practical in a dynamic environment like the web. In addition, inverted index based approaches do not have theoretical guarantees on the query time, and their performance is very much dataset dependent.
1.4 Probabilistic Hashing
Locality Sensitive Hashing (LSH) [21] based randomized techniques are common and successful in industrial practice for efficiently solving NNS (near neighbor search). They are some of the few known techniques that do not suffer from the curse of dimensionality. Hashing based indexing schemes provide provably sublinear algorithms for search which is a boon in this era of big data where even linear search algorithms are impractical due to latency. Furthermore, hashing based indexing schemes are massively parallelizable and can be updated incrementally (on data streams), which makes them ideal for modern distributed systems. The prime focus of this paper will be on efficient hashing based algorithms for binary inner products.
Despite the interest in Jaccard containment and binary inner products, there were no hashing algorithms for these measures for a long time, and minwise hashing is still a widely popular heuristic [1]. Very recently, it was shown that general inner products for real vectors can be efficiently handled by using asymmetric locality sensitive hashing schemes [35, 37]. The asymmetry is necessary for general inner products, and the impossibility of a symmetric hash function can be shown using elementary arguments. Thus, binary inner product (or set intersection), being a special case of general inner products, also admits provably efficient search algorithms with these asymmetric hash functions, which are based on random projections. However, it is known that random projections are suboptimal for retrieval in the sparse binary domain [38]. Hence, the existing asymmetric locality sensitive hashing schemes for general inner products are likely to be suboptimal for retrieval with sparse, high-dimensional, binary-like datasets, which are common over the web.
1.5 Our Contributions
We investigate hashing based indexing schemes for the problem of near neighbor search with binary inner products and Jaccard containment. Binary inner products are special. The impossibility of existence of LSH for general inner products shown in [35] does not hold for the binary case. On the contrary, we provide an explicit construction of a provable LSH based on sampling, although our immediate investigation reveals that such an existential result is only good in theory and unlikely to be a useful hash function in practice.
Recent results on hashing algorithms for maximum inner product search [35] have shown the usefulness of asymmetric transformations in constructing provable hash functions for new similarity measures, which were otherwise impossible. Going further along this line, we provide a novel (and still very simple) asymmetric transformation for binary data that corrects minhash and removes its undesirable bias towards the sizes of the sets involved. Such an asymmetric correction eventually leads to a provable hashing scheme for binary inner products, which we call asymmetric minwise hashing (MH-ALSH). Our theoretical comparisons show that for binary data, which are common over the web, the new hashing scheme is provably more efficient than the recently proposed asymmetric hash functions for general inner products [35, 37]. Thus, we obtain a provable algorithmic improvement over the state-of-the-art hashing technique for binary inner products. The construction of our asymmetric transformation for minhash could be of independent interest in itself.
The proposed asymmetric minhash significantly outperforms existing hashing schemes in the tasks of ranking and near neighbor search with Jaccard containment as the similarity measure, on four real-world high-dimensional datasets. Our final proposed algorithm is simple and requires only very small modifications of traditional minhash, and hence it can be easily adopted in practice.
2 Background
2.1 Approximate Near Neighbor Search and Classical LSH
Past attempts of finding efficient algorithms, for exact near neighbor search based on space partitioning, often turned out to be a disappointment with the massive dimensionality of modern datasets [39]. Due to the curse of dimensionality, theoretically it is hopeless to obtain an efficient algorithm for exact near neighbor search. Approximate versions of near neighbor search problem were proposed [21] to overcome the linear query time bottleneck. One commonly adopted such formulation is the approximate Near Neighbor (NN).
Definition 1
(Approximate Near Neighbor or $c$-NN) [21]. Given a set of points $P$ in a $D$-dimensional space $\mathbb{R}^D$, and parameters $S_0 > 0$, $\delta > 0$, construct a data structure which, given any query point $q$, does the following with probability $1 - \delta$: if there exists an $S_0$-near neighbor of $q$ in $P$, it reports some $cS_0$-near neighbor of $q$ in $P$.
The usual notion of near neighbor is in terms of distance. Since we are dealing with similarities, we define an $S_0$-near neighbor of point $q$ as a point $x$ with $Sim(q, x) \ge S_0$, where $Sim$ is the similarity function of interest.
The popular technique, with near-optimal guarantees for $c$-NN in many interesting cases, uses the underlying theory of Locality Sensitive Hashing (LSH) [21]. An LSH family is a family of functions with the property that similar input objects in the domain of these functions have a higher probability of colliding in the range space than non-similar ones. More specifically, consider a family $\mathcal{H}$ of hash functions mapping $\mathbb{R}^D$ to some set $\mathcal{S}$.
Definition 2
(Locality Sensitive Hashing) A family $\mathcal{H}$ is called $(S_0, cS_0, p_1, p_2)$-sensitive if, for any two points $x, y \in \mathbb{R}^D$ and $h$ chosen uniformly from $\mathcal{H}$, the following hold:

if $Sim(x, y) \ge S_0$ then $Pr[h(x) = h(y)] \ge p_1$;

if $Sim(x, y) \le cS_0$ then $Pr[h(x) = h(y)] \le p_2$.
For approximate nearest neighbor search, typically $p_1 > p_2$ and $c < 1$ is needed. Note that $c < 1$ because we are defining neighbors in terms of similarity; to obtain the distance analogy one takes $c > 1$, with neighbors defined in terms of distance.
Fact 1
[21] Given a family of $(S_0, cS_0, p_1, p_2)$-sensitive hash functions, one can construct a data structure for $c$-NN with $O(n^{\rho} \log n)$ query time and space $O(n^{1+\rho})$, where $\rho = \frac{\log p_1}{\log p_2} < 1$.
LSH trades off query time with extra preprocessing time and space that can be accomplished offline. It requires constructing a one-time data structure which costs $O(n^{1+\rho})$ space, and any $c$-approximate near neighbor query can be answered in $O(n^{\rho} \log n)$ time in the worst case.
A particularly interesting sufficient condition for the existence of an LSH is the monotonicity of the collision probability in $Sim$. Thus, if a hash function family $\mathcal{H}$ satisfies

(5)    $Pr_{h \in \mathcal{H}}[h(x) = h(y)] = g(Sim(x, y)),$

where $g$ is any strictly monotonically increasing function, then the conditions of Definition 2 are automatically satisfied for all $c < 1$.
The quantity $\rho = \frac{\log p_1}{\log p_2}$ is a property of the LSH family, and it is of particular interest because it determines the worst-case query complexity of the approximate near neighbor search. It should further be noted that $\rho$ depends on $S_0$, the operating threshold, and $c$, the approximation ratio we are ready to tolerate. When we have two or more LSH families for a given similarity measure, the LSH family with the smaller value of $\rho$, for given $S_0$ and $c$, is preferred.
2.2 Minwise Hashing (Minhash)
Minwise hashing [6] is the LSH for the resemblance, also known as the Jaccard similarity, between sets. In this paper, we focus on binary data vectors, which can be equivalently viewed as sets.
Given a set $S \subseteq \Omega = \{1, 2, ..., D\}$, the minwise hashing family applies a random permutation $\pi: \Omega \to \Omega$ on $S$ and stores only the minimum value after the permutation mapping. Formally, minwise hashing (or minhash) is defined as:

(6)    $h_{\pi}(S) = \min(\pi(S)).$

Given sets $S_1$ and $S_2$, it can be shown that the probability of collision is the resemblance $R$:

(7)    $Pr[h_{\pi}(S_1) = h_{\pi}(S_2)] = \dfrac{|S_1 \cap S_2|}{|S_1 \cup S_2|} = \dfrac{a}{f_1 + f_2 - a} = R,$

where $f_1 = |S_1|$, $f_2 = |S_2|$, and $a = |S_1 \cap S_2|$. It follows from Eq. (7) that minwise hashing is an $(S_0, cS_0, S_0, cS_0)$-sensitive family of hash functions when the similarity function of interest is resemblance.
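The collision property in Eq. (7) can be verified empirically with a minimal simulation (our own sketch, using explicit random permutations; real implementations use cheaper universal hashing):

```python
import random

def minhash(S, perm):
    """h_pi(S) = min(pi(S)) for a permutation pi of {0, ..., D-1}."""
    return min(perm[i] for i in S)

random.seed(0)
D = 40
S1, S2 = set(range(0, 20)), set(range(10, 30))  # a = 10, |union| = 30, R = 1/3
trials = 5000
collisions = 0
for _ in range(trials):
    perm = list(range(D))
    random.shuffle(perm)                         # a fresh random permutation
    collisions += (minhash(S1, perm) == minhash(S2, perm))
# The empirical collision rate should be close to the resemblance R = 1/3.
print(abs(collisions / trials - 1 / 3) < 0.03)
```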
Even though minhash was really meant for retrieval with resemblance similarity, it is nevertheless a popular hashing scheme used for retrieving set containment or intersection for binary data [1]. In practice, the ordering of inner product and the ordering of resemblance can be different because of the variation in the values of $f_1$ and $f_2$, as argued in Section 1, which may be undesirable and lead to suboptimal results. We show later that by exploiting asymmetric transformations we can get rid of the undesirable dependency on the number of nonzeros, leading to a better hashing scheme for indexing set intersection (or binary inner products).
2.3 LSH for L2 Distance (L2LSH)
[16] presented a novel LSH family for all $\ell_p$ ($p \in (0, 2]$) distances. In particular, when $p = 2$, this scheme provides an LSH family for $\ell_2$ distance. Formally, given a fixed number $r$, we choose a random vector $w$ with each component generated from the i.i.d. standard normal, i.e., $w_i \sim N(0, 1)$, and a scalar $b$ generated uniformly at random from $[0, r]$. The hash function is defined as:

(8)    $h^{L2}_{w,b}(x) = \left\lfloor \dfrac{w^T x + b}{r} \right\rfloor,$

where $\lfloor \cdot \rfloor$ is the floor operation. The collision probability under this scheme can be shown to be

(9)    $Pr[h^{L2}_{w,b}(x) = h^{L2}_{w,b}(y)] = F_r(d),$

(10)    $F_r(d) = 1 - 2\Phi(-r/d) - \dfrac{2}{\sqrt{2\pi}\,(r/d)}\left(1 - e^{-(r/d)^2/2}\right),$

where $\Phi(\cdot)$ is the cumulative distribution function (cdf) of the standard normal distribution and $d = \|x - y\|_2$ is the Euclidean distance between the vectors $x$ and $y$. This collision probability $F_r(d)$ is a monotonically decreasing function of the distance $d$, and hence this family is an LSH for $\ell_2$ distance. This scheme is also part of the LSH package [2]. Here $r$ is a parameter.
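A sketch of the hash function in Eq. (8) (our own illustrative code; `w`, `b`, and `r` stand for the Gaussian vector, the uniform offset, and the bin width defined above):

```python
import math

def l2lsh(x, w, b, r):
    """h(x) = floor((w^T x + b) / r), with w ~ N(0, I) and b ~ Uniform[0, r]."""
    projection = sum(wi * xi for wi, xi in zip(w, x))
    return math.floor((projection + b) / r)

# With a fixed (w, b), the bucket id is deterministic:
print(l2lsh([1, 2], [1.0, 1.0], 0.5, 2.0))  # floor(3.5 / 2) = 1
```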
2.4 LSH for Cosine Similarity (SRP)
Signed Random Projections (SRP), or simhash, is another popular LSH, for the cosine similarity measure, which originates from the concept of signed random projections [17, 10]. Given a vector $x$, SRP utilizes a random vector $w$ with each component generated from the i.i.d. standard normal, i.e., $w_i \sim N(0, 1)$, and only stores the sign of the projection. Formally, simhash is given by

(11)    $h^{sign}(x) = \mathrm{sign}(w^T x).$

It was shown in the seminal work [17] that collision under SRP satisfies the following equation:

(12)    $Pr[h^{sign}(x) = h^{sign}(y)] = 1 - \dfrac{\theta}{\pi},$

where $\theta = \cos^{-1}\left(\dfrac{x^T y}{\|x\|_2 \|y\|_2}\right)$. The term $\dfrac{x^T y}{\|x\|_2 \|y\|_2}$ is the popular cosine similarity.
For sets (or equivalently binary vectors), the cosine similarity reduces to

(13)    $\mathcal{S} = \dfrac{a}{\sqrt{f_1 f_2}}.$
The recent work on coding for random projections [26, 27] has shown the advantage of SRP (and 2-bit random projections) over L2LSH for both similarity estimation and near neighbor search. Interestingly, another recent work [38] has shown that for binary data (actually even sparse non-binary data), minhash can significantly outperform SRP for near neighbor search even when we evaluate both SRP and minhash in terms of the cosine similarity (although minhash is designed for resemblance). This motivates us to design asymmetric minhash for achieving better performance in retrieving set containments. But first, we provide an overview of asymmetric LSH for general inner products (not restricted to binary data).
2.5 Asymmetric LSH (ALSH) for General Inner Products
The term “ALSH” stands for asymmetric LSH, as used in a recent work [35]. Through an elementary argument, [35] showed that it is not possible to have a Locality Sensitive Hashing (LSH) family for general unnormalized inner products.
For inner products between vectors $x$ and $y$, it is possible to have $x^T y > x^T x$. Thus, for any hashing scheme to be a valid LSH, we would need $Pr[h(x) = h(y)] > Pr[h(x) = h(x)] = 1$, which is an impossibility. It turns out that there is a simple fix if we allow asymmetry in the hashing scheme. Allowing asymmetry leads to an extended framework of asymmetric locality sensitive hashing (ALSH). The idea is to have one hashing scheme for assigning buckets to the data points in the collection $\mathcal{C}$, and an altogether different hashing scheme while querying.
Definition: (Asymmetric Locality Sensitive Hashing (ALSH)) A family $\mathcal{H}$, along with the two vector functions $Q(\cdot)$ (Query Transformation) and $P(\cdot)$ (Preprocessing Transformation), is called $(S_0, cS_0, p_1, p_2)$-sensitive if, for a given $c$-NN instance with query $q$ and a hash function $h$ chosen uniformly from $\mathcal{H}$, the following hold:

if $Sim(q, x) \ge S_0$ then $Pr[h(Q(q)) = h(P(x))] \ge p_1$;

if $Sim(q, x) \le cS_0$ then $Pr[h(Q(q)) = h(P(x))] \le p_2$.
Here $x$ is any point in the collection $\mathcal{C}$. Asymmetric LSH borrows all the theoretical guarantees of LSH.
Fact 2
Given a family of hash functions $\mathcal{H}$ and the associated query and preprocessing transformations $Q(\cdot)$ and $P(\cdot)$ respectively, which is $(S_0, cS_0, p_1, p_2)$-sensitive, one can construct a data structure for $c$-NN with $O(n^{\rho} \log n)$ query time and space $O(n^{1+\rho})$, where $\rho = \frac{\log p_1}{\log p_2}$.
[35] showed that, using asymmetric transformations, the problem of maximum inner product search (MIPS) can be reduced to the problem of approximate near neighbor search in $\ell_2$. The algorithm first scales all $x \in \mathcal{C}$ by a constant large enough such that $\|x\|_2 \le U < 1$. The proposed ALSH family (L2-ALSH) is the LSH family for $\ell_2$ distance with the Preprocessing transformation $P(\cdot)$ and the Query transformation $Q(\cdot)$ defined as follows:

(14)    $P(x) = [x;\; \|x\|_2^2;\; \|x\|_2^4;\; ...;\; \|x\|_2^{2^m};\; 1/2;\; ...;\; 1/2]$

(15)    $Q(q) = [q;\; 1/2;\; ...;\; 1/2;\; \|q\|_2^2;\; \|q\|_2^4;\; ...;\; \|q\|_2^{2^m}],$

where $[;]$ is the concatenation. $P(x)$ appends $m$ scalars of the form $\|x\|_2^{2^i}$ followed by $m$ “1/2s” at the end of the vector $x$, while $Q(q)$ first appends $m$ “1/2s” to the end of the vector $q$ and then $m$ scalars of the form $\|q\|_2^{2^i}$. It was shown that this leads to a provably efficient algorithm for MIPS.
Fact 3
For the problem of $c$-approximate MIPS, with the above transformations one can construct a data structure having $O(n^{\rho} \log n)$ query time and space $O(n^{1+\rho})$, where $\rho < 1$ is the solution of a constrained optimization over the parameters $m$, $U$, and $r$ [35]. Here the guarantees depend on the maximum norm of the space.
Soon it was realized that a very similar idea can convert the MIPS problem into the problem of maximum cosine similarity search, which can be efficiently solved by SRP, leading to a new and better ALSH for MIPS, Sign-ALSH [37], which works as follows. The algorithm again first scales all $x \in \mathcal{C}$ by a constant large enough such that $\|x\|_2 \le U < 1$. The proposed ALSH family (Sign-ALSH) is the SRP family for cosine similarity with the Preprocessing transformation $P(\cdot)$ and the Query transformation $Q(\cdot)$ defined as follows:

(17)    $P(x) = [x;\; 1/2 - \|x\|_2^2;\; 1/2 - \|x\|_2^4;\; ...;\; 1/2 - \|x\|_2^{2^m};\; 0;\; ...;\; 0]$

(18)    $Q(q) = [q;\; 0;\; ...;\; 0;\; 1/2 - \|q\|_2^2;\; 1/2 - \|q\|_2^4;\; ...;\; 1/2 - \|q\|_2^{2^m}],$

where $[;]$ is the concatenation. $P(x)$ appends $m$ scalars of the form $1/2 - \|x\|_2^{2^i}$ followed by $m$ “0s” at the end of the vector $x$, while $Q(q)$ appends $m$ “0s” followed by $m$ scalars of the form $1/2 - \|q\|_2^{2^i}$ to the end of the vector $q$. It was shown that this leads to a provably efficient algorithm for MIPS.
As demonstrated by the recent work [26] on coding for random projections, there is a significant advantage of SRP over L2LSH for near neighbor search. Thus, it is not surprising that Sign-ALSH outperforms L2-ALSH for the MIPS problem.
Similar to L2-ALSH, the runtime guarantees for Sign-ALSH can be shown as:
Fact 4
For the problem of $c$-approximate MIPS, one can construct a data structure having $O(n^{\rho^*} \log n)$ query time and space $O(n^{1+\rho^*})$, where $\rho^* < 1$ is the solution to a constrained optimization problem over the parameters $m$ and $U$ of the transformation:

(19)    $\rho^* = \min_{U, m} \dfrac{\log p_1(S_0; U, m)}{\log p_1(cS_0; U, m)},$

where $p_1(S; U, m)$ denotes the SRP collision probability of the transformed query/data pair when the original inner product is $S$ (see [37] for the closed form).
There is a similar asymmetric transformation [3, 31] which, followed by signed random projections, leads to another ALSH having very similar performance to Sign-ALSH. The $\rho$ values, which are also very similar to the $\rho^*$ of Sign-ALSH, can be shown (after normalizing the data so that all norms are at most one) to be

(20)    $\rho = \dfrac{\log\left(1 - \frac{1}{\pi}\cos^{-1}(S_0)\right)}{\log\left(1 - \frac{1}{\pi}\cos^{-1}(cS_0)\right)}.$
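The transformation of [3, 31] admits a particularly compact sketch (our own code, assuming the data have already been scaled so that all norms are at most one; `P` and `Q` mirror the preprocessing and query transformations):

```python
import numpy as np

def P(x):
    """Preprocessing: append sqrt(1 - ||x||^2) then 0, making ||P(x)|| = 1."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [np.sqrt(1.0 - x @ x)], [0.0]])

def Q(q):
    """Query: append 0 then sqrt(1 - ||q||^2), making ||Q(q)|| = 1."""
    q = np.asarray(q, dtype=float)
    return np.concatenate([q, [0.0], [np.sqrt(1.0 - q @ q)]])

x = [0.3, 0.4]                 # ||x||^2 = 0.25
q = [0.6, 0.0]                 # ||q||^2 = 0.36
# The inner product is preserved and both transformed vectors are unit norm,
# so the cosine similarity of (Q(q), P(x)) equals the original inner product.
print(float(P(x) @ Q(q)))      # ≈ 0.18
```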
Both L2-ALSH and Sign-ALSH work for any general inner products over $\mathbb{R}^D$. For sparse and high-dimensional binary datasets, which are common over the web, it is known that minhash is typically the preferred choice of hashing over random projection based hash functions [38]. We show later that the ALSH derived from minhash, which we call asymmetric minwise hashing (MH-ALSH), is more suitable for indexing set intersection for sparse binary vectors than the existing ALSHs for general inner products.
3 A Construction of LSH for Indexing Binary Inner Products
In [35], it was shown that there cannot exist any LSH for general unnormalized inner products. The key argument used in the proof was the fact that it is possible to have vectors $x$ and $y$ with $x^T y > x^T x$. However, the binary inner product (or set intersection) is special. For any two binary vectors $x$ and $q$ we always have $q^T x \le q^T q$. Therefore, the argument used to show the non-existence of LSH for general inner products no longer holds for this special case. In fact, there does exist an LSH for binary inner products (although it is mainly of theoretical interest). We provide an explicit construction in this section.
Our proposed LSH construction is based on sampling. Simply sampling a random component leads to the popular LSH for Hamming distance [32]. The ordering of inner products is different from that of Hamming distances. The Hamming distance between $x$ and query $q$ is given by $f_x + f_q - 2a$, while we want the collision probability to be monotonic in the inner product $a$. The term $f_x$ makes the Hamming distance non-monotonic in $a$. Note that $f_q$ has no effect on the ordering of $x \in \mathcal{C}$ because it is constant for every query. To construct an LSH monotonic in the binary inner product, we need an extra trick.
Given a binary data vector $x$, we sample a random coordinate (or attribute) $i \in \{1, 2, ..., D\}$. If the value of this coordinate is $1$ (in other words, if this attribute is present in the set), our hash value is a fixed number $C$. If this randomly sampled coordinate has value $0$ (or the attribute is absent), then we independently generate a random integer uniformly from $\{1, 2, ..., N\}$ (the fixed number $C$ is chosen outside $\{1, 2, ..., N\}$, so that it cannot collide with a randomly generated value). Formally,

(21)    $h(x) = \begin{cases} C & \text{if } x_i = 1 \\ \mathrm{rand}(1, N) & \text{if } x_i = 0 \end{cases}$
Theorem 1
Given two binary vectors $x$ and $q$, we have

(22)    $Pr[h(x) = h(q)] = \dfrac{a}{D} + \dfrac{D - f_x - f_q + a}{D N}.$
Proof 1
The probability that both $x_i$ and $q_i$ have value $1$ is $\frac{a}{D}$; in this case both hash values equal $C$ and collide. The only other way both hashes can be equal is when both coordinates have value $0$, which happens with probability $\frac{D - f_x - f_q + a}{D}$, and the two independently generated random numbers are equal, which happens with probability $\frac{1}{N}$. The total probability is $\frac{a}{D} + \frac{D - f_x - f_q + a}{D} \times \frac{1}{N}$, which simplifies to the desired expression.
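The collision probability in Eq. (22) can be checked empirically (our own sketch; we draw the fixed value $C$ outside $\{1, ..., N\}$, and coordinates are 0-indexed):

```python
import random

def sample_hash(x, i, N, C, rng):
    """Eq. (21): return the fixed value C if coordinate i of x is nonzero,
    otherwise an independent uniform random integer from {1, ..., N}."""
    return C if i in x else rng.randrange(1, N + 1)

rng = random.Random(2)
D, N, C = 20, 10, 0            # C = 0 lies outside {1, ..., N}
x = {0, 1, 2, 3, 4}            # f_x = 5
q = {3, 4, 5, 6}               # f_q = 4, a = |x ∩ q| = 2
trials = 200_000
hits = 0
for _ in range(trials):
    i = rng.randrange(D)       # the sampled coordinate is shared
    hits += (sample_hash(x, i, N, C, rng) == sample_hash(q, i, N, C, rng))
# Eq. (22): a/D + (D - f_x - f_q + a)/(D*N) = 2/20 + 13/200 = 0.165
print(abs(hits / trials - 0.165) < 0.01)
```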
Corollary 1
$h$ is an $(S_0, cS_0, \frac{S_0}{D}, \frac{cS_0}{D} + \frac{1}{N})$-sensitive locality sensitive hashing family for the binary inner product, since Eq. (22) gives $Pr[h(x) = h(q)] \ge \frac{a}{D}$ and $Pr[h(x) = h(q)] \le \frac{a}{D} + \frac{1}{N}$.
3.1 Shortcomings
The above LSH for binary inner products is likely to be very inefficient for sparse and high-dimensional datasets. For those datasets, typically the value of $D$ is very high and the sparsity ensures that $a$ is very small. For modern web datasets, we can have $D$ running into billions (or $2^{64}$), while the sparsity is only in the few hundreds or perhaps thousands [8]. Therefore, we have $\frac{a}{D} \approx 0$, and the collision probability in Eq. (22) essentially boils down to $\frac{1}{N}$, the collision probability of purely random hash codes. In other words, the hashing scheme becomes worthless in the sparse high-dimensional domain. On the other hand, if we observe the collision probability of minhash, Eq. (7), the denominator is $f_1 + f_2 - a$, which is usually of the order of the sparsity and much less than the dimensionality $D$ for sparse datasets.
Another way of realizing the problem with the above LSH is to note that it is informative only if the randomly sampled coordinate has value equal to $1$. For a very sparse dataset with $f_x \ll D$, sampling a nonzero coordinate has probability $\frac{f_x}{D} \approx 0$. Thus, almost all of the hashes will be independent random numbers.
3.2 Why Minhash Can Be a Reasonable Approach?
In this section, we argue why retrieving inner product based on plain minhash is a reasonable thing to do. Later, we will show a provable way to improve it using asymmetric transformations.
The number of nonzeros in the query, i.e., $f_q = |q|$, does not change the identity of $\arg\max$ in Eq. (4). Let us assume that we have data of bounded sparsity and define the constant $M$ as

(23)    $M = \max_{x \in \mathcal{C}} f_x,$

where $M$ is simply the maximum number of nonzeros (or maximum cardinality of sets) seen in the database. For sparse data seen in practice, $M$ is likely to be small compared to $D$. Outliers, if any, can be handled separately. By observing that $a \le f_x \le M$ and $f_x + f_q - a \ge f_q$, we also have

(24)    $\dfrac{a}{M + f_q - a} \le \dfrac{a}{f_x + f_q - a} \le \dfrac{a}{f_q}.$

Thus, given the bounded sparsity, if we assume that the number of nonzeros in the query is given, then we can show that minhash is an LSH for inner products, because the collision probability can be upper and lower bounded by functions purely of $a$, $f_q$ and $M$.
Theorem 2
Given bounded sparsity $M$ and a query $q$ with $f_q = |q|$, minhash is an $(S_0, cS_0, \frac{S_0}{M + f_q - S_0}, \frac{cS_0}{f_q})$-sensitive family for inner products, with $\rho = \dfrac{\log \frac{S_0}{M + f_q - S_0}}{\log \frac{cS_0}{f_q}}$.
This explains why minhash might be a reasonable hashing approach for retrieving inner products or set intersection.
Here, if we remove the assumption that $f_q$ is given, then in the worst case $f_q \le M$ and we get $2M - S_0$ in the denominator. Note that the above is a worst-case analysis, and the assumption of fixed $f_q$ is needed to obtain any meaningful $\rho$ with minhash. We show the power of ALSH in the next section by providing a better hashing scheme, and we do not even need the assumption of fixing $f_q$.
4 Asymmetric Minwise Hashing (MH-ALSH)
In this section, we provide a very simple asymmetric fix to minhash, named asymmetric minwise hashing (MH-ALSH), which makes the overall collision probability monotonic in the original inner product $a = q^T x$. For sparse binary data, which is common in practice, we later show that the proposed hashing scheme is superior (both theoretically as well as empirically) compared to the existing ALSH schemes for inner products [35].
4.1 The New ALSH for Binary Data
We define the new preprocessing and query transformations $P: \{0,1\}^D \to \{0,1\}^{D+M}$ and $Q: \{0,1\}^D \to \{0,1\}^{D+M}$ as:

(25)    $P(x) = [x;\; \underbrace{1;\, 1;\, ...;\, 1}_{M - f_x};\; \underbrace{0;\, 0;\, ...;\, 0}_{f_x}]$

(26)    $Q(q) = [q;\; \underbrace{0;\, 0;\, ...;\, 0}_{M}],$

where $[;]$ is the concatenation to the vector. For $P(x)$ we append $M - f_x$ 1s and the rest zeros, while in $Q(q)$ we simply append $M$ zeros.
At this point we can already see the power of asymmetric transformations. The original inner product between $P(x)$ and $Q(q)$ is unchanged, and its value is $a = q^T x$. Given the query $q$, the new resemblance $R'$ between $P(x)$ and $Q(q)$ is

(27)    $R' = \dfrac{|P(x) \cap Q(q)|}{|P(x) \cup Q(q)|} = \dfrac{a}{M + f_q - a}.$

If we define our new similarity as $Sim(x, q) = \frac{a}{M + f_q - a}$, which is similar in nature to the containment $\frac{a}{f_q}$, then the near neighbors in this new similarity are the same as the near neighbors with respect to either set intersection $a$ or set containment $\frac{a}{f_q}$. Thus, we can instead compute near neighbors in $\frac{a}{M + f_q - a}$, which is also the resemblance between $P(x)$ and $Q(q)$. We can therefore use minhash on $P(x)$ and $Q(q)$.
Observe that now we have $M + f_q - a$ in the denominator, where $M$ is the maximum number of nonzeros seen in the dataset (the cardinality of the largest set), which for very sparse data is likely to be much smaller than $D$. Thus, asymmetric minhash is a better scheme than the sampling based LSH of Section 3, whose collision probability is roughly $\frac{a}{D}$, for very sparse datasets where we usually have $M \ll D$. This is an interesting example where we do have an LSH scheme, but an altogether different asymmetric LSH (ALSH) improves over the existing LSH. This is not surprising, because asymmetric LSH families are more powerful [35].
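Representing binary vectors as sets of nonzero coordinates, the transformations in Eqs. (25)-(26) and the resulting resemblance in Eq. (27) can be sketched as follows (our own code; `preprocess` and `query_transform` are hypothetical names):

```python
def preprocess(x, D, M):
    """P(x), Eq. (25): append (M - f_x) ones as new nonzero coordinates
    (the trailing zeros are implicit), so P(x) has exactly M nonzeros."""
    return set(x) | set(range(D, D + M - len(x)))

def query_transform(q, D, M):
    """Q(q), Eq. (26): append only zeros; the nonzeros are unchanged."""
    return set(q)

D, M = 10, 5
x = {0, 1, 2}                  # f_x = 3
q = {1, 2, 3}                  # f_q = 3, a = |x ∩ q| = 2
Px, Qq = preprocess(x, D, M), query_transform(q, D, M)

print(len(Px & Qq))                  # inner product unchanged: a = 2
print(len(Px & Qq) / len(Px | Qq))   # new resemblance a/(M + f_q - a) = 2/6
```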
From a theoretical perspective, to obtain an upper bound on the query and space complexity of approximate near neighbor search with binary inner products, we want the collision probability to be independent of the quantity $f_q$. This is not difficult to achieve. The asymmetric transformation used to get rid of $f_x$ in the denominator can be reapplied to get rid of $f_q$.
Formally, we can define $P': \{0,1\}^D \to \{0,1\}^{D+2M}$ and $Q': \{0,1\}^D \to \{0,1\}^{D+2M}$ as:

(28)    $P'(x) = [x;\; \underbrace{1;\, ...;\, 1}_{M - f_x};\; \underbrace{0;\, ...;\, 0}_{f_x + M}], \qquad Q'(q) = [q;\; \underbrace{0;\, ...;\, 0}_{M};\; \underbrace{1;\, ...;\, 1}_{M - f_q};\; \underbrace{0;\, ...;\, 0}_{f_q}],$

where in $P'(x)$ we append $M - f_x$ 1s and the rest zeros, while in $Q'(q)$ we append $M$ zeros, then $M - f_q$ 1s and the rest zeros.
Again the inner product is unaltered, and the new resemblance then becomes

(29)    $R'' = \dfrac{|P'(x) \cap Q'(q)|}{|P'(x) \cup Q'(q)|} = \dfrac{a}{2M - a},$

which is independent of $f_q$ and is monotonic in $a$. This allows us to achieve a formal upper bound on the complexity of approximate maximum inner product search with the new asymmetric minhash.
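The $f_q$-independent version in Eq. (28) pads the data vector and the query in disjoint blocks of coordinates, so both end up with exactly $M$ nonzeros and the union is always $2M - a$. A sketch (our own code):

```python
def P2(x, D, M):
    """P'(x), Eq. (28): append (M - f_x) ones in the block [D, D + M)."""
    return set(x) | set(range(D, D + M - len(x)))

def Q2(q, D, M):
    """Q'(q), Eq. (28): append (M - f_q) ones in the disjoint block
    [D + M, D + 2M), so the padding of P' and Q' never intersects."""
    return set(q) | set(range(D + M, D + 2 * M - len(q)))

D, M = 10, 5
x, q = {0, 1, 2}, {1, 2, 3}              # a = |x ∩ q| = 2
Px2, Qq2 = P2(x, D, M), Q2(q, D, M)

assert len(Px2) == len(Qq2) == M         # both padded to exactly M nonzeros
print(len(Px2 & Qq2) / len(Px2 | Qq2))   # a / (2M - a) = 2/8 = 0.25
```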
From the collision probability expression, i.e., Eq. (29), we have
Theorem 3
Minwise hashing, along with the Query transformation $Q'$ and the Preprocessing transformation $P'$ defined by Equation (28), is an $(S_0, cS_0, \frac{S_0}{2M - S_0}, \frac{cS_0}{2M - cS_0})$-sensitive asymmetric hashing family for set intersection.
This leads to an important corollary.
Corollary 2
There exists an algorithm for $c$-approximate set intersection (or binary inner products), with bounded sparsity $M$, that requires space $O(n^{1+\rho})$ and query time $O(n^{\rho} \log n)$, where

(30)    $\rho = \dfrac{\log \frac{S_0}{2M - S_0}}{\log \frac{cS_0}{2M - cS_0}} < 1.$
Given a query $q$ and any point $x \in \mathcal{C}$, the collision probability under traditional minhash is $R = \frac{a}{f_x + f_q - a}$. This penalizes sets with high $f_x$, which in many scenarios is not desirable. To balance this negative effect, the asymmetric transformation penalizes sets with smaller $f_x$. Note that the $M - f_x$ ones added in the transformation $P(x)$ give an additional chance, in proportion to $M - f_x$, for the minhash of $P(x)$ not to match the minhash of $Q(q)$. This asymmetric probabilistic correction balances the penalization inherent to minhash. This is a simple way of correcting the collision probability, which could be of independent interest in itself. We will show in our evaluation section that, despite its simplicity, such a correction leads to significant improvements over plain minhash.
4.2 Faster Sampling
Our transformations $P'$ and $Q'$ always create sets with exactly $M$ nonzeros. In case $M$ is big, hashing might take a lot of time. We can use fast consistent weighted sampling [29, 22] for efficient generation of hashes. We can instead use transformations $P^s: \{0,1\}^D \to \mathbb{R}^{D+1}$ and $Q^s: \{0,1\}^D \to \mathbb{R}^{D+1}$ that make the data non-binary, as follows:

(31)    $P^s(x) = [x;\; M - f_x], \qquad Q^s(q) = [q;\; 0].$

It is not difficult to see that the weighted Jaccard similarity (or weighted resemblance) between $P^s(x)$ and $Q^s(q)$, for a given query $q$ and any $x \in \mathcal{C}$, is

(32)    $R^W = \dfrac{\sum_i \min(P^s(x)_i, Q^s(q)_i)}{\sum_i \max(P^s(x)_i, Q^s(q)_i)} = \dfrac{a}{M + f_q - a}.$

Therefore, we can use fast consistent weighted sampling for weighted Jaccard similarity on $P^s(x)$ and $Q^s(q)$ to compute the hash values in time proportional to the number of nonzero weights, rather than the maximum sparsity $M$. In practice we will need many hashes, for which we can utilize the recent line of work that makes minhash and weighted minhash significantly faster [36, 18].
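The weighted variant in Eq. (31) appends a single real-valued component instead of $M - f_x$ ones. A sketch (our own code, computing the weighted Jaccard similarity directly from its definition):

```python
def weighted_jaccard(u, v):
    """Weighted Jaccard: sum_i min(u_i, v_i) / sum_i max(u_i, v_i)."""
    num = sum(min(a, b) for a, b in zip(u, v))
    den = sum(max(a, b) for a, b in zip(u, v))
    return num / den

def P_s(x_vec, M):
    """P^s(x) = [x; M - f_x], Eq. (31): one extra real-valued component."""
    return x_vec + [M - sum(x_vec)]

def Q_s(q_vec):
    """Q^s(q) = [q; 0], Eq. (31)."""
    return q_vec + [0]

M = 5
x_vec = [1, 1, 1, 0, 0, 0]     # f_x = 3
q_vec = [0, 1, 1, 1, 0, 0]     # f_q = 3, a = 2
# Weighted resemblance equals a / (M + f_q - a) = 2/6
print(weighted_jaccard(P_s(x_vec, M), Q_s(q_vec)))
```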
5 Theoretical Comparisons
For solving the MIPS problem on general data types, we already know two asymmetric hashing schemes, L2-ALSH and Sign-ALSH, as described in Section 2.5. In this section, we provide theoretical comparisons of the two existing ALSH methods with the proposed asymmetric minwise hashing (MH-ALSH). As argued, the LSH scheme described in Section 3 is unlikely to be useful in practice because of its dependence on the dimensionality $D$, and hence we safely ignore it for the simplicity of the discussion.
Before we formally compare various asymmetric LSH schemes for maximum inner product search, we argue why asymmetric minhash should be advantageous over traditional minhash for retrieving inner products. Let $q$ be the binary query vector, and $f_q$ denote the number of nonzeros in the query. The $\rho$ for asymmetric minhash, in terms of $f_q$ and $M$, is straightforward from the collision probability in Eq. (27):

(33)    $\rho_{MH\text{-}ALSH} = \dfrac{\log \frac{S_0}{M + f_q - S_0}}{\log \frac{cS_0}{M + f_q - cS_0}}.$
For minhash, we have from Theorem 2 $\rho_{minhash} = \dfrac{\log \frac{S_0}{M + f_q - S_0}}{\log \frac{cS_0}{f_q}}$. Since $M$ is the upper bound on the sparsity and $cS_0$ is some value of inner product, we have $cS_0 \le M$, and therefore $\frac{cS_0}{M + f_q - cS_0} \le \frac{cS_0}{f_q}$. Using this fact, the following theorem immediately follows:
Theorem 4
For any query q, we have ρ_MHALSH ≤ ρ_minhash.
This result theoretically explains why asymmetric minhash is better for retrieval with binary inner products, compared to plain minhash.
To compare asymmetric minhash with ALSH for general inner products, we compare ρ_MHALSH with the ρ of the ALSH for inner products based on signed random projections. Note that it was shown that SignALSH has better theoretical ρ values than L2ALSH [37]. Therefore, it suffices to show that asymmetric minhash outperforms the signed-random-projection based ALSH. Both ρ_MHALSH and ρ_SignALSH can be rewritten in terms of the ratio z = S0/M. Note that for binary data we have
(34)   ρ_MHALSH = log( z / (2 − z) ) / log( cz / (2 − cz) ),   ρ_SignALSH = log( 1 − (1/π) cos^{-1}(z) ) / log( 1 − (1/π) cos^{-1}(cz) )
Observe that M is also an upper bound on any inner product. Therefore, we have 0 ≤ z = S0/M ≤ 1. We plot the two ρ values over this range of z for several approximation ratios c. The comparison is summarized in Figure 1. Note that here we use the ρ derived from [3, 31] instead of [37] for convenience, although the two schemes perform essentially identically.
We can clearly see that, irrespective of the choice of the threshold z or the approximation ratio c, asymmetric minhash outperforms the signed random projection based ALSH in terms of the theoretical ρ values. This is not surprising, because it is known that minwise hashing based methods are often significantly more powerful on binary data than SRP (or simhash) [38]. Therefore, ALSH based on minwise hashing outperforms ALSH based on SRP, as shown by our theoretical comparisons. Our proposal thus leads to an algorithmic improvement over state-of-the-art hashing techniques for retrieving binary inner products.
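The comparison in Figure 1 can be reproduced by tabulating the two ρ expressions directly. The sketch below (hypothetical function names; the SRP-side ρ follows the [3, 31] form, as in the figure) checks the ordering numerically over a small grid of the threshold-to-sparsity ratio z = S0/M and the approximation ratio c:

```python
import math

def rho_mh_alsh(z, c):
    """rho for asymmetric minhash: collision probability z/(2 - z) at z = S0/M."""
    return math.log(z / (2 - z)) / math.log(c * z / (2 - c * z))

def rho_sign(z, c):
    """rho for the SRP-based scheme of [3, 31]: collision prob. 1 - arccos(z)/pi."""
    return math.log(1 - math.acos(z) / math.pi) / \
           math.log(1 - math.acos(c * z) / math.pi)

# Across a grid of thresholds z and ratios c, asymmetric minhash
# gives the smaller (better) rho, and both are valid (< 1).
for z in (0.2, 0.5, 0.8):
    for c in (0.3, 0.5, 0.9):
        assert rho_mh_alsh(z, c) < rho_sign(z, c) < 1.0
```

A smaller ρ directly translates into a lower query-time exponent n^ρ for the bucketing algorithm, which is why this gap matters in practice.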
6 Evaluations
In this section, we compare the different hashing schemes on the actual task of retrieving top-ranked elements based on set Jaccard containment. The experiments are divided into two parts. In the first part, we show how the rankings based on various hash functions correlate with the ordering of Jaccard containment. In the second part, we perform the actual LSH-based bucketing experiment for retrieving top-ranked elements and compare the computational savings obtained by the various hashing algorithms.
6.1 Datasets
We chose four publicly available high-dimensional sparse datasets: EP2006 (downloaded from the LIBSVM website; the original name is “E2006LOG1P”, which we rename “EP2006”), MNIST, NEWS20, and NYTIMES. Except for MNIST, the datasets are high-dimensional binary “bag-of-words” (BoW) representations of the corresponding text corpora. MNIST is an image dataset consisting of 784-pixel images of handwritten digits. Binarized versions of MNIST are commonly used in the literature; the pixel values were binarized to 0/1. For each of the four datasets, we generated two partitions. The bigger partition was used to create the hash tables and is referred to as the training partition. The smaller partition, which we call the query partition, is used for querying. The statistics of these datasets are summarized in Table 1. The datasets cover a wide spectrum of sparsity and dimensionality.
Dataset   # Query   # Train    # Dim       nonzeros (mean ± std)

EP2006    2,000     17,395     4,272,227   6072 ± 3208
MNIST     2,000     68,000     784         150 ± 41
NEWS20    2,000     18,000     1,355,191   454 ± 654
NYTIMES   2,000     100,000    102,660     232 ± 114
6.2 Competing Hash Functions
We consider the following hash functions for evaluations:

Asymmetric minwise hashing (Proposed): This is our proposal, the asymmetric minhash described in Section 4.1.

L2-based Asymmetric LSH for Inner Products (L2ALSH): This is the asymmetric LSH of [35] for general inner products, based on the LSH for L2 distance.

SRP-based Asymmetric LSH for Inner Products (SignALSH): This is the asymmetric hash function of [37] for general inner products based on SRP.

Traditional minwise hashing (minhash): This is the vanilla (symmetric) minhash, which serves as the natural baseline for set containment.
6.3 Ranking Experiment: Hash Quality Evaluations
We are interested in knowing how the orderings under the different competing hash functions correlate with the ordering under the underlying similarity measure, which in this case is the Jaccard containment. For this task, given a query vector q, we compute the top-100 gold standard elements from the training set based on the Jaccard containment a/f_q = |x ∩ q| / |q|. Note that this is the same as the top-100 elements based on binary inner products. Given a query q, we compute K hash codes of the query and of all the vectors in the training set. We then compute the number of times the hash values of a vector x in the training set match (or collide) with the hash values of the query, defined by
(35)   Matches(x) = Σ_{t=1}^{K} 1{ h_t(q) = h_t(x) },
where 1{·} is the indicator function and the subscript t distinguishes independent draws of the underlying hash function. Based on Matches(x) we rank all elements in the training set. This procedure generates a sorted list for every query and every hash function. For asymmetric hash functions, when computing total collisions, on the query vector we apply the corresponding Q function (query transformation) followed by the underlying hash function, while for elements in the training set we apply the P function (preprocessing transformation) followed by the same hash function.
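The collision count of Eq. (35) is simple to compute. The sketch below (a toy minhash built from seeded MD5 digests and hypothetical helper names, not the experimental code) ranks a tiny training set against a query by collision counts:

```python
import hashlib

def minhash(s, seed):
    """One minhash value of a set: the seeded digest simulates a random permutation."""
    def h(e):
        return int(hashlib.md5(f"{seed}-{e}".encode()).hexdigest(), 16)
    return min(h(e) for e in s)

def matches(x, q, K):
    """Eq. (35): number of collisions out of K independent minhash draws."""
    return sum(minhash(x, t) == minhash(q, t) for t in range(K))

train = [frozenset({1, 2, 3, 4}), frozenset({3, 4}), frozenset({7, 8, 9})]
q = frozenset({3, 4})
ranked = sorted(range(len(train)), key=lambda i: -matches(train[i], q, K=64))
# The exact duplicate of q collides on every draw, so it ranks first.
assert matches(train[1], q, K=64) == 64
assert ranked[0] == 1
```

For an asymmetric scheme, one would hash Q(q) in place of q and P(x) in place of each training element x, as described above.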
We compute the precision and the recall of the top-100 gold standard elements in the ranked list generated by each hash function. To compute precision and recall, we start at the top of the ranked list and walk down in order. Suppose we are at the k-th ranked element; we check whether this element belongs to the gold standard top-100 list. If it is one of the top-100 gold standard elements, we increment the count of relevant seen by 1; otherwise we move on to k + 1. By the k-th step, the total number of elements seen is k. The precision and recall at that point are then computed as:
(36)   Precision = (relevant seen) / k,   Recall = (relevant seen) / 100
It is important to balance both: a methodology that obtains higher precision at a given recall is superior, since higher precision indicates better ranking of the relevant items. We finally average these precision and recall values over all elements in the query set. The results, for several choices of the number of hashes K, are summarized in Figure 2.
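The precision/recall computation of Eq. (36) amounts to a single walk down the ranked list. A minimal sketch (hypothetical function name, and a gold list of size 3 instead of 100 for brevity):

```python
def precision_recall_curve(ranked, gold):
    """Walk the ranked list; at step k report (relevant_seen / k, relevant_seen / |gold|)."""
    gold = set(gold)
    seen, curve = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in gold:
            seen += 1
        curve.append((seen / k, seen / len(gold)))
    return curve

ranked = ['a', 'x', 'b', 'c', 'y']     # list produced by sorting on Matches(x)
gold = ['a', 'b', 'c']                 # gold standard top elements
curve = precision_recall_curve(ranked, gold)
assert curve[0] == (1.0, 1 / 3)        # first element is relevant
assert curve[3] == (0.75, 1.0)         # 3 of the first 4 relevant; full recall
```

Averaging these curves over all queries gives one precision-recall curve per hash function, as plotted in Figure 2.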
We can clearly see that the proposed hashing scheme always achieves better precision, often significantly so, at any given recall compared to the other hash functions. The two ALSH schemes are almost always better than traditional minwise hashing. This confirms the fact that the ranking based on collisions under minwise hashing can differ from the ranking under Jaccard containment or inner products. This is expected, because minwise hashing additionally penalizes the number of nonzeros, leading to a ranking very different from the ranking of inner products. SignALSH usually performs better than L2ALSH, which is in line with the results obtained in [37].
It should be noted that ranking experiments only validate the monotonicity of the collision probability. Although better ranking is a good indicator of a good hash function, it does not always mean that we will achieve a faster sublinear LSH algorithm. For bucketing, the sensitivity of the collision probability around a particular threshold is the most important factor; see [32] for more details. What matters is the gap between the collision probabilities of the good and the bad points. In the next subsection, we compare these schemes on the actual task of near neighbor retrieval with Jaccard containment.
6.4 Bucketing Experiment: Computational Savings in Near Neighbor Retrieval
In this section, we evaluate the four hashing schemes with the standard parameterized bucketing algorithm [2] for sublinear time retrieval of near neighbors based on Jaccard containment. In the parameterized LSH algorithm, we generate L different meta-hash functions. Each of these meta-hash functions is formed by concatenating K different hash values as
(37)   B_j(x) = [ h_{j1}(x); h_{j2}(x); …; h_{jK}(x) ],   j = 1, 2, …, L,
where h_{jt}, with j ∈ {1, 2, …, L} and t ∈ {1, 2, …, K}, are K × L different independent evaluations of the hash function under consideration. Each competing scheme uses its own underlying randomized hash function h.
In general, the parameterized LSH works in two phases:

Preprocessing Phase: We construct L hash tables from the data by storing each element x of the training set at location B_j(P(x)) in hash table j. Note that for vanilla minhash, which is a symmetric hashing scheme, P(x) = x. For the asymmetric schemes, we use their corresponding P functions. Preprocessing is a one-time operation; once the hash tables are created they are fixed.

Query Phase: Given a query q, we report the union of all the points in the buckets B_j(Q(q)), j = 1, 2, …, L, where the union is over the L hash tables. Again, Q here is the corresponding query transformation of the asymmetric hashing scheme; for minhash Q(q) = q.
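The two phases above can be sketched in a few lines. This is a toy illustration with hypothetical names (the symmetric minhash case, P = Q = identity), using Python's built-in tuple hash as a stand-in for a random permutation, not a production LSH index:

```python
from collections import defaultdict

def minhash_fn(seed):
    """One minhash draw; hash((seed, e)) simulates a random permutation of element ids."""
    return lambda s: min(hash((seed, e)) for e in s)

def preprocess(data, hash_fns, K, L, P=lambda x: x):
    """Eq. (37): bucket key B_j is the tuple of K concatenated hash values; L tables."""
    tables = [defaultdict(set) for _ in range(L)]
    for idx, x in enumerate(data):
        px = P(x)
        for j in range(L):
            key = tuple(hash_fns[j * K + t](px) for t in range(K))
            tables[j][key].add(idx)
    return tables

def probe(tables, hash_fns, K, q, Q=lambda x: x):
    """Query phase: report the union of the L probed buckets B_j(Q(q))."""
    qq = Q(q)
    found = set()
    for j, table in enumerate(tables):
        key = tuple(hash_fns[j * K + t](qq) for t in range(K))
        found |= table.get(key, set())
    return found

K, L = 2, 4
fns = [minhash_fn(i) for i in range(K * L)]
data = [frozenset({1, 2, 3}), frozenset({1, 2, 3}), frozenset({40, 50})]
tables = preprocess(data, fns, K, L)
result = probe(tables, fns, K, frozenset({1, 2, 3}))
assert {0, 1} <= result      # exact matches collide in every table
assert 2 not in result       # the disjoint set shares no bucket
```

An asymmetric scheme would simply pass its P into preprocess and its Q into probe; the table machinery is unchanged.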
Typically, the performance of a bucketing algorithm is sensitive to the choice of the parameters K and L. Ideally, to find the best K and L, we need to know the operating threshold S0 and the approximation ratio c in advance. Unfortunately, the data and the queries are very diverse, and therefore for retrieving top-ranked near neighbors there is no common fixed threshold and approximation ratio that works for all the queries.
Our objective is to compare the four hashing schemes while minimizing the effect of K and L, if any, on the evaluations. This is achieved by finding the best K and L at every recall level. We run the bucketing experiment over a grid of combinations of K and L for each of the four hash functions independently. These choices include the recommended optimal combinations at various thresholds. We then compute, for every K and L, the mean recall of the top-T pairs and the mean number of points reported, per query, to achieve that recall. The best K and L at every recall level is chosen independently for each value of T. The plots of the mean fraction of points scanned with respect to the recall of the top-T gold standard near neighbors are summarized in Figure 3.
The performance of a hashing based method varies with the similarity levels in the datasets. It can be seen that the proposed asymmetric minhash always retrieves far fewer points, and hence requires significantly fewer computations, than the other hashing schemes at any recall level on all four datasets. Asymmetric minhash consistently outperforms the other hash functions irrespective of the operating point. The plots clearly establish the superiority of the proposed scheme for indexing Jaccard containment (or inner products).
L2ALSH and SignALSH perform better than traditional minhash on the EP2006 and NEWS20 datasets, while they are worse than plain minhash on the NYTIMES and MNIST datasets. If we look at the statistics in Table 1, NYTIMES and MNIST are precisely the datasets with little variation in the number of nonzeros, and hence minhash performs better on them. In fact, on the MNIST dataset, which has very small variation in the number of nonzeros, the performance of plain minhash is very close to that of asymmetric minhash. This is of course expected, because there is negligible effect of the penalization on the ordering. The EP2006 and NEWS20 datasets have huge variations in their numbers of nonzeros, and hence minhash performs very poorly on them. What is exciting is that, despite these variations in the nonzeros, asymmetric minhash always outperforms the other ALSH schemes for general inner products.
The difference in performance between plain minhash and asymmetric minhash clearly establishes the utility of our proposal, which is simple and does not require any major modification of traditional minhash implementations. Given that minhash is widely popular, we hope that our proposal will be adopted.
7 Conclusion and Future Work
Minwise hashing (minhash) is a widely popular indexing scheme in practice for similarity search. Minhash was originally designed for estimating set resemblance (i.e., the normalized size of set intersections). In many applications the performance of minhash is severely affected because minhash is biased towards smaller sets. In this study, we propose asymmetric corrections to minwise hashing (asymmetric minwise hashing, or MHALSH) that remove this often undesirable bias. Our corrections lead to a provably superior algorithm for retrieving binary inner products compared with existing proposals in the literature. Rigorous experimental evaluations on the task of retrieving maximum inner products clearly establish that the proposed approach can be significantly advantageous over the existing state-of-the-art hashing schemes in practice, when the desired similarity is the inner product (or containment) rather than the resemblance. Our proposed method requires only minimal modification of the original minwise hashing algorithm and should be straightforward to implement in practice.
Future work: One immediate direction is asymmetric consistent weighted sampling for hashing the weighted intersection Σ_i min(x_i, q_i), where x and q are general real-valued vectors. One proposal for the new asymmetric transformation is the following:
(38)   P(x) = [ x; M − Σ_i x_i; 0 ],   Q(q) = [ q; 0; M − Σ_i q_i ],
where M = max_x Σ_i x_i over the collection. It is not difficult to show that the weighted Jaccard similarity between P(x) and Q(q) is monotonic in Σ_i min(x_i, q_i), as desired. At this point, we can use the existing methods for consistent weighted sampling on the new data after the asymmetric transformations [29, 22, 18].
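The monotonicity claim is easy to check numerically: with the transformation of Eq. (38), the weighted Jaccard of the transformed pair comes out as I/(2M − I), where I = Σ_i min(x_i, q_i). A minimal sketch under these assumptions (hypothetical names, dense lists):

```python
def P_w(x, M):
    # [x; M - sum(x); 0]
    return x + [M - sum(x), 0]

def Q_w(q, M):
    # [q; 0; M - sum(q)]
    return q + [0, M - sum(q)]

def weighted_jaccard(u, v):
    return sum(map(min, u, v)) / sum(map(max, u, v))

x = [0.5, 2.0, 0.0]          # general real-valued vector
q = [1.0, 1.0, 3.0]
M = 6.0                      # an upper bound on sum(x) over the collection
inter = sum(map(min, x, q))  # weighted intersection = 1.5
sim = weighted_jaccard(P_w(x, M), Q_w(q, M))
assert abs(sim - inter / (2 * M - inter)) < 1e-12   # 1.5 / 10.5
```

Since I/(2M − I) is increasing in I for fixed M, any consistent weighted sampling scheme applied to the transformed vectors orders candidates by weighted intersection.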
References
 [1] P. Agrawal, A. Arasu, and R. Kaushik. On indexing errortolerant set containment. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pages 927–938. ACM, 2010.
 [2] A. Andoni and P. Indyk. E2LSH: Exact Euclidean locality-sensitive hashing. Technical report, 2004.
 [3] Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys ’14, 2014.
 [4] Y. Bachrach, E. Porat, and J. S. Rosenschein. Sketching techniques for collaborative filtering. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI ’09, 2009.
 [5] R. J. Bayardo, Y. Ma, and R. Srikant. Scaling up all pairs similarity search. In WWW, pages 131–140, 2007.
 [6] A. Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21–29, Positano, Italy, 1997.
 [7] A. Z. Broder, M. Charikar, A. M. Frieze, and M. Mitzenmacher. Minwise independent permutations. In STOC, pages 327–336, Dallas, TX, 1998.
 [8] T. Chandra, E. Ie, K. Goldman, T. L. Llinares, J. McFadden, F. Pereira, J. Redstone, T. Shaked, and Y. Singer. Sibyl: a system for large scale machine learning.
 [9] O. Chapelle, P. Haffner, and V. N. Vapnik. Support vector machines for histogrambased image classification. IEEE Transactions on Neural Networks, 10(5):1055–1064, 1999.
 [10] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380–388, Montreal, Quebec, Canada, 2002.
 [11] S. Chaudhuri, V. Ganti, and R. Kaushik. A primitive operator for similarity joins in data cleaning. In ICDE, 2006.
 [12] S. Chaudhuri, V. Ganti, and D. Xin. Mining document collections to facilitate accurate approximate entity matching. Proceedings of the VLDB Endowment, 2(1):395–406, 2009.
 [13] F. Chierichetti, R. Kumar, S. Lattanzi, M. Mitzenmacher, A. Panconesi, and P. Raghavan. On compressing social networks. In KDD, pages 219–228, Paris, France, 2009.
 [14] G. Cormode and S. Muthukrishnan. Space efficient mining of multigraph streams. In Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 271–282. ACM, 2005.
 [15] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: scalable online collaborative filtering. In Proceedings of the 16th international conference on World Wide Web, pages 271–280. ACM, 2007.
 [16] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on stable distributions. In SCG, pages 253–262, Brooklyn, NY, 2004.
 [17] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of ACM, 42(6):1115–1145, 1995.
 [18] B. Haeupler, M. Manasse, and K. Talwar. Consistent weighted sampling made fast, small, and easy. Technical report, arXiv:1410.4266, 2014.
 [19] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, Barbados, 2005.
 [20] M. R. Henzinger. Finding near-duplicate web pages: a large-scale evaluation of algorithms. In SIGIR, pages 284–291, 2006.
 [21] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, Dallas, TX, 1998.
 [22] S. Ioffe. Improved consistent sampling, weighted minhash and L1 sketching. In ICDM, pages 246–255, Sydney, AU, 2010.
 [23] Y. Jiang, C. Ngo, and J. Yang. Towards optimal bagoffeatures for object categorization and semantic video retrieval. In CIVR, pages 494–501, Amsterdam, Netherlands, 2007.
 [24] N. Koudas, S. Sarawagi, and D. Srivastava. Record linkage: similarity measures and algorithms. In Proceedings of the 2006 ACM SIGMOD international conference on Management of data, pages 802–803. ACM, 2006.
 [25] P. Li and K. W. Church. A sketch algorithm for estimating twoway and multiway associations. Computational Linguistics (Preliminary results appeared in HLT/EMNLP 2005), 33(3):305–354, 2007.
 [26] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for random projections. In ICML, 2014.
 [27] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for random projections and approximate near neighbor search. Technical report, arXiv:1403.8144, 2014.
 [28] P. Li, A. Shrivastava, J. Moore, and A. C. König. Hashing algorithms for largescale learning. In NIPS, Granada, Spain, 2011.
 [29] M. Manasse, F. McSherry, and K. Talwar. Consistent weighted sampling. Technical Report MSR-TR-2010-73, Microsoft Research, 2010.
 [30] S. Melnik and H. GarciaMolina. Adaptive algorithms for set containment joins. ACM Transactions on Database Systems (TODS), 28(1):56–99, 2003.
 [31] B. Neyshabur and N. Srebro. A simpler and better lsh for maximum inner product search (mips). Technical report, arXiv:1410.5518, 2014.
 [32] A. Rajaraman and J. Ullman. Mining of Massive Datasets. http://i.stanford.edu/~ullman/mmds.html.
 [33] K. Ramasamy, J. F. Naughton, and R. Kaushik. Set containment joins: The good, the bad and the ugly.
 [34] A. Shrivastava and P. Li. Beyond pairwise: Provably fast algorithms for approximate k-way similarity search. In NIPS, Lake Tahoe, NV, 2013.
 [35] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (mips). In NIPS, Montreal, CA, 2014.
 [36] A. Shrivastava and P. Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, Beijing, China, 2014.
 [37] A. Shrivastava and P. Li. An improved scheme for asymmetric lsh. arXiv preprint arXiv:1410.5410, 2014.
 [38] A. Shrivastava and P. Li. In defense of minhash over simhash. In AISTATS, 2014.
 [39] R. Weber, H.-J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In VLDB, pages 194–205, 1998.