A neural network catalyzer for multi-dimensional similarity search


Abstract

This paper aims at learning a function that maps input vectors to an output space in a way that improves high-dimensional similarity search. As a proxy objective, we design and train a neural network that favors uniformity in the spherical output space, while preserving the neighborhood structure after the mapping. For this purpose, we propose a new regularizer derived from the Kozachenko–Leonenko differential entropy estimator and combine it with a locality-aware triplet loss. Our method operates as a catalyzer for traditional indexing methods such as locality sensitive hashing or iterative quantization, boosting the overall recall. Additionally, the network output distribution makes it possible to leverage structured quantizers with efficient algebraic encoding, in particular spherical lattice quantizers such as the Gosset lattice E8. Our experiments show that this approach is competitive with state-of-the-art methods such as optimized product quantization.1

1 Introduction

Recent work kraska17learnedindex () proposed to leverage the pattern-matching ability of machine learning algorithms to improve traditional index structures such as B-trees or Bloom filters, with encouraging results. In their one-dimensional case, an optimal B-Tree can be constructed if the cumulative distribution function (CDF) is known, and thus Kraska et al. kraska17learnedindex () learn the CDF using a neural network. We emphasize that the CDF itself is a mapping between an arbitrary input distribution and the uniform distribution on [0, 1]. In this work, we wish to generalize such an approach to multi-dimensional spaces: we aim at learning a function that improves or simplifies the subsequent high-dimensional indexing method.

Similarity search methods often rely on various forms of learning machinery jegou11pq (); gong2013iterative (); wang2014hashing (); douze2016polysemous (); in particular, there is a substantial body of literature on methods producing compact codes. Another line of work shows the interest of neural networks in the context of binary hashing liong2015deep (); jain17subic (). Yet the problem of jointly optimizing a coding stage and a neural network remains essentially unsolved, partly because it is difficult to optimize through a discretization function. For this reason, most efforts have been devoted to networks producing binary codes, for which optimization tricks exist, such as soft binarization or stochastic relaxation. However, it is difficult to improve over more powerful codes such as those produced by product quantization jegou11pq (), and recent solutions addressing product quantization require complex optimization procedures klein2017defense ().

In order to circumvent this problem, we propose a drastic simplification of learning algorithms for indexing. Instead of trying to optimize through a discretization layer, we learn a mapping such that the output follows the distribution under which the subsequent discretization method, either binary or a more general quantizer, is optimal. In other words, instead of trying to adapt an indexing structure to the data, we adapt the data to the index. As a side note, many similarity search methods are implicitly designed for the range search problem (or near neighbor, as opposed to nearest neighbor indyk1998approximate (); andoni2006near ()), which aims at finding all vectors whose distance to the query vector is below a fixed threshold. For real-world high-dimensional distributions, range search typically returns either no neighbors or too many. The discrepancy between near– and nearest–neighbors is much lower with uniform data.

Our proposal requires jointly optimizing two antithetical criteria. First, we need to ensure that neighbors are preserved by the mapping, using a vanilla ranking loss usunier2009ranking (); chechik2010large (); wang2014learning (). Second, the training must favor a uniform output. This suggests a regularization similar to maximum entropy pereyra2017regularizing (), except that in our case we consider a continuous output space. We therefore propose to cast an existing differential entropy estimator into a regularization term, which plays the same “distribution-matching” role as the Kullback-Leibler term of variational auto-encoders doersch2016tutorial ().

Our approach is illustrated by Figure 1. We summarize our contributions as follows:

  • We introduce an approach for multi-dimensional indexing that maps the input data to an output space in which indexing is easier. It learns a neural network that plays the role of an adapter for subsequent similarity search methods.

  • For this purpose we introduce a loss derived from the Kozachenko-Leonenko differential entropy estimator to favor uniformity in the spherical output space.

  • The end-to-end performance of existing techniques like Locality Sensitive Hashing (LSH) with short codes charikar02lsh () or Iterative Quantization gong2013iterative () is improved significantly by our mapping.

  • Our learned mapping makes it possible to leverage, in particular for non-uniform data, spherical lattice quantizers with competitive coding properties and efficient algebraic encoding.

This paper is organized as follows. Section 2 discusses related work. Our neural model and the optimization scheme are introduced in Section 3. Section 4 details how we combine this strategy with lattice assignment to produce compact codes. Section 5 evaluates our approach experimentally.

  
Figure 1: Illustration of our method: we learn a neural network that aims at preserving the neighborhood structure in the input space while best covering the output space (uniformly). This trade-off is controlled by a parameter λ. Our method takes as input a set of samples from an unknown distribution. Without the regularizer, the mapping keeps the locality of the neighbors but does not cover the output space. At the opposite extreme, when the network is trained only with the differential entropy regularizer, the neighbors are not maintained through the mapping. Intermediate values of λ offer different trade-offs between neighbor fidelity and uniformity, which is a proper input for a powerful lattice quantizer (depicted here by the hexagonal lattice).

2 Related work

Generative modeling. Models such as Generative Adversarial Networks (GANs) goodfellow14gan () or Variational Auto-Encoders (VAEs) kingma13vae () aim at learning a mapping between an isotropic Gaussian distribution and the empirical distribution of a training set. Our approach maps an empirical input distribution to a uniform distribution on the spherical output space. Another distinction is that GANs learn a unidirectional mapping from the latent code to an image (decoder), whereas VAEs learn a bidirectional mapping (encoder - decoder). In our work, we focus on learning the encoder, which is the neural network pre-processing of input vectors for subsequent indexing.

Dimensionality reduction and representation learning. There is a large body of literature on the topic of dimensionality reduction, see for instance the review by van der Maaten et al. van2009dimensionality (). Relevant work includes stochastic neighbor embedding hinton03sne () and the subsequent t-SNE approach vandermaaten08tsne (), which is tailored to low-dimensional spaces for visualization purposes. Both are non-linear dimensionality reduction methods aiming at preserving the neighborhood structure in the output space.

Learning to index and quantize. The literature on compact codes for indexing is most relevant to our work, see wang2014hashing (); wang2016learning () for an overview of the topic. Early popular high-dimensional approximate neighbor methods, such as Locality Sensitive Hashing indyk1998approximate (); GIM99 (); charikar02lsh (); andoni2006near (), relied mostly on statistical guarantees without any learning stage. This lack of data adaptation was subsequently addressed by several works. Iterative Quantization (ITQ) gong2013iterative () modifies the coordinate system to improve the binarization, while methods inspired by compression jegou11pq (); babenko2014additive (); ZQTW15 (); jain2016quantizedsparse () have gradually emerged as strong competitors for estimating distances or similarities with compact codes. While most of these works aim at reproducing a target (dis-)similarity, some recent works directly leverage semantic information in a supervised manner with neural networks liong2015deep (); jain17subic (); klein2017defense (); sablayrolles2017should ().

Lattices, also known as Euclidean networks, are discrete subsets of the Euclidean space that are of particular interest due to their space covering and sphere packing properties conway2013sphere (). They also have excellent discretization properties under some assumptions on the distribution, and, most interestingly, the closest lattice point of a query can be determined efficiently thanks to algebraic properties ran1998efficient (). This is why lattices have been proposed jegou2008query () as hash functions in LSH. However, for real-world data, lattices waste capacity because they assume that all regions of the space have the same density pauleve2010locality (). In this paper, we are interested in spherical lattices because of their bounded support.

Entropy regularization appears in many areas of machine learning and indexing. For instance, Pereyra et al. pereyra2017regularizing () argue that penalizing confident output distributions is an effective regularization. Another proposal by Bojanowski et al. bojanowski2017unsupervised (), in an unsupervised learning context, is to spread the output by enforcing input images to map to points drawn uniformly on a sphere. Interestingly, most recent works on binary hashing introduce some form of entropic regularization: deep hashing liong2015deep (), for example, employs a regularization term that increases the marginal entropy of each bit, and SUBIC jain17subic () extends this idea to one-hot codes.

3 Our approach: Learning the catalyzer

As discussed in the introduction, our proposal is inspired by prior work on one-dimensional indexing kraska17learnedindex (). However, their approach based on uni-dimensional density estimation cannot be directly translated to the multi-dimensional case. Our strategy is to train a neural network f that maps vectors from the d_in-dimensional input space to a d_out-dimensional output space. We perform the training on a set of n input points x_1, ..., x_n. We first simplify the problem by constraining the output representation to lie on the hyper-sphere via ℓ2-normalization; on the hyper-sphere we can define a bounded, uniform and isotropic distribution. The trunk of the network itself is a simple Multi-Layer Perceptron (MLP) rosenblatt58perceptron () with 2 hidden layers, and uses rectified linear units as the non-linearity.
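
As an illustration, here is a minimal PyTorch sketch of such a catalyzer network; the class name, the hidden width (1024) and the example dimensions are our assumptions for the sake of a runnable example, not values prescribed by the method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Catalyzer(nn.Module):
        """MLP with 2 hidden layers, ReLU activations and an l2-normalized output."""
        def __init__(self, d_in, d_out, hidden=1024):  # hidden width is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, d_out),
            )

        def forward(self, x):
            y = self.net(x)
            return F.normalize(y, p=2, dim=-1)  # project the output onto the unit hyper-sphere

    # Example: map 96-dimensional Deep1M-like vectors to an 8-dimensional sphere.
    f = Catalyzer(d_in=96, d_out=8)
    y = f(torch.randn(4, 96))
    print(y.norm(dim=-1))  # all ones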

3.1 KoLeo: Differential entropy regularizer

Let us first introduce our regularizer, which we design to spread points uniformly across the output hyper-sphere. With the knowledge of the density of the mapped points, we could directly maximize its differential entropy. Given only the n samples f(x_1), ..., f(x_n), we instead use an estimator of the differential entropy as a proxy. It was shown by Kozachenko and Leonenko (see e.g. beirlant97entropy ()) that, defining ρ_{n,i} = min_{j ≠ i} ||f(x_i) − f(x_j)||, the differential entropy of the distribution can be estimated by

H_n = \frac{d_{\mathrm{out}}}{n} \sum_{i=1}^{n} \log \rho_{n,i} + \alpha_n + \beta_{d_{\mathrm{out}}}     (1)

where α_n and β_{d_out} are two constants that depend on the number of samples n and on the dimensionality d_out of the data. Ignoring these affine components, we define our entropic regularizer as

\mathcal{L}_{\mathrm{KoLeo}} = -\frac{1}{n} \sum_{i=1}^{n} \log \rho_{n,i}     (2)

This loss also has a satisfactory geometric interpretation: the closest points are pushed apart, with a non-linearity that is non-decreasing and concave, which ensures diminishing returns: as points move away from each other, the marginal impact of further increasing their distance becomes smaller.
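
For concreteness, here is a minimal PyTorch sketch of the regularizer of Eqn. 2 computed on a mini-batch; the batch-level approximation and the numerical-stability epsilon are our assumptions.

    import torch
    import torch.nn.functional as F

    def koleo_loss(y, eps=1e-8):
        """KoLeo regularizer of Eqn. 2 on a batch of l2-normalized vectors y of shape (n, d)."""
        n = y.shape[0]
        dist = torch.cdist(y, y, p=2)                      # pairwise Euclidean distances
        dist = dist + 1e6 * torch.eye(n, device=y.device)  # exclude self-distances from the min
        rho = dist.min(dim=1).values                       # distance to the nearest other point
        return -torch.log(rho + eps).mean()

    # toy usage on a random batch of points on the 8-dimensional sphere
    y = F.normalize(torch.randn(128, 8), dim=-1)
    print(koleo_loss(y))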

Remarks.

We are essentially seeking the same effect as Pereyra et al. pereyra2017regularizing (), i.e., that the penalty term tends to make the output more uniform. Note however that in our case we have a continuous output, and therefore a discrete entropy term is not appropriate. Citing Doersch doersch2016tutorial (), “it is interesting to view the Kullback-Leibler as a regularization term”; one could also draw a relationship with the Kullback-Leibler (KL) terms occurring in VAEs and t-SNE, in that the formula of the KL becomes very similar to Eqn. 2 when considering a uniform distribution. A key difference is the quantity ρ_{n,i}, which is computed differently and does not need to have a probabilistic interpretation in our case.

3.2 Rank preserving loss

We enforce the outputs of the neural network to follow the same neighborhood structure as in the input space by adopting the margin triplet loss  chechik2010large (); wang2014learning ()

\mathcal{L}_{\mathrm{rank}} = \sum_{(x, x^+, x^-)} \max\left(0,\; \mu + \| f(x) - f(x^+) \|_2^2 - \| f(x) - f(x^-) \|_2^2 \right)     (3)

where x is a query, x^+ a positive match, x^- a negative match, and μ is the desired margin. The positive matches are obtained by computing the nearest neighbors of each point of the training set in the input space. The negative matches are generated by taking the k-th nearest neighbor of x in the training set. In order to speed up the learning, we compute the k-th nearest neighbor of every point in the dataset at the end of each epoch, and use these as negatives for the next epoch. Our overall loss combines the triplet loss and the entropy regularizer, as

\mathcal{L} = \mathcal{L}_{\mathrm{rank}} + \lambda \, \mathcal{L}_{\mathrm{KoLeo}}     (4)

where the parameter λ controls the trade-off between ranking quality and uniformity.
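
A possible mini-batch implementation of Eqns. 3 and 4 is sketched below; the margin (0.1) and trade-off (0.05) values are placeholder hyper-parameters, and the KoLeo helper is the same as the sketch of Section 3.1.

    import torch
    import torch.nn.functional as F

    def koleo_loss(y, eps=1e-8):
        # KoLeo regularizer of Eqn. 2 (same as the sketch in Section 3.1)
        d = torch.cdist(y, y) + 1e6 * torch.eye(y.shape[0], device=y.device)
        return -torch.log(d.min(dim=1).values + eps).mean()

    def catalyzer_loss(f_x, f_pos, f_neg, margin=0.1, lam=0.05):
        # margin and lam are placeholder values, not taken from the paper
        d_pos = (f_x - f_pos).pow(2).sum(dim=-1)      # squared distance to the positive match
        d_neg = (f_x - f_neg).pow(2).sum(dim=-1)      # squared distance to the negative match
        rank = F.relu(margin + d_pos - d_neg).mean()  # triplet loss of Eqn. 3 (batch average)
        return rank + lam * koleo_loss(f_x)           # combined objective of Eqn. 4

    # toy usage with random points on the sphere
    rand_pts = lambda: F.normalize(torch.randn(64, 8), dim=-1)
    print(catalyzer_loss(rand_pts(), rand_pts(), rand_pts()))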

3.3 Discussion

Figure 2: Agreement between nearest neighbor and range search: average number of results per query for given values of the range threshold ε (indicated on the curve), and corresponding recall values. For example, to obtain 80% recall, the search in the original space requires a threshold that returns 700 results per query on average, while in the transformed space a single threshold returns just 200 results.

Choice of λ.

Figure 1 was produced by our method on a toy dataset, with the disk as the output space. Without the regularization term, neighboring points tend to collapse and most of the output space is not exploited. If we quantize this output with a regular quantizer, many Voronoi cells are empty, meaning that we waste coding capacity. In contrast, if we rely solely on the entropic regularizer, the neighbors are poorly preserved. Interesting trade-offs are achieved by setting the parameter λ with a cross-validation that includes the subsequent quantization stage.

Nearest neighbor vs range search.

Figure 2 shows how our method achieves a better agreement between range search and k-nearest-neighbor search on real data. In this experiment, we consider different thresholds ε for the range search and perform a set of queries for each of them. Then we measure how many vectors we must return, on average, to achieve a certain recall in terms of the nearest neighbors in the original space. Without our mapping, the number of results for a given ε has a large variance. In contrast, after the mapping it is possible to use a single threshold to find most neighbors.

Visualization of the output distribution.

While Figure 1 illustrates our method with the 2D disk as an output space, we are interested in mapping input samples to a higher-dimensional hyper-sphere. Figure 3 proposes a visualization of the high-dimensional density from a different viewpoint, with the Deep1M dataset mapped to d_out dimensions. We randomly sample 2D planes in the output space and project the dataset points onto them. For each row, the 5 figures are the angular histograms of the points under a polar parametrization of the corresponding plane. The area inside the curve is constant and proportional to the number of samples. A uniform angular distribution produces a centered disk, while less uniform distributions look like unbalanced potatoes.

The densities we represent are marginalized, so if the distribution looks non-uniform then it is non-uniform in the full d_out-dimensional space, but the reverse is not true. Yet one can compare the results obtained for different regularization coefficients, which shows that our regularizer has a strong uniformizing effect on the mapping, ultimately resembling a uniform distribution for large λ.

Figure 3: Impact of the regularizer on the output distribution. Each row corresponds to a different amount of regularization (λ). Each column corresponds to a different view of the hyper-spherical empirical distribution, parametrized by an angle in [0, 2π); see text for details.
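
This visualization only requires a random 2D projection and an angular histogram; the following numpy sketch (with illustrative dimensions and bin count of our choosing) reproduces the idea.

    import numpy as np

    def angular_histogram(points, n_bins=64, seed=0):
        """Project points onto a random 2D plane and histogram their polar angle."""
        rng = np.random.default_rng(seed)
        d = points.shape[1]
        basis, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # random orthonormal 2D basis in R^d
        proj = points @ basis                                  # (n, 2) coordinates in the plane
        angles = np.arctan2(proj[:, 1], proj[:, 0])            # polar angle in [-pi, pi)
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
        return hist / len(points)

    # toy usage: a uniform spherical sample yields an approximately flat histogram
    x = np.random.default_rng(1).standard_normal((100000, 8))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    print(angular_histogram(x))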

4 Compact codes with lattices

In this section we describe how we leverage lattices with our catalyzer. Our motivation is as follows: as discussed by Paulevé et al. pauleve2010locality (), lattices impose a rigid partitioning of the feature space, which is suboptimal for arbitrary distributions, see Figure 1. In contrast, lattices offer excellent quantization properties for a uniform distribution conway2013sphere (). Thanks to our regularizer, we are more likely to meet this optimality condition in the output space, thereby making lattices an attractive choice.

Note that prior works employing lattices andoni2006near (); pauleve2010locality () for indexing were considering a framework different from ours, inspired by the E2LSH variant of LSH DIIM04 () and often referred to as the “cell-probe model”: lattices are used to select regions likely to contain neighbors, and a subsequent stage compares the query vector with all the vectors assigned to the same lattice point. In contrast, we consider the problem of producing compact codes that can be used to compare compressed vectors charikar02lsh (); jegou11pq (). The relevant trade-off is the retrieval accuracy vs. the size of the code in bits.

4.1 Spherical lattice

From now on, we focus on spherical lattices, which are regular subsets of points on a hyper-sphere centered at the origin. One practical way conway2013sphere () to construct a good spherical quantizer is to intersect a top-performing Euclidean lattice with such a hyper-sphere. In this paper, we focus on the 8-dimensional Gosset lattice E8. We denote by S_r the 8-dimensional hyper-sphere of radius r.

The first hyper-sphere with a non-trivial intersection with E8 is the one of radius √2. The intersection is a regular subset of 240 points, which are projected back to the unit hyper-sphere by ℓ2-normalization. We obtain a larger number of points on the hyper-sphere by increasing the radius. The hyper-spheres having a non-void intersection with E8 are those such that r² is an even integer. They offer many advantages:

  • They inherit the remarkable compactness and quantization qualities of .

  • The assignment is performed efficiently; see the supplementary material for details.

  • The lattice points can be enumerated and constructed without storing them in a table. This is especially important for large radii, for which the number of points on the sphere can be in the millions; a brute-force construction for small radii is sketched below.
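
As a sanity check, the intersection S_r ∩ E8 can be enumerated by brute force for very small squared radii (the efficient, atom-based construction is the one described in the supplementary material); the following sketch recovers the 240 minimal vectors.

    import itertools
    import numpy as np

    def e8_sphere(sq_radius):
        """Brute-force enumeration of E8 points with squared norm sq_radius (small radii only)."""
        points = []
        r = int(np.ceil(np.sqrt(sq_radius)))
        # integer points of D8: integer coordinates with an even coordinate sum
        for p in itertools.product(range(-r, r + 1), repeat=8):
            if sum(p) % 2 == 0 and sum(c * c for c in p) == sq_radius:
                points.append(p)
        # half-integer points: D8 shifted by (1/2, ..., 1/2), coordinate sum stays even
        halves = [k + 0.5 for k in range(-r - 1, r + 1)]
        for p in itertools.product(halves, repeat=8):
            if sum(p) % 2 == 0 and abs(sum(c * c for c in p) - sq_radius) < 1e-9:
                points.append(p)
        return np.array(points)

    pts = e8_sphere(2)
    print(len(pts))                                           # 240 minimal vectors
    unit = pts / np.linalg.norm(pts, axis=1, keepdims=True)   # projected onto the unit sphere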

4.2 Product space output

As the radius grows, our quantization procedure becomes more accurate, but the performance of our indexing system is eventually limited by the output dimensionality of our neural network: the 8-dimensional space may be too small to preserve the neighborhood, as we observed in our experiments. We thus propose to get more flexibility by considering the Cartesian product of lattices2. This is the strategy employed in product quantization jegou11pq (); GHKS13 (); klein2017defense (), for which the subspaces are quantized separately with k-means. It is straightforward to extend our approach to a product space: if we employ m lattices, we can use an output space of dimensionality d_out = 8m. The only required change is to ℓ2-normalize each sub-vector separately.
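
Adapting the network output to a product of spherical lattices only requires normalizing each 8-dimensional block separately; a minimal PyTorch sketch, with m = 4 sub-lattices as an assumed configuration:

    import torch
    import torch.nn.functional as F

    def blockwise_normalize(y, block=8):
        """l2-normalize each 8-dimensional sub-vector separately (product of spherical lattices)."""
        n, d = y.shape
        assert d % block == 0
        y = y.view(n, d // block, block)
        y = F.normalize(y, p=2, dim=-1)
        return y.view(n, d)

    # Example with m = 4 sub-lattices, i.e. d_out = 32 (an assumed configuration)
    y = blockwise_normalize(torch.randn(10, 32))
    print(y.view(10, 4, 8).norm(dim=-1))  # each sub-vector has unit norm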

4.3 Compact codes and comparison procedure

In terms of coding, each sub-vector requires ⌈log2 |S_r ∩ E8|⌉ bits, so the code size grows with the radius r and the number m of sub-lattices; in practice we only consider a limited set of radii. In order to assign a vector x, we first compute y = f(x), and find the nearest vector on S_r ∩ E8 using the fast assignment operation, which formally minimizes:

q(y) = \arg\min_{c \in E_8 \cap S_r} \left\| y - \frac{c}{r} \right\|_2     (5)

Given a query x' and its representation f(x'), we approximate the similarity between x' and a database vector x by comparing f(x') directly with the reconstructed codeword q(f(x)) (asymmetric comparison jegou11pq ()).
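
The following numpy sketch illustrates the encoding and the asymmetric comparison with a placeholder codebook; in the real system the codebook would be the ℓ2-normalized points of S_r ∩ E8 and the assignment would use the fast algebraic routine rather than an exhaustive dot product.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder codebook: stands in for the normalized points of S_r ∩ E8.
    C = rng.standard_normal((240, 8))
    C /= np.linalg.norm(C, axis=1, keepdims=True)

    def encode(y):
        """Assign each l2-normalized vector to its nearest codeword (index into C)."""
        # on the sphere, minimizing the distance is equivalent to maximizing the dot product
        return np.argmax(y @ C.T, axis=1)

    def asymmetric_search(q, codes, k=10):
        """Asymmetric comparison: queries stay uncompressed, database vectors are decoded from codes."""
        sims = q @ C[codes].T                 # inner products with the reconstructed codewords
        return np.argsort(-sims, axis=1)[:, :k]

    db = rng.standard_normal((1000, 8)); db /= np.linalg.norm(db, axis=1, keepdims=True)
    queries = rng.standard_normal((5, 8)); queries /= np.linalg.norm(queries, axis=1, keepdims=True)
    codes = encode(db)
    print(asymmetric_search(queries, codes))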

5 Experiments

This section presents our experimental results. We focus on the class of similarity search methods that represent the database vectors with a compressed representation charikar02lsh (); jegou11pq (); gong2013iterative (); GHKS13 (), which makes it possible to store very large datasets in memory LCL04 (); torralba2008small ().

5.1 Experimental setup

Datasets and metrics.

We carry out our experiments on two public datasets, namely Deep1M and BigAnn1M. Deep1M consists of the first million vectors of the Deep1B dataset babenko2016efficient (). The vectors were obtained by running a convnet on an image collection, reduced to 96 dimensions by principal component analysis and subsequently ℓ2-normalized. We also experiment with BigAnn1M jegou2011searching (), which consists of SIFT descriptors Lowe04sift (). This dataset is used in many prior works.

Both datasets contain 1M vectors that serve as a reference set, 10k query vectors, and a very large training set, of which we use 300k elements for training and 1M additional vectors as a validation base to cross-validate the hyper-parameters.

We also perform one experiment on the full Deep1B and BigAnn datasets, which contain 1 billion elements each. We evaluate methods with the recall at 10 performance measure, i.e. the proportion of queries for which the ground-truth nearest neighbor is present among the top 10 returned candidates.
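
The recall at 10 measure can be computed as follows; this is a generic numpy sketch, not code from the paper.

    import numpy as np

    def recall_at_k(retrieved, ground_truth, k=10):
        """Fraction of queries whose ground-truth nearest neighbor appears in the top-k results."""
        hits = [gt in retrieved[i, :k] for i, gt in enumerate(ground_truth)]
        return np.mean(hits)

    # toy usage: retrieved is (n_queries, >= k) ranked ids, ground_truth is (n_queries,) ids
    retrieved = np.array([[3, 1, 4, 1, 5, 9, 2, 6, 5, 3],
                          [2, 7, 1, 8, 2, 8, 1, 8, 2, 8]])
    print(recall_at_k(retrieved, np.array([9, 0])))   # 0.5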

Training.

For all methods, we train our neural network and cross-validate the hyper-parameters on the provided training set, and use a different set of vectors for evaluation. In contrast, some works carry out training on the database vectors themselves ML14 (); malkov2016efficient (); gong2013iterative (), in which case the index is tailored to a particular fixed set of database vectors.

5.2 Model architecture and optimization

We adopt a multi-layer perceptron to map between our input and output spaces. Our model consists of 2 hidden layers, each comprising the same number of units followed by a ReLU non-linearity. A final linear layer projects the data to the desired output dimension d_out, followed by ℓ2-normalization. We use batch normalization ioffe15batchnorm () and train our model with stochastic gradient descent with momentum. The learning rate is decayed by a constant factor if the training loss does not improve for one epoch. We use 100 epochs of optimization for each experiment. On a CPU-only server, training with 300k samples takes about three hours.
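
The optimization setup can be sketched as follows in PyTorch; the learning rate, momentum, decay factor and hidden width are placeholder values, and the training step is a dummy stand-in for one pass over the triplets with the loss of Eqn. 4.

    import torch

    model = torch.nn.Sequential(   # stand-in for the catalyzer MLP sketched in Section 3
        torch.nn.Linear(96, 1024), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(),
        torch.nn.Linear(1024, 1024), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(),
        torch.nn.Linear(1024, 8),
    )
    # learning rate, momentum and decay factor are assumed values, not taken from the paper
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=1)

    def train_one_epoch(model, optimizer):
        # placeholder for one pass over the training triplets with the loss of Eqn. 4
        x = torch.randn(256, 96)
        loss = model(x).pow(2).mean()   # dummy loss, stands in for Eqn. 4
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()

    for epoch in range(100):
        epoch_loss = train_one_epoch(model, optimizer)
        scheduler.step(epoch_loss)      # decay the learning rate when the loss stops improving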

5.3 Binary hashing: Catalyzing existing methods

                      Deep1M                      BigAnn1M
bits per vector     16    32    64    128      16    32    64    128
baseline LSH       0.8   4.9  14.6   32.2     0.3   2.6   9.5   24.4
catalyst + LSH     0.9   5.6  16.4   35.3     1.0   5.5  18.2   39.4
baseline ITQ       1.0   7.3  21.0    n/a     1.5   8.5  22.8   41.8
catalyst + ITQ     2.0  10.4  25.2   46.5     2.3  11.0  28.3   50.6
Table 1: Performance (1-recall at 10, %) of LSH and ITQ, with and without our catalyzer, on Deep1M and BigAnn1M, as a function of the number of bits per index vector. All results are averaged over 5 runs with different random seeds. Our catalyzer dramatically boosts the performance of both LSH and ITQ.

We first show the interest of our method as a catalyzer for two popular binary hashing methods charikar02lsh (); gong2013iterative ():

LSH

maps Euclidean vectors to binary codes that are then compared with the Hamming distance. A set of fixed projection directions is drawn randomly and isotropically in the input space, and each vector is encoded into b bits by taking the sign of its dot product with each direction. The theoretical framework of LSH guarantees that, with some probability, the Hamming distance reproduces the cosine similarity (up to the monotonic arccos function). However, it is not adapted to the data since the directions are random. Noticeably, it has trouble differentiating vectors that lie in denser areas of the vector distribution and wastes some capacity. Balu et al. balu2014beyond () even showed that there are binary codes that cannot be reached.
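
A minimal numpy sketch of this binarization scheme (random isotropic directions, sign bits, Hamming comparison); the dimensions and number of bits below are illustrative.

    import numpy as np

    def lsh_encode(x, directions):
        """Binarize vectors by the sign of their dot product with fixed random directions."""
        return (x @ directions.T > 0).astype(np.uint8)   # one bit per direction

    def hamming(a, b):
        """Hamming distances between two sets of binary codes."""
        return (a[:, None, :] != b[None, :, :]).sum(axis=-1)

    rng = np.random.default_rng(0)
    d, n_bits = 96, 64
    directions = rng.standard_normal((n_bits, d))        # isotropic random projections
    codes_db = lsh_encode(rng.standard_normal((1000, d)), directions)
    codes_q = lsh_encode(rng.standard_normal((5, d)), directions)
    print(hamming(codes_q, codes_db).argmin(axis=1))     # nearest database code per query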

ITQ

is another popular hashing method that improves over LSH by using a rotation rather than random projections, and optimizes this rotation to minimize the quantization error.

Table 1 shows how our transformation improves the performance of LSH and ITQ when applied as a pre-processing step before hashing. The catalyst improves the performance by 2-9 percentage points in all settings from 32 to 128 bits. In particular, ITQ was initially developed and evaluated on range search, and performs poorly for the nearest neighbor search that is evaluated here. However, its performance is significantly boosted by our uniformizing mapping.

5.4 Similarity search with lattice vector quantizers

We now evaluate the lattice-based indexing proposed in Section 4, and compare it to more conventional methods based on quantization, namely PQ jegou11pq () and Optimized Product Quantization (OPQ) GHKS13 (). Figure 4 provides a comparison of all these methods. OPQ is used in the usual setting where each sub-vector is assigned one byte, meaning that each individual quantizer has 256 centroids. We use the Faiss johnson2017billion () implementation of OPQ, which does not constrain the quantization space to match that of the input space. For our product lattice, we vary the radius r to increase the quantizer size, hence generating one curve for each value of d_out.
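
For reference, an OPQ baseline of this kind can be set up with the Faiss index factory; the configuration string below ("OPQ8_64,PQ8", i.e. an OPQ transform to 64 dimensions followed by an 8-byte PQ) and the random data are our assumptions of a comparable setup, not the exact experimental setting.

    import numpy as np
    import faiss

    d = 96
    rng = np.random.default_rng(0)
    xt = rng.standard_normal((20000, d)).astype('float32')   # training vectors
    xb = rng.standard_normal((10000, d)).astype('float32')   # database vectors
    xq = rng.standard_normal((100, d)).astype('float32')     # query vectors

    # OPQ transform to 64 dimensions + 8-byte PQ, i.e. 64 bits per vector (assumed configuration)
    index = faiss.index_factory(d, "OPQ8_64,PQ8")
    index.train(xt)
    index.add(xb)
    distances, ids = index.search(xq, 10)                    # top-10 candidates per query
    print(ids[:2])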

On Deep1M, the lattice quantizer easily outperforms PQ for most code sizes. The lattice quantizer also obtains better performance than OPQ.

Figure 4: Comparison of the performance of the product lattice vs. OPQ on Deep1M (left) and BigAnn1M (right). Our method maps the input vectors to a d_out-dimensional space, which is then quantized with lattices of radius r. We obtain each curve by varying the radius r.

Large-scale experiments.

We experiment with the full Deep1B (resp. BigAnn) dataset, which contains 1 billion vectors. We use 64 bits per code, obtained with an 8×8 OPQ or a 4-head product lattice quantizer with a fixed radius. At that scale, the recall at 10 drops to 26.1% for OPQ and to 30.3% for the lattice quantizer (resp. 21.0% and 21.9%). This means that the precision advantage of the lattice quantizer is maintained at large scale. The encoding of 1 billion vectors takes 131 minutes on a 16-core machine, compared with 33 minutes for OPQ.

Complexity analysis.

The product lattice indexing method proceeds in much the same way as OPQ GHKS13 (). It starts with a transformation stage, which is linear for OPQ and an MLP in our case. Then we apply a quantization, with k-means for OPQ and a lattice for our approach. The complexity of OPQ is therefore a natural point of comparison.

In terms of quantization complexity, OPQ performs 34 kFLOPs per vector, while the forward pass of our method costs more with the default hidden layer width (the quantization cost itself is negligible). If efficiency is important, we can reduce the width of the hidden layers, in which case both the performance and the efficiency (30 kFLOPs) are comparable to OPQ.

6 Concluding remarks

We have proposed to learn a neural network that maps input vectors to an output distribution that makes high-dimensional indexing more effective. We have demonstrated the interest of our proposal by first considering two popular binary hashing methods, namely LSH and ITQ, which are both significantly improved by our catalyzer.

We also showed a somewhat counter-intuitive result: for similarity search with compact codes, it is as good or even better to pre-process the data so that it can be quantized with a fixed, rigid lattice quantizer than to quantize it directly with an optimized product quantizer. This approach has several benefits: once the data is mapped, we can use simpler indexing structures with fewer parameters. Since this result is, to the best of our knowledge, the first of its kind, we hope that subsequent works will propose new architectures for such catalyzers.

Acknowledgement

The authors thank Armand Joulin and Piotr Bojanowski for useful comments and discussions.

Decoding on the intersection of E8 and hyper-spheres

The E8 lattice is built from the D8 lattice, which includes the integer points whose coordinates sum to an even number. The vertices of E8 consist of two subsets of points, corresponding to the D8 lattice and to a translated copy of D8 that best fills the holes. More precisely, the translated copy consists of the half-integer points D8 + (1/2, ..., 1/2). We are interested in the intersection of E8 with a hyper-sphere of squared radius r², where r² is an even integer (no lattice point has an odd squared norm).

We consider “atoms”: 8-tuples of nonnegative integers or half-integers whose coordinates sum to an even number and whose squares sum to r². For the small values of r² that we use in practice, atoms can be enumerated exhaustively. For example, all the atoms for r² = 10 are permutations of:

(6)

The number of atoms as a function of r² is shown in Figure 5.

All vectors of S_r ∩ E8 are a permutation of an atom, with signs added to its components. There are 8! possible permutations, but permuting equal components leaves the vector unchanged, which divides the number of distinct combinations. The signs can be set as follows:

  • for integer atoms, the non-zero components can have any sign. This generates 2^(8-z) variants, where z is the number of 0s in the atom;

  • for half-integer atoms, there is no 0, but the number of negative (and positive) values has to be even. Therefore, the number of possible signed variants is 2^7 = 128.

To find the point of S_r ∩ E8 nearest to a given input y, we proceed as follows (a minimal code sketch is given after the list):

  • we normalize y by taking its absolute value and sorting its components, producing a sorted vector of magnitudes;

  • we loop over the atoms to find the atom that maximizes the dot product with this sorted vector;

  • we revert the sorting permutation on the selected atom;

  • we assign the signs of y's components to the corresponding atom components.
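
The four steps above translate into a short routine; the following sketch assumes the atoms are given as nonnegative vectors sorted in decreasing order, uses two illustrative (hypothetical) atoms, and omits the sign-parity fix-up required for half-integer atoms.

    import numpy as np

    def decode(y, atoms):
        """Return the signed, permuted atom closest to y (sketch; sign parity of half-integer
        atoms is not enforced here)."""
        signs = np.where(y < 0, -1.0, 1.0)
        a = np.abs(y)
        order = np.argsort(-a)                   # permutation sorting |y| in decreasing order
        a_sorted = a[order]
        # pick the atom maximizing the dot product with the sorted magnitudes
        best = max(atoms, key=lambda atom: float(np.dot(atom, a_sorted)))
        out = np.empty_like(y)
        out[order] = best                        # revert the sorting permutation
        return signs * out                       # transfer the signs of y

    # toy usage with hypothetical atoms (already sorted in decreasing order)
    atoms = [np.array([3., 1., 0., 0., 0., 0., 0., 0.]),
             np.array([2., 2., 1., 1., 0., 0., 0., 0.])]
    print(decode(np.random.default_rng(0).standard_normal(8), atoms))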

Figure 5: Number of atoms as a function of the squared radius r² (linear scale), and the corresponding number of points on the hyper-sphere (log scale).

Footnotes

  1. University Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France.
  2. We could consider lattices directly defined in a higher-dimensional space, see the lattice bestiary by Conway and Sloane conway2013sphere (). Yet the choice of a product of E8 lattices is competitive in all aspects, notably quantization performance and efficiency, while other powerful lattices such as the Leech lattice are computationally more expensive.

References

  1. Tim Kraska, Alex Beutel, Ed H. Chi, Jeff Dean, and Neoklis Polyzotis. The case for learned index structures. arXiv preprint arXiv:1712.01208, 2017.
  2. Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product Quantization for Nearest Neighbor Search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.
  3. Yunchao Gong, Svetlana Lazebnik, Albert Gordo, and Florent Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12), 2013.
  4. Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
  5. Matthijs Douze, Hervé Jégou, and Florent Perronnin. Polysemous codes. In European Conference on Computer Vision. Springer, 2016.
  6. Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, Jie Zhou, et al. Deep hashing for compact binary codes learning. In Conference on Computer Vision and Pattern Recognition, volume 1, 2015.
  7. Himalaya Jain, Joaquin Zepeda, Patrick Perez, and Remi Gribonval. SUBIC: A supervised, structured binary code for image search. In International Conference on Computer Vision, 2017.
  8. Benjamin Klein and Lior Wolf. In defense of product quantization. arXiv preprint arXiv:1711.08589, 2017.
  9. Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In ACM symposium on Theory of computing, 1998.
  10. Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Symposium on the Foundations of Computer Science, 2006.
  11. Nicolas Usunier, David Buffoni, and Patrick Gallinari. Ranking with ordered weighted pairwise classification. In International Conference on Machine Learning, 2009.
  12. Gal Chechik, Varun Sharma, Uri Shalit, and Samy Bengio. Large scale online learning of image similarity through ranking. Journal of Machine Learning Research, 11(Mar), 2010.
  13. Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Conference on Computer Vision and Pattern Recognition, 2014.
  14. Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
  15. Carl Doersch. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016.
  16. Moses Charikar. Similarity estimation techniques from rounding algorithms. In ACM symposium on Theory of computing, 2002.
  17. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2014.
  18. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  19. Laurens Van Der Maaten, Eric Postma, and Jaap Van den Herik. Dimensionality reduction: a comparative review. Journal of Machine Learning Research, 10, 2009.
  20. Geoffrey Hinton and Sam Roweis. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems, 2003.
  21. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.
  22. Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data: a survey. Proceedings of the IEEE, 104(1), 2016.
  23. Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In International Conference on Very Large Data Bases, pages 518–529, 1999.
  24. Artem Babenko and Victor Lempitsky. Additive quantization for extreme vector compression. In Conference on Computer Vision and Pattern Recognition, 2014.
  25. Ting Zhang, Guo-Jun Qi, Jinhui Tang, and Jingdong Wang. Sparse composite quantization. In Conference on Computer Vision and Pattern Recognition, June 2015.
  26. Himalaya Jain, Patrick Pérez, Rémi Gribonval, Joaquin Zepeda, and Hervé Jégou. Approximate search with quantized sparse representations. In European Conference on Computer Vision, October 2016.
  27. Alexandre Sablayrolles, Matthijs Douze, Nicolas Usunier, and Hervé Jégou. How should we evaluate supervised hashing? In International Conference on Acoustics, Speech, and Signal Processing, 2017.
  28. John Horton Conway and Neil James Alexander Sloane. Sphere packings, lattices and groups, volume 290. Springer Science & Business Media, 2013.
  29. Moshe Ran and Jakov Snyders. Efficient decoding of the Gosset, Coxeter-Todd and the Barnes-Wall lattices. In International Symposium on Information Theory, page 92, 1998.
  30. Hervé Jégou, Laurent Amsaleg, Cordelia Schmid, and Patrick Gros. Query adaptative locality sensitive hashing. In International Conference on Acoustics, Speech, and Signal Processing, 2008.
  31. Loïc Paulevé, Hervé Jégou, and Laurent Amsaleg. Locality sensitive hashing: A comparison of hash function types and querying mechanisms. Pattern recognition letters, 31(11), 2010.
  32. Piotr Bojanowski and Armand Joulin. Unsupervised learning by predicting noise. In International Conference on Machine Learning, 2017.
  33. F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 1958.
  34. Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6, 1997.
  35. Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In ACM Symposium on Computational Geometry, 2004.
  36. Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In Conference on Computer Vision and Pattern Recognition, 2013.
  37. Qin Lv, Moses Charikar, and Kai Li. Image similarity search with compact data structures. In International Conference on Information and Knowledge, pages 208–217, November 2004.
  38. Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recognition. In Conference on Computer Vision and Pattern Recognition, 2008.
  39. Artem Babenko and Victor Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In Conference on Computer Vision and Pattern Recognition, 2016.
  40. Hervé Jégou, Romain Tavenard, Matthijs Douze, and Laurent Amsaleg. Searching in one billion vectors: re-rank with source coding. In International Conference on Acoustics, Speech, and Signal Processing, 2011.
  41. David G. Lowe. Distinctive image features from scale-invariant keypoints. International journal of Computer Vision, 60(2), 2004.
  42. Marius Muja and David G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 2014.
  43. Yury A. Malkov and Dmitry A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. arXiv preprint arXiv:1603.09320, 2016.
  44. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
  45. Raghavendran Balu, Teddy Furon, and Hervé Jégou. Beyond “project and sign” for cosine estimation with binary codes. In International Conference on Acoustics, Speech, and Signal Processing, 2014.
  46. Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734, 2017.