Global Hashing System for Fast Image Search
Hashing methods have been widely investigated for fast approximate nearest neighbor search in large datasets. Most existing methods use binary vectors in lower-dimensional spaces to represent data points, which are usually real vectors of higher dimensionality. However, according to Shannon’s Source Coding Theorem (SSCT) in information theory, it is logical to represent low-dimensional real vectors with high-dimensional binary vectors, since a binary bit contains less information than a real number. We design a novel hashing method based on this principle. Data points are first embedded in a low-dimensional space, and then the Global Positioning System (GPS) method is introduced, modified for hashing. We devise data-independent and data-dependent methods to distribute the “satellites” at appropriate locations. Benefiting from the rationale of SSCT and the rules for distributing satellites in a GPS, our data-dependent method outperforms other methods on datasets of sizes ranging from 100K to 10M. By incorporating the orthogonality of the code matrix, both our data-independent and data-dependent methods are particularly impressive in experiments on longer bits.
Hashing methods are efficient for approximate nearest neighbor (ANN) searching, which is important in computer vision  and machine learning . Hashing methods map original input data points to binary hash codes while preserving their mutual distances; that is, the binary strings of similar data points in the original feature space should have low Hamming distances. Hashing with short codes can substantially reduce storage requirements and boost the ANN searching speed.
Popular hashing methods can be categorized into two groups according to their dependence on data. The best-known data-independent hashing methods are Locality-Sensitive Hashing (LSH)  and its variants, e.g., those adopting cosine similarity  and kernel similarity . The main drawback of these methods is that they demand more bits per hash table, due to randomized hashing .
Data-dependent methods have become popular in the machine learning community. Spectral Hashing (SH) , one of the most popular data-dependent methods, generates hashing codes by solving a relaxed optimization problem, circumventing both the computation of pairwise distances over the whole dataset, i.e., the affinity matrix, and the constraints that lead to an NP-hard problem. Anchor Graph Hashing (AGH)  optimizes the objective function of SH by using anchor points to construct a highly sparse affinity matrix. Discrete Graph Hashing (DGH)  follows this idea and incorporates the orthogonality of the hashing code matrix. There are also methods based on linear projections obtained by Principal Component Analysis (PCA)  or Linear Discriminant Analysis , as well as methods that hash in kernel space, such as binary reconstructive embeddings (BRE) , random maximum margin hashing (RMMH)  and kernel-based supervised hashing (KSH) . Unlike ITQ, which rotates the projection matrix obtained by PCA to minimize the loss function, Neighborhood Discriminant Hashing (NDH)  incorporates the computation of the projection matrix into the minimization procedure. In general, linear dimensionality reduction techniques such as PCA are inferior to nonlinear manifold learning methods, which can more effectively preserve the local structure of the input data without assuming global linearity . However, nonlinear manifold techniques may be intractable for large datasets because of their high computation costs. To address this problem, Inductive Manifold Hashing (IMH)  learns the nonlinear manifold on a small subset and inductively inserts the remainder of the data. In addition, hashing methods that focus on image representations have been developed recently. For example, Zhang et al.  unify feature extraction and hashing function learning. Zhang et al.  and Liu et al.  develop their methods on multiple representations.
However, the main theoretical deficit of the data-dependent methods is that they fail to conform to Shannon’s Source Coding Theorem (SSCT) . In practice, an image in a dataset is usually represented by a descriptor, e.g., a SIFT  or GIST  descriptor with more than 128 dimensions of 8-bit characters or 32-bit single-precision real numbers in a computer. In information theory , entropy is the average amount of information contained in a message, which, in this context, refers to a descriptor vector or binary code vector. According to SSCT, the code length should be no less than the Shannon entropy of the original data points. Without ambiguity, in this paper entropy refers to Shannon entropy, defined as H(X) = −∑_x p(x) log₂ p(x), where X is a random variable and p(x) is the probability of X = x. For instance, by assuming a uniform distribution, the entropy of a 64-dimensional 8-bit character vector is 64 × 8 = 512 bits, which means 512-bit binary strings are needed.
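As a quick arithmetic check of the 512-bit figure, here is a small hypothetical helper (not from the paper) that computes Shannon entropy in bits:

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniformly distributed 8-bit character has 256 equally likely values,
# so its entropy is 8 bits; 64 independent such symbols need 512 bits.
per_symbol = entropy_bits([1 / 256] * 256)
vector_entropy = 64 * per_symbol
print(per_symbol, vector_entropy)  # 8.0 512.0
```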
Exploiting this principle, we first reduce the dimensionality of the original data points, i.e., the descriptor vectors, by PCA. Then, the projections onto the first principal components are encoded by a binary code of higher dimensionality. Hence, we need an over-determined system that can uniquely position every data point. This is similar to the Global Positioning System (GPS) , which uses dozens of satellites to position a receiver on the Earth’s surface. Since our method is directly inspired by GPS, we name it the Global Hashing System (GHS). We tackle the major issue of how to distribute the satellites and propose two methods: one data-dependent and one data-independent. Unlike most existing methods , which handle a degraded version of the orthogonality of the code matrix in the continuous domain, both our methods approximate the orthogonal code matrix directly in the binary domain, which leads to better performance in long-bit experiments. Note that although SH can be regarded as assigning more bits to PCA directions along which the data have greater ranges, it is somewhat heuristic .
After the satellites are well distributed, the distances from the data points to each satellite (to simplify the following discussion, this distance is denoted as D2S hereafter) are sorted separately. The nearer half is denoted as -1 while the other half is denoted as 1. Hence, our method can generate a balanced code matrix easily. Although a balanced code matrix is considered to be one of the two conditions for good codes , it is rarely enforced because it usually results in an NP-hard problem.
Let us define the notation used. A set of n data points in a d-dimensional space is represented by {x₁, …, xₙ}, which form the rows of the data matrix X. The projection matrix W is obtained from the first c eigenvectors of the data covariance matrix XᵀX. Y = XW, and yᵢ is the i-th row vector of Y. A binary code corresponding to yᵢ is defined by bᵢ ∈ {−1, 1}ʳ, where r is the length of the code, and the rows of the code matrix B are the bᵢ.
II-A Global Positioning/Coding System
A satellite in a GPS can measure the distance between itself and a signal receiver on the Earth’s surface. This results in a circle on which every point has the same distance to this satellite as the receiver. Hence, at least three satellites are needed to determine the true position, which is the unique intersection of three such circles. More generally, a d-dimensional point can be determined by its Euclidean distances to d + 1 other points in this space .
In our GHS, each satellite has only 1 bit to record the Euclidean distances. That is, receivers far from a satellite are denoted as 1 while nearby ones are denoted as -1. Hence, our hashing function can be defined as:
where the norm computes the Euclidean length of each row, and the threshold function can be any proper function that returns a positive real number; here the median of the D2S is adopted to generate a balanced code matrix. The coordinate of the j-th satellite forms the j-th row of the satellite matrix S.
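The hashing function above might be sketched as follows (a hypothetical minimal version, not the paper's code: `Y` stands for the projected data, `S` for the satellite matrix, and the median D2S per satellite serves as the balancing threshold):

```python
import numpy as np

def ghs_hash(Y, S):
    """Hash each row of Y (n x c projected data) against satellites S (r x c).

    Bit j is +1 if the point lies in the farther half of points from
    satellite j, and -1 in the nearer half, so every bit is balanced.
    """
    # D2S: Euclidean distance from every data point to every satellite.
    d2s = np.linalg.norm(Y[:, None, :] - S[None, :, :], axis=2)  # (n, r)
    thresholds = np.median(d2s, axis=0)                          # one per satellite
    return np.where(d2s > thresholds, 1, -1), thresholds

rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 4))
S = rng.standard_normal((8, 4))
B, t = ghs_hash(Y, S)
# With an even number of points, each column sums to 0: a balanced code.
print(B.sum(axis=0))
```

The returned thresholds are exactly what out-of-sample hashing needs: a new query is projected, its D2S computed, and each distance compared against the stored threshold.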
II-B Data-dependent method (GHS-DD)
Formally, our hashing model can be described as:
Randomly setting the satellites does not produce satisfactory results. Furthermore, Eq. (2) requires the pairwise distance between each pair of data points, which leads to a heavy burden in storage and computation. Inspired by ITQ, we circumvent this by minimizing the quantization loss.
First, let us consider the following quantization loss:
Because the D2S is always non-negative, we scale and shift B accordingly. The underlying rationale of Eq. (3) is similar to that of ITQ. To uniquely position a data point in the low-dimensional space, the number of satellites must exceed the dimension of the space by at least one, and the locations of these satellites should satisfy the following condition :
where and . Eq. (4) is called the existence and uniqueness condition for the GPS solution . It can be satisfied by initializing an orthogonal satellite matrix. Hence, we create groups of satellites. Within each group there are satellites, of which are orthogonal to each other. We define , a parameter discussed in Section II-D. Note that no more than mutually orthogonal vectors can exist in a space of that dimension. Each group is rotated by an orthogonal matrix to find the best locations, which gives the following model:
where is an indicator function: , if , and , if . and are used to transform the values of the D2S into a proper interval. Eq. (5) is minimized by alternating optimization.
Initialization. In each group, is initialized by the left singular vectors of a random matrix, and so is . Another random vector is added to each group.
Update . The th column of is calculated by Eq. (1).
Update . Taking the partial derivative with respect to results in
Update . Similar to ,
Note that is applied when deriving Eq. (7).
Update . We divide this step into two sub-problems. First, is substituted by to form the following minimization problem:
which is equivalent to
where . If we treat as a receiver, as satellites and as the D2S, the solution of Eq. (9) is the standard GPS solution .
We construct the following two matrices for each : and , where represents the th column of and returns a row vector containing the diagonal elements of . Let . Then solve the following quadratic equation in :
Eq. (10) usually has two solutions and , so two possible can be found by , where and , which is useless in our model, is related to the D2S. To automatically choose a suitable from the two solutions, we initialize with , where is a positive real constant. The whose norm is closer to is chosen for the following steps. This constant is also used in our data-independent satellite distribution algorithm and is discussed in Section II-D along with the other parameter.
After the s are calculated, is found by minimizing the following problem:
Eq. (11) can be solved by singular value decomposition (SVD). Given and , which contain the and of group , respectively, through SVD we obtain and
Convergence. The algorithm terminates when or the maximum number of iterations is reached, where is a small positive real constant.
Output. and thresholds, i.e., in Eq. (1).
Out-of-Sample Hashing. A new query is projected by and then its distance to each satellite is thresholded by in Eq. (1).
II-C Data-independent method (GHS-DI)
Another condition for good codes is uncorrelation , i.e., . A direct way to satisfy this condition is to distribute the satellites such that only one is close to each receiver; that is, there is no intersection among the spheres, where each radius is the minimum radius that includes the data points near the corresponding satellite. However, in this situation, each receiver has only one bit set to -1, so the Hamming distance between any pair of receivers is 0 or 2, which means the distance between two data points in the input space is not well preserved. Moreover, if we strictly satisfy the balance condition as well as the uncorrelation condition in this way, at most 2 satellites can be used.
An alternative way is to minimize the intersection of the spheres of any two satellites. That is, we set a tolerance on the values of the non-diagonal elements of : they are allowed to be non-zero numbers with small absolute values.
The intersection of two high-dimensional spheres is too difficult to compute, so instead the pairwise distance between each pair of satellites is maximized. Without constraints, the resulting satellite locations may be unbounded. A reasonable constraint is to distribute all satellites on the surface of a sphere. As there is no prior knowledge about the data, we assume the data points are uniformly distributed in a sphere. In this way, the D2S values of the satellites will be comparable.
Under the abovementioned assumption, minimizing intersections can be achieved by maximizing the pairwise distance between each pair of satellites:
Eq. (12) can be maximized by the Gradient Projection Algorithm (GPA) . The GPA iteratively updates the satellite matrix by moving it along the gradient direction of the objective and then projecting it onto the boundary defined by the constraint (Algorithm 1). The gradient of the objective with respect to the satellite matrix is
The projection step can be implemented directly by normalizing each satellite. As the orthogonality of the code matrix is considered, our GHS-DI method usually produces the second-best results in experiments with longer hash bits. In fact, the way GHS-DD satisfies Eq. (4) intrinsically incorporates orthogonality. When the radius is large, the hyper-sphere surface that separates the near and far data points can be treated as a hyper-plane. In this situation, with orthogonal satellites and the assumption of uniformly distributed data points, this property is easy to understand in the 2D and 3D cases. More generally, we have the following theorem.
If (1) data points are uniformly distributed in a sphere, (2) and (3) , then , where and are column vectors whose elements are the binary hash codes generated by Eq. (1).
Since the data points are uniformly distributed in a sphere, without loss of generality, let us set and . In Eq. (1), if , the th element of will be set to , otherwise it will be set to . For any two points and that satisfy , we have when . That is, , which implies , where is the angle between the two unit vectors along and , respectively. Hence, and lie on a plane whose distance to is .
To generate a balanced , the plane should cross the origin and be perpendicular to . Since , is also perpendicular to , which corresponds to . It is evident that and separate the sphere into four parts of equal volume:
Since there are equal numbers of data points in these four parts, it is easy to verify that . ∎
In Theorem 1, conditions (1) and (2) are impractical, and therefore only the remaining sufficient condition can be satisfied, by setting ; however, this contravenes the perspective of SSCT and the existence and uniqueness condition for the GPS solution. In Section II-D, we will show that this setting usually cannot generate the best results. Although our methods cannot exactly fulfill these three conditions, the benefit of considering orthogonality is demonstrated by the high F-measure in experiments on longer bits (Section IV).
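For illustration only, Eq. (12) under the unit-sphere constraint could be maximized by a projected-gradient loop like the following sketch (all names hypothetical; the paper's GPA details may differ):

```python
import numpy as np

def distribute_satellites(r, c, iters=200, step=0.01, seed=0):
    """Spread r satellites on the unit sphere in R^c by maximizing the
    sum of pairwise squared distances (projected-gradient sketch)."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((r, c))
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    for _ in range(iters):
        # Gradient of sum_{i<j} ||s_i - s_j||^2 w.r.t. s_i is
        # 2 * (r * s_i - sum_j s_j): a repulsion away from the centroid.
        grad = 2 * (r * S - S.sum(axis=0, keepdims=True))
        S += step * grad
        # Projection step: renormalize each satellite onto the sphere.
        S /= np.linalg.norm(S, axis=1, keepdims=True)
    return S

S = distribute_satellites(r=6, c=3)
print(np.round(S @ S.T, 2))
```

Maximizing the summed pairwise squared distances on the sphere drives the centroid of the satellites toward zero, which is what makes the D2S values, and hence the resulting bits, roughly uncorrelated under the uniform-data assumption.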
II-D Parameters
There are two key parameters in our methods: and . should not be too small. Consider an extreme example where ; then all bits of the points close to the origin will equal 0 and the bits of the other points will equal 1. Obviously, such codes are inefficient.
should be moderate. If it is too large, the binary codes will gradually lose their ability to encode the values of the projections, which are real numbers. On the other hand, when it becomes small, fewer projections can be used, so the data points reconstructed from these projections cannot approximate the original ones accurately enough.
The mean average precision (MAP) on the CIFAR-10 dataset  with varying and is shown in Fig. 1. CIFAR-10 comprises 60K images from the 80 Million Tiny Images dataset , and we use a 1024-dimensional GIST descriptor to represent each image. The PCA projections are normalized by the largest Euclidean norm of all projected data. When testing different values , at most one group containing fewer than the full number of satellites may exist. Based on the results in Fig. 1, we empirically set to 2 for all experiments, and set to 1 for experiments with and to 0.5 for the others.
We also tested our two methods by setting (Table I). The percentages shown in Table I denote the improvement obtained by this setting. From Table I, we observe that for , both methods perform better with , suggesting that the existence and uniqueness condition for the GPS solution is important. For the experiment on , the situation is the opposite, because the number of PCA projections is too small and this effect dominates the results. However, the differences in these cases are slight (less than 1%), so we did not use this parameter setting in the experiments of Section IV.
III Relations to Existing Methods
During the past several years, many state-of-the-art data-dependent hashing methods have been proposed, deriving from various motivations. In this section, only those related to our proposed methods are briefly reviewed.
III-A Iterative Quantization (ITQ)
Gong et al.  formulated ITQ as a minimization problem:
Eq. (18) is minimized by iteratively updating and . is required to be orthogonal, which can be considered a rotation of . IsoH  is directly derived from ITQ by finding a projection with equal variances for different dimensions. HH  also rotates ; however, unlike ITQ, it uses an auxiliary variable for the code matrix during the iterative optimization and puts an orthogonal constraint on it. The auxiliary variable is then thresholded to generate the code matrix. ok-means  rotates and scales the data to minimize the quantization loss; our method rotates and scales the D2S. ITQ, IsoH and HH use a number of principal components exactly equal to the bit length of the hash codes. That is, they cannot produce hash codes longer than the data dimension. In principle, our methods can produce hash codes of arbitrary length.
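For context, the ITQ alternation described above can be sketched roughly as follows (a hypothetical minimal version; the rotation update is the orthogonal Procrustes solution via SVD):

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative Quantization sketch: find a rotation R minimizing
    ||B - V R||_F with B = sign(V R), V being (n x c) PCA-projected data."""
    c = V.shape[1]
    rng = np.random.default_rng(seed)
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R, update the codes
        U, _, Wt = np.linalg.svd(B.T @ V)  # fix B, re-fit the rotation
        R = (U @ Wt).T                     # orthogonal Procrustes solution
    return np.sign(V @ R), R

rng = np.random.default_rng(1)
V = rng.standard_normal((200, 8))
B, R = itq(V)
print(np.allclose(R.T @ R, np.eye(8)))  # R stays orthogonal throughout
```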
III-B Inductive Hashing on Manifolds (IMH)
IMH  first generates the base matrix by k-means clustering; each column corresponds to a cluster center. Then it embeds the centers into a low-dimensional space by manifold learning methods . The embedding method affects the performance of IMH. Throughout this paper, t-SNE  is used because it achieved the best results in the authors’ experiments . Finally, the embedding for the training data is calculated by
where the elements of are defined as
where is the th column of . Eq. (17) is quite similar to the membership function in fuzzy c-means clustering . The embedding for the training data is a linear combination of the embeddings of the centers. In our method, each satellite encodes 1 bit according to the distances from itself to the data points, and we do not encode the satellites.
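That inductive out-of-sample embedding might look like the following sketch (hypothetical: Gaussian similarities to the cluster centers, normalized as weights):

```python
import numpy as np

def imh_embed(x, centers, center_embeddings, sigma=1.0):
    """Embed a new point as a similarity-weighted combination of the
    low-dimensional embeddings of the k-means centers (IMH-style sketch)."""
    d = np.linalg.norm(centers - x, axis=1)
    w = np.exp(-d**2 / sigma**2)   # Gaussian similarity to each center
    w /= w.sum()                   # normalize into membership weights
    return w @ center_embeddings

centers = np.array([[0.0, 0.0], [10.0, 0.0]])
emb = np.array([[-1.0], [1.0]])   # toy 1-D embeddings of the two centers
# A point lying at a center recovers (approximately) that center's embedding.
print(imh_embed(np.array([0.0, 0.0]), centers, emb))
```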
III-C Spectral Hashing (SH)
Weiss et al.  formulated SH as:
Eq. (2) is similar to Eq. (18). The graph affinity matrix with is intractable for large datasets. SH evaluates the smallest eigenvalues for each PCA direction to create a list of eigenvalues, sorts this list to find the overall smallest eigenvalues, and then thresholds the corresponding eigenfunctions. The eigenvalue-list creation step is consistent with the perspective of SSCT; however, it is somewhat heuristic . AGH and DGH compute D2S to form a highly sparse affinity matrix and minimize a modified objective function of SH. GHS-DD avoids the computation and storage of the pairwise distances of all data points by minimizing the quantization loss. Furthermore, our method generates a balanced code matrix, which they cannot.
III-D Spherical Hashing (SpH)
The final step of SpH  is the same as ours, so SpH also generates a balanced code matrix. However, SpH searches for the locations of its special points in the entire space, which makes it difficult to find a good solution. The authors claimed that the distances between these points should be neither too large nor too small, and hence devised an empirical point-finding procedure that has little theoretical support. With more concrete theoretical analysis, our proposed method outperforms SpH in our experiments.
Our experiments were conducted on three datasets of different scales: SUN397 , GIST1M  and SIFT10M. SUN397 contains about 108K images, and we represent each image by a 512-dimensional GIST descriptor . GIST1M consists of 1 million 960-dimensional GIST descriptors. SIFT10M is a 10 million subset of the SIFT1B dataset , which comprises 1 billion 128-dimensional SIFT descriptors ; the 10 million data points are randomly chosen. 1K images are randomly selected from the whole of SUN397 to form a separate test dataset. For GIST1M, a 1K test dataset is available. For SIFT10M, we randomly selected 1K data points from its 10K test dataset. Ground-truth neighbors for a given query are defined as the samples within the top 2% of Euclidean distances.
IV-A Protocols and Baselines
We evaluate our methods by comparing them to seven hashing methods: Iterative Quantization (ITQ) , Isotropic Hashing (IsoH) , Harmonious Hashing (HH) , Spectral Hashing (SH) , Inductive Manifold Hashing (IMH) , Orthogonal K-means (ok-means)  and Spherical Hashing (SpH) . Our data-dependent and data-independent methods are denoted GHS-DD and GHS-DI, respectively. We use the publicly available code of the compared methods and follow the parameter settings suggested in the corresponding publications. All data are zero-centered, and in our methods the PCA projections are normalized by the largest Euclidean norm of all projected data. Two kinds of experiments were conducted: Hamming ranking and hash lookup. The performance of Hamming ranking is measured by MAP. The F1 score, denoted F-measure and defined as F1 = 2 · precision · recall / (precision + recall), is used to evaluate hash lookup. Ground truths are defined by Euclidean neighbors.
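As a reference for these metrics, here are small hypothetical helpers for F1 and average precision (AP, which is averaged over queries to give MAP):

```python
def f1_score(precision, recall):
    """F1: harmonic mean of precision and recall, 2PR / (P + R)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k over the ranks of relevant items.

    ranked_relevance is a 0/1 list in retrieval order."""
    hits, total = 0, 0.0
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / k
    return total / hits if hits else 0.0

print(f1_score(0.5, 0.5))              # 0.5
print(average_precision([1, 0, 1, 0])) # (1/1 + 2/3) / 2
```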
IV-B Quantitative Evaluation
The mean average precision (MAP) values are given in Tables II-IV. GHS-DD outperforms all compared methods. The performance of GHS-DI is poorer than ITQ, HH and SH except in the 128-bit experiments. Benefiting from its grounding in information theory and its balanced code matrix, GHS-DD exceeds ITQ, IsoH and HH. Due to computational limitations, SpH works on a small subset of the whole dataset, and its empirical satellite distribution algorithm is demonstrated to be less efficient than ours. The F-measure is illustrated in Fig. 2. Again, GHS-DD exceeds the others. It is worth noticing that GHS-DI produced the second-best MAP and F-measure in the experiments on longer bits (), because it considers the orthogonality of the code matrix. The way GHS-DD satisfies the existence and uniqueness condition for the GPS solution, i.e., Eq. (4), together with its data-dependent nature, makes it work better than GHS-DI.
IV-C Computational Efficiency
Training and testing times for 32 bits are given in Table V. All experiments were run in MATLAB R2013b on a PC with a 2.85 GHz CPU and 128 GB RAM. The major computation cost of GHS-DI is the calculation of D2S at the final step, which is linear in the product of the data dimension and the dataset size. Hence, it takes the least time on GIST1M and SIFT10M. Because GHS-DD computes D2S in every iteration, its computation cost is moderate. When encoding a new query, GHS-DI and GHS-DD both compute D2S, and hence their computation costs are similar. Although the testing procedure of SpH is similar to ours, it computes D2S in the original input space of dimension , so its testing time is longer.
IV-D Incorporating Label Information
To incorporate label information, a supervised dimensionality reduction method can be used to better capture the semantic structure of the dataset. Among various supervised dimensionality reduction methods, Canonical Correlation Analysis (CCA)  has proven to be efficient for extracting a common latent space from two views  and robust to noise .
Let be a label vector, where is the total number of labels. The th entry is 1 if the image is associated with the corresponding label, and 0 otherwise. The rows of the label matrix are the label vectors. The goal of CCA is to maximize the correlation between the projected data matrix and the label matrix by finding two projection directions and . The correlation is defined as:
can be obtained by solving the following generalized eigenvalue problem:
where is a small regularization constant, set to 0.0001 here. Just as in the case of PCA, the leading generalized eigenvectors, scaled by their corresponding eigenvalues, form the rows of , and we obtain the embedded data matrix .
Finally, both our data-independent and data-dependent methods can be used to generate hashing codes.
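A rough numpy-only sketch of this CCA step (hypothetical names; the generalized eigenvalue problem is reduced to a standard one rather than handed to a dedicated solver):

```python
import numpy as np

def cca_directions(X, L, reg=1e-4, k=1):
    """Leading CCA directions for data X (n x d) against labels L (n x t),
    via the regularized generalized eigenproblem
    C_xl C_ll^{-1} C_lx w = rho^2 C_xx w  (a sketch, not the paper's code)."""
    n = len(X)
    X = X - X.mean(axis=0)
    L = L - L.mean(axis=0)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cll = L.T @ L / n + reg * np.eye(L.shape[1])
    Cxl = X.T @ L / n
    M = Cxl @ np.linalg.solve(Cll, Cxl.T)
    # Reduce to a standard eigenproblem; eigenvalues are squared correlations.
    vals, vecs = np.linalg.eig(np.linalg.solve(Cxx, M))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:k]].real, vals[order[:k]].real

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
labels = (X[:, 0] > 0).astype(float)          # a single binary label
L = np.stack([labels, 1.0 - labels], axis=1)  # 0/1 label matrix
W, rho2 = cca_directions(X, L, k=1)
print(rho2)  # leading squared canonical correlation
```

The rows of the returned direction matrix play the role of the supervised projection that replaces PCA before either GHS variant is applied.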
The CIFAR-10 dataset is used in this experiment. Its 60K images are labelled into 10 classes with 6,000 samples per class. Again, each image is represented by a 1024-dimensional GIST feature. 1,000 samples are randomly chosen as queries and the remaining samples are used for training. Our proposed supervised hashing methods are denoted CCA-GHS-DI and CCA-GHS-DD, respectively. The baseline methods are Supervised Discrete Hashing (SDH) , KSH , FastHash  and CCA-ITQ .
The mean F-measure of hash lookup within Hamming distance 2 and the MAP scores of the compared methods are given in Fig. 3. CCA-GHS-DD achieves the best F-measure and MAP for all code lengths, while CCA-GHS-DI is only slightly inferior to SDH at 16-bit code length. In the hash lookup experiments, we found that setting the Hamming radius to 2 is favorable for both of our proposed methods, because two groups of satellites were used in the experiments with . In Fig. 4, five queries with their corresponding results retrieved by the compared methods using 16-bit hashing codes are illustrated to qualitatively evaluate the performance. Both CCA-GHS-DI and CCA-GHS-DD outperform the compared methods.
IV-E Classification with Hashing Codes
In this subsection, the MNIST dataset is used to evaluate the hashing codes learned by the compared methods. MNIST consists of 70,000 images of handwritten digits from ‘0’ to ‘9’, each of which is 784-dimensional. BRE, CCA-ITQ, KSH, FastHash and SDH are used as baselines.
A linear Support Vector Machine (SVM) is applied to the hashing codes; the LIBLINEAR  solver is used to train the SVM. The classification results are given in Fig. 5, which shows that CCA-GHS-DD achieves the highest classification accuracy for all hash bit lengths, while CCA-GHS-DI is the second best when but trails SDH in the experiments on 32-bit hash codes.
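To illustrate classification on ±1 hash codes without depending on LIBLINEAR, here is a toy stand-in that trains a perceptron instead of the paper's linear SVM (all names hypothetical):

```python
import numpy as np

def train_perceptron(B, y, epochs=20):
    """Toy linear classifier on +/-1 hash codes (a stand-in for the
    LIBLINEAR-trained SVM used in the paper; y has entries in {-1, +1})."""
    w = np.zeros(B.shape[1])
    b = 0.0
    for _ in range(epochs):
        for code, label in zip(B, y):
            if label * (code @ w + b) <= 0:  # misclassified: update
                w += label * code
                b += label
    return w, b

# Separable toy codes: the class is simply the sign of the first bit.
rng = np.random.default_rng(0)
B = rng.choice([-1, 1], size=(100, 16))
y = B[:, 0].astype(float)
w, b = train_perceptron(B, y)
acc = np.mean(np.sign(B @ w + b) == y)
print(acc)  # 1.0 on this separable toy problem
```

Because the toy labels are linearly separable, the perceptron convergence theorem guarantees a perfect fit here; on real hash codes a soft-margin SVM, as in the paper, is the more robust choice.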
We have proposed a novel hashing method based on Shannon’s Source Coding Theorem, which requires that the hashing codes be longer than the embedding of the original training data. To circumvent the computation of pairwise distances between each pair of data points, we minimize a new formulation of quantization loss based on the Global Positioning System (GPS). Data-dependent and data-independent methods are proposed to distribute the satellites. According to the experimental results on datasets of three scales, the data-dependent method (GHS-DD) is superior to other methods, and the data-independent method (GHS-DI) produces promising results in less training time. However, GHS-DD takes a moderate length of time to train, and the demand on RAM is limited by the computation of the covariance matrix in PCA. By incorporating Canonical Correlation Analysis (CCA), the proposed methods can be used for supervised hashing, and the performance of CCA-GHS-DI and CCA-GHS-DD is superior. Finally, the obtained hashing codes are used for a classification problem to further demonstrate the performance of the proposed methods. Future work will focus on improving computational efficiency and on investigating methods to train the model with a few samples from the whole dataset, in order to handle larger datasets such as SIFT1B and Tiny 80M.
-  (1991-Nov.) Existence and uniqueness of GPS solutions. IEEE Transactions on Aerospace and Electronic Systems 27 (6), pp. 952–956.
-  (2008-Jan.) Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM 51 (1), pp. 117–122.
-  (1985-Jan.) An algebraic solution of the GPS equations. IEEE Transactions on Aerospace and Electronic Systems 21, pp. 56–59.
-  (1981) Pattern recognition with fuzzy objective function algorithms. Kluwer Academic Publishers.
-  (2008) Correlational spectral clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8.
-  (2002) Similarity estimation techniques from rounding algorithms. In ACM Symposium on Theory of Computing, pp. 380–388.
-  (2013) Fast, accurate detection of 100,000 object classes on a single machine. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1814–1821.
-  (2008-Nov.) LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research 9, pp. 1871–1874.
-  (2007-Jan.) Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing 1 (4), pp. 586–597.
-  (2008) Multi-view dimensionality reduction via canonical correlation analysis. Technical report.
-  (2011) Entropy and information theory. 2nd edition, Springer-Verlag.
-  (2002) Stochastic neighbor embedding. In Advances in Neural Information Processing Systems, pp. 833–840.
-  (1997) Global positioning system: theory and practice. Springer-Verlag.
-  (1936-Dec.) Relations between two sets of variables. Biometrika 28, pp. 321–377.
-  (2012) Spherical hashing. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2957–2964.
-  (2011-Mar.) Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (1), pp. 117–128.
-  (2010) SUN database: large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3485–3492.
-  (2013) Random maximum margin hashing. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 873–880.
-  (2012-Sep.) Semi-supervised hashing for large-scale search. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (12), pp. 2393–2406.
-  (2012) Isotropic hashing. In Advances in Neural Information Processing Systems, pp. 1646–1654.
-  (2009) Learning multiple layers of features from tiny images. Technical report.
-  (2009) Learning to hash with binary reconstructive embeddings. In Advances in Neural Information Processing Systems, pp. 1042–1050.
-  (2012-Nov.) Kernelized locality-sensitive hashing. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (6), pp. 1092–1104.
-  (2011) Hashing algorithms for large-scale learning. In Advances in Neural Information Processing Systems.
-  (2014) Fast supervised hashing with decision trees for high-dimensional data. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1971–1978.
-  (2015-Mar.) Multiview alignment hashing for efficient image search. IEEE Transactions on Image Processing 24 (3), pp. 956–966.
-  (2014) Discrete graph hashing. In Advances in Neural Information Processing Systems.
-  (2011) Hashing with graphs. In International Conference on Machine Learning.
-  (2012) Compact hyperplane hashing with bilinear functions. In International Conference on Machine Learning.
-  (1999) Object recognition from local scale-invariant features. In IEEE International Conference on Computer Vision, pp. 1150–1157.
-  (2013) Cartesian k-means. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3017–3024.
-  (2001-May) Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal of Computer Vision 42 (3), pp. 145–175.
-  (2015) Hashing on nonlinear manifolds. IEEE Transactions on Image Processing 24 (6), pp. 1839–1851.
-  (2015) Supervised discrete hashing. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 37–45.
-  (2013) Inductive hashing on manifolds. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1562–1569.
-  (2009-Nov.) Hash kernels for structured data. Journal of Machine Learning Research 10, pp. 2615–2637.
-  (2012-May) LDAHash: improved matching with smaller descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (1), pp. 66–78.
-  (2008) Large-scale manifold learning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8.
-  (2015-Sep.) Neighborhood discriminant hashing for large-scale image retrieval. IEEE Transactions on Image Processing 24 (9), pp. 2827–2840.
-  (2008-May) 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11), pp. 1958–1970.
-  (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2605.
-  (2012) Supervised hashing with kernels. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2074–2081.
-  (2009) Feature hashing for large scale multitask learning. In International Conference on Machine Learning, pp. 1113–1120.
-  (2008) Spectral hashing. In Advances in Neural Information Processing Systems, pp. 1753–1760.
-  (2013) Harmonious hashing. In International Joint Conference on Artificial Intelligence, pp. 1820–1826.
-  (2011) Iterative quantization: a procrustean approach to learning binary codes. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 817–824.
-  (2016-Feb.) Sparse hashing tracking. IEEE Transactions on Image Processing 25 (2), pp. 840–849.
-  (2015-Jul.) Full-space local topology extraction for cross-modal retrieval. IEEE Transactions on Image Processing 24 (7), pp. 2212–2224.
-  (2015-Dec.) Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE Transactions on Image Processing 24 (12), pp. 4766–4779.