Supervised Online Hashing via Similarity Distribution Learning
Abstract
Online hashing has attracted extensive research attention when facing streaming data. Most online hashing methods learn binary codes based on the pairwise similarities of training instances, thereby failing to capture the semantic relationship and suffering from poor generalization in large-scale applications due to large data variations. In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed Similarity Distribution based Online Hashing (SDOH), is proposed to preserve the intrinsic semantic relationship in the produced Hamming space. Specifically, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to address the extremely imbalanced distribution issue. Then, we introduce a scaling Student distribution to solve the challenging initialization problem and efficiently bridge the gap between the known and unknown distributions. Lastly, we align the two distributions by minimizing the Kullback-Leibler divergence (KL-divergence) with stochastic gradient descent (SGD), by which an intuitive similarity constraint is imposed to update the hashing model on new streaming data, with a powerful ability to generalize to past data. Extensive experiments on three widely-used benchmarks validate the superiority of the proposed SDOH over state-of-the-art methods in the online retrieval task.
1 Introduction
Hashing based visual search has attracted extensive research attention in recent years due to the rapid growth of visual data on the Internet [7, 33, 8, 26, 12, 13, 30, 32, 25, 35, 27]. In various scenarios, online hashing has become a hot topic due to the emergence of streaming data, and it aims to resolve an online retrieval task by updating the hash functions from sequentially arriving data instances. On one hand, online hashing inherits the advantages of traditional offline hashing methods, i.e., low storage cost and efficient pairwise distance computation in the Hamming space. On the other hand, it also merits in training efficiency and scalability for large-scale applications, since the hash functions are updated instantly and solely based on the current streaming data, which is superior to traditional hashing methods that retrain the model entirely from scratch.
Several recent endeavors have been made towards robust and efficient online hashing, e.g., OKH [12], SketchHash [17], AdaptHash [6], OSH [4], FROSH [2], MIHash [5], HCOH [21, 19] and BSODH [20]. Unsupervised online hashing methods, e.g., SketchHash [17] and FROSH [2], maintain a sketch of the whole streaming data, which is efficient but lacks accuracy, as the label information is ignored.
Recent advances have focused more on supervised online hashing, which yields better results in practice. As shown in Fig. 1(a), early works such as OKH [12], AdaptHash [6], MIHash [5] and BSODH [20] utilize label information to define the pairwise similarities between different training instances to guide the learning of hash functions. However, these methods suffer from poor generalization. To explain, as demonstrated in a previous work [21], only the pairwise relationships of the sequential data at the current stage are considered, which ignores the data variations across different stages. As a result, the properties of the past data become less conspicuous as the dataset grows. In terms of OSH [4] and HCOH [21, 19], the label information is used to assign a “codeword” from a predefined ECOC codebook, as shown in Fig. 1(b), and the hash functions map the to-be-learned hashing codes to the assigned “codeword”, which however heavily depends on the quality of the ECOC codebook. Though a recent work [21] adopts the Hadamard matrix [11] as the ECOC codebook, it restricts the length of hashing bits to be consistent with the size of the Hadamard matrix.
Despite the extensive progress made, supervised online hashing remains unsolved due to defects in modeling the supervised cues. Existing methods only preserve the information from the current data, and their updates do not take the distributions of previous data into account. We argue that these defects can be compensated by aligning the distributions between the input data and the hashing space when updating, which has been demonstrated to be informative beyond online hashing [22, 9, 23, 10, 34]. Inspired by this, we aim to impose an intuitive constraint on similarity preservation in the Hamming space to capture not only the pairwise similarity at the current stage, but also the semantic relationship among different stages. By doing so, the learning takes into account not only the information from the current streaming data, but also that from the past data.
In this paper, we propose a novel online hashing method, called Similarity Distribution based Online Hashing (SDOH), which exploits the distribution over different pairwise similarities towards optimal supervised online hashing, as shown in Fig. 1(c). To this end, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization. Noticeably, Lin et al. [23] adopted a similar idea, simply obtaining a probability matrix via normalization. However, such a normalization may generate an extremely imbalanced distribution (as illustrated in Fig. 2(a)) when facing a highly sparse pairwise discrete similarity matrix, and the optimization risks losing the information of dissimilar pairs (see Fig. 2(c)) [1]. Therefore, we introduce a Gaussian distribution to smooth the imbalanced distribution before normalization, which bridges the gap between similar and dissimilar probabilities (as in Fig. 2(b)). Second, we develop a scaling Student distribution to transform pairwise distances in the Hamming space into probabilities (see Fig. 2(f)). Different from the traditional Student distribution, which suffers from poor performance due to the instability of parameter initialization (see Fig. 2(e)), the proposed scaling Student distribution not only improves the performance but also accelerates the training (see Fig. 2(d)). Lastly, to better approximate the target probability, we minimize the KL-divergence between the two introduced distributions to preserve the relationships among different pairwise similarities.
Our main contributions include:

We investigate the online hashing problem by modeling the similarity distribution, instead of only exploiting the pairwise similarities that suffer from poor generalization. A Gaussian-based normalization is introduced to smooth the extremely imbalanced distribution, while a scaling Student distribution is proposed to solve the initialization problem and bridge the gap between the known and unknown distributions.

We propose to align the distributions between the input data and the binary space via the KL-divergence, which imposes an intuitive similarity constraint to update the hash functions on new streaming data with a powerful ability to generalize to past data.
2 Related Work
According to the learning scheme, online hashing can be categorized into SGD-based methods and matrix-sketch-based methods.
SGD-based online hashing employs SGD to update the learned parameters. To the best of our knowledge, Online Kernel Hashing (OKH) [12] is the first of this kind, which requires pairs of points to update the hash functions via an online passive-aggressive strategy [3]. Adaptive Hashing (AdaptHash) [6] defines a hinge-like loss, approximated by a differentiable sigmoid function, to update the hash functions with SGD. In [4], a more general two-step hashing was introduced, in which binary Error Correcting Output Codes (ECOC) are first assigned to labeled data, and then the hash functions are learned to fit the binary ECOC using Online Boosting. Cakir et al. [5] developed Online Hashing with Mutual Information (MIHash), which targets optimizing the mutual information between the neighbors and non-neighbors given a query. Lin et al. [21, 19] proposed Hadamard Codebook based Online Hashing (HCOH), where a more discriminative Hadamard matrix is used as the ECOC codebook to guide the learning of hash functions.
The inspiration of matrix-sketch-based online hashing comes from the idea of “data sketch” [18], where a small-sized sketch is leveraged to preserve the main properties of a large-scale dataset. To this end, Leng et al. [17] proposed Online Sketching Hashing (SketchHash), which employs an efficient variant of SVD decomposition together with PCA-based batch learning on the sketch to learn the hashing weights. A faster version, FROSH, was developed in [2], where the independent Subsampled Randomized Hadamard Transform (SRHT) is employed on different data chunks to make the sketch more compact and accurate, and to accelerate the sketching process.
However, existing sketch-based online hashing methods are unsupervised and suffer from low performance due to the lack of supervised labels. SGD-based methods [12, 6, 4, 5, 21, 20] aim to make full use of labels, but still face practical problems as discussed in Sec. 1. OKH [12], AdaptHash [6], MIHash [5] and BSODH [20] have limited generalization ability since only the pairwise relationships of the current sequential data are considered. As for OSH [4] and HCOH [21, 19], a well-defined ECOC codebook has to be given in advance, which fails when the size of the codebook is inconsistent with the length of hashing bits.
3 Proposed Method
3.1 Problem Definition
Suppose there is a dataset $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_n] \in \mathbb{R}^{d \times n}$ with its corresponding labels $\{l_1, \dots, l_n\}$, where $\mathbf{x}_i \in \mathbb{R}^d$ is the $i$-th instance with its class label $l_i$. Assume there are $r$ hash functions $\{h_k\}_{k=1}^{r}$ to be learned, which map each $\mathbf{x}_i$ into an $r$-bit binary code $\mathbf{b}_i \in \{-1, +1\}^r$, and the $k$-th binary bit $b_{ik}$ of $\mathbf{b}_i$ is computed as follows:

$b_{ik} = h_k(\mathbf{x}_i) = \operatorname{sgn}(\mathbf{w}_k^T \mathbf{x}_i),$  (1)

where $h_k$ is the $k$-th hash function, and $\mathbf{w}_k \in \mathbb{R}^d$ is the projection vector of $h_k$. The function $\operatorname{sgn}(x)$ returns $+1$ if $x \geq 0$, and $-1$ otherwise.
Let $\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_r] \in \mathbb{R}^{d \times r}$ be the projection matrix. Then, the binary codes $\mathbf{B}$ of $\mathbf{X}$ can be computed as:

$\mathbf{B} = \operatorname{sgn}(\mathbf{W}^T \mathbf{X}).$  (2)
Online hashing aims to resolve an online retrieval task by updating the hash functions from a sequence of data instances one at a time. Therefore, $\mathbf{X}$ is not available once and for all. Without loss of generality, we denote $\mathbf{X}^t \in \mathbb{R}^{d \times n_t}$ as the input streaming data at the $t$-th stage, $\mathbf{B}^t$ as the learned binary codes for $\mathbf{X}^t$, and $\mathbf{L}^t$ as the corresponding label set, where $n_t$ is the size of the data at the $t$-th stage. Further, we denote $\mathbf{S}^t \in \{0, 1\}^{n_t \times n_t}$ as the pairwise similarity matrix, where $s_{ij}^t = 1$ if $\mathbf{x}_i^t$ and $\mathbf{x}_j^t$ share the same label, and $s_{ij}^t = 0$ otherwise. In an online setting, the parameter matrix $\mathbf{W}$ should be updated based on the current data $\mathbf{X}^t$ only.
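As a concrete illustration, the binary coding of Eq. (2) and the construction of a pairwise similarity matrix can be sketched as follows (a minimal NumPy sketch; the $\{0, 1\}$ similarity convention and all variable names are our own illustrative choices, not the paper's reference implementation):

```python
import numpy as np

def hash_codes(W, X):
    """Eq. (2): B = sgn(W^T X), with codes in {-1, +1}."""
    B = np.sign(W.T @ X)
    B[B == 0] = 1  # treat sgn(0) as +1
    return B

def similarity_matrix(labels):
    """s_ij = 1 if instances i and j share a label, 0 otherwise."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

rng = np.random.default_rng(0)
d, r, n = 8, 4, 6                  # feature dim, code length, batch size
W = rng.standard_normal((d, r))    # projection matrix
X = rng.standard_normal((d, n))    # one streaming batch
B = hash_codes(W, X)               # r x n codes in {-1, +1}
S = similarity_matrix([0, 0, 1, 1, 2, 2])
```

Only the current batch `X` is needed, which mirrors the online constraint that `W` is updated from streaming data alone.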
3.2 Proposed Framework
The framework of the proposed method can be seen in Fig. 2. Specifically, suppose that at the $t$-th round, we have a known similarity distribution matrix $\mathbf{P}^t$ and an unknown Hamming distance distribution matrix $\mathbf{Q}^t$. The goal of the proposed SDOH is to align $\mathbf{Q}^t$ with $\mathbf{P}^t$, such that the similarity can be well preserved in the Hamming space. It is achieved by minimizing the KL-divergence as follows:

$\min_{\mathbf{W}} \; \mathcal{L} = \sum_{i=1}^{n_t}\sum_{j=1}^{n_t} p_{ij}\log\frac{p_{ij}}{q_{ij}},$  (3)

where $p_{ij}$ and $q_{ij}$ are the $j$-th elements in the $i$-th row of $\mathbf{P}^t$ and $\mathbf{Q}^t$, respectively. In the following, we elaborate on the details of $\mathbf{P}^t$ and $\mathbf{Q}^t$.
3.2.1 Gaussian-based Normalization
One common approach to obtain $\mathbf{P}^t$ is to normalize the similarity matrix $\mathbf{S}^t$, with each element computed as:

$p_{ij} = \frac{s_{ij}^t}{\sum_{j'=1}^{n_t} s_{ij'}^t}.$  (4)

However, such a probability matrix may suffer from an extremely imbalanced distribution, as shown in Fig. 2(a). For instance, when $\mathbf{S}^t$ is a highly sparse matrix, which is common in an online setting [20], $p_{ij}$ takes a high value if $s_{ij}^t = 1$, while it drops to zero quickly if $s_{ij}^t = 0$.
Therefore, the learning of Eq. (4) heavily relies on similar pairs and thus loses the information of dissimilar pairs, as shown in Fig. 2(c). To address this issue, one key novelty in our proposed SDOH is to modify Eq. (4) as:

$p_{ij} = \frac{g(s_{ij}^t)}{\sum_{j'=1}^{n_t} g(s_{ij'}^t)},$  (5)

where $g(\cdot)$ is introduced to smooth the imbalanced distribution, as shown in Fig. 2(b). We assume that $s_{ij}^t$ follows a Gaussian distribution widely used in practice, i.e., $s_{ij}^t \sim \mathcal{N}(\mu, \sigma^2)$, where $\mu$ and $\sigma^2$ are the mean and variance of the pairwise similarity distribution, respectively. Therefore, we derive $g(s_{ij}^t) = \exp\big(-\frac{(s_{ij}^t - \mu)^2}{2\sigma^2}\big)$, where $\exp(\cdot)$ is the exponential function. Different values of the pair $(\mu, \sigma)$ have different impacts on $g(\cdot)$. To sum up, $\mu$ decides the position of the highest value of $g(\cdot)$, while the larger $\sigma$ is, the smoother the function is. Therefore, Eq. (5) can well alleviate the imbalanced distribution problem caused by Eq. (4) (see Fig. 2(d)).
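The effect of the Gaussian-based normalization of Eq. (5), compared with the naive normalization of Eq. (4), can be sketched as below (a minimal sketch; the values of `mu` and `sigma` are illustrative choices, not the paper's tuned settings):

```python
import numpy as np

def gaussian_probability(S, mu=1.0, sigma=0.5):
    """Eq. (5) sketch: smooth a {0,1} similarity matrix with a Gaussian
    kernel g(s) = exp(-(s - mu)^2 / (2 sigma^2)), then row-normalize."""
    G = np.exp(-(S - mu) ** 2 / (2.0 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

# A highly sparse similarity matrix: one similar pair among dissimilar ones.
S = np.zeros((4, 4))
S[0, 1] = S[1, 0] = 1.0

# Eq. (4): naive normalization assigns zero mass to every dissimilar pair.
P_naive = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)

# Eq. (5): smoothed normalization keeps dissimilar pairs at non-zero mass.
P = gaussian_probability(S)
```

Because `g` is strictly positive, every pair retains probability mass, so the KL objective still receives a gradient signal from dissimilar pairs.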
3.2.2 Scaling Student Distribution
For the distribution $\mathbf{Q}^t$, we define it as the probability distribution of Hamming distances. The similarity between $\mathbf{b}_i^t$ and $\mathbf{b}_j^t$ can be measured by the Hamming distance as:

$d_{ij} = \frac{1}{4}\|\mathbf{b}_i^t - \mathbf{b}_j^t\|^2 = \frac{1}{2}\big(r - (\mathbf{b}_i^t)^T\mathbf{b}_j^t\big).$  (6)

We propose a scaling Student distribution, based on a new $\mathbf{Q}^t$ with one degree of freedom, to transform Hamming distances into probabilities. We start from the works in [28, 23], where each element of the original $\mathbf{Q}^t$ is defined as:

$q_{ij} = \frac{(1 + d_{ij})^{-1}}{\sum_{j'=1}^{n_t}(1 + d_{ij'})^{-1}}.$  (7)
However, such an assigned distribution causes an unsatisfactory initialization of $\mathbf{Q}^t$, which may lead to performance degradation. Ideally, if $s_{ij}^t = 1$, we need a higher value of $q_{ij}$. However, the value of $q_{ij}$ in Eq. (7) depends on the initialization of $\mathbf{W}$. If $\mathbf{W}$ is not initialized well, $q_{ij}$ is likely to be very small for $s_{ij}^t = 1$. In such a case, $p_{ij}\log\frac{p_{ij}}{q_{ij}}$ in Eq. (3) grows quickly. Similarly, when $s_{ij}^t = 0$, $p_{ij}\log\frac{p_{ij}}{q_{ij}}$ in Eq. (3) may be very small. Therefore, Eq. (7) may result in an extremely poor initialization (see Fig. 2(e)).
To avoid such a poor initialization, another key novelty in our approach is to revise Eq. (7) as follows:

$q_{ij} = \frac{(1 + \gamma_{ij} d_{ij})^{-1}}{\sum_{j'=1}^{n_t}(1 + \gamma_{ij'} d_{ij'})^{-1}},$  (8)

where the scaling parameter $\gamma_{ij} = \gamma_1$ if $s_{ij}^t = 1$, and $\gamma_{ij} = \gamma_2$ otherwise. To analyze, scaling up $\gamma_{ij}$ will increase the value of $\gamma_{ij} d_{ij}$, and thus further decrease the value of $q_{ij}$ when $s_{ij}^t = 0$. A similar analysis applies to the case of $s_{ij}^t = 1$ (see Fig. 2(f)). Therefore, Eq. (8) can well reduce the influence of the initialization problem (see Fig. 2(d)).
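A sketch of the scaled Student-t probabilities under our reconstructed notation (the two $\gamma$ values and the helper names are illustrative assumptions: a small scale for similar pairs keeps $q_{ij}$ large, a large scale for dissimilar pairs pushes $q_{ij}$ towards zero):

```python
import numpy as np

def hamming_distances(B):
    """Eq. (6): d_ij = (r - b_i^T b_j) / 2 for {-1,+1} codes of length r."""
    r = B.shape[0]
    return (r - B.T @ B) / 2.0

def student_probability(B, S, gamma_sim=0.1, gamma_dis=10.0):
    """Eq. (8) sketch: scaled Student-t distribution over Hamming distances,
    with a per-pair scale chosen from the similarity matrix S."""
    D = hamming_distances(B)
    gamma = np.where(S == 1, gamma_sim, gamma_dis)
    U = 1.0 / (1.0 + gamma * D)
    return U / U.sum(axis=1, keepdims=True)  # row-normalize into Q
```

The per-pair scale is the only change relative to the plain Student distribution of Eq. (7), which corresponds to `gamma_sim = gamma_dis = 1`.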
The final objective function can be derived as:

$\min_{\mathbf{W}} \; \mathcal{L} = \sum_{i=1}^{n_t}\sum_{j=1}^{n_t} p_{ij}\log\frac{p_{ij}}{q_{ij}},$  (9)

with $p_{ij}$ defined in Eq. (5) and $q_{ij}$ defined in Eq. (8).
We note that the proposed SDOH clearly differs from SePH [23] in three aspects: (1) Our SDOH is based on newly and well-defined $\mathbf{P}^t$ and $\mathbf{Q}^t$, which avoid the imbalanced distribution and poor initialization problems. (2) We are the first to employ the Student distribution for online unimodal retrieval, while SePH aims at solving offline cross-modal retrieval. (3) Our SDOH is implemented in an end-to-end manner, where the learning of hash functions is integrated into the KL-divergence, while SePH is based on a two-step framework, where the KL-divergence is used to guide the learning of binary codes first, and hash functions are then learned to approximate the learned binary codes.
3.3 The Optimization
After obtaining the distributions $\mathbf{P}^t$ and $\mathbf{Q}^t$, we aim to optimize the KL-divergence in Eq. (9) to preserve the similarities in the Hamming space. Due to the discrete sign function in Eq. (2), the above objective function is highly non-convex and difficult (usually NP-hard) to optimize. To solve it, we follow the work in [24] and replace the sign function with the tanh function as follows:

$\mathbf{B}^t = \tanh(\mathbf{W}^T \mathbf{X}^t).$  (10)
To solve the optimization problem in Eq. (9), we adopt the SGD algorithm to update the hash functions at the $t$-th stage as below:

$\mathbf{W} \leftarrow \mathbf{W} - \eta\,\frac{\partial \mathcal{L}}{\partial \mathbf{W}},$  (11)

where $\eta$ is a positive learning rate.
We now elaborate the partial derivative of $\mathcal{L}$ w.r.t. $\mathbf{W}$, i.e., $\frac{\partial \mathcal{L}}{\partial \mathbf{W}}$. The gradient w.r.t. each relaxed code $\mathbf{b}_i^t$ can be computed as follows:

$\frac{\partial \mathcal{L}}{\partial \mathbf{b}_i^t} = \frac{1}{2}\sum_{j=1}^{n_t}\big[(p_{ij}-q_{ij})e_{ij} + (p_{ji}-q_{ji})e_{ji}\big]\big(\mathbf{b}_i^t - \mathbf{b}_j^t\big),$  (12)

where $e_{ij} = \gamma_{ij}(1+\gamma_{ij}d_{ij})^{-1}$. Further, we denote $\mathbf{M} = \mathbf{P}^t - \mathbf{Q}^t$ as the matrix with its $(i,j)$-th element being $p_{ij} - q_{ij}$, and $\mathbf{E}$ as the matrix collecting the $e_{ij}$. Let $\mathbf{A} = \mathbf{M}\odot\mathbf{E} + (\mathbf{M}\odot\mathbf{E})^T$, where $\odot$ stands for the Hadamard product. Let $\mathbf{D}_A = \operatorname{diag}(\mathbf{A}\mathbf{1})$, where $\operatorname{diag}(\cdot)$ forms the diagonal matrix and $\mathbf{1}$ represents a vector with all elements being $1$. Therefore, we obtain the gradient w.r.t. $\mathbf{W}$ as:

$\frac{\partial \mathcal{L}}{\partial \mathbf{W}} = \frac{1}{2}\,\mathbf{X}^t\Big(\big(\mathbf{B}^t(\mathbf{D}_A - \mathbf{A})\big)\odot\big(\mathbf{1}\mathbf{1}^T - \mathbf{B}^t\odot\mathbf{B}^t\big)\Big)^T.$  (13)
The optimization process for the proposed SDOH is summarized in the supplementary material with more details.
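Putting Eqs. (9)-(13) together, one SGD round can be sketched as follows (a NumPy sketch under our reconstructed notation; the gradient constants follow the derivation above, and all names are illustrative rather than the paper's reference implementation):

```python
import numpy as np

def kl_loss_and_grad(W, X, P, gamma):
    """One forward/backward pass of the KL objective.
    Shapes: W (d, r), X (d, n), P and gamma (n, n).
    P is assumed strictly positive (the Gaussian smoothing guarantees this)."""
    B = np.tanh(W.T @ X)                                  # Eq. (10), r x n
    sq = np.sum(B ** 2, axis=0)
    D = 0.25 * (sq[:, None] + sq[None, :] - 2 * B.T @ B)  # relaxed Eq. (6)
    U = 1.0 / (1.0 + gamma * D)
    Q = U / U.sum(axis=1, keepdims=True)                  # Eq. (8)
    loss = np.sum(P * np.log(P / Q))                      # Eq. (9)
    ME = (P - Q) * gamma * U                              # M ⊙ E
    A = ME + ME.T
    G_B = 0.5 * B @ (np.diag(A.sum(axis=1)) - A)          # Eq. (12), dL/dB
    G_W = X @ (G_B * (1.0 - B ** 2)).T                    # Eq. (13), via tanh
    return loss, G_W

def sgd_step(W, X, P, gamma, lr=0.01):
    """Eq. (11): update the projection using the current batch only."""
    loss, G_W = kl_loss_and_grad(W, X, P, gamma)
    return W - lr * G_W, loss
```

Since only the current batch `X` enters the gradient, the per-round cost is independent of how much past data has already been seen.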
Table 1: mAP and Precision@H2 comparisons on CIFAR.

Method  mAP (32 / 48 / 64 / 128 bits)  Precision@H2 (32 / 48 / 64 / 128 bits)
OKH  0.223  0.252  0.268  0.350  0.100  0.452  0.175  0.372 
SketchHash  0.302  0.327  -  -  0.385  0.059  -  - 
AdaptHash  0.216  0.297  0.305  0.293  0.185  0.093  0.166  0.164 
OSH  0.129  0.131  0.127  0.125  0.137  0.117  0.083  0.038 
MIHash  0.675  0.668  0.667  0.664  0.657  0.604  0.500  0.413 
HCOH  0.688  0.707  0.724  0.734  0.731  0.694  0.633  0.471 
BSODH  0.689  0.656  0.709  0.711  0.691  0.697  0.690  0.602 
SDOH  0.765  0.762  0.751  0.742  0.785  0.781  0.734  0.550 
4 Experiments
We conduct experiments on three benchmark datasets, i.e., CIFAR [14], Places [36], and MNIST [16]. Our proposed SDOH is compared with several state-of-the-art online hashing methods [12, 6, 17, 4, 5, 21, 20] to demonstrate its performance.
4.1 Experimental Settings
Datasets. The CIFAR dataset consists of 60K instances from 10 categories. Each instance is represented by a CNN feature vector extracted from VGG [31]. Similar to [5], we randomly split the dataset into a retrieval set and a test set. Besides, a subset of training images from the retrieval set is sampled to learn the hash functions.
The Places dataset is a large-scale and challenging dataset containing more than 2.4 million images from 205 scene categories. We extract CNN features from a fully-connected layer of AlexNet [15], which are then reduced to a lower dimension by PCA. Following [4], a number of images from each scene are held out to construct a test set, and the remaining ones are used as the retrieval set. A random subset of the retrieval set is used to update the hash functions.
The MNIST dataset consists of 70K handwritten digit images from 10 classes. Following [21], each image is represented by 784-dim normalized pixel values. The test set is constructed by sampling a number of instances from each class, and the others are used to form the retrieval set. Besides, a subset of images from the retrieval set is sampled to learn the hash functions.
Table 2: mAP@ and Precision@H2 comparisons on Places.

Method  mAP@ (32 / 48 / 64 / 128 bits)  Precision@H2 (32 / 48 / 64 / 128 bits)
OKH  0.122  0.048  0.114  0.258  0.026  0.017  0.217  0.075 
SketchHash  0.202  0.242  -  -  0.220  0.176  -  - 
AdaptHash  0.195  0.223  0.222  0.229  0.012  0.185  0.021  0.022 
OSH  0.022  0.032  0.043  0.164  0.012  0.023  0.030  0.059 
MIHash  0.244  0.288  0.308  0.332  0.204  0.242  0.202  0.069 
HCOH  0.259  0.280  0.321  0.347  0.252  0.179  0.114  0.036 
BSODH  0.250  0.273  0.308  0.337  0.241  0.246  0.212  0.101 
SDOH  0.306  0.309  0.324  0.348  0.249  0.248  0.217  0.103 
Compared Methods. The proposed SDOH is compared with seven state-of-the-art online hashing methods, including Online Kernel Hashing (OKH) [12], Online Sketching Hashing (SketchHash) [17], Adaptive Hashing (AdaptHash) [6], Online Supervised Hashing (OSH) [4], Online Hashing with Mutual Information (MIHash) [5], Hadamard Codebook based Online Hashing (HCOH) [21] and Balanced Similarity for Online Discrete Hashing (BSODH) [20]. Our model is implemented with MATLAB. The training is done on a standard workstation. The source codes of these methods are publicly available. We carefully follow the original parameter settings for each method and report their best results.
Evaluation Protocols. We use five widely-adopted evaluation metrics for performance comparisons, including mean Average Precision (denoted as mAP), precision within a Hamming ball of radius 2 centered on each query (denoted as Precision@H2), mean precision of the top R retrieved neighbors (denoted as Precision@R), mAP vs. different sizes of training instances, as well as the corresponding area under the mAP curve (denoted as AUC). Noticeably, due to the large scale of the Places dataset, it is very time-consuming to compute the full mAP. Following [5, 21], we only compute mAP on the top retrieved items (denoted as mAP@). For SketchHash [17], the batch size has to be larger than the length of hashing bits. Thus, we only report its performance for hashing bits of 32 and 48.
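For reference, the Precision@H2 protocol can be sketched as follows (our own sketch of the common definition, counting an empty Hamming ball as zero precision; the function and variable names are illustrative):

```python
import numpy as np

def precision_at_hamming_radius(B_query, B_db, relevant, radius=2):
    """Precision within a Hamming ball of the given radius, averaged
    over queries. B_query and B_db hold {-1,+1} codes of shape (r, n);
    relevant[i, j] is True if database item j is a true neighbor of query i."""
    r = B_query.shape[0]
    dist = (r - B_query.T @ B_db) / 2.0       # pairwise Hamming distances
    precisions = []
    for i in range(B_query.shape[1]):
        hits = dist[i] <= radius              # items inside the Hamming ball
        if hits.any():
            precisions.append(relevant[i, hits].mean())
        else:
            precisions.append(0.0)            # empty ball counts as zero
    return float(np.mean(precisions))
```

The empty-ball convention matters at long code lengths, where Hamming balls of radius 2 often contain no retrieved items at all.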
4.2 Results and Discussions
First, we report the mAP (mAP@) and Precision@H2 performance in Tabs. 1, 2 and 3. The highest values are shown in boldface, and the second best are underlined. It can be seen that the proposed SDOH performs the best in almost all cases. Interestingly, as the number of hashing bits increases, the proposed SDOH outperforms the others by large margins.
Second, we analyze the Precision@R performance with varying R on the three benchmarks in Figs. 3, 6 and 8. SDOH achieves superior results on all three benchmarks for all hashing bit lengths, which demonstrates the excellent performance of the proposed method.
Third, we report the mAP (mAP@) measure w.r.t. different training sizes in Figs. 4, 7 and 9. As the size of the training data increases, SDOH has consistently higher mAP (mAP@) on all three benchmarks. To quantitatively evaluate all methods, we further analyze their AUC metrics in Fig. 5. For CIFAR and MNIST, the AUC performance of Fig. 4 and Fig. 9 is charted in Fig. 5(a) and Fig. 5(c), respectively. Obviously, for all hashing bit lengths, SDOH outperforms the other methods by a large margin.
Table 3: mAP and Precision@H2 comparisons on MNIST.

Method  mAP (32 / 48 / 64 / 128 bits)  Precision@H2 (32 / 48 / 64 / 128 bits)
OKH  0.224  0.273  0.301  0.404  0.457  0.724  0.522  0.126 
SketchHash  0.348  0.369  -  -  0.691  0.251  -  - 
AdaptHash  0.319  0.318  0.292  0.208  0.535  0.335  0.163  0.168 
OSH  0.130  0.148  0.146  0.143  0.192  0.134  0.109  0.019 
MIHash  0.744  0.780  0.713  0.681  0.814  0.739  0.720  0.471 
HCOH  0.756  0.772  0.759  0.771  0.826  0.766  0.643  0.370 
BSODH  0.747  0.743  0.766  0.760  0.826  0.804  0.814  0.643 
SDOH  0.814  0.799  0.802  0.823  0.835  0.833  0.850  0.828 
On Places, the AUC performance of Fig. 7 is plotted in Fig. 5(b), where the proposed method also transcends all other methods by a certain margin. Though similar results can be observed for MIHash and HCOH at some hashing bit lengths, the performance of SDOH still ranks the best. Quantitatively, SDOH achieves a consistent average improvement over the second best method, i.e., HCOH, on CIFAR, Places, and MNIST.
Note that Figs. 4, 7 and 9 also validate the generalization ability of the proposed SDOH. Taking CIFAR for instance (Fig. 4), even with a small amount of training data, SDOH already achieves a satisfying mAP compared with other methods, e.g., MIHash and HCOH. For a more in-depth analysis, MIHash only considers the pairwise similarities, while HCOH restricts the length of hashing bits to be consistent with the size of the applied ECOC codebook. It shows that the proposed method captures not only the pairwise similarities of the current data batch, but also the relationships among data batches at different stages, which demonstrates the usefulness of exploiting the distribution of pairwise similarities.
Furthermore, we find an advantage of the KL-based solution in comparison with others, such as the inner product. As shown in the above figures and tables, the inner-product-based BSODH shows advantages mainly in Precision@H2. Nevertheless, the proposed KL-based SDOH still outperforms BSODH by a clear margin, which demonstrates the effectiveness of the KL-based solution.
4.3 Retrieval of Unseen Classes
We further test the performance on unseen classes as in [29]. To do so, a subset of the categories is treated as seen classes to form the training set. The remaining categories are regarded as unseen classes, which are divided into a retrieval set and a test set to evaluate the hashing model. For each query, we retrieve the nearest neighbors in the retrieval set and then compute Precision@R. The experiments are done with a fixed hashing bit length. The results are shown in Fig. 10. Clearly, the proposed SDOH achieves the best performance among all methods, which further demonstrates the generalization capability of the proposed framework for online hashing.
5 Conclusions
We have presented a novel online hashing method, named SDOH, which aligns the similarity distributions between the original data space and the Hamming space to well preserve the semantic relationship. To achieve this goal, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to solve the imbalanced distribution problem. Second, to deal with the poor initialization, we have developed a scaling Student distribution to transform pairwise Hamming distance computation into a probability estimation problem. Finally, we align these two distributions via the KL-divergence to impose an intuitive similarity constraint when updating the hash functions, with a powerful generalization ability. Experimental results have shown that the proposed SDOH achieves better results than the compared state-of-the-art methods.
6 Acknowledgments
This work is supported by the National Key R&D Program (No. 2017YFC0113000, and No. 2016YFB1001503), Nature Science Foundation of China (No. U1705262, No. 61772443, and No. 61572410), Post Doctoral Innovative Talent Support Program under Grant BX201600094, China PostDoctoral Science Foundation under Grant 2017M612134, Scientific Research Project of National Language Committee of China (Grant No. YB13549), and Nature Science Foundation of Fujian Province, China (No. 2017J01125 and No. 2018J01106).
References
 [1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In Proceedings of the ICLR, 2017.
 [2] X. Chen, I. King, and M. R. Lyu. Frosh: Faster online sketching hashing. In Proceedings of the UAI, 2017.
 [3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. JMLR, 2006.
 [4] F. Cakir, S. A. Bargal, and S. Sclaroff. Online supervised hashing. CVIU, 2017.
 [5] F. Cakir, K. He, S. A. Bargal, and S. Sclaroff. Mihash: Online hashing with mutual information. In Proceedings of the ICCV, 2017.
 [6] F. Cakir and S. Sclaroff. Adaptive hashing for fast similarity search. In Proceedings of the ICCV, 2015.
 [7] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the VLDB, 1999.
 [8] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proceedings of the CVPR, 2011.
 [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proceedings of the NeurIPS, 2014.
 [10] Y. Hao, T. Mu, J. Y. Goulermas, J. Jiang, R. Hong, and M. Wang. Unsupervised t-distributed video hashing and its deep hashing extension. IEEE TIP, 2017.
 [11] K. J. Horadam. Hadamard matrices and their applications. Princeton University Press, 2012.
 [12] L. Huang, Q. Yang, and W. Zheng. Online hashing. In Proceedings of the IJCAI, 2013.
 [13] Q. Jiang and W. Li. Scalable graph hashing with feature transformation. In Proceedings of the IJCAI, 2015.
 [14] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
 [15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the NeurIPS, 2012.
 [16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
 [17] C. Leng, J. Wu, J. Cheng, X. Bai, and H. Lu. Online sketching hashing. In Proceedings of the CVPR, 2015.
 [18] E. Liberty. Simple and deterministic matrix sketching. In Proceedings of the ACM SIGKDD, 2013.
 [19] M. Lin, R. Ji, H. Liu, X. Sun, S. Chen, and Q. Tian. Hadamard matrix guided online hashing. arXiv preprint arXiv:1905.04454, 2019.
 [20] M. Lin, R. Ji, H. Liu, X. Sun, Y. Wu, and Y. Wu. Towards optimal discrete online hashing with balanced similarity. In Proceedings of the AAAI, 2019.
 [21] M. Lin, R. Ji, H. Liu, and Y. Wu. Supervised online hashing via hadamard codebook learning. In Proceedings of the ACM MM, 2018.
 [22] R.-S. Lin, D. A. Ross, and J. Yagnik. Spec hashing: Similarity preserving algorithm for entropy-based coding. In Proceedings of the CVPR, 2010.
 [23] Z. Lin, G. Ding, M. Hu, and J. Wang. Semantics-preserving hashing for cross-view retrieval. In Proceedings of the CVPR, 2015.
 [24] H. Liu, R. Ji, Y. Wu, and F. Huang. Ordinal constrained binary code learning for nearest neighbor search. In Proceedings of the AAAI, 2017.
 [25] H. Liu, M. Lin, S. Zhang, Y. Wu, F. Huang, and R. Ji. Dense auto-encoder hashing for robust cross-modality retrieval. In Proceedings of the ACM MM, 2018.
 [26] W. Liu, J. Wang, R. Ji, Y. Jiang, and S.F. Chang. Supervised hashing with kernels. In Proceedings of the CVPR, 2012.
 [27] X. Liu, X. Nie, W. Zeng, C. Cui, L. Zhu, and Y. Yin. Fast discrete cross-modal hashing with regressing from semantic labels. In Proceedings of the ACM MM, 2018.
 [28] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 2008.
 [29] A. Sablayrolles, M. Douze, N. Usunier, and H. Jégou. How should we evaluate supervised hashing? In Proceedings of the ICASSP, 2017.
 [30] F. Shen, W. Liu, S. Zhang, Y. Yang, and H. Tao Shen. Learning binary codes for maximum inner product search. In Proceedings of the ICCV, 2015.
 [31] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. In Proceedings of the ICLR, 2015.
 [32] J. Wang, W. Liu, S. Kumar, and S.-F. Chang. Learning to hash for indexing big data: A survey. Proceedings of the IEEE, 2016.
 [33] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Proceedings of the NeurIPS, 2009.
 [34] L. Wu, H. Ling, P. Li, J. Chen, Y. Fang, and F. Zou. Deep supervised hashing based on stable distribution. IEEE Access, 2019.
 [35] X. Liu, X. Nie, et al. Modality-specific structure preserving hashing for cross-modal retrieval. In Proceedings of the IEEE ICASSP, 2018.
 [36] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Proceedings of the NeurIPS, 2014.