Fast Training of Triplet-based Deep Binary Embedding Networks

Published at Proc. IEEE Conference on Computer Vision and Pattern Recognition 2016. Code can be downloaded at http://bit.ly/2asfI14.

Abstract

In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because it has been shown to be highly effective for ranking problems. However, training in previous works can be prohibitively expensive because optimization is performed directly on the triplet space, where the number of possible triplets is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary code learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. In the first stage, we design a large-scale high-order binary code inference algorithm that reduces the high-order objective to a standard binary quadratic problem, so that graph cuts can be used to efficiently infer the binary codes, which serve as the labels of each training datum. In the second stage, we map the original images to compact binary codes via carefully designed deep convolutional neural networks (CNNs), and the hash function fitting can be solved by training binary CNN classifiers. An incremental/interleaved optimization strategy ensures that the two stages interact during training for better accuracy. We conduct experiments on several benchmark datasets, demonstrating both improved training time (by as much as two orders of magnitude) and state-of-the-art hashing performance on various retrieval tasks.

1Introduction

With the rapid growth of big data, large-scale nearest neighbor search with binary hash codes has attracted increasing attention. Hashing methods aim to map the original features to compact binary codes that preserve the semantic structure of the original features in the Hamming space. Compact binary codes are well suited to efficient data storage and fast search.

A few hashing methods in the literature incorporate the triplet ranking loss to learn codes that preserve relative similarity relations [22]. These works usually define a triplet ranking loss and then solve an expensive optimization problem. For instance, Lai et al. [15] and Zhao et al. [39] map original features into binary codes via deep convolutional neural networks (CNNs). Both use a triplet ranking loss designed to preserve relative similarities, with the key difference being the exact form of the loss function used. Similarly, FaceNet [25] uses the triplet loss to learn a real-valued compact embedding of faces. All these methods suffer from huge training complexity, because they directly train the CNNs using the triplets, the number of which scales cubically with the number of images in the training set. For example, the training of FaceNet [25] took a few months on Google's computer clusters. Other work like [32] simply subsamples a small subset of triplets to reduce the computational complexity.

To address this issue, we employ a collaborative two-step approach, originally proposed in [18], to avoid directly learning hash functions based on the triplet ranking loss. This two-step approach enables us to convert triplet-based hashing into an efficient combination of solving binary quadratic programs and learning conventional CNN classifiers. Hence, we do not need to directly optimize the loss function over a huge number of triplets to learn deep hash functions. The result is an algorithm with computational complexity that is orders of magnitude lower than existing work such as [39], but without sacrificing accuracy.

The two-step approach to hashing advocated by [17] uses decision trees as hash functions in combination with efficient binary code inference methods. Our work differs as follows. The work in [17] only preserves pairwise similarity relations, which do not directly encode the relative semantic similarity relationships that are important for ranking-based tasks. In contrast, we use a triplet-based ranking loss to preserve relative semantic relationships. However, it is not trivial to extend the first step (binary code inference) of [17] to triplet-based loss functions. The binary quadratic problem (BQP) formulated in [17] can be viewed as a pairwise Markov random field (MRF) inference problem, while in our case we need to solve large-scale high-order MRF inference. We propose an efficient high-order binary code inference algorithm, in which we equivalently convert the binary high-order inference into a second-order binary quadratic problem, to which a block search method based on graph cuts can be applied. In the second step of hash function learning, the work of [17] relies on training classifiers such as linear SVMs or decision trees on handcrafted features. We instead fit deep CNNs with incremental optimization to simultaneously learn feature representations and hash codes.

Our contributions are summarized as follows.

  • To address the prohibitively high computational complexity of triplet-based binary code learning, we propose a new efficient and flexible framework for interactively inferring binary codes and learning the deep hash functions, using a triplet-based loss function. We show how to convert the high-order loss introduced by the triplets into a binary quadratic problem that can be optimized efficiently in the manner of [17], using block-coordinate descent with graph cuts. To learn the mapping from images to hash codes, we design deep CNNs capable of preserving the semantic ranking information of the data.

  • We propose a novel incremental group-wise training approach that interleaves finding groups of bits of the hash codes with learning the hash functions. We show experimentally that this approach improves the quality of the hash functions while retaining the advantage of efficient training.

  • We demonstrate that our method outperforms many existing state-of-the-art hashing methods on several benchmark datasets by a large margin. We also demonstrate our hashing method in the context of a face search/retrieval system. We achieve the best reported results on face search under the IJB-A protocol.

1.1Related work

Hashing methods may be roughly categorized into data-dependent and data-independent schemes. Data-independent methods [6] focus on using random projections to construct random hash functions. The canonical example is locality-sensitive hashing (LSH) [6], which guarantees that metric similarity is preserved for sufficiently long codes based on random projections. Recent research focus has shifted to data-dependent methods, which learn hash functions in either an unsupervised, semi-supervised, or supervised fashion. Unsupervised hashing methods [2] try to map the original features into the Hamming space while preserving similarity relations between the original features, using unlabelled data. Supervised methods [5] use labelled training data for the similarity relations, aiming to preserve the "ground truth" similarity in the hash codes. Semi-supervised hashing methods incorporate ground-truth similarity information for the subset of the training data for which it is available, but also use unlabelled data. Our proposed method belongs to the supervised hashing framework.

Recently, hashing using deep learning has shown great promise. The authors of [39] learn hash bits such that multilevel semantic similarities are kept, taking raw pixels as input and training a deep CNN. This has the effect of simultaneously learning an image feature representation (in the early layers of the network) and the hash bits, which are obtained by thresholding the outputs of the last network layer, or hash layer, at 0.5. Note that these methods suffer from the high computational complexity introduced by the triplet ranking loss. In contrast, our proposed method is much more efficient in training, as shown in our experiments.

2The proposed approach

Our general problem formulation is as follows. Let {(x_i, x_i^+, x_i^-)} be a set of training triplets, in which x_i is the i-th training sample and, under some semantic similarity measure, x_i is semantically more similar to x_i^+ than to x_i^-. Let z_i ∈ {-1, 1}^m denote the m-bit hash code of image x_i, and write z_i^+ and z_i^- for the codes of x_i^+ and x_i^-, respectively. Our goal is to learn embedding hash functions that preserve the relative similarity ranking order of the images after they are mapped into the binary Hamming space. For that purpose, we define a general form of loss function:

min_Z Σ_i L(z_i, z_i^+, z_i^-).    (Equation 1)

Here Z is the matrix that collects the binary codes of all the data points and m is the bit length; L(·, ·, ·) is a triplet loss function.

Unlike approaches such as [39], our method shares the advantage of [18] that it is not tied to a specific form of the loss. A typical example of a loss that could be used is the hinge ranking loss:

L(z_i, z_i^+, z_i^-) = max(0, 1 + d_H(z_i, z_i^+) - d_H(z_i, z_i^-)).    (Equation 2)

Here d_H(·, ·) is the Hamming distance.
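As a minimal illustration of this loss, consider the following sketch (the function names and the unit margin are our own choices for the example; codes here are in {0, 1}):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary codes in {0, 1}^m."""
    return int(np.sum(a != b))

def triplet_hinge_loss(z, z_pos, z_neg, margin=1):
    """Hinge ranking loss: penalise triplets where the similar pair
    is not closer (in Hamming distance) than the dissimilar pair."""
    return max(0, margin + hamming(z, z_pos) - hamming(z, z_neg))

z     = np.array([0, 1, 1, 0])
z_pos = np.array([0, 1, 1, 1])  # distance 1 to z
z_neg = np.array([1, 0, 1, 0])  # distance 2 to z
print(triplet_hinge_loss(z, z_pos, z_neg))  # 0: triplet correctly ranked
```

Swapping the positive and negative codes makes the triplet incorrectly ranked, and the loss becomes positive.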

We propose an approach to learning binary hash codes that proceeds in two stages. The first stage uses the labelled training data to infer a set of binary codes in which the Hamming distance between codes preserves the semantic ranking between triplets of data. The second stage uses deep CNNs to learn the mapping from images to the binary code space (i.e. to learn the hash functions). A similar two-stage approach was advocated in [17], but that work used only pairwise data and used boosted decision trees rather than deep CNNs to learn the hash functions.

There are various difficulties associated with the direct application of triplet losses, and of CNNs, to this problem. First, the binary code learning stage requires optimization of Eq. (Equation 1), which is in general NP-hard. In Section 3, we describe how to infer binary codes under a triplet ranking loss by reducing the problem to a binary quadratic program. The use of triplets considerably complicates this process, and so this is one of our significant contributions in this paper. Second, while the two-stage approach gains significantly in training time, it has the disadvantage that the learning of the codes and of the hash functions do not interact and therefore cannot be mutually beneficial. We propose a method that interleaves the code and hash function learning over groups of bits, a process that retains much of the training efficiency but improves the quality of the codes and hash functions considerably. We explain our use of CNNs and this interleaved and incremental learning in Section 4 below.

3Inference for binary codes with triplet ranking loss

Since simultaneously inferring multiple bits is intractable, inspired by the work of [17] we sequentially solve for one bit at a time, conditioning on the previously inferred bits. When solving for the r-th bit, the previous r − 1 bits are fixed. The binary inference problem becomes minimization of the following objective:

where ℓ_r(·) is the loss for the r-th bit conditioned on the previous bits; z_{r,i} is the r-th bit of the binary code of the i-th data point, conditioned on the vector of the previous r − 1 bits of that data point.

3.1Solving high-order binary inference problem

Directly optimizing a loss function that involves high-order relations (beyond pairwise relations) as in Eq. (Equation 3) is difficult, since the optimization involves an extremely large number of triplets and can be computationally intractable. To address this problem, we show here how to convert the high-order inference task into a second-order problem that is much more tractable. The key "special properties" of the binary space that we rely on are: (i) the possibility of enumerating all possible inputs (there are 2^3 = 8 for a triplet of bits); (ii) the symmetry of the Hamming distance. Based on this, the triplet loss can be decomposed into a set of second-order combinations as:

where the a's are the coefficients of the corresponding second-order combinations. We then show that there exists a solution for these coefficients that makes this a valid decomposition. Ignoring the redundant terms in Eq. (Equation 4), it can be rewritten as

A triplet of bits has 8 possible input combinations (or equivalently 8 possible value combinations), leading to 8 constraints of the form of (Equation 5). Because the loss is defined on the Hamming distance/affinity, flipping the sign of every input leaves the value of the loss unchanged, so some of these combinations lead to redundant constraints. Eliminating all redundant combinations leaves only four independent equations of the form (Equation 5). Stacking these so that each forms a row of a matrix yields the following set of equations:

which can easily be inverted to yield the unique solution for the coefficients. This shows that, for a given triplet loss function, we can decompose it into a set of pairwise terms for each triplet.
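The decomposition can be sketched numerically as follows (the choice of representative inputs, the coefficient ordering, and the single-bit hinge loss used for the demonstration are our own illustrative choices):

```python
import numpy as np
from itertools import product

def decompose(loss):
    """Given a triplet loss on bits (z1, z2, z3) in {-1, 1}^3 that is
    invariant to flipping the sign of all inputs, solve for coefficients
    (a0, a12, a13, a23) such that
        loss(z1, z2, z3) = a0 + a12*z1*z2 + a13*z1*z3 + a23*z2*z3.
    Four sign-inequivalent inputs give four independent equations."""
    reps = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]
    A = np.array([[1, z1*z2, z1*z3, z2*z3] for z1, z2, z3 in reps], float)
    b = np.array([loss(*r) for r in reps], float)
    return np.linalg.solve(A, b)

# Example: hinge ranking loss restricted to a single bit per code.
def hinge(z, zp, zn):
    # Per-bit Hamming distance is (1 - z*z') / 2 for codes in {-1, 1}.
    d_pos = (1 - z * zp) / 2
    d_neg = (1 - z * zn) / 2
    return max(0.0, 1 + d_pos - d_neg)

a = decompose(hinge)
# The decomposition must reproduce the loss on all 8 input combinations.
for z1, z2, z3 in product([-1, 1], repeat=3):
    val = a[0] + a[1]*z1*z2 + a[2]*z1*z3 + a[3]*z2*z3
    assert abs(val - hinge(z1, z2, z3)) < 1e-9
```

The 4×4 system is a Hadamard-type matrix, so the solution is unique, matching the claim in the text.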

We now seek a solution for the r-th bit of the code of every data point that optimizes the triplet relations. Because the triplet relations are now encoded as pairwise relations, we can solve for this bit as follows. We define a weight matrix W in which the (i, j)-th element, w_ij, represents a relation weight between the i-th and j-th training points. Specifically, each element of W is computed as

where the coefficients correspond to the pair (i, j). There will be one such coefficient for every triplet in which data points i and j both appear.

The triplet optimization problem in Eq. (Equation 3) can now be equivalently formulated as

Note that the coefficient matrix W is sparse and symmetric; therefore Eq. (Equation 7) is a standard binary quadratic problem. Although we have now shown how to convert the third-order objective of Eq. (Equation 3) into a second-order formulation amenable to BQP, a further issue remains: the quadratic objective above contains non-submodular terms and is therefore difficult to optimize.
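The accumulation of per-triplet pairwise coefficients into W can be sketched as follows (the dense matrix, the coefficient ordering, and the function name are illustrative; in practice W would be stored sparsely):

```python
import numpy as np

def build_weight_matrix(triplets, coeffs, n):
    """Accumulate the pairwise coefficients of every triplet (i, j, k)
    into a symmetric weight matrix W of size n x n, where
    coeffs = (a12, a13, a23) weights the pairs (i, j), (i, k), (j, k)
    of each triplet."""
    a12, a13, a23 = coeffs
    W = np.zeros((n, n))  # a sparse matrix in a real implementation
    for i, j, k in triplets:
        for (p, q), a in (((i, j), a12), ((i, k), a13), ((j, k), a23)):
            W[p, q] += a
            W[q, p] += a  # keep W symmetric
    return W

# Toy example: two triplets over 4 data points.
W = build_weight_matrix([(0, 1, 2), (0, 1, 3)], (-0.5, 0.5, 0.0), n=4)
print(W[0, 1])  # -1.0: the pair (0, 1) appears in both triplets
```

Because every triplet contributes to only three pairs, W stays sparse when the number of triplets is far below n².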

To address this, we follow the proposal in [17], which creates a set of sub-problems (or "blocks"), each involving a subset of the variables in which the pairwise relations are all submodular. The sub-problems are then solved in turn, treating the variables not involved in the current block as constants. The inference problem for one block is written as

where B denotes the block of variables to be optimized. Since the inference problem for one block is submodular, we can solve it efficiently using graph cuts.

Algorithm ( ?) details how the blocks are defined. It is subtly different from [17]: because we are using a triplet loss, the criterion for inclusion in a block is chosen so as to guarantee submodularity for all pairs in the block.
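A sketch of a greedy block construction, under the assumption that a non-positive pairwise weight w_ij makes the term w_ij · y_i · y_j submodular for y ∈ {-1, 1} (the greedy strategy and the names are our own simplification of the block search used by the two-step framework):

```python
import numpy as np

def greedy_blocks(W):
    """Greedily group variables into blocks such that every pair inside
    a block has a non-positive weight (our assumed submodularity
    condition for the pairwise terms w_ij * y_i * y_j, y in {-1, 1})."""
    n = W.shape[0]
    unassigned = set(range(n))
    blocks = []
    while unassigned:
        seed = min(unassigned)
        block = [seed]
        unassigned.remove(seed)
        for v in sorted(unassigned):
            # Admit v only if it is compatible with every current member.
            if all(W[v, u] <= 0 for u in block):
                block.append(v)
        for v in block[1:]:
            unassigned.remove(v)
        blocks.append(block)
    return blocks

W = np.array([[ 0., -1.,  2.],
              [-1.,  0., -1.],
              [ 2., -1.,  0.]])
print(greedy_blocks(W))  # [[0, 1], [2]]
```

Each block then defines a submodular sub-problem that a graph-cut solver can minimize exactly while the variables outside the block are held fixed.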

3.2Loss function

The discussion above provides a general framework for learning the binary codes using a triplet loss, but is agnostic to the exact form of the loss. In the experiments reported in this paper, we use the triplet-based hinge loss function defined in Eq. (Equation 2):

where,

4Deep hash functions learning

Our general scheme now requires that we learn hash functions that map from data points to binary codes. We propose to do this using deep CNNs, because they have repeatedly been shown to be very effective for similar tasks. The straightforward approach is then to use the training samples and their inferred codes as the labelled training set for a standard CNN. As we have noted, this two-stage approach yields significant training-time gains.

However, a major disadvantage is that the binary codes are determined independently of the hash functions, so the hash functions have no opportunity to influence the choice of binary codes. Ideally, these stages would interact so that the choice of binary hash codes is influenced not only by the ground-truth relative similarity relations but also by how hard the training points are to fit.

To address this, we propose an interleaved process where we infer a group of bits within a code, followed by learning suitable hash functions for that set of bits and its predecessors, followed in turn by inference of the next group of bits, and so on. This provides a compromise between independently learning the codes and hash functions, and a more end-to-end – but very expensive – approach such as [15].

4.1Incremental optimization

Our key idea here is to optimize the hashing framework in an incremental, group-wise manner. More specifically, we assume there are Q groups of bits and each group has g bits (e.g., for 64-bit codes we may break this into 8 groups of 8 bits each). For convenience, we refer to the inference of the k-th group of binary codes followed by the learning of the deep hash functions as the "k-th training stage". In the k-th training stage, we first infer the bits of the k-th group one bit at a time (as described in Section 3) and then train the network parameters to minimize the cross-entropy loss:

where 1(·) is the indicator function. At the k-th stage we target the first k·g bits of the code; the sigmoid output of the last layer for each training sample is compared against the corresponding bit of the binary code obtained from the inference step, which serves as the target label of the multi-label classification problem above. Note that in the k-th training stage, the bits from all groups inferred so far are used to guide the learning of the deep hash functions.
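The stage-wise objective can be sketched as follows (array shapes and names are our assumptions; in the paper this loss is minimized by the CNN training itself):

```python
import numpy as np

def group_cross_entropy(probs, codes, k, g):
    """Multi-label cross-entropy over the first k*g bits (the bits
    inferred up to the k-th training stage).
    probs: sigmoid outputs in (0, 1), shape (n, m);
    codes: target bits in {0, 1} from the inference step, shape (n, m)."""
    d = k * g
    p, y = probs[:, :d], codes[:, :d]
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Confident, correct predictions give a small loss ...
good = group_cross_entropy(np.full((2, 8), 0.99), np.ones((2, 8)), k=2, g=4)
# ... while uninformative predictions (0.5 everywhere) give log 2.
flat = group_cross_entropy(np.full((2, 8), 0.5), np.ones((2, 8)), k=2, g=4)
print(good < flat)  # True
```

Restricting the loss to the first k·g columns is what lets each stage train only on the bits that have already been inferred.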

Having completed training the hash functions, we then update the binary codes for all groups with the outputs of the learned hash functions. The effect of this is to ensure that the error in the learned hash functions influences the inference of the next group of hash bits.

This incremental training approach adaptively regulates the binary codes according to both the fitting capability of the deep hash functions and the properties of the training data, steadily improving the quality of hash codes and the final performance. Finally, we summarize our hashing framework in Algorithm ?.

4.2Network architecture

The network for learning the deep hash functions consists of multiple convolutional, pooling, and fully connected layers (we follow the VGG-16 model), plus a multi-label loss layer for multi-label classification.

We use the pre-trained VGG-16 [28] model, trained on the large-scale ImageNet dataset, for initialization. The convolution-pooling and fully connected layers capture mid-level image representations. The output of the last fully connected layer is mapped to a multi-label layer as the feature representation. The neurons in the multi-label layer are then activated by a sigmoid function, so that the activations approximate binary values, followed by the cross-entropy loss of Eq. (Equation 10) for multi-label classification.
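At test time, the binary codes are read off the sigmoid activations by thresholding at 0.5, which can be sketched as (function name assumed):

```python
import numpy as np

def hash_bits(activations):
    """Threshold sigmoid activations at 0.5 to obtain binary hash bits."""
    return (np.asarray(activations) >= 0.5).astype(np.uint8)

print(hash_bits([0.1, 0.7, 0.5, 0.49]))  # [0 1 1 0]
```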

5Experiments

Experimental settings

We test the proposed hashing method on two multi-class datasets, one multi-label dataset, and one face retrieval dataset. For the multi-class datasets, we use the MIT Indoor dataset [23] and the CIFAR-10 dataset [12]. The MIT Indoor dataset contains 67 indoor scene categories and 6,700 images for evaluation. CIFAR-10 contains 60,000 small images in 10 classes. For multilevel similarity measurement, we test our method on the multi-label dataset NUS-WIDE [4], a large database containing 269,648 images annotated with 81 concepts. We compare search accuracies with four recent state-of-the-art hashing methods: SFHC [15] (a recent deep CNN method), FSH [17] (a two-step hashing approach using decision trees), KSH [19], and ITQ [7].

For fair comparison, we evaluate the compared hashing methods FSH, KSH, and ITQ on features obtained from the activations of the last hidden layer of the VGG-16 model pre-trained on the ImageNet ILSVRC-2012 dataset [24]. We find that using deep CNN features generally improves the performance of these three hashing methods compared with their original proposals. We initialize our CNN with the pre-trained model and fine-tune the network on the corresponding training set.

Again for fair comparison, for the deep CNN approach SFHC we replace its network structure (convolution-pooling, fully connected layers) with the VGG-16 model and train the network end-to-end with the triplet hinge loss used in the original paper. We implement SFHC using Theano [1] and train the model on two GeForce GTX Titan X GPUs. Triplet samples are randomly generated during training, following [15].

For the NUS-WIDE dataset, we construct two comparison settings, setting-1 and setting-2. For setting-1, following the previous work [15], we consider the 21 most frequent tags, and similarity is defined by whether two images share at least one common tag. For setting-2, we use the similarity-precision evaluation metric to evaluate pairwise and triplet performance. As in [32], similarity precision is defined as the percentage of triplets that are correctly ranked.

Given a triplet (x_i, x_i^+, x_i^-), in which x_i is more similar to x_i^+ than to x_i^-, we take x_i as the query; if x_i^+ is ranked higher than x_i^-, then the triplet is correctly ranked. We first randomly sample 1,000 probe images from all the data sharing the 21 attributes selected in setting-1. We then obtain a ranking list for each probe image according to how many attributes it shares with the data, and randomly generate 50 triplets per probe image according to the ranking list to form the test set. For the triplet-based methods, the sampled training data is the same as in setting-1. For the compared pairwise-based methods, we directly use the hash functions learned in setting-1, since semantic ranking information cannot be incorporated into the pairwise-based inference pipeline. For CIFAR-10 and NUS-WIDE setting-1, we use the same experimental setting as described in [15].
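The similarity-precision metric described above can be computed as follows (names and shapes are our assumptions):

```python
import numpy as np

def similarity_precision(query, pos, neg):
    """Fraction of triplets in which the positive code is closer to the
    query (in Hamming distance) than the negative code."""
    d_pos = np.sum(query != pos, axis=1)
    d_neg = np.sum(query != neg, axis=1)
    return float(np.mean(d_pos < d_neg))

# Two toy triplets of 4-bit codes: the first is correctly ranked,
# the second is not, so the precision is 0.5.
q = np.array([[0, 0, 0, 0], [1, 1, 1, 1]])
p = np.array([[0, 0, 0, 1], [1, 1, 0, 0]])
n = np.array([[1, 1, 1, 0], [1, 1, 1, 0]])
print(similarity_precision(q, p, n))  # 0.5
```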

We use two evaluation metrics: Mean Average Precision (MAP) and the precision of the top-K retrieved examples (Precision), where K is set to 100 for CIFAR-10 and NUS-WIDE setting-1 and to 80 for the MIT Indoor dataset. For NUS-WIDE setting-1, we calculate the MAP values within the top 5,000 returned neighbors. The results are presented in Figure ? and Figure ?.

5.1Implementation details

We implement the network training based on the CNN toolbox Theano. Training is done on a standard desktop with a GeForce GTX TITAN X with 12GB memory. In all experiments, we set the mini-batch size for gradient descent to 50, momentum 0.9, weight decay 0.0005 and dropout rate 0.5 on the fully connected layer to avoid over-fitting. The number of binary codes per group is set to 8.

5.2Analysis of retrieval results

On all three datasets, our proposed method shows superior performance in terms of the MAP and precision metrics compared to the most closely related works, SFHC (deep CNN) and FSH (two-step hashing with boosted trees). As expected, our method trains much faster than SFHC; the results are summarized in Table ?. Rather than simply learning the hash functions end-to-end, our method couples hash function learning with a collaborative inference step, so that the image representation learning and the hash coding benefit each other through this feedback scheme.

Compared to FSH, the results demonstrate the effectiveness of incorporating relative similarity information as supervision. Note that FSH is based on pairwise information, while ours uses triplet-based ranking information to learn hash codes. The triplet loss may be better suited to retrieval tasks because it is directly linked to retrieval measures such as the AUC score. The pairwise loss used by FSH encourages all images in one category to be projected onto a single point in the Hamming space. The triplet loss instead maximizes a margin between each pair of same-category images and images from different categories. As argued in [25], this may enable images belonging to the same category to reside on a manifold while maintaining a distance from other categories.

5.3Triplet vs. pairwise

From the results shown in Figure ?, we can clearly observe the superiority of the triplet-based methods on the ranking-based evaluation metric. Thanks to the high-quality binary codes and the strong fitting capability of our deep model, our proposed method outperforms the pairwise methods by a large margin.

Since the two triplet-based methods (Ours-Triplet and SFHC) simultaneously learn feature representations and hash codes while considering the semantic ranking information, they are more likely to learn hash functions that are tailored for the ranking-based retrieval metric than the pairwise-based methods (Ours-pairwise and FSH).

5.4Evaluation of binary codes quality

Face search accuracies under the IJB-A protocol. Results for GOTS and OpenBR are quoted from [11]. Results are reported as the average ± standard deviation over the 10-fold cross-validation sets specified in the IJB-A protocol.

Algorithm              | Rank-1 | Rank-5 | FPIR = 0.1 | FPIR = 0.01
GOTS                   |        |        |            |
OpenBR                 |        |        |            |
Deep Face Search [31]  |        |        |            |
Proposed Method        |        |        |            |

We evaluate the binary code quality on the CIFAR-10, MIT Indoor, and NUS-WIDE setting-1 datasets (see Figure ?). To evaluate the effectiveness of the binary code inference pipeline, we infer 64 binary bits without learning the deep hash functions. The training database is then used as both the probe set and the gallery set for evaluating the inference performance. For the three datasets, we calculate the MAP values within the returned neighbors. We observe that for CIFAR-10, the binary codes converge very quickly, at around the 10th bit. The MIT Indoor dataset converges slightly more slowly because it has more classes, but the binary codes can still perfectly separate the training samples of different classes. This is because the relations between training points are simple, owing to the multi-class similarity relationships. In contrast, due to the complicated relationships between the multi-label training samples, the accuracy on NUS-WIDE setting-1 keeps improving up to 64 bits and is lower than on the multi-class datasets. We can see that the code quality is directly related to the final retrieval performance. This makes sense: the deep hash functions are learned to fit the binary codes, so the performance of the inference pipeline has a direct impact on the quality of the learned deep hash functions.

5.5Face retrieval

We implement the face search application as follows. Data preprocessing. The preprocessing pipeline is: 1) detect the face region using a robust face detector [21] and find 68 face landmarks using the (state-of-the-art) face alignment algorithm [36]; 2) select the middle landmark between the two eyes and the middle landmark of the mouth as alignment anchor points, and align/scale the face image such that the distance between these landmarks is 40 pixels; 3) finally, crop a region around the mid-point of the two landmarks from step 2.

Face search accuracies of the proposed method under the IJB-A protocol using different numbers of bits per group.

Group length | Rank-1 | Rank-5 | FPIR = 0.1 | FPIR = 0.01
8 bits       |        |        |            |
32 bits      |        |        |            |
64 bits      |        |        |            |
128 bits     |        |        |            |

Supervised pre-training. We pre-train the VGG-16 [28] network (using Caffe [9]) to classify all 10,575 subjects in the CASIA dataset [37]. This dataset has 494,414 images of the 10,575 subjects, and we double the number of training examples by horizontal mirroring, making the feature representation more robust to pose variation.

We test the pre-trained model's discriminative power on the LFW verification task as follows. We use the last 4096-dimensional fully connected layer as the feature representation and then use PCA to compress it into a 160-dimensional feature vector. The CNN features are then centered and normalized for evaluation. Under the standard LFW [8] face verification protocol, for a single network using only cosine similarity, we achieve an accuracy of . Using the joint Bayesian method [3] for face verification, we achieve an accuracy of .

Despite using only publicly available training data and one single network, the performance of this model is competitive with state-of-the-art [25].

Face search. We then use the above pre-trained CNN model to initialize the deep CNN that models the hash functions of our proposed hashing method. We test face search performance on the IARPA Janus Benchmark-A (IJB-A) dataset [11], which contains 500 subjects with a total of 25,813 face images. This dataset contains many challenging face images and defines both verification and search protocols. The search task (1:N search) is defined in terms of comparisons between templates consisting of several face images, rather than single face images. For the search protocol, which evaluates both closed-set and open-set search performance, 10-fold cross-validation sets are defined over probe and gallery sets consisting of templates. Given an image from the IJB-A dataset, we first detect and align the face following the data preprocessing pipeline. After processing, the final training set consists of approximately 1 million faces and 1 billion randomly sampled triplets. Clearly, such a large-scale training set would render most existing triplet-based hashing methods computationally intractable. The deep hash functions are learned with the proposed two-step hashing framework. After the deep hash functions are learned, we generate 128-bit hash codes for each input face image for fast face retrieval. The definitions of CMC, FNIR, and FPIR are given in [31]. The results of the proposed method and the compared algorithms are reported in Table ?. In [31], a face is represented by the combined features extracted by 6 deep models; in our paper, 128-bit binary codes are directly extracted by a single deep model, which enjoys both faster search speed and smaller storage. Moreover, although the same training database is used, the search accuracies on both protocols demonstrate the effectiveness of our hashing framework.

5.6Evaluation of the incremental learning

We evaluate different group lengths in the incremental learning to demonstrate the effectiveness of this optimization strategy. We run the experiments on the face retrieval task described above, since it has sufficient training examples, and faces are difficult for the deep architecture to fit because of the relatively weak discriminative information they share. The results are reported in Table ?. From the results, we clearly see that a smaller group length corresponds to better search accuracies, supporting our assertion that incremental optimization improves both the code quality and the final performance.

6Conclusion

In this paper, we developed a general supervised hashing method with a triplet ranking loss for large-scale image retrieval. Instead of directly training on an extremely large number of triplet samples, we formulate learning of the deep hash functions as a multi-label classification problem, which allows us to learn deep hash functions orders of magnitude faster than previous triplet-based hashing methods. The deep hash functions are learned in an incremental scheme, in which the inferred binary codes are used to learn image representations, and the learned hash functions give feedback that boosts the quality of the binary codes. Experiments demonstrate the superiority of the proposed method over other state-of-the-art hashing methods.

References

  1. Theano: new features and speed improvements.
    F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio. arXiv preprint arXiv:1211.5590, 2012.
  2. Hashing with binary autoencoders.
    M. A. Carreira-Perpinan and R. Raziperchikolaei. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 557–566, 2015.
  3. Bayesian face revisited: A joint formulation.
    D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun. In Proc. Eur. Conf. Comp. Vis., pages 566–579. 2012.
  4. NUS-WIDE: a real-world web image database from National University of Singapore.
    T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng. In Proc. of the ACM Int. Conf. on Image and Video Retrieval., 2009.
  5. Deep hashing for compact binary codes learning.
    V. Erin Liong, J. Lu, G. Wang, P. Moulin, and J. Zhou. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2475–2483, 2015.
  6. Similarity search in high dimensions via hashing.
    A. Gionis, P. Indyk, R. Motwani, et al. In Proc. Int. Conf. Very Large Databases, volume 99, pages 518–529, 1999.
  7. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval.
    Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013.
  8. Labeled faces in the wild: A database for studying face recognition in unconstrained environments.
    G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Technical report, University of Massachusetts, Amherst, 2007.
  9. Caffe: Convolutional architecture for fast feature embedding.
    Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. In Proc. of the ACM Int. Conf. on Multimedia., pages 675–678, 2014.
  10. Revisiting kernelized locality-sensitive hashing for improved large-scale image retrieval.
    K. Jiang, Q. Que, and B. Kulis. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 4933–4941, 2015.
  11. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A.
    B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1931–1939, 2015.
  12. Learning multiple layers of features from tiny images.
    A. Krizhevsky. Technical report, 2009.
  13. Learning to hash with binary reconstructive embeddings.
    B. Kulis and T. Darrell. In Proc. Adv. Neural Inf. Process. Syst., pages 1042–1050, 2009.
  14. Kernelized locality-sensitive hashing for scalable image search.
    B. Kulis and K. Grauman. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2130–2137, 2009.
  15. Simultaneous feature learning and hash coding with deep neural networks.
    H. Lai, Y. Pan, Y. Liu, and S. Yan. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3270–3278, 2015.
  16. Learning hash functions using column generation.
    X. Li, G. Lin, C. Shen, A. Van den Hengel, and A. Dick. In Proc. Int. Conf. Mach. Learn., pages 142–150, 2013.
  17. Fast supervised hashing with decision trees for high-dimensional data.
    G. Lin, C. Shen, Q. Shi, A. van den Hengel, and D. Suter. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1971–1978, 2014.
  18. A general two-step approach to learning-based hashing.
    G. Lin, C. Shen, D. Suter, and A. van den Hengel. In Proc. IEEE Int. Conf. Comp. Vis., pages 2552–2559, 2013.
  19. Supervised hashing with kernels.
    W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2074–2081, 2012.
  20. Hashing with graphs.
    W. Liu, J. Wang, S. Kumar, and S.-F. Chang. In Proc. Int. Conf. Mach. Learn., pages 1–8, 2011.
  21. Face detection without bells and whistles.
    M. Mathias, R. Benenson, M. Pedersoli, and L. Van Gool. In Proc. Eur. Conf. Comp. Vis., pages 720–735. 2014.
  22. Hamming distance metric learning.
    M. Norouzi, D. M. Blei, and R. R. Salakhutdinov. In Proc. Adv. Neural Inf. Process. Syst., pages 1061–1069, 2012.
  23. Recognizing indoor scenes.
    A. Quattoni and A. Torralba. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 413–420, 2009.
  24. ImageNet large scale visual recognition challenge.
    O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Int. J. Comp. Vis., pages 1–42, 2015.
  25. FaceNet: A unified embedding for face recognition and clustering.
    F. Schroff, D. Kalenichenko, and J. Philbin. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 815–823, 2015.
  26. Supervised discrete hashing.
    F. Shen, C. Shen, W. Liu, and H. T. Shen. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 37–45, 2015.
  27. Inductive hashing on manifolds.
    F. Shen, C. Shen, Q. Shi, A. Van Den Hengel, and Z. Tang. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1562–1569, 2013.
  28. Very deep convolutional networks for large-scale image recognition.
    K. Simonyan and A. Zisserman. arXiv preprint arXiv:1409.1556, 2014.
  29. DeepID3: Face recognition with very deep neural networks.
    Y. Sun, D. Liang, X. Wang, and X. Tang. arXiv preprint arXiv:1502.00873, 2015.
  30. DeepFace: Closing the gap to human-level performance in face verification.
    Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1701–1708, 2014.
  31. Face search at scale: 80 million gallery.
    D. Wang, C. Otto, and A. K. Jain. arXiv preprint arXiv:1507.07242, 2015.
  32. Learning fine-grained image similarity with deep ranking.
    J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1386–1393, 2014.
  33. Distance metric learning for large margin nearest neighbor classification.
    K. Q. Weinberger and L. K. Saul. J. Mach. Learn. Res., 10:207–244, 2009.
  34. Multidimensional spectral hashing.
    Y. Weiss, R. Fergus, and A. Torralba. In Proc. Eur. Conf. Comp. Vis., pages 340–353. 2012.
  35. Spectral hashing.
    Y. Weiss, A. Torralba, and R. Fergus. In Proc. Adv. Neural Inf. Process. Syst., pages 1753–1760, 2009.
  36. Supervised descent method and its applications to face alignment.
    X. Xiong and F. De la Torre. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 532–539, 2013.
  37. Learning face representation from scratch.
    D. Yi, Z. Lei, S. Liao, and S. Z. Li. arXiv preprint arXiv:1411.7923, 2014.
  38. Bit-scalable deep hashing with regularized similarity learning for image retrieval.
    R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang. IEEE Trans. Image Proc., (12):4766–4779, 2015.
  39. Deep semantic ranking based hashing for multi-label image retrieval.
    F. Zhao, Y. Huang, L. Wang, and T. Tan. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1556–1564, 2015.