DeepHashing using Triplet Loss


Hashing is one of the most efficient techniques for approximate nearest neighbour search in large-scale image retrieval. Most existing techniques are based on hand-engineered features and do not always give optimal results. Deep convolutional neural networks have proven to generate very effective representations of images for various computer vision tasks, and inspired by this, several deep hashing models, such as that of Wang et al. (2016), have been proposed. These models are trained on the triplet loss function, which can give models superior representation capabilities. Building on the latest advances in training with the triplet loss, I propose new techniques that help deep hashing models train faster and more efficiently. The experimental results show that, using these more efficient triplet-loss training techniques, our model obtains a 5% improvement over the original work of Wang et al. (2016). Using a larger model and more training data, the proposed techniques should be able to improve performance further.


1 Introduction

Deep learning has been solving many very hard problems. It is being used in a growing number of fields, and in every field where it is applied, deep learning models perform far better than the previous methods. The main advantage of a deep, hierarchical model is that it learns robust and effective feature sets on its own, which are more effective than their hand-engineered counterparts.

The advent of the Internet, on the other hand, has created a large amount of image data that has to be curated and stored in a way that allows it to be effectively searched and retrieved. Hashing is one of the most popular and powerful techniques for Approximate Nearest Neighbour (ANN) search due to its computational and storage efficiency. Hashing aims to map high-dimensional image features into compact hash codes or binary codes, so that the Hamming distance between hash codes approximates the Euclidean distance between image features. Previous methods used hand-engineered features to convert images into hashes and store them in a database. Then, using ANN techniques, we can retrieve images similar to a query image from the database.
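To make the efficiency argument concrete, the following sketch shows why binary codes are so cheap to search: each hash is packed into a single integer, and the Hamming distance between two codes is one XOR plus a popcount. The 8-bit codes are toy data for illustration only.

```python
# Sketch: ranking database items by Hamming distance over packed binary codes.
def bits_to_int(bits):
    """Pack a list of 0/1 bits into a single integer hash code."""
    code = 0
    for bit in bits:
        code = (code << 1) | bit
    return code

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two packed hash codes (XOR + popcount)."""
    return bin(a ^ b).count("1")

def rank_by_hamming(query_code, db_codes):
    """Return database indices sorted by Hamming distance to the query."""
    return sorted(range(len(db_codes)), key=lambda i: hamming(query_code, db_codes[i]))

# Toy example with hypothetical 8-bit codes:
query = bits_to_int([1, 0, 1, 1, 0, 0, 1, 0])
db = [bits_to_int(b) for b in ([1, 0, 1, 1, 0, 0, 1, 1],   # distance 1
                               [0, 1, 0, 0, 1, 1, 0, 1],   # distance 8
                               [1, 0, 1, 1, 0, 0, 1, 0])]  # distance 0
print(rank_by_hamming(query, db))  # nearest first: [2, 0, 1]
```

In practice the codes are much longer (e.g. 48 bits) and the popcount is a single CPU instruction, which is what makes Hamming-space retrieval scale to very large databases.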

Using deep convolutional neural networks, we can therefore train models to be better hash functions: they learn complex features and relationships between images and translate them into highly effective vector representations. In this work I try to show that DCNNs trained on the triplet loss function, with a few special tricks, in fact become very effective at this.

2 Related Work

This work is mainly based on the works of Wang et al. (2016) and Li et al. (2015), who propose training a CNN to generate binary encodings for input images. These models were trained using the triplet loss of Hoffer and Ailon (2015), which takes an anchor image, a positive image, and a negative image, and tries to maximise the distance between the anchor and the negative while minimising the distance between the anchor and the positive. Schroff et al. (2015) built further on this idea, implemented it at large scale, and used it to identify faces. The major concern with the triplet loss function is that the number of training triplets grows cubically with the dataset size, but Hermans et al. (2017) showed how effective it is and also proposed an improved triplet loss function, which I have used for the experiments.
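The triplet loss described above can be sketched in a few lines: the loss is zero once the negative is farther from the anchor than the positive by at least the margin. The embeddings below are hypothetical toy vectors, not outputs of any trained model.

```python
import numpy as np

# A minimal sketch of the triplet loss (Hoffer and Ailon, 2015; Schroff et
# al., 2015): penalise triplets where the anchor-negative gap does not exceed
# the anchor-positive distance by at least a margin.
def triplet_loss(anchor, positive, negative, margin=1.0):
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(d_ap - d_an + margin, 0.0)     # zero once the margin is satisfied

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, close to the anchor
n = np.array([3.0, 0.0])   # different class, far from the anchor
print(triplet_loss(a, p, n))  # 0.0 -- this triplet already satisfies the margin
```

Minimising this loss over many triplets pulls same-class embeddings together and pushes different-class embeddings apart, which is exactly the geometry a hash function needs.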

3 Method

All experiments and tests were performed on the CIFAR-10 dataset, which contains 50k training images and 10k test images. I use a K-Nearest Neighbours (KNN) model to measure the accuracy of the embeddings generated by the trained model. The KNN model is fit on the entire training set, and a subset of images from the test set is used to calculate accuracy. The same KNN model is used to retrieve similar images, from which the Mean Average Precision (mAP) is calculated; query images are taken from the test set. The different experiments performed are explained below.
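The evaluation protocol above can be sketched as follows: a nearest-neighbour index over the training embeddings, a majority-vote KNN prediction for accuracy, and an average-precision score over the ranked retrieval list for mAP. The embeddings and labels below are toy data; the real experiments use CIFAR-10 features.

```python
import numpy as np

# Sketch of the evaluation protocol: KNN accuracy and average precision
# computed from Euclidean distances between embeddings.
def knn_predict(train_x, train_y, query, k=3):
    """Majority vote over the k nearest training embeddings."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

def average_precision(train_x, train_y, query, query_label):
    """Average precision of the full ranked retrieval list for one query."""
    order = np.argsort(np.linalg.norm(train_x - query, axis=1))
    relevant = (train_y[order] == query_label).astype(float)
    hits = np.cumsum(relevant)
    precisions = hits / np.arange(1, len(order) + 1)
    return float((precisions * relevant).sum() / max(relevant.sum(), 1))

# Toy "database" of four embeddings from two classes:
train_x = np.array([[0.0], [0.1], [5.0], [5.1]])
train_y = np.array([0, 0, 1, 1])
print(knn_predict(train_x, train_y, np.array([0.05]), k=2))      # 0
print(average_precision(train_x, train_y, np.array([0.05]), 0))  # 1.0
```

Averaging `average_precision` over all test queries gives the mAP figures reported in Table 1.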

  1. First, I compared various pretrained models, such as VGG by Bengio and LeCun (2015) and ResNet by He et al. (2015), to evaluate their basic representation capabilities. The features from the last layer (just before classification) were used for evaluation, and accuracy was measured following the steps described above. The results are in Table 1.

  2. Next, to evaluate the performance of triplet training, I implemented the triplet loss proposed by Schroff et al. (2015) and Hermans et al. (2017) and trained a model initialised with the pretrained weights of the ResNet-18 model. The standard evaluation methods were used, and the results are in Table 1.

  3. The major caveat of training a model with the triplet loss is the very large effective dataset: the number of training triplets grows cubically with the number of training images. Schroff et al. (2015) do mention a few online and offline hard-mining techniques, but these require explicitly finding the images that maximise the loss. Inspired by the work of Hermans et al. (2017), I instead implemented a variation of the triplet loss that takes in batches of images from every class and mines triplets within each batch to compute the loss.

    I used three variations of this triplet selection. SemiHardNegative takes triplets that have a loss greater than zero but less than the margin α; HardNegative takes the triplet with the maximum loss in each batch; and RandomNegative takes random triplets that have a loss greater than zero.

  4. The above experiments showed that the image representation capabilities of a CNN trained on the triplet loss are very good. So, building on the work of Wang et al. (2016), I created a CNN that can directly generate the binary codes for the images, which can later be used for image retrieval. The loss function of the model is the triplet loss combined with a quantization error term, where α is the triplet margin and β weights the quantization error.
    The hyperparameters used by Wang et al. (2016) were α=16 and β=100, but I found this combination to be non-converging. So we turned off the quantization error completely at the start of training and set β=10 after 15 epochs. For the α parameter, I tried a new approach of slowly increasing its value every 3 epochs. This gave some interesting properties, as discussed in Section 4.
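The three triplet-selection strategies described in the method above can be sketched as follows. This is my reading of the batch-based selection, not the exact implementation; for one anchor we look at all candidate negatives in the batch and pick, per strategy, which triplet contributes to the loss. The distances and margin are hypothetical.

```python
import numpy as np

# Sketch of per-anchor negative selection within a batch. `d_ap` is the
# anchor-positive distance, `d_an_all` the anchor-negative distances for
# every candidate negative in the batch, and `margin` plays the role of alpha.
def select_negative(d_ap, d_an_all, margin, strategy,
                    rng=np.random.default_rng(0)):
    losses = d_ap - np.asarray(d_an_all) + margin
    if strategy == "hard":            # HardNegative: maximum-loss triplet
        return int(np.argmax(losses))
    if strategy == "semihard":        # SemiHardNegative: 0 < loss < margin
        ok = np.flatnonzero((losses > 0) & (losses < margin))
    else:                             # RandomNegative: any loss > 0
        ok = np.flatnonzero(losses > 0)
    return int(rng.choice(ok)) if ok.size else None

d_ap = 1.0                  # anchor-positive distance
d_an = [0.2, 1.3, 4.0]      # anchor-negative distances in the batch
print(select_negative(d_ap, d_an, 1.0, "hard"))      # 0 (loss 1.8, the largest)
print(select_negative(d_ap, d_an, 1.0, "semihard"))  # 1 (loss 0.7, inside (0, margin))
```

Because mining happens inside each batch, the cubic blow-up of enumerating all triplets offline is avoided: only a batch-sized set of candidates is ever scored.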

4 Results

In Experiment 1, I observed that the features from the last layers of the pretrained models do have some representation capability. In the first plot of Figure 1 we can observe that classes sharing similar properties, such as automobiles and trucks, or deer and horses, are in fact clustered together. There is also an improvement in representation capability as the size of the model increases, so larger models will have better representation capabilities.

In Experiment 2, we can see a fair improvement in the representation and classification capabilities of the model. However, the training takes a long time to converge and is not optimal. Also, the additional embedding layer that was added seems to bring no additional benefit.

Using a better variant of the triplet loss that incorporates hard mining is also very effective for training the models. Both in time and in performance, these models significantly improved representation and classification capabilities. The SemiHardNegative triplet selector shows very fast convergence compared to the RandomNegative triplet selector, but Figure 2 shows that the latter seems to have a better representation (t-SNE plots), though there were no significant differences in measured performance.

Figure 1: Plot 1 - the t-SNE plot of the last feature layer from ResNet-101. Plot 2 - t-SNE plot after training the ResNet-18 model using the triplet loss. Plots 3 and 4 - t-SNE plots showing the embeddings learned after the ResNet-18 model was trained using the SemiHardNegative and RandomNegative hard-mining losses respectively.

All these results strongly support the capability of CNN models to generate binary encodings that can represent images in image retrieval systems. The results of the DeepHash model also support this, but they differed from what was observed in the work of Wang et al. (2016). With a high quantization weight of β=100, or even 10, the model failed to converge. I observed that such a high weight promoted premature optimisation of the quantization error, and hence the model was not able to learn any useful representations. Slowly increasing the parameters during training was a much more effective way to optimise both parts of the loss function. Some representation capability is lost to the quantization error since the weight is kept low, but I suspect this can be reduced further by training the model longer. In Plot 2 of Figure 2 we can see small dense clusters, but many of the data points are scattered about.
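The interplay between the sign binarisation and the quantization error discussed above can be sketched as follows. The squared-distance form of the quantization term and the exact schedule values are assumptions for illustration; the embedding values are toy data.

```python
import numpy as np

# Sketch of the two-part hashing loss: a quantization term pushes each
# embedding coordinate toward +/-1 so that applying sign() loses little
# information, and its weight beta is kept at 0 during a warm-up phase.
def quantization_error(embedding):
    """Mean squared distance of each coordinate from the nearest of +1/-1."""
    return float(np.mean((np.abs(embedding) - 1.0) ** 2))

def beta_schedule(epoch, warmup_epochs=15, beta_after=10.0):
    """Quantization weight: off during warm-up, then fixed (assumed schedule)."""
    return 0.0 if epoch < warmup_epochs else beta_after

emb = np.array([0.9, -1.1, 0.2, -0.8])       # hypothetical embedding
print(np.sign(emb).astype(int).tolist())     # binary code: [1, -1, 1, -1]
print(quantization_error(emb))               # 0.175 -- the 0.2 coordinate dominates
print(beta_schedule(3), beta_schedule(20))   # 0.0 10.0
```

Coordinates far from ±1 (like 0.2 above) are exactly where sign() destroys information, which is why turning the quantization term on too early, or weighting it too heavily, traps the model before it has learned useful representations.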

Figure 2: The t-SNE plots of the hashing CNN model. The first plot is the direct output of the model without applying the sign function, and the second is plotted after applying the sign function.
Model                                                        KNN Accuracy   mAP
VGG16                                                        0.5726         0.4795
VGG19                                                        0.5926         0.5155
ResNet-18                                                    0.5719         0.4619
ResNet-50                                                    0.5306         0.4586
ResNet-101                                                   0.5540         0.5171
ResNet trained on triplet loss with margin 1                 0.7160         0.6121
ResNet trained on triplet loss with margin 2                 0.7006         0.6115
ResNet trained on triplet loss, additional embedding layer   0.6373         0.6168
ResNet trained using the SemiHardNegative triplet selector   0.8506         0.8680
ResNet trained using the HardNegative triplet selector       0.3653         0.2944
ResNet trained using the RandomNegative triplet selector     0.8426         0.8676
ResNet hashing model, 15 epochs, α=16, β=0                   0.7920         0.8483
ResNet hashing model, 3×5 epochs, α=1 to 16, β=0             0.8190         0.7955
Table 1: Results of Experiments 1-3, showing the representation and retrieval capabilities of the various models.

5 Conclusion

The results show that a very effective CNN model for creating hash codes of images can be trained end-to-end using the model proposed in Experiment 4, and it achieves good accuracy. The ResNet-18 architecture was used for the CNN model here, but the results suggest that a deeper model, trained longer, would be able to create even more effective hashes for the images.


  1. The source code is available on GitHub.


  1. Y. Bengio and Y. LeCun (Eds.). 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  2. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR abs/1512.03385.
  3. A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. CoRR abs/1703.07737.
  4. E. Hoffer and N. Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pp. 84-92.
  5. W.-J. Li, S. Wang, and W.-C. Kang. Feature learning based deep supervised hashing with pairwise labels. CoRR abs/1511.03855.
  6. F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: a unified embedding for face recognition and clustering. CoRR abs/1503.03832.
  7. X. Wang, Y. Shi, and K. M. Kitani. Deep supervised hashing with triplet labels. CoRR abs/1612.03900.