Random VLAD based Deep Hashing for Efficient Image Retrieval

Abstract

Image hash algorithms generate compact binary representations that can be quickly matched by Hamming distance, and have thus become an efficient solution for large-scale image retrieval. This paper proposes RV-SSDH, a deep image hash algorithm that incorporates the classical VLAD (vector of locally aggregated descriptors) architecture into neural networks. Specifically, a novel neural network component is formed by coupling a random VLAD layer with a latent hash layer through a transform layer. This component can be combined with convolutional layers to realize a hash algorithm. We implement RV-SSDH as a point-wise algorithm that can be efficiently trained by minimizing classification error and quantization loss. Comprehensive experiments show this new architecture significantly outperforms baselines such as NetVLAD and SSDH, and offers a cost-effective trade-off among the state-of-the-art. In addition, the proposed random VLAD layer achieves satisfactory accuracy with low complexity, and thus shows promising potential as an alternative to NetVLAD.

image hash, image retrieval, vector of locally aggregated descriptors, deep learning, neural network.

I Introduction

Content-based image search is one of the basic challenges in the management of massive multimedia data. Solutions typically rely on finding appropriate features which are robust and discriminative. Previously, numerous feature detectors and descriptors have been proposed, such as GIST [1], SIFT [2], BRIEF [3], etc. In the big data era, the amount of visual content is drastically increasing, and efficient image search has become a more demanding task. In order to have more compact feature representations, two approaches have been developed, which can be summarized as “aggregation” and “quantization”. The former transforms multiple descriptors into a concise form; the latter reduces the precision of a descriptor. Typical examples include the bag-of-features (BoF) representation [4] and the product quantization (PQ) [5]. Along these lines of work, this paper focuses on two subsequent techniques. The first is VLAD (vector of locally aggregated descriptors) [6]. This representation is related to BoF but carries more discriminative information. The other technique is called hashing [7], an active domain in recent years. It generally means to convert features into binary vectors, called hash values. The comparison of hash values can be extremely fast, because Hamming distance is used, which can be efficiently computed by the Exclusive Or (XOR) operation.

Given their advantages, aggregation and quantization can be combined. For example, BoF can be complemented by hashing [8]; VLAD can be compressed by PQ [6]. As the research community entered the deep learning era, both VLAD and hashing developed their neural network versions. Two representative works are NetVLAD [9] and SSDH (supervised semantic-preserving deep hashing) [10]. While they outperform their counterparts based on hand-crafted features, a question has naturally arisen – whether one can combine the advantages of the two. In this work, such an attempt is made. We propose a deep learning based hash algorithm which incorporates NetVLAD and SSDH. It is named RV-SSDH (random VLAD based SSDH). The core of the algorithm is a hash component that can be inserted into different neural networks (Fig. 1). The hash component consists of a random VLAD layer, a transform layer, and a hash layer. It is designed with the following properties: 1) RV-SSDH outperforms both SSDH and NetVLAD in terms of accuracy, speed, and compactness; 2) point-wise training is used for simplicity and versatility. In particular, a random VLAD layer is used instead of the original NetVLAD. It works better in our scenario, and shows promising potential as a general block in neural network design. To summarize, the contribution of this paper is two-fold:

  • The proposed RV-SSDH is a state-of-the-art algorithm with O(N) complexity;

  • The proposed random VLAD block is an interesting alternative to the NetVLAD block.

Extensive experiments have been performed. The results confirm that RV-SSDH outperforms SSDH and NetVLAD.

Fig. 1: Schematic diagram of the proposed RV-SSDH algorithm.

The rest of the paper is organized as follows. Section II introduces related work in hashing. Section III gives the background of VLAD. Section IV describes the proposed scheme. Section V presents the experiment results. Section VI concludes the work.

II Related work

Hashing, or robust hashing, is also referred to as fingerprinting [11, 12] or perceptual hashing [13]. Early techniques are data-independent and use hand-crafted features. Many hash algorithms are approximately linear. They have a simple structure like the following:

h = \operatorname{sgn}(W^{\top} x + b)  (1)

where \operatorname{sgn}(\cdot) is the element-wise sign function, x is a feature vector, W is a transform matrix, and b is a bias vector. Existing approaches typically differ in the derivation of W. Without learning, W can be generated randomly, but the performance is limited. A representative work is the locality-sensitive hashing (LSH) [14, 15, 16]. Learning based hash algorithms focus on computing W from training data, and can be divided into unsupervised and supervised categories. Unsupervised algorithms capture structure information in the feature data representation. They are versatile, but typically do not involve any “semantics” of the data. Well-known methods include spectral hashing [17], iterative quantization [18], k-means hashing [19], and kernelized locality sensitive hashing [20]. In order to take semantics into account, supervised algorithms are developed. Typical examples are semi-supervised hashing [21], kernel-based supervised hashing [22], restricted Boltzmann machine based hashing [23], and supervised semantic-preserving deep hashing [10]. A comprehensive survey can be found in [7].
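As a concrete illustration of Eqn. (1) without learning, the following NumPy sketch implements LSH-style random-projection hashing; the matrix W and bias b are drawn randomly rather than learned, and all dimensions are illustrative.

```python
import numpy as np

# A minimal sketch of Eqn. (1) with a randomly generated transform,
# i.e. LSH-style random-projection hashing. Learning-based methods
# would instead derive W from training data.
rng = np.random.default_rng(0)

def random_projection_hash(x, W, b):
    """Binarize sgn(W^T x + b) into a {0, 1} hash vector."""
    return (W.T @ x + b > 0).astype(np.uint8)

d, L = 128, 32                   # feature dimension, hash length
W = rng.standard_normal((d, L))  # random transform matrix
b = rng.standard_normal(L)       # random bias vector

x = rng.standard_normal(d)       # a feature vector
h = random_projection_hash(x, W, b)
print(h.shape)                   # (32,)
```

Nearby feature vectors tend to fall on the same side of most random hyperplanes, which is why such hashes preserve locality.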

In the deep learning era, the linear transformation in Eqn. (1) is replaced by neural networks with increased non-linearity and complexity [24, 10, 25]. The performance mainly depends on the structure of the network and the way of training. In particular, the use of ranking loss in image retrieval [26, 27, 9] also brings improved performance in hashing [28, 29]. On the other hand, new components, such as GANs [30], are beginning to be deployed. Differing from previous work that relies on new training methods, this paper focuses on a new component in the main network structure called random VLAD. The proposed RV-SSDH is trained point-wise, thus takes relatively low computational cost.

III Background of VLAD

VLAD is related to the BoF [4] representation. In the BoF representation, local features are detected from images, and described by descriptors such as SIFT [2]. A local descriptor typically consists of a feature vector, a coordinate, a scale, an orientation, etc. Feature vectors usually go through vector quantization such as k-means [31], and get encoded by a vocabulary of codewords \{c_1, \ldots, c_K\}, which are typically cluster centers in a high-dimensional space. Let X = \{x_1, \ldots, x_n\} denote the feature vectors of an image. The BoF representation is basically a histogram vector of codewords

v_{\mathrm{BoF}} = [f_1, f_2, \ldots, f_K]  (2)

f_k = |\{x_i \mid q(x_i) = c_k\}|  (3)

where q(\cdot) is a vector quantization function, and f_k is the number of occurrences of the codeword c_k, which is also called term frequency. The BoF generally shows satisfactory performance in image retrieval thanks to its robustness, but it has certain disadvantages: 1) it is typically a long sparse vector which needs compression; 2) too much information is lost by using a histogram, which decreases the discrimination power. VLAD is an improvement on BoF. Let V = [v_1, v_2, \ldots, v_K] denote the VLAD representation of an image. It tries to capture the neighbourhood of each codeword by the sum of residuals of the associated descriptors

v_k = \sum_{x_i : q(x_i) = c_k} (x_i - c_k)  (4)

V \leftarrow V / \|V\|_2  (5)

where (5) is the L2 normalization of the original VLAD formulation.

With increased discrimination, VLAD typically uses a smaller number of codewords. BoF and VLAD are still widely used in vision applications. VLAD also leads to extensions like NetVLAD [9] and multiscale NetVLAD [32].
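The contrast between the two aggregation schemes in Eqns. (2)-(5) can be sketched in a few lines of NumPy; the codewords here are random stand-ins for k-means cluster centers, and all sizes are illustrative.

```python
import numpy as np

# A toy sketch of BoF vs. VLAD aggregation. Real codewords would come
# from k-means over a training set; here they are random.
rng = np.random.default_rng(1)

K, d, n = 4, 8, 100
C = rng.standard_normal((K, d))   # K codewords (cluster centers)
X = rng.standard_normal((n, d))   # n local descriptors of one image

# Hard assignment q(x_i): index of the nearest codeword.
assign = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)

# BoF: histogram of codeword occurrences (term frequencies).
bof = np.bincount(assign, minlength=K)

# VLAD: per-codeword sum of residuals x_i - c_k, then L2-normalize.
vlad = np.zeros((K, d))
for k in range(K):
    vlad[k] = (X[assign == k] - C[k]).sum(axis=0)
vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad) + 1e-12

print(bof.sum(), vlad.shape)      # 100 (32,)
```

The BoF vector has length K, while the VLAD vector has length K×d, which is why VLAD can afford a much smaller vocabulary for the same discrimination power.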

IV The proposed scheme

The proposed RV-SSDH algorithm is a combination of conventional neural network layers and a novel hash component. A typical scheme is shown in Fig. 1, where convolutional layers are followed by an RV-SSDH layer and a classification layer. In order to facilitate transfer learning, convolutional layers of pre-trained networks can be used. For example, AlexNet [33] and VGG-F [34] are used in this work. In the following, we focus on the RV-SSDH component. It consists of three parts: a random VLAD layer, a transform layer, and a hash layer. They are explained in detail below.

Fig. 2: The proposed random VLAD layer in comparison with NetVLAD. The yellow parts are used in NetVLAD.

IV-A The random VLAD layer

The random VLAD layer is a modified version of the one used in NetVLAD [9], which is described here first. Figure 2 gives a comparison between the two. In a NetVLAD network, the input to VLAD pooling is the output from a previous layer. It can be viewed as N D-dimensional “local” feature vectors x_i. The output size of the VLAD core is K \times D. The kth output vector is defined as

V(k) = \sum_{i=1}^{N} a_k(x_i)\,(x_i - c_k)  (6)

where a_k(x_i) \in \{0, 1\} indicates whether x_i is associated with the cluster center c_k. In order to make it differentiable, a_k is replaced with a soft assignment function

\bar{a}_k(x_i) = \frac{e^{-\alpha \|x_i - c_k\|^2}}{\sum_{k'} e^{-\alpha \|x_i - c_{k'}\|^2}}  (7)

= \frac{e^{w_k^{\top} x_i + b_k}}{\sum_{k'} e^{w_{k'}^{\top} x_i + b_{k'}}}  (8)

where w_k = 2\alpha c_k and b_k = -\alpha \|c_k\|^2. In order to further improve flexibility, NetVLAD actually decouples \{w_k\}, \{b_k\} from \{c_k\} by setting them as three independent parameter sets. The soft assignment is implemented by a convolution block followed by a softmax operation. The initial values of the anchors are obtained by applying k-means clustering to the input of the VLAD core.

Besides the VLAD core, NetVLAD also contains a pre-L2 normalization layer, an intra normalization layer, and a post-L2 normalization layer. In practice, the last step can be skipped if cosine similarity is used for comparison.

The proposed scheme incorporates a random VLAD layer. It is similar to NetVLAD, but with the following differences:

  • L2 and intra normalization are not used.

  • The anchors are randomly initialized.

These modifications not only reduce algorithm complexity, but also improve retrieval performance, as shown by experiment results in Section V-H.
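A minimal NumPy sketch of the random VLAD layer's forward pass follows: soft assignment via a softmax (as in Eqns. (6)-(8)), residual aggregation, randomly initialized anchors, and no L2 or intra normalization. The parameter names and sizes are ours, chosen for illustration.

```python
import numpy as np

# A sketch of the random VLAD forward pass: decoupled soft-assignment
# parameters, random anchors (no k-means init), no normalization steps.
rng = np.random.default_rng(2)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

N, D, K = 36, 16, 8               # local features, feature dim, anchors
X = rng.standard_normal((N, D))   # N D-dimensional local features
C = rng.standard_normal((K, D))   # randomly initialized anchors c_k
Wa = rng.standard_normal((D, K))  # soft-assignment weights w_k (decoupled)
ba = rng.standard_normal(K)       # soft-assignment biases b_k

A = softmax(X @ Wa + ba, axis=1)  # (N, K) soft assignments a_k(x_i)
# V(k) = sum_i a_k(x_i) * (x_i - c_k), stacked into a K x D output.
V = np.stack([(A[:, k:k + 1] * (X - C[k])).sum(axis=0) for k in range(K)])
print(V.shape)                    # (8, 16)
```

In training, C, Wa, and ba would all be updated by back-propagation; the random values above are only the initialization.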

IV-B The transform layer

The transform layer consists of two fully connected (FC) layers, each of which is followed by a rectified linear unit (ReLU). The first FC layer converts the output of the VLAD core to a d_1-dimensional vector. The second FC layer further reduces the feature dimensionality from d_1 to d_2. In general, d_2 \le d_1. This part is motivated by the fact that some well-known networks typically have two FC layers after the convolutional layers. They function as high-level feature extraction and transformation on top of low-level features.

In our experiments, fixed values of d_1 and d_2 are used. According to the results in Sect. V-I, the transform layer is not always necessary, but it generally helps to improve retrieval performance, especially for datasets with semantic gaps. We also find that, without preceding FC layers, the training of SSDH might not converge.

IV-C The hash layer

The hash layer originates from SSDH [10]. It consists of an FC layer and a sigmoid activation layer. The FC layer compresses the d_2-dimensional input z to an L-dimensional feature vector. The feature vector is then binarized to derive an L-bit hash value. Since ideal binarization is not differentiable, the logistic sigmoid function is used as an approximation to facilitate back-propagation. This layer can be defined as

h = \sigma(W_h^{\top} z + b_h)  (9)

where W_h, b_h are the weight matrix and the bias vector respectively, and \sigma(x) = 1/(1 + e^{-x}) is the sigmoid function with the output range from 0 to 1.

In contrast to other hash algorithms, the hash layer is the second-to-last layer (during training). It is assumed to contain latent attributes for classification. The elements of a hash value can be viewed as indicators of these attributes.
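The latent hash layer of Eqn. (9) amounts to one FC projection squashed by a sigmoid; a NumPy sketch with illustrative dimensions:

```python
import numpy as np

# A sketch of the hash layer (Eqn. (9)): an FC projection followed by
# a logistic sigmoid, so every output lies in (0, 1) and can later be
# thresholded into a bit. Shapes are illustrative.
rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d2, L = 64, 16                    # input dimension, hash length
Wh = rng.standard_normal((d2, L)) # weight matrix of the hash layer
bh = rng.standard_normal(L)       # bias vector

z = rng.standard_normal(d2)       # output of the transform layer
h = sigmoid(Wh.T @ z + bh)        # continuous hash value in (0, 1)^L
print(h.shape)                    # (16,)
```

During training h stays continuous so gradients can flow; binarization happens only at test time.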

IV-D The prediction layer

The proposed hash algorithm is based on the architecture of SSDH. This architecture utilizes a classification problem to induce latent attributes, so a prediction layer is put after the hash layer. The prediction layer is also an FC layer with sigmoid activation. It maps a hash value to class probabilities. The mapping is assumed to be a linear transformation, so the expression is similar to Eqn. (9) but the bias term is ignored:

\hat{y} = \sigma(W_p^{\top} h)  (10)

where \hat{y} is the predicted label vector and W_p is the weight matrix of the prediction layer.

IV-E Optimization

The proposed RV-SSDH is trained to generate similar hash values for similar (or relevant) images, in a similar way as SSDH. Let N denote the number of samples. The basic objective function is based on classification error:

E_1(W) = \sum_{n=1}^{N} L(y_n, \hat{y}_n) + \lambda \|W\|^2  (11)

where L(y_n, \hat{y}_n) represents the loss for the true label vector y_n and the predicted label vector \hat{y}_n, W denotes the network parameters, and the constant \lambda controls the relative weight of the regularization term. In order to support different types of label information, the loss function takes the general form:

L(y, \hat{y}) = \sum_{m=1}^{M} \ell(y_m, \hat{y}_m)  (12)

where M is the number of classes, and \ell depends on the application. This work mainly focuses on single-label classification, so the log loss is used

\ell(y_m, \hat{y}_m) = -y_m \ln \hat{y}_m  (13)

In order to make the hash output close to 0 or 1, another constraint is enforced, maximizing

E_2(W) = \frac{1}{L} \|h - 0.5\,\mathbf{1}_L\|^2  (14)

where h is the continuous hash value, \mathbf{1}_L is a vector of “1”s, and L is the hash length. Combining the constraints, the overall optimization problem is

\min_{W} \; E_1(W) - \alpha E_2(W)  (15)

where \lambda and \alpha are weight factors. Optimal parameters of the network can be found by back-propagation and stochastic gradient descent (SGD) [35]. Since the algorithm is trained in a point-wise manner, the complexity of training is O(N).
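A hedged NumPy sketch of the point-wise objective follows: a classification log loss plus the quantization term of Eqn. (14), which rewards bits near 0 or 1 and penalizes bits near 0.5. The weight value and all shapes are illustrative, not the paper's settings.

```python
import numpy as np

# A sketch of the combined point-wise loss: log loss on the predicted
# class probability, minus a weighted quantization reward.
rng = np.random.default_rng(4)

def quantization_loss(h):
    # Negated Eqn.-(14)-style term: minimizing this pushes each hash
    # output away from 0.5, towards 0 or 1.
    return -np.mean((h - 0.5) ** 2)

def log_loss(true_class, y_prob, eps=1e-12):
    # Single-label log loss on the probability of the true class.
    return -np.log(np.clip(y_prob[true_class], eps, 1.0))

h = rng.uniform(0.05, 0.95, size=16)  # continuous hash value
y_prob = np.full(10, 0.1)             # dummy predicted class probabilities
alpha = 1.0                           # illustrative quantization weight

total = log_loss(3, y_prob) + alpha * quantization_loss(h)
print(round(float(total), 4))
```

Note that a perfectly binary hash ([0, 1, …]) gives the smallest quantization term, while an all-0.5 hash gives the largest, which is exactly the pressure Eqn. (14) applies.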

In SSDH, there is another constraint that aims for equiprobable bits, minimizing the deviation of the mean of each bit from 0.5 over the training set:

E_3(W) = \|\mu - 0.5\,\mathbf{1}_L\|^2, \quad \mu = \frac{1}{N} \sum_{n=1}^{N} h_n  (16)

This constraint is ignored in our implementation, for it has a minor impact on the retrieval performance [10], and it potentially reduces the capacity of a hash space.

IV-F Training and testing

The training and testing phases of RV-SSDH are different. During training, network parameters are learned. The prediction layer is only used in this phase, where the hash output is continuous.

During testing, hash values are first generated for a database and a query set. Then the retrieval performance is evaluated. In this phase, the prediction layer is removed, and h is further quantized:

\bar{h}_l = \begin{cases} 1, & h_l \ge 0.5 \\ 0, & \text{otherwise} \end{cases}, \quad l = 1, \ldots, L  (17)

where \bar{h} is the binary hash value. During retrieval, the Hamming distance is used for comparing hash values.
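The test-phase pipeline of Eqn. (17) reduces to a threshold and an XOR popcount, as this short NumPy sketch shows (the example hash values are made up):

```python
import numpy as np

# Threshold the continuous hash at 0.5 (Eqn. (17)) and compare binary
# hashes by Hamming distance, i.e. XOR followed by a popcount.
def binarize(h):
    return (h >= 0.5).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a ^ b))

h_query = np.array([0.1, 0.9, 0.6, 0.4])   # continuous query hash
h_db    = np.array([0.2, 0.8, 0.3, 0.7])   # continuous database hash

bq, bd = binarize(h_query), binarize(h_db)
print(bq.tolist(), bd.tolist(), hamming(bq, bd))  # [0, 1, 1, 0] [0, 1, 0, 1] 2
```

On packed bit words, the same comparison is a single XOR plus a hardware popcount per word, which is the source of the speed advantage over Euclidean or cosine distance.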

V Experiment results

Comprehensive experiments are performed to evaluate RV-SSDH. First, retrieval performance is examined together with classification performance. Then algorithm complexity is measured in terms of retrieval speed and training speed. Initial results are obtained for small datasets and compared with baselines. Following a large-scale retrieval test, more experiments are carried out to study the effects of random VLAD. Finally, RV-SSDH is compared with some more algorithms from the state-of-the-art. The detailed results are described below. Figures are best viewed in color.

V-A Datasets and evaluation metrics

Three datasets are used in the paper: MNIST [36], CIFAR-10 [37], and Places365 [38]. MNIST is a gray-scale image dataset of handwritten digits (from 0 to 9). CIFAR-10 is a dataset of color images in these classes: airplane, automobile, dog, cat, bird, deer, frog, horse, ship, truck. They both contain ten classes and 6000 images per class. The image sizes are 28×28 and 32×32 respectively. For these two datasets, we use 10000 images for validation and the rest for training. Places365 is a large-scale dataset of 365 scene categories. We use its training set, which contains more than 1.8 million images: 80% is used for training and the rest for validation. In order to reveal the performance gain brought by RV-SSDH, no data augmentation is used.

The three datasets have different purposes: MNIST is a relatively simple dataset without much semantics; CIFAR-10 is more difficult for it has severe semantic gaps; Places365 is the most challenging.

Algorithms are tested in a retrieval scenario. For MNIST and CIFAR-10, the validation set is used as a database, and 1000 items from the database are randomly selected as queries; for Places365, 30000 images from the validation set are used as a database, and 3000 images from the database are randomly selected as queries. The retrieval performance is measured by two metrics: the precision-recall (P-R) curve and the mean average precision (mAP). For a query, precision and recall are defined as

\mathrm{precision} = \frac{\#\{\text{relevant retrieved items}\}}{\#\{\text{retrieved items}\}}  (18)

\mathrm{recall} = \frac{\#\{\text{relevant retrieved items}\}}{\#\{\text{relevant items in the database}\}}  (19)

Different trade-offs can be achieved by adjusting the number of retrieved items. A P-R curve is plotted by averaging over all queries. The mAP is defined as the area under this curve, which represents overall retrieval performance.
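The metrics in Eqns. (18)-(19) can be computed from a ranked result list as follows; the toy relevance list is made up, and mAP would simply be this average precision (AP) averaged over all queries.

```python
import numpy as np

# Precision and recall at every cut-off of a ranked list, plus the
# finite-sum average precision (AP). mAP = mean of AP over queries.
def precision_recall(ranked_relevance, total_relevant):
    rel = np.asarray(ranked_relevance, dtype=float)
    hits = np.cumsum(rel)              # relevant items retrieved so far
    n = np.arange(1, len(rel) + 1)     # items retrieved so far
    return hits / n, hits / total_relevant

def average_precision(ranked_relevance, total_relevant):
    rel = np.asarray(ranked_relevance, dtype=float)
    prec, _ = precision_recall(rel, total_relevant)
    # Sum precision at the ranks of relevant items, normalize by their count.
    return float((prec * rel).sum() / total_relevant)

# Toy ranked list: 1 = relevant, 0 = irrelevant; 3 relevant items in total.
ranked = [1, 0, 1, 1, 0]
prec, rec = precision_recall(ranked, 3)
print(rec[-1], round(average_precision(ranked, 3), 4))  # 1.0 0.8056
```

Each prefix of the ranked list gives one (recall, precision) point, which is how the P-R curves in the following figures are traced.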

V-B The baselines

The SSDH [10] and the NetVLAD [9] algorithms are the main baselines in this work. Since RV-SSDH is a pluggable component, the actual implementation of a complete network also depends on the preceding layers. For a particular setting of convolutional and FC layers, denoted by CNN, the following baselines are considered

  • CNN;

  • CNN+(FC)+SSDH;

  • CNN+NetVLAD;

  • CNN+RV-SSDH.

Specifically, three CNNs are used: the first two are the well-known AlexNet [33] and VGG-F [34], which can test RV-SSDH in a transfer learning scenario; the third one is a small custom network defined in Table I, which we call ToyNet. When AlexNet or VGG-F is used, parameters of pre-trained models (based on ImageNet) are loaded into the convolutional layers, while ToyNet is trained from scratch. Note that when a CNN is used alone, it includes convolutional layers and possibly two FC layers (for AlexNet/VGG-F); when a CNN is used together with another block, only its convolutional layers are used. For example, AlexNet+RV-SSDH means the convolutional layers of AlexNet (conv1–conv5) are combined with RV-SSDH. A special case is AlexNet/VGG-F+FC+SSDH, where two FC layers are added in the middle. This is because we find that without the FC layers it is difficult to make the training converge.

TABLE I: ToyNet: a small custom convolutional neural network.

It should also be noted that the original NetVLAD uses triplet-based training [9, 27]. It is modified to use point-wise training in our implementation. This modification not only guarantees fair comparison with others, but also reveals the performance of NetVLAD in a more general setting.

For the same test, the same number of epochs (typically 50) is used for all candidate algorithms. For SSDH and RV-SSDH, hash lengths from 8 to 128 bits are mainly considered.

V-C Retrieval performance

Figure 3 shows a comparison of mAP values, where the dataset is MNIST and the base network is ToyNet. For SSDH and RV-SSDH, the mAP varies with the hash length; for NetVLAD, only the best mAP is shown; for ToyNet, the mAP is constant. One can see that using ToyNet alone yields the lowest mAP, while NetVLAD boosts it considerably. The large difference confirms the effectiveness of NetVLAD. On the other hand, SSDH performs even better, and RV-SSDH gives the highest mAP. The observation gives a basic ranking: RV-SSDH > SSDH > NetVLAD > ToyNet.

Fig. 3: mAP comparison (MNIST, ToyNet).

Figure 4 shows a comparison of mAP values, where the dataset is CIFAR-10 and the base network is ToyNet. A similar trend is observed, but the mAP gain over ToyNet is not as large as in Fig. 3, especially for NetVLAD. This is because CIFAR-10 is more difficult than MNIST and ToyNet is not a sophisticated network. The advantage of RV-SSDH becomes more noticeable in this case.

Fig. 4: mAP comparison (CIFAR-10, ToyNet).

Figure 5 shows a comparison of mAP values, where the dataset is CIFAR-10 and the base network is AlexNet. The trend stays the same. Since a more sophisticated base network is used, the mAP is generally much improved compared with ToyNet. RV-SSDH still performs the best, with a clear margin above SSDH.

The results from Fig. 3 to Fig. 5 are consistent, so one can basically conclude that RV-SSDH has the best retrieval performance among the candidate algorithms. These figures also provide some other insights. For example, the mAP typically increases with the hash length L (especially when L is small), but it might decrease when L is too large. This could be a consequence of insufficient training: a larger L requires more network parameters while the number of epochs is fixed, so a large L does not necessarily guarantee better discrimination power. The results show that moderate hash lengths typically work best. Compared with the others, NetVLAD seems more sensitive to the choice of base network and dataset.

Fig. 5: mAP comparison (CIFAR-10, AlexNet).

The mAP represents overall retrieval performance. More details and trade-offs can be found in the precision-recall curves. Figures 6–8 show some comparisons of P-R curves for MNIST and CIFAR-10. In these figures, the parameter(s) of each curve correspond to the algorithm’s highest mAP. For example, in Fig. 6, the best curve of SSDH on MNIST is outperformed by the best curve of RV-SSDH; for NetVLAD, the best curve is significantly above ToyNet’s but clearly below SSDH’s.

Note that in Fig. 7, the P-R curve of NetVLAD is not always above ToyNet’s. This explains why NetVLAD only has a small advantage over ToyNet in Fig. 4. These results again confirm that RV-SSDH has superior retrieval performance.

Fig. 6: P-R curve comparison (MNIST, ToyNet).
Fig. 7: P-R curve comparison (CIFAR-10, ToyNet).
Fig. 8: P-R curve comparison (CIFAR-10, AlexNet).

V-D Classification performance

Since the proposed RV-SSDH is trained in a classification framework (recall that there is a prediction layer after the hash layer), a question is whether a hash value is also suitable for classification.

To answer this question, Table II shows a comparison of the Top-1 error rate during validation, where the dataset is MNIST and the base network is ToyNet. In the table, the smallest and the two largest rates are marked in bold. It is interesting to see that NetVLAD gives the largest value (SSDH gives the second largest), and the general performance ranking is RV-SSDH > SSDH > ToyNet > NetVLAD, which differs from the retrieval cases. Compared with Fig. 3–8, the results show a difference between classification and retrieval. In other words, good retrieval performance does not guarantee good classification performance, or vice versa. Nevertheless, for RV-SSDH, the behaviour is consistent.

More results are shown in Tables III–IV, and similar patterns can be observed. Note that in Table IV AlexNet performs better than SSDH but worse than RV-SSDH. The best accuracy achieved by RV-SSDH on CIFAR-10 with AlexNet is 86.19%, which is close to the accuracy (89%) in the AlexNet paper [33]. Considering that no data augmentation is used in our experiments and our AlexNet’s accuracy is 82.61%, we conclude that RV-SSDH is likely to give a significant boost in classification performance.

Note that the error rates of SSDH and RV-SSDH are computed using unbinarized hash values (i.e. the continuous h). Therefore the above results only prove that the continuous RV-SSDH is useful for classification. Although the quantization error is generally small, the actual performance of binary hash values in classification is left for future investigation.

Method     8 bits    16 bits   32 bits   64 bits   128 bits
ToyNet     0.0126 (hash length n/a)
NetVLAD    0.0474 (hash length n/a)
SSDH       0.0147    0.0129    0.0117    0.0118    0.0116
RV-SSDH    0.0101    0.0089    0.0087    0.0093    0.0088
TABLE II: Top-1 error comparison (MNIST, ToyNet).

Method     8 bits    16 bits   32 bits   64 bits   128 bits
ToyNet     0.2802 (hash length n/a)
NetVLAD    0.5787 (hash length n/a)
SSDH       0.3127    0.2757    0.2675    0.2724    0.2714
RV-SSDH    0.2780    0.2453    0.2375    0.2313    0.2357
TABLE III: Top-1 error comparison (CIFAR-10, ToyNet).

Method     8 bits    16 bits   32 bits   64 bits   128 bits
AlexNet    0.1739 (hash length n/a)
NetVLAD    0.2846 (hash length n/a)
SSDH       0.2306    0.2158    0.2029    0.2049    0.2079
RV-SSDH    0.1637    0.1413    0.1472    0.1381    0.1407
TABLE IV: Top-1 error comparison (CIFAR-10, AlexNet).

V-E Complexity comparison

The complexity of RV-SSDH can be evaluated in two aspects: 1) the retrieval speed; 2) the training speed. Some quantitative results are given in this section. The experiments are performed on a computer with an Intel i7-8700K CPU, 16 GB memory, and an Nvidia GTX1080 GPU. Figure 9 shows a comparison of retrieval speed (seconds per query) in log scale, where the dataset is CIFAR-10 and the base network is AlexNet. For RV-SSDH and SSDH, the retrieval speed is the same and the hash length L is varied; for NetVLAD, the centroid number K is varied, and the output length is K×D; for AlexNet, the classification layer is removed, and the output of the last FC layer is used.

Note that the distance metric for retrieval is different for different algorithms: RV-SSDH/SSDH uses Hamming distance; NetVLAD uses cosine distance; AlexNet uses Euclidean distance. This is why NetVLAD is faster than AlexNet for the same output size. In the NetVLAD paper [9], the authors also propose to compress the output with PCA [39] to a lower dimension, which corresponds to the smallest output size tested here. To conclude, RV-SSDH and SSDH are always the fastest; NetVLAD is faster than AlexNet for small K. Figure 10 shows the retrieval speed of RV-SSDH in linear scale. The actual speed is approximately linear in the hash length.

Table V shows a comparison of training speed, which is represented by the number of processed images per second (Hz). It is obvious that AlexNet is the fastest, because the fewest layers are used. The speed of SSDH is about half of AlexNet. The hash length is not shown in the table because we find that the training speed is insensitive to L. On the other hand, for RV-SSDH and NetVLAD, the speed actually depends on the anchor number K. For a small K, RV-SSDH can be faster than SSDH and get close to AlexNet. This is because the VLAD core reduces the complexity of the subsequent FC layers. In general, the speed of NetVLAD is about half of RV-SSDH. The speed gain of RV-SSDH comes from removing the normalization layers. To conclude, RV-SSDH is almost always a good choice in terms of retrieval accuracy and complexity.

Fig. 9: Retrieval speed comparison (seconds/query) in log scale (CIFAR-10, AlexNet).
Fig. 10: Retrieval speed (seconds/query) of RV-SSDH (CIFAR-10, AlexNet).
Method      AlexNet   NetVLAD      SSDH   RV-SSDH
Speed (Hz)  987       410 (K=8)    540    900 (K=8)
                      360 (K=16)          800 (K=16)
                      310 (K=32)          620 (K=32)
                      240 (K=64)          415 (K=64)
TABLE V: Training speed comparison (CIFAR-10, AlexNet).

V-F Large-scale retrieval

Besides MNIST and CIFAR-10, a much larger dataset, Places365 [38], is also used to evaluate RV-SSDH. Since previous results show that NetVLAD generally performs worse than SSDH, it is no longer considered in this test. Table VI shows an mAP comparison between RV-SSDH and SSDH for hash lengths 256 and 512 (larger hash lengths are used here because the dataset is much larger than before). Two base networks are used: AlexNet and VGG-F [34]. Although the margin is not as large as in the CIFAR-10 case, RV-SSDH still outperforms SSDH by 2%–4.9% in mAP. We also note that VGG-F works better than AlexNet, and the 256-bit length works better than the 512-bit length (perhaps due to insufficient training).

Method                   mAP (VGG-F)   mAP (AlexNet)
RV-SSDH (K=32, L=256)    0.2023        0.1904
RV-SSDH (K=32, L=512)    0.1835        0.1679
SSDH (L=256)             0.1822        0.1413
SSDH (L=512)             0.1595        0.1292
TABLE VI: mAP comparison with Places365.

V-G The choice of VLAD parameters

RV-SSDH has parameters K (the number of anchors) and L (the hash length). The choice of L typically depends on the dataset size and the number of classes, while K controls the level of aggregation. By varying K and comparing the mAP, some tests are performed to find suitable values for these parameters. It is not straightforward to predict the best value of K. According to the results in Figures 11–13, moderate values of K are generally good choices for the tested datasets. Since K also controls the complexity of the algorithm (see Table V), our rule of thumb is that K should not be too large.

Fig. 11: mAP comparison for RV-SSDH (MNIST, ToyNet).
Fig. 12: mAP comparison for RV-SSDH (CIFAR-10, ToyNet).
Fig. 13: mAP comparison for RV-SSDH (CIFAR-10, AlexNet).

V-H The effects of random VLAD

The random VLAD layer in RV-SSDH is inspired by VLAD and NetVLAD, but its random nature makes it different from its ancestors. What happens if the original NetVLAD is used in RV-SSDH? Some tests are performed to answer this question. The results are shown in Figures 14–15, where the dataset is CIFAR-10 and the base network is AlexNet. There is a significant performance drop if random VLAD is replaced by NetVLAD, and even non-convergence in one case (K=16, L=8). Therefore, “NetVLAD+SSDH” is not a good option.

Fig. 14: mAP comparison (NetVLAD vs. Random VLAD, K=16).
Fig. 15: mAP comparison (NetVLAD vs. Random VLAD, K=64).

V-I The effects of the transform layer

The transform layer is placed between the random VLAD layer and the hash layer. Its effects are verified by running another set of tests without the transform layer and comparing the mAP values. The results are shown in Figs. 16–17 for two scenarios. In the best case, the transform layer increases the mAP by approximately 10%; in general, the gain is about 1%–7%. Therefore, it is good practice to keep the transform layer.

Fig. 16: Effects of transform layer (MNIST, ToyNet, RV-SSDH, K=16).
Fig. 17: Effects of transform layer (CIFAR-10, AlexNet, RV-SSDH, K=16).

V-J Comparison with other hash algorithms

Previous results already show the superiority of RV-SSDH over SSDH and NetVLAD. In order to position RV-SSDH among other state-of-the-art hash algorithms, more tests are carried out with CIFAR-10 and VGG-F, in two settings with small hash lengths: Case-1 and Case-2 (see the description in [25]). The results are shown in Tables VII–VIII and compared with data collected from [25]. One can see that RV-SSDH still has advantages over a few baselines (such as the classic SH [17], ITQ [40], and KSH [22]), but it is outperformed by MIHash [25], a recent deep hash algorithm. This is not a surprising result, because RV-SSDH uses point-wise training with complexity O(N), while MIHash (and many other baselines) uses pairwise training with complexity O(N²). In fact, Case-1 is a disadvantageous situation for point-wise algorithms, where the training set is only 10% of the total. Given its reasonable performance in Case-2, RV-SSDH is still an attractive “economic” solution, taking into account the low computational cost. Note that RV-SSDH can be modified to use pairwise or triplet training too, which is a promising path for future research; on the other hand, pairwise training might prohibit large hash lengths due to the rapidly increasing complexity, thus reducing versatility.

Method        12 bits   24 bits   32 bits   48 bits
SH [17]       0.183     0.164     0.161     0.161
ITQ [40]      0.237     0.246     0.255     0.261
SPLH [41]     0.299     0.330     0.335     0.330
KSH [22]      0.488     0.539     0.548     0.563
SDH [42]      0.478     0.557     0.584     0.592
RV-SSDH       0.551     0.589     0.603     0.598
MIHash [25]   0.738     0.775     0.791     0.816
TABLE VII: mAP comparison (CIFAR-10, VGG-F, Case 1).

Method        12 bits   24 bits   32 bits   48 bits
DRSH [28]     0.608     0.611     0.617     0.618
DRSCH [24]    0.615     0.622     0.629     0.631
RV-SSDH       0.899     0.906     0.908     0.905
DPSH [43]     0.903     0.885     0.915     0.911
MIHash [25]   0.927     0.938     0.942     0.943
TABLE VIII: mAP comparison (CIFAR-10, VGG-F, Case 2).

VI Conclusion and discussion

In this work, we propose RV-SSDH, a deep hash algorithm that incorporates VLAD (vector of locally aggregated descriptors) into a neural network architecture. The core of RV-SSDH is a random VLAD layer coupled with a latent hash layer through a transform layer. It is a point-wise algorithm that can be efficiently trained by minimizing classification error and quantization loss. This novel construction significantly outperforms baselines such as NetVLAD and SSDH in both accuracy and complexity, thus offering an alternative trade-off in the state-of-the-art. Our future work might include pairwise or triplet training, adding a GAN, or a multiscale extension [32].

Our experiment results also reveal some drawbacks of NetVLAD: 1) the normalization steps are slow; 2) the initialization of anchors is cumbersome and inflexible (even ineffective); 3) it is not suitable for point-wise training. These issues make our random VLAD an interesting alternative.

Li Weng is currently an Assistant Professor at Hangzhou Dianzi University. He received his PhD in electrical engineering from University of Leuven (Belgium) in 2012. He worked on encryption, authentication, and hash algorithms for multimedia data. He then worked at University of Geneva (Switzerland) and Inria (France) on large-scale CBIR systems with emphasis on privacy protection. He was a post-doctoral researcher at IGN - French Mapping Agency. His research interests include multimedia signal processing, machine learning, and information security.

Lingzhi Ye received her Bachelor’s degree in Automation from Hangzhou Dianzi University, Hangzhou, China, in 2017. She is currently working towards the Master’s degree at Hangzhou Dianzi University. Her research interests include deep learning and computer vision.

Jiangmin Tian received her B.S. degree in software engineering and PhD degree in control science and engineering from Huazhong University of Science and Technology, Wuhan, China, in 2010 and 2019 respectively. She is currently a lecturer at Hangzhou Dianzi University. Her research interests include machine learning and computer vision.

Jiuwen Cao received the B.Sc. and M.Sc. degrees from the School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu, China, in 2005 and 2008, respectively, and the Ph.D. degree from the School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore, in 2013. From 2012 to 2013, he was a Research Fellow with NTU. He is now a Professor at Hangzhou Dianzi University, Hangzhou, China. His research interests include machine learning, artificial neural networks, intelligent data processing, and array signal processing.

He is an Associate Editor of IEEE Transactions on Circuits and Systems I: Regular Papers, Journal of the Franklin Institute, Multidimensional Systems and Signal Processing, and Memetic Computing. He has served as a Guest Editor of Journal of the Franklin Institute and Multidimensional Systems and Signal Processing.

Jianzhong Wang received the Bachelor’s degree from the School of Computer Science and Engineering, Xidian University, Xi’an, China, in 1985, and the Master’s degree from the School of Computer Science and Engineering, Zhejiang University, Hangzhou, China, in 1993. He has been a faculty member at Hangzhou Dianzi University, Hangzhou, since 1985, where he is currently a Professor. He served as Vice Dean of the School of Automation from 2000 and has been serving as its Dean since 2016. He has published extensively in international journals and conferences and has been granted over 30 patents. His current research interests include computer information system development, computer control, embedded systems, and system modeling and optimization. He was a recipient of a number of national science and technology awards, including the Second Prize of the State-Level Teaching Award.

References

  1. A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic representation of the spatial envelope,” International Journal of Computer Vision, vol. 42, no. 3, pp. 145–175, 2001.
  2. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  3. M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “Brief: Binary robust independent elementary features,” in ECCV, 2010, pp. 778–792.
  4. J. Sivic and A. Zisserman, “Video google: a text retrieval approach to object matching in videos,” in Proc. of IEEE International Conference on Computer Vision (ICCV), Oct 2003, pp. 1470–1477.
  5. H. Jégou, M. Douze, and C. Schmid, “Product quantization for nearest neighbor search,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 117–128, 2011.
  6. H. Jégou, M. Douze, C. Schmid, and P. Pérez, “Aggregating local descriptors into a compact image representation,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010, pp. 3304–3311.
  7. J. Wang, W. Liu, S. Kumar, and S. F. Chang, “Learning to hash for indexing big data – A survey,” Proceedings of the IEEE, vol. 104, no. 1, pp. 34–57, Jan 2016.
  8. H. Jégou, M. Douze, and C. Schmid, “Hamming embedding and weak geometric consistency for large scale image search,” in European Conference on Computer Vision, 2008, pp. 304–317.
  9. R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “NetVLAD: CNN architecture for weakly supervised place recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1437–1451, June 2018.
  10. H. F. Yang, K. Lin, and C. S. Chen, “Supervised learning of semantics-preserving hash via deep convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 437–451, Feb. 2018.
  11. M. Schneider and S.-F. Chang, “A robust content based digital signature for image authentication,” in Proc. of International Conference on Image Processing, vol. 3, 1996, pp. 227–230.
  12. A. Varna and M. Wu, “Modeling and analysis of correlated binary fingerprints for content identification,” IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1146–1159, Sep. 2011.
  13. L. Weng, I.-H. Jhuo, and W.-H. Cheng, Big Data Analytics for Large-Scale Multimedia Search.   Wiley, 2019, ch. Perceptual Hashing for Large-Scale Multimedia Search, pp. 239–265.
  14. M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, “Locality-sensitive hashing scheme based on p-stable distributions,” in Proc. of 20th Symposium on Computational Geometry (SCG), 2004, pp. 253–262.
  15. M. S. Charikar, “Similarity estimation techniques from rounding algorithms,” in Proc. of 34th annual ACM Symposium on Theory of Computing (STOC), 2002, pp. 380–388.
  16. M. Slaney and M. Casey, “Locality-sensitive hashing for finding nearest neighbors [lecture notes],” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 128–131, March 2008.
  17. Y. Weiss, A. Torralba, and R. Fergus, “Spectral hashing,” in Advances in Neural Information Processing Systems (NIPS), 2009, pp. 1753–1760.
  18. Y. Gong and S. Lazebnik, “Iterative quantization: A procrustean approach to learning binary codes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 817–824.
  19. K. He, F. Wen, and J. Sun, “K-means hashing: An affinity-preserving quantization method for learning binary compact codes,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2938–2945.
  20. B. Kulis and K. Grauman, “Kernelized locality-sensitive hashing for scalable image search,” in Proc. of IEEE International Conference on Computer Vision (ICCV), 2009, pp. 2130–2137.
  21. J. Wang, S. Kumar, and S.-F. Chang, “Semi-supervised hashing for scalable image retrieval,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).   IEEE, 2010, pp. 3424–3431.
  22. W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang, “Supervised hashing with kernels,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2074–2081.
  23. A. Torralba, R. Fergus, and Y. Weiss, “Small codes and large image databases for recognition,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
  24. R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang, “Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4766–4779, Dec 2015.
  25. F. Cakir, K. He, S. A. Bargal, and S. Sclaroff, “Hashing with mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 10, pp. 2424–2437, Oct 2019.
  26. J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu, “Learning fine-grained image similarity with deep ranking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014, pp. 1386–1393.
  27. F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 815–823.
  28. F. Zhao, Y. Huang, L. Wang, and T. Tan, “Deep semantic ranking based hashing for multi-label image retrieval,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1556–1564.
  29. K. He, F. Çakir, S. A. Bargal, and S. Sclaroff, “Hashing as tie-aware learning to rank,” in IEEE Conference on Computer Vision and Pattern Recognition CVPR, 2018, pp. 4023–4032.
  30. Y. Cao, B. Liu, M. Long, and J. Wang, “HashGAN: Deep learning to hash with pair conditional wasserstein GAN,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition CVPR, 2018, pp. 1287–1296.
  31. S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
  32. Z. Shi, L. Zhang, Y. Sun, and Y. Ye, “Multiscale multitask deep NetVLAD for crowd counting,” IEEE Transactions on Industrial Informatics, vol. 14, no. 11, pp. 4953–4962, Nov 2018.
  33. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097–1105.
  34. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: Delving deep into convolutional nets,” in British Machine Vision Conference, 2014, p. 12.
  35. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  36. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
  37. A. Krizhevsky, “Learning multiple layers of features from tiny images,” University of Toronto, Tech. Rep., 2009.
  38. B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 million image database for scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452–1464, Jun. 2018.
  39. C. M. Bishop, Pattern recognition and machine learning.   Springer, 2006.
  40. Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin, “Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2916–2929, 2013.
  41. J. Wang, S. Kumar, and S.-F. Chang, “Sequential projection learning for hashing with compact codes,” in International Conference on Machine Learning, 2010, pp. 1127–1134.
  42. F. Shen, C. Shen, W. Liu, and H. T. Shen, “Supervised discrete hashing,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 37–45.
  43. W. Li, S. Wang, and W. Kang, “Feature learning based deep supervised hashing with pairwise labels,” in International Joint Conference on Artificial Intelligence, 2016, pp. 1711–1717.