Random VLAD based Deep Hashing for Efficient Image Retrieval
Abstract
Image hash algorithms generate compact binary representations that can be quickly matched by Hamming distance, and thus have become an efficient solution for large-scale image retrieval. This paper proposes RV-SSDH, a deep image hash algorithm that incorporates the classical VLAD (vector of locally aggregated descriptors) architecture into neural networks. Specifically, a novel neural network component is formed by coupling a random VLAD layer with a latent hash layer through a transform layer. This component can be combined with convolutional layers to realize a hash algorithm. We implement RV-SSDH as a pointwise algorithm that can be efficiently trained by minimizing classification error and quantization loss. Comprehensive experiments show this new architecture significantly outperforms baselines such as NetVLAD and SSDH, and offers a cost-effective trade-off in the state of the art. In addition, the proposed random VLAD layer achieves satisfactory accuracy with low complexity, and thus shows promising potential as an alternative to NetVLAD.
I Introduction
Content-based image search is one of the basic challenges in the management of massive multimedia data. Solutions typically rely on finding appropriate features that are robust and discriminative. Numerous feature detectors and descriptors have been proposed, such as GIST [1], SIFT [2], and BRIEF [3]. In the big data era, the amount of visual content is drastically increasing, and efficient image search has become a more demanding task. In order to obtain more compact feature representations, two approaches have been developed, which can be summarized as "aggregation" and "quantization". The former transforms multiple descriptors into a concise form; the latter reduces the precision of a descriptor. Typical examples include the bag-of-features (BoF) representation [4] and product quantization (PQ) [5]. Along these lines of work, this paper focuses on two subsequent techniques. The first is VLAD (vector of locally aggregated descriptors) [6]. This representation is related to BoF but carries more discriminative information. The other technique is called hashing [7], an active domain in recent years. It generally refers to converting features into binary vectors, called hash values. The comparison of hash values can be extremely fast, because the Hamming distance is used, which can be efficiently computed by the Exclusive Or (XOR) operation.
Given their advantages, aggregation and quantization can be combined. For example, BoF can be complemented by hashing [8]; VLAD can be compressed by PQ [6]. As the research community enters the deep learning era, both VLAD and hashing have developed neural network versions. Two representative works are NetVLAD [9] and SSDH (supervised semantics-preserving deep hashing) [10]. While they outperform their counterparts based on hand-crafted features, a question naturally arises: can one combine the advantages of the two? In this work, such an attempt is made. We propose a deep learning based hash algorithm which incorporates NetVLAD and SSDH. It is named RV-SSDH (random VLAD based SSDH). The core of the algorithm is a hash component that can be inserted into different neural networks (Fig. 1). The hash component consists of a random VLAD layer, a transform layer, and a hash layer. It is designed with the following properties: 1) RV-SSDH outperforms both SSDH and NetVLAD in terms of accuracy, speed, and compactness; 2) pointwise training is used for simplicity and versatility. In particular, a random VLAD layer is used instead of the original NetVLAD. It works better in our scenario, and shows promising potential as a general block in neural network design. To summarize, the contribution of this paper is twofold:
1) The proposed RV-SSDH is a state-of-the-art algorithm with O(N) training complexity;

2) The proposed random VLAD block is an interesting alternative to the NetVLAD block.

Extensive experiments have been performed. The results confirm that RV-SSDH outperforms SSDH and NetVLAD.
II Related work
Hashing, or robust hashing, is also referred to as fingerprinting [11, 12] or perceptual hashing [13]. Early techniques are data-independent and use hand-crafted features. Many hash algorithms are approximately linear. They have a simple structure like the following:
$\mathbf{h} = \operatorname{sign}(\mathbf{W}^{\mathsf{T}} \mathbf{x} + \mathbf{b})$   (1)

where $\operatorname{sign}(\cdot)$ is the element-wise sign function, $\mathbf{x}$ is a feature vector, $\mathbf{W}$ is a transform matrix, and $\mathbf{b}$ is a bias vector. Existing approaches typically differ in the derivation of $\mathbf{W}$. Without learning, $\mathbf{W}$ can be generated randomly, but the performance is limited. A representative work is locality-sensitive hashing (LSH) [14, 15, 16]. Learning-based hash algorithms focus on computing $\mathbf{W}$ from training data; they can be divided into unsupervised and supervised categories. Unsupervised algorithms capture structural information in the feature data. They are versatile, but typically do not involve any "semantics" of the data. Well-known methods include spectral hashing [17], iterative quantization [18], k-means hashing [19], and kernelized locality-sensitive hashing [20]. In order to take semantics into account, supervised algorithms have been developed. Typical examples are semi-supervised hashing [21], kernel-based supervised hashing [22], restricted Boltzmann machine based hashing [23], and supervised semantics-preserving deep hashing [10]. A comprehensive survey can be found in [7].
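As a concrete illustration, the following sketch (ours, not from the paper) instantiates Eqn. (1) with randomly generated $\mathbf{W}$ and $\mathbf{b}$, i.e. an LSH-style random-projection hash; learning-based methods would instead fit $\mathbf{W}$ to training data. All names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_random_hash(dim, n_bits):
    """Eqn. (1) with a random transform: h = sign(W^T x + b), mapped to {0, 1}."""
    W = rng.standard_normal((dim, n_bits))  # random transform matrix W
    b = rng.standard_normal(n_bits)         # random bias vector b
    def h(x):
        return (x @ W + b > 0).astype(np.uint8)  # elementwise sign -> {0, 1}
    return h

hash_fn = make_random_hash(dim=128, n_bits=32)
x = rng.standard_normal(128)                # a feature vector
print(hash_fn(x))                           # a 32-bit binary code
```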
In the deep learning era, the linear transformation in Eqn. (1) is replaced by neural networks with increased nonlinearity and complexity [24, 10, 25]. The performance mainly depends on the structure of the network and the way of training. In particular, the use of ranking loss in image retrieval [26, 27, 9] also brings improved performance in hashing [28, 29]. On the other hand, new components such as GANs [30] are beginning to be deployed. Differing from previous work that relies on new training methods, this paper focuses on a new component in the main network structure, called random VLAD. The proposed RV-SSDH is trained pointwise, and thus has a relatively low computational cost.
III Background of VLAD
VLAD is related to the BoF [4] representation. In the BoF representation, local features are detected from images and described by descriptors such as SIFT [2]. A local descriptor typically consists of a feature vector, a coordinate, a scale, an orientation, etc. Feature vectors usually go through vector quantization such as k-means [31], and get encoded by a vocabulary of codewords $\{\mathbf{c}_1, \ldots, \mathbf{c}_K\}$, which are typically cluster centers in a high-dimensional space. Let $\{\mathbf{x}_t\}_{t=1}^{T}$ denote the feature vectors of an image. The BoF representation is basically a histogram vector of codewords

$\mathbf{f}_{\mathrm{BoF}} = \left[\, tf(\mathbf{c}_1), \ldots, tf(\mathbf{c}_K) \,\right]^{\mathsf{T}}$   (2)

$tf(\mathbf{c}_k) = \left|\{\, \mathbf{x}_t \mid q(\mathbf{x}_t) = \mathbf{c}_k \,\}\right|$   (3)
where $q(\cdot)$ is a vector quantization function, and $tf(\mathbf{c}_k)$ is the number of occurrences of the codeword $\mathbf{c}_k$, also called its term frequency. BoF generally shows satisfactory performance in image retrieval thanks to its robustness, but it has certain disadvantages: 1) it is typically a long sparse vector which needs compression; 2) too much information is lost by using a histogram, which decreases the discrimination power. VLAD is an improvement on BoF. Let $\mathbf{f}_{\mathrm{VLAD}}$ denote the VLAD representation of an image. It tries to capture the neighbourhood of each codeword by the sum of residuals of the associated descriptors

$\mathbf{v}_k = \sum_{\mathbf{x}_t :\, q(\mathbf{x}_t) = \mathbf{c}_k} (\mathbf{x}_t - \mathbf{c}_k)$   (4)

$\mathbf{f}_{\mathrm{VLAD}} = \left[\, \mathbf{v}_1^{\mathsf{T}}, \ldots, \mathbf{v}_K^{\mathsf{T}} \,\right]^{\mathsf{T}}$   (5)
With increased discrimination, VLAD typically uses a smaller number of codewords. BoF and VLAD are still widely used in vision applications. VLAD also leads to extensions like NetVLAD [9] and multi-scale NetVLAD [32].
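To make the two representations concrete, here is a small NumPy sketch (our own notation) that computes the BoF histogram of Eqns. (2)-(3) and the VLAD vector of Eqns. (4)-(5) from a set of local descriptors, using hard nearest-codeword assignment for $q(\cdot)$:

```python
import numpy as np

def bof_and_vlad(X, C):
    """X: (T, d) local descriptors; C: (K, d) codewords (cluster centers)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (T, K) squared distances
    assign = d2.argmin(1)                                # q(x_t): nearest codeword
    K, d = C.shape
    bof = np.bincount(assign, minlength=K)               # term frequencies, Eqn. (3)
    vlad = np.zeros((K, d))
    for k in range(K):                                   # residual sums, Eqn. (4)
        members = X[assign == k]
        if len(members):
            vlad[k] = (members - C[k]).sum(0)
    return bof, vlad.ravel()                             # VLAD is K*d-dimensional

rng = np.random.default_rng(0)
X, C = rng.standard_normal((200, 8)), rng.standard_normal((4, 8))
bof, vlad = bof_and_vlad(X, C)
print(bof.shape, vlad.shape)                             # (4,) (32,)
```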
IV The proposed scheme
The proposed RV-SSDH algorithm is a combination of conventional neural network layers and a novel hash component. A typical scheme is shown in Fig. 1, where convolutional layers are followed by an RV-SSDH layer and a classification layer. In order to facilitate transfer learning, the convolutional layers of pre-trained networks can be used. For example, AlexNet [33] and VGG-F [34] are used in this work. In the following, we focus on the RV-SSDH component. It consists of three parts: a random VLAD layer, a transform layer, and a hash layer. They are explained in detail below.
IV-A The random VLAD layer
The random VLAD layer is a modified version of the one used in NetVLAD [9], which is described here first. Figure 2 gives a comparison between the two. In a NetVLAD network, the input to VLAD pooling is the output of a previous layer. It can be viewed as $N$ "local" feature vectors $\{\mathbf{x}_i\}_{i=1}^{N}$ of dimension $D$. The output size of the VLAD core is $K \times D$, where $K$ is the number of cluster centers (anchors). The $k$-th output vector is defined as

$\mathbf{v}_k = \sum_{i=1}^{N} a_k(\mathbf{x}_i) \, (\mathbf{x}_i - \mathbf{c}_k)$   (6)
where $a_k(\mathbf{x}_i) \in \{0, 1\}$ indicates whether $\mathbf{x}_i$ is associated with the cluster center $\mathbf{c}_k$. In order to make it differentiable, $a_k$ is replaced with a soft assignment function

$\bar{a}_k(\mathbf{x}_i) = \frac{e^{-\alpha \lVert \mathbf{x}_i - \mathbf{c}_k \rVert^2}}{\sum_{k'} e^{-\alpha \lVert \mathbf{x}_i - \mathbf{c}_{k'} \rVert^2}}$   (7)

$\bar{a}_k(\mathbf{x}_i) = \frac{e^{\mathbf{w}_k^{\mathsf{T}} \mathbf{x}_i + b_k}}{\sum_{k'} e^{\mathbf{w}_{k'}^{\mathsf{T}} \mathbf{x}_i + b_{k'}}}$   (8)

where $\mathbf{w}_k = 2\alpha \mathbf{c}_k$ and $b_k = -\alpha \lVert \mathbf{c}_k \rVert^2$. In order to further improve flexibility, NetVLAD actually decouples $\{\mathbf{w}_k\}$ and $\{b_k\}$ from $\{\mathbf{c}_k\}$ by treating them as three independent parameter sets. The soft assignment is implemented by a convolution block followed by a softmax operation. The initial values of the anchors are obtained by applying k-means clustering to the input of the VLAD core.
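The following PyTorch sketch shows one possible implementation of such a VLAD core with soft assignment (Eqns. (6)-(8)); the shapes, class name, and initialization are our own choices, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLADCore(nn.Module):
    """Soft-assignment VLAD pooling: (batch, N, D) -> (batch, K*D)."""
    def __init__(self, K, D):
        super().__init__()
        self.assign = nn.Linear(D, K)                   # computes w_k^T x + b_k
        self.centers = nn.Parameter(torch.randn(K, D))  # the anchors c_k

    def forward(self, x):                               # x: (batch, N, D)
        a = F.softmax(self.assign(x), dim=-1)           # soft assignment, Eqn. (8)
        # residuals x_i - c_k for every (i, k) pair: (batch, N, K, D)
        res = x.unsqueeze(2) - self.centers.unsqueeze(0).unsqueeze(0)
        v = (a.unsqueeze(-1) * res).sum(dim=1)          # Eqn. (6): (batch, K, D)
        return v.flatten(1)                             # concatenated, (batch, K*D)

core = VLADCore(K=8, D=256)
out = core(torch.randn(2, 36, 256))   # e.g. a 6x6 conv map seen as 36 locals
print(out.shape)                      # torch.Size([2, 2048])
```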
Besides the VLAD core, NetVLAD also contains a pre-L2 normalization layer, an intra-normalization layer, and a post-L2 normalization layer. In practice, the last step can be skipped if cosine similarity is used for comparison.
The proposed scheme incorporates a random VLAD layer. It is similar to NetVLAD, but with the following differences:
1) L2 and intra normalization are not used.

2) The anchors are randomly initialized.

These modifications not only reduce algorithm complexity, but also improve retrieval performance, as shown by the experiment results in Section V-H.
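In code, the difference amounts to what is left out. Reusing the VLADCore sketch above (all names ours), the random VLAD layer simply keeps the default random initialization, whereas a NetVLAD-style layer would warm-start the anchors with k-means and wrap the core in normalization layers:

```python
import torch

# Random VLAD layer: default random init, no normalization. Nothing else to do.
rv = VLADCore(K=8, D=256)

# NetVLAD-style counterpart (for contrast): anchors warm-started from cluster
# centers of sample descriptors (a real implementation would run k-means here;
# we copy random samples as a crude stand-in), plus pre-/intra-/post-L2 norms.
samples = torch.randn(10000, 256)          # stand-in local descriptors
nv = VLADCore(K=8, D=256)
nv.centers.data.copy_(samples[torch.randperm(len(samples))[:8]])
```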
IV-B The transform layer
The transform layer consists of two fully connected (FC) layers, each followed by a rectified linear unit (ReLU). The first FC layer converts the $K \times D$ output of the VLAD core to a $d_1$-dimensional vector. The second FC layer further reduces the feature dimensionality from $d_1$ to $d_2$, with $d_1 \ge d_2$ in general. This part is motivated by the fact that some well-known networks typically have two FC layers after the convolutional layers; they function as high-level feature extraction and transformation on top of low-level features.

According to the results in Sect. V-I, the transform layer is not always necessary, but it generally helps to improve retrieval performance, especially for datasets with semantic gaps. We also find that, without preceding FC layers, the training of SSDH might not converge.
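A minimal sketch of this layer (the dimensions $d_1$, $d_2$ below are placeholders, not the paper's exact widths):

```python
import torch.nn as nn

K, D, d1, d2 = 8, 256, 1024, 512          # hypothetical dimensions
transform = nn.Sequential(
    nn.Linear(K * D, d1), nn.ReLU(),      # first FC + ReLU: K*D -> d1
    nn.Linear(d1, d2), nn.ReLU(),         # second FC + ReLU: d1 -> d2
)
```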
IV-C The hash layer
The hash layer originates from SSDH [10]. It consists of an FC layer and a sigmoid activation layer. The FC layer compresses the $d_2$-dimensional input to an $L$-dimensional feature vector, which is then binarized to derive an $L$-bit hash value. Since ideal binarization is not differentiable, the logistic sigmoid function is used as an approximation to facilitate back-propagation. This layer can be defined as

$\mathbf{h} = \sigma(\mathbf{W}_h^{\mathsf{T}} \mathbf{x} + \mathbf{b}_h)$   (9)

where $\mathbf{W}_h$ and $\mathbf{b}_h$ are the weight matrix and the bias vector respectively, and $\sigma(\cdot)$ is the sigmoid function with output range from 0 to 1.
In contrast to other hash algorithms, the hash layer is the second-to-last layer (during training). It is assumed to contain latent attributes for classification. The elements of a hash value can be viewed as indicators of these attributes.
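A corresponding sketch of the hash layer, Eqn. (9), with placeholder dimensions:

```python
import torch.nn as nn

d2, L = 512, 32                           # input width and hash length (placeholders)
hash_layer = nn.Sequential(
    nn.Linear(d2, L),                     # W_h^T x + b_h
    nn.Sigmoid(),                         # differentiable surrogate for binarization
)
```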
IV-D The prediction layer
The proposed hash algorithm is based on the architecture of SSDH. This architecture utilizes a classification problem to induce latent attributes, so a prediction layer is put after the hash layer. The prediction layer is also an FC layer with sigmoid activation. It maps a hash value to class probabilities. The mapping is assumed to be a linear transformation, so the expression is similar to Eqn. (9), but the bias term is ignored:

$\hat{\mathbf{y}} = \sigma(\mathbf{W}_p^{\mathsf{T}} \mathbf{h})$   (10)

where $\hat{\mathbf{y}}$ is the predicted label vector.
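A sketch of the prediction layer, Eqn. (10); note the bias-free linear map (sizes are placeholders):

```python
import torch.nn as nn

L, M = 32, 10                             # hash length and number of classes
predict = nn.Sequential(
    nn.Linear(L, M, bias=False),          # W_p^T h, no bias term
    nn.Sigmoid(),
)
```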
IV-E Optimization
The proposed RV-SSDH is trained to generate similar hash values for similar (or relevant) images, in a similar way as SSDH. Let $N$ denote the number of samples. The basic objective function is based on the classification error:

$\min_{\mathbf{W}} \; E_1(\mathbf{W}) = \frac{1}{N} \sum_{n=1}^{N} L(\mathbf{y}_n, \hat{\mathbf{y}}_n) + \lambda \lVert \mathbf{W} \rVert^2$   (11)

where $L(\mathbf{y}_n, \hat{\mathbf{y}}_n)$ represents the loss for the true label vector $\mathbf{y}_n$ and the predicted label vector $\hat{\mathbf{y}}_n$, and the constant $\lambda$ controls the relative weight of the regularization term. In order to support different types of label information, the loss function takes the general form:
$L(\mathbf{y}, \hat{\mathbf{y}}) = \sum_{m=1}^{M} \ell(y_m, \hat{y}_m)$   (12)

where $M$ is the number of classes, and $\ell(\cdot, \cdot)$ depends on the application. This work mainly focuses on single-label classification, so the log loss is used:

$\ell(y_m, \hat{y}_m) = -\,y_m \ln \hat{y}_m - (1 - y_m) \ln(1 - \hat{y}_m)$   (13)
In order to make the hash output close to 0 or 1, another constraint is enforced:

$E_2(\mathbf{W}) = -\frac{1}{L} \left\lVert \mathbf{h} - 0.5\,\mathbf{e} \right\rVert^2$   (14)

where $\mathbf{h}$ is the continuous hash value, $\mathbf{e}$ is a vector of ones, and $L$ is the hash length; minimizing $E_2$ pushes each element of $\mathbf{h}$ away from 0.5. Combining the constraints, the overall optimization problem is
$\min_{\mathbf{W}} \; \alpha E_1(\mathbf{W}) + \beta E_2(\mathbf{W})$   (15)

where $\alpha$ and $\beta$ are weight factors. Optimal parameters of the network can be found by back-propagation and stochastic gradient descent (SGD) [35]. Since the algorithm is trained in a pointwise manner, the complexity of training is O(N).
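A hedged sketch of this objective (the $\lambda \lVert \mathbf{W} \rVert^2$ term of Eqn. (11) is typically delegated to the optimizer's weight decay; function and variable names are ours):

```python
import torch
import torch.nn.functional as F

def rv_ssdh_loss(y_pred, y_true, h, alpha=1.0, beta=1.0):
    """Eqn. (15): alpha * E1 + beta * E2."""
    e1 = F.binary_cross_entropy(y_pred, y_true)  # log loss, Eqns. (12)-(13)
    e2 = -((h - 0.5) ** 2).mean()                # Eqn. (14): minimized, so each
    return alpha * e1 + beta * e2                # bit of h is pushed toward 0 or 1

h = torch.rand(4, 32)                            # continuous hash values in (0, 1)
y_pred = torch.rand(4, 10)                       # sigmoid class predictions
y_true = F.one_hot(torch.tensor([1, 3, 5, 7]), 10).float()
print(rv_ssdh_loss(y_pred, y_true, h))
```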
In SSDH, there is another constraint that aims for equally probable bits:

$E_3(\mathbf{W}) = \left( \operatorname{mean}(\mathbf{h}) - 0.5 \right)^2$   (16)

This constraint is ignored in our implementation, for it has a minor impact on the retrieval performance [10], and it potentially reduces the capacity of the hash space.
IV-F Training and testing
The training and testing phases of RV-SSDH differ. During training, the network parameters are learned. The prediction layer is only used in this phase, and the hash output is continuous.
During testing, hash values are first generated for a database and a query set. Then the retrieval performance is evaluated. In this phase, the prediction layer is removed, and $\mathbf{h}$ is further quantized:

$b_i = \begin{cases} 1, & h_i \ge 0.5 \\ 0, & \text{otherwise} \end{cases}$   (17)

where $\mathbf{b}$ is the binary hash value. During retrieval, the Hamming distance is used for comparing hash values.
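The test phase can be sketched as follows (our own helper names): quantize the continuous outputs per Eqn. (17), pack the bits, and rank the database by XOR/popcount Hamming distance:

```python
import numpy as np

def binarize(h):
    """Eqn. (17): h in (0, 1) -> bits in {0, 1}."""
    return (h >= 0.5).astype(np.uint8)

def hamming_rank(query_bits, db_bits):
    """Rank database items by Hamming distance to the query (XOR + popcount)."""
    x = np.bitwise_xor(np.packbits(query_bits), np.packbits(db_bits, axis=1))
    dists = np.unpackbits(x, axis=1).sum(1)   # popcount per database item
    return np.argsort(dists)                  # ascending distance = ranking

rng = np.random.default_rng(0)
db = binarize(rng.random((1000, 64)))         # 1000 database items, 64-bit codes
q = binarize(rng.random(64))                  # one query code
print(hamming_rank(q, db)[:5])                # indices of the 5 nearest items
```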
V Experiment results
Comprehensive experiments are performed to evaluate RV-SSDH. First, retrieval performance is examined together with classification performance. Then algorithm complexity is measured in terms of retrieval speed and training speed. Initial results are obtained for small datasets and compared with baselines. Following a large-scale retrieval test, more experiments are carried out to study the effects of random VLAD. Finally, RV-SSDH is compared with more algorithms from the state of the art. The detailed results are described below. Figures are best viewed in color.
V-A Datasets and evaluation metrics
Three datasets are used in the paper: MNIST [36], CIFAR-10 [37], and Places365 [38]. MNIST is a gray image dataset of handwritten digits (from 0 to 9). CIFAR-10 is a dataset of color images in these classes: airplane, automobile, dog, cat, bird, deer, frog, horse, ship, truck. They both contain ten classes and 6000 images per class. The image sizes are 28×28 and 32×32 respectively. For these two datasets, we use 10000 images for validation and the rest for training. Places365 is a large-scale dataset of 365 scene categories. We use its training set, which contains more than 1.8 million images: 80% is used for training and the rest for validation. In order to reveal the performance gain brought by RV-SSDH, no data augmentation is used.
The three datasets have different purposes: MNIST is a relatively simple dataset without much semantics; CIFAR-10 is more difficult, for it has severe semantic gaps; Places365 is the most challenging.
Algorithms are tested in a retrieval scenario. For MNIST and CIFAR-10, the validation set is used as a database, and 1000 items from the database are randomly selected as queries; for Places365, 30000 images from the validation set are used as a database, and 3000 images from the database are randomly selected as queries. The retrieval performance is measured by two metrics: the precision-recall (PR) curve and the mean average precision (mAP). For a query, precision and recall are defined as

$\text{precision} = \dfrac{\#\,\text{relevant retrieved items}}{\#\,\text{retrieved items}}$   (18)

$\text{recall} = \dfrac{\#\,\text{relevant retrieved items}}{\#\,\text{relevant items}}$   (19)
Different trade-offs can be achieved by adjusting the number of retrieved items. A PR curve is plotted by averaging over all queries. The mAP is defined as the area under the PR curve, and represents overall retrieval performance.
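For reference, a small sketch of how these metrics can be computed for one query from the relevance flags of the ranked results (AP here sums precision at the relevant ranks, i.e. the usual discrete area under the PR curve; names are ours):

```python
import numpy as np

def pr_and_ap(relevant_ranked):
    """relevant_ranked: 0/1 relevance of ranked results, best match first."""
    rel = np.asarray(relevant_ranked, dtype=float)
    hits = np.cumsum(rel)
    ranks = np.arange(1, len(rel) + 1)
    precision = hits / ranks                  # Eqn. (18) at every cutoff
    recall = hits / max(rel.sum(), 1)         # Eqn. (19) at every cutoff
    ap = (precision * rel).sum() / max(rel.sum(), 1)
    return precision, recall, ap

p, r, ap = pr_and_ap([1, 0, 1, 1, 0])         # toy ranked relevance list
print(round(ap, 3))                           # 0.806
```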
V-B The baselines
The SSDH [10] and the NetVLAD [9] algorithms are the main baselines in this work. Since RV-SSDH is a pluggable component, the actual implementation of a complete network also depends on the preceding layers. For a particular setting of convolutional and FC layers, denoted by CNN, the following configurations are considered:
1) CNN;

2) CNN+(FC)+SSDH;

3) CNN+NetVLAD;

4) CNN+RV-SSDH.
Specifically, three CNNs are used: the first two are the well-known AlexNet [33] and VGG-F [34], which test RV-SSDH in a transfer learning scenario; the third one is a small custom network defined in Table I, which we call ToyNet. When AlexNet or VGG-F is used, parameters of pre-trained models (based on ImageNet) are loaded into the convolutional layers, while ToyNet is trained from scratch. Note that when a CNN is used alone, it includes convolutional layers and possibly two FC layers (for AlexNet/VGG-F); when a CNN is used together with another block, only its convolutional layers are used. For example, AlexNet+RV-SSDH means the convolutional layers of AlexNet (conv1–conv5) are combined with RV-SSDH. A special case is AlexNet/VGG-F+FC+SSDH, where two FC layers are added in the middle. This is because we find that without the FC layers it is difficult to make the training converge.
It should also be noted that the original NetVLAD uses triplet-based training [9, 27]. It is modified to use pointwise training in our implementation. This modification not only guarantees a fair comparison with the others, but also reveals the performance of NetVLAD in a more general setting.
For the same test, the same number of epochs (typically 50) is used for all candidate algorithms. For SSDH and RV-SSDH, hash lengths from 8 to 128 bits are mainly considered.
V-C Retrieval performance
Figure 3 shows a comparison of mAP values, where the dataset is MNIST and the base network is ToyNet. For SSDH and RV-SSDH, the mAP varies with the hash length; for NetVLAD, only the best mAP is shown; for ToyNet, the mAP is constant. One can see that using ToyNet alone leads to a noticeably lower mAP, which NetVLAD boosts substantially. This large difference confirms the effectiveness of NetVLAD. On the other hand, SSDH performs even better, and RV-SSDH gives the highest mAP. The observation gives a basic ranking: RV-SSDH > SSDH > NetVLAD > ToyNet.
Figure 4 shows a comparison of mAP values, where the dataset is CIFAR-10 and the base network is ToyNet. A similar trend is observed, but the mAP gain over ToyNet is not as large as in Fig. 3, especially for NetVLAD. This is because CIFAR-10 is more difficult than MNIST and ToyNet is not a sophisticated network. The advantage of RV-SSDH becomes more noticeable in this case.
Figure 5 shows a comparison of mAP values, where the dataset is CIFAR-10 and the base network is AlexNet. The trend stays the same. Since a more sophisticated base network is used, the mAP is generally much improved compared with ToyNet. RV-SSDH still performs the best, with a clear margin above SSDH.
The results from Fig. 3 to Fig. 5 are consistent, so one can basically conclude that RV-SSDH has the best retrieval performance among the candidate algorithms. These figures also provide some other insights. For example, the mAP typically increases with the hash length $L$ (especially when $L$ is small), but it might decrease when $L$ is too large. This could be a consequence of insufficient training: a larger $L$ requires more network parameters while the number of epochs is fixed, so a large $L$ does not necessarily guarantee better discrimination power. The results suggest that a moderate $L$ typically works best. Compared with the others, NetVLAD seems more sensitive to the choice of base network and dataset.
The mAP represents overall retrieval performance. More details and trade-offs can be found in the precision-recall curves. Figures 6–8 show some comparisons of PR curves for MNIST and CIFAR-10. In these figures, the parameter(s) of each curve correspond to the algorithm's highest mAP. For example, in Fig. 6 (MNIST), the best SSDH configuration is outperformed by the best RV-SSDH configuration, while NetVLAD's best curve is significantly above ToyNet's but clearly below SSDH's.
V-D Classification performance
Since the proposed RV-SSDH is trained in a classification framework (recall that there is a prediction layer after the hash layer), a question is whether a hash value is also suitable for classification.
To answer this question, Table II shows a comparison of the Top-1 error rate during validation, where the dataset is MNIST and the base network is ToyNet. In the table, the smallest and the two largest rates are worth noting. It is interesting to see that NetVLAD gives the largest value (SSDH gives the second largest), and the general performance ranking is RV-SSDH > SSDH > ToyNet > NetVLAD, which differs from the retrieval cases. Compared with Figs. 3–8, the results show a difference between classification and retrieval. In other words, good retrieval performance does not guarantee good classification performance, or vice versa. Nevertheless, for RV-SSDH, the behaviour is consistent.
More results are shown in Tables III–IV, where similar patterns can be observed. Note that in Table IV AlexNet performs better than SSDH but worse than RV-SSDH. The best accuracy achieved by RV-SSDH on CIFAR-10 with AlexNet is 86.19%, which is close to the accuracy (89%) reported in the AlexNet paper [33]. Considering that no data augmentation is used in our experiments and our AlexNet's accuracy is 82.61%, we conclude that RV-SSDH is likely to give a significant boost in classification performance.
Note that the error rates of SSDH and RV-SSDH are computed using unbinarized hash values (i.e. $\mathbf{h}$). Therefore the above results only prove that the continuous RV-SSDH output is useful for classification. Although the quantization error is generally small, the actual performance of binary hash values in classification is left for future investigation.
TABLE II: Top-1 error rate vs. hash length (MNIST, ToyNet).

Method  | 8 bits | 16 bits | 32 bits | 64 bits | 128 bits
ToyNet  | 0.0126 (independent of hash length)
NetVLAD | 0.0474 (independent of hash length)
SSDH    | 0.0147 | 0.0129  | 0.0117  | 0.0118  | 0.0116
RV-SSDH | 0.0101 | 0.0089  | 0.0087  | 0.0093  | 0.0088
TABLE III: Top-1 error rate vs. hash length (CIFAR-10, ToyNet).

Method  | 8 bits | 16 bits | 32 bits | 64 bits | 128 bits
ToyNet  | 0.2802 (independent of hash length)
NetVLAD | 0.5787 (independent of hash length)
SSDH    | 0.3127 | 0.2757  | 0.2675  | 0.2724  | 0.2714
RV-SSDH | 0.2780 | 0.2453  | 0.2375  | 0.2313  | 0.2357
TABLE IV: Top-1 error rate vs. hash length (CIFAR-10, AlexNet).

Method  | 8 bits | 16 bits | 32 bits | 64 bits | 128 bits
AlexNet | 0.1739 (independent of hash length)
NetVLAD | 0.2846 (independent of hash length)
SSDH    | 0.2306 | 0.2158  | 0.2029  | 0.2049  | 0.2079
RV-SSDH | 0.1637 | 0.1413  | 0.1472  | 0.1381  | 0.1407
V-E Complexity comparison
The complexity of RV-SSDH can be evaluated in two aspects: 1) the retrieval speed; 2) the training speed. Some quantitative results are given in this section. The experiments are performed on a computer with an Intel i7-8700K CPU, 16 GB memory, and an Nvidia GTX 1080 GPU. Figure 9 shows a comparison of retrieval speed (seconds per query) in log scale, where the dataset is CIFAR-10 and the base network is AlexNet. For RV-SSDH and SSDH, the retrieval speed is the same and the hash length is varied; for NetVLAD, the centroid number $K$ is varied, and the output length is $K \times D$; for AlexNet, the classification layer is removed, and the output is the 4096-dimensional FC feature.
Note that the distance metric for retrieval differs between algorithms: RV-SSDH/SSDH use the Hamming distance; NetVLAD uses the cosine distance; AlexNet uses the Euclidean distance. This is why NetVLAD is faster than AlexNet for the same output size. In the NetVLAD paper [9], the authors also propose to compress the output with PCA [39] to a lower dimension, which corresponds to the smaller output sizes here. To conclude, RV-SSDH and SSDH are always the fastest, and NetVLAD is faster than AlexNet for small $K$. Figure 10 shows the retrieval speed of RV-SSDH in linear scale. The actual query time is approximately linear in the hash length.
Table V shows a comparison of training speed, represented by the number of processed images per second (Hz). AlexNet is obviously the fastest, because the fewest layers are used. The speed of SSDH is about half that of AlexNet. The hash length is not shown in the table because we find that the training speed is insensitive to $L$. On the other hand, for RV-SSDH and NetVLAD, the speed actually depends on the anchor number $K$. For a small $K$, RV-SSDH can be faster than SSDH and get close to AlexNet. This is because the VLAD core reduces the complexity of the subsequent FC layers. In general, the speed of NetVLAD is about half that of RV-SSDH. The speed gain of RV-SSDH comes from removing the normalization layers. To conclude, RV-SSDH is almost always a good choice in terms of retrieval accuracy and complexity.
TABLE V: Training speed comparison (processed images per second).

Method  | Speed (Hz)
AlexNet | 987
SSDH    | 540
NetVLAD | 410 (K=8), 360 (K=16), 310 (K=32), 240 (K=64)
RV-SSDH | 900 (K=8), 800 (K=16), 620 (K=32), 415 (K=64)
V-F Large-scale retrieval
Besides MNIST and CIFAR-10, the much larger dataset Places365 [38] is also used to evaluate RV-SSDH. Since the previous results show that NetVLAD generally performs worse than SSDH, it is no longer considered in this test. Table VI shows an mAP comparison between RV-SSDH and SSDH for hash lengths 256 and 512 (larger hash lengths are used here because the dataset is much larger than before). Two base networks are used: AlexNet and VGG-F [34]. Although the margin is not as large as in the CIFAR-10 case, RV-SSDH still outperforms SSDH by 2%–4.9% in mAP. We also note that VGG-F works better than AlexNet, and the 256-bit length works better than the 512-bit length (perhaps due to insufficient training).
TABLE VI: mAP comparison on Places365.

Method                | VGG-F  | AlexNet
RV-SSDH (K=32, L=256) | 0.2023 | 0.1904
RV-SSDH (K=32, L=512) | 0.1835 | 0.1679
SSDH (L=256)          | 0.1822 | 0.1413
SSDH (L=512)          | 0.1595 | 0.1292
V-G The choice of VLAD parameters
RV-SSDH has parameters $K$ (the number of anchors) and $L$ (the hash length). The choice of $L$ typically depends on the number of classes, while $K$ controls the level of aggregation. By varying $K$ and $L$ and comparing the mAP, some tests are performed to find suitable values for these parameters. While the hash length typically depends on the dataset size and the number of classes, it is not straightforward to see the best value of $K$. According to the results in Figures 11–13, moderate values of $K$ are generally good choices for the tested datasets. Since $K$ also controls the complexity of the algorithm (see Table V), our rule of thumb is that $K$ should not be too large.
V-H The effects of random VLAD
The random VLAD layer in RV-SSDH is inspired by VLAD and NetVLAD, but its random nature makes it different from its ancestors. What happens if the original NetVLAD is used in RV-SSDH? Some tests are performed to answer this question. The results are shown in Figures 14–15, where the dataset is CIFAR-10 and the base network is AlexNet. There is a significant performance drop if random VLAD is replaced by NetVLAD, and even non-convergence in one case (K=16, L=8). Therefore, "NetVLAD+SSDH" is not a good option.
V-I The effects of the transform layer
The transform layer is placed between the random VLAD layer and the hash layer. Its effects are verified by running another set of tests without the transform layer and comparing the mAP values. The results are shown in Figs. 16–17 for two scenarios. In the best case, the transform layer increases the mAP by approximately 10%; in general, the gain is about 1%–7%. Therefore, it is good practice to keep the transform layer.
V-J Comparison with other hash algorithms
The previous results already show the superiority of RV-SSDH over SSDH and NetVLAD. In order to position RV-SSDH among other state-of-the-art hash algorithms, more tests are carried out with CIFAR-10 and VGG-F, in two settings with small hash lengths: Case1 and Case2 (see the description in [25]). The results are shown in Tables VII–VIII and compared with data collected from [25]. One can see that RV-SSDH still has advantages over a few baselines (such as the classic SH [17], ITQ [40], and KSH [22]), but it is outperformed by MIHash [25], a recent deep hash algorithm. This is not a surprising result, because RV-SSDH uses pointwise training with complexity O(N), while MIHash (and many other baselines) uses pairwise training with complexity O(N²). In fact, Case1 is a disadvantageous situation for pointwise algorithms, where the training set is only 10% of the total. Given its reasonable performance in Case2, RV-SSDH is still an attractive "economic" solution, taking into account its low computational cost. Note that RV-SSDH can be modified to use pairwise or triplet training too, which is a promising path for future research; on the other hand, pairwise training might prohibit large hash lengths due to the rapidly increasing complexity, thus reducing versatility.
TABLE VII: mAP vs. hash length (CIFAR-10, VGG-F).

Method      | 12 bits | 24 bits | 32 bits | 48 bits
SH [17]     | 0.183   | 0.164   | 0.161   | 0.161
ITQ [40]    | 0.237   | 0.246   | 0.255   | 0.261
SPLH [41]   | 0.299   | 0.33    | 0.335   | 0.33
KSH [22]    | 0.488   | 0.539   | 0.548   | 0.563
SDH [42]    | 0.478   | 0.557   | 0.584   | 0.592
RV-SSDH     | 0.551   | 0.589   | 0.603   | 0.598
MIHash [25] | 0.738   | 0.775   | 0.791   | 0.816
VI Conclusion and discussion
In this work, we propose RV-SSDH, a deep hash algorithm that incorporates VLAD (vector of locally aggregated descriptors) into a neural network architecture. The core of RV-SSDH is a random VLAD layer coupled with a latent hash layer through a transform layer. It is a pointwise algorithm that can be efficiently trained by minimizing classification error and quantization loss. This novel construction significantly outperforms baselines such as NetVLAD and SSDH in both accuracy and complexity, and thus offers an alternative trade-off in the state of the art. Our future work might include pairwise or triplet training, adding a GAN, or a multi-scale extension [32].
Our experiment results also reveal some drawbacks of NetVLAD: 1) the normalization steps are slow; 2) the initialization of anchors is cumbersome and inflexible (even ineffective); 3) it is not suitable for pointwise training. These issues make our random VLAD an interesting alternative.
Li Weng is currently an Assistant Professor at Hangzhou Dianzi University. He received his PhD in electrical engineering from the University of Leuven (Belgium) in 2012. He worked on encryption, authentication, and hash algorithms for multimedia data. He then worked at the University of Geneva (Switzerland) and Inria (France) on large-scale CBIR systems with an emphasis on privacy protection. He was a postdoctoral researcher at IGN, the French Mapping Agency. His research interests include multimedia signal processing, machine learning, and information security.
Lingzhi Ye received her Bachelor's degree in Automation from Hangzhou Dianzi University, Hangzhou, China, in 2017. She is currently working towards the Master's degree at Hangzhou Dianzi University. Her research interests include deep learning and computer vision.
Jiangmin Tian received her B.S. degree in software engineering and PhD degree in control science and engineering from Huazhong University of Science and Technology, Wuhan, China, in 2010 and 2019 respectively. She is currently a lecturer at Hangzhou Dianzi University. Her research interests include machine learning and computer vision. 
Jiuwen Cao received the B.Sc. and M.Sc. degrees from the School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu, China, in 2005 and 2008, respectively, and the Ph.D. degree from the School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore, in 2013. From 2012 to 2013, he was a Research Fellow with NTU. He is now a Professor at Hangzhou Dianzi University, Hangzhou, China. His research interests include machine learning, artificial neural networks, intelligent data processing, and array signal processing. He is an Associate Editor of IEEE Transactions on Circuits and Systems I: Regular Papers, Journal of the Franklin Institute, Multidimensional Systems and Signal Processing, and Memetic Computing. He has served as a Guest Editor of Journal of the Franklin Institute and Multidimensional Systems and Signal Processing.
Jianzhong Wang received the Bachelor's degree from the School of Computer Science and Engineering, Xidian University, Xi'an, China, in 1985, and the Master's degree from the School of Computer Science and Engineering, Zhejiang University, Hangzhou, China, in 1993. He has been a faculty member at Hangzhou Dianzi University, Hangzhou, since 1985, where he is currently a Professor. He served as Vice Dean of the School of Automation from 2000 and has been serving as its Dean since 2016. He has published extensively in international journals and conferences and holds over 30 patents. His current research interests include computer information system development, computer control, embedded systems, and system modeling and optimization. He has received a number of national science and technology awards, including the Second Prize of the State-Level Teaching Award.
References
[1] A. Oliva and A. Torralba, "Modeling the shape of the scene: A holistic representation of the spatial envelope," International Journal of Computer Vision, vol. 42, no. 3, pp. 145–175, 2001.
[2] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[3] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary robust independent elementary features," in ECCV, 2010, pp. 778–792.
[4] J. Sivic and A. Zisserman, "Video Google: a text retrieval approach to object matching in videos," in Proc. of IEEE International Conference on Computer Vision (ICCV), Oct 2003, pp. 1470–1477.
[5] H. Jegou, M. Douze, and C. Schmid, "Product quantization for nearest neighbor search," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 117–128, 2011.
[6] H. Jégou, M. Douze, C. Schmid, and P. Pérez, "Aggregating local descriptors into a compact image representation," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010, pp. 3304–3311.
[7] J. Wang, W. Liu, S. Kumar, and S. F. Chang, "Learning to hash for indexing big data – A survey," Proceedings of the IEEE, vol. 104, no. 1, pp. 34–57, Jan 2016.
[8] H. Jégou, M. Douze, and C. Schmid, "Hamming embedding and weak geometric consistency for large scale image search," in European Conference on Computer Vision, 2008, pp. 304–317.
[9] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, "NetVLAD: CNN architecture for weakly supervised place recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1437–1451, June 2018.
[10] H. F. Yang, K. Lin, and C. S. Chen, "Supervised learning of semantics-preserving hash via deep convolutional neural networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 437–451, Feb. 2018.
[11] M. Schneider and S.-F. Chang, "A robust content based digital signature for image authentication," in Proc. of International Conference on Image Processing, vol. 3, 1996, pp. 227–230.
[12] A. Varna and M. Wu, "Modeling and analysis of correlated binary fingerprints for content identification," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1146–1159, Sep. 2011.
[13] L. Weng, I.-H. Jhuo, and W.-H. Cheng, Big Data Analytics for Large-Scale Multimedia Search. Wiley, 2019, ch. Perceptual Hashing for Large-Scale Multimedia Search, pp. 239–265.
[14] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, "Locality-sensitive hashing scheme based on p-stable distributions," in Proc. of 20th Symposium on Computational Geometry (SCG), 2004, pp. 253–262.
[15] M. S. Charikar, "Similarity estimation techniques from rounding algorithms," in Proc. of 34th Annual ACM Symposium on Theory of Computing (STOC), 2002, pp. 380–388.
[16] M. Slaney and M. Casey, "Locality-sensitive hashing for finding nearest neighbors [lecture notes]," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 128–131, March 2008.
[17] Y. Weiss, A. Torralba, and R. Fergus, "Spectral hashing," in Advances in Neural Information Processing Systems (NIPS), 2009, pp. 1753–1760.
[18] Y. Gong and S. Lazebnik, "Iterative quantization: A procrustean approach to learning binary codes," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 817–824.
[19] K. He, F. Wen, and J. Sun, "K-means hashing: An affinity-preserving quantization method for learning binary compact codes," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2938–2945.
[20] B. Kulis and K. Grauman, "Kernelized locality-sensitive hashing for scalable image search," in Proc. of IEEE International Conference on Computer Vision (ICCV), 2009, pp. 2130–2137.
[21] J. Wang, S. Kumar, and S.-F. Chang, "Semi-supervised hashing for scalable image retrieval," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 3424–3431.
[22] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang, "Supervised hashing with kernels," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2074–2081.
[23] A. Torralba, R. Fergus, and Y. Weiss, "Small codes and large image databases for recognition," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
[24] R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang, "Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4766–4779, Dec 2015.
[25] F. Cakir, K. He, S. A. Bargal, and S. Sclaroff, "Hashing with mutual information," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 10, pp. 2424–2437, Oct 2019.
[26] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu, "Learning fine-grained image similarity with deep ranking," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014, pp. 1386–1393.
[27] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 815–823.
[28] F. Zhao, Y. Huang, L. Wang, and T. Tan, "Deep semantic ranking based hashing for multi-label image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1556–1564.
[29] K. He, F. Çakir, S. A. Bargal, and S. Sclaroff, "Hashing as tie-aware learning to rank," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4023–4032.
[30] Y. Cao, B. Liu, M. Long, and J. Wang, "HashGAN: Deep learning to hash with pair conditional Wasserstein GAN," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1287–1296.
[31] S. Lloyd, "Least squares quantization in PCM," IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[32] Z. Shi, L. Zhang, Y. Sun, and Y. Ye, "Multiscale multitask deep NetVLAD for crowd counting," IEEE Transactions on Industrial Informatics, vol. 14, no. 11, pp. 4953–4962, Nov 2018.
[33] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097–1105.
[34] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, "Return of the devil in the details: Delving deep into convolutional nets," in British Machine Vision Conference, 2014, p. 12.
[35] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[36] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[37] A. Krizhevsky, "Learning multiple layers of features from tiny images," University of Toronto, Tech. Rep., 2009.
[38] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, "Places: A 10 million image database for scene recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452–1464, Jun. 2018.
[39] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[40] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin, "Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2916–2929, 2013.
[41] J. Wang, S. Kumar, and S.-F. Chang, "Sequential projection learning for hashing with compact codes," in International Conference on Machine Learning, 2010, pp. 1127–1134.
[42] F. Shen, C. Shen, W. Liu, and H. T. Shen, "Supervised discrete hashing," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 37–45.
[43] W. Li, S. Wang, and W. Kang, "Feature learning based deep supervised hashing with pairwise labels," in International Joint Conference on Artificial Intelligence, 2016, pp. 1711–1717.