Robust Hashing for Multi-View Data: Jointly Learning Low-Rank Kernelized Similarity Consensus and Hash Functions
Abstract
Learning hash functions/codes for similarity search over multi-view data is attracting increasing attention, where similar hash codes are assigned to data objects characterizing a consistent neighborhood relationship across views. Traditional methods in this category inherently suffer from three limitations: 1) they commonly adopt a two-stage scheme where the similarity matrix is first constructed, followed by a subsequent hash function learning step; 2) they are commonly developed under the assumption that data samples with multiple representations are noise-free, which is not practical in real-life applications; 3) they often incur a cumbersome training model caused by constructing the neighborhood graph over all points in the database. In this paper, we motivate the problem of jointly and efficiently training robust hash functions over data objects with multi-feature representations which may be noise-corrupted. To achieve both robustness and training efficiency, we propose an approach to effectively and efficiently learning low-rank kernelized hash functions shared across views (we use the term kernelized similarity rather than kernel, since the data-landmark affinity matrix is not a square symmetric matrix). Specifically, we utilize landmark graphs to construct tractable similarity matrices in multiple views that automatically discover the neighborhood structure in the data. To learn robust hash functions, a latent low-rank kernel function is used to construct the hash functions in order to accommodate linearly inseparable data. In particular, a latent kernelized similarity matrix is recovered by rank minimization over multiple kernel-based similarity matrices. Extensive experiments on real-world multi-view datasets validate the efficacy of our method in the presence of error corruptions.
Lin Wu, Yang Wang The University of Adelaide, Australia The University of New South Wales, Kensington, Sydney, Australia lin.wu@adelaide.edu.au wangy@cse.unsw.edu.au
1 Introduction
Hashing is dramatically efficient for similarity search, encoding data into low-dimensional binary codes with low storage cost. Numerous hashing methods for a single data source have been proposed, which can be classified into data-independent hashing, such as locality sensitive hashing (LSH) [?], and data-dependent (learning-based) hashing [?; ?].
In real-life situations, data objects can be described in multiple view (feature) spaces, where each view characterizes an individual property; e.g., an image can be described by color histograms and textures, and the two features turn out to be complementary to each other [?; ?; ?; ?; ?; ?; ?; ?; ?]. Consequently, a wealth of multi-view hashing methods [?; ?; ?; ?; ?; ?] have been developed to effectively leverage complementary priors from multiple views to improve similarity search performance. The critical issue is to ensure that the learned hash codes preserve the original data similarities with respect to the view-dependent feature representations. To be specific, similar hash codes are assigned to data objects that consistently capture the nearest-neighborhood structure across all views.
1.1 Motivation
Despite the improved performance delivered by existing multi-view hashing methods [?; ?; ?; ?; ?; ?], some fundamental limitations can be identified:

The learning process is conducted in a two-stage mechanism where hash functions are learned based on a pre-constructed data similarity matrix. These methods commonly assume that data samples are noise-free under multiple views, whereas in real-world applications input data objects may be noisy (e.g., missing values in pixels), resulting in the corresponding similarity matrices being corrupted by considerable noise [?; ?]. Moreover, the recovery of consensus or requisite similarity values across views in the presence of noise contamination remains an unresolved challenge in multi-view data analysis [?; ?; ?].
This motivates us to deliver a framework that jointly and effectively learns similarity matrices and robust hash functions with kernel functions plugged in, since the kernel trick is able to handle linearly inseparable data [?]. To this end, a latent kernelized similarity matrix shared across views is recovered by using low-rank representation (LRR) [?], which is robust to corrupted observations. The recovered low-rank kernelized similarity matrix reaches a consensus across views and can reveal the true underlying structure of the data points.

State-of-the-art multi-view hashing methods are less efficient in their learning procedure because the learning is performed by building and accessing a neighborhood graph over all points, which is intractable in offline training when the database is large.
To this end, we are further motivated to employ a landmark graph to build an approximate neighborhood graph using landmarks [?; ?], in which the similarity between a pair of data points is measured with respect to a small number of landmarks (typically a few hundred). The resulting graph can be built efficiently, is sufficiently sparse, and its performance approaches that of the true nearest-neighbor graph as the number of landmarks increases [?].
1.2 Our Method
In this paper, we propose a novel approach to robust multi-view hashing that effectively and efficiently learns a set of hash functions and a low-rank kernelized similarity matrix shared by multiple views.
We remark that our method is fundamentally different from existing multi-view hashing methods that are conditioned on corruption-free similarities, which limits their application to real-world tasks. Instead, we propose to learn hash functions and kernel-based similarities under a more realistic scenario with noisy observations. Our method is also advantageous in terms of efficiency due to the employment of approximate neighborhoods via landmark graphs. We refer to the recovered low-rank similarity matrix in the kernel functions as kernelized rather than kernel, since it is not a symmetric matrix yet characterizes nonlinear similarities. The proposed method is also different from partial-view studies [?; ?], which consider the case where some modalities of the data examples are missing. Our approach follows the setting of multi-view learning, which aims to improve an existing single-view model by learning a model utilizing data collected from multiple channels [?; ?; ?; ?; ?; ?], where all data samples have full information in all views.
In our framework, low-rank minimization is enforced to yield a consensus-reaching kernelized similarity matrix shared by multiple views, where larger similarity values indicate that the corresponding data objects come from the same cluster, while smaller similarity values imply that they come from distinct clusters. Thus, the learned low-rank similarity matrix over multiple views can reflect the underlying clustering information.
Technically, the nonlinear kernelized similarity matrix of each view can be decomposed into three components: (1) a latent low-rank kernelized similarity matrix, representing the nonlinear requisite or consensus similarities shared across views; (2) a view-dependent redundancy characterizing its individual similarities; and (3) possible error corruptions of the view-specific representations. We unify the view redundancy and errors into a single error term and impose an $\ell_{2,1}$-norm constraint on it. This is because view redundancy and disturbing errors are always sparsely distributed, and minimizing the $\ell_{2,1}$ norm identifies the nonzero sparse columns revealing the corresponding redundancy/errors. Note that in this work, “error” generally refers to error corruptions or perturbations, e.g., noise or missing values, in the view-dependent feature values. These principles are formulated into an objective function, which is optimized based on the inexact Augmented Lagrange Multiplier (ALM) scheme [?]. It allows us to jointly learn a corruption-free latent low-rank nonlinear similarity and optimal hash functions for multi-view data, where hash codes are restricted to well preserve the local (neighborhood) geometric structure in each view. We remark that several cross-view semantic hashing algorithms [?; ?; ?] have been developed to embed multiple high-dimensional features from heterogeneous data sources into one Hamming space while preserving their original similarities. Our setting is fundamentally different from cross-view/modal hashing in that we aim to leverage multiple features to jointly learn hash functions and a latent nonlinear similarity matrix over a homogeneous data source. To the best of our knowledge, we are the first to systematically address the problem of multi-view hashing with possible data error corruptions.
1.3 Contributions
The major contributions of this paper are threefold.

We motivate the problem of robust hashing over multi-view data with nonlinear data distributions, and propose to learn robust hash functions and a low-rank kernelized similarity matrix shared across views.

An iterative low-rank recovery optimization technique is proposed to learn the robust hash functions. For the sake of efficiency, the neighborhood graph is approximated using landmark graphs with sparse connections between data points.

Extensive experiments conducted on real-world multi-view datasets validate the efficacy of our method in the presence of error corruptions of the multi-view feature representations.
2 Related Work
2.1 Multiview Learning based Hashing
The purpose of multi-view learning based hashing is to learn better hash codes by leveraging multiple views. Some recent representative works include Multiple Feature Hashing (MFH) [?], Composite Hashing with Multiple Sources (CHMS) [?], Compact Kernel Hashing with multiple features (CKH) [?], and Multi-view Sequential Spectral Hashing (SSH) [?]. However, these methods share a common drawback: they typically apply a spectral graph technique (e.g., a nearest-neighbor graph) to model the similarities between data points. In general, the complexity of constructing the similarity matrix is quadratic in the number of data points, which is not pragmatic in large-scale applications. Moreover, the similarity matrix induced by graph construction is very sensitive to noise corruptions. To avoid constructing the similarity matrix, Shen et al. [?] present Multi-View Latent Hashing (MVLH), which learns hash codes by performing matrix factorization on a unified kernel feature space over multiple views. Nonetheless, there are significant differences between MVLH and our approach. First, in MVLH matrix factorization is performed on a unified kernel space formed by simply concatenating multiple kernel feature spaces, which discards the distinct local structures of individual views. By contrast, our kernelized similarity matrix is constructed with respect to the distinct characteristics of each view. Second, MVLH neglects the case of potential noise corruption in the data samples. In this respect, we deliberately employ low-rank representation (LRR) [?] to recover latent subspace structures from corrupted data.
2.2 Lowrank Modeling
Low-rank modeling is attracting increasing attention due to its capability of recovering the underlying structure among data objects [?; ?; ?; ?; ?; ?]. It has seen striking success in many applications such as data compression [?], subspace clustering [?; ?; ?], and image processing [?; ?; ?]. For instance, in [?], Zhang et al. consider a joint formulation of recovering low-rank and sparse subspace structures for robust representation.
Nowadays, data are usually collected from diverse domains or obtained from various feature extractors, and each group of features can be regarded as a particular view [?]. Moreover, these data can easily be corrupted by potential noise (e.g., missing pixels or outliers) or large variations (e.g., pose variations in face images) in real applications. In practice, the underlying structure of the data may comprise multiple subspaces, and thus Low-Rank Representation (LRR) is designed to find subspace structures in noisy data [?; ?]. Multi-view low-rank analysis [?] is a recently proposed multi-view learning approach which introduces a low-rank constraint to reveal the intrinsic structure of the data, and identifies outliers via the representation coefficients in low-rank matrix recovery.
In this paper, we are the first to apply low-rank learning to reveal the structured kernelized similarity among multi-view data, and to scale it well to large-scale applications.
3 Robust Multiview Hashing
3.1 Preliminary and Problem Definition
Consider an embedding function for each nonlinear feature space, each of which corresponds to one view. Following Kernelized Locality Sensitive Hashing [?], we uniformly select a set of landmark samples from the training set to construct the kernelized similarity matrices under multiple views. Given a sample represented by its feature vector, each hash bit can be generated via a linear projection:
(1) 
where the first operator denotes the element-wise sign function, which is 1 if its argument is greater than or equal to 0 and -1 otherwise. The projection indicates a linear combination of the landmarks, which can be taken to be the cluster centers [?] obtained via scalable k-means clustering over the feature space, and a bias term is included. Then, we have
(2) 
where one factor denotes a column of the landmark matrix, and the other denotes the kernelized similarity matrix between the landmarks and the samples, corresponding to the kernelized representation. Accordingly, the hash code of a sample can be rewritten in kernel form,
(3) 
where and .
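As a concrete illustration, the kernelized hashing scheme described above can be sketched as follows (a minimal sketch: all function and variable names are our own, and the Gaussian RBF kernel and sign binarization follow the surrounding description, not the paper's exact code):

```python
import numpy as np

def rbf_kernel_to_landmarks(X, landmarks, sigma=1.0):
    """Kernelized similarities between n samples and m landmarks (n x m)."""
    # Pairwise squared Euclidean distances, then a Gaussian RBF kernel.
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hash_codes(X, landmarks, W, b, sigma=1.0):
    """Binary codes: sign of the projected kernelized representation.

    W: (m, r) projection over landmarks, b: (r,) bias, r = code length.
    """
    K = rbf_kernel_to_landmarks(X, landmarks, sigma)
    return np.where(K @ W + b >= 0, 1, -1)
```

In this sketch each sample is first mapped to its vector of similarities to the landmarks, so the learned projection lives in the (small) landmark space rather than the original feature space.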
Given a set of training samples that may contain errors, each sample has one feature vector per view, with a view-specific dimensionality. Stacking these vectors yields a view matrix for the corresponding feature over all training data, and concatenating the features across all views gives the full vector representation of the training data. We denote the hash codes of the training samples corresponding to all features, as well as the hash codes of the training data for each individual view. We aim to learn a latent low-rank kernel matrix shared across the multiple kernels, and to construct a set of robust hash functions for the multi-view data, whose number equals the hash code length. The kernel function is plugged into the hash function because the kernel trick has been theoretically and empirically proven able to handle data distributions that are almost linearly inseparable [?].
3.2 Lowrank Kernelized Similarity Recovery from Multiviews
Given a collection of high-dimensional multi-view data samples that may contain certain errors in each view-specific representation, we construct multiple nonlinear feature spaces, each of which represents one feature view. To leverage the multiple complementary representations, we propose to derive a consensus low-rank kernelized similarity matrix, recovered from the corrupted data objects and shared across views. This low-rank nonlinear similarity matrix is considered the most requisite component, whilst each view also contains individual non-requisite information, including redundancy and errors. We explicitly model the redundancy via sparsity, since multi-view studies suggest that each individual view is sufficient to identify most of the similarity structure, and the deviation between the requisite component and a data sample is sparse [?]. In reality, data samples can be grossly corrupted due to sensor failures or communication errors. Thus, an $\ell_{2,1}$ norm is adopted to characterize the errors, since they usually cause column sparsity in an affinity matrix [?].
In our framework, the low-rank similarity matrix is kept sparse by relating data samples to landmarks only, thus ensuring the efficiency of our approach. The latent low-rank kernelized similarity matrix can therefore be recovered through a low-rank constraint on the shared component and a sparse constraint on each view-specific error term, that is,
(4) 
where a trade-off parameter balances the two terms, and the error term encodes the sum of the error corruptions and possible noise information of each view.
3.3 Objective Function
Many studies [?; ?] have shown the benefit of exploiting the local structure of the training data to infer accurate and compact hash codes. However, all of these algorithms are sensitive to error corruptions, which hampers their effectiveness in practical situations. By contrast, we propose to jointly learn hash codes that preserve the local similarities in multiple views while being robust to errors. To exploit the local structure of each view, we define one affinity matrix per view, that is,
where the neighborhood is the nearest-neighbor set, and the Euclidean distance is employed in each feature space to determine it. A reasonable criterion for learning hash codes from each view is to ensure that similar objects in the original space have similar binary hash codes. This can be formulated as below:
(5) 
Given a training sample, we expect its optimal hash code to be consistent with the distinct hash codes derived from each view. In this way, the local geometric structure of each single view can be globally optimized. Therefore, we have
(6) 
where a trade-off parameter weights the consistency term. The main bottleneck of the above formulation is computation: the cost of building the underlying graph and its associated affinity matrix is quadratic in the number of training samples, which is intractable for large datasets. To avoid this computational bottleneck, we employ a landmark graph, using a small set of points called landmarks to approximate the data neighborhood structure [?]. Similarities of all database points are measured with respect to these landmarks, and the true adjacency/similarity matrix of each view is approximated using these similarities. First, k-means clustering (in practice, running the k-means algorithm on a small subsample of the database with very few iterations is sufficient) is performed on the data points to obtain a small number of cluster centers that act as landmark points. Next, the landmark graph defines the truncated similarities between all data points and the landmarks as,
where the index set contains the indices of the few nearest landmarks of each point according to a distance function, such as the Euclidean distance, and a bandwidth parameter controls the decay. Note that the resulting matrix is highly sparse: each row contains only a few nonzero entries, which sum to 1. Thus, the landmark graph provides a powerful approximation to the adjacency matrix [?].
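The truncated landmark similarities above can be sketched as follows (a simplified illustration; the names and the Gaussian weighting are assumptions based on the description, with each row keeping only its few nearest landmarks and renormalizing the weights to sum to 1):

```python
import numpy as np

def landmark_graph(X, centers, s=3, sigma=1.0):
    """Sparse (n, m) landmark-similarity matrix Z.

    Each row keeps only the s nearest landmarks, with Gaussian weights
    renormalized so that every row sums to 1.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, m)
    idx = np.argsort(d2, axis=1)[:, :s]                        # s nearest landmarks
    rows = np.arange(X.shape[0])[:, None]
    w = np.exp(-d2[rows, idx] / (2.0 * sigma ** 2))            # truncated weights
    Z = np.zeros_like(d2)
    Z[rows, idx] = w / w.sum(axis=1, keepdims=True)            # row-normalize
    return Z
```

Because each row has only s nonzeros, downstream operations on Z cost time linear in the number of data points rather than quadratic.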
For ease of presentation, we introduce a shorthand notation. To learn a set of hash functions and a consensus nonlinear representation in a joint framework, we formulate the objective function of robust multi-view hashing as follows:
(7) 
where a trade-off parameter weights the terms, the sign constraint enforces the hash codes to be binary, and the orthogonality constraint is imposed to encourage bit decorrelation while avoiding the trivial solution. Due to the discrete constraints and non-convexity, the optimization problem in Eq. (7) is difficult to solve. Following spectral hashing [?], we relax the discrete constraints to be continuous, and then we have
We rewrite the objective function by further minimizing the least-squares error of the projection while regularizing the projection matrix, coupled with trade-off parameters; this yields
(8) 
Eq. (8) is still non-convex due to the orthogonality constraint. Fortunately, with any one group of variables fixed, the problem is convex with respect to the others. Therefore, we present an alternating optimization scheme that can efficiently find an optimum in a few steps. First, with the similarity variables given, we show that closed-form expressions for the projection and bias can be obtained. To compute the low-rank similarity and error terms, we employ an efficient optimization technique, the inexact augmented Lagrange multiplier (ALM) algorithm [?].
4 Optimization
4.1 Compute and
4.2 Compute and
With the remaining variables fixed, the problem becomes
(12) 
The rank-minimization problem has been well studied in the literature [?; ?]. By introducing an auxiliary variable for the low-rank term, Eq. (12) can be converted into the following equivalent form:
(13) 
where the two matrices of Lagrange multipliers, the matrix inner product, and an adaptive penalty parameter appear as usual. Next, we elaborate the update rule for each variable by minimizing the augmented Lagrangian while fixing the others.
Solving for
When the other variables are fixed, the corresponding subproblem is
(14) 
It can be solved by the Singular Value Thresholding (SVT) method [?]. More specifically, given the SVD of the matrix being thresholded, the updating rule in each iteration is
(15) 
where the shrinkage operator is applied to the singular values [?].
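The SVT update can be sketched as follows (a standard implementation of the singular value thresholding operator, not the paper's exact code):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values of M by tau.

    Solves min_X tau * ||X||_* + 0.5 * ||X - M||_F^2.
    """
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    S = np.maximum(S - tau, 0.0)   # soft-threshold the singular values
    return (U * S) @ Vt            # reassemble with shrunken spectrum
```

Singular values below the threshold are zeroed out, which is what drives the iterates toward a low-rank solution.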
Solving for
The corresponding subproblem can be simplified as
(16) 
which enjoys a closed-form solution.
Solving for
With the other variables fixed, we update the remaining variable by solving
(17) 
For ease of presentation, we define a stacked matrix. Then, the problem in Eq. (17) can be rewritten as
(18) 
Hence, the problem in Eq. (18) can be decomposed into a set of independent subproblems, each subject to its own norm constraint. Each subproblem is a proximal operator problem, which can be efficiently solved by the projection algorithm in [?].
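For reference, the ℓ2,1-regularized error update used elsewhere in the model admits a well-known closed-form column-wise shrinkage; a minimal sketch under the standard proximal form min_E τ‖E‖₂,₁ + ½‖E − M‖²_F (an assumption about the subproblem shape, offered alongside the projection algorithm cited above):

```python
import numpy as np

def prox_l21(M, tau):
    """Column-wise shrinkage: proximal operator of tau * ||.||_{2,1}.

    Columns whose l2 norm is below tau are zeroed (these are the
    'clean' columns); the rest are shrunk toward zero by tau.
    """
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
    return M * scale
```

The zeroed columns correspond exactly to the column-sparsity pattern that the ℓ2,1 norm is meant to induce on the error term.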
4.3 Learning Hash Codes
Once the hash function, implemented by the learned projection and bias, is obtained by exploiting the kernelized similarity consensus, we can generate hash codes for both database and query samples via Eq. (19).
(19) 
where each entry represents the similarity between the sample and a landmark, computed using a Gaussian RBF kernel over the concatenated feature space of all views.
4.4 OutofSample Extension
An essential part of hashing is generating binary codes for new samples, known as the out-of-sample problem. A widely used solution is the Nyström extension [?]. However, this is impractical for large-scale hashing, since the Nyström extension is as expensive as an exhaustive nearest-neighbor search over all data points. To address the out-of-sample extension problem, we employ a non-parametric regression approach inspired by Shen et al. [?]. Specifically, given the hashing embedding of the entire training set, for a new data point we aim to generate a hashing embedding that preserves the local neighborhood relationships among its neighbors in the training set. A simple inductive formulation produces the embedding for a new data point as a sparse linear combination of the base embeddings:
(20) 
where we define
However, Eq. (20) does not scale well when computing the out-of-sample extension for large-scale tasks. To this end, we employ a prototype algorithm [?] to approximate it using only a small base set:
(21) 
where the sign function binarizes the result, and the base embedding is the hashing embedding of the base set, namely the cluster centers obtained by k-means. In this stage, the major computational cost comes from k-means clustering, whose running time grows with the feature dimension and the number of k-means iterations. The iteration number can be set below 50, which keeps the k-means cost modest. Considering that the number of centers is much smaller than the training size, the total time is linear in the size of the training set; the remaining cost is that of computing the distances between a new sample and the base centers.
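A simplified sketch of this prototype-based out-of-sample extension (the Gaussian weighting over a few nearest centers and all names are our assumptions; the paper's exact formulation is Eq. (21)):

```python
import numpy as np

def out_of_sample_codes(Xq, centers, base_codes, s=3, sigma=1.0):
    """Codes for queries via a weighted vote over nearby base embeddings.

    centers: (m, d) k-means centers of the base set.
    base_codes: (m, r) hashing embeddings of those centers.
    """
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (q, m)
    idx = np.argsort(d2, axis=1)[:, :s]                         # s nearest centers
    rows = np.arange(Xq.shape[0])[:, None]
    w = np.exp(-d2[rows, idx] / (2.0 * sigma ** 2))             # (q, s) weights
    Y = (w[:, :, None] * base_codes[idx]).sum(axis=1)           # weighted combination
    return np.where(Y >= 0, 1, -1)                              # sign binarization
```

Only distances to the small base set are computed per query, which is what makes this extension cheap compared with the Nyström approach.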
5 Complexity Analysis
We analyze the per-iteration time complexity of the optimization strategy. The closed-form updates in Eq. (11) dominate the cost of computing the projection and bias variables. The landmarks are generated offline via scalable k-means clustering with fewer than 50 iterations, which keeps the cost of computing the landmark similarities low. The complexity of computing the hash code for a new sample is small. Overall, the time complexity per iteration is linear with respect to the training size.
6 Experiments
6.1 Experimental Settings
Competitors
We compare our method with recently proposed state-of-the-art multiple-feature hashing algorithms:

Multiple feature hashing (MFH) [?]: This method exploits local structure in each feature and global consistency in the optimization of hashing functions.

Composite hashing with multiple sources (CHMS) [?]: This method treats a linear combination of viewspecific similarities as an average similarity which can be plugged into a spectral hashing framework.

Compact kernel hashing with multiple features (CKH) [?]: It is a multiple feature hashing framework where multiple kernels are linearly combined.

Sequential spectral hashing with multiple representations (SSH) [?]: This method constructs an average similarity matrix to assemble viewspecific similarity matrices.

Multi-View Latent Hashing (MVLH) [?]: This is an unsupervised multi-view hashing approach where binary codes are learned via the latent factors shared by multiple views in a unified kernel feature space.
Datasets
We conduct the experiments on two image benchmarks: CIFAR-10 (http://www.cs.toronto.edu/~kriz/cifar.html) and NUS-WIDE.

CIFAR-10 consists of 60K 32×32 color images from ten object categories, each of which contains 6K samples. Every image is assigned a mutually exclusive class label; for each image, we extract a 512-dimensional GIST feature [?] and a 300-dimensional bag-of-words representation quantized from dense SIFT features [?] as the two views.

NUS-WIDE [?] contains 269,648 labeled images crawled from Flickr, manually annotated with 81 categories. Three types of features are extracted to construct three views: 128-dimensional wavelet texture, 225-dimensional block-wise color moments, and 500-dimensional bag-of-words.
Multiview Corruption Setting
On CIFAR-10, considering that missing features may have some structure, we remove a square patch of pixels from each image covering 25% of the total number of pixels. The location of the patch is uniformly sampled for each image, which naturally deteriorates the view-dependent feature representations. On NUS-WIDE, we consider the scenario where 20% of the feature values in each view are corrupted with perturbation noise following a standard Gaussian distribution.
Parameter Setting
In the training phase, we uniformly sample 30K and 100K images as training data from the two datasets, and generate 300 and 500 landmarks, respectively; these choices fix the graph-construction parameters on CIFAR-10 and NUS-WIDE. In the testing phase, we randomly select 1,000 query images, where the true neighbors of each image are defined as the semantic neighbors sharing at least one common semantic label. For our method and CKH, we use a Gaussian RBF kernel based on the Euclidean distance within each feature space. The kernel bandwidth is learned via the self-tuning strategy [?].
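The self-tuning bandwidth strategy can be sketched as follows (a common variant that sets each point's bandwidth to its distance to the k-th nearest neighbor; the function name and default k are our own assumptions):

```python
import numpy as np

def self_tuned_rbf(X, k=7):
    """RBF affinities with per-point bandwidths (self-tuning style).

    sigma_i is set to the distance from point i to its k-th nearest
    neighbor, so dense regions get narrow kernels and sparse ones wide.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = np.sqrt(d2)
    sig = np.sort(d, axis=1)[:, k]  # distance to the k-th neighbor
    return np.exp(-d2 / (sig[:, None] * sig[None, :] + 1e-12))
```

This removes the need to grid-search a single global bandwidth, which is convenient when the feature spaces of different views have very different scales.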
Evaluation Metric
The mean precision-recall and mean average precision (MAP) are computed over the retrieved set, consisting of the samples within a given Hamming distance [?] of a specific query, using 8 to 32 bits. We carry out hash lookup within a Hamming radius of 2 and report the mean hash-lookup precision over all queries. For a query, the average precision (AP) averages, over the ranked retrieval list, the precision at each position where a ground-truth neighbor is retrieved, normalized by the number of ground-truth neighbors of the query in the database. Ground-truth neighbors are defined as items sharing at least one semantic label. Given a query set, the MAP is defined as the mean of the average precision over all queries.
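The AP/MAP computation described above can be sketched as follows (a common truncated variant: here AP is normalized by the number of relevant items appearing in the retrieved list, rather than by all ground-truth neighbors in the database):

```python
import numpy as np

def average_precision(relevant):
    """AP over a ranked list; `relevant` is 0/1 per retrieved item."""
    relevant = np.asarray(relevant, dtype=float)
    if relevant.sum() == 0:
        return 0.0
    cum = np.cumsum(relevant)
    prec_at_k = cum / np.arange(1, len(relevant) + 1)  # precision@k
    return float((prec_at_k * relevant).sum() / relevant.sum())

def mean_average_precision(rel_lists):
    """MAP: mean of AP over all queries' relevance lists."""
    return float(np.mean([average_precision(r) for r in rel_lists]))
```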
6.2 Results
Method  CIFAR-10                                          NUS-WIDE
        P=8         P=32        P=48        P=128         P=8         P=32        P=48        P=128
MFH     23.31±0.71  28.19±0.48  26.38±0.68  23.68±0.71    23.52±0.72  26.49±0.85  33.55±0.49  34.97±0.81
CHMS    25.61±0.22  31.80±0.66  26.54±0.52  19.38±0.84    27.54±0.41  30.22±0.92  28.24±0.96  27.52±1.12
CKH     31.75±0.53  32.05±0.72  37.32±0.76  34.45±0.81    29.72±0.43  37.84±0.63  33.56±0.82  34.42±1.32
SSH     27.34±0.46  35.78±0.68  29.36±0.63  27.52±0.72    28.95±0.46  33.42±0.88  30.05±0.71  29.21±0.98
MVLH    32.27±0.41  40.24±0.63  44.81±0.46  42.06±0.62    31.92±0.62  39.05±0.87  40.31±0.52  36.12±0.70
Ours    36.73±0.41  47.63±0.52  51.22±0.36  46.57±0.44    34.21±0.48  46.35±0.47  44.33±0.34  43.08±0.32
Fig. 3: MAP over pairs of trade-off parameters, with the third parameter fixed in panels (a), (b), and (c), respectively.
We report the mean precision-recall curves of Hamming ranking and the mean average precision (MAP) w.r.t. different numbers of hashing bits over 1K query images. Results, computed from the top-100 retrieved samples, are shown in Fig. 1. It can be seen from the top subfigure of Fig. 1 that our method achieves a performance gain in both precision and recall over all counterparts, with MVLH the second best. This demonstrates the superiority of using nonlinear hash functions in a nonlinear space. More importantly, the latent consensus kernelized similarity matrix obtained by low-rank minimization is not only effective in leveraging complementary information from multiple views, but also robust to the presence of errors. The bottom subfigure of Fig. 1 shows that as the number of hashing bits varies, our method consistently maintains superior performance. Specifically, it reaches the highest precision at 48 bits and shows relatively steady performance with more hashing bits. The results on the NUS-WIDE database are shown in Fig. 2. Once again, we can see performance gaps in precision-recall between our approach and the competitors, as illustrated in the top subfigure of Fig. 2. This validates the advantage of exploiting the consensus of the kernelized similarity to learn robust nonlinear hash functions. In the bottom subfigure of Fig. 2, as the number of hashing bits increases, our method maintains high and steady MAP values.
Method  CIFAR-10 (s)          NUS-WIDE (s)
        Training    Test      Training    Test
MFH     32.8        6.4       41.6        8.5
CHMS    29.8        4.7       37.2        7.8
SSH     23.6        1.3       31.7        2.4
CKH     10.7        2.3       15.3        3.2
MVLH    20.4        2.2       28.1        4.3
Ours    14.1        2.6       19.2        3.5
To evaluate the impact of the number of hashing bits on hash-lookup performance, in Table 1 we report the hash-lookup mean precision with standard deviation (mean±std) for 8, 32, 48, and 128 bits on both databases. Similar to the Hamming ranking results, our method achieves better performance than the others, with a clear gain already at fewer than 32 bits, demonstrating that our approach with compact hash codes retrieves more semantically related images than all baselines in terms of hash lookup.
In Table 2, we report the comparison of training/test time on the two image benchmarks. CKH and our method are much more efficient, taking less than 15s and 20s, respectively, to train on CIFAR-10 and NUS-WIDE using 32 bits. The efficiency improvement comes from the usage of landmarks. While our method is slightly less efficient than CKH because of the low-rank kernelized similarity recovery, its cost is very comparable to CKH, and it is consistently superior to CKH in retrieval performance. MVLH is relatively costly due to the expensive matrix factorization in its kernel space. MFH and CHMS are time-consuming in the training stage because they both involve the eigendecomposition of a dense affinity matrix, which is not scalable to a large-scale setting. SSH gains efficiency compared with MFH and CHMS on account of its approximation in the k-nearest-neighbor graph construction [?].
6.3 Parameter Tuning
In this experiment, we test different parameter settings of our algorithm to study its performance sensitivity. We learn three trade-off parameters, corresponding to the requisite-component term, the non-requisite decomposition term, and the hash function learning term in Eq. (8). We tune each of them over a grid of candidate values, fixing one parameter and reporting the MAP while the other two vary. The results are shown in Fig. 3. In Fig. 3 (a), with the first parameter fixed, we show the performance variation over different pairs of the other two; we observe that our algorithm achieves a relatively high MAP over a broad range of settings. Similar behavior can be seen in Fig. 3 (b) and Fig. 3 (c). Thus, among the different combinations, the method attains its best performance at the selected combination while being relatively insensitive to varied parameter settings. With the optimal combination of parameters, we study the issue of convergence. In Fig. 4, we can observe that our algorithm converges in fewer than 40 iterations, demonstrating its fast convergence rate.
6.4 OutofSample Case
In this experiment, we study the out-of-sample extension property. We take the CIFAR-10 dataset as the base benchmark to train the base embeddings, and use the MNIST dataset as the testing bed. The MNIST dataset [?] consists of 70K images of handwritten digits from “0” to “9”, each of 784 dimensions. As shown in Fig. 5, our method achieves the best results. On this dataset, we can clearly see that our method outperforms MVLH by a large margin, which increases with the code length. This further demonstrates the advantage of kernelized low-rank embedding as a tool for hashing, embedding high-dimensional data into a lower-dimensional space. This dimensionality-reduction procedure not only preserves local neighborhoods but also reveals global structure.
7 Conclusion
In this paper, we motivate the problem of robust hashing for similarity search over multi-view data objects under the practical scenario in which the view-dependent feature representations are corrupted by errors. Unlike existing multi-view hashing methods that take a two-phase scheme of constructing similarity matrices and learning hash functions separately, we propose a novel technique to jointly learn the hash functions and a latent, low-rank, corruption-free kernelized similarity from multiple representations with potential noise corruptions. Extensive experiments conducted on real-world multi-view datasets demonstrate the efficacy of our method.
References
 [Bengio et al., 2004] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, and Marie Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.
 [Cai et al., 2010] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1957–1982, 2010.
 [Candes and Recht, 2009] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
 [Chua et al., 2009] Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, and Zhiping Luo. NUS-WIDE: a real-world web image database from National University of Singapore. In ACM CIVR, 2009.
 [Datar et al., 2004] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SOCG, 2004.
 [Deng et al., 2013] Yue Deng, Qionghai Dai, Risheng Liu, Zengke Zhang, and Sanqing Hu. Low-rank structure learning via nonconvex heuristic recovery. IEEE Transactions on Neural Networks and Learning Systems, 24(3):383–396, 2013.
 [Duchi et al., 2008] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In ICML, 2008.
 [Kim et al., 2012] Saehoon Kim, Yoonseop Kang, and Seungjin Choi. Sequential spectral learning to hash with multiple representations. In ECCV, pages 538–551, 2012.
 [Kulis and Grauman, 2009] Brian Kulis and Kristen Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009.
 [Kumar and Daumé III, 2011] Abhishek Kumar and Hal Daumé III. A co-training approach for multi-view spectral clustering. In ICML, 2011.
 [Kumar and Udupa, 2011] Shaishav Kumar and Raghavendra Udupa. Learning hash functions for cross-view similarity search. In IJCAI, pages 1360–1365, 2011.
 [Kumar et al., 2011] Abhishek Kumar, Piyush Rai, and Hal Daumé III. Co-regularized multi-view spectral clustering. In NIPS, 2011.
 [LeCun et al., 1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
 [Li et al., 2015] Sheng Li, Ming Shao, and Yun Fu. Multi-view low-rank analysis for outlier detection. In SIAM Data Mining, pages 748–756, 2015.
 [Lin et al., 2010] Zhouchen Lin, Minming Chen, and Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055, 2010.
 [Lin et al., 2015] Zhouchen Lin, Risheng Liu, and Huan Li. Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning. Machine Learning, (2):287–325, 2015.
 [Liu et al., 2010a] Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010.
 [Liu et al., 2010b] Wei Liu, Jun Wang, and Shih-Fu Chang. Large graph construction for scalable semi-supervised learning. In ICML, 2010.
 [Liu et al., 2011] Wei Liu, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Hashing with graphs. In ICML, 2011.
 [Liu et al., 2012a] Wei Liu, Jun Wang, Rongrong Ji, Yu-Gang Jiang, and Shih-Fu Chang. Supervised hashing with kernels. In CVPR, pages 2074–2081, 2012.
 [Liu et al., 2012b] Xianglong Liu, Junfeng He, Di Liu, and Bo Lang. Compact kernel hashing with multiple features. In ACM Multimedia, pages 881–884, 2012.
 [Liu et al., 2013] Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, and Yi Ma. Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):171–184, 2013.
 [Lowe, 2004] David Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
 [Masci et al., 2014] Jonathan Masci, Michael M. Bronstein, Alexander M. Bronstein, and Jürgen Schmidhuber. Multimodal similarity-preserving hashing. IEEE TPAMI, 36(4):824–830, 2014.
 [Oliva and Torralba, 2001] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
 [Ou et al., 2013] Mingdong Ou, Peng Cui, Fei Wang, Jun Wang, Wenwu Zhu, and Shiqiang Yang. Comparing apples to oranges: a scalable solution with heterogeneous hashing. In ACM SIGKDD, pages 230–238, 2013.
 [Shen et al., 2013] Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, and Zhenmin Tang. Inductive hashing on manifolds. In CVPR, pages 1562–1569, 2013.
 [Shen et al., 2015] Xiaobo Shen, Fumin Shen, Quan-Sen Sun, and Yun-Hao Yuan. Multi-view latent hashing for efficient multimedia search. In ACM Multimedia, pages 831–834, 2015.
 [Song et al., 2011] Jingkuan Song, Yi Yang, Zi Huang, Heng-Tao Shen, and Richang Hong. Multiple feature hashing for real-time large-scale near-duplicate video retrieval. In ACM Multimedia, pages 423–432, 2011.
 [Wang et al., 2010] Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Semi-supervised hashing for scalable image retrieval. In CVPR, pages 3424–3431, 2010.
 [Wang et al., 2013] Yang Wang, Xuemin Lin, and Qing Zhang. Towards metric fusion on multi-view data: a cross-view based graph random walk approach. In ACM CIKM, pages 805–810, 2013.
 [Wang et al., 2014] Yang Wang, Xuemin Lin, Lin Wu, Wenjie Zhang, and Qing Zhang. Exploiting correlation consensus: Towards subspace clustering for multimodal data. In ACM Multimedia, pages 981–984, 2014.
 [Wang et al., 2015a] Qifan Wang, Luo Si, and Bin Shen. Learning to hash on partial multimodal data. In IJCAI, pages 3904–3910, 2015.
 [Wang et al., 2015b] Yang Wang, Xuemin Lin, Lin Wu, and Wenjie Zhang. Effective multi-query expansions: Robust landmark retrieval. In ACM Multimedia, pages 79–88, 2015.
 [Wang et al., 2015c] Yang Wang, Xuemin Lin, Lin Wu, Wenjie Zhang, and Qing Zhang. LBMCH: Learning bridging mapping for cross-modal hashing. In ACM SIGIR, 2015.
 [Wang et al., 2015d] Yang Wang, Xuemin Lin, Lin Wu, Wenjie Zhang, Qing Zhang, and Xiaodi Huang. Robust subspace clustering for multi-view data by exploiting correlation consensus. IEEE Transactions on Image Processing, 24(11):3939–3949, 2015.
 [Wang et al., 2015e] Yang Wang, Wenjie Zhang, Lin Wu, Xuemin Lin, and Xiang Zhao. Unsupervised metric fusion over multi-view data by graph random walk-based cross-view diffusion. IEEE Transactions on Neural Networks and Learning Systems, 99:1–14, 2015.
 [Wang et al., 2016a] Yang Wang, Xuemin Lin, Lin Wu, Qing Zhang, and Wenjie Zhang. Shifting multi-hypergraphs via collaborative probabilistic voting. Knowledge and Information Systems, 46(3):515–536, 2016.
 [Wang et al., 2016b] Yang Wang, Wenjie Zhang, Lin Wu, Xuemin Lin, Meng Fang, and Shirui Pan. Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering. In IJCAI, 2016.
 [Wei et al., 2014] Ying Wei, Yangqiu Song, Yi Zhen, Bo Liu, and Qiang Yang. Scalable heterogeneous translated hashing. In ACM SIGKDD, pages 791–800, 2014.
 [Weiss et al., 2008] Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, 2008.
 [Wright et al., 2009] John Wright, Yigang Peng, Yi Ma, Arvind Ganesh, and Shankar Rao. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In NIPS, 2009.
 [Wu et al., 2013] Lin Wu, Yang Wang, and John Shepherd. Efficient image and tag co-ranking: a Bregman divergence optimization method. In ACM Multimedia, 2013.
 [Wu et al., 2016] Lin Wu, Yang Wang, and Shirui Pan. Exploiting attribute correlations: A novel trace lasso-based weakly supervised dictionary learning method. IEEE Transactions on Cybernetics, 2016.
 [Xia et al., 2014] Rongkai Xia, Yan Pan, Lei Du, and Jian Yin. Robust multi-view spectral clustering via low-rank and sparse decomposition. In AAAI, pages 2149–2155, 2014.
 [Xu et al., 2013] Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. arXiv:1304.5634, 2013.
 [Ye et al., 2012] Guangnan Ye, Dong Liu, I-Hong Jhuo, and Shih-Fu Chang. Robust late fusion with rank minimization. In CVPR, pages 3021–3028, 2012.
 [Zelnik-Manor and Perona, 2004] Lihi Zelnik-Manor and Pietro Perona. Self-tuning spectral clustering. In NIPS, 2004.
 [Zhang et al., 2011] Dan Zhang, Fei Wang, and Luo Si. Composite hashing with multiple information sources. In ACM SIGIR, pages 225–234, 2011.
 [Zhang et al., 2014] Zhao Zhang, Shuicheng Yan, and Mingbo Zhao. Similarity preserving low-rank representation for enhanced data representation and effective subspace learning. Neural Networks, 53:81–94, 2014.
 [Zhang et al., 2015] Zhao Zhang, Shuicheng Yan, Mingbo Zhao, and Fanzhang Li. Bilinear low-rank coding framework and extension for robust image recovery and feature representation. Knowledge-Based Systems, 86:143–157, 2015.
 [Zhang et al., 2016] Zhao Zhang, Fanzhang Li, Mingbo Zhao, Li Zhang, and Shuicheng Yan. Joint low-rank and sparse principal feature coding for enhanced robust representation and visual classification. IEEE Transactions on Image Processing, 25(6):2429–2443, 2016.
 [Zheng et al., 2015] Shuai Zheng, Xiao Cai, Chris Ding, Feiping Nie, and Heng Huang. A closed-form solution to multi-view low-rank regression. In AAAI, pages 1973–1979, 2015.
 [Zhou et al., 2013] Xiaowei Zhou, Can Yang, and Weichuan Yu. Moving object detection by detecting contiguous outliers in the low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(3):597–610, 2013.