Supervised multiview learning based on simultaneous learning of multiview intact and single view classifier
Abstract
Multiview learning refers to the problem of learning a classifier from multiview data, in which each data point is represented by multiple different views. In this paper, we propose a novel method for this problem. The method is based on two assumptions. The first assumption is that each data point has an intact feature vector, and each view is obtained by a linear transformation of the intact vector. The second assumption is that the intact vectors are discriminative, i.e., in the intact space there is a linear classifier that separates the positive class from the negative class. We define an intact vector for each data point and a view-conditional transformation matrix for each view, and propose to reconstruct the multiview feature vectors by the products of the corresponding intact vectors and transformation matrices. Moreover, we also propose a linear classifier in the intact space and learn it jointly with the intact vectors. The learning problem is modeled as a minimization problem, whose objective function is composed of a Cauchy-error-estimator-based view-conditional reconstruction term over all data points and views, and a classification error term measured by the hinge loss over the intact vectors of all the data points. Squared-norm regularization terms are also imposed on the different variables in the objective function. The minimization problem is solved by an iterative algorithm based on an alternating optimization strategy and the gradient descent algorithm. The proposed algorithm shows its advantage in comparison to other multiview learning algorithms on benchmark data sets.
Keywords:
Multiview learning; Supervised learning; Intact space; Hinge loss
1 Introduction
1.1 Background
Multiview learning has been an important topic in the machine learning community Wu2012661 ; Sun20132031 ; Yu20142431 ; Lu2015 ; Zha2015 ; FakeriTabrizi2015117 ; Liu20151233 ; liu2015supervised ; sindhwani2008rkhs ; wang2015regularized . In traditional machine learning problems, we usually assume that a data point has one feature vector that represents its input information. For example, in the image recognition problem, we can extract a visual feature vector from an image using a texture descriptor Petrov20131499 ; Mohanty20131011 ; Mala201580 ; Yadav2015101 ; Luo2013709 ; Gui20143126 ; Wang2014 ; jiang2015manifold . In this case, the texture is one view of the image. However, there can be more than one view of an image: besides the texture view, we can also extract feature vectors from other views, such as shape and color. Another example is the classification of scientific articles, where we may extract a feature vector from the main text of an article Long20151833 ; Hogenboom201546 ; Picard201595 ; La2015929 ; Koopman2015 ; Chen2015473 ; Feng2015109 ; KumarNagwani20152589 . However, the main text is just one view of the article, and we can also extract features from other views, such as the abstract, the reference list, etc. Multiview learning argues that we should learn from more than one view to represent the data and construct a predictor. The motivation for multiview learning is that a single-view data representation is usually incomplete, and different views can provide complementary information for the learning problem. In the multiview learning problem, the input of a data point is not a single feature vector of one view, but multiple feature vectors representing different views. The target of multiview learning is to learn a predictor that takes the multiview feature vectors of a data point and predicts a single output.
The problem of multiview learning can be classified into two types: supervised multiview learning and unsupervised multiview learning.

Supervised multiview learning refers to the problem of learning from a data set where both the multiview input and the output are available for each data point Li20142040 ; Jiang20141635 ; Hajmohammadi2014195 . In this problem, the output is usually a class label or a continuous response. In this case, the learning problem is to build a predictive model from the training data set to predict the output of an input data point, with the help of the input-output pairs of the training set.

Unsupervised multiview learning refers to the problem of clustering a set of data points whose multiview inputs are given Feng2013343 ; Sublemontier2013 ; Zhao20137 . In this problem, the outputs of the data points are not available.
In this paper, we investigate the problem of supervised multiview learning and propose a novel algorithm to solve it. The proposed method is based on the assumption that the different views of a data point are generated from one single intact feature vector, and that the view generation is performed by a linear transformation. We try to recover the intact feature vector of each data point from its multiview feature vectors, under the guidance of its corresponding output, i.e., its binary class label.
1.2 Relevant works
There are some existing multiview learning methods. We review the state-of-the-art methods as follows.

Zhang et al. Zhang2008752 proposed to use a local learning (LL) method for the multiview learning problem, which designs a local predictive model for each data point based on the multiview inputs. The local predictive model is learned on the nearest neighbors of the data point.

Sindhwani et al. sindhwani2005co proposed a co-training (CT) algorithm for multiview learning problems to improve the classification performance of each view. This method is based on multiview regularization, and on agreement and smoothness over both labeled and unlabeled data points.

Quadrianto Quadrianto2011425 proposed a multiview learning algorithm to solve the problem of view disagreement (VD), i.e., when different views of one single data point do not belong to the same class. This method uses a conditional entropy criterion to find the disagreement among different views, and removes the data points with view disagreement from the training set.

Zhai Zhai2012 proposed a multiview metric learning method with global consistency and local smoothness (GL) for the multiview learning problem with a partially labeled data set. This method considers both global consistency and local smoothness simultaneously, by assuming that the different views share a latent feature space, and imposing global consistency and local structure on the learning procedure.

Chen et al. Chen20122365 proposed a statistical subspace multiview representation method (SS) that leverages both multiview dependencies and supervision information. This method is based on a subspace Markov network of multiview latent variables, and assumes that the multiple views and the class labels are conditionally independent given the latent representation. The algorithm is based on the maximization of the data likelihood and the minimization of the classification error.
1.3 Contributions
In this paper, we propose a novel supervised multiview learning method. The method is based on the assumption that a single discriminative intact vector underlies the different multiview inputs. Under this assumption, although there are different views of one single data point, one single intact feature vector exists for the data point. This intact feature vector is assumed to be discriminative, i.e., it can represent the class information of the data point. Moreover, the feature vector of each view of a data point can be obtained from the intact vector by applying a linear view-conditional transformation to the intact feature vector. In this way, if we learn the discriminative intact feature vector of each training data point, we can learn a classifier in the intact space with the help of the class labels of the training data points. To this end, we propose a novel method to learn the hidden intact feature vectors, the view-conditional transformation matrices, and the classifier in the intact space simultaneously. We define an intact feature vector for each data point, and a transformation matrix for each view. The feature vector of one view of each data point can be reconstructed as the product of its corresponding transformation matrix and intact feature vector. The reconstruction error of each view of each data point is measured by the Cauchy error estimator Idan20141108 ; Gallagher20151264 . To learn the optimal intact feature vectors and view-conditional transformation matrices, we propose to minimize the Cauchy errors. Moreover, under the assumption that the intact feature vectors are discriminative, we argue that we can design a classifier in the intact space that minimizes the classification error. Thus we also propose to learn a linear classifier in the intact space, and use the hinge loss to measure the classification error over the training set in the intact space Chen201580 ; Charuvaka201563 .
To learn the optimal classifier parameter and the intact feature vectors, we also propose to minimize the hinge loss with regard to both the classifier parameter and the intact feature vectors.
To model the problem, we propose a joint optimization problem for the learning of the intact vectors, the view-conditional transformation matrices, and the classifier parameter vector. The objective function of this problem is composed of two error terms and three regularization terms. The first error term is the view reconstruction error, measured by the Cauchy estimator over all the data points and views. The second error term is the classification error over the intact feature vectors of all training data points, measured by hinge losses. The three regularization terms are squared-norm terms over the intact feature vectors, the view-conditional matrices, and the classifier parameter vector. The purpose of imposing the squared norms on these variables is to reduce the complexity of the learned outputs. To minimize the proposed objective function, we adopt an alternating optimization strategy, i.e., when the objective function is minimized with regard to one variable, the other variables are fixed. The optimization with regard to each variable is conducted by the gradient descent algorithm.
The contributions of this paper are three-fold:

We propose a novel supervised multiview learning framework based on the simultaneous learning of the intact feature vectors, the view-conditional transformation matrices, and the classifier parameter vector.

We build a novel optimization problem for this learning problem, by considering both the view reconstruction problem and the classifier learning problem.

We develop an iterative algorithm to solve this optimization problem, based on an alternating optimization strategy and the gradient descent algorithm.
1.4 Paper organization
This paper is organized as follows: in Section 2, the proposed method for supervised multiview learning is introduced. We first model the problem as the minimization of an objective function, and then solve it with an iterative algorithm. In Section 3, the proposed iterative algorithm is evaluated. We first analyze its sensitivity to the parameters, then compare it to some state-of-the-art algorithms, and finally test the running time of the proposed algorithm. In Section 4, we give the conclusion of this paper.
2 Methods
In this section, we introduce the proposed supervised multiview learning method.
2.1 Problem modeling
We assume that we are dealing with a supervised binary classification problem with multiview data. A training data set of $n$ data points is given, $\{(x_i^1, \dots, x_i^m, y_i)\}_{i=1}^n$, where the information of the $i$-th data point is composed of the feature vectors of $m$ views and a binary class label. $x_i^j \in \mathbb{R}^{d_j}$ is the $d_j$-dimensional feature vector of the $j$-th view of the $i$-th data point, and $y_i \in \{+1, -1\}$ is the binary class label of the $i$-th data point. The problem of supervised multiview learning is to learn a predictive model from the training set which can predict a binary class label from the multiview input of a test data point. We assume that there is an intact vector $z_i \in \mathbb{R}^d$ for the $i$-th data point, and that its $j$-th view can be reconstructed by a linear transformation,
$x_i^j \approx W_j z_i, \quad j = 1, \dots, m$    (1)
where $W_j \in \mathbb{R}^{d_j \times d}$ is the view-conditional linear transformation matrix of the $j$-th view. Please note that the view-conditional transformation matrix is the same for the same view of all the data points. By learning both $z_i$ and $W_j$, we can recover the hidden intact vector $z_i$ of the $i$-th data point and use it for the classification problem. To this end, we propose to minimize the reconstruction error. The reconstruction error is measured by the Cauchy error estimator, $\varepsilon(x_i^j, W_j z_i)$,
$\varepsilon(x_i^j, W_j z_i) = \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right)$    (2)
This error estimator has been shown to be robust, and it also provides an offset parameter $c$. We propose to minimize this error estimator over all data points and all views, with regard to both $\{z_i\}_{i=1}^n$ and $\{W_j\}_{j=1}^m$,
$\min_{\{z_i\}, \{W_j\}} \sum_{i=1}^n \sum_{j=1}^m \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right)$    (3)
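As a concrete illustration, the Cauchy error of (2) and its sum over all points and views in (3) can be computed as follows (a minimal NumPy sketch; the function names and array layout are ours, not the paper's):

```python
import numpy as np

def cauchy_error(x, W, z, c=1.0):
    """Cauchy error estimator of one view of one data point:
    log(1 + ||x - W z||^2 / c^2), as in (2)."""
    r = x - W @ z
    return np.log(1.0 + (r @ r) / c**2)

def total_reconstruction_error(X_views, Ws, Z, c=1.0):
    """Sum of Cauchy errors over all n data points and m views, as in (3).
    X_views[j] is an (n, d_j) matrix of view-j feature vectors,
    Ws[j] is the (d_j, d) transformation matrix of view j,
    Z is the (n, d) matrix of intact vectors (one row per data point)."""
    return sum(cauchy_error(X_views[j][i], Ws[j], Z[i], c)
               for j in range(len(Ws)) for i in range(Z.shape[0]))
```

A perfect reconstruction gives a zero error, since $\log(1 + 0) = 0$; the logarithm keeps the penalty of large residuals bounded, which is the source of the estimator's robustness.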
Moreover, we also assume that the intact feature vectors of the data points are discriminative and represent the class information; thus the intact feature vectors should minimize a classification loss function over the data set. We propose to learn the intact feature vector of the $i$-th data point by jointly learning a linear classifier to predict its class label $y_i$. The classifier is designed as a linear function,
$f(z_i) = u^\top z_i$    (4)
where $u \in \mathbb{R}^d$ is the parameter vector of the classifier.
The usage of a linear function as the classifier is motivated by the work of Fan and Tang fan2010enhanced , who proposed to use a linear classifier to maximize the area under the ROC curve (AUC) for the problems of imbalanced learning and cost-sensitive learning. Fan and Tang fan2010enhanced found that a linear classifier used to maximize the AUC searches for an optimal solution in a very constrained space, and enhanced the maximum-AUC linear classifier by extending its search in the solution space and improving the way the structure of the classifier is used. Since the linear classifier has been proven by Fan and Tang fan2010enhanced to be effective in optimizing the AUC, it inspires us to use it to learn an effective classifier in the intact vector space. The classification error can be measured by the hinge loss function,
$\ell(z_i, y_i) = \max\left(0, 1 - y_i u^\top z_i\right)$    (5)
The optimization of this loss function yields a large-margin classifier. To learn the optimal classifier and the discriminative intact feature vectors, we propose to minimize the hinge losses of the classification results over all the training data points,
$\min_{\{z_i\}, u} \sum_{i=1}^n \max\left(0, 1 - y_i u^\top z_i\right)$    (6)
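The hinge loss of (5) and its sum in (6) can be sketched as follows (a NumPy sketch with our own names; `Z` stacks the intact vectors row-wise):

```python
import numpy as np

def hinge_loss(u, Z, y):
    """Total hinge loss sum_i max(0, 1 - y_i u^T z_i) over all intact
    vectors, as in (6). Z: (n, d) intact vectors, y: (n,) labels in {+1, -1}."""
    margins = y * (Z @ u)              # y_i * u^T z_i for every data point
    return np.maximum(0.0, 1.0 - margins).sum()
```

A data point contributes nothing to the loss once its margin $y_i u^\top z_i$ exceeds 1, which is what makes the minimizer a large-margin classifier.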
Moreover, to prevent the problem of overfitting, we propose to minimize the squared norms of the variables to regularize the learning of $z_i$, $W_j$, and $u$,
$\min_{\{z_i\}, \{W_j\}, u} \sum_{i=1}^n \|z_i\|_2^2 + \sum_{j=1}^m \|W_j\|_F^2 + \|u\|_2^2$    (7)
Our overall learning problem is obtained by considering the problems of view-conditional reconstruction in (3), classifier learning in the intact space in (6), and regularization in (7),
$\min_{\{z_i\}, \{W_j\}, u} \sum_{i=1}^n \sum_{j=1}^m \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right) + C_1 \sum_{i=1}^n \max\left(0, 1 - y_i u^\top z_i\right) + C_2 \left(\sum_{i=1}^n \|z_i\|_2^2 + \sum_{j=1}^m \|W_j\|_F^2 + \|u\|_2^2\right)$    (8)
where $C_1$ is a tradeoff parameter that balances the view-conditional reconstruction terms and the classification error terms, and $C_2$ is a tradeoff parameter that balances the view-conditional reconstruction terms and the regularization terms. By solving this problem, we can learn intact feature vectors that represent the multiview inputs of the data points and are also discriminative.
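Putting the pieces together, the overall objective of (8) can be evaluated as follows (a NumPy sketch; `misc_objective` and the vectorized residual computation are ours):

```python
import numpy as np

def misc_objective(X_views, Ws, Z, u, y, C1, C2, c=1.0):
    """Value of the objective in (8): Cauchy reconstruction error over all
    views and data points, plus the C1-weighted hinge loss, plus the
    C2-weighted squared-norm regularizers."""
    recon = 0.0
    for Xj, Wj in zip(X_views, Ws):
        R = Xj - Z @ Wj.T                                   # (n, d_j) residuals of view j
        recon += np.log(1.0 + (R**2).sum(axis=1) / c**2).sum()
    hinge = np.maximum(0.0, 1.0 - y * (Z @ u)).sum()
    reg = (Z**2).sum() + sum((Wj**2).sum() for Wj in Ws) + u @ u
    return recon + C1 * hinge + C2 * reg
```

The three groups of terms match the three roles described above: view reconstruction, classification in the intact space, and complexity control.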
2.2 Optimization
To solve the optimization problem in (8), we use the alternating optimization strategy. The optimization is conducted in an iterative algorithm: when one variable is considered, the others are fixed, and after one variable is updated, it is fixed in turn while the other variables are updated. In the following subsections, we discuss how to update each variable.
2.2.1 Updating $z_i$
When we want to update $z_i$, we only consider this single variable and fix all the other variables. Thus we have the following optimization problem,
$\min_{z_i} \sum_{j=1}^m \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right) + C_1 \max\left(0, 1 - y_i u^\top z_i\right) + C_2 \|z_i\|_2^2$    (9)
The second term is convex but not differentiable, and it is hard to optimize it directly. Thus we rewrite it as follows,
$\max\left(0, 1 - y_i u^\top z_i\right) = \begin{cases} 1 - y_i u^\top z_i, & \text{if } y_i u^\top z_i < 1 \\ 0, & \text{otherwise} \end{cases}$    (10)
We define an indicator variable $\delta_i$ to indicate which of the above cases is true,
$\delta_i = \begin{cases} 1, & \text{if } y_i u^\top z_i < 1 \\ 0, & \text{otherwise} \end{cases}$    (11)
and rewrite (10) as follows,
$\max\left(0, 1 - y_i u^\top z_i\right) = \delta_i \left(1 - y_i u^\top z_i\right)$    (12)
Please note that $\delta_i$ is also a function of $z_i$; however, we first update it by using the $z_i$ solved in the previous iteration, and then fix it to update $z_i$ in the current iteration. In this way, (9) is rewritten as
$\min_{z_i} g(z_i) = \sum_{j=1}^m \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right) + C_1 \delta_i \left(1 - y_i u^\top z_i\right) + C_2 \|z_i\|_2^2$    (13)
where $g(z_i)$ is the objective of this optimization problem. To seek the minimum of $g(z_i)$, we use the gradient descent algorithm. This algorithm updates $z_i$ by descending along the direction of the gradient of $g(z_i)$,
$z_i \leftarrow z_i - \eta \nabla g(z_i)$    (14)
where $\eta$ is the descent step, and $\nabla g(z_i)$ is the gradient function of $g(z_i)$. We set $\nabla g(z_i)$ as the partial derivative of $g$ with regard to $z_i$,
$\nabla g(z_i) = \sum_{j=1}^m \frac{2 W_j^\top \left(W_j z_i - x_i^j\right)}{c^2 + \|x_i^j - W_j z_i\|_2^2} - C_1 \delta_i y_i u + 2 C_2 z_i$    (16)
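A single descent step on one intact vector, following (11), (14) and (16), can be sketched as follows (a NumPy sketch; the function and argument names are ours):

```python
import numpy as np

def update_z(z, x_views, Ws, u, y_i, C1, C2, eta, c=1.0):
    """One gradient-descent step on a single intact vector z_i.
    x_views[j] is the view-j feature vector of this data point,
    Ws[j] the view-j transformation matrix; delta is the hinge
    indicator of (11), computed from the previous z_i."""
    delta = 1.0 if y_i * (u @ z) < 1.0 else 0.0
    grad = 2.0 * C2 * z - C1 * delta * y_i * u        # regularizer + hinge terms
    for x, W in zip(x_views, Ws):                     # Cauchy reconstruction terms
        r = W @ z - x
        grad += 2.0 * (W.T @ r) / (c**2 + r @ r)
    return z - eta * grad
```

Note that each Cauchy term is damped by its own residual norm in the denominator, so a badly reconstructed view cannot dominate the step; this is the robustness of the Cauchy estimator at work.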
2.2.2 Updating $W_j$
When we want to optimize $W_j$, we fix all the other variables and only consider $W_j$ itself. The optimization problem is changed to the following,
$\min_{W_j} h(W_j) = \sum_{i=1}^n \log\left(1 + \frac{\|x_i^j - W_j z_i\|_2^2}{c^2}\right) + C_2 \|W_j\|_F^2$    (17)
where $h(W_j)$ is the objective function of this problem. To solve this problem, we also update $W_j$ by using the gradient descent algorithm,
$W_j \leftarrow W_j - \eta \nabla h(W_j)$    (18)
where $\nabla h(W_j)$ is the gradient function of $h(W_j)$,
$\nabla h(W_j) = \sum_{i=1}^n \frac{2 \left(W_j z_i - x_i^j\right) z_i^\top}{c^2 + \|x_i^j - W_j z_i\|_2^2} + 2 C_2 W_j$    (20)
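A single descent step on one transformation matrix, following (18) and (20), can be sketched as follows (names are ours; `Xj` stacks the view-$j$ feature vectors row-wise):

```python
import numpy as np

def update_W(W, Xj, Z, C2, eta, c=1.0):
    """One gradient-descent step on the transformation matrix of view j.
    Xj: (n, d_j) view-j feature vectors, Z: (n, d) intact vectors."""
    grad = 2.0 * C2 * W                               # regularizer term
    for x, z in zip(Xj, Z):                           # per-point Cauchy terms
        r = W @ z - x
        grad += 2.0 * np.outer(r, z) / (c**2 + r @ r)
    return W - eta * grad
```

When every view is already reconstructed exactly and the regularization weight is zero, the gradient vanishes and the matrix is left unchanged, as one would expect at a stationary point.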
2.2.3 Updating $u$
When we want to update $u$ to minimize the objective function of (8), we fix the other variables and only consider $u$. Thus the problem in (8) is transferred to
$\min_{u} s(u) = C_1 \sum_{i=1}^n \delta_i \left(1 - y_i u^\top z_i\right) + C_2 \|u\|_2^2$    (21)
where $s(u)$ is the objective function of this problem. Please note that $\delta_i$ is actually a function of $u$. However, similar to the strategy used to update $z_i$, we also update it according to the $u$ solved in the previous iteration, and fix it to update $u$ in the current iteration. When $\{\delta_i\}_{i=1}^n$ are fixed, we update $u$ to minimize $s(u)$ by using the gradient descent algorithm,
$u \leftarrow u - \eta \nabla s(u)$    (22)
where $\nabla s(u)$ is the gradient function of $s(u)$, and it is defined as follows,
$\nabla s(u) = - C_1 \sum_{i=1}^n \delta_i y_i z_i + 2 C_2 u$    (23)
By substituting it into (22), we have the final updating rule for $u$,
$u \leftarrow u - \eta \left(2 C_2 u - C_1 \sum_{i=1}^n \delta_i y_i z_i\right)$    (24)
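The update rule (24) can be sketched as follows (names are ours; the hinge indicators of (11) are computed from the incoming $u$ and then held fixed for the step):

```python
import numpy as np

def update_u(u, Z, y, C1, C2, eta):
    """One gradient-descent step on the classifier parameter u, as in (24).
    Z: (n, d) intact vectors, y: (n,) labels in {+1, -1}."""
    delta = (y * (Z @ u) < 1.0).astype(float)   # hinge indicators, eq. (11)
    grad = 2.0 * C2 * u - C1 * (delta * y) @ Z  # gradient of s(u), eq. (23)
    return u - eta * grad
```

The step is perceptron-like: only data points whose margin is still below 1 (those with $\delta_i = 1$) pull $u$ toward $y_i z_i$, while the $2 C_2 u$ term shrinks the parameter vector.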
2.3 Iterative algorithm
After we have the updating rules of all the variables, we can design an iterative algorithm for the learning problem. This iterative algorithm has one outer FOR loop and two inner FOR loops. The outer FOR loop corresponds to the main iterations. The two inner FOR loops correspond to the updating of the intact feature vectors of the data points and the updating of the view-conditional transformation matrices. The algorithm is given in Algorithm 1. The iteration number is determined by cross-validation in our experiments.

Algorithm 1. Iterative algorithm for multiview intact and single-view classifier learning (MISC).

Input: Training data set, $\{(x_i^1, \dots, x_i^m, y_i)\}_{i=1}^n$.

Input: Tradeoff parameters, $C_1$ and $C_2$.

Input: Maximum iteration number, $T$.

Initialization: $\{z_i\}_{i=1}^n$, $\{W_j\}_{j=1}^m$, and $u$.

For $t = 1, \dots, T$

Update the descent step, $\eta$.

For $i = 1, \dots, n$
Update $\delta_i$ as follows,
$\delta_i = \begin{cases} 1, & \text{if } y_i u^\top z_i < 1 \\ 0, & \text{otherwise} \end{cases}$    (25)
Update $z_i$ by fixing $\{W_j\}_{j=1}^m$, $u$, and $\delta_i$,
$z_i \leftarrow z_i - \eta \left[\sum_{j=1}^m \frac{2 W_j^\top \left(W_j z_i - x_i^j\right)}{c^2 + \|x_i^j - W_j z_i\|_2^2} - C_1 \delta_i y_i u + 2 C_2 z_i\right]$    (26)
End of For

For $j = 1, \dots, m$
Update $W_j$ by fixing $\{z_i\}_{i=1}^n$,
$W_j \leftarrow W_j - \eta \left[\sum_{i=1}^n \frac{2 \left(W_j z_i - x_i^j\right) z_i^\top}{c^2 + \|x_i^j - W_j z_i\|_2^2} + 2 C_2 W_j\right]$    (27)
End of For

Update $u$ by fixing $\{z_i\}_{i=1}^n$ and $\{\delta_i\}_{i=1}^n$,
$u \leftarrow u - \eta \left(2 C_2 u - C_1 \sum_{i=1}^n \delta_i y_i z_i\right)$    (28)


End of For

Output: $\{z_i\}_{i=1}^n$, $\{W_j\}_{j=1}^m$, and $u$.
As we can see from the algorithm, in the main FOR loop, the descent step $\eta$ is first updated, and then the hinge loss indicator variables and the intact feature vectors are updated. The view-conditional transformation matrices are then updated, and finally, the classifier parameter vector is updated.
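Algorithm 1 can be sketched end-to-end as follows (a minimal NumPy implementation; the names, the initialization scale, and the $0.1/\sqrt{t}$ descent-step schedule are our assumptions, since the paper leaves them to cross-validation):

```python
import numpy as np

def train_misc(X_views, y, d, C1=1.0, C2=0.01, T=50, c=1.0, seed=0):
    """Alternating gradient-descent sketch of Algorithm 1 (MISC).
    X_views: list of (n, d_j) view matrices; y: (n,) labels in {+1, -1};
    d: dimension of the intact space. Returns intact vectors Z (n, d),
    view-conditional matrices Ws, and classifier parameter u."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    Z = 0.1 * rng.standard_normal((n, d))
    Ws = [0.1 * rng.standard_normal((Xj.shape[1], d)) for Xj in X_views]
    u = np.zeros(d)
    for t in range(1, T + 1):
        eta = 0.1 / np.sqrt(t)                        # assumed step schedule
        delta = (y * (Z @ u) < 1.0).astype(float)     # hinge indicators, eq. (25)
        for i in range(n):                            # intact vectors, eq. (26)
            g = 2.0 * C2 * Z[i] - C1 * delta[i] * y[i] * u
            for Xj, Wj in zip(X_views, Ws):
                r = Wj @ Z[i] - Xj[i]
                g = g + 2.0 * (Wj.T @ r) / (c**2 + r @ r)
            Z[i] = Z[i] - eta * g
        for j, (Xj, Wj) in enumerate(zip(X_views, Ws)):   # matrices, eq. (27)
            G = 2.0 * C2 * Wj
            for i in range(n):
                r = Wj @ Z[i] - Xj[i]
                G = G + 2.0 * np.outer(r, Z[i]) / (c**2 + r @ r)
            Ws[j] = Wj - eta * G
        u = u - eta * (2.0 * C2 * u - C1 * (delta * y) @ Z)  # classifier, eq. (28)
    return Z, Ws, u
```

On synthetic two-view data generated from separable intact vectors, the joint updates drive the total hinge loss below its initial value, illustrating how the classifier and the intact vectors reinforce each other during training.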
3 Experiments
In this section, we evaluate the proposed algorithm experimentally on a few real-world supervised multiview learning problems.
3.1 Benchmark data sets
3.1.1 PASCAL VOC 07 data set
The first data set used in the experiments is the PASCAL VOC 07 data set Jaszewski2015 . In this data set, there are 9,963 images of 20 different object classes. Each image is represented by two different views, the visual view and the tag view. To extract the feature vector of the visual view of an image, we extract local visual features (SIFT) from the image and represent them as a histogram. To extract the feature vector of the tag view of the image, we use the histogram vector of the user tags of the image.
3.1.2 CiteSeer data set
The second data set is the CiteSeer data set Williams201468 . In this data set, there are 3,312 documents of 6 classes. Each document has three views, which are the text view, inbound reference view, and outbound reference view.
3.1.3 HMDB data set
The third data set is the HMDB data set, which is a video database for the human motion recognition problem Kuehne20112556 . In this data set, there are 6,849 video clips of 51 action classes. To represent each video clip, we extract 3D Harris corners and describe them by two different types of local features, the histogram of oriented gradients (HOG) and the histogram of oriented flow (HOF). We then represent each clip by the feature vectors of two views, which are the histograms of the HOG and HOF features.
3.2 Experiment protocols
To conduct the experiments, we split each data set into 10 non-overlapping folds, and use the 10-fold cross-validation protocol to perform the training-testing procedure. Each fold is used as the test set in turn, and the remaining 9 folds are used as the training set. The proposed algorithm is applied to the training set to obtain the view-conditional transformation matrices and the classifier parameter. Then the learned view-conditional transformation matrices and the classifier parameter are used to represent and classify the data points in the test set. To handle the multi-class problem, we use the one-vs-all strategy.
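The one-vs-all reduction can be sketched as follows (names are ours; the `train_binary` callback stands in for any binary learner, such as the MISC training routine, and must return a scoring function over test inputs):

```python
import numpy as np

def one_vs_all(train_binary, X_train, y_train, X_test):
    """One-vs-all reduction: one binary problem per class (that class = +1,
    the rest = -1); a test point receives the class whose binary classifier
    scores it highest."""
    classes = np.unique(y_train)
    scores = np.stack([train_binary(X_train, np.where(y_train == k, 1, -1))(X_test)
                       for k in classes])             # (num_classes, n_test)
    return classes[np.argmax(scores, axis=0)]
```

Any binary learner with real-valued scores fits this interface; for instance, a simple nearest-centroid rule (score = negative distance to the positive-class centroid) recovers the correct labels on well-separated clusters.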
3.3 Performance measures
To measure the classification performance over the test set, we use the classification accuracy. The classification accuracy is defined as follows,
$\text{Accuracy} = \frac{\text{number of correctly classified test data points}}{\text{total number of test data points}}$    (29)
It is obvious that a better algorithm should be able to obtain a higher classification accuracy.
3.4 Experiment results
In this experiment, we first study the sensitivity of the algorithm to the tradeoff parameters $C_1$ and $C_2$.
3.4.1 Sensitivity to parameters
To study the performance of the proposed algorithm with different tradeoff parameters $C_1$ and $C_2$, we perform the algorithm with different values of these parameters and measure the corresponding performance. Fig. 1 illustrates the performance on the PASCAL VOC 07 data set with respect to the tradeoff parameter $C_1$. The proposed algorithm achieves a stable performance under all the settings of $C_1$. In Fig. 2, the performance against the tradeoff parameter $C_2$ is also shown. From this figure, we can also see that the algorithm is stable to changes of the value of $C_2$. This suggests that MISC is not sensitive to the changes of the tradeoff parameters.
3.4.2 Comparison to stateoftheart algorithms
We compare the proposed algorithm to the following methods: the multiview learning algorithm using local learning (LL) proposed by Zhang et al. Zhang2008752 , the multiview learning algorithm using co-training (CT) proposed by Sindhwani et al. sindhwani2005co , the multiview learning algorithm based on view disagreement (VD) proposed by Quadrianto Quadrianto2011425 , the multiview learning algorithm with global consistency and local smoothness (GL) proposed by Zhai Zhai2012 , and the multiview representation method using statistical subspace learning (SS) proposed by Chen et al. Chen20122365 . The error bars of the classification accuracies of the compared methods over the three data sets are given in Fig. 3, Fig. 4 and Fig. 5. From the figures, we find that the proposed method, MISC, stably outperforms the other algorithms on all the data sets. Even on the most difficult data set, HMDB, the proposed method achieves an accuracy as high as about 0.4. The multiview data are optimally combined by MISC to find the latent intact space and the optimal classifier in that space. The main reason for this is the robustness of the proposed algorithm: it has the ability to appropriately handle the complementarity between multiple views, and to learn a discriminative hidden intact space with the help of classifier learning.
4 Conclusions and future works
We propose a novel multiview learning algorithm that learns the intact vectors of the training data points and a classifier in the intact space. Each intact vector is assumed to be a hidden but critical vector of its data point, from which the multiview feature vectors can be obtained by view-conditional transformations. Moreover, we also assume that the intact vectors are discriminative, i.e., they can be separated by a linear function according to their classes. We propose a novel optimization problem to model the learning of both the intact vectors and the classifier, and develop an iterative algorithm to solve it. This algorithm outperforms other multiview learning algorithms on benchmark data sets, and it is also stable over the tradeoff parameters. In the future, we will study the potential of the proposed algorithm for imbalanced data sets with multiview features fan2011margin ; fan2010enhanced ; chawla2004editorial , and the usage of a Bayesian network classifier instead of a linear classifier to learn the intact vectors of multiview data fan2014tightening ; fan2014finding ; fan2015improved . Moreover, we will also investigate using the proposed algorithm to solve problems in bioinformatics wang2014computational ; zhou2014biomarker ; liu2013structure ; peng2015modeling , computer vision wang2015representing ; wang2015image , and multimedia data processing wang2015supervised ; wang2015multiple .
Acknowledgements
This project was supported by the National Natural Science Foundation of China (Grant No. 61472172), and a research funding of Ludong University (Grant No. 27870301).
References
 (1) Charuvaka, A., Rangwala, H.: Convex multitask relationship learning using hinge loss. In: IEEE SSCI 2014  2014 IEEE Symposium Series on Computational Intelligence  CIDM 2014: 2014 IEEE Symposium on Computational Intelligence and Data Mining, Proceedings, pp. 63–70 (2015)
 (2) Chawla, N.V., Japkowicz, N., Kotcz, A.: Editorial: special issue on learning from imbalanced data sets. ACM Sigkdd Explorations Newsletter 6(1), 1–6 (2004)
 (3) Chen, N., Zhu, J., Sun, F., Xing, E.: Largemargin predictive latent subspace learning for multiview data analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(12), 2365–2378 (2012)
 (4) Chen, P.T., Chen, F., Qian, Z.: Road traffic congestion monitoring in social media with hingeloss markov random fields. pp. 80–89 (2015). DOI 10.1109/ICDM.2014.139
 (5) Chen, Y.W., Wang, J.L., Cai, Y.Q., Du, J.X.: A method for chinese text classification based on apparent semantics and latent aspects. Journal of Ambient Intelligence and Humanized Computing 6(4), 473–480 (2015)
 (6) FakeriTabrizi, A., Amini, M.R., Goutte, C., Usunier, N.: Multiview selflearning. Neurocomputing 155, 117–127 (2015)
 (7) Fan, X., Malone, B., Yuan, C.: Finding optimal bayesian network structures with constraints learned from data. In: Proceedings of the 30th Annual Conference on Uncertainty in Artificial Intelligence (UAI14), pp. 200–209 (2014)
 (8) Fan, X., Tang, K.: Enhanced maximum auc linear classifier. In: Fuzzy Systems and Knowledge Discovery (FSKD), 2010 Seventh International Conference on, vol. 4, pp. 1540–1544. IEEE (2010)
 (9) Fan, X., Tang, K., Weise, T.: Marginbased oversampling method for learning from imbalanced datasets. In: Advances in Knowledge Discovery and Data Mining, pp. 309–320. Springer (2011)
 (10) Fan, X., Yuan, C.: An improved lower bound for bayesian network structure learning. In: TwentyNinth AAAI Conference on Artificial Intelligence, pp. 2439 – 2445 (2015)
 (11) Fan, X., Yuan, C., Malone, B.: Tightening bounds for bayesian network structure learning. In: Proceedings of the 28th AAAI Conference on Artificial Intelligence, pp. 2439 – 2445 (2014)
 (12) Feng, G., Guo, J., Jing, B.Y., Sun, T.: Feature subset selection using naive bayes for text classification. Pattern Recognition Letters 65, 109–115 (2015). DOI 10.1016/j.patrec.2015.07.028
 (13) Feng, Y., Xiao, J., Zhuang, Y., Liu, X.: Adaptive unsupervised multiview feature selection for visual concept recognition. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 7724 LNCS(PART 1), 343–357 (2013)
 (14) Gallagher, C., Fisher, T., Shen, J.: A cauchy estimator test for autocorrelation. Journal of Statistical Computation and Simulation 85(6), 1264–1276 (2015)
 (15) Gui, J., Tao, D., Sun, Z., Luo, Y., You, X., Tang, Y.: Group sparse multiview patch alignment framework with view consistency for image classification. IEEE Transactions on Image Processing 23(7), 3126–3137 (2014)
 (16) Hajmohammadi, M., Ibrahim, R., Selamat, A.: Crosslingual sentiment classification using multiple source languages in multiview semisupervised learning. Engineering Applications of Artificial Intelligence 36, 195–203 (2014)
 (17) Hogenboom, A., Frasincar, F., De Jong, F., Kaymak, U.: Polarity classification using structurebased vector representations of text. Decision Support Systems 74, 46–56 (2015)
 (18) Idan, M., Speyer, J.: Multivariate cauchy estimator with scalar measurement and process noises. SIAM Journal on Control and Optimization 52(2), 1108–1141 (2014)
 (19) Jaszewski, M., Parameswaran, S., Hallenborg, E., Bagnall, B.: Evaluation of maritime object detection methods for full motion video applications using the pascal voc challenge framework. In: Proceedings of SPIE  The International Society for Optical Engineering, vol. 9407, p. 94070Y (2015)
 (20) Jiang, F., Jia, L., Sheng, X., LeMieux, R.: Manifold regularization in structured output space for semisupervised structured output prediction. Neural Computing and Applications pp. 1–10 (2015)
 (21) Jiang, Y., Liu, J., Li, Z., Lu, H.: Semisupervised unified latent factor learning with multiview data. Machine Vision and Applications 25(7), 1635–1645 (2014)
 (22) Koopman, B., Karimi, S., Nguyen, A., McGuire, R., Muscatello, D., Kemp, M., Truran, D., Zhang, M., Thackway, S.: Automatic classification of diseases from freetext death certificates for realtime surveillance. BMC Medical Informatics and Decision Making 15(1) (2015). DOI 10.1186/s1291101501742
 (23) Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: A large video database for human motion recognition. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2556–2563 (2011)
 (24) Kumar Nagwani, N.: A comment on ’a similarity measure for text classification and clustering’. IEEE Transactions on Knowledge and Data Engineering 27(9), 2589–2590 (2015)
 (25) La, L., Wang, N., Zhou, D.P.: Improving reading comprehension step by step using onlineboost text readability classification system. Neural Computing and Applications 26(4), 929–939 (2015)
 (26) Li, X.X., Li, R.F., Feng, F.X., Cao, J., Wang, X.J.: Multiview supervised latent dirichlet allocation. Tien Tzu Hsueh Pao/Acta Electronica Sinica 42(10), 2040–2044 (2014)
 (27) Liu, J., Jiang, Y., Li, Z., Zhou, Z.H., Lu, H.: Partially shared latent factor learning with multiview data. IEEE Transactions on Neural Networks and Learning Systems 26(6), 1233–1246 (2015)
 (28) Liu, X., Wang, J., Yin, M., Edwards, B., Xu, P.: Supervised learning of sparse context reconstruction coefficients for data representation and classification. Neural Computing and Applications pp. 1–9 (2015)
 (29) Liu, Y., Yang, J., Zhou, Y., Hu, J.: Structure design of vascular stents. Multiscale simulations and mechanics of biological materials pp. 301–317 (2013)
 (30) Long, J., Wang, L.D., Li, Z.D., Zhang, Z.P., Yang, L.: Wordnetbased lexical semantic classification for text corpus analysis. Journal of Central South University 22(5), 1833–1840 (2015)
 (31) Lu, H., Hu, Z., Gao, H.: Multiview sample classification algorithm based on l1graph domain adaptation learning. Mathematical Problems in Engineering 2015 (2015). DOI 10.1155/2015/329753
 (32) Luo, Y., Tao, D., Xu, C., Xu, C., Liu, H., Wen, Y.: Multiview vectorvalued manifold regularization for multilabel image classification. IEEE Transactions on Neural Networks and Learning Systems 24(5), 709–722 (2013)
 (33) Mala, K., Sadasivam, V., Alagappan, S.: Neural network based texture analysis of ct images for fatty and cirrhosis liver classification. Applied Soft Computing Journal 32, 80–86 (2015)
 (34) Mohanty, A., Senapati, M., Beberta, S., Lenka, S.: Texturebased features for classification of mammograms using decision tree. Neural Computing and Applications 23(34), 1011–1017 (2013)
 (35) Peng, B., Liu, Y., Zhou, Y., Yang, L., Zhang, G., Liu, Y.: Modeling nanoparticle targeting to a vascular surface in shear flow through diffusive particle dynamics. Nanoscale Research Letters 10(1), 235 (2015)
 (36) Petrov, N., Georgieva, A., Jordanov, I.: Self-organizing maps for texture classification. Neural Computing and Applications 22(7-8), 1499–1508 (2013)
 (37) Picard, D., Gosselin, P.H., Gaspard, M.C.: Challenges in content-based image indexing of cultural heritage collections: Support vector machine active learning with applications to text classification. IEEE Signal Processing Magazine 32(4), 95–102 (2015)
 (38) Quadrianto, N., Lampert, C.: Learning multiview neighborhood preserving projections. pp. 425–432 (2011)
 (39) Sindhwani, V., Niyogi, P., Belkin, M.: A co-regularization approach to semi-supervised learning with multiple views. In: Proceedings of the ICML Workshop on Learning with Multiple Views, pp. 74–79 (2005)
 (40) Sindhwani, V., Rosenberg, D.S.: An RKHS for multiview learning and manifold co-regularization. In: Proceedings of the 25th International Conference on Machine Learning, pp. 976–983. ACM (2008)
 (41) Sublemontier, J.H.: Unsupervised collaborative boosting of clustering: A unifying framework for multiview clustering, multiple consensus clusterings and alternative clustering (2013). DOI 10.1109/IJCNN.2013.6706911
 (42) Sun, S.: A survey of multiview machine learning. Neural Computing and Applications 23(7-8), 2031–2038 (2013)
 (43) Wang, J., Wang, H., Zhou, Y., McDonald, N.: Multiple kernel multivariate performance learning using cutting plane algorithm. arXiv preprint arXiv:1508.06264 (2015)
 (44) Wang, J., Zhou, Y., Duan, K., Wang, J., Bensmail, H.: Supervised crossmodal factor analysis for multiple modal data classification. SMC 2015 (2015)
 (45) Wang, J., Zhou, Y., Wang, H., Yang, X., Yang, F., Peterson, A.: Image tag completion by local learning. In: Advances in Neural Networks–ISNN 2015, pp. 232–239. Springer (2015)
 (46) Wang, J., Zhou, Y., Yin, M., Chen, S., Edwards, B.: Representing data by sparse combination of contextual data points for classification. In: Advances in Neural Networks–ISNN 2015, pp. 373–381. Springer (2015)
 (47) Wang, J.J.Y., Wang, Y., Jing, B.Y., Gao, X.: Regularized maximum correntropy machine. Neurocomputing 160, 85–92 (2015)
 (48) Wang, S., Zhou, Y., Tan, J., Xu, J., Yang, J., Liu, Y.: Computational modeling of magnetic nanoparticle targeting to stent surface under high gradient field. Computational mechanics 53(3), 403–412 (2014)
 (49) Wang, Z., Sun, X., Sun, L., Huang, Y.: Multiview discriminative geometry preserving projection for image classification. The Scientific World Journal 2014 (2014). DOI 10.1155/2014/924090
 (50) Williams, K., Wu, J., Choudhury, S., Khabsa, M., Giles, C.: Scholarly big data information extraction and integration in the CiteSeerX digital library. In: Proceedings of the International Conference on Data Engineering, pp. 68–73 (2014)
 (51) Wu, T.X., Lian, X.C., Lu, B.L.: Multiview gender classification using symmetry of facial images. Neural Computing and Applications 21(4), 661–669 (2012)
 (52) Yadav, A., Anand, R., Dewal, M., Gupta, S.: Multiresolution local binary pattern variants based texture feature extraction techniques for efficient classification of microscopic images of hardwood species. Applied Soft Computing Journal 32, 101–112 (2015)
 (53) Yu, J., Rui, Y., Tang, Y., Tao, D.: High-order distance-based multiview stochastic learning in image classification. IEEE Transactions on Cybernetics 44(12), 2431–2442 (2014)
 (54) Zha, Z.J., Yang, Y., Tang, J., Wang, M., Chua, T.S.: Robust multiview feature learning for rgbd image understanding. ACM Transactions on Intelligent Systems and Technology 6(2) (2015). DOI 10.1145/2735521
 (55) Zhai, D., Chang, H., Shan, S., Chen, X., Gao, W.: Multiview metric learning with global consistency and local smoothness. ACM Transactions on Intelligent Systems and Technology 3(3) (2012)
 (56) Zhang, D., Wang, F., Zhang, C., Li, T.: Multiview local learning. In: Proceedings of the National Conference on Artificial Intelligence, vol. 2, pp. 752–757 (2008)
 (57) Zhao, X., Evans, N., Dugelay, J.L.: Unsupervised multiview dimensionality reduction with application to audiovisual speaker retrieval. pp. 7–12 (2013)
 (58) Zhou, Y., Hu, W., Peng, B., Liu, Y.: Biomarker binding on an antibody-functionalized biosensor surface: the influence of surface properties, electric field, and coating density. The Journal of Physical Chemistry C 118(26), 14586–14594 (2014)