Abstract

This paper introduces a new and effective algorithm for learning kernels in a multi-task learning (MTL) setting. Although we consider an MTL scenario here, our approach can easily be applied to standard single-task learning as well. As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms, such as the uniform combination solution and convex combinations of base kernels, as well as some kernel alignment-based models, which have been shown to give promising results in the past. We present a Rademacher complexity bound, based on which a new multi-task multiple kernel learning (MT-MKL) model is derived. In particular, we propose a support vector machine (SVM)-regularized model in which, for each task, an optimal kernel is learned based on a neighborhood-defining kernel that is not restricted to be positive semi-definite (PSD). Comparative experimental results are showcased that underline the merits of our neighborhood-defining framework in both classification and regression problems.

1 Introduction

As shown by past empirical work [5, 1, 2, 31, 17], it is beneficial to learn multiple related tasks simultaneously, instead of independently as is typically done in practice. A commonly utilized information-sharing strategy for MTL is to use a (partially) common feature mapping to map the data from all tasks to a (partially) shared feature space. Such a method, named kernel-based MTL, not only allows information sharing across tasks, but also enjoys the non-linearity brought in by the feature mapping.

While applying kernel-based models, it is crucial to carefully choose the kernel function, as using inappropriate kernel functions may lead to deteriorated generalization performance. A widely adopted strategy for kernel selection is to learn a convex combination of some base kernels [15, 19], which, combined with MTL, results in the MT-MKL approach. Such a method linearly combines pre-selected base kernel functions, with the combination coefficients learned during the training stage within a pre-defined feasible region. For example, a widely used and theoretically well-studied feasible region is given by a norm constraint on the combination coefficients [15]. As such, each task features a common kernel function. One such MT-MKL model is proposed in [26]. Moreover, a more general MT-MKL approach with conically combined multiple objective functions and a norm-constrained multiple kernel learning (MKL) formulation is introduced in [21], and further extended and theoretically studied in [20]. An MT-MKL model that allows both feature and kernel selection is proposed in [12] and extended in [13]. Finally, in [29], the authors proposed to use a partially shared kernel function, with norm constraints imposed on both its common and its task-specific parts. Such a method allows data from unrelated tasks to be mapped to task-specific feature spaces, instead of sharing a feature space with other tasks, thus potentially preventing the effect of “negative transfer”, \ie, knowledge transferred between irrelevant tasks, which leads to degraded generalization performance.
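To make the kernel combination step concrete, the following sketch builds a combined Gram matrix from a few pre-computed base kernels under a simplex (convex-combination) constraint; the specific base kernels and weights are illustrative assumptions, not the exact configuration of any of the cited models.

import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

def base_gram_matrices(X):
    # A few pre-selected base kernels (illustrative choices).
    return [linear_kernel(X), polynomial_kernel(X, degree=2), rbf_kernel(X, gamma=0.5)]

def combined_kernel(grams, theta):
    # Convex combination of base Gram matrices: theta >= 0 and sum(theta) == 1,
    # i.e., one commonly used feasible region for the MKL coefficients.
    theta = np.asarray(theta, dtype=float)
    assert np.all(theta >= 0) and np.isclose(theta.sum(), 1.0)
    return sum(t * G for t, G in zip(theta, grams))

X = np.random.randn(20, 5)
K = combined_kernel(base_gram_matrices(X), theta=[0.2, 0.3, 0.5])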

Another, rather different, approach for learning kernels is based on the notion of kernel-target alignment (KTA), which is a similarity measure between the input and output (target) kernels. Several studies utilize kernel alignment [8, 10, 14] or centered kernel alignment [7] as their kernel learning criterion. It has been theoretically shown that maximizing the alignment between the input kernel and the target one can lead to a highly accurate learning hypothesis (see Theorem 13 for classification and Theorem 14 for regression in [7]). Also, via a series of experiments, the authors in [7] demonstrate that the alignment approach consistently outperforms traditional kernel-based methods such as the uniform or convex combination solutions. As shown in [7], the problem of learning a kernel that is maximally aligned with the target can be efficiently reduced to a simple quadratic program (QP), which, in turn, is equivalent to minimizing the Frobenius norm of the difference between the input and target kernels.
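As a concrete reference point for the alignment criterion, the sketch below computes the centered kernel alignment of [7] between an input Gram matrix and the label-derived target kernel; it is a minimal illustration of the similarity measure itself, not of the optimization schemes discussed in this paper.

import numpy as np

def center_gram(K):
    # Center a Gram matrix: Kc = H K H with H = I - (1/n) 1 1'.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def centered_alignment(K1, K2):
    # Centered alignment of [7]: <K1c, K2c>_F / (||K1c||_F * ||K2c||_F).
    K1c, K2c = center_gram(K1), center_gram(K2)
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

# Example: alignment between a random PSD Gram matrix and the target kernel y y'.
rng = np.random.default_rng(0)
y = np.sign(rng.standard_normal(15))
Z = rng.standard_normal((15, 4))
print(centered_alignment(Z @ Z.T, np.outer(y, y)))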

Inspired by the idea of kernel alignment, in this paper we present a new MT-MKL model in which, for each task, the “optimal” kernel matrix is highly aligned with a “neighborhood-defining” alignment matrix, which is dictated by the data itself. In particular, we derive a Rademacher complexity bound for MTL function classes induced by an alignment-based regularization. It turns out that the Rademacher complexity of such classes can be upper-bounded in terms of the neighborhood alignment matrices. Based on this observation, we derive a new algorithm in which the optimal kernels are learned simultaneously with the alignment matrices via a regularized SVM optimization problem. As opposed to the target kernel alignment approach (in which the alignment kernel is PSD), we do not restrict our alignment matrices to be PSD. Therefore, our model enjoys more flexibility, in the sense that it allows the optimal kernel to reside in the neighborhood of an indefinite matrix, whenever warranted by the data.

It is worth pointing out that the problem of learning with indefinite kernels has been addressed by many researchers [25, 18, 32, 11, 24], as it has been shown that, in many real-life applications, PSD-ness constraints on the kernels may limit the usability of kernel-based methods (see [25] for a discussion). Examples of such situations include using the BLAST, Smith-Waterman or FASTA similarity scores between protein sequences in bioinformatics; using the cosine similarity between term frequency-inverse document frequency (tf-idf) vectors in text mining; using the pyramid match kernel, shape matching and tangent distances in computer vision; and using human-judged similarities between concepts and words in information retrieval, just to name a few.

Finally, via a series of comparative experiments, we show that our proposed model surpasses in performance traditional kernel-based learning algorithms such as the uniform and convex combination solutions. As shown by the experiments, our method also improves upon the KTA approach, in which an optimal kernel is learned by maximizing the alignment between the target kernel and the convexly combined kernel. Moreover, we show that our model empirically outperforms some other, similar approaches for learning an optimal neighborhood kernel. However, as we discuss later, the similarity between our model and other optimal neighborhood kernel learning models [22, 23] is only superficial.

The remainder of this paper is organized as follows: \srefsec:NewModel contains a formal description of an MTL alignment-regularized learning framework with fixed alignment matrices. \srefsec:GeneralizationBounds presents a Rademacher complexity bound for the corresponding hypothesis class of the alignment-based models presented in \srefsec:NewModel. In \srefsec:NeighKernelChoice, we present our new MTL model, and further discuss the motivation behind, and subsequent derivation of, the optimal neighborhood-defining kernels. Experimental results obtained for multi-task (MT) classification and regression are provided in \srefsec:Experiments to show the effectiveness of the proposed model compared to other kernel-based formulations. Finally, in \srefsec:Conclusions we briefly summarize our findings.

In what follows, we use the following notational conventions: vectors and matrices are depicted in bold face; a prime denotes vector/matrix transposition; the ordering symbols stand for the corresponding component-wise relations. Additional notation is defined in the text as needed.

2 Multi-Task Learning with Neighborhood-Defining Matrices

Consider a linear MTL model involving several tasks, each of which is addressed by an SVM model. For each supervised task, assume that there is a training set sampled according to some probability distribution over the product of an arbitrary set, which serves as the native space of samples for all tasks, and the output space associated with the labels. Without loss of generality, we assume that the same number of labeled samples is available for learning each task. Furthermore, we assume that the SVM tasks are to be learned via a standard MKL scheme using a prescribed collection of reproducing kernel Hilbert spaces (RKHSs), each equipped with an inner product and an associated feature mapping. The associated reproducing kernel is given by the inner product of the corresponding feature mappings for all pairs of samples.

It is not hard to verify that these considerations imply an equivalent RKHS that serves as the partially common feature space for all tasks. Specifically, one can consider this space with its induced feature mapping, endowed with the corresponding inner product and associated reproducing kernel function for all pairs of samples. Additional shorthand quantities are defined accordingly. At this point, we would like to bring to attention that, in order to address the problem of negative transfer, for each task we let the task's weight vector be the sum of a component shared across all tasks and a task-specific component. We also define the concatenation of the shared vector and the task-specific vector parameters. Then, we consider the following hypothesis class

(1)

where the constraint set involves the task kernel matrices, whose entries are given by evaluating the corresponding kernel functions on the training samples. Also, the neighborhood-defining matrices are assumed to be pre-defined at this point. We will show in \srefsec:NeighKernelChoice how these matrices can be determined based on the Rademacher complexity of the model. Note that, via the Tikhonov-Ivanov equivalence, the Frobenius norm constraint appearing in the above set can equivalently be considered as a regularization term in the objective function of the learning problem corresponding to hypothesis class (1). Furthermore, it can be shown that minimizing this term reduces to a simple QP, which, in turn, is equivalent to an alignment maximization problem between the two kernels involved (see Proposition 9 in [7]).
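For completeness, a generic form of the equivalence invoked here is sketched below, writing $\mathbf{K}$ for a task's kernel matrix, $\mathbf{N}$ for its neighborhood-defining matrix, $L$ for a generic loss term, $c$ for a constraint radius and $\lambda$ for a penalty weight (symbols ours); under standard convexity assumptions, each value of $c$ corresponds to some $\lambda$ for which the two formulations share their minimizers:
\[
\min_{\mathbf{K}}\; L(\mathbf{K}) \quad \text{s.t.}\quad \|\mathbf{K}-\mathbf{N}\|_F \le c
\qquad\Longleftrightarrow\qquad
\min_{\mathbf{K}}\; L(\mathbf{K}) + \lambda\,\|\mathbf{K}-\mathbf{N}\|_F^2 .
\]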

With this being said, if one defines the neighborhood-defining matrix of each task to be the target kernel matrix derived from the output labels, then the corresponding term in (1) reduces to the KTA quantity, which measures the alignment between the task's kernel matrix and that target. Obviously, unlike the idea of this paper, in which the neighborhood-defining matrix is also learned in a data-driven manner, the target kernel is fixed.

Other approaches in the single-task context [22, 23] also consider the problem of learning an optimal kernel from a noisy observation. However, these approaches are different in spirit from the one pursued here. The differences can be summarized as follows: (1) they assume that both the optimal kernel and the noisy one are PSD matrices; (2) they use the neighborhood-defining kernel during training and the proxy kernel during testing, and therefore past and future examples are treated inconsistently by their models; and (3) more importantly, in both approaches the feature space is assumed to be induced by the neighborhood-defining kernel (and not by the original kernel). One potential reason for this choice might be that assuming a feature space induced by the original kernel leads to a trivial solution in their formulations.

In the next section, we present a Rademacher complexity bound for the hypothesis class in (1), which helps us design a new MTL model with a regularization term on the neighborhood-defining matrices derived from the complexity of the model.

3 Rademacher Complexity

Rademacher complexity is a measure of how well the functions in a hypothesis class correlate with random noise, and it therefore quantifies the richness of the hypothesis set. Given a sample of data drawn independently and identically according to some distribution, the empirical Rademacher complexity of a hypothesis class is defined as

(2)

where the Rademacher variables are independent and uniformly distributed over {−1, +1}. Rademacher complexities are data-dependent complexity measures that lead to finer learning guarantees [16, 4].
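For reference, the empirical Rademacher complexity just defined takes, in its standard single-sample form (for a sample $S=\{x_1,\dots,x_n\}$ and real-valued hypotheses $h\in\mathcal{H}$; notation ours), the form
\[
\hat{\mathfrak{R}}_S(\mathcal{H}) \;=\; \mathbb{E}_{\boldsymbol{\sigma}}\!\left[\,\sup_{h\in\mathcal{H}}\; \frac{1}{n}\sum_{i=1}^{n}\sigma_i\, h(x_i)\right],
\]
and in the MTL setting considered here the supremum and the empirical average are typically taken jointly over the samples of all tasks.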

With some algebraic manipulation of the corresponding term in (1), it is not difficult to see that the constraint set for the model parameters can be expressed as

(3)

where the quantities involved are defined as follows:

First, a block matrix is defined as

(4)

Here, the first quantity is a matrix whose elements are specified element-wise; the second is a block matrix whose blocks are vectors of appropriate dimension; and, similarly to the latter, the third is a block matrix whose blocks are diagonal matrices with suitably defined diagonal elements. Note that it can easily be shown that the resulting matrix is positive definite (PD) whenever the base kernels are linearly independent. We assume, without loss of generality, that this matrix is PD; otherwise, we can choose a linearly independent subset of base kernels.

Second, given a matrix whose elements are suitably defined, the vector appearing in (3) is given as

(5)

where we use the all-one vector of appropriate dimension and the matrix vectorization operator.

Finally, the remaining quantity in (3) is defined as

(6)
Theorem.

Let the hypothesis space (HS) be defined as in \erefMTL-ONK-HS1. Then, for any sample and any fixed neighborhood-defining matrices, the empirical Rademacher complexity can be upper-bounded as

(7)

with

(8)

where we use a vector whose designated element is one and whose remaining elements are zero, along with the Kronecker product; an additional matrix appearing in the bound is defined element-wise. The proof of this theorem is provided in the Appendix.

In the next section, we present a new multi-task optimal neighborhood multiple kernel learning (MT-ONMKL) formulation, which enjoys a data-driven procedure for selecting the optimal kernel from neighborhood alignment matrices. More specifically, our model learns the optimal kernel by considering an additional regularization term derived from the Rademacher complexity of the alignment-regularized hypothesis space (1).

4 The New MT-ONMKL Model

Since Rademacher complexity bounds give guarantees for the generalization error, they are considered among the most useful data-dependent complexity measures, both for theoretical analysis and for designing more efficient machine learning algorithms. For example, in the context of kernel-based learning methods, most algorithms restrict the learning hypothesis class via a constraint on the trace of the kernel, as it has been shown that, for a fixed kernel, the Rademacher complexity of a kernel-based algorithm can be upper-bounded in terms of the trace of the kernel matrix [3, 19, 28]. Here, we likewise derive an upper bound on the Rademacher complexity of alignment-regularized hypothesis classes, such as (1), and then design a new MT-MKL model based on the derived bound.
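For reference, the classical result alluded to here states that, for a fixed kernel with Gram matrix $\mathbf{K}$ on a sample of size $n$ and for linear hypotheses $x\mapsto\langle\mathbf{w},\boldsymbol{\phi}(x)\rangle$ with $\|\mathbf{w}\|\le B$ (notation ours), the empirical Rademacher complexity satisfies, up to a constant that depends on the exact definition used,
\[
\hat{\mathfrak{R}}_S(\mathcal{H}_{\mathbf{K}}) \;\lesssim\; \frac{B}{n}\sqrt{\operatorname{tr}(\mathbf{K})},
\]
which is precisely why trace constraints on the kernel are so widely used to control capacity [3, 19, 28].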

4.1 Formulation

As shown earlier, the Rademacher complexity of the hypothesis class in (1) can be upper-bounded in terms of the neighborhood-defining matrices. Thus, we can add this quantity as a constraint to restrict the hypothesis class, which leads to the new MTL formulation presented in the sequel. In particular, considering the part of (7) that depends on the neighborhood-defining matrices, we define the following regularizer for learning them:

(9)

Using this regularizer, we now formulate our new MT-ONMKL model as the following optimization problem

(10)

where the remaining scalars are regularization parameters.

Unlike the approaches in [23] and [22], MT-ONMKL chooses the neighborhood-defining matrices via the optimization problem (10), in lieu of choices that are much harder to justify. The benefits of this particular choice are largely reflected in the experimental results reported in the next section.

4.2 Algorithm

First, note that if one considers inter-related SVM classification problems, then (10) can be equivalently expressed as

s.t. (11)

Note that a very similar formulation can be derived for regression by employing an algorithm such as support vector regression (SVR) at this stage. Thus, the algorithm presented in the following can easily be extended to regression problems by simply substituting SVR for SVM.

It can be shown that the primal-dual form of (11), with respect to the corresponding primal variables, is given by

s.t. (12)

where the dual variables are the Lagrange multipliers associated with the minimization problem with respect to the primal SVM variables. A block coordinate descent framework can be applied to decompose \probrefeq:dualFormulationEquivalent into three subproblems. The first subproblem, the maximization with respect to the dual variables, can be efficiently solved via LIBSVM [6]; the second subproblem, the minimization with respect to the kernel combination coefficients, takes the quadratic form

(13)

where the vector is given as

(14)

Here, an auxiliary matrix is defined element-wise in terms of the all-one vector and the matrix vectorization operator. As we show later, this matrix is PSD and, therefore, optimization problem (13) is convex, so any quadratic programming solver can be employed to find the optimal solution in each iteration. The optimization problem with respect to the neighborhood-defining matrices is given as

(15)

Using \proprefProp-QP (in the Appendix), this problem can be reduced to solving the following simple QP

(16)

where the quantities involved are defined in terms of the pseudo-inverse of the matrix defined in (8) and the associated orthogonal projection matrix. Note that the projection matrix is PSD. Therefore, the optimization problem (16) is convex and admits a well-known analytical solution.
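To summarize the overall procedure, a high-level sketch of the alternating scheme is given below. It is only a skeleton under stated assumptions: the per-task SVM dual step is delegated to a precomputed-kernel SVM solver (scikit-learn's SVC, standing in here for LIBSVM), while solve_theta_qp and solve_alignment_qp are hypothetical placeholders for QP solvers of the subproblems corresponding to (13) and (16), respectively.

import numpy as np
from sklearn.svm import SVC

def mt_onmkl_block_descent(grams_per_task, labels_per_task,
                           solve_theta_qp, solve_alignment_qp,
                           C=1.0, n_iter=20):
    """Block coordinate descent over (SVM duals, kernel weights, alignment matrices)."""
    T = len(grams_per_task)                  # number of tasks
    M = len(grams_per_task[0])               # number of base kernels
    theta = np.full(M, 1.0 / M)              # start from the uniform combination
    N = [sum(grams) / M for grams in grams_per_task]   # initial neighborhood matrices
    svms = [None] * T

    for _ in range(n_iter):
        # Subproblem 1: per-task SVM duals for the current combined kernels.
        for t, (grams, y) in enumerate(zip(grams_per_task, labels_per_task)):
            K_t = sum(th * G for th, G in zip(theta, grams))
            svms[t] = SVC(C=C, kernel="precomputed").fit(K_t, y)
        # Subproblem 2: convex QP in the kernel combination weights (cf. (13)).
        theta = solve_theta_qp(grams_per_task, svms, N, theta)
        # Subproblem 3: per-task QP for the neighborhood-defining matrices (cf. (16)).
        N = [solve_alignment_qp(grams_per_task[t], svms[t], theta) for t in range(T)]
    return theta, N, svms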

5 Experiments

In this section, we demonstrate the merits of MT-ONMKL via a series of comparative experiments. For all experiments, linear kernels, polynomial kernels of a fixed degree, and Gaussian kernels with several spread parameters have been utilized as the base kernels for MKL. All kernels are normalized (see the sketch following this paragraph). Note that, in order to accentuate the need for MTL, we intentionally keep the training set small, using only a small fraction of the available samples for each experiment; the rest of the data are split into equally sized validation and test sets. The SVM regularization parameter and the remaining regularization parameters are chosen via cross-validation over pre-specified grids.
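Assuming the common cosine-type normalization, in which each kernel value k(x, y) is divided by sqrt(k(x, x) * k(y, y)) so that all diagonal Gram entries equal one (a standard choice, not necessarily the exact formula used here), a Gram matrix can be normalized as follows.

import numpy as np

def normalize_gram(K, eps=1e-12):
    # Cosine-type normalization: K_ij / sqrt(K_ii * K_jj), so that diag(K_norm) = 1.
    d = np.sqrt(np.clip(np.diag(K), eps, None))
    return K / np.outer(d, d)

Applying this to every base Gram matrix before combining them puts all kernels on a comparable scale.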

5.1 Benchmark Datasets

We evaluate the performance of our method on the following datasets:

Letter Recognition dataset is a collection of handwritten words, collected by Rob Kassel at the MIT Spoken Language Systems Group, and involves eight tasks: ‘C’ vs. ‘E’, ‘G’ vs. ‘Y’, ‘M’ vs. ‘N’, ‘A’ vs. ‘G’, ‘I’ vs. ‘J’, ‘A’ vs. ‘O’, ‘F’ vs. ‘T’ and ‘H’ vs. ‘N’. Each letter is represented by an 8-by-16 pixel image, which forms a 128-dimensional feature vector. We randomly chose 200 samples for each letter, with the exception of the letter ‘J’, for which only 189 samples were available.

Landmine Detection dataset consists of 29 binary classification tasks collected from various landmine fields. Each data sample is represented by a 9-dimensional feature vector extracted from radar images and is associated with a binary class label. The feature vectors correspond to regions of landmine fields and include four moment-based features, three correlation-based features, one energy-ratio feature, and one spatial-variance feature. The objective is to recognize whether or not there is a landmine based on a region’s features.

Spam Detection dataset was obtained from the ECML/PAKDD 2006 Discovery Challenge for the spam detection problem. For our experiments, we used the Task B dataset, which contains labeled training emails from the inboxes of different users. The goal is to construct a binary classifier for each user that separates spam emails from non-spam ones. Each email is represented by the term frequencies of its words, from which we selected the most frequent ones as features.

SARCOS dataset is generated from an inverse dynamics prediction problem for a seven degrees-of-freedom (DOF) SARCOS anthropomorphic robot arm. This dataset consists of 28 dimensions: the first 21 dimensions are considered as features (including joint positions, joint velocities and joint accelerations), and the last 7 dimensions, corresponding to joint torques, are used as outputs. Therefore, there are seven tasks and the inputs are shared among all of them: for each observation, the goal is to predict the joint torques for the seven DOF. This dataset involves tens of thousands of observations, from which we randomly sampled a subset of examples for our experiments.

Short-term Electricity Load Forecasting dataset was released for the Global Energy Forecasting Competition (GEFCom2012). This dataset contains the hourly load history of a US utility across multiple zones from January 2004 to December 2008. The goal is to predict the electricity load of these zones a number of hours ahead. For this purpose, we considered predictors consisting of a delay vector of lagged hourly loads along with calendar information, including year, season, month, weekday and holiday indicators. Note that we normalized the data to unify the units of the different features. Finally, we randomly sampled (non-sequential) examples per task for our experiments.

5.2 Experimental Results

To assess the performance of our MT-ONMKL, we compared it against the neighborhood kernel approaches reviewed in \srefsec:NewModel. In both cases, considering an SVM formulation, the optimization over the neighborhood kernel leads to an analytical solution that depends on the labels of the training samples. More specifically, the first model, proposed in [22], uses a Gaussian kernel matrix whose spread parameter is chosen via cross-validation, as suggested by the authors; we refer to this model as RPKL. The second approach, proposed in [23] and dubbed ONJKL, learns the kernel in the form of a linear combination of a set of base kernels instead of pre-specifying it. Note that we modified the formulations in [22, 23] to the MTL setting. We also compare our approach with a simple KTA model in which the neighborhood-defining matrix of each task is fixed and given by the target kernel derived from the output labels.

Moreover, we evaluate the performance of our model against classical MT-MKL, which considers inter-related SVM-based formulations with multiple kernel functions and jointly learns the kernel combination parameters during the training process. AVeraged Multi-Task Multiple Kernel Learning (AVMTMKL), in which all MKL combination parameters are fixed and set equal to each other, is another method we consider in our comparison study. Finally, an independent task learning (ITL) model is used as a baseline, according to which each task is individually trained via a traditional single-task MKL strategy, and the average performance over all tasks is used to gauge the effectiveness of this method against the others.

Table 1: Experimental comparison between MT-ONMKL and six other methods (ITL, AVMTMKL, MT-MKL, RPKL, ONJKL and KTA) on five benchmark datasets: classification accuracy on Landmine, Letter and Spam, and regression MSE on SARCOS and Load. The superscript next to each model indicates its rank; the best-performing algorithm gets a rank of 1.
\trefmulti-task2 reports the average performance (accuracy for classification, MSE for regression) over 20 runs of randomly sampled training sets for each experiment. The superscript next to each value indicates the rank of the corresponding model on the relevant data set, while the superscript next to each model name reflects its average rank over all data sets. We used Friedman’s test and Holm’s post-hoc test [9], by means of which a model can be statistically compared to a set of other methods over multiple data sets. According to this statistical analysis, we conclude that our model dominates all other methods at a prescribed significance level.
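The Friedman test of [9] can be reproduced, for instance, with SciPy; in the sketch below the score matrix is a placeholder (rows index data sets, columns index methods), and the Holm post-hoc step would then compare average ranks following [9].

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Placeholder scores only: rows = data sets, columns = compared methods.
scores = np.array([
    [0.71, 0.73, 0.74, 0.78],
    [0.80, 0.82, 0.83, 0.86],
    [0.65, 0.66, 0.69, 0.71],
    [0.90, 0.91, 0.91, 0.93],
    [0.77, 0.79, 0.80, 0.84],
])
stat, p_value = friedmanchisquare(*scores.T)     # one score vector per method
ranks = rankdata(-scores, axis=1)                # per-data-set ranks, 1 = best
print(stat, p_value, ranks.mean(axis=0))         # average rank per method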

Note that, although KTA shows promising results in classification, it fails to yield good results for the regression problems. This is even more evident for the SARCOS dataset, which is considered a challenging problem due to the strong nonlinearity caused by the extensive superposition of sine and cosine functions in the robot dynamics [30]. This might lead one to conclude that, in complex prediction problems similar to robot inverse dynamics, using the output kernel might not be the best choice for aligning the optimal kernel.

Figure 1: Kernel alignment between the optimal and neighborhood-defining matrices for each pair of tasks: (a) classification (Letter); (b) regression (Load).

For all four alignment-regularized models, the pairwise alignments between the optimal kernels and the neighborhood-defining matrices are shown in \freffig:Feature space. As one can observe, for both the classification (Letter) and regression (Load) problems, our optimal kernel for each task is highly aligned not only with its own corresponding neighborhood kernel, but also with the neighborhood kernels of the other tasks. This suggests that our model provides the best alignment, as well as the best performance of the final kernels, among all the alignment-based models considered in this study.

6 Conclusions

In this work, we proposed a novel SVM-based MT-MKL framework for both classification and regression. Our new algorithm improves over existing kernel-based methods, which have been demonstrated to be good performers on a variety of classification and regression tasks in the past. In particular, our model learns an optimal kernel simultaneously with a neighborhood-defining (possibly indefinite) kernel, based on a Rademacher complexity-regularized formulation. As opposed to previous approaches, our MT-ONMKL model identifies the neighborhood-defining kernels in a much more principled manner; specifically, they are chosen as the ones minimizing the Rademacher complexity bound of alignment-regularized models. The performance advantages reported for both classification and regression problems largely justify the introduction of this new model.

Supplementary Materials

A useful lemma in deriving the generalization bound of \thrmrefthrm:thrm is provided next.

Lemma.

Let a matrix and a vector of independent Rademacher random variables be given, and let the Hadamard (component-wise) matrix product be denoted as usual. Then, it holds that

(17)
Proof.

Let the Iverson bracket be such that it equals 1 if its enclosed statement is true and 0 otherwise. The expectation in question can then be written as

(18)

where the indices of the last sum run over the sample index set. Since the components of the Rademacher vector are independent, it is not difficult to verify that the expectation of the corresponding product of Rademacher variables is non-zero only in the following four cases: all four indices are equal, or they coincide in pairs in one of the three possible ways; in all other cases the expectation vanishes. Therefore, it holds that

(19)
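Spelled out with explicit index names $i,j,k,l$ for the quadruple sum (names ours), the case analysis behind this display is
\[
\mathbb{E}\!\left[\sigma_i\sigma_j\sigma_k\sigma_l\right]=
\begin{cases}
1, & i=j=k=l,\ \ \text{or}\ \ i=j\neq k=l,\ \ \text{or}\ \ i=k\neq j=l,\ \ \text{or}\ \ i=l\neq j=k,\\
0, & \text{otherwise},
\end{cases}
\]
since any index appearing an odd number of times makes the product have zero mean.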

Substituting \erefeq:app2 into \erefeq:app1, after some algebraic operations, yields the desired result.

Proof of \thrmrefthrm:thrm

As mentioned earlier, the empirical Rademacher complexity of the function class under consideration is defined as

(20)

where the Rademacher random variables are i.i.d. By invoking the Representer Theorem (see, e.g., [27]), (20) becomes

(21)

where the quantities involved are as previously defined; an auxiliary constraint set is also defined here.

Instead, consider the relaxed constraint set . Then, it follows that

(22)

Using the definitions given above, the right-hand side of \erefeq:thm:4, when first optimized w.r.t. the corresponding variables, yields

(23)

where the involved quantities are defined accordingly. Also, using a standard conversion, we have, for any non-negative vector and any admissible index:

Taking appropriate choices for these quantities, it can be shown that the bound takes the form given below, where

(24)

and, with an additional matrix defined element-wise, \erefeq:thm:5 becomes

(25)

Optimizing w.r.t. finally yields

(26)

By applying Jensen’s Inequality twice, we obtain

(27)

If the block vector is defined as in \erefeq:blockdVectorDef, the first expectation evaluates to

(28)

Note that, with the aid of \lemmreflemm:lemma and the definition in \erefeq:blockdVectorDef, it can be shown that

(29)

Combining \erefeq:thm:8, \erefeq:thm:9 and \erefeq:thm:10 we conclude that

where we used the Arithmetic-Geometric Mean inequality in the last step.

Proposition.

Let the vectorization operator stack all the columns of a matrix into a vector, and define the matrix as in (8). Let its pseudo-inverse and the associated orthogonal projection be defined as usual, and let an auxiliary vector be defined accordingly. If one further defines the corresponding quadratic and linear terms, then the solution of the following QP

(30)

is the same as the solution of the optimization problem (15).

Proof.

The proof follows from elementary identities among the above quantities; replacing them in (15) completes the proof. ∎

References

  • [1] Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6, 2005.
  • [2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
  • [3] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-First International Conference on Machine Learning, page 6. ACM, 2004.
  • [4] Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
  • [5] Rich Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
  • [6] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
  • [7] Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Algorithms for learning kernels based on centered alignment. Journal of Machine Learning Research, 13(Mar):795–828, 2012.
  • [8] Nello Cristianini, John Shawe-Taylor, Andre Elisseeff, and Jaz Kandola. On kernel-target alignment. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, pages 367–373. MIT Press, 2001.
  • [9] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7(Jan):1–30, 2006.
  • [10] Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert–Schmidt norms. In International Conference on Algorithmic Learning Theory, pages 63–77. Springer, 2005.
  • [11] Suicheng Gu and Yuhong Guo. Learning SVM classifiers with indefinite kernels. In Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
  • [12] Tony Jebara. Multi-task feature and kernel selection for SVMs. In ICML, 2004.
  • [13] Tony Jebara. Multitask sparsity via maximum entropy discrimination. Journal of Machine Learning Research, 12:75–110, 2011.
  • [14] Seung-Jean Kim, Alessandro Magnani, and Stephen Boyd. Optimal kernel selection in kernel fisher discriminant analysis. In Proceedings of the 23rd international conference on Machine learning, pages 465–472. ACM, 2006.
  • [15] Marius Kloft, Ulf Brefeld, Sören Sonnenburg, and Alexander Zien. ℓp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953–997, 2011.
  • [16] Vladimir Koltchinskii and Dmitry Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, pages 1–50, 2002.
  • [17] Abhishek Kumar and Hal Daume. Learning task grouping and overlap in multi-task learning. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1383–1390, 2012.
  • [18] Achintya Kundu, Vikram Tankasali, Chiranjib Bhattacharyya, and Aharon Ben-Tal. Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions. In Advances in Neural Information Processing Systems, pages 1198–1206, 2010.
  • [19] Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
  • [20] Cong Li, Michael Georgiopoulos, and Georgios C Anagnostopoulos. Conic multi-task classification. In Machine Learning and Knowledge Discovery in Databases, pages 193–208. Springer, 2014.
  • [21] Cong Li, Michael Georgiopoulos, and Georgios C Anagnostopoulos. Pareto-path multi-task multiple kernel learning. ArXiv e-prints, April 2014. arXiv:1404.3190.
  • [22] Jun Liu, Jianhui Chen, Songcan Chen, and Jieping Ye. Learning the optimal neighborhood kernel for classification. In Proceedings of the 21st International Jont Conference on Artifical Intelligence, IJCAI’09, pages 1144–1149, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.
  • [23] Xinwang Liu, Jianping Yin, Lei Wang, Lingqiao Liu, Jun Liu, Chenping Hou, and Jian Zhang. An adaptive approach to learning optimal neighborhood kernels. IEEE T. Cybernetics, 43(1):371–384, 2013.
  • [24] Gaëlle Loosli, Stéphane Canu, and Cheng Soon Ong. Learning SVM in Kreĭn spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6):1204–1216, 2016.
  • [25] Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J Smola. Learning with non-positive kernels. In Proceedings of the twenty-first international conference on Machine learning, page 81. ACM, 2004.
  • [26] Wojciech Samek, Alexander Binder, and Motoaki Kawanabe. Multi-task learning via non-sparse multiple kernel learning. In Pedro Real, Daniel Diaz-Pernil, Helena Molina-Abril, Ainhoa Berciano, and Walter Kropatsch, editors, Computer Analysis of Images and Patterns, volume 6854 of Lecture Notes in Computer Science, pages 335–342. Springer Berlin / Heidelberg, 2011.
  • [27] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In David Helmbold and Bob Williamson, editors, Computational Learning Theory, volume 2111 of Lecture Notes in Computer Science, pages 416–426. Springer Berlin Heidelberg, 2001. doi:10.1007/3-540-44581-1_27.
  • [28] Sören Sonnenburg, Gunnar Rätsch, Christin Schäfer, and Bernhard Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, 2006.
  • [29] Lei Tang, Jianhui Chen, and Jieping Ye. On multiple kernel learning with multiple labels. In IJCAI, 2009.
  • [30] Sethu Vijayakumar and Stefan Schaal. Locally weighted projection regression: An O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), 2000.
  • [31] Niloofar Yousefi, Michael Georgiopoulos, and Georgios C Anagnostopoulos. Multi-task learning with group-specific feature space sharing. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 120–136. Springer, 2015.
  • [32] Jinfeng Zhuang, Ivor W Tsang, and Steven CH Hoi. A family of simple non-parametric kernel learning algorithms. Journal of Machine Learning Research, 12(Apr):1313–1347, 2011.