Abstract

In this paper we present two related, kernel-based distance metric learning (DML) methods. Their respective models non-linearly map data from their original space to an output space, and subsequent distance measurements are performed in the output space via a Mahalanobis metric. The dimensionality of the output space can be directly controlled to facilitate the learning of a low-rank metric. Both methods allow for the simultaneous inference of the associated metric and the mapping to the output space, which can be used to visualize the data, when the output space is 2- or 3-dimensional. Experimental results for a collection of classification tasks illustrate the advantages of the proposed methods over other traditional and kernel-based DML approaches.


Keywords: Distance Metric Learning, Kernel Methods, Reproducing Kernel Hilbert Space for Vector-valued Functions

1 Introduction

Distance metric learning (DML) has become an active research area due to the fact that many machine learning models and algorithms depend on metric calculations. Using plain Euclidean distances between samples may not be a suitable approach for some practical problems, e.g., for k-nearest neighbor (KNN) classification, where a metric other than the Euclidean one may yield higher recognition rates. Hence, it may be important to learn an appropriate metric for the learning problem at hand. DML aims to address this problem, i.e., to infer a parameterized metric from the available training data that maximizes the performance of a model.

Most past DML research focuses specifically on learning a weighted Euclidean metric, also known as the Mahalanobis distance (e.g., see [13]), or generalizations of it, where the weights are inferred from the data. For elements $x, y$ of a finite-dimensional Euclidean space $\mathbb{R}^d$, the Mahalanobis distance is defined as $d_M(x, y) = \sqrt{(x - y)^\top M (x - y)}$, where $M \succeq 0$, i.e., $M$ is a symmetric, positive semi-definite matrix of weights to be determined. Note that, when $M$ is not strictly positive definite, it defines a pseudo-metric in $\mathbb{R}^d$. An obvious DML approach is to learn this metric in the data's native space, which is tantamount to first linearly transforming the data via a matrix $L$, such that $M = L^\top L$, and then measuring distances using the standard Euclidean metric, i.e., $d_M(x, y) = \| L x - L y \|_2$.
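
As a quick numerical illustration of this equivalence, consider the following sketch; the toy data, dimensions and factor matrix are arbitrary choices made only for the example.

import numpy as np

# Minimal sketch: a Mahalanobis (pseudo-)metric with weight matrix M = L^T L
# equals the Euclidean metric applied to linearly transformed data.
rng = np.random.default_rng(0)
d = 5                                    # dimensionality of the native space
L = rng.standard_normal((3, d))          # a (possibly low-rank) linear transform
M = L.T @ L                              # symmetric positive semi-definite weights

x, y = rng.standard_normal(d), rng.standard_normal(d)

dist_via_M = np.sqrt((x - y) @ M @ (x - y))     # Mahalanobis distance with M
dist_via_L = np.linalg.norm(L @ x - L @ y)      # Euclidean distance after transform
assert np.isclose(dist_via_M, dist_via_L)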

One alternative worth exploring is to apply a non-linear transform prior to measuring Mahalanobis distances, so that performance may improve over the case where a linear transformation is used. Towards this end, efforts have recently been made to develop kernel-based DML approaches. If $\mathcal{X}$ is the original (native) data space, most of these methods choose an appropriate (positive definite) scalar kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, which gives rise to a reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ of functions with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}_k}$. This inner product satisfies the (reproducing) property that, for any $x, x' \in \mathcal{X}$, there are functions $\phi(x), \phi(x') \in \mathcal{H}_k$, such that $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}_k}$. The mapping $\phi$ is referred to as the feature map and $\mathcal{H}_k$ is referred to as the (transformed) feature space of $\mathcal{X}$, both of which are implied by the chosen kernel. Notice that the feature map may be highly non-linear. Subsequently, these methods learn a metric in the feature space $\mathcal{H}_k$ of the form $d_A(x, x') = \sqrt{\langle \phi(x) - \phi(x'), A (\phi(x) - \phi(x')) \rangle_{\mathcal{H}_k}}$, where $A$ is a self-adjoint, bounded, positive-definite operator, preferably of low rank. Since any element of $\mathcal{H}_k$ may be of infinite dimension, the operator $A$ may be described by an infinite number of parameters to be inferred from the data. Obviously, learning $A$ directly is not feasible and, therefore, it needs to be learned in some indirect fashion. For example, the authors in [8] pointed out an equivalence between kernel learning and metric learning in the feature space. Specifically, they showed that learning $A$ in $\mathcal{H}_k$ is implicitly achieved by learning a finite-dimensional matrix.
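
As a small, self-contained illustration of the kernel trick underlying such approaches, the sketch below uses a Gaussian scalar kernel (an illustrative choice, not one prescribed above) to evaluate a plain feature-space distance from kernel values alone, without ever forming the feature map explicitly.

import numpy as np

# Minimal sketch: ||phi(x) - phi(y)||^2 in the feature space equals
# k(x, x) - 2 k(x, y) + k(y, y), so no explicit feature map is needed.
def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)

sq_feature_dist = gauss_kernel(x, x) - 2.0 * gauss_kernel(x, y) + gauss_kernel(y, y)
print(sq_feature_dist)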

In this paper, we propose a different DML kernelization strategy, according to which a kernel-based, non-linear transform maps the original space $\mathcal{X}$ into a Euclidean output space $\mathbb{R}^m$, and a Mahalanobis distance is then learned in that output space. This strategy gives rise to two new models that simultaneously learn both the mapping and the output space metric. Leveraging the Representer Theorem of [14], all computations of both methods involve only kernel calculations. Unlike previous kernel-based approaches, whose mapping from input to feature space cannot be cast into an explicit form, the relevant mappings from input to output space are explicit for both of our methods. Thus, we can access the transformed data in the output space, and this feature can even be used to visualize the data [20], when the output space is 2- or 3-dimensional. Furthermore, by specifying the dimensionality of the output space, the rank of the learned metric can be easily controlled to facilitate dimensionality reduction of the original data.

Our first approach uses an appropriate, but otherwise arbitrary, matrix-valued kernel function and, hence, provides maximum flexibility in specifying the mapping. Furthermore, in this approach, Mahalanobis distances are explicitly parameterized by a weight matrix to be learned. Our second method is similar to the first one, but assumes a specific, parameterized matrix-valued kernel function that can be inferred from the data. We show that the Mahalanobis distance is implicitly determined by the kernel function, thus eliminating the need to learn a separate weight matrix for the Mahalanobis distances. To demonstrate the merit of our methods, we compare them to standard KNN classification (without DML) and other recent kernelized DML algorithms, including Large Margin Nearest Neighbor (LMNN) [22], Information-Theoretic Metric Learning (ITML) [4] and kernelized LMNN (KLMNN) [3]. The comparisons are drawn using eight UCI benchmark data sets in terms of recognition performance and show that the novel methods can achieve higher classification accuracy.

Related Work Several previous works have focused on DML. Xing et al. [23] proposed an early DML method, which minimizes the distance between similar points, while enlarging the distance between dissimilar points. In [17], relative comparison constraints that involve three points at a time are considered. Neighborhood Components Analysis (NCA) [6] learns a Mahalanobis distance for the KNN classifier by maximizing the leave-one-out KNN performance. The authors of [1] proposed a DML method for clustering. The Large Margin Nearest Neighbor (LMNN) DML model [22] aims to produce a mapping, so that the k-nearest neighbors of any given sample belong to the same class, while samples from different classes are separated by large margins. Similarly, a Support Vector-based method is proposed in [15]. Also, LMNN has been further extended to a Multi-Task Learning variation [16]. Another multi-task DML model is proposed in [25] that searches for task relationships. In [7], the authors proposed a general framework for sparse DML that subsumes several previous works. Also, some other DML models can be extended to sparse versions by augmenting their formulations. Recently, an eigenvalue optimization framework for DML was developed and presented in [24]. Moreover, the connection between LMNN and Support Vector Machines (SVMs) was discussed in [5].

Besides the problem of learning a metric in the original feature space, there has been increasing interest in kernelized DML methods. In the early work of [19], the Lagrange dual problem of the proposed DML formulation is derived, and the DML method is kernelized in the dual domain. Information-Theoretic Metric Learning (ITML) [4] is another kernelized method, which is based on minimizing the Kullback-Leibler divergence between two distributions. The kernelization of LMNN is discussed in [18] and [10]. Moreover, a Kernel Principal Component Analysis (KPCA)-based kernelization framework is developed in [3], such that many DML methods, such as LMNN, can be kernelized. In [12], the Mahalanobis matrix and the kernel matrix are learned simultaneously. In [8] and its extended work [9], the authors proposed a framework that builds connections between kernel learning and DML in the kernel-induced feature space; several kernelized models, such as ITML, are covered by this framework. Finally, multiple kernel learning (MKL)-based DML is discussed in [21].

2 RKHS for Vector-Valued Functions

Before introducing our methods, in this section we briefly review the concept of a Reproducing Kernel Hilbert Space (RKHS) for vector-valued functions, as presented in [14]. Let $\mathcal{X}$ be an arbitrary set, which we will refer to as the input space, although it may not actually be a vector space per se. A matrix-valued function $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{m \times m}$ is called a positive-definite matrix-valued kernel, or simply a matrix kernel, iff it satisfies the following conditions:

$K(x_1, x_2) = K(x_2, x_1)^\top$ (1)
$c^\top K(x, x)\, c \geq 0$ (2)
$\mathbf{K}(X, X) \succeq 0$ (3)

where $x_1, x_2, x \in \mathcal{X}$, $c \in \mathbb{R}^m$, $X = \{x_1, \dots, x_N\} \subset \mathcal{X}$ is any finite collection of inputs and $\mathbf{K}(X, X)$ is a block matrix, whose $(i, j)$ block is given as $K(x_i, x_j)$, where $x_i, x_j \in X$. According to [14, Theorem 1], if $K$ is a matrix kernel, then there exists a unique (up to an isometry) RKHS $\mathcal{H}$ of vector-valued functions $f: \mathcal{X} \to \mathbb{R}^m$ equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ that admits $K$ as its reproducing kernel, i.e., $\forall x \in \mathcal{X}$ and $y \in \mathbb{R}^m$, there are vector-valued functions $K_x y \in \mathcal{H}$ that depend on $x$ and $y$ respectively, such that it holds

$\langle f(x), y \rangle_{\mathbb{R}^m} = \langle f, K_x y \rangle_{\mathcal{H}}, \quad \forall f \in \mathcal{H}$ (4)

Note that $K_x: \mathbb{R}^m \to \mathcal{H}$ is a bounded linear operator parameterized by $x$ and that the function $K_x y$ is such that, when evaluated at $x' \in \mathcal{X}$, it yields

$(K_x y)(x') = K(x', x)\, y$ (5)
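
As a concrete illustration of condition (3), the sketch below assembles the block Gram matrix of a separable matrix kernel built from a scalar Gaussian kernel and a positive semi-definite matrix (a kernel of this form reappears in Section 4) and verifies its positive semi-definiteness; the data and parameter values are placeholders chosen only for the example.

import numpy as np

# Minimal sketch: build the Nm x Nm block matrix K(X, X) whose (i, j) block is
# K(x_i, x_j) and check that it is symmetric and positive semi-definite.
def scalar_gauss(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def matrix_kernel(x, y, B, sigma=1.0):
    return scalar_gauss(x, y, sigma) * B            # an m x m block

rng = np.random.default_rng(0)
N, d, m = 6, 3, 2
X = rng.standard_normal((N, d))
A = rng.standard_normal((m, m))
B = A @ A.T                                         # symmetric PSD parameter

G = np.block([[matrix_kernel(X[i], X[j], B) for j in range(N)] for i in range(N)])
assert np.allclose(G, G.T)
assert np.linalg.eigvalsh(G).min() > -1e-10         # PSD up to round-off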

3 Fixed Matrix Kernel DML Formulation

In this section, we propose our first kernelized DML method, based on an RKHS of vector-valued functions. Again, let $\mathcal{X}$ be an arbitrary set. Assume we are provided with a training set $\mathcal{T} = \{(x_n, y_n)\}_{n=1}^N$, where $x_n \in \mathcal{X}$ and $y_n \in \mathbb{R}^m$, and that we are considering the supervised learning task that seeks to infer a distance metric in $\mathbb{R}^m$ along with a mapping from $\mathcal{X}$ to $\mathbb{R}^m$. In addition to $\mathcal{T}$, we also assume that we are provided with a real-valued, symmetric similarity matrix $S$ with entries $s_{ij} = s_{ji}$. Other than these constraints, the values $s_{ij}$ can be arbitrary and assigned appropriately with respect to a specific application context. Moreover, let $K$ be a matrix-valued kernel function (i.e., one satisfying conditions (1) through (3)) on $\mathcal{X}$ of given form and let $\mathcal{H}$ be its associated RKHS of $\mathbb{R}^m$-valued elements. Consider now the following DML formulation:

(6)

Notice that $\| L (f(x_i) - f(x_j)) \|_2^2 = (f(x_i) - f(x_j))^\top M (f(x_i) - f(x_j))$, where $M = L^\top L$. In other words, the Euclidean norms of vector differences appearing in (6) are Mahalanobis distances for the output space. Note that, if $L$ is not full-rank, then $M$ is not strictly positive definite and thus defines a pseudo-metric in $\mathbb{R}^m$. The rationale behind this formulation is as follows. The first term, the collocation term, forces similar (w.r.t. the similarity measure $S$) input samples to be mapped closely in the output space (unsupervised learning task). The second term, the regression term, forces samples to be mapped close to their target values (supervised learning task). In the context of classification tasks, the combination of these two terms aims to force data that belong to the same class to be mapped close to the same cluster. Closeness in the output space is measured via a Mahalanobis metric that is parameterized via $L$. The third term, as we will show later, controls the magnitude of the matrix $L$ and facilitates the derivation of our proposed algorithm. Finally, the fourth term is a regularization term that penalizes the complexity of $f$. Eventually, one can simultaneously learn the output space distance metric and the mapping through a joint minimization.
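
For concreteness, a schematic objective consistent with the four terms just described could take the following form; the coefficients $\lambda_1, \lambda_2, \lambda_3$ and the exact arrangement and weighting of the terms in (6) are assumptions made purely for illustration.

$\min_{f \in \mathcal{H},\, L} \;\; \sum_{i,j=1}^{N} s_{ij} \, \| L (f(x_i) - f(x_j)) \|_2^2 \;+\; \lambda_1 \sum_{n=1}^{N} \| L (f(x_n) - y_n) \|_2^2 \;+\; \lambda_2 \, \| L \|_F^2 \;+\; \lambda_3 \, \| f \|_{\mathcal{H}}^2$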

The functional of (6) satisfies the conditions stipulated by the Representer Theorem for Hilbert spaces of vector-valued elements (Theorem 5 in [14]) and, therefore, for a fixed value of $L$, the unique minimizer $f^*$ is of the form:

$f^* = \sum_{n=1}^{N} K_{x_n} c_n$ (7)

where the $m$-dimensional vectors $c_n$ are to be learned. Notice that, due to (5), the explicit input-to-output mapping is given in (8) and is, in general, non-linear in $x$, if $\mathcal{X}$ is a vector space over the reals.

$f^*(x) = \sum_{n=1}^{N} K(x, x_n)\, c_n$ (8)
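
The sketch below illustrates how the explicit mapping of (8) and the output-space Mahalanobis distance parameterized by $L$ could be evaluated once the coefficient vectors have been learned; the kernel, the data and the parameter values shown are placeholders rather than learned quantities.

import numpy as np

# Minimal sketch: evaluate f(x) = sum_n K(x, x_n) c_n and measure output-space
# distances as ||L (f(x) - f(x'))||_2.
def matrix_kernel(x, y, B, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)) * B

rng = np.random.default_rng(0)
N, d, m = 8, 3, 2
X_train = rng.standard_normal((N, d))
B = np.eye(m)                               # example matrix-kernel parameter
C = rng.standard_normal((N, m))             # placeholder coefficient vectors c_n (rows)
L = rng.standard_normal((m, m))             # placeholder Mahalanobis factor

def f_map(x):
    return sum(matrix_kernel(x, X_train[n], B) @ C[n] for n in range(N))

def learned_distance(x1, x2):
    return np.linalg.norm(L @ (f_map(x1) - f_map(x2)))

print(learned_distance(rng.standard_normal(d), rng.standard_normal(d)))
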
Proposition 1. Problem (6) is equivalent to the following minimization problem:

(9)

where $\mathbf{K}$ is the (block) kernel matrix for the training set (as defined for condition (3)) and the remaining matrices collect the coefficient vectors $c_n$, the targets $y_n$ and the similarities $s_{ij}$.

The above proposition can be proved by directly substituting (7) into (6) and then using (4). Given two samples $x_1, x_2 \in \mathcal{X}$, the inferred metric will be of the form

(10)

Next, we state a result that facilitates the solution of Problem (9).

Proposition 2. The objective of Problem (9) is convex with respect to each of the two variables $C$ and $L$ individually.

Proof.

The convexity of the objective function, denoted as $J$, with respect to $C$ is guaranteed by the positive semi-definiteness of the corresponding Hessian matrix of $J$:

(11)

To show the convexity with respect to $L$, we consider each term separately. The convexity of the collocation term stems from the conclusion in [2, p. 110], which states that a norm of an affine function of a matrix variable is convex in that variable. For the same reason, the regression term is also convex. Finally, the remaining term is convex in $L$, as shown in [2, p. 109]. Thus, the objective function is also convex with respect to $L$. ∎

Based on Proposition 2, we can perform the joint minimization of Problem (9) by block coordinate descent with respect to $C$ and $L$. We set the partial derivatives of $J$ with respect to the two variables to zero and obtain

(12)
(13)

where $(\cdot)^{\dagger}$ stands for Moore-Penrose pseudo-inversion. One can update $C$ via (12) by holding $L$ fixed to its current estimate and then update $L$ via (13) by using the most current value of $C$. Repeating these steps until convergence constitutes the basis of the block coordinate descent used to train this model. Due to the calculation of the pseudo-inverse, the worst-case time complexity of each iteration is cubic in the dimension of the matrix being pseudo-inverted.
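
A minimal sketch of this alternating scheme is given below; the functions update_C and update_L are hypothetical placeholders standing in for the closed-form updates of (12) and (13), and the stopping rule is an assumption.

import numpy as np

# Minimal sketch of block coordinate descent: alternate the two updates until
# the iterates stabilize (or a maximum iteration count is reached).
def block_coordinate_descent(update_C, update_L, C0, L0, max_iter=100, tol=1e-6):
    C, L = C0, L0
    for _ in range(max_iter):
        C_new = update_C(L)              # Eq. (12): update C with L held fixed
        L_new = update_L(C_new)          # Eq. (13): update L with the fresh C
        change = np.linalg.norm(C_new - C) + np.linalg.norm(L_new - L)
        C, L = C_new, L_new
        if change < tol:
            break
    return C, L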

As we can observe from (13), the parameter that appears in the third term of (6) directly controls the norm of $L$. Although other regularization terms on $L$ may be utilized in place of this one, they may not lead to a simple update equation for $L$, such as the one given in (13). The potential appeal of this formulation stems from the simplicity of the training algorithm combined with the flexibility of choosing a matrix kernel function that is suitable to the application at hand.

4 Parameterized Matrix Kernel DML Formulation

Our next formulation shares all assumptions with the previous one, with the exception that the matrix kernel function is now parameterized. We shall show that, even though this matrix kernel function is somewhat restricted, it has the property of being able to implicitly determine the output space Mahalanobis metric. To start, we assume a matrix kernel of the form:

$K(x_1, x_2) = k(x_1, x_2)\, B$ (14)

where $k(\cdot, \cdot)$ is a scalar kernel function that is predetermined by the user and $B$ is a symmetric, positive semi-definite matrix, which will be learned from the training data. Because of these facts, $K$ satisfies conditions (1) through (3) and, therefore, is a legitimate matrix kernel function. The formulation for the alternative DML model reads

(15)

where $\| \cdot \|_F^2$ is the squared Frobenius norm and $\mathrm{tr}\{\cdot\}$ is the matrix trace operator. Problem (15) differs from Problem (6) in a regularization term and in that the former seems to use Euclidean distances in the output space, while the latter uses Mahalanobis distances in the output space with weight matrix $L$. As was the case with the formulation of Section 3, the functional of (15) also satisfies the conditions of the Representer Theorem for Hilbert spaces of vector-valued elements and, for a fixed value of $B$, the unique minimizer has the same form as the one of (7), while the explicit input-to-output mapping is given as

$f^*(x) = \sum_{n=1}^{N} k(x, x_n)\, B\, c_n$ (16)

which, in all but trivial cases, is again non-linear in $x$, if $\mathcal{X}$ is a vector space over the reals. Via a derivation similar to the one found in Section 3, one can show that Problem (15) is equivalent to the following constrained joint minimization problem:

(17)

where $\mathbf{K}$ is the kernel matrix with $k(x_i, x_j)$ as its $(i, j)$ element, $\mathrm{diag}\{\cdot\}$ is the operator producing a diagonal matrix with the same diagonal as the operator's argument and $\mathbf{1}$ is the all-ones vector. The learned metric will be of the form

(18)

where, in this case, the weight matrix of the Mahalanobis distance is determined by $B$. It is readily seen that the matrix $B$ specifying the matrix kernel function also determines the Mahalanobis distance in the output space $\mathbb{R}^m$. Therefore, this model implicitly learns the Mahalanobis distance by learning the matrix $B$ in the kernel function.
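
The following sketch illustrates the mapping of the second model: with the separable kernel of (14), the explicit map of (16) becomes a $B$-weighted kernel expansion, and output-space distances are then governed by $B$. The data, kernel spread and parameter values are illustrative placeholders rather than learned quantities.

import numpy as np

# Minimal sketch: f(x) = sum_n k(x, x_n) B c_n = B C^T kappa(x), with
# kappa(x) = [k(x, x_1), ..., k(x, x_N)]^T; distances between mapped points
# then depend on B alone (no separate weight matrix is learned).
def scalar_gauss(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
N, d, m = 8, 3, 2
X_train = rng.standard_normal((N, d))
C = rng.standard_normal((N, m))             # placeholder coefficient vectors c_n (rows)
A = rng.standard_normal((m, m))
B = A @ A.T                                 # symmetric PSD kernel parameter

def f_map(x):
    kappa = np.array([scalar_gauss(x, X_train[n]) for n in range(N)])
    return B @ (C.T @ kappa)

x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
print(np.linalg.norm(f_map(x1) - f_map(x2)))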

Proposition 3. The objective of Problem (17) is convex with respect to each of the two variables $C$ and $B$.

Proof.

The proof is based on the following facts outlined in [2, sec. 3.6]: (a) A matrix-valued function $g$ is matrix convex if and only if, for any fixed vector $z$, the scalar function $z^\top g(\cdot)\, z$ is convex. (b) Suppose a matrix-valued function $g$ is matrix convex and a real-valued function $h$ is convex and non-decreasing; then, $h \circ g$ is convex, where $\circ$ denotes function composition. (c) The function $\mathrm{tr}\{A \cdot\}$ is convex and non-decreasing (with respect to the positive semi-definite ordering), if $A \succeq 0$. In what follows, we show convexity for each term in (17). The first term is convex with respect to $C$ based on facts (b) and (c). To show the convexity with respect to $B$, note that the relevant matrix-valued function is matrix convex with respect to $B$ based on fact (a); thus, combining facts (b) and (c), we obtain the convexity. The same argument is employed to prove the convexity of the other three terms. ∎

Based on Proposition 3, we can again apply a block coordinate descent algorithm to solve Problem (17). If $J$ now denotes the relevant objective function, we set the partial derivative of $J$ with respect to $C$ to zero and obtain:

(19)

As noted in [11], this matrix equation can be solved for $C$ as follows:

(20)

To find the optimum $B$ for fixed $C$, due to the constraint $B \succeq 0$, we use a projected gradient descent method. In each iteration, we update $B$ using the traditional gradient descent rule $B \leftarrow B - \eta\, \partial J / \partial B$, where $\eta$ is the step length, followed by projecting the updated $B$ onto the cone of positive semi-definite matrices. Since $J$ is convex with respect to $B$ for fixed $C$, this procedure is able to find the optimum solution for $B$. The gradient with respect to $B$ is given as
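
A minimal sketch of one projected gradient step for $B$ is shown below; grad_B is a hypothetical placeholder for the gradient of (21), and the step length value is an assumption. The projection onto the positive semi-definite cone clips negative eigenvalues to zero.

import numpy as np

# Minimal sketch: gradient step on B followed by projection onto the PSD cone.
def project_psd(B):
    B = (B + B.T) / 2.0                     # symmetrize against round-off
    w, V = np.linalg.eigh(B)
    return (V * np.clip(w, 0.0, None)) @ V.T

def projected_gradient_step(B, grad_B, eta=1e-2):
    return project_psd(B - eta * grad_B(B))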

(21)

where $\odot$ denotes the Hadamard matrix product and the auxiliary matrix appearing in (21) is defined as

(22)

Therefore, for each iteration, the time complexity of updating $C$ is dominated by the calculation of a matrix inverse. When updating $B$, the time complexity is determined by the convergence speed of the projected gradient descent method.

5 Experiments

In this section, we evaluate the performance of our two kernelized DML methods on classification problems. Towards this purpose, we opt to set each target $y_n$ to an appropriately chosen prototype target vector for the class of the $n$-th sample. Additionally, we choose to evaluate the pair-wise sample similarities via the Iverson bracket, i.e., $s_{ij}$ equals 1, if the $i$-th and $j$-th samples share the same class label, and 0 otherwise. After training each of these models, we employ a KNN classifier to label samples in the range space of the learned mapping; the classifier uses the models' learned metrics (given by (10) and (18)) to establish nearest neighbors.
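
As an illustration of this setup, the sketch below constructs the similarity matrix from class labels and uses one-hot vectors as class prototypes; the one-hot choice and the toy labels are assumptions made only for the example, since the prototype vectors are not pinned down here.

import numpy as np

# Minimal sketch: Iverson-bracket similarities and per-class prototype targets.
labels = np.array([0, 1, 2, 1, 0])                       # toy class labels
n_classes = labels.max() + 1

S = (labels[:, None] == labels[None, :]).astype(float)   # s_ij = 1 iff same class
Y = np.eye(n_classes)[labels]                            # y_n = prototype of sample n's class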

We compare our methods with several other approaches. The first one labels samples of the original feature space via the KNN classification rule using Euclidean distances and provides a baseline for the accuracy that can be achieved for each classification problem we considered. The second one relies on a popular DML method, namely the Large Margin Nearest Neighbor (LMNN) DML method [22]. We also selected two kernelized approaches for comparison, namely, Information-Theoretic Metric Learning (ITML) [4] and kernelized LMNN (KLMNN) [3].

We evaluated all approaches on eight datasets from the UCI repository, namely, White Wine Quality (Wine), Wall-Following Robot Navigation (Robot), Statlog Vehicle Silhouettes (Vehicle), Molecular Biology Splice-junction Gene Sequences (Molecular), Waveform Database Generator Version 1 (Wave), Ionosphere (Iono), Cardiotocography (Cardio), Pima Indians Diabetes (Pima). For all datasets, each class was equally represented in number of samples. An exception is the original Wine dataset that has eleven classes, eight of which are poorly represented; for this dataset we only chose data from the other three classes.

For our model with a general matrix kernel function, we chose a diagonal matrix kernel, whose diagonal entries were Gaussian kernel functions with different spreads. For the second model, we also chose the scalar kernel to be a Gaussian kernel. For all experiments, the regularization parameters, the output dimension, the Gaussian kernel's spread parameter and the number of nearest neighbors used by the KNN classifier were selected through cross-validation. Training of the models was performed using 10% and 50% of each data set. In the sequel, we provide the experimental results in figures, which display the average classification accuracies over multiple runs; the error bars correspond to confidence intervals of the estimated accuracies.
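
For reference, a diagonal matrix kernel of the kind used for the first model could be sketched as follows; the output dimension and the spread values are illustrative, since in the experiments these quantities were selected by cross-validation.

import numpy as np

# Minimal sketch: a diagonal matrix-valued kernel whose diagonal entries are
# Gaussian kernels with different spreads (one spread per output dimension).
def diag_gauss_matrix_kernel(x, y, sigmas=(0.5, 1.0, 2.0)):
    sq = np.sum((x - y) ** 2)
    return np.diag([np.exp(-sq / (2.0 * s ** 2)) for s in sigmas])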

Figure 1: Experimental results for 10% training data, shown in panels (a) and (b). Average classification performance over multiple runs for each data set and each method is shown. Error bars indicate confidence intervals.

We first discuss the results in the case where we used only 10% of the training data; they are depicted in Figure 1. Our first model with the general kernel function is named "Method 1", and the second model with the parameterized kernel function is called "Method 2". For almost all datasets, we observe that all five DML methods outperform the scheme involving no transformation of the original feature space (i.e., the output space coincides with the original feature space) and labeling samples via Euclidean-distance KNN classification. This remarkable fact underlines the potential benefits of DML methods. Moreover, we observe that the kernelized methods usually outperform LMNN. This observation may partly justify the use of a non-linear mapping for DML. Furthermore, we observe from the figure that both of our methods typically outperform the other four approaches. More specifically, the two proposed models achieve the highest accuracy across all datasets, with the only exception being the Vehicle dataset, where ITML and KLMNN perform slightly better. It is worth mentioning that, for the Pima data set, none of the other three DML methods enhances the performance compared to the baseline KNN classification, while our methods achieve significant improvements.

Figure 2: Experimental results for 50% training data, shown in panels (a) and (b). Average classification performance over multiple runs for each data set and each method is shown. Error bars indicate confidence intervals.

Similar conclusions can be drawn regarding the results generated by using 50% of the training data. These results are depicted in Figure 2. Our methods outperform all four of the other methods for most datasets. An exception occurs for the Molecular dataset, where KLMNN achieves higher performance than ours. In the case of the Robot and Cardio datasets, all methods perform similarly well. The reason might be that, with enough data, all of the models can be trained well enough to achieve close to optimal performance. For the Pima data set, again, our methods achieve much better results than all of the other four methods. It is also important to note that, for our Method 1, despite the relatively simple form of the matrix kernel function we opted for, the resulting model demonstrated very competitive classification accuracy across all datasets. One would likely expect even better performance, if a more sophisticated matrix kernel function were used.

For the sake of visualizing the distribution of the transformed Robot data via our models in two dimensions, we provide Figures 3 and 4. Similar to [8], we compare the mappings produced by our methods to Kernel Principal Component Analysis (KPCA). KPCA's two-dimensional principal subspace was identified based on a small percentage of the available training data, and the test points were projected onto that subspace. The same training samples were also used for training our two models, which used a Gaussian kernel function and a spread parameter value that maximized KNN's classification accuracy.

Figure 3: Visualization of the Robot data set by applying KPCA.
Figure 4: Visualization of the Robot data set by applying our methods: (a) Method 1; (b) Method 2.

From Figure 3 we observe that KPCA's projection may only promote good discrimination between samples drawn from one of the classes versus the rest. On the other hand, in Figures 4(a) and 4(b), all four classes are reasonably well-clustered in the output space obtained by our two methods. This may explain why our methods are able to achieve high classification accuracy, even when only a small percentage of the available data is used for training.

6 Conclusions

In this paper, we proposed two new kernel-based DML methods, which rely on an RKHS of vector-valued functions. Via a learned non-linear mapping, the two methods map data from their original space to an output space, whose dimension can be directly controlled. Subsequent distance measurements are performed in the output space via a Mahalanobis metric. The first proposed model uses a general matrix kernel function and, thus, provides significant flexibility in modeling the input-to-output space mapping. On the other hand, the second proposed method uses a more restricted matrix kernel function, but has the advantage of implicitly determining the Mahalanobis metric. Furthermore, its matrix kernel function can be learned from the data. Unlike previous kernel-based approaches, the relevant mappings are explicit for both of our methods. Combined with the fact that the output space dimensionality can be directly specified, the models can also be used for dimensionality reduction purposes, such as visualizing the data in 2 or 3 dimensions. Experimental results on eight UCI benchmark data sets show that both of the proposed methods can achieve higher performance in comparison to other traditional and kernel-based DML techniques.

Acknowledgements

C. Li acknowledges partial support from National Science Foundation (NSF) grant No. 0806931. Also, M. Georgiopoulos acknowledges partial support from NSF grants No. 0525429, No. 0963146, No. 1200566 and No. 1161228. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

References

  • [1] Mikhail Bilenko, Sugato Basu, and Raymond J. Mooney. Integrating constraints and metric learning in semi-supervised clustering. In International Conference on Machine Learning, 2004. Available from: http://doi.acm.org/10.1145/1015330.1015360.
  • [2] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • [3] Ratthachat Chatpatanasiri, Teesid Korsrilabutr, Pasakorn Tangchanachaianan, and Boonserm Kijsirikul. A new kernelization framework for mahalanobis distance learning algorithms. Neurocomputing, 73:1570–1579, 2010. Available from: http://dx.doi.org/10.1016/j.neucom.2009.11.037.
  • [4] Jason V. Davis, Brian Kulis, Prateek Jain, and Inderjit S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th international conference on Machine learning, ICML ’07, pages 209–216, 2007. Available from: http://doi.acm.org/10.1145/1273496.1273523.
  • [5] Huyen Do, Alexandros Kalousis, Jun Wang, and Adam Woznica. A metric learning perspective of svm: On the relation of LMNN and SVM. In JMLR W&CP 5: Proceedings of Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 5, pages 308–317, 2012. Available from: http://jmlr.csail.mit.edu/proceedings/papers/v22/do12/do12.pdf.
  • [6] Jacob Goldberger, Sam Roweis, Geoff Hinton, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Neural Information Processing Systems, 2004. Available from: http://books.nips.cc/papers/files/nips17/NIPS2004_0121.pdf.
  • [7] Kaizhu Huang, Yiming Ying, and Colin Campbell. Gsml: A unified framework for sparse metric learning. In Data Mining, 2009. ICDM ’09. Ninth IEEE International Conference on, pages 189 –198, Dec. 2009. Available from: http://dx.doi.org/10.1109/ICDM.2009.22.
  • [8] Prateek Jain and Brian Kulis. Inductive regularized learning of kernel functions. In Neural Information Processing Systems, 2010. Available from: http://books.nips.cc/papers/files/nips23/NIPS2010_0603.pdf.
  • [9] Prateek Jain, Brian Kulis, Jason V. Davis, and Inderjit S. Dhillon. Metric and kernel learning using a linear transformation. Journal of Machine Learning Research, 13:519–547, 2012.
  • [10] Brian Kulis, Suvrit Sra, and Inderjit S. Dhillon. Convex perturbations for scalable semidefinite programming. In JMLR W&CP 5: Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 5, pages 296–303, 2009. Available from: http://jmlr.csail.mit.edu/proceedings/papers/v5/kulis09a/kulis09a.pdf.
  • [11] Peter Lancaster. Explicit solutions of linear matrix equations. SIAM Review, 12:544–566, 1970. Available from: http://www.jstor.org/stable/2028490.
  • [12] Zhengdong Lu, Prateek Jain, and Inderjit S. Dhillon. Geometry-aware metric learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 673–680, 2009. Available from: http://doi.acm.org/10.1145/1553374.1553461.
  • [13] R. De Maesschalck, D. Jouan-Rimbaud, and D.L. Massart. The mahalanobis distance. Chemometrics and Intelligent Laboratory Systems, 50(1):1 – 18, 2000. Available from: http://www.sciencedirect.com/science/article/pii/S0169743999000477, doi:10.1016/S0169-7439(99)00047-7.
  • [14] Charles A. Micchelli and Massimiliano Pontil. On learning vector-valued functions. Neural Computation, 17:177–204, 2005. Available from: http://dx.doi.org/10.1162/0899766052530802.
  • [15] Nam Nguyen and Yunsong Guo. Metric learning: A support vector approach. In Proceedings of the European conference on Machine Learning and Knowledge Discovery in Databases - Part II, ECML PKDD ’08, pages 125–136, Berlin, Heidelberg, 2008. Springer-Verlag. Available from: http://dx.doi.org/10.1007/978-3-540-87481-2_9, doi:10.1007/978-3-540-87481-2_9.
  • [16] Shibin Parameswaran and Kilian Q. Weinberger. Large margin multi-task metric learning. In Neural Information Processing Systems, 2010. Available from: http://books.nips.cc/papers/files/nips23/NIPS2010_0510.pdf.
  • [17] Matthew Schultz and Thorsten Joachims. Learning a distance metric from relative comparisons. In Neural Information Processing Systems, 2004. Available from: http://books.nips.cc/papers/files/nips16/NIPS2003_AA06.pdf.
  • [18] Lorenzo Torresani and Kuang-chih Lee. Large margin component analysis. In Neural Information Processing Systems, 2007. Available from: http://books.nips.cc/papers/files/nips19/NIPS2006_0791.pdf.
  • [19] Ivor W. Tsang and James T. Kwok. Distance metric learning with kernels. In Proceedings of International Conference on Artificial Neural Networks (ICANN), pages 126–129, 2003.
  • [20] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • [21] Jun Wang, Huyen Do, Adam Woznica, and Alexandros Kalousis. Metric learning with multiple kernels. In Neural Information Processing Systems, 2011. Available from: http://books.nips.cc/papers/files/nips24/NIPS2011_0683.pdf.
  • [22] Kilian Q. Weinberger and Laurence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009. Available from: http://dl.acm.org/citation.cfm?id=1577069.1577078.
  • [23] Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning, with application to clustering with side-information. In Neural Information Processing Systems, 2002. Available from: http://books.nips.cc/papers/files/nips15/AA03.pdf.
  • [24] Yiming Ying and Peng Li. Distance metric learning with eigenvalue optimization. Journal of Machine Learning Research, 13:1–26, 2012. Available from: http://dl.acm.org/citation.cfm?id=2188385.2188386.
  • [25] Yu Zhang and Dit-Yan Yeung. Transfer metric learning by learning task relationships. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’10, pages 1199–1208, 2010. Available from: http://doi.acm.org/10.1145/1835804.1835954.