Online Multiple Kernel Learning for Structured Prediction


André F. T. Martins   Noah A. Smith   Eric P. Xing
Pedro M. Q. Aguiar   Mário A. T. Figueiredo
{afm,nasmith,epxing}@cs.cmu.edu   aguiar@isr.ist.utl.pt   mtf@lx.it.pt
School of Computer Science
Carnegie Mellon University, Pittsburgh, PA, USA
Instituto de Telecomunicações
Instituto Superior Técnico, Lisboa, Portugal
Instituto de Sistemas e Robótica
Instituto Superior Técnico, Lisboa, Portugal
Abstract

Despite recent progress toward efficient multiple kernel learning (MKL), the structured-output case remains an open research front. Current approaches involve repeatedly solving a batch learning problem, which makes them inadequate for large-scale scenarios. We propose a new family of online proximal algorithms for MKL (as well as for the group-lasso and variants thereof) that overcomes this drawback. We establish regret, convergence, and generalization bounds for the proposed method. Experiments on handwriting recognition and dependency parsing attest to the effectiveness of the approach.

1 Introduction

Structured prediction (Lafferty et al., 2001; Taskar et al., 2003; Tsochantaridis et al., 2004) deals with problems in which the output variables are strongly interdependent, often with sequential, graphical, or combinatorial structure. Despite recent advances toward a unified formalism, obtaining a good predictor often requires significant effort in designing kernels (i.e., features and similarity measures) and tuning hyperparameters. The slowness of training structured predictors at large scale makes this an expensive process.

The need for careful kernel engineering can be sidestepped using the kernel learning approach initiated in Bach et al. (2004); Lanckriet et al. (2004), where a combination of multiple kernels is learned from the data. While multi-class and scalable multiple kernel learning (MKL) algorithms have been proposed (Sonnenburg et al., 2006; Zien and Ong, 2007; Rakotomamonjy et al., 2008; Chapelle and Rakotomamonjy, 2008; Xu et al., 2009; Suzuki and Tomioka, 2009), none is well suited for large-scale structured prediction, for the following reason: all involve an inner loop in which a standard learning problem (e.g., an SVM) is repeatedly solved; in large-scale structured prediction, it is often prohibitive to tackle this problem in its batch form, and one typically resorts to online methods (Bottou, 1991; Collins, 2002; Ratliff et al., 2006; Collins et al., 2008). These methods are fast in achieving low generalization error, but converge slowly to the training objective, and are thus unattractive for repeated use in the inner loop.

In this paper, we overcome this difficulty by proposing a stand-alone online MKL algorithm. The algorithm is based on a kernelization of the recent forward-backward splitting scheme Fobos (Duchi and Singer, 2009) and alternates between subgradient and proximal steps. In passing, we improve the Fobos regret bound and show how to efficiently compute the proximal projections associated with the squared $\ell_{2,1}$-norm, despite the fact that the underlying optimization problem is not separable.

After reviewing structured prediction and MKL (§2), we present a wide class of online proximal algorithms (§3) that extend Fobos by handling composite regularizers with multiple proximal steps. These algorithms have convergence guarantees and are applicable to MKL, the group-lasso (Yuan and Lin, 2006), and other structured-sparsity formulations, such as the hierarchical lasso/MKL (Bach, 2008b; Zhao et al., 2008), group-lasso with overlapping groups (Jenatton et al., 2009), the sparse group-lasso (Friedman et al., 2010), and elastic net MKL (Tomioka and Suzuki, 2010). We apply our MKL algorithm to structured prediction (§4) on two testbeds: sequence labeling for handwritten text recognition, and natural language dependency parsing. We show the potential of our approach by learning combinations of kernels from tens of thousands of training instances, with encouraging results in terms of runtime, accuracy, and identifiability.

2 Structured Prediction, Group Sparsity, and Multiple Kernel Learning

Let $\mathcal{X}$ and $\mathcal{Y}$ be the input and output sets, respectively. In structured prediction, to each input $x \in \mathcal{X}$ corresponds a (structured and exponentially large) set $\mathcal{Y}(x) \subseteq \mathcal{Y}$ of legal outputs; e.g., in sequence labeling, each $x$ is an observed sequence and each $y \in \mathcal{Y}(x)$ is a corresponding sequence of labels; in parsing, each $x$ is a string, and each $y \in \mathcal{Y}(x)$ is a parse tree that spans that string.

Let $\mathcal{U} \triangleq \{(x,y) \mid x \in \mathcal{X},\ y \in \mathcal{Y}(x)\}$ be the set of all legal input-output pairs. Given a labeled dataset $\mathcal{D} \triangleq \{(x_1,y_1),\ldots,(x_m,y_m)\} \subseteq \mathcal{U}$, we want to learn a predictor of the form

$\hat{y}(x) \triangleq \arg\max_{y \in \mathcal{Y}(x)} f(x,y),$   (1)

where $f : \mathcal{U} \to \mathbb{R}$ is a compatibility function. Problem (1) is called inference (or decoding) and involves combinatorial optimization (e.g., dynamic programming). In this paper, we use linear functions, $f(x,y) = \theta^\top \phi(x,y)$, where $\theta$ is a parameter vector and $\phi(x,y)$ a feature vector. The structure of the output is usually taken care of by assuming a decomposition of the form $\phi(x,y) = \sum_{r \in \mathcal{R}(x,y)} \phi_r(x,y_r)$, where $\mathcal{R}(x,y)$ is a set of parts and the $y_r$ are partial output assignments (see (Taskar et al., 2003) for details). Instead of explicit features, one may use a positive definite kernel $K : \mathcal{U} \times \mathcal{U} \to \mathbb{R}$, and let $f$ belong to the induced RKHS $\mathcal{H}_K$. Given a convex loss function $L$, the learning problem is usually formulated as a minimization of the regularized empirical risk:

$\min_{f \in \mathcal{H}_K}\ \frac{\lambda}{2}\|f\|_{\mathcal{H}_K}^2 + \frac{1}{m}\sum_{i=1}^m L(f; x_i, y_i),$   (2)

where $\lambda \geq 0$ is a regularization parameter and $\|\cdot\|_{\mathcal{H}_K}$ is the norm in $\mathcal{H}_K$. In structured prediction, the logistic loss (in CRFs) and the structured hinge loss (in structured SVMs) are common choices:

$L(f; x, y) = \log \sum_{y' \in \mathcal{Y}(x)} \exp\bigl(f(x,y') - f(x,y)\bigr),$   (3)
$L(f; x, y) = \max_{y' \in \mathcal{Y}(x)}\ f(x,y') + \ell(y',y) - f(x,y).$   (4)

In (4), $\ell(y',y) \geq 0$ is a user-given cost function. The solution of (2) can be expressed as a kernel expansion (a structured version of the representer theorem (Hofmann et al., 2008, Corollary 13)).
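For intuition on how these losses are used in the online algorithms of §3, the sketch below evaluates the structured hinge loss (4) and a subgradient of it over an explicit candidate list. This is a toy stand-in: real structured problems use loss-augmented dynamic programming instead of an enumerated set, and the helper names (phi, cost, candidates) are ours, not part of the paper.

import numpy as np

def structured_hinge(theta, phi, x, y, candidates, cost):
    """Structured hinge loss (4) and a subgradient w.r.t. theta.

    phi(x, y)  -> feature vector; cost(y_pred, y_true) -> nonnegative cost.
    candidates -> explicit list standing in for Y(x) (toy setting only).
    """
    scores = [theta @ phi(x, yp) + cost(yp, y) for yp in candidates]
    best = int(np.argmax(scores))
    y_hat = candidates[best]                    # loss-augmented decoding
    loss = scores[best] - theta @ phi(x, y)
    subgrad = phi(x, y_hat) - phi(x, y)         # a subgradient of the loss
    return loss, subgrad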

In the kernel learning framework (Bach et al., 2004; Lanckriet et al., 2004), the kernel is expressed as a convex combination of elements of a finite set $\{K_1,\ldots,K_p\}$, the coefficients of which are learned from the data. That is, $K = \sum_{j=1}^p \beta_j K_j$, with $\beta = (\beta_1,\ldots,\beta_p)$ in the simplex

$\Delta^p \triangleq \Bigl\{\beta \in \mathbb{R}^p \ \Big|\ \beta_j \geq 0,\ \textstyle\sum_{j=1}^p \beta_j = 1\Bigr\}.$   (5)

The so-called MKL problem is the minimization of (2) with respect to both $f$ and $\beta \in \Delta^p$. Letting $\mathcal{H}_1,\ldots,\mathcal{H}_p$ be the RKHSs induced by $K_1,\ldots,K_p$, and $\mathcal{H} \triangleq \mathcal{H}_1 \oplus \cdots \oplus \mathcal{H}_p$ their direct sum, this optimization can be written (as shown in (Bach et al., 2004; Rakotomamonjy et al., 2008)) as:

$\min_{f_1 \in \mathcal{H}_1,\ldots,f_p \in \mathcal{H}_p}\ \frac{\lambda}{2}\Bigl(\sum_{j=1}^p \|f_j\|_{\mathcal{H}_j}\Bigr)^2 + \frac{1}{m}\sum_{i=1}^m L\Bigl(\sum_{j=1}^p f_j;\ x_i, y_i\Bigr),$   (6)

where the optimal kernel coefficients are $\beta_j = \|f_j\|_{\mathcal{H}_j} \big/ \sum_{k=1}^p \|f_k\|_{\mathcal{H}_k}$. For explicit features, the parameter vector is split into $p$ groups, $\theta = (\theta_1,\ldots,\theta_p)$, and the minimization in (6) becomes

$\min_{\theta}\ \frac{\lambda}{2}\|\theta\|_{2,1}^2 + \frac{1}{m}\sum_{i=1}^m L(\theta; x_i, y_i),$   (7)

where $\|\theta\|_{2,1} \triangleq \sum_{j=1}^p \|\theta_j\|_2$ is a sum of $\ell_2$-norms, called the mixed $\ell_{2,1}$-norm. The group-lasso criterion (Yuan and Lin, 2006) is similar to (7), without the square in the regularization term, revealing a close relationship with MKL (Bach, 2008a); in fact, the two problems are equivalent up to a change of $\lambda$. The $\ell_{2,1}$-norm regularizer favors group sparsity: groups that are found irrelevant tend to be entirely discarded.
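As a concrete illustration of the quantities in (6)-(7), the minimal Python sketch below computes the mixed $\ell_{2,1}$-norm of a grouped parameter vector and the induced kernel weights; the group structure and numbers are made up for illustration only.

import numpy as np

# Hypothetical parameter vector split into p = 3 groups (one per kernel).
theta = [np.array([0.5, -1.0]),      # group 1
         np.array([0.0, 0.0, 0.0]),  # group 2 (all zero: this kernel is discarded)
         np.array([2.0])]            # group 3

group_norms = np.array([np.linalg.norm(t) for t in theta])
l21_norm = group_norms.sum()            # mixed l2,1-norm: sum of group l2-norms
beta = group_norms / group_norms.sum()  # induced kernel weights (they sum to 1)

print(l21_norm, beta)   # beta for group 2 is 0: the corresponding kernel is switched off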

Early approaches to MKL (Lanckriet et al., 2004; Bach et al., 2004) considered the dual of (6) in QCQP or SOCP form and were thus limited to small-scale problems. Subsequent work focused on scalability: in (Sonnenburg et al., 2006), a semi-infinite LP formulation and a cutting plane algorithm are proposed; SimpleMKL (Rakotomamonjy et al., 2008) alternates between learning an SVM and a gradient-based (or Newton (Chapelle and Rakotomamonjy, 2008)) update of the kernel weights; other techniques include the extended level method (Xu et al., 2009) and SpicyMKL (Suzuki and Tomioka, 2009), based on an augmented Lagrangian method. These are all batch algorithms, requiring the repeated solution of problems of the form (2); even if one can take advantage of warm starts, the convergence proofs of these methods, when available, rely on the exactness (or prescribed dual accuracy) of these solutions.

In contrast, we tackle (6) and (7) in primal form. Rather than repeatedly calling off-the-shelf solvers for (2), we propose a stand-alone online algorithm with runtime comparable to that of solving a single instance of (2) by online methods (the fastest in large-scale settings (Shalev-Shwartz et al., 2007; Bottou, 1991)). This paradigm shift paves the way for extending MKL to structured prediction, a large territory yet to be explored.

3 Online Proximal Algorithms

We frame our online MKL algorithm in a wider class of online proximal algorithms. The theory of proximity operators (Moreau, 1962), which is widely known in optimization and has recently gained prominence in the signal processing community (Combettes and Wajs, 2006; Wright et al., 2009), provides tools for analyzing these algorithms and generalizes many known results, sometimes with remarkable simplicity. We thus start by summarizing its important concepts in §3.1, together with a quick review of convex analysis.

3.1 Convex Functions, Subdifferentials, Proximity Operators, and Moreau Projections

Throughout, we let $\varphi : \mathbb{R}^d \to \bar{\mathbb{R}}$ (where $\bar{\mathbb{R}} \triangleq \mathbb{R} \cup \{+\infty\}$) be a convex, lower semicontinuous (lsc) function (its epigraph is closed in $\mathbb{R}^d \times \mathbb{R}$) that is proper ($\varphi \not\equiv +\infty$). The subdifferential of $\varphi$ at $x_0$ is the set

$\partial\varphi(x_0) \triangleq \{g \in \mathbb{R}^d \mid \forall x \in \mathbb{R}^d,\ \varphi(x) - \varphi(x_0) \geq g^\top (x - x_0)\},$

the elements of which are the subgradients. We say that $\varphi$ is $G$-Lipschitz in $S \subseteq \mathbb{R}^d$ if $|\varphi(x) - \varphi(x')| \leq G\|x - x'\|$ for all $x, x' \in S$. We say that $\varphi$ is $\sigma$-strongly convex in $S$ if, for all $x, x' \in S$ and $g \in \partial\varphi(x)$,

$\varphi(x') \geq \varphi(x) + g^\top(x' - x) + \tfrac{\sigma}{2}\|x' - x\|^2.$

The Fenchel conjugate of $\varphi$ is $\varphi^\star : \mathbb{R}^d \to \bar{\mathbb{R}}$, $\varphi^\star(y) \triangleq \sup_{x}\ y^\top x - \varphi(x)$. Let

$M_\varphi(y) \triangleq \min_{x}\ \tfrac{1}{2}\|x - y\|^2 + \varphi(x) \qquad \text{and} \qquad \mathrm{prox}_\varphi(y) \triangleq \arg\min_{x}\ \tfrac{1}{2}\|x - y\|^2 + \varphi(x);$

the function $M_\varphi : \mathbb{R}^d \to \mathbb{R}$ is called the Moreau envelope of $\varphi$, and the map $\mathrm{prox}_\varphi : \mathbb{R}^d \to \mathbb{R}^d$ is the proximity operator of $\varphi$ (Combettes and Wajs, 2006; Moreau, 1962). Proximity operators generalize Euclidean projectors: consider the case $\varphi = \iota_C$, where $C \subseteq \mathbb{R}^d$ is a convex set and $\iota_C$ denotes its indicator (i.e., $\iota_C(x) = 0$ if $x \in C$ and $+\infty$ otherwise). Then $\mathrm{prox}_{\iota_C} = \mathrm{proj}_C$ is the Euclidean projector onto $C$, and $M_{\iota_C}(y) = \tfrac{1}{2}\|y - \mathrm{proj}_C(y)\|^2$ is half the squared norm of the projection residual. Two other important examples of proximity operators follow:

  • if $\varphi(x) = \frac{\lambda}{2}\|x\|_2^2$, then $\mathrm{prox}_\varphi(y) = y/(1+\lambda)$;

  • if $\varphi(x) = \lambda\|x\|_1$, then $\mathrm{prox}_\varphi(y)$ is the soft-threshold function (Wright et al., 2009), defined componentwise as $[\mathrm{prox}_\varphi(y)]_k = \mathrm{sign}(y_k)\,\max\{0,\ |y_k| - \lambda\}$.

If $\varphi$ is (group-)separable, i.e., $\varphi(x) = \sum_k \varphi_k(x_k)$, where the $x_k$ are the (groups of) components of $x$, then its proximity operator inherits the same (group-)separability: $[\mathrm{prox}_\varphi(x)]_k = \mathrm{prox}_{\varphi_k}(x_k)$ (Wright et al., 2009). For example, the proximity operator of the mixed $\ell_{2,1}$-norm, which is group-separable, has this form. The following proposition, which we prove in Appendix A, extends this result by showing how to compute proximity operators of functions (possibly not separable) that depend only on the $\ell_2$-norms of groups of components; e.g., the proximity operator of the squared $\ell_{2,1}$-norm reduces to that of the squared $\ell_1$-norm.
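To make the two examples and the separability property concrete, here is a small Python sketch of the corresponding proximity operators; the function names are ours, not from any particular library.

from functools import partial
import numpy as np

def prox_sq_l2(y, lam):
    """prox of (lam/2)*||x||_2^2: a simple rescaling."""
    return y / (1.0 + lam)

def prox_l1(y, lam):
    """prox of lam*||x||_1: elementwise soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_group_separable(groups, prox_fns):
    """If phi(x) = sum_k phi_k(x_k), apply each single-argument prox_k to its own group."""
    return [prox_k(x_k) for prox_k, x_k in zip(prox_fns, groups)]

# usage: prox_group_separable(groups, [partial(prox_l1, lam=0.1)] * len(groups))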

Proposition 1

Let $\varphi : \mathbb{R}^{d_1} \times \cdots \times \mathbb{R}^{d_p} \to \bar{\mathbb{R}}$ be of the form $\varphi(x_1,\ldots,x_p) = \psi(\|x_1\|,\ldots,\|x_p\|)$ for some convex $\psi : \mathbb{R}^p \to \bar{\mathbb{R}}$. Then, $M_\varphi(x_1,\ldots,x_p) = M_\psi(\|x_1\|,\ldots,\|x_p\|)$ and $[\mathrm{prox}_\varphi(x_1,\ldots,x_p)]_j = \frac{x_j}{\|x_j\|}\,[\mathrm{prox}_\psi(\|x_1\|,\ldots,\|x_p\|)]_j$ (with the convention $x_j/\|x_j\| = 0$ when $x_j = 0$).
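A minimal sketch of this reduction, assuming the statement of Prop. 1 as reconstructed above: apply the proximity operator of $\psi$ to the vector of group norms, then rescale each group radially. The helper names are ours.

import numpy as np

def prox_via_norms(groups, prox_psi):
    """Prox of phi(x_1,...,x_p) = psi(||x_1||,...,||x_p||) (cf. Prop. 1)."""
    norms = np.array([np.linalg.norm(g) for g in groups])
    shrunk = prox_psi(norms)   # prox of psi acting on the vector of group norms
    out = []
    for g, n, s in zip(groups, norms, shrunk):
        out.append(np.zeros_like(g) if n == 0.0 else (s / n) * g)
    return out

# Example: phi = lam * ||.||_{2,1}, i.e. psi is lam times the l1-norm of the norm vector.
lam = 0.3
groups = [np.array([3.0, 4.0]), np.array([0.1])]
print(prox_via_norms(groups, lambda b: np.maximum(b - lam, 0.0)))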

Finally, we recall the Moreau decomposition, which relates the proximity operators of Fenchel conjugate functions (Combettes and Wajs, 2006), and present a corollary (proved in Appendix B) that is the key to our regret bound in §3.3.

Proposition 2 (Moreau (1962))

For any convex, lsc, proper function $\varphi$,

$\mathrm{prox}_\varphi(x) + \mathrm{prox}_{\varphi^\star}(x) = x \qquad \text{and} \qquad M_\varphi(x) + M_{\varphi^\star}(x) = \tfrac{1}{2}\|x\|^2.$   (8)
Corollary 3

Let $\varphi$ be as in Prop. 2, and let $\bar{x} \triangleq \mathrm{prox}_\varphi(x)$. Then, any $y$ satisfies

(9)

Although the Fenchel conjugate does not appear explicitly in (9), it plays a crucial role in the proof of Corollary 3.

3.2 A General Online Proximal Algorithm for Composite Regularizers

The general algorithmic structure that we propose and analyze in this paper, presented as Alg. 1, deals, in an online fashion (for simplicity, we focus on the pure online setting, in which each parameter update uses a single observation; analogous algorithms may be derived for the batch and mini-batch cases), with problems of the form

$\min_{\theta \in \Theta}\ \lambda R(\theta) + \frac{1}{m}\sum_{i=1}^m L(\theta; x_i, y_i),$   (10)

where $\Theta \subseteq \mathbb{R}^d$ is convex (we are particularly interested in the case where $\Theta$ is a "vacuous" constraint whose goal is to confine each iterate to a region containing the optimum, by virtue of the projection step in line 9; the analysis in §3.3 will make this clearer, and the same trick is used in Pegasos (Shalev-Shwartz et al., 2007)) and the regularizer has a composite form $R = \sum_{j=1}^J R_j$. Like stochastic gradient descent (SGD (Bottou, 1991)), Alg. 1 is suitable for problems with large $m$; it also performs (sub-)gradient steps at each round (line 4), but only w.r.t. the loss function $L$. Obtaining a subgradient typically involves inference using the current model: loss-augmented inference for the structured hinge loss (4), or marginal inference for the logistic loss (3). Our algorithm differs from SGD by the inclusion of proximal steps w.r.t. each term $R_j$ (line 7). As noted in (Duchi and Singer, 2009; Langford et al., 2009), this strategy is more effective than standard SGD for sparsity-inducing regularizers, whose usual non-differentiability at zero causes oscillation and prevents SGD from returning sparse solutions.

When $J = 1$, Alg. 1 reduces to Fobos (Duchi and Singer, 2009), which we kernelize and apply to MKL in §3.4. The case $J > 1$ has applications in variants of MKL and group-lasso with composite regularizers (Tomioka and Suzuki, 2010; Friedman et al., 2010; Bach, 2008b; Zhao et al., 2008). In those cases, the proximity operators of the individual terms $R_j$ are more easily computed than that of their sum $R$, making Alg. 1 more suitable than Fobos. We next present a few particular instances.

1:  input: dataset $\mathcal{D}$, regularization parameter $\lambda$, number of rounds $T$, learning rate sequence $(\eta_t)_{t=1,\ldots,T}$
2:  initialize $\theta_1 = 0$
3:  for $t = 1$ to $T$ do
4:     take a training pair $(x_t, y_t)$ and obtain a subgradient $g \in \partial L(\theta_t; x_t, y_t)$
5:     $\tilde{\theta}_t = \theta_t - \eta_t g$   (gradient step)
6:     for $j = 1$ to $J$ do
7:        $\tilde{\theta}_t = \mathrm{prox}_{\eta_t \lambda R_j}(\tilde{\theta}_t)$   (proximal step)
8:     end for
9:     $\theta_{t+1} = \mathrm{proj}_{\Theta}(\tilde{\theta}_t)$   (projection step)
10:  end for
11:  output: the last model $\theta_{T+1}$ or the averaged model $\bar{\theta} = \frac{1}{T}\sum_{t=1}^{T}\theta_t$
Algorithm 1 Online Proximal Algorithm
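The following Python sketch mirrors the structure of Alg. 1 for explicit features. The callables loss_subgrad, proxes, project, and eta, as well as the way the data is cycled, are our own placeholders (anything not named in the paper is an assumption of this sketch).

import numpy as np

def online_proximal(pairs, dim, loss_subgrad, proxes, project, lam, T, eta):
    """Sketch of Alg. 1: subgradient step, J proximal steps, projection step.

    loss_subgrad(theta, x, y) -> subgradient of L(theta; x, y)
    proxes[j](theta, tau)     -> prox of tau * R_j at theta
    project(theta)            -> Euclidean projection onto Theta
    eta(t)                    -> learning rate at round t (1-based)
    """
    theta = np.zeros(dim)
    avg = np.zeros(dim)
    for t in range(1, T + 1):
        x, y = pairs[(t - 1) % len(pairs)]       # take a training pair
        g = loss_subgrad(theta, x, y)
        theta = theta - eta(t) * g               # gradient step
        for prox in proxes:                      # one proximal step per R_j
            theta = prox(theta, eta(t) * lam)
        theta = project(theta)                   # projection step
        avg += theta
    return theta, avg / T                        # last and averaged models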

Projected subgradient with groups.

Let $J = 1$ and $R_1 = \iota_C$ be the indicator of a convex set $C$. Then (see §3.1), each proximal step is the Euclidean projection onto $C$, and Alg. 1 becomes the online projected subgradient algorithm of (Zinkevich, 2003). Letting $C = \{\theta : \|\theta\|_{2,1} \leq \gamma\}$ yields a problem equivalent to group-lasso and MKL (7). Using Prop. 1, each proximal step reduces to a projection onto an $\ell_1$-ball whose dimension is the number of groups (see a fast algorithm in (Duchi et al., 2008)).
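For reference, here is a standard sort-based Euclidean projection onto the $\ell_1$-ball (in the spirit of Duchi et al. (2008), though this $O(p \log p)$ variant is not their linear-time algorithm); combined with Prop. 1, applying it to the vector of group norms yields the proximal step of this example.

import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}, radius > 0."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - radius) / (rho + 1.0)           # common threshold
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)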

Truncated subgradient with groups.

Let $J = 1$, $\Theta = \mathbb{R}^d$, and $R_1(\theta) = \|\theta\|_{2,1}$, so that (10) becomes the usual formulation of the group-lasso, for a general loss $L$. Then, Alg. 1 becomes a group version of truncated gradient descent (Langford et al., 2009), studied in (Duchi and Singer, 2009) for multi-task learning. Similar batch algorithms have also been proposed (Wright et al., 2009). The reduction from $\ell_{2,1}$ to $\ell_1$ can again be made via Prop. 1, and each proximal step becomes a simple groupwise soft-thresholding operation (as shown in §3.1).

Proximal subgradient for the squared mixed $\ell_{2,1}$-norm.

With $R_1(\theta) = \frac{1}{2}\|\theta\|_{2,1}^2$, we have the MKL problem (7). Prop. 1 allows reducing each proximal step w.r.t. the squared $\ell_{2,1}$-norm to one w.r.t. the squared $\ell_1$-norm; however, unlike in the previous example, the squared $\ell_1$-norm is not separable. This apparent difficulty has led some authors (e.g., Suzuki and Tomioka (2009)) to remove the square from the regularizer, which yields the previous example. However, despite the non-separability of the squared $\ell_1$-norm, the proximal steps can still be computed efficiently: see Alg. 2. This algorithm requires sorting the group weights, which has $O(p \log p)$ cost; we show its correctness in Appendix F. Non-MKL applications of the squared $\ell_{2,1}$-norm are found in (Kowalski and Torrésani, 2009; Zhou et al., 2010).

1:  input: vector $b \in \mathbb{R}_{+}^{p}$ (in MKL, $b_j = \|\theta_j\|$ is the $j$-th group norm) and parameter $\lambda$
2:  sort the entries of $b$ into $b_{(1)} \geq \cdots \geq b_{(p)}$
3:  find $\rho = \max\bigl\{k \in \{1,\ldots,p\} \;\big|\; b_{(k)} - \tfrac{\lambda}{1+k\lambda}\sum_{r=1}^{k} b_{(r)} > 0\bigr\}$
4:  output: $\bar{b}$, where $\bar{b}_j = \max\bigl\{0,\; b_j - \tfrac{\lambda}{1+\rho\lambda}\sum_{r=1}^{\rho} b_{(r)}\bigr\}$
Algorithm 2 Moreau Projection for the Squared $\ell_1$-Norm
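A Python sketch of this Moreau projection. The thresholding constants follow our own derivation of $\mathrm{prox}_{(\lambda/2)\|\cdot\|_1^2}$ for a nonnegative input (as used above for the group norms), rather than a verbatim transcription of the paper's pseudocode.

import numpy as np

def prox_sq_l1(b, lam):
    """argmin_x 0.5*||x - b||^2 + 0.5*lam*(sum_k x_k)^2 for b >= 0 (cf. Alg. 2)."""
    b = np.asarray(b, dtype=float)
    s = np.sort(b)[::-1]                               # group norms, descending
    css = np.cumsum(s)
    k = np.arange(1, len(b) + 1)
    ok = s - lam / (1.0 + lam * k) * css > 0
    if not ok.any():
        return np.zeros_like(b)
    rho = np.nonzero(ok)[0][-1]                        # largest feasible index
    tau = lam / (1.0 + lam * (rho + 1)) * css[rho]     # common threshold
    return np.maximum(b - tau, 0.0)

# e.g. prox_sq_l1([2.0, 1.0], lam=1.0) -> [1.0, 0.0]: the smaller group is zeroed out.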

Other variants of group-lasso and MKL.

In the hierarchical lasso and group-lasso with overlaps (Bach, 2008b; Zhao et al., 2008; Jenatton et al., 2009), each feature may appear in more than one group. Alg. 1 handles these problems by devoting one proximal step to each group. The sparse group-lasso (Friedman et al., 2010) simultaneously promotes group sparsity and sparsity within each group, by combining a mixed $\ell_{2,1}$ term with an $\ell_1$ term; Alg. 1 can handle this regularizer with two proximal steps, both involving simple soft-thresholding: one at the group level, and another within each group (see the sketch after this paragraph). In non-sparse MKL ((Kloft et al., 2010), §4.4), the regularizer is a squared mixed $\ell_{2,q}$-norm with $q > 1$. Invoking Prop. 1 and separability, the resulting proximal step amounts to solving $p$ scalar equations, one per group, and remains valid for values of $q$ not covered by the method described in (Kloft et al., 2010).
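A minimal sketch of the sparse group-lasso case: two successive proximal steps, one elementwise and one groupwise. The names and the particular ordering of the two steps are our illustration, not necessarily the exact schedule used in the paper.

import numpy as np

def prox_sparse_group_lasso(groups, tau_elem, tau_group):
    """Two proximal steps: elementwise soft-threshold, then groupwise shrinkage."""
    out = []
    for g in groups:
        g = np.sign(g) * np.maximum(np.abs(g) - tau_elem, 0.0)   # within-group sparsity
        norm = np.linalg.norm(g)
        scale = max(0.0, 1.0 - tau_group / norm) if norm > 0 else 0.0
        out.append(scale * g)                                    # group sparsity
    return out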

3.3 Regret, Convergence, and Generalization Bounds

We next show that, for a convex loss and under standard assumptions, Alg. 1 converges to $\epsilon$ precision, with high confidence, in $O(1/\epsilon^2)$ iterations. If $L$ or $R$ is strongly convex, this bound improves to $\tilde{O}(1/\epsilon)$, where $\tilde{O}$ hides logarithmic terms. Our proofs combine tools of online convex programming (Zinkevich, 2003; Hazan et al., 2007) and classical results about proximity operators (Moreau, 1962; Combettes and Wajs, 2006). The key is the following lemma (which we prove in Appendix C).

Lemma 4

Assume that $\Theta \subseteq \mathbb{R}^d$ is convex, the loss $L(\cdot\,; x, y)$ is convex and $G$-Lipschitz on $\Theta$, and that the regularizer $R = \sum_{j=1}^J R_j$ satisfies the following conditions: (i) each $R_j$ is convex; (ii) each proximity operator does not increase the value of any previously applied term $R_{j'}$; (iii) projecting the argument onto $\Theta$ does not increase $R$. Then, for any $\bar{\theta} \in \Theta$, at each round $t$ of Alg. 1,

(11)

If, in addition, $L(\cdot\,; x, y)$ is $\sigma$-strongly convex, then the bound in (11) can be strengthened to

(12)

A related, but less tight, bound for the case $J = 1$ was derived in Duchi and Singer (2009); instead of our term in (11), the bound of (Duchi and Singer, 2009) contains a larger one (as can be seen from their Eq. 9 with the appropriate substitutions). When $R$ is the $\ell_1$-norm, Fobos becomes the truncated gradient algorithm of Langford et al. (2009) and our bound matches the one derived therein, closing the gap between (Duchi and Singer, 2009) and (Langford et al., 2009). The classical result in Prop. 2, relating Moreau projections and Fenchel duality, is the crux of our bound, via Corollary 3. Finally, note that conditions (i)-(iii) are not restrictive: they hold whenever the proximity operators are shrinkage functions (e.g., whenever each $R_j$ is a nonnegative multiple of a norm).

We next characterize Alg. 1 in terms of its cumulative regret w.r.t. the best fixed hypothesis, i.e.,

$\mathrm{Reg}_T \triangleq \sum_{t=1}^T \bigl(L(\theta_t; x_t, y_t) + \lambda R(\theta_t)\bigr) - \min_{\theta \in \Theta} \sum_{t=1}^T \bigl(L(\theta; x_t, y_t) + \lambda R(\theta)\bigr).$   (13)
Proposition 5 (regret bounds with fixed and decaying learning rates)

Assume the conditions of Lemma 4, along with $R \geq 0$ and $R(\theta_1) = 0$. Then:

  • Running Alg. 1 with a fixed learning rate $\eta$ yields

    $\mathrm{Reg}_T \leq \frac{\|\bar{\theta} - \theta_1\|^2}{2\eta} + \frac{\eta\, G^2 T}{2}.$   (14)

    Setting $\eta = \|\bar{\theta} - \theta_1\|/(G\sqrt{T})$ yields a sublinear regret of $\|\bar{\theta} - \theta_1\|\, G\sqrt{T}$. (Note that this requires knowing $\|\bar{\theta} - \theta_1\|$ in advance, as well as the number of rounds $T$.)

  • Assume that $\Theta$ is bounded with diameter $F$ (i.e., $\|\theta - \theta'\| \leq F$ for all $\theta, \theta' \in \Theta$). Let the learning rate be $\eta_t = \eta_0/\sqrt{t}$, with arbitrary $\eta_0 > 0$. Then,

    $\mathrm{Reg}_T \leq \Bigl(\frac{F^2}{2\eta_0} + \eta_0 G^2\Bigr)\sqrt{T}.$   (15)

    Optimizing the bound gives $\eta_0 = F/(G\sqrt{2})$, yielding $\mathrm{Reg}_T \leq FG\sqrt{2T}$.

  • If $L(\cdot\,; x_t, y_t)$ is $\sigma$-strongly convex for all $t$, and $\eta_t = 1/(\sigma t)$, we obtain a logarithmic regret bound:

    $\mathrm{Reg}_T \leq \frac{G^2}{2\sigma}\,(1 + \log T).$   (16)

Similarly to other analyses of online learning algorithms, once an online-to-batch conversion is specified, regret bounds allow us to obtain PAC bounds on optimization and generalization errors. The following proposition can be proved using the same techniques as in (Cesa-Bianchi et al., 2004; Shalev-Shwartz et al., 2007).

Proposition 6 (optimization and estimation error)

If the assumptions of Prop. 5 hold and the learning rate is chosen as in the second bullet, then the version of Alg. 1 that returns the averaged model $\bar{\theta}$ solves the optimization problem (10) with accuracy $\epsilon$ in $T = O(1/\epsilon^2)$ iterations, with probability at least $1 - \delta$. If $L$ is also $\sigma$-strongly convex and the learning rate is chosen as in the third bullet, then, for the version of Alg. 1 that returns the last model $\theta_{T+1}$, we get $T = \tilde{O}(1/\epsilon)$. The generalization bounds are of the same orders.

We now pause to see how this analysis applies to some concrete cases. The requirement that the loss is $G$-Lipschitz holds for the hinge and logistic losses, with $G$ bounded in terms of the maximum feature norm (see Appendix E). These losses are not strongly convex, and therefore Alg. 1 has only $O(1/\epsilon^2)$ convergence. If the regularizer $R$ is $\sigma$-strongly convex, a possible workaround to obtain $\tilde{O}(1/\epsilon)$ convergence is to let $L$ "absorb" that strong convexity, by redefining the loss to include the strongly convex part of the regularizer. Since neither the $\ell_{2,1}$-norm nor its square is strongly convex, we cannot use this trick for the MKL case (7), but it does apply to non-sparse MKL (Kloft et al., 2010) (squared $\ell_q$-norms are strongly convex for $1 < q \leq 2$) and to elastic net MKL (Suzuki and Tomioka, 2009). Still, the $O(1/\epsilon^2)$ rate for MKL is competitive with the best batch algorithms; e.g., the method in Xu et al. (2009) achieves an $\epsilon$-accurate primal-dual gap in $O(1/\epsilon^2)$ iterations. Some losses of interest (e.g., the squared loss, or the modified loss just mentioned) are $G$-Lipschitz on any compact subset of $\mathbb{R}^d$ but not on all of $\mathbb{R}^d$. However, if it is known in advance that the optimal solution must lie in some compact convex set $\Theta$, we can add the vacuous constraint $\theta \in \Theta$ and run Alg. 1 with the projection step, making the analysis still applicable; we present concrete examples in Appendix E.
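In practice, a convenient vacuous constraint is an $\ell_2$-ball of radius $\gamma$ known to contain the optimum (the choice of $\gamma$ for the SVM and CRF cases is deferred to Appendix E in the paper), so the projection step is the trivial rescaling sketched below.

import numpy as np

def project_l2_ball(theta, gamma):
    """Project theta onto {x : ||x||_2 <= gamma} (a no-op inside the ball)."""
    norm = np.linalg.norm(theta)
    return theta if norm <= gamma else (gamma / norm) * theta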

3.4 Online MKL

The instantiation of Alg. 1 for the MKL case (7) yields Alg. 3. We consider the structured hinge loss (4); adapting it to other generalized linear models (e.g., CRFs, with the logistic loss (3)) is straightforward. As discussed in the last paragraph of §3.3, it may be necessary to consider "vacuous" projection steps to ensure fast convergence. Hence, an optional upper bound $\gamma$ on the norm of the optimal solution is accepted as input. Suitable values of $\gamma$ for the SVM and CRF cases are given in Appendix E. In line 4, the scores of candidate outputs are computed groupwise; in structured prediction (see §2), a factorization over parts is assumed and the scores are computed for partial output assignments (see Taskar et al. (2003); Tsochantaridis et al. (2004) for details). The key novelty of Alg. 3 is in line 8, where the group structure is taken into account through a proximity operator that performs groupwise shrinkage/thresholding, possibly setting some groups to zero.

Although Alg. 3 is written in parametric form, it can be kernelized, as shown next (one can also use explicit features in some groups and implicit features in others). Observe that the parameters of the $j$-th group after round $t$ can be written as a linear combination of the group-$j$ feature vectors of the supporting pairs seen so far, with dual coefficients that accumulate the learning rates and the scalings applied by subsequent proximal and projection steps.

Therefore, the inner products in line 4 can be kernelized. The cost of this step grows with the number of supporting pairs stored so far (times the cost of a kernel evaluation), instead of $O(d_j)$ (where $d_j$ is the dimension of the $j$-th group) in the explicit feature case. After the decoding step (line 5), the supporting pair is stored. Lines 7, 9 and 11 require the norm of each group, which can be maintained using kernels alone: indeed, denoting by $\delta\phi_j \triangleq \phi_j(x_t,\hat{y}_t) - \phi_j(x_t,y_t)$ the $j$-th group of the update direction, after each gradient step (line 6) we have

$\|\theta_j - \eta_t\,\delta\phi_j\|^2 = \|\theta_j\|^2 - 2\eta_t\,\theta_j^\top\delta\phi_j + \eta_t^2\,\|\delta\phi_j\|^2,$   (17)

where the inner product and the last term are computable through evaluations of $K_j$; the proximal and projection steps merely rescale these norms. When the algorithm terminates, it returns the kernel weights $\beta_1,\ldots,\beta_p$ and the sequence of supporting pairs and coefficients that implicitly represents the last model.
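To make the kernelization concrete, the sketch below (our own illustration, not a transcription of Alg. 3) computes a group norm purely from kernel evaluations over the stored dual representation; kernel_j is an assumed callable returning $K_j(u, u')$.

import numpy as np

def group_norm_sq(alphas, supports, kernel_j):
    """||theta_j||^2 = sum_{s,s'} alpha_s * alpha_s' * K_j(u_s, u_s')."""
    n = len(alphas)
    return sum(alphas[s] * alphas[r] * kernel_j(supports[s], supports[r])
               for s in range(n) for r in range(n))

# The proximal and projection steps only rescale each group: scaling group j by c
# is equivalent to multiplying all of its dual coefficients (alphas) by c, so the
# group norm can be tracked incrementally instead of recomputed from scratch.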

1:  input: dataset $\mathcal{D}$, kernels $K_1,\ldots,K_p$, parameter $\lambda$, number of rounds $T$, radius $\gamma$, learning rate sequence $(\eta_t)$
2:  initialize $\theta_1 = 0$ (i.e., each group $\theta_{j,1} = 0$)
3:  for $t = 1$ to $T$ do
4:     take an instance $(x_t, y_t)$ and compute the groupwise scores $f_j(x_t, y')$, for $j = 1,\ldots,p$
5:     decode: $\hat{y}_t = \arg\max_{y' \in \mathcal{Y}(x_t)} \sum_{j=1}^{p} f_j(x_t, y') + \ell(y', y_t)$
6:     Gradient step: $\theta_j \leftarrow \theta_j - \eta_t\bigl(\phi_j(x_t,\hat{y}_t) - \phi_j(x_t,y_t)\bigr)$, for $j = 1,\ldots,p$
7:     compute the group weights $b_j = \|\theta_j\|$, for $j = 1,\ldots,p$, and shrink them with Alg. 2, obtaining $\bar{b}$
8:     Proximal step: $\theta_j \leftarrow (\bar{b}_j/b_j)\,\theta_j$, for $j = 1,\ldots,p$
9:     Projection step: $\theta \leftarrow \min\{1,\ \gamma/\|\theta\|\}\,\theta$
10:  end for
11:  compute $\beta_j = \|\theta_j\| \big/ \sum_{k=1}^{p}\|\theta_k\|$, for $j = 1,\ldots,p$
12:  return $\beta = (\beta_1,\ldots,\beta_p)$ and the last model
Algorithm 3 Online-mkl

In the case of sparse explicit features, an implementation trick analogous to the one used in (Shalev-Shwartz et al., 2007), where each group is represented by its norm and an unnormalized vector, can substantially reduce the amount of computation. In the case of implicit features with a sparse kernel matrix, sparse storage of this matrix can also significantly speed up the algorithm, reducing the cost of line 4. Note also that all group-specific computation can be carried out in parallel on multiple machines, which makes Alg. 3 suitable for combining many kernels (large $p$).

4 Experiments

Handwriting recognition.

We use the OCR dataset of Taskar et al. (2003) (www.cis.upenn.edu/~taskar/ocr), which contains handwritten words, segmented into characters, collected from multiple writers. Each character is a 16-by-8 binary image, i.e., a 128-dimensional vector (our input), and has one of 26 labels (a-z; the outputs to predict). As in (Taskar et al., 2003), we address this sequence labeling problem with a structured SVM; however, we learn the kernel from the data via Alg. 3. We use an indicator basis function to represent the correlation between consecutive outputs. Our first experiment (reported in the upper part of Tab. 1) compares linear, quadratic, and Gaussian kernels, either used individually, combined via a simple average, or combined with MKL. The results show that MKL outperforms the other configurations by a clear margin.

The second experiment aims to show the ability of Alg. 3 to exploit both feature and kernel sparsity by learning a combination of a linear kernel (explicit features) with a generalized B1-spline kernel, $K(x,x') = \max\{0,\ 1 - \|x - x'\|/h\}$, with the bandwidth $h$ chosen so that the kernel matrix is mostly zeros. The rationale is to combine the strength of a simple feature-based kernel with that of one that depends only on a few nearest neighbors. The results (Tab. 1, bottom part) show that MKL outperforms both the individual kernels and the averaged kernel. Perhaps more importantly, the accuracy is not much worse than the best one obtained in the previous experiment, while the runtime is much faster (see the runtimes in Tab. 1).

Kernel | Test Acc. (per char.) | Training Runtime
Linear | (not recovered) | 6 sec.
Quadratic | (not recovered) | 116 sec.
Gaussian | (not recovered) | 123 sec.
Average (linear + quadratic + Gaussian) | (not recovered) | 118 sec.
MKL (linear + quadratic + Gaussian) | (not recovered) | 279 sec.
B1-Spline | (not recovered) | 8 sec.
Average (linear + B1-spline) | (not recovered) | 15 sec.
MKL (linear + B1-spline) | (not recovered) | 15 sec.
Table 1: Results for handwriting recognition, averaged over runs on the same folds as in (Taskar et al., 2003), training on one fold and testing on the others. The linear and quadratic kernels are normalized to unit diagonal. In all cases, training ran for a fixed number of epochs, with $\eta_0$ in (15) picked from a small grid by selecting the value that most decreases the objective after the first epochs. Results are shown for the best regularization coefficient $\lambda$, chosen from a grid of values.

Dependency parsing.

We trained non-projective dependency parsers for English, using the dataset from the CoNLL-2008 shared task (Surdeanu et al., 2008), with tens of thousands of training sentences and a separate test set. The output to be predicted from each input sentence is the set of dependency arcs, linking heads to modifiers, that must define a spanning tree (see the example in Fig. 1). We use arc-factored models, where the feature vectors decompose as $\phi(x,y) = \sum_{(h,m) \in y} \phi_{h,m}(x)$. Although these are not the state of the art for this task, exact inference is tractable via minimum spanning tree algorithms (McDonald et al., 2005). We defined feature templates for each candidate arc by conjoining the words, lemmas, and parts-of-speech of the head and the modifier, as well as the parts-of-speech of the surrounding words, and the distance and direction of attachment. This yields a large-scale problem, with millions of features instantiated. The feature vectors associated with each candidate arc are, however, very sparse, and this is exploited in the implementation. We ran Alg. 3 with explicit features, with each group standing for a feature template. MKL did not outperform a standard SVM in this experiment; however, it performed well at pruning irrelevant feature templates (see Fig. 1, bottom right). Besides interpretability, which may be useful for understanding the syntax of natural languages, this pruning is also appealing in a two-stage architecture, where a standard learner at the second stage only needs to handle a small fraction of the templates initially hypothesized.

Figure 1: Top: a dependency parse tree (adapted from (McDonald et al., 2005)). Bottom left: group weights along the epochs of Alg. 3. Bottom right: results of standard SVMs trained on sets of feature templates of various sizes, selected either by a standard SVM or by MKL (the UAS, or unlabeled attachment score, is the fraction of non-punctuation words whose head was correctly assigned).

5 Conclusions

We introduced a new class of online proximal algorithms that extends Fobos and is applicable to many variants of MKL and group-lasso. We provided regret, convergence, and generalization bounds, and used the algorithm for learning the kernel in large-scale structured prediction tasks.

Our work may impact other problems. In structured prediction, the ability to promote structural sparsity suggests that it is possible to learn the structure and the parameters of graphical models simultaneously. The ability to learn the kernel online offers a new paradigm for problems in which the underlying geometry (induced by the similarities between objects) evolves over time: algorithms that adapt the kernel while learning are robust to certain kinds of concept drift. We plan to explore these directions in future work.

Appendix A Proof of Proposition 1

We have respectively:

(18)

where the innermost minimization problem has a closed-form solution, from which the expressions for $M_\varphi$ and $\mathrm{prox}_\varphi$ stated in Prop. 1 follow.

Appendix B Proof of Corollary 3

We start by stating and proving the following lemma:

Lemma 7

Let $\varphi$ be as in Prop. 2, and let $\bar{x} \triangleq \mathrm{prox}_\varphi(x)$. Then, any $y$ satisfies

(19)

Proof: From (8), we have that

from which (19) follows.   

Now, take Lemma 7 and bound the left hand side as:

This concludes the proof.

Appendix C Proof of Lemma 4

Let . We have successively:

(20)