Learning with Clustering Structure


Abstract.

We study supervised learning problems that use clustering constraints to impose structure on either features or samples, seeking to help both prediction and interpretation. The problem of clustering features arises naturally in text classification, for instance, to reduce dimensionality by grouping words together and identify synonyms. The sample clustering problem, on the other hand, applies to multiclass problems where we are allowed to make multiple predictions and only the performance of the best answer is recorded. We derive a unified optimization formulation highlighting the common structure of these problems and produce algorithms whose core iteration complexity amounts to a k-means clustering step, which can be approximated efficiently. We extend these results to combine sparsity and clustering constraints, and develop a new projection algorithm for the set of clustered sparse vectors. We prove convergence of our algorithms on random instances, based on a union of subspaces interpretation of the clustering structure. Finally, we test the robustness of our methods on artificial data sets as well as real data extracted from movie reviews.

Key words and phrases:
Clustering, Multitask, Dimensionality Reduction, Supervised Learning, Supervised Clustering
A shorter, preliminary version of this paper appeared at the NIPS 2015 workshop “Transfer and Multi-Task Learning: Trends and New Perspectives”.

1. Introduction

Adding structural information to supervised learning problems can significantly improve prediction performance. Sparsity, for example, has been shown to improve both statistical and practical performance [Bach et al., 2012]. Here, we study clustering constraints that group either features or samples, to both improve prediction and provide additional structural insights into the data.

When there exist groups of highly correlated features, for instance, assigning a uniform weight within each group significantly reduces dimensionality, which can be beneficial both for prediction and for interpretation [Bondell and Reich, 2008]. This often occurs in text classification, where it is natural to group together words having the same meaning for a given task [Dhillon et al., 2003; Jiang et al., 2011].

On the other hand, learning a unique predictor for all samples can be too restrictive. In recommendation systems, for example, users can be partitioned into groups with different tastes. Here, we study how to learn a partition of the samples that achieves the best within-group prediction [Guzman-Rivera et al., 2014; Zhang, 2003].

These problems can of course be tackled by grouping synonyms or clustering samples in an unsupervised preconditioning step. However, such partitions are not necessarily optimized for, or even relevant to, the prediction task. Prior hypotheses on the partition can also be added, as in Latent Dirichlet Allocation [Blei et al., 2003] or Mixture of Experts [Jordan, 1994]. We present here a unified framework that highlights the clustered structure of these problems without adding prior information on the clusters. While it constrains the predictors, our framework allows the use of any loss function for the prediction task. We propose several optimization schemes to solve these problems efficiently.

First, we formulate an explicit convex relaxation which can be solved efficiently using the conditional gradient algorithm [Frank and Wolfe, 1956; Jaggi, 2013], where the core inner step amounts to solving a clustering problem. We then study an approximate projected gradient scheme similar to the Iterative Hard Thresholding (IHT) algorithm [Blumensath and Davies, 2009] used in compressed sensing. While the constraints are non-convex, projection onto the feasible set reduces to a clustering subproblem akin to k-means. In the particular case of feature clustering for regression, the k-means steps are performed in dimension one and can therefore be solved exactly by dynamic programming [Bellman, 1973; Wang and Song, 2011]. When a sparsity constraint is added to the feature clustering problem for regression, we develop a new dynamic program that gives the exact projection onto the set of sparse and clustered vectors.

We provide a theoretical convergence analysis of our projected gradient scheme, generalizing the proof developed for IHT. Although our constraint is structurally similar to sparsity, we show that imposing a clustered structure, while helping interpretability, does not significantly reduce the number of samples required for recovery, unlike in the sparse case.

Finally, we describe experiments on both synthetic and real datasets involving large corpora of text from movie reviews. The use of k-means steps makes our approach fast and scalable while comparing very favorably with standard benchmarks and providing meaningful insights on the data structure.

2. Learning & clustering features or samples

Given sample points represented by a matrix $X \in \mathbb{R}^{n \times d}$ and corresponding labels $y$, real or nominal depending on the task (regression or classification), we seek to compute linear predictors represented by a matrix or vector $W$. Clustering features or samples is done by constraining $W$, and our problems take the generic form

$$\min_{W} \; L(X, y, W) + \lambda\, \Omega(W) \quad \text{s.t.} \quad W \in \mathcal{W},$$

in the prediction variable $W$, where $L$ is a learning loss (for simplicity, we consider only squared or logistic losses in what follows), $\Omega$ is a classical regularizer and the set $\mathcal{W}$ encodes the clustering structure.

The clustering constraint partitions the features or the samples into $Q$ groups by imposing that all features or samples within a group share a common predictor vector or coefficient in the supervised learning problem. To define it algebraically we use an assignment matrix $Z \in \{0,1\}^{m \times Q}$ that assigns the $m$ features or samples to the $Q$ groups, i.e. $Z_{iq} = 1$ if feature or sample $i$ is in group $q$ and $Z_{iq} = 0$ otherwise. Denoting $V$ the matrix stacking the $Q$ common predictors or coefficients, the prediction variable is decomposed as $W = ZV$, leading to the supervised learning problem with clustering constraint

(1) $\displaystyle \min \; L(X, y, W) + \lambda\, \Omega(W) \quad \text{s.t.} \quad W = ZV, \; Z \in \{0,1\}^{m \times Q}, \; Z\mathbf{1} = \mathbf{1},$

in the variables $W$, $Z$ and $V$, whose dimensions depend on whether features ($m = d$) or samples ($m = n$) are clustered.
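To fix ideas, the following short numpy sketch (with toy sizes and values chosen purely for illustration) builds an assignment matrix $Z$ and the corresponding clustered prediction variable $W = ZV$.

    import numpy as np

    # Toy sizes (hypothetical): m = 5 features or samples, Q = 2 groups.
    m, Q = 5, 2
    labels = np.array([0, 1, 0, 1, 1])       # group index of each feature/sample

    # Assignment matrix: Z[i, q] = 1 iff item i belongs to group q, rows sum to 1.
    Z = np.zeros((m, Q))
    Z[np.arange(m), labels] = 1.0

    # One common predictor (here a scalar coefficient) per group.
    V = np.array([[0.5], [-1.2]])            # shape (Q, 1)

    # Clustered prediction variable: each row of W copies its group's predictor.
    W = Z @ V
    print(W.ravel())                         # [ 0.5 -1.2  0.5 -1.2 -1.2]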

Although this formulation is non-convex, we observe that the core non-convexity emerges from a clustering problem on the predictors, which we can handle using k-means approximations, as detailed in Section 3. We now present in more detail two key applications of our formulation: dimensionality reduction by clustering features, and learning experts by grouping samples. We only detail regression formulations; extensions to classification are given in Appendix A.1. Our framework also applies to clustered multitask learning as a regularization hypothesis, and we refer the reader to Appendix A.2 for more details on this formulation.

2.1. Dimensionality reduction: clustering features

Given a prediction task, we want to reduce dimensionality by grouping together features which have a similar influence on the output [Bondell and Reich, 2008], e.g. synonyms in a text classification problem. The predictor variable is here reduced to a single vector $w$, whose coefficients take only a limited number of distinct values. In practice, this amounts to a quantization of the classifier vector, supervised by the learning loss.

Our objective is to form $Q$ groups of features $\mathcal{G}_1, \ldots, \mathcal{G}_Q$, assigning a unique weight $v_q$ to all features in group $\mathcal{G}_q$. In other words, we search for a predictor $w \in \mathbb{R}^d$ such that $w_j = v_q$ for all $j \in \mathcal{G}_q$. This problem can be written

(2) $\displaystyle \min \; \sum_{i=1}^{n} \ell\big(y_i, w^T x_i\big) + \lambda\, \Omega(w) \quad \text{s.t.} \quad w = Zv, \; Z \in \{0,1\}^{d \times Q}, \; Z\mathbf{1} = \mathbf{1},$

in the variables $w \in \mathbb{R}^d$, $Z$ and $v \in \mathbb{R}^Q$. In what follows, $\ell$ will be a squared or logistic loss that measures the quality of prediction for each sample. The regularization can either be seen as a standard squared $\ell_2$ regularization on $w$, with $\Omega(w) = \|w\|_2^2$, or equivalently as a weighted regularization on $v$, since $\|Zv\|_2^2 = \sum_{q=1}^{Q} |\mathcal{G}_q|\, v_q^2$.

Note that the fused lasso [Tibshirani et al., 2005] in dimension one solves a similar problem, which also quantizes the regression vector, using an $\ell_1$ penalty on differences of consecutive coefficients. The crucial difference with our setting is that the fused lasso assumes that the variables are ordered and minimizes the total variation of the coefficient vector. Here we do not make any ordering assumption on the regression vector.
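The following toy sketch illustrates this difference: the total-variation penalty used by the fused lasso depends on the ordering of the coefficients, while a clustering objective, which only depends on the set of coefficient values, does not (the values below are arbitrary).

    import numpy as np

    w = np.array([1.0, 1.1, 3.0, 2.9, 1.05])          # toy coefficient vector
    perm = np.random.permutation(len(w))
    w_perm = w[perm]

    def total_variation(v):
        # Fused-lasso style penalty: sum of absolute consecutive differences.
        return np.abs(np.diff(v)).sum()

    def clustering_cost(v, centroids):
        # Clustering cost: each coefficient is charged to its closest centroid.
        return ((v[:, None] - centroids[None, :]) ** 2).min(axis=1).sum()

    centroids = np.array([1.05, 2.95])                 # two candidate group values
    print(total_variation(w), total_variation(w_perm))             # differ in general
    print(clustering_cost(w, centroids), clustering_cost(w_perm, centroids))  # equal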

2.2. Learning experts: clustering samples

Mixture of experts [Jordan, 1994] is a standard prediction model that learns several predictors, called "experts", each predicting labels for a different group of samples. For a new sample, the prediction is then given by a weighted sum of the predictions of all experts, the weights being given by a prior probability depending on the features of the sample. Here, we study a slightly different setting where we also learn several experts, but assignments to groups are extracted only from the labels and not from the feature variables, as illustrated by the graphical model in Figure 1.

Figure 1. Learning multiple diverse experts (left), mixture of experts model (right). The assignment matrix gives the assignments to groups, grey variables are observed, and arrows represent dependencies between variables.

This means that while we learn several experts (classifier vectors), the information contained in the features is not sufficient to select the best expert. Given a new point, we can only return several diverse answers, or an approximate weighted prediction. Our algorithm will thus return several answers and minimize the loss of the best of these answers. This setting was already studied by Zhang [2003] for general predictors. It is also related to subspace clustering [Elhamifar and Vidal, 2009], although here we already know in which dimension the data points lie.

Given a prediction task, our objective is to find $Q$ groups of sample points $\mathcal{G}_1, \ldots, \mathcal{G}_Q$ that maximize within-group prediction performance. Within each group $\mathcal{G}_q$, samples are predicted using a common linear predictor $v_q \in \mathbb{R}^d$. Our problem can be written

(3) $\displaystyle \min \; \sum_{q=1}^{Q} \sum_{i \in \mathcal{G}_q} \ell\big(y_i, v_q^T x_i\big) + \lambda \sum_{q=1}^{Q} |\mathcal{G}_q|\, \|v_q\|_2^2$

in the variables $v_1, \ldots, v_Q \in \mathbb{R}^d$ and $(\mathcal{G}_1, \ldots, \mathcal{G}_Q)$, such that $(\mathcal{G}_1, \ldots, \mathcal{G}_Q)$ is a partition of the samples. As in the problem of clustering features above, $\ell$ measures the quality of prediction for each sample and the regularization is weighted by the group sizes. Using an assignment matrix $Z \in \{0,1\}^{n \times Q}$ and an auxiliary variable $W = ZV \in \mathbb{R}^{n \times d}$, whose $i$-th row satisfies $W_i = v_q$ if sample $i$ is in group $\mathcal{G}_q$, problem (3) can be rewritten

$$\min \; \sum_{i=1}^{n} \ell\big(y_i, W_i^T x_i\big) + \lambda \|W\|_F^2 \quad \text{s.t.} \quad W = ZV, \; Z \in \{0,1\}^{n \times Q}, \; Z\mathbf{1} = \mathbf{1},$$

in the variables $W \in \mathbb{R}^{n \times d}$, $Z$ and $V \in \mathbb{R}^{Q \times d}$ (whose rows are the $v_q$). Once again, our problem fits the general formulation given in (1), and in the sections that follow we describe several algorithms to solve it efficiently.
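For illustration, the sketch below (with synthetic data and randomly drawn experts, all hypothetical) evaluates the unregularized objective of problem (3) for a fixed set of experts: each sample is charged the loss of its best expert, which also yields the induced partition of the samples.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, Q = 8, 3, 2
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)
    V = rng.standard_normal((Q, d))          # one linear expert per group (rows)

    # Squared loss of every expert on every sample, shape (n, Q).
    losses = (y[:, None] - X @ V.T) ** 2

    assignment = losses.argmin(axis=1)       # best expert for each sample
    best_loss = losses.min(axis=1).sum()     # objective of (3) without regularization
    print(assignment, best_loss)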

3. Approximation algorithms

We now present optimization strategies to solve learning problems with clustering constraints. We begin with simple greedy procedures, then develop a more refined convex relaxation solved using an approximate conditional gradient method. We will show that this latter relaxation is exact in the case of feature clustering, because the inner one-dimensional clustering problem can be solved exactly by dynamic programming.

3.1. Greedy algorithms

For both clustering problems discussed above, greedy algorithms can be used to handle the clustering objective. A straightforward strategy to group features is to first train predictors as in a classical supervised learning problem, and then cluster the weights together using k-means. In the same spirit, when clustering sample points, one can alternate between minimizing over the predictors of each group and assigning each point to the group where its loss is smallest. These methods are fast but unstable and highly dependent on initialization. However, alternating minimization can be used to refine the solution of the more robust algorithms proposed below.
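A minimal sketch of the greedy strategy for feature clustering, assuming a squared loss and using off-the-shelf ridge regression and k-means from scikit-learn; the final refitting step and all names are simplifications added here for illustration only.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.cluster import KMeans

    def greedy_feature_clustering(X, y, Q, lam=1.0):
        # Step 1: train an unconstrained predictor.
        w = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
        # Step 2: cluster its coefficients with (one-dimensional) k-means.
        labels = KMeans(n_clusters=Q, n_init=10).fit(w.reshape(-1, 1)).labels_
        # Step 3: refit a single weight per group on the grouped features.
        Z = np.zeros((X.shape[1], Q))
        Z[np.arange(X.shape[1]), labels] = 1.0
        Xg = X @ Z                                   # grouped design matrix
        v = np.linalg.solve(Xg.T @ Xg + lam * Z.T @ Z, Xg.T @ y)
        return Z @ v, labels                         # clustered predictor w = Zv

    # Usage on random data (hypothetical):
    rng = np.random.default_rng(0)
    X, y = rng.standard_normal((50, 10)), rng.standard_normal(50)
    w_clustered, labels = greedy_feature_clustering(X, y, Q=3)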

3.2. Convex relaxation using conditional gradient algorithm

Another approach is to relax the problem by considering the convex hull of the feasible set and use the conditional gradient method (a.k.a. Frank-Wolfe, [Frank and Wolfe, 1956; Jaggi, 2013]) on the relaxed convex problem. Provided that a linear minimization oracle can be computed efficiently, the key benefit of using this method when minimizing a convex objective over a non-convex set is that it implicitly solves a convex relaxation, i.e. it minimizes the convex objective over the convex hull of the feasible set, without ever requiring this convex hull to be formed explicitly.

In our case, the convex hull of the feasible set is the entire space, so directly relaxing the problem in $W$ loses the initial clustering structure. However, in the special case of a squared loss, minimization in the centroid variable can be performed analytically and our problem reduces to a clustering problem for which this strategy is relevant. We illustrate this simplification in the case of clustering features for a regression task; detailed computations and explicit procedures for other settings are given in Appendix A.3.

Replacing $w = Zv$ in (2), the objective function in problem (2) becomes

$$\|y - XZv\|_2^2 + \lambda\, v^T Z^T Z v.$$

Minimizing in $v$ and using the Sherman-Morrison-Woodbury formula, we then get

$$\min_{v} \; \|y - XZv\|_2^2 + \lambda\, v^T Z^T Z v \;=\; \lambda\, y^T \big(\lambda I_n + X M X^T\big)^{-1} y,$$

and the resulting clustering problem is then formulated in terms of the normalized equivalence matrix

$$M = Z (Z^T Z)^{-1} Z^T,$$

such that $M_{ij} = 1/|\mathcal{G}_q|$ if items $i$ and $j$ are both in group $\mathcal{G}_q$ and $M_{ij} = 0$ otherwise.
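Concretely, the normalized equivalence matrix can be obtained from any assignment matrix as $M = Z(Z^T Z)^{-1} Z^T$; the toy sketch below (hypothetical partition) checks that its entries equal the inverse group sizes within groups and that it is an orthonormal projection.

    import numpy as np

    labels = np.array([0, 0, 1, 0, 1])               # toy partition of 5 items into 2 groups
    Z = np.zeros((5, 2))
    Z[np.arange(5), labels] = 1.0

    # Normalized equivalence matrix M = Z (Z^T Z)^{-1} Z^T.
    M = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
    print(np.round(M, 3))                            # 1/3 or 1/2 inside groups, 0 across
    print(np.allclose(M, M.T), np.allclose(M @ M, M))  # True True: orthonormal projection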

Writing $\mathcal{E}$ for the set of such normalized equivalence matrices generated by partitions into at most $Q$ groups, our partitioning problem can be written

$$\min_{M \in \mathcal{E}} \; f(M), \quad \text{where} \quad f(M) = \lambda\, y^T \big(\lambda I_n + X M X^T\big)^{-1} y,$$

in the matrix variable $M$. We now relax this last problem by solving it (implicitly) over the convex hull of the set of equivalence matrices using the conditional gradient method. Its generic form is described in Algorithm 1, where the scalar product is the canonical one on matrices, i.e. $\langle A, B \rangle = \mathbf{Tr}(A^T B)$. At each iteration, the algorithm requires solving a linear minimization oracle over the feasible set; this gives the direction for the next step and an estimated gap to the optimum which is used as a stopping criterion.

  Initialize $M_0 \in \mathcal{E}$
  for $t = 0, 1, 2, \ldots$ do
     Solve the linear minimization oracle
(4)      $M_{\text{or}} \in \mathop{\rm argmin}_{M \in \mathcal{E}} \; \langle \nabla f(M_t), M \rangle$
     if $\mathrm{gap}(M_t) = \langle \nabla f(M_t), M_t - M_{\text{or}} \rangle \leq \epsilon$ then
        return $M_t$
     else
        Set $M_{t+1} = M_t + \alpha_t (M_{\text{or}} - M_t)$
     end if
  end for
Algorithm 1 Conditional gradient algorithm

The estimated gap is given by the linear oracle as

$$\mathrm{gap}(M_t) = \langle \nabla f(M_t), M_t - M_{\text{or}} \rangle.$$

By definition of the oracle and convexity of the objective function, we have

$$f(M_t) - \min_{M \in \mathbf{conv}(\mathcal{E})} f(M) \;\leq\; \langle \nabla f(M_t), M_t - M^\star \rangle \;\leq\; \mathrm{gap}(M_t),$$

where $M^\star$ is any minimizer of $f$ over $\mathbf{conv}(\mathcal{E})$.
Crucially here, the linear minimization oracle in (4) is equivalent to a projection step, and this projection step is itself equivalent to a k-means clustering problem, which can be solved exactly in the feature clustering case and well approximated in the other scenarios detailed in the appendix. For a fixed matrix $M_t$, the matrix $-\nabla f(M_t)$ is positive semidefinite (this is the case for all the settings considered in this paper). Writing $G_t = (-\nabla f(M_t))^{1/2}$ its matrix square root, we get

$$\langle \nabla f(M_t), M \rangle = -\mathbf{Tr}\big(G_t^2 M\big) = -\|M G_t\|_F^2 = \|G_t - M G_t\|_F^2 - \|G_t\|_F^2,$$

because $M$ is an orthonormal projection ($M = M^T$, $M^2 = M$) and so is $I - M$. Given a matrix $G$, we also have

(5) $\displaystyle \min_{M \in \mathcal{E}} \|G - M G\|_F^2 \;=\; \min_{\mu, \mathcal{G}} \sum_{q=1}^{Q} \sum_{i \in \mathcal{G}_q} \|G_i - \mu_q\|_2^2,$

where the minimum on the right is taken over centroids $\mu_1, \ldots, \mu_Q$ and partitions $\mathcal{G} = (\mathcal{G}_1, \ldots, \mathcal{G}_Q)$, and $G_i$ denotes the $i$-th row of $G$. This means that computing the linear minimization oracle on $\nabla f(M_t)$ is equivalent to solving a k-means clustering problem on the rows of $G_t$. This k-means problem can itself be solved approximately using the k-means++ algorithm, which performs alternating minimization on the assignments and the centroids after an appropriate random initialization. Although this is a non-convex subproblem, k-means++ guarantees an approximation ratio of order $\log Q$ in expectation on its solution [Arthur and Vassilvitskii, 2007]. We write $\text{k-means}(G, Q)$ for this approximate solution. Overall, the linear minimization oracle (4) can therefore be computed approximately. Moreover, in the particular case of grouping features for regression, the k-means subproblem is one-dimensional and can be solved exactly using dynamic programming [Bellman, 1973; Wang and Song, 2011], so that convergence of the algorithm is ensured.
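The sketch below outlines this oracle, assuming the negated gradient is available and positive semidefinite; scipy's matrix square root and scikit-learn's k-means++ are used as stand-ins for the exact one-dimensional dynamic program, and the function name is ours.

    import numpy as np
    from scipy.linalg import sqrtm
    from sklearn.cluster import KMeans

    def linear_minimization_oracle(neg_grad, Q):
        """Approximate argmin_{M in E} <grad, M> = argmin_{M in E} ||G - M G||_F^2,
        where G is the matrix square root of neg_grad = -grad (assumed PSD)."""
        G = np.real(sqrtm(neg_grad))                     # matrix square root
        labels = KMeans(n_clusters=Q, n_init=10).fit(G).labels_  # k-means on the rows of G
        Z = np.zeros((G.shape[0], Q))
        Z[np.arange(G.shape[0]), labels] = 1.0
        # Return the corresponding normalized equivalence matrix.
        return Z @ np.linalg.inv(Z.T @ Z) @ Z.T

    # Toy usage (hypothetical PSD matrix playing the role of -grad f(M_t)):
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    M_oracle = linear_minimization_oracle(A @ A.T, Q=2)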

The complete method is described in Algorithm 2, where we use the classical conditional gradient stepsize $\alpha_t = 2/(t+2)$. A feasible solution for the original non-convex problem is computed from the solution of the relaxed problem using Frank-Wolfe rounding, i.e. by outputting the last linear oracle.

Input: data $X$, labels $y$, number of groups $Q$, regularization $\lambda$, target gap $\epsilon$
  Initialize $M_0 \in \mathcal{E}$
  for $t = 0, 1, 2, \ldots$ do
     Compute $G_t$, the matrix square root of $-\nabla f(M_t)$
     Get oracle $M_{\text{or}} = Z_t (Z_t^T Z_t)^{-1} Z_t^T$ with $Z_t$ given by $\text{k-means}(G_t, Q)$
     if $\langle \nabla f(M_t), M_t - M_{\text{or}} \rangle \leq \epsilon$ then
        return $M_t$
     else
        Set $M_{t+1} = M_t + \frac{2}{t+2} (M_{\text{or}} - M_t)$
     end if
  end for
  $Z$ is given by the last k-means
  $v$ is given by the analytic solution of the minimization in $v$ for fixed $Z$
  Output $w = Zv$
Algorithm 2 Conditional gradient on the equivalence matrix

3.3. Complexity

The core complexity of Algorithm 2 is concentrated in the inner k-means subproblem, which standard alternating minimization approximates at a cost of $O(T\,Q\,D)$, where $T$ is the number of alternating steps, $Q$ is the number of clusters, and $D$ is the product of the dimensions of the matrix being clustered. However, computing the gradient requires inverting matrices and computing a matrix square root of the gradient at each iteration, which can slow down computations for large datasets. The number of clusters $Q$ can be chosen from prior knowledge of the problem (e.g. knowing the number of hidden groups in the sample points) or by cross-validation, as for the other regularization parameters.

4. Projected Gradient algorithm

In practice, convergence of the conditional gradient method detailed above can be quite slow, so we also study a projected gradient algorithm to tackle the generic problem in (1). Although simple, and non-convex in general, this strategy, already used in the context of sparsity, can produce scalable and provably convergent algorithms in certain scenarios, as we will see below.

4.1. Projected gradient

We can exploit the fact that projecting a matrix $W_0$ onto the feasible set

$$\mathcal{W} = \{\, W = ZV : \; Z \in \{0,1\}^{m \times Q}, \; Z\mathbf{1} = \mathbf{1} \,\}$$

is equivalent to a clustering problem, with

$$\min_{W \in \mathcal{W}} \|W_0 - W\|_F^2 \;=\; \min_{\mu, \mathcal{G}} \sum_{q=1}^{Q} \sum_{i \in \mathcal{G}_q} \|W_{0,i} - \mu_q\|_2^2,$$

where the minimum on the right is taken over centroids $\mu_1, \ldots, \mu_Q$ and partitions $\mathcal{G} = (\mathcal{G}_1, \ldots, \mathcal{G}_Q)$, and $W_{0,i}$ denotes the $i$-th row of $W_0$. This k-means problem can be solved approximately with the k-means++ algorithm, as mentioned in Section 3.2. We will analyze this algorithm for the problem of clustering features for regression, in which the projection can be computed exactly. Writing $\text{k-means}(\cdot, Q)$ for the (approximate) solution of this projection, $f$ for the objective function and $\gamma$ for the stepsize, the full method is summarized as Algorithm 3 and its implementation is detailed in Section 4.3.

Input: data $X$, labels $y$, number of groups $Q$, stepsize $\gamma$
  Initialize $W_0$
  while not converged do
     $W' = W_t - \gamma \nabla f(W_t)$
     $(Z_{t+1}, V_{t+1}) = \text{k-means}(W', Q)$
     $W_{t+1} = Z_{t+1} V_{t+1}$
  end while
  $Z$ and $V$ are given through the last k-means step
  Output $W = ZV$
Algorithm 3 Projected Gradient Descent
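A minimal numpy sketch of Algorithm 3 for feature clustering with a squared loss; scikit-learn's k-means is used in place of the exact one-dimensional dynamic program, the constant stepsize is set to the inverse Lipschitz constant, and all names and toy data are illustrative only (the line search of Section 4.3 is sketched there).

    import numpy as np
    from sklearn.cluster import KMeans

    def project_clustered(w, Q):
        # Projection onto vectors with at most Q distinct values = 1D k-means.
        km = KMeans(n_clusters=Q, n_init=10).fit(w.reshape(-1, 1))
        return km.cluster_centers_[km.labels_].ravel()

    def projected_gradient(X, y, Q, step=None, iters=100):
        n, d = X.shape
        if step is None:
            step = 1.0 / np.linalg.norm(X, 2) ** 2       # 1 / L for the squared loss
        w = np.zeros(d)
        for _ in range(iters):
            grad = X.T @ (X @ w - y)                     # gradient of 0.5 * ||y - Xw||^2
            w = project_clustered(w - step * grad, Q)
        return w

    # Usage on a synthetic clustered model (hypothetical):
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    w_true = np.repeat([1.0, -2.0], 10)                  # two groups of features
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    print(np.unique(np.round(projected_gradient(X, y, Q=2), 2)))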

4.2. Convergence

We now analyze the convergence of the projected gradient algorithm with a constant stepsize $\gamma$, applied to the feature clustering problem for regression. We focus on a problem with squared loss and without regularization term, which reads

$$\min \; \|y - Xw\|_2^2 \quad \text{s.t.} \quad w = Zv, \; Z \in \{0,1\}^{d \times Q}, \; Z\mathbf{1} = \mathbf{1},$$

in the variables $w \in \mathbb{R}^d$, $Z$ and $v \in \mathbb{R}^Q$. We assume that the regression values are generated by a linear model whose coefficient vector $w^*$ satisfies the constraints above, up to additive noise, with

$$y = X w^* + \eta,$$

where $\eta \in \mathbb{R}^n$ is a noise vector. Hence we study convergence of our algorithm to $w^*$, i.e. to the partition of its coefficients and to its values.

We will exploit the fact that each partition defines a subspace of vectors, so that the feasible set can be written as a union of subspaces. Let $\mathcal{G} = (\mathcal{G}_1, \ldots, \mathcal{G}_Q)$ be a partition and define

$$S_{\mathcal{G}} = \{\, w = Zv : \; Z \in \mathcal{Z}_{\mathcal{G}}, \; v \in \mathbb{R}^Q \,\},$$

where $\mathcal{Z}_{\mathcal{G}}$ is the set of assignment matrices corresponding to $\mathcal{G}$. Since permuting the columns of $Z$ together with the coefficients of $v$ has no impact on the product $Zv$, the matrices in $\mathcal{Z}_{\mathcal{G}}$ are identical up to a permutation of their columns. Therefore $S_{\mathcal{G}}$ is a subspace and the corresponding assignment matrices are its different bases.

To a feasible vector $w$, we associate the partition of its values that has the least number of groups. This partition and its corresponding subspace are then uniquely defined and, denoting $\mathcal{P}_Q$ the set of partitions into at most $Q$ clusters, our problem can thus be written

$$\min \; \|y - Xw\|_2^2 \quad \text{s.t.} \quad w \in \bigcup_{\mathcal{G} \in \mathcal{P}_Q} S_{\mathcal{G}},$$

where the variable $w$ belongs to a union of subspaces.

We will write the projected gradient algorithm for this problem as a fixed-point iteration whose contraction factor depends on the singular values of the design matrix $X$ on collections of subspaces generated by the partitions. We only need to consider the largest subspaces in terms of the inclusion order, which are the ones generated by the partitions into exactly $Q$ groups. Denoting $\bar{\mathcal{P}}_Q$ this set of partitions, the collections of subspaces are defined as

Our main convergence result follows. Provided that the contraction factor is sufficiently small, it states convergence of the projected gradient scheme to the original vector, up to a constant error of the order of the noise.

Proposition 4.1.

Given that the projection on the feasible set is well defined, the projected gradient algorithm applied to the feature clustering regression problem above converges to the original $w^*$ as

where

and is any orthonormal basis of the subspace .

Proof.

To describe the algorithm we define and as the partitions associated respectively with and containing the least number of groups and

Orthonormal projections on , and are given respectively by . Therefore by definition , and .

We can now control convergence, with

(6)

In the second term, as and , we have

which is equivalent to

and this last statement implies

This means that we get from (6)

Now, assuming

(7)
(8)

and summing the latter inequality over , using that , we get

We bound and using the information of on all possible subspaces of or . For a subspace or , we define the orthonormal projection on it and any orthonormal basis of it. For (8) we get

which is independent of the choice of .

For (7), using that and , we have

which is independent of the choice of and yields the desired result.    

We now show that and derive from bounds on the singular values of on the collections and . Denoting and respectively the smallest and largest singular values of a matrix , we have

and assuming and that

for some , then [Vershynin, 2010, Lemma 5.38] shows

We now show that for isotropic independent sub-Gaussian data, these singular values depend on the number of subspaces in the collections, their dimension, and the number of samples $n$. The following proposition reformulates results of Vershynin [2010] to exploit the union-of-subspaces structure.

Proposition 4.2.

Let be the finite collections of subspaces defined above, let and . Assuming that the rows of the design matrix are isotropic independent sub-gaussian, we have

with probability larger than , where , is any orthonormal basis of and depend only on the sub-gaussian norm of the .

Proof.

Let us fix , with or and one of its orthonormal basis. By definition of , . The rows of are orthogonal projections of the rows of onto , so they are still independent sub-gaussian isotropic random vectors. We can therefore apply [Vershynin, 2010, Theorem 5.39] on . Hence for any , with probability at least , the smallest and largest singular values of the rescaled matrix written respectively and are bounded by

(9)

where and depend only on the sub-gaussian norm of the . Now taking the union bound on all subsets of , (9) holds for any with probability

Taking , we get for all ,

with probability at least , where . Therefore

Then [Vershynin, 2010, Theorem 5.39] yields

hence the desired result.    

Overall, Proposition 4.1 shows that the projected gradient method converges when the contraction factor is strictly less than one. When the observations are isotropic independent sub-Gaussian, this means

which is also

(10)

The first condition in (10) means that the subspaces must be low-dimensional; in our case their dimension is at most $Q$, and we naturally want the number of groups to be small. The second condition in (10) means that the structure (clustering here) must be restrictive enough, i.e. that the number of possible configurations is small enough.

As we show below, in the simple clustering case however, this number of subspaces is quite large, growing exponentially with the dimension $d$.

Proposition 4.3.

The number of subspaces in is lower bounded by

Proof.

The collection is indexed by the number of partitions into exactly $Q$ clusters, i.e. the Stirling number of the second kind. Standard bounds on the Stirling number of the second kind give

(11)

hence .    

This last proposition means that although the intrinsic dimension of our variable is of order $Q$, the number of subspaces is so large that the number of samples we need grows roughly like the number of features, so the clustering structure is not specific enough to reduce the number of samples required for our algorithm to converge. On the other hand, given this many samples, the algorithm provably converges to a clustered output, which helps interpretation.

As a comparison, classical sparse recovery problems have the same structure [Rao et al., 2012]: $s$-sparse vectors can be described as a union of the $\binom{d}{s}$ subspaces spanned by $s$ canonical basis vectors. However, in the sparse case the number of subspaces grows only as $\binom{d}{s} \leq (ed/s)^s$, which means that recovery requires far fewer samples than features.
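As a rough numerical check of this comparison, the sketch below counts configurations for both structures on toy dimensions: Stirling numbers of the second kind for partitions of $d$ coefficients into $Q$ clusters (computed with the standard recurrence) against binomial coefficients for $s$-sparse supports.

    from math import comb, log

    def stirling2(n, k):
        # Stirling number of the second kind via S(n, k) = k*S(n-1, k) + S(n-1, k-1).
        S = [[0] * (k + 1) for _ in range(n + 1)]
        S[0][0] = 1
        for i in range(1, n + 1):
            for j in range(1, min(i, k) + 1):
                S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
        return S[n][k]

    d, Q, s = 100, 5, 5                       # toy dimensions
    n_clustered = stirling2(d, Q)             # partitions into exactly Q clusters
    n_sparse = comb(d, s)                     # s-sparse supports
    # Log-counts drive the number of samples needed for recovery:
    print(log(n_clustered), log(n_sparse))    # roughly d*log(Q) versus s*log(d/s)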

4.3. Implementation and complexity

In our implementation we use a backtracking line search on the stepsize $\gamma$ that guarantees a decrease of the objective. At each iteration, if the candidate iterate obtained with the current stepsize decreases the objective value, we accept it and increase the stepsize by a constant factor larger than one. If it increases the objective value, we decrease the stepsize by a constant factor smaller than one, compute a new candidate, and iterate this scheme until the candidate decreases the objective value or the stepsize reaches a minimal stopping value. We observed better results with this line search than with a constant stepsize, in particular when the number of samples is small in the feature clustering setting.
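A sketch of one step of this scheme is given below; the increase and decrease factors and the minimal stepsize are hypothetical choices, and the function names are ours.

    def backtracking_projected_step(w, objective, gradient, project, step,
                                    inc=2.0, dec=0.5, step_min=1e-8):
        """One projected gradient step with a backtracking line search.

        objective, gradient: callables for f and its gradient;
        project: the k-means projection onto the clustered set.
        Returns the new iterate and the updated stepsize."""
        f_old = objective(w)
        g = gradient(w)
        while step > step_min:
            w_new = project(w - step * g)
            if objective(w_new) < f_old:     # accepted: increase the stepsize
                return w_new, step * inc
            step *= dec                      # rejected: shrink the stepsize and retry
        return w, step                       # stepsize too small, keep the iterate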

The complexity of Algorithm 3 is governed by the cost of the projection and the number of iterations until convergence. If approximated by k-means++, the projection step costs $O(T\,Q\,D)$, where $T$ is the number of alternating steps, $Q$ is the number of clusters, and $D$ is the product of the dimensions of $W$. When clustering features for regression, the dynamic program of Wang and Song [2011] solves the projection step exactly in $O(Q\,d^2)$ operations, and ours for $s$-sparse vectors, detailed in Section 5.1, also runs in time polynomial in $d$, $Q$ and $s$. We observed convergence of the projected gradient algorithm in fewer than 100 iterations, which makes it highly scalable. As for the convex relaxation, the number of clusters is chosen using prior knowledge of the problem.

5. Sparse and clustered linear models

Algorithm 3 can also be applied when a sparsity constraint is added to the linear model, provided the corresponding projection can still be computed. This scenario arises for instance in text prediction, when we want both to select a few relevant words and to group them to reduce dimensionality. Formally, the problem of clustering features (2) then becomes

$$\min \; \sum_{i=1}^{n} \ell\big(y_i, w^T x_i\big) + \lambda\, \Omega(w) \quad \text{s.t.} \quad w = UZv, \; U \in \{0,1\}^{d \times s}, \; Z \in \{0,1\}^{s \times Q}, \; Z\mathbf{1} = \mathbf{1},$$

in the variables $w \in \mathbb{R}^d$, $U$, $Z$ and $v \in \mathbb{R}^Q$, where $U$ is a matrix of canonical basis vectors that selects the $s$ non-zero coefficients of $w$ and $Z$ is an assignment matrix grouping these $s$ variables into $Q$ clusters.

We develop a new dynamic program to compute the projection onto $s$-sparse vectors whose non-zero coefficients are clustered into $Q$ groups, and apply our previous theoretical analysis to prove convergence of the projected gradient scheme on random instances.
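Before turning to the dynamic program, a brute-force reference implementation helps fix ideas: for tiny instances, one can enumerate every support of size at most $s$ and every contiguous partition of the sorted selected values (optimal one-dimensional clusters are contiguous in sorted order), which yields the exact projection at exponential cost; the dynamic program developed in this section computes the same projection in polynomial time. The sketch below is only such a brute-force reference, not the algorithm of this paper.

    import numpy as np
    from itertools import combinations

    def project_sparse_clustered_bruteforce(x, s, Q):
        """Exact projection of x onto vectors with at most s non-zero coefficients
        taking at most Q distinct values, by exhaustive enumeration (tiny d only)."""
        d = len(x)
        best_cost, best_w = (x ** 2).sum(), np.zeros(d)        # candidate w = 0
        for size in range(1, s + 1):
            for support in combinations(range(d), size):
                idx = np.array(support)
                order = np.argsort(x[idx])
                idx, vals = idx[order], x[idx][order]          # support sorted by value
                off_cost = (x ** 2).sum() - (vals ** 2).sum()  # cost of zeroed coordinates
                # Optimal 1D clusters are contiguous in sorted order: enumerate splits.
                for n_blocks in range(1, min(Q, size) + 1):
                    for splits in combinations(range(1, size), n_blocks - 1):
                        w, cost, start = np.zeros(d), off_cost, 0
                        for block in np.split(vals, list(splits)):
                            w[idx[start:start + len(block)]] = block.mean()
                            cost += ((block - block.mean()) ** 2).sum()
                            start += len(block)
                        if cost < best_cost:
                            best_cost, best_w = cost, w
        return best_w

    # Toy usage (hypothetical): keep at most 3 coefficients with at most 2 distinct values.
    x = np.array([0.1, 2.0, 2.1, -3.0, 0.2])
    print(project_sparse_clustered_bruteforce(x, s=3, Q=2))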

5.1. Projection on $s$-sparse $Q$-clustered vectors

Let $\mathcal{W}_{s,Q}$ be the set of $s$-sparse vectors whose non-zero values can be partitioned into at most $Q$ groups. Given $x \in \mathbb{R}^d$, we are interested in its projection onto $\mathcal{W}_{s,Q}$, which we formulate as a partitioning problem. For a feasible $w \in \mathcal{W}_{s,Q}$, with support $S$, $|S| \leq s$, and a partition $(\mathcal{G}_1, \ldots, \mathcal{G}_Q)$ of its non-zero values such that $w_i = v_q$ if and only if $i \in \mathcal{G}_q$, the distance between $x$ and $w$ is given by

The projection is solution of