# Streaming Label Learning for Modeling Labels on the Fly

###### Abstract

It is challenging to handle a large volume of labels in multi-label learning. However, existing approaches explicitly or implicitly assume that all the labels in the learning process are given, an assumption easily violated in changing environments. In this paper, we define and study streaming label learning (SLL), i.e., labels arrive on the fly, to model newly arrived labels with the help of the knowledge learned from past labels. The core of SLL is to explore and exploit the relationships between new labels and past labels, and then inherit these relationships into the hypotheses of labels to boost the performance of new classifiers. Specifically, we use label self-representation to model the label relationship, and SLL is divided into two steps: a regression problem and an empirical risk minimization (ERM) problem. Both problems are simple and can be efficiently solved. We further show that SLL can generate a tighter generalization error bound for new labels than the general ERM framework with trace norm or Frobenius norm regularization. Finally, we conduct extensive experiments on various benchmark datasets to validate the new setting. The results show that SLL can effectively handle constantly emerging new labels and provides excellent classification performance.

## 1 Introduction

Multi-label learning has achieved a great deal and holds broad prospects owing to its successful application to real-world problems, such as text categorization [1, 2], gene function classification [3, 4] and image/video annotation [5, 6]. In the multi-label learning problem, each example can be associated with multiple, non-exclusive labels, and the goal of learning is to allocate the most relevant subset of labels to a new example. In the era of big data, the size of the label set is constantly increasing. For example, there are already millions of image tags in Flickr and categories in Wikipedia. Hence, the research challenge is to design scalable yet effective multi-label learning algorithms, capable of reconciling the conflict between the prodigious number of labels and limited computational resources.

A straightforward approach for multi-label learning is 1-vs-all or Binary Relevance (BR) [7], which learns an independent classifier for each label. However, the constant increase in the size of the label set makes it computationally infeasible. The prevalent technique to deal with the label proliferation problem is to shrink the large label space by embedding the original high-dimensional label vectors into low-dimensional representations. Different projection mechanisms can be adopted for transforming label vectors, including compressed sensing [8], principal component analysis [9], canonical correlation analysis [10], singular value decomposition [11] and Bloom filters [12]. The predictions made in the low-dimensional label space are then transformed back onto the original high-dimensional label space via a decomposition matrix [13, 14] or the k-nearest neighbor (kNN) technique [15]. Additionally, some works [16, 17] attempt to select a small yet sensible subset of labels to represent the entire label set, and then learn hypotheses regarding this smaller label set.

The aforementioned learning methods successfully remedy the label proliferation problem and have achieved promising performance in different multi-label tasks. However, these methods may be restricted in two aspects. (a) Nearly all of these algorithms explicitly or implicitly assume that all the labels in the learning process are given at once, and thus they can only tackle a static label setting, which could be easily violated in changing environments. In practice, the volume of labels increases rapidly as the understanding of data deepens, invalidating the static label setting. For example, in social networks, users usually belong to different groups or clubs according to their individual characteristics or interests. With the fast development of information technologies and convenient information transmission, new interest groups or clubs constantly emerge, and they should be timely and accurately recommended to prospective members. In event detection, it is urgent to timely and effectively investigate an emerging new event that is excluded from earlier detection systems; ideally, we would immediately integrate new events into the previous detection system by borrowing knowledge from past events. Therefore, the involvement of constantly emerging labels is very significant for multi-label learning. (b) Although there are tricks to adapt classical multi-label algorithms to handle emerging new labels, they have various disadvantages. More precisely, learning new labels independently would neglect the knowledge harvested from past labels, while integrating new labels and past labels to re-train a new multi-label model requires a huge computational cost, which decreases the scalability of the multi-label system, especially in large-scale scenarios. As a result, it is challenging to efficiently and accurately model the emergence of new labels.

Targeting both problematic aspects, we define and study streaming label learning (SLL), i.e., labels arrive on the fly, to learn a model for the newly arrived k\ (\geq 1) labels given a well-trained model for the existing m (usually large) labels within the multi-label learning context. The proposed streaming label learning algorithm is equipped with the capability of modeling newly arrived labels with the help of the knowledge learned from past labels, so as to respond timely and effectively to the changing demands of environments. The core of SLL is to explore and exploit the relationships between new labels and past labels. Instead of decomposing the label matrix into label vectors in terms of different data points [7, 13, 15], we examine it from the perspective of the label space and represent each label through its response values on examples. Based on the idea of “labels represent themselves”, the label structure exploited for label self-representation stands for relationships between labels, which can be inherited by the hypotheses of labels as well. Given the relationships between past labels and newly arrived labels, we can thus easily model the new labels with the help of the well-trained multi-label model on the large number of past labels. We theoretically show that the generalization ability of hypotheses of newly arrived labels can be largely improved with the knowledge harvested from past labels. Experimental results on large-scale real-world datasets demonstrate the significance of studying streaming label learning and the effectiveness of the proposed algorithm in timely and effectively learning new labels.

The rest of the paper is organized as follows. In Section 2, we formulate the streaming label learning problem and propose the corresponding mathematical model. The optimization process is elaborated in Section 3 and theoretical analysis is given in Section 4. In Section 5, we present and analyze the experimental results, with concluding remarks stated in Section 6. All detailed proofs are shown in Appendix section.

## 2 Problem Formulation

In this section we present a streaming label learning mechanism to handle the emerging new labels with the help of the knowledge learned from past labels, which is able to explore the previously well-trained multi-label model over a large number of labels and get rid of intensive computation cost. The proposed algorithm seeks to exploit label relationship via label self-representation, which has an important influence on the hypotheses of labels.

We first state multi-label learning (MLL) and introduce frequently used notations. Let the given training data set be denoted by \mathcal{D}=\{(\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n})\}, where \mathbf{x}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{d} is the input feature vector and \mathbf{y}_{i}\in\mathcal{Y}\subseteq\{-1,1\}^{m} is the corresponding label vector. Moreover, y_{ij}=1 iff the j-th label is assigned to the example \mathbf{x}_{i}, and y_{ij}=-1 otherwise. Let \mathbf{X}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\in\mathbb{R}^{d\times n} be the data matrix and \mathbf{Y}=[\mathbf{y}_{1},...,\mathbf{y}_{n}]\in\{-1,1\}^{m\times n} be the label matrix. Given the dataset \mathcal{D}, multi-label learning aims to learn a function f:\mathbb{R}^{d}\rightarrow\{-1,1\}^{m} that generates the prediction on the label vector for a test point.

### 2.1 Label Relationship

The probe of label relationships has been demonstrated to be critical and beneficial in boosting the performance of multi-label learning [18, 19, 20]. Given the label matrix \mathbf{Y}=[\mathbf{y}_{1},...,\mathbf{y}_{n}]\in\{-1,1\}^{m\times n}, where Y_{ij} indicates the response of the i-th label on example \mathbf{x}_{j}, most works [7, 13, 15] treat \mathbf{Y} from the perspective of columns (examples) and investigate different techniques to transform these example-wise vectors. By contrast, we propose to examine the label matrix from the label perspective, namely, considering the row-wise vectors as the representations of different labels. This enables us to specify the abstract concept of a label via its responses on n examples. To simplify the mathematical notation, we equivalently examine the columns of \mathbf{Y}’s transpose, denoted as \mathbf{Y}^{*}=\mathbf{Y}^{T}=[\mathbf{y}^{*}_{1},...,\mathbf{y}^{*}_{m}]\in\{-1,1\}^{n\times m}.

In the following, we proceed to introduce two important assumptions for streaming label learning problem.

\bullet Label Self-representation. Given m labels indexed by l_{i}\ (i=1,...,m), each label l_{i} is represented by the vector \mathbf{y}_{i}^{*}. We employ a valuable assumption, “labels represent themselves”, to model the label relationship. Specifically, a label is assumed to be represented as a combination of the other labels. For example, a linear representation is utilized for a given label l_{i},

\mathbf{y}_{i}^{*}=\sum_{j\neq i}s_{i}^{j}\mathbf{y}_{j}^{*} | (1) |

where \mathbf{s}_{i}=[s_{i}^{1},...,s_{i}^{m}]^{T} is the coefficient vector to reconstruct label l_{i} and s_{i}^{i}=0 excludes l_{i} itself in reconstruction. Moreover, if s_{i}^{j}>0, then label l_{j} has positive influence on label l_{i} in Eq.(1), while s_{i}^{j}<0 implies that label l_{j} has negative influence on label l_{i}. \mathbf{s}_{i} is encouraged to be sparse, so that label l_{i} only has connections with several labels.

\bullet Hypotheses of Labels. Multi-label learning aims to learn better hypotheses of labels with the help of the relationships between labels. A simple yet effective approach to formulate the multi-label decision process \mathbb{R}^{d}\rightarrow\{-1,1\}^{m} is the function f(\mathbf{x};W)=W^{T}\mathbf{x}=[\bm{w}_{1}^{T}\mathbf{x},...,\bm{w}_{m}^{T}\mathbf{x}]^{T}. The multi-label classifier W can thus be regarded as the composition of classifiers for different labels \{\bm{w}_{1},...,\bm{w}_{m}\}, where \bm{w}_{i} is the classifier w.r.t. label l_{i}. We assume that the relationships between labels can be inherited by the classifiers of different labels. Given label l_{i} represented by its related labels \mathcal{N}_{i}=\{l_{j}|s_{i}^{j}\neq 0,j=1,...,m\} according to Eq.(1), the classifier \bm{w}_{i} w.r.t. label l_{i} can thus be represented by the classifiers of those related labels using the same coefficient vector \mathbf{s}_{i},

\bm{w}_{i}=\sum_{j\neq i}s_{i}^{j}\bm{w}_{j}. | (2) |

Broadly speaking, label relationship acts as a regularization of multi-label classifier W, which encourages W to be represented by itself as well.

The linear self-representation of labels is indeed a simple yet effective assumption within multi-label learning. There usually exist significant dependencies among labels in multi-label learning, and as real-world datasets grow, these dependencies are strengthened as well. Thus, among a great many labels, it is easy for a specific label to find a group of “neighboring” labels involved in its linear representation. (We validate the linear self-representation of labels empirically in Section 5.) Note that our operation over the label matrix resembles the label selection techniques in [16], which also assume linear self-representation of labels, but the two are developed from distinct perspectives. Label selection focuses on selecting a shared subset of labels to recover all the given labels. The proposed label self-representation, by contrast, aims to accurately represent the label currently being processed, without being distracted by the reconstruction results of other labels. Besides, we propagate the label relationships exploited through label self-representation into the process of learning multi-label classifiers, instead of independently learning classifiers for the selected label subset. It is instructive to note that the label self-representation operation implicitly encourages W to be low rank, which is also a widely used assumption in multi-label learning [14, 13].

### 2.2 Streaming Label Learning

A conventional multi-label model, well trained on data associated with a large number of labels, is difficult to adapt to newly arrived labels without computationally intensive re-training; moreover, the label relationships discovered by existing methods cannot be straightforwardly extended to emerging new labels.

This section details the proposed streaming label learning (SLL) mechanism, designed to accommodate emerging new labels. Basically, SLL consists of two steps. For the k newly arrived labels, we first exploit their relationships with the m past labels, and then learn their corresponding hypotheses with the help of the previously well-trained model for the past labels and the exploited label relationships.

We first assume there is only one newly-arrived label (k=1) and then extend it into a mini-batch setting (k\geq 1). Denote a well-trained multi-label learning model over m labels as W_{m}=[\bm{w}_{1},...,\bm{w}_{m}], and the label matrix of n examples aligned in label dimension is denoted as Y_{m}^{*}=[\mathbf{y}_{1}^{*},...,\mathbf{y}_{m}^{*}]. Besides, given matrix S_{m} describing label relationship, we then have Y^{*}_{m}\approx Y^{*}_{m}S_{m} and W_{m}\approx W_{m}S_{m}.

Streaming label learning with one label. Given a new label l_{m+1} represented by the response vector \mathbf{y}_{m+1}^{*} on n examples, we assume that it can be represented by the past m labels \mathbf{y}^{*}_{m+1}=\sum_{j=1}^{m}s_{m+1}^{j}\mathbf{y}^{*}_{j}, where \mathbf{s}_{m+1} is the coefficient vector of the new label l_{m+1}, and can be determined by solving the following optimization problem:

\operatorname*{\arg\,\min}_{\mathbf{s}_{m+1}\in\mathbb{R}^{m}}\quad\frac{1}{2}\left\lVert\mathbf{y}^{*}_{m+1}-Y^{*}_{m}\mathbf{s}_{m+1}\right\rVert_{2}^{2}+\lambda\left\lVert\mathbf{s}_{m+1}\right\rVert_{1}, | (3) |

where the least squares acts as a residual penalty for the label representation and \lambda>0 encourages sparsity. In this way, we can obtain the representation \mathbf{s}_{m+1} of the new label l_{m+1}.
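As a concrete illustration, Problem (3) is an ordinary Lasso and can be handed to any off-the-shelf \ell_{1} solver (the paper lists ADMM, LARS, and others). The sketch below uses scikit-learn's coordinate-descent Lasso; the function name and the rescaling of \lambda to sklearn's `alpha` convention are our assumptions, not the paper's.

```python
# Sketch of solving Eq.(3) with scikit-learn's Lasso. Y_past has shape
# (n, m): column j holds the response vector y*_j of past label j.
import numpy as np
from sklearn.linear_model import Lasso

def represent_new_label(Y_past, y_new, lam=0.1):
    """Solve s = argmin 0.5*||y_new - Y_past s||_2^2 + lam*||s||_1."""
    n = Y_past.shape[0]
    # sklearn minimizes (1/(2n))*||y - Xs||^2 + alpha*||s||_1,
    # so rescale: alpha = lam / n recovers the objective of Eq.(3).
    lasso = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10000)
    lasso.fit(Y_past, y_new)
    return lasso.coef_  # sparse coefficient vector s_{m+1}
```

With a small \lambda the recovered coefficients concentrate on the few past labels that best reconstruct the new one, matching the sparsity intent of Eq.(3).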

The label relationship between the new label and past labels can provide us helpful information to learn the hypothesis regarding the new label. According to hypotheses of labels in Eq.(2), we have

\bm{w}_{m+1}=W_{m}\mathbf{s}_{m+1} | (4) |

for new label l_{m+1}. Eq.(4) actually provides prior information for the new classifier \bm{w}_{m+1} to be learned. By further considering its prediction error, \bm{w}_{m+1} can be learned by minimizing the following objective function:

J(\bm{w}_{m+1})=\sum_{i=1}^{n}\ell(y^{*}_{m+1,i},\mathbf{x}_{i}^{T}\bm{w}_{m+1})+\frac{\beta}{2}\left\lVert\bm{w}_{m+1}-W_{m}\mathbf{s}_{m+1}\right\rVert_{2}^{2} | (5) |

where \beta>0 is a regularization parameter and \ell(\cdot,\cdot) is a loss function measuring the discrepancy between the ground-truth label and the prediction. Although SLL can adapt to other loss functions, we choose the \ell_{2} loss in our experiments for simplicity, since the \ell_{2} loss has been shown to stand out in most multi-label classification tasks compared to other loss functions, such as the logistic loss and the L_{2}-hinge loss [13]. Therefore, the new classifier can be learned subsequently, yet integrated with the already-learned knowledge of past labels.
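To make this step concrete, here is a minimal sketch of minimizing Eq.(5) with the \ell_{2} loss via L-BFGS from scipy. The function name, the warm start at the prior, and the 1/2 scaling of the loss are our conventions (chosen so that the optimum coincides with the closed form in Eq.(9)).

```python
# Minimal sketch: minimize Eq.(5) with squared loss using L-BFGS.
import numpy as np
from scipy.optimize import minimize

def learn_new_classifier(X, y_new, W_m, s_new, beta=1.0):
    """X: (d, n) features; y_new: (n,) responses of the new label;
    W_m: (d, m) past classifiers; s_new: (m,) coefficients from Eq.(3)."""
    prior = W_m @ s_new  # W_m s_{m+1}: knowledge transferred from past labels

    def objective(w):
        resid = y_new - X.T @ w            # per-example prediction errors
        reg = w - prior                    # deviation from the inherited prior
        J = 0.5 * (resid @ resid) + 0.5 * beta * (reg @ reg)
        grad = -X @ resid + beta * reg
        return J, grad

    # Warm-start at the prior; the problem is a convex quadratic, so
    # L-BFGS converges to the unique minimizer.
    res = minimize(objective, x0=prior.copy(), jac=True, method="L-BFGS-B")
    return res.x
```

The iterative form is useful when a non-quadratic loss is substituted, in which case only `objective` changes.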

SLL with one new label can be naturally extended into a mini-batch setting, where a mini-batch of new labels instead of one single label is processed at a time.

Mini-batch extension. Given a batch of k new labels l_{new}=\{l_{m+1},...,l_{m+k}\} represented by vectors Y^{*}_{new}=\{\mathbf{y}^{*}_{m+1},...,\mathbf{y}^{*}_{m+k}\}, the challenging part is that we need to consider not only the relationships between new labels and past labels, but also those among new labels.

Suppose that each new label is reconstructed with the help of all the other labels (both new and past), i.e., \mathbf{y}^{*}_{m+i}=\sum_{j\neq m+i}s_{m+i}^{j}\mathbf{y}^{*}_{j}. According to Eq.(1), we can obtain

Y^{*}_{new}=Y^{*}_{m+k}S_{new} | (6) |

with Y^{*}_{m+k}=[Y^{*}_{m},Y^{*}_{new}]. The representation of new labels can then also be solved through the following optimization problem, analogous to Eq.(3),

\begin{array}[]{rl}\arg\displaystyle{\min_{S_{new}}}&\frac{1}{2}\left\lVert Y^{*}_{new}-Y^{*}_{m+k}S_{new}\right\rVert_{F}^{2}+\lambda\left\lVert S_{new}\right\rVert_{1,1}\\ s.t.&(S_{new})_{m+i,i}=0,\ \forall i=1,...,k,\\ \end{array} | (7) |

where (S_{new})_{m+i,i}=0 is to exclude each individual label from its reconstruction. As a result, the representation of new labels can be obtained. Moreover, considering the partitioning of S_{new}=[S_{new}^{(1)};S_{new}^{(2)}], we have Y^{*}_{new}=Y^{*}_{m+k}S_{new}=Y^{*}_{m}S_{new}^{(1)}+Y^{*}_{new}S_{new}^{(2)}. Thus we can observe that S_{new}^{(1)} corresponds to the representation from m past labels while S_{new}^{(2)} means the interactive representation of the k new labels, which coheres with the assumption we make.
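Since the \left\lVert\cdot\right\rVert_{1,1} penalty in Problem (7) separates over columns, each new label's coefficient vector can be found by an independent Lasso in which that label's own column is removed from the dictionary, which enforces the constraint (S_{new})_{m+i,i}=0. A hedged sketch under that observation (function names and solver choice are ours):

```python
# Column-wise sketch of Problem (7): one Lasso per new label, with the
# label's own column excluded so its diagonal coefficient stays zero.
import numpy as np
from sklearn.linear_model import Lasso

def represent_new_labels(Y_all, m, k, lam=0.1):
    """Y_all: (n, m+k) responses of past and new labels stacked columnwise."""
    n = Y_all.shape[0]
    S_new = np.zeros((m + k, k))
    for i in range(k):
        keep = [j for j in range(m + k) if j != m + i]   # drop own column
        lasso = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10000)
        lasso.fit(Y_all[:, keep], Y_all[:, m + i])
        S_new[keep, i] = lasso.coef_                     # diagonal entry stays 0
    return S_new  # rows 0..m-1 give S_new^{(1)}, rows m..m+k-1 give S_new^{(2)}
```

The returned matrix is already partitioned as in the text: the top block is the representation from past labels and the bottom block the interaction among new labels.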

After obtaining S_{new}, it can also be employed in the process of learning the new classifiers, W_{new}=[W_{m},W_{new}]S_{new}=W_{m}S_{new}^{(1)}+W_{new}S_{new}^{(2)}, where W_{new} is the parameter matrix for new labels. The objective function is then formulated as

\tilde{J}(W_{new})=\frac{1}{2}\sum_{i=1}^{n}\left\lVert\mathbf{y}_{new}^{i}-W_{new}^{T}\mathbf{x}_{i}\right\rVert_{2}^{2}+\frac{\beta}{2}\left\lVert W_{new}(\mathbf{I}-S_{new}^{(2)})-W_{m}S_{new}^{(1)}\right\rVert_{F}^{2} | (8) |

Note that the adopted loss function in Eq.(8) is decomposable, namely, \left\lVert\mathbf{y}_{new}^{i}-W_{new}^{T}\mathbf{x}_{i}\right\rVert_{2}^{2}=\sum_{j=1}^{k}(y_{new}^{ij}-\bm{w}_{m+j}^{T}\mathbf{x}_{i})^{2}, where y_{new}^{ij} is the j-th new label value of the i-th example \mathbf{x}_{i}. Besides, when S_{new}^{(2)}=\bf{0}, the relationships among new labels are neglected and only the relationships between new labels and past labels are investigated. This coheres with the single-new-label scenario, which can thus be viewed as a special case with batch size 1.

The framework of the proposed algorithm is summarized in Algorithm 1.

## 3 Optimization

In this section, we show the details of optimization in SLL. Basically, the optimization consists of two parts, i.e., solving for the label representation S and for the classifier parameter matrix W. Additionally, we introduce a natural method to initialize the proposed streaming label learning algorithm.

### 3.1 Optimizing the New Label Representation S

For a single new label (see Problem (3)), the optimization is unconstrained but has a non-smooth regularization term. In fact, it is an \ell_{1}-regularized linear least-squares problem, i.e., the Lasso. This problem has been thoroughly investigated in the literature, and a great many algorithms exist to solve it efficiently, such as the alternating direction method of multipliers (ADMM) [21], least angle regression (LARS) [22], grafting [23] and the feature-sign search algorithm [24]. There are also off-the-shelf toolboxes or packages solving it, e.g. CVX [25, 26], TFOCS [27] and SPAMS [28]. Note that the cost of Problem (3) depends on the dimension of the label set. To handle the label proliferation problem, we propose to apply a clustering trick [15] to all labels, select an appropriate number of labels from each cluster, and compose a relatively small label dictionary in order to improve efficiency.

### 3.2 Optimizing the Classifier W for New Labels

\bm{w}_{m+1} in Problem (5). Various gradient-based or subgradient-based methods can be adopted to minimize Eq.(5). Different from [13], which seeks to process a large number of labels all at once and has to turn to cheap methods such as the conjugate gradient (CG) method, Eq.(5) provides a practical solution to learn multiple labels in a streaming manner, and thus largely alleviates the burden that a prodigious number of labels places on machine load and computational cost. We only need to solve a d-dimensional optimization problem, which enables us to adopt more accurate methods, such as L-BFGS, with little computational cost. With the squared loss, we can even obtain a closed-form solution of Eq.(5),

\bm{w}_{m+1}\leftarrow(\mathbf{X}\mathbf{X}^{T}+\beta\mathbf{I})^{-1}(\mathbf{X}\mathbf{y}_{m+1}^{*}+\beta W_{m}\mathbf{s}_{m+1}), | (9) |

which only requires inverting a d\times d matrix instead of a dr\times dr one as in [13], where r is the upper bound on the rank.
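For reference, the update in Eq.(9) is a few lines of numpy; we solve the d\times d linear system directly rather than forming an explicit inverse, a standard numerically stable choice (the function name is ours):

```python
# Direct numpy implementation of the closed-form update in Eq.(9).
import numpy as np

def closed_form_update(X, y_new, W_m, s_new, beta=1.0):
    """X: (d, n); y_new: (n,); W_m: (d, m); s_new: (m,). Returns w_{m+1}."""
    d = X.shape[0]
    A = X @ X.T + beta * np.eye(d)          # (X X^T + beta I), a (d, d) system
    b = X @ y_new + beta * (W_m @ s_new)    # X y*_{m+1} + beta W_m s_{m+1}
    return np.linalg.solve(A, b)            # solve A w = b instead of inverting A
```

Since the objective is strictly convex, the solution is exactly the stationary point of Eq.(5) with the squared loss.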

W_{new} in Problem (8). Similarly, Eq.(8) can also be solved by various gradient-based methods. However, the bottleneck is the calculation of the gradient and the corresponding Hessian matrix, since their cost may be extremely large. Let \bm{w}_{new}=vec(W_{new})\in\mathbb{R}^{dk}, where vec(\cdot) is the vectorization of a matrix. Since the loss function \ell is separable over each \bm{w}_{m+i}, the gradient and Hessian matrix w.r.t. \bm{w}_{new} can be calculated from the gradient and Hessian matrix over each \bm{w}_{m+i}. Besides, denoting D_{new}=(\mathbf{I}-S_{new}^{(2)})\otimes\mathbf{I} and \mathbf{z}_{new}=vec(W_{m}S_{new}^{(1)}), the residual penalty \frac{1}{2}\left\lVert W_{new}(\mathbf{I}-S_{new}^{(2)})-W_{m}S_{new}^{(1)}\right\rVert_{F}^{2} can be rewritten as \frac{1}{2}\left\lVert D_{new}^{T}\bm{w}_{new}-\mathbf{z}_{new}\right\rVert_{2}^{2}, whose gradient and Hessian matrix are easy to calculate. Let \ell^{\prime}(a,b)=\frac{\partial}{\partial b}\ell(a,b) and \ell^{\prime\prime}(a,b)=\frac{\partial^{2}}{\partial b^{2}}\ell(a,b); then the gradient and Hessian-vector multiplication w.r.t. \bm{w}_{new} are:

\displaystyle\nabla\tilde{J}(\bm{w}_{new})=\mbox{stack}\left\{\sum_{i=1}^{n}\ell^{\prime}(y_{new}^{ij},\bm{w}_{m+j}^{T}\mathbf{x}_{i})\mathbf{x}_{i}\right\}_{j=1}^{k}+\beta D_{new}(D_{new}^{T}\bm{w}_{new}-\mathbf{z}_{new})

\displaystyle\qquad\quad=vec\left(\mathbf{X}G+\beta[W_{new}(\mathbf{I}-S_{new}^{(2)})-W_{m}S_{new}^{(1)}](\mathbf{I}-S_{new}^{(2)})^{T}\right), | (10) |

\displaystyle\nabla^{2}\tilde{J}(\bm{w}_{new})\mathbf{z}=\mbox{stack}\left\{\sum_{i=1}^{n}\ell^{\prime\prime}(y_{new}^{ij},\bm{w}_{m+j}^{T}\mathbf{x}_{i})\mathbf{x}_{i}\mathbf{x}_{i}^{T}\mathbf{z}_{j}\right\}_{j=1}^{k}+\beta D_{new}D_{new}^{T}\mathbf{z}

\displaystyle\qquad\quad=vec\left(\mathbf{X}H+\beta Z(\mathbf{I}-S_{new}^{(2)})(\mathbf{I}-S_{new}^{(2)})^{T}\right), | (11) |

where G_{ij}=\ell^{\prime}(y_{new}^{ij},\bm{w}_{m+j}^{T}\mathbf{x}_{i}), \mathbf{z}=vec(Z)=vec([\mathbf{z}_{1},...,\mathbf{z}_{k}]), H_{ij}=\ell^{\prime\prime}(y_{new}^{ij},\bm{w}_{m+j}^{T}\mathbf{x}_{i})\mathbf{x}_{i}^{T}\mathbf{z}_{j}, and stack\{\cdot\} means stacking the vectors in the set to form a longer vector in ascending index order. As a result, the gradient and Hessian-vector multiplication can be efficiently obtained using Eqs.(10)-(11). For the \ell_{2} loss function, the key factors G and H in Eqs.(10)-(11) can be calculated even more easily,

G=\mathbf{X}^{T}W_{new}-Y_{new}^{*};\quad H=\mathbf{I}. | (12) |
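With the \ell_{2} simplification of Eq.(12), the gradient in Eq.(10) reduces to a few matrix products. The sketch below (names are ours) returns the gradient in matrix form, ready to be vectorized and fed to a first-order solver such as CG or L-BFGS:

```python
# Matrix-form gradient of Eq.(8) under the squared loss, following
# Eq.(10) with G = X^T W_new - Y*_new from Eq.(12).
import numpy as np

def sll_gradient(X, Y_new, W_new, W_m, S1, S2, beta):
    """X: (d, n); Y_new: (n, k); W_new: (d, k); W_m: (d, m);
    S1 = S_new^{(1)}: (m, k); S2 = S_new^{(2)}: (k, k)."""
    G = X.T @ W_new - Y_new                   # Eq.(12): squared-loss G
    R = np.eye(S2.shape[0]) - S2              # I - S_new^{(2)}
    penalty = (W_new @ R - W_m @ S1) @ R.T    # gradient of the residual penalty
    return X @ G + beta * penalty             # matrix form of Eq.(10)
```

A finite-difference check on the objective of Eq.(8) confirms the expression entry by entry.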

In SLL, k is usually small, and thus we can turn to more refined techniques, such as exact line search methods. However, since Problem (8) is of size d\times k, we propose to adopt cheap methods, such as the Conjugate Gradient (CG) method, when the feature dimension d is very large.

### 3.3 An Initialization Proposal of W_{m} for Past Labels

Optimization and implementation of SLL require an already well-trained model W_{m} for the past m labels, which is critical for the performance on the k new labels. In real applications, we simply utilize an already obtained W_{m} as the input of SLL to model the k new labels, without retraining on all m+k labels.

Furthermore, although SLL is designed for modeling new labels, one additional merit is that it also provides a solution to the memory-limited multi-label learning problem. Almost all existing MLL methods need to load the whole feature and label data into memory, which can be fairly restrictive on low-end computation devices since datasets tend to grow larger and larger. Fortunately, due to the two separate steps of SLL, we can load the label data and the feature data into memory successively, since they dominate the memory overhead of the two steps respectively. In this way, SLL can essentially halve the memory needed for training MLL, while still taking the dependencies among labels into account.

In this case, based on the two assumptions in Section 2.1, one practical proposal for learning the initial classifier W_{m} over past m labels is in a similar approach as that for SLL, by minimizing the following objective:

\mathcal{J}(W_{m},S_{m})=\frac{1}{2}\sum_{i=1}^{n}\left\lVert\mathbf{y}_{i}-W_{m}^{T}\mathbf{x}_{i}\right\rVert_{2}^{2}+\lambda_{1}\left\lVert S_{m}\right\rVert_{1,1}+\frac{\lambda_{2}}{2}\left\lVert W_{m}-W_{m}S_{m}\right\rVert_{F}^{2}+\frac{\lambda_{3}}{2}\left\lVert Y^{*}_{m}-Y^{*}_{m}S_{m}\right\rVert_{F}^{2} | (13) |

where \left\lVert S_{m}\right\rVert_{1,1}=\sum_{i=1}^{m}\left\lVert\mathbf{s}_{i}\right\rVert_{1} promotes the sparsity of S_{m}=[\mathbf{s}_{1},...,\mathbf{s}_{m}]\in\mathbb{R}^{m\times m}, and \lambda_{i}>0\ (i=1,2,3) are the weight parameters. Usually, we expect the objective function to be minimized under a small label reconstruction error, so much weight (i.e., a large \lambda_{3}) should be imposed on the \frac{1}{2}\left\lVert Y^{*}_{m}-Y^{*}_{m}S_{m}\right\rVert_{F}^{2} term in Eq.(13).

Problem (13) is basically solved with an alternating iteration strategy, i.e., fixing one variable and optimizing the other until convergence. Solving for S_{m} is still a Lasso problem, which can be handled by the same methods as Problem (7). Solving for W_{m} can be viewed as a special case of Problem (8) with S_{new}^{(1)}=\bm{0}, where the problem is of size d\times m. In this case, especially for large-scale labels, we may perform cheap updates and obtain a good approximate solution. For example, the Conjugate Gradient (CG) method can be employed, based on Eqs.(10) and (11), to significantly reduce the computational complexity.
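The alternating scheme can be sketched as follows. For fixed S_{m}, setting the W-gradient of Eq.(13) to zero gives the Sylvester equation \mathbf{X}\mathbf{X}^{T}W+\lambda_{2}W(\mathbf{I}-S)(\mathbf{I}-S)^{T}=\mathbf{X}Y^{*}_{m}, which scipy can solve directly; for fixed W_{m}, the S-step is a column-wise Lasso on a stacked system. All names and the choice of solvers here are our assumptions, not prescriptions of the paper.

```python
# Hedged sketch of alternating minimization for Problem (13):
# W-step via a Sylvester solve, S-step via column-wise Lasso with the
# label's own column excluded (so diag(S) stays zero, as in Eq.(1)).
import numpy as np
from scipy.linalg import solve_sylvester
from sklearn.linear_model import Lasso

def init_past_model(X, Y_star, lam1=0.01, lam2=1.0, lam3=10.0, n_iters=5):
    """X: (d, n) features; Y_star: (n, m) label responses. Returns (W_m, S_m)."""
    m = Y_star.shape[1]
    S = np.zeros((m, m))
    for _ in range(n_iters):
        # W-step: solve A W + W B = Q with A = X X^T, B = lam2 (I-S)(I-S)^T.
        R = np.eye(m) - S
        W = solve_sylvester(X @ X.T, lam2 * (R @ R.T), X @ Y_star)
        # S-step: stack the W- and Y-reconstruction terms into one design
        # matrix so each column of S solves an ordinary Lasso.
        D = np.vstack([np.sqrt(lam2) * W, np.sqrt(lam3) * Y_star])
        for i in range(m):
            keep = [j for j in range(m) if j != i]      # exclude own column
            lasso = Lasso(alpha=lam1 / D.shape[0], fit_intercept=False,
                          max_iter=5000)
            lasso.fit(D[:, keep], D[:, i])
            S[keep, i] = lasso.coef_
            S[i, i] = 0.0
    return W, S
```

A large `lam3` relative to `lam2`, as suggested for Eq.(13), makes the S-step emphasize the label reconstruction term.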

## 4 Theoretical Analysis

In this section, we theoretically analyze the proposed SLL, regarding the following two aspects: (a) the generalization of the designed classifier for new labels; and (b) the difference between the classifier parameter matrix obtained with streaming labels and that without streaming labels.

### 4.1 Generalization Error Bounds

We first analyze excess risk bounds for SLL. In particular, we present a generalization error bound for the new classifier learned in the streaming fashion. Moreover, we show that under some circumstances, our SLL can give a tighter bound than the common trace norm or Frobenius norm regularization in the ERM framework.

Since SLL focuses on boosting the performance on new labels by exploiting the knowledge from past labels, it needs a well-trained multi-label classifier as the initialization input, denoted as W_{old}\in\mathbb{R}^{d\times m}. Suppose k new labels are involved at a time; then SLL is implemented upon a data distribution \mathcal{D}=\mathcal{X}\times\{-1,1\}^{k}, where \mathcal{X}\subseteq\mathbb{R}^{d} is the feature space. The training data contains n points (\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n}) sampled i.i.d. from the distribution \mathcal{D}, where \mathbf{x}_{i}\in\mathcal{X} is the feature vector and \mathbf{y}_{i}\in\{-1,1\}^{k} is the ground-truth label vector. Our SLL is based on the proposed label self-representation and hypotheses, which can be viewed as a regularization, and the regularized set can be written as \mathcal{W}:=\{W\in\mathbb{R}^{d\times k},\left\lVert W-W_{old}S\right\rVert_{F}\leq\varepsilon,\left\lVert S\right\rVert_{1,1}\leq\lambda\}, where S is the representation weight matrix of new labels with a sparsity-controlling parameter \lambda. For simplicity, we only analyze the scenario where the k new labels have no interaction in their representation.

Given the training data, SLL learns a classifier \hat{W} by minimizing the empirical risk over the regularized set \mathcal{W}: \hat{\mathcal{L}}(W)=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{k}\ell(\mathbf{y}^{l}_{i},\mathbf{x}_{i},\bm{w}_{l}) and \hat{W}\in\arg\min_{W\in\mathcal{W}}\hat{\mathcal{L}}(W). Define the population risk of an arbitrary W as \mathcal{L}(W)=\operatorname*{\mathbb{E}}_{(\mathbf{x},\mathbf{y})}\left[\sum_{l=1}^{k}\ell(\mathbf{y}^{l},\mathbf{x},\bm{w}_{l})\right]; then the goal is to show that the learned \hat{W} possesses good generalization, i.e., \mathcal{L}(\hat{W})\leq\inf_{W\in\mathcal{W}}\mathcal{L}(W)+\epsilon. We have the following theorem.

###### Theorem 1.

Assume we learn a new predictor W\in\mathbb{R}^{d\times k} in terms of k new labels using the streaming label learning formulation \displaystyle{\hat{W}=\arg\min_{W\in\mathcal{W}}\hat{\mathcal{L}}(W)} over a set of n training points, where \mathcal{W}:=\{\left\lVert W-W_{old}S\right\rVert_{F}\leq\varepsilon,\left\lVert S\right\rVert_{1,1}\leq\lambda\}. Then with probability at least 1-\delta, we have

\mathcal{L}(\hat{W})\leq\inf_{W\in\mathcal{W}}\mathcal{L}(W)+\mathcal{O}\left((\varepsilon+\lambda c)\sqrt{\frac{k}{n}}\right)+\mathcal{O}\left(k\sqrt{\frac{\log\frac{1}{\delta}}{n}}\right) |

where c=\left\lVert W_{old}\right\rVert_{F} and we presume (w.l.o.g.) that \operatorname*{\mathbb{E}}_{\mathbf{x}}\left[\left\lVert\mathbf{x}\right\rVert_{2}^{2}\right]\leq 1.

According to Theorem 1, the key term of the upper bound is the second term, which depends on the value of (\varepsilon+\lambda c). \varepsilon is related to the accuracy of the self-representation of label hypotheses; if the representation is accurate, \varepsilon can be sufficiently small. \lambda controls the coefficient sparsity and is usually small. Moreover, considering \left\lVert W\right\rVert_{F}\leq\varepsilon+\lambda\left\lVert W_{old}\right\rVert_{F} in \mathcal{W}, SLL may provide a small upper bound on \left\lVert W\right\rVert_{F}, or sometimes on \left\lVert W\right\rVert_{*}, which means the generalization error bound generated by SLL could be tighter than that of the common Frobenius or trace norm regularization when \varepsilon and \lambda are sufficiently small. The proof of Theorem 1 is given in Appendix A.

### 4.2 Streaming Approximation Error Bound

We now investigate whether the classifier matrix learned by SLL deviates significantly from the one learned under the conventional multi-label learning setting. Precisely, suppose we have k labels with classifier matrix \hat{W}_{k}, and we then learn a new label classifier \hat{\bm{w}} using SLL. Without SLL, we would learn a classifier corresponding to all k+1 labels, denoted as \hat{W}_{k+1}. The goal is to estimate the difference between \hat{W}_{k+1} and [\hat{W}_{k},\hat{\bm{w}}], which reflects the cost of the classifier learned by SLL.

Given the training data \mathbf{X}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\in\mathbb{R}^{d\times n} and their label matrix \mathbf{Y}=[\mathbf{y}_{1},...,\mathbf{y}_{n}]\in\{-1,1\}^{(k+1)\times n}, the classifier parameter matrix \hat{W}_{k+1} is determined in the following optimization:

\hat{W}_{k+1}=\operatorname*{\arg\,\min}_{W\in\mathbb{R}^{d\times(k+1)}}\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)+\frac{\lambda}{2}\left\lVert W-WS\right\rVert_{F}^{2}, \qquad (14)

where S is the label structure matrix of all k+1 labels. Then we have the following theorem:

###### Theorem 2.

Given the training data \{\mathbf{X},\mathbf{Y}\} of k+1 labels, the classifier matrix \hat{W}_{k+1} is determined by Eq. (14). Assume the first k labels are also learned using Eq. (14), yielding \hat{W}_{k}, while the (k+1)th label is learned under the streaming label learning framework, yielding \hat{\bm{w}}. Then the following inequality holds:

\left\lVert\hat{W}_{k+1}-[\hat{W}_{k},\hat{\bm{w}}]\right\rVert_{F}\leq\frac{2}{\lambda\sigma_{1}^{2}(\mathbf{I}-S)+\sigma_{1}^{2}(\mathbf{X})}\left(\sqrt{2n\Omega C}+\lambda\tau\sqrt{\left\lVert\hat{W}_{k}\right\rVert_{F}^{2}+\left\lVert\hat{\bm{w}}\right\rVert_{2}^{2}}\right)

where C=\ell_{2}(\mathbf{Y},\mathbf{X},[\hat{W}_{k},\hat{\bm{w}}]) is the least squares loss value of the classifier learned by SLL, the constant \tau=\left\lVert\mathbf{I}-S\right\rVert_{F}^{2}, and \sigma_{1}(\cdot) denotes the smallest singular value. Moreover, we presume (w.l.o.g.) that \left\lVert\mathbf{x}_{i}\right\rVert_{2}^{2}\leq\Omega and that \mathbf{X} has full row rank.

As indicated in Theorem 2, we present an approximation error bound for \hat{W}_{k+1}-[\hat{W}_{k},\hat{\bm{w}}] using the least squares loss for simplicity; the bound is directly controlled by the loss of SLL. The better SLL learns the classifier (i.e., the smaller C is), the tighter the bound becomes. Hence, as long as SLL itself is trained well, the resulting classifier will not drift far from the one obtained by full retraining. The proof of Theorem 2 is given in Appendix B.

## 5 Experimental Results

In this section, we conduct experiments on SLL to demonstrate its effectiveness and efficiency in terms of dealing with new labels.

Datasets. We select 5 benchmark multi-label datasets for the SLL setting, including three small datasets (Bibtex, MediaMill and Delicious) and two large datasets (EURlex and Wiki10). See Table I for the details of these datasets.

Dataset | d | L | #train | #test | \bar{d} | \bar{L} |
---|---|---|---|---|---|---|
Bibtex | 1,836 | 159 | 4,880 | 2,515 | 68.74 | 2.40 |
MediaMill | 120 | 101 | 30,993 | 12,914 | 120.00 | 4.38 |
Delicious | 500 | 983 | 12,920 | 3,185 | 18.17 | 19.03 |
EURlex | 5,000 | 3,993 | 15,539 | 3,809 | 236.69 | 5.31 |
Wiki10 | 101,938 | 30,938 | 14,146 | 6,616 | 673.45 | 18.64 |

Baseline methods. We compare the proposed SLL with three competing methods:

1). BR [7]. Since our setting focuses on streaming labels, to the best of our knowledge BR is the only method that can accommodate new labels without retraining the model for past labels.

2). LEML (low-rank empirical risk minimization for multi-label learning) [13]. Since SLL and LEML both fall within the ERM framework, we compare their classification performance.

3). SLEEC (sparse local embeddings for extreme classification) [15]. Since SLL aims to handle new labels and scales to large datasets, we include SLEEC, a state-of-the-art method for extreme classification.

Evaluation Metrics. We use three prevalent metrics to measure the performance of all competing methods including our SLL: (a) Hamming loss, which concerns the holistic classification accuracy over all labels, (b) precision at k (P@k), which is usually used in tagging or recommendation and only top k predictions are involved in the evaluation, and (c) average AUC, which reflects the ranking performance.

For the Lasso-style Problems (3) and (7), we use a Cholesky-based implementation of the LARS-Lasso algorithm, provided by the SPAMS optimization toolbox [28], to solve them efficiently and to high accuracy, and we run it in parallel. To obtain the classifiers defined in Problems (5) and (8), we use an L-BFGS line search on small datasets (or Eq. (9) with moderate d, accelerated by GPU) and conjugate gradient descent on large datasets, based on the techniques in Eqs. (10) and (11).
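As an illustration of the two-step pipeline (regression, then ERM), the following NumPy sketch runs on synthetic data. It is not the paper's SPAMS/L-BFGS implementation: the Lasso step uses a plain coordinate-descent routine, the ERM step uses the least-squares loss with a closed-form solve, and all sizes and parameter values (`alpha`, `lam`) are hypothetical.

```python
import numpy as np

def lasso_cd(A, b, alpha, n_iter=300):
    """Coordinate descent for 0.5*||A s - b||^2 + alpha*||s||_1."""
    s = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(A.shape[1]):
            # Correlation of column j with the partial residual (excluding j).
            rho = A[:, j] @ (b - A @ s + A[:, j] * s[j])
            s[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return s

rng = np.random.default_rng(0)
d, n, k_old = 20, 400, 10
X = rng.normal(size=(d, n)) / np.sqrt(d)   # features, columns are examples
W_old = rng.normal(size=(d, k_old))        # classifiers of the past labels

# Synthetic new label whose hypothesis lies in the span of past ones.
w_true = W_old @ rng.normal(size=k_old)
y_new = np.sign(w_true @ X)

# Step 1 (regression): sparse self-representation of the new hypothesis.
w_ls, *_ = np.linalg.lstsq(X.T, y_new, rcond=None)  # rough initial fit
s = lasso_cd(W_old, w_ls, alpha=0.05)

# Step 2 (ERM, least squares): classifier pulled toward W_old @ s.
lam = 1.0
w_new = np.linalg.solve(X @ X.T + lam * np.eye(d),
                        X @ y_new + lam * (W_old @ s))

train_acc = np.mean(np.sign(w_new @ X) == y_new)
```

Both steps reduce to small linear-algebra problems, which is what makes the per-label cost of SLL low compared with retraining the full model.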

### 5.1 Validation of Effectiveness

Single new label.
SLL relies on an initial multi-label classifier for the past labels, which can be learned by solving Problem (13). We first compare classification results while varying the ratio of labels involved in this initialization. Since streaming labels are processed one by one, only BR can handle this case; the other methods (LEML and SLEEC) must retrain many times and yield results identical to initializing with all labels (ratio = 100%). Figure 1 shows the P@3 accuracy and average AUC, together with their trends, for various initial label ratios. From these results, we make the following observations. 1) For SLL, involving more labels in the initialization tends to improve performance. However, as the initial ratio grows, the improvement becomes less pronounced; once the ratio exceeds an appropriate value (e.g., 70% for Bibtex, 60% for Delicious), the performance stays relatively stable and sometimes even degrades. Selecting the initial label size is therefore critical on large datasets, and 50%\sim 70% is a satisfying option. 2) With all labels (100% ratio), SLL yields better accuracies than BR and LEML and is competitive with SLEEC. This indicates that the label structure we use is a stronger regularizer for training multi-label classifiers than the common Frobenius or trace norm regularization.

Dataset | #new labels | P@1 SLL | P@1 BR | P@1 LEML | P@1 SLEEC | P@3 SLL | P@3 BR | P@3 LEML | P@3 SLEEC | P@5 SLL | P@5 BR | P@5 LEML | P@5 SLEEC |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Bibtex | 15 | 11.51 | 10.85 | 10.56 | 10.89 | 5.09 | 4.26 | 4.12 | 4.45 | 2.94 | 2.68 | 2.66 | 2.70 |
 | 30 | 18.20 | 17.56 | 17.48 | 17.89 | 9.03 | 8.42 | 8.37 | 8.55 | 5.52 | 5.32 | 5.27 | 5.38 |
 | 45 | 26.18 | 25.25 | 25.22 | 25.84 | 12.98 | 12.32 | 12.29 | 12.55 | 8.48 | 8.03 | 8.04 | 8.19 |
 | 60 | 41.11 | 40.02 | 40.16 | 40.80 | 20.76 | 19.78 | 19.89 | 20.05 | 13.62 | 13.28 | 13.32 | 13.36 |
 | 75 | 41.45 | 40.91 | 40.97 | 41.31 | 22.85 | 22.37 | 22.55 | 22.61 | 15.92 | 15.29 | 15.44 | 15.53 |
MediaMill | 10 | 22.16 | 21.36 | 20.94 | 21.75 | 13.62 | 13.37 | 13.32 | 13.50 | 10.52 | 10.39 | 10.36 | 10.47 |
 | 20 | 81.04 | 80.27 | 80.23 | 80.31 | 53.41 | 53.25 | 53.21 | 53.36 | 36.64 | 36.51 | 36.48 | 36.57 |
 | 30 | 84.12 | 54.16 | 54.12 | 83.99 | 54.26 | 54.02 | 54.01 | 54.20 | 38.31 | 38.06 | 38.12 | 38.23 |
 | 40 | 84.35 | 84.12 | 84.19 | 84.27 | 55.35 | 55.05 | 55.14 | 55.28 | 39.96 | 39.68 | 39.72 | 39.87 |
 | 50 | 84.40 | 84.22 | 84.26 | 84.34 | 55.88 | 55.43 | 55.66 | 55.80 | 41.90 | 41.45 | 41.78 | 41.83 |
Delicious | 100 | 34.01 | 31.86 | 32.01 | 32.12 | 23.04 | 22.32 | 22.56 | 22.74 | 18.16 | 17.04 | 17.22 | 17.32 |
 | 200 | 38.97 | 38.03 | 38.18 | 38.34 | 31.11 | 30.01 | 30.17 | 30.60 | 25.34 | 24.42 | 25.00 | 25.05 |
 | 300 | 52.89 | 52.13 | 52.22 | 52.78 | 43.47 | 42.88 | 43.03 | 43.30 | 36.09 | 35.65 | 35.86 | 35.97 |
 | 400 | 58.52 | 57.84 | 58.00 | 58.30 | 49.63 | 48.78 | 49.22 | 49.47 | 43.87 | 43.08 | 43.20 | 43.35 |
 | 500 | 61.66 | 61.23 | 61.49 | 61.57 | 54.52 | 54.11 | 54.27 | 54.41 | 48.50 | 48.04 | 48.12 | 48.33 |
EURlex | 200 | 5.12 | 4.62 | 4.69 | 4.81 | 2.09 | 1.86 | 1.91 | 1.96 | 1.28 | 1.09 | 1.17 | 1.21 |
 | 400 | 11.39 | 10.34 | 10.55 | 10.70 | 4.95 | 4.22 | 4.76 | 4.88 | 3.05 | 2.87 | 2.96 | 3.01 |
 | 600 | 12.48 | 11.81 | 11.97 | 12.09 | 5.86 | 5.29 | 5.54 | 5.77 | 3.68 | 3.57 | 3.62 | 3.66 |
 | 800 | 14.01 | 13.23 | 13.77 | 13.90 | 6.83 | 6.42 | 6.66 | 6.72 | 4.26 | 4.02 | 4.17 | 4.23 |
 | 1000 | 17.28 | 17.12 | 17.26 | 17.31 | 9.05 | 8.84 | 9.02 | 9.08 | 5.77 | 5.33 | 5.65 | 5.79 |
Wiki10 | 1k | 10.19 | 9.23 | 9.98 | 10.02 | 4.82 | 4.55 | 4.61 | 4.75 | 3.19 | 3.11 | 3.12 | 3.15 |
 | 2k | 15.62 | 15.28 | 15.36 | 15.54 | 7.95 | 7.69 | 7.77 | 7.84 | 5.44 | 5.28 | 5.33 | 5.38 |
 | 3k | 23.18 | 22.71 | 22.85 | 23.05 | 12.44 | 12.22 | 12.30 | 12.38 | 8.63 | 8.43 | 8.51 | 8.56 |
 | 4k | 31.11 | 30.22 | 30.79 | 31.00 | 18.07 | 17.34 | 17.88 | 18.00 | 12.52 | 12.33 | 12.40 | 12.47 |
 | 5k | 38.69 | 38.29 | 38.55 | 38.71 | 22.22 | 21.78 | 22.10 | 22.21 | 15.40 | 15.27 | 15.32 | 15.37 |

Mini-batch new labels. To handle new labels in the mini-batch fashion, we investigate the results under different batch sizes. Specifically, for each dataset we randomly choose 50% of the labels (483 for Delicious) as past labels, since this tends to be a sensible option, and then process the remaining labels with different batch sizes. In contrast to the single-new-label scenario, a batch of new labels can be processed independently within traditional multi-label learning by BR, LEML and SLEEC. The average results over various batches of labels are shown in Table II. As the results indicate, SLL largely outperforms the other methods on mini-batch new labels. Under this setting, BR, LEML and SLEEC cannot employ knowledge from the past labels and treat learning the new labels as an independent process, whereas SLL learns the new labels on top of the knowledge already obtained. Note that as the batch size increases, the gap between SLL and the other methods shrinks, since the batch alone is already large enough to train a multi-label model well; however, the training cost grows accordingly.
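The mini-batch case can be sketched the same way as the single-label case: the representation coefficients of the b new labels are stacked column-wise into a matrix S_b, and all b classifiers come out of a single regularized solve. The sketch below is a simplification under the least-squares loss (ordinary least squares stands in for the Lasso step, and all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k_old, b = 15, 300, 8, 4
X = rng.normal(size=(d, n)) / np.sqrt(d)
W_old = rng.normal(size=(d, k_old))           # past-label classifiers

# A mini-batch of b new labels (synthetic, in the span of past hypotheses).
W_true = W_old @ rng.normal(size=(k_old, b))
Y_new = np.sign(W_true.T @ X)                 # b x n label matrix

# Per-label representation coefficients, stacked column-wise into S_b.
W_ls, *_ = np.linalg.lstsq(X.T, Y_new.T, rcond=None)   # d x b initial fits
S_b, *_ = np.linalg.lstsq(W_old, W_ls, rcond=None)     # k_old x b

# One batched ERM solve: every new classifier is pulled toward W_old @ S_b.
lam = 1.0
W_new = np.linalg.solve(X @ X.T + lam * np.eye(d),
                        X @ Y_new.T + lam * (W_old @ S_b))

acc = np.mean(np.sign(W_new.T @ X) == Y_new)
```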

### 5.2 Efficiency in dealing with new labels

So far we have shown the superiority of SLL in classification performance; we now evaluate its efficiency on larger datasets. We select 100/200, 1k/2k and 10k/20k labels of Delicious, EURlex and Wiki10, respectively, to serve as initialization, and then focus on the performance on 100 new labels. For SLL, we adopt the same initialization results as LEML. The average running time is shown in Figure 2 on a log10 scale. SLL and BR clearly surpass LEML and SLEEC in running time, since they do not need the expensive retraining process. Note that the difference between SLL and BR lies in the label structure probing procedure, and for large datasets a clustering trick can reduce the scale of Problem (3). For example, we select 3k labels for Wiki10 to form the fixed dictionary in the Lasso; thus, as the label size grows, SLL remains comparable to BR in efficiency.

### 5.3 Investigated label structure

Since the adopted label structure plays a significant role in SLL, serving as a special regularization when training the classifiers, we present the investigated label relationships intuitively, i.e., for a given label, which labels are involved in its reconstruction. We select three labels and their five representation "neighbors" from the Bibtex dataset in Table III. As shown in Table III, logically related labels are indeed recovered by SLL, which intuitively explains why SLL works. For example, for the label "epitope", a term from immunology, some of the recovered labels relate to its description, such as "homogeneous" and "sequence", while others are also from immunology, including "liposome" and "immunosensor".

epitope | sequence; ldl; homogeneous; liposome; immunosensor |
---|---|
fornepomuk | nepomuk; langen; knowledge; semantics; knowledgemanagement |
concept | formal; requirements; empirical; data; objectoriented |
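Neighbor lists like those in Table III can be read off directly from the representation coefficients: for each new label, rank the past labels by the magnitude of their Lasso coefficients and keep the nonzero ones. A minimal sketch with hypothetical label names and a hand-made sparse coefficient vector:

```python
import numpy as np

label_names = [f"label_{i}" for i in range(10)]   # hypothetical past labels

# A sparse representation column for one new label, as the Lasso step would
# produce it: most coefficients are zero, a few carry the reconstruction.
s = np.zeros(10)
s[[1, 4, 7]] = [0.8, -0.5, 0.3]

def top_neighbors(s, names, top=5):
    # Rank past labels by |coefficient| and keep only the nonzero ones.
    order = np.argsort(-np.abs(s))
    return [names[j] for j in order[:top] if s[j] != 0]

neighbors = top_neighbors(s, label_names)
```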

## 6 Conclusion

In this paper we studied the streaming label learning (SLL) framework, which makes it possible to model newly arrived labels with the help of the knowledge learned from past labels. More precisely, we investigate the relationships among labels by examining the label matrix from the perspective of the label space, and we embed the resulting label structure into the empirical risk minimization (ERM) framework to regularize the learning of the new classifiers. We showed that SLL provides a tighter generalization bound for new labels and does not lose accuracy, because it explores and exploits the label relationship. SLL can thus be viewed as an efficient way to learn new classifiers under the multi-label learning framework, without retraining the whole multi-label model. Experiments comprehensively demonstrated the superiority of SLL over existing multi-label learning methods in handling new labels.

## Appendix A Proof of Theorem 1

In this section, we detail the proof of Theorem 1, following the framework presented in [13]. In the sequel, we show that under streaming label learning, a tighter uniform convergence bound for the empirical losses is obtained for the new classifiers. Denote the regularization set of SLL as \mathcal{W}:=\{W\in\mathbb{R}^{d\times k},\left\lVert W-W_{old}S\right\rVert_{F}\leq\varepsilon,\left\lVert S\right\rVert_{1,1}\leq\lambda\}, where W_{old} is the previous multi-label classifier matrix for past labels and S is the representation weight matrix of new labels with sparsity-inducing parameter \lambda.

The goal of the proof is to show with high probability the following inequality holds:

\mathcal{L}(\hat{W})\leq\hat{\mathcal{L}}(\hat{W})+\epsilon

where \epsilon is a small quantity. Here \mathcal{L}(W)=\operatorname*{\mathbb{E}}_{(\mathbf{x},\mathbf{y})}[\ell(\mathbf{y},f(\mathbf{x};W))]=\operatorname*{\mathbb{E}}_{(\mathbf{x},\mathbf{y})}[\ell(\mathbf{y},\mathbf{x},W)]=\operatorname*{\mathbb{E}}_{(\mathbf{x},\mathbf{y})}[\sum_{l=1}^{k}\ell(\mathbf{y}^{l},\mathbf{x},\bm{w}_{l})] is the expected (real) loss, and \hat{\mathcal{L}}(W)=\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{k}\ell(\mathbf{y}^{l}_{i},\mathbf{x}_{i},\bm{w}_{l}) is the empirical risk on the training data. Let W^{*}\in\arg\min_{W\in\mathcal{W}}\mathcal{L}(W) and \hat{W}\in\arg\min_{W\in\mathcal{W}}\hat{\mathcal{L}}(W); a similar analysis yields \hat{\mathcal{L}}(W^{*})\leq\mathcal{L}(W^{*})+\epsilon, which together with the above gives the claimed inequality \mathcal{L}(\hat{W})\leq\mathcal{L}(W^{*})+2\epsilon. Thus we focus on the uniform convergence bound above. Its proof proceeds in three steps, elaborated in the sequel.

### A.1 Bounding Excess Risk Using Its Supremum

To probe an appropriate upper bound of the excess risk \mathcal{L}(\hat{W})-\hat{\mathcal{L}}(\hat{W}), it is natural to investigate its supremum,

\begin{split}
\mathcal{L}(\hat{W})-\hat{\mathcal{L}}(\hat{W})&\leq\sup_{W\in\mathcal{W}}\{\mathcal{L}(W)-\hat{\mathcal{L}}(W)\}\\
&=\sup_{W\in\mathcal{W}}\left\{\operatorname*{\mathbb{E}}_{(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i})}\left[\frac{1}{n}\sum_{i=1}^{n}\ell(\widetilde{\mathbf{y}}_{i},\widetilde{\mathbf{x}}_{i},W)\right]-\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right\}\\
&\triangleq g((\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n}))
\end{split}

For the decomposable loss function \ell(\mathbf{y},\mathbf{x},W)=\sum_{l=1}^{k}\ell(\mathbf{y}^{l},\mathbf{x},\bm{w}_{l}), changing any single (\mathbf{x}_{i},\mathbf{y}_{i}) perturbs g((\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n})) by at most \mathcal{O}(\frac{k}{n}), so the sum of squared perturbations is bounded by \frac{2k^{2}}{n}. Applying McDiarmid's inequality, the excess risk is then bounded by a term involving the expectation of g((\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n})), the expected supremum deviation. Therefore, with probability at least 1-\delta, it holds that

\mathcal{L}(\hat{W})-\hat{\mathcal{L}}(\hat{W})\leq\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i})}[g((\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n}))]+\mathcal{O}\left(k\sqrt{\frac{\log\frac{1}{\delta}}{n}}\right)
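For completeness, the concentration step invoked here is McDiarmid's bounded-differences inequality: if changing any single argument z_{i} changes g(z_{1},...,z_{n}) by at most c_{i}, then for all t>0

\Pr\left(g-\operatorname*{\mathbb{E}}[g]\geq t\right)\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}c_{i}^{2}}\right).

With c_{i}=\mathcal{O}(\frac{k}{n}), so that \sum_{i=1}^{n}c_{i}^{2}=\mathcal{O}(\frac{k^{2}}{n}), setting the right-hand side to \delta and solving for t recovers the \mathcal{O}\left(k\sqrt{\frac{\log\frac{1}{\delta}}{n}}\right) term above.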

In the sequel, we investigate the upper bound of the expected supremum deviation.

### A.2 Bounding the Expected Supremum Deviation by a Rademacher Average

We now bound the expected supremum deviation via a symmetrization argument and the Rademacher complexity; we adopt the Rademacher average introduced in [13]. We have

\begin{split}
&\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i})}[g((\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n}))]\\
=&\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i})}\left[\sup_{W\in\mathcal{W}}\left\{\operatorname*{\mathbb{E}}_{(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i})}\left[\frac{1}{n}\sum_{i=1}^{n}\ell(\widetilde{\mathbf{y}}_{i},\widetilde{\mathbf{x}}_{i},W)-\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right]\right\}\right]\\
\leq&\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i})}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\ell(\widetilde{\mathbf{y}}_{i},\widetilde{\mathbf{x}}_{i},W)-\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right\}\right]\\
=&\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\left(\ell(\widetilde{\mathbf{y}}_{i},\widetilde{\mathbf{x}}_{i},W)-\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right)\right\}\right]\\
\leq&\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\ell(\widetilde{\mathbf{y}}_{i},\widetilde{\mathbf{x}}_{i},W)\right\}\right]+\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),(\widetilde{\mathbf{x}}_{i},\widetilde{\mathbf{y}}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right\}\right]\\
=&2\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\ell(\mathbf{y}_{i},\mathbf{x}_{i},W)\right\}\right]\\
=&2\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\sum_{j=1}^{k}\ell(\mathbf{y}_{i}^{j},\mathbf{x}_{i},\bm{w}_{j})\right\}\right]\\
\leq&\frac{2C}{n}\operatorname*{\mathbb{E}}_{(\mathbf{x}_{i},\mathbf{y}_{i}),\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\sum_{i=1}^{n}\epsilon_{i}\sum_{j=1}^{k}\langle\mathbf{x}_{i},\bm{w}_{j}\rangle\right\}\right]\\
=&\frac{2C}{n}\operatorname*{\mathbb{E}}_{\mathbf{x}_{i},\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\sum_{i=1}^{n}\sum_{j=1}^{k}\langle\epsilon_{i}\mathbf{x}_{i},\bm{w}_{j}\rangle\right\}\right]=2C\mathcal{R}_{n}(\mathcal{W})
\end{split}

where \epsilon_{i}\ (i=1,...,n) are Rademacher variables. The first inequality uses Jensen's inequality, and the last inequality relies on the assumption that the loss function \ell is bounded and C-Lipschitz. Here we adopt the Rademacher complexity defined as follows:

\mathcal{R}_{n}(\mathcal{W})\triangleq\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{x}_{i},\epsilon_{i}}\left[\sup_{W\in\mathcal{W}}\left\{\sum_{i=1}^{n}\sum_{j=1}^{k}\langle\epsilon_{i}\mathbf{x}_{i},\bm{w}_{j}\rangle\right\}\right]=\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sup_{W\in\mathcal{W}}\langle W,\mathbf{X}_{\bm{\epsilon}}\rangle\right]

where \mathbf{X}_{\bm{\epsilon}}\triangleq[\sum_{i=1}^{n}\epsilon_{i}\mathbf{x}_{i},...,\sum_{i=1}^{n}\epsilon_{i}\mathbf{x}_{i}] stacks k copies of \sum_{i=1}^{n}\epsilon_{i}\mathbf{x}_{i}. In the last step, we directly calculate and estimate this Rademacher complexity.

### A.3 Estimating the Rademacher Complexity

By applying the Cauchy–Schwarz inequality and the triangle inequality for matrix norms, we obtain

\begin{split}
\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sup_{W\in\mathcal{W}}\langle W,\mathbf{X}_{\bm{\epsilon}}\rangle\right]&\leq\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sup_{W\in\mathcal{W}}\left\lVert W\right\rVert_{F}\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}\right]\\
&=\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sup_{W\in\mathcal{W}}\left\lVert W-W_{old}S+W_{old}S\right\rVert_{F}\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}\right]\\
&\leq\frac{1}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sup_{W\in\mathcal{W}}\left(\left\lVert W-W_{old}S\right\rVert_{F}+\left\lVert W_{old}S\right\rVert_{F}\right)\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}\right]\\
&\leq\frac{\varepsilon+\left\lVert W_{old}S\right\rVert_{F}}{n}\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}\right]\leq\frac{\varepsilon+\left\lVert W_{old}S\right\rVert_{F}}{n}\sqrt{\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}^{2}\right]}
\end{split}

Then we calculate \operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}^{2}\right] as

\begin{split}
\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\left\lVert\mathbf{X}_{\bm{\epsilon}}\right\rVert_{F}^{2}\right]&=k\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\left\lVert\sum_{i=1}^{n}\epsilon_{i}\mathbf{x}_{i}\right\rVert_{2}^{2}\right]\\
&=k\operatorname*{\mathbb{E}}_{\mathbf{X},\bm{\epsilon}}\left[\sum_{i=1}^{n}\left\lVert\mathbf{x}_{i}\right\rVert_{2}^{2}+\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\right]\\
&=k\sum_{i=1}^{n}\operatorname*{\mathbb{E}}_{\mathbf{x}_{i}}[\left\lVert\mathbf{x}_{i}\right\rVert_{2}^{2}]\leq kn
\end{split}

where the feature energy \operatorname*{\mathbb{E}}_{\mathbf{x}}[\left\lVert\mathbf{x}\right\rVert_{2}^{2}] is assumed to be at most 1 without loss of generality. Thus we obtain an upper bound on the Rademacher complexity that establishes Theorem 1:

\begin{split}
\mathcal{R}_{n}(\mathcal{W})&\leq\sqrt{\frac{k}{n}}\left(\varepsilon+\left\lVert W_{old}S\right\rVert_{F}\right)\\
&\leq\sqrt{\frac{k}{n}}\left(\varepsilon+\left\lVert W_{old}\right\rVert_{F}\left\lVert S\right\rVert_{1,1}\right)\\
&\leq\sqrt{\frac{k}{n}}\left(\varepsilon+\lambda\left\lVert W_{old}\right\rVert_{F}\right).
\end{split}
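As a numerical sanity check of this last bound: for the relaxed set \{\lVert W\rVert_{F}\leq r\} with r playing the role of \varepsilon+\lambda\lVert W_{old}\rVert_{F}, the supremum is attained in closed form (\sup_{W}\langle W,\mathbf{X}_{\bm{\epsilon}}\rangle=r\lVert\mathbf{X}_{\bm{\epsilon}}\rVert_{F}), so the complexity can be estimated by Monte Carlo and compared against r\sqrt{k/n}. The sketch below uses synthetic unit-norm features and hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, k, r = 10, 50, 6, 1.5        # r stands in for eps + lam*||W_old||_F

# Unit-norm features, so E||x||_2^2 <= 1 holds (with equality).
X = rng.normal(size=(d, n))
X /= np.linalg.norm(X, axis=0, keepdims=True)

# For {||W||_F <= r}, sup_W <W, X_eps> = r * ||X_eps||_F, and X_eps stacks
# k copies of sum_i eps_i x_i, so ||X_eps||_F = sqrt(k) * ||X @ eps||_2.
trials = 2000
vals = np.empty(trials)
for t in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)    # Rademacher signs
    vals[t] = r * np.sqrt(k) * np.linalg.norm(X @ eps) / n

empirical = vals.mean()                      # Monte Carlo estimate of R_n
bound = r * np.sqrt(k / n)                   # the bound established above
```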

## Appendix B Proof of Theorem 2

Now we detail the proof of Theorem 2, which analyzes the cost introduced by SLL, i.e., the perturbation of the classifier parameter matrix (covering both past and new labels) due to the SLL mechanism. Assuming the past label size is m and a new label arrives, the goal is to estimate the difference between the classifier matrix produced by SLL and the original (m+1)-label parameter matrix. Given the training data \mathbf{X}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\in\mathbb{R}^{d\times n} and their label matrix \mathbf{Y}=[\mathbf{y}_{1},...,\mathbf{y}_{n}]\in\{-1,1\}^{(m+1)\times n}, the (m+1)-label classifier parameter matrix is determined by the following optimization:

\hat{Z}=\operatorname*{\arg\,\min}_{Z\in\mathbb{R}^{d\times(m+1)}}J(Z)=\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},Z)+\frac{\lambda}{2}\left\lVert Z-ZS\right\rVert_{F}^{2},

where S is the label structure matrix of all m+1 labels. Denoting \tilde{Z}=[\hat{W},\hat{\bm{w}}], we have J(\tilde{Z})\geq J(\hat{Z}); substituting the expression of J(Z) into this inequality gives

\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},\tilde{Z})-\sum_{i=1}^{n}\ell(\mathbf{y}_{i},\mathbf{x}_{i},\hat{Z})\geq\frac{\lambda}{2}\left\lVert\hat{Z}-\hat{Z}S\right\rVert_{F}^{2}-\frac{\lambda}{2}\left\lVert\tilde{Z}-\tilde{Z}S\right\rVert_{F}^{2}.

Denote the approximation error as \Delta=\hat{Z}-\tilde{Z}=\hat{Z}-[\hat{W},\hat{\bm{w}}]. Then the right hand side of the above inequality can be rewritten as

\begin{split}
\mbox{right}=&\frac{\lambda}{2}\left\lVert\tilde{Z}+\Delta-(\tilde{Z}+\Delta)S\right\rVert_{F}^{2}-\frac{\lambda}{2}\left\lVert\tilde{Z}-\tilde{Z}S\right\rVert_{F}^{2}\\
=&\frac{\lambda}{2}\left\lVert\tilde{Z}-\tilde{Z}S\right\rVert_{F}^{2}+\frac{\lambda}{2}\left\lVert\Delta-\Delta S\right\rVert_{F}^{2}+\lambda\,\mathrm{Tr}[(\tilde{Z}-\tilde{Z}S)^{T}(\Delta-\Delta S)]-\frac{\lambda}{2}\left\lVert\tilde{Z}-\tilde{Z}S\right\rVert_{F}^{2}\\
=&\frac{\lambda}{2}\left\lVert\Delta-\Delta S\right\rVert_{F}^{2}+\lambda\langle\Delta,(\tilde{Z}-\tilde{Z}S)(\mathbf{I}-S)^{T}\rangle
\end{split}

where \langle\cdot,\cdot\rangle is the inner product of two matrices. Similarly, the left hand side can also be rewritten using the approximation error \Delta and for simplicity we consider the least squares loss function,

\begin{split}
\mbox{left}=&\frac{1}{2}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}^{2}-\frac{1}{2}\left\lVert\mathbf{Y}-\hat{Z}^{T}\mathbf{X}\right\rVert_{F}^{2}\\
=&\frac{1}{2}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}^{2}-\frac{1}{2}\left\lVert\mathbf{Y}-(\tilde{Z}+\Delta)^{T}\mathbf{X}\right\rVert_{F}^{2}\\
=&\frac{1}{2}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}^{2}-\frac{1}{2}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}^{2}-\frac{1}{2}\left\lVert\Delta^{T}\mathbf{X}\right\rVert_{F}^{2}+\mathrm{Tr}[(\mathbf{Y}-\tilde{Z}^{T}\mathbf{X})^{T}\Delta^{T}\mathbf{X}]\\
=&-\frac{1}{2}\left\lVert\Delta^{T}\mathbf{X}\right\rVert_{F}^{2}+\langle\mathbf{X}(\mathbf{Y}-\tilde{Z}^{T}\mathbf{X})^{T},\Delta\rangle
\end{split}

Thus we obtain

\frac{\lambda}{2}\left\lVert\Delta-\Delta S\right\rVert_{F}^{2}+\frac{1}{2}\left\lVert\Delta^{T}\mathbf{X}\right\rVert_{F}^{2}\leq\langle\mathbf{X}(\mathbf{Y}-\tilde{Z}^{T}\mathbf{X})^{T},\Delta\rangle-\lambda\langle\Delta,(\tilde{Z}-\tilde{Z}S)(\mathbf{I}-S)^{T}\rangle

Suppose d\ll n and \mathbf{X} is of full row rank, and denote its smallest singular value as \sigma_{1}(\mathbf{X}), then based on the singular value decomposition, we have

\frac{\lambda}{2}\left\lVert\Delta-\Delta S\right\rVert_{F}^{2}+\frac{1}{2}\left\lVert\Delta^{T}\mathbf{X}\right\rVert_{F}^{2}\geq\frac{\lambda}{2}\sigma_{1}^{2}(\mathbf{I}-S)\left\lVert\Delta\right\rVert_{F}^{2}+\frac{1}{2}\sigma_{1}^{2}(\mathbf{X})\left\lVert\Delta\right\rVert_{F}^{2}

Thus

\begin{split}
&\frac{1}{2}[\lambda\sigma_{1}^{2}(\mathbf{I}-S)+\sigma_{1}^{2}(\mathbf{X})]\left\lVert\Delta\right\rVert_{F}^{2}\\
\leq&\langle\Delta,\mathbf{X}(\mathbf{Y}-\tilde{Z}^{T}\mathbf{X})^{T}-\lambda(\tilde{Z}-\tilde{Z}S)(\mathbf{I}-S)^{T}\rangle\\
\leq&\left\lVert\Delta\right\rVert_{F}\left\lVert\mathbf{X}(\mathbf{Y}-\tilde{Z}^{T}\mathbf{X})^{T}-\lambda(\tilde{Z}-\tilde{Z}S)(\mathbf{I}-S)^{T}\right\rVert_{F}\\
\leq&\left\lVert\Delta\right\rVert_{F}\left(\left\lVert\mathbf{X}\right\rVert_{F}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}+\lambda\left\lVert\tilde{Z}\right\rVert_{F}\left\lVert\mathbf{I}-S\right\rVert_{F}^{2}\right)
\end{split}

Dividing both sides by \left\lVert\Delta\right\rVert_{F}, we obtain

\begin{split}
&\frac{1}{2}[\lambda\sigma_{1}^{2}(\mathbf{I}-S)+\sigma_{1}^{2}(\mathbf{X})]\left\lVert\Delta\right\rVert_{F}\\
\leq&\left\lVert\mathbf{X}\right\rVert_{F}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}+\lambda\left\lVert\tilde{Z}\right\rVert_{F}\left\lVert\mathbf{I}-S\right\rVert_{F}^{2}\\
\leq&\sqrt{n\Omega}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}+\lambda\left\lVert\mathbf{I}-S\right\rVert_{F}^{2}\sqrt{\left\lVert\hat{W}\right\rVert_{F}^{2}+\left\lVert\hat{\bm{w}}\right\rVert_{2}^{2}}
\end{split}

and it is equivalent to

\left\lVert\Delta\right\rVert_{F}\leq\frac{2}{\lambda\sigma_{1}^{2}(\mathbf{I}-S)+\sigma_{1}^{2}(\mathbf{X})}\left(\sqrt{n\Omega}\left\lVert\mathbf{Y}-\tilde{Z}^{T}\mathbf{X}\right\rVert_{F}+\lambda\left\lVert\mathbf{I}-S\right\rVert_{F}^{2}\sqrt{\left\lVert\hat{W}\right\rVert_{F}^{2}+\left\lVert\hat{\bm{w}}\right\rVert_{2}^{2}}\right),

which completes the proof of Theorem 2.
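Under the least-squares loss used throughout this appendix, the minimizer \hat{Z} of J(Z) satisfies the Sylvester equation \mathbf{X}\mathbf{X}^{T}Z+\lambda Z(\mathbf{I}-S)(\mathbf{I}-S)^{T}=\mathbf{X}\mathbf{Y}^{T} (set the gradient of J to zero), which can be checked numerically. The sketch below uses SciPy's `solve_sylvester` on synthetic data; all sizes and the stand-in structure matrix S are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(4)
d, n, m1 = 8, 100, 5                      # m1 = m + 1 labels
X = rng.normal(size=(d, n))
Y = np.sign(rng.normal(size=(m1, n)))
S = 0.1 * rng.normal(size=(m1, m1))       # stand-in label structure matrix
lam = 0.5

# Stationarity of J(Z) = 0.5*||Y - Z^T X||_F^2 + (lam/2)*||Z - Z S||_F^2:
#   X X^T Z + lam * Z (I - S)(I - S)^T = X Y^T   (a Sylvester equation).
I = np.eye(m1)
Z_hat = solve_sylvester(X @ X.T, lam * (I - S) @ (I - S).T, X @ Y.T)

# Check: the gradient of J at Z_hat should vanish.
grad = -X @ (Y - Z_hat.T @ X).T + lam * Z_hat @ (I - S) @ (I - S).T
```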

## References

- [1] B. Yang, J.-T. Sun, T. Wang, and Z. Chen, “Effective multi-label active learning for text classification,” in Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009, pp. 917–926.
- [2] X. Li, J. Ouyang, and X. Zhou, “Supervised topic models for multi-label classification,” Neurocomputing, vol. 149, pp. 811–819, 2015.
- [3] Z. Barutcuoglu, R. E. Schapire, and O. G. Troyanskaya, “Hierarchical multi-label prediction of gene function,” Bioinformatics, vol. 22, no. 7, pp. 830–836, 2006.
- [4] N. Cesa-Bianchi, M. Re, and G. Valentini, “Synergy of multi-label hierarchical ensembles, data fusion, and cost-sensitive methods for gene functional inference,” Machine Learning, vol. 88, no. 1-2, pp. 209–241, 2012.
- [5] M. Wang, X. Zhou, and T.-S. Chua, “Automatic image annotation via local multi-label classification,” in Proceedings of the 2008 international conference on Content-based image and video retrieval. ACM, 2008, pp. 17–26.
- [6] R. S. Cabral, F. Torre, J. P. Costeira, and A. Bernardino, “Matrix completion for multi-label image classification,” in Advances in Neural Information Processing Systems, 2011, pp. 190–198.
- [7] G. Tsoumakas, I. Katakis, and I. Vlahavas, “Mining multi-label data,” in Data mining and knowledge discovery handbook. Springer, 2010, pp. 667–685.
- [8] D. Hsu, S. Kakade, J. Langford, and T. Zhang, “Multi-label prediction via compressed sensing,” in NIPS, vol. 22, 2009, pp. 772–780.
- [9] F. Tai and H.-T. Lin, “Multilabel classification with principal label space transformation,” Neural Computation, vol. 24, no. 9, pp. 2508–2542, 2012.
- [10] Y. Zhang and J. G. Schneider, “Multi-label output codes using canonical correlation analysis,” in International Conference on Artificial Intelligence and Statistics, 2011, pp. 873–882.
- [11] Y.-N. Chen and H.-T. Lin, “Feature-aware label space dimension reduction for multi-label classification,” in Advances in Neural Information Processing Systems, 2012, pp. 1529–1537.
- [12] M. M. Cisse, N. Usunier, T. Artieres, and P. Gallinari, “Robust bloom filters for large multilabel classification tasks,” in Advances in Neural Information Processing Systems, 2013, pp. 1851–1859.
- [13] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon, “Large-scale multi-label learning with missing labels,” in Proceedings of The 31st International Conference on Machine Learning, 2014, pp. 593–601.
- [14] F. Sun, J. Tang, H. Li, G.-J. Qi, and T. S. Huang, “Multi-label image categorization with sparse factor representation,” IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1028–1037, 2014.
- [15] K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, “Sparse local embeddings for extreme multi-label classification,” in Advances in Neural Information Processing Systems, 2015, pp. 730–738.
- [16] K. Balasubramanian and G. Lebanon, “The landmark selection method for multiple output prediction,” arXiv preprint arXiv:1206.6479, 2012.
- [17] W. Bi and J. Kwok, “Efficient multi-label classification with many labels,” in Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 405–413.
- [18] W. Bi and J. T. Kwok, “Multi-label classification on tree-and dag-structured hierarchies,” in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 17–24.
- [19] W. Cheng, E. Hüllermeier, and K. J. Dembczynski, “Bayes optimal multilabel classification via probabilistic classifier chains,” in Proceedings of the 27th international conference on machine learning (ICML-10), 2010, pp. 279–286.
- [20] B. Hariharan, L. Zelnik-Manor, M. Varma, and S. Vishwanathan, “Large scale max-margin multi-label classification with priors,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 423–430.
- [21] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
- [22] B. Efron, T. Hastie, I. Johnstone, R. Tibshirani et al., “Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
- [23] S. Perkins and J. Theiler, “Online feature selection using grafting,” in ICML, 2003, pp. 592–599.
- [24] H. Lee, A. Battle, R. Raina, and A. Y. Ng, “Efficient sparse coding algorithms,” in Advances in neural information processing systems, 2006, pp. 801–808.
- [25] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” http://cvxr.com/cvx, Mar. 2014.
- [26] ——, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds. Springer-Verlag Limited, 2008, pp. 95–110, http://stanford.edu/~boyd/graph_dcp.html.
- [27] S. R. Becker, E. J. Candès, and M. C. Grant, “Templates for convex cone problems with applications to sparse signal recovery,” Mathematical programming computation, vol. 3, no. 3, pp. 165–218, 2011.
- [28] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online dictionary learning for sparse coding,” in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 689–696.