Semi-Supervised Active Learning for Support Vector Machines: A Novel Approach that Exploits Structure Information in Data
Abstract
In today’s information society, more and more data emerge, e.g., in social networks, technical applications, or business applications. Companies try to commercialize these data using data mining or machine learning methods. For this purpose, the data are categorized or classified, but often at high (monetary or temporal) cost. An effective approach to reduce these costs is to apply active learning (AL) methods, as AL controls the training process of a classifier by specifically querying individual data points (samples), which are then labeled (e.g., provided with class memberships) by a domain expert. However, an analysis of current AL research shows that AL still has some shortcomings. In particular, the structure information given by the spatial pattern of the (un)labeled data in the input space of a classification model (e.g., cluster information) is used in an insufficient way. In addition, many existing AL techniques pay too little attention to their practical applicability. To meet these challenges, this article presents several techniques that together build a new approach for combining AL and semi-supervised learning (SSL) for support vector machines (SVM) in classification tasks. Structure information is captured by means of probabilistic models that are iteratively improved at runtime when label information becomes available. The probabilistic models are considered in a selection strategy based on distance, density, diversity, and distribution (4DS strategy) information for AL and in a kernel function (responsibility weighted Mahalanobis kernel) for SVM. The approach fuses generative and discriminative modeling techniques. With benchmark data sets and with the MNIST data set it is shown that our new solution yields significantly better results than state-of-the-art methods.
keywords:
active learning, semi-supervised learning, support vector machine, structure information, responsibility weighted Mahalanobis kernel, 4DS strategy
1 Introduction
Machine learning is based on sample data. Sometimes these data are labeled and, thus, models to solve a certain task (e.g., a classification or regression task) can be built using targets assigned to the input data of a classification or regression model. In other cases, data are unlabeled (e.g., for clustering) or only partially labeled. Correspondingly, we distinguish the areas of supervised, unsupervised, and semi-supervised learning. In many application areas (e.g., industrial quality monitoring processes Sic98 (), intrusion detection in computer networks HSS03 (), speech recognition FHYA12 (), or drug discovery MME14 ()) it is rather easy to collect unlabeled data, but quite difficult, time-consuming, or expensive to gather the corresponding targets. That is, labeling is in principle possible, but the costs are enormous. An effective approach to reduce these costs is to apply active learning (AL) methods, as AL controls the training process by specifically querying individual samples (also called examples, data points, or observations), which are then labeled (e.g., provided with class memberships) by a domain expert. In this article, we focus on classification problems.
Pool-based AL typically assumes that at the beginning of a training process a large set U of unlabeled samples and a small set L of labeled samples are available. Then, AL iteratively increases the number of labeled samples by asking the “right” questions with the aim to train a classifier with the highest possible generalization performance and, at the same time, the smallest possible number of labeled samples. Generally, a selection strategy is used that selects informative samples from U by considering the current “knowledge” of the classifier that shall be trained actively. These samples are labeled by an oracle or a domain expert, added to the training set L, and the classifier is updated. The AL process stops as soon as a predefined stopping criterion is met.
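The pool-based AL cycle described above can be sketched in a few lines of code. The following is a minimal, self-contained illustration only: a stand-in nearest-centroid classifier, a synthetic two-class pool, and an uncertainty-based query rule replace the article's actual SVM setup, and all names (`train_centroids`, `oracle`, the budget of 10 queries) are hypothetical.

```python
import numpy as np

# Minimal sketch of the pool-based AL cycle (illustrative stand-ins only).

def train_centroids(X, y):
    """Fit a nearest-centroid classifier as a stand-in for the actual model."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def uncertainty(x, centroids):
    """Small margin between the two nearest centroids = high uncertainty."""
    d = np.sort(np.linalg.norm(centroids - x, axis=1))
    return d[1] - d[0]

rng = np.random.default_rng(0)
X_pool = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
oracle = lambda i: int(i >= 50)          # "domain expert": returns true labels

labeled = [0, 50]                        # small initial labeled set L
for _ in range(10):                      # query rounds until the budget is met
    y_lab = np.array([oracle(i) for i in labeled])
    centroids = train_centroids(X_pool[labeled], y_lab)
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    # select the sample the current classifier is most uncertain about
    query = min(unlabeled, key=lambda i: uncertainty(X_pool[i], centroids))
    labeled.append(query)                # the oracle labels it; L grows by one

print(len(labeled))  # 12 labeled samples after 10 queries
```

In each round the classifier is retrained on L and the most "uncertain" pool sample is queried, mirroring the select/label/update loop described in the text.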
An analysis of current research in the field of AL Cawlay11 (); GCDL11 (); Settles11 () shows that AL still has some shortcomings. In particular, the structure information, which is given by the spatial arrangement of the (un)labeled data in the input space of a classifier, is used in an insufficient way. Furthermore, many existing AL techniques pay too little attention to their practical applicability, e.g., regarding the number of initially labeled samples or the number of adjustable parameters, which should both be as low as possible. To meet these challenges, we present in this article several techniques that together build a new approach combining AL and semi-supervised learning (SSL) for support vector machines (SVM).
In general, machine learning techniques can be applied whenever “patterns” or “regularities” in a set of sample data can be recognized and, thus, be exploited. For classification tasks this often means that the data build clusters of arbitrary shapes; such structures shall be revealed and modeled in order to consider them for the active sample selection and for the classifier training. Starting from this point of view, we propose several techniques in this article to combine AL and SSL for SVM: First, we claim that in a real application an AL process has to start “from scratch” without any label information. Therefore, we use probabilistic mixture models (i.e., generative models) to capture the structure information of the unlabeled samples. These models are determined offline (before the AL process starts) with the help of variational Bayesian inference (VI) techniques in an unsupervised manner. Second, during the AL process, when more and more label information becomes available, we use these class labels to revise the density models. For this, we introduce a transductive learner into the standard PAL (pool-based active learning) process that adapts the mixture model with the help of local modifications such that model components preferably model clusters of samples that belong to the same class. Third, the data sets L and U in each iteration of the AL process can be seen as a sparsely labeled data set that can be used to train a classifier in a semi-supervised manner. Based on the iteratively improved parametric density models, e.g., Gaussian mixture models in the case of a continuous (real-valued) input space, we derive a new data-dependent kernel, called responsibility weighted Mahalanobis (RWM) kernel. Basically, this kernel is based on the Mahalanobis distances being part of the Gaussians, but it reinforces the impact of model components from which any two samples that are compared are assumed to originate.
Fourth, a selection strategy for AL has to fulfill several tasks, for example: At an early stage of the AL process, samples have to be chosen in all regions of the input space covered by data (exploration phase). At a late stage of the AL process, a fine-tuning of the decision boundary of the classifier has to be realized by choosing samples close to the (current) decision boundary (exploitation phase). Thus, “asking the right question” (i.e., choosing samples for a query) is a multi-faceted problem. To solve this problem we adopt a self-adaptive, multi-criteria strategy called 4DS that considers structure in the data for its sample selection. Fifth, many works in the field of AL assume that at the beginning of the AL process a relatively large number of labeled samples is available, but in a real application this is usually not true. In addition, these data have a great impact on the learning performance of the actively trained classifier. Therefore, we present an extended version of the standard PAL cycle that starts with an empty labeled training set and determines the first labeled set within an initialization round using structure information.
The innovative aspects of the work presented in this article are:

Our new approach starts with an empty initial training set. This means that the “knowledge” of the actively trained classifier cannot be used for sample selection in the first query round. Hence, a density-based strategy is used to find informative samples.

Data structure is captured with the help of generative, probabilistic mixture models that are initially trained with VI in an unsupervised fashion. These data models are adapted (revised) during the AL process with the help of class information that becomes available. Thus, we aim to model data clusters of samples that are assumed to belong to the same class.

Structure information is considered (1) for the active sample selection by a self-adaptive, multi-criteria selection strategy (4DS) and (2) for SVM training (finding the support vectors) with the help of a data-dependent (RWM) kernel.

Bringing all of this together results in a practical, effective, and efficient approach that combines AL and SSL for SVM.
The AL approach presented here is based on a complex interplay of several building blocks presented in previous issues of this journal that are combined and evaluated here for the very first time (the selection strategy 4DS RS13 (), the transductive learning scheme RCS14 (), and the RWM kernel for SVM RS15 ()).
The remainder of this article is structured as follows: Section 2 illustrates the potential of AL and the properties of our new approach. Section 3 gives an overview of related work. Section 4 sketches the probabilistic mixture model that we use to capture structure in data and defines the RWM kernel and the selection strategy 4DS to train an SVM in a semi-supervised active fashion. In addition, Section 4 explains our extension of the standard PAL cycle that makes it possible to start the AL process “from scratch” (without any labeled samples) and to refine the mixture model as soon as label information becomes available. Results of a large number of simulation experiments are set out in Section 5. Finally, Section 6 summarizes the key findings and gives an outlook on our future work.
2 Motivating Example
In the following, we will illustrate with a simple example (1) the potential of AL and (2) the properties of our new approach to train an SVM in a semi-supervised active fashion (see Fig. 1).
Assume we observe three processes producing data in a two-dimensional input space (small blue plus signs, green circles, and red triangles) that shall be classified by an SVM. For our example, we generated a set of samples using a Gaussian mixture model (GMM) with three components and split them into training and test data according to a 5-fold cross-validation. Fig. 1 shows only the training data of the first cross-validation fold, corresponding to the pool U of unlabeled samples and the initially labeled samples L, where we assume that label information is only given for three randomly selected samples, one for each class (shown in orange color). After that, in each query round (learning cycle) only one sample is actively selected, labeled by a domain expert, and then considered for SVM training. Here, the samples are shown in boldface and colored red if they were selected in the current query round, and violet otherwise. The samples that are marked with rectangles correspond to the support vectors of the respective SVM. Fig. 1(a) shows an SVM with RBF kernel that was actively trained based on 20 selected and thus labeled samples. Here, the “knowledge” of the SVM was not considered for the sample selection since the samples were selected randomly, a strategy also called random sampling (Random). Fig. 1(d) depicts the corresponding learning process of the SVM with RBF kernel, and here we see that the SVM reaches its final test accuracy only after the execution of the last query (i.e., with a set of 20 labeled samples). Fig. 1(b) also shows the AL process of an SVM with RBF kernel, but this time the samples are actively selected with an uncertainty sampling (US) strategy. This means that in each AL cycle the sample lying nearest to the current decision boundary of the SVM is selected. We can see that the SVM reaches a higher test accuracy if we exploit the knowledge of the SVM for active sample selection with US. In addition, Fig. 1(e) shows that the SVM reaches this accuracy already with fewer labeled samples. In Fig. 1(c) we see an SVM with RWM kernel that was also actively trained based on 20 labeled samples, but these samples (apart from the first three) were selected with the selection strategy 4DS. Here, the SVM yields the highest test accuracy of the three variants. Moreover, Fig. 1(f) shows that the SVM with RWM kernel trained with only three labeled samples (i.e., with only a small fraction of the overall set of available training samples) reaches a significantly higher test accuracy compared to the SVM with RBF kernel. Further, after the active selection of three additional samples, the SVM with RWM kernel maintains a consistently high test accuracy throughout the AL process.
This example shows, on the one hand, that AL obtains respectable results if the active sample selection is done deliberately (US vs. Random). On the other hand, it can be seen that considering structure information may result in a more efficient and effective AL process for a discriminative classifier such as an SVM.
3 Related Work
This section discusses related techniques that were specially developed to train SVM by combining AL and SSL. For a detailed overview, in condensed form, of the current state of the art in the field of AL, SSL, and techniques that combine these two approaches, not only for SVM, we refer to RS13 (); RS15 (); RCS14 (). In RS13 (), the five main categories of selection strategies used in the field of (pool-based) AL to find informative samples are explained: uncertainty sampling, density weighting, estimated error reduction, AUC maximization, and diversity sampling. In addition, we refer to Settles09 (); JH09 () for a more detailed overview of this topic. Related work in the field of SSL, which also makes use of the unlabeled data to train discriminative classifiers such as SVM, is given in RS15 (). In RCS14 (), related work is presented that combines AL and SSL to train (generative) classifiers actively in a more effective way, e.g., either by using modeling techniques for capturing and refining structure information, so that only the most informative samples are selected, or by querying a “second” paradigm instead of a human expert to reduce the labeling costs.
SSL is based on the idea that a classifier with high generalization capability is trained using information from labeled and unlabeled samples. In combination with AL, this means that the active selection of outliers shall be avoided. An SVM approach that tries to solve this problem is called representative sampling XYTXW03 (). An SVM is trained based on an initially labeled set of samples. Then, in each iteration of the AL process the k-means clustering algorithm is used to cluster all unlabeled samples that lie within the current margin of the SVM. From each of the resulting clusters the sample with minimal average distance to all samples that belong to the same cluster is queried. While only unlabeled samples within the margin of the SVM are presented to an oracle or human expert, representative sampling selects, on the one hand, samples for which the SVM is most uncertain regarding their class assignments and, on the other hand, samples that are representative regarding U. The main drawback of representative sampling is that clustering techniques are in general computationally expensive. An approach that avoids these efforts and still makes use of the unlabeled samples is called hinted sampling LFL12 (). Hinted sampling also selects samples that are both representative and uncertain (i.e., the SVM is most uncertain about their class membership), but in doing so the unlabeled samples are neither clustered nor assigned a class label (as with transductive SVM (TSVM) Joachims99 ()). Rather, the samples in U are used as hints for finding a decision boundary that, on the one hand, leads to a small classification error regarding the labeled training samples and, on the other hand, passes close to (or through) the set of unlabeled samples (hint set). Since in each query round the informative samples are selected with the help of uncertainty sampling (US), the selected ones are located close to the decision boundary of the SVM and are representative, too.
In general, the HintSVM can be regarded as an SSL approach CSZ06 (), but HintSVM differs significantly from typical SSL techniques such as the semi-supervised SVM (S³VM) CSK08 (); RSM11 (); FGQZ11 (), since these approaches try to find decision boundaries that are located as distant as possible from the samples in U (in low-density regions).
TSVM assume that the labeled and unlabeled samples are independent and identically distributed (i.i.d.). But this assumption is not guaranteed in AL. This problem is also related to transfer learning PY10 (), because the distributions of the training and test data may differ. For this reason, TSVM are not well suited for AL, since an active sample selection (e.g., with US) easily results in an overfitting of the TSVM towards the already queried samples WYZ11 (). Therefore, in CC10 () an approach is presented that focuses more on SSL than on AL by using AL techniques only to find a good initialization to train a TSVM or related SSL approaches. Another approach, published in SYH11 () and used to extract protein sequences, combines AL and SSL in the same way. For this, an SVM is initially trained on a small set of labeled samples and then used to iteratively query samples that the SVM would assign to classes with high uncertainty. These samples are then labeled by an expert and the SVM is retrained. If L reaches a certain number of labeled samples, the AL process stops and a deterministic annealing SVM (DASVM) SKC06 (), which corresponds to an implementation variant of TSVM, is used to label the remaining unlabeled samples. In contrast, LXQ13 () focuses on AL and uses SSL only as additional help to train an SVM as efficiently as possible, i.e., with the smallest possible number of expert queries. For this purpose, LXQ13 () combines self-training Scu65 () with US to select, on the one hand, samples that are representative and can, therefore, be labeled by the SVM itself, and, on the other hand, “uncertain” samples, i.e., samples that are difficult to label and are, therefore, labeled by a human expert. Again, at the beginning of the learning process an SVM is trained based on a small initially labeled data set. Then, the following two steps are alternately executed: In the first step, samples are iteratively selected by means of US.
The SVM is retrained once a new sample is selected, labeled by a domain expert, and added to the training set L. In each of these query rounds, the unlabeled samples are also labeled by the SVM and, based on these class assignments, a rate of class change is determined or updated for each unlabeled sample. Then, in the second step, the unlabeled samples that have a rate of class change equal to zero are divided into subsets according to the class label assigned by the SVM in the first step. These subsets are then used to select one representative sample for each class. Here, a sample is the more representative for a class, the closer its distance to the current decision boundary of the SVM is to the median of the distances of all samples belonging to the respective subset. The representative samples are subsequently labeled with the class label of the subset to which they belong and added to L. If no stopping condition is met, the first step is executed again. The main problem of this approach is that in the presence of data with bimodal or multimodal class distributions the representative samples are assigned “untrusted” class labels, resulting in a deterioration of the classification performance of the actively trained SVM KPI14 ().
Other works HJZL09 (); WMZL05 () combine AL and SSL to create an efficient approach for content-based image retrieval. Here, several images (samples) are selected iteratively regarding class assignments for which the actively trained Laplacian SVM (LapSVM) BNS06 (); WL09 (); NCW15 () is most uncertain. The LapSVM follows the principle of manifold regularization, integrating an “intrinsic regularizer” MB11 () into the SVM training. This regularizer is estimated based on the labeled and unlabeled samples with the help of a Laplacian graph, which can be seen as a non-parametric density estimator. To avoid the selection of images with “redundant” content, a “min-max” approach HJZL09 () is used that ensures the diversity of queried images. In WMZL05 (), an extension of the LapSVM for image retrieval is presented, where multiple Laplacian graphs for heterogeneous data, such as content, text, and links, are used. Both approaches show that the LapSVM is very well suited for AL.
What do we intend to do better or differently? An actively trained SVM with our new RWM kernel uses a parametric density estimation to capture structure information, because a parametric estimation approach often leads to a more “robust” estimate than a non-parametric approach (used by the LapSVM (LAP kernel), for instance). An SVM with RWM kernel can be actively trained with the help of any selection strategy, and the RWM kernel can be used in combination with any standard implementation of SVM (e.g., LIBSVM libsvm ()) or with any solver for the SVM optimization problem (e.g., SMO) without any additional algorithmic adjustments or extensions. In addition, the data modeling can be conducted once, i.e., offline, based on the unlabeled samples before the AL process starts (i.e., unsupervised). Moreover, it is also possible to refine the density model based on class information becoming known during the AL process (online), so that the model captures the structure information of the unlabeled and labeled samples. Here, local adaptations of the model components are used for an efficient model refinement.
4 Theoretical and Methodical Foundations
In this section we will (1) describe our (unsupervised) approach to capture structure information in data, which is based on probabilistic mixture models. Next, we show (2) how class information can be used either to extend these models to classifiers or to refine the density model. Based on that, we (3) define a new similarity measure that allows us to exploit structure in data for SVM training, i.e., to determine the support vectors. (4) We define the multi-criteria selection strategy 4DS, which uses structure information to actively select informative samples. Bringing all of this together, we (5) extend the traditional PAL cycle so that it starts with an empty training set and iteratively refines the data modeling based on the known classes to train an SVM in an improved semi-supervised active fashion.
4.1 Capturing Structure in Data with Probabilistic Mixture Models
Assume we have a set X of N samples, where each sample x can be described by a D-dimensional vector. Then we capture the structure information contained in X with the help of the mixture model
p(x) = Σ_{j=1}^{J} p(x | j) p(j)    (1)
consisting of J densities p(x | j) weighted by the mixing coefficients p(j). In this probabilistic context, the component index j is seen as a random variable. Here, the conditional densities p(x | j) are the model components and the p(j) are multinomial distributions with parameters π_j (mixing coefficients).
On the basis of this model we can determine for a particular sample x and a component j the value
γ_j(x) = p(j | x) = p(x | j) p(j) / Σ_{j′=1}^{J} p(x | j′) p(j′)    (2)
These values γ_j(x) are called responsibilities of the J components for the generation of the sample x. That is, a responsibility of a component is an estimate for the value of a latent variable that describes from which process in the real world a sample originates. The idea behind this approach is that components model processes in the real world from which the samples we observe are assumed to originate (or: to be “generated”).
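The responsibilities of Eq. (2) can be computed directly from a mixture model's parameters. The following sketch does this for Gaussian components; the two components and their parameters are hand-set, illustrative stand-ins, not values from the article:

```python
import numpy as np

# Responsibilities gamma_j(x) = p(x|j) p(j) / sum_j' p(x|j') p(j')  (cf. Eq. (2))

def gaussian_pdf(x, mu, cov):
    """Multivariate normal density (naive implementation for illustration)."""
    d = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def responsibilities(x, mus, covs, priors):
    """Posterior p(j|x) over components: estimate of the latent 'process'."""
    joint = np.array([p * gaussian_pdf(x, m, c)
                      for p, m, c in zip(priors, mus, covs)])
    return joint / joint.sum()

# Two hypothetical components modeling two real-world processes
mus = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]

gamma = responsibilities(np.array([0.2, -0.1]), mus, covs, priors)
print(gamma)  # the first component is clearly "responsible" for this sample
```

The responsibilities always sum to one over the J components, i.e., they form a distribution over the latent component index.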
Which kind of density function can be used for the components? In general, a D-dimensional sample may have continuous (i.e., real-valued) dimensions (attributes) and categorical ones. Without loss of generality we arrange these dimensions such that
x = (x_1, …, x_{D_cont}, x_{D_cont+1}, …, x_D)    (3)
with the first D_cont dimensions continuous and the remaining D − D_cont dimensions categorical.
Note that we italicize x when we refer to single dimensions. The continuous part x_cont of this vector is modeled with a multivariate normal (i.e., Gaussian) distribution with center (expectation) μ_j and covariance matrix Σ_j. With |Σ_j| denoting the determinant of a matrix we use the model
N(x_cont | μ_j, Σ_j) = (2π)^{−D_cont/2} |Σ_j|^{−1/2} exp(−½ Δ²_{Σ_j}(x_cont, μ_j))    (4)
with the distance measure Δ² given by
Δ²_Σ(x, y) = (x − y)^T Σ^{−1} (x − y)    (5)
Δ²_Σ defines the (squared) Mahalanobis distance of two vectors based on a covariance matrix Σ. For many practical applications, the use of Gaussian components can be motivated by the generalized central limit theorem, which roughly states that the sum of independent samples from any distribution with finite mean and variance converges to a normal distribution as the size of the data set goes to infinity (cf., for example, DHS01 ()).
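As a concrete illustration of Eq. (5), the following snippet computes the squared Mahalanobis distance for a hand-chosen covariance matrix (the values are purely illustrative). With the identity matrix it reduces to the squared Euclidean distance; a direction with large variance contributes less to the distance:

```python
import numpy as np

# Squared Mahalanobis distance (cf. Eq. (5)): (x - y)^T Sigma^-1 (x - y)
def mahalanobis_sq(x, y, cov):
    d = x - y
    return float(d @ np.linalg.inv(cov) @ d)

x = np.array([2.0, 0.0])
y = np.array([0.0, 0.0])

# With the identity covariance it reduces to the squared Euclidean distance
print(mahalanobis_sq(x, y, np.eye(2)))          # 4.0

# Large variance along the first axis shrinks the distance in that direction
cov = np.array([[4.0, 0.0], [0.0, 1.0]])
print(mahalanobis_sq(x, y, cov))                # 1.0
```

This direction-dependent scaling is exactly what lets a Gaussian component "stretch" along an elongated cluster.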
For categorical dimensions we use a 1-of-K_d coding scheme, where K_d is the number of possible categories of attribute d. The value of such an attribute is represented by a vector x_d with components x_{d,k} = 1 if x_d belongs to category k and x_{d,k} = 0 otherwise. The categorical dimensions are modeled by means of special cases of multinomial distributions. That is, for an input dimension (attribute) d we use
p(x_d | j) = Π_{k=1}^{K_d} (δ_{j,d,k})^{x_{d,k}}    (6)
with parameters δ_{j,d,k} and the restrictions δ_{j,d,k} ≥ 0 and Σ_{k=1}^{K_d} δ_{j,d,k} = 1. The rationale for using such distributions for categorical variables is obvious, as any given distribution of a categorical variable can be modeled perfectly.
We assume that the categorical dimensions are mutually independent and that there are no dependencies between the categorical and the continuous dimensions. Then, the component densities are defined by
p(x | j) = N(x_cont | μ_j, Σ_j) · Π_{d=D_cont+1}^{D} p(x_d | j)    (7)
How can the various parameters of the density model be determined? Assuming that the samples are independent and identically distributed (i.i.d.), we perform the parameter estimation by means of variational Bayesian inference (VI), which realizes the Bayesian idea of regarding the model parameters as random variables whose distributions have to be trained FS09 (); Bis06 (). This approach has two important advantages: First, the estimation process is more robust, i.e., it avoids “collapsing” components, so-called singularities, and second, it optimizes the number of components on its own. That is, the training process starts with a large number of components and prunes components automatically until an appropriate number J is reached. For more details on VI see FS09 (); Bis06 ().
4.2 Iterative Refinement of Models Capturing Structure in Data
If we take class information into account, we can extend our estimated probabilistic mixture model to a generative classifier called CMM (classifier based on mixture models). However, to obtain a high classification performance it is important to recognize “overlapping” processes that either generate samples belonging to different classes or cannot be modeled perfectly. For this reason, we distinguish between two different training techniques:
The first one, called shared-components classifier, captures structure information in an unsupervised way (as described in Section 4.1) with a shared-components density model and subsequently extends this model to a classifier using class labels. That is, to minimize the risk of classification errors we compute for an input sample x the posterior distribution p(c | x) and then select (according to the winner-takes-all principle) the class with the highest posterior probability. In this case, the distribution is decomposed as follows:
p(c | x) = Σ_{j=1}^{J} p(c | j) p(j | x)    (8)
Thus, the classifier is based on a single mixture model (i.e., “shared” by all classes), where the p(c | j) are multinomial distributions. Their parameters can be estimated in a supervised step for samples with given class labels using the responsibilities:
p(c | j) = (1 / N_j) Σ_{x_n ∈ X_c} γ_j(x_n)    (9)
with N_j = Σ_{n=1}^{N} γ_j(x_n) being the “effective” number of samples “generated” by component j (see FS09 () for more details); X_c corresponds to the subset of all samples for which c is the assigned target class.
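The estimation in Eq. (9) can be sketched in a few lines. The responsibility matrix and labels below are illustrative stand-ins rather than values from a trained model:

```python
import numpy as np

# Sketch of Eq. (9): estimate p(c|j) from responsibilities of labeled samples.

gamma = np.array([      # gamma[n, j]: responsibility of component j for x_n
    [0.9, 0.1],         # sample 0, labeled class 0
    [0.8, 0.2],         # sample 1, labeled class 0
    [0.2, 0.8],         # sample 2, labeled class 1
    [0.1, 0.9],         # sample 3, labeled class 1
])
labels = np.array([0, 0, 1, 1])

N_j = gamma.sum(axis=0)                      # "effective" samples per component
p_c_given_j = np.vstack([gamma[labels == c].sum(axis=0) / N_j
                         for c in (0, 1)]).T           # shape (J, C)

print(p_c_given_j)      # each row is a distribution over classes (sums to 1)
```

Each component thus receives a gradual (soft) assignment to the classes, which is exactly what the shared-components classifier needs on top of the unsupervised density model.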
The second one, called separate-components classifier, also uses class information to assign the model components to classes, but (in contrast to the shared-components classifier) it builds a separate-components density model, so that the data modeling is performed in a supervised way, too. In this case the distribution is decomposed in another way:
p(c | x) ∝ p(c) Σ_{j=1}^{J_c} p(x | c, j) p(j | c)    (10)
Therefore, the classifier is based on a number of mixture density models (one for each class), so that J = Σ_c J_c. Here, the conditional densities p(x | c, j) (with j ∈ {1, …, J_c}) are the model components, the p(j | c) are multinomial distributions (class-dependent mixing coefficients), and p(c) is a multinomial distribution (class priors). To evaluate Eq. (10), we exploit the fact that in the separate-components case we treat all classes separately and, thus, components are uniquely assigned to one class. To train the classifier, we first split the entire training set X into subsets X_c, each containing all N_c = |X_c| samples of the corresponding class c, where |·| denotes the cardinality of a set. Then, for each X_c, a mixture model is trained separately by means of VI as sketched in Section 4.1. After this, we have found parameter estimates for the p(x | c, j) and p(j | c), cf. Eq. (10). The parameters for the class priors are estimated with
p(c) = N_c / N    (11)
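The separate-components construction (Eqs. (10) and (11)) can be sketched as follows. For brevity we use a single Gaussian per class (J_c = 1) fitted by maximum likelihood instead of a VI-trained mixture per class, so this illustrates only the decision rule, not the article's training procedure; all data and function names are hypothetical:

```python
import numpy as np

# Sketch of a separate-components classifier with one Gaussian per class
# (the article trains a full mixture per class with VI; J_c = 1 here).

def fit_class_models(X, y):
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # per-class mean, covariance, and class prior p(c) = N_c / N (Eq. (11))
        models[c] = (Xc.mean(axis=0), np.cov(Xc.T), len(Xc) / len(X))
    return models

def log_gauss(x, mu, cov):
    d = x - mu
    return (-0.5 * d @ np.linalg.inv(cov) @ d
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * len(x) * np.log(2 * np.pi))

def predict(x, models):
    # winner-takes-all over p(x|c) p(c), evaluated in log space
    return max(models, key=lambda c: log_gauss(x, *models[c][:2])
                                     + np.log(models[c][2]))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

models = fit_class_models(X, y)
print(predict(np.array([-2.5, -3.0]), models))
print(predict(np.array([2.5, 3.0]), models))
```

Note that the fully supervised fitting step is exactly why this variant needs a labeled training set from the start, in contrast to the shared-components variant.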
Comparing the two modeling approaches from the viewpoint of AL, we can state that the main advantage of the shared-components classifier is that the underlying mixture density model can be trained completely unsupervised and only the gradual assignments of its components to classes have to be determined in a subsequent, supervised step. The key advantage of the separate-components classifier – for which we need a fully labeled data set already for the first modeling step – is that label information may improve the data modeling, e.g., we may expect a better discrimination of highly overlapping densities belonging to different classes. But, if we assume that no class labels are available at the beginning of an AL process, we must start with a shared-components density model only.
Thus, the key idea of our AL technique is to start with the density model being part of a shared-components classifier to model structure in data. During the AL process, when more and more labeled data become available, we iteratively revise the model using label information. This can roughly be seen as a kind of transformation from a shared-components towards a separate-components model until a stopping criterion is met. To keep the computational costs for this transformation process as low as possible, we introduce a transductive learner into the AL process that adapts the density mixture model by means of local modifications. This learner consists of several steps, and we look briefly (and in a simplifying way) at the most important ones: First, using an interestingness measure called uniqueness, we detect whether at least one model component models a set of samples that belong to different classes (i.e., as labeled by the oracle or human expert). These components are denoted as disputed. Second, we detect the (labeled and unlabeled) samples that are modeled by the disputed components. Third, the unlabeled ones of these samples are labeled transductively by a sample-based classifier (related to a k-nearest-neighbor approach). Fourth, this set of (now fully labeled) samples is used to train a local separate-components model, whose components are then fused or combined with all non-disputed model components (of the initial shared-components model) in the fifth step. For a more detailed description of the transductive learner see RCS14 ().
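The first three steps can be sketched as follows. This is a heavily simplified illustration: hard component assignments and a plain 1-nearest-neighbor vote stand in for the uniqueness measure and the sample-based classifier of RCS14 (), the local retraining and fusion steps are omitted, and all data are made up:

```python
import numpy as np

# Simplified sketch of the transductive refinement steps (illustrative only).

def assign_component(x, mus):
    # hard assignment of a sample to its nearest model component
    return int(np.argmin(np.linalg.norm(mus - x, axis=1)))

mus = np.array([[0.0, 0.0], [6.0, 0.0]])           # two model components
X_lab = np.array([[-0.5, 0.1], [0.4, -0.2], [6.2, 0.1]])
y_lab = np.array([0, 1, 1])                        # labels from the oracle
X_unl = np.array([[-0.4, 0.0], [0.5, 0.0]])        # still unlabeled samples

# Step 1: a component is "disputed" if its labeled samples span several classes
comp_lab = np.array([assign_component(x, mus) for x in X_lab])
disputed = [j for j in range(len(mus))
            if len(set(y_lab[comp_lab == j])) > 1]
print(disputed)  # [0]

# Steps 2+3: transductively label the unlabeled samples of disputed components
trans_labels = []
for x in X_unl:
    if assign_component(x, mus) in disputed:
        nn = int(np.argmin(np.linalg.norm(X_lab - x, axis=1)))
        trans_labels.append(int(y_lab[nn]))
print(trans_labels)  # [0, 1]
```

The now fully labeled subset could then be refit per class and fused with the undisputed components, as the text describes.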
4.3 Exploiting Structure in Data for Active Semi-Supervised SVM Training
As described above, we capture structure information by means of probabilistic mixture models and, if necessary, we revise these models as soon as class information becomes available during the AL process. But how can we integrate this information into the active training process of a discriminative classifier (such as an SVM) to optimize its learning performance? Here, we can distinguish between two possibilities: First, we can use structure information to train an SVM in each AL round in a semi-supervised fashion. For this, we define the new data-dependent RWM kernel, which is based on the previously described mixture models. Second, we consider structure information for the active sample selection with the help of a self-adaptive, multi-criteria selection strategy called 4DS. Both are described in the following.
4.3.1 DataDependent Kernel for SemiSupervised SVM Training
In principle, generative classifiers such as CMM often perform worse than discriminative classifiers such as SVM in many applications. On the other hand, no density information (or, more generally, information concerning the structure of data in the input space of the classifier) can be extracted from a standard SVM to improve its training process or to use this information for active sample selection. Thus, we developed a new data-dependent kernel for SVM, called the responsibility weighted Mahalanobis (RWM) kernel. This kernel assesses the similarity of any two samples by means of a parametric density model. Basically, it emphasizes the influence of the model components from which the two compared samples are assumed to originate (that is, the “responsible” model components).
With the help of the Mahalanobis distance measure described in Section 4.1 we can determine the distance of any two samples in the $D$-dimensional input space with respect to a process modeled by a single Gaussian component with given mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. In general, however, we need a number of components to model densities accurately. Assume we are given a set of samples with only real-valued dimensions. Then, we model this set with a Gaussian mixture model (GMM) with $J$ components as described above. In order to consider the distances of two samples with respect to all components contained in the GMM, we need a similarity measure that combines these distances by means of a linear combination. This leads us to the new RWM kernel that weights the Mahalanobis distances according to the responsibilities $\gamma_j$ of the $j$-th Gaussian for the generation of the two considered samples $\mathbf{x}$, $\mathbf{x}'$:

$K_{\mathrm{RWM}}(\mathbf{x}, \mathbf{x}') = \exp\left(-\frac{1}{2\sigma^2}\,\Delta(\mathbf{x}, \mathbf{x}')\right)$ (12)

Here, $\sigma$ is the kernel width and the similarity measure $\Delta$ is defined as follows:

$\Delta(\mathbf{x}, \mathbf{x}') = \sum_{j=1}^{J} \frac{\gamma_j(\mathbf{x}) + \gamma_j(\mathbf{x}')}{2}\,(\mathbf{x} - \mathbf{x}')^{\mathrm{T}} \boldsymbol{\Sigma}_j^{-1} (\mathbf{x} - \mathbf{x}')$ (13)
The main advantages of the RWM kernel are (for more details see RS15 ()): (1) Standard training techniques such as SMO and standard implementations of SVM such as libsvm libsvm () can be used with RWM kernels without any algorithmic adjustments or extensions, as only the kernel matrices have to be provided. (2) In case of SSL this kernel outperforms some other kernels that capture structure in data, such as the Laplacian kernel (Laplacian SVM) MB11 (), which can be regarded as being based on non-parametric density estimates. (3) C-SVM with RWM kernels can easily be parametrized using existing heuristics for RBF kernels relying on line search strategies in a two-dimensional parameter space. This does not hold for the Laplacian kernel, for example.
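As a sketch of advantage (1), the kernel matrix can be computed directly from a given mixture model and then passed to any SVM implementation that accepts precomputed kernels. The NumPy code below follows Eqs. (12) and (13); the function names are ours and the normalization details follow our reading of RS15 ():

```python
import numpy as np

def responsibilities(X, weights, means, covs):
    # gamma_j(x): posterior probability that mixture component j generated x.
    J, D = means.shape
    dens = np.empty((len(X), J))
    for j in range(J):
        diff = X - means[j]
        inv = np.linalg.inv(covs[j])
        mahal = np.einsum('nd,de,ne->n', diff, inv, diff)
        norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(covs[j]))
        dens[:, j] = weights[j] * np.exp(-0.5 * mahal) / norm
    return dens / dens.sum(axis=1, keepdims=True)

def rwm_kernel(X1, X2, weights, means, covs, sigma=1.0):
    # Eq. (13): responsibility weighted sum of component-wise Mahalanobis
    # distances, plugged into the exponential of Eq. (12).
    g1 = responsibilities(X1, weights, means, covs)
    g2 = responsibilities(X2, weights, means, covs)
    delta = np.zeros((len(X1), len(X2)))
    for j in range(len(weights)):
        inv = np.linalg.inv(covs[j])
        diff = X1[:, None, :] - X2[None, :, :]
        mahal = np.einsum('nmd,de,nme->nm', diff, inv, diff)
        delta += 0.5 * (g1[:, j][:, None] + g2[:, j][None, :]) * mahal
    return np.exp(-delta / (2 * sigma ** 2))
```

With a single component and an identity covariance matrix, the kernel reduces to the usual RBF kernel, which is a useful sanity check.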
In most classification problems we also have categorical (non-ordinal) input dimensions that typically cannot be handled as continuous ones. Assume we are given a set of samples where each $D$-dimensional sample has $D_{\mathrm{cont}}$ continuous and $D_{\mathrm{cat}}$ categorical dimensions. Here, each categorical dimension is represented with a 1-of-$K$ coding scheme (cf. Section 4.1), i.e., a dimension with $K$ different categories is encoded by $K$ binary dimensions.
Then, we extend the RWM kernel accordingly:

$K_{\mathrm{RWM}}(\mathbf{x}, \mathbf{x}') = \exp\left(-\frac{1}{2\sigma^2}\left(\lambda_{\mathrm{cont}}\,\Delta(\mathbf{x}_{\mathrm{cont}}, \mathbf{x}'_{\mathrm{cont}}) + \lambda_{\mathrm{cat}}\,\Delta_{\mathrm{cat}}(\mathbf{x}_{\mathrm{cat}}, \mathbf{x}'_{\mathrm{cat}})\right)\right)$ (14)

with weighting factors $\lambda_{\mathrm{cont}}$ and $\lambda_{\mathrm{cat}}$, where $\mathbf{x}_{\mathrm{cont}}$ and $\mathbf{x}_{\mathrm{cat}}$ (and $\mathbf{x}'_{\mathrm{cont}}$, $\mathbf{x}'_{\mathrm{cat}}$) only contain the values of the respective continuous and (binary encoded) categorical dimensions. For the categorical dimensions, we define

$\Delta_{\mathrm{cat}}(\mathbf{x}_{\mathrm{cat}}, \mathbf{x}'_{\mathrm{cat}}) = \sum_{d} \delta(x_{\mathrm{cat},d},\, x'_{\mathrm{cat},d})$ (15)

with

$\delta(u, v) = \begin{cases} 0, & u = v \\ 1, & u \neq v \end{cases}$ (16)

i.e., simply by checking the values in the different (binary encoded) dimensions for equality. If necessary, it is also possible to weight the categorical part and the continuous part differently by means of the parameters $\lambda_{\mathrm{cont}}$ and $\lambda_{\mathrm{cat}}$. If $\lambda_{\mathrm{cont}}$ and $\lambda_{\mathrm{cat}}$ are both set to 1 and the covariance matrix of each model component corresponds to the identity matrix, then the RWM kernel behaves like an RBF kernel applied to the data with binary encoded categorical dimensions.
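A minimal sketch of the categorical extension (Eqs. (14)–(16)), assuming the continuous distance matrix is already available; the helper names are ours:

```python
import numpy as np

def delta_cat(A, B):
    # Eqs. (15)/(16): count differing (binary encoded) categorical values
    # for every sample pair; A is (n, d_cat), B is (m, d_cat).
    return (A[:, None, :] != B[None, :, :]).sum(axis=2).astype(float)

def mixed_rwm_kernel(delta_cont, A_cat, B_cat,
                     lam_cont=1.0, lam_cat=1.0, sigma=1.0):
    # Eq. (14): weighted combination of the continuous and categorical
    # distances inside one exponential.
    d = lam_cont * delta_cont + lam_cat * delta_cat(A_cat, B_cat)
    return np.exp(-d / (2 * sigma ** 2))
```

With `lam_cont = lam_cat = 1` and a continuous distance built from identity covariances, this reproduces the RBF behavior described above.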
4.3.2 Active Sample Selection Considering Structure in Data
As already mentioned, the main goal of AL is to obtain the best possible classifier at the lowest possible labeling cost. Therefore, a selection strategy for informative samples must be able to detect all decision regions (exploration phase) and to fine-tune the decision boundary (exploitation phase). This means that a selection strategy has to find a trade-off between exploration and exploitation in order to train a classifier efficiently and effectively. Thus, our selection strategy 4DS is based on two hypotheses: (1) A selection strategy has to consider various aspects and, thus, has to combine several criteria. (2) In different phases of the AL process, these criteria must be weighted differently.
Consequently, 4DS considers the following four criteria:

The first criterion, the distance of samples to the current decision boundary, corresponds to the idea of choosing samples with high uncertainty concerning their correct class assignment. Here, for the active training of an SVM its decision function is used:

$f(\mathbf{x}) = \sum_{\mathbf{x}_i \in L} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b$ (17)

with Lagrange coefficients $\alpha_i$, bias $b$, kernel function $K$, and class labels $y_i \in \{-1, +1\}$. Here, the set $L$ contains all samples actively labeled so far. That is, the nearer $\mathbf{x}$ is to the decision boundary, the more $|f(\mathbf{x})|$ tends to zero. For generative classifiers this criterion is estimated using class posteriors. For this, the entropy DE95 (), the smallest-margin SDW01 (), or the least confidence CM05 () approaches can be used. This criterion is needed to fine-tune the decision boundary of the classifier and must, therefore, be emphasized at later stages of the AL process.
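Both variants of the uncertainty criterion can be sketched as follows (function names are ours; small |f(x)| or high posterior entropy indicates high uncertainty):

```python
import numpy as np

def distance_criterion(f_values):
    # |f(x)| from Eq. (17): tends to zero near the decision boundary,
    # so smaller values mean more uncertain samples.
    return np.abs(np.asarray(f_values, dtype=float))

def entropy_criterion(posteriors):
    # For generative classifiers: entropy of the class posteriors;
    # maximal for a uniform posterior, i.e., maximal uncertainty.
    p = np.clip(np.asarray(posteriors, dtype=float), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)
```

An uncertainty sampling strategy would simply query the sample minimizing the first quantity (or maximizing the second).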

The second criterion, the density of regions where samples are selected, is simply calculated as follows:

$\mathrm{density}(\mathbf{x}) = p(\mathbf{x}) = \sum_{j=1}^{J} \pi_j\, p(\mathbf{x} \mid j)$ (18)

where $p$ is given by the density model that we use to gather structure information (cf. Sections 4.1 and 4.2), with mixing coefficients $\pi_j$. Samples with higher likelihood values are preferred. This criterion is used to avoid the selection of outliers and to explore “important” regions of the input space that may be misclassified if they are neglected. In many applications, this criterion is important at early stages of the AL process.
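A sketch of the density criterion for a Gaussian mixture model (function name is ours):

```python
import numpy as np

def density_criterion(X, weights, means, covs):
    # Eq. (18): likelihood p(x) under the mixture model; outliers get low
    # values, samples in dense regions high values.
    J, D = means.shape
    p = np.zeros(len(X))
    for j in range(J):
        diff = X - means[j]
        inv = np.linalg.inv(covs[j])
        mahal = np.einsum('nd,de,ne->n', diff, inv, diff)
        norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(covs[j]))
        p += weights[j] * np.exp(-0.5 * mahal) / norm
    return p
```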

The third criterion, the (class) distribution of samples, makes use of responsibility information to consider all model components according to their mixing coefficients $\pi_j$ and, implicitly, the unknown “true” class distribution of all samples (i.e., class priors) for sample selection. For that purpose, each unlabeled sample $\mathbf{x}$ is temporarily added to the set of all samples actively selected so far (this set corresponds to the union of the set $L$ of all labeled samples so far and the current query set $Q$), and then the deviation between the responsibility distribution of this set and the distribution of the mixing coefficients is calculated as follows:

$\mathrm{distribution}(\mathbf{x}) = -\sum_{j=1}^{J} \left|\, \pi_j - \frac{1}{|L \cup Q| + 1} \sum_{\mathbf{x}' \in L \cup Q \cup \{\mathbf{x}\}} \gamma_j(\mathbf{x}') \,\right|$ (19)

Samples with high values are preferred, as those samples contribute most to matching the responsibility distribution of the selected samples to the mixing coefficients.
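A sketch of the idea behind this criterion; the summed absolute deviation used here is an illustrative stand-in for the exact deviation measure defined in RS13 ():

```python
import numpy as np

def distribution_criterion(gamma_selected, gamma_candidate, mixing):
    # Temporarily add the candidate's responsibility vector, average the
    # responsibilities of all selected samples, and compare this distribution
    # with the mixing coefficients; smaller deviation -> higher score.
    g = np.vstack([gamma_selected, gamma_candidate])
    avg = g.mean(axis=0)
    return -np.abs(avg - np.asarray(mixing)).sum()
```

A candidate whose responsibilities restore the balance towards the mixing coefficients scores higher than one that skews it further.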

The fourth criterion, the diversity of samples in the query set, is needed to offer the possibility of selecting more than one sample in each query round. Otherwise, samples may be selected that are redundant from the viewpoint of the classifier training. The computation of the diversity criterion is based on the approach described in DRH06 (), but we estimate the empirical entropy with the help of a parametric estimation approach for the density instead of a Parzen window estimate:

$\mathrm{diversity}(Q) = -\frac{1}{|Q|} \sum_{\mathbf{x} \in Q} \log \hat{p}(\mathbf{x} \mid Q)$ (20)

where $\hat{p}(\cdot \mid Q)$ is a parametric density estimate based on the current query set. Note that the current query set $Q$ is always nonempty here, as we do not consider this measure for the selection of the first sample in each query round.
These criteria can be weighted individually in a linear combination. However, 4DS uses a self-adaptation scheme to determine the weights of the first three criteria depending on the learning performance of the actively trained classifier in each AL round. Only the weight for the diversity criterion has to be set by the user. If the selection of more than one sample per AL round is not necessary, 4DS can thus be regarded as parameter-free. 4DS adapts the weights according to the following idea: In the initial rounds of an AL process, 4DS focuses on the class distribution measure to explore the input space of the classifier. Later, if the classification performance of the classifier deteriorates, the density criterion is favored for choosing representative samples; otherwise the distance criterion is favored for fine-tuning the current decision boundary, and vice versa. For more details see RS13 ().
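The linear combination and the weight adaptation idea can be sketched as follows; the min-max normalization and the fixed shift step are our assumptions, the exact adaptation rule is given in RS13 ():

```python
import numpy as np

def combine_criteria(scores, weights):
    # Min-max normalize each criterion over the candidates, then form the
    # weighted linear combination; returns the index of the best candidate.
    # Note: the distance criterion must be negated by the caller, since
    # small |f(x)| (not large) indicates an informative sample.
    total = None
    for name, w in weights.items():
        s = np.asarray(scores[name], dtype=float)
        rng = s.max() - s.min()
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        total = w * s if total is None else total + w * s
    return int(np.argmax(total))

def adapt_weights(weights, acc_now, acc_prev, step=0.1):
    # If the learning curve deteriorates, shift weight from the distance
    # criterion to the density criterion (representative samples), else back.
    w = dict(weights)
    src, dst = (('distance', 'density') if acc_now < acc_prev
                else ('density', 'distance'))
    shift = min(step, w[src])
    w[src] -= shift
    w[dst] += shift
    return w
```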
4.4 Extended Poolbased Active Learning Cycle
The traditional pool-based AL (PAL) cycle typically starts with a large pool $U$ of unlabeled samples and a small set $L$ of labeled samples (with $|L| \ll |U|$), and a classifier $C$ is trained initially based on $L$. Then, in each query round a query set $Q$ of unlabeled, informative samples is determined by means of a selection strategy $S$, which takes into account the “knowledge” contained in $C$, and presented to an oracle (or a human expert) in order to be labeled. Then, $Q$ is added to $L$, removed from $U$, and $C$ is updated. If a given stopping criterion is met, PAL stops; otherwise the next query round starts.
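The cycle reads, in a minimal sketch (the `train`, `select`, `label`, and `stop` callbacks stand in for the concrete classifier, selection strategy, oracle, and stopping criterion):

```python
def pool_based_al(U, L, train, select, label, stop):
    # Generic pool-based AL cycle: train on L, query a set Q from U,
    # let the oracle label it, move it to L, retrain, repeat.
    clf = train(L)
    while not stop(clf, L):
        Q = select(clf, U, L)
        L = L + [(x, label(x)) for x in Q]
        U = [x for x in U if x not in Q]
        clf = train(L)
    return clf, L
```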
To apply PAL successfully, it is necessary to choose the selection strategy carefully. Depending on this choice, the initially labeled training data set also has a decisive impact on the learning performance of the actively trained classifier. In general, the following rule applies: The “simpler” the selection strategy, the more labeled samples, well distributed in the input space of the classifier, have to be available at the beginning of the AL process, because these samples can be regarded as the “initial knowledge” contained in the classifier. But in the literature on PAL the appropriate selection of an initial training data set is largely ignored HND10 ().
These challenges can be met if we consider structure information for the active sample selection and for the determination of the initially labeled samples. In the simplest case, structure information can be gathered with any kind of clustering technique, and samples close to cluster centers are a good choice for the initial training set, for example. However, for real applications, AL should start “from scratch”, i.e., without any initially labeled data. Therefore, we extend the traditional PAL cycle in the following ways: (1) The data structure is captured with a probabilistic, generative mixture model that is estimated based on the unlabeled pool in an unsupervised way. (2) The initial labeled set is empty and, therefore, in the first query round (initialization round) we determine the first labeled set with a density-based strategy that only considers the structure information contained in the mixture model. (3) A transductive learner is used that revises the generative model in each cycle based on the available class information (cf. Section 4.2).
5 Simulation Experiments
In this section we evaluate experiments performed on 20 data sets and compare our new approach, which integrates structure information into the AL training of discriminative classifiers such as SVM, to AL approaches that capture this information in a different way (LapSVM) or neglect it. First, this section describes the setup of the experiments. Second, we visualize the behavior of an actively trained SVM that recognizes structure information with the help of the RWM kernel, in comparison to an SVM with RBF kernel that does not use this information. For this, we use data sets with two-dimensional input spaces and apply uncertainty sampling (US) in order to outline the impact of the different kernels on the AL behavior of the SVM. Third, simulation experiments are performed on 20 benchmark data sets to compare our new approach to related techniques numerically and in some more detail.
5.1 Setup of the Experiments
In this section, we describe the classifier paradigms and the selection strategies that are used in our simulation experiments. We sketch the main characteristics of the data sets and define our evaluation criteria.
5.1.1 Classifiers
In our experiments we compare actively trained SVM with different datadependent kernels – RWM, GMM, and LAP kernels – and dataindependent kernels, such as the RBF kernel. For the SVM with LAP kernel (also called LapSVM or Laplacian SVM), we ported the MATLAB implementation of Melacci Melacci12 () to Java and adapted it to cope with multiclass problems.
To find good estimates for the hyperparameters of the VI algorithm (used for training the mixture density models capturing structure information in unlabeled data) in case of the RWM and GMM kernels, we used an exhaustive search on the unlabeled training data. To rate a considered set of VI parameters we applied an interestingness measure called representativity FKS11 (). It measures the dissimilarity of the mixture density model trained with VI and a density estimate resulting from a non-parametric Parzen window estimation. As dissimilarity measure we used the symmetric Kullback–Leibler divergence instead of the Hellinger distance mentioned in FKS11 ().
To get good parametrization results regarding the SVM (with corresponding kernel) we performed for each fold of the (outer) 5-fold cross-validation an inner 4-fold cross-validation on the labeled set (after execution of the initialization round). That is, the parameters of the SVM are determined only once during the AL process, whereby the inner cross-validation splits the labeled set into a validation set (one quarter) and a training set (three quarters). To rate a considered parameter combination we determined the classification performance of the SVM on the validation and training parts (the expected error) simultaneously. However, at the beginning of the AL process we used the whole training set (i.e., without class assignments) for capturing structure information. That is, all samples in this set are used to determine the Laplacian graph in case of the LAP kernel and to determine the mixture model in case of the RWM and GMM kernels. Overall, the test data of the (outer) cross-validation are never used for any optimization or parametrization purposes.
The penalty parameter $C$ and the kernel width $\sigma$ were varied over a grid of candidate values; of the four additional parameters of the LAP kernel, the neighborhood size and the degree were kept fixed. To account for information from categorical input dimensions we adapted all kernels in the same way as the RWM kernel (described in Section 4.3). Therefore, to find the best values of $\lambda_{\mathrm{cont}}$ (weighting factor of continuous input dimensions) and $\lambda_{\mathrm{cat}}$ (weighting factor of categorical input dimensions) we varied both from 0 to 1 in fixed step sizes (for the data sets Australian, Credit A, Credit G, Heart, and Pima that have categorical attributes, cf. Table 1).
5.1.2 Selection Strategies
For selecting informative samples we used three different selection strategies in our suite of AL experiments: random sampling (Random), uncertainty sampling (US), and our 4DS (selection strategy based on distance, density (data), diversity, and distribution (class) information). The latter strategy takes structure information into account for sample selection, whereas the other ones do not: Random sampling chooses samples randomly with uniform selection probability. It was shown in Cawlay11 () that this trivial approach outperforms some existing, at first glance more sophisticated selection strategies. The US strategy TK02 () queries the sample for which the SVM is currently most uncertain concerning its class assignment, i.e., the sample with the smallest distance to the current decision boundary. 4DS RS13 () considers the distance of samples to the decision boundary, too, and additionally the density in regions where samples are selected. Furthermore, it indirectly considers the unknown class distribution of the samples by utilizing the responsibilities of the model components for these samples, as well as the diversity of the samples in the query set that are chosen for labeling. The combination of the four measures in 4DS is self-optimizing, since the weights for the first three measures depend on the performance of the actively trained classifier and only the weight parameter for the last measure (diversity) has to be chosen by the user. Here, we vary this weight from zero to one and choose the best value according to a cross-validation on the training data.
In addition, it should be mentioned that, for a fair comparison, the classifiers are retrained after each query of one sample for the strategies Random and US and after each query of five samples for the strategy 4DS. Doing so, 4DS has to select samples based on “outdated” information in each query, so the comparison is certainly not biased towards 4DS. Moreover, in RS13 () it has been shown that 4DS outperforms, based on various evaluation criteria, related selection strategies such as ITDS (information theoretic diversity sampling), DWUS (density weighted uncertainty sampling), DUAL (dual strategy for active learning), PBAC (prototype based active learning), and 3DS (a technique we proposed earlier in RS11 ()) when training a generative classifier (CMM) actively.
5.1.3 Data Sets
For our experiments, we use the MNIST data set MNIST15 () and 20 benchmark data sets: 14 real-world data sets (Australian, Credit A, Credit G, Ecoli, Glass, Heart, Iris, Page Blocks, Pima, Seeds, Vehicle, Vowel, Wine, and Yeast) from the UCI Machine Learning Repository AN07 (), two real-world (Phoneme and Satimage) and two artificial data sets (Clouds and Concentric) from the UCL Machine Learning Group UCL14 (), and two further artificial data sets, Ripley, suggested in Ripley96 (), and Two Moons, suggested in Melacci12 (). In order to obtain meaningful results regarding the performance of our new approach, we considered three requirements for the selection of the data sets: First, the majority of the data sets should come from real-life applications. Second, the data sets should have very different numbers of classes. And third, some of the data sets should have unbalanced class distributions. The description of the benchmark data sets and the MNIST data set is summarized in Table 1.
| Data Set | Number of Samples | Continuous Attributes | Categorical Attributes | Number of Classes | Class Distribution (in %) |
|---|---|---|---|---|---|
| Australian | 690 | 6 | 8 | 2 | 55.5, 44.5 |
| Clouds | 5000 | 2 | – | 2 | 52.2, 50.0 |
| Concentric | 2500 | 2 | – | 2 | 36.8, 63.2 |
| Credit A | 690 | 6 | 9 | 2 | 44.5, 55.5 |
| Credit G | 1000 | 7 | 13 | 2 | 70.0, 30.0 |
| Ecoli | 336 | 7 | – | 8 | 42.6, 22.9, 15.5, 10.4, 5.9, 1.5, 0.6, 0.6 |
| Glass | 214 | 9 | – | 6 | 32.7, 35.5, 7.9, 6.1, 4.2, 13.6 |
| Heart | 270 | 6 | 7 | 2 | 44.4, 55.6 |
| Iris | 150 | 4 | – | 3 | 33.3, 33.3, 33.3 |
| Page Blocks | 5473 | 10 | – | 5 | 89.8, 6.0, 0.5, 1.6, 2.1 |
| Phoneme | 5404 | 5 | – | 2 | 70.7, 29.3 |
| Pima | 768 | – | 8 | 2 | 65.0, 35.0 |
| Ripley | 1250 | 2 | – | 2 | 50.0, 50.0 |
| Satimage | 6345 | 5 | – | 6 | 24.1, 11.1, 20.3, 9.7, 11.1, 23.7 |
| Seeds | 210 | 7 | – | 3 | 33.3, 33.3, 33.3 |
| Two Moons | 800 | 2 | – | 2 | 50.0, 50.0 |
| Vehicle | 846 | 18 | – | 4 | 23.5, 25.7, 25.8, 25.0 |
| Vowel | 528 | 10 | – | 11 | 9.1, 9.1, 9.1, 9.1, 9.1, 9.1, 9.1, 9.1, 9.1, 9.1, 9.1 |
| Wine | 178 | 13 | – | 3 | 33.1, 39.8, 26.9 |
| Yeast | 1484 | 8 | – | 10 | 16.4, 28.1, 31.2, 2.9, 2.3, 3.4, 10.1, 2.0, 1.3, 0.3 |
| MNIST | 70000 | 784 | – | 10 | 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0 |
In our experiments on the 20 benchmark data sets, we performed a z-score normalization for all data sets and conducted a stratified 5-fold cross-validation, as sketched in Fig. 2. In each round of the outer cross-validation, one subset is kept out as test set. Of course, the test set is not considered for any parametrization purposes. The other four subsets build the pool of unlabeled samples (cf. Fig. 2(a)), such that our AL approach starts with an empty training set. Consequently, in the first query round an SVM classifier is not yet given and no “knowledge” can be considered for sample selection. Therefore, in the first AL round (initialization) we select a small number of samples with the help of a density-based approach that uses only the structure information captured in the mixture density model. Based on the class information for these samples, which build the initial labeled set, the parameters of the SVM are determined with a grid search technique. This means, as mentioned before, that the SVM is parametrized only once during the AL process (e.g., for a two-class problem only eight labeled samples are used for parametrization purposes). Moreover, the data splits are chosen identically for all actively trained classifier paradigms. We decided to actively select not more than 500 samples from each data set, apart from Ecoli (270), Glass (171), Heart (216), Iris (120), Seeds (168), and Wine (142) because of their limited size. This number of labeled samples is taken as stopping criterion for the AL process.
For the experiment on the MNIST data set, we additionally reduced the number of input dimensions from 784 by applying a principal component analysis (PCA) and conducted the stratified 5-fold cross-validation, as described before, only on the corresponding 60000 training samples. Consequently, the 10000 test samples are never used for any parametrization or optimization purposes.
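The dimensionality reduction step can be sketched with a plain SVD-based PCA (the target dimensionality is left as a free parameter here):

```python
import numpy as np

def pca_reduce(X, n_components):
    # Center the data and project it onto the first principal components
    # obtained from the SVD of the centered data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```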
5.1.4 Evaluation Criteria
To assess our results numerically we used three evaluation measures: ranked performance (RP), data utilization rate (DUR), and area under the learning curve (AULC) (cf. CKS06 ()).
The first measure, RP, ranks the actively trained paradigms based on a non-parametric statistical Friedman test Friedman40 (). Basis for a rank is the classification accuracy on test data, measured at the PAL step for which the performance on the training data is optimal. This step is not necessarily the last PAL step, when the maximum number of actively selected samples – 140, 170, 215, 420, or 500, respectively (see above) – is reached. Based on these test accuracies, the Friedman test ranks – considering a given significance level $\alpha$ – the $k$ classifiers for each of the $N$ data sets separately, in the sense that the best performing classifier (highest accuracy) gets the lowest rank, a rank of 1, and the worst classifier (lowest accuracy) the highest rank, a rank of $k$. In case of ties, the Friedman test assigns averaged ranks. Let $r_i^j$ be the rank of the $i$-th classifier on the $j$-th data set; then the Friedman test compares the classifiers based on the averaged ranks $R_i = \frac{1}{N}\sum_{j} r_i^j$. Under the null hypothesis, which claims that all classifiers are equivalent in their performance and hence their averaged ranks should be equal, the Friedman statistic $\chi^2_F$ is distributed according to the $\chi^2$ distribution with $k-1$ degrees of freedom JS11 (). The Friedman test rejects the null hypothesis if $\chi^2_F$ is greater than the critical value of the $\chi^2$ distribution. If the null hypothesis can be rejected, we proceed with the Nemenyi test Nemenyi63 () as post hoc test in order to show which classifiers perform significantly differently. Here, the performance differences of two classifiers are significant if the corresponding average ranks differ by at least the critical difference $CD = q_\alpha \sqrt{\frac{k(k+1)}{6N}}$, where the critical value $q_\alpha$ is based on the Studentized range statistic divided by $\sqrt{2}$. The results of the Nemenyi test can be visualized with the help of critical difference plots Demsar06 (). In these plots, non-significantly different classifiers are connected in groups (their rank difference is smaller than CD).
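The test statistics can be computed directly; the formulas follow the description above, and the `q_alpha` default below is the tabulated Nemenyi value for k = 5 classifiers at alpha = 0.05:

```python
import numpy as np

def friedman_statistic(ranks):
    # ranks: N data sets x k classifiers. Returns the average ranks R_i and
    # Friedman's chi^2_F = 12N / (k(k+1)) * (sum_i R_i^2 - k(k+1)^2 / 4).
    N, k = ranks.shape
    R = ranks.mean(axis=0)
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4.0)
    return R, chi2

def critical_difference(k, N, q_alpha=2.728):
    # Nemenyi post hoc test: CD = q_alpha * sqrt(k(k+1) / (6N)).
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
```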
To summarize the ranked performances over all data sets, the average ranks and the numbers of wins are also determined for each strategy as described in CKS06 (). The number of wins is the number of data sets for which a paradigm performs best. Wins can be “shared” when different classifiers perform comparably on the same data set. That is, a good paradigm yields a low average rank and a large number of wins.
The second measure, DUR, determines the fraction of samples that must be labeled to achieve good classification results. To measure this, we first define the target accuracy as the average accuracy achieved by a baseline strategy over five folds within a given fraction of the maximum number of actively selected samples (see above). Here, we use the strategy US to actively train an SVM with RBF kernel as baseline. The data utilization rate (DUR) (cf. CKS06 ()) is then the minimum number of samples needed by each of the other strategies to reach the target accuracy, divided by the number of samples needed by US to actively train an SVM with RBF kernel. Simulations where a strategy does not reach the target accuracy are reported explicitly. The DUR indicates how efficiently a selection strategy uses the data, but it does not reflect detailed performance changes up to the point when the target accuracy is reached. To summarize over all data sets, we again determine the mean and the number of wins for each strategy. A good strategy yields a low mean DUR (in particular, it should be lower than one) and a large number of wins.
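A sketch of the DUR computation on two learning curves (accuracy after each labeled sample):

```python
import numpy as np

def data_utilization_rate(curve, baseline_curve, target):
    # Number of labeled samples a strategy needs to reach the target accuracy,
    # divided by the number the baseline needs; None if never reached.
    def first_reach(c):
        hit = np.where(np.asarray(c) >= target)[0]
        return int(hit[0]) + 1 if hit.size else None
    n, n_base = first_reach(curve), first_reach(baseline_curve)
    return None if n is None or n_base is None else n / n_base
```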
The third measure is the AULC CKS06 (), again measured against the baseline strategy: it is the difference between the area under the learning curve of a given strategy and that of the baseline strategy (SVM with RBF kernel actively trained with US). A negative value indicates that the strategy (on average over five folds) performs worse than the baseline strategy. The AULC is also calculated up to the maximum number of actively selected samples. In contrast to the previous measures, it is sensitive to performance changes throughout the AL process. For example, if two strategies reach the target accuracy with the same number of samples (i.e., the same DUR), one might have a higher AULC if its performance improvements occur in an earlier phase of the PAL process. To summarize over all data sets, we again determine the mean as well as the number of wins. A good strategy should have a high AULC (in particular, it should be positive) and a large number of wins.
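A sketch of the AULC difference, using the trapezoidal rule with unit spacing between queries:

```python
def area(curve):
    # Trapezoidal rule with unit spacing between successive queries.
    return sum((curve[i] + curve[i + 1]) / 2.0 for i in range(len(curve) - 1))

def aulc_difference(curve, baseline_curve):
    # Area under the learning curve minus that of the baseline strategy;
    # negative values indicate worse-than-baseline performance on average.
    return area(curve) - area(baseline_curve)
```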
5.2 Behavior of actively trained SVM using Structure Information
In this section we compare the behavior of an actively trained SVM that recognizes structure information with the help of the RWM kernel to that of an SVM with RBF kernel that does not use this information. For visualization purposes, we use a two-dimensional, artificially generated data set, called Clouds, which was taken from the UCL Machine Learning Group UCL14 () (for more information see Section 5.1.3). On the Clouds data set we conducted a z-score normalization and split the data in a 5-fold cross-validation into training and test sets. For a fair comparison we trained the SVM with the considered kernels on identically chosen data splits (training and test). Figs. 3 and 4 show the active training process at different iterations only for the first cross-validation fold. Here, the parts (a)–(e) display the actively trained SVM after the execution of the initialization round and after four subsequent query stages. In part (f) of each figure the development of the classification performance regarding the test and training data is pictured. During the initialization round of the AL process, eight samples are selected with the density-based strategy (colored orange). We stop the AL process of the SVM after a fixed number of selected samples. As we only want to show the impact of the different kernels on the AL behavior, we applied the selection strategy US with the evaluation measure smallest distance (cf. TK02 ()) to select informative samples. For this reason, the SVM is retrained in each query round after the selection of one sample, which is labeled by an oracle and added to the training set.
The parameters of the SVM with RBF kernel and RWM kernel are determined with a second (inner) cross-validation that only uses the eight labeled samples of the initialization round. Here we applied a grid search, in which we varied the penalty parameter $C$ and the kernel width $\sigma$. We estimate the density model that the RWM kernel uses for capturing structure information in an unsupervised way with VI, whose hyperparameters are determined with a grid search (on the unlabeled pool), too.
Fig. 3(a) shows that the SVM with RWM kernel already yields a high accuracy on the test data after the execution of the initialization round of our PAL process. For this, only a small fraction of the available training data has to be labeled. It can be seen that the mixture model of the RWM kernel (gray colored ellipses) models the “true” data distribution quite well. Thus, the resulting decision boundary of the SVM with RWM kernel (trained only with the eight labeled samples) approximates the location of the “true” decision boundary between the two classes of the Clouds data set very well. Fig. 3(f) illustrates the corresponding learning curves of the SVM with RWM kernel on the training and test data. It can be seen that the SVM yields only slight improvements regarding the classification accuracy on test data in additional query rounds. After a few query rounds the SVM with RWM kernel reaches nearly the same test accuracy that a non-actively trained SVM with RBF kernel yields based on the entire labeled training set. In addition, the ellipses (with gray colored background) in Figs. 3(a)–3(e) show that the mixture model was not adapted during the PAL process based on the class information of labeled samples. This is because, on the one hand, the processes of the Clouds data set generate normally distributed samples, so that the VI algorithm uses the “right” number of components to model the four data “generating” processes, and, on the other hand, based on the class information that becomes available during the PAL process, no components are recognized as “disputed” (see Section 4.2).
In Fig. 4, the PAL process of an SVM with RBF kernel is depicted. Already after the execution of the initialization round, cf. Fig. 4(a), a significant difference between the two kernels can be observed. An SVM with RBF kernel initially yields a distinctly lower accuracy on the test data than the SVM with RWM kernel, and the SVM with RBF kernel “underestimates” the variance of the process that is located on the right hand side of the two-dimensional input space. This is due to the fact that an SVM with RBF kernel (in contrast to an SVM with RWM kernel) compares two given samples with the help of the Euclidean distance and does not use any structure information from the unlabeled samples to assess the similarity between samples. Figs. 4(b)–4(e) illustrate that the SVM with RBF kernel models the “true” decision boundary between the two classes the better, the more actively queried (and labeled) samples are considered for training. Consequently, the classification accuracy of the SVM with RBF kernel increases significantly during the PAL process, cf. Fig. 4(f). However, the SVM with RWM kernel achieves the same test accuracy based on the initial training set consisting of only eight labeled samples.
This short, preliminary study shows that an SVM that uses structure information with the help of a data-dependent kernel (here: the RWM kernel) shows superior AL behavior compared to an SVM (with RBF kernel) that neglects this information. This means that by considering structure information an SVM may achieve higher classification accuracies with a smaller number of labeled samples.
5.3 Comparison based on Benchmark Data Sets
To evaluate the benefit of using structure information for active SVM training in more detail, we conduct experiments with publicly available benchmark data sets. Thus, we are able to come to statistically significant conclusions concerning our new approach. During our AL process, structure information can be considered in different ways: First, this information can be used only by the selection strategy to query the most informative samples in each AL cycle. Second, it can also be taken into account directly for the SVM training (i.e., for finding the corresponding support vectors), e.g., with data-dependent kernels (RWM, GMM, or LAP kernels). And third, these possibilities can be combined. Consequently, the following questions shall be answered by our experiments: Does it make sense to consider structure information for training an SVM actively? And how should this information be integrated into this AL process?
The following experiments compare the AL performance of SVM with RWM, GMM, RBF, and LAP kernels. For a comprehensive picture, we take into account an SVM with RBF kernel as stateoftheart method and an SVM with LAP kernel (LapSVM) as best practice approach for an SVM based on a datadependent kernel. Both paradigms are trained actively with Uncertainty Sampling (US).
For a visual assessment of the AL behavior of the SVM (with the corresponding kernels) we generate learning curves. These curves outline the classification performance depending on the number of queries, i.e., iterations of the PAL algorithm (mean accuracy on the five test sets in the cross-validation versus size of after each query ).
In the following, we compare the performance of the actively trained classifiers by means of the three evaluation criteria RP, DUR, and AULC (cf. Section 5.1.4).
Table 2: RP results (test accuracies in %).
Data Set  RWM kernel (4DS)  GMM kernel (4DS)  LAP kernel (US)  RBF kernel (4DS)  RBF kernel (US)
Australian  85.65  84.93  86.67  84.93  84.06 
Clouds  88.92  82.82  83.96  81.08  77.20 
Concentric  99.64  99.28  99.68  99.56  99.52 
Credit A  85.65  85.36  85.36  85.07  84.35 
Credit G  72.40  72.00  75.50  72.00  71.20 
Ecoli  85.15  85.44  73.90  86.64  85.73 
Glass  71.01  68.70  48.57  66.82  65.89 
Heart  84.81  84.44  77.41  85.19  82.96 
Iris  98.00  97.33  96.00  98.00  96.67 
Page Blocks  94.52  90.39  93.20  94.35  93.06 
Phoneme  80.66  79.40  83.57  78.98  80.50 
Pima  75.00  75.78  75.78  75.00  76.04 
Ripley  90.40  89.68  89.12  89.04  88.96 
Satimage  86.33  86.09  82.19  85.30  75.39 
Seeds  97.62  95.71  90.48  92.86  91.43 
Two Moons  100.00  97.25  100.00  95.75  95.50 
Vehicle  76.84  79.54  69.62  81.32  80.02 
Vowel  93.23  71.92  86.57  80.91  77.98 
Wine  98.32  97.76  97.19  97.21  97.21 
Yeast  58.08  57.95  40.62  57.62  56.94 
Mean  86.11  84.09  81.77  84.38  83.03 
Rank  1.750  3.075  3.200  3.050  3.925 
Wins  11.0  0.0  4.5  3.5  1.0 
To compare the PAL processes of the five classifiers statistically, we applied the first evaluation criterion RP. This criterion uses the test accuracy of each classifier in the query round (iteration of the AL process) in which it achieves the highest accuracy on the training data. The corresponding results for the test data are given in Table 2. Here, the best results (classifiers to which the Friedman test assigns the smallest rank numbers) are highlighted in boldface. With five classifiers and 20 data sets, Friedman’s is distributed according to a distribution with degrees of freedom. The critical value of for is and, thus, smaller than Friedman’s , so we can reject the null hypothesis. With the Nemenyi test, we compute the critical difference to investigate which actively trained classifiers perform significantly differently. The corresponding CD plot is shown in Fig. 5. The last two lines in Table 2 show that the SVM with RWM kernel, trained actively with 4DS, performs better than all other classifiers on data sets (wins) and also receives the smallest average rank of . The second best rank is achieved by the SVM with RBF kernel, also trained actively with 4DS. Regarding the number of wins, however, the SVM with LAP kernel (actively trained with US) yields the second best result of wins. A look at the CD plot confirms that the SVM with RWM kernel yields significantly better results (on the test data) than the other classifiers, as only the SVM with RWM kernel is contained in the winner group. Furthermore, the SVMs with GMM, RBF, and LAP kernels are located in the same group, i.e., they do not yield significantly different results.
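The rank computation and the Nemenyi critical difference follow the standard procedure of Demšar (2006). A sketch with hypothetical accuracy values (the constant 2.728 is the tabulated q value for five classifiers at significance level 0.05; the `acc` matrix below is illustrative, not taken from Table 2):

```python
import math
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical accuracies: rows = data sets, columns = classifiers.
acc = np.array([
    [85.7, 84.9, 86.7, 84.9, 84.1],
    [88.9, 82.8, 84.0, 81.1, 77.2],
    [99.6, 99.3, 99.7, 99.6, 99.5],
    [85.7, 85.4, 85.4, 85.1, 84.4],
])

# Friedman test: each data set is a block, each classifier a treatment.
stat, p_value = friedmanchisquare(*acc.T)

# Average ranks (rank 1 = best accuracy on a data set; ties share ranks).
avg_ranks = rankdata(-acc, axis=1).mean(axis=0)

def nemenyi_cd(k, n, q_alpha):
    """Nemenyi critical difference for k classifiers on n data sets."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

cd = nemenyi_cd(k=5, n=20, q_alpha=2.728)   # threshold used in the CD plot
```

Two classifiers differ significantly if their average ranks differ by more than `cd`.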
How large is the fraction of samples that have to be labeled to achieve good classification results? The second evaluation criterion, DUR, tries to answer this question. Table 3 contains the number of samples and the data utilization ratio needed to achieve a given target accuracy defined by the baseline approach (SVM with RBF kernel, actively trained with US). The DUR of an efficient AL process should be clearly lower than one, which is the DUR of the baseline approach. The last two lines of Table 3 show that only the SVM with the data-dependent RWM kernel achieves a mean DUR () smaller than one. The SVMs with GMM and RBF kernels (both actively trained with 4DS) achieve the worst results regarding mean DUR (highest) and wins (smallest). The SVM with RBF kernel, trained with US, achieves the second highest number of wins (five) and the second smallest mean DUR ().
Table 3: DUR results; the number of actively selected samples needed to reach the target accuracy is given in parentheses.
Data Set  RWM kernel (4DS)  GMM kernel (4DS)  LAP kernel (US)  RBF kernel (4DS)  RBF kernel (US)  Target Accuracy
Australian  0.556 (45)  0.926 (75)  0.667 (54)  0.926 (75)  1.000 (81)  86.75
Clouds  0.055 (9)  0.738 (121)  0.293 (48)  1.299 (213)  1.000 (164)  76.19
Concentric  2.472 (351)  1.782 (253)  1.070 (152)  1.577 (224)  1.000 (142)  99.77
Credit A  2.341 (391)  1.611 (269)  1.443 (241)  1.509 (252)  1.000 (167)  90.40
Credit G  1.341 (401)  1.271 (380)  0.839 (251)  1.271 (380)  1.000 (299)  83.26
Ecoli  0.860 (154)  0.682 (122)  1.464 (262)  0.531 (95)  1.000 (179)  90.30
Glass  0.507 (72)  0.923 (131)  1.190 (169)  0.951 (135)  1.000 (142)  76.57
Heart  0.970 (98)  1.297 (131)  1.158 (117)  1.267 (128)  1.000 (101)  89.65
Iris  1.086 (38)  1.657 (58)  1.114 (39)  1.143 (40)  1.000 (35)  98.17
Page Blocks  0.771 (216)  1.568 (439)  0.864 (242)  0.807 (226)  1.000 (280)  92.80
Phoneme  0.845 (267)  1.019 (322)  1.070 (338)  1.009 (319)  1.000 (316)  81.53
Pima  1.460 (235)  2.012 (324)  2.075 (334)  1.820 (293)  1.000 (161)  80.58
Ripley  0.167 (11)  0.606 (40)  1.606 (106)  0.909 (60)  1.000 (66)  88.59
Satimage  0.147 (24)  0.147 (24)  0.742 (121)  0.196 (32)  1.000 (163)  74.55
Seeds  0.541 (20)  0.486 (18)  3.378 (125)  1.270 (47)  1.000 (37)  94.52
Two Moons  0.260 (13)  4.500 (225)  0.180 (9)  3.440 (172)  1.000 (50)  95.69
Vehicle  0.850 (357)  1.024 (430)  1.181 (496)  0.940 (395)  1.000 (420)  90.38
Vowel  0.335 (146)  0.537 (234)  0.532 (232)  0.757 (330)  1.000 (436)  80.04
Wine  2.867 (86)  1.633 (49)  1.167 (35)  1.867 (56)  1.000 (30)  99.86
Yeast  0.503 (144)  0.531 (152)  1.748 (500)  0.517 (148)  1.000 (286)  56.81
Mean  0.947  1.248  1.189  1.200  1.000
Wins  10.5  1.5  2.0  1.0  5.0
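The DUR computation itself is straightforward. A sketch, under the assumption that a learning curve is given as a list of test accuracies, one entry per query round:

```python
def queries_to_target(curve, target):
    """Number of query rounds until the learning curve first reaches
    the target accuracy; None if the target is never reached."""
    for i, acc in enumerate(curve, start=1):
        if acc >= target:
            return i
    return None

def data_utilization_ratio(curve, baseline_curve, target):
    """DUR: queries needed by a classifier relative to the baseline.
    Values below one indicate a more label-efficient AL process."""
    n = queries_to_target(curve, target)
    n_base = queries_to_target(baseline_curve, target)
    if n is None or n_base is None:
        return None
    return n / n_base
```

For instance, a classifier that reaches the target accuracy after 45 queries while the baseline needs 81 has a DUR of 45/81 ≈ 0.556, as for the Australian data set in Table 3.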
The third criterion, AULC, evaluates the learning speed of the actively trained classifiers. This is done by subtracting the area under the learning curve of the baseline classifier from that of a considered classifier. Consequently, an efficient PAL process should always reach a positive AULC. Table 4 presents the corresponding results. The SVM with RWM kernel, actively trained with 4DS, yields a positive AULC on of the data sets and a win on nine data sets. Consequently, the SVM with RWM kernel achieves the highest mean AULC of . In addition, the SVMs with GMM and RBF kernels (actively trained with 4DS) achieve positive AULC values, too. With respect to the AULC criterion it can be seen that the SVM with LAP kernel (trained with US) yields some very good but also many bad results. Overall, the SVM with LAP kernel yields the worst mean AULC of , despite a number of seven wins.
Table 4: AULC results relative to the baseline approach (SVM with RBF kernel, actively trained with US).
Data Set  RWM kernel (4DS)  GMM kernel (4DS)  LAP kernel (US)  RBF kernel (4DS)  RBF kernel (US)
Australian  1.081  0.002  1.851  0.002  0.000
Clouds  12.374  0.764  7.132  1.358  0.000
Concentric  0.363  0.338  0.595  0.210  0.000
Credit A  2.252  0.179  1.203  1.178  0.000
Credit G  2.337  2.337  3.339  2.337  0.000
Ecoli  0.547  0.469  7.333  0.915  0.000
Glass  4.164  1.196  8.635  0.174  0.000
Heart  0.496  0.098  1.779  0.243  0.000
Iris  0.102  0.003  0.047  0.017  0.000
Page Blocks  3.357  0.281  25.556  0.944  0.000
Phoneme  0.476  3.101  1.856  1.280  0.000
Pima  0.110  5.786  0.807  0.746  0.000
Ripley  2.183  1.061  0.489  0.004  0.000
Satimage  11.423  11.413  4.119  10.096  0.000
Seeds  0.725  0.800  3.394  0.008  0.000
Two Moons  4.389  0.811  4.387  0.583  0.000
Vehicle  4.785  2.642  7.687  2.616  0.000
Vowel  18.577  3.363  10.608  3.693  0.000
Wine  0.216  0.077  0.056  0.138  0.000
Yeast  8.821  6.364  24.153  6.031  0.000
Mean  2.978  0.761  2.094  0.821  0.000
Wins  9.0  2.0  7.0  2.0  0.0
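The AULC criterion reduces to a difference of two trapezoidal areas. A sketch, again assuming that learning curves are given as accuracy lists indexed by query round:

```python
import numpy as np

def aulc_difference(curve, baseline_curve):
    """Area under the learning curve minus that of the baseline.
    A positive value means the classifier learns faster than the
    baseline over the considered query rounds."""
    curve = np.asarray(curve, dtype=float)
    baseline = np.asarray(baseline_curve, dtype=float)
    queries = np.arange(1, len(curve) + 1)       # query rounds 1, 2, ...
    return np.trapz(curve, queries) - np.trapz(baseline, queries)
```

The baseline compared with itself yields zero, which is why the last column of Table 4 is identically 0.000.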
In the following we illustrate the AL behavior of our new approach for six of the data sets: Concentric, Credit A, Glass, Satimage, Seeds, and Vowel. Fig. 6 shows the learning curves (classification performance on test data, averaged over five folds) for the SVM with LAP and RBF kernels (trained with US) and the SVM with RWM, GMM, and RBF kernels (trained with 4DS), respectively. First, we can see that the actively trained SVMs with data-dependent kernels typically reach the classification accuracy of a non-actively trained SVM with RBF kernel faster than the actively trained SVM with a data-independent kernel (here: RBF kernel). This speaks for a synergistic effect between AL and SSL. Second, it can be seen that the integration of structure information into the PAL process of an SVM with the new data-dependent RWM kernel and the new selection strategy 4DS leads to a rapid and steep increase in classification accuracy on many of the investigated data sets (cf. Credit A, Glass, and Vowel). Third, the data sets Concentric and Glass show that active training of the SVM with LAP kernel combined with the selection strategy US may sometimes lead to very bad results. Fourth, there are data sets (e.g., Satimage or Vowel) on which the actively trained classifiers might yield higher performance if we allowed for an active selection of more than 500 samples.
5.4 Comparison based on the MNIST Data Set
To demonstrate the adaptability of our approach, the real-world MNIST MNIST15 () handwritten digit data set is considered. It is a widely used benchmark for classification problems in the context of supervised learning, since it incorporates a training set of gray-scale images of handwritten digits from to and a test set of additional images.
In this case study we compare the performance of our new approach (SVM with RWM kernel, actively trained with 4DS) to that of an SVM with RBF kernel, actively trained with US. To reduce the computational and temporal effort, we captured the structure information only once (before the AL process starts) in an unsupervised fashion with VI and used only default parameter values for the two kernels (RBF kernel: and ; RWM kernel: and ). In addition, we performed a PCA and decided to use the first principal components, as they cumulatively comprise about of the total variance.
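Selecting the number of principal components by a cumulative-variance threshold can be done directly with scikit-learn. A sketch on synthetic data; the 0.9 threshold and the data dimensions are placeholders, since the exact fraction and component count used in the experiments are not restated here:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))   # stand-in for flattened MNIST images

# A float in (0, 1) tells PCA to keep the smallest number of components
# whose cumulative explained variance exceeds that fraction.
pca = PCA(n_components=0.9, svd_solver="full")
X_reduced = pca.fit_transform(X)
```

`pca.explained_variance_ratio_` then reports how much of the total variance each retained component explains.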
In order to assess the results achieved by our new AL approach, we first look at some publications that also use the MNIST data set. To the best of our knowledge, there are only a few publications that attempt to solve this multiclass problem by means of AL. Some of them use only two classes from MNIST to simulate binary classification problems. In Beygelzimer2008 (), only the images representing the digits and are regarded. Additionally, the dimensions are reduced from 784 to 25 by performing a PCA, whereas the sizes of training and test set are decreased to samples each. The same binary classification problem vs. is addressed in Ganti2013 () and Orhan2015 (). The latter considers only images, which are randomly divided into two portions for training and test purposes. In Mazzoni2006 (), a subset of images for each digit is used to solve the binary classification problem vs. . Only images are meaningful, whereas the remaining are irrelevant, as they are neither nor . Further, the classification of digit vs. all other digits is addressed in Bordes2005 (). Similarly, subsets of different sizes ( , , and ) are used in Nguyen2004 (), pursuing the goal of separating the images of a given digit from the others. The classification performance is determined by averaging the sum of the false negatives and false positives over the two classes relative to the size of the subsets.
Research has also been conducted with AL paradigms that consider the MNIST data set as a multiclass problem. For example, after actively acquiring labels for training images, a classification accuracy of about is reached on test images in Dasgupta2011 (). An analogous division of images into training and test data is carried out in Dasgupta2008 (), and an accuracy of about is reached. The first images from the MNIST training set and the first from the MNIST test set are extracted for experimental purposes in Ji2012 (). The proposed AL method achieves a classification accuracy of less than on the test set. In Lefakis2007 (), a set of images is uniformly drawn from the MNIST training set and the proposed technique is tested on images from the MNIST test set. A classification accuracy of about is reported after acquiring labels for 100 samples. Experimental results based on the entire MNIST data set are reported in Beygelzimer2011 (), and an overall accuracy of about is reached after labeling images. An overview of paradigms that try to solve the multiclass problem with all classes is presented in Fig. 7. The percentage of labeled samples is relative to the size of the corresponding training set; e.g., in the case of Harmonic Gaussian, out of of the MNIST total training set has been used to achieve an accuracy of on of the MNIST total test set. Since different subsets of the suggested training set and/or of the test set are used, it is difficult to compare our paradigm to the results presented in other publications. Moreover, the size of the initially labeled set varies strongly across the published experimental results (sometimes only learning curves are presented), which makes a comparison even harder.
The results of the experiments conducted on the MNIST data set are shown in Fig. 8, which depicts the performance of the active training of SVM with RBF and RWM kernels on the MNIST test set. We started our AL process without any label information and selected samples (representing of the training set) by means of the proposed density-based selection strategy (cf. Alg. 2) in the first query round. These labeled samples are the same for both SVMs (RBF and RWM kernels). In the case of the SVM with RBF kernel and the US selection strategy, one sample is selected in each query round, whereas in the case of the SVM with RWM kernel five samples are chosen. It can easily be observed that the proposed paradigm achieves a steep rise in classification accuracy in the initial phase of the AL process and is able to maintain a good performance continuously throughout the whole AL process. Moreover, regarding only the initial samples, it reaches an accuracy of about , which is twice the accuracy of the SVM with RBF kernel (). As the AL process advances, the SVM with RBF kernel is able to reach the same accuracy as the SVM with RWM kernel (after about actively selected samples) and performs slightly better at the end.
In summary, it is quite difficult to compare the performance of our approach on the MNIST data set to that reported in the existing AL literature, due to substantially different partitionings into training set, test set, and initially labeled set, and the respective setups for parameter tuning, as shown in Fig. 7. Nevertheless, the behavior exhibited by our approach (see Figs. 6 and 8) underpins our claim that by capturing and exploiting structure information contained in unlabeled data, the performance of AL can be significantly increased if only a very small number of labeled samples is regarded, which is one of the main goals of AL. Moreover, we expect that the results of our new approach can be further improved if, on the one hand, we refine the density model based on the available class information (cf. Section 4.2) and, on the other hand, use parameter estimation methods to find good values for the kernel parameters (cf. Section 5.1.1).
6 Conclusion and Outlook
In this article we proposed and evaluated an effective and efficient approach that fuses generative and discriminative modeling techniques for the AL process. The proposed approach takes advantage of structure information in data, which is given by the spatial arrangement of the (un)labeled data in the input space. We have shown how structure information, captured by means of probabilistic models, can be iteratively improved at runtime and exploited to improve the performance of the AL process. Furthermore, we proposed a new density-based selection strategy for selecting the samples in the initial query round.
The key advantages of the proposed paradigm can be summarized as follows:

Due to the new density-based selection strategy, the AL process is able to start without any initially labeled samples.

Better performance (classification accuracy) is reached in an early stage of active learning, as the approach exploits the structure information contained in the unlabeled data.
In our future work we want to address the following questions: As the AL process advances and more labeled samples become available, is it possible to find better parameters that will improve the performance? How computationally intensive will this be? Is it feasible and viable to smoothly switch from an SVM with RWM kernel to an SVM with RBF kernel when enough label information is available? Finally, we need to investigate in depth how to parametrize the RWM kernel in the case of high-dimensional data sets.
References
 (1) A. Asuncion, D. Newman, UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/ (last access 26/02/2016).
 (2) M. Belkin, P. Niyogi, V. Sindhwani, Manifold regularization: A geometric framework for learning from labeled and unlabeled examples, Journal of Machine Learning Research 7 (2006) 2399–2434.
 (3) A. Beygelzimer, S. Dasgupta, J. Langford, Importance Weighted Active Learning, in: Proceedings of the 26th Annual International Conference on Machine Learning (ICML’09), Montreal, QC, 2009, pp. 49–56.
 (4) A. Beygelzimer, D. Hsu, Efficient active learning, in: Proceedings of the 28th International Conference on Machine Learning (ICML’11), Workshops, Bellevue, WA, 2011.
 (5) C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, 2006.
 (6) A. Bordes, Ş. Ertekin, J. Weston, L. Bottou, Fast Kernel Classifiers with Online and Active Learning, Journal of Machine Learning Research 6 (2005) 1579–1619.
 (7) G. C. Cawley, Baseline methods for active learning, in: JMLR: Workshop and Conference Proceedings 16, Sardinia, Italy, 2011, pp. 47–57.
 (8) C.C. Chang, C.J. Lin, LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology 2 (2011) 27:1–27:27.
 (9) O. Chapelle, B. Schölkopf, A. Zien (eds.), Semi-Supervised Learning, MIT Press, Cambridge, MA, 2006.
 (10) O. Chapelle, V. Sindhwani, S. S. Keerthi, Optimization techniques for semi-supervised support vector machines, Journal of Machine Learning Research 9 (2008) 203–233.
 (11) A. Culotta, A. McCallum, Reducing labeling effort for structured prediction tasks, in: Proceedings of the 20th National Conference on Artificial Intelligence (AAAI’05), Pittsburgh, PA, 2005, pp. 746–751.
 (12) M. Culver, D. Kun, S. Scott, Active learning to maximize area under the ROC curve, in: Proceedings of the Sixth International Conference on Data Mining (ICDM ’06), Hong Kong, China, 2006, pp. 149–158.
 (13) L. Cunhe, W. Chenggang, A new semi-supervised support vector machine learning algorithm based on active learning, in: Proceedings of the Second International Conference on Future Computer and Communication (ICFCC’10), Wuhan, China, 2010, pp. 638–641.
 (14) I. Dagan, S. P. Engelson, Committee-based sampling for training probabilistic classifiers, in: Proceedings of the Twelfth International Conference on Machine Learning (ICML’95), Catalonia, Spain, 1995, pp. 150–157.
 (15) C. K. Dagli, S. Rajaram, T. S. Huang, Utilizing information theoretic diversity for SVM active learning, in: Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 2006, pp. 506–511.
 (16) S. Dasgupta, Two faces of active learning, Theoretical Computer Science 412 (19) (2011) 1767–1781.
 (17) S. Dasgupta, D. Hsu, Hierarchical sampling for active learning, in: Proceedings of the 25th International Conference on Machine Learning (ICML’08), Helsinki, Finland, 2008, pp. 208–215.
 (18) J. Demšar, Statistical comparisons of classifiers over multiple data sets, Journal of Machine Learning Research 7 (2006) 1–30.
 (19) R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, John Wiley & Sons, Chichester, NY, 2001.
 (20) M. Fan, N. Gu, H. Qiao, B. Zhang, Sparse regularization for semi-supervised classification, Pattern Recognition 44 (8) (2011) 1777–1784.
 (21) D. Fisch, E. Kalkowski, B. Sick, S. J. Ovaska, In your interest  objective interestingness measures for a generative classifier, in: Proceedings of the third International Conference on Agents and Artificial Intelligence (ICAART ’11), Rome, Italy, 2011, pp. 414–423.
 (22) D. Fisch, B. Sick, Training of radial basis function classifiers with resilient propagation and variational Bayesian inference, in: International Joint Conference on Neural Networks (IJCNN ’09), Atlanta, GA, 2009, pp. 838–847.
 (23) C. Fook, M. Hariharan, S. Yaacob, A. Adom, A review: Malay speech recognition and audio visual speech recognition, in: International Conference on Biomedical Engineering, 2012, pp. 479–484.
 (24) M. Friedman, A comparison of alternative tests of significance for the problem of rankings, The Annals of Mathematical Statistics 11 (1) (1940) 86–92.
 (25) R. Ganti, A. Gray, Building bridges: Viewing active learning from the multiarmed bandit lens, in: Proceedings of the TwentyNinth Conference on Uncertainty in Artificial Intelligence (UAI’13), Bellevue, WA, 2013.
 (26) I. Guyon, G. Cawley, G. Dror, V. Lemaire, Results of the active learning challenge, in: Journal of Machine Learning Research: Workshop and Conference Proceedings 16, Sardinia, Italy, 2011, pp. 19–45.
 (27) A. Hofmann, C. Schmitz, B. Sick, Intrusion detection in computer networks with neural and fuzzy classifiers, in: O. Kaynak, E. Alpaydin, E. Oja, L. Xu (eds.), Artificial Neural Networks and Neural Information Processing (ICANN/ICONIP), vol. 2714 of LNCS, SpringerVerlag Berlin Heidelberg, 2003, pp. 316–324.
 (28) S. C. H. Hoi, R. Jin, J. Zhu, M. R. Lyu, Semi-supervised SVM batch mode active learning with applications to image retrieval, ACM Transactions on Information Systems (TOIS) 27 (3) (2009) 1–29.
 (29) R. Hu, B. M. Namee, S. J. Delany, Off to a good start: Using clustering to select the initial training set in active learning, in: Proceedings of the 23rd International Florida Artificial Intelligence Research Society Conference (AAAI’10), Daytona Beach, FL, 2010, pp. 26–31.
 (30) N. Japkowicz, M. Shah, Evaluating Learning Algorithms: A Classification Perspective, Cambridge University Press, New York, NY, USA, 2011.
 (31) M. Ji, J. Han, A Variance Minimization Criterion to Active Learning on Graphs, in: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS ’12), vol. 22, La Palma, Canary Islands, 2012, pp. 556–564.
 (32) T. Joachims, Transductive inference for text classification using support vector machines, in: Proceedings of the Sixteenth International Conference on Machine Learning, ICML ’99, San Francisco, CA, 1999, pp. 200–209.
 (33) J. Jun, I. Horace, Active learning with SVM, in: J. Ramón, R. Dopico, J. Dorado, A. Pazos (eds.), Encyclopedia of Artificial Intelligence, vol. 3, IGI Global, Hershey, PA, 2009, pp. 1–7.
 (34) J. Kremer, K. S. Pedersen, C. Igel, Active learning with support vector machines, Wiley Interdisciplinary Reviews. Data Mining and Knowledge Discovery 4 (4) (2014) 313–326.
 (35) L. Lefakis, M. Wiering, Semi-Supervised Methods for Handwritten Character Recognition using Active Learning, in: Proceedings of the Belgium–Netherlands Conference on Artificial Intelligence, Utrecht, Netherlands, 2007, pp. 205–212.
 (36) Y. Leng, X. Xu, G. Qi, Combining active learning and semi-supervised learning to construct SVM classifier, Knowledge-Based Systems 44 (2013) 121–131.
 (37) C. Li, C. Ferng, H. Lin, Active learning with hinted support vector machine, in: Proceedings of the Fourth Asian Conference on Machine Learning (ACML’12), Singapore, 2012, pp. 221–235.
 (38) M. G. Malhat, H. M. Mousa, A. B. ElSisi, Clustering of chemical data sets for drug discovery, in: International Conference on Informatics and Systems, 2014, pp. DEKM–11 – DEKM–18.
 (39) D. Mazzoni, K. L. Wagstaff, M. C. Burl, Active Learning with Irrelevant Examples, in: Proceedings of the 17th European Conference on Machine Learning (ECML’06), Berlin, Germany, 2006, pp. 695–702.
 (40) S. Melacci, Manifold regularization: Laplacian svm, http://www.dii.unisi.it/~melacci/lapsvmp/ (last access 26/02/2016).
 (41) S. Melacci, M. Belkin, Laplacian support vector machines trained in the primal, Journal of Machine Learning Research 12 (2011) 1149–1184.
 (42) MNIST, MNIST handwritten digit database, http://yann.lecun.com/exdb/mnist/ (last access 26/02/2016).
 (43) P. Nemenyi, Distributionfree Multiple Comparisons, Princeton University, Princeton, New Jersey, USA, 1963.
 (44) H. T. Nguyen, A. Smeulders, Active learning using pre-clustering, in: Proceedings of the 21st International Conference on Machine Learning (ICML’04), Banff, AB, 2004, pp. 623–630.
 (45) T. Ni, F.L. Chung, S. Wang, Support vector machine with manifold regularization and partially labeling privacy protection, Information Sciences 294 (0) (2015) 390–407, innovative Applications of Artificial Neural Networks in Engineering.
 (46) C. Orhan, Ö. Taştan, ALEVS: Active Learning by Statistical Leverage Sampling, http://arxiv.org/abs/1507.04155 (last access 02/02/2016).
 (47) S. J. Pan, Q. Yang, A survey on transfer learning, IEEE Transactions on Knowledge and Data Engineering 22 (10) (2010) 1345–1359.
 (48) I. S. Reddy, S. Shevade, M. N. Murty, A fast quasi-Newton method for semi-supervised SVM, Pattern Recognition 44 (10–11) (2011) 2305–2313.
 (49) T. Reitmaier, A. Calma, B. Sick, Transductive active learning – a new semi-supervised learning approach based on iteratively refined generative models to capture structure in data, Information Sciences 293 (2014) 275–298.
 (50) T. Reitmaier, B. Sick, Active classifier training with the 3DS strategy, in: IEEE Symposium on Computational Intelligence and Data Mining (CIDM ’11), Paris, France, 2011, pp. 88–95.
 (51) T. Reitmaier, B. Sick, Let us know your decision: Poolbased active training of a generative classifier with the selection strategy 4DS, Information Sciences 230 (2013) 106–131.
 (52) T. Reitmaier, B. Sick, The responsibility weighted Mahalanobis kernel for semisupervised training of support vector machines for classification, Information Sciences 323 (2015) 179–198.
 (53) B. Ripley, Pattern recognition and neural networks, http://www.stats.ox.ac.uk/pub/PRNN/ (last access 26/02/2016).
 (54) T. Scheffer, C. Decomain, S. Wrobel, Active hidden Markov models for information extraction, in: Proceedings of the Fourth International Conference on Advances in Intelligent Data Analysis (IDA’01), London, UK, 2001, pp. 309–318.
 (55) H. J. Scudder, Probability of error of some adaptive pattern recognition machines, IEEE Transactions on Information Theory (1965) 363–371.
 (56) B. Settles, Active learning literature survey, Computer Sciences Technical Report 1648, University of Wisconsin, Department of Computer Science (2009).
 (57) B. Settles, From theories to queries: Active learning in practice, in: Journal of Machine Learning Research: Workshop and Conference Proceedings 16, Sardinia, Italy, 2011, pp. 1–18.
 (58) B. Sick, Online tool wear monitoring in turning using timedelay neural networks, in: Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 1, 1998, pp. 445–448.
 (59) V. Sindhwani, S. S. Keerthi, O. Chapelle, Deterministic annealing for semisupervised kernel machines, in: Proceedings of the 23rd International Conference on Machine Learning (ICML’06), Pittsburgh, PA, 2006, pp. 841–848.
 (60) M. Song, H. Yu, W.S. S. Han, Combining active learning and semisupervised learning techniques to extract protein interaction sentences, BMC bioinformatics 12 (Suppl 12) (2011) 1–11.
 (61) S. Tong, D. Koller, Support vector machine active learning with applications to text classification, Journal of Machine Learning Research 2 (2002) 45–66.
 (62) UCL, UCL/MLG Elena Database, https://www.elen.ucl.ac.be/neuralnets/Research/Projects/ELENA/elena.htm (last access 26/02/2016).
 (63) X.J. Wang, W.Y. Ma, L. Zhang, X. Li, Multigraph enabled active learning for multimodal web image retrieval, in: Proceedings of the Seventh International Workshop on Multimedia Information Retrieval (MIR’05), Singapur, Singapur, 2005, pp. 65–72.
 (64) Z. Wang, S. Yan, C. Zhang, Active learning with adaptive regularization, Pattern Recognition 44 (10–11) (2011) 2375–2383.
 (65) K. Q. Weinberger, L. K. Saul, Distance metric learning for large margin nearest neighbor classification, Journal of Machine Learning Research 10 (2009) 207–244.
 (66) Z. Xu, K. Yu, V. Tresp, X. Xu, J. Wang, Representative sampling for text classification using support vector machines, in: Advances in Information Retrieval, vol. 2633 of Lecture Notes in Computer Science, Springer, 2003, pp. 393–407.