Pool-Based Sequential Active Learning for Regression
Abstract
Active learning is a machine learning approach for reducing the data labeling effort. Given a pool of unlabeled samples, it tries to select the most useful ones to label so that a model built from them can achieve the best possible performance. This paper focuses on pool-based sequential active learning for regression (ALR). We first propose three essential criteria that an ALR approach should consider in selecting the most useful unlabeled samples: informativeness, representativeness, and diversity, and compare four existing ALR approaches against them. We then propose a new ALR approach using passive sampling, which considers both the representativeness and the diversity in both the initialization and subsequent iterations. Remarkably, this approach can also be integrated with other existing ALR approaches in the literature to further improve the performance. Extensive experiments on 11 UCI, CMU StatLib, and UFL Media Core datasets from various domains verified the effectiveness of our proposed ALR approaches.
I. Introduction
Active learning (AL) [33], a subfield of machine learning, considers the following problem: if the learning algorithm can choose the training data, then which training samples should it choose to maximize the learning performance under a fixed budget, e.g., a maximum number of labeled training samples? As an example, consider emotion estimation in affective computing [28]. Emotions can be represented as continuous numbers in the 2D space of arousal and valence [30], or in the 3D space of arousal, valence, and dominance [26]. However, emotions are very subjective, subtle, and uncertain, so multiple human assessors are usually needed to obtain the ground-truth emotion values for each affective sample (video, audio, image, physiological signal, etc.). For example, 14-16 assessors were used to evaluate each video clip in the DEAP dataset [21], six to 17 assessors for each utterance in the VAM (Vera am Mittag in German, Vera at Noon in English) spontaneous speech corpus [16], and at least 110 assessors for each sound in the IADS-2 (International Affective Digitized Sounds 2nd Edition) dataset [4]. This is very time-consuming and labor-intensive. How should we optimally select the affective samples to label, so that an accurate regression model can be built with the minimum cost (i.e., the minimum number of labeled samples)? This is exactly the type of problem that AL targets.
Many AL approaches have been proposed in the literature [1, 34, 15, 33, 5, 9, 23, 6, 29, 11, 31, 32, 8]. According to the query scenario, they can be categorized into two groups [37]: population-based and pool-based. In population-based AL, the test input distribution is known, and training input samples at any desired locations can be queried. Its goal is to find the optimal training input density to generate the training input samples. In pool-based AL, a pool of unlabeled samples is given, and the goal is to optimally choose some of them to label, so that a model trained from them can best label the remaining samples.
Regardless of whether it is population-based or pool-based, AL is typically iterative [6]. It first builds a base model from a small number of labeled training samples, then chooses the most helpful unlabeled samples and queries their labels. The newly labeled samples are added to the training dataset and used to update the model. This process iterates until a termination criterion is met, e.g., the maximum number of iterations, the maximum number of labeled samples, or the desired cross-validation accuracy is reached. Depending on the number of unlabeled samples queried in each iteration, AL approaches can also be categorized into two types [6]: sequential AL, where one sample is queried each time, and batch-mode AL, where multiple samples are queried in each iteration.
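The iterative procedure just described can be written as a generic loop. The following is an illustrative skeleton, not code from the paper; `oracle`, `select_fn`, and `fit_fn` are hypothetical placeholders for the human annotator, the query strategy (e.g., QBC, EMCM, or GS below), and the base regression model.

```python
import numpy as np

def sequential_al(X, oracle, select_fn, fit_fn, init_idx, budget):
    """Generic pool-based sequential AL loop.

    oracle(i)              -> label of pool sample i (the human annotator)
    select_fn(model, X, L) -> index of the next pool sample to query
    fit_fn(X_l, y_l)       -> trained base model
    """
    labeled = list(init_idx)
    y = {i: oracle(i) for i in labeled}                  # initial labels
    model = fit_fn(X[labeled], np.array([y[i] for i in labeled]))
    while len(labeled) < budget:                         # termination criterion
        i = select_fn(model, X, labeled)                 # query strategy
        y[i] = oracle(i)                                 # ask the oracle
        labeled.append(i)
        model = fit_fn(X[labeled], np.array([y[j] for j in labeled]))
    return model, labeled
```

Any of the query strategies discussed in this paper can be plugged in as `select_fn` without changing the loop itself.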
This paper focuses on pool-based sequential active learning for regression (ALR). Although numerous AL approaches have been proposed in the literature [33], most of them are for classification problems. Among the limited number of ALR approaches [6, 7, 39, 5, 37, 40, 38, 36, 10, 15, 24], only a few can be used for pool-based sequential ALR [5, 7, 40, 39]. In this paper we review them, point out their limitations, and propose approaches to enhance their performance.
The main contributions of this paper are:

We extend three criteria for AL – informativeness, representativeness, and diversity – from classification to regression, and propose a generic framework that can be used to enhance a baseline ALR approach.

We instantiate several ALR approaches that consider informativeness, representativeness, and diversity simultaneously, and demonstrate their promising performances in extensive application domains.
The remainder of this paper is organized as follows: Section II introduces three essential criteria that should be considered in ALR, and then compares several existing poolbased sequential ALR approaches against them. Section III proposes several new poolbased sequential ALR approaches. Section IV describes the datasets to evaluate the effectiveness of the proposed ALR approaches, and the corresponding experimental results. Finally, Section V draws conclusions.
II. Existing Pool-Based Sequential ALR Approaches
In this section we propose three essential criteria for selecting unlabeled samples in pool-based sequential ALR, and then introduce a few existing pool-based sequential ALR approaches. We also compare these ALR approaches against the three criteria and point out their limitations.
Without loss of generality, we assume the pool consists of N d-dimensional samples x_n ∈ R^d, n = 1, ..., N, and that the first M samples have already been labeled with labels y_1, ..., y_M.
II-A. Three Essential Criteria in ALR
We propose the following three criteria that should be considered in pool-based sequential ALR for selecting the most useful unlabeled sample to label:

Informativeness, which means that the selected sample must contain rich information, so that labeling it would significantly benefit the objective function. Informativeness can be measured by uncertainty (entropy, distance to the decision boundary, confidence of the prediction, etc.), expected model change, expected error reduction, and so on [33]. For example, in query-by-committee (QBC), a popular AL approach for both classification and regression [29], the informativeness of an unlabeled sample can be computed from the disagreement among the committee members: the larger the disagreement, the more uncertain the sample, and hence the more informative it is.

Representativeness, which can be evaluated by the number of samples that are similar or close to a target sample (or its density [33]): the larger that number, the more representative the target sample. Clearly, the target sample should not be an outlier. For example, in Fig. 1, assume we want to build a regression model to predict the output from the two input features. The gray circle "B" is very likely an outlier, because it is very far away from all other samples in the input space, so labeling it could mislead the regression model and result in worse overall prediction performance. In other words, a sample like "B" should not be selected for labeling by ALR.

Diversity, which means that the selected samples should scatter across the full input space, instead of concentrating in a small local region. For example, in Fig. 1 the unlabeled samples form three clusters in the input space, so we should select samples from all three clusters to label, instead of focusing on only one or two of them. Assume the two green circles have been selected and labeled. Then, selecting the next sample from the third cluster (the one containing "A") seems very reasonable.
We should point out that similar criteria have been used in AL for classification (ALC). For example, Shen et al. [35] proposed two multi-criteria-based batch-mode ALC strategies, both of which consider informativeness, representativeness, and diversity simultaneously. Their Strategy 1 first chooses a few most informative samples, clusters them, and then selects the cluster centroids for labeling. Their Strategy 2 first computes a score for each sample as a linear combination of its informativeness and representativeness, selects the samples with high scores, and further down-selects among them the most diverse ones for labeling. Both strategies are specific to the support vector machine classifier. He et al. [18] considered uncertainty, representativeness, information content, and diversity in batch-mode ALC. Let k be the batch size. They compute the information content of an unlabeled sample from its uncertainty and representativeness, select the most informative samples, cluster them into k clusters by kernel k-means clustering, and finally select the k cluster centers for labeling.
However, to our knowledge, similar ideas have not been explored in ALR, except in our recent work on enhanced batch-mode ALR (EBMALR) [39]. It is not trivial to extend these concepts from classification to regression, because there could be many different strategies for integrating the three criteria, and different strategies could result in significantly different performances. EBMALR [39] is one such strategy. However, although our previous research [39] showed that it achieved promising performance in batch-mode ALR in a brain-computer interface application, this paper (Section IV-E) shows that it does not perform well in sequential ALR. So, how to integrate informativeness, representativeness, and diversity to design high-performance pool-based sequential ALR is still an open problem.
Next we will introduce several existing pool-based sequential ALR approaches, and compare their rationale against our three criteria.
II-B. Query-by-Committee (QBC)
QBC is a very popular pool-based AL approach for both classification [1, 34, 15, 33, 25] and regression [5, 9, 23, 29, 33, 11]. Its basic idea is to build a committee of learners from the existing labeled training dataset (usually through bootstrapping and/or different learning algorithms), and then select from the pool the unlabeled samples on which the committee disagrees the most to label.
In this paper we use the pool-based QBC for regression approach proposed by RayChaudhuri and Hamey [29]. It first bootstraps the M labeled samples into P copies, each containing M samples but with duplicates, and builds a regression model from each copy, i.e., the committee consists of P regression models. Let the pth model's prediction for the nth unlabeled sample be y_n^p. Then, for each of the N − M unlabeled samples, it computes the variance of the P individual predictions, i.e.,
σ_n^2 = (1/P) ∑_{p=1}^{P} (y_n^p − ȳ_n)^2,   (1)
where ȳ_n = (1/P) ∑_{p=1}^{P} y_n^p, and then selects the sample with the maximum variance σ_n^2 to label.
Comparing against the three criteria for ALR, QBC only considers the informativeness, but not the representativeness and the diversity.
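A minimal sketch of this QBC step (bootstrap committee, prediction variance as in (1), pick the maximum) might look as follows; the committee size and the ordinary-least-squares base model are illustrative choices, not prescribed by [29].

```python
import numpy as np

def qbc_select(X_labeled, y_labeled, X_pool, n_models=10, rng=None):
    """Return the pool index with the largest committee-prediction
    variance, Eq. (1). The committee is built from bootstrap replicas
    of the labeled set, each fit by ordinary least squares."""
    rng = np.random.default_rng(rng)
    n = len(y_labeled)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)            # bootstrap with duplicates
        A = np.c_[np.ones(n), X_labeled[idx]]
        w, *_ = np.linalg.lstsq(A, y_labeled[idx], rcond=None)
        preds.append(np.c_[np.ones(len(X_pool)), X_pool] @ w)
    variance = np.array(preds).var(axis=0)          # committee disagreement per sample
    return int(np.argmax(variance))
```

On noisy linear data the committee disagrees most on samples far from the labeled region, so extrapolation points tend to be queried first.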
II-C. Expected Model Change Maximization (EMCM)
Expected model change maximization (EMCM) is also a very popular AL approach for classification [33, 31, 32, 8], regression [7, 6], and ranking [12]. Cai et al. [7] proposed an EMCM approach for both linear and nonlinear regression. In this subsection we introduce their linear approach, as only linear regression is considered in this paper.
EMCM first uses all M labeled samples to build a linear regression model. Let its prediction for the nth unlabeled sample be ŷ_n. Then, like QBC, EMCM uses bootstrap to construct P linear regression models. Let the pth model's prediction for the nth unlabeled sample be y_n^p. Then, for each of the N − M unlabeled samples, it computes
g(x_n) = (1/P) ∑_{p=1}^{P} ‖(y_n^p − ŷ_n) x_n‖.   (2)
EMCM selects the sample with the maximum g(x_n) to label.
Comparing against the three criteria for ALR, EMCM only considers the informativeness, but not the representativeness and the diversity.
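The linear EMCM computation in (2) can be sketched as follows, with a bootstrap committee as in QBC; the helper names and the committee size are illustrative.

```python
import numpy as np

def emcm_select(X_labeled, y_labeled, X_pool, n_models=10, rng=None):
    """Return the pool index maximizing the expected model change of
    Eq. (2) for linear regression (after Cai et al.'s linear approach)."""
    rng = np.random.default_rng(rng)

    def fit_predict(X, y, X_new):
        A = np.c_[np.ones(len(X)), X]
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.c_[np.ones(len(X_new)), X_new] @ w

    y_hat = fit_predict(X_labeled, y_labeled, X_pool)   # full-model predictions
    n = len(y_labeled)
    g = np.zeros(len(X_pool))
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)                # bootstrap replica
        y_p = fit_predict(X_labeled[idx], y_labeled[idx], X_pool)
        g += np.abs(y_p - y_hat) * np.linalg.norm(X_pool, axis=1)  # ||(y^p - y_hat) x||
    return int(np.argmax(g / n_models))
```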
II-D. Greedy Sampling (GS)
Yu and Kim [40] proposed several very interesting passive sampling techniques for regression. Instead of finding the most informative sample based on the learned regression model, as in QBC and EMCM, they select the sample based on its geometric characteristics in the feature space. An advantage of passive sampling is that it does not require updating the regression model and evaluating the unlabeled samples in each iteration, so it is independent of the regression model.
In this paper we use the greedy sampling (GS) for regression approach [40], which is easy to implement and showed promising performance in [40]. Its basic idea is to select a new sample in a greedy way, such that it is located far away from the previously selected and labeled samples. More specifically, for each of the N − M unlabeled samples x_n, it computes the distance to each of the M labeled samples x_m, i.e.,
d_{nm} = ‖x_n − x_m‖,  m = 1, ..., M,   (3)
then it computes d_n as the minimum distance from x_n to the M labeled samples, i.e.,
d_n = min_{m=1,...,M} d_{nm},   (4)
and selects the sample with the maximum d_n to label.
Comparing against the three criteria for ALR, GS only considers the diversity, but not the informativeness and the representativeness.
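Because GS depends only on distances in the input space, it reduces to a few lines; the sketch below implements (3) and (4) directly.

```python
import numpy as np

def gs_select(X_labeled, X_pool):
    """Greedy sampling, Eqs. (3)-(4): return the pool index whose minimum
    distance to the already-labeled samples is the largest (max-min)."""
    d = np.linalg.norm(X_pool[:, None, :] - X_labeled[None, :, :], axis=2)  # Eq. (3)
    d_min = d.min(axis=1)                                                   # Eq. (4)
    return int(np.argmax(d_min))
```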
II-E. Enhanced Batch-Mode ALR (EBMALR)
We have already seen that each of the above three ALR approaches considers only one of the three essential criteria for ALR, so there is room for improvement. Additionally, all of them assume that we already have some initially labeled samples for training. Usually these samples are randomly selected, because the regression models cannot be constructed at the very beginning, when no or very few labeled samples are available (and hence QBC and EMCM cannot be applied). However, there can still be better initialization approaches that select more representative and diverse seed samples, without using any label information. One such approach, EBMALR [39], was proposed recently to consider informativeness, representativeness and diversity simultaneously, to enhance QBC and EMCM. Theoretically, batch-mode ALR can also be used for sequential ALR, by setting the batch size to one. Algorithm 1 shows the EBMALR algorithm when the batch size is one. It first uses k-means clustering to initialize d samples that are representative and diverse, and then uses a baseline ALR approach, such as QBC or EMCM, to select the subsequent samples sequentially.
Compared with QBC, EMCM and GS, EBMALR identifies outliers and excludes them from being selected, and considers both representativeness and diversity in initializing the first d samples. The original EBMALR (when the batch size is larger than one) considers both diversity and informativeness in each subsequent iteration; but when the batch size becomes one, EBMALR is no longer able to consider the diversity among the selected samples. As a result, its performance degrades significantly, as will be demonstrated in Section IV-E.
II-F. Design of Experiments (DOE)
Design of experiments (DOE) has been widely studied in statistics and used in various industries, for "exploring new processes and gaining increased knowledge of existing processes, followed by optimising these processes for achieving world-class performance [2]." Its primary goal is usually to extract the maximum amount of information from as few observations as possible, which is very similar to ALR. There are two typical categories of DOEs [2]:

Screening designs, which are smaller experiments to identify the critical few factors from the many potential trivial factors.

Optimal designs, which are larger experiments that investigate interactions of terms and nonlinear responses, and are conducted at more than two levels for each factor.
Optimal designs are particularly relevant to ALR. They provide theoretical criteria for choosing a set of points to label, under a specific set of assumptions and objectives. Compared with optimal designs, ALR approaches are generally more heuristic. In this paper we only consider ALR approaches.
III. Our Proposed ALR Approaches
In this section we first propose a basic pool-based sequential ALR approach that considers representativeness and diversity (RD) simultaneously, and then propose strategies to integrate it with QBC, EMCM and GS to further improve the performance.
III-A. The Basic RD ALR Approach
Assume initially none of the samples in the pool is labeled. Our proposed basic RD ALR approach consists of two parts: 1) better initialization of the first few samples, by considering both representativeness and diversity; and 2) selection of a new sample in each subsequent iteration, again considering both representativeness and diversity.
Since the input space has d dimensions, it is desirable to have at least d initially labeled samples to construct a reasonable linear regression model (it is also possible to initialize fewer than d samples and still construct a linear regression model, by using regularized regression such as ridge regression or LASSO; however, here we assume d is small, and initialize d samples directly for simplicity). To find the optimal locations of these d samples, we perform k-means clustering (k = d) on all N unlabeled samples, and then select from each cluster the sample closest to the cluster centroid for labeling. This initialization ensures representativeness, because each selected sample is a good representation of the cluster it belongs to. It also ensures diversity, because the d clusters cover the full input space.
The idea of using clustering for sample selection in ALR was motivated by similar ideas in ALC. For example, Nguyen and Smeulders [27] used k-medoids clustering to select representative and diverse samples. Kang, Ryu and Kwon [20] used k-means clustering to partition the unlabeled samples into different clusters, and then selected from each cluster the sample closest to its centroid as the most representative one. Hu, Namee and Delany [19] used deterministic clustering methods (furthest-first traversal, agglomerative hierarchical clustering, and affinity propagation clustering) to initialize the samples, to avoid the variations introduced by non-deterministic clustering approaches such as k-medoids and k-means. Krempl, Ha and Spiliopoulou [22] proposed a clustering-based optimized probabilistic active learning approach for online streaming ALC. However, to our knowledge, no existing pool-based sequential ALR approach uses clustering both to initialize the samples and to guide the subsequent selections.
After the first d samples are initialized by considering both representativeness and diversity, we start the iterative ALR process, in which one new sample is selected for labeling in each iteration. Consider an iteration where we already have M labeled samples, and need to determine which of the N − M unlabeled samples should be selected next for labeling. In the basic RD algorithm, we first perform k-means clustering on all N samples, with k = M + 1 clusters. Since there are only M labeled samples, at least one cluster does not contain any labeled sample. In practice some clusters may contain multiple labeled samples, so usually more than one cluster contains no labeled sample. We then identify the largest cluster that does not contain any labeled sample as the currently most representative cluster, and select the sample closest to its centroid for labeling. Note that this selection strategy also ensures diversity, because the identified cluster is located differently from all clusters that already contain labeled samples. We then repeat this process to label more samples, until the maximum number of labeled samples is reached.
The pseudo-code of the basic RD ALR approach is given in Algorithm 2, where Option 1 is used. Similar to GS, the basic RD approach also uses passive sampling, which does not require updating the regression model and evaluating the unlabeled samples in each iteration. So, it is independent of the regression model.
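One possible implementation of a single RD iteration is sketched below, under the assumption that k-means converges to a reasonable partition with no empty clusters. A tiny Lloyd's-algorithm k-means is included only to keep the sketch self-contained; a library implementation with multiple restarts would normally be preferred. (The d-sample initialization uses the same closest-to-centroid rule, with k = d and no labeled samples.)

```python
import numpy as np

def _kmeans(X, k, n_iter=100, rng=None):
    """Minimal Lloyd's-algorithm k-means (illustrative, no restarts)."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def rd_select(X, labeled_idx, rng=None):
    """One RD iteration: cluster all N samples into M + 1 clusters, take the
    largest cluster containing no labeled sample, and return the index of
    the sample closest to its centroid."""
    k = len(labeled_idx) + 1
    labels, centers = _kmeans(X, k, rng=rng)
    used = set(labels[list(labeled_idx)])
    candidates = [j for j in range(k) if j not in used]   # >= 1 by pigeonhole
    j = max(candidates, key=lambda c: int(np.sum(labels == c)))
    members = np.where(labels == j)[0]
    dist = np.linalg.norm(X[members] - centers[j], axis=1)
    return int(members[np.argmin(dist)])
```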
III-B. Integrating RD with QBC, EMCM, and GS
Interestingly, the basic RD ALR approach can be easily integrated with an existing poolbased sequential ALR approach for better performance. The pseudocode is also shown in Algorithm 2, where Option 2 or 3 or 4 is used. The initialization is the same as the basic RD ALR approach. In each iteration, it also selects a sample from the largest cluster that does not already contain a labeled sample for labeling. However, instead of selecting the one closest to its centroid, as in the basic RD ALR approach, now it uses QBC or EMCM or GS to select the most informative or most diverse sample to label. We expect that when QBC or EMCM is used, the integrated RD ALR approach can achieve better performance than the basic RD ALR approach, because now informativeness, representativeness and diversity are considered simultaneously.
III-C. Differences from EBMALR
Our proposed ALR approaches have some similarity with EBMALR [39], e.g., clustering is used to ensure representativeness and diversity. However, there are several significant differences:

This paper considers pool-based sequential ALR, whereas [39] considered pool-based batch-mode ALR. Theoretically, sequential ALR can be viewed as a special case of batch-mode ALR, with a batch size of one. However, as pointed out in Section II-E, when the batch size becomes one, EBMALR is no longer able to consider the diversity among the selected samples. As a result, its performance becomes significantly worse than that of the approaches proposed in this paper, as will be shown in Section IV-E.

This paper explicitly defines informativeness, representativeness and diversity as three criteria that should be considered in ALR, whereas EBMALR did not (although it implicitly used these concepts).

EBMALR also considered how to exclude outliers from being selected, but that required a user-defined parameter. Through extensive experiments, we found that this part is not critical in most applications, so this paper does not include it. As a result, our new algorithms do not require any user-defined hyper-parameters, which makes them easier to use.

In each subsequent iteration, EBMALR (when the batch size is larger than one) considered first the informativeness and then the diversity, whereas this paper considers first the diversity and then the informativeness or representativeness. Experiments showed that the latter results in better performance.

This paper compares the performances of nine ALR approaches on 11 datasets from various domains, whereas [39] only compared five ALR approaches in a brain-computer interface application.
IV. Experiments and Results
Extensive experiments are performed in this section to demonstrate the performance of the basic and integrated RD ALR approaches.
IV-A. Datasets
We used 10 datasets from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/index.php) and the CMU StatLib Datasets Archive (http://lib.stat.cmu.edu/datasets/) that have been used in previous ALR experiments [7, 6, 40]. We also used an IADS-2 dataset on affective computing from the University of Florida Media Core (http://csea.phhp.ufl.edu/media.html#midmedia). It consists of 167 acoustic emotional stimuli for experimental investigations of emotion and attention. 76 acoustic features were extracted [17], and principal component analysis was used to reduce them to 10 features. The goal was to estimate the continuous arousal value from these 10 features. A summary of the datasets is given in Table I. They cover a large variety of application domains.
Table I: Summary of the 11 datasets.

Dataset      | Source  | No. of samples | No. of raw features | No. of numerical features | No. of categorical features | No. of total features
Concrete-CS  | UCI     | 103  | 7  | 7  | 0 | 7
IADS-Arousal | UFL     | 167  | 10 | 10 | 0 | 10
Yacht        | UCI     | 308  | 6  | 6  | 0 | 6
autoMPG      | UCI     | 392  | 7  | 6  | 1 | 9
NO2          | StatLib | 500  | 7  | 7  | 0 | 7
Housing      | UCI     | 506  | 13 | 13 | 0 | 13
CPS          | StatLib | 534  | 11 | 8  | 3 | 19
Concrete     | UCI     | 1030 | 8  | 8  | 0 | 8
Airfoil      | UCI     | 1503 | 5  | 5  | 0 | 5
Wine-red     | UCI     | 1599 | 11 | 11 | 0 | 11
Wine-white   | UCI     | 4898 | 11 | 11 | 0 | 11
Some datasets contain both numerical and categorical features. For example, the autoMPG dataset contains seven raw features, among which six are numerical and one is categorical (Origin: US, Japan, Germany). We used one-hot encoding to convert the categorical values into numerical values, e.g., Origin-US was encoded as (1, 0, 0), Origin-Japan as (0, 1, 0), and Origin-Germany as (0, 0, 1). In this way, the converted feature space has 6 + 3 = 9 dimensions. Categorical features in the other datasets were converted similarly before regression. We then normalized each dimension of the feature space to mean zero and standard deviation one.
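The preprocessing just described (one-hot encoding of the categorical columns, followed by z-scoring every dimension) can be sketched as below; the function name is illustrative.

```python
import numpy as np

def one_hot_and_standardize(X_num, X_cat):
    """One-hot encode each categorical column, concatenate with the
    numerical columns, then normalize every dimension to zero mean
    and unit standard deviation."""
    blocks = [np.asarray(X_num, dtype=float)]
    for col in np.asarray(X_cat).T:                    # each categorical column
        cats = sorted(set(col))                        # one indicator per category
        onehot = np.array([[v == c for c in cats] for v in col], dtype=float)
        blocks.append(onehot)
    X = np.hstack(blocks)
    return (X - X.mean(axis=0)) / X.std(axis=0)        # z-score each dimension
```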
IV-B. Algorithms
We compared the performances of nine different sample selection strategies:

Baseline (BL), which randomly selects all samples.

RD, which is our basic RD ALR algorithm introduced in Section III-A.

QBC, which was introduced in Section II-B. The first d labeled samples are randomly initialized.

RD-QBC, which is RD integrated with QBC, introduced in Section III-B.

EMCM, which was introduced in Section II-C. The first d labeled samples are randomly initialized.

EEMCM, which is the EBMALR approach introduced in Algorithm 1, when EMCM is used as the base ALR approach.

RD-EMCM, which is RD integrated with EMCM, introduced in Section III-B.

GS, which was introduced in Section II-D. The first d labeled samples are randomly initialized.

RD-GS, which is RD integrated with GS, introduced in Section III-B.
All nine approaches built a ridge regression model from the labeled samples, with a fixed ridge parameter. We used ridge regression instead of ordinary linear regression because the number of labeled samples is very small, so ridge regression, with regularization on the coefficients, generally results in better performance than ordinary linear regression.
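Ridge regression with few labeled samples has a simple closed form; a sketch with an unpenalized intercept is shown below. The ridge parameter value is illustrative, since the paper's exact value is not stated in this excerpt.

```python
import numpy as np

def ridge_fit(X, y, lam=0.01):
    """Closed-form ridge regression, w = (Xc'Xc + lam*I)^(-1) Xc'yc,
    computed on centered data so the intercept is not penalized.
    lam is an illustrative placeholder value."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym                    # center so the intercept is free
    w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    b = ym - Xm @ w                            # recover the intercept
    return w, b
```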
IV-C. Evaluation Process
There are two model evaluation strategies: 1) inductive learning, in which we learn a model from labeled samples and evaluate it on samples we have not seen or known about; and 2) transductive learning, in which we evaluate the model on a known (test) set of unlabeled examples. Specific to pool-based ALR, the former means labeling a small number of samples from a fixed pool, building a regression model, and then predicting the outputs of unlabeled samples outside that pool, whereas the latter means labeling a small number of samples from a fixed pool, building a regression model, and then predicting the outputs of the remaining unlabeled samples in the same pool. This paper mainly focuses on transductive learning, but will also briefly report results on inductive learning in Section IV-I (more results can be found in the Supplementary Materials). Generally they are very similar.
The detailed evaluation process was similar to that used in our previous research on pool-based batch-mode ALR [39]. For each dataset, let P be the pool of all samples. We first randomly selected 80% of the samples as our training pool, denoted as T, initialized the first d labeled samples (d is the number of total features) either randomly or by EEMCM/RD, identified one sample to label in each subsequent iteration by the different algorithms, and built a ridge regression model. (For a fixed pool, EEMCM, RD, RD-QBC, RD-EMCM and RD-GS give a deterministic selection sequence, because there is no randomness involved, assuming k-means clustering always converges to its global optimum; so we need to vary the pool in order to study their statistical properties. We did not use the traditional bootstrap approach, i.e., sampling with replacement to obtain the same number of samples as the original pool, because bootstrap introduces duplicate samples into the new pool, whereas in practice a pool usually does not contain duplicates.) The maximum number of samples to be labeled was 10% of the size of T, constrained further for datasets that were too small or too large.
In the inductive learning setting, the model performance was evaluated on the 20% remaining samples that are in P but not in T, whereas in the transductive learning setting, the model performance was evaluated on the samples in T. We ran the above evaluation process 100 times for each dataset and each algorithm, to obtain statistically meaningful results.
IV-D. Performance Measures
After each iteration of each algorithm, we computed the root mean squared error (RMSE) and correlation coefficient (CC) as the performance measures.
In transductive learning, because different algorithms selected different samples to label, the remaining unlabeled samples in the pool were different for each algorithm, so we cannot compare their performances based on the remaining unlabeled samples. Because in poolbased ALR the goal is to build a regression model to label all samples in the pool as accurately as possible, we computed the RMSE and CC using all samples in the pool, where the labels for the selected samples were their true labels, and the labels for the remaining unlabeled samples were the predictions from the ridge regression model.
Let y_n be the true label for x_n, and ŷ_n be its label after an ALR iteration, i.e., ŷ_n = y_n if x_n has been selected for labeling, and ŷ_n is the prediction of the ridge regression model otherwise. Without loss of generality, assume the first M samples have been selected by an algorithm, and hence their true labels are known. Then,
RMSE = sqrt( (1/N) ∑_{n=M+1}^{N} (y_n − ŷ_n)^2 ),   (5)

CC = ∑_{n=1}^{N} (y_n − μ_y)(ŷ_n − μ_ŷ) / [ sqrt(∑_{n=1}^{N} (y_n − μ_y)^2) · sqrt(∑_{n=1}^{N} (ŷ_n − μ_ŷ)^2) ],   (6)

where

μ_y = (1/N) ∑_{n=1}^{N} y_n,   (7)

μ_ŷ = (1/N) ∑_{n=1}^{N} ŷ_n.   (8)
Note that we should consider the RMSE as the primary performance measure, because it is directly optimized in the objective function of ridge regression (CC is not). Generally as the RMSE decreases, the CC should increase, but there is no guarantee. In other words, we expect that an ALR approach performing well on the RMSE should also perform well on the CC, but this is not always true. So, the CC can only be viewed as a secondary performance measure.
In inductive learning, the RMSE and CC were computed directly on the 20% samples that are in P but not in T.
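The transductive RMSE and CC of (5)-(8) can be computed directly from the composite pool labels (true labels for the selected samples, model predictions for the rest):

```python
import numpy as np

def pool_rmse_cc(y_true, y_pred, selected):
    """Transductive RMSE and CC over the whole pool, Eqs. (5)-(8):
    selected samples keep their true labels, the remaining samples
    use the model predictions."""
    y = np.where(selected, y_true, y_pred)         # composite pool labels
    rmse = np.sqrt(np.mean((y_true - y) ** 2))     # Eq. (5)
    cc = np.corrcoef(y_true, y)[0, 1]              # Pearson CC, Eqs. (6)-(8)
    return rmse, cc
```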
IV-E. Experimental Results
The RMSEs and CCs for the nine algorithms on the 11 datasets in transductive learning, averaged over 100 runs, are shown in Fig. 2. Observe that:

Generally, as the number of labeled samples M increased, all nine algorithms achieved better performance (smaller RMSE and larger CC), which is intuitive, because more labeled training samples generally result in a more reliable ridge regression model.

RD, QBC, EMCM, EEMCM and GS achieved better performances than BL on almost all datasets, suggesting that all these ALR approaches were effective.

Generally RD-QBC achieved better performance than both RD and QBC, RD-EMCM achieved better performance than both RD and EMCM, and RD-GS achieved better performance than both RD and GS. These results suggest that our proposed RD ALR approach is complementary to QBC, EMCM and GS, and hence integrating them can outperform each individual ALR approach.
To see the forest for the trees, we also define an aggregated performance measure called the area under the curve (AUC) for the average RMSE and the average CC on each of the 11 datasets in Fig. 2. The AUCs for the RMSEs are shown in Fig. 3, where for each dataset, we used the AUC of BL to normalize the AUCs of the other eight algorithms, so the AUC of BL was always 1. For the RMSE, a smaller AUC indicates a better performance. Similarly, we also show the AUCs of the CCs in Fig. 3, where a larger AUC indicates a better performance. Observe that:

RD achieved smaller AUCs for the RMSE than BL on 10 of the 11 datasets, and larger AUCs for the CC than BL on 9 of the 11 datasets, suggesting that RD is indeed effective.

Among the four existing ALR approaches, GS achieved the best average performance on both RMSE and CC. The reason can be explained as follows. In poolbased ALR we compute the RMSE and CC on all remaining unlabeled samples, and a large error on a single sample may significantly deteriorate the overall performance, i.e., the samples make unequal contributions to the RMSE and CC. A diverse sample, which is far away from currently selected samples, is more likely to give such a large error (its neighborhood has not been sufficiently modeled). GS considers only the diversity, and makes sure the selected samples are somewhat uniformly distributed in the entire input space, i.e., all neighborhoods in the input space are considered, and hence large errors are less likely to occur. This is different from ALC, in which all misclassified samples make equal contributions to the classification error, no matter how far away they are from the decision boundary.

Generally, RD-QBC, RD-EMCM and RD-GS achieved the best performances among the nine algorithms.
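To make the diversity criterion concrete, GS-style selection can be sketched as a farthest-point strategy: each iteration picks the unlabeled sample whose minimum distance to the already-selected samples is largest. This is a minimal sketch; seeding with a fixed index `first` is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def greedy_sampling(X, n_select, first=0):
    """Diversity-only (GS-style) selection sketch.

    Repeatedly pick the pool sample whose minimum Euclidean distance
    to the already-selected samples is largest, so the selection
    spreads roughly uniformly over the input space.
    """
    selected = [first]
    # minimum distance from every pool sample to the selected set
    d_min = np.linalg.norm(X - X[first], axis=1)
    for _ in range(n_select - 1):
        nxt = int(np.argmax(d_min))  # farthest from all selected samples
        selected.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return selected
```

Because every neighborhood of the input space is eventually covered, no remaining unlabeled sample stays far from all labeled ones, which is exactly why large single-sample errors become less likely.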
The ranks of the nine approaches on the 11 datasets, according to the AUCs, are shown in Table II. Observe that on average, RD-EMCM, RD-GS and RD-QBC ranked among the top three on both RMSE and CC, RD and GS ranked next, EEMCM slightly outperformed EMCM, and BL was the last. This again confirms the superiority of our proposed approaches.
Table II. Ranks of the nine approaches on the 11 datasets, according to the AUCs.

       Dataset        BL  QBC  EMCM  EEMCM  GS  RD  RD-QBC  RD-EMCM  RD-GS
RMSE   Concrete-CS     9    8     7      6   5   1       4        2      3
       IADS-Arousal    9    8     7      6   1   5       4        2      3
       Yacht           5    8     7      3   9   1       2        6      4
       autoMPG         9    7     8      5   1   6       4        2      3
       NO2             9    5     7      6   3   8       4        1      2
       Housing         9    7     6      5   8   3       2        1      4
       CPS             8    7     6      9   1   5       3        4      2
       Concrete        9    8     7      6   4   5       3        1      2
       Airfoil         9    3     7      6   8   1       2        4      5
       Wine-red        9    4     6      7   5   8       2        1      3
       Wine-white      8    4     3      7   1   9       6        2      5
       Average         9    7     8      6   4   5       2        1      2

CC     Concrete-CS     9    8     7      6   5   1       4        2      3
       IADS-Arousal    9    5     6      7   8   4       3        1      2
       Yacht           9    8     7      3   6   1       5        4      2
       autoMPG         9    6     7      5   1   8       4        2      3
       NO2             8    5     7      4   6   9       3        2      1
       Housing         9    8     7      6   5   4       3        1      2
       CPS             8    6     7      9   1   5       2        3      4
       Concrete        9    6     7      8   5   4       3        2      1
       Airfoil         9    5     7      6   8   1       2        3      4
       Wine-red        4    7     9      6   8   5       2        1      3
       Wine-white      2    1     3      9   6   8       7        5      4
       Average         9    6     8      7   5   4       3        1      2
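The normalized AUC aggregation behind Fig. 3 and Table II can be sketched as follows: take the trapezoidal area under each algorithm's learning curve (average RMSE or CC versus the number of labeled samples) and divide by the baseline's area, so BL is always 1. A minimal sketch, assuming equally spaced sample budgets:

```python
def normalized_auc(curves, baseline="BL"):
    """Aggregate learning curves into one number per algorithm.

    `curves` maps an algorithm name to a list of average RMSEs (or
    CCs), one value per sample budget. The AUC is the trapezoidal
    area under the curve, normalized by the baseline's AUC so that
    the baseline is always 1. For RMSE, smaller is better.
    """
    def trapz(ys):
        # trapezoidal rule with unit spacing between budgets
        return sum((a + b) / 2.0 for a, b in zip(ys, ys[1:]))
    base = trapz(curves[baseline])
    return {alg: trapz(ys) / base for alg, ys in curves.items()}
```

For the RMSE, a ratio below 1 means the ALR approach beat random sampling over the whole learning curve, not just at a single budget.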
IV-F Statistical Analysis
To determine whether the differences between different pairs of algorithms were statistically significant, we also performed non-parametric multiple comparison tests on the AUCs using Dunn's procedure [13, 14], with p-value correction using the False Discovery Rate method [3]. The p-values for the AUCs of the RMSEs and CCs are shown in Table III, where p < .05 indicates a statistically significant difference. Observe that:

All ALR approaches had statistically significantly better RMSEs and CCs than BL.

Among the four existing ALR approaches, GS had statistically significantly better RMSEs than QBC, EMCM and EEMCM.

RD had statistically significantly better RMSE and CC than QBC, EMCM and EEMCM.

RD-QBC, RD-EMCM and RD-GS all had statistically significantly better RMSE and CC than the other six approaches, suggesting again that RD is complementary to QBC, EMCM and GS, and hence integrating RD with any of the latter three can further improve the performance.

There were no statistically significant differences among RD-QBC, RD-EMCM and RD-GS.
Table III. p-values of the non-parametric multiple comparisons on the AUCs.

              BL     QBC    EMCM   EEMCM  GS     RD     RD-QBC  RD-EMCM
RMSE  QBC     .0000
      EMCM    .0000  .4662
      EEMCM   .0000  .1399  .1252
      GS      .0000  .0000  .0000  .0019
      RD      .0000  .0000  .0000  .0002  .2585
      RD-QBC  .0000  .0000  .0000  .0000  .0000  .0004
      RD-EMCM .0000  .0000  .0000  .0000  .0000  .0000  .2115
      RD-GS   .0000  .0000  .0000  .0000  .0001  .0014  .3512   .1219

CC    QBC     .0000
      EMCM    .0000  .0654
      EEMCM   .0000  .0006  .0499
      GS      .0000  .4417  .0499  .0004
      RD      .0000  .0217  .0002  .0000  .0299
      RD-QBC  .0000  .0000  .0000  .0000  .0000  .0004
      RD-EMCM .0000  .0000  .0000  .0000  .0000  .0000  .1847
      RD-GS   .0000  .0000  .0000  .0000  .0000  .0000  .2367   .4340
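The False Discovery Rate correction [3] applied to the Dunn p-values can be sketched in pure Python as the Benjamini-Hochberg step-up procedure; the p-values used in the test below are hypothetical, not values from Table III.

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR correction (sketch).

    Returns adjusted p-values in the original order: the p-value of
    ascending rank i is scaled by m/i, then monotonicity is enforced
    from the largest p-value downward and results are capped at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending ranks
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):   # walk from largest p to smallest
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```

An adjusted value below .05 is then declared statistically significant, which controls the expected fraction of false discoveries across the many pairwise comparisons.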
IV-G Visualization
It is also interesting to visualize the sample selection results of the different algorithms, to confirm the superiority of the RD-based ALR approaches. However, because the feature spaces had at least seven dimensions, it is difficult to visualize them directly. So, we performed principal component analysis (PCA) on each dataset, and represented all samples by their projections onto the first two principal components. Due to the page limit, we only show the results for the Concrete-CS dataset (more results can be found in the Supplementary Materials). The red asterisks in Fig. 4 indicate the seven initially selected samples. Observe that random initialization, which was used in BL, QBC, EMCM and GS, may leave a large portion of the feature space unsampled. However, RD, RD-QBC, RD-EMCM and RD-GS, which used our proposed initialization approach, distributed the initial samples more uniformly over the entire feature space. The red asterisks in Fig. 4 also indicate the final 20 samples selected by different algorithms. Observe that:

BL still left a large region of the feature space unsampled, even after 20 samples.

QBC, EMCM and GS selected one or more samples near the boundary of the feature space, which may be outliers. In contrast, EEMCM and RD selected samples distributed uniformly over the whole feature space, and none of the selected samples were obvious outliers.

The samples selected by RD-QBC were distributed more uniformly in the feature space than those selected by QBC. Similar patterns can also be observed between RD-EMCM and EMCM, and between RD-GS and GS.
In summary, the PCA visualization results confirm that the RD-based ALR approaches selected more reasonable samples, which resulted in better regression performance.
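The two-dimensional projection used for these visualizations can be sketched with a standard centered-SVD implementation of PCA (an illustrative sketch, not code from the paper):

```python
import numpy as np

def pca_2d(X):
    """Project samples onto their first two principal components.

    Center the data, take the SVD of the centered matrix, and keep
    the two right singular vectors with the largest singular values;
    the returned n x 2 array is what gets scatter-plotted.
    """
    Xc = X - X.mean(axis=0)                       # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # n x 2 projection
```

Plotting these two coordinates, with the selected samples overlaid as asterisks, gives a quick visual check of whether a selection strategy covers the feature space or clumps in one region.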
IV-H Individual Improvements
Table II shows that RD-EMCM achieved the best average RMSE among the nine algorithms. Recall that RD-EMCM has three enhancements over the random sampling approach (BL):

Enhancement 1: RD-EMCM considers both the representativeness and the diversity in initializing the first samples, whereas BL considers neither.

Enhancement 2: RD-EMCM considers both the representativeness and the diversity in selecting the new sample in each iteration, whereas BL considers neither.

Enhancement 3: RD-EMCM also considers the informativeness in selecting the new sample in each iteration, whereas BL does not.
It is interesting to study whether each of the three enhancements is necessary, and if so, what their individual effects are.
For this purpose, we constructed three modified versions of RD-EMCM, each employing one enhancement individually: E1, which employs only the first enhancement (more representative and diverse initialization); E2, which employs only the second enhancement (more representative and diverse sampling in each iteration); and E3, which employs only the third enhancement (more informative sampling in each iteration). We then compared their performances with those of BL and RD-EMCM. Due to the page limit, we only show the results on the CPS dataset (averaged over 30 runs) in Fig. 5 (more results can be found in the Supplementary Materials). The AUCs of the RMSEs and CCs for all 11 datasets are shown in Fig. 6.
Observe that all three enhancements outperformed BL on most datasets, especially on the RMSE, which was directly optimized in the objective function of ridge regression. More specifically, Fig. 5 shows that the first enhancement (more representative and diverse initialization) helped when the number of labeled samples was very small, whereas the second and third enhancements helped as it grew larger. By combining the three enhancements, RD-EMCM achieved the best performance for both small and large numbers of labeled samples. This suggests that the three enhancements are complementary, and all are essential to the improved performance of RD-EMCM.
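As a rough illustration of Enhancement 1, a representativeness-and-diversity initialization can be sketched by clustering the pool and taking the sample nearest each cluster center: the clusters spread the picks over the input space (diversity), and the near-centroid choice avoids outliers (representativeness). This pure-numpy k-means variant is an assumption for illustration and may differ in details from the paper's RD initialization.

```python
import numpy as np

def rd_style_init(X, k, n_iter=50, seed=0):
    """Cluster the pool into k clusters, return one sample index per
    cluster: the sample closest to its cluster's centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each pool sample to its nearest center (Lloyd's step)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    picks = []
    for c in range(k):
        members = np.where(labels == c)[0]
        if members.size:
            dist = np.linalg.norm(X[members] - centers[c], axis=1)
            picks.append(int(members[dist.argmin()]))
    return picks
```

On two well-separated groups of samples this returns one interior sample from each group, instead of the possibly clumped or outlying picks that random initialization can produce.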
IV-I Inductive Learning Results
The results presented so far in this section focused only on transductive learning. This subsection presents the inductive learning results, i.e., testing results on the 20% of samples that were held out and never included in the pool. The AUCs of the nine algorithms on the 11 datasets are shown in Fig. 7 (more results can be found in the Supplementary Materials). Observe that Fig. 7 is very similar to Fig. 3, so our conclusions drawn for transductive learning still hold for inductive learning.
V Conclusions
AL has been frequently used to reduce the data labeling effort in machine learning. However, most existing AL approaches are for classification. This paper studied AL for regression. We proposed three essential criteria that should be considered in selecting a new sample in pool-based sequential ALR: informativeness, representativeness, and diversity. An ALR approach called RD was proposed, which considers both the representativeness and the diversity in both the initialization and subsequent iterations. The RD approach can also be integrated with existing pool-based sequential ALR approaches, such as QBC, EMCM and GS, to further improve the performance. Extensive experiments on 11 public datasets from various domains confirmed the effectiveness of our proposed approaches.
References
 [1] N. Abe and H. Mamitsuka, “Query learning strategies using boosting and bagging,” in Proc. 15th Int’l Conf. on Machine Learning (ICML), Madison, WI, July 1998, pp. 1–9.
 [2] J. Antony, Design of Experiments for Engineers and Scientists, 2nd ed. London: Elsevier, 2014.
 [3] Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: A practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society, Series B (Methodological), vol. 57, pp. 289–300, 1995.
 [4] M. M. Bradley and P. J. Lang, “The international affective digitized sounds (2nd edition; IADS2): Affective ratings of sounds and instruction manual,” University of Florida, Gainesville, FL, Tech. Rep. B3, 2007.
 [5] R. Burbidge, J. J. Rowland, and R. D. King, “Active learning for regression based on query by committee,” Lecture Notes in Computer Science, vol. 4881, pp. 209–218, 2007.
 [6] W. Cai, M. Zhang, and Y. Zhang, “Batch mode active learning for regression with expected model change,” IEEE Trans. on Neural Networks and Learning Systems, vol. 28, no. 7, pp. 1668–1681, July 2017.
 [7] W. Cai, Y. Zhang, and J. Zhou, “Maximizing expected model change for active learning in regression,” in Proc. IEEE 13th Int’l. Conf. on Data Mining, Dallas, TX, December 2013.
 [8] W. Cai, Y. Zhang, S. Zhou, W. Wang, C. Ding, and X. Gu, “Active learning for support vector machines with maximum model change,” Lecture Notes in Computer Science, vol. 8724, pp. 211–216, 2014.
 [9] D. Cohn, Z. Ghahramani, and M. Jordan, “Active learning with statistical models,” Journal of Artificial Intelligence Research, vol. 4, pp. 129–145, 1996.
 [10] D. A. Cohn, Z. Ghahramani, and M. I. Jordan, “Active learning with statistical models,” Journal of Artificial Intelligence Research, vol. 4, no. 1, pp. 129–145, 1996.
 [11] B. Demir and L. Bruzzone, “A multiple criteria active learning method for support vector regression,” Pattern Recognition, vol. 47, pp. 2558–2567, 2014.
 [12] P. Donmez and J. Carbonell, “Optimizing estimated loss reduction for active sampling in rank learning,” in Proc. 25th Int’l Conf. on Machine Learning (ICML), Helsinki, Finland, July 2008, pp. 248–255.
 [13] O. Dunn, “Multiple comparisons among means,” Journal of the American Statistical Association, vol. 56, pp. 62–64, 1961.
 [14] O. Dunn, “Multiple comparisons using rank sums,” Technometrics, vol. 6, pp. 241–252, 1964.
 [15] Y. Freund, H. Seung, E. Shamir, and N. Tishby, “Selective sampling using the query by committee algorithm,” Machine Learning, vol. 28, no. 2–3, pp. 133–168, 1997.
 [16] M. Grimm, K. Kroschel, and S. S. Narayanan, “The Vera Am Mittag German audiovisual emotional speech database,” in Proc. Int’l Conf. on Multimedia & Expo (ICME), Hannover, Germany, June 2008, pp. 865–868.
 [17] C. Guo and D. Wu, “Feature dimensionality reduction for video affect classification: A comparative study,” in Proc. 1st Asian Conf. on Affective Computing and Intelligent Interaction, Beijing, China, May 2018.
 [18] T. He, S. Zhang, J. Xin, P. Zhao, J. Wu, X. Xian, C. Li, and Z. Cui, “An active learning approach with uncertainty, representativeness, and diversity,” The Scientific World Journal, 2014.
 [19] R. Hu, B. M. Namee, and S. J. Delany, “Off to a good start: Using clustering to select the initial training set in active learning,” in Proc. 23rd Int’l Florida Artificial Intelligence Research Society Conf., Daytona Beach, FL, May 2010.
 [20] J. Kang, K. R. Ryu, and H.-C. Kwon, “Using cluster-based sampling to select initial training set for active learning in text classification,” in Proc. Pacific-Asia Conf. on Knowledge Discovery and Data Mining, Sydney, Australia, May 2004, pp. 384–388.
 [21] S. Koelstra, C. Muhl, M. Soleymani, J. S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “DEAP: A database for emotion analysis using physiological signals,” IEEE Trans. on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
 [22] G. Krempl, T. C. Ha, and M. Spiliopoulou, “Clusteringbased optimised probabilistic active learning (COPAL),” in Proc. 18th Int’l Conf. on Discovery Science. Banff, Canada: Springer, October 2015, pp. 101–115.
 [23] A. Krogh and J. Vedelsby, “Neural network ensembles, cross validation, and active learning,” in Proc. Neural Information Processing Systems (NIPS), Denver, CO, November 1995, pp. 231–238.
 [24] D. MacKay, “Information-based objective functions for active data selection,” Neural Computation, vol. 4, no. 4, pp. 590–604, 1992.
 [25] A. Marathe, V. Lawhern, D. Wu, D. Slayback, and B. Lance, “Improved neural signal classification in a rapid serial visual presentation task using active learning,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 24, no. 3, pp. 333–343, 2016.
 [26] A. Mehrabian, Basic Dimensions for a General Psychological Theory: Implications for Personality, Social, Environmental, and Developmental Studies. Oelgeschlager, Gunn & Hain, 1980.
 [27] H. T. Nguyen and A. Smeulders, “Active learning using pre-clustering,” in Proc. 21st Int’l Conf. on Machine Learning, July 2004.
 [28] R. Picard, Affective Computing. Cambridge, MA: The MIT Press, 1997.
 [29] T. RayChaudhuri and L. Hamey, “Minimisation of data collection by active learning,” in Proc. IEEE Int’l. Conf. on Neural Networks, vol. 3, Perth, Australia, November 1995, pp. 1338–1341.
 [30] J. Russell, “A circumplex model of affect,” Journal of Personality and Social Psychology, vol. 39, no. 6, pp. 1161–1178, 1980.
 [31] B. Settles and M. Craven, “An analysis of active learning strategies for sequence labeling tasks,” in Proc. Conf. on Empirical Methods in Natural Language Processing (EMNLP), Honolulu, HI, October 2008, pp. 1069–1078.
 [32] B. Settles, M. Craven, and S. Ray, “Multipleinstance active learning,” in Proc. Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, December 2008, pp. 1289–1296.
 [33] B. Settles, “Active learning literature survey,” University of Wisconsin–Madison, Computer Sciences Technical Report 1648, 2009.
 [34] H. Seung, M. Opper, and H. Sompolinsky, “Query by committee,” in Proc. ACM Workshop on Computational Learning Theory, Pittsburgh, PA, July 1992, pp. 287–294.
 [35] D. Shen, J. Zhang, J. Su, G. Zhou, and C.-L. Tan, “Multi-criteria-based active learning for named entity recognition,” in Proc. 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain, July 2004.
 [36] M. Sugiyama, “Active learning in approximately linear regression based on conditional expectation of generalization error,” Journal of Machine Learning Research, vol. 7, pp. 141–166, 2006.
 [37] M. Sugiyama and S. Nakajima, “Poolbased active learning in approximate linear regression,” Machine Learning, vol. 75, no. 3, pp. 249–274, 2009.
 [38] R. Willett, R. Nowak, and R. M. Castro, “Faster rates in regression via active learning,” in Proc. Advances in Neural Information Processing Systems, Vancouver, Canada, December 2006, pp. 179–186.
 [39] D. Wu, V. J. Lawhern, S. Gordon, B. J. Lance, and C.-T. Lin, “Offline EEG-based driver drowsiness estimation using enhanced batch-mode active learning (EBMAL) for regression,” in Proc. IEEE Int’l Conf. on Systems, Man and Cybernetics, Budapest, Hungary, October 2016, pp. 730–736.
 [40] H. Yu and S. Kim, “Passive sampling for regression,” in IEEE Int’l. Conf. on Data Mining, Sydney, Australia, December 2010, pp. 1151–1156.