Estimating regression errors without ground truth values

Henri Tiittanen    Emilia Oikarinen emilia.oikarinen@helsinki.fi    Andreas Henelius    Kai Puolamäki kai.puolamaki@helsinki.fi Department of Computer Science, University of Helsinki, Helsinki, Finland
October 10, 2019
Abstract

Regression analysis is a standard supervised machine learning method used to model an outcome variable in terms of a set of predictor variables. In most real-world applications we do not know the true value of the outcome variable being predicted outside the training data, i.e., the ground truth is unknown. It is hence not straightforward to directly observe when the estimate from a model is potentially wrong due to phenomena such as overfitting and concept drift. In this paper we present an efficient framework for estimating the generalization error of regression functions, applicable to any family of regression functions when the ground truth is unknown. We present a theoretical derivation of the framework and empirically evaluate its strengths and limitations. We find that it performs robustly and is useful for detecting concept drift in datasets in several real-world domains.

I Introduction

Regression models are one of the most used and studied machine learning primitives. For example, a bibliographic search of the Physical Review E journal reveals that during 2018 the journal published 11 articles containing the word “regression” in the title or abstract [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. In regression analysis, the idea is to estimate the value of the dependent variable (denoted by $y$) given a vector of covariates (here we assume real-valued attributes, denoted by $x$). The regression model is trained using training data in such a way that it gives good estimates of the dependent variable on testing data unseen in the training phase. In addition to estimating the value of the dependent variable, it is in practice important to know the reliability of this estimate on testing data. In this paper, we use the expected root mean square error (RMSE) between the dependent variable and its estimate to quantify the uncertainty, but some other error measure could be used as well.

In textbooks, one finds a plethora of ways to train various regression models and to estimate uncertainties, see, e.g., [12]. For example, for a Bayesian regression model the reliability of the estimate can be expressed in terms of the posterior distribution or, more simply, as a confidence interval around the estimate. Another alternative to estimate the error of a regression estimate on yet unseen data is to use (cross-)validation. All of these approaches give some measure of the error on testing data, even when the dependent variable is unknown.

Textbook approaches are, however, valid only when the training and testing data obey the same distribution. In many practical applications this assumption does not hold: a phenomenon known as concept drift [13] occurs. Concept drift means that the distribution of the data changes over time, in which case the assumptions made by the regression model break down, resulting in regression estimates with unknown and possibly large errors. A typical example of concept drift occurs in sensor calibration, where a regression model trained to model sensory response may fail when the environmental conditions change from those used to train the regression model [14, 15, 16, 17, 18]. Another example is given by online streaming data applications such as sentiment classification [19] and e-mail spam detection [20], where in order to optimize the online training process, the model should only be retrained when its performance is degraded, i.e., when concept drift is detected.

At its simplest, if the ground truth (the dependent variable $y$) is known, concept drift may be detected simply by observing the magnitude of the error, i.e., the difference between the regression estimate and the dependent variable. However, in practice this is often not the case. Indeed, usually the motive for using a regression model is that the value of the dependent variable is not readily available. In this paper, we address the problem of assessing the regression error when the ground truth is unknown. It is surprising that despite the significance of the problem it has not really been adequately addressed in the literature; see Sec. II for a discussion of related work. In this paper we do not focus on any particular domain, such as sensor calibration or sentiment or spam classification. Instead, our goal is to introduce a generic computational methodology that can be applied in a wide range of domains where regression is used.

Concept drift can be divided into two main categories, namely real concept drift and virtual concept drift. The former refers to a change in the conditional probability $p(y \mid x)$ and the latter to a change in the distribution of the covariates $p(x)$, see, e.g., [13, Sec 2.1] for a discussion. If only the covariates $x$ are known but the ground truth $y$ is not, then it is not possible even in theory to detect changes occurring only in $p(y \mid x)$ but not in $p(x)$. However, it is possible to detect changes in $p(x)$ even when the values of $y$ have not been observed. For this reason, we focus on the detection of virtual concept drift in this paper. Note that one possible interpretation for a situation where $p(y \mid x)$ changes but $p(x)$ remains unchanged is that we are missing some covariates from $x$ which would parametrize the changes in $p(y \mid x)$. Therefore, an occurrence of real concept drift without virtual concept drift can indicate that we might not have all necessary attributes at our disposal. An obvious solution is to include more attributes into the set of covariates.

One should further observe that when studying concept drift, we are not interested in detecting merely any changes in the covariate distribution $p(x)$. Rather, we are only interested in changes that likely increase the error of the regression estimates, a property which is satisfied by our proposed method.

i.1 Contributions and organization

In this paper we (i) define the problem of detecting the concept drift which affects the regression error when the ground truth is unknown, (ii) present an efficient algorithm to solve the problem for arbitrary (black-box) regression models, (iii) show theoretical properties of our solution, and (iv) present an empirical evaluation of our approach.

The rest of this paper is structured as follows. In Sec. II we review the related work. In Sec. III we introduce the idea behind our proposed method for detecting virtual concept drift, which is then formalized in the algorithm discussed in Sec. IV. We demonstrate different aspects of our method in the experimental evaluation in Sec. V. Finally, we conclude with a discussion in Sec. VI.

II Related Work

The term “concept drift” was coined by Schlimmer and Granger [21] to describe the phenomenon where the data distribution changes over time in dynamically changing and non-stationary environments. The research related to concept drift has become popular over the last decades with many real world applications, see, e.g., the recent surveys [13, 22, 23].

Most of the concept drift literature focuses on classification problems and concept drift adaptation problems. In contrast, in this paper our focus is on detecting virtual concept drift in regression problems. There are very few works on concept drift in regression, although some of the ideas used with classifiers may be applicable to regression functions. Concept drift detection methods can be divided into supervised (requiring ground truth values) and unsupervised (requiring no ground truth values) approaches. Our approach falls into the latter category, and in the following we focus on reviewing the unsupervised approaches to concept drift detection.

We first briefly mention some methods requiring ground truth values. In [24], classifier prediction errors are used to detect concept drift. The idea is to maintain concept windows, and model prediction errors using a binomial distribution to obtain an error probability for each window. Then, if the number of windows containing errors is above a threshold, concept drift is detected. The method proposed in [25] is one of the few concept drift detection methods for regression. There, an ensemble of multiple regression models trained on sequences/subsets of the data is used to find the best weighting for combining their predictions, and concept drift is then defined as the angle between the estimated weight and mean weight vectors. While the method in [25] has similarities to our method proposed here (i.e., training several regressors on subsets of data), the fundamental difference is that in [25] the ground truth values are required. Ikonomovska et al. [26] train models on subsets of the data and use an ensemble of model trees where each model in the tree is trained on different parts of the data. Concept drift is detected by monitoring the model errors.

When considering methods that require no ground truth values, the approaches can be divided roughly into two categories: methods detecting purely distributional changes and methods that also take the model into account in some way. Concept drift detection approaches based on directly monitoring the covariate distribution detect all changes in $p(x)$ regardless of their effect on the performance of the prediction model; examples of such methods include [27, 28, 29]. There are also approaches for covariate change detection without comparing distributions directly. For instance, [30] proposes a drift detection method using different measure functions (e.g., statistical moments and the power spectrum) on particular time-series windows, after which a divergence value is used to classify windows as exhibiting concept drift or not. However, if the task is to detect concept drift that degrades the performance of the model, these approaches suffer from a high false alarm rate [31].

The MD3 method [31] uses classifier margin densities for concept drift detection, hence requiring a classifier that has some meaningful notion of margin, such as, e.g., a probabilistic classifier or a support vector machine. The method works by dividing the input data into segments, and for each segment the proportion of samples falling in the margin (the margin density) is computed. The minimum and maximum values of this density are monitored, and if their difference exceeds a given threshold, concept drift is declared.

In [32], a stream of indicator values correlated to concept drift is calculated from test data windows. If a certain proportion of previous indicator values are above a threshold, concept drift is declared. The indicator values are computed using the Kullback-Leibler divergence to compare the histogram of classifier output confidence scores on a test window to a reference window. The method is not generic, however, since it requires a classifier producing a score that can be interpreted as an estimate of the confidence associated with the correctness of the prediction. Since probabilistic regression models provide direct information of the model behavior in the form of uncertainty estimates, it is straightforward to implement a concept drift detection measure by thresholding the uncertainty estimate, e.g., in [33] a method based on Gaussian processes for time series change detection is presented.

Further approaches to concept drift detection include [34], in which the method is developed especially for data containing recurring concepts. Hence, the method in [34] requires prior knowledge about properties of concepts present in the data, namely the samples residing in the centers or at the borders of the class clusters, to be incorporated into the model. Then, a distinct classification model is trained for each concept, and for each test data segment the closest concept in training data is selected using a non-parametric statistical test. The test data segment is then classified using that particular classification model. In [35] concept drift detection for binary classification is performed by comparing classifier output label sequences to a reference window. It is assumed that the training and testing data samples originate from two binomial distributions, and concept drift is detected by using statistical testing with the null hypothesis that the distributions are equal.

III Methods

Let the training data $D_{tr} = \{(i, x_i, y_i)\}_{i=1}^{n}$ consist of triplets, where $i$ is the time index, $x_i$ are the covariates, and $y_i$ is the dependent variable. Also, let the testing data similarly be given by $D_{te} = \{(i, x_i, y_i)\}_{i=n+1}^{N}$, where the covariates and the dependent variable are given by $x_i$ and $y_i$, respectively. Furthermore, let the reduced testing data $D'_{te} = \{(i, x_i)\}_{i=n+1}^{N}$ be the testing data without the dependent variable.

Segments of the data are defined by tuples $s = (a, b)$, where $a$ and $b$ are the endpoints of the segment such that $a \le b$. We write $D[s]$ to denote the triplets in a dataset $D$ such that the time index belongs to the segment $s$, i.e., $D[s] = \{(i, x_i, y_i) \in D \mid a \le i \le b\}$.

Assume that we are given a regression function $f$ trained using $D_{tr}$. The function estimates the value of the dependent variable at time $i$ given the covariates, i.e., $\hat y_i = f(x_i)$. The generalization error of $f$ on a data set $D$ is defined as

$$E(f, D) = \left( \frac{1}{|D|} \sum_{(i, x_i, y_i) \in D} \left( f(x_i) - y_i \right)^2 \right)^{1/2}, \qquad (1)$$

i.e., we consider the root mean squared error (RMSE). In this paper we consider the following problem:

Problem 1.

Given a regression function $f$ trained using the dataset $D_{tr}$, and a threshold $\sigma$, predict whether the generalization error of $f$ on the testing data $D_{te}$ as defined by Eq. (1) satisfies $E(f, D_{te}) \ge \sigma$ when only the reduced testing data $D'_{te}$ is known and the true dependent variable $y_i$, $i \in \{n+1, \dots, N\}$, is unknown.

iii.1 Overview of the main idea

As discussed above in the introduction, without the ground truth we can only detect virtual concept drift that occurs as a consequence of changes in the covariate distribution $p(x)$. We therefore need a distance measure $d(x)$ that measures how “far” a vector $x$ is from the data that was used to train the regressor. Small values of $d(x)$ (which we will later call the concept drift indicator variable) mean that we are close to the training data and the regressor function should be reliable, while a large value of $d(x)$ means that we have moved away from the training data, after which the regression estimate may be inaccurate.

It is possible to list some properties that a good distance measure should have. On the one hand, we are only interested in the changes in the covariate distribution that may affect the behavior of the regression. For example, if there are attributes that the regressor does not use, then changes in the distribution of those attributes alone should not be relevant. On the other hand, if a changed (i.e., drifted) attribute is important for the output of the regressor, then changes in this attribute may cause concept drift and the value of $d(x)$ should be large.

We propose to define this distance measure as follows. We first train two different regression functions, say $f_1$ and $f_2$, on different subsets of the training data. We then define the distance measure to be the difference between the predictions of these two functions, e.g., $d(x) = \left( f_1(x) - f_2(x) \right)^2$. The details of how we select the subsets and compute the difference are given later in Section IV.

We can immediately observe that this kind of a distance measure has the suitable property that if some attributes are independent of the dependent variable, then they will not affect the behavior of the regression functions and, hence, the distance measure is not sensitive to them. In the next section we show that at least in the case of a simple linear model, the resulting measure is, in fact, monotonically related to the expected quadratic error of the regression function.
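To make the idea concrete, the following minimal sketch illustrates the disagreement-based distance on toy data (Python with scikit-learn; the released drifter implementation is in R, and the names `f1`, `f2`, and `distance` here are ours, chosen for illustration only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy training data: y depends only on the first covariate.
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# Train two regressors on different subsets of the training data.
f1 = LinearRegression().fit(X[:100], y[:100])
f2 = LinearRegression().fit(X[100:], y[100:])

def distance(x):
    """Squared disagreement between the two regressors at point x."""
    x = np.atleast_2d(x)
    return (f1.predict(x) - f2.predict(x)) ** 2

print(distance([0.1, 0.2, -0.3]))    # near the training distribution: small
print(distance([100.0, 0.0, 0.0]))   # far along an informative covariate: typically much larger
```

Changes in the third covariate, which the toy target does not depend on, barely move the two regressors apart, which is exactly the insensitivity property discussed above.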

iii.2 Theoretical motivation

In this section, we show that our method can be used to approximate the ground truth error for an ordinary least squares (OLS) linear regression model. Assume that our covariates $x$ have been sampled from some distribution for which the expected values and the covariance matrix exist, with the first term of $x$ being the intercept, i.e., $x_0 = 1$. Hence, we rule out, e.g., the Cauchy distribution for which the expected value and variance are undefined. Given the parameter vector $\beta$ and the variance $\sigma^2$, the dependent variable is given by

$$y = \beta^T x + \epsilon,$$

where $\epsilon$ are independent random variables with zero mean and variance $\sigma^2$.

Now, assume that we have trained an OLS linear regression model on a dataset of size $n_1$, obtaining a linear model parametrized by $\hat\beta_1$, and that we have also trained a different linear model on an independently sampled dataset of size $n_2$, obtaining a linear model parametrized by $\hat\beta_2$. For a given $x$, the estimates of the dependent variable are then given by $\hat y_1 = \hat\beta_1^T x$ and $\hat y_2 = \hat\beta_2^T x$, respectively.

We now prove the following theorem.

Theorem 1.

Given the definitions above, the expected mean squared error $E\left[ (\hat y_1 - y)^2 \right]$ is monotonically related to the expectation of the squared difference between the two regressors $\hat y_1$ and $\hat y_2$, i.e., $E\left[ (\hat y_1 - \hat y_2)^2 \right]$, by the following equation, which holds to leading order in $1/n_1$ and $1/n_2$:

$$E\left[ (\hat y_1 - y)^2 \right] = \sigma^2 + \frac{n_2}{n_1 + n_2}\, E\left[ (\hat y_1 - \hat y_2)^2 \right]. \qquad (2)$$
Proof.

The translation of the covariates can be absorbed in the intercept term of the parameter vector $\beta$ and a rotation can be compensated by rotating the remainder of the vector $\beta$. We can therefore, without loss of generality, assume that the distribution from which the covariates have been sampled has been centered so that all terms except the intercept have an expectation of zero, i.e., $E[x_0] = 1$ and $E[x_i] = 0$ for all $i \ge 1$. We can further assume that the axes of the covariates have been rotated so that they are uncorrelated and satisfy

$$E[x_i x_j] = \lambda_i \delta_{ij} \quad \text{for } i, j \ge 1,$$

where the Kronecker delta satisfies $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise.

Now, for a dataset of size $n$, the OLS estimate of $\beta$, denoted by $\hat\beta$, is a random variable that obeys a distribution with a mean of $\beta$ and a covariance matrix $\Sigma$ given by

$$\Sigma = \frac{\sigma^2}{n}\, \Lambda^{-1} + O(n^{-2}), \qquad \Lambda = E\left[ x x^T \right], \qquad (3)$$

where the terms of order $O(n^{-2})$ or smaller have been collected in the remainder. The covariance is therefore proportional to $1/n$ and hence, in the limit of a large dataset, we obtain the correct linear model, i.e., $\hat\beta \to \beta$. For finite data there is always an error in the estimate of $\beta$. The expected estimation error is larger for small data, i.e., if $n$ is small.

It follows from Eq. (3) that the expected mean squared error for the first model, evaluated at a point $x$, is given by

$$E\left[ (\hat y_1 - y)^2 \right] = \sigma^2 + \frac{\sigma^2}{n_1}\, x^T \Lambda^{-1} x + O(n_1^{-2}), \qquad (4)$$

and the expected quadratic difference between the linear model estimates is given by

$$E\left[ (\hat y_1 - \hat y_2)^2 \right] = \sigma^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right) x^T \Lambda^{-1} x + O(n_1^{-2}) + O(n_2^{-2}). \qquad (5)$$

We can solve for $x^T \Lambda^{-1} x$ from Eq. (5) and insert it into Eq. (4), from which Eq. (2) follows. ∎

We hence postulate that the squared differences between the estimates given by regressors trained on different subsets of the data — either sampled randomly or obtained by other means — can be used to estimate the mean squared error even when the ground truth (the value of $y$) is not known. Of course, in most interesting cases the regression functions are not linear, but as we show later in Sec. V, the idea works also for real datasets and complex non-linear regression models.
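As an illustration of Eq. (2), the following small Monte Carlo sketch (our own; not part of the paper's code) compares the two sides of the equation for OLS models trained on independently sampled datasets; the two printed values should approximately agree:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n1, n2, sigma, trials = 3, 50, 100, 0.5, 5000
beta = np.array([1.0, -2.0, 0.5, 3.0])           # intercept + 3 coefficients

def sample(n):
    X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
    y = X @ beta + rng.normal(scale=sigma, size=n)
    return X, y

lhs, diff = [], []
for _ in range(trials):
    X1, y1 = sample(n1)
    X2, y2 = sample(n2)
    b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)  # OLS fit on dataset 1
    b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)  # OLS fit on dataset 2
    Xt, yt = sample(1)                            # one fresh test point
    yhat1, yhat2 = Xt @ b1, Xt @ b2
    lhs.append((yhat1 - yt) ** 2)
    diff.append((yhat1 - yhat2) ** 2)

print("E[(yhat1 - y)^2]                :", np.mean(lhs))
print("sigma^2 + n2/(n1+n2) * E[diff^2]:", sigma**2 + n2 / (n1 + n2) * np.mean(diff))
```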

Our claim is therefore that the difference between the estimates of regressors trained on different subsets of the data, evaluated at a point $x$, defines a distance function $d(x)$ which can be computed even when the ground truth is unknown. If a data point is close to the data points used to train the regressors, the distance should be small. On the other hand, if the data point is far away from the data used to train the regressors, the predictions of the regressors diverge and the distance, and also the prediction error, will be larger.

Figure 1: Example data set with covariate $x$ and response variable $y$. The training data is shown with numbers and the testing data with letters.

IV The Drifter Algorithm

In this section we describe our algorithm for detecting concept drift when the ground truth is unknown. We start with a simple data set shown in Fig. 1 and go through the general idea using this data as an example. We then continue by providing the algorithmic details of the training and testing phases of our algorithm (Sections IV.1 and IV.2, respectively), and a discussion of how to select a suitable value for the drift detection threshold in Sec. IV.3.

Let us consider a training data set $D_{tr}$ consisting of 15 data points, with a one-dimensional covariate $x$ and response $y$, as shown in Fig. 1, and assume that the data set has been used to train a Support Vector Machine (SVM) regressor $f$. The SVM model estimate of $y$ is shown with a black solid line in Fig. 2. Our testing data $D_{te}$ then consists of the data points labeled with letters in Fig. 1, and we want to estimate the generalization error of $f$ on $D_{te}$ when we only have access to the covariates of $D_{te}$.

Figure 2: The models trained using $D_{tr}$ and subsequences of it.
Figure 3: The response variable $y$ and the estimates of $y$ using different models, for the training data $D_{tr}$ and the testing data $D_{te}$.

Now, recalling Thm. 1, we can estimate the generalization error using the difference of two regressors. Thus, instead of considering the terms $(f(x_i) - y_i)^2$, which we cannot compute without knowing the response variable, we estimate the error using the terms $(f(x_i) - g(x_i))^2$ for each $i$ in the testing data, where $g$ is another regressor function.

To obtain a suitable $g$, we train several regression functions, called segment models, using subsequences of the data, i.e., segments. Our intuition here is that due to autocorrelation, a subsequence is more likely to contain samples from the same distribution of covariates. We call the distribution of covariates in a subsequence a concept. In our example, we consider four overlapping segments of $D_{tr}$, and we train the segment models using OLS regression. With overlapping segments, we aim towards robustness, i.e., we assume it to be unlikely that the overlapping segmentation splits very clear concepts in a way that they would not be present in any of the segments. The linear segment models are shown in Fig. 2 using colored dashed lines, and their estimates are shown in Fig. 3 using the same colors. We observe that the segment models are good estimates of the SVM model on their respective training segments.

Using the segment models $g_1, \dots, g_k$, we can compute an estimate of the generalization error using the terms $(f(x_i) - g_j(x_i))^2$ instead of $(f(x_i) - y_i)^2$ for each $i$. This allows us then to compute a statistic based on the estimates obtained from this ensemble of segment models. Here, we choose the statistic, namely the concept drift indicator value, to be the second smallest error. The intuition is that if the test data resembled some concept in the training data, and an overlapping segmentation scheme was used, at least two of the segment models should provide a reasonably small indicator value. Hence, if there exists only a single small indicator value, it could well be due to chance. Thus, using the second smallest value as the indicator value increases the robustness of the method.

In Fig. 3 we visualize the terms $(f(x_i) - g_j(x_i))^2$ for each data point in $D_{te}$, where $g_j$, trained using one of the segments, is the second-best linear model for $D_{te}$. Since our estimate for the generalization error of $f$ on $D_{te}$ is large even when using the second-best linear model $g_j$, we conclude that it is indeed likely that there is concept drift in $D_{te}$.

In the following, we formalize the ideas presented in the discussion above, and provide a detailed description of the training and testing phases of the drifter algorithm.

[Algorithm 1: The training phase of the drifter algorithm.]

iv.1 Training phase

In the training phase of drifter (Alg. 1), we train the segment models for subsequences, i.e., segments, of the training data. As input, we assume the training data $D_{tr}$, a segmentation of $D_{tr}$, and a function train_f for training the segment models.

Hence, we assume that the user provides a segmentation of $D_{tr}$ such that when the segment models are trained, the data used to train a model approximately corresponds to only one concept, i.e., the models “specialize” in different concepts. Here there might, of course, be overlap so that multiple models are trained using the same concept. We show in Sec. V that using a scheme in which the segmentation consists of equally-sized segments of length $l$ with 50% overlap, the drifter method is quite robust with respect to the choice of $l$, i.e., just selecting a reasonably small segment length generally makes the method perform well and provides a simple baseline approach for selecting a segmentation. However, the segments could well be of varying length or non-overlapping. For instance, by using a segmentation that is a solution to the basis segmentation problem [36], one would know that each segment can be approximated with linear combinations of the basis vectors.

The training phase essentially consists of training a regression function $g_i$ for each segment $s_i$ using the data $D_{tr}[s_i]$ (Alg. 1). These regression functions are the segment models. Note that the model family of the segment models is chosen by the user and provided as input to Alg. 1. Natural choices are, e.g., linear regression models or, if known, the same model family with which the function $f$, to be used in the testing phase, was trained.
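A minimal sketch of the training phase, assuming equally-sized segments with 50% overlap and OLS segment models (Python; the function name `train_segment_models` and its signature are ours and do not correspond to the released R implementation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_segment_models(X_tr, y_tr, seg_len):
    """Fit one OLS segment model per segment of length seg_len with 50% overlap."""
    n = len(y_tr)
    step = max(seg_len // 2, 1)                      # 50% overlap between consecutive segments
    models = []
    for start in range(0, n - seg_len + 1, step):
        idx = slice(start, start + seg_len)
        models.append(LinearRegression().fit(X_tr[idx], y_tr[idx]))
    return models
```

Any other regression family could be substituted for `LinearRegression` here; linear segment models are simply the choice that performed most robustly in the experiments reported below.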

[Algorithm 2: The testing phase of the drifter algorithm.]

iv.2 Testing phase

The tester function of drifter (Alg. 2) takes as input the testing data $D'_{te}$, the model $f$, the segment models $g_1, \dots, g_k$ from Alg. 1, and an integer $j$ (the indicator index order). For each of the segment models $g_i$, we then determine the RMSE between the predictions from $f$ and $g_i$ on the test data, i.e., we compute

$$d_i = \left( \frac{1}{|D'_{te}|} \sum_{(t, x_t) \in D'_{te}} \left( f(x_t) - g_i(x_t) \right)^2 \right)^{1/2}, \qquad (6)$$

where $i \in \{1, \dots, k\}$. This gives us $k$ values $d_1, \dots, d_k$ estimating the generalization error, and we then choose the $j$th smallest of these values as the value of the concept drift indicator variable $d$. If this value is large, then the predictions from the full model $f$ on the test data in question can be unreliable.

In this paper, we use $j = 2$ by default. The intuition behind this choice is that, due to the overlapping segmentation scheme we use, it is reasonable to assume that at least two of the segment models should have small values of $d_i$ if the testing data has no concept drift, while a single small value of $d_i$ could still occur by chance.

In the testing phase, there is an implicit assumption regarding the length of the testing data $D'_{te}$, i.e., it should not be longer than the length $l$ of a segment in the training phase. This is because we assume that the segment models are trained to model concepts present in the training data. Hence, if the testing data is much longer than $l$, it might consist of several concepts, resulting in a large value for the concept drift indicator $d$ and implying concept drift even in the absence of such. This can be easily prevented, e.g., as we do in the experimental evaluation in Sec. V, by dividing the testing data into smaller (non-overlapping) test segments of length at most $l$ and calling the tester function (Alg. 2) for each of the test segments. Thus, in this way we obtain a concept drift indicator value for each smaller segment in the testing data.
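The testing phase can be sketched analogously (again an illustrative Python helper of our own): for a single test segment, it computes the RMSE between the monitored model and each segment model and returns the $j$-th smallest value as the indicator.

```python
import numpy as np

def drift_indicator(f_predict, segment_models, X_seg, j=2):
    """Concept drift indicator: the j-th smallest RMSE between the monitored
    model f and the segment models on one test segment."""
    y_f = f_predict(X_seg)
    rmses = sorted(np.sqrt(np.mean((y_f - g.predict(X_seg)) ** 2))
                   for g in segment_models)
    return rmses[min(j, len(rmses)) - 1]   # j = 2 -> second smallest, as used in the paper
```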

iv.3 Selection of the drift detection threshold

In order to solve Prob. 1, we still need a suitable concept drift detection threshold $\delta$, i.e., we need a way to define a threshold for the concept drift indicator variable $d$ (Alg. 2) that estimates the threshold $\sigma$ for the generalization error in Prob. 1.

As a general observation we note that a good concept drift detection threshold depends both on the dataset and the application at hand. In this section, we propose a general method for obtaining a threshold, which according to our empirical evaluation (see Sec. V) performs well in practice for the datasets used in this paper. However, we note that a user knowledgeable of the particular data and the application can use this knowledge to select and potentially adjust a better threshold for the data and the application at hand.

One could also, e.g., make use of a validation set $D_{val}$ with known ground truth values (not used in the training of $f$) and compute the generalization error $E(f, D_{val})$. Then a suitable threshold $\delta$ could be determined, e.g., using receiver operating characteristic (ROC) analysis, which makes it possible to balance the tradeoff between false positives and false negatives [37]. Note, however, that one needs to assume that there is no concept drift in the validation set $D_{val}$, and one should also consider, e.g., cross-validation when training $f$ to prevent overfitting, which could heavily affect $E(f, D_{val})$.

We now describe how to compute the threshold using only the training data. We first split the training dataset into (non-overlapping) segments of the same length as the testing data. Then, we compute the concept drift indicator value for each of these segments in the training data using Alg. 2, obtaining values $d_1, \dots, d_p$ for the $p$ training segments. We then choose a concept drift detection threshold $\delta$ by using the mean and standard deviation of the indicator values of these segments,

$$\delta = \mathrm{mean}(d_1, \dots, d_p) + c \times \mathrm{sd}(d_1, \dots, d_p), \qquad (7)$$

where $c$ is a constant multiplier of choice. The optimal value of $c$ depends on the properties of a particular dataset, but our empirical evaluation of the effect of varying $c$ (see Fig. 9 in Sec. V) shows that the performance with respect to the $F_1$-score (see Eq. (8) for the definition) is not overly sensitive to the choice of $c$; a single moderate value of $c$ works well for all the datasets we used.
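A sketch of this procedure, reusing the hypothetical `drift_indicator` helper from above (the default value of $c$ below is arbitrary; as noted, a suitable value is dataset-dependent):

```python
import numpy as np

def detection_threshold(f_predict, segment_models, X_tr, test_len, c=1.0, j=2):
    """Eq. (7): delta = mean + c * sd of the indicator values computed on
    non-overlapping training segments of the test-segment length."""
    d_vals = [drift_indicator(f_predict, segment_models, X_tr[s:s + test_len], j)
              for s in range(0, len(X_tr) - test_len + 1, test_len)]
    return np.mean(d_vals) + c * np.std(d_vals)
```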

iv.4 Using drifter to solve Prob. 1

We now summarize how the drifter method is used in practice to solve Prob. 1. Assume that a model $f$ has been trained using $D_{tr}$, and we know that the concept length in the training data is approximately $l$. We use this knowledge to form a segmentation of $D_{tr}$ such that there are $k$ segments of length $l$. We also need to choose the model family with which to train the segment models (the function train_f). In practice, linear regression models seem to consistently perform well (see Sec. V). Then, the training phase consists of a call to Alg. 1 to obtain an ensemble of segment models.

Once the segment models have been trained, we can readily use them to detect concept drift in the testing data $D_{te}$. In the testing phase, we should call Alg. 2 for testing data that is at most as long as the segment length $l$ used to train the segment models in Alg. 1. In practice, this is achieved by splitting the testing data into small segments of a fixed length (we use a constant test segment length in the experimental evaluation in Sec. V) and calling Alg. 2 for each small test segment individually.

If we have split the testing data into segments and obtained a concept drift indicator value for each of them using Alg. 2, we can then compare these values to the concept drift detection threshold $\delta$, which is either user-specified or obtained using the approach described in Sec. IV.3, and classify each segment in the testing data either as a segment exhibiting concept drift ($d > \delta$) or not ($d \le \delta$).
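Putting the pieces together, a hypothetical end-to-end use of the sketches above could look as follows (all names and numeric parameter values are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X_tr = rng.normal(size=(1000, 5))
y_tr = np.sin(X_tr @ np.ones(5)) + 0.1 * rng.normal(size=1000)
X_te = rng.normal(size=(300, 5))
X_te[150:] += 3.0                                            # drifted second half of the test data

f = SVR().fit(X_tr, y_tr)                                    # the monitored (black-box) model
seg_models = train_segment_models(X_tr, y_tr, seg_len=200)   # training phase (Alg. 1 sketch)
delta = detection_threshold(f.predict, seg_models, X_tr, test_len=50, c=1.0)

for start in range(0, len(X_te), 50):                        # testing phase (Alg. 2 sketch)
    d = drift_indicator(f.predict, seg_models, X_te[start:start + 50])
    print(start, round(d, 3), "drift" if d > delta else "ok")
```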

We observe that the time complexity of drifter is dominated by the training phase, where we need to train $k$ regressors, each using a data segment of length $l$. For OLS regression, e.g., the complexity of training one segment model is $O(l\,m^2)$ for $m$ covariates, and hence the complexity of the training phase is $O(k\,l\,m^2)$.

V Experiments

In this section we experimentally evaluate drifter in detection of concept drift. We first present the datasets and the regressors used (Sec. V.1) and discuss generalization error and the default parameters used in the experiments (Sec. V.2). In Secs. V.3 and V.4 we pin down suitable combinations of the remaining parameters of drifter. In Sec. V.5 we assess the runtime scalability of drifter on synthetic data, and finally in Sec. V.6 we look at how drifter finds concept drift on our considered dataset and regression function combinations.

The experiments were run using R (version 3.5.3) [38] on a high-performance cluster [39] (2 cores from an Intel Xeon E5-2680 2.4 GHz with 256 GB of RAM). An implementation of the drifter algorithm and the code for the experiments presented in this paper have been released as open-source software [40].

v.1 Datasets and regressors

We use the datasets described in Tab. 1 in our experiments. A brief description of each dataset and the regressor trained using it is provided below. During preprocessing we removed rows with missing values and transformed factors into numerical values. For each dataset, we then use the first 50% of the data as the training set $D_{tr}$ and the remaining 50% as the testing set $D_{te}$. We split the testing data into non-overlapping test segments of fixed length as described in Sec. IV.4.

Name             Samples   Dim.   Target          Regressor
aq               7355      11     CO(GT)          SVM
airline          38042     8      Arrival delay   RF
bike             731       8      Rental count    LM
synthetic(n,d)   n         d      y (synthetic)   LM, SVM, RF
Table 1: Datasets used in the experiments.

Air quality data

The aq dataset [41] contains hourly air quality sensor measurements spanning approximately one year. We preprocessed the data by removing rows with missing data as well as the attribute NMHC(GT) containing mostly missing data. We use the first half of the data as the training set and train a regressor for hourly averaged concentrations of carbon monoxide CO(GT) using Support Vector Machine (SVM) from the ‘e1071’ R package with default parameters.

Flight delay data

The airline dataset [42] contains data related to flight delays collected and published by the U.S. Department of Transportation’s Bureau of Transportation Statistics. We used the arrival delay variable as the target variable and selected a subset of the other attributes as covariates (namely, departure delay, day of the week, origin airport, airline, departure time, destination airport, distance, and scheduled arrival). In order to keep computation time manageable we only used every 150th sample. We used the first half of the data as the training set and trained a regressor for the arrival delay using Random Forest (RF) from the ‘randomForest’ R package with default parameters.

Bike rental data

The bike dataset [43] contains daily counts of bike rentals and associated covariates related to weather and date types for a period of about two years. As covariates we used the attributes for holiday, weekday, working day, weather situation, temperature-related variables, humidity, and windspeed. Hence, inherently drifting covariates such as date and season were removed. Exploratory analysis indicated real concept drift to be present in the form of an increasing trend in the counts of bike rentals. Thus, we prepared an alternative version of the data in which this trend is removed by rescaling the rental counts in the testing data.

Hence, in the dataset bike(raw) we use the original rental counts (with real concept drift present), whereas in the dataset bike(detr) we use the modified rental counts (with real concept drift removed). We then used the first half of both datasets as the training set and trained OLS linear regression models (LM) for predicting the rental counts of bike(raw) and bike(detr), respectively.

Synthetic data

We did not find adequate existing methods for generating synthetic regression datasets containing only virtual concept drift, and thus developed a new method for constructing the data. The synthetic($n$,$d$) data we used is constructed as follows. The covariate matrix is sampled columnwise from an autocorrelated random process with a given correlation length, defined as the number of steps after which the expected autocorrelation drops to a fixed fraction of its initial value, and a given amplitude. The elements of a noise vector are sampled from a zero-mean normal distribution. The target variable is then constructed by applying a non-linear transformation to a linear combination of the covariates and adding i.i.d. noise.

In the scalability experiments (reported in Sec. V.5) we vary the data dimensions $n$ and $d$ when generating the datasets synthetic($n$,$d$). In the remaining experiments, we use the dataset synthetic(2000,5), i.e., a 5-dimensional dataset of 2000 samples, where we add a virtual concept drift component to a known subperiod of the testing data by modifying the covariate-generating process during this period. We trained LM, RF, and SVM regressors with the synthetic data.
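The exact specification of the generator is not stated above; the following sketch only illustrates the general recipe (autocorrelated covariates, a non-linear target with additive noise, and a covariate shift during a drift period), with constants and a functional form of our own choosing:

```python
import numpy as np

def make_synthetic(n=2000, d=5, corr_len=25, noise_sd=0.1, drift=(1500, 1700), seed=0):
    """Illustrative generator (not the paper's): autocorrelated covariates, a
    non-linear target, and a covariate shift on the interval `drift`."""
    rng = np.random.default_rng(seed)
    # Autocorrelated covariates via a moving average of white noise.
    raw = rng.normal(size=(n + corr_len, d))
    kernel = np.ones(corr_len) / np.sqrt(corr_len)
    X = np.column_stack([np.convolve(raw[:, i], kernel, mode="valid")[:n]
                         for i in range(d)])
    X[drift[0]:drift[1]] += 2.0            # virtual concept drift: shift the covariates
    beta = rng.normal(size=d)
    y = np.sin(X @ beta) + rng.normal(scale=noise_sd, size=n)   # non-linear target + noise
    return X, y
```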

v.2 Generalization error threshold and parameters of drifter

The datasets we use do not have predefined ground truth labels for concept drift, and we hence first need to define what constitutes concept drift in the test datasets. The user should choose the threshold $\sigma$: in some applications a larger generalization error could be tolerated, while in other applications the user might want to be alerted already about smaller errors. In the absence of a user, we determined the error threshold $\sigma$ for the datasets as follows. We used 5-fold cross-validation, where we randomly split the training data into five folds and estimated the value of the $i$th dependent variable by a regressor trained on the four folds that do not contain $i$, thereby obtaining a vector of estimates $\hat y_i$ for all $i$. We then computed the generalization error for the training data as in Eq. (1) and chose the threshold $\sigma$ based on this cross-validated error.

Then, all test segments for which the generalization error exceeds $\sigma$ are considered to exhibit concept drift. While this cross-validation procedure does not fully account for possible autocorrelation in the training data, we found that for our datasets it gives a reasonable estimate of the generalization error in the absence of concept drift.
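A sketch of this cross-validation step (our Python illustration; how the final threshold $\sigma$ is derived from the cross-validated error is left to the user here):

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def cv_generalization_error(model, X_tr, y_tr, n_splits=5, seed=0):
    """5-fold cross-validated RMSE on the training data (X_tr, y_tr as numpy arrays)."""
    y_hat = np.empty_like(y_tr, dtype=float)
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X_tr):
        m = clone(model).fit(X_tr[train_idx], y_tr[train_idx])
        y_hat[test_idx] = m.predict(X_tr[test_idx])
    return np.sqrt(np.mean((y_hat - y_tr) ** 2))
```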

To assess what a suitable value for $c$ would be in our proposed scheme for selecting the detection threshold (Sec. IV.3), and to assess how well the scheme works in practice, we compute the “optimal” detection threshold in terms of the $F_1$-score for a given error threshold $\sigma$ as follows. We vary the concept drift detection threshold $\delta$ and evaluate the true and false positive rates on the test dataset, allowing us to form a ROC curve. We then pick the $\delta$ maximizing the $F_1$-score:

$$F_1 = \frac{2\,TP}{2\,TP + FP + FN}, \qquad (8)$$

where $TP$ are true positives, $FP$ are false positives, and $FN$ are false negatives.
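For reference, Eq. (8) and the threshold sweep can be written compactly as follows (our own helpers; the ground-truth drift labels are needed only for this evaluation step):

```python
def f1_score_counts(tp, fp, fn):
    """Eq. (8): F1 from true positives, false positives, and false negatives."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(indicators, drift_labels):
    """Pick the detection threshold maximizing the F1-score on labeled test segments."""
    best_f1, best_delta = -1.0, None
    for delta in sorted(set(indicators)):
        pred = [d > delta for d in indicators]
        tp = sum(p and t for p, t in zip(pred, drift_labels))
        fp = sum(p and not t for p, t in zip(pred, drift_labels))
        fn = sum(t and not p for p, t in zip(pred, drift_labels))
        f1 = f1_score_counts(tp, fp, fn)
        if f1 > best_f1:
            best_f1, best_delta = f1, delta
    return best_f1, best_delta
```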

For the other parameters, we use in the training phase the segmentation scheme with 50% overlap between consecutive segments. In the testing phase, we split the testing data into non-overlapping segments of fixed length and evaluate the concept drift indicator value on each test segment using the default indicator index order $j = 2$. In preliminary experiments, we also tested a segmentation scheme with no overlap between segments in the training phase, as well as other values of $j$. The effect of these parameter options was rather small in practice, and we chose the values with which the drifter method performed most robustly in detecting virtual concept drift on our datasets.

v.3 Effect of concept length and segment models

We next investigate the effects of the remaining input parameters, i.e., (i) the constant $c$ in Eq. (7), (ii) the concept length (i.e., the segment length $l$ in the training phase), and (iii) the choice of the model family for the segment models.

We varied $k$, which means that there are $k$ segments in the training phase in the overlapping segmentation scheme (and hence $k$ determines the segment length $l$). The maximum value for $k$ was determined by the requirement that the resulting segments remain at least as long as the test segments. For each $k$ and each dataset, we determined the value of $c$ that leads to the threshold $\delta$ in Eq. (7) maximizing the $F_1$-score.

For the choice of the model family, we considered two cases: either the segment models were trained using the same model family as the model $f$ given as input, or linear regression was used for the segment models. Our evaluation showed that the linear segment models consistently performed best, both in terms of performance (e.g., $F_1$-score and robustness) and computational cost (i.e., the time needed to train the models). We hence focus on utilizing linear regression models as segment models in the rest of this paper.

We would like to point out that there is an intuitive reason why LM outperforms SVM and RF as segment models. While SVM and RF give accurate predictions on the training data covariate distribution, they predict approximately constant values outside of it. The linear OLS regressor, on the other hand, gives (non-constant) linearly increasing or decreasing predictions, so the disagreement between the models grows the farther the testing data is from the training data covariate distribution. It should also be noted that for SVM the kernel choice makes a difference in terms of generalization behavior. We here used a radial basis function kernel, but if a polynomial kernel or a linear kernel were used, the model would behave more like LM.

The results are presented in Tab. 2. The table shows the number of test segments identified as true ($TP$) and false ($FP$) positives, and true ($TN$) and false ($FN$) negatives, respectively. We observe that concept drift is detected with reasonable accuracy for the aq, airline, and synthetic(2000,5) datasets in terms of the $F_1$-score, i.e., the number of true positives and negatives is high, while the number of false positives and negatives remains low. For each of these datasets we have identified the best performing combination of $k$ and $c$ (the rows with the highest $F_1$-score in Tab. 2), and we subsequently use these particular combinations in Sec. V.6.

For bike(raw) we observe that optimizing the $F_1$-score would require a negative value of $c$. This is due to the real concept drift in the data, i.e., the bike rental counts are higher during the second year, likely due to the increasing popularity of the service. This is an effect not present in the training data (see Sec. V.6 and Fig. 26c for details). Since real concept drift does not affect the concept drift indicator values, the optimal threshold maximizing the $F_1$-score would be set to a very low value. We also observe a high number of false negatives.

However, when we consider the detrended bike(detr) dataset in which the real concept drift has been removed, we no longer observe any (virtual) concept drift in the data (and hence we cannot compute the $F_1$-scores). We can observe that our algorithm correctly handles this, i.e., all the segments in the testing data are classified as true negatives. For the values of bike(raw) and bike(detr) in Tab. 2 we have used a detection threshold $\delta$ larger than the maximal value of the concept drift indicator (similarly as in Sec. V.6 and Fig. 26c,d).

Data                Full model   Segment model   k     c        F1      TP   FP   TN     FN
aq                  SVM          LM              2     6.770    0.735   61   20   139    24
                                                 10    7.144    0.741   63   22   137    22
                                                 20    7.080    0.737   56   11   148    29
                                                 100   5.835    0.688   54   18   141    31
airline             RF           LM              2     5.632    0.786   11   1    1250   5
                                                 10    5.695    0.786   11   1    1250   5
                                                 20    5.769    0.786   11   1    1250   5
                                                 100   5.424    0.769   10   0    1251   6
bike(raw)           LM           LM              2     -        -       0    0    5      18
                                                 4     -        -       0    0    5      18
                                                 6     -        -       0    0    5      18
bike(detr)          LM           LM              2     -        -       0    0    23     0
                                                 4     -        -       0    0    23     0
                                                 6     -        -       0    0    23     0
synthetic(2000,5)   LM           LM              2     5.571    0.737   7    1    54     4
                                                 10    4.722    0.778   7    0    55     4
                                                 60    1.747    0.857   9    1    54     2
synthetic(2000,5)   SVM          LM              2     6.930    0.769   5    2    58     1
                                                 10    8.817    0.769   5    2    58     1
                                                 60    17.015   0.833   5    1    59     1
synthetic(2000,5)   RF           LM              2     5.819    0.750   6    1    56     3
                                                 10    9.649    0.778   7    2    55     2
                                                 60    3.883    0.842   8    2    55     1
Table 2: The effect of the number of training segments $k$ (and hence the segment length) on drift detection accuracy in terms of the $F_1$-score. Here $c$ is the multiplier for which Eq. (7) yields the threshold maximizing the $F_1$-score, $TP$ (resp. $FP$) is the count of true positives (resp. false positives), and $TN$ (resp. $FN$) is the count of true negatives (resp. false negatives).
(a) synthetic(2000,5) with LM
(b) synthetic(2000,5) with SVM
(c) synthetic(2000,5) with RF
(d) aq
(e) airline
Figure 9: The effect of the multiplier constant $c$ in Eq. (7) on the $F_1$-score, for the parameter combinations from Tab. 2.

v.4 Selecting a suitable drift detection threshold

In this section we investigate how varying the value of the drift detection threshold $\delta$ affects the performance of the drifter method. For each dataset, i.e., aq, airline, and synthetic(2000,5), we used the fixed parameter values defined in Sec. V.2 and selected the concept length (via the parameter $k$) based on the previous experiment, i.e., we used the $k$ resulting in the best performance in terms of the $F_1$-score with the optimal $c$ (the highest $F_1$ rows in Tab. 2). We excluded the bike(raw) and bike(detr) datasets here, since they do not contain virtual concept drift, which makes the relation between the $F_1$-score and $\delta$ less informative. The results are presented in Fig. 9. We conclude that the performance of our method is quite insensitive to the value of $c$ and that a single moderate value of $c$ seems to be a robust choice for the datasets considered.

(a) Varying $n$.
(b) Varying $d$.
(c) Varying $k$.
Figure 13: Scalability of the drifter algorithm in the training phase using synthetic($n$,$d$). In each figure, one of the parameters $n$ (training data length), $d$ (data dimension), and $k$ (number of training segments) is varied, while the remaining ones are kept constant.
(a) Varying $n$.
(b) Varying $d$.
(c) Varying $k$.
Figure 17: Scalability of the drifter algorithm in the testing phase with synthetic($n$,$d$) training data. In each figure, one of the parameters $n$ (training data length), $d$ (data dimension), and $k$ (number of training segments) is varied, while the remaining ones are kept constant.

v.5 Scalability

The scalability experiments were performed using the synthetic($n$,$d$) data. We constructed the datasets as described in Sec. V.1, and varied the data dimensionality $d$, the length $n$ of the training data, and the parameter $k$ controlling the number of segments (and hence the segment length). We used synthetic($n$,$d$) as the training dataset and generated a testing dataset of constant length, using a separate realization of the generating process for the testing data. Since the actual training of the full model $f$ is not part of drifter and the quality of the model is not relevant here, we used the first 500 samples of the training data to train an SVM regressor $f$. We then varied the choice of the model family (LM, SVM, RF) used by drifter in training the segment models.

The median running times of the training and testing phases of drifter over five runs are shown in Fig. 13 and Fig. 17, respectively. Here we observe that the training phase is indeed the dominant factor affecting the scalability, as discussed in Sec. IV.4, and that, in particular when OLS regression is used to train the segment models, our drift detection algorithm is fast for reasonable dataset sizes.

(a) synthetic (LM)
(b) synthetic (SVM)
(c) synthetic (RF)
Figure 21: The generalization error and the concept drift indicator for the test segments in the synthetic(2000,5) dataset. Here, $\delta$ denotes the concept drift detection threshold and $\sigma$ denotes the generalization error threshold. The vertical lines between the two curves indicate the segments that are true positives (gray solid line), false positives (orange dashed line), and false negatives (green long-dash line).
(a) aq
(b) airline
(c) bike(raw)
(d) bike(detr)
Figure 26: The generalization error and the concept drift indicator for the test segments in the aq, airline, and bike datasets. Here, $\delta$ denotes the concept drift detection threshold and $\sigma$ denotes the generalization error threshold. The vertical lines between the two curves indicate the segments that are true positives (gray solid line), false positives (orange dashed line), and false negatives (green long-dash line).

v.6 Detection of concept drift

Finally, we consider examples illustrating how our method for detecting concept drift works in practice. In Figures 21 and 26, we show the generalization error (green lines) and the concept drift indicator value (orange lines) for the synthetic(2000,5), aq, airline, bike(raw), and bike(detr) datasets, and in Fig. 32 the corresponding ROC curves. For the synthetic(2000,5), airline, and aq data we have used the best-performing parameters from Tab. 2, and for bike(raw) and bike(detr) we selected the detection threshold $\delta$ to be larger than the maximal value of the concept drift indicator, because for bike(raw) a negative value of $c$ would lead to a nonsensical value of $\delta$, and because bike(detr) does not contain concept drift at all.

For the synthetic(2000,5) data (Fig. 21) we observe that our algorithm can detect the virtual concept drift introduced during the drift period.

(a) synthetic (LM)
(b) synthetic (SVM)
(c) synthetic (RF)
(d) aq
(e) airline
Figure 32: ROC-curves [37] for synthetic(2000,5), aq, and airline datasets.

For the aq data (Fig. 26a) we observe that a significant amount of the testing data seems to exhibit concept drift, and our algorithm detects this. There is a rather natural explanation for this. The aq data contains measurements of a period of one year. The model has been trained on data covering the spring and summer months (March to August), while the testing period consists of the autumn and winter months (September to February). Hence, it is natural that the testing data contains concepts not present in the training data. Furthermore, one should observe that the last segments of data again begin to resemble the training data, and hence we do not observe concept drift in these segments.

For the airline data, we observe that some of the segments in the training data also have a rather high generalization error, indicating that there are parts of the training data that the regressor does not model particularly well. However, the concept drift indicator behaves similarly to the RMSE (both for segments in the training and in the testing data), demonstrating that it can be used to estimate when the generalization error would be high.

For the bike(raw) data (Fig. 26c) we observe that even though the generalization error is large for most of the segments in the testing data, the drift detection indicator does not indicate concept drift. This is explained by the real concept drift present in the data, and once we have removed it in the bike(detr) data (Fig. 26d) we observe no concept drift. We hence observe that a considerable number of false negatives can indicate real concept drift in the data. However, in order to detect this, one needs to have access to the ground truth values.

VI Discussion

In this paper, we have presented and evaluated an efficient method for detecting concept drift in regression models when the ground truth is unknown. We define concept drift as a phenomenon causing larger than expected estimation errors on new data, as a result of changes in the generating distribution of the data. Defining concept drift in terms of the estimation error, instead of considering all changes in the distribution, makes it possible to detect only the changes that actually affect the prediction quality. Thus, if concept drift detection is used to monitor the performance of a regression model, it reduces the false positive rate. It is surprising how little attention this problem has received, considering its importance in multiple domains.

When the dependent variable is unknown, it is only possible to detect changes in the distribution of the covariates $x$. Our idea is to use the regression functions themselves to study the changes in this distribution. As shown for linear models in Thm. 1, and as we postulate more generally, if we train two or more regression functions on different subsets of the data, then the difference in the estimates given by the regression functions contains information about the generalization error. This method, while simple, is powerful. For example, it ignores by design features of the data that are irrelevant for estimating $y$. The underlying assumption is that by using subsets of the training data we can train regressors that capture concepts in the data, and if the testing data contains concepts not found in the training data, then it is likely that there is concept drift. The drifter method presented in this paper also scales well; especially high performance is reached using OLS linear segment models.

In this paper, we have used models trained using different segments of the data. As future work, an interesting topic to study is how the data could be “optimally” partitioned for this problem. Another alternative—which we have experimented with but not reported here—is to train several regression models from different model families on the data. In this paper we have also focused on estimating the generalization error of a regression function. The same ideas could be applied to detect concept drift in classifiers as well.

The theoretical foundation for this approach is shown to hold in the simple case of linear regression. However, our empirical evaluation with real datasets of various types (and different regressors) demonstrates that the idea also works when there are sources of non-linearity. Our experiments suggest that often the (black-box) regressor given as input can be locally approximated using linear regressors, and the differences between the estimates from these regressors serve as a good indicator for concept drift. The current paper represents initial work towards a practical concept drift detection algorithm, with experimental evaluation illustrating parameters that work robustly for the datasets considered in this work. Further work is needed to establish general practices for selecting suitable parameters for the drifter method.

Acknowledgements.
We thank Dr Martha Zaidan for help and discussions. This work was funded by the Academy of Finland (decisions 326280 and 326339). We acknowledge the computational resources provided by Finnish Grid and Cloud Infrastructure [39].

References

  • Kuwatani et al. [2018] T. Kuwatani, H. Nagao, S.-i. Ito, A. Okamoto, K. Yoshida, and T. Okudaira, Recovering the past history of natural recording media by Bayesian inversion, Phys. Rev. E 98, 043311 (2018).
  • Chandrasekera and Mitchell [2018] T. C. Chandrasekera and J. Mitchell, Numerical inversion methods for recovering negative amplitudes in two-dimensional nuclear magnetic resonance relaxation-time correlations, Phys. Rev. E 98, 043308 (2018).
  • Liu et al. [2018] Z. Liu, J. E. McClure, and R. T. Armstrong, Influence of wettability on phase connectivity and electrical resistivity, Phys. Rev. E 98, 043102 (2018).
  • Lu et al. [2018a] C.-K. Lu, S. C.-H. Yang, and P. Shafto, Standing-wave-decomposition Gaussian process, Phys. Rev. E 98, 032303 (2018a).
  • Lee [2018] W. Lee, Generalized Langevin equation and the linear regression model with memory, Phys. Rev. E 98, 022137 (2018).
  • Batz et al. [2018] P. Batz, A. Ruttor, and M. Opper, Approximate Bayes learning of stochastic differential equations, Phys. Rev. E 98, 022109 (2018).
  • Singh et al. [2018] R. Singh, D. Ghosh, and R. Adhikari, Fast Bayesian inference of the multivariate Ornstein-Uhlenbeck process, Phys. Rev. E 98, 012136 (2018).
  • Nair et al. [2018] A. G. Nair, S. L. Brunton, and K. Taira, Networked-oscillator-based modeling and control of unsteady wake flows, Phys. Rev. E 97, 063107 (2018).
  • Peng et al. [2018] L. Peng, Y. Zhu, and L. Hong, Generalized onsager’s reciprocal relations for the master and Fokker-Planck equations, Phys. Rev. E 97, 062123 (2018).
  • Meng et al. [2018] X. F. Meng, R. A. Van Gorder, and M. A. Porter, Opinion formation and distribution in a bounded-confidence model on various networks, Phys. Rev. E 97, 022312 (2018).
  • Lehle and Peinke [2018] B. Lehle and J. Peinke, Analyzing a stochastic process driven by Ornstein-Uhlenbeck noise, Phys. Rev. E 97, 012113 (2018).
  • Hastie et al. [2009] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Springer, 2009).
  • Gama et al. [2014] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, A survey on concept drift adaptation, ACM Computing Surveys 46, 44:1 (2014).
  • Kadlec et al. [2011] P. Kadlec, R. Grbić, and B. Gabrys, Review of adaptation mechanisms for data-driven soft sensors, Computers & Chemical Engineering 35, 1 (2011).
  • Vergara et al. [2012] A. Vergara, S. Vembu, T. Ayhan, M. A. Ryan, M. L. Homer, and R. Huerta, Chemical gas sensor drift compensation using classifier ensembles, Sensors and Actuators B: Chemical 166–167, 320 (2012).
  • Rudnitskaya [2018] A. Rudnitskaya, Calibration update and drift correction for electronic noses and tongues, Frontiers in Chemistry 6, 433 (2018).
  • Maag et al. [2018] B. Maag, Z. Zhou, and L. Thiele, A survey on sensor calibration in air pollution monitoring deployments, IEEE Internet of Things Journal 5, 4857 (2018).
  • Huggard et al. [2018] H. Huggard, Y. S. Koh, P. Riddle, and G. Olivares, Predicting air quality from low-cost sensor measurements, in Proceedings of Australasian Conference on Data Mining AusDM 2018, Communications in Computer and Information Science, Vol. 996, edited by R. Islam, Y. S. Koh, Y. Zhao, G. Warwick, D. Stirling, C.-T. Li, and Z. Islam (Springer, 2018) pp. 94–106.
  • Bifet and Frank [2010] A. Bifet and E. Frank, Sentiment knowledge discovery in twitter streaming data, in Proceedings of 13th International Conference on Discovery Science DS 2010, Lecture Notes in Artificial Intelligence, Vol. 6332, edited by B. Pfahringer, G. Holmes, and A. Hoffman (Springer, 2010) pp. 1–15.
  • Lindstrom et al. [2010] P. Lindstrom, S. J. Delany, and B. Mac Namee, Handling concept drift in a text data stream constrained by high labelling cost, in Proceedings to the 23rd International FLAIRS Conference (2010) pp. 32–37.
  • Schlimmer and Granger [1986] J. C. Schlimmer and R. H. Granger, Incremental learning from noisy data, Machine learning 1, 317 (1986).
  • Žliobaitė et al. [2016] I. Žliobaitė, M. Pechenizkiy, and J. Gama, An overview of concept drift applications, in Big Data Analysis: New Algorithms for a New Society, edited by N. Japkowicz and J. Stefanowski (Springer, 2016) pp. 91–114.
  • Lu et al. [2018b] J. Lu, A. Liu, F. Dong, F. Gu, J. Gama, and G. Zhang, Learning under concept drift: A review, IEEE Transactions on Knowledge and Data Engineering early access (2018b).
  • Gama et al. [2004] J. Gama, P. Medas, G. Castillo, and P. Rodrigues, Learning with drift detection, in Proceedings of the 17th Brazilian Symposium on Artificial Intelligence SBIA 2004, Lecture Notes in Artificial Intelligence, Vol. 3171, edited by A. L. C. Bazzan and S. Labidi (Springer, 2004) pp. 286–295.
  • Wang et al. [2017] L.-Y. Wang, C. Park, K. Yeon, and H. Choi, Tracking concept drift using a constrained penalized regression combiner, Computational Statistics & Data Analysis 108, 52 (2017).
  • Ikonomovska et al. [2011] E. Ikonomovska, J. Gama, and S. Džeroski, Learning model trees from evolving data streams, Data Mining and Knowledge Discovery 23, 128 (2011).
  • Dasu et al. [2006] T. Dasu, S. Krishnan, S. Venkatasubramanian, and K. Yi, An information-theoretic approach to detecting changes in multi-dimensional data streams, in Proceedings of Symposium on the Interface of Statistics, Computing Science, and Applications INTERFACE (2006).
  • Shao et al. [2014] J. Shao, Z. Ahmadi, and S. Kramer, Prototype-based learning on concept-drifting data streams, in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2014) pp. 412–421.
  • Qahtan et al. [2015] A. A. Qahtan, B. Alharbi, S. Wang, and X. Zhang, A PCA-based change detection framework for multidimensional data streams: Change detection in multidimensional data streams, in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2015) pp. 935–944.
  • de Mello et al. [2019] R. F. de Mello, Y. Vaz, C. H. Grossi, and A. Bifet, On learning guarantees to unsupervised concept drift detection on data streams, Expert Systems with Applications 117, 90 (2019).
  • Sethi and Kantardzic [2017] T. S. Sethi and M. Kantardzic, On the reliable detection of concept drift from streaming unlabeled data, Expert Systems with Applications 82, 77 (2017).
  • Lindstrom et al. [2013] P. Lindstrom, B. M. Namee, and S. J. Delany, Drift detection using uncertainty distribution divergence, Evolving Systems 4, 13 (2013).
  • Chandola and Vatsavai [2011] V. Chandola and R. R. Vatsavai, A Gaussian process based online change detection algorithm for monitoring periodic time series, in Proceedings of the 11th SIAM International Conference on Data Mining, SDM (SIAM, 2011) pp. 95–106.
  • Sobolewski and Wozniak [2013] P. Sobolewski and M. Wozniak, Concept drift detection and model selection with simulated recurrence and ensembles of statistical detectors, Journal of Universal Computer Science 19, 462 (2013).
  • Žliobaitė [2010] I. Žliobaitė, Change with delayed labeling: When is it detectable?, in Proceedings of ICDMW 2010, The 10th IEEE International Conference on Data Mining Workshops, edited by W. Fan, W. Hsu, G. I. Webb, B. Liu, C. Zhang, D. Gunopulos, and X. Wu (IEEE, 2010) pp. 843–850.
  • Bingham et al. [2006] E. Bingham, A. Gionis, N. Haiminen, H. Hiisilä, H. Mannila, and E. Terzi, Segmentation and dimensionality reduction, in Proceedings of the 2006 SIAM International Conference on Data Mining (SIAM, 2006) pp. 372–383.
  • Fawcett [2006] T. Fawcett, An introduction to ROC analysis, Pattern Recognition Letters 27, 861 (2006).
  • R Core Team [2019] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2019).
  • [39] FCGI, Finnish Grid and Cloud Infrastructure (2019), urn:nbn:fi:research-infras-2016072533.
  • Tiittanen et al. [2019] H. Tiittanen, E. Oikarinen, A. Henelius, and K. Puolamäki, Drifter. (2019), https://github.com/edahelsinki/drifter. Accessed October 9, 2019.
  • Vito et al. [2008] S. D. Vito, E. Massera, M. Piga, L. Martinotto, and G. D. Francia, On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario, Sensors and Actuators B: Chemical 129, 750 (2008).
  • U.S. Department of Transportation [2017] U.S. Department of Transportation, 2015 Flight Delays and Cancellations (2017), https://www.kaggle.com/usdot/flight-delays. Accessed April 1, 2019.
  • Fanaee-T and Gama [2014] H. Fanaee-T and J. Gama, Event labeling combining ensemble detectors and background knowledge, Progress in Artificial Intelligence 2, 113 (2014).