
Abstract

We propose a new supervised learning algorithm for classification and regression problems where two or more preliminary predictors are available. We introduce KernelCobra, a non-linear learning strategy for combining an arbitrary number of initial predictors. KernelCobra builds on the COBRA algorithm introduced by biau2016cobra, which combines estimators based on a notion of proximity of predictions on the training data. While the COBRA algorithm uses a binary threshold to decide which training data points are close enough to be used, we generalise this idea by using a kernel to better encapsulate the proximity information. Such a smoothing kernel provides more representative weights to each of the training points used to build the aggregate and final predictor, and KernelCobra systematically outperforms the COBRA algorithm. While COBRA is intended for regression, KernelCobra handles both classification and regression. KernelCobra is included in the open-source Python package pycobra (version 0.2.4 and onward), introduced by guedj2018pycobra. Numerical experiments assess the performance (in terms of pure prediction and computational complexity) of KernelCobra on real-life and synthetic datasets.

Keywords: machine learning; Python; ensemble learning; kernels; open source software
Title: Kernel-Based Ensemble Learning in Python
Authors: Benjamin Guedj, Bhargav Srinivasa Desikan
Correspondence: benjamin.guedj@inria.fr
Both authors contributed equally to this work.

1 Introduction

In the fields of machine learning and statistical learning, ensemble methods combine several estimators (or predictors) to create a new, superior estimator. Ensemble methods (also known as aggregation in the statistical literature) have attracted tremendous interest in recent years, and for a few problems are considered state-of-the-art techniques, as discussed by bell2007lessons. There is a wide variety of ensemble algorithms (some of which are discussed in dietterich2000ensemble, giraud2014introduction and shalev2014understanding), with the vast majority devoted to linear or convex combinations.

In this paper we propose a non-linear way of combining estimators, adding to a line of work pioneered by mojirsheibani. Our method (KernelCobra) extends the COBRA (standing for COmBined Regression Alternative) algorithm introduced by biau2016cobra. The COBRA algorithm is motivated by the idea that non-linear, data-dependent techniques can provide flexibility not offered by existing (linear) ensemble methods. By using the proximity between predictions on the training data and predictions on test data, training points are collected to perform the aggregation. The COBRA algorithm selects training points by checking whether this proximity is less than a data-dependent threshold $\varepsilon$, resulting in a binary decision (either keep the point or discard it). The KernelCobra algorithm we introduce in the present paper smooths this data point selection process by using a kernel to assign weights to the points in the collective. The only weights that points could take in the COBRA algorithm were 0 or 1, whereas our smoothed scheme spans real values between 0 and 1. We provide an implementation of KernelCobra in the Python package pycobra, introduced and described by guedj2018pycobra. We show through numerical experiments that KernelCobra consistently outperforms the original COBRA algorithm in a variety of situations.

The paper is organized as follows. Section 2 discusses related work and Section 3 introduces the ideas leading to KernelCobra. Section 4 presents the actual implementations of KernelCobra in the pycobra Python library. Section 5 illustrates the performance (both in prediction accuracy and computational complexity) on real-life and synthetic datasets, along with comparable aggregation techniques. Section 6 presents avenues for future work.

2 Related work

Our algorithm is inspired by the work of biau2016cobra which introduced the COBRA algorithm. COBRA itself is inspired by the seminal work by mojirsheibani, where the idea of using consensus between machines to create an aggregate was first discussed. Our algorithm KernelCobra is a strict generalisation of COBRA.

In a work parallel to ours, the idea of using the distance between points in the output space is also explored by fischer2018aggregation, where weights are assigned to points based on the proximity between predictions in the output space and the training data. However, the method employed (which we will refer to as MixCobra) also uses the input data while constructing the aggregate. While it is true that more data-dependent information might improve the quality of the aggregate, we argue that in cases with high-dimensional input data, proximity between input points will not add much useful information. Computing distance metrics in high dimensions is a computational challenge which, in our view, could undermine the statistical performance (see steinbach2004challenges for a discussion). While using both input and output information might provide satisfactory results in lower dimensions, non-linear ensemble learning algorithms arguably perform particularly well in high dimensions precisely because they are not affected by the dimension of the input space. This edge is lost in the MixCobra method.

KernelCobra overcomes this problem by only considering the proximity of data points in the prediction space, allowing for faster computations. This makes KernelCobra a promising candidate for high-dimensional learning problems: as a matter of fact, KernelCobra is not affected at all by the curse of dimensionality, with its complexity increasing only with the number of preliminary estimators.

In recent work, the original COBRA algorithm (as implemented by the pycobra Python library, see guedj2018pycobra) has successfully been adapted by guedj2019non to perform image denoising. The authors report that the COBRA-based denoising algorithm significantly outperforms most state-of-the-art denoising algorithms on a benchmark dataset, calling for a wider adoption of non-linear ensemble methods in the computer vision and image processing communities.

3 KernelCobra: a kernelized version of COBRA

Throughout this section, we assume that we are given a training sample $\mathcal{D}_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ of i.i.d. copies of $(X, Y)$ (with the notation $X \in \mathbb{R}^d$ and $Y \in \mathbb{R}$). We assume that $\mathbb{E}[Y^2] < \infty$. The space $\mathbb{R}^d$ is equipped with the standard Euclidean metric. Our goal is to consistently estimate the regression function $r(x) = \mathbb{E}[Y \mid X = x]$, for some new query point $x$, using the data $\mathcal{D}_n$.

To begin with, the original data set $\mathcal{D}_n$ is split into two data sequences $\mathcal{D}_k = \{(X_1, Y_1), \ldots, (X_k, Y_k)\}$ and $\mathcal{D}_l = \{(X_{k+1}, Y_{k+1}), \ldots, (X_n, Y_n)\}$, with $k + l = n$. For ease of notation, the elements of $\mathcal{D}_l$ are renamed $\{(X_1, Y_1), \ldots, (X_l, Y_l)\}$, similar to the notation used by biau2016cobra.

Now, suppose that we are given a collection of $M \geq 1$ competing estimators $r_1, \ldots, r_M$ (referred to as machines from now on) to estimate $r$. These preliminary machines are assumed to be generated using only the first sub-sample $\mathcal{D}_k$. In practice, machines can be any machine learning algorithm, from classical linear regression all the way up to a deep neural network, including naive Bayes, decision trees, penalised regression, random forests, nearest neighbours, and so on. These machines have no restrictions on their nature: they can be parametric or nonparametric. The only condition is that each of these machines is able to provide an estimation of $r(x)$ on the basis of $\mathcal{D}_k$ alone. Let us stress here that the number of machines $M$ is fixed.
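To make this setup concrete, here is a minimal sketch (using scikit-learn and hypothetical variable names such as X_k, y_k, X_l, y_l) of the sample split and of training three machines on the first sub-sample; any regressors exposing a predict method would do.

import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.linear_model import Lasso, Ridge
from sklearn.tree import DecisionTreeRegressor

# Generate a synthetic dataset and split it into D_k (to train the machines)
# and D_l (to build the aggregation weights).
X, y = make_friedman1(n_samples=400, random_state=0)
k = 200
X_k, y_k = X[:k], y[:k]   # D_k: used only to fit the machines
X_l, y_l = X[k:], y[k:]   # D_l: used only to compute the weights

# Any collection of fitted regressors can serve as machines.
machines = [Lasso().fit(X_k, y_k),
            Ridge().fit(X_k, y_k),
            DecisionTreeRegressor(random_state=0).fit(X_k, y_k)]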

As a gentle start, we now introduce a version of KernelCobra with the Euclidean distance and an exponential form of the weights – these will be eventually generalised.

Given the collection of basic machines $r_1, \ldots, r_M$, we define the aggregated estimator, for any query point $x \in \mathbb{R}^d$, as

T_n(x) = \sum_{i=1}^{l} W_i(x) Y_i,    (1)

where the random weights $W_i(x)$ are given by

W_i(x) = \frac{\exp\left(-\lambda \sum_{m=1}^{M} d\big(r_m(X_i), r_m(x)\big)\right)}{\sum_{j=1}^{l} \exp\left(-\lambda \sum_{m=1}^{M} d\big(r_m(X_j), r_m(x)\big)\right)}.    (2)

The hyperparameter $\lambda > 0$ acts as a temperature parameter, adjusting the level of fit to the data, and will be optimised in the numerical experiments using cross-validation. Let us stress here that $d(\cdot, \cdot)$ denotes the Euclidean distance between any two points. In (2), it serves as a way to measure the proximity, or coherence, between predictions on training data and predictions made for the new query point, across all machines.

This form (2) is smoother than the form introduced in the COBRA algorithm (biau2016cobra) and is reminiscent of exponential weights. We call the aggregated estimator in (1) with weights defined in (2) KernelCobra.
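Below is a minimal NumPy sketch of the estimator (1) with the exponential weights (2); it reuses the machines, X_l and y_l variables from the sketch above and is purely illustrative (it is not the pycobra implementation).

import numpy as np

def kernelcobra_predict(x, machines, X_l, y_l, temperature=1.0):
    """Aggregate the outputs y_l with the exponential weights of Eq. (2)."""
    # Predictions of each machine on the second sub-sample (l x M matrix).
    preds_l = np.column_stack([m.predict(X_l) for m in machines])
    # Predictions of each machine at the query point (vector of length M).
    preds_x = np.array([m.predict(x.reshape(1, -1))[0] for m in machines])
    # Sum over machines of the distances between predictions, as in Eq. (2).
    distances = np.abs(preds_l - preds_x).sum(axis=1)
    weights = np.exp(-temperature * distances)
    weights /= weights.sum()
    # Weighted combination of observed outputs, as in Eq. (1).
    return np.dot(weights, y_l)

# Example: prediction at the first point of D_l.
# print(kernelcobra_predict(X_l[0], machines, X_l, y_l, temperature=0.5))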

A more generic form is given by

W_i(x) = \frac{K\big(\mathbf{r}(X_i), \mathbf{r}(x)\big)}{\sum_{j=1}^{l} K\big(\mathbf{r}(X_j), \mathbf{r}(x)\big)},    (3)

where $\mathbf{r}(x) = (r_1(x), \ldots, r_M(x))$ and $K$ denotes a kernel used to capture the proximity between predictions on training and query data, across machines. We call the aggregated estimator in (1) with weights defined in (3) general KernelCobra.

This is a generalisation of the initial COBRA weights, given by biau2016cobra as

W_i(x) = \frac{\mathbb{1}\left\{\bigcap_{m=1}^{M} \{|r_m(x) - r_m(X_i)| \leq \varepsilon\}\right\}}{\sum_{j=1}^{l} \mathbb{1}\left\{\bigcap_{m=1}^{M} \{|r_m(x) - r_m(X_j)| \leq \varepsilon\}\right\}},    (4)

where $\varepsilon > 0$ is a (possibly data-dependent) threshold parameter. Rather than the bumpy behaviour of (4) (which can take values only in $\{0, 1\}$), the versions we propose in (2) and (3) take continuous values in $[0, 1]$, adding flexibility. Rather than a threshold deciding whether to keep or discard a data point, its influence is now always considered, through a measure of how close the preliminary machines' predictions for the new query point are to their predictions for the point $X_i$. In other words, a data point will have more influence on the aggregated estimator (its weight will be higher) if the machines predict similar outcomes for $X_i$ and the new query point. Let us stress here that KernelCobra, like the initial COBRA algorithm, aggregates machines in a non-linear way: the aggregated estimator in (1) is a weighted combination of observed outputs $Y_i$, not of the initial machines (which only serve to build the weights). As such, it is fairly different from most aggregation schemes, which form linear combinations of the machines' outcomes.

Note also that computing the weights defined in (2) and (3) involves elementary computations over scalars (each machine's prediction on the training sample and at the new query point) rather than over $d$-dimensional vectors. As highlighted above, both versions of KernelCobra avoid the curse of dimensionality.

General KernelCobra allows for the use of any kernel which might be preferred by practitioners – it is the generic version of our algorithm. In practice, we have found that the KernelCobra defined with weights in (2) provides interesting empirical results, and is more interpretable. We thus provide both versions as they express a trade-off between generality and ease of interpretation and use.

We now devote the remainder of this section to two interesting byproducts of our approach: an extension to the unsupervised setting and an extension to classification.

3.1 The unsupervised setting

As COBRA and KernelCobra are non-linear aggregation methods, the final estimator is a weighted combination of observed outputs $Y_i$. We can turn our approach into a more classical linear aggregation scheme, with the notable feature that none of the weights depend on the $Y_i$s, therefore allowing us to consider the unsupervised setting. This differs from classical linear or convex aggregation methods such as exponential weights, where the weights depend on a measure of performance such as an empirical risk, which involves the $Y_i$s.

We can now discard all the $Y_i$s in $\mathcal{D}_l$ and propose the following estimator for any new query point $x$:

T_n(x) = \sum_{i=1}^{l} W_i(x) \sum_{m=1}^{M} \alpha_m r_m(X_i),    (5)

where the machine weights $\alpha_1, \ldots, \alpha_M$ sum to 1. Our first set of weights $W_i(x)$ is given by (2) or (3), and serves to weight data points. Our second set of weights $\alpha_1, \ldots, \alpha_M$, used to aggregate the predictions of the machines, can be any sequence of weights summing to 1, and serves to weight machines.

In other words, once the machines have been trained (either in a supervised setting using the outputs in the sub-sample $\mathcal{D}_k$, or in an unsupervised setting by discarding all outputs across the dataset), the estimator defined in (5) no longer needs the outputs from the second half $\mathcal{D}_l$ of the dataset, therefore extending to semi-supervised and unsupervised settings and further illustrating the flexibility of our approach.
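For illustration, here is a NumPy sketch of the estimator in (5), with the same assumed variable names as in the earlier sketches; the machine weights alpha are taken uniform here, and note that no outputs from the second sub-sample are used.

import numpy as np

def unsupervised_kernelcobra_predict(x, machines, X_l, temperature=1.0, alpha=None):
    M = len(machines)
    if alpha is None:
        alpha = np.full(M, 1.0 / M)   # machine weights, summing to 1
    preds_l = np.column_stack([m.predict(X_l) for m in machines])   # l x M
    preds_x = np.array([m.predict(x.reshape(1, -1))[0] for m in machines])
    # Point weights from Eq. (2); the general form (3) could be used instead.
    weights = np.exp(-temperature * np.abs(preds_l - preds_x).sum(axis=1))
    weights /= weights.sum()
    # Eq. (5): the outputs Y_i are replaced by a convex combination of the
    # machines' own predictions at each training point.
    pseudo_outputs = preds_l @ alpha
    return np.dot(weights, pseudo_outputs)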

3.2 Classification

Non-linear aggregation of classifiers has been studied by mojirsheibani, mojirsheibani2000kernel (where a kernel is also used to smoothen the point selection process). The papers mojirsheibani2002almost and balakrishnan2015simple focus on using the misclassification error to build the aggregate. We here provide a simple extension of our approach to classification.

For binary classification ($\mathcal{Y} = \{0, 1\}$), the combined classifier is given by

T_n(x) = \mathbb{1}\left\{\sum_{i=1}^{l} W_i(x)\,\mathbb{1}\{Y_i = 1\} \geq \sum_{i=1}^{l} W_i(x)\,\mathbb{1}\{Y_i = 0\}\right\}.    (6)

The weights can be chosen as (2) or (3).

We also provide a combined classifier for the multi-class setting: assuming that $\mathcal{Y}$ is a finite discrete set of classes,

T_n(x) = \underset{c \in \mathcal{Y}}{\arg\max} \sum_{i=1}^{l} W_i(x)\,\mathbb{1}\{Y_i = c\}.    (7)
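A sketch of the weighted vote (6)-(7) for classifiers exposing a predict method is given below; measuring the proximity between predicted labels by the number of disagreeing machines is our own illustrative choice here, and any of the weight forms above could be substituted.

import numpy as np

def kernelcobra_classify(x, classifiers, X_l, y_l, temperature=1.0):
    preds_l = np.column_stack([c.predict(X_l) for c in classifiers])   # l x M labels
    preds_x = np.array([c.predict(x.reshape(1, -1))[0] for c in classifiers])
    # Proximity measured by how many machines disagree between the query
    # point and each training point (an illustrative choice).
    disagreements = (preds_l != preds_x).sum(axis=1)
    weights = np.exp(-temperature * disagreements)
    weights /= weights.sum()
    # Eq. (7): weighted vote over the observed labels.
    classes = np.unique(y_l)
    scores = [weights[y_l == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]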

To conclude this section, let us mention that biau2016cobra proved that the combined estimator with weights chosen as in the initial COBRA algorithm (4) enjoys an oracle guarantee: the average quadratic loss of the estimator is upper bounded by the best (lowest) quadratic loss of the machines, up to a remainder term which vanishes as the size $l$ of the second sub-sample grows. This result is remarkable as it does not involve the ambient dimension $d$ but rather the (fixed) number of machines $M$. We focus in the present paper on the introduction of KernelCobra and its variants, and on its implementation in Python (detailed in the next section) – we leave for future work the extension of biau2016cobra's theoretical results.

4 Implementation

All new algorithms described in the present paper are implemented in the Python library pycobra (from version 0.2.4 onward); we refer to guedj2018pycobra for more details.

The Python library pycobra can be installed via pip using the command pip install pycobra. The PyPI page for pycobra is https://pypi.org/project/pycobra/. The code for pycobra is open source and can be found on GitHub at https://github.com/bhargavvader/pycobra. The documentation for pycobra is hosted at https://modal.lille.inria.fr/pycobra/.

We describe the general KernelCobra algorithm in Algorithm 1.

Data: query point x, Kernel, [Kernel parameters], basic-machines, training-set-responses, training-set
# training-set is the set composed of all training points and their responses.
# training-set-responses is the set composed of the responses. Result: prediction
weights = [0, ..., 0] ;
# weights is a list of size l, each index mapping to the proximity information of a training point;
for machine j in basic-machines do
       pred = basic-machines[j](x)
       # where basic-machines[j](x) denotes the prediction made by machine j at point x;
       for index, point in training-set do
             weights[index] += Kernel(pred, basic-machines[j](point)) ;
       end for
end for
weights = weights / sum(weights) ;
prediction = sum(training-set-responses * weights) ;
Algorithm 1 General KernelCobra
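For readers who prefer Python to pseudocode, here is a direct (unoptimised) translation of Algorithm 1, under the assumption that basic_machines is a dictionary of fitted scikit-learn-style estimators and that training_set and training_set_responses are NumPy arrays; this is an illustrative sketch, not the pycobra source.

import numpy as np

def general_kernelcobra(x, kernel, basic_machines, training_set, training_set_responses):
    # weights[i] accumulates the proximity information of training point i.
    l = len(training_set_responses)
    weights = np.zeros(l)
    for name, machine in basic_machines.items():
        pred = machine.predict(x.reshape(1, -1))[0]   # prediction at the query point
        point_preds = machine.predict(training_set)   # predictions on the training points
        for index in range(l):
            weights[index] += kernel(pred, point_preds[index])
    weights /= weights.sum()
    # Final prediction: weighted combination of the observed responses.
    return float(np.dot(training_set_responses, weights))

# Example call with a Gaussian-type kernel on scalar predictions (an arbitrary choice),
# reusing the machines, X_l and y_l from the earlier sketches:
# general_kernelcobra(X_l[0], lambda a, b: np.exp(-(a - b) ** 2),
#                     {"lasso": machines[0], "ridge": machines[1]}, X_l, y_l)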

KernelCobra is implemented as part of the KernelCobra class in the pycobra package. The estimator is scikit-learn compatible (see pedregosa2011scikit), and works similarly to the other estimators provided in the pycobra package. The only hyperparameter accepted in creating the object is a random state object.

The pred method implements the algorithm described in Algorithm 1, and the predict method serves as a wrapper for the pred method to ensure it is scikit-learn compatible. It should be noted that the predict method can be customised to pass any user-defined kernel (along with parameters), as suggested by (3). The default behaviour of the predict method is set to use the weights defined in (2).
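As a usage sketch of the estimator described above (the import path pycobra.kernelcobra and the exact constructor signature are assumptions on our part and may need adjusting to the installed version):

from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from pycobra.kernelcobra import KernelCobra   # assumed import path

X, y = make_friedman1(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scikit-learn style interface: fit on training data, predict on new query points.
kc = KernelCobra(random_state=0)
kc.fit(X_train, y_train)
predictions = kc.predict(X_test)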

Similarly to the other estimators provided in pycobra, KernelCobra can be used with the Diagnostics and Visualisation classes, which are used for debugging and visualising the model. Since it abides by the scikit-learn API, one can use either GridSearchCV or the Diagnostics class to tune the parameters of KernelCobra (such as the temperature parameter).

The default regression machines used for KernelCobra are the scikit-learn implementations of Lasso, Random Forest, Decision Trees and Ridge regression. This is merely an editorial choice to have the algorithm up and running immediately; let us stress that one can plug in any estimator of their own using the load_machine method, the only constraints being that it was trained on $\mathcal{D}_k$ and that it exposes a valid predict method.

We also provide the pseudo-code for the variant of KernelCobra in semi-supervised or unsupervised settings defined by (5) (Algorithm 2), along with the variant for multi-class classification defined by (7) (Algorithm 3).

Data: query point x, Kernel, [Kernel parameters], basic-machines, weights-machines, training-set
# training-set is the set composed of all training points; the responses are not needed in this setting. Result: prediction
weights-points = [0, ..., 0] ;
# weights-points is a list of size l, each index mapping to the proximity information of a training point;
for machine j in basic-machines do
       pred = basic-machines[j](x)
       # where basic-machines[j](x) denotes the prediction made by machine j at point x;
       for index, point in training-set do
             weights-points[index] += Kernel(pred, basic-machines[j](point)) ;
       end for
end for
weights-points = weights-points / sum(weights-points) ;
# machine-predictions is a list of size l, mapping each training point to the aggregated prediction of the machines at that point;
# weights-machines is a list mapping each machine to its weight; these weights must sum to 1 ;
machine-predictions[index] = sum over machines j of weights-machines[j] * basic-machines[j](point at index) ;
prediction = sum(weights-points * machine-predictions) ;
Algorithm 2 KernelCobra in the unsupervised setting
Data: query point x, basic-machines, training-set-responses, training-set, machine-predictions
# training-set is the set composed of all training points and their responses.
# training-set-responses is the set composed of the responses.
# machine-predictions is a dictionary mapping each constituent machine to its predictions on the training-set. Result: predicted label
machine-set = {} ;
# machine-set is a dictionary storing, for each machine, the training points whose predicted label agrees with the label predicted at x;
for machine j in basic-machines do
       pred = basic-machines[j](x)
       # where basic-machines[j](x) denotes the label predicted by machine j at point x;
       for index in training-set-responses do
             if machine-predictions[j][index] == pred then
                  add index to machine-set[j]
             end if
       end for
end for
return the majority vote over the responses of the collected points, as defined by (7) ;
Algorithm 3 KernelCobra for classification

To conclude this section, let us mention that the complexity of all presented algorithms is $\mathcal{O}(l \times M)$, as we loop over all data points in the sub-sample $\mathcal{D}_l$ and over all $M$ machines.

5 Numerical Experiments

We have conducted numerical experiments to assess the merits of KernelCobra in terms of statistical performance and computational cost. We compare the Python implementations of KernelCobra, MixCobra and the original COBRA algorithm (as implemented by pycobra), along with the default scikit-learn machines used to create our aggregate.

We test our method on four synthetic datasets and two real-world datasets, and report statistical accuracy and CPU timing. The synthetic datasets are generated using scikit-learn's make_regression, make_friedman1 and make_sparse_uncorrelated functions. The two real-world datasets are the Boston Housing dataset and the Diabetes dataset.

Table 1 wraps up our results for statistical accuracy and establishes KernelCobra as a promising new kernel-based ensemble learning algorithm. Figure 1 compares the computational cost of the original COBRA, MixCobra and KernelCobra. As both COBRA and KernelCobra drop the input data, they do not suffer from an increase of data dimensionality and significantly outperform MixCobra.

The pycobra package also offers a visualisation suite which gives QQ-plots, boxplots of errors, and comparison between the predictions of machines and the aggregate along with the true values. We report a sample of those outputs in Figure 2.

Last but not least, we provide a sample of decision boundaries for the classification variant of KernelCobra on three datasets, in Figure 3, Figure 4 and Figure 5. These three datasets are generated with scikit-learn's generic classification utilities: a linearly separable dataset, make_moons and make_circles. The nature of these datasets provides a way to visualise how ClassifierCobra classifies with respect to the default classifiers used.

A few notes about the nature of the experiments and the performance are in order. KernelCobra is the best-performing method on 4 out of 6 datasets. These values are achieved using an optimally derived bandwidth parameter for each dataset, computed with the optimal_kernelbandwidth function in the Diagnostics class of the pycobra package. The default bandwidth values do not perform as well, and further fine-tuning of the bandwidth could yield even better results. MixCobra has similar tunable parameters which affect its performance, but tuning takes significantly longer, as there are three parameters to tune. For the results displayed, we use the default range of parameters before choosing optimal parameters for both KernelCobra and MixCobra.

When considering both the CPU timing to find optimal parameters and the statistical performance, KernelCobra outperforms the initial COBRA algorithm.

6 Conclusion and future work

We have introduced a generalisation of the COBRA algorithm from biau2016cobra which can be used for classification and regression (whether supervised, semi-supervised or unsupervised). Our approach, called KernelCobra, delivers a kernel-based ensemble learning algorithm which is versatile, computationally cheap and flexible. All variants of KernelCobra ship as part of the pycobra Python library introduced by guedj2018pycobra (from version 0.2.4), and are designed to be used in a scikit-learn environment. In future work we will conduct a theoretical analysis of the kernelised COBRA algorithm to complete the theory provided by biau2016cobra.

Estimator       Gaussian        Sparse      Diabetes      Boston        Linear      Friedman
random-forest   12266.640297    3.35474     2924.12121    18.47003      0.116743    5.862687
                (1386.2011)     (0.3062)    (415.4779)    (4.0244)      (0.0142)    (0.706)
ridge           491.466644      1.23882     2058.08145    13.907375     0.165907    6.631595
                (201.110142)    (0.0311)    (127.6948)    (2.2957)      (0.0101)    (0.2399)
svm             1699.722724     1.129673    8984.301249   74.682848     0.178525    7.099232
                (441.8619)      (0.0421)    (236.8372)    (114.9571)    (0.0155)    (0.3586)
tree            22324.209936    6.304297    5795.58075    32.505575     0.185554    11.136161
                (3309.8819)     (0.9771)    (1251.3533)   (14.2624)     (0.0246)    (1.73)
Cobra           1606.830549     1.951787    2506.113231   16.590891     0.12352     5.681025
                (651.2418)      (0.5274)    (440.1539)    (8.0838)      (0.0109)    (1.3613)
KernelCobra     488.141132      1.11758     2238.88967    12.789762     0.113702    4.844789
                (189.9921)      (0.1324)    (1046.0271)   (9.3802)      (0.0089)    (0.5911)
MixCobra        683.645028      1.419663    2762.95792    16.228564     0.104243    5.068543
                (196.7856)      (0.1292)    (512.6755)    (12.7125)     (0.0104)    (0.6058)
Table 1: For each estimator (first column) and each dataset (first row), we report the mean RMSE (with standard deviation in parentheses) over 100 independent runs; the lowest value in each column marks the best method for that dataset.
Figure 1: CPU timing for building the initial COBRA estimator, MixCobra and KernelCobra. Each line is the average over 100 independent runs, and shades are standard deviations.
Figure 2: Boxplots of errors over 100 independent runs. (a) COBRA, MixCobra and KernelCobra. (b) COBRA vs. basic machines.
Figure 3: Decision boundaries on the circle dataset (make_circles): (a) the data, (b) to (h) the base classifiers (KNN, LDA, logistic regression, naive Bayes, neural network, support vector machine, decision tree), (i) KernelCobra.
Figure 4: Decision boundaries on the moon dataset (make_moons): (a) the data, (b) to (h) the base classifiers (KNN, LDA, logistic regression, naive Bayes, neural network, support vector machine, decision tree), (i) KernelCobra.
Figure 5: Decision boundaries on the linearly separable dataset: (a) the data, (b) to (h) the base classifiers (KNN, LDA, logistic regression, naive Bayes, neural network, support vector machine, decision tree), (i) KernelCobra.
Author contributions

Both authors contributed equally to this work.

Funding

A substantial fraction of this work was carried out while both authors were affiliated with Inria, Lille - Nord Europe research centre, Modal project-team.

Conflicts of interest

The authors declare no conflict of interest.

References
