Automated Machine Learning with Monte-Carlo Tree Search


Herilalaina Rakotoarison    Marc Schoenauer    Michèle Sebag
TAU, LRI-CNRS–INRIA, Université Paris-Saclay, France
{herilalaina.rakotoarison, marc.schoenauer}@inria.fr, sebag@lri.fr
Abstract

The AutoML task consists of selecting the proper algorithm in a machine learning portfolio, and its hyper-parameter values, in order to deliver the best performance on the dataset at hand. Mosaic, a Monte-Carlo tree search (MCTS) based approach, is presented to handle this hybrid structural and parametric expensive black-box optimization problem. Extensive empirical studies are conducted to independently assess and compare: i) the optimization processes based on Bayesian optimization or MCTS; ii) the warm-start initialization; iii) the ensembling of the solutions gathered along the search. Mosaic is assessed on the OpenML 100 benchmark and the Scikit-learn portfolio, with statistically significant gains over Auto-Sklearn, winner of former international AutoML challenges.

1 Introduction

The automated selection of the machine learning (ML) algorithm yielding the best performance on the problem at hand, referred to as AutoML, has attracted interest since the late 1980s [Brazdil and Giraud-Carrier2018]: there exists no killer ML algorithm dominating all others on all datasets [Wolpert1996], and ML algorithms demonstrate a high sensitivity w.r.t. their hyper-parameters. With the explosion of machine learning applications, the AutoML issue becomes even more acute. AutoML gradually extended to hyper-parameter optimization [Bergstra et al.2011], and now tackles the optimization of the overall ML pipeline from data preparation to model learning [Feurer et al.2015, Li et al.2017, Olson et al.2016, Chen et al.2018]. Several international AutoML challenges have been organized in the last decade [Guyon et al.2015, Guyon et al.2018], spurring the development of efficient AutoML systems such as Auto-Weka [Kotthoff et al.2017], Hyperband [Li et al.2017], Tpot [Olson et al.2016] and the challenge winner Auto-Sklearn [Feurer et al.2015] (more in section 2).

AutoML systems tackle a black-box expensive optimization problem: for a given target dataset, find

$\lambda^* \in \arg\max_{\lambda \in \Lambda} F(\lambda)$    (1)

where $\Lambda$ is the structural and parametric space of ML configurations (containing categorical and continuous parameters with hierarchical dependencies), and $F(\lambda)$ the performance of the model learned from the dataset at hand using configuration $\lambda$. ML configurations and pipelines are used interchangeably in the following.
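For concreteness, one call to the black-box objective $F$ amounts to instantiating a candidate configuration as a Scikit-learn pipeline and scoring it on held-out data. The minimal Python sketch below illustrates this; the dictionary keys and the two-step pipeline are illustrative assumptions, not Mosaic's actual interface.

# One evaluation of the black-box objective F(lambda): build the pipeline
# described by a configuration and measure its validation accuracy.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def evaluate_configuration(config, X, y):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    model = Pipeline([
        ("rescale", StandardScaler()),                      # pre-processing decision
        ("learner", RandomForestClassifier(                 # learning-algorithm decision
            n_estimators=config["n_estimators"],            # hyper-parameter values (theta)
            max_depth=config["max_depth"])),
    ])
    model.fit(X_tr, y_tr)
    return model.score(X_va, y_va)                          # one observation of F(lambda)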

A main difficulty of the AutoML optimization problem lies in the search space: an ML pipeline is a series of components (algorithms), together with their own hyper-parameters. The task thus consists in solving the combinatorial optimization of the pipeline structure, the performance of which depends on the parametric optimization of its component hyper-parameters.

Most AutoML approaches tackle both problems using a single optimization technique (e.g., Bayesian Optimization or Evolutionary Algorithms), whereas the two problems are of very different nature. The contribution of the paper, presenting the Mosaic (MOnte-Carlo tree Search for AlgorIthm Configuration) approach (publicly available under an open source license at https://github.com/herilalaina/mosaic_ml), is to use the best approach for each problem while tightly coupling both optimizations (section 3).

Specifically, the optimization of the pipeline structure can be viewed as a sequential decision process; Monte-Carlo Tree Search (MCTS) [Kocsis and Szepesvári2006] has demonstrated its ability to efficiently solve such sequential problems. On the other hand, Bayesian optimization [Mockus et al.1978, Wang2016] has been very successful in solving expensive optimization problems, in particular in the context of hyper-parameter tuning [Hutter et al.2011]. These two approaches are coupled in Mosaic, and their coupling relies on a surrogate model of the performance of the pipelines, as in Auto-Sklearn. However, this surrogate model is not only used to guide the local search of the hyper-parameters; it is also incorporated at the heart of the MCTS search for the best pipeline structure.

The paper is organized as follows. Section 2 discusses the state of the art in AutoML and presents the MCTS formal background. Section 3 gives a detailed overview of the proposed Mosaic approach. The experimental setting and the goals of experiments are presented in Section 4. Section 5 reports on the empirical validation of Mosaic on the OpenML benchmark suite and the Scikit-learn portfolio, demonstrating statistically significant gains over Auto-Sklearn and Tpot [Olson et al.2016]. (We warmly thank the Auto-Sklearn authors, who kindly provided many explanations together with their open source code. We also thank the Tpot authors, who provide an open source, easy-to-use software package.)

2 Related work

This section briefly reviews previous work on the per-instance AutoML problem (Eq. (1)), first focusing on approaches using surrogate models and Bayesian Optimisation, then on MCTS and other approaches. Approaches focused on specific issues, e.g., neural architecture optimization [Wistuba2018], are omitted due to space limitations.

2.1 Surrogate Model-based optimization

Most prominent approaches today proceed iteratively, learning and exploiting an estimate of the optimization objective $F$, called surrogate model.

Learning a surrogate model.

At step $t$, a surrogate model $\hat{F}_t$ is learned from the set $\{(\lambda_i, F(\lambda_i))\}_{i \le t}$ gathering the previously selected configurations and their associated performances. $\hat{F}_t$ is then used to determine the most promising candidate $\lambda_{t+1}$, see below.

As said, a main difficulty lies in the structure of the space $\Lambda$. In all generality, this space includes categorical features (e.g., the name of the ML algorithm, the type of pre-processing) and continuous or integer features, the number and range of which depend on the value of the categorical features (e.g., the algorithm or pre-processing method). Diverse surrogate model hypothesis spaces have been considered: the Sequential Model-based Algorithm Configuration (Smac) [Hutter et al.2011], like Auto-Weka [Kotthoff et al.2017] and Auto-Sklearn [Feurer et al.2015], is based on Random Forests; [Bergstra et al.2011] use a Tree-structured Parzen Estimator (Tpe); Spearmint [Snoek et al.2015] is based on Gaussian Processes (GP). An extensive comparison of these approaches [Eggensperger et al.2013] shows that Smac and Tpe perform best on high-dimensional and mixed hyper-parameter optimization problems, while the GP-based Spearmint performs best on low-dimensional continuous search spaces.

Surrogate model-based optimization.

Surrogate models are often exploited along the lines of Bayesian optimization (BO) [Mockus et al.1978, Wang2016]. Assuming that model $\hat{F}_t$ yields the performance distribution for any given $\lambda$, the most promising $\lambda_{t+1}$ is determined by maximizing the expected improvement on the current best value [Mockus et al.1978], or more generally an acquisition function balancing performance expectation and variance [Wang2016].

A simple alternative is to learn a surrogate model as a random forest, yielding both a performance estimate and a variance estimate for any configuration. The next candidate is the configuration maximizing the resulting approximate acquisition function, out of a number of sampled configurations. The key issue here is the distribution used to sample the configuration space: for instance Auto-Sklearn, as it uses Smac, considers a small number of configurations close to the best-so-far configuration, augmented with a large number of uniformly sampled configurations.
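As an illustration of this scheme, the sketch below selects the next candidate by maximizing Expected Improvement over a batch of sampled, vector-encoded configurations; the use of the per-tree spread of the random forest as a variance estimate, and all names, are illustrative assumptions.

# Sketch: random-forest surrogate + Expected Improvement acquisition.
# past_X, sampled_X: 2-D numpy arrays of vector-encoded configurations.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def next_candidate(past_X, past_y, sampled_X):
    surrogate = RandomForestRegressor(n_estimators=100).fit(past_X, past_y)
    per_tree = np.stack([t.predict(sampled_X) for t in surrogate.estimators_])
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0) + 1e-9
    best = np.max(past_y)                                   # current best performance
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)    # Expected Improvement
    return sampled_X[np.argmax(ei)]                         # most promising sample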

2.2 Monte-Carlo Tree Search

An alternative to Bayesian optimization is based on Monte-Carlo Tree Search [Kocsis and Szepesvári2006]. Considering a tree-structured search space, MCTS iteratively explores the space, gradually biasing the exploration toward the most promising regions of the search tree. Each iteration, referred to as a tree-walk (Fig. 1), involves four phases [Gelly and Silver2011]:

Down the MCTS tree:

The first phase traverses the MCTS tree from the root node. In each (non-leaf) node $s$ of the tree, the next node to visit is classically selected among the child nodes $s'$ of $s$ using the multi-armed bandit Upper Confidence Bound criterion [Auer2002]:

$s^* = \arg\max_{s' \in \mathrm{children}(s)} \left\{ \bar{\mu}_{s'} + C \sqrt{\frac{\log n(s)}{n(s')}} \right\}$    (2)

with $\bar{\mu}_{s'}$ the average reward gathered over all tree-walks with prefix $s'$, $n(s)$ (resp. $n(s')$) the number of visits to node $s$ (resp. node $s'$), and $C$ a problem-dependent constant that controls the exploitation vs exploration trade-off;
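In code, the selection rule of Eq. (2) is a one-liner over the children of the current node; this sketch assumes node objects carrying reward sums and visit counts.

# UCB selection (Eq. (2)): mean reward plus an exploration bonus.
import math

def ucb_select(node, C):
    return max(node.children,
               key=lambda c: c.reward_sum / c.visits
                             + C * math.sqrt(math.log(node.visits) / c.visits))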

Expansion:

When arriving at a leaf node, a new child node might be added. The choice of the new node can be guided using, e.g., the Rapid Action Value Estimate [Gelly and Silver2011]. The number of child nodes is controlled and gradually extended along the Progressive Widening strategy [Auger et al.2013]: a new child node is added whenever the integer part of $n(s)^{PW}$ increases by one, $PW$ being a user-defined parameter (typically 0.6).
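The progressive widening test can be sketched as follows, under the convention (consistent with the description above, though the exact comparison is an assumption) that a child is added whenever the integer part of $n(s)^{PW}$ exceeds the current child count.

# Progressive widening: allow a new child when floor(n(s)^PW) grows.
def should_expand(node, pw=0.6):
    return int(node.visits ** pw) > len(node.children)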

Playout:

After the expansion phase, a playout strategy is used to complete the tree-walk until reaching a terminal node and computing the associated reward;

Back-propagation:

The reward value is back-propagated along the current path, incrementing the visit count $n(s)$ of all visited nodes $s$ and updating their average reward $\bar{\mu}_s$ accordingly.

Figure 1: Monte-Carlo Tree Search: each iteration involves four phases [Chaslot et al.2008].

Taking inspiration from AlphaGo Zero [Silver et al.2017], the AlphaD3M system builds upon MCTS to explore the pipeline search space [Drori et al.2018]. The difference compared to mainstream AutoML systems is twofold. Firstly, AlphaD3M explores sequences of actions (insertion, deletion, replacement of pipeline parts) on pipelines, as opposed to directly exploring the space $\Lambda$. Secondly, AlphaD3M learns (resp. exploits) a recurrent neural net to encode the action probability of success (resp. probability of selection) conditioned on the current state, in lieu of a surrogate model or a selection rule.

2.3 Expensive optimization

As said, AutoML is an expensive black-box optimization problem: computing $F(\lambda)$ amounts to running the whole ML pipeline on the considered dataset. Several approaches have been proposed to reduce the computational cost. A first one consists of sub-sampling the training dataset [Swersky et al.2014, Li et al.2017, Klein et al.2017]. Two surrogate models are built in [Klein et al.2017]: one for the performance reached depending on the configuration $\lambda$ and the fraction of the training set considered, another one for the actual computational cost of running $\lambda$ on a fraction of the data. Both models are jointly exploited to determine the most promising pipeline in terms of performance improvement and moderate computational cost.

Another approach is Hyperband [Li et al.2017], launching a large number of random candidate configurations subject to a given cut-off time. Hyperband iteratively prunes the unpromising candidates and re-examines the remaining ones with a larger cut-off, until the best candidates are allowed to run with no computational cost constraint. According to its authors, Hyperband outperforms Smac and Tpe for hyper-parameter optimization on neural networks and support vector machines, though its performance is sensitive to its own hyper-hyper-parameters.

Two evolutionary approaches (EAs) have been proposed, handling particular ML pipelines. Tpot uses Genetic Programming to evolve pipelines made of parallel preprocessing and feature construction branches that feed some model-building method. A comparative study [Balaji and Allen2018] reports that Tpot is outperformed by Auto-Sklearn on classification problems, while the reverse is true on regression problems. AutoStacker [Chen et al.2018] builds an ML pipeline by evolving new artificial features and adding them to the original dataset. The whole stack is optimized using a vanilla EA with ad hoc mutation and crossover. AutoStacker outperforms Tpot and yields some better results than Auto-Sklearn, though both algorithms have very different ways of handling CPU time.

2.4 Search initialization and solution aggregation

It has long been known that initialization is a most critical step for ill-posed optimization problems. The selection of the first candidates governs the quality of the surrogate model (section 2.1) and the time-to-good configurations: the better the initial $\lambda$s, the more accurate the surrogate model will be in the worthy part of the search space (moderate mistakes in the low-performing regions do little harm, since these regions will not be much visited).

The selection of the initial $\lambda$s in Auto-Sklearn is based on the so-called MetaLearning heuristics. Formally, Auto-Sklearn is provided with an archive gathering pairs $(x_i, \lambda^*_i)$, where the meta-feature vector $x_i$ describes the $i$-th dataset (meta-features describe datasets using statistical, information-theoretic and landmark-based measures [Muñoz et al.2018]) and $\lambda^*_i$ is the best known pipeline for this dataset. Letting $x$ denote the meta-feature vector associated with the current dataset, its nearest neighbors in the archive (in the sense of the Euclidean distance on the meta-feature vector space) are computed, and the $\lambda^*$s associated with these neighbors are used by Auto-Sklearn as first configurations [Feurer et al.2015].
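A sketch of this warm start, assuming the archive is stored as a matrix of meta-feature vectors with a parallel list of best-known pipelines (the value $k=25$ follows the MetaLearning variant described in section 3.4):

# Warm start: best-known pipelines of the k nearest datasets in the archive.
import numpy as np

def warm_start_configs(archive_metafeatures, archive_best_configs, x, k=25):
    dists = np.linalg.norm(archive_metafeatures - x, axis=1)   # Euclidean distance
    return [archive_best_configs[i] for i in np.argsort(dists)[:k]]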

Finally, the sequence of solutions found by an AutoML process can be exploited in the spirit of ensemble learning [Caruana et al.2004]: Auto-Sklearn.Ensemble delivers a compound model, defined as the weighted sum of the models learned along the search, where the weights are optimized on a validation set.
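The ensemble-selection idea of [Caruana et al.2004] can be sketched as a greedy loop that repeatedly adds (with replacement) the model whose inclusion most improves validation accuracy; the selection frequencies then act as the ensemble weights. Binary 0/1 predictions are assumed for brevity; this is an illustration, not Auto-Sklearn's exact procedure.

# Greedy ensemble selection on validation predictions.
import numpy as np

def greedy_ensemble(predictions, y_val, rounds=50):
    # predictions: list of 0/1 prediction vectors on the validation set, one per model
    chosen = []
    for _ in range(rounds):
        def score(p):
            votes = np.mean(chosen + [p], axis=0)          # average of selected predictions
            return ((votes >= 0.5).astype(int) == y_val).mean()
        chosen.append(max(predictions, key=score))         # add best model, with replacement
    return chosen                                          # selection counts give the weights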

3 MCTS-aided Algorithm Configuration

After introducing some notations, this section presents Mosaic and discusses its components.

An ML pipeline involves a fixed ordered sequence of $d$ decisions, respectively selecting the data preprocessing (including categorical variable encoding, missing value imputation, rescaling), feature selection, and learning algorithms. At the $i$-th decision step, some algorithm $a_i$ is selected (with $\mathcal{A}_i$ the finite set of possible algorithms at the $i$-th step). Denoting $\Theta(a_i)$ the (possibly varying dimension) space of hyper-parameters associated with $a_i$, the eventual pipeline is described as $\lambda = (a, \theta)$, with $a = (a_1, \ldots, a_d)$ and $\theta = (\theta_1, \ldots, \theta_d)$, $\theta_i \in \Theta(a_i)$. A complete pipeline structure is a $d$-uple $a = (a_1, \ldots, a_d)$ in $\mathcal{A}_1 \times \ldots \times \mathcal{A}_d$ (note that elements in $\mathcal{A}_1 \times \ldots \times \mathcal{A}_d$ are not all admissible; domain knowledge is used to early discard the non-admissible sequences), with $\Theta_a = \Theta(a_1) \times \ldots \times \Theta(a_d)$ its associated hyper-parameter space. A $k$-pipeline structure ($k$-ps) is a $k$-tuple $s = (a_1, \ldots, a_k)$, with $k \le d$. Given a $k$-ps $s$, any $\lambda$ with same first $k$ decisions as $s$ is said to be compatible with $s$ (noted $\lambda \models s$), and the subset of pipelines compatible with $s$ is noted $\Lambda_s$.

A default distribution $\mathcal{D}_s$ is defined on $\Lambda_s$, involving a uniform distribution on each $\mathcal{A}_i$ for $i > k$ and, conditionally to the selected $a_i$, a uniform distribution on the (bounded) $\Theta(a_i)$ (except for a few hyper-parameters, such as the number of selected features in feature selection, for which the default distribution is biased toward small values). The default distribution $\mathcal{D}$ on $\Lambda$ is defined in the same way.
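A minimal sketch of sampling from such a default distribution, assuming a hypothetical search-space description mapping each decision step to its algorithms and their bounded hyper-parameter ranges (continuous ranges only, for brevity; the small-value bias mentioned above is omitted):

# Sample one pipeline from the default distribution D.
import random

def sample_default(search_space):
    # search_space: list of steps, each {algo_name: {hp_name: (low, high)}}
    pipeline = []
    for step in search_space:
        algo = random.choice(list(step))                    # uniform over algorithms
        theta = {hp: random.uniform(lo, hi)                 # uniform over bounded ranges
                 for hp, (lo, hi) in step[algo].items()}
        pipeline.append((algo, theta))
    return pipeline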

3.1 Two intertwined optimization problems

The difficulty lies in simultaneously tackling the structural optimization of $a$ in $\mathcal{A} = \mathcal{A}_1 \times \ldots \times \mathcal{A}_d$ and the parametric optimization of the associated hyper-parameters $\theta$ in $\Theta_a$, where i) the optimization objective is non-separable (the marginal performance of $a_i$ depends on all other $a_j$ and on $\theta$; likewise, the marginal performance of $\theta_i$ depends on all $a_j$ and $\theta_j$ for $j \ne i$); ii) $\Theta_a$ is of varying dimension, possibly depending on the value of some coordinates of $a$ (e.g., the number of neural layers controls the dimension of the neural layer size). At one extreme, one could optimize $\theta$ for every considered $a$, an obviously intractable strategy. At the other extreme, one could estimate the performance of $a$ from a few samples of $\theta$ in $\Theta_a$.

Mosaic achieves an intermediate strategy: a surrogate model $\hat{F}$ on $\Lambda$ is maintained, generalizing all computed performances. During the optimization of the pipeline structure with MCTS, when considering an incomplete structural pipeline $s$, a full pipeline $\lambda$ with $\lambda \models s$ is determined along the lines of Bayesian optimization, and the performance $F(\lambda)$ is computed. Thanks to the MCTS back-propagation step, this allows one to build a proxy for the performance of $s$.

More formally, the novelty in Mosaic is to tackle both the structural and the parametric optimization problems using two coupled strategies: MCTS is used for the structural optimization of $a$, and Bayesian optimization for the parametric optimization of $\theta$, where the coupling is ensured via the shared surrogate model(s). This hybrid strategy contrasts with that of Auto-Sklearn, which optimizes both $a$ and $\theta$ using Bayesian Optimization and a single surrogate model (most other AutoML approaches likewise rely on a single optimization method). Note that in principle MCTS could also be used to achieve continuous optimization [Bubeck et al.2011]. However, the computational resource constraint on the AutoML problem, severely restricting the number of tree-walks, hinders a continuous MCTS optimization strategy.

3.2 Partial surrogate models

In Mosaic, as in Auto-Sklearn (section 2), a surrogate model $\hat{F}$ of the optimization objective is built from all computed performances $\{(\lambda_t, F(\lambda_t))\}$.

A first step is to derive from $\hat{F}$ a surrogate model on pipeline structures. For $k < d$, let $s$ be a $k$-ps, and let $s.a$ denote the $(k{+}1)$-ps built from $s$ by selecting $a$ as $(k{+}1)$-th decision. Then the surrogate $\hat{Q}(s,a)$ is defined as:

$\hat{Q}(s, a) = \mathbb{E}\left[\hat{F}(\lambda) \,\middle|\, \lambda \sim \mathcal{D}_{s.a}\right]$    (3)

estimated from a number of configurations sampled in $\Lambda_{s.a}$.
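In practice, Eq. (3) can be estimated by Monte-Carlo sampling, as in the sketch below; sample_completion and encode are hypothetical helpers that respectively complete a partial pipeline under $\mathcal{D}_{s.a}$ and vector-encode it for the surrogate.

# Monte-Carlo estimate of Q-hat(s, a) under the default distribution.
import numpy as np

def q_hat(surrogate, s, a, sample_completion, encode, n_samples):
    completions = [sample_completion(s + [a]) for _ in range(n_samples)]
    # average the surrogate's predicted performances over the sampled completions
    return float(np.mean(surrogate.predict([encode(c) for c in completions])))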

A probabilistic selection policy $\pi$ can then be built from $\hat{Q}$, with:

$\pi(a \mid s) = \frac{\exp(\hat{Q}(s,a))}{\sum_{a'} \exp(\hat{Q}(s,a'))}$    (4)

Taking inspiration from [Silver et al.2017], this policy is used to enhance the MCTS selection rule (below).
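Under the softmax form of Eq. (4) above, the policy over the candidate decisions of a node can be computed as follows (a numerically stabilized sketch):

# Softmax policy over the Q-hat values of the candidate decisions.
import numpy as np

def policy(q_values):
    e = np.exp(q_values - np.max(q_values))   # subtract the max for numerical stability
    return e / e.sum()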

3.3 The Mosaic algorithm

Mosaic (Alg. 1) follows the general MCTS scheme (section 2.2), where the four main phases have been modified as follows:

Down the MCTS tree

In a non-leaf node $s$ of the MCTS tree, with $s$ a $k$-ps, the child node $s.a^*$ is selected in $\mathcal{A}_{k+1}$ using the AlphaGo Zero criterion:

$a^* = \arg\max_{a} \left\{ \bar{Q}(s,a) + C \, \pi(a \mid s) \frac{\sqrt{n(s)}}{1 + n(s.a)} \right\}$    (5)

where $\bar{Q}(s,a)$ is the median (the average was also considered, giving very similar results, except in rare cases of heavily failed runs) of $F(\lambda)$ for all $\lambda$ in $\Lambda_{s.a}$, $\pi(a \mid s)$ is defined by Eq. (4), $n(s.a)$ is the number of times $s.a$ was visited, and $C$ is the usual constant controlling the exploitation vs exploration trade-off;
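A sketch of this selection rule, assuming node objects that store the median reward statistic, the prior of Eq. (4), and visit counts:

# AlphaGo Zero-style selection (Eq. (5)).
import math

def select_child(node, C):
    return max(node.children,
               key=lambda c: c.median_reward
                             + C * c.prior * math.sqrt(node.visits) / (1 + c.visits))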

Expansion

In a leaf node $s$ of the MCTS tree, with $s$ a $k$-ps, the child node $s.a$ in $\mathcal{A}_{k+1}$ that maximizes the surrogate performance $\hat{Q}(s,a)$ is added to the MCTS tree.

Playout

Letting $s$ be the (possibly complete) $k$-ps, a full pipeline $\lambda$ with $\lambda \models s$ is defined using a sampling playout strategy. Three sampling strategies were considered: i) a configuration $\lambda$ is sampled according to the default distribution $\mathcal{D}_s$; ii) a local search around the best recorded pipeline $\lambda^*$ in $\Lambda_s$ is achieved, and the best configuration according to $\hat{F}$ is retained; iii) a number of configurations is sampled after $\mathcal{D}_s$, together with a few configurations sampled via a local search around $\lambda^*$, and the sample $\lambda$ that maximizes the Expected Improvement of $\hat{F}$ is retained. In all cases, the true performance $F(\lambda)$ of the retained configuration is computed.

An empirical study (omitted for brevity) demonstrated that the first sampling strategy is slow and prone to overfitting, while the second strategy causes a loss of diversity of the considered pipelines, eventually resulting in a poor surrogate performance model $\hat{F}$. Hence only the third strategy is considered thereafter: the sampled configurations include a number of configurations sampled from the default distribution $\mathcal{D}_s$, augmented with the pipelines closest to $\lambda^*$ (formally, one selects every $\lambda$ such that either $\lambda$ and $\lambda^*$ differ by a single hyper-parameter value, or $\lambda$ differs from $\lambda^*$ by a single decision $a_i$ and $\theta_i$ is the default hyper-parameter vector).
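The one-change neighborhood around the incumbent $\lambda^*$ can be sketched as follows; the pipeline representation and the resample_hp / default_theta helpers are assumptions.

# Neighbors of the best pipeline: one hyper-parameter changed, or one
# decision changed with its default hyper-parameters.
def neighbors(best, search_space, resample_hp, default_theta):
    # best: list of (algorithm, theta) pairs; search_space[i]: {algo: hp-spec}
    out = []
    for i, (algo, theta) in enumerate(best):
        for hp in theta:                                    # single hyper-parameter change
            new_theta = dict(theta)
            new_theta[hp] = resample_hp(search_space[i][algo][hp])
            out.append(best[:i] + [(algo, new_theta)] + best[i+1:])
        for other in search_space[i]:                       # single decision change
            if other != algo:
                out.append(best[:i] + [(other, default_theta(other))] + best[i+1:])
    return out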

Back-propagation

Performance $F(\lambda)$ is back-propagated up the tree along the current path, updating the corresponding reward statistics. Example $(\lambda, F(\lambda))$ is added to the surrogate training set, and the surrogate performance model $\hat{F}$ is retrained anew.

Stopping criterion

The algorithm stops after the computational budget is exhausted (one hour per dataset in the experiments).

3.4 Initialization and Variants

The order of the decisions in the structural pipeline is key to the optimization: while MCTS yields asymptotic optimality guarantees, the discovery of good decisions can be delayed due to poorly informative or unlucky starts [Coquelin and Munos2007]. Accordingly, the order of decisions in the structural pipeline is fixed, and the first decision, made in the root node of the tree, is the choice of the learning algorithm. Note that each learning algorithm has an associated default complete pipeline.

Mosaic.Vanilla

The initialization proceeds as follows: for each learning algorithm, its default complete pipeline is launched, together with 3 (in the experiments) other pipelines sampled from the default distribution $\mathcal{D}$, and their associated performances are computed. The initial surrogate model $\hat{F}$ is trained from the set of all such $(\lambda, F(\lambda))$, and $\hat{Q}$ is initialized for the corresponding first decisions.

Mosaic.MetaLearning

borrows from Auto-Sklearn its better-informed initialization, where the first 25 configurations are the best recorded ones for the nearest neighbors of the current dataset, in the sense of the meta-feature distance (section 2.4). The next configurations are selected as in Mosaic.Vanilla, and the actual search starts thereafter.

Mosaic.Ensemble

is similar to Mosaic.Vanilla, but returns the compound model defined as a weighted sum of the models computed along the AutoML search, using an online ensemble building strategy [Caruana et al.2004].

procedure Selection(state $s$)
    while $s$ is not terminal do
        select action $a$ using Eq. (5)
        $s \leftarrow s.a$
    return $s$

procedure Expansion(state $s$)
    return $s.a$ with $a = \arg\max_{a'} \hat{Q}(s, a')$

procedure Playout(state $s$)
    $\lambda \leftarrow$ configuration compatible with $s$ maximizing the Expected Improvement of $\hat{F}$
    return $\lambda$

procedure Mosaic(budget)
    while the budget is not exhausted do
        $s \leftarrow$ Selection(root)
        $s \leftarrow$ Expansion($s$)
        $\lambda \leftarrow$ Playout($s$)
        observe the performance $F(\lambda)$ of $\lambda$ on the validation set
        for each node $s'$ on the current path do
            update $n(s')$ and the reward statistics at $s'$ with $F(\lambda)$
        add $(\lambda, F(\lambda))$ to the surrogate training set and retrain $\hat{F}$

Algorithm 1: Mosaic.Vanilla

4 Experimental Setting

4.1 Goals of experiment

The empirical validation of Mosaic firstly aims to assess its performance compared to Auto-Sklearn [Feurer et al.2015], which consistently dominated other systems in the international AutoML challenges [Guyon et al.2015]. The other AutoML system used as baseline is the evolutionary optimization-based Tpot (v0.9.5) [Olson et al.2016]. (AlphaD3M [Drori et al.2018] and AutoStacker [Chen et al.2018] could not be considered due to lack of information.)

The second goal of the experiments is to better understand the specifics of the AutoML optimization problem. A first issue regards the exploration vs exploitation trade-off on the structural vs parametric subspaces, and the merits of using MCTS as opposed to Bayesian optimization on the structural space. A second issue regards the impact of the MetaLearning initialization. MCTS is notorious for achieving a consistent though moderate exploration, which, as said, might slow down the search due to unlucky early choices. The smart initialization tends to prevent such hazards. On the other hand, if the initialization is very effective, the more conservative Auto-Sklearn exploration strategy might be more appropriate.

The exploration strategies of Mosaic and Auto-Sklearn are compared, and the diversity of the visited configurations is examined in [Rakotoarison et al.2019].

4.2 Experimental setting.

Search space.

A fair comparison is ensured by assessing Auto-Sklearn and Mosaic on the same Scikit-learn portfolio [Pedregosa et al.2011]. The search space involves 16 ML algorithms, 13 pre-processing methods, 2 categorical encoding strategies, 4 missing-value imputation strategies, 6 rescaling strategies and 2 balancing strategies (the reader is referred to [Rakotoarison et al.2019] for more detail). The size of the structural search subspace is 6,048 (due to parameter dependencies). The overall parametric search space has dimensionality 147 (93 categorical scalar hyper-parameters, 32 integer, 47 continuous). Each hyper-parameter ranges in a bounded discrete or continuous domain. For each configuration $\lambda$, the hyper-parameter vector $\theta$ involves a dozen scalar hyper-parameters on average.

Mosaic involves a few hyper-hyper-parameters in addition to those of Auto-Sklearn: the number of samples used to compute $\hat{Q}$ (Eq. 3), the constant $C$ controlling the exploration vs exploitation trade-off (Eq. (5)), and the coefficient of progressive widening $PW$. Shared hyper-hyper-parameters include the number of uniformly sampled configurations and the variance used for the local search in the Playout phase (section 3.3).

Figure 2: Comparative assessment of Mosaic and Auto-Sklearn: Average performance rank (the lower the better) on OpenML-100 vs CPU time of the Vanilla, Ensemble, MetaLearning and Ensemble+MetaLearning variants (left to right). Better seen in color.

Benchmark suite

The compared AutoML systems are assessed on the OpenML repository [Vanschoren et al.2013], using 100 binary and multi-class classification problems (OpenML-100). The overall computational budget is set to 1 hour per dataset. Computational times are measured on an AMD Athlon 64 X2 with 5GB RAM. For all systems, every considered configuration is launched to learn a model from 70% of the training set with a cut-off time of 300 seconds, and its performance is set to the model accuracy on the remaining 30%. After 1 hour, for each system, the best configuration is launched to learn a model on the whole training set, and its performance on the (unseen) test set is reported. Finally, this performance is averaged over 10 independent runs, and the average is reported as the system performance on this dataset. For the MetaLearning variant, the considered archive includes all datasets but the one under examination.

For each dataset, the performances achieved by all systems are ranked. The overall performance of a system is its average rank over all datasets (the lower the better). As the rank indicator might be blurred when many systems and their variants are considered together, duels between pairs of systems (Mosaic.X against Auto-Sklearn.X, where X ranges over Vanilla, MetaLearning, Ensemble, and MetaLearning+Ensemble, section 3.4) are also considered.
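The rank indicator can be computed as in the following sketch, assuming the accuracies are gathered in a datasets-by-systems array (ties receive average ranks):

# Average rank per system: rank 1 = best accuracy on a dataset.
import numpy as np
from scipy.stats import rankdata

def average_ranks(accuracy):
    # accuracy: (n_datasets, n_systems) array of test accuracies
    ranks = np.vstack([rankdata(-row) for row in accuracy])
    return ranks.mean(axis=0)          # one average rank per system (lower is better)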

5 Empirical Validation

Vanilla variants.

The comparative performances of the Vanilla variants of Auto-Sklearn, Tpot and Mosaic vs computational time are displayed on Figs. 2-a and 3, showing that the hybrid optimization used in Mosaic clearly improves on the Bayesian-optimization-only approach of Auto-Sklearn (and on the evolutionary-optimization-only approach of Tpot) from the early stages until the end.

Figure 3: Average performance ranks (lower is better) on OpenML-100 vs CPU time of the Vanilla versions of Mosaic (bottom), Auto-Sklearn (middle), and Tpot (top). Better seen in color.
Figure 4: Performance of Mosaic (y-axis) versus Auto-Sklearn (x-axis) on OpenML-100. Datasets for which the difference is statistically significant (resp. insignificant) after a MWW test with confidence 5% are displayed with distinct markers.

The actual performances of the configurations respectively selected by Auto-Sklearn and Mosaic are reported on Fig. 4. According to a Mann-Whitney-Wilcoxon test with 95% confidence, Mosaic significantly outperforms Auto-Sklearn on 21 datasets out of 100, and Auto-Sklearn significantly outperforms Mosaic on 6. Additionally, Mosaic improves on Auto-Sklearn on 35 other datasets (though not in a statistically significant way), and the reverse is true on 18 datasets. Both are equal on 18 datasets, and both systems crashed on 2 datasets.
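The per-dataset significance test used here can be sketched with SciPy, comparing the 10 per-run accuracies of the two systems:

# Two-sided Mann-Whitney-Wilcoxon test at level alpha.
from scipy.stats import mannwhitneyu

def significantly_different(scores_a, scores_b, alpha=0.05):
    return mannwhitneyu(scores_a, scores_b, alternative="two-sided").pvalue < alpha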

MetaLearning and Ensemble variants.

The impacts of the MetaLearning and Ensemble variants are displayed on Fig. 2. While Mosaic dominates Auto-Sklearn as long as the Vanilla variants are considered (Fig. 2-a), the difference decreases for the Ensemble variant (Fig. 2-b) and becomes statistically insignificant for the MetaLearning variant (Fig. 2-c), as well as for the MetaLearning+Ensemble variant (Fig. 2-d).

A closer inspection of the results reveals that the best Auto-Sklearn configuration is almost always found during the initialization, and that Auto-Sklearn.MetaLearning thereafter mostly explores the close neighborhood of the initial configurations. Meanwhile, Mosaic follows a more thorough exploration strategy; this exploration might entail a bigger risk of overfitting, discovering configurations with better performance on the validation set at the expense of the performance on the test set.

Sensitivity w.r.t. Mosaic hyper-parameters

Complementary sensitivity studies have been conducted to assess the impact of the Mosaic hyper-hyper-parameters. For computational reasons, only 30 datasets out of 100 have been considered, and Mosaic.Vanilla is run 5 times with a 1-hour budget on each dataset.

Fig. 5 displays the average rank of Mosaic.Vanilla at the end of the learning curve, over a grid of values of $C$ and $PW$, showing that Mosaic dominates Auto-Sklearn for 24 settings out of 30.

Fig. 6 displays the average rank vs time of Mosaic.Vanilla for different values of the number of samples used to estimate $\hat{Q}$ (50, 100, 500, 1000), showing the low sensitivity of the performance w.r.t. this parameter in this range, for fixed $C$ and $PW$.

Figure 5: Sensitivity study w.r.t. hyper-parameters $C$ and $PW$ (progressive widening coefficient in the expansion phase): average rank of Mosaic.Vanilla against Auto-Sklearn.Vanilla (the lower, the better). Better seen in color (Mosaic in blue and Auto-Sklearn in red).
Figure 6: Sensitivity study w.r.t. the number of samples used to estimate $\hat{Q}$, for fixed $C$ and $PW$: average rank of Mosaic.Vanilla against Auto-Sklearn.Vanilla. Better seen in color (Mosaic in blue and Auto-Sklearn in red).

Comparing Mosaic and Auto-Sklearn exploration of the search space

The differences in the exploration strategies of Auto-Sklearn and Mosaic become more visible at a later stage of the search: Mosaic switches to the exploitation of the most promising MCTS subtrees (subspaces of the search space) and avoids regions where the last visited configurations were bad; on the other hand, Auto-Sklearn continues to explore even if the sub-space includes quite a few bad configurations [Rakotoarison et al.2019].

6 Discussion and Perspectives

The main contribution of the paper is the new Mosaic scheme, tackling the AutoML optimization problem by handling both the structural and the parametric optimization problems. The proposed approach is based on a novel coupling between Bayesian Optimization and MCTS strategies, which are tied by sharing the same surrogate model. In MCTS, the surrogate model is used to estimate, in each node, the average performance of all subtrees (ends of pipelines) below this node, and thus to choose the next node. The same surrogate model is used during the roll-outs to choose the hyper-parameters of the pipeline using a Bayesian Optimization strategy.

Empirically, the results demonstrate that Mosaic significantly outperforms the challenge winner Auto-Sklearn on the OpenML benchmark suite, at least as long as the Vanilla and Ensemble variants are considered. With the MetaLearning variant however, the difference becomes insignificant as the bulk of optimization is devoted to the initialization (all the more so for large datasets, due to the one hour cut-off time).

The limitation of such a smart initialization is twofold. On the one hand, it relies on preliminary expensive computations to build the archive (one day of computation per dataset on OpenML-100); on the other hand, it assumes the representativeness of the problems in the archive. On-going work is concerned with estimating the risk of overfitting the OpenML benchmark, by measuring the sensitivity of the Auto-Sklearn and Mosaic MetaLearning variants when varying the fraction of the datasets in the archive.

In any case, the experimental evidence suggests that Vanilla Mosaic offers a robust and efficient AutoML facility when tackling a new application domain, and/or in the absence of a comprehensive archive.

A long term research perspective is to reconsider the design of the meta-features [Muñoz et al.2018]. In principle, a binary classification problem can be associated with any ML algorithm, where a dataset belongs to the positive class if the algorithm performs comparatively well on this dataset, and to the negative class otherwise. The perspective is to apply equivariant learning [Cohen and Welling2016] at the dataset level to tackle this binary classification problem, and to use the resulting equivariant classifier as a (cheap) meta-feature.

Acknowledgments

This work was funded by the ADEME #1782C0034 project NEXT.

References

  • [Auer2002] P. Auer. Using Confidence Bounds for Exploitation-Exploration Trade-offs. Journal of Machine Learning Research, 3:397–422, 2002.
  • [Auger et al.2013] D. Auger, A. Couëtoux, and O. Teytaud. Continuous Upper Confidence Tree with Polynomial Exploration – Consistency. In Proc. ECML-PKDD, volume 8188 of Lecture Notes in Computer Science, pages 194–209. Springer Verlag, 2013.
  • [Balaji and Allen2018] A. Balaji and A. Allen. Benchmarking Automatic Machine Learning Frameworks. arXiv:1808.06492 [cs, stat], 2018.
  • [Bergstra et al.2011] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for Hyper-Parameter Optimization. In NIPS, pages 2546–2554, 2011.
  • [Brazdil and Giraud-Carrier2018] P. Brazdil and Ch. Giraud-Carrier. Metalearning and Algorithm Selection: Progress, State of the Art and Introduction to the Special Issue. Machine Learning, 107(1):1–14, 2018.
  • [Bubeck et al.2011] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, 2011.
  • [Caruana et al.2004] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. Ensemble selection from libraries of models. In Proc. ICML, page 18, 2004.
  • [Chaslot et al.2008] G. Chaslot, S. Bakkes, I. Szita, and P. Spronck. Monte-Carlo Tree Search: A New Framework for Game AI. In AI and Interactive Digital Entertainment, pages 216–217. AAAI Press, 2008.
  • [Chen et al.2018] B. Chen, H. Wu, W. Mo, I. Chattopadhyay, and H. Lipson. Autostacker: A Compositional Evolutionary Learning System. In Proc. ACM-GECCO, pages 402–409. ACM Press, 2018.
  • [Cohen and Welling2016] T. Cohen and M. Welling. Group Equivariant Convolutional Networks. In Proc. ICML, volume 48, pages 2990–2999. PMLR, 2016.
  • [Coquelin and Munos2007] P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In Proc. UAI 2007, pages 67–74, 2007.
  • [Drori et al.2018] I. Drori, Y. Krishnamurthy, et al. AlphaD3M: Machine Learning Pipeline Synthesis. In ICML Workshop on AutoML, 2018.
  • [Eggensperger et al.2013] K. Eggensperger, M. Feurer, F. Hutter, et al. Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters. In NIPS wkp on BO in Theory and Practice, 2013.
  • [Feurer et al.2015] M. Feurer, A. Klein, K. Eggensperger, M. Blum, and F. Hutter. Efficient and Robust Automated Machine Learning. In C. Cortes et al., editor, NIPS 28, pages 2962–2970, 2015.
  • [Gelly and Silver2011] S. Gelly and D. Silver. Monte-Carlo Tree Search and Rapid Action Value Estimation in Computer GO. Artificial Intelligence, 175:1856–1875, 2011.
  • [Guyon et al.2015] I. Guyon, K. Bennett, G. Cawley, et al. Design of the 2015 Chalearn AutoML challenge. In Proc. IJCNN, pages 1–8. IEEE, 2015.
  • [Guyon et al.2018] I. Guyon, W.-W. Tu, et al. Automatic Machine Learning Challenge 2018: Towards AI for Everyone. In PAKDD 2018 Data Competition, 2018.
  • [Hutter et al.2011] F. Hutter, H.H. Hoos, and K. Leyton-Brown. Sequential Model-based Optimization for General Algorithm Configuration. In Proc. LION, pages 507–523. Springer Verlag, 2011.
  • [Klein et al.2017] A. Klein, S. Falkner, et al. Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets. In Proc. AISTAT, pages 528–536, 2017.
  • [Kocsis and Szepesvári2006] L. Kocsis and C. Szepesvári. Bandit Based Monte-Carlo Planning. In Fürnkranz et al., editor, Proc. ECML, pages 282–293. Springer, 2006.
  • [Kotthoff et al.2017] L. Kotthoff, Ch. Thornton, H.H. Hoos, F. Hutter, and K. Leyton-Brown. Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. JMLR, 18(25):1–5, 2017.
  • [Li et al.2017] L. Li, K. Jamieson, G. DeSalvo, et al. Hyperband: A novel bandit-based approach to hyperparameter optimization. JMLR, 18(1):6765–6816, 2017.
  • [Mockus et al.1978] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. In L. Dixon and G. Szego, editors, Toward Global Optimization, volume 2. Elsevier, 1978.
  • [Muñoz et al.2018] M. A. Muñoz, L. Villanova, D. Baatar, and K. Smith-Miles. Instance spaces for machine learning classification. Machine Learning, 107(1):109–147, 2018.
  • [Olson et al.2016] R. S. Olson, N. Bartley, R. J. Urbanowicz, and J. H. Moore. Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proc. ACM-GECCO, pages 485–492. ACM Press, 2016.
  • [Pedregosa et al.2011] F. Pedregosa, G. Varoquaux, A. Gramfort, et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011.
  • [Rakotoarison et al.2019] H. Rakotoarison, M. Schoenauer, and M. Sebag. Automated Machine Learning with Monte-Carlo Tree Search (Extended Version). arXiv:XXXX, 2019.
  • [Silver et al.2017] D. Silver, J. Schrittwieser, et al. Mastering the Game of GO without Human Knowledge. Nature, 550(7676):354–359, 2017.
  • [Snoek et al.2015] J. Snoek, O. Rippel, K. Swersky, et al. Scalable Bayesian Optimization using Deep Neural Networks. In Proc. ICML, pages 2171–2180, 2015.
  • [Swersky et al.2014] K. Swersky, J. Snoek, and R. P. Adams. Freeze-thaw Bayesian optimization. preprint arXiv:1406.3896, 2014.
  • [Vanschoren et al.2013] J. Vanschoren, J.N. van Rijn, B. Bischl, and L. Torgo. OpenML: Networked Science in Machine Learning. SIGKDD Explorations, 15(2):49–60, 2013.
  • [Wang2016] Ziyu Wang. Practical and Theoretical Advances in Bayesian Optimization. PhD thesis, Univ. Oxford, 2016.
  • [Wistuba2018] M. Wistuba. Deep Learning Architecture Search by Neuro-Cell-Based Evolution with Function-Preserving Mutations. In Berlingerio, M. et al., editor, Proc. ECML-PKDD, pages 243–258. Springer, 2018.
  • [Wolpert1996] D. H. Wolpert. The Lack of A Priori Distinctions Between Learning Algorithms. Neural Computation, 8(7):1341–1390, 1996.