Warmstarting of Model-based Algorithm Configuration


Abstract

The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm from scratch for each new type of benchmark instances, here we propose to exploit information about the algorithm’s performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a very flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.

1 Introduction

Many algorithms in the field of artificial intelligence rely crucially on good parameter settings to yield strong performance; prominent examples include solvers for many hard combinatorial problems (e.g., the propositional satisfiability problem SAT [15] or AI planning [5]) as well as a wide range of machine learning algorithms (in particular deep neural networks [25] and automated machine learning frameworks [7]). To overcome the tedious and error-prone task of manually tuning the parameters of a given algorithm $\mathcal{A}$, algorithm configuration (AC) procedures automatically determine a parameter configuration of $\mathcal{A}$ with low cost (e.g., running time) on a given benchmark set. General algorithm configuration procedures fall into two categories: model-free approaches, such as ParamILS [12], irace [21] or GGA [2], and model-based approaches, such as SMAC [16] or GGA++ [1].

Even though model-based approaches learn to predict the cost of different configurations on the benchmark instances at hand, so far all AC procedures start their configuration process from scratch when presented with a new set of benchmark instances. Compared with the way humans exploit information from past benchmark sets, this is obviously suboptimal. Inspired by the human ability to learn across different tasks, we propose to use performance measurements of an algorithm on previous benchmark sets in order to warmstart its configuration on a new benchmark set. As we will show in the experiments, our new warmstarting methods can substantially speed up AC procedures, by up to a factor of 165. In our experiments, this amounts to spending less than 20 minutes to obtain performance comparable to what could previously only be obtained within two days.

2 Preliminaries

Algorithm configuration (AC). Formally, given a target algorithm $\mathcal{A}$ with configuration space $\mathbf{\Theta}$, a probability distribution $\mathcal{D}$ across problem instances, as well as a cost metric $c$ to be minimized, the algorithm configuration (AC) problem is to determine a parameter configuration $\theta^* \in \mathbf{\Theta}$ with low expected cost on instances drawn from $\mathcal{D}$:

$$\theta^* \in \operatorname*{arg\,min}_{\theta \in \mathbf{\Theta}} \; \mathbb{E}_{\pi \sim \mathcal{D}}\left[c(\theta, \pi)\right].$$

In practice, $\mathcal{D}$ is typically approximated by a finite set of instances $\Pi$ drawn from it. An example AC problem is to set a SAT solver’s parameters to minimize its average running time on a given benchmark set of formal verification instances. We refer to algorithms for solving the AC problem as AC procedures. They execute the target algorithm $\mathcal{A}$ with different parameter configurations $\theta$ on different instances $\pi$ and measure the resulting costs $c(\theta, \pi)$.

Empirical performance models (EPMs). A core ingredient in model-based approaches for AC is a probabilistic regression model $\hat{c}$ that is trained on the cost values observed thus far and can be used to predict the cost $c(\theta, \pi)$ of new parameter configurations $\theta$ on new problem instances $\pi$. Since this regression model predicts empirical algorithm performance, it is known as an empirical performance model (EPM; [19]). Random forests have been established as the best-performing type of EPM and are thus used in all current model-based AC approaches.

For the purposes of this regression model, the instances are characterized by numerical instance features. These features range from simple ones (such as the number of clauses and variables of a SAT formula) to more complex ones (such as statistics gathered by briefly running a probing algorithm). Nowadays, informative instance features are available for most hard combinatorial problems (e.g., SAT [23], mixed integer programming [14], AI planning [6], and answer set programming [11]).
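To make the role of an EPM concrete, the following minimal sketch fits a random forest on joint configuration and instance features and predicts the cost of a new configuration on a new instance. It is an illustrative toy with synthetic data and assumed feature dimensions, not the EPM implementation used by SMAC.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy runhistory: 200 observed runs, each described by 5 encoded parameter
# values and 3 instance features, with the (e.g., log-transformed) running
# time as the cost to predict.
configs = rng.random((200, 5))        # encoded parameter values
inst_feats = rng.random((200, 3))     # e.g., #variables, #clauses, probing statistics
X = np.hstack([configs, inst_feats])
y = rng.random(200)                   # observed costs

epm = RandomForestRegressor(n_estimators=100, min_samples_leaf=3, random_state=0)
epm.fit(X, y)

# Predict the cost of a new configuration on a new instance.
new_x = np.hstack([rng.random(5), rng.random(3)]).reshape(1, -1)
print(epm.predict(new_x))
```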

Model-based algorithm configuration. The core idea of sequential model-based algorithm configuration is to iteratively fit an EPM based on the cost data observed so far and to use it to guide the search for well-performing parameter configurations. Algorithm 1 outlines the model-based algorithm configuration framework, similar to the one introduced by [16] for the AC procedure SMAC, but also encompassing the GGA++ approach of [1]. We now discuss this algorithmic framework in detail, since our warmstarting extensions will adapt its various elements. First, in Line 1 a model-based AC procedure runs the algorithm to be optimized with configurations in a so-called initial design, keeping track of their costs and of the best configuration seen so far (the so-called incumbent). It also keeps track of a runhistory $\mathcal{H}$, which contains tuples $\langle \theta, \pi, c(\theta, \pi) \rangle$ recording the cost obtained when evaluating configuration $\theta$ on instance $\pi$. To obtain good anytime performance, by default SMAC only executes a single run of a user-defined default configuration $\theta_{def}$ on a randomly chosen instance as its initial design and uses $\theta_{def}$ as its initial incumbent. GGA++ skips this step.

In Lines 2-5, the AC procedure performs the model-based search. While a user-specified configuration budget (e.g., number of algorithm runs or wall-clock time) is not exhausted, it fits a random-forest-based EPM on the existing cost data in $\mathcal{H}$ (Line 3), aggregates the EPM’s predictions over the instances in order to obtain marginal cost predictions for each configuration, and then uses these predictions in order to select a set of promising configurations to challenge the incumbent (Line 4; SMAC) or to generate well-performing offspring (GGA++). For this step, a so-called acquisition function trades off exploitation of promising areas of the configuration space against exploration of areas for which the model is still uncertain; common choices are expected improvement [17], upper confidence bounds [26] or entropy search [10].

To determine a new incumbent configuration, in Line 5 the AC procedure races these challengers and the current incumbent by evaluating them on individual instances and adding the observed data to $\mathcal{H}$. Since these evaluations can be computationally costly, the race only evaluates as many instances as needed per configuration and terminates slow runs early [12].
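The following sketch illustrates one iteration of the model-based search (Lines 3-4): fit an EPM on the runhistory, compute marginal cost predictions over the instances, and rank random candidate configurations by expected improvement. All data and helper names are illustrative assumptions; this is not the SMAC implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_params, n_inst_feats = 5, 3
inst_feats = rng.random((30, n_inst_feats))   # features of the training instances

# Toy runhistory: (configuration, instance index, observed cost) triples.
history = [(rng.random(n_params), int(rng.integers(30)), float(rng.random()))
           for _ in range(100)]
X = np.array([np.hstack([theta, inst_feats[i]]) for theta, i, _ in history])
y = np.array([cost for _, _, cost in history])
epm = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)   # Line 3

def marginal(theta):
    """Mean/variance of theta's predicted cost, marginalized over all instances."""
    Xq = np.hstack([np.tile(theta, (len(inst_feats), 1)), inst_feats])
    per_tree = np.array([tree.predict(Xq).mean() for tree in epm.estimators_])
    return per_tree.mean(), max(per_tree.var(), 1e-12)

def expected_improvement(mu, var, best):
    """EI for minimization, given predictive mean/variance and the best cost so far."""
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Line 4: select promising challengers among random candidates via EI.
incumbent_cost = min(marginal(theta)[0] for theta, _, _ in history)
candidates = rng.random((200, n_params))
ei = np.array([expected_improvement(*marginal(c), incumbent_cost) for c in candidates])
challengers = candidates[np.argsort(ei)[-10:]]   # raced against the incumbent in Line 5
```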

3 Warmstarting Approaches for AC

In this section, we discuss how the efficiency of model-based AC procedures (as described in the previous section) can be improved by warmstarting the search with data generated in previous AC runs. We assume that the algorithm to be optimized and its configuration space are the same in all runs, but that the set of instances can change between runs. To warmstart a new AC run, we consider the following data from previous AC runs on previous instance sets $\Pi_1, \dots, \Pi_n$:

  • Sets of optimized configurations found in the previous AC runs on $\Pi_1, \dots, \Pi_n$;

  • Runhistory data $\mathcal{H}_1, \dots, \mathcal{H}_n$ of all previous AC runs. We denote the union of the instances from previous AC runs as $\Pi_{1:n}$.¹

To design warmstarting approaches, we consider the following desired properties:

  1. When the performance data gathered on previous instance sets is informative about performance on the current instance set, it should speed up our method.

  2. When said performance data is misleading, our method should stop using it and should not be much slower than without it.

  3. The runtime overhead generated by using the prior data should be fairly small.

In the following subsections, we describe different warmstarting approaches that satisfy these properties.

3.1 Warmstarting Initial Design (INIT)

The first approach we consider for warmstarting our model-based AC procedure is to adapt its initial design (Line 1 of Algorithm 1) to start from configurations that performed well in the past. Specifically, we include the incumbent configurations from all previous AC runs as well as the user-specified default configuration $\theta_{def}$.

Evaluating all previous incumbents in the initial design can be inefficient (contradicting Property 3), particularly if they are very similar. (This can happen when the previous instance sets are quite similar, or when multiple runs were performed on a single instance set to return the result with best training performance.)

To obtain a complementary set of configurations that covers all previously optimized instances well but is not redundant, we propose a two-step approach. First, we determine the best configuration for each previous instance set $\Pi_j$.

Secondly, we use an iterative, greedy forward search to select a complementary set of configurations across all previous instance sets, inspired by the per-instance selection procedure Hydra [31]. Specifically, for the second step we define the mincost of a set of configurations $\tilde{\Theta}$ on the union $\Pi_{1:n}$ of all previous instances as

$$\hat{c}_{min}(\tilde{\Theta}) = \sum_{\pi \in \Pi_{1:n}} \min_{\theta \in \tilde{\Theta}} \hat{c}(\theta, \pi),$$

start with the empty set $\tilde{\Theta} = \emptyset$ and, at each iteration, add the configuration $\theta$ to $\tilde{\Theta}$ that minimizes $\hat{c}_{min}(\tilde{\Theta} \cup \{\theta\})$. Because the mincost is a supermodular set function, this greedy algorithm is guaranteed to select a set of configurations whose mincost is within a factor of $(1 - 1/e)$ of optimal among sets of the same size [18].

Since we do not necessarily know the empirical cost of all candidate configurations on all instances in $\Pi_{1:n}$, we use an EPM $\hat{c}$ as a plug-in estimator to predict these costs. We train this EPM on all previous runhistory data $\mathcal{H}_1, \dots, \mathcal{H}_n$. In order to enable this, the benchmark sets of all previous AC runs have to be characterized with the same set of instance features.
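A minimal sketch of this greedy selection is shown below, assuming the EPM-predicted costs of each candidate configuration on each previous instance are already available in a matrix; the data and the budget of three selected configurations are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates, n_prev_instances = 6, 50
# predicted_cost[k, j]: EPM-predicted cost of candidate k on previous instance j.
predicted_cost = rng.random((n_candidates, n_prev_instances))

def mincost(selected, costs):
    """Sum over instances of the best (minimal) predicted cost among selected configs."""
    return costs[list(selected)].min(axis=0).sum()

def greedy_select(costs, budget):
    selected = []
    for _ in range(budget):
        remaining = [k for k in range(costs.shape[0]) if k not in selected]
        # Greedily add the candidate that most reduces the mincost of the set.
        best = min(remaining, key=lambda k: mincost(selected + [k], costs))
        selected.append(best)
    return selected

# Select, e.g., three complementary configurations for the warmstarted initial design.
print(greedy_select(predicted_cost, budget=3))
```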

In SMAC, we evaluate this set of complementary configurations in the initial design using the same racing function as for comparing challengers to the incumbent (Line 5) to obtain the initial incumbent; to avoid rejecting challengers too quickly, a challenger is compared on a minimum number of instances before it can be rejected. In GGA++, these configurations can be included in the first generation of configurations.

3.2 Data-Driven Model-Warmstarting (DMW)

Since model-based AC procedures are guided by their EPM, we considered warmstarting this EPM by including all cost data gathered in previous AC runs as part of its training data. In the beginning, the predictions of this EPM would mostly rely on the previous data $\mathcal{H}_1, \dots, \mathcal{H}_n$, and as more data is acquired on the current benchmark, this new data would increasingly affect the model. However, this approach has two disadvantages:

  1. When a lot of warmstarting data is available, it requires many evaluations on the current benchmark to affect the model predictions. If the previous data is misleading, this would violate our desired Property 2.

  2. Fitting the EPM on the union of all previous and current cost data will be expensive even in early iterations, because this union will typically contain many observations. Even when using SMAC's mechanism of investing at least the same amount of time in Lines 3 and 4 as in Line 5, in preliminary experiments this slowed down SMAC substantially (violating Property 3).

For these two reasons, we do not use this approach for warmstarting but propose an alternative. Specifically, to avoid the computational overhead of refitting a very large EPM in each iteration, and to allow our model to discard misleading previous data, we propose to fit an individual EPM $\hat{c}_j$ for each previous runhistory $\mathcal{H}_j$ once² and to combine their predictions with those of an EPM $\hat{c}_{\mathcal{H}}$ fitted on the newly gathered cost data $\mathcal{H}$. This relates to stacking in ensemble learning [30]; however, in our case each constituent EPM is trained on a different dataset. Hence, in principle we could even use different instance features for each instance set.

To aggregate the predictions of the individual EPMs, we propose to use a linear combination:

$$\hat{c}(\theta, \pi) = w_0 \cdot \hat{c}_{\mathcal{H}}(\theta, \pi) + \sum_{j=1}^{n} w_j \cdot \hat{c}_j(\theta, \pi),$$

where the weights $w_0, w_1, \dots, w_n$ are fitted with stochastic gradient descent (SGD) to minimize the combined model’s root mean squared error (RMSE). To avoid overfitting of the weights, we randomly split the current runhistory $\mathcal{H}$ into a training and a validation set, use the training set to fit $\hat{c}_{\mathcal{H}}$, and then compute the predictions of $\hat{c}_{\mathcal{H}}$ and of each $\hat{c}_j$ on the validation set, which are used to fit the weights. Finally, we re-fit $\hat{c}_{\mathcal{H}}$ on all data in $\mathcal{H}$ to obtain a maximally informed model.

At the beginning of a new AC run, with few data in $\mathcal{H}$, $\hat{c}_{\mathcal{H}}$ will not be very accurate, causing its weight $w_0$ to be low, such that the previous models dominate the cost predictions. As more data is gathered in $\mathcal{H}$, the predictive accuracy of $\hat{c}_{\mathcal{H}}$ will improve and the predictions of the previous models will become less important.

Besides weighting the individual models based on their accuracy, the weights serve a second purpose of scaling the individual models' predictions appropriately: these scales reflect the different hardnesses of the instance sets the models were trained on, and by setting the weights to minimize the RMSE of the combined model on the current instances, they automatically normalize for scale.
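The following sketch illustrates this weight-fitting scheme with synthetic data: previous EPMs are fitted once, the current EPM is fitted on a training split, the weights are fitted by SGD on stacked validation-set predictions, and the current EPM is then re-fitted on all current data. Library choices and data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X_cur, y_cur = rng.random((120, 8)), rng.random(120)   # current benchmark data

# Previous EPMs, each fitted once on the runhistory of one previous benchmark.
prev_epms = []
for _ in range(2):
    X_prev, y_prev = rng.random((300, 8)), rng.random(300)
    prev_epms.append(RandomForestRegressor(n_estimators=50, random_state=0).fit(X_prev, y_prev))

# Split the current data and fit the current EPM on the training part only.
X_tr, X_va, y_tr, y_va = train_test_split(X_cur, y_cur, random_state=0)
cur_epm = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Stack validation-set predictions of all EPMs and fit linear weights with SGD.
P_va = np.column_stack([m.predict(X_va) for m in prev_epms] + [cur_epm.predict(X_va)])
weights = SGDRegressor(fit_intercept=False, random_state=0).fit(P_va, y_va).coef_

# Re-fit the current EPM on all current data for maximally informed predictions.
cur_epm.fit(X_cur, y_cur)

def combined_predict(X):
    """Weighted linear combination of all EPM predictions."""
    P = np.column_stack([m.predict(X) for m in prev_epms] + [cur_epm.predict(X)])
    return P @ weights

print(combined_predict(rng.random((3, 8))))
```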

3.3 Combining INIT and DMW (IDMW)

Importantly, the two methods we propose are complementary. A warmstarted initial design (INIT) can easily be combined with data-driven model-warmstarting (DMW) because the two approaches affect different parts of model-based algorithm configuration: where to start from, and how to integrate the full performance data from the current and the previous benchmarks to decide where to sample next. In fact, the two warmstarting methods can even synergize to yield more than the sum of their parts: by evaluating strong configurations from previous AC runs in the initial design through INIT, the weights of the stacked model in DMW can be fitted on these important observations early on, improving the accuracy of its predictions even in early iterations.

4 Experiments

We evaluated how our three warmstarting approaches improve the state-of-the-art AC procedure SMAC.³ In particular, we were interested in the following research questions:

Q1: Can warmstarted SMAC find better-performing configurations within the same configuration budget?

Q2: Can warmstarted SMAC find well-performing configurations faster than default SMAC?

Q3: What is the effect of using warmstarting data from related and unrelated benchmarks?

Experimental Setup. To answer these questions, we ran SMAC and our warmstarting variants⁴ on twelve well-studied AC tasks from the configurable SAT solver challenge [15], which are publicly available in the algorithm configuration library [13]. Since our warmstarting approaches have to generalize across different instance sets and not across algorithms, we considered AC tasks of the highly flexible SAT solver SparrowToRiss across these twelve instance sets. SparrowToRiss is a combination of two well-performing solvers: Riss [22] is a tree-search-based solver that performs well on industrial and hand-crafted instances; Sparrow [3] is a local-search solver that performs well on random, satisfiable instances. SparrowToRiss first runs Sparrow for a parametrized amount of time and then runs Riss if Sparrow could not find a satisfying assignment. Thus, SparrowToRiss can be applied to a large variety of different SAT instances. Riss, Sparrow and SparrowToRiss have also won several medals in international SAT competitions. Furthermore, configuring SparrowToRiss is a challenging task because it has a very large configuration space with a large number of parameters and conditional dependencies between them.

To study warmstarting on different categories of instances, the AC tasks consider SAT instances from applications with a lot of internal structure, hand-crafted instances with some internal structure, and randomly-generated SAT instances with little structure. We ran SparrowToRiss on

  • application instances from bounded-model checking (BMC), hardware verification (IBM) and fuzz testing based on circuits (CF);

  • hand-crafted instances from graph isomorphism (GI), low autocorrelation binary sequence (LABS) and N-rooks instances (N-Rooks);

  • randomly generated instances, specifically, 3-SAT instances at the phase transition from the ToughSAT instance generator (3cnf), a mix of satisfiable and unsatisfiable 3-SAT instances at the phase transition (K3), and unsatisfiable 5-SAT instances from a generator used in the SAT Challenge and SAT Competition (UNSAT-k5);

  • and on randomly generated satisfiable instances, specifically, instances with 3 literals per clause (3SAT1k), instances with 5 literals per clause (5SAT500), and instances with 7 literals per clause (7SAT90).

Further details on these instances are given in the description of the configurable SAT solver challenge [15]. The instances were split into a training set for configuration and a test set to validate the performance of the configured SparrowToRiss on unseen instances.

For each configuration run on a benchmark set in one of the categories, our warmstarting methods had access to observations on the other two benchmark sets in the category. For example, warmstarted SMAC optimizing SparrowToRiss on IBM had access to the observations and final incumbents of SparrowToRiss on CF and BMC.

As a cost metric, we chose the commonly used penalized average running time metric (PAR10, i.e., counting each timeout as 10 times the running time cutoff) with a cutoff of 300 CPU seconds. To avoid a constant inflation of the PAR10 values, we removed post hoc all test instances that were never solved by any configuration in our experiments (this affected some instances in CF, IBM, BMC, GI, LABS and 3cnf).
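As a small worked example of the metric, the following snippet computes PAR10 for three runs under a 300-second cutoff: runs that time out are counted as 10 times the cutoff.

```python
def par10(runtimes, timed_out, cutoff):
    """Penalized average running time: each timeout counts as 10 * cutoff seconds."""
    penalized = [10 * cutoff if to else rt for rt, to in zip(runtimes, timed_out)]
    return sum(penalized) / len(penalized)

# Two solved runs and one timeout: (12.3 + 250.0 + 3000.0) / 3 = 1087.43...
print(par10([12.3, 250.0, 300.0], [False, False, True], cutoff=300))
```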

On each AC task, we ran 10 independent SMAC runs with a configuration budget of two days each. All runs were executed on a compute cluster with nodes equipped with two Intel Xeon E5-2630v4 CPUs, running CentOS 7.

4.1 Baselines

As baselines, we ran (I) the user-specified default configuration to show the effect of algorithm configuration, (II) SMAC without warmstarting, and (III) a state-of-the-art warmstarting approach for hyperparameter optimizers proposed by [29], which we abbreviate as “adapted acquisition function” (AAF). The goal of AAF is to bias the acquisition function (Line 4 of Algorithm 1) towards previously well-performing regions of the configuration space.⁵ To generalize AAF to algorithm configuration, we use marginalized predictions across all instances.

4.2 Q1: Same Configuration Budget

PAR10 score [sec] of the final SparrowToRiss configurations returned by SMAC; median across the 10 SMAC runs. The best PAR10 is underlined, and runs that are significantly better than SMAC (Column 2) according to a one-sided Mann-Whitney U test are highlighted in bold face. θ_def is the default configuration of SparrowToRiss.
θ_def SMAC AAF INIT DMW IDMW
CF 326.5
IBM 150.6
BMC 421.5
GI 314.18
LABS 330.14
N-Rooks 116.78
3cnf 890.51
K3 152.85
UNSAT-k5 151.91
3SAT1k 104.42
5SAT500 3000
7SAT90 52.32

The PAR10 table above shows the median PAR10 test scores of the finally returned configurations across the 10 SMAC runs. Default SMAC nearly always improved the PAR10 scores of SparrowToRiss substantially compared to the SparrowToRiss default, with the largest speedup obtained on UNSAT-k5. Warmstarted SMAC performed significantly better yet on 4 of the AC tasks (BMC, 3cnf, 5SAT500 and 7SAT90), with the largest additional speedup on 5SAT500. On two of the crafted instance sets (LABS and N-Rooks), the warmstarting approaches performed worse than default SMAC; we discuss the details later.

Overall, the best results were achieved by the combination of our approaches, IDMW. It yielded the best performance of all approaches in 6 of the 12 scenarios (with sometimes substantial improvements over default SMAC) and results not significantly different from the best approach in 3 further scenarios. Notably, IDMW performed better on average than its individual components INIT and DMW and clearly outperformed AAF.
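The significance test referred to in the table caption can be reproduced in a few lines; the sketch below runs a one-sided Mann-Whitney U test on synthetic final PAR10 scores of 10 runs each, purely for illustration.

```python
from scipy.stats import mannwhitneyu

# Synthetic final PAR10 scores of 10 runs of default SMAC and a warmstarted variant.
par10_default = [410, 395, 430, 402, 415, 399, 420, 405, 412, 418]
par10_warm = [310, 295, 330, 305, 315, 300, 320, 308, 312, 318]

# H1: the warmstarted scores tend to be lower (better) than default SMAC's scores.
stat, p = mannwhitneyu(par10_warm, par10_default, alternative="less")
print(f"U={stat}, p={p:.4f}")  # a p-value below the significance level -> bold face
```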

4.3 Q2: Speedup

Table 1: Speedup of warmstarted SMAC compared to default SMAC, i.e., comparing the time points at which SMAC with and without warmstarting first no longer perform significantly worse (according to a permutation test) than SMAC with the full budget. Speedups greater than 1 indicate that warmstarted SMAC reached the final performance of default SMAC faster; speedups smaller than 1 indicate that default SMAC was faster. We marked the best speedup in each row in bold face.
AAF INIT DMW IDMW
CF 0.1 0.5 0.7 2.7
IBM 3.9 16.2 1.4 9
BMC 1.2 1 11 29.3
GI 25.6 0.6 7.1 19.4
LABS 0.8 0.8 0.8 0.8
N-Rooks 0.4 0.4 0.4 0.5
3cnf 10.7 1 1 8.4
K3 0.9 0.9 1.8 1.8
UNSAT-k5 1 1 1 1
3SAT1k 3.1 2.1 2.1 3.8
5SAT500 6 0.7 0.7 0.8
7SAT90 53.5 2.3 0.5 165.3
Geo Avg. 2.4 1.1 1.3 4.3
Figure 1: Anytime test performance on BMC.
Figure 2: Anytime test performance on N-Rooks.
Figure 3: Anytime test performance on LABS.

Table 1 shows how much faster our warmstarted SMAC reached the PAR10 performance that default SMAC reached with the full configuration budget.⁶ The warmstarting methods outperformed default SMAC in almost all cases (again except LABS and N-Rooks), with up to 165-fold speedups. The most consistent speedups were achieved by the combination of our warmstarting approaches, IDMW, with a geometric-average 4.3-fold speedup. We note that our baseline AAF also yielded good speedups (geometric average of 2.4), but its final performance was often quite poor (see the PAR10 table above).
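The sketch below illustrates one way to compute such a speedup under the protocol described in the table caption and footnote 6: for each configurator, find the earliest time point at which its runs are no longer significantly worse (by a permutation test) than default SMAC with the full budget, and divide the two time points. The anytime curves are synthetic and the helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def significantly_worse(a, b, n_perm=10000, alpha=0.05):
    """One-sided permutation test: is mean(a) significantly larger than mean(b)?"""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if pooled[:len(a)].mean() - pooled[len(a):].mean() >= observed:
            count += 1
    return (count + 1) / (n_perm + 1) < alpha

def first_competitive_time(times, curves, reference):
    """Earliest time at which the runs are not significantly worse than the reference."""
    for t, costs in zip(times, curves):
        if not significantly_worse(costs, reference):
            return t
    return times[-1]

# Synthetic anytime data: test PAR10 of the incumbents of 10 runs at five time points.
times = np.array([600, 3600, 21600, 86400, 172800])              # seconds
default_curves = [rng.normal(400 - 40 * i, 20, size=10) for i in range(len(times))]
warm_curves = [rng.normal(300 - 30 * i, 20, size=10) for i in range(len(times))]

reference = default_curves[-1]                                    # default SMAC, full budget
speedup = (first_competitive_time(times, default_curves, reference)
           / first_competitive_time(times, warm_curves, reference))
print("speedup:", speedup)
```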

Figures 1 to 3 illustrate the anytime test performance of all SMAC variants.⁷ In Figure 1, AAF, INIT and IDMW improved the performance of SparrowToRiss very early in the configuration process, but only the DMW-based variants performed well in the long run.

To study our worst results, Figures 2 and 3 show the anytime performance on N-Rooks and LABS, respectively. Figure 2 shows that warmstarted SMAC performed better in the beginning, but that default SMAC performed slightly better in the end; this behavior is not captured by our quantitative analysis in the tables above. In contrast, Figure 3 shows that for LABS, warmstarted SMAC was initially misled and then started improving like default SMAC, but with a time lag; we only observed this pattern on LABS and conclude that configurations found on N-Rooks and GI do not generalize to LABS.

4.4 Q3: Warmstarting Influence

Figure 4: Weights of the individual EPMs over time when optimizing on IBM (red: IBM, blue: CF, green: BMC).
Figure 5: Weights of the individual EPMs over time when optimizing on UNSAT-k5 (red: UNSAT-k5, blue: K3, green: 3cnf).

To study how our warmstarting methods learn from previous data, Figures 4 and 5 show how the weights of the DMW approach changed over time. Figure 4 shows a representative plot: the weights were similar in the beginning (i.e., all EPMs contributed similarly to the cost predictions) and, over time, the weights of the previous models decreased, with the weight of the current EPM eventually dominating. When optimizing on IBM, the EPM trained on observations from CF was the most important EPM in the beginning.

In contrast, Figure 5 shows a case in which the previous performance data, acquired on the benchmarks K3 and 3cnf, does not help for cost predictions on UNSAT-k5. (This was to be expected, because 3cnf comprises only satisfiable instances, K3 a mix of satisfiable and unsatisfiable instances, and UNSAT-k5 only unsatisfiable instances.) As the figure shows, our DMW approach briefly used the data from the mixed K3 benchmark (blue curves), but quickly focused only on data from the current benchmark. These two examples illustrate that our DMW approach indeed successfully used data from related benchmarks and quickly ignored data from unrelated ones.

5 Related Work

The most related work comes from the field of hyperparameter optimization (HPO) of machine learning algorithms. HPO, when cast as the optimization of (cross-)validation error, is a special case of AC. This special case does not require the concept of problem instances, does not require the modelling of running times of randomized algorithms, does not need to adaptively terminate slow algorithm runs and handle the resulting censored algorithm running times, and typically deals with fairly low-dimensional and all-continuous (hyper-)parameter configuration spaces. These works therefore do not directly transfer to the general AC problem.

Several warmstarting approaches exist for HPO. A prominent approach is to learn surrogate models across datasets [27]. All of these works are based on Gaussian process models whose computational complexity scales cubically in the number of data points, and therefore, all of them were limited to hundreds or at most thousands of data points. We generalize them to the AC setting (which, on top of the differences to HPO stated above, also needs to handle up to a million cost measurements for an algorithm) in our DMW approach.

Another approach for warmstarting HPO is to adapt the initial design. [8] proposed to initialize HPO in the automatic machine learning framework Auto-Sklearn with well-performing configurations from previous datasets. They had optimized configurations from a collection of machine learning datasets available as warmstarting data and chose which of these to use for a new dataset based on its characteristics; specifically, they used the optimized configurations from the most similar datasets. This approach could be adapted to AC warmstarting in cases where many AC benchmarks are available. However, one disadvantage of the approach is that – unlike our INIT approach – it does not aim for complementarity in the selected configurations. [28] proposed another approach for warmstarting the initial design which does not depend on instance features and is not limited to configurations returned in previous optimization experiments. They combined surrogate predictions from previous runs and used gradient descent to determine promising configurations. This approach is limited to continuous (hyper-)parameters and thus does not apply to the general AC setting.

One related variant of algorithm configuration is the problem of configuring on a stream of problem instances that changes over time. The ReACT approach [9] targets this problem setting, keeping track of configurations that worked well on previous instances. If the characteristics of the instances change over time, it also adapts the current configuration by combining observations on previous instances and on new instances. In contrast to our setting, ReACT does not return a single configuration for an instance set and requires parallel compute resources to run a parallel portfolio all the time.

6 Conclusion & Future Work

In this paper, we introduced several methods to warmstart model-based algorithm configuration (AC) using observations from previous AC experiments on different benchmark instance sets. As we showed in our experiments, warmstarting can speed up the configuration process up to 165-fold and can also improve the configurations finally returned.

A practical limitation of our DMW approach (and thus also for IDMW) is that the memory consumption grows substantially with each additional EPM (at least when using random forests fitted on hundreds of thousands of observations). We also tried to study warmstarting SMAC for optimizing SparrowToRiss on all instance sets except the one at hand, but unfortunately, the memory consumption exceeded 12GB RAM. Therefore, we plan to reduce memory consumption and to use instance features to select a subset of EPMs constructed on similar instances.

Finally, a promising future direction is to integrate warmstarting into iterative configuration procedures, such as Hydra [31], ParHydra [20], or Cedalion [24], which construct portfolios of complementary configurations in an iterative fashion using multiple AC runs.

7 Acknowledgements

The authors acknowledge funding by the DFG (German Research Foundation) under Emmy Noether grant HU 1900/2-1 and support by the state of Baden-Württemberg through bwHPC and the DFG through grant no INST 39/963-1 FUGG.

Footnotes

  1. If the set of instances and the runhistory are not indexed, we always refer to the ones of the current AC run.
  2. In fact, if several AC runs share the same instance set $\Pi_j$, we fit only a single EPM based on the union of the observations on $\Pi_j$.
  3. The source code of GGA++ is not publicly available and thus, we could not run experiments on GGA++.
  4. Code is publicly available at: URL hidden for blind review
  5. We note that combining AAF and INIT is not effective because evaluating the incumbents of INIT would nullify the acquisition function bias of AAF.
  6. To take the noise across runs into account, we performed a permutation test to determine the first time point from which onwards there was no statistical evidence that default SMAC with the full budget would perform better.
  7. Since the figures show test performance on unseen test instances, performance is not guaranteed to improve monotonically (a new best configuration on the training instances might not generalize well to the test instances).

References

  1. 2015.
    Ansótegui, C.; Malitsky, Y.; Sellmann, M.; and Tierney, K. Model-based genetic algorithms for algorithm configuration.
  2. 2009.
    Ansótegui, C.; Sellmann, M.; and Tierney, K. A gender-based genetic algorithm for the automatic configuration of algorithms.
  3. 2011.
    Balint, A.; Frohlich, A.; Tompkins, D.; and Hoos, H. Sparrow2011.
  4. 2014.
    Bardenet, R.; Brendel, M.; Kégl, B.; and Sebag, M. Collaborative hyperparameter tuning.
  5. 2011.
    Fawcett, C.; Helmert, M.; Hoos, H.; Karpas, E.; Roger, G.; and Seipp, J. Fd-autotune: Domain-specific configuration using fast-downward.
  6. 2014.
    Fawcett, C.; Vallati, M.; Hutter, F.; Hoffmann, J.; Hoos, H.; and Leyton-Brown, K. Improved features for runtime prediction of domain-independent planners.
  7. 2015.
    Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J. T.; Blum, M.; and Hutter, F. Efficient and robust automated machine learning.
  8. 2015.
    Feurer, M.; Springenberg, T.; and Hutter, F. Initializing Bayesian hyperparameter optimization via meta-learning.
  9. 2014.
    Fitzgerald, T.; O’Sullivan, B.; Malitsky, Y.; and Tierney, K. React: Real-time algorithm configuration through tournaments.
  10. 2012.
    Hennig, P., and Schuler, C. Entropy search for information-efficient global optimization.
  11. 2014.
    Hoos, H.; Lindauer, M.; and Schaub, T. claspfolio 2: Advances in algorithm selection for answer set programming.
  12. 2009.
    Hutter, F.; Hoos, H.; Leyton-Brown, K.; and Stützle, T. ParamILS: An automatic algorithm configuration framework.
  13. 2014a.
    Hutter, F.; López-Ibánez, M.; Fawcett, C.; Lindauer, M.; Hoos, H.; Leyton-Brown, K.; and Stützle, T. Aclib: a benchmark library for algorithm configuration.
  14. 2014b.
    Hutter, F.; Xu, L.; Hoos, H.; and Leyton-Brown, K. Algorithm runtime prediction: Methods and evaluation.
  15. 2017.
    Hutter, F.; Lindauer, M.; Balint, A.; Bayless, S.; Hoos, H.; and Leyton-Brown, K. The configurable SAT solver challenge (CSSC).
  16. 2011.
    Hutter, F.; Hoos, H.; and Leyton-Brown, K. Sequential model-based optimization for general algorithm configuration.
  17. 1998.
    Jones, D.; Schonlau, M.; and Welch, W. Efficient global optimization of expensive black box functions.
  18. 2012.
    Krause, A., and Golovin, D. Submodular function maximization.
  19. 2009.
    Leyton-Brown, K.; Nudelman, E.; and Shoham, Y. Empirical hardness models: Methodology and a case study on combinatorial auctions.
  20. 2017.
    Lindauer, M.; Hoos, H.; Leyton-Brown, K.; and Schaub, T. Automatic construction of parallel portfolios via algorithm configuration.
  21. 2016.
    López-Ibáñez, M.; Dubois-Lacoste, J.; Caceres, L. P.; Birattari, M.; and Stützle, T. The irace package: Iterated racing for automatic algorithm configuration.
  22. 2014.
    Manthey, N. Riss 4.27.
  23. 2004.
    Nudelman, E.; Leyton-Brown, K.; Devkar, A.; Shoham, Y.; and Hoos, H. Understanding random SAT: Beyond the clauses-to-variables ratio.
  24. 2015.
    Seipp, J.; Sievers, S.; Helmert, M.; and Hutter, F. Automatic configuration of sequential planning portfolios.
  25. 2012.
    Snoek, J.; Larochelle, H.; and Adams, R. P. Practical Bayesian optimization of machine learning algorithms.
  26. 2010.
    Srinivas, N.; Krause, A.; Kakade, S.; and Seeger, M. Gaussian process optimization in the bandit setting: No regret and experimental design.
  27. 2013.
    Swersky, K.; Snoek, J.; and Adams, R. Multi-task Bayesian optimization.
  28. 2015.
    Wistuba, M.; Schilling, N.; and Schmidt-Thieme, L. Learning hyperparameter optimization initializations.
  29. 2016.
    Wistuba, M.; Schilling, N.; and Schmidt-Thieme, L. Hyperparameter optimization machines.
  30. 1992.
    Wolpert, D. Stacked generalization.
  31. 2010.
    Xu, L.; Hoos, H.; and Leyton-Brown, K. Hydra: Automatically configuring algorithms for portfolio-based selection.
  32. 2014.
    Yogatama, D., and Mann, G. Efficient transfer learning method for automatic hyperparameter tuning.