Neural Generative Models for Global Optimization with Gradients

Abstract

The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones. In this paper we focus on the subproblem of global optimization for differentiable functions, and we propose an Evolutionary Search-inspired solution in which point search distributions are modeled by generative neural networks. This approach enables us to model diverse and complex search distributions, with which we can efficiently explore complicated objective landscapes. In our experiments we show the practical superiority of our algorithm over classical Evolutionary Search and gradient-based solutions on a benchmark set of multimodal functions, and demonstrate how it can be used to accelerate Bayesian Optimization with Gaussian Processes.

1 Introduction

In the sphere of practical applications for global optimization, there are many types of problems in which the cost of evaluating the objective function is high, rendering brute-force methods inefficient. Two classical examples of such problems are hyper-parameter search in machine learning and model fitting, which often requires running long and expensive simulations. In many cases, derivatives of the objective function are not accessible or are too expensive to compute, restricting the set of solvers to zero order algorithms (which use function evaluations only). Still, global optimization with gradients covers a great diversity of interesting problems, like the efficient maximization of acquisition functions in Bayesian Optimization (BO) jones2001taxonomy () or the training of machine learning models with non-decomposable objectives such as Sample-Variance Penalization Maurer2009 (); Swaminathana () – decomposable ones being commonly optimized using stochastic methods. Recent work pedregosa2016hyperparameter (); maclaurin2015gradient () also opened the door to affordable computation of hyper-parameter derivatives, pushing for the development of efficient first order methods for global optimization.

One popular class of zero order global optimization algorithms is Evolutionary Search (ES). ES methods have recently seen a growing interest in the machine learning community, namely for the good results of Covariance Matrix Adaptation Evolution Strategies (CMA-ES) hansen2001completely () for hyper-parameter search and Natural Evolution Strategies (NES) wierstra2008natural (); salimans2017evolution () for policy search in reinforcement learning. The main idea behind ES is to maintain a population of points at which to evaluate the objective function $f$. Points with small objective value are kept for the next population (selection), combined together (recombination) and slightly randomly modified (mutation). This process is repeated iteratively, and relies on the internal noise to explore $f$'s landscape. In both CMA-ES and NES, populations are sampled from a Gaussian search distribution $\pi_\theta$, whose moments are modified iteratively along the optimization procedure.

While the simplicity of Gaussian search distributions ensures computational tractability, it can also slow down the optimization process. For complicated functions, ellipsoids can give a poor fit of the areas with small objective values, which hinders the exploration process. In Figure 1 we show the evolution of $\pi_\theta$ under the CMA-ES algorithm on the Rosenbrock function. With the provided initialization, the Gaussian search distribution doesn't explore efficiently, as it cannot match the curvature imposed by the landscape. It expands and retracts until it finally fits into the straight part of the curved valley. Using more diverse and general search distributions is therefore an interesting way to improve evolutionary search. One of the most promising classes of models for fitting search distributions is that of generative neural networks dziugaite2015training (); mackay1995bayesian (), which have been shown to model non-trivial multimodal distributions cao2018improving (); vukotic2017generative ().

Figure 1: Evolution of samples from $\pi_\theta$ on the Rosenbrock function for CMA-ES (iterations 0, 1 and 10).

Our main contribution is to replace Gaussian search distributions with generative neural networks, which allows us to use gradient information and to improve both the convergence speed and the quality of the minima found. To the best of our knowledge, this is the first time that neural generative models are proposed for optimization purposes. Along with introducing this new type of architecture, we show how to train it efficiently and how it can be used to quickly optimize multimodal differentiable functions. The rest of this paper is organized as follows: in Section 2, we present related work on global optimization with zero and first order oracles. In Section 3 we describe the algorithm we propose, discuss its links with ES and provide insights into its architecture settings. In Section 4 we report experiments on a continuous global optimization benchmark, and show how our algorithm can be used to accelerate BO. We conclude and discuss future work in Section 5.

2 Problem formulation and related work

In global optimization the task is to find the global optimum $x^\star$ of a function $f$ over a compact set $\mathcal{X} \subset \mathbb{R}^d$:

$$x^\star = \operatorname*{arg\,min}_{x \in \mathcal{X}} f(x)$$

For general classes of functions, this cannot be performed greedily and requires the exploration of the associated landscape. Below, we briefly present zero and first order algorithms designed to solve this problem.

2.1 Zero order algorithms

When only provided with a zero order oracle for the objective, state-of-the-art methods are either based on evolutionary search or on designing a surrogate of the objective that integrates optimism to ensure exploration - the idea at the heart of the BO framework.

Evolutionary Search

A thorough overview of ES is out of the scope of this paper. Here we briefly present the NES class of algorithms and CMA-ES, for they provide intuition for our method. Both of these methods maintain a set of points (called a population) sampled from a parametric search distribution $\pi_\theta$, at which they evaluate the objective $f$. From these function evaluations, NES produces a search gradient on the parameters $\theta$ towards lower expected objective. Formally, it minimizes:

$$J(\theta) = \mathbb{E}_{x \sim \pi_\theta}\left[f(x)\right] \qquad (1)$$

encouraging $\pi_\theta$ to steer its probability mass into areas of small objective value. One version of NES uses the score function estimator for $\nabla_\theta J$, obtained via the likelihood-ratio trick in close resemblance to williams1992simple ():

$$\nabla_\theta J(\theta) = \mathbb{E}_{x \sim \pi_\theta}\left[f(x)\,\nabla_\theta \log \pi_\theta(x)\right] \qquad (2)$$

which can be approximated from samples and then fed into a stochastic gradient descent algorithm to optimize $\theta$. Note that this requires a closed form for $\pi_\theta$, which is chosen to be a Gaussian distribution for its tractability, with $\theta$ describing its mean and covariance matrix. Although it doesn't explicitly minimize (1), CMA-ES aims at the same objective. The search distribution is constrained to be Gaussian, and its update relies on Covariance Matrix Adaptation. Formally (in its simplest form), CMA-ES retains a fraction of the best points within the population, which are used to update the mean and covariance of $\pi_\theta$. We refer the interested reader to hansen2016cma () for thorough details on the procedure followed by CMA-ES.
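To make the score-function update concrete, below is a minimal NumPy sketch of estimator (2) for an isotropic Gaussian search distribution whose mean is adapted; the function names, the fixed standard deviation and the fitness-shaping baseline are illustrative choices, not the exact NES or CMA-ES procedure.

```python
import numpy as np

def nes_mean_step(f, mu, sigma=0.3, lam=50, lr=0.05, rng=np.random.default_rng()):
    """One NES-style update of the mean of an isotropic Gaussian search
    distribution, using the score-function (likelihood-ratio) estimator (2)."""
    x = mu + sigma * rng.standard_normal((lam, mu.size))   # population
    fx = np.array([f(xi) for xi in x])
    fx = (fx - fx.mean()) / (fx.std() + 1e-8)              # fitness shaping (baseline)
    # score of a Gaussian w.r.t. its mean: (x - mu) / sigma^2
    grad_mu = np.mean(fx[:, None] * (x - mu) / sigma**2, axis=0)
    return mu - lr * grad_mu                               # descend the expected objective

# usage: minimize the sphere function from a random start
mu = np.array([3.0, -2.0])
for _ in range(200):
    mu = nes_mean_step(lambda x: np.sum(x**2), mu)
print(mu)  # close to the origin
```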

Bayesian Optimization

Bayesian Optimization (BO) is a popular machine-learning framework for the global optimization of expensive black-box functions that doesn't require derivatives. The main idea behind BO is to sequentially: (a) fit a model of the objective given the current history of function evaluations $\mathcal{D}_t$, (b) use this model to identify a promising future query $x_{t+1}$, (c) sample $f(x_{t+1})$ and add it to $\mathcal{D}_t$. BO has shown state-of-the-art performance for model fitting acerbi2017practical () and hyper-parameter tuning in machine learning snoek2012practical ().

Gaussian Processes (GPs) provide an elegant way to model the objective given $\mathcal{D}_t$ and to balance the exploration/exploitation trade-off in a principled manner, by defining priors over functions. Let $k$ be a positive definite kernel over $\mathcal{X}$. The prior imposes that for any collection $x_1, \ldots, x_n \in \mathcal{X}$, the vector $(f(x_1), \ldots, f(x_n))$ is a multivariate Gaussian with mean $(m(x_i))_i$ and covariance $(k(x_i, x_j))_{i,j}$. Using conditioning rules for Gaussian distributions, one can show that $f(x) \mid \mathcal{D}_t$ is also Gaussian, whose mean and variance are:

$$\mu_t(x) = m(x) + k(x, X_t)\,K_t^{-1}\left(y_t - m(X_t)\right) \qquad (3)$$
$$\sigma_t^2(x) = k(x, x) - k(x, X_t)\,K_t^{-1}\,k(X_t, x) \qquad (4)$$

where $X_t$ denotes the points in $\mathcal{D}_t$, $y_t$ their observed values and $K_t = k(X_t, X_t)$ the kernel matrix.

These derivations, as well as many details and insights on GPs, can be found in rasmussen2004gaussian (). The exploitation/exploration trade-off is ruled by the acquisition function, whose role is to find the point where the improvement over the current minimum is likely to be the highest. In this work, we focus on one popular acquisition function, the Expected Improvement (EI, jones2001taxonomy ()), although many others exist (villemonteix2009informational (); srinivas2009gaussian ()) which could equally be used. After fitting a GP on $\mathcal{D}_t$, we obtain a distribution for the value of the objective at every test point $x$. Denoting $y_t^{\min}$ the smallest value observed so far, the EI is defined as:

$$\mathrm{EI}_t(x) = \mathbb{E}\left[\max\left(0,\, y_t^{\min} - f(x)\right) \,\middle|\, \mathcal{D}_t\right] \qquad (5)$$

The EI is non-negative and takes the value $0$ at every training point. It is likely to be highly multimodal, especially when there is a large number of training points. It is generally smooth, as it inherits the smoothness of the kernel (usually at least once differentiable). It might seem that so far we have merely transposed the problem of minimizing the objective into that of maximizing the acquisition function to find $x_{t+1}$, the point where the expected improvement is highest. However, in the BO setting, a common assumption is that the acquisition function is considerably cheaper to evaluate than the actual objective, which allows for a brute-force approach to its optimization. Still, GP inference cost grows cubically with the number of queried points; in practice, BO therefore becomes inefficient in high dimensions, where a large number of points is needed before a global optimum can be located. Being able to reduce the number of acquisition function evaluations is crucial for scaling BO. Also, note that the acquisition function has computable derivatives, which can be very useful for its fast global maximization.
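Under a Gaussian posterior, the EI (5) admits a well-known closed form in terms of the posterior mean and standard deviation (3)-(4). The sketch below is a minimal NumPy/SciPy illustration with a zero prior mean and an illustrative squared-exponential kernel; the helper names are ours, not part of any library.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ell=1.0):
    """Illustrative squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xstar, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at test points Xstar, eqs. (3)-(4)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xstar)                                   # cross-covariances k(X_t, x)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K_t^{-1} y_t
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(Xstar, Xstar).diagonal() - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_min):
    """Closed form of the EI (5) under the Gaussian posterior."""
    z = (y_min - mu) / sigma
    return (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```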

2.2 First-order algorithms

Surprisingly, there exist very few gradient-based methods for global optimization. A popular technique is to repeat a greedy algorithm like gradient descent from a collection of initial starting points. This method has become very popular in the BO community, where repeated L-BFGS has become the standard way to optimize the acquisition function, as the latter (almost always) has computable derivatives. The initial starting points are usually sampled uniformly over the compact set $\mathcal{X}$, or according to adaptive grid strategies. Note that this initial population could also be the result of an ES procedure, although we are not aware of any work implementing this idea.
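For reference, a minimal sketch of this multi-start baseline using SciPy's L-BFGS-B is given below; the helper name, number of restarts and test objective are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_lbfgs(f, bounds, n_starts=20, rng=np.random.default_rng()):
    """Repeated L-BFGS-B from uniform random starting points over a box domain."""
    lo, hi = np.array(bounds).T
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                 # uniform start over the compact set
        res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# usage: 2-D Rastrigin-like objective on [-5, 5]^2
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(multistart_lbfgs(f, bounds=[(-5, 5), (-5, 5)]))
```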

Recently, it was proposed to incorporate derivative information into the BO framework, either to improve the conditioning of the GP covariance matrix osborne2009gaussian (), or to directly shape the acquisition function lizotte2008practical (); wu2017bayesian (). However, the previous remarks on BO scalability still apply to these approaches.

3 Generative Neural Network for Differentiable Evolutionary Search

NES and CMA-ES choose the search distribution to be Gaussian, although any parametric distribution could be used to generate populations. A general way to construct one is to apply a parametric transformation $g_\theta$ to an initial random variable $z$, whose probability distribution we note $p_z$. In our case, the parameter vector $\theta$ of this transformation has to be adapted or learned so that the resulting distribution $\pi_\theta$ of $g_\theta(z)$ can be used to explicitly optimize $f$.

Neural networks are able to generate complex transformations, and their weights and biases can be learned quickly thanks to gradient back-propagation; they therefore constitute good candidates for generating $\pi_\theta$ from $p_z$. We here describe the method we propose to leverage this idea, called Generative Neural Network for Differentiable ES - GENNES.

3.1 Core algorithm

We note $g_\theta$ the neural network parametrized by $\theta$ (the weights and biases of the network), mapping the noise $z$ into points $x = g_\theta(z)$. As our goal is to generate queries with low objective value, (1) is a natural cost function for training $g_\theta$. However, note that we don't have access to a closed form of $\pi_\theta$, and therefore ideas similar to NES cannot be applied. Still, (1) can be rewritten as:

$$J(\theta) = \mathbb{E}_{z \sim p_z}\left[f\!\left(g_\theta(z)\right)\right] \qquad (6)$$

where $g_\theta(z)$ is the output of the network with input $z$. This allows us to compute a stochastic estimate of $J$'s gradient with respect to $\theta$:

$$\hat{\nabla}_\theta J = \frac{1}{\lambda} \sum_{i=1}^{\lambda} \left(\frac{\partial g_\theta(z_i)}{\partial \theta}\right)^{\!\top} \nabla_x f\!\left(g_\theta(z_i)\right) \qquad (7)$$

where $\{z_1, \ldots, z_\lambda\}$ is a collection of samples from $p_z$. This stochastic estimate of the gradient is then fed to a stochastic gradient descent algorithm like Adam kingma2014adam (). Note that the Jacobian term of the neural network can be easily computed via back-propagation. Figure 2 demonstrates the behavior of a naive implementation of GENNES on the Rosenbrock function (unimodal, poorly conditioned) and the Rastrigin function (highly multimodal). The batch size used here for the sake of illustration is purposely much larger than it would practically be for such low-dimensional problems, in order to better describe the support of $\pi_\theta$. It is noticeable that GENNES is able to learn a curved search distribution on the Rosenbrock function, in accordance with the contour lines of the function. On Rastrigin's function, we see that GENNES is able to explore several minima (including the global one) at the same time. This is very useful for multimodal objectives, where the exploration has to be conducted in disconnected areas of the landscape. We provide in Algorithm 1 the pseudo-code for GENNES.

1:procedure GENNES($f$, $\lambda$, $T$, $\eta$)
2:     Initialize $\theta$ with $\theta_0$.
3:     for $t = 1, \ldots, T$ do
4:         Sample $z_1, \ldots, z_\lambda \sim p_z$ i.i.d.
5:         Generate $x_i = g_\theta(z_i)$ by a feed-forward pass of $g_\theta$.
6:         Query $f$ and $\nabla f$ at $x_1, \ldots, x_\lambda$.
7:         Compute the stochastic estimator (7).
8:         Apply Adam with learning rate $\eta$ and gradient estimate (7) to update $\theta$.
9:     end for
10:return $\theta$
11:end procedure
Algorithm 1 Generative Neural Model for Differentiable ES (GENNES)
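For concreteness, here is a compact PyTorch sketch of Algorithm 1. The width, depth, learning rate and domain bound are illustrative placeholders, and the objective is written with torch operations so that autograd's backward pass realizes estimator (7); in a true first-order-oracle setting, $\nabla f$ would instead be supplied by the oracle.

```python
import torch

def gennes(f, dim, noise_dim=10, lam=64, steps=1000, lr=1e-3, bound=5.0):
    """Minimal sketch of Algorithm 1: train a generator g_theta so that its
    output distribution concentrates on low values of the objective f."""
    g = torch.nn.Sequential(
        torch.nn.Linear(noise_dim, 64), torch.nn.LeakyReLU(0.2),
        torch.nn.Linear(64, 64), torch.nn.LeakyReLU(0.2),
        torch.nn.Linear(64, dim), torch.nn.Tanh(),  # outputs lie in [-1, 1]^dim
    )
    opt = torch.optim.Adam(g.parameters(), lr=lr)
    best = float("inf")
    for _ in range(steps):
        z = torch.rand(lam, noise_dim) * 2 - 1      # uniform noise on [-1, 1]^noise_dim
        x = bound * g(z)                            # population in [-bound, bound]^dim
        fx = f(x)                                   # query the objective on the batch
        best = min(best, fx.min().item())
        opt.zero_grad()
        fx.mean().backward()                        # Monte Carlo estimate of (6); backprop gives (7)
        opt.step()
    return best

# usage: Rastrigin in dimension 2, written with torch ops so gradients flow
rastrigin = lambda x: 10 * x.shape[1] + (x**2 - 10 * torch.cos(2 * torch.pi * x)).sum(1)
print(gennes(rastrigin, dim=2))
```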
Figure 2: Evolution of $\pi_\theta$ with GENNES on the Rosenbrock (top, iterations 0, 10 and 50) and Rastrigin (bottom, iterations 0, 20 and 100) functions.

3.2 Links with ES

GENNES shares many intuitive links with ES. For instance, the batch size $\lambda$ can be interpreted as a population size. Indeed, the batch can be understood as a population, whose fitness signals (i.e $f$'s gradients) are used to encourage $g_\theta$ into generating better individuals at the next generation - i.e the next update. In short, $g_\theta$ pushes search points along $f$'s negative gradients. However, because its capacity is limited and all the search points are generated by the same model, it is forced to mix and balance information coming from different queries, which can be understood as the recombination process of ES. Sampling different noise at different iterations can itself be understood as a mutation step. We optimize $\theta$ with Adam, as we noticed that training the network with momentum gives inertia to the output distribution, leading to faster optimization. From an ES perspective, this can be understood as using an evolution path hansen2015evolution ().

3.3 Architecture and refinements

Architecture

In all our experiments we use fully connected layers with leaky ReLU maas2013rectifier () activations to map the input noise to an intermediate embedding layer. We then add a fully connected layer with hyperbolic tangent activations as the final layer of the generator $g_\theta$. This allows us to constrain the output distribution to lie in a compact set - as we usually set bounds on the domain of interest over which we want to optimize the objective $f$. We don't use any renormalization technique like batch normalization or weight normalization, as we found that a simple architecture performed best in our experiments. The weights are initialized with Glorot initialization glorot2010understanding (), except for those belonging to the last layer, which are initialized according to a normal distribution. The noise $z$ is sampled according to a multivariate uniform distribution. The dimension, as well as the support of this distribution, are important hyper-parameters, as they shape the initial output distribution. An inconvenient shape (due to the noise dimension being much smaller than the objective dimension) or a degenerate support (if $z$'s standard deviation is too small) can seriously hinder the optimization process. To this end, we provide in Appendix B a robust adaptive initialization strategy to set the variance of the initial weights of the final layer and $z$'s magnitude.
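A PyTorch module matching this description might look as follows; the hidden width, depth and last-layer standard deviation are illustrative placeholders (Appendix B discusses how to set the latter).

```python
import torch

class Generator(torch.nn.Module):
    """Fully connected generator: leaky-ReLU hidden layers mapping the input
    noise to an embedding, followed by a tanh output layer (compact outputs)."""
    def __init__(self, noise_dim, out_dim, hidden=64, depth=4, last_std=0.5):
        super().__init__()
        layers, d = [], noise_dim
        for _ in range(depth):
            lin = torch.nn.Linear(d, hidden)
            torch.nn.init.xavier_uniform_(lin.weight)    # Glorot initialization
            torch.nn.init.zeros_(lin.bias)
            layers += [lin, torch.nn.LeakyReLU(0.2)]
            d = hidden
        last = torch.nn.Linear(d, out_dim)
        torch.nn.init.normal_(last.weight, std=last_std) # normal init, see Appendix B
        torch.nn.init.zeros_(last.bias)
        layers += [last, torch.nn.Tanh()]
        self.net = torch.nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)
```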

Noise annealing

An interesting problem we encountered is that of precision. With the architecture described so far, GENNES is able to quickly locate $f$'s global minimum, but fails to sample points arbitrarily close to it, leading the optimization procedure into what appears to be premature convergence. Ideally, we want to converge to a Dirac located at the global minimum, which means that $g_\theta$ should map the whole support of $p_z$ into a single point $x^\star$. This requires either infinite capacity of the network, or unlearning the input - i.e setting all weights of the first layer to zero. Both solutions being unrealistic, we propose to progressively reduce the support of $p_z$. This procedure, which we refer to as noise annealing by analogy with the simulated annealing algorithm, can be used when high precision on the global minimum is required. Noise annealing has an interesting effect on the optimization procedure: as the support of the input distribution shrinks, so does the support of the output distribution, forcing it to be unimodal with small support, which progressively makes it follow the negative gradient of the function. To some extent, we can consider this as applying gradient descent to the fittest particle after a first evolutionary search procedure. Rules of thumb are given in Appendix C for setting the noise annealing schedule.

4 Experimental Results

In this section, we first evaluate GENNES on a small continuous global optimization benchmark. The good results we obtain motivate the second set of experiments, where we show how GENNES can accelerate the BO procedure on a toy example and a hyper-parameter search experiment.

4.1 Continuous optimization benchmark

Experimental set-up

Here, we describe the experimental results obtained on four functions taken from the noiseless continuous optimization benchmark BBOB-2009 hansen:inria-00362633 () testbed: the Rastrigin, Ackley, Styblinski and Schwefel functions, whose literal expressions can be found in Appendix A.1. All these functions are multimodal, although Ackley has many more local minima than the others on the domain it is evaluated on. It also has the strongest global structure, as its global minimum is at the center of a globally decreasing landscape. For every problem instance, we repeat the optimization procedure over 10 folds, and report mean regret as a function of the number of evaluations. At every new fold, the position of the global minimum of the objective is randomly translated, and the weights of the generator for GENNES are randomly initialized with a different seed.

Baselines

We compare our algorithm to a repeated version of L-BFGS (i.e run from multiple starting points), and also provide the results obtained by the respective authors' implementations of aCMA-ES hansen2010benchmarking () and sNES schaul2012benchmarking (). Both of these algorithms use a zero-order oracle for optimization, while our algorithm and L-BFGS use a first-order oracle. They nevertheless provide state-of-the-art results for the global optimization of such functions, and therefore allow us to assess the quality of the minima discovered by GENNES. It is important to keep in mind that we count a gradient evaluation as equal to a function evaluation of the objective; the conclusions of these experiments therefore only hold in settings where this rule is reasonable. We run aCMA-ES and sNES with default (adapted) hyper-parameters, known to be robust. We only impose the initial mean - randomly drawn within the domain of interest - and covariance of their initial distribution, as well as their population size, which we match to the one used for GENNES. The starting points for the multiple L-BFGS runs are taken uniformly at random over the domain of interest, and each run stops after a convergence criterion is met. Details on the baseline settings can be found in Appendix A.2. Note that we don't compare with derivative-enabled BO methods, as their runtime is excessively large for the regimes we consider.

Results and analysis

Rastrigin (lower dimension) Rastrigin (higher dimension)
# evaluations 1e2 1e3 1e4 1e5 1e2 1e3 1e4 1e5
GENNES 71.3 41.3 4.1 3.9 279.0 212.1 72.3 19.0
L-BFGS 24.3 13.3 6.9 5.4 90.4 74.5 60.7 41.2
sNES 107.2 48.1 4.3 4.2 333.5 242.4 40.3 20.8
aCMA-ES 100.9 40.3 6.1 6.0 100.9 233.2 37.0 37.0
Ackley (lower dimension) Ackley (higher dimension)
# evaluations 1e2 1e3 1e4 1e5 1e2 1e3 1e4 1e5
GENNES 7.9 2.5 0.007 0.005 9.4 5.4 0.007 0.006
L-BFGS 11.9 8.3 5.1 3.2 12.9 11.6 9.0 7.2
sNES 11.4 1.6 1e-7 5e-10 18.7 10.5 9e-05 2e-8
aCMA-ES 10.7 0.2 2e-11 1e-14 18.6 5.9 2e-09 1e-14
Styblinski (lower dimension) Styblinski (higher dimension)
# evaluations 1e2 1e3 1e4 1e5 1e2 1e3 1e4 1e5
GENNES 206.1 111.1 7.8 5.2 2371.3 587.7 97.2 21.1
L-BFGS 35.3 24.0 9.9 5.6 169.0 113.1 98.2 67.8
sNES 888.6 39.3 11.3 11.1 19022.0 647.3 70.6 68.9
aCMA-ES 657.6 28.9 24.1 23.9 14213 488.2 131.4 85.4
Schwefel (lower dimension) Schwefel (higher dimension)
# evaluations 1e2 1e3 1e4 1e5 1e2 1e3 1e4 1e5
GENNES 3011.9 2186.3 595.8 533.6 10100.6 8381.9 1235.4 943.8
L-BFGS 1463.2 989.7 718.5 542.8 5021.4 4540.6 3842.6 3331.7
sNES 3087.1 2587.7 685.3 685.3 20807.6 10253.4 2447.8 2429.4
aCMA-ES 3042.3 2301.8 1749.1 1739.2 13000.4 100021.2 5661.7 5658.7
Table 1: Comparison of the best objective value found on the Rastrigin, Ackley, Styblinski and Schwefel functions, in a lower and a higher dimension, as a function of the number of objective evaluations. Results are averaged over 10 repetitions. Experiments are stopped once all methods have converged or the maximum budget has been reached.

In Table 1 we report the difference between the best value found by GENNES, sNES, aCMA-ES and repeated L-BFGS and the objective's minimum, as a function of the number of function evaluations. It is noticeable that on the Rastrigin, Styblinski and Schwefel functions, repeated L-BFGS is the best algorithm for small numbers of evaluations, as it greedily exploits the landscape and finds local minima very fast. However, as the number of evaluations grows, the early exploration performed by GENNES pays off, and it finds the best minima of all methods within the given budget. Note that on these three functions, GENNES's relative performance improves as the dimension grows. This is to be expected, as repeated L-BFGS suffers from the curse of dimensionality, while GENNES leverages the dimension-independent oracle complexity of gradient-based methods. On a very highly multimodal function like Ackley, GENNES is better than the baselines for small numbers of evaluations, and we again see its performance increasing with the dimensionality of the problem. However, it gets stuck near the global optimum, whereas zero order methods manage to locate the global optimum very precisely. This phenomenon is very likely caused by the sharp cliff surrounding the global minimum of Ackley's function, which makes the task harder for gradient-based methods.

4.2 Accelerating Bayesian Optimization

We here present a possible application of GENNES: the efficient maximization of the acquisition function within a BO procedure with GPs. We compare GENNES for optimizing the acquisition function against the repeated L-BFGS method, which is commonly used for this purpose in the BO community. The acquisition function we use is the EI - although results are reproducible with any differentiable acquisition function - while the surrogate is a GP with an Automatic Relevance Determination Matern52 kernel. We run comparisons on two tasks: a low-dimensional hyper-parameter optimization task, and a high-dimensional multimodal toy function. Experiments are run using the library GPyOpt gpyopt2016 ().
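As a point of reference, the repeated L-BFGS baseline corresponds to GPyOpt's standard acquisition optimizer; a minimal sketch of such a baseline run is given below (the 2-D box domain and the Alpine1-style objective are illustrative, not our exact experimental configuration).

```python
import numpy as np
import GPyOpt

# Hypothetical 2-D box domain; GPyOpt expects f to map (n, d) arrays to (n, 1).
domain = [{'name': 'x%d' % i, 'type': 'continuous', 'domain': (-10, 10)}
          for i in range(2)]
f = lambda X: np.sum(np.abs(X * np.sin(X) + 0.1 * X), axis=1, keepdims=True)

bo = GPyOpt.methods.BayesianOptimization(
    f=f, domain=domain,
    acquisition_type='EI',               # Expected Improvement (5)
    acquisition_optimizer_type='lbfgs',  # the repeated L-BFGS baseline
    initial_design_numdata=10)
bo.run_optimization(max_iter=30)
print(bo.x_opt, bo.fx_opt)               # best query found and its value
```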

We first apply BO to the optimization of the hyper-parameters of a logistic regression model on the digits dataset from scikit-learn. The four hyper-parameters to optimize are the learning rate, the L2 and L1 regularization coefficients and the number of iterations. Figure 3 displays the test error as a function of both the number of objective function evaluations and the number of acquisition function queries. The curves are averaged over a 10-fold experiment. It is noticeable that GENNES finds better queries than L-BFGS after maximizing the acquisition function, and also does so in fewer acquisition queries. We repeat this experiment in Figure 4 on the toy multimodal function Alpine1, with similar conclusions. Note that as the problem dimension grows, the improvement of GENNES over L-BFGS is likely to increase, as there are more local minima. Extensive details about this experimental set-up can be found in Appendix A.3.

(a) Test error versus number of objective evaluations.
(b) Test error versus number of acquisition function queries.
Figure 3: Comparison of GENNES and repeated L-BFGS for hyper-parameter optimization via BO on the digits dataset (4 hyper-parameters). Results are averaged over 10 repetitions.
(a) Objective value versus number of objective evaluations.
(b) Objective value versus number of acquisition function queries.
Figure 4: Comparison of GENNES and repeated L-BFGS for global optimization of the Alpine1 function. Results are averaged over 10 repetitions.

5 Conclusion

We propose GENNES, a neural generative model to optimize multimodal black-box functions for which gradients are available. We show the merits of our approach on a benchmark set of multimodal functions by comparing with state-of-the-art zero order methods and a repeated gradient-based greedy method. We propose to use GENNES to accelerate Bayesian Optimization by efficiently maximizing acquisition functions, and show that it outperforms the most popular solution for this task, leading to faster discovery of the objective's global minimum. We believe this to be an important contribution, as Bayesian Optimization is often limited by the cost of evaluating the acquisition function when the number of query points is large. Finding ways to optimize the acquisition function in fewer queries is therefore a step forward in scaling Bayesian Optimization.

Other applications of this new method are numerous. In future work, we wish to use recent methods maclaurin2015gradient () to directly compute hyper-parameter gradients, and to compare with recent gradient-based Bayesian Optimization methods wu2017bayesian (). Another promising application of our method is the efficient global optimization of deep neural networks, which we also plan to tackle in future work.

Appendix A Experimental details

A.1 Benchmark functions

Table 2 details the formulas of the benchmark functions from Section 4.1 for a given dimension $d$. All functions have a known minimum value. In our experiments, the objective is translated at every new fold to shift the position of the global minimum.

Function | Literal expression | Domain
Rastrigin | $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$ | $[-5.12, 5.12]^d$
Ackley | $f(x) = -20\exp\!\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\!\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right) + 20 + e$ | $[-32.8, 32.8]^d$
Styblinski | $f(x) = \tfrac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$ | $[-5, 5]^d$
Schwefel | $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i\sin\!\left(\sqrt{|x_i|}\right)$ | $[-500, 500]^d$
Alpine1 | $f(x) = \sum_{i=1}^{d}\left|x_i\sin(x_i) + 0.1\,x_i\right|$ | $[-10, 10]^d$
Table 2: Benchmark functions literal expressions (canonical forms and standard domains).

A.2 Baseline settings

For every experiment, we set the population size of aCMA-ES, sNES and GENNES to the same value. At every new fold, we randomly draw the mean of the initial distribution of aCMA-ES and sNES within the objective domain. The standard deviation is set to properly cover the whole domain. All other hyper-parameters of both methods are left untouched, as their authors provide robust adaptive rules to set them (see hansen2016cma (); schaul2012benchmarking ()).

For GENNES, a new seed is used at every new fold for the initialization of the weights and biases of the generator. The network is composed of fully connected layers with leaky ReLU activations (with a fixed negative slope) and a last fully connected layer with tanh activation. The variance of the weights of this last layer is chosen according to Appendix B, so that the output distribution has an adapted variance covering the domain. The noise annealing schedule is chosen according to Appendix C, with the same coefficient for all the different experiments.

A.3 Bayesian Optimization experiments

The first experiment we run tackles the optimization of the hyper-parameters of a logistic regression model applied to the digits dataset, made up of 1797 8x8 images, each representing a hand-written digit, with a fixed train/test split ratio. The optimized hyper-parameters are the learning rate (on a log-scale), the L2 and L1 regularization parameters (on a log-scale) and the number of iterations. The generator used for GENNES is made up of 6 fully connected layers, with leaky ReLU activations for the first 5 and a hyperbolic tangent for the last. The exponential scheduling factor and the population size are kept identical across all the experiments. We run GENNES for 20 iterations. At every iteration, we sample 20 starting points for L-BFGS, which is stopped when the minimum coordinate of the projected gradient has a magnitude lower than a fixed tolerance. The initial GP is fit on a small set of initial points.

The second experiment deals with the global optimization of the Alpine1 function (see Table 2) in a higher dimension. The generator of GENNES is left untouched, except for the maximum number of iterations, which is brought up to 30. We use 100 initial points for L-BFGS at every iteration, and initialize the GP with 100 points.

Appendix B Guaranteeing a safe initialization for the generator

We hereinafter describe the reasoning we follow to set the initial distribution of the input noise $z$, whose dimension we note $d_z$. Let us consider that $z$ is sampled from a multivariate uniform distribution, whose first and second moments we respectively note $\mu_z$ and $\sigma_z^2$. The weights of the first layer are sampled according to a centered normal distribution of variance $\sigma_1^2$. The pre-activation $a$ of a neuron belonging to the first hidden layer is obtained via:

$$a = \sum_{j=1}^{d_z} w_j z_j$$

as the initial biases are set to $0$. Therefore we can write the mean and variance of its related random variable (r.v) as:

$$\mathbb{E}[a] = 0, \qquad \sigma_a^2 := \mathrm{Var}[a] = d_z\,\sigma_1^2\left(\sigma_z^2 + \mu_z^2\right) \qquad (8)$$

assuming of course that the random variables representing the weights and the noise are independent. We approximate $a$'s distribution by a normal r.v with moments matching (8). We then want to approximate the moments of the distribution obtained after feeding $a$ to a ReLU activation (we note $r$ the resulting scalar and its related r.v). Easy computations show that, $a$ being centered, we have:

$$\mathbb{E}[r] = \frac{\sigma_a}{\sqrt{2\pi}}$$

and

$$\mathrm{Var}[r] = \sigma_a^2\left(\frac{1}{2} - \frac{1}{2\pi}\right)$$

Assuming for the sake of simplicity that $r$ is centered, we can simplify that last expression as:

$$\mathrm{Var}[r] \simeq \mathbb{E}[r^2] = \frac{\sigma_a^2}{2}$$

If all the following layers now have the same height $h$, repeating this process indicates that the variance of the activations of the last of the $L$ hidden layers, before the tanh activation, is given by:

$$\sigma_L^2 = \sigma_a^2\left(\frac{h\,\sigma^2}{2}\right)^{L-1}$$

with $\sigma^2$ being the variance of the weights in the following layers. Now, the weights of the ReLU layers being initialized with Glorot initialization, we have $\sigma_1^2 = 2/(d_z + h)$ for the first layer, and $\sigma^2 = 1/h$ for the rest. Denoting $\sigma_w^2$ the variance of the distribution used to generate the weights of the very last layer and $\sigma_g^2$ the variance we wish to impose on the output distribution, approximating the hyperbolic tangent by the identity (the distribution of the activation is centered), and taking $d_z \simeq h$ so that $d_z \sigma_1^2 \simeq 1$, we are left with the relation (after straightforward simplifications):

$$\sigma_g^2 = h\,\sigma_w^2\,\frac{\sigma_L^2}{2} = h\,\sigma_w^2\left(\sigma_z^2 + \mu_z^2\right)2^{-L}$$

Finally, we are left with the following equality:

$$\sigma_w^2\left(\sigma_z^2 + \mu_z^2\right) = \frac{2^{L}}{h}\,\sigma_g^2 \qquad (9)$$

which dictates how to tune the product $\sigma_w^2(\sigma_z^2 + \mu_z^2)$ as a function of the generator's depth and the desired variance for the output distribution (which is centered around $0$, as the weights of the last layer are sampled according to a centered distribution). Even though it relies on some questionable approximations, we found this formula to work very well in practice for small networks. The mean of the initial distribution can easily be shifted, as required by the optimization problem at hand, by adding a bias on the output layer.

Notice that the dimension of the noise does not appear in (9). In practice, it is nevertheless important to have $d_z$ close to the output dimension $d$, as an excessively small value results in an oddly shaped support for the output distribution, which hinders the optimization since the resulting distribution of gradients might mislead the generator. Indeed, the input dimension of the noise bounds the number of possibly uncorrelated output dimensions: for instance, with an input of size one, all the output dimensions would be correlated.
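As an illustration, the small helper below (hypothetical name) applies relation (9), under the approximations above, to pick the standard deviation of the last layer's weights given a target output standard deviation.

```python
import math

def last_layer_std(depth, hidden, target_std, noise_var, noise_mean=0.0):
    """Solve relation (9) for the std of the last layer's weights:
    sigma_w^2 (sigma_z^2 + mu_z^2) = (2^L / h) sigma_g^2."""
    rhs = (2.0 ** depth / hidden) * target_std ** 2
    return math.sqrt(rhs / (noise_var + noise_mean ** 2))

# e.g. a generator with L = 4 hidden layers of width h = 64, targeting unit
# output variance, with z ~ U(-1, 1)^dz (per-coordinate variance 1/3):
print(last_layer_std(depth=4, hidden=64, target_std=1.0, noise_var=1 / 3))
```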

Appendix C Setting the noise annealing schedule

The setting of the noise annealing schedule is crucial, as an early collapse of the output distribution could lead to a sequence of pure gradient-descent-like updates, meaning that GENNES could miss the global optimum. On the other hand, if high precision on the position of the global optimum is required, the annealing should quickly shrink the support of the noise once the global minimum has been located. In practice, we found that an exponential decrease rule works well:

$$z_t = \gamma^t\, z_0 \qquad (10)$$

where $z_0$ is the initial noise (with the initial magnitude set as in Appendix B) and $\gamma \in (0, 1)$ is the parameter dictating the evolution of the noise support. If it is likely that the objective has few local minima and a convex shape near them, then we recommend setting $\gamma$ to a small value. On the other hand, if the objective is highly multimodal, we recommend setting it to a value closer to 1.
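In code, this schedule amounts to rescaling the sampled noise at every iteration; a small sketch is given below (the value of gamma is illustrative).

```python
import torch

def sample_annealed_noise(batch, noise_dim, step, gamma=0.995):
    """Uniform input noise with exponentially shrinking support, as in (10):
    the magnitude of z decays by a factor gamma at every iteration."""
    return gamma ** step * (torch.rand(batch, noise_dim) * 2 - 1)
```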

References

  1. Luigi Acerbi and Wei Ji Ma. Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search. In Advances in Neural Information Processing Systems, pages 1836–1846, 2017.
  2. The GPyOpt authors. GPyOpt: A Bayesian Optimization framework in Python. http://github.com/SheffieldML/GPyOpt, 2016.
  3. Yanshuai Cao, Gavin Weiguang Ding, Kry Yik-Chau Lui, and Ruitong Huang. Improving GAN training via Binarized Representation Entropy (BRE) Regularization. 2018.
  4. Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906, 2015.
  5. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
  6. Nikolaus Hansen. The CMA Evolution Strategy: A tutorial. arXiv preprint arXiv:1604.00772, 2016.
  7. Nikolaus Hansen, Dirk V Arnold, and Anne Auger. Evolution strategies. In Springer Handbook of Computational Intelligence, pages 871–898. Springer, 2015.
  8. Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Research Report RR-6829, INRIA, 2009.
  9. Nikolaus Hansen and Andreas Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
  10. Nikolaus Hansen and Raymond Ros. Benchmarking a weighted negative covariance matrix update on the BBOB-2010 noiseless testbed. In Proceedings of the 12th annual conference companion on Genetic and evolutionary computation, pages 1673–1680. ACM, 2010.
  11. Donald R Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
  12. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  13. Daniel James Lizotte. Practical Bayesian Optimization. University of Alberta, 2008.
  14. Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In 30th International Conference on Machine Learning (ICML 2013), volume 30, page 3, 2013.
  15. David JC MacKay. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 354(1):73–80, 1995.
  16. Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
  17. Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample variance penalization. arXiv preprint arXiv:0907.3740, 2009.
  18. Michael A Osborne, Roman Garnett, and Stephen J Roberts. Gaussian processes for global optimization. In 3rd international conference on learning and intelligent optimization (LION3), pages 1–15. Citeseer, 2009.
  19. Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In International Conference on Machine Learning, pages 737–746, 2016.
  20. Carl Edward Rasmussen. Gaussian Processes in machine learning. In Advanced lectures on machine learning, pages 63–71. Springer, 2004.
  21. Tim Salimans, Jonathan Ho, Xi Chen, and Ilya Sutskever. Evolution Strategies as a scalable alternative to Reinforcement Learning. arXiv preprint arXiv:1703.03864, 2017.
  22. Tom Schaul. Benchmarking separable Natural Evolution Strategies on the noiseless and noisy black-box optimization testbeds. In Proceedings of the 14th annual conference companion on Genetic and evolutionary computation, pages 205–212. ACM, 2012.
  23. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian Optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
  24. Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
  25. Adith Swaminathan and Thorsten Joachims. The self-normalized estimator for counterfactual learning. In Advances in Neural Information Processing Systems, pages 3231–3239, 2015.
  26. Julien Villemonteix, Emmanuel Vazquez, and Eric Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509, 2009.
  27. Vedran Vukotić, Christian Raymond, and Guillaume Gravier. Generative Adversarial Networks for Multimodal Representation Learning in Video Hyperlinking. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pages 416–419. ACM, 2017.
  28. Daan Wierstra, Tom Schaul, Jan Peters, and Juergen Schmidhuber. Natural Evolution Strategies. In Evolutionary Computation, 2008. CEC 2008.(IEEE World Congress on Computational Intelligence). IEEE Congress on, pages 3381–3387. IEEE, 2008.
  29. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist Reinforcement Learning. In Reinforcement Learning, pages 5–32. Springer, 1992.
  30. Jian Wu, Matthias Poloczek, Andrew G Wilson, and Peter Frazier. Bayesian Optimization with Gradients. In Advances in Neural Information Processing Systems, pages 5273–5284, 2017.