Denoising Autoencoders for fast Combinatorial Black Box Optimization
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AEs) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to those of the Bayesian Optimization Algorithm (BOA) and RBM-EDA, another EDA based on a generative neural network, which has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs.
Key words and phrases: Autoencoder; Estimation of Distribution Algorithms; Machine Learning; Combinatorial Optimization Problems; Neural Networks
Estimation of Distribution Algorithms (EDA, [21, 19]) are metaheuristics for combinatorial and continuous non-linear optimization. They maintain a population of solutions which they improve over consecutive generations. They estimate how likely it is that decisions are part of an optimal solution, and try to uncover the dependency structure between the decision variables. This information is obtained from the population by the estimation of a probabilistic model. If a model generalizes the population well, random samples drawn from the model have a structure and solution quality that is similar to the population itself. Repeated model estimation, sampling, and selection steps can solve difficult optimization problems. Simple models, such as factorizations of univariate frequencies, can be quickly estimated from a population, but they cannot represent interactions between decision variables well. As a consequence, EDAs using univariate frequencies cannot efficiently solve complex problems. Using multivariate models allows complex problems to be solved, but fitting the model to a population and sampling new solutions can be very time-consuming.
Recent work has shown that current models from machine learning such as the Restricted Boltzmann Machine (RBM), a stochastic neural network, can be used as the probabilistic model of an EDA. While not entirely matching the solution quality of the more statistics-driven Bayesian Optimization Algorithm (BOA), RBMs have other desirable properties: fast training and sampling, and easy and efficient parallelization [31, 30].
We focus on another model from the field of machine learning that is closely related to the RBM: the Autoencoder (AE, see e.g. [14, 4]). Recent work has shown that AEs implicitly capture the probability distribution of given data, and that sampling from this distribution is possible. Although the AE is structurally similar to an RBM, its training procedure is simpler and computationally less expensive. Hence, AEs are even faster to train and sample.
In this paper, we integrate a DAE in an EDA and assess its performance on multiple standard benchmark problems from combinatorial optimization. We report both the number of fitness evaluations and the required CPU times. We include results for BOA, RBM-EDA, and a simple univariate method for comparison.
We review the basic concept of EDAs. We introduce Autoencoders, describe how to train and sample them, and show how an Autoencoder can be used in an EDA.
2.1. Estimation of Distribution Algorithms
EDAs are well-established tools for solving combinatorial optimization problems (see e.g. [21, 19]). The basic structure of EDAs is given by Algorithm 1. In a nutshell, they select promising individuals from a population, build a probabilistic model of this subpopulation and then use this model to sample new individuals. These new individuals are evaluated and usually form the new population. This loop continues until the population has converged. The underlying assumption is that a model, which has captured the essence of the old population, is able to sample new, unknown individuals that possess the same high-quality structure, thereby searching the solution space efficiently.
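The select-estimate-sample loop just described can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a univariate frequency model (as in UMDA/PBIL) and simple truncation selection, and `fitness` stands for any function mapping a bit string to a number.

```python
import random

def eda(fitness, n_bits, pop_size, generations, seed=0):
    """Minimal univariate EDA: select, estimate a model, sample, repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Selection: keep the better half of the population.
        pop.sort(key=fitness, reverse=True)
        selected = pop[: pop_size // 2]
        # Model estimation: univariate activation frequencies.
        p = [sum(ind[i] for ind in selected) / len(selected) for i in range(n_bits)]
        # Sampling: draw a new population from the model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop + [best], key=fitness)
    return best

# Onemax (count of ones) has no variable interactions,
# so even this univariate model solves it easily.
solution = eda(fitness=sum, n_bits=20, pop_size=100, generations=30)
```

For a problem with strong interactions between variables, the univariate model in this sketch would have to be replaced by a multivariate one, which is exactly the motivation for the models discussed next.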
EDAs differ in their choice of the model. Simple models, like UMDA or PBIL [21, 3], use a vector of activation probabilities for each variable of the problem and neglect dependencies between the variables. Slightly more complex models use pairwise dependencies modeled as trees or forests. More complex dependencies can be captured by models with multivariate interactions, like ECGA or BOA [11, 28]. Multivariate models are better suited for complex optimization problems, since univariate models can cause an exponential growth of the required number of fitness evaluations as the problem size grows [28, 25]. Many algorithms use probabilistic graphical models with directed edges, i.e., Bayesian networks, or undirected edges, i.e., Markov random fields. Hence, model building consists of finding a network structure that matches the problem structure and estimating the model's parameters. Usually, the computational effort to build the model rises with model complexity and representational power.
This section shows how to train an AE, introduces the Denoising AE, and shows how to sample new solutions.
Structure and Training Procedure
AEs are neural networks that have often been used for dimensionality reduction and are one of the building blocks of deep learning (see e.g. [14, 5, 4]). They are, in essence, multilayer perceptrons, a very basic type of neural network.
An AE's structure is defined by one visible layer x, at least one hidden layer h, and one output layer x̂ (see Figure 1). The basic AE consists of two deterministic functions: the encoding function c maps a given input x to a hidden representation h = c(x); the decoding function f maps h back to a reconstruction x̂ = f(h) = f(c(x)) in the input space. The training objective of the AE is to find parameters θ which minimize the reconstruction error L(x, x̂), i.e., the difference between x and x̂, over all examples x in the training set S:

(1)    θ* = argmin_θ Σ_{x ∈ S} L(x, f(c(x)))

Common choices for L are the mean squared error and the cross-entropy.

The encoding and decoding functions are usually chosen as c(x) = σ(Wx + b) and f(h) = σ(W′h + b′), where σ is the logistic function, W and W′ are weight matrices of size m × n and n × m, respectively, and b, b′ are bias vectors which work as offsets. Often, W and W′ are tied, i.e., W′ = Wᵀ. Then, the AE's configurable parameters are θ = {W, b, b′}.
Minimizing (1) is performed using a gradient descent algorithm (see Algorithm 2). First, the parameters θ are initialized to small, random values. Then, we repeat the following process for multiple epochs, i.e., passes through the training set: for each example x in the training set, we calculate the hidden representation h = c(x) and the corresponding reconstruction x̂ = f(h). We then change the parameters in the direction of the negative gradient, setting

θ ← θ − α ∂L(x, x̂)/∂θ

with learning rate α. We stop the loop if the reconstruction error is small enough or another termination criterion has been met. Often, the parameter optimization is carried out using stochastic gradient descent, i.e., we use the average gradient from a mini-batch of training examples to update θ. This usually speeds up learning and makes the gradient more stable.
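For a one-hidden-layer AE with logistic units and cross-entropy error, a single stochastic-gradient update can be written out by hand. This is a sketch with untied weights and per-example updates; the variable names (`W1`, `b1`, `W2`, `b2`) are illustrative, not taken from the paper.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ae_step(x, W1, b1, W2, b2, alpha):
    """One gradient step on the cross-entropy reconstruction error.
    Returns the error measured before the update."""
    n, m = len(x), len(b1)
    # Forward pass: encode to the hidden layer, then decode.
    h = [sigmoid(sum(W1[k][i] * x[i] for i in range(n)) + b1[k]) for k in range(m)]
    xr = [sigmoid(sum(W2[j][k] * h[k] for k in range(m)) + b2[j]) for j in range(n)]
    # For logistic outputs under cross-entropy, the output delta is simply
    # (reconstruction - input).
    d2 = [xr[j] - x[j] for j in range(n)]
    dh = [sum(W2[j][k] * d2[j] for j in range(n)) * h[k] * (1.0 - h[k])
          for k in range(m)]
    # Update the parameters in the direction of the negative gradient.
    for j in range(n):
        b2[j] -= alpha * d2[j]
        for k in range(m):
            W2[j][k] -= alpha * d2[j] * h[k]
    for k in range(m):
        b1[k] -= alpha * dh[k]
        for i in range(n):
            W1[k][i] -= alpha * dh[k] * x[i]
    eps = 1e-9  # clamp to avoid log(0)
    return -sum(x[j] * math.log(max(xr[j], eps))
                + (1 - x[j]) * math.log(max(1.0 - xr[j], eps)) for j in range(n))

# Tiny demonstration: repeatedly fitting one example drives the error down.
rng = random.Random(1)
n, m = 4, 3
W1 = [[rng.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(m)]
W2 = [[rng.uniform(-0.1, 0.1) for _ in range(m)] for _ in range(n)]
b1, b2 = [0.0] * m, [0.0] * n
errors = [ae_step([1, 0, 1, 0], W1, b1, W2, b2, 0.5) for _ in range(200)]
```

Looping this step over all training examples for several epochs implements the procedure above; mini-batching averages the deltas over a batch before updating.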
If the representational power of the hidden layer is large enough (i.e., if the number of hidden neurons is not too small), a trivial way to solve (1) is to learn the identity function, where each x is directly mapped to the corresponding x̂. To force the model to learn a more useful representation, it is therefore often helpful to introduce a form of regularization [4, 2]. One example of a regularized AE is the Denoising Autoencoder (DAE). Here, each training example x is corrupted by a stochastic mapping x̃ ~ q(x̃ | x), i.e., we add random noise. Subsequently, the DAE calculates the reconstruction of the corrupted input, using the encoding and decoding functions, as x̂ = f(c(x̃)). As with the original AE, the parameters are updated in the direction of the negative gradient of L(x, x̂). Hence, the DAE tries to reconstruct x rather than x̃. The noise introduced by the corruption process also makes the model more robust to partially destroyed inputs.
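The corruption step is easy to state in code. Below is a sketch of the salt-and-pepper corruption also used in the experiments later (Section 3.3): a fraction of randomly chosen positions is overwritten with a random 0 or 1. The DAE's gradient step is then identical to the plain AE's, except that the forward pass runs on the corrupted input while the error is still measured against the clean one.

```python
import random

def corrupt(x, fraction, rng):
    """Salt-and-pepper noise: overwrite a random fraction of the positions
    with a random bit (0 or 1); the remaining positions are untouched."""
    xt = list(x)
    k = max(1, round(fraction * len(x)))
    for i in rng.sample(range(len(x)), k):
        xt[i] = rng.randint(0, 1)
    return xt

rng = random.Random(42)
x = [1] * 10
xt = corrupt(x, 0.3, rng)  # at most 3 positions can change
```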
Sampling a DAE
Classic AEs do not include a sampling process to generate new examples. However, recent work has shown that some variants of AEs, including the DAE, implicitly capture the structure of the data-generating density, and multiple sampling processes have been suggested and empirically validated. Here, we adopt the most general of these sampling processes, which also comes with a theoretical justification.
Given a data-generating distribution P(x), a corruption process q(x̃ | x), and a DAE that has been trained to reconstruct x from x̃, the sampling process is as follows (see Algorithm 3): First, we randomly initialize a sample x⁰. Then, for T sampling steps, we corrupt the current sample using the corruption process and use the trained DAE to reconstruct the corrupted input; the reconstruction becomes the sample of the next step. After T sampling steps, we use x^T as a sample from the DAE. It has been shown that this process converges to samples from the DAE's approximation of the data-generating distribution, i.e., of the distribution of the training data.
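This sampling procedure is a short Markov chain. The sketch below assumes a trained model exposed as a `reconstruct` function returning one probability per bit, and a corruption function as described earlier; each step here additionally binarizes the reconstruction by Bernoulli sampling, one common variant of the process. The toy `reconstruct` stand-in is purely illustrative.

```python
import random

def sample_dae(reconstruct, corrupt, n_bits, steps, rng):
    """Corrupt-reconstruct chain: after enough steps, x approximates a
    sample from the model's learned distribution."""
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        xt = corrupt(x, rng)     # apply the corruption process
        probs = reconstruct(xt)  # decoder output, one probability per bit
        x = [1 if rng.random() < p else 0 for p in probs]
    return x

def flip_one(x, rng):
    """Minimal corruption for the demo: flip a single random bit."""
    i = rng.randrange(len(x))
    return [b ^ (j == i) for j, b in enumerate(x)]

# Toy stand-in for a trained DAE that has learned "mostly ones".
rng = random.Random(7)
sample = sample_dae(lambda xt: [0.9] * len(xt), flip_one,
                    n_bits=20, steps=5, rng=rng)
```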
2.3. Using a Denoising Autoencoder in an EDA
We can use a DAE as the probabilistic model of an EDA. In each generation of the EDA, we train a DAE to model the probability distribution of the solutions which survived the selection process. We then sample the DAE. Each sample is a vector x̂ of real-valued elements. To turn this vector into a candidate solution, i.e., a binary string, we sample each variable xᵢ from a Bernoulli distribution with P(xᵢ = 1) = x̂ᵢ. Then, we evaluate the fitness of the candidate solutions, and let the selection function decide which individuals reach the next generation.
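The decode-and-select step can be sketched as follows: `binarize` turns one real-valued DAE sample into a candidate bit string, and `tournament` is a size-two tournament without replacement as used in the experiments later (the random pairing scheme here is an illustrative choice).

```python
import random

def binarize(sample, rng):
    """Bernoulli decoding: bit i is 1 with probability sample[i]."""
    return [1 if rng.random() < p else 0 for p in sample]

def tournament(pop, fitness, rng):
    """Tournament selection of size two without replacement:
    pair up the population randomly, keep each pair's fitter member."""
    order = rng.sample(range(len(pop)), len(pop))
    return [max(pop[a], pop[b], key=fitness)
            for a, b in zip(order[::2], order[1::2])]

rng = random.Random(0)
cand = binarize([0.0, 1.0, 0.5, 0.5], rng)  # first bit always 0, second always 1
winners = tournament([[0, 0], [1, 1], [0, 1], [1, 0]],
                     fitness=sum, rng=random.Random(1))
```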
Another approach for using a DAE in an EDA-like optimization process was recently suggested in the literature. Contrary to our approach, that DAE is not used as a multivariate EDA model to sample new solutions. Instead, it is trained on only the best 10-20% of the population. Subsequently, it is used to improve a second set of selected individuals from the population: those individuals are first corrupted by the DAE's corruption process and then reconstructed by the DAE, using the encoding and decoding functions.
We present test problems, reference algorithms, experimental setup, and results.
3.1. Test Problems
We evaluate DAE-EDA on concatenated deceptive traps, NK landscapes and the HIFF function. All three are standard benchmark problems. Their difficulty depends on the problem size, i.e., problems with more decision variables are more difficult. Furthermore, the difficulty of concatenated deceptive trap functions and NK landscapes is tunable by a parameter. All three problems are composed of subproblems, which are either deceptive (traps), overlapping (NK landscapes), or hierarchical (HIFF), and therefore multimodal.
Concatenated deceptive traps are tunably hard, yet decomposable test problems. Here, a solution vector x of n bits is divided into n/k subsets of size k, with each one being a deceptive trap. Within a trap, all bits depend on each other but are independent of all other bits in x. Thus, the fitness contributions of the traps can be evaluated separately, and the total fitness of the solution vector is the sum of these terms.
In particular, a trap with u of its k bits set to one contributes

f_trap(u) = k if u = k, and k − 1 − u otherwise.

In other words, the fitness of a single trap increases with the number of zeros, except for the optimum of all ones.
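The trap function just described translates directly into code. This sketch evaluates consecutive blocks of size k, although in general a trap's variables need not be adjacent.

```python
def trap(block):
    """Deceptive trap of size k: optimal for all ones, otherwise
    the contribution grows with the number of zeros."""
    k, u = len(block), sum(block)
    return k if u == k else k - 1 - u

def concatenated_traps(x, k):
    """Total fitness: sum of the trap contributions of all blocks."""
    return sum(trap(x[i:i + k]) for i in range(0, len(x), k))
```

Note the deception: flipping any single one to a zero in a non-optimal block increases that block's contribution, which leads simple hill climbers and univariate models away from the optimum.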
NK landscapes are defined by two parameters, n and k, and n fitness components fᵢ. A solution vector x consists of n bits. The bits are assigned to n overlapping subsets, each of size k + 1. The fitness of a solution is the sum of the n fitness components. Each component fᵢ depends on the value of the corresponding variable xᵢ as well as k other variables, and maps each of the 2^(k+1) possible configurations of its variables to a fitness value. The overall fitness function is

F(x) = Σᵢ₌₁ⁿ fᵢ(xᵢ, xᵢ₁, …, xᵢₖ).

Each decision variable usually influences several fᵢ. These dependencies between subsets make NK landscapes non-separable. The problem difficulty increases with k; the case k = 0 is special, as all decision variables are then independent and the problem reduces to a unimodal onemax. We use instances of NK landscapes with known optima.
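A random NK instance can be generated as in the following sketch. The neighbor choice and the uniform random fitness tables are illustrative defaults, not the specific benchmark instances with known optima used in the paper.

```python
import random

def make_nk(n, k, seed=0):
    """Build a random NK instance: component i depends on bit i and k
    randomly chosen other bits; each component maps each of its 2^(k+1)
    configurations to a uniform random fitness value."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(x):
        total = 0.0
        for i in range(n):
            idx = x[i]
            for j in neighbors[i]:
                idx = idx * 2 + x[j]  # index into component i's table
            total += tables[i][idx]
        return total

    return fitness

f = make_nk(8, 2, seed=1)
v, w = f([0] * 8), f([1] * 8)
```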
The Hierarchical If-and-only-if (HIFF) function is defined for solution vectors of length n = 2^l, where l is the number of layers of the hierarchy. It uses a mapping function M and a contribution function f, both of which take two inputs. The mapping function M takes each of the blocks of two neighboring variables of a level and maps each block onto a single symbol: the assignment 00 is mapped to 0, 11 is mapped to 1, and everything else is mapped to the null symbol '-'. The concatenation of M's outputs on one level is used as M's input on the next level of the hierarchy, i.e., if a level has m variables, the next level has m/2 variables. On each level, f assigns a fitness contribution to each block of two variables: the assignments 00 and 11 both receive a positive contribution, everything else contributes nothing. The total fitness is the sum of all blocks' contributions on all levels. In other words, a block contributes to the fitness on the current level if both variables in the block have the same assignment. However, only if neighboring blocks agree on the assignment will they contribute to the fitness on the next level. HIFF therefore has two global optima: the string of all ones and the string of all zeros.
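Following this level-by-level description, HIFF can be sketched as below. The sketch assumes a unit contribution per matching block on every level; the classic formulation weights contributions by block size, so treat the constants as illustrative.

```python
def hiff(x):
    """HIFF sketch: blocks of two equal symbols contribute on every level
    of the hierarchy; mismatched blocks map to the null symbol."""
    assert (len(x) & (len(x) - 1)) == 0  # length must be a power of two
    total, level = 0, list(x)
    while len(level) > 1:
        nxt = []
        for a, b in zip(level[::2], level[1::2]):
            if a == b and a is not None:
                total += 1        # matching block contributes on this level
                nxt.append(a)     # and propagates its symbol upward
            else:
                nxt.append(None)  # the null symbol '-'
        level = nxt
    return total
```

Both all-zeros and all-ones contribute on every level and are therefore the two global optima, as stated above.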
3.2. Reference Algorithms
Bayesian Optimization Algorithm
The Bayesian Optimization Algorithm is one of the state-of-the-art EDAs for discrete optimization problems. It has been heavily used and researched since its proposal [27, 26, 1].
BOA uses a Bayesian network to model dependencies between variables. Decision variables correspond to nodes, and dependencies between variables correspond to directed edges. As the number of possible network topologies grows exponentially with the number of nodes, BOA uses a greedy construction heuristic to find a network structure that models the training data. Starting from an unconnected (empty) network, BOA evaluates all possible edge additions and adds the edge that maximally increases the fit between the model and the selected individuals. The fit is measured by the Bayesian Information Criterion (BIC), which is based on the conditional entropy of nodes given their parent nodes, with correction terms penalizing complex models. Since the BIC can be calculated independently for all nodes, the change caused by adding a single edge can be computed quickly. BOA's greedy construction repeatedly adds the edge with the largest BIC gain until no more edges can be added; edge additions resulting in cycles are not considered.
After the network structure has been learned, BOA calculates the conditional activation probability tables for each node. Once the model structure and conditional activation probabilities are available, BOA can produce new candidate solutions by drawing random values for all nodes in topological order.
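Drawing values for all nodes in topological order is known as ancestral sampling; a generic sketch follows. This is not BOA's implementation, and `parents` and `cpt` describe a hypothetical two-node network.

```python
import random

def sample_bayes_net(parents, cpt, order, rng):
    """Ancestral sampling: visit nodes in topological order; each node is
    drawn from a Bernoulli whose parameter is looked up in its conditional
    probability table, keyed by the already-sampled parent values."""
    x = {}
    for i in order:
        p = cpt[i][tuple(x[j] for j in parents[i])]
        x[i] = 1 if rng.random() < p else 0
    return [x[i] for i in sorted(x)]

# Hypothetical two-node network: node 0 is a fair coin, node 1 copies node 0.
parents = {0: [], 1: [0]}
cpt = {0: {(): 0.5}, 1: {(0,): 0.0, (1,): 1.0}}
s = sample_bayes_net(parents, cpt, order=[0, 1], rng=random.Random(3))
```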
RBM-EDA uses a Restricted Boltzmann Machine as its multivariate model. Restricted Boltzmann Machines are stochastic neural networks consisting of two layers of neurons, where the connections between the layers form a bipartite graph.

The input or visible layer of an RBM holds the input data, represented by n binary variables. The second, hidden layer consists of m neurons. There is no dedicated output layer in an RBM. A weight matrix W holds the weights between all pairs of visible and hidden neurons. From a structural point of view, an RBM resembles an Autoencoder with one hidden layer where the output layer has been "folded" back onto the input layer.

An RBM can be used as a model within an EDA because it can be trained to model a probability distribution, and it is possible to draw samples from this model [33, 12, 13]. Training the RBM means adjusting W s.t. the RBM models the probability distribution of the training data. This can be done using the gradient descent algorithm contrastive divergence. Sampling new individuals from the model's probability distribution can be performed using Gibbs sampling.
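For contrast with the DAE's deterministic corrupt-reconstruct step, one Gibbs sampling step in an RBM alternates stochastic updates of the two layers. A minimal sketch, assuming zero biases for brevity:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gibbs_step(v, W, rng):
    """One Gibbs step: sample the hidden layer given the visible layer,
    then resample the visible layer given the hidden one (zero biases)."""
    n, m = len(W), len(W[0])
    h = [1 if rng.random() < sigmoid(sum(W[i][j] * v[i] for i in range(n))) else 0
         for j in range(m)]
    v = [1 if rng.random() < sigmoid(sum(W[i][j] * h[j] for j in range(m))) else 0
         for i in range(n)]
    return v, h

v, h = gibbs_step([1, 0], [[4.0], [-4.0]], random.Random(2))
```

Iterating `gibbs_step` yields a Markov chain over visible configurations, analogous to the DAE's corrupt-reconstruct chain described earlier.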
Previous work has shown that RBM-EDA is competitive with BOA. For difficult problems, it needs a moderately higher, but still non-exponential, number of fitness evaluations; however, its runtime grows more slowly with larger problem sizes. We compare DAE-EDA to RBM-EDA because the two are closely related in terms of the models' structure and training process.
PBIL is one of the simplest EDAs. It assumes conditional independence of all problem variables and stores a vector p of activation probabilities. PBIL creates new individuals by sampling each variable xᵢ from a Bernoulli distribution with P(xᵢ = 1) = pᵢ. In each EDA generation t, PBIL selects the best individuals from the population and updates each pᵢ as

pᵢ ← (1 − α) pᵢ + α x̄ᵢ,

where x̄ᵢ is the average value of bit i among the selected individuals and the learning rate α determines the strength of the update. We include PBIL in the experiments because its results give an intuitive measure of the difficulty of the test problems.
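PBIL fits in a few lines. This sketch updates the probability vector toward the single best individual of each generation, matching the configuration used in the experiments below; the parameter values are illustrative.

```python
import random

def pbil(fitness, n_bits, pop_size, generations, alpha, seed=0):
    """PBIL: a vector of independent activation probabilities is nudged
    toward the best individual of each generation."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        leader = max(pop, key=fitness)
        if fitness(leader) > best_fit:
            best, best_fit = leader, fitness(leader)
        # Move each probability toward the best individual's bit value.
        p = [(1 - alpha) * p[i] + alpha * leader[i] for i in range(n_bits)]
    return best

# Onemax is separable, so PBIL's independence assumption is harmless here.
best = pbil(fitness=sum, n_bits=16, pop_size=50, generations=60, alpha=0.1)
```

On a deceptive trap, by contrast, the per-bit statistics point away from the optimum, which is why PBIL serves as a difficulty baseline.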
3.3. Experimental Setup
We use several instances of the test problems (see Section 3.4). For each instance and algorithm, we test multiple population sizes; the candidate sizes differ between DAE-EDA, RBM-EDA, and BOA on the one hand and PBIL on the other. We run 20 instances for each population size.
In each run, the EDA is allowed to run for 100 generations (2000 for PBIL). We terminate the EDAs if there is no improvement in the best solution for more than 20 generations (400 for PBIL). We report the average number of fitness evaluations and CPU time for the best solutions of all runs. All EDAs use tournament selection without replacement of size two.
For PBIL, we use only the best individual in each generation to update the model. For the RBM, we use the same parameter settings as in previous work.
The algorithms were implemented in Matlab/Octave and executed using Octave V3.2.4 on a single core of an AMD Opteron 6272 processor clocked at 2.1 GHz.
We use the following parameters for the DAE: The number of hidden neurons is equal to the problem size n. The corruption process randomly corrupts 10% of the inputs by setting them to 0 or 1 (salt-and-pepper noise). When sampling new candidate solutions from the DAE, we perform a fixed number of sampling steps. During training, the learning rate is 0.2, and we use mini-batch stochastic gradient descent. We use the cross-entropy error measure for (1).
Following previous work, we apply a simple parameter control scheme to determine when to terminate DAE training. The scheme is based on the reconstruction error E, which usually decreases with the number of epochs. Every second epoch, we calculate E for a fixed subset of the training set and measure its relative decrease over the last 33% of all epochs. This quantity is used to automatically check for convergence of the training: we stop training once the relative decrease falls below a fixed threshold. The rationale is that the DAE has then learned the relevant dependencies between the variables, and further training is unlikely to improve the model considerably. Furthermore, we stop the training if the DAE is overfitting, i.e., learning noise instead of problem structure. To detect this, we split the original training set into a training set containing 90% of all samples and a validation set containing the remaining 10%. We train the DAE only on the solutions in the training set and, after each epoch, calculate the reconstruction error for both the training and the validation set. We stop the training phase as soon as the validation error exceeds the training error by more than 10% (i.e., the difference between the reconstruction errors is larger than 10%).
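The two stopping checks can be sketched as one function operating on the per-epoch error histories. The convergence threshold `tol_conv` is a placeholder, as the exact value is not given above.

```python
def should_stop(err_train, err_valid, window=0.33, tol_overfit=0.10, tol_conv=0.01):
    """Stop when the validation error exceeds the training error by more
    than tol_overfit (overfitting), or when the training reconstruction
    error has decreased by less than tol_conv over roughly the last
    `window` share of all epochs (convergence)."""
    # Overfitting check: validation error too far above training error.
    if err_valid[-1] > err_train[-1] * (1.0 + tol_overfit):
        return True
    # Convergence check: error barely decreasing over the recent window.
    w = max(2, int(len(err_train) * window))
    recent = err_train[-w:]
    decrease = (recent[0] - recent[-1]) / recent[0]
    return decrease < tol_conv
```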
We report the performance of DAE-EDA, RBM-EDA, BOA, and PBIL for concatenated deceptive traps of order k = 4 (20, 40, and 60 bits) and k = 5 (25, 50, and 75 bits), NK landscapes of several sizes and difficulties (two instances each), as well as the 128-bit HIFF function (see Table 1). For each instance and algorithm, we select the minimal population size which leads to the optimal solution in at least 50% of the runs (left result columns of Table 1) and in at least 90% of the runs (right columns). We report the average number of fitness evaluations and CPU time of those runs.
First, we analyze the number of fitness evaluations required. For all problems, both for the runs with at least 50% and those with at least 90% success rate, PBIL uses the highest number of fitness evaluations.
As expected, BOA has the best performance in terms of fitness evaluations. This is consistent with the previous findings comparing RBM-EDA and BOA .
Both DAE-EDA and RBM-EDA consistently use more fitness evaluations than BOA, except for some of the NK landscapes. However, most of the time the number of fitness evaluations is of the same order of magnitude, and clearly better than that of the univariate PBIL. For both the runs with at least 50% and those with at least 90% success rate, DAE-EDA and RBM-EDA are about tied for the number of instances with the fewest fitness evaluations.
For the 128-bit HIFF problem, DAE-EDA needed only 45% of the fitness evaluations of the previously proposed DAE-inspired optimizer. We attribute this mainly to the sampling process, which samples from the trained model's distribution directly instead of using the DAE as a tool for an advanced local search that modifies selected individuals.
Second, we look at the average time the algorithms required to solve the respective problem. If PBIL is able to solve the problem to optimality, it is usually the fastest algorithm, because fitness evaluations are computationally inexpensive for all benchmark problems. Note, however, that for the 60-bit concatenated 4-trap problem, where PBIL finds the optimal solution in at least 50% of the runs, DAE-EDA is nevertheless faster, as it needs only about 1‰ of PBIL's fitness evaluations.
For all but one instance, DAE-EDA is significantly faster than both RBM-EDA and BOA, sometimes by multiple orders of magnitude (see Section 4). This is also true for the instances where RBM-EDA needs a lower number of fitness evaluations. This is due to the much quicker model building of the DAE.
Table 1. Average number of fitness evaluations and CPU time (standard deviation in parentheses) for the minimal population size such that the optimum is found in at least 50% and in at least 90% of the runs.

| Problem | Algorithm | Evaluations (50%) | Time, s (50%) | Evaluations (90%) | Time, s (90%) |
|---|---|---|---|---|---|
| 4-Traps 20 bit | DAE-EDA | 2,550 (1,150) | 18 (6.9) | 4,450 (1,359) | 21 (9.2) |
| 4-Traps 40 bit | DAE-EDA | 37,400 (14,030) | 90 (28) | 37,400 (14,030) | 90 (28) |
| 4-Traps 60 bit | DAE-EDA | 61,800 (22,225) | 182* (58) | 292,000 (55,857) | 823* (216) |
| 5-Traps 25 bit | DAE-EDA | 11,650 (5,350) | 44 (13) | 11,650 (5,350) | 44 (13) |
| 5-Traps 50 bit | DAE-EDA | 57,750 (18,250) | 200* (44) | 57,750 (18,250) | 200* (44) |
| 5-Traps 75 bit | DAE-EDA | 96,500 (32,049) | 519* (111) | 247,500 (45,373) | 1,297* (233) |
| NK , , | DAE-EDA | 10,725 (3,976) | 52 (12) | 31,900 (6,971) | 79* (16) |
| NK , , | DAE-EDA | 49,100 (13,423) | 109 (24) | 328,000 (77,974) | 396* (108) |
| NK , , | DAE-EDA | 51,300 (16,844) | 126 (31) | 96,800 (24,351) | 171 (46) |
| NK , , | DAE-EDA | 26,950* (8,176) | 99 (23) | 175,600 (45,218) | 279* (81) |
| NK , , | DAE-EDA | 29,000 (8,349) | 98 (15) | 232,000 (56,114) | 323 (82) |
| NK , , | DAE-EDA | 99,600 (12,706) | 158 (25) | 182,400 (40,917) | 240* (62) |
| NK , , | DAE-EDA | 189,200 (63,964) | 340* (122) | 309,600 (88,692) | 455* (136) |
| NK , , | DAE-EDA | 213,200 (53,861) | 347* (101) | 432,000 (92,330) | 616* (143) |
The results suggest that DAE-EDA is able to properly decompose the test problems and solve the parts independently. This becomes evident when looking at the more complicated problems, where the univariate PBIL struggles or fails. The quality of DAE-EDA's underlying probabilistic model is similar to that of RBM-EDA, but not as good as BOA's.
An interesting aspect of DAE-EDA is its speedy model building and sampling process. The CPU time to solve the test problems is much lower than that of the other multivariate methods, sometimes by multiple orders of magnitude. Note that the direct comparison of CPU times is not entirely fair for BOA: in a more efficient programming language, instead of a script-based language like Matlab/Octave, BOA's speedup would be significantly higher than that of DAE-EDA and RBM-EDA. However, almost every recent implementation of neural networks is parallelized on graphics processing units (GPUs), which speeds up training and sampling these models considerably (see e.g. [15, 17, 34]). Parallelizing multivariate EDAs such as BOA is possible, but the speedups are often only single- or double-digit, even on GPUs (see e.g. [24, 22]). In contrast, parallelized EDAs using neural networks can saturate modern GPU hardware and yield very high speedups: speedups of up to 200, against optimized CPU code, have been reported for RBM-EDA, which uses a neural network model closely related to the DAE. Hence, it is reasonable to assume that an efficient GPU-based implementation of DAE-EDA will be very fast compared to other EDAs on problems with low, but non-negligible, fitness evaluation costs.
Regarding model quality, we now take a closer look at the results of two selected test instances where DAE-EDA needs a higher number of fitness evaluations than both BOA and RBM-EDA: the 75-bit concatenated 5-trap problem (Figure 2) and an NK landscape instance (Figure 3). In both figures, the left-hand side shows the number of fitness evaluations and the right-hand side shows the CPU time. Each data point marks the average results for a specific population size. Lines connect data points with adjacent population sizes.
For the 75-bit 5-trap problem, we see that DAE-EDA approaches the optimum faster (i.e., with a smaller population size, fewer fitness evaluations, and less total time) than both RBM-EDA and BOA. For the NK landscape, DAE-EDA is comparable to BOA. However, for both problems, the transition region from partial success to complete success is larger for DAE-EDA. In other words, DAE-EDA quickly finds optimal or close-to-optimal solutions with few evaluations in some runs, but often needs much larger populations for reliable convergence in every run. This pattern is qualitatively similar for other problems: on average, DAE-EDA needs 2.5 times the number of fitness evaluations to make 90% rather than 50% of the runs converge, compared to 1.9 times for RBM-EDA and a corresponding factor for BOA (see Table 1). This suggests that, in the current configuration, DAE-EDA is more dependent on the initialization of the DAE's parameters. If, by chance, they are initialized in a particularly unfavorable way, the model is not able to learn an optimal hidden representation, and the amount of random noise injected by the corruption function is not sufficient to compensate for this effect. This hinders DAE-EDA from exploring the solution space more efficiently, and poses an interesting area for further research.
We introduced DAE-EDA, an Estimation of Distribution Algorithm which uses a Denoising Autoencoder as its probabilistic model for solving combinatorial optimization problems. DAE-EDA uses a DAE to approximate the probability distribution of the fittest individuals and subsequently samples new candidate solutions from this distribution. We tested DAE-EDA on several instances of the standard benchmark problems concatenated deceptive traps, NK landscapes, and the HIFF function. We compared the results to those of multiple other EDAs: the state-of-the-art Bayesian Optimization Algorithm, the multivariate RBM-EDA (another EDA based on stochastic neural networks, which has been shown to be computationally less expensive than BOA for complicated problems), and the univariate PBIL.
The results show that DAE-EDA is a very fast EDA that is able to decompose complicated problems. It needs a similar number of fitness evaluations as RBM-EDA, but does not reach the model quality of BOA. However, it is much faster than both RBM-EDA and BOA, as training and sampling its probabilistic model are conceptually simpler and computationally cheaper. Furthermore, since a DAE is structurally similar to an RBM, we can expect the speedup of a parallelized version of DAE-EDA on modern graphics processing units to be very high.
In sum, DAE-EDA can be a useful tool for solving complex combinatorial optimization problems, where fitness evaluation costs are low, but non-negligible.
There are multiple directions for further research. The results suggest that it could be beneficial to look into the parameter initialization more thoroughly, as DAE-EDA’s performance seems to be more susceptible to unfavorable initial parameters. Another promising direction is to use a deep, multi-layered DAE to solve hierarchical problems. Also, other techniques for sampling a DAE exist, which may result in a different performance of DAE-EDA.
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions on an earlier version of this paper.
- The variables assigned to a trap do not have to be adjacent, but can be at any position in the solution vector.
- A. Abdollahzadeh, A. Reynolds, M. Christie, D. W. Corne, B. J. Davies, G. J. Williams, et al. Bayesian optimization algorithm applied to uncertainty quantification. SPE Journal, 17(03):865–873, 2012.
- G. Alain and Y. Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15:3563–3593, 2014.
- S. Baluja. Population-based Incremental Learning. A Method for Integrating Genetic Search-based Function Optimization and Competitive Learning. Technical Report CMU-CS-94-163, Carnegie Mellon University, Pittsburgh, PA, 1994.
- Y. Bengio. Learning deep architectures for AI. Foundations and Trends Machine Learning, 2(1):1–127, Jan. 2009.
- Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, 2007.
- Y. Bengio, L. Yao, G. Alain, and P. Vincent. Generalized Denoising Auto-Encoders as Generative Models. In Advances in Neural Information Processing Systems 26 (NIPS’13). NIPS Foundation (http://books.nips.cc), 2013.
- C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, 2006.
- A. W. Churchill, S. Sigtia, and C. Fernando. A denoising autoencoder that guides stochastic search. arXiv preprint arXiv:1404.1614, 2014.
- K. Deb and D. E. Goldberg. Analyzing Deception in Trap Functions. Foundations of Genetic Algorithms, 2:93–108, 1993.
- S. Geman and D. Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6(6):721–741, 1984.
- G. R. Harik, F. G. Lobo, and K. Sastry. Linkage learning via probabilistic modeling in the extended compact genetic algorithm (ECGA). In Scalable optimization via probabilistic modeling, pages 39–61. Springer, 2006.
- G. E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 14:1771–1800, 2002.
- G. E. Hinton, S. Osindero, and Y.-W. Teh. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18:1527–1554, 2006.
- G. E. Hinton and R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786):504–507, 2006.
- G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
- S. A. Kauffman and E. D. Weinberger. The NK Model of Rugged Fitness Landscapes and its Application to Maturation of the Immune Response. Journal of Theoretical Biology, 141(2):211–245, 1989.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet Classification with Deep Convolutional Neural Networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
- P. Larrañaga, H. Karshenas, C. Bielza, and R. Santana. A Review on Probabilistic Graphical Models in Evolutionary Computation. Journal of Heuristics, 18(5):795–819, 2012.
- P. Larrañaga and J. A. Lozano. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Genetic Algorithms and Evolutionary Computation, 2. Kluwer Academic Pub, 2002.
- B. L. Miller and D. E. Goldberg. Genetic Algorithms, Tournament Selection, and the Effects of Noise. Complex Systems, 9:193–212, 1995.
- H. Mühlenbein and G. Paaß. From Recombination of Genes to the Estimation of Distributions I. Binary Parameters. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature - PPSN IV, volume 1141 of Lecture Notes in Computer Science, pages 178–187. Springer Berlin Heidelberg, 1996.
- A. Munawar, M. Wahib, M. Munetomo, and K. Akama. Theoretical and Empirical Analysis of a GPU-based Parallel Bayesian Optimization Algorithm. In 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies, pages 457–462. IEEE, 2009.
- K. P. Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
- J. Očenášek and J. Schwarz. The Parallel Bayesian Optimization Algorithm. In The State of the Art in Computational Intelligence, pages 61–67. Springer, 2000.
- M. Pelikan. Bayesian Optimization Algorithm. In Hierarchical Bayesian Optimization Algorithm, volume 170 of Studies in Fuzziness and Soft Computing, pages 31–48. Springer, 2005.
- M. Pelikan. Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on NK Landscapes. Technical Report 2008001, Missouri Estimation of Distribution Algorithms Laboratory (MEDAL), January 2008. We used the first 25 instances in each subfolder of the compressed container.
- M. Pelikan and D. E. Goldberg. Hierarchical BOA Solves Ising Spin Glasses and MAXSAT. In Genetic and Evolutionary Computation Conference (GECCO 2003), pages 1271–1282. Springer, 2003.
- M. Pelikan, D. E. Goldberg, and E. Cantu-Paz. BOA: The Bayesian Optimization Algorithm. In Genetic and Evolutionary Computation Conference (GECCO 1999), pages 525–532, 1999.
- M. Pelikan and H. Mühlenbein. The Bivariate Marginal Distribution Algorithm. Advances in Soft Computing-Engineering Design and Manufacturing, pages 521–535, 1999.
- M. Probst, F. Rothlauf, and J. Grahl. An Implicitly Parallel EDA Based on Restricted Boltzmann Machines. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO 2014), pages 1055–1062, New York, NY, USA, 2014. ACM.
- M. Probst, F. Rothlauf, and J. Grahl. Scalability of Using Restricted Boltzmann Machines for Combinatorial Optimization. arXiv preprint arXiv:1411.7542, 2014.
- G. Schwarz. Estimating the Dimension of a Model. The Annals of Statistics, 6(2):461–464, 1978.
- P. Smolensky. Information Processing in Dynamical Systems: Foundations of Harmony Theory. In D. E. Rumelhart, J. L. McClelland, and C. PDP Research Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pages 194–281. MIT Press, 1986.
- I. Sutskever, O. Vinyals, and Q. V. V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., 2014.
- P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM, 2008.
- R. A. Watson, G. S. Hornby, and J. B. Pollack. Modeling Building-Block Interdependency. In Parallel Problem Solving from Nature - PPSN V, pages 97–106. Springer, 1998.