Optimizing Geometric Multigrid Methods with Evolutionary Computation

Jonas Schmitt (jonas.schmitt@fau.de), Sebastian Kuckuk (sebastian.kuckuk@fau.de) and Harald Köstler (harald.koestler@fau.de)
Friedrich-Alexander University Erlangen-Nürnberg (FAU), Chair for System Simulation, Department of Computer Science, Cauerstraße 11, 91058 Erlangen, Germany
Abstract.

For many linear and nonlinear systems that arise from the discretization of partial differential equations, the construction of an efficient multigrid solver is a challenging task. Here we present a novel approach for the optimization of geometric multigrid methods that is based on evolutionary computation, a generic program optimization technique inspired by the principle of natural evolution. A multigrid solver is represented as a tree of mathematical expressions that we generate based on a tailored grammar. The quality of each solver is evaluated in terms of convergence and compute performance using automated local Fourier analysis (LFA) and roofline performance modeling, respectively. Based on these objectives, a multi-objective optimization is performed using strongly typed genetic programming with a non-dominated sorting based selection. To evaluate the model-based prediction and to target concrete applications, scalable implementations of an evolved solver can be automatically generated with the ExaStencils framework. We demonstrate our approach by constructing multigrid solvers for the steady-state heat equation with constant and variable coefficients that consistently perform better than common V- and W-cycles.


1. Introduction

Solving the linear or nonlinear systems that arise from the discretization of partial differential equations (PDEs) efficiently is an outstanding challenge. The huge number of unknowns in many of these systems necessitates the design of fast and scalable solvers. Unfortunately, the optimal solution method highly depends on the system itself, and it is therefore infeasible to formulate a single algorithm that works efficiently in all cases. Multigrid methods are a class of asymptotically optimal solution algorithms for (non-)linear systems. Although in the last decades great effort has been put into the design of efficient multigrid methods for many important cases, such as the Navier-Stokes or Maxwell's equations, this task is still not fully solved. Within this paper we propose a novel approach for the automatic optimization of multigrid solvers through the use of evolutionary computation. Our approach builds on the work by Mahmoodabadi and Köstler (Mahmoodabadi2017), in which it was first demonstrated how genetic programming (GP) (koza1994genetic) can be used to optimize the iteration matrices of stationary iterative methods. In contrast to this work, where the iteration matrix was assembled explicitly, we represent iterative methods as symbolic expressions, which are independent of the size of the linear system. For this purpose we propose a new formal grammar for the automatic generation of multigrid expressions. We furthermore show how we can automatically obtain convergence and performance estimates for geometric multigrid solvers on rectangular grids of arbitrary size and how, based on these metrics, a multi-objective optimization can be performed using genetic programming and evolution strategies (ES) (beyer2002evolution). In addition, the evolved multigrid solvers are emitted in the form of a domain-specific language (DSL) specification, and the ExaStencils code generation framework is employed to automatically generate a scalable implementation on the target platform. Our approach is therefore similar to the work by Thekale et al. (thekale2010optimizing), where the number of full multigrid cycles was optimized based on a cost model, but aims to achieve more generality by considering the construction of multigrid expressions as a program optimization task. Finally, we demonstrate our approach by generating efficient solvers for the steady-state heat equation with constant and variable coefficients.

2. A Formal Grammar for Generating Multigrid Solvers

The task of constructing a multigrid solver for a certain problem is typically performed by a human expert with profound knowledge in numerical mathematics. To automate this task, we first need a way to represent multigrid solvers in a formal language that we can then use to automatically construct different instances on a computer. The rules of this language must ensure that only valid solver instances can be defined, which means that we can automatically determine both their convergence and runtime behavior. Additionally, we want to enforce that the generated method works on a hierarchy of grids, which requires the availability of inter-grid operations that allow us to obtain approximations of the same operator or grid on a finer or coarser level. Consider the general system of linear equations defined on a grid with spacing $h$

(1)   $A_h x_h = b_h$,

where $A_h$ is the coefficient matrix, $x_h$ the unknown and $b_h$ the right-hand side of the system. Each component of a multigrid solver can be written in the following form:

(2)   $x_h^{i+1} = x_h^i + \omega B_h (b_h - A_h x_h^i)$,

where $x_h^i$ is the approximate solution in iteration $i$, $\omega$ the relaxation factor and $B_h$ an operator defined on the given level with spacing $h$. For example, with the splitting $A_h = D_h - L_h - U_h$ into the diagonal, strictly lower and strictly upper triangular parts of $A_h$, we can define the Jacobi method

(3)   $x_h^{i+1} = x_h^i + \omega D_h^{-1} (b_h - A_h x_h^i)$

and the lexicographical Gauss-Seidel method

(4)   $x_h^{i+1} = x_h^i + \omega (D_h - L_h)^{-1} (b_h - A_h x_h^i)$.

If we assume the availability of a prolongation operator $P$, a restriction operator $R$ and an approximation $A_{2h}^{-1}$ for the inverse of $A_h$ on the coarser grid, a coarse grid correction can be defined as

(5)   $x_h^{i+1} = x_h^i + \omega P A_{2h}^{-1} R (b_h - A_h x_h^i)$.

Furthermore, we can substitute $x_h^i$ in (5) with the right-hand side of (3) and obtain a two-grid method with Jacobi pre-smoothing

(6)   $x_h^{i+1} = \bar{x}_h^i + \omega P A_{2h}^{-1} R (b_h - A_h \bar{x}_h^i)$ with $\bar{x}_h^i = x_h^i + \omega D_h^{-1} (b_h - A_h x_h^i)$.

By repeatedly substituting subexpressions we can automatically construct a single expression for any multigrid solver. If we take the set of possible substitutions as a basis, we can define a list of rules according to which we can generate such an expression. We specify these rules in the form of a context-free grammar, which is described in Figure 1 for classical pointwise smoothers.

(a) Syntax: production rules over the functions iterate, apply, residual, coarse-cycle and coarse-grid-correction, the smoother expression inverse(diagonal(A_h)), an optional red-black partitioning terminal and the empty string ε.
function iterate((x_h, b_h, c_h, s), ω, P)
     x_h' ← x_h + ω c_h, with the update performed in sweeps according to the partitioning P
     return (x_h', b_h, ε, s)
end function
function apply(B_h, (x_h, b_h, c_h, s))
     c_h' ← B_h c_h
     return (x_h, b_h, c_h', s)
end function
function residual(A_h, (x_h, b_h, c_h, s))
     c_h' ← b_h − A_h x_h
     return (x_h, b_h, c_h', s)
end function
function coarse-cycle(A_2h, x_2h, (x_h, b_h, c_h, s))
     b_2h ← c_h
     c_2h ← ε
     s' ← (x_h, b_h, c_h, s)
     return (x_2h, b_2h, c_2h, s')
end function
function coarse-grid-correction(P, (x_2h, b_2h, c_2h, (x_h, b_h, c_h, s)))
     c_h' ← P x_2h
     return (x_h, b_h, c_h', s)
end function
(b) Semantics
Figure 1. A formal grammar for the generation of multigrid solvers.

Figure 1(a) contains the production rules while Figure 1(b) describes their semantics. Each rule defines the set of expressions by which a certain production symbol can be replaced. To generate an expression, starting with the start symbol, this process is recursively repeated until the produced expression contains only terminal symbols or the empty string ε. The construction of a multigrid solver comprises the recursive generation of cycles on multiple levels. Consequently, we must be able to create a new system of linear equations on a coarser level, including a new initial solution, right-hand side and coefficient matrix. Moreover, if we decide to finish the computation on a certain level, we must be able to restore the state of the next finer level, i.e. the current solution and right-hand side, when applying the coarse grid correction. The current state of a multigrid solver on a certain level with grid spacing $h$ is represented as a tuple $(x_h, b_h, c_h)$, where $x_h$ represents the current iterate, $b_h$ the right-hand side and $c_h$ a correction expression. To restore the current state on the next finer level, we additionally include a reference $s$ to the corresponding state tuple. According to Figure 1(a), the construction of a multigrid solver always ends when the tuple $(x_h, b_h, \varepsilon, \varepsilon)$ is reached. This tuple contains the initial solution and right-hand side on the finest level and therefore corresponds to the original system of linear equations that we aim to solve. Here we have neither yet computed a correction, nor do we need to restore any state, and hence both remaining entries contain the empty string. In general, our grammar includes three functions that operate on a fixed level.

The function iterate generates a new state tuple from a given one by applying the correction $c_h$ to the current iterate $x_h$ using the relaxation factor $\omega$. If available, a partitioning can be included to perform the update in multiple sweeps on subsets of the grid points, e.g. for a red-black Gauss-Seidel iteration. The function residual creates a residual expression based on the given state tuple, which is assigned as the new correction. A correction can be transformed with the function apply, which generates a new correction by applying a linear operator to the old one. For example, the following function applications evaluate to one iteration of damped Jacobi smoothing:

iterate(apply($D_h^{-1}$, residual($A_h$, ($x_h$, $b_h$, ε, s))), $\omega$, ε) = ($x_h + \omega D_h^{-1} (b_h - A_h x_h)$, $b_h$, ε, s),

where $D_h^{-1}$ = inverse(diagonal($A_h$)) is the inverse diagonal of $A_h$, as defined above.

Finally, it remains to be shown how one can recursively create a multigrid cycle on the next coarser level and then apply the result of its computation to the current approximate solution. This is accomplished through the functions coarse-cycle and coarse-grid-correction. The former expects a state tuple to which the restriction has already been applied. It then creates a new state on the next coarser level using the initial solution $x_{2h}$, the operator $A_{2h}$ and the restricted correction as right-hand side $b_{2h}$. Note that on the coarsest level the resulting system of linear equations can be solved directly, which is denoted by the application of the inverse coarse grid operator $A_{2h}^{-1}$. To enable the reestablishment of the previous state, a reference to it is stored in the new tuple. If the computation is finished, the function coarse-grid-correction comes into play. It first restores the previous state on the next finer level and then computes a coarse grid correction by applying the interpolation operator to the solution computed on the coarser grid, which is then used as a new correction on the finer level. Again, the following example application demonstrates the semantics of these functions for a two-grid cycle with a direct coarse-level solve, where we abbreviate coarse-grid-correction with cgc:

cgc($P$, coarse-cycle($A_{2h}$, $A_{2h}^{-1} b_{2h}$, apply($R$, residual($A_h$, ($x_h$, $b_h$, ε, s))))) = ($x_h$, $b_h$, $P A_{2h}^{-1} R (b_h - A_h x_h)$, s).
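To make these semantics concrete, the following Python sketch implements the state tuple and the grammar's functions as operations on symbolic expressions represented as plain strings. It is our own illustration, not the EvoStencils implementation; all names are chosen for this example only.

```python
from collections import namedtuple

# State of the solver on one level: current iterate, right-hand side,
# correction expression and a reference to the state on the next finer level.
State = namedtuple("State", ["x", "b", "c", "predecessor"])

def iterate(state, omega, partitioning=None):
    """Apply the correction to the current iterate (cf. Figure 1(b))."""
    sweep = f" [{partitioning}]" if partitioning else ""
    x_new = f"({state.x} + {omega} * {state.c}{sweep})"
    return State(x_new, state.b, None, state.predecessor)

def residual(A, state):
    """Replace the correction with the residual b - A x."""
    return State(state.x, state.b, f"({state.b} - {A} * {state.x})", state.predecessor)

def apply_op(B, state):
    """Apply a linear operator to the current correction."""
    return State(state.x, state.b, f"({B} * {state.c})", state.predecessor)

def coarse_cycle(A_coarse, x_coarse, state):
    """Open a new state on the coarser level: the restricted correction
    becomes the coarse right-hand side, and the old state is kept as a
    reference (A_coarse defines the coarse system and is kept for fidelity
    to the signature in Figure 1(b))."""
    return State(x_coarse, state.c, None, state)

def coarse_grid_correction(P, coarse_state):
    """Restore the finer state and turn the interpolated coarse solution
    into the new correction on the finer level."""
    finer = coarse_state.predecessor
    return State(finer.x, finer.b, f"({P} * {coarse_state.x})", finer.predecessor)

# One step of damped Jacobi, expressed through the grammar's functions:
s0 = State("x_h", "b_h", None, None)
s1 = iterate(apply_op("D_h^-1", residual("A_h", s0)), 0.8)
print(s1.x)  # (x_h + 0.8 * (D_h^-1 * (b_h - A_h * x_h)))
```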
With the implementation of these functions we have completed the definition of the syntax and semantics of our formal grammar for the generation of multigrid solvers. It must be mentioned that this grammar imposes certain restrictions on the structure of the generated solver. First of all, we only allow the application of operators, either for smoothing, restriction or prolongation, to the current residual. This is a sufficient constraint for the generation of multigrid solvers for linear systems, though nonlinear multigrid methods, such as the full approximation scheme (FAS) (briggs2000multigrid; trottenberg2000multigrid), require the restriction and prolongation of the current solution and can therefore not be generated with the grammar presented here. Furthermore, we assume that the right-hand side is only available on the finest grid. A full multigrid (FMG) scheme starts on the coarsest grid and hence requires the availability of a right-hand side on each level, which could for instance be introduced as an additional terminal symbol. Besides these obvious restrictions, one could arbitrarily loosen the constraints implicitly made within the grammar and enable the combination of coarse grid corrections that originate from different expressions on a finer level. Even though we cannot preclude that it is possible to generate improved multigrid methods without these restrictions, this work only represents a first step towards the automatic generation and optimization of these methods. We do not claim to consider all possible variations, but instead focus on the classical multigrid formulation, as presented in (hackbusch2013multi; briggs2000multigrid; trottenberg2000multigrid). Since we have shown how it is possible to generate expressions that uniquely represent different multigrid solvers using the formal grammar defined in Figure 1, the remainder of this paper focuses on the evaluation and optimization of the resulting algorithms based on this representation.

3. Multi-Objective Optimization with Evolutionary Computation

The fundamental requirement for the optimization of an iterative method is a way to evaluate both its rate of convergence and its performance on the target machine. As we want to fully automate this process, we must be able to perform all steps from the generation of a solver to its evaluation without requiring any human intervention. In general, there are two possibilities to automatically evaluate an algorithm. Assuming there exists a code generator that is able to generate machine instructions from a high-level algorithm, one could first translate it to this representation, then employ the generator to emit an executable and finally run it to evaluate both objectives. The main disadvantage of this approach is that, depending on the execution time of the code generator, it can be infeasible. The second possibility is to use predictive models to obtain an approximation for both objectives in significantly less compute time. This work focuses on the automatic optimization of geometric multigrid solvers on rectangular grids. In this case we can represent all matrices as one or multiple stencils, and there exist models, which we briefly explain in the following, that allow us to predict the quality of a multigrid solver with respect to both objectives. However, since it cannot be expected that these predictions are always accurate, we additionally employ code generation to evaluate the best solvers of each optimization run.

3.1. Convergence estimation

The quality of an iterative method is first and foremost determined by its rate of convergence, i.e. the speed at which the approximation error approaches machine precision. One iteration of a multigrid solver can be expressed in the general form of equation (2). By separating all terms that contain the current iterate from the rest of the equation, we obtain the following form:

(7)   $x_h^{i+1} = (I - \omega B_h A_h) x_h^i + \omega B_h b_h$,

where $I$ is the unit matrix. The iteration matrix $M$ of the given multigrid solver is then given by

(8)   $M = I - \omega B_h A_h$.

The spectral radius of this matrix, as defined by

(9)   $\rho(M) = \max_i |\lambda_i(M)|$,

where $\lambda_i(M)$ are the eigenvalues of $M$, is essential for the convergence of the method. Assume $x_h$ is the exact solution of the system; the error $e^i = x_h^i - x_h$ in iteration $i$ then satisfies

(10)   $e^i = M^i e^0$,

where $M^i$ is the $i$-th power of $M$. The convergence factor of this sequence is the limit

(11)   $\lim_{i \to \infty} \|M^i\|^{1/i}$,

which is equal to the spectral radius of the iteration matrix (saad2003iterative). In general, the computation of the spectral radius is of complexity $O(n^3)$ for $M \in \mathbb{R}^{n \times n}$. If we, however, restrict ourselves to geometric multigrid solvers on rectangular grids, we can employ local Fourier analysis (LFA) to obtain an estimate for $\rho(M)$ (wienands2004practical). LFA considers the original problem on an infinite grid while the boundary conditions are neglected. Recently, LFA has been automated through the use of software packages (wienands2004practical; BRFourier2018). LFA Lab (https://github.com/hrittich/lfa-lab) is a library for the automatic local Fourier analysis of constant and periodic stencils (BRFourier2018) on rectangular grids. To automatically estimate the convergence factor of a multigrid solver using this tool, we first need to obtain the iteration matrix. Using the grammar described in the previous section, we always generate expressions of the form of equation (2), from which we can extract the iteration matrix using the transformation described above. Finally, we emit the resulting expression, which represents the iteration matrix of our multigrid solver, in the form of an LFA Lab expression, for which we can automatically estimate the spectral radius.
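To illustrate the principle behind such an estimate without relying on LFA Lab itself, the following self-contained sketch samples the Fourier symbol of the damped Jacobi iteration matrix for the 2D Poisson 5-point stencil and computes its smoothing factor, i.e. the maximum symbol modulus over the high-frequency modes; the sampling resolution is an arbitrary choice.

```python
import numpy as np

def jacobi_smoothing_factor(omega, n=256):
    """Sample the Fourier symbol of damped Jacobi for the 2D Poisson
    5-point stencil and return its maximum over the high frequencies."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    t1, t2 = np.meshgrid(theta, theta)
    # Symbol of A_h (5-point stencil, scaled by h^2) and of its diagonal.
    A_hat = 4.0 - 2.0 * np.cos(t1) - 2.0 * np.cos(t2)
    D_hat = 4.0
    # Symbol of the iteration matrix M = I - omega * D^{-1} A.
    M_hat = 1.0 - omega * A_hat / D_hat
    # High frequencies: modes that cannot be represented on the coarse grid.
    high = (np.abs(t1) >= np.pi / 2) | (np.abs(t2) >= np.pi / 2)
    return np.abs(M_hat[high]).max()

print(jacobi_smoothing_factor(0.8))  # approx. 0.6 for omega = 0.8
```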

3.2. Performance estimation

A popular yet simple model for estimating the performance of an algorithm on modern computer architectures is the roofline model (williams2009roofline). Based on the operational intensity of a compute kernel, i.e. the ratio of floating-point operations to words loaded from and stored to memory, it gives an estimate for the maximum achievable performance, which is either limited by the memory bandwidth or the compute capabilities of the machine. The basic roofline formula is given by

(12)   $P = \min(P_{\text{peak}}, I \cdot b_s)$,

where $P$ is the attainable performance, $P_{\text{peak}}$ the peak performance of the machine, i.e. the maximum achievable number of floating-point operations per second, $I$ the operational intensity of the kernel and $b_s$ the peak memory bandwidth, i.e. the number of words that can be moved from and to main memory per second. Within a geometric multigrid solver each kernel represents either a matrix-vector or a vector-vector operation, where each vector corresponds to a rectangular grid and each matrix to one or multiple stencils. If we explicitly represent each operation in the form of a stencil, the computation of the operational intensity is straightforward.
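As an example, the following sketch evaluates the roofline bound for a 2D 5-point Jacobi sweep; the machine parameters, the FLOP count per grid point and the assumption of perfect cache reuse are illustrative choices, not measured values.

```python
def roofline(peak_flops, peak_bandwidth, flops, traffic_bytes):
    """Attainable performance P = min(P_peak, I * b_s) in FLOP/s."""
    intensity = flops / traffic_bytes  # operational intensity in FLOP/byte
    return min(peak_flops, intensity * peak_bandwidth)

# Per grid point of a 5-point Jacobi sweep: here counted as 10 FLOPs
# (an illustrative count). Assuming perfect spatial reuse, each point
# loads x and b and stores x_new: 3 double-precision words = 24 bytes.
p = roofline(peak_flops=230.4e9, peak_bandwidth=34.1e9,
             flops=10, traffic_bytes=24)
print(f"attainable performance: {p / 1e9:.1f} GFLOP/s")  # bandwidth-bound
```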

3.3. Code Generation and Evaluation

In order to evaluate a solver that has been evolved within a certain stage of the optimization on the target platform, we employ the ExaStencils code generation framework (lengauer2014exastencils), which was specifically designed for the generation of geometric multigrid implementations that run on parallel and distributed systems. To employ the code generation capabilities of this framework, we transform the evolved multigrid expression to an algorithmic representation, which we then emit in the form of a specification in ExaStencils' DSL (schmitt2014exaslang). Based on this specification the framework generates a C++ implementation of the solver, including a default application, which we finally run to measure both its total execution time and its defect reduction factor

(13)   $\rho_i = \|b_h - A_h x_h^i\| / \|b_h - A_h x_h^{i-1}\|$

per iteration on the target platform. We then obtain an approximation of the asymptotic convergence factor

(14)   $\tilde{\rho} = \left( \|r^m\| / \|r^0\| \right)^{1/m}$,

where $r^i = b_h - A_h x_h^i$ is the residual in iteration $i$ and $m$ is the number of iterations until convergence (trottenberg2000multigrid).
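The asymptotic convergence factor (14) can be computed from the recorded residual norms with a few lines; the helper below and its logging convention are our own illustration.

```python
def asymptotic_convergence_factor(residual_norms):
    """Estimate rho = (||r_m|| / ||r_0||)^(1/m) from per-iteration
    residual norms, cf. equation (14)."""
    m = len(residual_norms) - 1  # number of iterations performed
    return (residual_norms[-1] / residual_norms[0]) ** (1.0 / m)

# Example: a solver that reduces the defect by a factor of ~0.1 per iteration.
norms = [1.0, 0.1, 0.011, 0.0009]
print(asymptotic_convergence_factor(norms))  # approx. 0.097
```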

3.4. Optimization

In case we want to find the optimal geometric multigrid solver for a certain problem, first the question about the size of the search space arises. With a sufficiently small search space one could attempt to simply enumerate all possible solutions. The infeasibility of this approach becomes obvious when looking at the grammar in Figure 1. Assume our goal is to find a multigrid solver that operates on three levels, but the only allowed operation on the coarsest level is the application of a direct solver. Besides the start symbol and the production resulting in the application of a direct solver on the coarsest level, each non-terminal symbol produces at least two alternatives. Now assume we perform on average twenty productions per level. This means we must consider more than $2^{40}$ alternatives, all of which must be evaluated with respect to both objectives, which is already infeasible on a standard desktop computer. In practice this number will be even larger, especially if we consider more levels. Furthermore, we need to choose a value for all occurrences of the relaxation parameter $\omega$, which yields an additional continuous optimization problem. In case the search space is too large to be directly enumerated, a remedy is to use heuristics that aim to search efficiently through the space of possible solutions and still find the global or at least a local optimum. Evolutionary algorithms are a class of search heuristics inspired by the principle of natural evolution that have been successfully applied to numerous domains (koza2010human). All of these methods have in common that they evolve a population of solutions (called individuals) through the iterative application of so-called genetic operators. Typically each iteration (or generation) of an evolutionary algorithm consists of the following steps:

  1. Selection: A number of the best individuals is selected from the population of the previous generation.

  2. Crossover: New individuals are created through the recombination of existing ones.

  3. Mutation: A number of individuals is randomly altered to create new instances.

The order and probability of application of each operation can be varied, and different choices have been suggested for different optimization problems (back1997handbook). The exact implementation of each genetic operator depends on the class of problem, i.e. the structure of the solution. Within this work we consider two different optimization problems. First of all, we want to find the list of productions that, according to the context-free grammar presented in Section 2, leads to the optimal multigrid solver. The class of evolutionary algorithms that evolve expressions according to a context-free grammar is summarized under the term genetic programming (poli08:fieldguide; koza1994genetic). To evolve a Pareto front of multigrid solvers with respect to both objectives, we employ strongly typed genetic programming (montana1995strongly) with a non-dominated sorting based selection (coello2007evolutionary). Because the computation of the spectral radius slows down significantly for expressions spanning more than two levels, we split each optimization into multiple runs. Starting on the three coarsest levels, we perform the optimization assuming that we can obtain the correct solution on the coarsest level. During this process a fixed default value is chosen for each relaxation factor that occurs within an expression. After we have evolved a Pareto front of multigrid expressions, an implementation is generated for each of those individuals, which we then evaluate on the target platform by considering an instance of the problem that we aim to solve on the finest level. For this purpose we adapt the initial solution, boundary condition and right-hand side of the given problem to the grid defined on the current level. We then choose the individual that leads to the fastest solver with respect to execution time until convergence on the target platform.
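As an illustration of this setup, a strongly typed GP configuration with non-dominated sorting selection can be assembled in DEAP as follows; the primitive set shown here is a toy stand-in for the grammar of Figure 1, not the actual EvoStencils primitives.

```python
from deap import base, creator, gp, tools

# Toy stand-ins for the grammar's types; the real primitive set builds
# the state tuples of Figure 1 and is considerably richer.
class State: pass
class Correction: pass

pset = gp.PrimitiveSetTyped("SOLVER", [], State)
pset.addPrimitive(lambda c, w: State(), [Correction, float], State, name="iterate")
pset.addPrimitive(lambda c: Correction(), [Correction], Correction, name="smooth")
pset.addTerminal(Correction(), Correction, name="residual")
pset.addTerminal(State(), State, name="initial_state")
pset.addTerminal(0.8, float, name="omega")

# Two objectives, both minimized: estimated spectral radius (LFA) and
# estimated execution time per iteration (roofline model).
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=4)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, pset=pset, min_=1, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)
toolbox.register("select", tools.selNSGA2)  # non-dominated sorting selection

pop = toolbox.population(n=4)
print(str(pop[0]))  # e.g. iterate(smooth(residual), omega)
```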

In the second step, we turn our attention to the list of relaxation factors in order to improve the convergence of the best individual evolved in the first step. This corresponds to a single-objective continuous optimization problem, which we solve using a covariance matrix adaptation evolution strategy (CMA-ES) (hansen2001completely). To evaluate the convergence of a solver with respect to a certain choice of relaxation factors, we reuse the implementation generated at the end of the first optimization step to measure the asymptotic convergence factor, as defined in equation (14). Since we only need to adapt a number of real-valued parameters in the respective source file, we just need to recompile the binary, avoiding the high cost of rerunning the ExaStencils code generator. To summarize, each optimization consists of the following steps:

  1. Multi-objective optimization using automated local Fourier analysis (LFA) and performance modeling

  2. Evaluation of Pareto-optimal solvers using code generation

  3. Relaxation parameter optimization based on the generated implementation

Finally, the resulting individual is employed as a direct solver for the performance estimation and code generation on the next two levels. We repeat this procedure until the optimization on the finest level is finished. By recursively deploying the best individual of a run as a direct solver for the next run, a single multigrid expression is obtained which operates on the complete range of levels. We implement our optimization approach in the Python programming language (EvoStencils: https://github.com/jonas-schmitt/evostencils) using the framework DEAP (DEAP_JMLR2012) for the implementation of the evolutionary algorithms.
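For the relaxation-factor step, the ask-and-tell interface of a CMA-ES implementation such as the pycma package could be used as sketched below; the measurement callback, the number of factors and the step size are illustrative assumptions, not the paper's exact tooling.

```python
import cma

def measure_convergence_factor(omegas):
    """Stand-in for the real measurement: patch the relaxation factors into
    the generated source, recompile and return the factor from (14). Here a
    synthetic function with optimum at omega_i = 0.8 for illustration."""
    return sum((w - 0.8) ** 2 for w in omegas)

x0 = [1.0] * 8  # one entry per relaxation factor occurring in the solver
es = cma.CMAEvolutionStrategy(x0, 0.3, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [measure_convergence_factor(x) for x in candidates])
print(es.result.xbest)  # relaxation factors minimizing the measured factor
```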

4. Evolving Solvers for the Steady-State Heat Equation

For our experiments we consider the steady-state heat equation with Dirichlet boundary conditions on a unit square (respectively a unit cube in 3D), which is given by

(15)   $-\nabla \cdot (a \nabla u) = f \quad \text{in } \Omega = (0,1)^d$,

where $\nabla \cdot$ denotes the divergence operator and $\nabla u$ the gradient of $u$. The function $a$ describes the thermal conductivity of the material. We discretize equation (15) using finite differences on a Cartesian grid with step size $h$ to obtain the system of linear equations $A_h u_h = f_h$. Our goal is to evolve optimal multigrid methods for solving this system. For this purpose we consider four different cases, which are summarized in Figure 2.

(a) 2D with constant coefficients
(b) 3D with constant coefficients
(c) 2D with variable coefficients
(d) 3D with variable coefficients
Figure 2. Overview of the considered test cases.

To obtain the same operator on a coarser level, we rediscretize equation (15) on a Cartesian grid of the appropriate size. Note that the choice of a constant coefficient function results in Poisson's equation. To obtain a Pareto front of multigrid expressions, we perform a multi-objective optimization using genetic programming (GP) with a (μ + λ)-ES (beyer2002evolution) and the non-dominated sorting procedure presented in (deb2000fast). This means that in each generation we create λ new individuals based on an existing population of size μ and then select the best μ individuals for the next generation from the combined set. The fitness of each individual consists of two objectives: the spectral radius of its iteration matrix, estimated with LFA, and its execution time on the target platform, an Intel Xeon E3-1275 v5 (Skylake) machine with four physical cores capable of 16 DP FLOPs per cycle each, estimated with the roofline model based on the machine's peak performance and peak memory bandwidth. To estimate the spectral radius in the case of variable coefficients, we approximate the coefficient function with a constant stencil at the center of the domain. Individuals are selected for crossover and mutation using a dominance-based tournament selection as described in (deb2000fast). New individuals are created either by crossover, whereby we employ single-point crossover with a certain probability of choosing a terminal as crossover point, or by mutation, through replacement of a certain subexpression with a new randomly created expression. To optimize the relaxation factors of a multigrid solver, we employ a CMA-ES (hansen2001completely) whose population size depends on the number of relaxation factors. We restrict the set of productions for the generation of operator expressions to inverse(diagonal($A_h$)), which means that we only consider pointwise smoothers. The optimization is performed with a fixed step size on each level, whereby we employ different level ranges for the 2D and the 3D cases.
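To illustrate the rediscretization mentioned above, the following sketch assembles the variable-coefficient 5-point stencil of equation (15) at a single grid point; the arithmetic averaging of the coefficient at the cell interfaces is one common choice and not necessarily the discretization used by ExaStencils.

```python
import numpy as np

def heat_stencil_2d(a, i, j, h):
    """5-point finite-difference stencil of -div(a grad u) at grid point
    (i, j) with spacing h, using arithmetic averages of the thermal
    conductivity a at the four cell interfaces."""
    aw = 0.5 * (a[i - 1, j] + a[i, j])   # west interface
    ae = 0.5 * (a[i + 1, j] + a[i, j])   # east interface
    as_ = 0.5 * (a[i, j - 1] + a[i, j])  # south interface
    an = 0.5 * (a[i, j + 1] + a[i, j])   # north interface
    center = (aw + ae + as_ + an) / h**2
    return {"C": center, "W": -aw / h**2, "E": -ae / h**2,
            "S": -as_ / h**2, "N": -an / h**2}

# Constant coefficients reduce to the Poisson stencil (4, -1, -1, -1, -1)/h^2:
a = np.ones((5, 5))
print(heat_stencil_2d(a, 2, 2, h=1.0))
```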

Figures 3(a) to 3(d) each contain a plot of the average value of both objectives throughout the last GP optimization of the respective test case, i.e. the optimization on the three finest levels.

(a) 2D with constant coefficients
(b) 2D with variable coefficients
(c) 3D with constant coefficients
(d) 3D with variable coefficients
Figure 3. Average fitness throughout the optimization.

In all four cases the evolutionary algorithm is able to decrease the average of both objectives drastically within the first generations. In subsequent generations, however, it is not possible to further decrease the average of one objective without entailing an increase in the other. This is reflected in the further decrease of the average spectral radius throughout the optimization, while the average execution time per iteration increases slightly. Since we are first and foremost interested in finding multigrid solvers with fast convergence, we can tolerate a slight increase in the average execution time per iteration if this allows us to find better converging solvers, as the employed selection scheme always preserves those solutions that perform best in both objectives (deb2000fast). Since the spectral radius of an iteration matrix does not behave smoothly with respect to small perturbations in the structure of a solver, drastic changes in the average can occur throughout an optimization. Due to the stochastic nature of evolutionary algorithms, for a non-smooth objective function there is always a chance that significantly worse individuals are created through mutation, which is the case in Figure 3(d) between generations 70 and 80. However, such fluctuations do not affect the general trend of decreasing spectral radii within the population, and apart from the mentioned deviation, the average of both objectives varies only smoothly throughout each of the four optimization runs, which indicates that the algorithm possesses a high degree of robustness. Table 1 shows the measured asymptotic convergence factor of the best individual, i.e. the solver with the lowest execution time until convergence, for each test case before and after the convergence optimization using CMA-ES.

Case | Initial convergence factor | Optimized convergence factor | Improvement
2D with constant coefficients %
2D with variable coefficients %
3D with constant coefficients %
3D with variable coefficients %
Table 1. Convergence optimization results on the finest level.

In all cases a significant improvement of the convergence factor can be achieved.

Finally, to objectively assess the effectiveness of our complete optimization approach, we compare the best solver in each of the four cases with a number of common V- and W-cycles, for each of which we generate a thread-parallel implementation on the target platform using ExaStencils' code generation capabilities. In all generated implementations we employ statically scheduled OpenMP threads for the parallelization, whereby the number of threads always matches the number of available physical cores. For compilation on the target machine, GCC 8.2.0 is used. The results of the comparison are listed in Tables 2(a) to 2(d).

Method Convergence factor Time to convergence
V(1,1) ms
V(2,1) ms
V(2,2) ms
V(3,3) ms
W(2,2) ms
W(3,3) ms
EvoStencils ms
(a) 2D with constant coefficients
Method Convergence factor Time to convergence
V(1,1) ms
V(2,1) ms
V(2,2) ms
V(3,3) ms
W(2,2) ms
W(3,3) ms
EvoStencils ms
(b) 2D with variable coefficients
Method Convergence factor Time to convergence
V(1,1) ms
V(2,1) ms
V(2,2) ms
V(3,3) ms
W(2,2) ms
W(3,3) ms
EvoStencils ms
(c) 3D with constant coefficients
Method Convergence factor Time to convergence
V(1,1) ms
V(2,1) ms
V(2,2) ms
V(3,3) ms
W(2,2) ms
W(3,3) ms
EvoStencils ms
(d) 3D with variable coefficients
Table 2. Solver comparison for the steady-state heat equation.

In all four cases the evolved solvers outperform the tested V- and W-cycles by a significant margin in execution time until convergence, compared to the fastest method in each case. Moreover, all evolved cycles achieve convergence factors comparable to those of W-cycles, even though all of them can be classified as V-cycles. For a better interpretability of these results, we depict the algorithmic structure of the evolved multigrid methods in Figures 4 to 7.

Figure 4. Evolved solver for the 2D steady-state heat equation with constant coefficients.

Figure 5. Evolved solver for the 2D steady-state heat equation with variable coefficients.

Figure 6. Evolved solver for the 3D steady-state heat equation with constant coefficients.

Figure 7. Evolved solver for the 3D steady-state heat equation with variable coefficients.

Each of those figures contains both an algorithmic as well as a graphical representation of the evolved multigrid solver on the three finest levels. In the latter, each red node corresponds to one step of red-black Gauss-Seidel smoothing, each blue node to one step of Jacobi smoothing and each white node to the exact solution of the system on the given level. While the order of operations is displayed from left to right, restriction corresponds to a descent from top to bottom and prolongation to an ascent in the opposite direction. Even though the evolved methods are structurally different, they exhibit a number of common characteristics. For the 2D cases the evolved methods are almost identical for constant and variable coefficients in terms of their algorithmic structure, apart from the use of a different method as the first pre-smoothing step on the second finest level. Beyond that, all methods can be characterized as V-cycles, which means that they solve exactly only once, on the coarsest grid. Furthermore, all of them employ a combination of red-black Gauss-Seidel (RBGS) and Jacobi smoothing with different relaxation factors and a different number of smoothing steps on each level, whereby the latter is typically higher on the coarser levels. For pre-smoothing, often one step of RBGS is used, either solely or followed by a varying number of Jacobi steps. In contrast, for post-smoothing we observe an exclusive use of under- and overrelaxed Jacobi iterations. The occurrence of these patterns is especially remarkable against the background that we have not prescribed the application of RBGS or Jacobi iterations, but only restricted the function apply within our grammar (see Figure 1) to the application of the inverse diagonal of the operator to the current residual, whereby we allow a red-black partitioned computation when obtaining the new iterate. Even though this obviously includes Jacobi- and partitioned Gauss-Seidel-like methods, it also allows the application of the inverse diagonal zero or multiple times before computing a new iterate. In the former case the residual is directly added to the current iterate, which corresponds to a modified Richardson iteration. Guided by the minimization of both objectives, the spectral radius of the resulting iteration matrix and the estimated combined execution time of all applied operations, the evolutionary algorithm nevertheless learns that it is beneficial to apply the inverse diagonal of the operator only once before computing a new iterate, and then to repeat this procedure with different partitionings and relaxation factors.

5. Conclusion

In this work we have presented a novel approach for the automatic optimization of geometric multigrid methods, based on a tailored context-free grammar for the generation of multigrid solvers and on evolutionary algorithms guided by model-based predictions of convergence and compute performance. Even though we have demonstrated that our approach works in principle and is able to evolve competitive solvers for the steady-state heat equation starting from a randomly initialized population of solutions, there is still room for improvement and extensions. Instead of considering a case that is well researched and for which efficient solution methods are available, a more challenging task would be the solution of partial differential equations for which a robust and efficient geometric multigrid solver has not yet been developed, which is for instance the case for many nonlinear PDEs. We furthermore want to improve the performance of our evolutionary algorithm by incorporating domain knowledge in the form of individuals that are known, from long-standing research and analysis in the field of numerical mathematics, to be efficient in solving a certain problem, and which can be directly included in the population to guide the evolution towards promising areas of the search space. We also aim to improve the accuracy of our model-based compute performance prediction through the use of a more sophisticated machine model such as the Execution-Cache-Memory (ECM) model (hager2016exploring). Finally, one could consider different algorithms for the optimization of programs generated by our multigrid grammar, such as techniques based on reinforcement learning (sutton2018reinforcement) or Monte Carlo tree search (kocsis2006bandit).

References

  • (1) Bäck, T., Fogel, D. B., and Michalewicz, Z. Handbook of evolutionary computation. CRC Press, 1997.
  • (2) Beyer, H.-G., and Schwefel, H.-P. Evolution strategies - a comprehensive introduction. Natural computing 1, 1 (2002), 3–52.
  • (3) Bolten, M., and Rittich, H. Fourier analysis of periodic stencils in multigrid methods. SIAM Journal on Scientific Computing 40, 3 (2018), A1642–A1668.
  • (4) Briggs, W. L., McCormick, S. F., et al. A multigrid tutorial, vol. 72. SIAM, 2000.
  • (5) Coello, C. A. C., Lamont, G. B., Van Veldhuizen, D. A., et al. Evolutionary algorithms for solving multi-objective problems, vol. 5. Springer, 2007.
  • (6) Deb, K., Agrawal, S., Pratap, A., and Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In International Conference on Parallel Problem Solving From Nature (2000), Springer, pp. 849–858.
  • (7) Fortin, F.-A., De Rainville, F.-M., Gardner, M.-A., Parizeau, M., and Gagné, C. DEAP: Evolutionary algorithms made easy. Journal of Machine Learning Research 13 (July 2012), 2171–2175.
  • (8) Hackbusch, W. Multi-grid methods and applications, vol. 4. Springer Science & Business Media, 2013.
  • (9) Hager, G., Treibig, J., Habich, J., and Wellein, G. Exploring performance and power properties of modern multi-core chips via simple machine models. Concurrency and Computation: Practice and Experience 28, 2 (2016), 189–210.
  • (10) Hansen, N., and Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evolutionary computation 9, 2 (2001), 159–195.
  • (11) Kocsis, L., and Szepesvári, C. Bandit based Monte-Carlo planning. In European Conference on Machine Learning (2006), Springer, pp. 282–293.
  • (12) Koza, J. R. Genetic programming as a means for programming computers by natural selection. Statistics and computing 4, 2 (1994), 87–112.
  • (13) Koza, J. R. Human-competitive results produced by genetic programming. Genetic Programming and Evolvable Machines 11, 3-4 (2010), 251–284.
  • (14) Lengauer, C., Apel, S., Bolten, M., Größlinger, A., Hannig, F., Köstler, H., Rüde, U., Teich, J., Grebhahn, A., Kronawitter, S., et al. Exastencils: Advanced stencil-code engineering. In European Conference on Parallel Processing (2014), Springer, pp. 553–564.
  • (15) Mahmoodabadi, R. G., and Köstler, H. Genetic programming meets linear algebra: How genetic programming can be used to find improved iterative numerical methods. In Proceedings of the Genetic and Evolutionary Computation Conference Companion (New York, NY, USA, 2017), GECCO ’17, ACM, pp. 1403–1406.
  • (16) Montana, D. J. Strongly typed genetic programming. Evolutionary computation 3, 2 (1995), 199–230.
  • (17) Poli, R., Langdon, W. B., and McPhee, N. F. A field guide to genetic programming. Published via http://lulu.com and freely available at http://www.gp-field-guide.org.uk, 2008. (With contributions by J. R. Koza).
  • (18) Saad, Y. Iterative methods for sparse linear systems, vol. 82. SIAM, 2003.
  • (19) Schmitt, C., Kuckuk, S., Hannig, F., Köstler, H., and Teich, J. Exaslang: a domain-specific language for highly scalable multigrid solvers. In Domain-Specific Languages and High-Level Frameworks for High Performance Computing (WOLFHPC), 2014 Fourth International Workshop on (2014), IEEE, pp. 42–51.
  • (20) Sutton, R. S., and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018.
  • (21) Thekale, A., Gradl, T., Klamroth, K., and Rüde, U. Optimizing the number of multigrid cycles in the full multigrid algorithm. Numerical Linear Algebra with Applications 17, 2-3 (2010), 199–210.
  • (22) Trottenberg, U., Oosterlee, C. W., and Schuller, A. Multigrid. Elsevier, 2000.
  • (23) Wienands, R., and Joppich, W. Practical Fourier analysis for multigrid methods. Chapman and Hall/CRC, 2004.
  • (24) Williams, S., Waterman, A., and Patterson, D. Roofline: an insightful visual performance model for multicore architectures. Communications of the ACM 52, 4 (2009), 65–76.