A multigrid scheme for 3D Monge-Ampère equations Project was supported in part by Guangdong Province Science and Technology Development Grant (foreign cooperation project: 2013B051000075) of China, and in part by NSF 1021203, 1419028 of the United States.

Jun Liu (corresponding author; Department of Mathematics and Statistical Sciences, Jackson State University, Jackson, MS 39217, USA; jun.liu@jsums.edu), Brittany D. Froese (Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102, USA; bdfroese@njit.edu), Adam M. Oberman (Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 0B9, Canada; adam.oberman@mcgill.ca), Mingqing Xiao (Department of Mathematics, Southern Illinois University, Carbondale, IL 62901, USA; mxiao@siu.edu). The work described in this article is a result of a collaboration made possible by the IMA's New Directions Short Course on "Topics on Control Theory: Optimal Mass Transportation" in June, 2014.
July 3, 2019
Abstract

The elliptic Monge-Ampère equation is a fully nonlinear partial differential equation which has been the focus of increasing attention from the scientific computing community. Fast three-dimensional solvers are needed, for example in medical image registration, but are not yet available. We build fast solvers for smooth solutions in three dimensions using a nonlinear full-approximation storage multigrid method. Starting from a second-order accurate centered finite difference approximation, we present a nonlinear Gauss-Seidel iterative method which has a mechanism for selecting the convex solution of the equation. The iterative method is used as an effective smoother, combined with the full-approximation storage multigrid method. Numerical experiments are provided to validate the accuracy of the finite difference scheme and illustrate the computational efficiency of the proposed multigrid solver.

Keywords: Monge-Ampère equation; nonlinear partial differential equations; finite difference method; Gauss-Seidel iteration; FAS multigrid method.

1 Introduction

The elliptic Monge-Ampère equation

$\det(D^2 u) = f$,

with $f > 0$, where $D^2 u$ is the Hessian of the function $u$, is a fully nonlinear partial differential equation (PDE) which has been the focus of increasing attention from the scientific computing community [37, 15]. It is the prototypical fully nonlinear elliptic PDE, with connections to differential geometry [39]. As tools for solving the equation have become more effective, the PDE has appeared in more applications, including the reflector design problem [32], the optimal transportation problem [3], and parameter identification problems in seismic signals [14]. In this paper, we focus on building fast solvers for the PDE in the three-dimensional case. Our interest in this case stems from the image registration problem [25, 24, 23, 22], where current imaging resolution requires more effective solvers than are currently available.

There are two regimes to consider when solving the problem. In the regime where solutions are smooth, many approaches can lead to numerical methods which converge in practice. These include finite differences [10, 27, 36], finite elements [6, 7, 17, 29], spectral methods [34], and Fourier integral formulations [41].

In the second regime, where solutions are singular, standard methods either break down completely or become extremely slow [2]. In order to compute singular solutions, the theory of viscosity solutions can be used to build convergent monotone finite difference methods [1]. Monotone schemes require using wide stencils, which have a directional resolution parameter, in addition to a spatial discretization parameter. As a result, they are generally less accurate than second order in space. On the other hand, they are stable in the maximum norm, and can be solved by a (slow) forward Euler method, with an explicit CFL condition [30]. Refinements of the monotone method result in faster solvers [19] and higher accuracy [20]. The fast solvers employed were an exact Newton’s method (with an explicitly computed Jacobian) and a direct linear solver in two dimensions [20]. The initial guess for Newton’s method used one step of an iterative solver from [2], which also required a single solution of a Poisson equation. Computations were performed in three dimensions, but the direct linear solvers were too costly for larger sized problems.

A significant challenge in computing solutions of the Monge-Ampère equation is the need to enforce an additional constraint that the solution be convex; without this, the solution is not unique. In two dimensions, for example, the Monge-Ampère equation will typically have two solutions: one convex and one concave. The situation becomes more complicated at the discrete level. For a two-dimensional problem with $N$ discretization points, a finite difference scheme will typically admit many different solutions [15]. In three dimensions, the problem becomes still more complicated. In two dimensions, the correct solution can be selected by choosing the solution that has a positive Laplacian. This is equivalent to selecting the smaller root of a quadratic equation. This structure was exploited by the Gauss-Seidel and iterative Poisson methods developed in [2]. However, in three dimensions, positivity of the Laplacian (sum of the eigenvalues of the Hessian) and the Monge-Ampère operator (product of the eigenvalues) is not enough to guarantee convexity (positivity of all three eigenvalues). Using monotone schemes, convexity can be enforced by separately computing the positive and negative parts of various second directional derivatives.
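The smaller-root selection used by the 2D Gauss-Seidel method of [2] can be sketched in a few lines. The following Python fragment is an illustrative sketch only (the paper's experiments use MATLAB): it assumes the centered-difference form in which the local quadratic for the center value $u$ reads $(a_1 - u)(a_2 - u) = h^4(f + b^2)/4$, where $a_1, a_2$ are the east-west and north-south neighbor averages and $b$ is the centered approximation of $u_{xy}$; the function name is hypothetical.

```python
import math

def gs_update_2d(a1, a2, b, f, h):
    """One nonlinear Gauss-Seidel update for the 2D Monge-Ampere equation
    u_xx*u_yy - u_xy^2 = f at a single grid point.

    a1, a2: averages of the east/west and north/south neighbor values;
    b: centered approximation of u_xy (independent of the center value).
    The smaller root of the local quadratic selects the convex
    (positive-Laplacian) solution.
    """
    mean = 0.5 * (a1 + a2)
    rad = math.sqrt(0.25 * (a1 - a2) ** 2 + 0.25 * h ** 4 * (f + b * b))
    return mean - rad
```

For the convex quadratic $u = (x^2 + y^2)/2$ (so $f = 1$, $u_{xy} = 0$), the update reproduces the exact center value, since centered differences are exact on quadratics.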

In this paper, we focus on fast solution methods for the three-dimensional Monge-Ampère equation, with a given source function $f$ which is positive and continuous in a convex domain $\Omega \subset \mathbb{R}^3$,

$\det(D^2 u(x)) = f(x), \quad x \in \Omega,$   (1.1)

with a given Dirichlet boundary condition

$u(x) = g(x), \quad x \in \partial\Omega,$   (1.2)

and the additional global constraint

$u$ is convex,

which is necessary for ellipticity and uniqueness. We record the explicit form of the operator in three dimensions as follows:

$\det(D^2 u) = u_{xx}u_{yy}u_{zz} + 2u_{xy}u_{yz}u_{xz} - u_{xx}u_{yz}^2 - u_{yy}u_{xz}^2 - u_{zz}u_{xy}^2.$

We note that the convexity constraint is formally equivalent to the condition that the Hessian of the solution is positive definite,

$D^2 u \succ 0.$

We assume also that the boundary data is consistent with the restriction of a convex function to the boundary. These conditions on the data, along with strict convexity of the domain, are enough to ensure that solutions are classical (twice continuously differentiable) [9]. For simplicity of computation, we work with square domains, which allow for the possibility of singular solutions; however, working with known smooth solutions avoids this problem.

Without additional difficulties, the method developed in this paper can also be applied to the two-dimensional problem

$u_{xx}u_{yy} - u_{xy}^2 = f.$   (1.3)

The objective of this work is to build fast solvers for the three-dimensional elliptic Monge-Ampère equation. We focus on the Dirichlet problem in the smooth solution regime, where simple narrow-stencil finite difference methods can be used. The goal is to build an effective, fast, scalable solver that can be used in applications. It is not trivial to build a multigrid solver in this setting; naive implementations of multigrid will fail. For example, the commonly suggested strategy [38] of performing a few Newton iterations to approximately solve the corresponding nonlinear scalar equations at each grid point would not lead to a convergent FAS multigrid solver for our problem, because such a method has no mechanism for selecting the correct, convex solutions. We introduce such a selection mechanism, which leads to an effective multigrid method. However, when the equation becomes degenerate (corresponding to non-strictly convex solutions, which can arise when the right-hand side $f$ vanishes), the method presented here breaks down. This is likely related to the loss of uniform ellipticity of the equation and the loss of strict convexity of the solution [2]. In this setting, wide-stencil finite difference methods are typically needed to guarantee convergence to weak solutions [18, 28]. A possible workaround is to combine monotone (or filtered) schemes with a multigrid method. For now, we restrict our attention to smooth solutions, where the method presented in this paper is effective.

One approach is to use Newton's method with an iterative solver [31]. For example, the Newton-Multigrid method arises when a linear multigrid algorithm is used to approximately solve the linearized Jacobian system. A disadvantage of Newton's method is that it requires a good initial guess, which should typically be convex [27]. We instead choose to treat the nonlinearity directly within the multigrid framework using coarse-grid correction, which leads to the most common nonlinear multigrid method, the full approximation scheme (FAS) [4, 5, 8, 38, 33]. For the FAS multigrid to be effective as an iterative method, it requires an effective smoother (or relaxation scheme), such as a nonlinear Gauss-Seidel iteration, which must eliminate the high-frequency components of the approximation errors at the current fine level. Once the FAS V-cycle iteration is established, the full multigrid (FMG) technique, based on nested iterations, can be exploited to obtain a good initial guess for the fine-grid problem.

In this work we develop a FMG-FAS multigrid algorithm for the 3D Monge-Ampère equation. The method is based on a nonlinear Gauss-Seidel iteration that includes a mechanism for selecting the convex solution. As a noticeable advantage over the above-mentioned Newton-Multigrid method, the FMG-FAS multigrid algorithm usually has better global convergence properties. This is particularly the case for the Monge-Ampère equation, since the Newton-Multigrid approach has no way to select the correct, convex solution. The FMG-FAS multigrid typically has a lower memory requirement as well, since there is no need to compute and store the Jacobian matrices [38].

There are very few published works devoted to designing fast nonlinear solvers, particularly for larger problem sizes in three dimensions. The multigrid method was used as a preconditioner for a Newton-Krylov iteration in [10, 13] in two and three dimensions, where the largest reported three-dimensional grids took approximately five minutes using eight processors. For the applications considered in that work, solutions of the Monge-Ampère equation would typically be close to an available approximate solution, which led naturally to a good initialization of Newton's method. In this paper, we approach the problem using the nonlinear full approximation scheme (FAS) multigrid method, which does not require an accurate initialization. Although FAS multigrid methods are popular and widely used in many disciplines involving nonlinear PDEs, such as variational image registration models [12], the only references we found on applying the FAS multigrid method to the Monge-Ampère equation were two-dimensional simulations in a master's thesis [35]. Separately, an adaptive FAS multigrid algorithm based on a continuation method was developed in [11] for solving a 2D balanced vortex model. It is worthwhile to point out that the reported computational CPU time does not scale linearly with respect to the degrees of freedom (see e.g. Table 5.1 in [35]), which motivates us to look further into the efficiency of FAS multigrid implementations. In addition, we are mainly interested in solving the three-dimensional (3D) Monge-Ampère equation, which is well known to be much more difficult [7].

This paper is organized as follows. In the next section, we present a discretization of the PDE using second-order centered finite differences and propose a nonlinear Gauss-Seidel iterative method for solving the discretized system. The method extends the two-dimensional Gauss-Seidel method of [2]; in our case, we need to solve a cubic rather than a quadratic equation. In Section 3, we introduce a full multigrid method based on the FAS V-cycle algorithm, where the nonlinear Gauss-Seidel iterations function as a smoother. In Section 4, numerical results for several two- and three-dimensional examples are reported, which demonstrate the accuracy, mesh independent convergence, and linear time complexity of our proposed FMG-FAS multigrid solver. Finally, the paper ends with several concluding remarks in Section 5.

2 A nonlinear Gauss-Seidel iteration in 3D

This section is motivated by the two-dimensional explicit finite difference method in [2], where second derivatives are discretized using standard centered differences on a uniform Cartesian grid. There, a Gauss-Seidel iteration is constructed by selecting the smaller root of a pointwise quadratic equation at each grid point. The resulting method is formally second-order accurate provided the solution is regular enough. In the following, we develop a similar Gauss-Seidel iteration for the three-dimensional case, where we have to handle a cubic equation at each grid point.

Let $N$ be a positive integer. We discretize the space domain uniformly into the grid points

$(x_i, y_j, z_k) = (ih, jh, kh), \quad 0 \le i, j, k \le N+1,$

with a uniform mesh step size $h = 1/(N+1)$. Let $u_{i,j,k}$ be the discrete approximation of $u(x_i, y_j, z_k)$. Notice that the values of $u_{i,j,k}$ with $i \in \{0, N+1\}$ or $j \in \{0, N+1\}$ or $k \in \{0, N+1\}$ are directly specified by the Dirichlet boundary conditions. Hence, we only need to set up finite difference approximations for all the interior grid points. We employ the second-order accurate centered finite difference approximations [26] for all the second-order derivatives as follows.

$(u_{xx})_{i,j,k} \approx \dfrac{u_{i+1,j,k} - 2u_{i,j,k} + u_{i-1,j,k}}{h^2},$   (2.1)
$(u_{yy})_{i,j,k} \approx \dfrac{u_{i,j+1,k} - 2u_{i,j,k} + u_{i,j-1,k}}{h^2},$   (2.2)
$(u_{zz})_{i,j,k} \approx \dfrac{u_{i,j,k+1} - 2u_{i,j,k} + u_{i,j,k-1}}{h^2},$   (2.3)
$(u_{xy})_{i,j,k} \approx \dfrac{u_{i+1,j+1,k} + u_{i-1,j-1,k} - u_{i+1,j-1,k} - u_{i-1,j+1,k}}{4h^2},$   (2.4)
$(u_{yz})_{i,j,k} \approx \dfrac{u_{i,j+1,k+1} + u_{i,j-1,k-1} - u_{i,j+1,k-1} - u_{i,j-1,k+1}}{4h^2},$   (2.5)
$(u_{xz})_{i,j,k} \approx \dfrac{u_{i+1,j,k+1} + u_{i-1,j,k-1} - u_{i+1,j,k-1} - u_{i-1,j,k+1}}{4h^2}.$   (2.6)

Using these approximations, we can construct the discrete Hessian

$D^2_h u_{i,j,k} = \begin{pmatrix} (u_{xx})_{i,j,k} & (u_{xy})_{i,j,k} & (u_{xz})_{i,j,k} \\ (u_{xy})_{i,j,k} & (u_{yy})_{i,j,k} & (u_{yz})_{i,j,k} \\ (u_{xz})_{i,j,k} & (u_{yz})_{i,j,k} & (u_{zz})_{i,j,k} \end{pmatrix}.$

Then the discrete form of the elliptic Monge-Ampère equation in the interior of the domain reads

$\det(D^2_h u_{i,j,k}) = f_{i,j,k}.$   (2.7)
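The left-hand side of (2.7) can be evaluated at a single interior point directly from the centered-difference formulas (2.1)-(2.6). The following Python (NumPy) fragment is an illustrative sketch only (the paper's code is MATLAB), with a hypothetical function name.

```python
import numpy as np

def discrete_hessian_det(u, i, j, k, h):
    """Determinant of the centered-difference discrete Hessian
    (2.1)-(2.6) at interior grid point (i, j, k) of a 3D array u."""
    uxx = (u[i+1, j, k] - 2*u[i, j, k] + u[i-1, j, k]) / h**2
    uyy = (u[i, j+1, k] - 2*u[i, j, k] + u[i, j-1, k]) / h**2
    uzz = (u[i, j, k+1] - 2*u[i, j, k] + u[i, j, k-1]) / h**2
    uxy = (u[i+1, j+1, k] + u[i-1, j-1, k]
           - u[i+1, j-1, k] - u[i-1, j+1, k]) / (4*h**2)
    uyz = (u[i, j+1, k+1] + u[i, j-1, k-1]
           - u[i, j+1, k-1] - u[i, j-1, k+1]) / (4*h**2)
    uxz = (u[i+1, j, k+1] + u[i-1, j, k-1]
           - u[i+1, j, k-1] - u[i-1, j, k+1]) / (4*h**2)
    H = np.array([[uxx, uxy, uxz],
                  [uxy, uyy, uyz],
                  [uxz, uyz, uzz]])
    return np.linalg.det(H)
```

For the convex quadratic $u = (x^2 + y^2 + z^2)/2$, the discrete determinant equals $\det(I) = 1$ up to rounding, since centered differences are exact on quadratics.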

Inserting the approximations into equation (2.7) at each interior grid point leads to a cubic polynomial with respect to $u_{i,j,k}$,

$\Phi(u_{i,j,k}) := \det \begin{pmatrix} \dfrac{a_1 - 2u_{i,j,k}}{h^2} & b_3 & b_2 \\ b_3 & \dfrac{a_2 - 2u_{i,j,k}}{h^2} & b_1 \\ b_2 & b_1 & \dfrac{a_3 - 2u_{i,j,k}}{h^2} \end{pmatrix} - f_{i,j,k} = 0,$   (2.8)

where we use the notation

$a_1 = u_{i+1,j,k} + u_{i-1,j,k}, \quad a_2 = u_{i,j+1,k} + u_{i,j-1,k}, \quad a_3 = u_{i,j,k+1} + u_{i,j,k-1},$

and $b_1$, $b_2$, $b_3$ denote the centered approximations (2.5), (2.6), (2.4) of $u_{yz}$, $u_{xz}$, $u_{xy}$, respectively, which do not involve the center value $u_{i,j,k}$.

The above finite difference approximation (2.8) is intentionally formulated as a cubic equation in $u_{i,j,k}$ at the current node by assuming its neighboring nodes are fixed. The nonlinear Gauss-Seidel iteration follows by solving this cubic equation sequentially over all grid points in lexicographic order. The so-called red-black ordering, with better smoothing and parallelization properties, is usually recommended for 2D problems, but it is of limited advantage [40] for our 3D scheme involving a total of 19 points. In this case, four or more colors are necessary to fully parallelize the updating order [21], which is not further pursued in the current paper. The Dirichlet boundary conditions (1.2) are enforced at the boundary grid points during the iteration. However, we also need to determine which root of the cubic to select in order to ensure local convexity of the discrete solution. In the 2D case, the corresponding approach leads to a quadratic equation [2], and selecting the smaller root yields the convex solution. This rule is not directly applicable to the cubic equation arising in 3D, which may have either one or three real roots. As we are only interested in real-valued solutions, we simply select the smallest of all the real roots; that is,

$u_{i,j,k} \leftarrow \min \{ x \in \mathbb{R} : \Phi(x) = 0 \}.$   (2.9)

These roots can either be computed exactly or approximately using a root-finding algorithm.
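A minimal sketch of the root selection (2.9) in Python (NumPy), assuming the cubic is supplied through its coefficients; the tolerance used to decide which computed roots count as real is an implementation choice, not prescribed by the paper.

```python
import numpy as np

def smallest_real_root(coeffs):
    """Return the smallest real root of a polynomial given its
    coefficients (highest degree first), as required by (2.9).

    np.roots computes all (possibly complex) roots via the companion
    matrix; we keep only those with negligible imaginary part.
    """
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-10].real
    return real_roots.min()
```

For instance, the cubic $(x-1)(x-2)(x-3)$ has three real roots and the update selects $1$, while $x^3 + x$ has a single real root $0$, which is selected.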

Fixed points of this scheme are equivalent to solutions of the discrete Monge-Ampère equation (2.7), which is justified in Theorems 2.1 and 2.2. In particular, we emphasize that the fixed point of this scheme is guaranteed to be discretely convex. This provides a clear advantage over Newton's method, which must be coupled with a sufficiently accurate initial guess in order to prevent convergence to an incorrect, non-convex solution.

Theorem 2.1.

Let $u$ be a convex solution of the discrete Monge-Ampère equation (2.7). Then $u$ is a fixed point of (2.9).

Proof.

Consider a fixed location $(i,j,k)$ in the grid. Since $u$ satisfies the discrete Monge-Ampère equation, $u_{i,j,k}$ automatically satisfies the cubic equation (2.8) as well. It remains to show that $u_{i,j,k}$ is the smallest real root of this cubic.

One possibility is that the cubic equation has only a single real root, in which case this root must coincide with the real-valued $u_{i,j,k}$.

The other option is that the cubic equation has three real roots, $x_1 \le x_2 \le x_3$. We remark that, using the notation defined above, the discrete Hessian corresponding to each of these roots $x$ can be written as

$H(x) = \begin{pmatrix} \dfrac{a_1 - 2x}{h^2} & b_3 & b_2 \\ b_3 & \dfrac{a_2 - 2x}{h^2} & b_1 \\ b_2 & b_1 & \dfrac{a_3 - 2x}{h^2} \end{pmatrix},$   (2.10)

where the off-diagonal elements do not depend on the value of $x$. This is a symmetric real-valued matrix, and therefore has real eigenvalues.

Now suppose that $\lambda$ is any eigenvalue of $H(x)$ with eigenvector $v$. Then for any $x' \ge x$ we can compute

$H(x')v = H(x)v - \dfrac{2(x'-x)}{h^2}v = \left(\lambda - \dfrac{2(x'-x)}{h^2}\right)v.$

Thus

$\lambda' = \lambda - \dfrac{2(x'-x)}{h^2} \le \lambda$

is an eigenvalue of $H(x')$, with equality only if $x' = x$. In particular, the eigenvalues of the discrete Hessian are decreasing functions of the root $x$.

Since the solution is convex by assumption, the discrete Monge-Ampère equation has a root that yields a positive definite Hessian; that is, at least one of the roots will yield three positive eigenvalues. As the eigenvalues are decreasing in $x$, the smallest root $x_1$ must yield three positive eigenvalues.

Suppose now that another root $x_j > x_1$ also produces three positive eigenvalues. Since both roots satisfy the discrete Monge-Ampère equation at $(i,j,k)$, and since each positive eigenvalue strictly decreases as the root increases, we have

$f_{i,j,k} = \det H(x_1) = \prod_{l=1}^{3} \lambda_l(x_1) > \prod_{l=1}^{3} \lambda_l(x_j) = \det H(x_j) = f_{i,j,k}.$

This is a contradiction, which means that the smallest real root is the only root that yields three positive eigenvalues.

We conclude that the smallest real root of the cubic is the root that corresponds to the convex solution of the discrete Monge-Ampère equation (2.7). ∎
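Theorem 2.1 can be checked numerically: for a symmetric matrix $M$ playing the role of the fixed (root-independent) part of the discrete Hessian, the smallest real root $x$ of $\det(M - (2x/h^2)I) = f$ should yield a positive definite shifted matrix. The following Python sketch is a hypothetical verification, not part of the paper's solver; the matrix, $f$, and $h$ used in it are illustrative stand-ins.

```python
import numpy as np

def convex_root_is_pd(M, f, h):
    """Numerically check Theorem 2.1: among the real roots x of
    det(M - (2x/h^2) I) = f with f > 0, the smallest one makes the
    shifted matrix positive definite."""
    mu = np.linalg.eigvalsh(M)  # real eigenvalues of symmetric M, ascending
    # det(M - tI) = -t^3 + e1 t^2 - e2 t + e3, where t = 2x/h^2
    e1 = mu.sum()
    e2 = mu[0]*mu[1] + mu[0]*mu[2] + mu[1]*mu[2]
    e3 = mu.prod()
    coeffs = [-1.0, e1, -e2, e3 - f]          # det(M - tI) - f = 0
    roots = np.roots(coeffs)
    t = roots[np.abs(roots.imag) < 1e-8].real.min()
    x = 0.5 * h**2 * t                        # smallest real root in x
    H = M - (2.0 * x / h**2) * np.eye(3)      # discrete Hessian at that root
    return bool(np.all(np.linalg.eigvalsh(H) > 0))
```

The check succeeds because the product of eigenvalues tends to $+\infty$ as the root decreases, so the leftmost crossing of the level $f > 0$ occurs while all eigenvalues are still positive.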

Theorem 2.2.

Let $u$ be a fixed point of (2.9). Then $u$ is a convex solution of the discrete Monge-Ampère equation (2.7).

Proof.

Consider any fixed grid point $(i,j,k)$. By the definition of the polynomial $\Phi$, the fixed point satisfies $\Phi(u_{i,j,k}) = 0$. It remains to show that the discrete Hessian is positive definite.

Consider the symmetric real-valued matrix

$M = \begin{pmatrix} \dfrac{a_1}{h^2} & b_3 & b_2 \\ b_3 & \dfrac{a_2}{h^2} & b_1 \\ b_2 & b_1 & \dfrac{a_3}{h^2} \end{pmatrix},$   (2.11)

which has three real eigenvalues $\mu_1 \le \mu_2 \le \mu_3$.

We also define

$H(x) = M - \dfrac{2x}{h^2} I,$   (2.12)

which has eigenvalues $\mu_l - \dfrac{2x}{h^2}$, $l = 1, 2, 3$. We note that zeros of the polynomial $\Phi$ are equivalent to roots of

$\prod_{l=1}^{3} \left(\mu_l - \dfrac{2x}{h^2}\right) = f_{i,j,k}.$

We record the fact that the left-hand side tends to $+\infty$ as $x \to -\infty$ and vanishes at $x = \mu_1 h^2/2$, while $f_{i,j,k} > 0$.

By the Intermediate Value Theorem, there exists some real root $x^* < \mu_1 h^2 / 2$. By definition, the fixed point $u_{i,j,k}$ is the smallest real root and thus

$u_{i,j,k} \le x^* < \dfrac{\mu_1 h^2}{2}.$

Then the eigenvalues of the discrete Hessian $H(u_{i,j,k})$ are given by

$\mu_l - \dfrac{2u_{i,j,k}}{h^2} > \mu_l - \mu_1 \ge 0, \quad l = 1, 2, 3,$

and the discrete Hessian is positive definite.

We conclude that all fixed points correspond to solutions of the discrete Monge-Ampère equation. ∎

3 FAS multigrid method

The drawback of directly using the nonlinear Gauss-Seidel iteration as a standalone solver is that the number of iterations required for convergence increases with the number of discretization points, so the total solution time grows super-linearly with the number of unknowns. However, the nonlinear Gauss-Seidel iteration damps high-frequency error components effectively, which makes it well suited as a smoother within the nonlinear multigrid method that follows.

In this section, we introduce the standard nonlinear FAS multigrid method for solving our discretized nonlinear problem. As a highly efficient iterative algorithm, the FAS multigrid method is a nonlinear generalization of the linear multigrid algorithm, which typically has optimal computational complexity for linear elliptic PDEs. The FAS multigrid method provides a powerful approach for handling nonlinear equations without the need for the global linearization required by Newton’s method. Unlike with Newton’s method, there is typically no need to initialize the solver with a very good initial guess. Below, we briefly describe the FAS multigrid algorithm as well as its FMG version based on the standard textbooks [5, 8, 38, 33].

For a general nonlinear system

$N_l(u_l) = f_l$

that is discretized using the fine mesh size $h_l$ at level $l$, one V-cycle FAS multigrid iteration $u_l \leftarrow \mathrm{FAS}(l, u_l, f_l)$ is recursively defined in Algorithm 1 [8, 38, 33].

Algorithm 1 FAS multigrid V-cycle iteration $u_l \leftarrow \mathrm{FAS}(l, u_l, f_l)$ (with $l \ge l_c$)
Steps
IF ($l = l_c$)
(1)           Approximately solve: $N_{l_c}(u_{l_c}) = f_{l_c}$
ELSE
(2)           Pre-smooth $\nu_1$ times: $u_l \leftarrow \mathrm{smooth}^{\nu_1}(u_l, f_l)$
(3)           Restrict residual: $r_{l-1} = I_l^{l-1}(f_l - N_l(u_l))$
(4)           Initialize coarse guess: $u_{l-1} = \hat{I}_l^{l-1} u_l$, $\bar{u}_{l-1} = u_{l-1}$
(5)           Define coarse r.h.s.: $f_{l-1} = N_{l-1}(u_{l-1}) + r_{l-1}$
(6)           Recursion: $u_{l-1} \leftarrow \mathrm{FAS}(l-1, u_{l-1}, f_{l-1})$
(7)           Prolongation: $e_l = I_{l-1}^{l}(u_{l-1} - \bar{u}_{l-1})$
(8)           Correction: $u_l \leftarrow u_l + e_l$
(9)           Post-smooth $\nu_2$ times: $u_l \leftarrow \mathrm{smooth}^{\nu_2}(u_l, f_l)$
ENDIF
(10) RETURN $u_l$.
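The structure of Algorithm 1 can be mirrored on a simple 1D model problem. The Python sketch below (the paper's own experiments use MATLAB) applies FAS to $-u'' + u^3 = f$ with homogeneous Dirichlet data, using a nonlinear Gauss-Seidel smoother, full-weighting residual restriction, injection for the approximation, and linear interpolation; the model problem and all names are illustrative stand-ins for the 3D Monge-Ampère setting.

```python
import numpy as np

def N_op(u, h):
    """Model nonlinear operator N(u) = -u'' + u^3 (centered differences);
    residual entries at the two boundary points are left at zero."""
    r = np.zeros_like(u)
    r[1:-1] = (2*u[1:-1] - u[:-2] - u[2:]) / h**2 + u[1:-1]**3
    return r

def smooth(u, f, h, sweeps):
    """Nonlinear Gauss-Seidel: a couple of scalar Newton steps per point."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            for _ in range(2):
                g = (2*u[i] - u[i-1] - u[i+1]) / h**2 + u[i]**3 - f[i]
                u[i] -= g / (2.0/h**2 + 3.0*u[i]**2)
    return u

def restrict(v):
    """Full-weighting restriction to the next coarser grid."""
    return np.concatenate(([v[0]],
                           0.25*v[1:-2:2] + 0.5*v[2:-1:2] + 0.25*v[3::2],
                           [v[-1]]))

def prolong(v):
    """Linear interpolation to the next finer grid."""
    w = np.zeros(2*len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5*(v[:-1] + v[1:])
    return w

def fas_vcycle(u, f, h, level, nu=2):
    """One FAS V-cycle mirroring Algorithm 1 (level 0 = coarsest)."""
    if level == 0:
        return smooth(u, f, h, 20)               # (1) coarsest solve
    u = smooth(u, f, h, nu)                      # (2) pre-smooth
    r_c = restrict(f - N_op(u, h))               # (3) restrict residual
    u_c = u[::2].copy()                          # (4) coarse guess by injection
    f_c = N_op(u_c, 2*h) + r_c                   # (5) coarse right-hand side
    v_c = fas_vcycle(u_c.copy(), f_c, 2*h, level - 1, nu)   # (6) recursion
    u = u + prolong(v_c - u_c)                   # (7)-(8) correction
    return smooth(u, f, h, nu)                   # (9) post-smooth
```

Starting from a zero initial guess, a handful of V-cycles drive the error down to the discretization level for a manufactured smooth solution.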

In a single FAS V-cycle iteration, the fine-grid solution first undergoes a small number of smoothing iterations. Following this, the solution and residual are restricted to a coarser grid (level $l-1$) using two (possibly different) restriction operators ($\hat{I}_l^{l-1}$, $I_l^{l-1}$). A new V-cycle iteration is then performed on this coarser level, with the procedure proceeding recursively until the grid reaches the coarsest level $l_c$. At the coarsest level, the underlying problem size has become so small that it can be (approximately) solved using a very small number of Gauss-Seidel iterations.

Computing the solution on the coarser level leads to a coarse approximation of the solution. A prolongation operator ($I_{l-1}^{l}$) transfers this approximation back to the fine grid, which provides a coarse-grid correction to the fine-grid solution. The fine-grid solution is further improved using a small number of post-smoothing iterations.

As suggested in [38], the approximation restriction operator $\hat{I}_l^{l-1}$ is chosen as straight injection. Depending on the application, there are many possible choices for $I_l^{l-1}$ and $I_{l-1}^{l}$. For better computational efficiency, we choose the residual restriction operator $I_l^{l-1}$ to be half-weighting averaging [38], whose 3D stencil gives weight $1/2$ to the center point and $1/12$ to each of its six face neighbors, and the prolongation operator $I_{l-1}^{l}$ to be trilinear interpolation [38], whose stencil assigns weights $1$, $1/2$, $1/4$, and $1/8$ to the coincident, face-, edge-, and corner-neighboring fine-grid points, respectively.

Other types of $I_l^{l-1}$ and $I_{l-1}^{l}$ are also available when better accuracy is required.
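The half-weighting restriction described above can be sketched in Python (NumPy); this is an illustrative fragment, not the paper's MATLAB implementation. It assumes a fine grid of size $2m+1$ per dimension, takes boundary values by injection, and applies the weights $1/2$ (center) and $1/12$ (six face neighbors) at coarse interior points.

```python
import numpy as np

def half_weighting_3d(v):
    """3D half-weighting restriction: weight 1/2 at the coarse point's
    fine-grid image and 1/12 at each of its six face neighbors
    (weights sum to one); boundary values by straight injection."""
    w = v[::2, ::2, ::2].copy()                      # injection everywhere
    c = v[2:-1:2, 2:-1:2, 2:-1:2]                    # coarse interior centers
    w[1:-1, 1:-1, 1:-1] = 0.5*c + (1.0/12.0)*(
        v[1:-2:2, 2:-1:2, 2:-1:2] + v[3::2, 2:-1:2, 2:-1:2] +
        v[2:-1:2, 1:-2:2, 2:-1:2] + v[2:-1:2, 3::2, 2:-1:2] +
        v[2:-1:2, 2:-1:2, 1:-2:2] + v[2:-1:2, 2:-1:2, 3::2])
    return w
```

Because the weights sum to one and the face neighbors enter in symmetric pairs, the operator reproduces any linear field exactly on the coarse grid.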

We also note that a straightforward definition of the coarse grid operator is possible by simply using the discretized equations with a coarser step size . This is the most common choice for a fully structured mesh. We point out that the so-called Galerkin coarse grid operator, which constitutes a core component of the popular linear algebraic multigrid, is not easily used with the nonlinear FAS multigrid framework [38].

Figure 1: The smoothing effect of our nonlinear Gauss-Seidel iterations with Example 1. Left: the initial oscillating error. Right: after 2 Gauss-Seidel iterations, the approximation error becomes smooth, with a much smaller maximum norm. Both error surface plots are snapshots at a fixed slice for better visualization.

The last but most crucial component of the method is an effective smoother (the routine smooth in Algorithm 1), whose major function is to eliminate the high-frequency components of the approximation errors. As a standalone solver, the chosen smoothing iteration may converge very slowly as the mesh is refined. This is the case for the nonlinear Gauss-Seidel iteration developed in the previous section. However, because it damps the high-frequency components of the error, it serves as an effective smoother in Algorithm 1. This smoothing property is numerically illustrated in Figure 1, which shows that two iterations are sufficient to dramatically smooth out high-frequency errors. This is unsurprising, given that the classical linear Gauss-Seidel iteration is widely recognized as a benchmark smoother in linear multigrid methods. In practice, one or two pre- and post-smoothing iterations typically give the best overall performance. More pre- and post-smoothing iterations could be helpful when the smoothing effect is weak or degraded by possible singular solutions.

The efficiency of the above FAS multigrid iterative solver can be further improved when its initial guess is derived from the idea of nested iteration, which leads to the full multigrid (FMG) algorithm [8, 38, 33]. The FMG algorithm based on the above FAS V-cycle multigrid iteration is recursively defined in Algorithm 2.

Algorithm 2 FMG-FAS algorithm $u_l \leftarrow \mathrm{FMG}(l, f_l)$ (with $l \ge l_c$)
Steps
IF ($l = l_c$)
(1)           Approximately solve: $N_{l_c}(u_{l_c}) = f_{l_c}$
ELSE
(2)           Restriction: $f_{l-1} = I_l^{l-1} f_l$
(3)           Recursion: $u_{l-1} \leftarrow \mathrm{FMG}(l-1, f_{l-1})$
(4)           Cubic interpolation: $u_l = \Pi_{l-1}^{l} u_{l-1}$
(5)           FAS V-cycle iteration: $u_l \leftarrow \mathrm{FAS}(l, u_l, f_l)$
ENDIF
(6) RETURN $u_l$.

In Algorithm 2, no initial guess is required at the finest level, but we do need an initial guess at the coarsest level if we are going to solve the coarsest nonlinear system using our Gauss-Seidel iteration. Fortunately, the coarsest nonlinear system has a very small dimension, with effectively one unknown, the remaining values being determined by the Dirichlet boundary conditions. Thus it is easy to obtain a good approximate solution no matter what initial guess is used. In our implementation, we simply take the initial guess at the coarsest level to be identically zero. Often, a cubic interpolation is used in the FMG algorithm so that one full FMG iteration can deliver an approximation with the desired discretization accuracy. This holds if the underlying FAS V-cycle multigrid iteration converges satisfactorily, as is the case for Poisson equation solvers [38]. Due to the strong nonlinearity of our problem, we do not expect one FMG sweep to be sufficient; however, it does provide a very good initial guess for the FAS V-cycle multigrid iterations. In both algorithms, the coarsest-level problem is solved approximately by performing just one nonlinear Gauss-Seidel iteration.

4 Numerical examples

In this section, we test our FAS multigrid algorithm using several examples available in the literature. All simulations are implemented using MATLAB on a laptop PC with an Intel Core i3-3120M CPU@2.50GHz and 12GB RAM. The CPU time (in seconds) is estimated by MATLAB's built-in timing functions tic/toc. We apply one FMG iteration as an initialization step. Due to the nonlinearity, this initialization step is necessary for ensuring mesh-independent convergence of the FAS algorithm. For each FAS V-cycle, we perform only two pre-smoothing and two post-smoothing Gauss-Seidel iterations. The coarsest-level ($l_c$) problem is approximately solved using only one Gauss-Seidel iteration. Numerical simulations indicate that the FMG-FAS algorithm has almost the same efficiency when more accurate solvers are used at the coarsest level.

At the $k$-th iteration, let $u^k$ denote the computed approximation of the solution and $r^k$ be the corresponding residual vector. We use a pre-specified reduction in the relative residuals as the stopping criterion,

$\dfrac{\|r^k\|_2}{\|r^0\|_2} \le tol,$

where $\|\cdot\|_2$ denotes the discrete $L^2$ norm. Our numerical simulations show that the chosen tolerance is sufficient to achieve the level of discretization error. Let $u^h$ denote the computed solution at the last iteration. We first compute the infinity norm of the approximation error,

$E_h = \|u^h - u\|_\infty,$

and then estimate the experimental order of accuracy by computing the logarithmic ratio of the approximation errors between two successively refined meshes, i.e.,

$\text{Order} = \log_2\left(\dfrac{E_h}{E_{h/2}}\right),$

which should be close to two provided that the scheme delivers second-order accuracy.
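The order estimate can be computed directly from errors on two successive meshes; a one-line Python helper (illustrative, with a hypothetical name):

```python
import math

def observed_order(err_coarse, err_fine):
    """Experimental order of accuracy from errors on meshes h and h/2:
    order = log2(E_h / E_{h/2})."""
    return math.log2(err_coarse / err_fine)
```

For instance, the first two Example 1 errors in Table 1 (4.5e-06 and 1.1e-06) give an observed order close to 2.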

In most cases, only one FMG iteration is needed for convergence. In special cases, we may need a few extra multigrid V-cycle iterations to fulfill the above convergence criterion. In 3D problems, using two pre- and two post-smoothing steps, the computational cost of one FAS V-cycle is about $4 \cdot 8/7 \approx 4.6$ times the cost of a single Gauss-Seidel iteration on the finest mesh, and one FMG-FAS iteration costs roughly five times the cost of a single Gauss-Seidel iteration on the finest mesh if we ignore the additional cost of restriction and interpolation.
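These cost estimates follow from a geometric series: in 3D, each coarsening reduces the work per sweep by a factor of 8. The following Python fragment (illustrative) tallies the smoothing work in units of one fine-grid Gauss-Seidel sweep, ignoring transfer costs.

```python
def vcycle_cost_3d(nu_pre=2, nu_post=2, levels=30):
    """Work of one 3D FAS V-cycle in units of one fine-grid Gauss-Seidel
    sweep: (nu_pre + nu_post) sweeps per level, each level costing 8^{-l}
    of a fine-grid sweep. The sum converges to (nu_pre+nu_post)*8/7."""
    per_level = nu_pre + nu_post
    return per_level * sum(8.0**(-l) for l in range(levels))

def fmg_cost_3d(nu_pre=2, nu_post=2, levels=30):
    """FMG performs one V-cycle per level, with a level-l V-cycle costing
    8^{-l} of a finest-grid V-cycle, giving another factor of 8/7."""
    return vcycle_cost_3d(nu_pre, nu_post, levels) * sum(8.0**(-l) for l in range(levels))
```

With $\nu_1 = \nu_2 = 2$, this gives about $32/7 \approx 4.6$ sweeps per V-cycle and about $256/49 \approx 5.2$ sweeps per FMG iteration, consistent with the "about five Gauss-Seidel iterations" observed in subsection 4.2.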

When we use the nonlinear Gauss-Seidel iteration as a standalone solver, it is challenging to pick a good initial guess such that the iteration converges quickly. For the purpose of simple comparison, we always choose a slightly perturbed analytic solution $u$ as the initial guess for the nonlinear Gauss-Seidel solver. Even using this impractical initial guess, the Gauss-Seidel solver cannot compete with the efficiency of the FMG-FAS method. The 3D numerical comparison between the standalone Gauss-Seidel solver and the FMG-FAS method is given in subsection 4.2.

4.1 Two dimensional examples

We begin by applying the corresponding 2D version of our FMG-FAS method, using the nonlinear Gauss-Seidel iteration developed in [2] as the smoother. The scheme involves a 9-point stencil, and we test the Gauss-Seidel smoother using the red-black ordering and the plain lexicographic ordering, respectively. The FMG-FAS multigrid method delivers far better computational efficiency than the results of [2], which used the Gauss-Seidel iteration as a standalone solver. In particular, the computation time scales almost linearly with respect to the degrees of freedom, which is verified by a consistent fourfold increase in the CPU time (in seconds) as the step size is halved.

Example 1. Let . Choose

such that an analytic solution is

Example 2. Let . Choose

such that an exact solution is

Example 3. Let . Choose

such that an exact solution is

        Example 1                        Example 2                        Example 3
N       Error    Order  Iter  CPU (s)   Error    Order  Iter  CPU (s)   Error    Order  Iter  CPU (s)
128     4.5e-06  --     1     0.1       3.8e-05  --     1     0.1       8.9e-03  --     2     0.1
256     1.1e-06  2.0    1     0.1       1.3e-05  1.5    1     0.1       6.4e-03  0.5    2     0.2
512     2.8e-07  2.0    1     0.3       4.7e-06  1.5    1     0.3       4.5e-03  0.5    1     0.3
1024    7.0e-08  2.0    1     1.0       1.7e-06  1.5    1     1.1       3.2e-03  0.5    1     1.1
2048    1.7e-08  2.0    1     3.6       5.9e-07  1.5    1     4.0       2.3e-03  0.5    1     3.8
4096    4.3e-09  2.0    1     16.2      2.1e-07  1.5    1     16.5      1.6e-03  0.5    1     18.0
8192    1.0e-09  2.1    1     67.1      7.4e-08  1.5    1     69.1      1.1e-03  0.5    1     70.3
Table 1: Results for Examples 1, 2, and 3 with the FMG-FAS algorithm (red-black ordering).
        Example 1                        Example 2                        Example 3
N       Error    Order  Iter  CPU (s)   Error    Order  Iter  CPU (s)   Error    Order  Iter  CPU (s)
128     5.1e-06  --     1     0.1       4.3e-05  --     1     0.1       8.9e-03  --     3     0.1
256     1.3e-06  2.0    1     0.2       1.5e-05  1.5    1     0.3       6.3e-03  0.5    2     0.2
512     3.3e-07  2.0    1     0.8       5.4e-06  1.5    1     0.9       4.4e-03  0.5    1     0.8
1024    8.2e-08  2.0    1     3.2       1.9e-06  1.5    1     3.27      3.2e-03  0.5    1     3.1
2048    2.1e-08  2.0    1     12.7      6.8e-07  1.5    1     12.9      2.2e-03  0.5    1     12.6
4096    5.1e-09  2.0    1     53.2      2.4e-07  1.5    1     54.6      1.6e-03  0.5    1     53.5
8192    1.3e-09  2.0    1     245.6     8.5e-08  1.5    1     259.4     1.1e-03  0.5    1     250.7
Table 2: Results for Examples 1, 2, and 3 with the FMG-FAS algorithm (lexicographic ordering).

For simplicity, we combine the results of Examples 1, 2, and 3 in Tables 1 and 2. The CPU times confirm the desired linear time complexity of our FMG-FAS algorithm. The non-smoothness of the solutions in Examples 2 and 3 degrades the second-order accuracy of the approximations to approximately 1.5 and 0.5, respectively. Our algorithm seems able to handle this type of mildly singular solution effectively and efficiently. The results also show that the red-black ordering has slightly better approximation accuracy as well as superior computational efficiency, due to the faster vectorization in the MATLAB implementation.

Although we notice in our numerical experiments that only one FMG iteration is sufficient to obtain the same approximation errors in the infinity norm, for more singular solutions additional FAS V-cycle iterations may be needed (e.g., in Example 3) to fulfill the given stopping criterion based on residual norm reduction in the discrete $L^2$ norm. Nevertheless, in all non-degenerate ($f > 0$) examples we considered, we recovered linear computational complexity as the mesh was refined. In the fully degenerate setting, where $f$ vanishes, the ellipticity of the PDE breaks down, and the multigrid approach may become less effective in accelerating the convergence of the Gauss-Seidel iteration. In that case, the deterioration of our algorithm is associated with the possible failure of the corresponding finite difference approximations, which are not provably convergent and can break down in the singular case.

4.2 Three dimensional examples

Next we turn our attention to numerically solving the three-dimensional Monge-Ampère equation using the Gauss-Seidel iteration and the FMG-FAS method developed in this paper.

Example 4 [18]. Let . Choose

such that the exact solution is

In Table 3 we report the relative residual norms, infinity norms of errors, iteration numbers, and CPU times of the FMG-FAS algorithm and the Gauss-Seidel solver, respectively. The 'Error' and 'Order' columns show that the central finite difference discretization indeed achieves the expected second-order accuracy in the maximum norm. The 'Iter' column implies that our FMG-FAS algorithm possesses mesh-independent convergence. More specifically, 'Iter' = 1 means that only one FMG iteration is sufficient to fulfill our stopping criterion; no further V-cycles are necessary. The achieved relative residual norms (column 'RelRes') turn out to be much lower than the tolerance, thanks to the fast convergence of the FMG algorithm. A simple calculation confirms that one FMG iteration indeed costs about five Gauss-Seidel iterations: on the finest mesh, multiplying the measured time of a single Gauss-Seidel iteration by five predicts the measured FMG time almost exactly. The 'CPU' column indicates that our FMG-FAS algorithm has roughly linear time complexity, since the CPU time increases by a factor of eight each time the mesh size is halved.

On the other hand, the nonlinear Gauss-Seidel iteration as a standalone solver requires an increasing number of iterations as the mesh size decreases, which is expected. Even when started from an artificial initial guess that is impractically close to the desired true solution, the nonlinear Gauss-Seidel iteration still does not produce sufficiently accurate approximations under the same stopping criterion (compare the row with ). It is also interesting to note that the relative residuals (column ‘RelRes’) of the Gauss-Seidel solver are much larger than those of the FMG-FAS algorithm.

For comparison, Table 4 also lists the results reported in [19] and [7], where a wide-stencil hybrid finite difference (FD) solver and two finite element (FE) approximations were used for the same test problem. We mention that the results in [7] made use of a Newton solver, which required a good initial guess obtained by the vanishing moment method [16]. Although we cannot fairly compare the CPU times directly, as they were measured on different machines, it is clear that the FMG-FAS method is the only one that scales linearly with the number of degrees of freedom. The accuracy of the FMG-FAS approach is also competitive. As a whole, our FMG-FAS method appears superior to both the wide-stencil finite difference solvers and the finite element approximations for this type of simple problem with a sufficiently smooth solution.

We also mention that when , second-order accuracy is no longer observed. This is due to the limitations of machine precision and round-off errors during the computation; note that the discretized Monge-Ampère operator involves division by  when . We could circumvent this using higher-precision arithmetic. However, the global error is now , which is small. The key point is that we are able to resolve the data, which necessitates solving on large grids, and the resulting accuracy is sufficient for the applications we have in mind.
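The ‘Order’ columns in the tables can be reproduced from successive infinity-norm errors via the standard estimate order = log2(e_{2h} / e_h). A small helper illustrates this with the FMG-FAS errors of Table 3; the computed values differ slightly from the tabulated orders because the published errors are rounded to two digits.

```python
import math

def observed_orders(errors):
    """Observed convergence order between successive mesh halvings:
    order_k = log2(e_{k-1} / e_k)."""
    return [math.log2(errors[k-1] / errors[k]) for k in range(1, len(errors))]

# infinity-norm errors from the FMG-FAS column of Table 3
orders = observed_orders([1.2e-3, 2.7e-4, 5.7e-5, 1.4e-5, 6.2e-6])
print([round(p, 1) for p in orders])   # [2.2, 2.2, 2.0, 1.2]
```

The last entry dropping below 2 on the finest grid is consistent with the round-off effects discussed above.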

      |                   FMG-FAS                   |             Gauss-Seidel
  N       DOFs | RelRes   Error    Order Iter CPU(s) | RelRes   Error    Order Iter CPU(s)
  8        216 | 3.1e-07  1.2e-03   --    1     0.1  | 9.3e-07  1.2e-03   --    72    0.4
 16       2744 | 9.5e-09  2.7e-04   2.1   1     0.2  | 9.9e-07  3.0e-04   2.0  270   11.5
 32      27000 | 2.1e-10  5.7e-05   2.3   1     2.0  | 1.0e-06  8.1e-05   1.9  867  324.6
 64     238328 | 4.1e-12  1.4e-05   2.0   1    16.9  |   --       --      --    --    --
128    2000376 | 5.2e-13  6.2e-06   1.2   1   136.9  |   --       --      --    --    --
Table 3: Results for Example 4 with the FMG-FAS algorithm and the Gauss-Seidel solver.
      Wide-stencil FD [19]      |        FE (quadratic) [7]       |       FE (cubic) [7]
  N    DOFs    Error     CPU(s) |  h      DOFs    Error    CPU(s) |   DOFs    Error     CPU(s)
  8     343   1.51e-02    0.04  | 1/4     1581   9.12e-03    3.89 |   4952   4.14e-04    28.78
 12    1331   1.40e-02    0.10  | 1/8    12611   2.29e-03   38.55 |  40985   5.71e-05   140.52
 16    3375   1.29e-02    0.71  | 1/12   42798   4.49e-04  140.89 | 140861   7.52e-06   874.57
 22    9261   1.21e-02    6.72  | 1/16   99436   2.73e-04  355.78 | 329244   6.36e-06  2758.98
 32   29791   1.11e-02   86.63  | 1/20  195110   1.21e-04  803.47 |    --       --        --
Table 4: Reported results for Example 4 from [19] and [7].

Example 5. Let . Choose

such that an exact solution is

In Table 5 we report the relative residual norms, infinity norms of the errors, iteration counts, and CPU times of the FMG-FAS algorithm and the Gauss-Seidel solver, respectively. Again, we observe similarly excellent performance of the FMG-FAS algorithm compared with the nonlinear Gauss-Seidel solver.

      |                   FMG-FAS                   |             Gauss-Seidel
  N   | RelRes   Error    Order Iter CPU(s)         | RelRes   Error    Order Iter CPU(s)
  8   | 6.8e-08  1.7e-02   --    2     0.1          | 9.2e-07  1.7e-02   --    82    0.4
 16   | 2.1e-08  3.3e-03   2.4   1     0.2          | 9.8e-07  4.3e-03   2.0  296   12.9
 32   | 2.7e-10  8.1e-04   2.0   1     2.0          | 1.0e-06  1.1e-03   2.0  954  356.2
 64   | 3.7e-12  2.1e-04   2.0   1    16.4          |   --       --      --    --    --
128   | 4.8e-13  5.2e-05   2.0   1   134.1          |   --       --      --    --    --
Table 5: Results for Example 5 with the FMG-FAS algorithm and the Gauss-Seidel solver.

Example 6 [7]. Let . Choose

such that an analytic solution is

In Table 6 we report the relative residual norms, infinity norms of the errors, iteration counts, and CPU times of the FMG-FAS algorithm and the Gauss-Seidel solver, respectively. In contrast to the previous two examples, the solution of this problem is not smooth, as there is a singularity at the origin. Nevertheless, our central finite difference scheme still approximates the solution with reasonable accuracy in the infinity norm. As before, our FMG-FAS algorithm greatly outperforms the nonlinear Gauss-Seidel solver.

      |                   FMG-FAS                   |             Gauss-Seidel
  N   | RelRes   Error    Order Iter CPU(s)         | RelRes   Error    Order Iter CPU(s)
  8   | 2.7e-07  5.3e-04   --    1     0.1          | 9.7e-07  4.0e-04   --    86    0.4
 16   | 1.1e-08  2.1e-04   1.4   1     0.2          | 9.9e-07  1.4e-04   1.5  312   13.2
 32   | 4.8e-10  7.4e-05   1.5   1     2.0          | 9.9e-07  5.1e-05   1.5  925  345.8
 64   | 2.0e-11  2.6e-05   1.5   1    16.4          |   --       --      --    --    --
128   | 1.2e-12  9.3e-06   1.5   1   133.9          |   --       --      --    --    --
Table 6: Results for Example 6 with the FMG-FAS algorithm and the Gauss-Seidel solver.

Example 7 [18]. Let . Choose

such that an analytic solution is

In Table 7 we report the relative residual norms, infinity norms of the errors, iteration counts, and CPU times of the FMG-FAS algorithm and the Gauss-Seidel solver, respectively. This example is more challenging because the gradient is unbounded at the boundary. The ‘Error’ column shows that our scheme achieves roughly  accuracy in the infinity norm, which is consistent with the error observed for standard discretizations of the two-dimensional version of this example [2]. Our FMG-FAS algorithm clearly requires significantly less CPU time than the Gauss-Seidel solver. We note that the slightly better accuracy achieved by the Gauss-Seidel solver is due to the unrealistic initial guess used.

      |                   FMG-FAS                   |             Gauss-Seidel
  N   | RelRes   Error    Order Iter CPU(s)         | RelRes   Error    Order Iter CPU(s)
  8   | 1.7e-07  1.4e-03   --    1     0.1          | 8.9e-07  1.4e-03   --    75    0.4
 16   | 2.1e-08  1.4e-03   0.0   1     0.2          | 9.7e-07  1.4e-03   0.0  266   12.2
 32   | 2.7e-08  4.3e-03  -1.7   1     2.0          | 9.9e-07  1.1e-03   0.3  854  320.2
 64   | 4.9e-09  2.2e-03   1.0   1    16.3          |   --       --      --    --    --
128   | 1.7e-09  1.6e-03   0.4   1   133.1          |   --       --      --    --    --
Table 7: Results for Example 7 with the FMG-FAS algorithm and the Gauss-Seidel solver.

5 Conclusions

In this paper, we presented a fast method for solving the three-dimensional elliptic Monge-Ampère equation, focusing on the Dirichlet problem and smooth solutions. We combined a nonlinear Gauss-Seidel iterative method with a standard centered difference discretization. The Gauss-Seidel iteration was then used as an effective smoother, in combination with a nonlinear full approximation scheme (FAS) multigrid method. Two- and three-dimensional numerical examples were computed to demonstrate the computational efficiency of the proposed multigrid method, and comparisons were made with other available approaches. In particular, we showed that the computational cost of the method scales approximately linearly. Computations on a recent laptop allowed for grid sizes of  in less than three minutes. More importantly, since the method scales well, implementation on more powerful computers will lead to a feasible approach to solving problems with the larger resolutions that occur in, for example, three-dimensional image registration.

Future work will extend these ideas to filtered almost-monotone finite difference schemes in order to build solvers that apply to the singular solutions that can occur in applications. We would also like to extend the method to the optimal transportation problem, with applications to three-dimensional image registration and parameter identification.

Acknowledgments

The authors would like to thank the two anonymous referees for their valuable comments and suggestions that have greatly contributed to improving the original version of this manuscript. The first author gratefully acknowledges the support and hospitality provided by the IMA during his participation in the IMA’s New Directions Short Course on “Topics on Control Theory”, which took place from May 27 to June 13, 2014.

References

  • [1] G. Barles and P. E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations. Asymptotic Anal., 4(3):271–283, 1991.
  • [2] J.-D. Benamou, B. D. Froese, and A. M. Oberman. Two numerical methods for the elliptic Monge-Ampère equation. M2AN Math. Model. Numer. Anal., 44(4):737–758, 2010.
  • [3] J.-D. Benamou, B. D. Froese, and A. M. Oberman. Numerical solution of the optimal transportation problem using the Monge-Ampère equation. J. Comput. Phys., 260:107–126, 2014.
  • [4] A. Brandt. Multi-level adaptive solutions to boundary-value problems. Math. Comp., 31(138):333–390, 1977.
  • [5] A. Brandt and O. Livne. Multigrid Techniques: 1984 Guide with Applications to Fluid Dynamics, Revised Edition. Classics in Applied Mathematics. SIAM, 2011.
  • [6] S. C. Brenner, T. Gudi, M. Neilan, and L.-y. Sung. C^0 penalty methods for the fully nonlinear Monge-Ampère equation. Math. Comp., 80(276):1979–1995, 2011.
  • [7] S. C. Brenner and M. Neilan. Finite element approximations of the three dimensional Monge-Ampère equation. ESAIM Math. Model. Numer. Anal., 46(5):979–1001, 2012.
  • [8] W. L. Briggs, V. E. Henson, and S. F. McCormick. A multigrid tutorial. SIAM, Philadelphia, PA, 2000.
  • [9] L. Caffarelli, L. Nirenberg, and J. Spruck. The Dirichlet problem for nonlinear second-order elliptic equations. I. Monge-Ampère equation. Comm. Pure Appl. Math., 37(3):369–402, 1984.
  • [10] L. Chacón, G. L. Delzanno, and J. M. Finn. Robust, multidimensional mesh-motion based on Monge-Kantorovich equidistribution. J. Comput. Phys., 230(1):87–103, 2011.
  • [11] Y. Chen and S. R. Fulton. An adaptive continuation-multigrid method for the balanced vortex model. J. Comput. Phys., 229(6):2236–2248, 2010.
  • [12] N. Chumchob and K. Chen. A robust multigrid approach for variational image registration models. J. Comput. Appl. Math., 236(5):653–674, 2011.
  • [13] G. L. Delzanno, L. Chacón, J. M. Finn, Y. Chung, and G. Lapenta. An optimal robust equidistribution method for two-dimensional grid adaptation based on Monge–Kantorovich optimization. Journal of Computational Physics, 227(23):9841–9864, 2008.
  • [14] B. Engquist and B. D. Froese. Application of the Wasserstein metric to seismic signals. Communications in Mathematical Sciences, 12(5), 2014.
  • [15] X. Feng, R. Glowinski, and M. Neilan. Recent developments in numerical methods for fully nonlinear second order partial differential equations. SIAM Rev., 55(2):205–267, 2013.
  • [16] X. Feng and M. Neilan. Mixed finite element methods for the fully nonlinear Monge-Ampère equation based on the vanishing moment method. SIAM J. Numer. Anal., 47(2):1226–1250, 2009.
  • [17] X. Feng and M. Neilan. Vanishing moment method and moment solutions for fully nonlinear second order partial differential equations. Journal of Scientific Computing, 38(1):74–98, 2009.
  • [18] B. D. Froese and A. M. Oberman. Convergent finite difference solvers for viscosity solutions of the elliptic Monge-Ampère equation in dimensions two and higher. SIAM J. Numer. Anal., 49(4):1692–1714, 2011.
  • [19] B. D. Froese and A. M. Oberman. Fast finite difference solvers for singular solutions of the elliptic Monge-Ampère equation. J. Comput. Phys., 230(3):818–834, 2011.
  • [20] B. D. Froese and A. M. Oberman. Convergent filtered schemes for the Monge-Ampère partial differential equation. SIAM Journal on Numerical Analysis, 51(1):423–444, 2013.
  • [21] M. M. Gupta and J. Zhang. High accuracy multigrid solution of the 3D convection-diffusion equation. Appl. Math. Comput., 113(2-3):249–274, 2000.
  • [22] E. Haber and J. Modersitzki. Image registration with guaranteed displacement regularity. International Journal of Computer Vision, 71(3):361–372, 2007.
  • [23] E. Haber, G. Pryor, J. Melonakos, A. Tannenbaum, et al. 3D nonrigid registration via optimal mass transport on the GPU. Medical Image Analysis, 13(6):931–940, 2009.
  • [24] E. Haber, T. Rehman, and A. Tannenbaum. An efficient numerical method for the solution of the optimal mass transfer problem. SIAM Journal on Scientific Computing, 32(1):197–211, 2010.
  • [25] S. Haker, L. Zhu, A. Tannenbaum, and S. Angenent. Optimal mass transport for registration and warping. International Journal of Computer Vision, 60(3):225–240, 2004.
  • [26] R. LeVeque. Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems. SIAM, Philadelphia, PA, USA, 2007.
  • [27] G. Loeper and F. Rapetti. Numerical solution of the Monge-Ampère equation by a Newton’s algorithm. C. R. Math. Acad. Sci. Paris, 340(4):319–324, 2005.
  • [28] T. S. Motzkin and W. Wasow. On the approximation of linear elliptic differential equations by difference equations with positive coefficients. Journal of Mathematics and Physics, 31(1):253–259, 1952.
  • [29] M. Neilan. Quadratic finite element approximations of the Monge-Ampère equation. Journal of Scientific Computing, 54(1):200–226, 2013.
  • [30] A. M. Oberman. Wide stencil finite difference schemes for the elliptic Monge-Ampère equation and functions of the eigenvalues of the Hessian. Discrete Contin. Dyn. Syst. Ser. B, 10(1):221–238, 2008.
  • [31] J. M. Ortega and W. C. Rheinboldt. Iterative solution of nonlinear equations in several variables. SIAM, Philadelphia, PA, 2000.
  • [32] C. R. Prins, J. H. M. Ten Thije Boonkkamp, J. van Roosmalen, W. L. Ijzerman, and T. W. Tukker. A Monge-Ampère-solver for free-form reflector design. SIAM J. Sci. Comput., 36(3):B640–B660, 2014.
  • [33] Y. Saad. Iterative methods for sparse linear systems. SIAM, Philadelphia, PA, 2003.
  • [34] L.-P. Saumier, M. Agueh, and B. Khouider. An efficient numerical algorithm for the optimal transport problem with periodic densities. IMA J. Appl. Math., 80(1):135–157, 2015.
  • [35] A. K. Soin. Multigrid for Elliptic Monge-Ampère Equation. Master’s thesis, University of Waterloo, Waterloo, Ontario, Canada, 2011.
  • [36] M. M. Sulman, J. F. Williams, and R. D. Russell. An efficient approach for the numerical solution of the Monge-Ampère equation. Appl. Numer. Math., 61(3):298–307, 2011.
  • [37] E. Tadmor. A review of numerical methods for nonlinear partial differential equations. Bull. Amer. Math. Soc. (N.S.), 49(4):507–554, 2012.
  • [38] U. Trottenberg, C. W. Oosterlee, and A. Schüller. Multigrid. Academic Press Inc., San Diego, CA, 2001.
  • [39] N. S. Trudinger and X.-J. Wang. The Monge-Ampère equation and its geometric applications. Handbook of Geometric Analysis, 1:467–524, 2008.
  • [40] J. Zhang. Fast and high accuracy multigrid solution of the three-dimensional Poisson equation. J. Comput. Phys., 143(2):449–461, 1998.
  • [41] V. Zheligovsky, O. Podvigina, and U. Frisch. The Monge-Ampère equation: Various forms and numerical solution. Journal of Computational Physics, 229(13):5043–5061, 2010.