Fast alternating bi-directional preconditioner for the 2D high-frequency Lippmann-Schwinger equation


Abstract

This paper presents a fast iterative solver for the Lippmann-Schwinger equation for high-frequency waves scattered by a smooth medium with a compactly supported inhomogeneity. The solver is based on the sparsifying preconditioner [63] and a domain decomposition approach similar to the method of polarized traces [64]. The iterative solver has two levels: an outer level, in which a sparsifying preconditioner for the Lippmann-Schwinger equation is constructed, and an inner level, in which the resulting sparsified system is solved rapidly using an iterative solver preconditioned with a bi-directional matrix-free variant of the method of polarized traces. The complexity of the construction and application of the preconditioner is and respectively, where is the number of degrees of freedom. Numerical experiments in 2D indicate that the number of iterations in both levels depends weakly on the frequency, resulting in a method with an overall complexity.

1Introduction

In this paper we present a fast solver for the high-frequency Lippmann-Schwinger equation in 2D. Let be the frequency of a time-harmonic wave propagating in a medium with wave-speed denoted by , where . Let be a rectangular domain and assume that the refraction index is constant up to a compactly supported perturbation. The refraction index is denoted by , where . In this case, , the wave scattered by the perturbation from an incident wave , is the solution to the Helmholtz equation

where satisfies the Sommerfeld radiation condition,

and the incoming field satisfies the homogeneous Helmholtz equation

in some neighborhood of containing .

From eqs. and it follows that the unknown field satisfies

Convolving with

the Green’s function for the 2D free-space Helmholtz operator, results in the Lippmann-Schwinger equation:

We chose among the different formulations of the Lippmann-Schwinger equation (for other formulations see [2]) for the following reasons:

  • the unknown scattered field, , satisfies the Sommerfeld radiation conditions automatically;

  • the convolution kernel is translation invariant which allows for a rapid application via fast Fourier transforms (FFT) [21].

Even though can be computed by discretizing using finite differences, finite elements, or composite spectral discretizations [38], and then solving the resulting linear system using state-of-the-art preconditioners [18], there are several advantages to computing by solving :

  1. arises from a second-kind Fredholm formulation, implying that the condition number of the discretized system does not deteriorate as the discretization is refined;

  2. is defined on a bounded set and the solution satisfies the Sommerfeld radiation condition automatically, whereas is defined on the whole ; hence one needs to truncate the computational domain and impose absorbing boundary conditions (ABC) at the boundary of the truncated computational domain, using perfectly matched layers (PML) [12] or other techniques [28] (see [14] for a review);

  3. most discretizations of via finite differences or finite elements suffer from dispersion error and the pollution effect [7] (see [57] for recent references to methods designed to alleviate this issue), whereas the introduction of the analytical Green’s function in and the use of an accurate quadrature rule automatically alleviate these issues.

However, solving raises other numerical issues. The discretization of produces large dense linear systems, which become impractical, both memory- and complexity-wise, for typical direct solvers. Although a new generation of direct methods based on hierarchical compression, such as [2], has been able to considerably reduce the complexity of solving integral equations arising from elliptic partial differential equations (PDE), when these methods are applied to integral equations arising from wave equations such as , they exhibit linear or quasi-linear complexity for low-frequency problems but fall back to in high-frequency regimes. This phenomenon is clearly explained by recent studies of the approximate separability of Green’s functions. It was shown in [11] that the Green’s functions of elliptic operators are highly separable, implying that the off-diagonal blocks of the discrete Green’s function are extremely low-rank. In contrast, it was shown in [32] that the Green’s functions of the Helmholtz operator in the high-frequency regime are, in general, highly non-separable. This last result provides an asymptotic frequency-dependent lower bound on the ranks of the off-diagonal blocks of the discrete Green’s function, which hinders the compressibility of the blocks, thus resulting in algorithms with higher complexity.

In addition, the Lippmann-Schwinger equation becomes numerically ill-conditioned as the frequency and contrast increase, due to multiple scattering, hindering the convergence rate of iterative methods. This difficulty has led to the development of efficient preconditioners, such as [6]; however, they tend to require significantly more iterations to converge as the frequency increases. Some of them are based on semi-analytical techniques that require fairly strict assumptions on the perturbation (for example [17]); they tend to work well when those assumptions hold, but deteriorate otherwise.

There is an extensive literature on volume integral methods, including fast methods [10] and, in particular, methods for solving the Lippmann-Schwinger equation [1], which we do not review here; to the authors’ knowledge, however, no existing method can handle general smooth perturbations in quasi-linear time in the high-frequency regime.

In the present paper we introduce a new preconditioner for the 2D high-frequency Lippmann-Schwinger equation, with a total asymptotic cost , which compares favorably with most of the methods (both iterative and direct) in the literature. The scaling holds for general smooth perturbations without large resonant cavities. The solver is based on the sparsifying preconditioner and a fast iterative solver in a domain decomposition setting. In particular, the sparsifying preconditioner is used to approximate the Lippmann-Schwinger equation by a sparse system, which is solved rapidly using a bi-directional matrix-free variant of the method of polarized traces [66].

The method has two stages: the off-line, or setup, stage, which is performed only once for each linear system to solve, with an asymptotic cost of , and the on-line, or solve, stage, which is performed for each right-hand side, with an asymptotic cost of . Furthermore, the on-line stage has two levels: an inner level, in which the approximate sparse system is solved iteratively using a bi-directional preconditioner, and an outer level, in which the resulting solution from the inner level is used to precondition the Lippmann-Schwinger equation.

Under the assumptions that the perturbation is smooth and does not create a resonant cavity, the number of outer iterations is essentially independent of the frequency [63], and the numerical experiments presented in this paper indicate that the number of inner iterations depends, at worst, logarithmically on the frequency, resulting in the aforementioned complexity.

1.1Results

In this paper we propose a new two-level preconditioner for the 2D high-frequency Lippmann-Schwinger equation that results in a method with complexity. We follow the approach in [63] to construct a sparsifying preconditioner, and we develop a fast iterative solver for the sparsified system in order to achieve an overall optimal complexity. The main improvement with respect to the sparsifying preconditioner is how the resulting sparse linear system is solved. In [63] the system is solved via multifrontal methods [27], incurring an asymptotic cost of for the setup and for the application. Exchanging the multifrontal solver for an iterative one, preconditioned using a bi-directional variant of the method of polarized traces, results in a drastic reduction of the asymptotic cost.

The inner-level preconditioner is a matrix-free variant of the method of polarized traces [66], adapted to solve the sparsified Lippmann-Schwinger equation efficiently. As will be explained in the sequel, preconditioning the sparsified system using the original method of polarized traces results in an algorithm that is sensitive to the sign of the perturbation and to the direction of the incident waves. A new bi-directional variant is therefore introduced to eliminate these adverse effects.

The method of polarized traces is a domain decomposition method [64] that relies on a layered domain decomposition coupled with an equivalent surface integral equation (SIE) that is easy to precondition. The matrix-free variant introduced in [66] uses local solves on extended subdomains in order to apply the SIE and the preconditioner. The number of iterations to convergence depends on the underlying medium and on the quality of the absorbing boundary conditions (ABC) imposed at the interfaces between subdomains. For the Helmholtz problem there exists a myriad of ABC formulations [12]; unfortunately, for the sparse system resulting from sparsifying the Lippmann-Schwinger equation, such ABC are not readily available. We give details on building such ABC for the sparsified Lippmann-Schwinger system in Section 3.3.

Alas, the formulation of the ABC in this paper is not perfect, as evidenced by numerical tests showing that the number of iterations needed for convergence using the matrix-free version of the method of polarized traces grows sub-optimally as . Optimality can be restored by introducing an alternating bi-directional sweep and by preconditioning the volume problem directly, which reduces the number of iterations to . To show the effectiveness of the proposed method, we also include an example arising from plasma physics: an electromagnetic wave impinging on a confined plasma in a fusion reactor.

In the present work we focus mainly on explaining the key ideas and on verifying the method with 2D tests. The algorithm presented in this paper can be easily extended to 3D, with an off-line complexity of and an on-line complexity of . These complexities can be further reduced using a nested approach such as the one presented in [49], resulting in and complexity, respectively.

1.2Outline of the paper

The present paper is organized as follows: we present the discretization of the Lippmann-Schwinger equation and provide a brief review of the sparsifying preconditioner in Section 2. In Section 3 we review the method of polarized traces and discuss the modifications needed to adapt it to the sparsified Lippmann-Schwinger equation; in particular, we explain the formulation of the ABC in Section 3.3 and the bi-directional preconditioner in Section 3.5. In Section 4 we present the two-level preconditioner and discuss its complexity. Finally, in Section 5, we present various numerical experiments that support the complexity claims.

2Discretization and sparsifying preconditioner

In this section we discuss the discretization used and provide a brief review of the sparsifying preconditioner [63]. We refer the interested reader to Section 2 of [63] for further details.

We discretize the domain with a uniform mesh of step size , such that we have 10 points per wavelength in the background wave-speed, . We denote the collocation points by for and , where and are the numbers of grid points in each direction. In addition, we define a vector index , a notation that will be useful for explaining the sparsifying preconditioner. We use a simple modification of the trapezoidal rule [26] to discretize the integral within the convolution operation in . In particular, we use a quadrature that only modifies the weight on the diagonal, obtaining a quadrature rule of order .
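As a concrete illustration, the grid set-up above (10 points per wavelength of the background wave-speed) can be sketched as follows; the function name and domain parameters are illustrative, not from the paper:

```python
import numpy as np

def make_grid(k, Lx, Ly, ppw=10):
    """Uniform mesh with ppw points per wavelength of the background
    wave-speed: wavelength = 2*pi/k, so the step size is h = (2*pi/k)/ppw."""
    h = 2.0 * np.pi / k / ppw
    nx = int(np.ceil(Lx / h)) + 1          # grid points in x
    ny = int(np.ceil(Ly / h)) + 1          # grid points in y
    x = np.arange(nx) * h                  # collocation points x_i = i*h
    y = np.arange(ny) * h
    return h, x, y
```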

We denote the unknown scattered field in as , and we write the resulting discretized linear system as

in which can be written as

where is the identity matrix, and is a diagonal matrix that encodes the multiplication of times . Finally, is given by

with .

Given that the convolution kernel is translation-invariant and the mesh is uniform, we can easily embed in a Toeplitz matrix and use the FFT to apply it in operations (or use the method in [15] if the mesh is non-uniform).
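The FFT-based application can be sketched as follows, assuming the 2D free-space kernel G(r) = (i/4) H₀⁽¹⁾(kr) and plain trapezoidal weights; the corrected diagonal weight of the quadrature rule is omitted for brevity, and all names are illustrative:

```python
import numpy as np
from scipy.special import hankel1

def make_kernel(nx, ny, h, k):
    """Sampled free-space Green's function G(r) = (i/4) H_0^(1)(k r) on the
    grid of offsets (i - p, j - q); the r = 0 entry is set to zero as a
    placeholder (a real implementation uses a corrected quadrature weight)."""
    dx = h * np.arange(-(nx - 1), nx)       # 2*nx - 1 offsets in x
    dy = h * np.arange(-(ny - 1), ny)
    r = np.hypot(*np.meshgrid(dx, dy, indexing="ij"))
    G = np.zeros_like(r, dtype=np.complex128)
    mask = r > 0
    G[mask] = 0.25j * hankel1(0, k * r[mask])
    return h * h * G                        # trapezoidal-type weights

def apply_K(G, u):
    """Apply the discrete convolution K u by circulant embedding + FFT."""
    nx, ny = u.shape
    Nx, Ny = 2 * nx - 1, 2 * ny - 1         # embedding size: no wrap-around
    conv = np.fft.ifft2(np.fft.fft2(G, s=(Nx, Ny)) * np.fft.fft2(u, s=(Nx, Ny)))
    return conv[nx - 1:2 * nx - 1, ny - 1:2 * ny - 1]
```

The circulant embedding doubles the grid in each direction, so one application costs a few FFTs of size O(N), i.e. O(N log N) for N unknowns.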

The sparsifying preconditioner is based on designing a sparse matrix such that the product is approximately sparse, in the sense that a large number of its entries are extremely small. The main idea is to multiply by and threshold all the small entries of the product to zero, obtaining a sparse matrix that approximates . As will be explained in the sequel, the sparsity pattern of is the same as that of a second-order compact finite-difference discretization of the Laplacian, and its entries are specially constructed to annihilate the free-space Green’s function away from the source. As a result, the matrix , which shares the same sparsity pattern as , is similar to the matrix arising from a finite-difference discretization of the Helmholtz equation .

Since , it follows that to make approximately sparse we only need to find such that the product is approximately sparse. The latter requirement is equivalent to enforcing that the entries of approximately annihilate the off-diagonal blocks of . This requirement can be recast as a minimization problem, which is solved via a singular value decomposition (SVD).

Define the set of immediate neighbors (stencils) of as

We seek such that for each :

  • only are non-zero (sparsity of );

  • (approximate sparsity of the ).

Given that is translation-invariant, we impose that be translation-invariant as well. In that case, it is sufficient to consider only the case . The problem can then be recast as

which can be efficiently solved via a singular value decomposition of such that

Finally, is built such that for in the interior of . For on a special treatment is needed, which depends on the geometry of . For the sake of simplicity we omit this treatment and refer the interested reader to Section 4 of [63].
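The interior construction can be sketched as follows: sample the free-space Green's function from the 3×3 stencil points to a set of distant target points, and take the left singular vector associated with the smallest singular value as the stencil weights. This is only a schematic sketch; the ring of target points is an illustrative stand-in for the off-diagonal blocks used in [63]:

```python
import numpy as np
from scipy.special import hankel1

def stencil_weights(k, h):
    """Weights alpha on the 3x3 stencil that approximately annihilate the
    free-space Green's function G(r) = (i/4) H_0^(1)(k r) at distant points:
    alpha is the left singular vector of M[s, t] = G(|x_s - y_t|) associated
    with the smallest singular value, so that ||alpha^H M|| is minimal."""
    # stencil points: the 3x3 neighborhood of the origin
    sten = h * np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
    # target points: a ring well outside the stencil (illustrative choice)
    t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    targets = 5.0 * h * np.stack([np.cos(t), np.sin(t)], axis=1)
    r = np.linalg.norm(sten[:, None, :] - targets[None, :, :], axis=2)
    M = 0.25j * hankel1(0, k * r)
    U, s, Vh = np.linalg.svd(M)
    alpha = U[:, -1].conj()                 # ||alpha @ M|| = s[-1]
    return alpha, s[-1] / s[0]              # weights and relative residual
```

For kh well below 1 (e.g. 10 points per wavelength, kh ≈ 0.63) the relative residual is small, which is what makes thresholding the product viable.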

By multiplying by and sparsifying the product, we obtain the approximated sparse system

for which the preconditioner is defined as

It has been shown in [63] that when the preconditioner mentioned above is used within GMRES iterations, the number of iterations to convergence is almost independent of the frequency.
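Schematically, the outer level can be written with off-the-shelf GMRES. In this sketch the sparsified system is solved by a placeholder sparse LU factorization; the point of the present paper is to replace that inner solve by the bi-directional polarized-traces iteration. All names are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_LS(apply_A, Q, QA_sparse, b, n):
    """Outer GMRES on the dense system A u = b, with A = I + K diag(m)
    applied matrix-free (e.g. via FFT), preconditioned by the sparsified
    system: the preconditioner maps a residual r to (QA)^{-1} (Q r)."""
    lu = spla.splu(QA_sparse.tocsc())          # placeholder inner solver
    A = spla.LinearOperator((n, n), matvec=apply_A, dtype=complex)
    M = spla.LinearOperator((n, n), dtype=complex,
                            matvec=lambda r: lu.solve(Q @ r))
    u, info = spla.gmres(A, b, M=M, restart=50)
    return u, info                             # info == 0 on convergence
```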

An alternative to this preconditioner would be to discretize via finite differences or finite elements, and then use the resulting approximate solution to precondition . Unfortunately, such preconditioners tend to behave poorly due to the pollution error normally introduced by such discretizations. The pollution error creates serious phase errors, making the preconditioner incompatible with the linear system being preconditioned (see [57] for a similar example for multigrid). The optimization procedure used in the construction of the sparsifying preconditioner takes into account the exact free-space Green’s function and hence minimizes the dispersion error. The size of the stencils used in and should strike a balance between sparsity and dispersion error. Since the matrix mimics the discretization of the Helmholtz differential operator, it is expected that ideas from fast solvers for the Helmholtz equation can be applied to the resulting sparse system. This is what we propose in the next section.

3Method of polarized traces

As seen from Section 2, the sparsifying preconditioner for the Lippmann-Schwinger equation relies on solving the sparsified system

which, in the present paper, is solved rapidly using a bi-directional variant of the method of polarized traces. In this section we first briefly introduce the matrix-free version of the method of polarized traces [66] in one direction, and then discuss the modifications needed to adapt it to the sparsified Lippmann-Schwinger equation. We provide a simple description of the three key elements of the method of polarized traces for the Helmholtz equation in the following order:

  1. the layered domain decomposition with the extended subdomains;

  2. how to define the transmission conditions between subdomains using a discrete Green’s representation formula derived from an algebraic decomposition;

  3. how to implement absorbing boundary conditions for the subdomains to obtain a fast-converging iterative method.

We then provide the full algorithm for the one-directional preconditioner, give its physical intuition, and discuss its shortcomings, which are rectified by the bi-directional preconditioner introduced right after.

3.1Domain decomposition

The domain is partitioned into horizontal layers as depicted in Figure 1. Following [64] we extend each subdomain, which we denote by . Within each extended subdomain we assemble the discrete local problem

where is the characteristic function of .

We impose that satisfy the compatibility condition, i.e., that it coincide exactly with in the interior of . Within , i.e., the extension of the subdomain, we do not require to coincide with ; instead, we use the extra degrees of freedom to impose an analogue of an absorbing boundary condition. Given that in the sparsified Lippmann-Schwinger system does not arise from a direct discretization of the Helmholtz equation, the boundary condition for the subdomains needs a different treatment, which is presented in Section 3.3.

Figure 1 shows an example of such a decomposition. On the left we show all the degrees of freedom; in the middle, the domain decomposed into several layers; and on the right, the extended subdomains. We represent in gray the grid points whose associated entries of each local matrix remain the same, and in orange the grid points corresponding to the extended degrees of freedom. Note that the entries of associated with these extended grid points will be modified in order to impose an ABC.

Figure 1: Layered domain decomposition. The orange grid-points represent artificial grid points in the extended domain.

We point out that the bi-directional preconditioner uses two such decompositions: one in which the domain is decomposed into horizontal layers, as in Figure 1, and one in which it is decomposed into vertical layers. Alternating sweeps will be performed on these two settings.

3.2Algebraic decomposition

As in any domain decomposition method, the objective is to solve the global problem by solving the local problems iteratively, which requires transmission conditions at the interfaces between subdomains. In the case of high-frequency problems, due to possible pollution errors [8], such transmission conditions need to be of high order, or algebraically exact with respect to the discrete global problem, in order to ensure fast convergence. One advantage of the method of polarized traces is that such conditions can be seamlessly imposed at an algebraic level, bypassing the need to define optimized transmission conditions [35].

In order to impose algebraically exact transmission conditions between layers, we use the blocks of to define a discrete Green’s representation formula. Such a formula can be deduced by imposing a discontinuous local solution, as was done in [58].

By construction we have that

in which all the sub-matrices are sparse. Given that the mesh is uniform, we use the following ordering of the unknowns

where is given by

which corresponds to sampled at a constant .

We write for the wavefield defined locally at the -th layer, i.e., , and for the values at the local depth of .

In particular, represents the wavefield sampled at the bottom of the layer, and is the wavefield at the top of the layer. Moreover, is the local wavefield sampled immediately below the bottom of the layer, in the extended region within the PML; is the local wavefield sampled immediately above the top of the layer.

Finally, we use the notation to denote a discrete measure. For a generic vector , is defined as follows:

This notation will be used extensively to characterize the forcing terms necessary to transfer the information from one subdomain to its neighbors.

The correct Green’s representation formula is given by the equivalent and more manageable expression

It can be shown that if the traces of the global , i.e. , are known and used in , then coincides exactly with inside the layer, i.e., (see Appendix C of [64] for a rigorous proof for matrices arising from finite differences). In [66] the above formula is used to build an equivalent surface integral formulation in a matrix-free fashion, which is then solved iteratively. The convergence is fast provided that high-quality ABC are implemented, which is the topic of the next subsection.

3.3Absorbing boundary conditions

One of the most important elements of the method of polarized traces is the implementation of ABC at the interfaces between subdomains. ABC have been used successfully in domain decomposition for elliptic equations [31], and a simpler version of ABC was used extensively in the first domain decomposition method for the Helmholtz equation to ensure convergence [25]. High-quality absorbing conditions between subdomains are key to the fast convergence of iterative solvers for the high-frequency Helmholtz equation [18].

High-quality ABC are needed in order to minimize the artificial reverberations at the interfaces between subdomains that arise from the truncation introduced by the domain decomposition. One of the objectives of the ABC is to make the local Green’s functions, defined by the local problem , as close as possible to the global Green’s function restricted to the subdomain, without scattering from outside the subdomain. In other words, the local Green’s functions should capture the local interactions within the subdomain but should not contain pollution effects due to artificial reflections at the truncated boundary. Given that the medium outside the subdomain is not involved in the local system, the local Green’s functions do not capture the long-range interactions among subdomains; these interactions are captured iteratively during the sweeps.

In practice, without careful treatment of the truncated boundaries, the artificial reverberations can heavily pollute the local Green’s functions and hinder the convergence of the iterative method. For example, the adverse effect of the artificial reverberations can be overwhelming when Dirichlet or Neumann boundary conditions are imposed at the boundaries between subdomains [61], causing the method to converge in iterations and thus resulting in super-linear complexity.

Given that the matrix in our problem does not arise from a direct discretization of the Helmholtz equation, absorbing boundary conditions are not readily available. A natural approach is to localize the perturbation, , to the subdomain via a smooth cut-off. The localization produces a local discrete system in the form of that satisfies the compatibility conditions.

We now explain how to define the local problems using this localization to obtain an approximate ABC. Intuitively, the solution to should provide an approximation of the wave scattered by the perturbation supported in . Thus, one naive approach would be to define the local problem as

and then use the sparsifying preconditioner to obtain a sparse local problem in the form of .

However, the localized perturbation is discontinuous at the interfaces between subdomains, which produces strong artificial reflections for waves impinging on the artificial boundary of . These strong reflections pollute the local Green’s function and result in slow convergence. Fortunately, this adverse effect can be easily rectified by using a window, or cut-off, function that transitions the perturbation smoothly to zero.

Define a set of smooth cut-off functions , such that

We can define the continuous counterpart of as the problem

which is posed for . We recall that the matrix is a diagonal matrix that encodes multiplication by ; we define as the diagonal matrix that represents multiplication by . We can then define the discrete local Lippmann-Schwinger equation as

The global subscript indicates that is a matrix. In this case we could multiply by and obtain a local system satisfying the compatibility condition of . However, this would be prohibitively expensive, given that one would need to solve a system for each subdomain.

The key observation is that only a small portion of in deviates from the identity matrix. This is caused by the small support of , on which . Thus, we can rewrite on instead of , and discretize it, resulting in

which is a quasi-1D system (see [30]), where is the thickness, in grid points, of each extended subdomain. In our implementation, the cut-off is constant in the direction tangential to the boundary (the direction in our layered setting) and is a third-order spline in the direction normal to the boundary (the direction in our setting).
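A minimal sketch of such a cut-off: constant in the tangential direction, with a cubic (third-order) spline transition in the normal direction. The transition width and all names are illustrative:

```python
import numpy as np

def cutoff_1d(y, y0, y1, w):
    """C^1 cubic-spline cut-off in the normal (depth) direction:
    1 on [y0, y1], 0 beyond a transition of width w, smooth in between."""
    def ramp(t):                      # cubic smoothstep: 0 -> 1 on [0, 1]
        t = np.clip(t, 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)
    return ramp((y - (y0 - w)) / w) * ramp(((y1 + w) - y) / w)

def localized_perturbation(m, ygrid, layer, w):
    """Localize the perturbation m to one layer: constant tangentially (x),
    cubic-spline transition in the normal direction (y)."""
    y0, y1 = layer
    chi = cutoff_1d(ygrid, y0, y1, w)            # shape (ny,)
    return m * chi[None, :]                      # m has shape (nx, ny)
```

A complex shift, as discussed below, would be applied on top of this cut-off inside the extended region only.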

In theory we could apply the sparsifying preconditioner to ; however, given that the mesh is also truncated, the optimization via an SVD can yield different results, compromising the compatibility condition. Instead, we multiply by the corresponding block of , taking special care at the boundaries of , where we use the stencil generated at the boundary of , which provides a satisfactory ABC [63] for the background wave-speed at the outer boundary of the extended subdomain.

Unfortunately, the smooth cut-off used to localize the perturbation is not equivalent to an ABC, which would approximate the Dirichlet-to-Neumann (DtN) map at the boundary so as to absorb and dampen waves propagating out of the boundary in all directions with little reflection. The smooth cut-off in the direction normal to the boundary minimally affects waves propagating out of the domain close to the normal direction. However, for waves propagating nearly tangentially to the boundary, a smooth cut-off may not be good enough, depending on the sign of the perturbation, as illustrated in Figures 2 and 3. If the perturbation is negative, waves propagate faster inside the subdomain than in the cut-off region. A wave propagating almost tangentially to the boundary into the cut-off region will bend outwards and ultimately be absorbed by the numerical ABC at the artificial boundary of the extended subdomain. On the other hand, if the perturbation is positive, then waves propagate slower inside the subdomain than in the background medium. Under these circumstances, the subdomain with a smooth cut-off to the background medium creates an artificial wave-guide effect at the subdomain boundary. Waves propagating almost normal to the boundaries still cross the smooth cut-off and are absorbed by the numerical ABC at the artificial boundary of the extended subdomain. However, waves propagating almost tangentially along the subdomain boundary bend inwards due to the artificial cut-off, producing spurious grazing waves that pollute the local solution within the subdomain. This last problem can be attenuated to some extent by introducing a complex shift within the cut-off region, thus dampening the grazing waves. We point out that we introduce the complex shift only in the extended domain; it would be possible to introduce a complex shift in as in [40], but that would violate the compatibility condition.

Figure 2: Example of a global (top) and local (bottom) Green’s function for a positive perturbation. Note the interference in the local Green’s function caused by the spurious grazing wave concentrated at the top of the slab.
Figure 3: Example of a global (top) and local (bottom) Green’s function for a negative perturbation. Some differences between the Green’s functions are visible, but neither the interference nor the concentration at the boundaries observed in Figure 2 is present.

Although the complex shift helps reduce the number of iterations of the original method of polarized traces from to , which is better than standard iterative methods, it is not optimal. In order to obtain the lower iteration count, we introduce in the next section an effective bi-directional alternating sweeping preconditioner to handle the grazing waves generated, in a one-directional sweep, either by the incoming waves or by the smooth cut-off at the artificial boundaries between two subdomains.

3.4One-directional algorithm

In this section we discuss the preconditioner for the sparse problem based on a one-directional sweep of the method of polarized traces, and we provide the algorithm in pseudo-code together with its physical interpretation.

The main intuition behind the method of polarized traces is the directionality of the waves, which is imposed by the polarizing conditions (see Section 3 of [64]) and the absorbing boundary conditions. For example, the wavefield generated by a local source,

radiates from the interior of to its exterior. If we sample at the bottom of , we can physically interpret it as the trace of a wavefield propagating downwards.

Following this physical interpretation, the wavefield propagating downwards from propagates into . Using the Green’s representation formula, we can propagate the wavefield from to via artificial sources located at the top of , following

which is an alternate expression for the incomplete Green’s representation formula introduced in [64].

We can observe that contains the energy radiating from and . If we sample at the bottom of , it can be interpreted as the downward wavefield consisting of the downward wavefield generated by and the wavefield generated by that propagated through .

We can continue the propagation by adding the local wavefields, in what is usually called a multiplicative Schwarz iteration (see [36]). Analogously, the same procedure can be performed for the wavefields propagating upwards. The application of both sweeps, downwards and upwards, can be understood as a block Jacobi iteration, as presented in [64]. Furthermore, the polarizing conditions can be further exploited to reduce the number of iterations by using the Gauss-Seidel approach described in Alg. ?, which in practical terms reduces the number of iterations by a factor of two (see [66]).
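The downward multiplicative-Schwarz sweep can be sketched generically as follows; the callables are placeholders for the local solves, the bottom-trace restriction, and the Green's-representation transfer, not the paper's exact operators:

```python
import numpy as np

def downward_sweep(local_solves, restrict_bottom, transfer, f_local):
    """One downward multiplicative-Schwarz sweep (schematic).
    local_solves[l](g) : solves the l-th extended local problem with source g
    restrict_bottom(v) : samples a local wavefield at the bottom of its layer
    transfer(trace)    : builds the interface source for the next layer from
                         the bottom trace (discrete Green's representation)
    f_local[l]         : restriction of the right-hand side to layer l."""
    L = len(f_local)
    v = [None] * L
    v[0] = local_solves[0](f_local[0])
    for l in range(1, L):
        # local source = local rhs + wavefield transmitted from the layer above
        src = f_local[l] + transfer(restrict_bottom(v[l - 1]))
        v[l] = local_solves[l](src)
    return v
```

The upward sweep is the mirror image, and a Gauss-Seidel combination of the two is what the algorithm above uses.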

All the intuitive arguments provided here can be made rigorous from an algebraic point of view; we skip this explanation and redirect interested readers to [64].

3.5Bi-directional algorithm

In this section we first discuss briefly the shortcomings of Alg. ? when applied to the sparsified Lippmann-Schwinger equation, and then introduce the bi-directional preconditioner in order to achieve the optimal complexity.

Alg. ?, as presented, may not perform optimally in our setting for the Lippmann-Schwinger equation, in which the perturbation is localized using a smooth cut-off. The main issue is that waves propagating nearly tangentially to the artificial boundaries between subdomains are perturbed by the cut-off and, depending on the perturbation, can be bent inward, thus generating spurious grazing waves. Even though the spurious grazing waves are dampened to some extent by the introduction of a complex shift in the cut-off region, they can still cause significant deviations of the local Green’s functions with respect to the global ones. Thus, they hinder the fast convergence of the iterations, especially when the perturbation is positive, i.e., in the wave-guide situation, as explained in Section 3.3 and shown in Figures 2 and 3.

Moreover, a closer inspection of Alg. ? reveals that, within the algorithmic pipeline, the residuals are concentrated at the boundaries between subdomains. In particular, the residual can be interpreted in the continuous case as a measure supported at the boundaries between subdomains, with the most singular term of the form . Then, as explained on page 243 of [56], the restriction of such a measure to a subdomain (line 3 in Alg. ?) is not well defined; this ambiguity ultimately results in a slow convergence rate. One way to solve this problem is presented in [56], where an overlapping domain decomposition is used to ensure that at each iteration the residual lies in the interior of a subdomain.

In this work, we solve both problems simultaneously by introducing a simple alternating bi-directional preconditioner, which consists of applying an up-down sweep performed by Alg. ?, followed by a left-right sweep performed by Alg. ? applied to the transpose of , denoted in Alg. ?. The bi-directional preconditioner in Alg. ? effectively handles the spurious grazing waves generated in one sweep by performing another sweep in the orthogonal direction. In addition, applying the preconditioner in the orthogonal direction after a step of iterative refinement confines the residual to the interior of the subdomains, thus eliminating the ambiguity and resulting in fast convergence.

In particular, as will be shown in the numerical tests, the bi-directional preconditioner is more efficient and less sensitive to the geometry, the direction of the incoming waves, and the sign of the perturbation, albeit with a higher memory footprint. The sweep in the orthogonal direction can be implemented simply by preconditioning the transpose of the matrix using Alg. ?.
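The alternating structure of the preconditioner, one sweep, a step of iterative refinement, then a sweep in the orthogonal direction on the remaining residual, can be sketched abstractly. In the toy example below the two "sweeps" are played by forward and backward triangular solves on a small diagonally dominant matrix (making the composite a symmetric Gauss-Seidel step); this is purely to exhibit the alternating pattern, not the actual up-down and left-right subdomain sweeps of the paper.

```python
import numpy as np

def bidirectional_apply(A, sweep1, sweep2, r):
    """One application of an alternating bi-directional preconditioner:
    a sweep in one direction, one step of iterative refinement, then a
    sweep in the orthogonal direction on the remaining residual."""
    e = sweep1(r)           # first sweep (e.g. up-down)
    r = r - A @ e           # iterative refinement: recompute the residual
    return e + sweep2(r)    # second sweep (e.g. left-right)

# Toy stand-in operators: lower/upper triangular solves of a diagonally
# dominant tridiagonal matrix play the role of the two directional sweeps.
n = 50
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Lo, Up = np.tril(A), np.triu(A)
sweep_down = lambda r: np.linalg.solve(Lo, r)
sweep_right = lambda r: np.linalg.solve(Up, r)

b = np.ones(n)
x = np.zeros(n)
for _ in range(100):  # preconditioned Richardson iteration
    x = x + bidirectional_apply(A, sweep_down, sweep_right, b - A @ x)
```

The refinement step in the middle is what confines the residual seen by the second sweep, mirroring the mechanism described above.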

4Two-level Preconditioner

We briefly summarize the operations within the two levels of the preconditioner presented in this paper as follows:

  • the inner level, in which is computed approximately using the bi-directional preconditioner of Alg. ?;

  • and the outer level, in which is solved iteratively using GMRES preconditioned with Alg. ?.

There is a balance in the choice of the tolerance for the inner GMRES iteration (in Alg. ?): if the tolerance is chosen too small, the constants are large; on the other hand, if the tolerance is too large, the constants decrease but the number of outer iterations grows with the frequency, thus increasing the asymptotic cost. The best choice in our case is to set the inner tolerance to . This choice ensures that the number of outer iterations is the same as when using a direct multifrontal method, while producing reasonable constants.
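This trade-off can be reproduced in a toy setting. In the sketch below (our construction, not the paper's solver), Richardson iteration stands in for the outer GMRES loop and Jacobi sweeps stand in for the inner bi-directional solve; a loose inner tolerance makes each outer step cheaper but multiplies the number of outer steps.

```python
import numpy as np

def outer_count(A, b, inner_tol, outer_tol=1e-10, max_outer=200):
    """Count outer steps needed to reach `outer_tol` when each step
    applies an approximate inverse computed only up to `inner_tol`.
    Richardson/iterative refinement stands in for the outer GMRES loop;
    Jacobi sweeps stand in for the inner preconditioned solve."""
    D = np.diag(A)
    x = np.zeros_like(b)
    for k in range(max_outer):
        r = b - A @ x
        if np.linalg.norm(r) <= outer_tol * np.linalg.norm(b):
            return k
        e = np.zeros_like(r)  # inner level: sweep until inner_tol is met
        while np.linalg.norm(r - A @ e) > inner_tol * np.linalg.norm(r):
            e = e + (r - A @ e) / D
        x = x + e
    return max_outer

n = 100
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
# A tight inner tolerance keeps the number of outer iterations small;
# a loose one makes the outer loop do much more work.
tight = outer_count(A, b, 1e-3)
loose = outer_count(A, b, 0.5)
```

Each outer step reduces the residual by at most the inner tolerance, so with an inner tolerance of 1e-3 only a handful of outer steps are needed to reach 1e-10.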

As mentioned in the introduction, the algorithm presented in this paper has two phases: an off-line phase, performed only once, and an on-line phase, performed for each right-hand side, or incoming wave. The complexity of the solver is summarized in Table 1; in the sequel we provide further details on the complexity of each phase, in which we suppose that the number of subdomains, L, is and that .

Table 1: Complexity of each stage of the solver, supposing that , the thickness of each extended local subdomain, is .
Stage Complexity
Off-line
On-line

4.1Off-line complexity

For the off-line or setup phase, we need to build , for which we need to compute . As explained in [63], can be constructed by solving only a handful of optimization problems. This property arises mainly from the translation invariance of the convolution kernel, meaning that we can reuse the same stencil for many grid points. Solving the minimization problem requires an SVD of a matrix, which can be performed in complexity. To create we need to multiply by ; exploiting the sparsity of and performing the multiplications in the correct order, the product can be carried out in operations.
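The stencil computation can be sketched as follows: given the values of the discrete kernel on a small patch (rows: stencil points; columns: sample points away from the patch), the sparsifying stencil is the left singular vector associated with the smallest singular value, and by translation invariance it is reused across interior grid points. The demo below uses a synthetic matrix with a planted near-null left vector, not actual Green's function samples.

```python
import numpy as np

def sparsifying_stencil(G_patch):
    """Return the unit stencil alpha minimizing ||alpha^T G_patch||_2,
    i.e. the left singular vector of G_patch associated with its
    smallest singular value (numpy orders singular values descending,
    so it is the last column of U)."""
    U, s, Vt = np.linalg.svd(G_patch)
    return U[:, -1]

# Synthetic 9 x 60 patch with a known one-dimensional left near-null
# space spanned by alpha0 (a stand-in for sampled kernel values).
rng = np.random.default_rng(1)
alpha0 = rng.standard_normal(9)
alpha0 /= np.linalg.norm(alpha0)
G = (np.eye(9) - np.outer(alpha0, alpha0)) @ rng.standard_normal((9, 60))
alpha = sparsifying_stencil(G)
```

The recovered stencil annihilates the patch, `alpha @ G` is numerically zero, and matches the planted vector up to sign.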

We can build all the local matrices in complexity by reusing ; we build local matrices, and each has non-zero elements. Finally, the factorization of all the local problems can be performed in complexity: each local matrix is a circulant matrix whose factorization can be performed in time, and we have systems to factorize, which results in complexity. We point out that the assembly and factorization of the local matrices are embarrassingly parallel operations.
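The fast treatment of a circulant system mentioned above can be sketched with FFT diagonalization: one FFT of the first column plays the role of the factorization, and each subsequent solve costs two FFTs. The first column below is synthetic and chosen to be safely invertible.

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b where C is circulant with first column c, using the
    diagonalization C = F^{-1} diag(fft(c)) F: an O(n log n) solve after
    an O(n log n) 'factorization' (a single FFT of c)."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c))

rng = np.random.default_rng(2)
n = 16
c = 0.1 * rng.standard_normal(n)
c[0] += 5.0   # dominant first entry keeps all eigenvalues away from zero
b = rng.standard_normal(n)
x = circulant_solve(c, b)

# Dense circulant built explicitly, used only to verify the FFT solve.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
```

Matrix-vector products with a circulant matrix are circular convolutions, so the same two-FFT pattern applies to applying C as well as inverting it.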

4.2On-line complexity

For the outer GMRES we need to apply , which consists of multiplications by sparse matrices and a convolution that can be performed fast via an FFT (see Section 2.3 of [26]). The cost of applying is dominated by the FFT, resulting in an overall complexity. For Alg. ?, each local solve can be performed in time, resulting in a complexity of ; we therefore have an overall complexity of for each application of Alg. ?.
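A zero-padded FFT application of a discrete convolution kernel, in the spirit of the fast apply referenced above, can be sketched as follows. The kernel here is a random complex stand-in for the sampled Green's function, and the quadruple loop is only an O(N^2) reference used for verification.

```python
import numpy as np

def apply_kernel(kernel, u):
    """Apply (K u)[i, j] = sum_{k, l} kernel[i - k + n - 1, j - l + n - 1]
    * u[k, l] on an n x n grid via zero-padded FFTs in O(N log N) with
    N = n^2. `kernel` samples the kernel on the (2n - 1) x (2n - 1)
    grid of offsets."""
    n = u.shape[0]
    m = 3 * n  # padding large enough to avoid periodic wrap-around
    Ku = np.fft.ifft2(np.fft.fft2(kernel, (m, m)) * np.fft.fft2(u, (m, m)))
    return Ku[n - 1:2 * n - 1, n - 1:2 * n - 1]

rng = np.random.default_rng(3)
n = 5
kernel = (rng.standard_normal((2 * n - 1, 2 * n - 1))
          + 1j * rng.standard_normal((2 * n - 1, 2 * n - 1)))
u = rng.standard_normal((n, n))
fast = apply_kernel(kernel, u)

# Direct O(N^2) evaluation, used only to check the FFT version.
direct = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                direct[i, j] += kernel[i - k + n - 1, j - l + n - 1] * u[k, l]
```

The slice `[n - 1 : 2n - 1]` extracts exactly the entries of the full linear convolution corresponding to grid offsets, which is why the padding must reach 3n - 2 points per dimension.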

Finally, the overall on-line complexity depends on the number of outer and inner iterations needed to converge. From [63], the number of outer iterations is almost independent of the frequency, , and we provide in the next section numerical evidence that, under the assumptions mentioned in the introduction, the number of inner iterations grows logarithmically with the frequency, ; given that we are in the high-frequency regime , this results in the overall complexity mentioned in the introduction.

5Numerical Experiments

The numerical experiments were implemented in Julia v0.4.3 linked against Intel MKL 11.2, and were performed on a server with dual-socket Intel Xeon CPU E5-2670 and 384 GB of RAM. We used the FFT library FFTW3 [34] for the application of . For the local solves in the inner iteration we used the PARDISO [54] and UMFPACK [23] solvers.

One of the main advantages of the method of polarized traces lies in its modularity. We used the state-of-the-art library FFTW3 for the application of , but it is possible to further accelerate this step by using GPU-accelerated libraries (such as cuFFT [51]) or distributed-memory FFT libraries. In the main bottleneck of the algorithm, the application of the preconditioner, the linear solver can be exchanged effortlessly. In our experiments we used PARDISO [54] and UMFPACK [23], but it is easy to replace them with solvers based on compressed linear algebra [52], or with massively parallel solvers such as MUMPS [5] (or its compressed linear algebra variant [4]) or SuperLU-DIST [24], among many others.

In this section we corroborate the complexity claims for our method; in particular, we present three kinds of numerical experiments:

  1. we study the asymptotic cost of solving the sparsified systems arising from discretizations of the Lippmann-Schwinger equation using the bi-directional preconditioner;

  2. we solve a wave scattering example in three different medium profiles, shown in Figure 4, using the full two-level preconditioner;

  3. we solve an example arising from reflectometry of plasmas confined in a fusion reactor [62].

5.1Sparsified system

Figure 4: (left) Perturbation used for the wave scattering. (right) Wave-speed for 64 randomly placed Gaussian bumps.

First, we assemble two linear systems by sparsifying the Lippmann-Schwinger equation for the perturbation given in Figure 4 (left), and demonstrate the performance of the original method of polarized traces and of the new bi-directional preconditioner. The two sparsified Lippmann-Schwinger systems correspond to the same perturbation shown in Figure 4 (left) but with different signs, which we refer to as the positive and negative perturbations. We solve the resulting sparsified systems with GMRES (tolerance of ) preconditioned with the method of polarized traces [66] and with the bi-directional preconditioner in Alg. ?. We solve for different right-hand sides representing different incident waves, and we compute an average time. We repeat the process for a set of increasing frequencies, such that we have points per wavelength in the background medium. Moreover, the domain is decomposed into subdomains, such that each subdomain is roughly wavelengths thick, and each subdomain is extended by 10 grid points in both directions (see Figure 1). The average execution times are presented in Table 2.

Table 2: Average number of iterations (in parentheses) and average execution time to solve the sparsified system arising from the Lippmann-Schwinger equation with the positive (+) and negative (-) perturbation, using either the method of polarized traces (PT) or the new bi-directional preconditioner (BD). We use GMRES with a tolerance of ; here is the number of degrees of freedom, is the frequency, and is the number of subdomains.

Table 2 clearly depicts the different behavior of the matrix-free version of the method of polarized traces and of its bi-directional variant. The execution time increases faster for the original method of polarized traces, mainly due to the increase in the number of iterations needed for convergence. This trend is more marked in the case of the positive perturbation, as explained in Section 3.3. In particular, we observe that the execution time (due to the increase in the number of iterations) grows as instead of , thus increasing the overall complexity of the global algorithm. In addition, the constants are lower for the bi-directional preconditioner. We point out that the bottleneck of both preconditioners is the local solves, which are performed using a multifrontal solver; the main difference between the two methods is the number of such solves at each iteration: the method of polarized traces needs solves per layer and per iteration (see footnote 3), whereas the bi-directional preconditioner needs only 4. In addition, the bi-directional preconditioner converges in a smaller number of iterations.

Figure 5: Left: execution time of the off-line stage; Right: execution time of the on-line stage of solving the sparsified Lippmann-Schwinger equation using GMRES preconditioned with Alg. ?. The tolerance was fixed to 10^{-6}, and \omega h = \mathcal{O}(1).

Figure 5 (left) shows the scaling of the factorization of the local problems and of the assembly of the sparsified system. In addition, Figure 5 (right) shows the execution time for the solution of the sparsified system using the bi-directional algorithm, as reported in Table 2 but on a logarithmic scale. We can observe from Figure 5 that the assembly and factorization cost is and that the on-line cost of one inner iteration is , and, in particular, .
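Scaling claims of this kind can be checked numerically by fitting the slope of the timings on a log-log scale. The sketch below uses synthetic timings following t proportional to N log N (placeholder numbers, not the measurements reported in Figure 5); the logarithmic factor appears as a slight excess of the fitted slope over 1.

```python
import numpy as np

def fitted_exponent(N, t):
    """Least-squares slope of log(t) against log(N): if t ~ C * N^p
    (possibly with logarithmic factors), the slope approximates p."""
    slope, _ = np.polyfit(np.log(np.asarray(N, float)),
                          np.log(np.asarray(t, float)), 1)
    return slope

# Synthetic timings with an N log N profile (placeholder constants).
N = [2.5e5, 1e6, 4e6, 1.6e7]
t = [3e-8 * n * np.log(n) for n in N]
p = fitted_exponent(N, t)
```

Over this range of N the fitted slope sits just above 1, which is the signature of an N log N cost on a log-log plot.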

5.2Wave scattering

Figure 6: Real part of the total wavefield generated by the scattering of a plane wave impinging on the negative perturbation (left) and on the positive perturbation (right); in both examples \omega = 500.

We test the performance of the two-level preconditioner by solving the Lippmann-Schwinger equation associated with three wave scattering problems. We use three different perturbation profiles: the negative smooth perturbation, the positive smooth perturbation, and the Gaussian bumps shown in Figure 4 (left) and (right). The profiles were chosen to depict the behavior of the method in three generic settings for wave propagation: a defocusing medium (negative perturbation), a focusing medium (positive perturbation), and a medium with multiple scattering (Gaussian bumps). The total wavefield scattered by a plane wave impinging on each perturbation is shown in Figure 6 (left), Figure 6 (right), and Figure 10 (left), respectively.

Figure 7: Average execution time for the on-line stage of solving the high-frequency Lippmann-Schwinger equation with GMRES (tolerance of 10^{-10}) preconditioned with Alg. ? with an inner tolerance of 10^{-3}, for the three different perturbation profiles; \omega h = \mathcal{O}(1).

For each perturbation profile we solve a system of increasing size, keeping constant. Each system is solved using GMRES with a tolerance of , preconditioned with Alg. ? with an inner tolerance of ; the domain is decomposed into subdomains, and each subdomain is extended by 10 grid points in each direction (see Figure 1). At each frequency, we compute the wavefield generated by a set of 64 different incident waves impinging on the perturbation by solving the Lippmann-Schwinger equation, and we report the average execution time in Figure 7. From Figure 7 we observe that the execution time scales as for each perturbation profile, which corroborates the claims made in the introduction. For the off-line stage we obtain the same execution times as in Figure 5 (left).

Figure 8: Left: average execution time for the on-line stage of solving the Lippmann-Schwinger equation with the Gaussian bumps perturbation using the original sparsifying preconditioner, i.e., using multifrontal methods, and using the two-level preconditioner with \omega h = \mathcal{O}(1); Right: execution time of the off-line stage for the two-level preconditioner and the original sparsifying preconditioner.

Moreover, for the sake of comparison, we measure the execution time of solving the Lippmann-Schwinger equation using the original sparsifying preconditioner [63] (i.e., solving via a multifrontal solver [23]). For the comparison we used the multiple scattering perturbation in Figure 4 (right), with the same set of frequencies and incoming waves as before, and the implementation of the sparsifying preconditioner in [65]. The average execution times of the on-line stage are reported in Figure 8 (left), together with the average times for the two-level preconditioner. From Figure 8 (left) we observe that, unsurprisingly, the original sparsifying preconditioner has the same asymptotic on-line cost as the two-level preconditioner presented in this paper; the main difference lies in the off-line stage, in which the multifrontal solver has a complexity of , compared to for the two-level preconditioner, as shown in Figure 8 (right).

5.3Plasma physics

Figure 9: Perturbation for the plasma physics example (taken from [2]); we can observe that at the interior the perturbation becomes negative, yielding a locally strongly elliptic problem, i.e., an evanescent medium.
Figure 10: Real part of the total wavefield generated by the scattering of a plane wave in the Gaussian bumps model (left) and in the plasma model (right). In both examples \omega = 500.
Figure 11: Average execution time for solving the Lippmann-Schwinger equation using the plasma perturbation profile (left), and the average number of outer iterations needed for convergence (right). The problem was solved with GMRES (tolerance of 10^{-10}) preconditioned with the two-level preconditioner with an inner tolerance of 10^{-3} and \omega h = \mathcal{O}(1). The extra logarithmic factor comes from the increasing number of outer iterations.

Finally, we test the performance of the two-level preconditioner on a problem arising from plasma physics, in particular, a radio-frequency (RF) wave propagating in a fusion reactor. Under some simplifications, the propagation of an RF wave can be described using a cold plasma model [62]. We borrow from [2] a model (see footnote 4) corresponding to the idealized plasma density profile shown in Figure 9. The physics in this case are different: an incoming wave propagates until it hits a cut-off region in which the problem becomes strictly elliptic, or evanescent, causing the waves to reflect, as depicted in Figure 10 (right).

We solve a system of increasing size, keeping constant; we decompose the domain such that we have roughly wavelengths inside each subdomain, and each subdomain is extended 10 grid points in each direction. At each frequency, we compute the wavefield generated by a set of 64 different incident waves impinging on the perturbation by solving the Lippmann-Schwinger equation and we report the average execution times in Figure 11 (left).

In the case of the plasma example, we observe from Figure 11 (left) a slightly larger scaling. The main reason is that the number of outer iterations grows logarithmically, as shown in Figure 11 (right), which is presumably caused by the transition of the physics at the cut-off region. Nonetheless, the number of inner iterations grows like , the same as in the wave scattering case, resulting in an overall complexity of , which, to our knowledge, compares favorably with most current solvers.

6Conclusions

We have presented a new fast iterative method for the high-frequency Lippmann-Schwinger equation in 2D. The method has an asymptotic cost of for the wave scattering problems presented in this paper.

Some directions for future work and extensions are: the design of more efficient ABCs, in order to use only one directional preconditioner, thus decreasing the memory footprint; extensions to 3D, in which a nested approach can be used to obtain low-complexity algorithms; parallelization of the solvers using state-of-the-art linear solvers; and reduction of the constants via compressed linear algebra, which can provide extra savings in the plasma example, where the local ellipticity of the problem can be exploited.

Another line of research is to further reduce the on-line complexity by using the original reduction to a SIE of the method of polarized traces, coupled with the precomputation of integral operators and their compression.

We point out that it is possible to replace the outer GMRES loop with the flexible GMRES algorithm [53] and to reduce the inner-level tolerance in order to accelerate the algorithm, without a penalty on the asymptotic complexity.
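A minimal textbook sketch of flexible GMRES is given below (not the implementation of [53]); because the preconditioned directions Z are stored explicitly, the preconditioner may vary between iterations, which is exactly what an inner iterative solve with a loose tolerance does. The varying `inexact_inverse` below is a hypothetical stand-in for such an inner solve.

```python
import numpy as np

def fgmres(A_apply, b, M_apply, tol=1e-10, maxiter=100):
    """Minimal flexible GMRES: M_apply is allowed to change between
    iterations, because the preconditioned directions Z are stored and
    the solution is formed directly from them."""
    beta = np.linalg.norm(b)
    V = [b / beta]
    Z = []
    H = np.zeros((maxiter + 1, maxiter))
    for j in range(maxiter):
        Z.append(M_apply(V[j]))         # flexible: may differ every step
        w = A_apply(Z[j])
        for i in range(j + 1):          # Arnoldi orthogonalization
            H[i, j] = V[i] @ w
            w = w - H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) <= tol * beta:
            break
        V.append(w / H[j + 1, j])
    return sum(yi * zi for yi, zi in zip(y, Z))

# Demo: the "preconditioner" drifts a little at every call, mimicking an
# inexact inner solve; plain GMRES theory would not cover this.
rng = np.random.default_rng(4)
n = 30
A = 5 * np.eye(n) + 0.5 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
calls = [0]
def inexact_inverse(v):
    calls[0] += 1
    return np.linalg.solve(A + 1e-2 * calls[0] * np.eye(n), v)

x = fgmres(lambda v: A @ v, b, inexact_inverse)
```

Since the residual is minimized over the span of the stored Z directions, convergence degrades gracefully as the inner solves become less accurate.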

7Acknowledgments

We would like to thank Lexing Ying and Leslie Greengard for fruitful discussions; Antoine Cerfon for his help with the plasma example; and Laurent Demanet for computational resources.

Footnotes

  1. If the perturbation has sharp transitions the sparsifying preconditioner can be modified to handle adaptive quadratures without asymptotic penalty. However, the convolution product will have to be handled differently [15] and the construction of the preconditioner will require randomized techniques in order to keep the same asymptotic complexity.
  2. It is possible to recover the Green’s representation formula by multiplying by the inverse of
  3. The larger number of solves needed for the matrix-free version of the method of polarized traces is due to the application of the integral operator using local solves at each GMRES iteration, in addition to the solves required for the application of the preconditioner.
  4. We refer the interested reader to Example 4.3 of [2] for further details.

References

  1. M. B. Amar and F. C. Farnoux, Numerical solution of the Lippmann-Schwinger equations in photoemission: application to Xenon, Journal of Physics B: Atomic and Molecular Physics, 16 (1983), p. 2339.
  2. S. Ambikasaran, C. Borges, L.-M. Imbert-Gerard, and L. Greengard, Fast, adaptive, high order accurate discretization of the Lippmann-Schwinger equation in two dimension, ArXiv e-prints, [math.NA] 1505.07157 (2015).
  3. S. Ambikasaran and E. Darve, An fast direct solver for partial hierarchically semi-separable matrices, Journal of Scientific Computing, 57 (2013), pp. 477–501.
  4. P. Amestoy, C. Ashcraft, O. Boiteau, A. Buttari, J.-Y. L’Excellent, and C. Weisbecker, Improving multifrontal methods by means of block low-rank representations, SIAM Journal on Scientific Computing, 37 (2015), pp. A1451–A1474.
  5. P. R. Amestoy, I. S. Duff, J.-Y. L’Excellent, and J. Koster, A fully asynchronous multifrontal solver using distributed dynamic scheduling, SIAM Journal on Matrix Analysis and Applications, 23 (2001), pp. 15–41.
  6. F. Andersson and A. Holst, A fast, bandlimited solver for scattering problems in inhomogeneous media, Journal of Fourier Analysis and Applications, 11, pp. 471–487.
  7. I. Babuska, F. Ihlenburg, E. T. Paik, and S. A. Sauter, A generalized finite element method for solving the Helmholtz equation in two dimensions with minimal pollution, Computer Methods in Applied Mechanics and Engineering, 128 (1995), pp. 325–359.
  8. I. M. Babuška and S. A. Sauter, Is the pollution effect of the FEM avoidable for the Helmholtz equation considering high wave numbers?, SIAM Journal on Numerical Analysis, 34 (1997), pp. 2392–2423.
  9. A. Bayliss, C. I. Goldstein, and E. Turkel, On accuracy conditions for the numerical computation of waves, Journal of Computational Physics, 59 (1985), pp. 396 – 404.
  10. M. Bebendorf, Hierarchical LU decomposition-based preconditioners for BEM, Computing, 74 (2005), pp. 225–247.
  11. M. Bebendorf and W. Hackbusch, Existence of -matrix approximants to the inverse fe-matrix of elliptic operators with -coefficients, Numerische Mathematik, 95 (2002), pp. 1–28.
  12. J.-P. Bérenger, A perfectly matched layer for the absorption of electromagnetic waves, Journal of Computational Physics, 114 (1994), pp. 185–200.
  13. A. Bermúdez, L. Hervella-Nieto, A. Prieto, and R. Rodríguez, An optimal perfectly matched layer with unbounded absorbing function for time-harmonic acoustic scattering problems, Journal of Computational Physics, 223 (2007), pp. 469 – 488.
  14. A. Bermúdez, L. Hervella-Nieto, A. Prieto, and R. Rodríguez, Perfectly matched layers for time-harmonic second order elliptic problems, Archives of Computational Methods in Engineering, 17 (2010), pp. 77–107.
  15. G. Beylkin, C. Kurcz, and L. Monzón, Fast convolution with the free space Helmholtz Green’s function, Journal of Computational Physics, 228 (2009), pp. 2770 – 2791.
  16. O. P. Bruno, Wave scattering by inhomogeneous media: efficient algorithms and applications, Physica B: Condensed Matter, 338 (2003), pp. 67 – 73. Proceedings of the Sixth International Conference on Electrical Transport and Optical Properties of Inhomogeneous Media.
  17. O. P. Bruno and E. M. Hyde, An efficient, preconditioned, high-order solver for scattering by two-dimensional inhomogeneous media, Journal of Computational Physics, 200 (2004), pp. 670 – 694.
  18. Z. Chen and X. Xiang, A source transfer domain decomposition method for Helmholtz equations in unbounded domain, SIAM Journal on Numerical Analysis, 51 (2013), pp. 2331–2356.
  19. Z. Chen and X. Xiang, A source transfer domain decomposition method for Helmholtz equations in unbounded domain part II: Extensions, Numerical Mathematics: Theory, Methods and Applications, 6 (2013), pp. 538–555.
  20. H. Cheng, J. Huang, and T. J. Leiterman, An adaptive fast solver for the modified Helmholtz equation in two dimensions, Journal of Computational Physics, 211 (2006), pp. 616 – 637.
  21. J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19 (1965), pp. 297–301.
  22. E. Corona, P.-G. Martinsson, and D. Zorin, An direct solver for integral equations on the plane, Applied and Computational Harmonic Analysis, 38 (2015), pp. 284 – 317.
  23. T. A. Davis, Algorithm 832: UMFPACK v4.3—an unsymmetric-pattern multifrontal method, ACM Transactions on Mathematical Software, 30 (2004), pp. 196–199.
  24. J. W. Demmel, S. C. Eisenstat, J. R. Gilbert, X. S. Li, and J. W. H. Liu, A supernodal approach to sparse partial pivoting, SIAM J. Matrix Analysis and Applications, 20 (1999), pp. 720–755.
  25. B. Després, Domain decomposition method and the Helmholtz problem, Mathematical and Numerical Aspects of Wave Propagation Phenomena, (1991), p. 44–52.
  26. R. Duan and V. Rokhlin, High-order quadratures for the solution of scattering problems in two dimensions, Journal of Computational Physics, 228 (2009), pp. 2152 – 2174.
  27. I. S. Duff and J. K. Reid, The multifrontal solution of indefinite sparse symmetric linear equations, ACM Trans. Math. Softw., 9 (1983), pp. 302–325.
  28. B. Engquist and A. Majda, Absorbing boundary conditions for the numerical simulation of waves, Math. Comp., 31 (1977), pp. 629–651.
  29. B. Engquist and L. Ying, Sweeping preconditioner for the Helmholtz equation: Hierarchical matrix representation, Communications on Pure and Applied Mathematics, 64 (2011), pp. 697–735.
  30. B. Engquist and L. Ying, Sweeping preconditioner for the Helmholtz equation: moving perfectly matched layers, Multiscale Modeling & Simulation, 9 (2011), pp. 686–710.
  31. B. Engquist and H.-K. Zhao, Absorbing boundary conditions for domain decomposition, Applied Numerical Mathematics, 27 (1998), pp. 341 – 365. Special Issue on Absorbing Boundary Conditions.
  32. B. Engquist and H.-K. Zhao, Approximate separability of Green’s functions of the Helmholtz equation in the high frequency limit, March 2014.
  33. O. G. Ernst and M. J. Gander, Why it is difficult to solve Helmholtz problems with classical iterative methods, in Numerical Analysis of Multiscale Problems, Ivan G. Graham, Thomas Y. Hou, Omar Lakkis, and Robert Scheichl, eds., vol. 83 of Lecture Notes in Computational Science and Engineering, Springer Berlin Heidelberg, 2012, pp. 325–363.
  34. M. Frigo and S. G. Johnson, The design and implementation of FFTW3, Proceedings of the IEEE, 93 (2005), pp. 216–231. Special issue on “Program Generation, Optimization, and Platform Adaptation”.
  35. M. J. Gander, Optimized Schwarz methods, SIAM Journal on Numerical Analysis, 44 (2006), pp. 699–731.
  36. M. J. Gander, F. Magoulès, and F. Nataf, Optimized Schwarz methods without overlap for the Helmholtz equation, SIAM Journal on Scientific Computing, 24 (2002), pp. 38–60.
  37. A. George, Nested dissection of a regular finite element mesh, SIAM Journal on Numerical Analysis, 10 (1973), pp. 345–363.
  38. A. Gillman, A.H. Barnett, and P.G. Martinsson, A spectrally accurate direct solution technique for frequency-domain scattering problems with variable media, BIT Numerical Mathematics, (2014), pp. 1–30.
  39. A. Gillman and P.-G. Martinsson, A direct solver with complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method, SIAM Journal on Scientific Computing, 36 (2014), pp. A2023–A2046.
  40. I. G. Graham, E. A. Spence, and E. Vainikko, Domain decomposition preconditioning for high-frequency Helmholtz problems using absorption, ArXiv e-prints, (2015).
  41. K. L. Ho and L. Greengard, A fast direct solver for structured linear systems by recursive skeletonization, SIAM Journal on Scientific Computing, 34 (2012), pp. A2507–A2532.
  42. K. L. Ho and L. Ying, Hierarchical interpolative factorization for elliptic operators: Integral equations, Communications on Pure and Applied Mathematics, (2015).
  43. J. Xia, S. Chandrasekaran, M. Gu, and X. S. Li, Superfast multifrontal method for large structured linear systems of equations, SIAM Journal on Matrix Analysis and Applications, 31 (2010), pp. 1382–1411.
  44. S. Johnson, Notes on perfectly matched layers (PMLs), March 2010.
  45. F. Lanzara, V. Maz’ya, and G. Schmidt, Numerical solution of the Lippmann–Schwinger equation by approximate approximations, Journal of Fourier Analysis and Applications, 10 (2004), pp. 645–660.
  46. W. Leng, A fast propagation method for the Helmholtz equation, ArXiv e-prints, (2015).
  47. W. Leng and L. Ju, An overlapping domain decomposition preconditioner for the Helmholtz equation, ArXiv e-prints, (2015).
  48. F. Liu and L. Ying, Additive sweeping preconditioner for the Helmholtz equation, ArXiv e-prints, (2015).
  49. F. Liu and L. Ying, Recursive sweeping preconditioner for the 3D Helmholtz equation, ArXiv e-prints, (2015).
  50. P.-G. Martinsson and V. Rokhlin, A fast direct solver for boundary integral equations in two dimensions, Journal of Computational Physics, 205 (2005), pp. 1 – 23.
  51. J. Nickolls, I. Buck, M. Garland, and K. Skadron, Scalable parallel programming with CUDA, Queue, 6 (2008), pp. 40–53.
  52. F.-H. Rouet, X. S. Li, P. Ghysels, and A. Napov, A distributed-memory package for dense Hierarchically Semi-Separable matrix computations using randomization, ArXiv e-prints, (2015).
  53. Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM Journal on Scientific Computing, 14 (1993), pp. 461–469.
  54. O. Schenk and K. Gärtner, Solving unsymmetric sparse systems of linear equations with PARDISO, Future Generation Computer Systems, 20 (2004), pp. 475–487.
  55. J. Sifuentes, Preconditioned Iterative Methods for Inhomogeneous Acoustic Scattering Applications, PhD thesis, Rice University, Houston TX, USA, 2010.
  56. C. Stolk, A rapidly converging domain decomposition method for the Helmholtz equation, Journal of Computational Physics, 241 (2013), pp. 240–252.
  57. C. C. Stolk, A dispersion minimizing scheme for the 3-D Helmholtz equation with applications in multigrid based solvers, ArXiv e-prints, (2015).
  58. C. C. Stolk, An improved sweeping domain decomposition preconditioner for the Helmholtz equation, ArXiv e-prints, (2015).
  59. E. Turkel, D. Gordon, R. Gordon, and S. Tsynkov, Compact 2D and 3D sixth order schemes for the Helmholtz equation with variable wave number, Journal of Computational Physics, 232 (2013), pp. 272–287.
  60. G. Vainikko, Fast solvers of the Lippmann-Schwinger equation, in Direct and Inverse Problems of Mathematical Physics, Robert P. Gilbert, Joji Kajiwara, and Yongzhi S. Xu, eds., vol. 5 of International Society for Analysis, Applications and Computation, Springer US, 2000, pp. 423–440.
  61. A. Vion and C. Geuzaine, Double sweep preconditioner for optimized Schwarz methods applied to the Helmholtz problem, Journal of Computational Physics, 266 (2014), pp. 171–190.
  62. H. Weitzner, Lower hybrid waves in the cold plasma model, Communications on Pure and Applied Mathematics, 38 (1985), pp. 919–932.
  63. L. Ying, Sparsifying preconditioner for the Lippmann–Schwinger equation, Multiscale Modeling & Simulation, 13 (2015), pp. 644–660.
  64. L. Zepeda-Núñez and L. Demanet, The method of polarized traces for the 2D Helmholtz equation, Journal of Computational Physics, 308 (2016), pp. 347–388.
  65. L. Zepeda-Núñez, https://github.com/Forgotten/Lippmann-Schwinger, 2016.
  66. L. Zepeda-Núñez and L. Demanet, Nested domain decomposition with polarized traces for the 2D Helmholtz equation, ArXiv e-prints, 1510.01831 [math.NA], (2015).
  67. L. Zepeda-Núñez, R. J. Hewett, and L. Demanet, Preconditioning the 2D Helmholtz equation with polarized traces, in SEG Technical Program Expanded Abstracts 2014, 2014, pp. 3465–3470.