
A high-order accurate accelerated direct solver for acoustic scattering from surfaces

James Bremer, Adrianna Gillman, and Per-Gunnar Martinsson
July 9, 2019
Abstract.

We describe an accelerated direct solver for the integral equations which model acoustic scattering from curved surfaces. Surfaces are specified via a collection of smooth parameterizations given on triangles, a setting which generalizes the typical one of triangulated surfaces, and the integral equations are discretized via a high-order Nyström method. This allows for rapid convergence in cases in which high-order surface information is available. The high-order discretization technique is coupled with a direct solver based on the recursive construction of scattering matrices. The result is a solver which often attains $O(N)$ complexity in the number $N$ of discretization nodes and which is resistant to many of the pathologies which stymie iterative solvers in the numerical simulation of scattering. The performance of the algorithm is illustrated with numerical experiments which involve the simulation of scattering from a variety of domains, including one consisting of a collection of ellipsoids with randomly oriented semiaxes arranged in a grid, and a domain whose boundary has curved edges and corner points.

¹ Department of Mathematics, Dartmouth College.
² Department of Applied Mathematics, University of Colorado, Boulder.
³ Department of Mathematics, University of California, Davis.
⁴ Corresponding author. E-mail address: bremer@math.ucdavis.edu.

1. Introduction

The manuscript describes techniques based on boundary integral equation formulations for numerically solving certain linear elliptic boundary value problems associated with acoustic scattering in three dimensions. There are three principal components to techniques of this type:

  1. Reformulation. The boundary value problem is first reformulated as a system of integral equations, preferably involving operators which are well-conditioned when considered on spaces of integrable functions.

  2. Discretization. The resulting integral operators are then discretized. High-order discretization methods are to be preferred to low-order schemes, and the geometry of the domain under consideration is the principal factor in the design of high-order convergent approaches.

  3. Accelerated Solver. The large dense system of linear algebraic equations produced through discretization must be solved. Accelerated solvers which exploit the mathematical properties of the underlying physical problems are required to solve most problems of interest.

For boundary value problems given on two-dimensional domains — which result in integral operators on planar curves — satisfactory approaches to each of these three tasks have been described in the literature. Uniquely solvable integral equation formulations of boundary value problems for Laplace’s equation and the Helmholtz equation are available [4, 30, 1, 19, 14]; many mechanisms for the high-order discretization of singular integral operators on planar curves are known [14, 25], and even operators given on planar curves with corner singularities can be discretized efficiently and to near machine precision accuracy [6, 5, 10, 23, 22]; finally, fast direct solvers whose run times are often asymptotically optimal in the number of discretization nodes have been constructed [27, 15, 33, 3, 16, 12, 20, 24].

For three-dimensional domains, the state of affairs is much less satisfactory. In this case, the integral operators of interest are given on curved surfaces, which greatly complicates tasks (2) and (3) above. Task (2) requires the evaluation of large numbers of singular integrals given on curved regions. Standard mechanisms for addressing this problem, like the Duffy transform followed by Gaussian quadrature, are limited in their applicability (see [7] for a discussion). Most fast solvers (designed to address task (3)) are based on iterative methods and can fare poorly when the surface under consideration exhibits complicated geometry. For example, for a geometry consisting of a grid of ellipsoids bounded by a box two wavelengths in size, GMRES requires a very large number of iterations to drive the residual to an acceptable tolerance (see Figure 6 and Remark 6.1). Moreover, iterative methods are unable to efficiently handle problems with multiple right-hand sides, which arise frequently in design problems.

In this manuscript, we describe our contributions towards providing a remedy to problems (2) and (3). We consider scattering problems that arise in applications such as sonar and radar where the goal is to find the scattered field at a set of target locations generated when an incident wave sent from a set of known source locations hits an object or collection of objects (whose surface we denote Γ), cf. Fig. 1(a). The locations of the source and target points are known and typically given by the user. In this context, we present an efficient high-order accurate method for constructing a scattering matrix S which maps the incident field generated by the given sources on Γ to a set of induced charges on Γ that generate the scattered field at the targets. A crucial component of constructing S is solving a boundary integral equation defined on Γ. A recently developed high-order accurate discretization technique for integral equations on surfaces [7, 8] is used. This discretization has proven to be effective for Laplace and low frequency scattering problems. Since many scattering problems of interest involve higher frequencies and complicated geometries, a large number of discretization points are typically required to achieve high accuracy, even for high-order methods. In consequence, it would be infeasible to solve the resulting linear systems using dense linear algebra (e.g., Gaussian elimination). As a robust alternative to the commonly used iterative solvers (whose convergence problems in the present context were discussed above), we propose the use of a direct solver that exploits the underlying structure of the matrices of interest to construct an approximation to the scattering matrix S. Direct solvers have several benefits over iterative solvers: (i) the computational cost does not depend on the spectral properties of the system, (ii) each additional incoming field can be processed quickly, and (iii) the amount of memory required to store the scattering matrix scales linearly with the number of discretization points.

1.1. Model problems

The solution technique presented is applicable to many boundary integral equations that arise in mathematical physics, but we limit our discussion to the boundary value problems presented in this section.

Let Ω denote a scattering body, or a union of scattering bodies; Figures 6, 7, 8, 9, and 11 of Section 6 illustrate some specific domains of interest. Let Γ denote the surface of the body (or bodies). We are interested in finding the scattered field u off Γ for a given incident wave v. Specifically, this involves solving either an exterior Dirichlet problem for sound-soft scatterers

(DBVP)

$$ \begin{cases} \Delta u(x) + k^2 u(x) = 0, & x \in \mathbb{R}^3 \setminus \overline{\Omega}, \\ u(x) = f(x), & x \in \Gamma, \\ \displaystyle \lim_{r \to \infty} r \left( \frac{\partial u}{\partial r}(x) - i k\, u(x) \right) = 0, & r = |x|, \end{cases} $$

where $f = -v|_{\Gamma}$, or an exterior Neumann problem for sound-hard scatterers

(NBVP)

$$ \begin{cases} \Delta u(x) + k^2 u(x) = 0, & x \in \mathbb{R}^3 \setminus \overline{\Omega}, \\ \dfrac{\partial u}{\partial \nu}(x) = g(x), & x \in \Gamma, \\ \displaystyle \lim_{r \to \infty} r \left( \frac{\partial u}{\partial r}(x) - i k\, u(x) \right) = 0, & r = |x|, \end{cases} $$

where ν is the outward-pointing unit normal vector to Γ, and where $g = -\partial v / \partial \nu$.

1.2. Outline

The paper proceeds by introducing the integral formulations for each choice of boundary condition in Section 2. Section 3 presents a high-order accurate technique for discretizing the integral equations. Section 4 formally defines a scattering matrix and describes a method for constructing an approximation of the scattering matrix. Section 5 describes an accelerated technique for constructing the scattering matrix that often has asymptotic cost $O(N)$, where $N$ is the number of discretization nodes (provided the scattering body and the wavelength of the radiating field are kept constant as $N$ increases). Section 6 presents the results of several numerical experiments which demonstrate the properties of the proposed solver.

2. Integral equation formulations of linear elliptic boundary value problems

The reformulation of the boundary value problems in Section 1.1 as integral equations involves the classical single and double layer operators

(2.1)

$$ S_k[\sigma](x) = \int_\Gamma G_k(x,y)\, \sigma(y)\, dA(y), \qquad D_k[\sigma](x) = \int_\Gamma \frac{\partial G_k}{\partial \nu(y)}(x,y)\, \sigma(y)\, dA(y), $$

where $G_k$ is the free space Green’s function

$$ G_k(x,y) = \frac{\exp(ik|x-y|)}{4\pi |x-y|} $$

associated with the Helmholtz equation at wavenumber k, and where ν(y) denotes the outward-pointing unit normal to the surface Γ at the point y.
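To make the kernels in (2.1) concrete, the following Python sketch evaluates $G_k$ and the double layer kernel $\partial G_k/\partial \nu(y)$ at a pair of points. It is an illustration in our own notation (the function names are not part of any library), not a component of the solver described in this paper.

```python
import numpy as np

def green(k, x, y):
    """Free space Helmholtz Green's function G_k(x,y) = exp(ik|x-y|)/(4 pi |x-y|)."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def dgreen_dny(k, x, y, ny):
    """Double layer kernel: the derivative of G_k(x,y) with respect to y in the
    direction of the outward unit normal ny at y.  Using dG/dr = (ik - 1/r) G
    and grad_y r = -(x - y)/r, this equals -(ik - 1/r) * G * ((x-y).ny) / r."""
    d = x - y
    r = np.linalg.norm(d)
    g = np.exp(1j * k * r) / (4.0 * np.pi * r)
    return -(1j * k - 1.0 / r) * g * np.dot(d, ny) / r
```

For k = 0 the second routine reduces to the familiar Laplace double layer kernel $(x-y)\cdot\nu(y)/(4\pi|x-y|^3)$, which provides a quick correctness check.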

The reformulation of (DBVP) and (NBVP) is not entirely straightforward because of the possibility of spurious resonances. That is, for certain values of the wavenumber k, the naïve integral equation formulations of (DBVP) and (NBVP) lead to operators with nontrivial null-spaces, even though the original boundary value problems are well-posed. The wavenumbers for which this occurs are called spurious resonances. Complications arise due to the existence of multiple solutions at a resonance. An additional (and perhaps more serious) problem is that for values of k close to resonant values, the integral operators formed from (2.1) become ill-conditioned.

2.1. The boundary integral formulation for Dirichlet boundary value problems

The double layer representation

$$ u(x) = D_k[\sigma](x), \qquad x \in \mathbb{R}^3 \setminus \overline{\Omega}, $$

of the solution of (DBVP) leads to the second kind integral equation

$$ \frac{1}{2}\sigma(x) + D_k[\sigma](x) = f(x), \qquad x \in \Gamma. $$

As is well known, this integral equation is not uniquely solvable in the event that the eigenproblem

$$ \begin{cases} \Delta w(x) + k^2 w(x) = 0, & x \in \Omega, \\ \dfrac{\partial w}{\partial \nu}(x) = 0, & x \in \Gamma, \end{cases} $$

admits a nontrivial solution. A uniquely solvable integral equation can be obtained by choosing to represent the solution as

(2.2)

$$ u(x) = D_k[\sigma](x) - i\eta\, S_k[\sigma](x), \qquad x \in \mathbb{R}^3 \setminus \overline{\Omega}, $$

where η is a nonzero real coupling parameter; see, for instance, [28, 14]. This leads to the integral equation

(2.3)

$$ \frac{1}{2}\sigma(x) + D_k[\sigma](x) - i\eta\, S_k[\sigma](x) = f(x), \qquad x \in \Gamma, $$

which is sometimes referred to as the Combined Field Integral Equation (CFIE). We make use of the CFIE to solve the problem (DBVP) in this article.
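Under the sign conventions of (2.2) and (2.3) (our reconstruction of the standard combined field formulation), the CFIE kernel is a one-line combination of the routines sketched above:

```python
def cfie_kernel(k, eta, x, y, ny):
    """Kernel of the combined field representation (2.2):
    dG_k/dnu(y)(x,y) - i * eta * G_k(x,y)."""
    return dgreen_dny(k, x, y, ny) - 1j * eta * green(k, x, y)
```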

2.2. The boundary integral formulation for Neumann boundary value problems

The single layer representation

$$ u(x) = S_k[\sigma](x), \qquad x \in \mathbb{R}^3 \setminus \overline{\Omega}, $$

of the solution of (NBVP) leads to the second kind integral equation

(2.4)

$$ g(x) = -\frac{1}{2}\sigma(x) + D_k^{*}[\sigma](x), \qquad x \in \Gamma, $$

where $D_k^{*}$ denotes the operator obtained by replacing the kernel $\partial G_k/\partial\nu(y)$ in (2.1) with $\partial G_k/\partial\nu(x)$. Once again, the equation (2.4) is not necessarily uniquely solvable — when the eigenproblem

(2.5)

$$ \begin{cases} \Delta w(x) + k^2 w(x) = 0, & x \in \Omega, \\ w(x) = 0, & x \in \Gamma, \end{cases} $$

has nontrivial solutions (that is, when k is a resonant wavenumber), the operator appearing on the right-hand side of (2.4) is not invertible. Since the focus of this article is on the discretization of integral operators and the rapid inversion of the resulting matrices, we simply avoid wavenumbers for which spurious resonances exist. Appendix A presents a robust integral equation formulation which is suitable for use by the solver of this paper. We forgo its use here in order to reduce the complexity of the presentation.

3. A Nyström method for the discretization of integral equations on surfaces

In this work, we use a modified Nyström method for the discretization of the integral operators (2.1) of the preceding section. Our approach differs from standard Nyström methods in two important ways:

  1. It captures the action of an operator on $L^2(\Gamma)$ rather than its pointwise behavior. Among other advantages, this means that the approach results in well-conditioned discretizations in the case of Lipschitz domains.

  2. It employs a highly accurate mechanism, whose cost is largely independent of the geometry of the surface, for evaluating the singular integrals which arise in Nyström discretization. This is in contrast to standard methods for the evaluation of such integrals, whose costs involve constants which depend on the geometry of the domain.

This manuscript gives only a cursory account of the method. A detailed description can be found in [7], and a discussion of the advantages of this mode of discretization over standard Nyström and collocation methods can be found in [6].

3.1. Decompositions and discretization quadratures.

We define a decomposition of a surface Γ to be a finite sequence

$$ \rho_1, \rho_2, \ldots, \rho_m $$

of smooth mappings given on the standard simplex

$$ \Delta = \left\{ (s,t) : s \ge 0,\ t \ge 0,\ s + t \le 1 \right\} $$

with non-vanishing Jacobian determinants such that the sets

$$ \Gamma_1 = \rho_1(\Delta),\ \Gamma_2 = \rho_2(\Delta),\ \ldots,\ \Gamma_m = \rho_m(\Delta) $$

form a disjoint union of Γ.

We call a quadrature rule on Δ with positive weights which integrates all elements of the space $\mathcal{P}_{2n}$ of polynomials of degree less than or equal to 2n on the simplex Δ a discretization quadrature of order n. Such a rule is called Gaussian if the length of the rule is equal to $\dim \mathcal{P}_n = (n+1)(n+2)/2$. That is, a Gaussian rule is one whose length is equal to the dimension of the space of polynomials of degree less than or equal to n but which integrates polynomials of degree less than or equal to 2n. For the most part, Gaussian quadratures on triangles are not available. In the experiments of this paper, we used quadrature rules constructed via a generalization of the procedure of [9]. Table 1 lists the properties of the discretization quadrature rules used in this paper and compares their lengths to the sizes of true Gaussian quadratures.

Discretization order n                   4    6    8   10   12   16
Integration order 2n                     8   12   16   20   24   32
Length of a true Gaussian quadrature    15   28   45   66   91  153
Length of the quadrature that we use    17   32   52   82  112  192

Table 1. The properties of the quadratures used in this paper. For each rule, the length of an ideal (hypothetical) Gaussian quadrature is compared with the length of the rule that we actually use.
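The defining exactness property of a discretization quadrature is easy to verify numerically: a rule of order n must reproduce the exact values $\int_\Delta s^a t^b\, ds\, dt = a!\,b!/(a+b+2)!$ for all $a + b \le 2n$. A minimal Python sketch (the rule-loading routine in the usage comment is hypothetical):

```python
import numpy as np
from math import factorial

def check_discretization_quadrature(nodes, weights, n):
    """Verify that a rule on the simplex {(s,t): s,t >= 0, s+t <= 1}
    integrates all monomials s^a t^b of total degree <= 2n.
    `nodes` is an (m, 2) array and `weights` an (m,) array."""
    s, t = nodes[:, 0], nodes[:, 1]
    max_err = 0.0
    for a in range(2 * n + 1):
        for b in range(2 * n + 1 - a):
            approx = np.sum(weights * s**a * t**b)
            exact = factorial(a) * factorial(b) / factorial(a + b + 2)
            max_err = max(max_err, abs(approx - exact))
    return max_err

# Example usage (hypothetical loader for a stored rule of order 4):
# nodes, weights = load_rule("quad_order04.txt")
# assert check_discretization_quadrature(nodes, weights, 4) < 1e-14
```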

3.2. Discretizations of integral operators.

We now associate with a decomposition

$$ \rho_1, \rho_2, \ldots, \rho_m $$

of Γ and a discretization quadrature with nodes $x_1, \ldots, x_\ell \in \Delta$ and weights $w_1, \ldots, w_\ell$ a scheme for the discretization of certain integral operators. We begin by letting $S_N$, with $N = m\ell$, be the subspace of $L^2(\Gamma)$ consisting of all functions σ which are pointwise defined on Γ and such that for each $j = 1, \ldots, m$ the function

$$ \sigma(\rho_j(s,t)) $$

is a polynomial of degree at most n on Δ. Denote by $P_N$ the projection of $L^2(\Gamma)$ onto the subspace $S_N$ and let Φ be the mapping which takes $\sigma \in S_N$ to the vector in $\mathbb{C}^N$ with entries

$$ \sigma(\rho_j(x_i))\, \sqrt{w_i\, |J_j(x_i)|}, \qquad i = 1, \ldots, \ell, \quad j = 1, \ldots, m, $$

where $|J_j(x_i)|$ denotes the Jacobian determinant of $\rho_j$ at the node $x_i$. The ordering of the entries of Φσ is irrelevant. Suppose that T is an integral operator of the form

(3.1)

$$ T[\sigma](x) = \int_\Gamma K(x,y)\, \sigma(y)\, dA(y) $$

with K a linear combination of the kernels listed in (2.1). Let M denote the matrix which makes the diagram (3.2) commute:

(3.2)

$$ \begin{array}{ccc} S_N & \xrightarrow{\ P_N \circ T\ } & S_N \\ \Phi \big\downarrow & & \big\downarrow \Phi \\ \mathbb{C}^N & \xrightarrow{\quad M \quad} & \mathbb{C}^N \end{array} $$

3.3. Quadrature.

Forming the matrix M which appears in the diagram (3.2) is an exercise in numerical quadrature. Let $x = \rho_{j'}(x_{i'})$ be a discretization node on Γ and let $w_{i'}$ be the corresponding quadrature weight. The entries of the matrix M in (3.2) associated with the mapping $\rho_j$ and the point x must map the scaled values

(3.3)

$$ \sigma(\rho_j(x_i))\, \sqrt{w_i\, |J_j(x_i)|}, \qquad i = 1, \ldots, \ell, $$

of the function σ at the images of the discretization quadrature nodes under $\rho_j$ to the scaled value of the integral

(3.4)

$$ \sqrt{w_{i'}\, |J_{j'}(x_{i'})|}\ \int_\Delta K\big(x, \rho_j(s,t)\big)\, \sigma\big(\rho_j(s,t)\big)\, |J_j(s,t)|\, ds\, dt. $$

The manner in which this vector is formed depends on the location of the point x vis-à-vis the patch $\Gamma_j = \rho_j(\Delta)$. Let $B_j$ denote the ball of minimum radius containing $\Gamma_j$ and $2B_j$ denote the ball with the same center as $B_j$ but with twice the radius. Then we say that a point x is well-separated from the surface $\Gamma_j$ if x is outside of $2B_j$, and we say that x is near $\Gamma_j$ if x is inside of the ball $2B_j$ but not on the surface $\Gamma_j$.

3.3.1. Smooth regime.

When the point x is well-separated from $\Gamma_j$, the kernel $K(x, \rho_j(s,t))$ is smooth and the discretization quadrature can be used to evaluate the integral (3.4). In this case, we take the entries of the matrix M associated with x and $\Gamma_j$ to be

(3.5)

$$ \sqrt{w_{i'}\, |J_{j'}(x_{i'})|}\ K\big(x, \rho_j(x_i)\big)\, \sqrt{w_i\, |J_j(x_i)|}, \qquad i = 1, \ldots, \ell. $$
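The following sketch assembles the row block (3.5) for a single target node and patch, together with the bounding-ball test of the previous subsection. The names are ours; `kernel` is a two-argument callable such as those sketched in Section 2 (with any required normal vectors bound in via a closure), and for simplicity the enclosing ball is centered at the centroid of the patch samples rather than being the true minimal ball.

```python
import numpy as np

def well_separated(x, patch_points):
    """Test whether x lies outside the ball with twice the radius of a ball
    enclosing the sampled patch (centroid-centered approximation)."""
    center = patch_points.mean(axis=0)
    radius = np.max(np.linalg.norm(patch_points - center, axis=1))
    return np.linalg.norm(x - center) > 2.0 * radius

def smooth_block_row(kernel, x, w_t, jac_t, patch_points, w, jac):
    """Entries (3.5) for one target x and one patch rho_j:
    sqrt(w_t |J(x)|) * K(x, rho_j(x_i)) * sqrt(w_i |J_j(x_i)|), i = 1..l."""
    vals = np.array([kernel(x, y) for y in patch_points])
    return np.sqrt(w_t * jac_t) * vals * np.sqrt(w * jac)
```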

3.3.2. Near regime.

When x is near $\Gamma_j$, formula (3.5) may not achieve sufficient accuracy. Thus, we adaptively form a quadrature with nodes and weights

$$ z_1, \ldots, z_M \in \Delta \qquad \text{and} \qquad \mu_1, \ldots, \mu_M $$

sufficient to evaluate the nearly singular integrals which arise and apply an interpolation matrix E which takes the scaled values (3.3) of the function σ at the nodes of the discretization quadrature to its scaled values at the nodes of the adaptive quadrature. The vector of entries of M is then given by the product of the vector of values of the kernel evaluated at the adaptive quadrature nodes with the matrix E. As in the scheme of [7], the adaptive quadrature is formed in such a way that the matrix E has special structure — it is a product of block diagonal matrices — which allows it to be applied efficiently.

3.3.3. Singular regime.

When the target node x lies on the surface $\Gamma_j$, the integral (3.4) is singular and its evaluation becomes quite difficult. The typical approach to the evaluation of integrals of this form is to apply the Duffy transform or a polar coordinate transform in order to remove the singularity of the kernel. Once this has been done, quadratures for smooth functions (for instance, product Legendre quadratures) are typically applied.

While such schemes are exponentially convergent, they can be extremely inefficient. The difficulty is that the functions resulting from the change of variables can have poles close to the integration domain. For instance, if a polar coordinate transform is applied, the singular integrals which must be evaluated take the form

(3.6)

$$ \int_0^{2\pi} \int_0^{R(\theta)} \Phi(r,\theta)\, dr\, d\theta, $$

where the function R gives a parameterization of the integration domain in polar coordinates and the integrand Φ is a sum of terms, each of which is (up to a power of r) a quotient of the form

$$ \frac{p(\theta)}{\left( g(\theta) \right)^{c}}, $$

where c is a positive constant dependent on the order and the kernel under consideration, p is a trigonometric polynomial of finite order, and g is a function which depends on the parameterization ρ. More specifically, if the target node is the image under ρ of the point $(s_0, t_0)$, then the function g is given by

$$ g(\theta) = \left| \rho_s \cos\theta + \rho_t \sin\theta \right|^2, $$

where

$$ \rho_s = \frac{\partial \rho}{\partial s}(s_0, t_0) \qquad \text{and} \qquad \rho_t = \frac{\partial \rho}{\partial t}(s_0, t_0). $$

Obviously, the function g can have zeros close to the real axis.

The approach used in this paper — and described in [7] — calls for first applying a linear change of variables in order to ensure that the mapping ρ is conformal at the target node. Then the function g is a constant which does not depend on θ and the integrand simplifies considerably. The unfortunate side effect of applying this change of variables is that the integration domain is no longer the standard simplex but rather an arbitrary triangle. Now the integral which must be evaluated takes the form

(3.7)

$$ \int_0^{2\pi} \int_0^{R(\theta)} \sum_{l} p_l(\theta)\, r^{l}\, dr\, d\theta, $$

with the $p_l$ trigonometric polynomials of finite order. The function R can have poles close to the integration domain, however. We combat this problem by precomputing a collection of quadrature rules designed to integrate smooth functions on arbitrary triangles and applying them to evaluate the integrals (3.7). The performance of this approach is described in [7]; in one example of that paper, the standard approach of applying a change of variables followed by Gauss-Legendre quadrature required many times more quadrature nodes to accurately evaluate the singular integrals arising from a boundary value problem than the precomputed rules used here and in that paper.

4. Scattering matrices as a numerical tool

This section describes the concept of a scattering matrix, first from a physical perspective, and then the corresponding linear algebraic interpretation. It also informally describes how to build approximate scattering matrices computationally. A rigorous description is given in Section 5.

4.1. The scattering operator

We consider the “sound-soft” scattering formulation illustrated in Figure 1. We are given a charge distribution q on a contour $\Gamma_{\rm in}$ (the radiation “source”) that generates an incoming field v on the scatterer Γ. The incoming field on the scatterer induces an outgoing field u that satisfies (DBVP). Our objective is to computationally construct $u|_{\Gamma_{\rm out}}$, the restriction of the outgoing field to the specified contour $\Gamma_{\rm out}$, which represents the “antenna.” To solve (DBVP), we use the combined field formulation (2.2). In other words, we represent the outgoing field via a distribution σ of “induced sources” on the surface Γ. The map we seek to evaluate can be expressed mathematically as

(4.1)

$$ q \mapsto u\big|_{\Gamma_{\rm out}} = C\, B^{-1} A\, q, $$

where A is the operator

$$ A[q](x) = -\int_{\Gamma_{\rm in}} G_k(x,y)\, q(y)\, dl(y), \qquad x \in \Gamma $$

(the sign reflects the boundary condition $f = -v|_\Gamma$ of (DBVP)), B is the operator

(4.2)

$$ B[\sigma](x) = \frac{1}{2}\sigma(x) + D_k[\sigma](x) - i\eta\, S_k[\sigma](x), \qquad x \in \Gamma, $$

and C is the operator

$$ C[\sigma](x) = \int_{\Gamma} \left( \frac{\partial G_k}{\partial \nu(y)}(x,y) - i\eta\, G_k(x,y) \right) \sigma(y)\, dA(y), \qquad x \in \Gamma_{\rm out}. $$

Observe that since B is an invertible second kind integral operator, its inverse has singular values bounded away from zero.

Given a finite precision ε, we say that an operator S is a “scattering operator” for Γ to within precision ε when

$$ \left\| C\, B^{-1} A - C\, S\, A \right\| \le \varepsilon. $$

It turns out that the operators A and C have rapidly decaying singular values. Let $P_{\rm in}$ denote the orthogonal projection onto the span of the r leading left singular vectors of A, and let $P_{\rm out}$ denote the orthogonal projection onto the span of the r leading right singular vectors of C. Then

$$ C\, B^{-1} A \approx C \left( P_{\rm out}\, B^{-1}\, P_{\rm in} \right) A. $$

It follows that if we define an approximate scattering operator via

(4.3)

$$ S = P_{\rm out}\, B^{-1}\, P_{\rm in}, $$

then the approximation error

$$ \left\| C\, B^{-1} A - C\, S\, A \right\| $$

converges to zero very rapidly as r increases.

         
Figure 1. Geometry of the scattering problem in Section 4. The scattering surface is Γ. (a) The radiation source $\Gamma_{\rm in}$ is different from the antenna $\Gamma_{\rm out}$. (b) The typical geometry considered, with $\Gamma_{\rm in} = \Gamma_{\rm out}$. The scattering matrix generated by this geometry is the “complete” scattering matrix in the sense that any incoming field can be handled (as long as the radiation source is not inside $\Gamma_{\rm in}$), and the full radiated field can be obtained.

4.2. Definition of a discrete scattering matrix

The continuum scattering problem described in Section 4.1 has a direct linear algebraic analog for the discretized equations. Forming the Nyström discretizations A, B, and C of the three operators in (4.1), we obtain the map

(4.4)

$$ q \mapsto u = C\, B^{-1} A\, q. $$

Suppose that B is an N × N matrix, and let r denote a bound on the ε-ranks of A and C. Then form matrices $U_1$ and $V_1$ (the reason for the subscripts will become clear shortly) of size N × r such that

$$ A \approx U_1\, U_1^{\dagger} A \qquad \text{and} \qquad C \approx C\, V_1\, V_1^{\dagger}, $$

where a matrix with the superscript † denotes the pseudo-inverse of the matrix. In other words, the columns of $U_1$ span the column space of A and the columns of $V_1$ span the row space of C. Then the map (4.4) can be approximated as

(4.5)

$$ C\, B^{-1} A \approx \left( C\, V_1 \right) S_1 \left( U_1^{\dagger} A \right), $$

where

(4.6)

$$ S_1 = V_1^{\dagger}\, B^{-1}\, U_1. $$

The matrix $S_1$ is the discrete analog of the continuum scattering matrix for Γ defined by (4.3).
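For modest N, the construction above can be carried out directly with dense linear algebra. The sketch below (our own notation; the ε-ranks are determined by relative thresholding of singular values) forms $U_1$, $V_1$, and $S_1$ per (4.6); it is useful mainly as a reference implementation against which the accelerated procedure of Section 4.3 can be tested.

```python
import numpy as np

def dense_scattering_matrix(A, B, C, eps):
    """Form U1, V1 and the scattering matrix S1 = pinv(V1) @ inv(B) @ U1.

    A : (N, n_src)  discrete map from source charges to the incoming field on Gamma
    B : (N, N)      discretized CFIE operator
    C : (n_out, N)  discrete map from induced charges on Gamma to the antenna
    """
    # U1: orthonormal basis for the column space of A, to relative precision eps.
    Ua, sa, _ = np.linalg.svd(A, full_matrices=False)
    U1 = Ua[:, sa > eps * sa[0]]
    # V1: orthonormal basis for the row space of C.
    _, sc, Vch = np.linalg.svd(C, full_matrices=False)
    V1 = Vch[sc > eps * sc[0], :].conj().T
    # Since U1 and V1 have orthonormal columns, their pseudo-inverses are
    # their conjugate transposes.
    S1 = V1.conj().T @ np.linalg.solve(B, U1)
    return U1, V1, S1

# The compressed map (4.5): u ~= (C @ V1) @ S1 @ (U1.conj().T @ A) @ q
```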

4.3. Hierarchical construction of discrete scattering matrices

Observe that direct evaluation of the definition (4.6) of $S_1$ via, e.g., Gaussian elimination would be very costly since B is dense, and N could potentially be large (much larger than r). To avoid this expense, we will build $S_1$ via an accelerated hierarchical procedure. As a first step, we let the entire domain Γ form the root of the tree, and split Γ into two disjoint halves, cf. Figure 2(b),

(4.7)

$$ \Gamma = \Gamma_2 \cup \Gamma_3. $$

Let $N_2$ and $N_3$ denote the numbers of collocation points in $\Gamma_2$ and $\Gamma_3$, respectively. The idea is now to form scattering matrices $S_2$ and $S_3$ for the two halves, and then form $S_1$ by “merging” these smaller scattering matrices. To formalize, split the matrix B into blocks to conform with the partitioning (4.7),

$$ B = \begin{bmatrix} B_{22} & B_{23} \\ B_{32} & B_{33} \end{bmatrix}. $$

The off-diagonal blocks $B_{23}$ and $B_{32}$ have very rapidly decaying singular values, which means that they can be factored

(4.8)

$$ B_{23} \approx U_2\, \tilde B_{23}\, V_3^{\dagger}, \qquad U_2 \in \mathbb{C}^{N_2 \times r}, \quad \tilde B_{23} \in \mathbb{C}^{r \times r}, \quad V_3 \in \mathbb{C}^{N_3 \times r}, $$

(4.9)

$$ B_{32} \approx U_3\, \tilde B_{32}\, V_2^{\dagger}, \qquad U_3 \in \mathbb{C}^{N_3 \times r}, \quad \tilde B_{32} \in \mathbb{C}^{r \times r}, \quad V_2 \in \mathbb{C}^{N_2 \times r}. $$

The matrices $U_2$ and $U_3$ are additionally assumed to span the column spaces of the two row blocks of $U_1$, and $V_2$ and $V_3$ those of $V_1$, in the sense that there exist “small” matrices $\tilde U_1$ and $\tilde V_1$ (each of size 2r × r) such that

(4.10)

$$ U_1 = \begin{bmatrix} U_2 & 0 \\ 0 & U_3 \end{bmatrix} \tilde U_1 \qquad \text{and} \qquad V_1 = \begin{bmatrix} V_2 & 0 \\ 0 & V_3 \end{bmatrix} \tilde V_1. $$

Inserting (4.8), (4.9), and (4.10) into (4.6), we find

(4.11)

$$ S_1 = \tilde V_1^{\dagger} \begin{bmatrix} V_2^{\dagger} & 0 \\ 0 & V_3^{\dagger} \end{bmatrix} \begin{bmatrix} B_{22} & U_2 \tilde B_{23} V_3^{\dagger} \\ U_3 \tilde B_{32} V_2^{\dagger} & B_{33} \end{bmatrix}^{-1} \begin{bmatrix} U_2 & 0 \\ 0 & U_3 \end{bmatrix} \tilde U_1. $$

Using the Woodbury formula for matrix inversion (see Lemma 5.1), it can be shown that

(4.12)

$$ \begin{bmatrix} V_2^{\dagger} & 0 \\ 0 & V_3^{\dagger} \end{bmatrix} \begin{bmatrix} B_{22} & U_2 \tilde B_{23} V_3^{\dagger} \\ U_3 \tilde B_{32} V_2^{\dagger} & B_{33} \end{bmatrix}^{-1} \begin{bmatrix} U_2 & 0 \\ 0 & U_3 \end{bmatrix} = \begin{bmatrix} I & S_2 \tilde B_{23} \\ S_3 \tilde B_{32} & I \end{bmatrix}^{-1} \begin{bmatrix} S_2 & 0 \\ 0 & S_3 \end{bmatrix}, $$

where $S_2$ and $S_3$ are the scattering matrices for $\Gamma_2$ and $\Gamma_3$, respectively. The matrices $S_2$ and $S_3$ are given by the following formulas:

(4.13)

$$ S_2 = V_2^{\dagger}\, B_{22}^{-1}\, U_2 \qquad \text{and} \qquad S_3 = V_3^{\dagger}\, B_{33}^{-1}\, U_3. $$

Combining (4.11) and (4.12), we obtain the formula

(4.14)

$$ S_1 = \tilde V_1^{\dagger} \begin{bmatrix} I & S_2 \tilde B_{23} \\ S_3 \tilde B_{32} & I \end{bmatrix}^{-1} \begin{bmatrix} S_2 & 0 \\ 0 & S_3 \end{bmatrix} \tilde U_1. $$

The upside here is that in order to evaluate (4.14), we only need to invert a matrix of size 2r × 2r, whereas (4.6) requires the inversion of a matrix of size N × N. The downside is that we are still left with the task of inverting $B_{22}$ and $B_{33}$ to evaluate $S_2$ and $S_3$ via (4.13). Now, to reduce the cost of constructing $S_2$ and $S_3$, we further sub-divide

$$ \Gamma_2 = \Gamma_4 \cup \Gamma_5 \qquad \text{and} \qquad \Gamma_3 = \Gamma_6 \cup \Gamma_7, $$

as shown in Figure 2(c). Then it can be shown (see Lemma 5.1) that $S_2$ and $S_3$ are given by the formulas

$$ S_2 = \tilde V_2^{\dagger} \begin{bmatrix} I & S_4 \tilde B_{45} \\ S_5 \tilde B_{54} & I \end{bmatrix}^{-1} \begin{bmatrix} S_4 & 0 \\ 0 & S_5 \end{bmatrix} \tilde U_2 \qquad \text{and} \qquad S_3 = \tilde V_3^{\dagger} \begin{bmatrix} I & S_6 \tilde B_{67} \\ S_7 \tilde B_{76} & I \end{bmatrix}^{-1} \begin{bmatrix} S_6 & 0 \\ 0 & S_7 \end{bmatrix} \tilde U_3, $$

where

$$ S_\tau = V_\tau^{\dagger}\, B_{\tau\tau}^{-1}\, U_\tau, \qquad \tau = 4, 5, 6, 7, $$

and $B_{\tau\tau}$ denotes the diagonal block of B corresponding to the collocation points on $\Gamma_\tau$.

                   
Figure 2. Hierarchical partitioning of the domain as described in Section 4.3. (a) The full domain $\Gamma = \Gamma_1$. (b) Partition $\Gamma_1 = \Gamma_2 \cup \Gamma_3$. (c) Partition $\Gamma_2 = \Gamma_4 \cup \Gamma_5$ and $\Gamma_3 = \Gamma_6 \cup \Gamma_7$.

The idea is now to continue to recursively split all patches until we get to a point where each patch holds sufficiently few discretization points that its scattering matrix can inexpensively be constructed via, e.g., Gaussian elimination.
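The merge formula (4.14), restated as (5.8) below, involves only matrices of size O(r) and translates directly into a few lines of linear algebra. A sketch under the rank-uniform conventions of this section (all names are ours):

```python
import numpy as np

def merge_scattering(S_a, S_b, Bt_ab, Bt_ba, Ut, Vt):
    """Merge two child scattering matrices per (4.14)/(5.8).

    S_a, S_b     : (r, r) scattering matrices of the two children
    Bt_ab, Bt_ba : (r, r) sibling interaction matrices
    Ut, Vt       : (2r, r) transfer matrices, e.g. U_parent = diag(U_a, U_b) @ Ut
    """
    r = S_a.shape[0]
    I = np.eye(r)
    M = np.block([[I, S_a @ Bt_ab],
                  [S_b @ Bt_ba, I]])
    D = np.block([[S_a, np.zeros((r, r))],
                  [np.zeros((r, r)), S_b]])
    # S_parent = pinv(Vt) @ M^{-1} @ D @ Ut; only 2r x 2r matrices are inverted.
    return np.linalg.pinv(Vt) @ np.linalg.solve(M, D @ Ut)
```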

4.4. Scattering ranks

5. The hierarchical construction of a scattering matrix

Section 4 defined the concept of a scattering matrix, and informally outlined a procedure for how to build approximate scattering matrices via a hierarchical procedure. This section provides a more rigorous description, which requires the introduction of some notational machinery. The key concept is that the matrix B obtained by discretizing the operator defined by (4.2) can be represented efficiently in a data sparse format that we call Hierarchically Block Separable (HBS), which is closely related to the Hierarchically Semi-Separable (HSS) format [32, 29, 31]. Our presentation is in principle self-contained but assumes some familiarity with hierarchically rank-structured matrix formats such as HSS/HBS (or the related $\mathcal{H}$-matrix formats [2, 3]).

Remark 5.1.

The method presented in Sections 4.2 and 4.3 and in this section uses the same rank r for the incoming and outgoing factorizations purely for simplicity of presentation. In practice this is not required.

5.1. Problem formulation

Let B denote the dense N × N matrix obtained upon Nyström discretization (see Section 3) of the operator defined by (4.2). Moreover, suppose that we are given a positive tolerance ε, and two tall thin matrices U and V satisfying the following:

  • A well-conditioned matrix U of size N × r whose columns span (to within precision ε) the column space of the matrix that maps the given charge distribution on the “radiation source” $\Gamma_{\rm in}$ to the incoming field at the collocation points on Γ. In other words, any incoming field hitting Γ can be expressed as a linear combination of the columns of U.

  • A well-conditioned matrix V of size N × r whose columns span (to within precision ε) the row space of the matrix that maps the induced charges at the collocation points on Γ to the field at the collocation points on the “antenna” $\Gamma_{\rm out}$. In other words, any field generated on $\Gamma_{\rm out}$ by induced charges on Γ can be replicated by a charge distribution in the span of V.

Our objective is then to construct a highly accurate approximation to the scattering matrix

(5.1)

$$ S = V^{\dagger}\, B^{-1}\, U. $$

5.2. Hierarchical tree

The hierarchical construction of the scattering matrix is based on a binary tree structure of patches on the scattering surface Γ. We let the full domain form the root of the tree, and give it the index 1, so that $\Gamma_1 = \Gamma$. We next split the root into two roughly equi-sized patches $\Gamma_2$ and $\Gamma_3$ so that $\Gamma_1 = \Gamma_2 \cup \Gamma_3$. The full tree is then formed by continuing to subdivide any patch that holds more than some preset fixed number of collocation points. A leaf is a node in the tree corresponding to a patch that never got split. For a non-leaf node τ, its children are the two nodes α and β such that $\Gamma_\tau = \Gamma_\alpha \cup \Gamma_\beta$, and τ is then the parent of α and β. Two boxes with the same parent are called siblings.

Let $I = [1, 2, \ldots, N]$ denote the full list of indices for the collocation points on Γ, and for any node τ let the index vector $I_\tau \subseteq I$ mark the set of collocation nodes inside $\Gamma_\tau$. Let $n_\tau$ denote the number of nodes in $I_\tau$.

Level 0:  $I_1 = [1, 2, \ldots, N]$
Level 1:  $I_2,\ I_3$
Level 2:  $I_4,\ I_5,\ I_6,\ I_7$
Level 3:  $I_8,\ I_9,\ \ldots,\ I_{15}$

Figure 3. Numbering of nodes in a fully populated binary tree with 4 levels. The root is the original index vector $I = I_1 = [1, 2, \ldots, N]$.
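A minimal sketch of the tree construction (our own data structure; a real implementation would split each patch geometrically into two roughly equi-sized pieces of the surface, whereas here the index list is simply halved): indices are subdivided until each patch holds at most n_max collocation points, and nodes are numbered so that a parent always precedes its children, as Algorithm I below requires.

```python
def build_tree(I, n_max):
    """Build a binary tree over the index vector I (a list of point indices).
    Returns a list of nodes, each a dict with keys 'I', 'children', 'parent';
    node 0 is the root (node 1 in the numbering of the text)."""
    nodes = [{"I": list(I), "children": None, "parent": None}]
    queue = [0]
    while queue:
        tau = queue.pop(0)
        idx = nodes[tau]["I"]
        if len(idx) > n_max:
            mid = len(idx) // 2
            for half in (idx[:mid], idx[mid:]):
                nodes.append({"I": half, "children": None, "parent": tau})
            nodes[tau]["children"] = (len(nodes) - 2, len(nodes) - 1)
            queue.extend(nodes[tau]["children"])
    return nodes
```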

5.3. Hierarchically Block Separable (HBS) matrices

We say that a dense matrix B is HBS with respect to a given tree partitioning of its index vector if for every node τ in the tree, there exist basis matrices $U_\tau$ and $V_\tau$ with the following properties:

  1. For every sibling pair {α, β}, the corresponding off-diagonal block admits (up to precision ε) the factorization

    (5.2)

    $$ B(I_\alpha, I_\beta) = U_\alpha\, \tilde B_{\alpha\beta}\, V_\beta^{\dagger}, $$

    where $U_\alpha \in \mathbb{C}^{n_\alpha \times r}$ and $V_\beta \in \mathbb{C}^{n_\beta \times r}$.

  2. For every sibling pair {α, β} with parent τ, there exist matrices $\tilde U_\tau$ and $\tilde V_\tau$ of size 2r × r such that

    (5.3)

    $$ U_\tau = \begin{bmatrix} U_\alpha & 0 \\ 0 & U_\beta \end{bmatrix} \tilde U_\tau, $$

    (5.4)

    $$ V_\tau = \begin{bmatrix} V_\alpha & 0 \\ 0 & V_\beta \end{bmatrix} \tilde V_\tau. $$

For notational convenience, we set for every leaf node τ

$$ \tilde U_\tau = U_\tau \qquad \text{and} \qquad \tilde V_\tau = V_\tau. $$

Note that every basis matrix $\tilde U_\tau$ and $\tilde V_\tau$ is small. The point is that the matrix B is fully specified by giving the following for every node τ:

  • the two small matrices $\tilde U_\tau$ and $\tilde V_\tau$,

  • if τ is a leaf, the diagonal block $D_\tau = B(I_\tau, I_\tau)$, and

  • if τ is a parent node with children α and β, the sibling interaction matrices $\tilde B_{\alpha\beta}$ and $\tilde B_{\beta\alpha}$.

Particular attention should be paid to the fact that the long basis matrices $U_\tau$ and $V_\tau$ associated with non-leaf nodes are never formed — these were introduced merely to facilitate the derivation of the HBS representation. Figure 4 summarizes the factors required, and Algorithm I describes how for any vector q the matrix-vector product B q can be computed if the HBS factors of B are given.

Name:                    Size:              Function:

For each leaf node τ:
  $D_\tau$               $n_\tau \times n_\tau$   Diagonal block $B(I_\tau, I_\tau)$.
  $U_\tau$               $n_\tau \times r$        Basis for the columns in the off-diagonal blocks in row $I_\tau$.
  $V_\tau$               $n_\tau \times r$        Basis for the rows in the off-diagonal blocks in column $I_\tau$.

For each parent node τ with children α and β:
  $\tilde B_{\alpha\beta}$   $r \times r$         Sibling interaction matrix.
  $\tilde B_{\beta\alpha}$   $r \times r$         Sibling interaction matrix.
  $\tilde U_\tau$        $2r \times r$            Basis for the (reduced) incoming fields on τ.
  $\tilde V_\tau$        $2r \times r$            Basis for the (reduced) outgoing fields from τ.

Figure 4. An HBS matrix B associated with a tree is fully specified if the factors listed above are provided.

Algorithm I: HBS matrix-vector multiply

Given a vector q and a matrix B in HBS format, compute u = B q. It is assumed that the nodes are ordered so that if τ is the parent of σ, then τ < σ; the convention $\hat u_1 = 0$ is used at the root.

for τ = N_nodes, N_nodes − 1, …, 1
  if τ is a leaf
    $\hat q_\tau = \tilde V_\tau^{\dagger}\, q(I_\tau)$.
  else
    $\hat q_\tau = \tilde V_\tau^{\dagger} \begin{bmatrix} \hat q_\alpha \\ \hat q_\beta \end{bmatrix}$, where α and β denote the children of τ.
  end if
end for

for τ = 1, 2, …, N_nodes
  if τ is a parent
    $\begin{bmatrix} \hat u_\alpha \\ \hat u_\beta \end{bmatrix} = \tilde U_\tau\, \hat u_\tau + \begin{bmatrix} 0 & \tilde B_{\alpha\beta} \\ \tilde B_{\beta\alpha} & 0 \end{bmatrix} \begin{bmatrix} \hat q_\alpha \\ \hat q_\beta \end{bmatrix}$, where α and β denote the children of τ.
  else
    $u(I_\tau) = \tilde U_\tau\, \hat u_\tau + B(I_\tau, I_\tau)\, q(I_\tau)$.
  end if
end for
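A direct transcription of Algorithm I into Python, using the tree sketched in Section 5.2 and a hypothetical dictionary F holding the HBS factors of Figure 4. The basis matrices are assumed to have orthonormal columns (so that pseudo-inverses are conjugate transposes), and the root is assumed to have children.

```python
import numpy as np

def hbs_matvec(nodes, F, q):
    """Apply an HBS matrix B to the vector q (a numpy array) per Algorithm I.

    nodes : tree as produced by build_tree (a parent precedes its children)
    F     : factors; for a leaf tau: F["D",tau], F["U",tau], F["V",tau];
            for a parent tau with children a, b: F["Ut",tau], F["Vt",tau],
            F["Bt",a,b], F["Bt",b,a].
    """
    n_nodes = len(nodes)
    qhat = [None] * n_nodes
    uhat = [None] * n_nodes
    u = np.zeros(len(q), dtype=complex)

    # Upward pass: compress the outgoing charge distributions.
    for tau in range(n_nodes - 1, -1, -1):
        if nodes[tau]["children"] is None:
            qhat[tau] = F["V", tau].conj().T @ q[nodes[tau]["I"]]
        else:
            a, b = nodes[tau]["children"]
            qhat[tau] = F["Vt", tau].conj().T @ np.concatenate([qhat[a], qhat[b]])

    # Downward pass: the root receives no incoming field from outside itself.
    uhat[0] = np.zeros(F["Ut", 0].shape[1], dtype=complex)
    for tau in range(n_nodes):
        if nodes[tau]["children"] is not None:
            a, b = nodes[tau]["children"]
            t = F["Ut", tau] @ uhat[tau]
            ra = len(qhat[a])
            uhat[a] = t[:ra] + F["Bt", a, b] @ qhat[b]
            uhat[b] = t[ra:] + F["Bt", b, a] @ qhat[a]
        else:
            idx = nodes[tau]["I"]
            u[idx] = F["U", tau] @ uhat[tau] + F["D", tau] @ q[idx]
    return u
```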

Remark 5.2 (Meaning of basis matrices $U_\tau$ and $V_\tau$).

Let us describe the heuristic meaning of the “tall thin” basis matrix $U_\tau$. Conditions (5.2) and (5.3) together imply that the columns of $U_\tau$ span the column space of the off-diagonal block $B(I_\tau, I_\tau^{\rm c})$, where $I_\tau^{\rm c} = I \setminus I_\tau$, as well as the columns of $U(I_\tau, :)$. Writing the incoming field as $f = U w$, we can write the restriction of the local equilibrium equation on $I_\tau$ as

(5.5)

$$ B(I_\tau, I_\tau)\, q(I_\tau) = U(I_\tau, :)\, w - B(I_\tau, I_\tau^{\rm c})\, q(I_\tau^{\rm c}). $$

Exploiting that the columns of $U_\tau$ span the column spaces of both matrices on the right hand side of (5.5), we can rewrite the equation as

(5.6)

$$ B(I_\tau, I_\tau)\, q(I_\tau) = U_\tau \left( \hat w_\tau - \hat v_\tau \right), $$

where $\hat w_\tau$ is an efficient representation of the local incoming field, and where $\hat v_\tau$ is an efficient representation of the field on $\Gamma_\tau$ caused by charges on $\Gamma \setminus \Gamma_\tau$. The basis matrix $V_\tau$ analogously provides an efficient way to represent the outgoing fields on $\Gamma_\tau$. The “small” matrices $\tilde U_\tau$ and $\tilde V_\tau$ then provide compact representations of $U_\tau$ and $V_\tau$. Observe for instance that

$$ U_\tau = \begin{bmatrix} U_\alpha & 0 \\ 0 & U_\beta \end{bmatrix} \tilde U_\tau $$

whenever α and β are the children of τ, so that the long basis matrices can be recovered from the small ones and never need to be stored.

These concepts are described in further detail in Appendix B.

Remark 5.3.

(Exact versus numerical rank) In practical computations, the off-diagonal blocks of discretized integral operators only have low numerical (as opposed to exact) rank. However, in the present context, the singular values tend to decay exponentially fast, which means that the truncation error can often be rendered close to machine precision. We do not in this manuscript provide a rigorous error analysis, but merely observe that across a broad range of numerical experiments, we did not see any error aggregation; instead, the local truncation errors closely matched the global error.

5.4. Hierarchical construction of a scattering matrix

For any node τ, define its associated scattering matrix $S_\tau$ via

(5.7)

$$ S_\tau = V_\tau^{\dagger} \left( B(I_\tau, I_\tau) \right)^{-1} U_\tau. $$

The following lemma states that the scattering matrix $S_\tau$ for a parent node τ with children α and β can be formed inexpensively from the scattering matrices $S_\alpha$ and $S_\beta$, along with the sibling interaction matrices $\tilde B_{\alpha\beta}$ and $\tilde B_{\beta\alpha}$.

Lemma 5.1.

Let τ be a node with children α and β. Then

(5.8)

$$ S_\tau = \tilde V_\tau^{\dagger} \begin{bmatrix} I & S_\alpha \tilde B_{\alpha\beta} \\ S_\beta \tilde B_{\beta\alpha} & I \end{bmatrix}^{-1} \begin{bmatrix} S_\alpha & 0 \\ 0 & S_\beta \end{bmatrix} \tilde U_\tau. $$

Lemma 5.1 immediately leads to an algorithm for computing the scattering matrix $S_1$ of the entire surface: the scattering matrices associated with the leaf nodes are constructed directly from the definition (5.7), and the scattering matrix of each parent is then formed from those of its children via formula (5.8) in a single upward pass through the tree.
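A sketch of that algorithm (same hypothetical factor layout as in the earlier sketches; merge_scattering is the routine from Section 4.3): the scattering matrices of the leaves are formed densely via (5.7), after which each parent is processed by a post-order recursion via (5.8).

```python
import numpy as np

def build_scattering(nodes, F, tau=0):
    """Recursively construct the scattering matrix S_tau per (5.7) and (5.8)."""
    if nodes[tau]["children"] is None:
        # Leaf: dense construction, S_tau = pinv(V_tau) @ D_tau^{-1} @ U_tau.
        return np.linalg.pinv(F["V", tau]) @ np.linalg.solve(F["D", tau], F["U", tau])
    a, b = nodes[tau]["children"]
    S_a = build_scattering(nodes, F, a)
    S_b = build_scattering(nodes, F, b)
    # Parent: merge the children; only matrices of size O(r) are involved.
    return merge_scattering(S_a, S_b, F["Bt", a, b], F["Bt", b, a],
                            F["Ut", tau], F["Vt", tau])
```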