
# A high-order accurate accelerated direct solver for acoustic scattering from surfaces

James Bremer Adrianna Gillman  and  Per-Gunnar Martinsson
July 9, 2019
###### Abstract.

We describe an accelerated direct solver for the integral equations which model acoustic scattering from curved surfaces. Surfaces are specified via a collection of smooth parameterizations given on triangles, a setting which generalizes the typical one of triangulated surfaces, and the integral equations are discretized via a high-order Nyström method. This allows for rapid convergence in cases in which high-order surface information is available. The high-order discretization technique is coupled with a direct solver based on the recursive construction of scattering matrices. The result is a solver which often attains favorable asymptotic complexity in the number of discretization nodes and which is resistant to many of the pathologies which stymie iterative solvers in the numerical simulation of scattering. The performance of the algorithm is illustrated with numerical experiments which involve the simulation of scattering from a variety of domains, including one consisting of a collection of ellipsoids with randomly oriented semiaxes arranged in a grid, and a domain whose boundary has curved edges and corner points.

1 Department of Mathematics, Dartmouth College
2 Department of Applied Mathematics, University of Colorado, Boulder
3 Department of Mathematics, University of California, Davis
4 Corresponding author. E-mail address: bremer@math.ucdavis.edu

## 1. Introduction

The manuscript describes techniques based on boundary integral equation formulations for numerically solving certain linear elliptic boundary value problems associated with acoustic scattering in three dimensions. There are three principal components to techniques of this type:

1. Reformulation. The boundary value problem is first reformulated as a system of integral equations, preferably involving operators which are well-conditioned when considered on spaces of integrable functions.

2. Discretization. The resulting integral operators are then discretized. High-order discretization methods are to be preferred to low-order schemes and the geometry of the domain under consideration is the principal consideration in producing high-order convergent approaches.

3. Accelerated Solver. The large dense system of linear algebraic equations produced through discretization must be solved. Accelerated solvers which exploit the mathematical properties of the underlying physical problems are required to solve most problems of interest.

For boundary value problems given on two-dimensional domains, which result in integral operators on planar curves, satisfactory approaches to each of these three tasks have been described in the literature. Uniquely solvable integral equation formulations of boundary value problems for Laplace's equation and the Helmholtz equation are available [4, 30, 1, 19, 14]; many mechanisms for the high-order discretization of singular integral operators on planar curves are known [14, 25], and even operators given on planar curves with corner singularities can be discretized efficiently and to near machine precision accuracy [6, 5, 10, 23, 22]; finally, fast direct solvers whose run times are often asymptotically optimal in the number of discretization nodes have been constructed [27, 15, 33, 3, 16, 12, 20, 24].

For three-dimensional domains, the state of affairs is much less satisfactory. In this case, the integral operators of interest are given on curved surfaces, which greatly complicates problems (2) and (3) above. Problem (2) requires the evaluation of large numbers of singular integrals given on curved regions. Standard mechanisms for addressing this problem, like the Duffy transform followed by Gaussian quadrature, are limited in their applicability (see [7] for a discussion). Most fast solvers (designed to address problem (3)) are based on iterative methods and can fare poorly when the surface under consideration exhibits complicated geometry. For example, for a geometry consisting of a grid of ellipsoids bounded by a box two wavelengths in size, GMRES requires a large number of iterations to drive the residual down (see Figure 6 and Remark 6.1). Moreover, iterative methods are unable to efficiently handle problems with multiple right hand sides, which arise frequently in design problems.

In this manuscript, we describe our contributions towards providing a remedy to problems (2) and (3). We consider scattering problems that arise in applications such as sonar and radar, where the goal is to find the scattered field at a set of target locations generated when an incident wave sent from a set of source locations hits an object or collection of objects (whose surface we denote $\Gamma$), cf. Fig. 1(a). The locations of the source and target points are known and typically given by the user. In this context, we present an efficient high-order accurate method for constructing a scattering matrix $S$ which maps the incident field generated by the given sources on $\Gamma$ to a set of induced charges on $\Gamma$ that generate the scattered field at the targets. A crucial component of constructing $S$ is solving a boundary integral equation defined on $\Gamma$. A recently developed high order accurate discretization technique for integral equations on surfaces [7, 8] is used. This discretization has proven to be effective for Laplace and low frequency scattering problems. Since many scattering problems of interest involve higher frequencies and complicated geometries, a large number of discretization points is typically required to achieve high accuracy, even for high-order methods. In consequence, it would be infeasible to solve the resulting linear systems using dense linear algebra (e.g., Gaussian elimination). As a robust alternative to the commonly used iterative solvers (whose convergence problems in the present context were discussed above), we propose the use of a direct solver that exploits the underlying structure of the matrices of interest to construct an approximation to the scattering matrix $S$.
Direct solvers have several benefits over iterative solvers: (i) the computational cost does not depend on the spectral properties of the system, (ii) each additional incoming field can be processed quickly, (iii) the amount of memory required to store the scattering matrix scales linearly with the number of discretization points.

### 1.1. Model problems

The solution technique presented is applicable to many boundary integral equations that arise in mathematical physics but we limit our discussion to the boundary value problems presented in this section.

Let $\Omega$ denote a scattering body, or a union of scattering bodies; Figures 6, 7, 8, 9, and 11 of Section 6 illustrate some specific domains of interest. Let $\Gamma$ denote the surface of the body (or bodies). We are interested in finding the scattered field $u$ off $\Gamma$ for a given incident wave. Specifically, this involves solving either an exterior Dirichlet problem for sound-soft scatterers

$$\left\{\begin{aligned} \Delta u+\kappa^{2}u&=0 &&\text{in }\Omega^{c},\\ u&=g &&\text{on }\Gamma,\\ \frac{\partial u}{\partial r}-i\kappa u&=O\!\left(\frac{1}{r}\right) &&\text{as }r\to\infty, \end{aligned}\right. \tag{DBVP}$$

where $g$ denotes the prescribed boundary data, or an exterior Neumann problem for sound-hard scatterers

$$\left\{\begin{aligned} \Delta u+\kappa^{2}u&=0 &&\text{in }\Omega^{c},\\ \frac{\partial u}{\partial\eta}&=g &&\text{on }\Gamma,\\ \frac{\partial u}{\partial r}-i\kappa u&=O\!\left(\frac{1}{r}\right) &&\text{as }r\to\infty, \end{aligned}\right. \tag{NBVP}$$

where $\eta$ is the outward-pointing unit normal vector to $\Gamma$, and where $r = |x|$.

### 1.2. Outline

The paper proceeds by introducing the integral formulations for each choice of boundary condition in Section 2. Section 3 presents a high order accurate technique for discretizing the integral equations. Section 4 formally defines a scattering matrix and describes a method for constructing an approximation of the scattering matrix. Section 5 describes an accelerated technique for constructing the scattering matrix whose asymptotic cost is often nearly optimal in the number of discretization points $N$ (provided the scattering body and the wavelength of the radiating field are kept constant as $N$ increases). Section 6 presents the results of several numerical experiments which demonstrate the properties of the proposed solver.

## 2. Integral equation formulations of linear elliptic boundary value problems

The reformulation of the boundary value problems in Section 1.1 as integral equations involves the classical single and double layer operators

$$S_{\kappa}f(x)=\iint_{\Gamma}G_{\kappa}(x,y)\,f(y)\,ds(y),\qquad D_{\kappa}f(x)=\iint_{\Gamma}\eta_{y}\cdot\nabla_{y}G_{\kappa}(x,y)\,f(y)\,ds(y),\qquad\text{and}\qquad D^{*}_{\kappa}f(x)=\iint_{\Gamma}\eta_{x}\cdot\nabla_{x}G_{\kappa}(x,y)\,f(y)\,ds(y), \tag{2.1}$$

where $G_{\kappa}$ is the free space Green's function

$$G_{\kappa}(x,y)=\frac{\exp(i\kappa|x-y|)}{|x-y|}$$

associated with the Helmholtz equation at wavenumber $\kappa$, and where $\eta_{y}$ denotes the outward-pointing unit normal to the surface $\Gamma$ at the point $y$.
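As a quick sanity check on this normalization (a sketch, not code from the paper), one can verify numerically that $G_{\kappa}$ satisfies the Helmholtz equation away from its source point:

```python
import numpy as np

kappa = 2.0

def G(x, y):
    # Free-space Helmholtz kernel in the paper's normalization
    # (no 1/(4*pi) factor; any constant multiple satisfies the PDE).
    r = np.linalg.norm(x - y)
    return np.exp(1j * kappa * r) / r

# Verify (Delta + kappa^2) G(., y) = 0 away from the source point y
# using a centered second-order finite-difference Laplacian.
y = np.zeros(3)
x = np.array([0.7, -0.4, 0.5])
h = 1e-4
lap = -6.0 * G(x, y)
for d in range(3):
    e = np.zeros(3)
    e[d] = h
    lap += G(x + e, y) + G(x - e, y)
lap /= h * h

residual = abs(lap + kappa**2 * G(x, y))
print(residual)  # small: dominated by the O(h^2) truncation error
```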

The reformulation of (DBVP) and (NBVP) is not entirely straightforward because of the possibility of spurious resonances. That is, for certain values of the wavenumber $\kappa$, the naïve integral equation formulations of (DBVP) and (NBVP) lead to operators with nontrivial null-spaces, even though the original boundary value problems are well-posed. The wavenumbers for which this occurs are called spurious resonances. Complications arise due to the existence of multiple solutions at a resonance. An additional (and perhaps more serious) problem is that for values of $\kappa$ close to resonant values, the integral operators (2.1) become ill-conditioned.

### 2.1. The boundary integral formulation for Dirichlet boundary values problems

The double layer representation

$$u(x)=D_{\kappa}\sigma(x)$$

of the solution of (DBVP) leads to the second kind integral equation

$$\frac{1}{2}\sigma(x)+D_{\kappa}\sigma(x)=g(x).$$

As is well known, this integral equation is not uniquely solvable in the event that the eigenproblem

$$\Delta u+\kappa^{2}u=0 \ \text{ in }\Omega,\qquad \frac{\partial u}{\partial\eta}=0 \ \text{ on }\Gamma$$

admits a nontrivial solution. A uniquely solvable integral equation can be obtained by choosing to represent the solution as

$$u(x)=D_{\kappa}\sigma(x)+i|\kappa|\,S_{\kappa}\sigma(x); \tag{2.2}$$

see, for instance, [28, 14]. This leads to the integral equation

$$\frac{1}{2}\sigma(x)+D_{\kappa}\sigma(x)+i|\kappa|\,S_{\kappa}\sigma(x)=g(x), \tag{2.3}$$

which is sometimes referred to as the Combined Field Integral Equation (CFIE). We make use of the CFIE to solve the problem (DBVP) in this article.

### 2.2. The boundary integral formulation for Neumann boundary values problems

The single layer representation

$$u=S_{\kappa}\sigma$$

of the solution of (NBVP) leads to the second kind integral equation

$$\frac{1}{2}\sigma(x)+D^{*}_{\kappa}\sigma(x)=g(x). \tag{2.4}$$

Once again the equation (2.4) is not necessarily uniquely solvable — when the eigenproblem

$$\Delta u+\kappa^{2}u=0 \ \text{ in }\Omega,\qquad u=0 \ \text{ on }\Gamma \tag{2.5}$$

has nontrivial solutions (that is, when $\kappa$ is a resonant wavenumber), the operator appearing on the left-hand side of (2.4) is not invertible. Since the focus of this article is on the discretization of integral operators and the rapid inversion of the resulting matrices, we simply avoid wavenumbers for which spurious resonances exist. Appendix A presents a robust integral equation formulation which is suitable for use by the solver of this paper. We forgo its use here in order to reduce the complexity of the presentation.

## 3. A Nyström method for the discretization of integral equations on surfaces

In this work, we use a modified Nyström method for the discretization of the integral operators (2.1) of the preceding section. Our approach differs from standard Nyström methods in two important ways:

1. It captures the action of an operator rather than its pointwise behavior. Among other advantages, this means that the approach results in well-conditioned operators in the case of Lipschitz domains.

2. A highly accurate mechanism for evaluating the singular integrals which arise in Nyström discretization is employed, whose cost is largely independent of the geometry of the surface. This is in contrast to standard methods for the evaluation of such integrals, whose costs involve constants which depend on the geometry of the domain.

This manuscript gives a cursory account of the method. A detailed description can be found in [7], and a discussion of the advantages of this approach over standard Nyström and collocation methods can be found in [6].

### 3.1. Decompositions and discretization quadratures.

We define a decomposition of a surface to be a finite sequence

$$\{\rho_{j}:\Delta_{1}\to\Gamma\}_{j=1}^{m}$$

of smooth mappings given on the simplex

$$\Delta_{1}=\bigl\{(x,y)\in\mathbb{R}^{2} \,:\, 0\le x\le 1,\ 0\le y\le 1-x\bigr\}$$

with non-vanishing Jacobian determinants such that the sets

$$\rho_{1}(\Delta_{1}),\ \rho_{2}(\Delta_{1}),\ \dots,\ \rho_{m}(\Delta_{1})$$

form a disjoint partition of $\Gamma$.

We call a quadrature rule on $\Delta_{1}$ with positive weights which integrates all elements of the space of polynomials of degree less than or equal to $N$ a discretization quadrature of order $N$. Such a rule is called Gaussian if its length (the number of its nodes) is equal to the dimension of the space of polynomials of degree less than or equal to $\lfloor N/2\rfloor$, even though it integrates polynomials of degree less than or equal to $N$. For the most part, Gaussian quadratures on triangles are not available. In the experiments of this paper, we used quadrature rules constructed via a generalization of the procedure of [9]. Table 1 lists the properties of the discretization quadrature rules used in this paper and compares their lengths to the size of a true Gaussian quadrature.
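Rules of this kind are easy to prototype. The following sketch (not one of the rules of [9]) builds a tensor Gauss-Legendre rule collapsed onto the simplex and verifies its polynomial exactness; note that it uses $n^{2}$ nodes, considerably more than a true Gaussian rule would require:

```python
import numpy as np
from math import factorial

def simplex_rule(n):
    # Tensor Gauss-Legendre rule on [0,1]^2 collapsed onto the simplex
    # {0 <= x, 0 <= y, x + y <= 1} via the Duffy-type map
    # (u, v) -> (u, v*(1-u)), whose Jacobian is (1-u).
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)
    w = 0.5 * w
    U, V = np.meshgrid(t, t, indexing="ij")
    WU, WV = np.meshgrid(w, w, indexing="ij")
    nodes = np.column_stack([U.ravel(), (V * (1.0 - U)).ravel()])
    wts = (WU * WV * (1.0 - U)).ravel()
    return nodes, wts

# Exact moments on the simplex: int x^a y^b = a! b! / (a + b + 2)!
nodes, wts = simplex_rule(6)
max_err = 0.0
for a in range(5):
    for b in range(5 - a):  # total degree <= 4
        approx = np.sum(wts * nodes[:, 0]**a * nodes[:, 1]**b)
        exact = factorial(a) * factorial(b) / factorial(a + b + 2)
        max_err = max(max_err, abs(approx - exact))
print(max_err)  # near machine precision: the rule is exact here
```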

### 3.2. Discretizations of integral operators.

We now associate with a decomposition

$$D=\{\rho_{j}:\Delta_{1}\to\Gamma\}_{j=1}^{m}$$

and a discretization quadrature with nodes $x_{1},\dots,x_{l}$ and weights $w_{1},\dots,w_{l}$ a scheme for the discretization of certain integral operators. We begin by letting $S$ be the subspace of $L^{2}(\Gamma)$ consisting of all functions $f$ which are pointwise defined on $\Gamma$ and such that for each $j=1,\dots,m$ the function

$$f(\rho_{j}(x))\,\bigl|d\rho_{j}(x)^{*}\,d\rho_{j}(x)\bigr|^{1/2}$$

is a polynomial of order $N$ on $\Delta_{1}$. Denote by $P$ the projection of $L^{2}(\Gamma)$ onto the subspace $S$, and let $\Phi$ be the mapping which takes $f\in S$ to the vector in $\mathbb{C}^{ml}$ with entries

$$f(\rho_{j}(x_{i}))\,\sqrt{w_{i}}\;\bigl|d\rho_{j}(x_{i})^{*}\,d\rho_{j}(x_{i})\bigr|^{1/4},\qquad j=1,\dots,m,\quad i=1,\dots,l.$$

The ordering of the entries of $\Phi f$ is irrelevant. Suppose that $T$ is of the form

$$Tf(x)=\iint_{\Gamma}K(x,y)\,f(y)\,ds(y), \tag{3.1}$$

with $K$ a linear combination of the kernels listed in (2.1). Let $A$ denote the matrix which effects the discretization of the operator $T$ induced by the specified decomposition and discretization quadrature; that is, the matrix which makes the diagram (3.2) commute.

$$\begin{array}{ccc} S\subset L^{2}(\Gamma) & \xrightarrow{\ \ PT\ \ } & S\subset L^{2}(\Gamma)\\[0.5em] \Big\downarrow{\scriptstyle\Phi} & & \Big\downarrow{\scriptstyle\Phi}\\[0.5em] \mathbb{C}^{ml} & \xrightarrow{\ \ A\ \ } & \mathbb{C}^{ml} \end{array} \tag{3.2}$$

Forming the matrix $A$ which appears in the diagram (3.2) is an exercise in numerical quadrature. Let $\rho_{i}(x)$ be a discretization node on $\Gamma$ and let $w$ be the corresponding weight. The entries of the matrix in (3.2) associated with the mapping $\rho_{j}$ and the point $\rho_{i}(x)$ must map the scaled values

$$f(\rho_{j}(x_{k}))\,\bigl|d\rho_{j}(x_{k})^{*}\,d\rho_{j}(x_{k})\bigr|^{1/4}\sqrt{w_{k}},\qquad k=1,2,\dots,l, \tag{3.3}$$

of the function $f$ at the images of the discretization quadrature nodes under $\rho_{j}$ to the value of the integral

$$\iint_{\Delta_{1}}K(\rho_{i}(x),\rho_{j}(y))\,f(\rho_{j}(y))\,\bigl|d\rho_{j}(y)^{*}\,d\rho_{j}(y)\bigr|^{1/2}\,\sqrt{w}\;dy. \tag{3.4}$$

The manner in which this vector is formed depends on the location of the point $\rho_{i}(x)$ vis-à-vis the patch $\rho_{j}(\Delta_{1})$. Let $B$ denote the ball of minimum radius containing $\rho_{j}(\Delta_{1})$ and let $B'$ denote the ball with the same center as $B$ but with twice the radius. Then we say that a point $x$ is well-separated from $\rho_{j}(\Delta_{1})$ if $x$ is outside of $B'$, and we say that $x$ is near $\rho_{j}(\Delta_{1})$ if $x$ is inside the ball $B'$ but not on the surface $\rho_{j}(\Delta_{1})$.
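The scaling built into $\Phi$ is what makes the discrete $\ell^{2}$ norm mimic the $L^{2}(\Gamma)$ norm. A small sketch, using a hypothetical paraboloid patch $\rho(x,y)=(x,y,x^{2}+y^{2})$ as a stand-in for a surface patch (the Duffy-collapsed rule here is an illustrative substitute for the rules of [9]):

```python
import numpy as np

def simplex_rule(n):
    # Duffy-type tensor Gauss rule on the simplex of Section 3.1.
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)
    w = 0.5 * w
    U, V = np.meshgrid(t, t, indexing="ij")
    WU, WV = np.meshgrid(w, w, indexing="ij")
    return U.ravel(), (V * (1.0 - U)).ravel(), (WU * WV * (1.0 - U)).ravel()

# Hypothetical smooth patch rho(x, y) = (x, y, x^2 + y^2), for which the
# surface area element is |d(rho)^* d(rho)|^{1/2} = sqrt(1 + 4x^2 + 4y^2).
def area_element(x, y):
    return np.sqrt(1.0 + 4.0 * x**2 + 4.0 * y**2)

def phi_norm_sq(n, f):
    # ||Phi f||_2^2 with entries f * sqrt(w) * |d(rho)^* d(rho)|^{1/4},
    # which approximates the surface integral of |f|^2 over the patch.
    x, y, w = simplex_rule(n)
    entries = f(x, y) * np.sqrt(w) * area_element(x, y)**0.5
    return np.sum(entries**2)

f = lambda x, y: np.cos(x + 2.0 * y)
coarse = phi_norm_sq(8, f)
fine = phi_norm_sq(40, f)
rel_diff = abs(coarse - fine) / fine
print(rel_diff)  # tiny: the coarse rule already matches the fine one
```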

#### 3.3.1. Smooth regime.

When the point $\rho_{i}(x)$ is well-separated from $\rho_{j}(\Delta_{1})$, the kernel is smooth and the discretization quadrature can be used to evaluate the integral (3.4). In this case, we take the entries of the matrix to be

$$K(\rho_{i}(x),\rho_{j}(x_{k}))\,\bigl|d\rho_{j}(x_{k})^{*}\,d\rho_{j}(x_{k})\bigr|^{1/4}\,\sqrt{w_{k}}\,\sqrt{w},\qquad k=1,2,\dots,l. \tag{3.5}$$

#### 3.3.2. Near regime.

When $\rho_{i}(x)$ is near $\rho_{j}(\Delta_{1})$, formula (3.5) may not achieve sufficient accuracy. Thus, we adaptively form a quadrature with nodes and weights

$$u_{1},\dots,u_{m},\qquad r_{1},\dots,r_{m}$$

sufficient to evaluate the nearly singular integrals which arise, and apply an interpolation matrix which takes the scaled values (3.3) of the function at the nodes of the discretization quadrature to its scaled values at the nodes of the adaptive quadrature. The corresponding entries of the matrix in (3.2) are given by the product of the vector of values of the kernel evaluated at the adaptive quadrature nodes with this interpolation matrix. As in the scheme of [7], the adaptive quadrature is formed in such a way that the interpolation matrix has special structure (it is a product of block diagonal matrices) which allows it to be applied efficiently.

#### 3.3.3. Singular regime.

When the target node lies on the surface , the integral (3.4) is singular and its evaluation becomes quite difficult. The typical approach to the evaluation of integrals of this form is to apply the Duffy transform or polar coordinate transform in order to remove the singularity of the kernel. Once this has been done, quadratures for smooth functions (for instance, product Legendre quadratures) are typically applied.
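The effect of the polar coordinate transform can be seen in a toy computation: the $1/|x|$ singularity is cancelled by the area element $r\,dr\,d\theta$, after which an ordinary Gauss-Legendre rule converges rapidly. (A sketch; the exact value used for comparison is elementary.)

```python
import numpy as np

# Integrate 1/|x| over the standard simplex.  In polar coordinates the
# boundary opposite the origin is r = R(t) = 1/(cos t + sin t), and
#
#     I = int_0^{pi/2} int_0^{R(t)} (1/r) r dr dt = int_0^{pi/2} R(t) dt,
#
# whose exact value is sqrt(2) * log(1 + sqrt(2)).
n = 20
t, w = np.polynomial.legendre.leggauss(n)
t = 0.25 * np.pi * (t + 1.0)   # map the nodes to [0, pi/2]
w = 0.25 * np.pi * w
approx = np.sum(w / (np.cos(t) + np.sin(t)))
exact = np.sqrt(2.0) * np.log(1.0 + np.sqrt(2.0))
err = abs(approx - exact)
print(err)  # tiny: the transformed integrand is smooth
```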

While such schemes are exponentially convergent, they can be extremely inefficient. The difficulty is that the functions resulting from the change of variables can have poles close to the integration domain. For instance, if a polar coordinate transform is applied, the singular integrals which must be evaluated take the form

$$\int_{0}^{2\pi}\!\!\int_{0}^{R(\theta)}\bigl(q_{-1}(\theta)+q_{0}(\theta)\,r+q_{1}(\theta)\,r^{2}+q_{2}(\theta)\,r^{3}+\cdots\bigr)\,dr\,d\theta, \tag{3.6}$$

where the function $R$ gives a parameterization of the boundary of the integration domain in polar coordinates and each $q_{j}$ is a quotient of the form

$$\frac{r_{j}(\theta)}{l(\theta)^{\alpha}},$$

where $\alpha$ is a positive constant dependent on the index $j$ and the kernel under consideration, $r_{j}$ is a trigonometric polynomial of finite order, and $l$ is a function which depends on the parameterization $\rho_{j}$. More specifically, if

$$\xi_{1}=\frac{\partial\rho_{j}}{\partial x_{1}}(x_{1},x_{2})\qquad\text{and}\qquad \xi_{2}=\frac{\partial\rho_{j}}{\partial x_{2}}(x_{1},x_{2}),$$

then the function $l$ is given by

$$l(\theta)=\sqrt{\lambda\cos^{2}(\theta)+2\cos(\gamma)\cos(\theta)\sin(\theta)+\lambda^{-1}\sin^{2}(\theta)}$$

where

$$\cos(\gamma)=\frac{\xi_{1}\cdot\xi_{2}}{|\xi_{1}|\,|\xi_{2}|}\qquad\text{and}\qquad \lambda=\frac{|\xi_{1}|}{|\xi_{2}|}.$$

Obviously, the function $l$ can have zeros close to the real axis.

The approach used in this paper, described in [7], calls for first applying a change of variables in order to ensure that the mapping $\rho_{j}$ is conformal at the target node. Then the function $l$ is a constant which does not depend on $\theta$ and the integrand simplifies considerably. The unfortunate side effect of applying this change of variables is that the integration domain is no longer the standard simplex but rather an arbitrary triangle. Now the integral which must be evaluated takes the form

$$\int_{0}^{2\pi}\!\!\int_{0}^{\tilde R(\theta)}\bigl(\tilde q_{-1}(\theta)+\tilde q_{0}(\theta)\,r+\tilde q_{1}(\theta)\,r^{2}+\tilde q_{2}(\theta)\,r^{3}+\cdots\bigr)\,dr\,d\theta, \tag{3.7}$$

with the $\tilde q_{j}$ trigonometric polynomials of finite order. The function $\tilde R$ can have poles close to the integration domain, however. We combat this problem by precomputing a collection of quadrature rules designed to integrate smooth functions on arbitrary triangles and apply them to evaluate the integrals (3.7). The performance of this approach is described in [7]; in one example of that paper, the standard approach of applying a change of variables followed by Gauss-Legendre quadrature required vastly more quadrature nodes to accurately evaluate the singular integrals arising from a boundary value problem than the precomputed rules used here and in that paper.

## 4. Scattering matrices as a numerical tool

This section describes the concept of a scattering matrix, first from a physical perspective, and then the corresponding linear algebraic interpretation. It also informally describes how to build approximate scattering matrices computationally. A rigorous description is given in Section 5.

### 4.1. The scattering operator

We consider a “sound-soft” scattering formulation illustrated in Figure 1. We are given a charge distribution $s_{\rm in}$ on a contour $\Gamma_{\rm in}$ (the radiation “source”) that generates an incoming field on the scatterer $\Omega$. The incoming field on the scatterer induces an outgoing field that satisfies (DBVP). Our objective is to computationally construct $u_{\rm out}$, the restriction of the outgoing field to the specified contour $\Gamma_{\rm out}$, which represents the “antenna.” To solve (DBVP), we use the combined field formulation (2.2). In other words, we represent the outgoing field via a distribution $\sigma$ of “induced sources” on the surface $\Gamma$. The map we seek to evaluate can be expressed mathematically as

$$\underbrace{u_{\rm out}}_{\text{outgoing field on }\Gamma_{\rm out}}=\;T_{\rm out}\,T_{\Gamma}^{-1}\,T_{\rm in}\underbrace{s_{\rm in}}_{\text{given charges on }\Gamma_{\rm in}}, \tag{4.1}$$

where $T_{\rm in}$ is the operator

$$[T_{\rm in}s_{\rm in}](x)=\int_{\Gamma_{\rm in}}G_{\kappa}(x,y)\,s_{\rm in}(y)\,ds(y),\qquad x\in\Gamma,$$

$T_{\Gamma}$ is the operator

$$[T_{\Gamma}g](x)=\frac{1}{2}g(x)+D_{\kappa}g(x)+i|\kappa|\,S_{\kappa}g(x),\qquad x\in\Gamma, \tag{4.2}$$

and $T_{\rm out}$ is the operator

$$[T_{\rm out}\sigma](x)=\int_{\Gamma}\eta_{y}\cdot\nabla_{y}G_{\kappa}(x,y)\,\sigma(y)\,ds(y)+i|\kappa|\int_{\Gamma}G_{\kappa}(x,y)\,\sigma(y)\,ds(y),\qquad x\in\Gamma_{\rm out}.$$

Observe that since $T_{\Gamma}$ is an invertible second kind integral operator, its inverse has singular values bounded away from zero.

Given a finite precision $\varepsilon$, we say that an operator $S$ is a “scattering operator” for $\Gamma$ to within precision $\varepsilon$ when

$$\bigl\|T_{\rm out}T_{\Gamma}^{-1}T_{\rm in}-T_{\rm out}\,S\,T_{\rm in}\bigr\|\le\varepsilon.$$

It turns out that the operators $T_{\rm in}$ and $T_{\rm out}$ have rapidly decaying singular values. Let $P^{(k)}_{\rm in}$ denote the orthogonal projection onto the span of the $k$ leading left singular vectors of $T_{\rm in}$, and let $P^{(k)}_{\rm out}$ denote the orthogonal projection onto the span of the $k$ leading right singular vectors of $T_{\rm out}$. Then

$$\bigl\|T_{\rm out}P^{(k)}_{\rm out}-T_{\rm out}\bigr\|\le\sigma_{k+1}(T_{\rm out})\qquad\text{and}\qquad \bigl\|P^{(k)}_{\rm in}T_{\rm in}-T_{\rm in}\bigr\|\le\sigma_{k+1}(T_{\rm in}).$$

It follows that if we define an approximate scattering operator via

$$S=P^{(k)}_{\rm out}\,T_{\Gamma}^{-1}\,P^{(k)}_{\rm in}, \tag{4.3}$$

then the approximation error

$$\bigl\|T_{\rm out}T_{\Gamma}^{-1}T_{\rm in}-T_{\rm out}P^{(k)}_{\rm out}T_{\Gamma}^{-1}P^{(k)}_{\rm in}T_{\rm in}\bigr\|$$

converges to zero very rapidly as $k$ increases.
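The rapid decay of the singular values of discretizations of $T_{\rm in}$ and $T_{\rm out}$ can be observed directly. The following sketch uses random points on well-separated spheres as stand-ins for $\Gamma$ and $\Gamma_{\rm in}$; the geometry and parameters are illustrative assumptions, not those of the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 2.0

def kernel_matrix(targets, sources):
    # Matrix of G_kappa(x, y) values between two point clouds.
    d = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=-1)
    return np.exp(1j * kappa * d) / d

def sphere_points(n, center):
    # Random points on a unit sphere centered at `center`.
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.asarray(center, dtype=float) + v

trg = sphere_points(300, [0.0, 0.0, 0.0])    # stand-in for Gamma
src = sphere_points(200, [50.0, 0.0, 0.0])   # stand-in for Gamma_in

s = np.linalg.svd(kernel_matrix(trg, src), compute_uv=False)
eps_rank = int(np.sum(s > 1e-10 * s[0]))
print(eps_rank)  # far smaller than min(300, 200)
```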

### 4.2. Definition of a discrete scattering matrix

The continuum scattering problem described in Section 4.1 has a direct linear algebraic analog for the discretized equations. Forming the Nyström discretization of the three operators in (4.1), we obtain the map

$$\underset{N_{\rm out}\times 1}{u_{\rm out}}=\underset{N_{\rm out}\times N}{A_{\rm out}}\;\underset{N\times N}{A^{-1}}\;\underset{N\times N_{\rm in}}{A_{\rm in}}\;\underset{N_{\rm in}\times 1}{s_{\rm in}}. \tag{4.4}$$

Suppose that $A$ is an $N\times N$ matrix, and let $k_{1}$ denote a bound on the $\varepsilon$-ranks of $A_{\rm in}$ and $A_{\rm out}$. Then form matrices $\hat U_{1}$ and $\hat V_{1}$ (the reason for the subscripts will become clear shortly) of size $N\times k_{1}$ such that

$$\|A_{\rm in}-\hat U_{1}\hat U_{1}^{\dagger}A_{\rm in}\|\le\varepsilon,\qquad \|A_{\rm out}-A_{\rm out}(\hat V_{1}\hat V_{1}^{\dagger})^{*}\|\le\varepsilon,$$

where the superscript $\dagger$ denotes the pseudo-inverse of a matrix. In other words, the columns of $\hat U_{1}$ span the column space of $A_{\rm in}$ and the columns of $\hat V_{1}$ span the row space of $A_{\rm out}$. Then the map (4.4) can be approximated as

$$\underset{N_{\rm out}\times 1}{u_{\rm out}}=\underset{N_{\rm out}\times k_{1}}{\bigl(A_{\rm out}(\hat V_{1}^{\dagger})^{*}\bigr)}\;\underset{k_{1}\times k_{1}}{S_{1}}\;\underset{k_{1}\times N_{\rm in}}{\bigl(\hat U_{1}^{\dagger}A_{\rm in}\bigr)}\;\underset{N_{\rm in}\times 1}{s_{\rm in}}, \tag{4.5}$$

where

$$S_{1}=\hat V_{1}^{*}A^{-1}\hat U_{1}. \tag{4.6}$$

The matrix $S_{1}$ is the discrete analog of the continuum scattering matrix for $\Gamma$ defined by (4.3).
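The construction above can be sketched in a few lines of dense linear algebra. Here $A$ is a stand-in well-conditioned matrix (not an actual Nyström discretization), kernel matrices between well-separated point clouds play the roles of $A_{\rm in}$ and $A_{\rm out}$, and the bases are obtained from truncated SVDs:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 2.0

def kernel_matrix(targets, sources):
    d = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=-1)
    return np.exp(1j * kappa * d) / d

def sphere_points(n, center):
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.asarray(center, dtype=float) + v

N, N_in, N_out = 250, 150, 180
gamma = sphere_points(N, [0.0, 0.0, 0.0])
A_in = kernel_matrix(gamma, sphere_points(N_in, [40.0, 0.0, 0.0]))
A_out = kernel_matrix(sphere_points(N_out, [0.0, 40.0, 0.0]), gamma)
# Stand-in for the (well-conditioned, second-kind) system matrix A.
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))

def basis(M, tol=1e-10):
    # Orthonormal basis for the numerical column space of M.
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol * s[0]]

U1 = basis(A_in)            # spans the column space of A_in
V1 = basis(A_out.conj().T)  # spans the row space of A_out

S1 = V1.conj().T @ np.linalg.solve(A, U1)   # S1 = V1^* A^{-1} U1
full = A_out @ np.linalg.solve(A, A_in)
compressed = (A_out @ V1) @ S1 @ (U1.conj().T @ A_in)
err = np.linalg.norm(full - compressed) / np.linalg.norm(full)
print(U1.shape[1], V1.shape[1], err)  # small ranks, tiny error
```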

### 4.3. Hierarchical construction of discrete scattering matrices

Observe that direct evaluation of the definition (4.6) of $S_{1}$ via, e.g., Gaussian elimination, would be very costly since $A$ is dense, and $N$ could potentially be large (much larger than $k_{1}$). To avoid this expense, we will build $S_{1}$ via an accelerated hierarchical procedure. As a first step, we let the entire domain $\Gamma_{1}=\Gamma$ denote the root of the tree, and split it into two disjoint halves, cf. Figure 2(b),

$$\Gamma_{1}=\Gamma_{2}\cup\Gamma_{3}. \tag{4.7}$$

Let $n_{2}$ and $n_{3}$ denote the number of collocation points in $\Gamma_{2}$ and $\Gamma_{3}$, respectively. The idea is now to form scattering matrices $S_{2}$ and $S_{3}$ for the two halves, and then form $S_{1}$ by “merging” these smaller scattering matrices. To formalize, split the matrix $A$ into blocks to conform with the partitioning (4.7),

$$A=\begin{bmatrix}A_{2,2}&A_{2,3}\\ A_{3,2}&A_{3,3}\end{bmatrix}.$$

The off-diagonal blocks $A_{2,3}$ and $A_{3,2}$ have very rapidly decaying singular values, which means that they can be factored

$$\underset{n_{2}\times n_{3}}{A_{2,3}}=\underset{n_{2}\times k_{2}}{\hat U_{2}}\;\underset{k_{2}\times k_{3}}{\tilde A_{2,3}}\;\underset{k_{3}\times n_{3}}{\hat V_{3}^{*}}, \tag{4.8}$$

$$\underset{n_{3}\times n_{2}}{A_{3,2}}=\underset{n_{3}\times k_{3}}{\hat U_{3}}\;\underset{k_{3}\times k_{2}}{\tilde A_{3,2}}\;\underset{k_{2}\times n_{2}}{\hat V_{2}^{*}}. \tag{4.9}$$

The matrices $\hat U_{2}$ and $\hat U_{3}$ (and likewise $\hat V_{2}$ and $\hat V_{3}$) are additionally assumed to span the column spaces of the corresponding blocks of $\hat U_{1}$ (and $\hat V_{1}$), in the sense that there exist “small” matrices $U_{1}$ and $V_{1}$ such that

$$\hat U_{1}=\begin{bmatrix}\hat U_{2}&0\\ 0&\hat U_{3}\end{bmatrix}U_{1}\qquad\text{and}\qquad \hat V_{1}=\begin{bmatrix}\hat V_{2}&0\\ 0&\hat V_{3}\end{bmatrix}V_{1}. \tag{4.10}$$

Inserting (4.8), (4.9), and (4.10) into (4.6), we find

$$S_{1}=V_{1}^{*}\begin{bmatrix}\hat V_{2}^{*}&0\\ 0&\hat V_{3}^{*}\end{bmatrix}\begin{bmatrix}A_{2,2}&\hat U_{2}\tilde A_{2,3}\hat V_{3}^{*}\\ \hat U_{3}\tilde A_{3,2}\hat V_{2}^{*}&A_{3,3}\end{bmatrix}^{-1}\begin{bmatrix}\hat U_{2}&0\\ 0&\hat U_{3}\end{bmatrix}U_{1}. \tag{4.11}$$

Using the Woodbury formula for matrix inversion (see Lemma 5.1), it can be shown that

$$\begin{bmatrix}\hat V_{2}^{*}&0\\ 0&\hat V_{3}^{*}\end{bmatrix}\begin{bmatrix}A_{2,2}&\hat U_{2}\tilde A_{2,3}\hat V_{3}^{*}\\ \hat U_{3}\tilde A_{3,2}\hat V_{2}^{*}&A_{3,3}\end{bmatrix}^{-1}\begin{bmatrix}\hat U_{2}&0\\ 0&\hat U_{3}\end{bmatrix}=\begin{bmatrix}I&S_{2}\tilde A_{2,3}\\ S_{3}\tilde A_{3,2}&I\end{bmatrix}^{-1}\begin{bmatrix}S_{2}&0\\ 0&S_{3}\end{bmatrix}, \tag{4.12}$$

where $S_{2}$ and $S_{3}$ are the scattering matrices for $\Gamma_{2}$ and $\Gamma_{3}$, respectively, given by the formulas

$$S_{2}=\hat V_{2}^{*}A_{2,2}^{-1}\hat U_{2}\qquad\text{and}\qquad S_{3}=\hat V_{3}^{*}A_{3,3}^{-1}\hat U_{3}. \tag{4.13}$$

Combining (4.11) and (4.12), we obtain the formula

$$S_{1}=V_{1}^{*}\begin{bmatrix}I&S_{2}\tilde A_{2,3}\\ S_{3}\tilde A_{3,2}&I\end{bmatrix}^{-1}\begin{bmatrix}S_{2}&0\\ 0&S_{3}\end{bmatrix}U_{1}. \tag{4.14}$$

The upside here is that in order to evaluate (4.14), we only need to invert a matrix of size $(k_{2}+k_{3})\times(k_{2}+k_{3})$, whereas (4.6) requires the inversion of a matrix of size $N\times N$. The downside is that we are still left with the task of inverting $A_{2,2}$ and $A_{3,3}$ to evaluate $S_{2}$ and $S_{3}$ via (4.13). Now, to reduce the cost of constructing $S_{2}$ and $S_{3}$, we further sub-divide

$$\Gamma_{2}=\Gamma_{4}\cup\Gamma_{5}\qquad\text{and}\qquad \Gamma_{3}=\Gamma_{6}\cup\Gamma_{7},$$

as shown in Figure 2(c). Then it can be shown (see Lemma 5.1) that $S_{2}$ and $S_{3}$ are given by the formulas

$$S_{2}=V_{2}^{*}\begin{bmatrix}I&S_{4}\tilde A_{4,5}\\ S_{5}\tilde A_{5,4}&I\end{bmatrix}^{-1}\begin{bmatrix}S_{4}&0\\ 0&S_{5}\end{bmatrix}U_{2}\qquad\text{and}\qquad S_{3}=V_{3}^{*}\begin{bmatrix}I&S_{6}\tilde A_{6,7}\\ S_{7}\tilde A_{7,6}&I\end{bmatrix}^{-1}\begin{bmatrix}S_{6}&0\\ 0&S_{7}\end{bmatrix}U_{3},$$

where

$$S_{4}=\hat V_{4}^{*}A_{4,4}^{-1}\hat U_{4},\qquad S_{5}=\hat V_{5}^{*}A_{5,5}^{-1}\hat U_{5},\qquad S_{6}=\hat V_{6}^{*}A_{6,6}^{-1}\hat U_{6},\qquad S_{7}=\hat V_{7}^{*}A_{7,7}^{-1}\hat U_{7}.$$

The idea is now to continue to recursively split all patches until we get to a point where each patch holds sufficiently few discretization points that its scattering matrix can inexpensively be constructed via, e.g., Gaussian elimination.
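The merge formula can be verified numerically on a small synthetic example with exactly low-rank off-diagonal blocks. (A sketch: taking $U_{1}=V_{1}=I$ and random well-conditioned diagonal blocks are simplifying assumptions, not features of the paper's algorithm.)

```python
import numpy as np

rng = np.random.default_rng(2)
n2, n3, k2, k3 = 60, 70, 8, 9

def orth(n, k):
    # Random n x k matrix with orthonormal columns.
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return Q

U2, V2 = orth(n2, k2), orth(n2, k2)
U3, V3 = orth(n3, k3), orth(n3, k3)
At23 = 0.1 * rng.standard_normal((k2, k3))   # sibling interaction matrices
At32 = 0.1 * rng.standard_normal((k3, k2))
A22 = np.eye(n2) + 0.01 * rng.standard_normal((n2, n2))
A33 = np.eye(n3) + 0.01 * rng.standard_normal((n3, n3))

# Assemble A with exactly low-rank off-diagonal blocks, as in (4.8)-(4.9).
A = np.block([[A22, U2 @ At23 @ V3.T],
              [U3 @ At32 @ V2.T, A33]])

# Direct evaluation with Uhat1 = blkdiag(U2, U3), Vhat1 = blkdiag(V2, V3).
Uhat = np.block([[U2, np.zeros((n2, k3))], [np.zeros((n3, k2)), U3]])
Vhat = np.block([[V2, np.zeros((n2, k3))], [np.zeros((n3, k2)), V3]])
S1_direct = Vhat.T @ np.linalg.solve(A, Uhat)

# The merge: only a (k2 + k3) x (k2 + k3) system is solved.
S2 = V2.T @ np.linalg.solve(A22, U2)
S3 = V3.T @ np.linalg.solve(A33, U3)
M = np.block([[np.eye(k2), S2 @ At23], [S3 @ At32, np.eye(k3)]])
S1_merge = np.linalg.solve(M, np.block([[S2, np.zeros((k2, k3))],
                                        [np.zeros((k3, k2)), S3]]))
diff = np.linalg.norm(S1_direct - S1_merge)
print(diff)  # agreement to rounding error
```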

## 5. The hierarchical construction of a scattering matrix

Section 4 defined the concept of a scattering matrix, and informally outlined a procedure for how to build approximate scattering matrices via a hierarchical procedure. This section provides a more rigorous description, which requires the introduction of some notational machinery. The key concept is that the matrix $A$ corresponding to the discretized operator $T_{\Gamma}$ (defined by (4.2)) can be represented efficiently in a data sparse format that we call Hierarchically Block Separable (HBS), which is closely related to the Hierarchically Semi-Separable (HSS) format [32, 29, 31]. Our presentation is in principle self-contained, but assumes some familiarity with hierarchically rank-structured matrix formats such as HSS/HBS (or the related $\mathcal{H}$-matrix format [2, 3]).

###### Remark 5.1.

The methods presented in Sections 4.2 and 4.3 and in this section use the same rank $k$ for the incoming and outgoing factorizations purely for simplicity of presentation. In practice this is not required.

### 5.1. Problem formulation

Let $A$ denote the dense $N\times N$ matrix obtained upon Nyström discretization (see Section 3) of the operator $T_{\Gamma}$ defined by (4.2). Moreover, suppose that we are given a positive tolerance $\varepsilon$, and two tall thin matrices $\hat U_{1}$ and $\hat V_{1}$ satisfying the following:

• A well-conditioned matrix $\hat U_{1}$ of size $N\times k_{1}$ whose columns span (to within precision $\varepsilon$) the column space of the matrix $A_{\rm in}$ that maps the given charge distribution on the “radiation source” $\Gamma_{\rm in}$ to the collocation points on $\Gamma$. In other words, any incoming field hitting $\Gamma$ can be expressed as a linear combination of the columns of $\hat U_{1}$.

• A well-conditioned matrix $\hat V_{1}$ of size $N\times k_{1}$ whose columns span (to within precision $\varepsilon$) the row space of the matrix $A_{\rm out}$ that maps the induced charges at the collocation points on $\Gamma$ to the collocation points on the “antenna” $\Gamma_{\rm out}$. In other words, any field generated on $\Gamma_{\rm out}$ by induced charges on $\Gamma$ can be replicated by a charge distribution in the span of $\hat V_{1}$.

Our objective is then to construct a highly accurate approximation to the scattering matrix

$$S_{1}=\hat V_{1}^{*}A^{-1}\hat U_{1}. \tag{5.1}$$

### 5.2. Hierarchical tree

The hierarchical construction of the scattering matrix is based on a binary tree structure of patches on the scattering surface $\Gamma$. We let the full domain form the root of the tree, and give it the index $\tau=1$, so that $\Gamma_{1}=\Gamma$. We next split the root into two roughly equi-sized patches $\Gamma_{2}$ and $\Gamma_{3}$ so that $\Gamma_{1}=\Gamma_{2}\cup\Gamma_{3}$. The full tree is then formed by continuing to subdivide any patch that holds more than some preset fixed number of collocation points. A leaf is a node in the tree corresponding to a patch that never got split. For a non-leaf node $\tau$, its children are the two nodes $\alpha$ and $\beta$ such that $\Gamma_{\tau}=\Gamma_{\alpha}\cup\Gamma_{\beta}$, and $\tau$ is then the parent of $\alpha$ and $\beta$. Two nodes with the same parent are called siblings.

Let $I=\{1,2,\dots,N\}$ denote the full list of indices for the collocation points on $\Gamma$, and for any node $\tau$ let the index vector $I_{\tau}\subseteq I$ mark the set of collocation nodes inside $\Gamma_{\tau}$. Let $n_{\tau}$ denote the number of nodes in $I_{\tau}$.
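The tree construction can be sketched as follows. (Splitting the index vector in half, rather than splitting geometrically, is a simplification for illustration.)

```python
# A minimal sketch of the binary tree of Section 5.2: each node stores its
# index vector I_tau, and patches are split until they hold at most n_max
# collocation points.
def build_tree(indices, n_max=4):
    node = {"I": indices, "children": None}
    if len(indices) > n_max:
        mid = len(indices) // 2
        node["children"] = (build_tree(indices[:mid], n_max),
                            build_tree(indices[mid:], n_max))
    return node

def leaves(node):
    # The leaf index vectors, in left-to-right order.
    if node["children"] is None:
        return [node["I"]]
    a, b = node["children"]
    return leaves(a) + leaves(b)

tree = build_tree(list(range(20)), n_max=4)
print(leaves(tree))  # disjoint index sets whose union is 0, ..., 19
```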

### 5.3. Hierarchically Block Separable (HBS) matrices

We say that a dense matrix $A$ is HBS with respect to a given tree partitioning of its index vector $I$ if for every node $\tau$ in the tree, there exist basis matrices $\hat U_{\tau}$ and $\hat V_{\tau}$ with the following properties:

1. For every sibling pair $\{\alpha,\beta\}$, the corresponding off-diagonal block $A(I_{\alpha},I_{\beta})$ admits (up to precision $\varepsilon$) the factorization

$$\underset{n_{\alpha}\times n_{\beta}}{A(I_{\alpha},I_{\beta})}=\underset{n_{\alpha}\times k_{\alpha}}{\hat U_{\alpha}}\;\underset{k_{\alpha}\times k_{\beta}}{\tilde A_{\alpha,\beta}}\;\underset{k_{\beta}\times n_{\beta}}{\hat V_{\beta}^{*}}, \tag{5.2}$$

where $k_{\alpha}\ll n_{\alpha}$ and $k_{\beta}\ll n_{\beta}$.

2. For every sibling pair $\{\alpha,\beta\}$ with parent $\tau$, there exist matrices $U_{\tau}$ and $V_{\tau}$ such that

$$\hat U_{\tau}=\begin{bmatrix}\hat U_{\alpha}&0\\ 0&\hat U_{\beta}\end{bmatrix}U_{\tau}, \tag{5.3}$$

$$\hat V_{\tau}=\begin{bmatrix}\hat V_{\alpha}&0\\ 0&\hat V_{\beta}\end{bmatrix}V_{\tau}. \tag{5.4}$$

For notational convenience, we set for every leaf node

$$U_{\tau}=\hat U_{\tau}\qquad\text{and}\qquad V_{\tau}=\hat V_{\tau}.$$

Note that every basis matrix $U_{\tau}$ and $V_{\tau}$ is small. The point is that the matrix $A$ is fully specified by giving the following for every node $\tau$:

• the two small matrices $U_{\tau}$ and $V_{\tau}$,

• if $\tau$ is a leaf, the diagonal block $A(I_{\tau},I_{\tau})$, and

• if $\tau$ is a parent node with children $\{\alpha,\beta\}$, the sibling interaction matrices $\tilde A_{\alpha,\beta}$ and $\tilde A_{\beta,\alpha}$.

Particular attention should be paid to the fact that the long basis matrices $\hat U_{\tau}$ and $\hat V_{\tau}$ are never formed; these were introduced merely to facilitate the derivation of the HBS representation. Table 4 summarizes the factors required, and Algorithm I describes how, for any vector $v$, the matrix-vector product $Av$ can be computed when the HBS factors of $A$ are given.

###### Remark 5.2 (Meaning of the basis matrices $\hat U_{\tau}$, $\hat V_{\tau}$, $U_{\tau}$, $V_{\tau}$).

Let us describe the heuristic meaning of the “tall thin” basis matrix $\hat U_{\tau}$. Conditions (5.2) and (5.3) together imply that the columns of $\hat U_{\tau}$ span the column space of the off-diagonal block $A(I_{\tau},I_{\tau}^{\rm c})$, where $I_{\tau}^{\rm c}$ denotes the complement of $I_{\tau}$ in $I$, as well as the column space of $A_{\rm in}(I_{\tau},:)$. We can write the restriction to $I_{\tau}$ of the local equilibrium equation as

$$A(I_{\tau},I_{\tau})\,\sigma(I_{\tau})=A_{\rm in}(I_{\tau},:)\,s_{\rm in}-A(I_{\tau},I_{\tau}^{\rm c})\,\sigma(I_{\tau}^{\rm c}). \tag{5.5}$$

Exploiting that the columns of $\hat U_{\tau}$ span the column spaces of both matrices on the right hand side of (5.5), we can rewrite the equation as

$$A(I_{\tau},I_{\tau})\,\sigma(I_{\tau})=\hat U_{\tau}\,(\tilde u_{\tau}-\tilde v_{\tau}), \tag{5.6}$$

where $\tilde u_{\tau}$ is an efficient representation of the local incoming field, and where $\tilde v_{\tau}$ is an efficient representation of the field on $\Gamma_{\tau}$ caused by charges on the remainder of the surface. The basis matrix $\hat V_{\tau}$ analogously provides an efficient way to represent the outgoing fields on $\Gamma_{\tau}$. The “small” matrices $U_{\tau}$ and $V_{\tau}$ then provide compact representations of $\hat U_{\tau}$ and $\hat V_{\tau}$.

These concepts are described in further detail in Appendix B.

###### Remark 5.3 (Exact versus numerical rank).

In practical computations, the off-diagonal blocks of discretized integral operators only have low numerical (as opposed to exact) rank. However, in the present context, the singular values tend to decay exponentially fast, which means that the truncation error can often be rendered close to machine precision. We do not in this manuscript provide a rigorous error analysis, but merely observe that across a broad range of numerical experiments, we did not see any error aggregation; instead, the local truncation errors closely matched the global error.

### 5.4. Hierarchical construction of a scattering matrix

For any node $\tau$, define its associated scattering matrix $S_{\tau}$ via

$$S_{\tau}=\hat V_{\tau}^{*}\,\bigl(A(I_{\tau},I_{\tau})\bigr)^{-1}\hat U_{\tau}. \tag{5.7}$$

The following lemma states that the scattering matrix $S_{\tau}$ for a parent node $\tau$ with children $\alpha$ and $\beta$ can be formed inexpensively from the scattering matrices $S_{\alpha}$ and $S_{\beta}$, along with the sibling interaction matrices $\tilde A_{\alpha,\beta}$ and $\tilde A_{\beta,\alpha}$.

###### Lemma 5.1.

Let $\tau$ be a node with children $\alpha$ and $\beta$. Then

$$S_{\tau}=V_{\tau}^{*}\begin{bmatrix}I&S_{\alpha}\tilde A_{\alpha,\beta}\\ S_{\beta}\tilde A_{\beta,\alpha}&I\end{bmatrix}^{-1}\begin{bmatrix}S_{\alpha}&0\\ 0&S_{\beta}\end{bmatrix}U_{\tau}. \tag{5.8}$$

Lemma 5.1 immediately leads to an algorithm for computing the scattering matrix