A gradient-augmented level set method with an optimally local, coherent advection scheme

Jean-Christophe Nave
Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139
http://www.mit.edu/~jcnave

Rodolfo Ruben Rosales
Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139

Benjamin Seibold
Department of Mathematics, Temple University
http://www.math.temple.edu/~seibold/
Abstract.

The level set approach represents surfaces implicitly, and advects them by evolving a level set function, which is numerically defined on an Eulerian grid. Here we present an approach that augments the level set function values by gradient information, and evolves both quantities in a fully coupled fashion. This maintains the coherence between function values and derivatives, while exploiting the extra information carried by the derivatives. The method is of comparable quality to WENO schemes, but with optimally local stencils (performing updates in time by using information from only a single adjacent grid cell). In addition, structures smaller than the grid size can be located and tracked, and the extra derivative information can be employed to obtain simple and accurate approximations to the curvature. We analyze the accuracy and the stability of the new scheme, and perform benchmark tests.

Key words and phrases: level set method, subgrid resolution, CIR-method, cubic, curvature
Mathematics Subject Classification: 65M25; 35L65

1. Introduction

Level set methods represent a surface as the zero contour of a level set function [14], which can be numerically defined on a regular Eulerian grid. The advection of the surface translates then into an appropriate advection of the level set function. Derivative quantities, such as normal vectors and curvature, can be computed from the level set function without explicitly representing the surface. Commonly used level set approaches encounter problems or inconveniences in the following aspects: the representation of small structures, the approximation of derivative quantities (such as curvature), and the large size of the stencils required by high order finite difference schemes.

We investigate the extent to which these problems can be remedied, if the level set function is augmented by gradient information. In this case, the surface can be represented by an appropriate interpolation that incorporates the additional information. In this paper, we consider a bi-/tri-cubic Hermite interpolation to define the surface in each cell. This yields a certain level of “subgrid” resolution, i.e. structures smaller than the grid resolution can be represented. In addition, the Hermite bi-/tri-cubic interpolant provides a simple and accurate approximation to the normals and the curvature anywhere in the computational domain. In the context of level set methods, the use of a bicubic interpolation to construct second order approximations to interfaces was proposed by Chopp [3], but without any tracking of gradient information.

The idea of using gradient information to improve the accuracy of numerical methods for hyperbolic conservation laws was introduced by van Leer [23, 24, 25, 26, 27]. In particular, his MUSCL (“Monotonic Upstream-centered Scheme for Conservation Laws”) scheme and the PPM (“Piecewise Parabolic Method”) by Colella and Woodward [4] use gradient information. While in those methods, gradient values are reconstructed from function values, the CIP method of Takewaki, Nishiguchi, and Yabe [21, 22] stores gradients as an independent quantity to solve hyperbolic conservation laws. In the context of level set methods, Raessi, Mostaghimi, and Bussmann present an approach that advects normal vectors independent of, but in an analogous fashion to, the level set function values [15].

In this paper, we present an approach that advects function values and gradients as independent quantities, but in a fully coupled fashion. The approach is based on a generalization of the CIR method [5], and uses the Hermite cubic interpolant mentioned above in a natural way. Characteristic curves are tracked backwards, and function values and gradients are obtained from the interpolation. The resulting advection scheme is globally third order accurate, with stencils that can be chosen no wider than a single cell.

In Sect. 2, we provide a brief overview of the classical level set method, introduce the idea of gradient-augmented approaches, and present a Hermite cubic interpolation that will be the basis for the new method. Three fundamental problems with classical level set methods are outlined in Sect. 3, with focus on how the incorporation of gradients is beneficial. The precise numerical scheme is given in Sect. 4. Its accuracy and stability are analyzed theoretically in Sect. 5, for a simple case. In Sect. 6, we numerically investigate the accuracy of the new approach, and test its performance on various benchmark tests. Finally, in Sect. 7, we outline various questions to be investigated in future work.

2. Level Set Approaches Without and With Gradients

2.1. Classical Level Set Method

Many applications, such as the simulation of two-phase flows, require the representation and advection of a manifold of codimension 1, which in the following we call a surface. Level set methods [14] represent the surface as the zero contour of a level set function ϕ. In the domain enclosed by the surface one has ϕ < 0, while outside ϕ > 0. Geometric quantities, such as normal vectors and curvature, can be obtained from the level set function: n = ∇ϕ/|∇ϕ|, and κ = ∇·(∇ϕ/|∇ϕ|). In order to move the surface with a velocity field v, the level set function is advected according to the partial differential equation

 ϕ_t + v·∇ϕ = 0. (1)

The level set function can be defined on a regular grid. High order ENO [17] or WENO [10] schemes are commonly used to approximate the advection equation (1).

Gradients and curvatures can be approximated by finite differences. For an accurate and stable approximation, it is beneficial if ϕ is a signed distance function, i.e.

 |∇ϕ(x)| = 1. (2)

Even if ϕ is a distance function initially, it typically ceases to be so due to deformations induced by the velocity field. One remedy to this problem is to recover (2) by solving the reinitialization equation

 ϕ_τ = sign(ϕ₀)(1 − |∇ϕ|), (3)

where ϕ₀ is the level set function at the start of the reinitialization and τ is a pseudo-time [20], or by solving the stationary Eikonal equation (2) using a fast marching method [16]. Another approach is to solve (1) with a modified velocity field, so that (2) is preserved. This modified field is constructed by extending the original velocity field away from the surface [1].
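To make the reinitialization step concrete, the following is a minimal 1D sketch of iterating (3) in pseudo-time with a Godunov-type upwind discretization of |ϕ_x|; the initial data, grid, smoothed sign function, and iteration count are our own illustrative choices, not prescriptions from the text.

```python
import numpy as np

# 1D reinitialization sketch: iterate phi_tau = sign(phi0)(1 - |phi_x|)
# in pseudo-time, with Godunov upwinding of |phi_x|.
h = 0.02
x = np.arange(-1.0, 1.0 + h / 2, h)
phi = 2.0 * x                          # zero contour at x = 0, but |phi_x| = 2
S = phi / np.sqrt(phi**2 + h**2)       # smoothed sign(phi0)
dtau = 0.5 * h                         # CFL-limited pseudo-time step

for _ in range(300):
    a = np.zeros_like(phi)             # backward differences D^- phi
    b = np.zeros_like(phi)             # forward differences  D^+ phi
    a[1:] = (phi[1:] - phi[:-1]) / h
    b[:-1] = (phi[1:] - phi[:-1]) / h
    # Godunov upwinding: pick the characteristic-consistent one-sided slope
    gp = np.maximum(np.maximum(a, 0.0)**2, np.minimum(b, 0.0)**2)  # S > 0
    gm = np.maximum(np.minimum(a, 0.0)**2, np.maximum(b, 0.0)**2)  # S < 0
    grad = np.sqrt(np.where(S > 0, gp, gm))
    phi[1:-1] += dtau * (S * (1.0 - grad))[1:-1]

# phi now approximates the signed distance to x = 0 away from the boundaries
```

At steady state the interior slope satisfies |ϕ_x| = 1, anchored at the (unchanged) zero contour, so ϕ(0.5) ≈ 0.5 and ϕ(−0.5) ≈ −0.5.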

The actual surface is obtained from the level set function using contouring algorithms [11]. These approaches are typically based on a bi-/tri-linear interpolation inside a grid cell. The linear interpolant along cell edges locates intersections of the surface with the grid edges. These intersection points are subsequently connected to form surface patches in each cell. Ambiguous connection cases are decided based on the full bi-/tri-linear interpolant inside the cell. In many applications, it is sufficient to know the location of the surface on the grid edges, for instance in the ghost fluid method [7].

2.2. Gradient-Augmented Level Set Method

We consider a generalized level set approach, in which the level set function ϕ is augmented by gradient information ψ = ∇ϕ, defined on the same grid as ϕ. The surface is defined using both independent quantities ϕ and ψ. This approach has various advantages in the context of level set methods. Raessi et al. show that under some circumstances, curvature can be computed more accurately if gradients are accessible [15]. In addition, as we show in the following, with gradients, subgrid structures can be represented, and a high order advection scheme with optimally local stencils can be formulated.

Evolving the implicitly defined surface with a velocity field v translates to equation (1) for ϕ, and the equation

 ψ_t + ∇(v·ψ) = 0 (4)

for ψ. Equation (4) is obtained by applying the gradient operator to (1). A straightforward approach is to approximate each of the equations (1) and (4) by a high order finite difference scheme. Raessi et al. apply this approach [15], introducing only a weak coupling through an extension velocity field [1]. While such decoupled (or weakly coupled) approaches can improve the classical level set approach, the full potential of a coupled approach is not used. In contrast, here we present an approach that evolves the level set function and its gradients in a coherent and fully coupled fashion. The precise methodology is presented in Sect. 4.

2.3. Cell-Based Hermite Interpolant

In the gradient-augmented level set method, every grid point carries a function value and a gradient vector. Thus, in p space dimensions, every grid cell has 1 + p pieces of information on each of its 2^p corner points. As we show, this allows the definition of a cell-based Hermite interpolant, i.e. a function that is smooth inside each cell, and that matches the function values and gradients at all cell corner points. In this paper, we consider a p-cubic Hermite interpolant, i.e. a cubic in 1D, a bi-cubic in 2D, a tri-cubic in 3D, etc. It is a natural generalization of the bi-/tri-linear interpolation used in classical level set approaches. The interpolant is simple to construct, using a tensor product approach.

In the following, we use the classical multi-index notation. For vectors α = (α₁, …, α_p) and x = (x₁, …, x_p), one defines x^α = Π_{i=1}^p x_i^{α_i}, and |α| = Σ_{i=1}^p α_i, and ∂^α = ∂^{|α|}/(∂x₁^{α₁} ⋯ ∂x_p^{α_p}). For convenience, we formulate some results for cubes, for which h denotes the edge length. However, the results apply to rectangular cells of arbitrary edge lengths as well. In this case, the estimates are valid with respect to the scaling parameter h (e.g. the largest edge length).

Definition 1.

A p-cubic polynomial is a polynomial in p variables, of degree 3 in each of the variables. Hence, it has an expression of the form

 H(x) = Σ_{α ∈ {0,1,2,3}^p} c_α x^α,

involving 4^p parameters c_α.

Definition 2.

A p-rectangle (or simply “cell”) is a set [a₁, b₁] × ⋯ × [a_p, b_p], where a_i < b_i. If b_i − a_i = h for all i, we speak of a p-cube (of size h), denoted by C_h^p. The p-cube C₁^p = [0, 1]^p is called the unit p-cube.

Let the vertices in a cell be indexed by a vector v ∈ {0, 1}^p, i.e. the vertex of index v is at position x_v = (a₁ + v₁(b₁ − a₁), …, a_p + v_p(b_p − a_p)). In particular, for the unit p-cube, we have x_v = v.

Definition 3.

Let R be a p-rectangle, and let ϕ be a sufficiently smooth function defined on an open set in ℝ^p that includes R. The data for ϕ on R is the set of 4^p scalars given by

 ϕ_v^α = ∂^α ϕ(x_v),

where both α, v ∈ {0, 1}^p.

Lemma 1.

Two p-cubic polynomials with the same data on some p-rectangle must be equal.

Proof.

WLOG assume that the p-rectangle is the unit p-cube C₁^p. Let Q be the difference between the two polynomials—with zero data on C₁^p. We show now that Q ≡ 0.

1. If p = 1, Q is a cubic polynomial in one variable, with double zeros at x = 0 and x = 1. Hence Q ≡ 0.

2. If p = 2, the p = 1 result, applied to Q and Q_y on the edges y = 0 and y = 1, yields Q = Q_y = 0 there. Hence, the same argument as in item 1 yields: for each fixed x, Q(x, ·) is a cubic in y with double zeros at y = 0 and y = 1, so Q ≡ 0.

3. If p = 3, the p = 2 result yields Q = Q_z = 0 on the faces z = 0 and z = 1. As before, it follows that Q ≡ 0.

From the above, it should now be obvious how to complete the proof using induction on p. ∎

Theorem 2.

For any arbitrary data set on some -rectangle, there exists exactly one -cubic polynomial which corresponds to the data.

Proof.

Lemma 1 shows that there exists at most one such polynomial. Here we construct one. WLOG assume that the p-rectangle is the unit p-cube C₁^p. For each vertex of C₁^p, indexed by v ∈ {0, 1}^p, and each derivative type α ∈ {0, 1}^p, one can construct a Lagrange basis polynomial W_v^α, i.e. a p-cubic polynomial that satisfies

 ∂^β W_v^α(x_w) = δ_{αβ} δ_{vw}   ∀ β, w ∈ {0, 1}^p.

Here δ_{αβ} = Π_{i=1}^p δ_{α_i β_i}, where δ is Kronecker’s delta. Hence, each of the basis polynomials equals 1 on exactly one vertex and for exactly one type of derivative, and equals 0 for any other vertex or derivative.

The basis polynomials can be constructed as tensor products of the form

 W_v^α(x) = Π_{i=1}^p w_{v_i α_i}(x_i),

where each of the w_{vα} is a cubic polynomial in one variable,

 w_{vα}(x) = { f(x)     if v = 0, α = 0,
               f(1−x)   if v = 1, α = 0,
               g(x)     if v = 0, α = 1,
               −g(1−x)  if v = 1, α = 1,   (5)

where f(x) = 2x³ − 3x² + 1 and g(x) = x³ − 2x² + x.

A p-cubic that corresponds to the data is then defined by a linear combination of the basis functions

 H(x) = Σ_{v,α ∈ {0,1}^p} ϕ_v^α W_v^α(x). (6) ∎
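The tensor-product construction in the proof can be checked directly. The sketch below (our own verification code) builds the 2D basis polynomials W_v^α from the 1D shape functions f(x) = 2x³ − 3x² + 1 and g(x) = x³ − 2x² + x of (5), and confirms the Kronecker-delta property at the four vertices of the unit square by numerical differentiation.

```python
import itertools

f = lambda t: 2*t**3 - 3*t**2 + 1      # value basis: f(0)=1, f(1)=f'(0)=f'(1)=0
g = lambda t: t**3 - 2*t**2 + t        # slope basis: g'(0)=1, g(0)=g(1)=g'(1)=0

def w(v, a, t):
    """1D factor w_{v a}(t) from equation (5)."""
    if a == 0:
        return f(t) if v == 0 else f(1 - t)
    return g(t) if v == 0 else -g(1 - t)

def W(v, a, x, y):
    """2D tensor-product basis W_v^alpha(x, y)."""
    return w(v[0], a[0], x) * w(v[1], a[1], y)

def dbeta(F, b, x, y, e=1e-5):
    """Numerical partial derivative of type beta in {0,1}^2 at (x, y)."""
    if b == (0, 0):
        return F(x, y)
    if b == (1, 0):
        return (F(x + e, y) - F(x - e, y)) / (2 * e)
    if b == (0, 1):
        return (F(x, y + e) - F(x, y - e)) / (2 * e)
    return (F(x+e, y+e) - F(x+e, y-e) - F(x-e, y+e) + F(x-e, y-e)) / (4 * e * e)

idx = list(itertools.product([0, 1], repeat=2))
ok = True
for v in idx:
    for a in idx:
        Fva = lambda x, y: W(v, a, x, y)
        for vert in idx:        # vertex of the unit square
            for b in idx:       # derivative type
                val = dbeta(Fva, b, vert[0], vert[1])
                target = 1.0 if (v == vert and a == b) else 0.0
                ok = ok and abs(val - target) < 1e-4
```

Each basis polynomial is 1 for exactly one (vertex, derivative) pair and 0 for all others, as the proof requires.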

We now investigate how well the p-cubic (6) approximates a smooth function, when interpolating it on the vertices of a cell. We are interested in its accuracy, as the cell size h approaches zero. While the following analysis also holds for p-rectangles, for simplicity (and WLOG), we provide the expressions for p-cubes only.

Definition 4.

Consider a (sufficiently smooth) scalar function ϕ, defined in an open neighborhood Ω of the origin in ℝ^p. Let h > 0 be small enough so that the p-cube C_h^p = [0, h]^p is included in Ω. The p-cubic Hermite interpolant to ϕ in C_h^p is the p-cubic polynomial H, such that H and ϕ have the same data on C_h^p.

Using the notation of Def. 3, and equation (6), it is easy to see that the p-cubic Hermite interpolant in Def. 4 can be written in the form

 H(x) = Σ_{v,α ∈ {0,1}^p} ϕ_v^α h^{|α|} W_v^α(x/h). (7)

This expression is straightforward to differentiate analytically: derivatives of the cubic basis functions (5), and powers of 1/h, appear.

Example 1 (1D Hermite cubic interpolant).

On a 1D cell [x_i, x_{i+1}] of size h, with grid values denoted by ϕ_i, ϕ_{i+1}, and derivatives by ϕ_i^x, ϕ_{i+1}^x, the Hermite cubic interpolant is defined by

 H(x) = ϕ_i f(ξ) + ϕ_{i+1} f(1−ξ) + h (ϕ_i^x g(ξ) − ϕ_{i+1}^x g(1−ξ)), (8)

and its derivative is

 H′(x) = (1/h)(ϕ_i f′(ξ) − ϕ_{i+1} f′(1−ξ)) + ϕ_i^x g′(ξ) + ϕ_{i+1}^x g′(1−ξ), (9)

where we use the relative coordinate ξ = (x − x_i)/h.
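Formulas (8) and (9) translate directly into code. The sketch below (function and variable names are ours) evaluates the interpolant and its derivative on a small cell, and checks them against a smooth test function; by Thm. 4, the value error is O(h⁴) and the derivative error O(h³).

```python
import numpy as np

f  = lambda t: 2*t**3 - 3*t**2 + 1     # Hermite value basis
fp = lambda t: 6*t**2 - 6*t            # f'
g  = lambda t: t**3 - 2*t**2 + t       # Hermite slope basis
gp = lambda t: 3*t**2 - 4*t + 1        # g'

def hermite(xi, xip1, phi_i, phi_ip1, dphi_i, dphi_ip1, x):
    """Evaluate H(x) and H'(x) from equations (8) and (9)."""
    h = xip1 - xi
    s = (x - xi) / h                   # relative coordinate in [0, 1]
    H = (phi_i * f(s) + phi_ip1 * f(1 - s)
         + h * (dphi_i * g(s) - dphi_ip1 * g(1 - s)))
    Hp = ((phi_i * fp(s) - phi_ip1 * fp(1 - s)) / h
          + dphi_i * gp(s) + dphi_ip1 * gp(1 - s))
    return H, Hp

# interpolate sin(x) on the cell [0, 0.1] from endpoint values and slopes
a, b = 0.0, 0.1
H, Hp = hermite(a, b, np.sin(a), np.sin(b), np.cos(a), np.cos(b), 0.05)
err_val = abs(H - np.sin(0.05))        # O(h^4) interpolation error
err_der = abs(Hp - np.cos(0.05))       # O(h^3) derivative error
```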

Remark 1.

The 1D Hermite cubic (8) is the unique minimizer of the functional ∫ (u″(x))² dx under the constraints that the function values and derivatives are matched at x_i and x_{i+1}. This is a standard problem in the calculus of variations. The cubic (8) is a minimizer, since it solves the corresponding Euler-Lagrange equation u⁗ = 0. It is the unique minimizer, since the functional is convex, and the domain for the minimization problem is also convex. Thus, the 1D Hermite cubic minimizes the L² norm of the second derivative.

Lemma 3.

Let the data determining the p-cubic Hermite interpolant in Def. 4 be known only up to some error. Then equation (7) yields the interpolation error

 δH(x) = Σ_{v,α ∈ {0,1}^p} W_v^α(x/h) h^{|α|} δϕ_v^α,

where we use the notation δq to indicate the error in some quantity q. In particular, if each datum ∂^α ϕ(x_v) is known with O(h^{4−|α|}) accuracy, then δH = O(h⁴).

Theorem 4.

Let H and ϕ be as in Def. 4. Then, for any point in C_h^p, we have

 ∂^α H = ∂^α ϕ + O(h^{4−|α|}), (10)

where the coefficients in the error terms can be bounded by some constant multiple of the C⁴ norm of ϕ in C_h^p.

Proof.

Let T be the degree three polynomial obtained by using a Taylor expansion for ϕ, centered at some point in C_h^p. Then, by construction, T satisfies (10) above. In particular, the data for T on C_h^p is related to the data for H (same as the data for ϕ) in the manner specified in Lemma 3. Hence (10) follows. ∎

Lemma 3 and Thm. 4 imply that the p-cubic interpolant is fourth order accurate, if every required derivative ∂^α ϕ is known with O(h^{4−|α|}) accuracy on the cell vertices. Since in a gradient-augmented method, we only assume that the function values (|α| = 0) and the gradients (|α| = 1) are given, we need to construct any required cross derivatives (|α| ≥ 2) with accuracy O(h^{4−|α|}), from the given data. Interestingly, this means that one can set to zero all the derivatives of order higher than 3, without affecting the interpolant’s accuracy. Unfortunately, this convenience does not come into play for dimensions p ≤ 3.

The cross derivatives required by (7) can be constructed to the appropriate order from the function values and the derivatives at the grid points. Two possible approaches are:

1. Central differencing.  Second order cross derivatives can be approximated by central differences of neighboring points, such as (here written for ϕ_xy in 3D, on a cell of sizes Δx, Δy, Δz, all proportional to h)

 ϕ_xy^{i,j,k} = (ϕ_y^{i+1,j,k} − ϕ_y^{i−1,j,k}) / (2Δx) + O(h²).

By construction, these approximations are second order accurate. The third order cross derivative ϕ_xyz can be approximated by

 ϕ_xyz^{i,j,k} = (ϕ_z^{i+1,j+1,k} − ϕ_z^{i−1,j+1,k} − ϕ_z^{i+1,j−1,k} + ϕ_z^{i−1,j−1,k}) / (4ΔxΔy) + O(h²).

This approach defines one unique value for each required cross derivative at each grid point. Hence, the thus defined p-cubic interpolant is C¹ across cell edges. A technical disadvantage is that the optimal locality is to a certain extent lost: the interpolation in a cell uses information from the corner points of adjacent cells.
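The central difference formula above is easy to verify on a test function with a known cross derivative. The 2D sketch below (our own setup) approximates ϕ_xy at a point from the stored ϕ_y values at the two x-neighbors.

```python
import numpy as np

# phi(x, y) = sin(x) cos(y): phi_y = -sin(x) sin(y), phi_xy = -cos(x) sin(y)
dx = 0.01
x0, y0 = 0.7, 0.3
phi_y = lambda x, y: -np.sin(x) * np.sin(y)   # stored gradient component

# central difference of the stored phi_y values in the x-direction
phi_xy = (phi_y(x0 + dx, y0) - phi_y(x0 - dx, y0)) / (2 * dx)

exact = -np.cos(x0) * np.sin(y0)
err = abs(phi_xy - exact)                     # O(dx^2)
```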

2. Cell-based approach.  In 2D, a second order accurate approximation to the second order cross derivative ϕ_xy at the vertices of a cell can be obtained from the gradient values at the vertices of the same cell, using finite differences and interpolation/extrapolation:

1. The central differences across the cell edges, i.e. (ϕ_y^{i+1,j} − ϕ_y^{i,j})/Δx, (ϕ_y^{i+1,j+1} − ϕ_y^{i,j+1})/Δx, (ϕ_x^{i,j+1} − ϕ_x^{i,j})/Δy, (ϕ_x^{i+1,j+1} − ϕ_x^{i+1,j})/Δy, approximate ϕ_xy at the four cell edge centers.

2. Weighted averages of these values yield approximations to ϕ_xy at the cell vertices. The weights follow from bilinear interpolation/extrapolation, and are 3/4 for the two nearby edge centers, and −1/4 for the two opposing edge centers.

To obtain the cross derivatives in 3D, the formulas above are applied in a facet of the 3D cell.

A first order accurate approximation to the third cross derivative is given by

 ϕ_xyz^{i,j,k} = (ϕ_z^{i+1,j+1,k} − ϕ_z^{i,j+1,k} − ϕ_z^{i+1,j,k} + ϕ_z^{i,j,k}) / (ΔxΔy) + O(h).

This approach is purely cell-based, and the interpolation can be implemented as a single black-box routine. A technical disadvantage of this approach is that one and the same grid point is assigned different cross derivative values, depending on which cell it is a vertex of. Since the approximations are second order accurate, the various cross derivative values at a grid point differ by O(h²). Consequently, the resulting p-cubic interpolant is continuous, but its gradient may jump across cell edges, with discontinuities of size O(h²).
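The cell-based 2D recipe can be sketched as follows (our own implementation of the two steps above, using the stated weights 3/4 and −1/4). For the test function ϕ = x²y + xy², whose cross derivative ϕ_xy = 2x + 2y is linear, the recipe reproduces ϕ_xy at a vertex exactly, since the edge-center differences and the bilinear extrapolation are both exact for this data.

```python
# Cell [0, h] x [0, h]; stored (exact) gradient components at the vertices.
h = 0.1
phi_x = lambda x, y: 2*x*y + y*y       # gradient of phi = x^2 y + x y^2
phi_y = lambda x, y: x*x + 2*x*y

# Step 1: central differences across the edges -> phi_xy at edge centers.
A = (phi_y(h, 0) - phi_y(0, 0)) / h    # bottom edge, center (h/2, 0)
B = (phi_y(h, h) - phi_y(0, h)) / h    # top edge,    center (h/2, h)
C = (phi_x(0, h) - phi_x(0, 0)) / h    # left edge,   center (0, h/2)
D = (phi_x(h, h) - phi_x(h, 0)) / h    # right edge,  center (h, h/2)

# Step 2: weighted average -> phi_xy at vertex (0, 0):
# 3/4 for the two nearby edge centers, -1/4 for the two opposing ones.
phi_xy_00 = 0.75 * (A + C) - 0.25 * (B + D)

exact = 2*0 + 2*0                       # phi_xy(0, 0) = 0
err = abs(phi_xy_00 - exact)
```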

In practice, both approaches perform well. Note that the above approximations are just one possible way to approximate the cross derivatives. The final method proves rather robust with respect to the actual approximation chosen. In fact, even if an error is made in the cross derivatives (e.g. by setting them equal to zero), the method remains convergent, though with a lower order of accuracy.

3. Benefits of Incorporating Gradient Information

While the classical level set method, outlined in Sect. 2.1, is a powerful tool in representing and advecting surfaces, it suffers from some problems and inconveniences. In this paper, we address three fundamental aspects, and show how a gradient-augmented approach, as outlined in Sect. 2.2, can ameliorate them:

• Small structures are lost once below the grid resolution. Gradients yield a certain level of subgrid resolution (Sect. 3.1).

• An accurate approximation of the curvature involves difficulties. With gradients, surface normals and curvature can be easily obtained from the p-cubic Hermite interpolation (Sect. 3.2).

• Accurate schemes for the advection equation (1) involve large stencils. With gradients, a third order accurate scheme can be formulated, with optimally local stencils (Sect. 3.3).

3.1. Representation of Structures of Subgrid Size

Structures that are at least a few cells wide are represented well by the classical level set method. However, smaller structures are only represented if they contain grid points. This leads to the inconsistency that one and the same small structure can be present (if a grid point happens to fall into it) or be missing (if it happens to fall in between grid points). Furthermore, even if initially present, small structures may vanish over the course of a computation due to approximation errors in the numerical scheme, such as numerical diffusion or dispersion. Small drops, jets or films (see Fig. 1) may be lost during a computation, resulting in a “loss of volume”. Other difficulties are the numerical coalescence of nearby structures, or a numerical pinch-off in a thinning process. In practice, the loss of small structures can be prevented using adaptive mesh refinement (AMR) techniques [18, 12], which add a significant level of complexity, especially when high order approximations need to be preserved across multiple levels of refinement. The problem of volume loss can be addressed by enforcing conservation of volume, as proposed by Sussman and Fatemi [19]. Such methods guarantee conservation of volume (up to a small approximation error), but may yield incorrect topologies. An alternative approach is to augment the level set function by Lagrangian particles, as proposed by Enright, Fedkiw, Ferziger, and Mitchell [6]. This latter method resolves the difficulties described above in a satisfactory manner, but at the expense of simplicity.

Fig. 2 shows a signed distance level set function (solid line) of a 1D “bubble” between two grid points. Classical level set methods use a linear interpolation (dashed line). Hence, no structure is identified, since the level set function has equal sign on the two grid points. Of course, the same structure would be identified if a grid point fell within it, but it would be lost when advected into the situation shown in Fig. 2. Also, if one knows that ϕ is a signed distance function, the surface can be identified, as shown by Chopp [3]. Unfortunately, this approach incurs difficulties in higher space dimensions, where ambiguities arise. In addition, the assumption of having a precise signed distance function is too limiting in many cases.

A gradient-augmented level set approach allows the representation of a full structure, i.e. two surfaces, between two neighboring grid points. Fig. 2 shows a cubic interpolant (dash-dotted line) along a cell edge, constructed from the correct function values and gradients. In fact, a subgrid structure is detected. Similarly, with bi-/tri-cubics, subgrid structures in 2D and 3D can be approximately represented. Each subgrid structure shown in Fig. 1 is defined by a tri-cubic on a single cube, constructed from appropriate function values on the vertices, together with gradients that point radially away from the cell center for the drop (sphere), radially away from the axis for the jet (cylinder), and normal to the planes for the film (two parallel planes).
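As a 1D illustration, consider a unit cell with equal positive endpoint values ϕ_i = ϕ_{i+1} = 0.2 and slopes ϕ_i^x = −1, ϕ_{i+1}^x = 1 (our own example data). The Hermite cubic (8) then reduces to H(ξ) = 0.2 − ξ(1 − ξ), which dips below zero inside the cell: a subgrid “bubble” with two zero crossings, invisible to a linear interpolant through the equal-sign endpoint values.

```python
import numpy as np

f = lambda t: 2*t**3 - 3*t**2 + 1
g = lambda t: t**3 - 2*t**2 + t

# Hermite cubic (8) on a unit cell with equal-sign endpoint data
phi_i, phi_ip1, dphi_i, dphi_ip1, h = 0.2, 0.2, -1.0, 1.0, 1.0
H = lambda s: (phi_i * f(s) + phi_ip1 * f(1 - s)
               + h * (dphi_i * g(s) - dphi_ip1 * g(1 - s)))

s = np.linspace(0.0, 1.0, 1001)
vals = H(s)
crossings = np.sum(np.diff(np.sign(vals)) != 0)   # count zero crossings

# Both endpoints are positive, yet the cubic detects an interior structure:
# H(s) = 0.2 - s(1 - s), with roots at s = (1 ± sqrt(0.2)) / 2.
```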

At the same time, the example in Fig. 2 indicates a major drawback of p-cubic approaches: level set functions are typically non-smooth (e.g. signed distance functions have kinks). Hermite p-cubics are smooth, hence the approximation is not very accurate near kinks. As a result, even with p-cubic approaches, smaller structures may vanish eventually, though significantly later than with classical level set methods. A potential remedy can be to use higher order, or nonlinear, interpolations, which we shall investigate in future work.

Theoretically, the gradient-augmented level set approach allows the representation of (at least some) isolated structures of arbitrarily small size. Hence, the use of gradient information is more than a mere increase of resolution. Of course, in practice there are limitations to the size of the small structures that can be represented reliably. In addition, only isolated subgrid structures can be represented. The structure shown in Fig. 2 is indistinguishable from two small structures in the cell, whose outer boundaries are at the same positions as the boundaries of the single structure.

3.2. Approximation of Derivative Quantities

In many applications (e.g. in two-phase flow simulations with surface tension), normal vectors and curvatures are required. In terms of the first and second derivatives of the level set function ϕ, the expressions are (here in 2D)

 n = (ϕ_x, ϕ_y)ᵀ / (ϕ_x² + ϕ_y²)^{1/2}, (11)

 κ = (ϕ_xx ϕ_y² − 2 ϕ_x ϕ_y ϕ_xy + ϕ_yy ϕ_x²) / (ϕ_x² + ϕ_y²)^{3/2}. (12)

In classical level set methods, the required derivatives ϕ_x, ϕ_y, ϕ_xx, ϕ_xy, ϕ_yy are typically obtained from ϕ by central differences. Assuming that the level set function is known with fourth order accuracy on the grid points, second order accurate approximations to normals and curvature on the grid points are obtained from the expressions (11) and (12).

Away from grid points (such as is required by the ghost fluid method), second order accurate approximations can be obtained by using weighted averages. Several problems with computing curvature from level set (and volume of fluid) functions, in particular in connection with reinitialization (3), have been presented by Raessi et al. [15]. In addition, even if a suitably high order advection scheme is used, under various circumstances the level set function values can cease to be fourth order accurate. For instance, when a WENO stencil crosses a discontinuity in the gradient (e.g. signed distance level set functions (2) possess discontinuous derivatives), the accuracy of the approximation can drop [10].

Gradient-augmented level set methods offer new possibilities in the approximation of derivative quantities. For instance, the p-cubic Hermite interpolation considered here gives rise to a particularly simple way to obtain normals and curvatures at arbitrary points. Inside each cell, all first and second derivatives can be obtained analytically by differentiating the interpolant (7). Normal vectors and curvature are then simply obtained by (11), respectively (12). Due to Thm. 4, first derivatives are third order accurate, and so is the obtained normal vector n. Further, second derivatives are second order accurate, and so is the obtained curvature κ. The numerical results shown in Sect. 6.1.3 verify this. Hence, using the Hermite p-cubic, derivative quantities (of up to second order) can be obtained everywhere with at least second order accuracy, by a simple recipe with optimally local stencils, i.e. they use information only from a single cell.
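Given the five derivatives at a point, formula (12) is a direct evaluation. The helper below (our own naming) computes κ from them; for the signed distance function of a circle, ϕ = √(x² + y²) − R, evaluated at distance ρ from the center, it recovers κ = 1/ρ.

```python
import math

def curvature_2d(px, py, pxx, pxy, pyy):
    """Curvature from equation (12)."""
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / (px**2 + py**2) ** 1.5

# Signed distance to a circle centered at the origin: phi = sqrt(x^2+y^2) - R.
a, b = 1.2, 1.6                        # evaluation point, rho = sqrt(a^2+b^2) = 2
rho = math.hypot(a, b)
px, py = a / rho, b / rho              # exact first derivatives of phi
pxx = b**2 / rho**3                    # exact second derivatives
pyy = a**2 / rho**3
pxy = -a * b / rho**3

kappa = curvature_2d(px, py, pxx, pxy, pyy)   # equals 1/rho = 0.5
```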

3.3. Coherent Advection with Optimally Local Stencils

In order to both accurately advect structures, and calculate derivative quantities, high order schemes have to be used. In classical approaches, the stencils for high order schemes reach over multiple cells. This results in a cumbersome implementation on adaptive grids and near boundaries.

In Sect. 4, we present a semi-Lagrangian approach that solves the advection problem with third order accuracy. The characteristic form of the equations (1) and (4) is used to update the function values and gradients at the grid points. In each time step, the characteristics are traced backwards from the grid points, and function values and gradients on the characteristic, at the prior time step, are extracted from the Hermite interpolant, as defined in Sect. 2.3. A key advantage of this approach is that at each grid point, the data is updated using only information from a single adjacent cell. Hence, the gradient-augmented approach delivers a high order advection with optimally local stencils. It seems evident that the optimal locality should allow a simple treatment when combined with adaptive mesh refinement and near boundaries. The detailed investigation of this matter is left for future work.

4. Numerical Methodology of the Generalized CIR Scheme

We consider the linear advection equation (1) with a given velocity field v = v(x, t). The evolution of the level set function and its gradient, given by (1) and (4), can be rewritten as

 ϕ_t + v·∇ϕ = 0,
 ψ_t + v·∇ψ = −(∇v)·ψ. (13)

For functions defined everywhere, the evolution of ψ is completely determined by that of ϕ. However, for functions known only on grid points, the gradients carry additional information that is not encoded in the function values. For the moment, we assume the velocity field v = (u, v, w)ᵀ and the velocity deformation matrix

 ∇v = ( ∂u/∂x  ∂v/∂x  ∂w/∂x )
      ( ∂u/∂y  ∂v/∂y  ∂w/∂y )
      ( ∂u/∂z  ∂v/∂z  ∂w/∂z )

to be exactly accessible everywhere. The system (13) consists of an advection part (left hand side), and a source term for ψ. Its characteristic form is

 dϕ/dt = 0  and  dψ/dt = −(∇v)·ψ  along  dx/dt = v(x, t). (14)

This system of ODEs describes the evolution of ϕ and ψ along the characteristic curves defined by dx/dt = v(x, t). We solve these equations by a generalization of the CIR method by Courant, Isaacson, and Rees [5]: the characteristics (14) are traced backwards from the grid points, and the required data is obtained by the interpolation. Note that, while the CIR method is, in general, non-conservative for nonlinear conservation laws, its use is justified here, since equations (1) and (4) are linear.

Let ϕ_j(t), ψ_j(t) denote the data at time t, and ϕ_j(t + Δt), ψ_j(t + Δt) denote the data at time t + Δt. We find the characteristic curves that pass through the grid points x_j at time t + Δt, and solve (14) along these characteristic curves. One time step of the scheme from t to t + Δt reads as follows:

1. For each grid point x_j, solve the characteristic equation

 ∂x(τ)/∂τ = v(x(τ), τ),  x(t + Δt) = x_j, (15)

backwards from τ = t + Δt to τ = t, to find the point ˚x_j = x(t). The characteristic curve that passes through ˚x_j at time t reaches x_j at time t + Δt.

2. Assign ˚ϕ_j = H(˚x_j) and ˚ψ_j = ∇H(˚x_j), both evaluated analytically from the Hermite interpolation (7) in the cell that ˚x_j falls into. Since ϕ is constant along characteristics, assign ϕ_j(t + Δt) = ˚ϕ_j.

3. Solve (14) forward along the characteristic curve, i.e. solve

 ∂ψ(τ)/∂τ = −∇v(x(τ), τ)·ψ(τ),  ψ(t) = ˚ψ_j, (16)

from τ = t to τ = t + Δt. Assign ψ_j(t + Δt) = ψ(t + Δt).

In principle, there is no requirement for the backward characteristic curves to remain in a single cell, as long as the integration scheme for (15) is accurate enough to prevent the approximate backwards characteristic curves from intersecting (which may cause oscillations). However, in practice, it is often convenient, and not too restrictive, to choose Δt so that v_max Δt ≤ h, where v_max is the maximum magnitude of v, and h is the grid size. This guarantees that characteristic curves from different grid points do not intersect, and that characteristic curves do not cross multiple cells.

As proved in Thm. 4, the interpolation approximates ϕ with fourth order accuracy, and ∇ϕ with third order accuracy. Therefore, in order to achieve the maximum possible order of accuracy, the backwards characteristics equation (15) needs to be solved with fourth order accuracy in each time step (this then yields a globally third order in time algorithm, see Sect. 5.5). One step of the Shu-Osher RK3 method [17] does the job. Since gradients are generally one order less accurate, their evolution equation (16) needs to be solved with third order accuracy. One step of Heun’s RK2 method (explicit trapezoidal rule) does the job. However, in Sect. 5 we outline another possibility, which is to systematically inherit the update rule for (16) from the scheme used for (15). We also investigate the accuracy and stability of the gradient-augmented scheme.
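In 1D with a constant velocity u, one step of the scheme is particularly simple: the foot of the characteristic is ˚x = x_j − u Δt exactly, the source term in (16) vanishes since ∇v = 0, and the new data are read off the Hermite interpolant via (8) and (9). The sketch below (our own periodic test setup) performs one such step and checks it against the exact shift of a smooth profile.

```python
import numpy as np

# 1D gradient-augmented CIR step, constant velocity u, periodic domain [0, 2*pi)
f  = lambda t: 2*t**3 - 3*t**2 + 1
fp = lambda t: 6*t**2 - 6*t
g  = lambda t: t**3 - 2*t**2 + t
gp = lambda t: 3*t**2 - 4*t + 1

N = 64
L = 2 * np.pi
h = L / N
x = h * np.arange(N)
phi, psi = np.sin(x), np.cos(x)        # function values and gradients

u, dt = 1.0, 0.5 * h                   # CFL: u*dt <= h, foot stays in one cell
xb = (x - u * dt) % L                  # backtracked characteristic feet
i = np.floor(xb / h).astype(int) % N   # cell index of each foot
ip = (i + 1) % N
s = xb / h - np.floor(xb / h)          # relative coordinate in the cell

# evaluate H and H' (equations (8), (9)) at the feet; these are the new data
phi_new = (phi[i] * f(s) + phi[ip] * f(1 - s)
           + h * (psi[i] * g(s) - psi[ip] * g(1 - s)))
psi_new = ((phi[i] * fp(s) - phi[ip] * fp(1 - s)) / h
           + psi[i] * gp(s) + psi[ip] * gp(1 - s))

err_phi = np.max(np.abs(phi_new - np.sin(x - u * dt)))   # O(h^4) per step
err_psi = np.max(np.abs(psi_new - np.cos(x - u * dt)))   # O(h^3) per step
```

Note that each grid point is updated from the data of a single adjacent cell, illustrating the optimal locality of the stencil.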

In one space dimension, for constant velocity fields, the presented approach is equivalent to the CIP method [21, 22], which also uses a cubic interpolant. Note that the presented gradient-augmented level set approach is not limited to p-cubic interpolants. Within the setting of superconsistency, introduced in Sect. 5.1, the projection step, defined in Sect. 5.2, can be replaced by other forms of projection (see Rem. 2).

The gradient-augmented level set method is not a finite difference method, since differential operators are not approximated. Instead, fundamental properties of the exact solution to the underlying equation are used, and derivatives are evaluated analytically from the interpolation patch. The approach is based on local basis functions, as a finite element method is. However, the update rule is very different from that of classical finite element approaches.

The generalized CIR approach is cell-based, thus preserving the benefits of the level set method, such as handling of topology changes, parallelizability, etc. In addition, it is optimally local: The new data values at a grid point use only information from corner points of a single adjacent cell. Compared to WENO schemes, which use information multiple cells away, the CIR method promises an easier treatment of approximations on locally refined meshes and near domain boundaries. In addition, the locality allows smaller structures to get close together without spoiling the accuracy of the approximation.

4.1. Boundary Conditions

When solving the linear advection equation (1) on a domain with outward normals n, boundary conditions have to be prescribed wherever the flow enters the domain, i.e. where v·n < 0. The accurate treatment of boundary conditions is a common challenge in high order methods. For level set methods, this is important whenever the interface is close to the boundary. In WENO methods, ghost points have to be created using appropriate extrapolations. Since WENO stencils reach over multiple cells, not only do inflow boundaries pose a challenge, but also outflow boundaries, at which the actual equation does not require any boundary conditions.

In contrast, the gradient-augmented CIR method presented above treats outflow boundaries naturally: no information has to be prescribed, since the characteristics come from inside the domain. At inflow boundaries, information for both ϕ and →ψ has to be prescribed. Below we give two illustrative examples that show how to create the required information from the advection equation (1), for the two most common types of boundary conditions. Other types of boundary conditions can be treated similarly.

Consider a domain boundary that is aligned with cell edges. WLOG assume that this boundary is perpendicular to the x-axis, and write the velocity field as →v = (u,v,w). Then

• Dirichlet boundary conditions prescribe ϕ on the boundary, which is perpendicular to the x-axis. Therefore, all partial derivatives perpendicular to the x-axis, i.e. ϕ_y and ϕ_z, as well as ϕ_t, are prescribed as well. The missing derivative ϕ_x on the boundary follows from the equation (1) as

 ϕ_x = −(ϕ_t + v ϕ_y + w ϕ_z)/u.

Note that u ≠ 0, since →v ⋅ →n < 0 at an inflow boundary.

• Neumann boundary conditions prescribe ϕ_x on the boundary. Thus equation (1) yields a linear advection equation on the boundary

 ϕ_t + v ϕ_y + w ϕ_z = −u ϕ_x, (17)

with initial conditions given by the values of ϕ on the boundary at the initial time. Equation (17) can again be solved using the gradient-augmented CIR scheme. This yields the function values ϕ and the derivatives perpendicular to the x-axis, i.e. ϕ_y and ϕ_z.
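As a concrete illustration of the Dirichlet case, the following sketch recovers the missing normal derivative from the advection equation (1), checked against a manufactured plane-wave solution with constant velocity. The specific solution and all names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Manufactured solution of phi_t + u phi_x + v phi_y + w phi_z = 0,
# used only to verify the boundary formula below (illustrative choice).
u, v, w = 1.0, 0.5, -0.25                      # constant velocity, u != 0
phase = lambda x, y, z, t: x + y + z - (u + v + w)*t
phi   = lambda x, y, z, t: np.sin(phase(x, y, z, t))

# Quantities known on a Dirichlet boundary perpendicular to the x-axis:
phi_t = lambda x, y, z, t: -(u + v + w)*np.cos(phase(x, y, z, t))
phi_y = lambda x, y, z, t: np.cos(phase(x, y, z, t))
phi_z = lambda x, y, z, t: np.cos(phase(x, y, z, t))

def phi_x_from_bc(x, y, z, t):
    # The missing normal derivative, recovered from equation (1):
    # phi_x = -(phi_t + v phi_y + w phi_z)/u
    return -(phi_t(x, y, z, t) + v*phi_y(x, y, z, t) + w*phi_z(x, y, z, t))/u
```

For this manufactured solution the recovered ϕ_x agrees with the analytic derivative cos(phase) at any boundary point.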

5. Analysis of the Gradient-Augmented CIR Method

Consider the numerical scheme presented in Sect. 4, for the linear advection equation (1), with →v(→x,t) given. Let →X(→x,t,τ) be the solution to the characteristic equations (14), defined by

 ∂/∂τ →X(→x,t,τ) = →v(→X(→x,t,τ), τ),    →X(→x,t,t) = →x. (18)

The solution operator S_{t,t+Δt} of the linear advection equation (1) is then

 S_{t,t+Δt} ϕ(→x,t) = ϕ(→x,t+Δt) = ϕ(→X(→x,t+Δt,t), t). (19)

Now consider a numerical approximation ˜→X to →X, as arising from a numerical ODE solver, e.g. a Runge-Kutta scheme. Let the corresponding approximate solution operator to (1) be denoted by

 A_{t,t+Δt} ϕ(→x,t) = ϕ(˜→X(→x,t+Δt,t), t). (20)

5.1. Superconsistency

Equation (19) provides the exactly evolved solution, while equation (20) defines an approximately evolved solution, both at every point in the computational domain. In the actual numerical method, we consider only the (approximate) characteristic curves that go through the grid points at time t+Δt. However, we can use the solution operator (20) to derive a natural update rule for the gradients. The gradient of the approximately advected function is

 ∇(A_{t,t+Δt} ϕ(→x,t)) = ∇˜→X(→x,t+Δt,t) ⋅ ∇ϕ(˜→X(→x,t+Δt,t), t), (21)

which yields the update rule for approximate function values and gradients

 ϕ(→x,t+Δt) = ϕ(˜→X(→x,t+Δt,t), t),
 →ψ(→x,t+Δt) = ∇˜→X(→x,t+Δt,t) ⋅ →ψ(˜→X(→x,t+Δt,t), t). (22)
Definition 5.

We call a gradient-augmented scheme superconsistent if it satisfies (22).

Example 2.

Using forward Euler to approximate (18) yields the superconsistent scheme

 ˚→x = ˜→X(→x,t+Δt,t) = →x − Δt →v(→x,t+Δt)
 ∇˚→x = I − Δt ∇→v(→x,t+Δt)
 ϕ(→x,t+Δt) = ϕ(˚→x,t)
 →ψ(→x,t+Δt) = ∇˚→x ⋅ →ψ(˚→x,t)
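For a constant velocity field in one space dimension, a single superconsistent step reduces to tracing back by uΔt (for which forward Euler is exact) and evaluating the Hermite interpolant. The following sketch assumes the standard cubic Hermite basis f(s) = 1 − 3s² + 2s³, g(s) = s(1−s)²; whether this coincides with the basis (5) of the paper is an assumption here.

```python
import numpy as np

# Standard cubic Hermite basis on [0, 1]: value basis f, slope basis g.
f  = lambda s: 1 - 3*s**2 + 2*s**3
g  = lambda s: s*(1 - s)**2
fp = lambda s: 6*s**2 - 6*s          # f'
gp = lambda s: 1 - 4*s + 3*s**2      # g'

def cir_step(phi, psi, u, h, dt):
    """One superconsistent step for phi_t + u phi_x = 0, u constant,
    on a periodic grid. For constant u the Jacobian of the foot-point
    map is the identity, so psi is simply interpolated at the foot point."""
    n = len(phi)
    xi = -u*dt/h                      # foot point x_j - u*dt = x_j + xi*h
    cell = int(np.floor(xi))          # offset of the cell containing the foot
    s = xi - cell                     # local coordinate in [0, 1)
    j  = (np.arange(n) + cell) % n    # left node of that cell
    jp = (j + 1) % n                  # right node
    # Hermite interpolant and its derivative, evaluated at the foot point:
    phi_new = phi[j]*f(s) + phi[jp]*f(1-s) + h*(psi[j]*g(s) - psi[jp]*g(1-s))
    psi_new = (phi[j]*fp(s) - phi[jp]*fp(1-s))/h + psi[j]*gp(s) + psi[jp]*gp(1-s)
    return phi_new, psi_new
```

One step applied to ϕ(x) = sin x reproduces the exactly shifted profile up to the Hermite interpolation error, i.e. O(h⁴) in ϕ and O(h³) in ψ.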

While this scheme is particularly simple, it is only first order accurate. A globally third order accurate scheme is obtained when using a third order Runge-Kutta scheme, such as in the following example.

Example 3.

Using the Shu-Osher scheme [17] for (18) yields the superconsistent scheme

 →x₁ = →x − Δt →v(→x,t+Δt)
 ∇→x₁ = I − Δt ∇→v(→x,t+Δt)
 →x₂ = →x − Δt ((1/4) →v(→x,t+Δt) + (1/4) →v(→x₁,t))
 ∇→x₂ = I − Δt ((1/4) ∇→v(→x,t+Δt) + (1/4) ∇→x₁ ⋅ ∇→v(→x₁,t))
 ˚→x = →x − Δt ((1/6) →v(→x,t+Δt) + (1/6) →v(→x₁,t) + (2/3) →v(→x₂,t+(1/2)Δt))
 ∇˚→x = I − Δt ((1/6) ∇→v(→x,t+Δt) + (1/6) ∇→x₁ ⋅ ∇→v(→x₁,t) + (2/3) ∇→x₂ ⋅ ∇→v(→x₂,t+(1/2)Δt))
 ϕ(→x,t+Δt) = ϕ(˚→x,t)
 →ψ(→x,t+Δt) = ∇˚→x ⋅ →ψ(˚→x,t)
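The stages of Example 3 can be transcribed directly into code. The sketch below follows the matrix ordering as written in the text (with ∇→v multiplying from the left); for the linear test field used here all factors commute, so this ordering assumption is harmless. A natural check is the solid-body rotation field →v(x,y) = (−y, x), whose exact backward characteristic map is a rotation.

```python
import numpy as np

def rk3_trace(x, t, dt, v, grad_v):
    """Shu-Osher RK3 trace-back of the characteristic through (x, t+dt),
    together with the matrix used to update the gradient (Example 3).
    v(x, t) returns the velocity; grad_v(x, t) returns the matrix
    denoted grad v in the text, with (grad v)_ij = d v_j / d x_i."""
    I = np.eye(len(x))
    x1 = x - dt*v(x, t + dt)
    G1 = I - dt*grad_v(x, t + dt)
    x2 = x - dt*(v(x, t + dt) + v(x1, t))/4
    G2 = I - dt*(grad_v(x, t + dt) + G1 @ grad_v(x1, t))/4
    xf = x - dt*(v(x, t + dt)/6 + v(x1, t)/6 + 2*v(x2, t + dt/2)/3)
    Gf = I - dt*(grad_v(x, t + dt)/6 + G1 @ grad_v(x1, t)/6
                 + 2*(G2 @ grad_v(x2, t + dt/2))/3)
    return xf, Gf
```

For →v(x,y) = (−y, x) the exact foot point of the characteristic through (x, t+Δt) is the rotation of x by −Δt, and both the foot point and the gradient-update matrix are reproduced to O(Δt⁴) per step.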

Of course, it is not necessary to use a superconsistent scheme to update →ψ. In fact, other schemes to approximate (4) might be more accurate and/or more efficient. However, superconsistent schemes have the advantage that the gradients are evolved exactly as if the function values were evolved everywhere, and the gradients were then obtained by differentiating the function. This increases the coherence between function values and derivatives. In addition, superconsistent schemes can be analyzed in function spaces, without actually having to consider a separate evolution for the gradient.

5.2. Projection

The update rule (22) uses the function values and the gradients at ˚→x = ˜→X(→x,t+Δt,t), for each grid point →x. The point ˚→x is in general not a grid point. Hence, in the gradient-augmented numerical scheme, ϕ(˚→x,t) and →ψ(˚→x,t) are defined by the Hermite interpolation. Each time step of a superconsistent gradient-augmented scheme can be interpreted (in a function space) as an approximate advection step, followed by a projection step

 M_{t,t+Δt} = P A_{t,t+Δt}.

At time t, the level set function is represented by the cell-based cubic Hermite interpolant approximation. This function is then advanced in time to t+Δt by the approximate advection operator A_{t,t+Δt}. Then a new representation in terms of cell-based cubic Hermite interpolants is obtained by the projection operator P, which uses the function and gradient values of the advected function at the grid points.

In this paper, the projection P is given by the cubic Hermite interpolant defined in equation (7), with the proviso that the required cross derivatives that appear in (7) are obtained by one of the (various possible) finite difference schemes described in Sect. 2.3.

In general, the operator A_{t,t+Δt} alone is a much better approximation to S_{t,t+Δt} than the combined P A_{t,t+Δt}. However, the projection step is required to be able to represent the numerical approximation to ϕ, at a given instance in time, by a finite amount of data. For the choice of P used in this paper, a function is uniquely defined by its function values and gradient values at the grid points. Hence, in the numerical scheme, the advection operator has to be evaluated only along the characteristic curves that go through the grid points, using only the function values and derivatives there.

Remark 2.

In principle, the presented generalized CIR algorithm works for any projection which depends on a finite amount of information that can be extracted from the function being projected. A superconsistent scheme is defined by selecting an approximate advection scheme and an appropriate projection strategy. The cubic Hermite interpolation projection used here is linear, generalizes nicely to higher space dimensions, and gives rise to a simple algorithm. However, other projections may exist, providing increased subgrid resolution, or other advantages. Obvious candidates include the incorporation of higher order information, such as curvature. We plan to investigate these, and other alternatives, in future work.

5.3. Consistency

We investigate the consistency of the method by estimating the truncation error after applying a single step of the numerical scheme to the exact solution. Let Δt denote the time step, and h the grid size.

Theorem 5 (consistency).

Assume that the exact solution is smooth with bounded derivatives up to fourth order, and that the velocity field is smooth with bounded derivatives up to third order. If a third order method is used to integrate the characteristic equations, then the presented gradient-augmented method with Δt = O(h) is consistent, with errors that are O(h⁴) in the level set function ϕ, and O(h³) in the gradient →ψ.

Proof.

One step of a third order scheme for the characteristic equations (18) is fourth order accurate

 ˜→X(→x,t+Δt,t) = →X(→x,t+Δt,t) + O(Δt⁴).

Thus it yields a fourth order accurate approximation to the advected solution

 |ϕ(˜→X(→x,t+Δt,t),t) − ϕ(→x,t+Δt)| = |ϕ(˜→X(→x,t+Δt,t),t) − ϕ(→X(→x,t+Δt,t),t)| ≤ |∇ϕ| O(Δt⁴) = O(Δt⁴). (23)

The corresponding error in the gradient is obtained using the triangle inequality, after adding and subtracting ∇˜→X ⋅ ∇ϕ(→X,t), i.e. the Jacobian of the approximate advection step times the gradient of ϕ at the exact characteristic point at time t. For better transparency of the formulas, we omit here the arguments of →X and ˜→X, which are always evaluated at (→x,t+Δt,t).

 |∇˜→X ⋅ ∇ϕ(˜→X,t) − ∇ϕ(→x,t+Δt)|
  ≤ |∇˜→X ⋅ ∇ϕ(˜→X,t) − ∇˜→X ⋅ ∇ϕ(→X,t)| + |∇˜→X ⋅ ∇ϕ(→X,t) − ∇→X ⋅ ∇ϕ(→X,t)|
  = |∇˜→X ⋅ (∇ϕ(˜→X,t) − ∇ϕ(→X,t))| + |(∇˜→X − ∇→X) ⋅ ∇ϕ(→X,t)|
  ≤ |∇˜→X| |D²ϕ| |˜→X − →X| + |∇(˜→X − →X)| |∇ϕ(→X,t)| = O(Δt⁴), (24)
where we used ∇ϕ(→x,t+Δt) = ∇→X ⋅ ∇ϕ(→X,t), together with |˜→X − →X| = O(Δt⁴) and |∇(˜→X − →X)| = O(Δt⁴).

Equation (24) yields the error in the gradient, obtained for a function that is defined everywhere. Due to Def. 5, this is exactly the error in →ψ in any superconsistent scheme.

As proved in Thm. 4, the projection of a smooth function incurs a fourth order error in the function values, and a third order error in the gradients

 |Pϕ(→x) − ϕ(→x)| = O(h⁴), (25)
 |∇(Pϕ)(→x) − ∇ϕ(→x)| = O(h³). (26)

Starting with the exact solution ϕ(⋅,t), we denote the approximately advected function

 ϕ*(→x) = A_{t,t+Δt} ϕ(→x,t) = ϕ(˜→X(→x,t+Δt,t),t).

Applying the projection operator to ϕ* then yields one step of the numerical scheme. Adding and subtracting the approximately advected solution, using the triangle inequality, and using the estimates (23) and (25), we obtain the error in the function values

 |(Pϕ*)(→x) − ϕ(→x,t+Δt)| ≤ |(Pϕ*)(→x) − ϕ*(→x)| + |ϕ(˜→X(→x,t+Δt,t),t) − ϕ(→x,t+Δt)| = O(h⁴) + O(Δt⁴).

Hence, with Δt = O(h), one step is fourth order accurate in ϕ. Similarly, the estimates (24) and (26) yield that one step of the numerical scheme is third order accurate in →ψ. ∎

When taking multiple steps of the scheme, the approximate advection and projection operators are applied iteratively. Taking a step from a function that approximates the true solution with fourth order accuracy in the function values and third order accuracy in the gradient values exactly preserves these accuracies, because Thm. 4 applies to the projection. Thus the gradient-augmented CIR method is consistent, with a local truncation error that is fourth order in ϕ and third order in →ψ.
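The projection error rates (25) and (26) can be verified empirically. The sketch below measures the cell-wise cubic Hermite interpolation error for ϕ(x) = sin x at two resolutions; the observed convergence orders should be close to four in the function values and three in the derivative. The standard Hermite basis is assumed here, not quoted from (5).

```python
import numpy as np

def hermite_errors(n):
    """Max errors of the cell-wise cubic Hermite projection of sin(x)
    on [0, 2*pi], sampled at interior points of each cell; returns
    (value error, derivative error)."""
    h = 2*np.pi/n
    xg = h*np.arange(n + 1)
    phi, psi = np.sin(xg), np.cos(xg)          # node values and derivatives
    s = np.linspace(0.05, 0.95, 19)            # sample points per cell
    f  = lambda s: 1 - 3*s**2 + 2*s**3         # Hermite value basis
    g  = lambda s: s*(1 - s)**2                # Hermite slope basis
    fp = lambda s: 6*s**2 - 6*s
    gp = lambda s: 1 - 4*s + 3*s**2
    ev, ed = 0.0, 0.0
    for j in range(n):
        x = xg[j] + s*h
        w  = phi[j]*f(s) + phi[j+1]*f(1-s) + h*(psi[j]*g(s) - psi[j+1]*g(1-s))
        wp = (phi[j]*fp(s) - phi[j+1]*fp(1-s))/h + psi[j]*gp(s) + psi[j+1]*gp(1-s)
        ev = max(ev, np.max(np.abs(w - np.sin(x))))
        ed = max(ed, np.max(np.abs(wp - np.cos(x))))
    return ev, ed
```

Halving h should reduce the value error by roughly a factor 16 and the derivative error by roughly a factor 8, matching (25) and (26).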

Remark 3.

Observe that for superconsistent schemes, such as the one given in Example 3, the approximate advection itself propagates the gradients with fourth order accuracy, while the projection yields (and requires) only a third order accurate approximation of the gradients. Therefore, it can be more efficient to solve the gradient evolution equation (16) with a second order Runge-Kutta scheme, without sacrificing accuracy.

Remark 4.

The orders of accuracy derived above are achieved for smooth functions with derivatives up to fourth order. Note that level set functions are frequently signed distance functions (2), and as such not differentiable along certain sets of measure zero. In cells that contain such “ridges” of the level set function, the accuracy of the scheme typically drops to first order. For structures that are multiple grid cells wide, this drop in accuracy does not affect the evolution of the zero contour. In contrast, structures of subgrid size may be evolved only with first order accuracy, which is of course still better than losing them completely. The situation shown in Fig. 2 visualizes this effect. Note further that even near ridges, the presented numerical scheme does not create spurious oscillations in the first derivative. In 1D, this follows since the Hermite cubic projection minimizes the L²-norm of the second derivative (see Rem. 1). Although this simple principle does not transfer directly to 2D or 3D, similar stability properties as in the 1D case are observed (see Sect. 6).

5.4. Stability

Here we investigate the stability of the method, i.e. that integration over a fixed time interval [0,T], with Δt = O(h), does not lead to a blow up of either ϕ or →ψ. Proving stability for the gradient-augmented advection scheme is difficult, since the application of the projection P to a smooth function can increase both the function values (because of the effect of the gradients at the grid points), and the gradients (a cubic can have steeper slopes than a given smooth function with the same values and gradients at the grid points). The following theorem shows the stability of the method in the constant coefficient case in one space dimension.

Theorem 6 (stability).

The presented gradient-augmented method, considered in one space dimension and for a constant velocity field, is stable.

Proof.

Consider the 1D advection equation ϕ_t + u ϕ_x = 0 with u constant. We choose Δt such that |u|Δt ≤ h, and WLOG assume u ≤ 0, so that the foot of each characteristic lies in the cell to the right of the grid point. Since the scheme is gradient-augmented, the state vector contains both function values and derivatives. Let ϕ^n_j denote the approximate solution, and ψ^n_j denote the approximate derivative, both at the grid point x_j and time t_n. One step of the generalized CIR scheme is obtained by applying (8) and (9)

 ϕ^{n+1}_j = ϕ^n_j f(ξ) + ϕ^n_{j+1} f(1−ξ) + h (ψ^n_j g(ξ) − ψ^n_{j+1} g(1−ξ)),
 ψ^{n+1}_j = (1/h) (ϕ^n_j f′(ξ) − ϕ^n_{j+1} f′(1−ξ)) + ψ^n_j g′(ξ) + ψ^n_{j+1} g′(1−ξ), (27)

where ξ = −uΔt/h ∈ [0,1], while f and g are the basis functions (5). Note that, since the velocity field is constant, the gradient is also constant along the characteristics. In the special case ξ = 1, the scheme becomes

 ϕ^{n+1}_j = ϕ^n_{j+1}  and  ψ^{n+1}_j = ψ^n_{j+1},

i.e. it is decoupled and exact, and thus stable. Hence, in the following, we restrict to the case 0 ≤ ξ < 1.
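The Fourier-space analysis can be previewed numerically: for each wavenumber, one step of (27) acts on the pair (ˆϕ, hˆψ) as a 2×2 amplification matrix that is independent of h. The following sketch (assuming the standard Hermite basis, and the placement of the h-factors in (27) as an assumption) scans its spectral radius over ξ ∈ [0,1] and θ = kh ∈ [0,2π), which is consistent with the stability claimed in Thm. 6.

```python
import numpy as np

# Standard cubic Hermite basis and derivatives (assumed to match (5)).
f  = lambda s: 1 - 3*s**2 + 2*s**3
g  = lambda s: s*(1 - s)**2
fp = lambda s: 6*s**2 - 6*s
gp = lambda s: 1 - 4*s + 3*s**2

def amplification(xi, theta):
    """Per-mode amplification matrix of (27): with phi^n_j = a e^{i k j h}
    and h psi^n_j = b e^{i k j h}, one step maps (a, b) -> G(xi, theta)(a, b),
    where theta = k h and e^{i theta} shifts j -> j+1."""
    e = np.exp(1j*theta)
    return np.array([[f(xi) + f(1 - xi)*e,   g(xi) - g(1 - xi)*e],
                     [fp(xi) - fp(1 - xi)*e, gp(xi) + gp(1 - xi)*e]])

def max_spectral_radius(m=100):
    # Scan the spectral radius over a grid of (xi, theta) values.
    rho = 0.0
    for xi in np.linspace(0.0, 1.0, m):
        for theta in np.linspace(0.0, 2*np.pi, m, endpoint=False):
            lam = np.linalg.eigvals(amplification(xi, theta))
            rho = max(rho, np.max(np.abs(lam)))
    return rho
```

At ξ = 0 the matrix is the identity and at ξ = 1 it is e^{iθ} times the identity, so the scan should return a value of 1 (to machine precision), with all interior modes non-amplifying.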

Since we are investigating a linear partial differential equation with constant coefficients, we can apply von Neumann stability analysis, and analyze the stability of (27) in Fourier space. We consider the Fourier transform of the equations, and rewrite them in terms of the Fourier coefficients ˆϕ and ˆψ, which are related to ϕ and