
# Non-uniform Discontinuous Galerkin Filters via Shift and Scale

Dang-Manh Nguyen and Jörg Peters
###### Abstract

Convolving the output of Discontinuous Galerkin (DG) computations with symmetric Smoothness-Increasing Accuracy-Conserving (SIAC) filters can improve both smoothness and accuracy. To extend convolution to the boundaries, several one-sided spline filters have recently been developed. We interpret these filters as instances of a general class of position-dependent spline filters that we abbreviate as PSIAC filters. These filters may have a non-uniform knot sequence and may leave out some B-splines of the sequence.

For general position-dependent filters, we prove that rational knot sequences result in rational filter coefficients. We derive symbolic expressions for prototype knot sequences, typically integer sequences that may include repeated entries and corresponding B-splines, some of which may be skipped. Filters for shifted or scaled knot sequences are easily derived from these prototype filters so that a single filter can be re-used in different locations and at different scales. Moreover, the convolution itself reduces to executing a single dot product making it more stable and efficient than the existing approaches based on numerical integration. The construction is demonstrated for several established and one new boundary filter.

## 1 Introduction

Since the output of Discontinuous Galerkin (DG) computations often captures higher-order moments of the true solution [ML78], postprocessing DG output by convolution can improve both smoothness and accuracy [BS77, CLSS03]. In the interior of the domain of computation, symmetric Smoothness-Increasing Accuracy-Conserving (SIAC) filters have been demonstrated to provide optimal accuracy [CLSS03]. However, the symmetric footprint precludes using these filters near boundaries of the computational domain.

To address this problem, Ryan and Shu [RS03] pioneered the use of one-sided spline filters. The Ryan-Shu filters improve the error, but not necessarily the point-wise error. In fact, the filters are observed to increase the point-wise error near the boundary, motivating the design of the SRV filter [vSRV11], a filter kernel of increased support. Due to near-singular calculations, a stable numerical derivation of the SRV filter requires quadruple precision. Indeed, the coefficients of the boundary filters [RS03, vSRV11, MRK12, RLKV15, MRK15] are computed by inverting a matrix whose entries are computed by Gaussian quadrature; and, as pointed out in [RLKV15], SRV filter matrices are close to singular. [RLKV15] therefore introduced the RLKV filter, which augments the filter of [RS03] by a single additional B-spline. This improves stability and retains the support size. However, RLKV filter errors on canonical test problems are higher than those of symmetric filters, they have sub-optimal superconvergence rates [RLKV15], and they approximate derivatives more poorly [LRKV16] than SRV filters. Consequently, SRV filters merit attention if their stability can be improved.

The contribution of this paper is to reinterpret the published one-sided filters in the framework of position-dependent spline filters with general knot sequences [Pet15] (which were inspired by the partial symbolic expression of coefficients for uniform knot sequences in [MRK15]). This reinterpretation allows expressing and pre-solving them in symbolic form, preempting instability so that these filters can reach their full potential. Specifically, this paper

• proves general properties of SIAC filters, when their knots are shifted or scaled;

• uses the properties to express the kernels in a factored, semi-explicit form that becomes explicit for given knot patterns and yields rational coefficients for rational knot sequences;

• characterizes a class of position-dependent SIAC filters, short PSIAC filters, based on splines with general knot sequences; the coefficients of PSIAC filters are polynomial expressions in the position;

• shows that SRV and RLKV [RS03, vSRV11, MRK12, RLKV15, MRK15] are PSIAC filters; and

• illustrates the general framework by comparing the established numerical approach with the new symbolic filter derivation and by adding a new effective, stably-computed filter of lower degree than SRV or RLKV.

The new characterization allows one, in standard double precision, to replace the current three-step numerical approach (approximate computation of the matrix by Gauss quadrature, its inversion, and its application to the data) by the single-step symbolic approach derived in Theorem 4.1. Fig. 1 contrasts, for standard double precision, the noisy error of numerical SRV filtering with the error of the new symbolic formulation.

Not only is the symbolic approach more stable, but

• scaled and shifted versions of the filter are easily obtained from one symbolic prototype filter;

• computation is more efficient: filtering the DG output reduces to a single dot product of two vectors of small size;

• computing approximate derivatives of the filtered DG output [RSA05] (see also [Tho77, RC09, LRKV16]) is stabilized and simplified, reducing to differentiation of the polynomial representation of the filtered output; and

• proves the smoothness of the so-filtered DG output to be $C^\infty$.

The last point is of interest since [RS03, vSRV11] observed and conjectured that the smoothness of the filtered DG computation is the same as the smoothness in the domain interior where the symmetric filter is applied. Not only does this conjecture hold true, but Theorem 4.1 implies that the smoothness is $C^\infty$.

#### Organization.

Section 2 introduces the canonical test equation, B-splines, convolution, and a generalization of the formula of [Pet15] for convolution with splines based on arbitrary knot sequences. Section 3 reformulates the generalized convolution formula and derives the convolution coefficients when the knot sequence is a scaled or shifted copy of a prototype sequence. Section 4, in particular Theorem 4.1, summarizes the resulting efficient convolution with PSIAC filters based on scaled and shifted copies of a prototype sequence. Section 5 shows that SRV and RLKV are PSIAC filters and demonstrates their true potential by comparing their computation via the established numerical approach with the more stable symbolic approach. To illustrate the generality of the setup, a new multiple-knot linear filter is added to the picture.

## 2 Background

Here we establish the notation, distinguishing between filters and DG output; explain the canonical test problem, the DG method, B-splines, and reproducing filters; and review one-sided and position-dependent SIAC filters in the literature.

### 2.1 Notation

Here and in the following, we abbreviate the sequences

$$i{:}j:=\begin{cases}(i,i+1,\dots,j-1,j), & \text{if } i\le j,\\ (i,i-1,\dots,j+1,j), & \text{if } i>j,\end{cases}\qquad s_{i:j}:=(s_i,\dots,s_j).$$

We denote by $f*g$ the convolution of a function $f$ with a function $g$, i.e.

$$(f*g)(x):=\int_{\mathbb{R}}f(t)\,g(x-t)\,dt=(g*f)(x),$$

for every $x$ where the integral exists, and we use the terms filter, filter kernel and kernel interchangeably, even though, strictly speaking, filtering means convolving a function with a kernel. We reserve the following symbols:

The notation is illustrated by the following example.

###### Example 2.1.

Consider a degree-one spline filter, i.e. , defined over the knot sequence and associated with the index set . That is, the two B-splines defined over the knot sequences and are omitted, , and . A linear () DG output on 200 uniform segments of the interval implies , and .

### 2.2 The canonical test problem, the Discontinuous Galerkin method and B-splines

To demonstrate the performance of the filters on a concrete example, [RS03] used the following univariate hyperbolic wave equation:

$$\frac{du}{d\tau}+\frac{d}{dx}\big(\kappa(x,\tau)\,u\big)=\rho(x,\tau),\qquad x\in(a,b),\ \tau\in(0,\tilde\tau), \tag{1}$$
$$u(x,0)=u_0(x),\qquad x\in[a,b],$$

subject to periodic boundary conditions, , or Dirichlet boundary conditions where, depending on the sign of , is either or . Subsequent work [RS03, vSRV11, RLKV15] adopted the same differential equation to test their new one-sided filters and to compare to the earlier work. Eq. (1) is therefore considered the canonical test problem. We note, however, that SIAC filters apply more widely, for example to FEM and elliptic equations [BS77].

In the DG method, the domain is partitioned into intervals by a sequence of break points . Assuming that the sequence is rational, introducing will allow us later to consider the prototype sequence of integers. The DG method approximates the time-dependent solution to Eq. (1) by

$$u(x,\tau):=\sum_{i=0}^{m}u_i(\tau)\,\phi_i(x;h\,s_{0:m}), \tag{2}$$

where the scalar-valued functions , , are linearly independent and satisfy the scaling relations

$$\phi_i(hx;h\,s_{0:m})=\phi_i(x;s_{0:m}). \tag{3}$$

Examples of such functions are non-uniform B-splines [dB05], Chebyshev polynomials and Lagrange polynomials. Relation (3) is typically used for refinement in FEM, DG or Iso-geometric PDE solvers.

Setting , the weak form of Eq. (1) for ,

$$\int_a^b\Big(\frac{du}{d\tau}v-\kappa(x,\tau)\,u\,\frac{dv}{dx}\Big)dx=\int_a^b\rho(x,\tau)\,v\,dx-\big(\kappa(x,\tau)\,u(x,\tau)\,v(x)\big)\Big|_{x=a}^{x=b}, \tag{4}$$

forms a system of ordinary differential equations with the coefficients , , as unknowns.

The goal of SIAC filtering is to smooth in by convolution in with a linear combination of B-splines. Typically filtering is applied after the last time step when . Following [Pet15], we characterize B-splines in terms of divided differences [CS66]. For a sufficiently smooth univariate real-valued function $g$ with $n$th derivative $g^{(n)}$, divided differences are defined by

$$\Delta(t_i)\,g:=g(t_i),\quad\text{and for } j>i\quad \Delta(t_{i:j})\,g:=\begin{cases}\dfrac{\Delta(t_{i+1:j})\,g-\Delta(t_{i:j-1})\,g}{t_j-t_i}, & \text{if } t_i\ne t_j,\\[4pt] \dfrac{1}{(j-i)!}\,g^{(j-i)}(t_i), & \text{if } t_i=t_j.\end{cases} \tag{5}$$
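The recursion (5) translates directly into code. The following minimal Python sketch (my own, not from the paper) implements the distinct-knot branch; the repeated-knot branch would additionally require derivatives of $g$:

```python
# Minimal sketch of the recursive divided-difference definition, Eq. (5),
# assuming all knots are distinct (repeated knots would need g's derivatives).
def divdiff(ts, g):
    """Divided difference of g over the knot sequence ts (pairwise distinct)."""
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

# The (j-i)-th divided difference of x^(j-i) is its leading coefficient, 1:
print(divdiff([0.0, 1.0, 2.0], lambda x: x ** 2))  # 1.0
```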

If $t_{i:j}$ is a non-decreasing sequence, we call its elements knots, and the classical definition of the B-spline of degree $k:=j-i-1$ with knot sequence $t_{i:j}$ is

$$B(x\,|\,t_{i:j}):=(t_j-t_i)\,\Delta(t_{i:j})\big(\max\{\cdot-x,0\}\big)^{k}. \tag{6}$$

Here $\Delta(t_{i:j})$ acts on the function $t\mapsto\max\{t-x,0\}^{k}$ for a given $x$. Consequently, a B-spline is a non-negative piecewise polynomial function in $x$ with support on the interval $[t_i,t_j]$. If $\mu$ is the multiplicity of a number in the sequence $t_{i:j}$, then the B-spline is at least $k-\mu$ times continuously differentiable at that knot. This definition of $B$ agrees, after scaling, with the definition of the B-spline by recurrence [dB02]:

$$N(t\,|\,t_{i:i+k+1})=\frac{t_{i+k+1}-t_i}{k+1}\,B(t\,|\,t_{i:i+k+1}). \tag{7}$$
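Definition (6) and the scaling (7) can be checked on the degree-one (hat-function) case. A small sketch under the assumption of distinct knots and $k\ge 1$; the knot values are illustrative, not from the paper:

```python
def divdiff(ts, g):
    # recursive divided difference, Eq. (5), distinct knots
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

def B(x, ts):
    """B-spline of Eq. (6): degree k = len(ts) - 2 >= 1, distinct knots."""
    k = len(ts) - 2
    return (ts[-1] - ts[0]) * divdiff(ts, lambda t: max(t - x, 0.0) ** k)

def N(x, ts):
    """Classical normalized B-spline via the scaling relation, Eq. (7)."""
    k = len(ts) - 2
    return (ts[-1] - ts[0]) / (k + 1) * B(x, ts)

print(N(0.5, [0, 1, 2]))  # hat function over knots 0,1,2 at x = 0.5 -> 0.5
```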

### 2.3 SIAC filter kernel coefficients

A piecewise polynomial $f$ is said to be a SIAC spline kernel of reproduction degree $r$ if convolution of $f$ with monomials reproduces the monomials up to degree $r$, i.e., if

$$\big(f*(\cdot)^{\delta}\big)(x)=x^{\delta},\qquad \delta=0,\dots,r. \tag{8}$$

Mirzargar et al. [MRK15] derived semi-explicit formulas when the filter has uniform knots, while [Pet15] gives semi-explicit formulas for the coefficients of spline kernels over general knot sequences. The following definition further generalizes these formulas by allowing some B-splines of the sequence to be skipped when constructing the kernel.

###### Definition 2.1 (SIAC spline filter kernel).

Let $J$ be a sequence of strictly increasing integers between $0$ and $n-k-1$. A SIAC spline kernel of degree $k$ and reproduction degree $r$ with index sequence $J$ and knot sequence $t_{0:n}$ is a spline

$$f(x):=\sum_{j\in J}f_j\,B(x\,|\,t_{j:j+k+1}),$$

of degree $k$ with coefficients $f_j$ chosen so that

$$\Big(\sum_{j\in J}f_j\,B(\cdot\,|\,t_{j:j+k+1})*(-\cdot)^{\delta}\Big)(x)=(-x)^{\delta},\qquad \delta=0,\dots,r. \tag{8'}$$

When $J=0{:}(n-k-1)$, i.e. no B-spline is skipped, Definition 2.1 replicates the definition of [Pet15].

###### Lemma 2.1 (SIAC coefficients).

The vector $f$ of B-spline coefficients of the SIAC filter with index sequence $J$ and knot sequence $t_{0:n}$ is

$$f:=\text{first column of } M^{-1}_{0,t,J},\qquad M_{0,t,J}:=\big[\Delta(t_{j:j+k+1})\,x^{k+1+\delta}\big]_{\delta=0:r,\ j\in J}. \tag{9}$$
###### Proof.

We prove invertibility of $M_{0,t,J}$. The remaining claims of the lemma then follow as in [Pet15].

Let $\ell_{0:r}$ be a left null vector of $M_{0,t,J}$, i.e., for all $j\in J$,

$$0=\Delta(t_{j:j+\kappa})\sum_{\delta=0}^{r}\ell_\delta\,x^{\kappa+\delta}=\Delta(t_{j:j+\kappa})\,x^{\kappa}\sum_{\delta=0}^{r}\ell_\delta\,x^{\delta}=\Delta(t_{j:j+\kappa})\,p(x). \tag{10}$$

Let be the interpolant of at , i.e. spanning two consecutive hence overlapping knot sequences. If knots repeat, is a Hermite interpolant. By Rolle’s theorem, the derivative , vanishes at a set of knots interlaced with , i.e. . The inequality is strict unless represents a multiple root. Then (Hermite) interpolates at and by the relation between divided difference and derivatives,

$$\kappa\,\Delta(t_{j:j+\kappa})\,p=\Delta(t^{1}_{j:j+\kappa-1})\,Dp \quad\text{and}\quad \kappa\,\Delta(t_{j+1:j+\kappa+1})\,p=\Delta(t^{1}_{j+1:j+\kappa})\,Dp.$$

Induction yields

$$\kappa!\,\Delta(t_{j:j+\kappa})\,p=\Delta(t^{\kappa-1}_{j:j+1})\,D^{\kappa-1}p, \quad\text{and finally}\quad \kappa!\,\Delta(t_{j:j+\kappa})\,p=D^{\kappa}p(t^{\kappa}_{j}) \quad\text{and}\quad \kappa!\,\Delta(t_{j+1:j+\kappa+1})\,p=D^{\kappa}p(t^{\kappa}_{j+1})$$

for . That is, the th divided difference of each sequence equals the th derivative at points respectively ; and the shift of the subsequence of knots from to implies that where is either strictly to the right of or is a multiple root. Then (10) implies that , a polynomial of degree at most , has roots counting multiplicity, hence is the zero polynomial. Given the factor of , this can only hold if , i.e.  has no non-trivial left null vector and, as a square matrix, is invertible.

Since all sequences , can be obtained by repeated shifts to the right, the conclusion holds for and is well-defined. ∎
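Lemma 2.1 can be exercised end to end on a small hypothetical configuration (degree $k=1$, reproduction degree $r=2$, integer knots $0{:}4$, index set $J=(0,1,2)$; these parameters are my own choice, not an example from the paper): assemble $M$ from divided differences, take the first column of $M^{-1}$, and confirm the reproduction property (8) by piecewise Gauss-Legendre quadrature.

```python
import numpy as np

def divdiff(ts, g):
    # recursive divided difference, Eq. (5), distinct knots
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

def B(x, ts):
    # B-spline of Eq. (6), degree k = len(ts) - 2 >= 1, distinct knots
    k = len(ts) - 2
    return (ts[-1] - ts[0]) * divdiff(ts, lambda t: max(t - x, 0.0) ** k)

k, r, t, J = 1, 2, [0.0, 1.0, 2.0, 3.0, 4.0], [0, 1, 2]
# reproduction matrix of Eq. (9): M[delta][j] = divided difference of x^(k+1+delta)
M = np.array([[divdiff(t[j:j + k + 2], lambda x, d=d: x ** (k + 1 + d))
               for j in J] for d in range(r + 1)])
f = np.linalg.solve(M, np.eye(r + 1)[:, 0])     # first column of M^{-1}

# check Eq. (8): convolving the kernel with monomials reproduces them
nodes, weights = np.polynomial.legendre.leggauss(8)
def convolve_monomial(x, delta):
    total = 0.0
    for a in range(4):                          # integrate piecewise over [0, 4]
        s = 0.5 * (nodes + 1) + a               # Gauss nodes mapped to [a, a+1]
        kernel = [sum(f[j] * B(si, t[j:j + k + 2]) for j in J) for si in s]
        total += 0.5 * sum(w * kv * (x - si) ** delta
                           for w, kv, si in zip(weights, kernel, s))
    return total

print([round(convolve_monomial(0.3, d), 10) for d in range(r + 1)])  # [1.0, 0.3, 0.09]
```

The quadrature is exact here because the integrand is piecewise polynomial of low degree; the point of the sketch is only to make the reproduction condition concrete.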

### 2.4 Review of Symmetric and Boundary SIAC filters

We split the DG data at any known discontinuities and treat the domains separately. Then convolution can be applied throughout a given closed interval .

A SIAC spline kernel with knot sequence $t_{0:n}$ is symmetric (about the origin) if $t_j=-t_{n-j}$ for $j=0{:}n$. Convolution with a symmetric SIAC kernel of a function at $x$ then requires the function to be defined in a two-sided neighborhood of $x$. Near boundaries, Ryan and Shu [RS03] therefore suggested convolving the DG output with a one-sided kernel whose support is shifted to one side of the origin: for $x$ near the left domain endpoint $a$, the one-sided SIAC kernel is defined over a knot sequence to one side of the origin, where $d$ is the degree of the DG output. The Ryan-Shu position-dependent one-sided kernel yields optimal convergence, but its point-wise error near $a$ can be larger than that of the DG output.

In [vSRV11], Ryan et al. improved the one-sided kernel by increasing its monomial reproduction from degree to degree . This one-sided kernel reduces the boundary error when but the kernel support is increased by additional knot intervals and numerical roundoff requires high precision calculations to determine the kernel’s coefficients. ([vSRV11] did not draw conclusions for degrees .)

Ryan-Li-Kirby-Vuik [RLKV15] suggested an alternative position-dependent one-sided kernel that has the same support size as the symmetric kernel and whose reproduction degree is higher by only one. The new idea is that the spline space defining the kernel is enriched by one B-spline. The new kernel computation is stable up to degree in double precision. When the RLKV kernel's point-wise error on the canonical test problem is as low as that of the symmetric SIAC kernel that is applied in the interior. However, when , the RLKV error is higher than that of the symmetric kernel.

In [RS03, vSRV11, MRK12, RLKV15] convolution with position-dependent one-sided kernels is computed as follows. For each domain position ,

• Calculate kernel coefficients:
compute the entries of the position-dependent reproduction matrix by Gaussian quadrature; solve a corresponding linear system for the kernel coefficients to match the monomials to be reproduced, collected in the vector . As pointed out in [RLKV15], may be close to singular (for example, when for the SRV kernels) so that higher numerical precision (e.g. quadruple precision) is required to assemble and solve the linear system.

• Convolve the kernel with the DG output by Gaussian quadrature.

Note that, unlike the (position-independent) classical symmetric SIAC filter, the position-dependent boundary kernel coefficients have to be determined afresh for each point .

## 3 Coefficients of shifted and scaled filters

To calculate filters for DG output more efficiently and more stably, we reformulate Lemma 2.1 in multi-index notation:

$$t^{\omega}_{0:n}:=t_0^{\omega_0}\cdots t_n^{\omega_n}\qquad\text{and}\qquad |t_{0:n}|:=\sum_{j=0}^{n}|t_j|$$

as follows.

###### Lemma 3.1 (SIAC reproduction matrix).

The matrix $M_{0,t,J}$ of Eq. (9) has the alternative form

$$M:=M_{0,t_{0:n},J}=\Big[\sum_{|\omega|=\delta}t^{\omega}_{j:j+k+1}\Big]_{\delta=0:r,\ j\in J}. \tag{11}$$
###### Proof.

Applying Steffensen’s formula [dB05, Eq. (27)] (the index in [dB05, Eq. (27)] is misprinted), the divided differences of monomials in (9) can be rewritten as (11). ∎
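Eq. (11) says that each entry of $M$ is a complete homogeneous symmetric polynomial of the knots. A quick numerical cross-check of this against the divided-difference definition (9), with illustrative knot values of my own:

```python
# Check Eq. (11): the divided difference of x^(k+1+delta) over k+2 knots equals
# the sum of all knot monomials t^omega with |omega| = delta (the complete
# homogeneous symmetric polynomial h_delta). Knot values are illustrative.
from itertools import combinations_with_replacement
from math import prod

def divdiff(ts, g):
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

def h_complete(ts, delta):
    # sum of all monomials of total degree delta in the knots
    return float(sum(prod(c) for c in combinations_with_replacement(ts, delta)))

knots, k = [1.0, 2.0, 4.0], 1                    # k + 2 = 3 knots
for delta in range(4):
    lhs = divdiff(knots, lambda x, e=k + 1 + delta: x ** e)
    rhs = h_complete(knots, delta)
    assert abs(lhs - rhs) < 1e-9, (delta, lhs, rhs)
print("Eq. (11) verified for the sampled knots")
```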

Denote the reproduction matrix associated with the knot sequence shifted by $\xi$ as

$$M_{\xi}:=\Big[\sum_{|\omega|=\delta}(t_{j:j+k+1}+\xi)^{\omega}\Big]_{\delta=0:r,\ j\in J}. \tag{12}$$

Since each entry of $M_\xi$ is a polynomial in $\xi$, the determinant of $M_\xi$ is a polynomial in $\xi$. Using for example Cramer's rule, the entries of $M_\xi^{-1}$ are rational functions in $\xi$ whose numerator and denominator are polynomials in $\xi$. Since the convolution coefficients are the entries of the first column of $M_\xi^{-1}$, it is remarkable that we can show that the coefficients are not rational but polynomial in $\xi$, of degree at most $r$.

To prove this claim, we employ the following technical result that generalizes a binomial identity to multi-indices.

###### Lemma 3.2 (technical lemma).

We abbreviate $\binom{\alpha}{a}:=\prod_{i}\binom{\alpha_i}{a_i}$ and write $\alpha\ge a$ to indicate that, for each component, $\alpha_i\ge a_i$. Then for multi-indices $a$ and $\alpha$ with $k+2$ components,

$$\sum_{\alpha\ge a,\,|\alpha|=\delta}\binom{\alpha}{a}=\binom{\delta+k+1}{|a|+k+1}=\binom{\delta+k+1}{\delta-|a|}. \tag{13}$$
###### Proof.

By the Maclaurin expansion $\frac{1}{(1-x)^{k+1}}=\sum_{\ell\ge 0}\binom{\ell+k}{k}x^{\ell}$, $|x|<1$, we see that

$$\frac{x^{k}}{(1-x)^{k+1}}=\sum_{\ell\ge k}\binom{\ell}{k}x^{\ell},\qquad |x|<1. \tag{14}$$

Applying Eq. (14) to both sides of the identity

$$x^{k+1}\,\frac{x^{a_0}}{(1-x)^{a_0+1}}\cdots\frac{x^{a_{k+1}}}{(1-x)^{a_{k+1}+1}}=\frac{x^{|a|+k+1}}{(1-x)^{|a|+k+2}}, \tag{15}$$

and dividing both sides by $x^{k+1}$, we see that

$$\sum_{\alpha\ge a}\binom{\alpha}{a}x^{|\alpha|}=\sum_{\ell\ge|a|+k+1}\binom{\ell}{|a|+k+1}x^{\ell-k-1}. \tag{16}$$

Selecting the coefficients of $x^{\delta}$ from both sides of Eq. (16) yields Eq. (13). ∎
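The identity (13) can be confirmed by brute force for small parameters; the sampled multi-indices below are my own choices, not data from the paper:

```python
# Brute-force check of identity (13): the sum over alpha >= a with |alpha| = delta
# of prod_i binom(alpha_i, a_i) equals binom(delta+k+1, |a|+k+1).
from itertools import product
from math import comb

def check(a, delta):
    k = len(a) - 2                      # a has k + 2 components
    lhs = 0
    for alpha in product(range(delta + 1), repeat=len(a)):
        if sum(alpha) == delta and all(x >= y for x, y in zip(alpha, a)):
            term = 1
            for x, y in zip(alpha, a):
                term *= comb(x, y)
            lhs += term
    return lhs, comb(delta + k + 1, sum(a) + k + 1)

for a in [(0, 0), (1, 0), (1, 1, 2)]:
    for delta in range(sum(a), sum(a) + 4):
        lhs, rhs = check(a, delta)
        assert lhs == rhs, (a, delta, lhs, rhs)
print("identity (13) verified for the sampled cases")
```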

###### Lemma 3.3 (Reproduction matrix for shifted knots).

The matrices $M_\xi$ and $M$ are related by

$$M_{\xi}=P_{k,r}(\xi)\,M,\qquad P_{k,r}(\xi):=\Big[\binom{\delta+k+1}{\delta-\beta}\,\xi^{\delta-\beta}\Big]_{\delta=0:r,\ \beta=0:r}. \tag{17}$$

(In (17), only the non-zero entries $\beta\le\delta$ contribute, so $P_{k,r}(\xi)$ is lower triangular.) Note that $P_{k,r}(\xi)$ is the result of deleting the first $k+1$ rows and columns of the lower triangular Pascal matrix of order $r+k+2$.

###### Proof.

Abbreviating $t_j:=t_{j:j+k+1}$, the $(\delta,j)$ entry of $M_\xi$ defined by Eq. (12) is

$$M_{\xi}(\delta,j)=\sum_{|\alpha|=\delta}\ \sum_{0\le\ell\le\alpha}\binom{\alpha}{\ell}\xi^{|\ell|}\,t_j^{\alpha-\ell}=\sum_{|\alpha|=\delta}\sum_{\beta=0}^{\delta}\xi^{\beta}\!\!\sum_{|\ell|=\beta,\,0\le\ell\le\alpha}\!\!\binom{\alpha}{\ell}t_j^{\alpha-\ell}=\sum_{\beta=0}^{\delta}\xi^{\beta}\sum_{|a|=\delta-\beta}t_j^{a}\sum_{|\alpha|=\delta,\,\alpha\ge a}\binom{\alpha}{a}. \tag{18}$$

The last equality of Eq. (18) follows by substituting $a:=\alpha-\ell$. Applying Lemma 3.2 to Eq. (18) and noting that $|a|=\delta-\beta$, we see that

$$M_{\xi}(\delta,j)=\sum_{\beta=0}^{\delta}\binom{\delta+k+1}{\beta}\,\xi^{\beta}\,M(\delta-\beta,j). \tag{19}$$

Eq. (19) is the expanded form of Eq. (17). ∎
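The expanded shift relation (19) is easy to test numerically, using the symmetric-polynomial form (11) of the matrix entries; the knots, $k$ and $\xi$ below are illustrative values of my own:

```python
# Check of Eq. (19): M_xi(delta, j) = sum_beta C(delta+k+1, beta) xi^beta M(delta-beta, j),
# with M(delta, j) the complete homogeneous symmetric polynomial h_delta of the knots.
from itertools import combinations_with_replacement
from math import comb, prod

def h_complete(ts, delta):
    # complete homogeneous symmetric polynomial h_delta of the knots
    return float(sum(prod(c) for c in combinations_with_replacement(ts, delta)))

knots, k, xi = (1.0, 2.0, 4.0), 1, 0.7          # k + 2 = 3 knots
for delta in range(4):
    lhs = h_complete([t + xi for t in knots], delta)
    rhs = sum(comb(delta + k + 1, beta) * xi ** beta * h_complete(knots, delta - beta)
              for beta in range(delta + 1))
    assert abs(lhs - rhs) < 1e-9, (delta, lhs, rhs)
print("shift relation (19) verified for the sampled parameters")
```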

Next we consider the effect of scaling knots, as might be done to refine a DG computation.

###### Lemma 3.4 (Reproduction matrix for scaled knots).

Let $\mathrm{diag}(v)$ denote the square matrix with diagonal $v$ and zero otherwise. Then for $h\ne 0$,

$$M^{-1}_{0,ht,J}=M^{-1}_{0,t,J}\,\mathrm{diag}(h^{-(0:r)}),\qquad\text{where } h^{-(0:r)}:=[1,h^{-1},\dots,h^{-r}]. \tag{20}$$
###### Proof.

By Lemma 3.1, multiplying the $\delta$-th row of $M_{0,t,J}$ by $h^{\delta}$ yields the $\delta$-th row of $M_{0,ht,J}$, and hence

$$M_{0,ht,J}=\mathrm{diag}([1,h,\dots,h^{r}])\,M_{0,t,J},$$

which is equivalent to (20). ∎
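The scaling relation behind (20), namely that row $\delta$ of $M_{0,ht,J}$ equals $h^{\delta}$ times row $\delta$ of $M_{0,t,J}$, can be checked entrywise; the knot values below are my own:

```python
# Check of the scaling relation: the divided difference of x^(k+1+delta) over
# the scaled knots h*t equals h^delta times the divided difference over t.
def divdiff(ts, g):
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

knots, k, h = [0.0, 1.0, 3.0], 1, 0.25
for delta in range(4):
    m_t  = divdiff(knots, lambda x, e=k + 1 + delta: x ** e)
    m_ht = divdiff([h * t for t in knots], lambda x, e=k + 1 + delta: x ** e)
    assert abs(m_ht - h ** delta * m_t) < 1e-9, (delta, m_ht, m_t)
print("scaling relation verified")
```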

Altogether, we obtain the following semi-explicit formula for the filter coefficients.

###### Theorem 3.1 (Scaled and shifted SIAC coefficients).

The SIAC filter coefficients associated with the knot sequence $h\,t_{0:n}+\xi$ are polynomials of degree $r$ in $\xi$:

$$f_{\xi}:=[f_{\xi,\ell}]_{\ell=0:r}=M^{-1}_{0,t,J}\,\mathrm{diag}\Big(\Big[(-1)^{\ell}\tbinom{\ell+k+1}{\ell}\Big]_{\ell=0:r}\Big)\,\mathrm{diag}(h^{-(0:r)})\,(\xi^{0:r})^{t}. \tag{21}$$
###### Proof.

By Lemma 3.3 and Lemma 3.4, the matrix $M_\xi$ corresponding to the scaled and shifted knot sequence $h\,t_{0:n}+\xi$ has the inverse

$$M_{\xi}^{-1}=M^{-1}_{0,t,J}\,\mathrm{diag}(h^{-(0:r)})\,\big(P_{k,r}(\xi)\big)^{-1}. \tag{22}$$

According to [ES04], $\big(P_{k,r}(\xi)\big)^{-1}=P_{k,r}(-\xi)$ and hence the first column of $\big(P_{k,r}(\xi)\big)^{-1}$ is $\big[(-1)^{\ell}\tbinom{\ell+k+1}{\ell}\xi^{\ell}\big]_{\ell=0:r}$. Since Lemma 2.1 requires only the first column of $M_{\xi}^{-1}$, we replace, in (22), $\big(P_{k,r}(\xi)\big)^{-1}$ by its first column and obtain

$$f_{\xi}=M^{-1}_{0,t,J}\,\mathrm{diag}(h^{-(0:r)})\,\mathrm{diag}\Big(\Big[(-1)^{\ell}\tbinom{\ell+k+1}{\ell}\Big]_{\ell=0:r}\Big)\,(\xi^{0:r})^{t}. \tag{23}$$

Eq. (21) follows, because the diagonal matrices commute. ∎

Compared to (23), formula (21) has the advantage that it groups together two matrices that can be pre-computed independently of $h$ and $\xi$.
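Theorem 3.1 can be verified on a small hypothetical instance (again $k=1$, $r=2$, prototype knots $0{:}4$; $h$ and $\xi$ are arbitrary values of my own): the first column of the inverse reproduction matrix for the knots $h\,t+\xi$, computed directly, matches the pre-factored formula (21).

```python
# Sketch verifying Theorem 3.1: coefficients for the knots h*t + xi computed
# directly coincide with the polynomial formula (21) built from the prototype.
import numpy as np
from math import comb

def divdiff(ts, g):
    if len(ts) == 1:
        return g(ts[0])
    return (divdiff(ts[1:], g) - divdiff(ts[:-1], g)) / (ts[-1] - ts[0])

def repro_matrix(t, k, r, J):
    # reproduction matrix of Eq. (9)
    return np.array([[divdiff(t[j:j + k + 2], lambda x, d=d: x ** (k + 1 + d))
                      for j in J] for d in range(r + 1)])

k, r, J = 1, 2, [0, 1, 2]
t = np.arange(5.0)                       # prototype knots 0..4
h, xi = 0.5, 1.3

# direct: build and invert the matrix for the scaled and shifted knots
M_shifted = repro_matrix(list(h * t + xi), k, r, J)
f_direct = np.linalg.solve(M_shifted, np.eye(r + 1)[:, 0])

# formula (21): one prototype inverse, then diagonal scalings and powers of xi
M0_inv = np.linalg.inv(repro_matrix(list(t), k, r, J))
signs = np.diag([(-1) ** l * comb(l + k + 1, l) for l in range(r + 1)])
scale = np.diag([h ** -l for l in range(r + 1)])
f_formula = M0_inv @ signs @ scale @ np.array([xi ** l for l in range(r + 1)])

print(np.allclose(f_direct, f_formula))  # True
```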

The following corollary implies that the kernel coefficients $f_{\xi,\ell}$ can be computed stably, as scaled integers.

###### Corollary 3.1 (Rational SIAC filter coefficients $f_{\xi,\ell}$).

If the knots are rational, then the filter coefficients are polynomials in $\xi$ and $h^{-1}$ with rational coefficients.

###### Proof.

Lemma 3.1 implies that the entries of $M_{0,t,J}$ are rational if the knots are rational. Since the determinant of a matrix with rational entries is rational, for example Cramer's rule implies that the convolution coefficients are rational. ∎

## 4 Position-dependent (PSIAC) filtering

In this section, we first derive a general factored expression for the convolution of PSIAC filters with DG data. Then we specialize the setup to one-sided PSIAC filters when the DG breakpoint sequence is uniform.

### 4.1 Position-dependent (PSIAC) filters

When symmetric SIAC filtering increases the smoothness of the DG output, the result is in general a piecewise function. One may expect the same of any one-sided kernel. However, this section proves that convolution with position-dependent PSIAC filters yields a single polynomial piece over their interval of application. We start by defining position-dependent kernels.

###### Definition 4.1 (PSIAC kernel).

A PSIAC kernel at position has the form

$$f_x(s):=\sum_{j\in J}f_{x;j}\,B(s\,|\,h\,t_{j:j+k+1}+x),\qquad s\in h\,[t_0,t_n]+x. \tag{24}$$

Here we introduced the scaling $h$ in anticipation of using a prototype knot sequence, typically a subset of integers, to create one filter and then to apply its shifted and scaled versions, which can be cheaply computed as explained in the previous section. That is, the DG output will be convolved with a PSIAC kernel of reproduction degree $r$, associated with an index sequence $J$ and defined over the scaled and shifted knot sequence, where the constant $\lambda$ adjusts to the left or right boundary. Substituting in Eq. (24) and changing variables, we obtain an alternative spline representation of the kernel with $x$-dependent coefficients over the shifted and reversed knot sequence:

$$f_{x-h\lambda}(s)=\sum_{j\in J}f_{r-j}\,B\big(x-s\,\big|\,h\lambda-h\,t^{\,r}_{n-j:j}\big),\qquad s\in h[t_0,t_n]+x-h\lambda. \tag{25}$$

Now consider the DG output with break point sequence as in Eq. (2). Convolving with yields, after a change of variable , the filtered DG output

$$(u*f_{x-h\lambda})(x)=\sum_{j\in J}f_{r-j}\int_{h t_0+x-h\lambda}^{h t_n+x-h\lambda}u(x-s,\tau)\,B\big(x-s\,\big|\,h\lambda-h\,t^{\,r}_{n-j:j}\big)\,ds=\sum_{i\in I,\,j\in J}u_i(\tau)\,f_{r-j}\int_{h\lambda-h t_n}^{h\lambda-h t_0}\phi_i(t;h\,s_{0:m})\,B\big(t\,\big|\,h\lambda-h\,t^{\,r}_{n-j:j}\big)\,dt. \tag{26}$$

We note that, in the last expression, the integral no longer depends on $x$. Rather, the coefficient depends on $x$: by Theorem 3.1 it is a polynomial in $x$. This yields the following factored representation of the convolution.

###### Theorem 4.1 (Efficient PSIAC filtering of DG output).

Let $f_x$ be a PSIAC kernel of reproduction degree $r$ with index sequence $J$ and knot sequence $h\,t_{0:n}+x-h\lambda$. Let $u$, Eq. (2), be the DG output. Let $I$ be the set of indices of basis functions with support overlapping that of the kernel. Then the filtered DG approximation is a polynomial in $x$ of degree at most $r$:

$$(u*f_x)(x)=u_I\,Q_{\lambda}\,\mathrm{diag}(h^{-(0:r)})\,\big((x-h\lambda)^{0:r}\big)^{t}, \tag{27}$$
$$u_I:=[u_i(\tau)]_{i\in I},\qquad Q_{\lambda}:=G_{\lambda}\,A\,M^{-1}_{0,t,J}\,\mathrm{diag}\Big(\Big[(-1)^{\ell}\tbinom{\ell+k+1}{\ell}\Big]_{\ell=0:r}\Big),$$
$$G_{\lambda}:=\Big[\int_{\lambda-t_n}^{\lambda-t_0}\phi_i(s;s_{0:m})\,B\big(s\,\big|\,\lambda-t^{\,r}_{n-j:j}\big)\,ds\Big]_{i\in I,\ j\in J}, \tag{28}$$

and $A$ the reversal matrix (1 on the antidiagonal and zero else).

###### Proof.

Let $f_{r:0}$ be the position-dependent coefficients of the parametrized kernel $f_{x-h\lambda}$ arranged in reverse order. We rewrite Eq. (26) as

$$(u*f_{x-h\lambda})(x)=u_I\,G_{h\lambda}\,f_{r:0},\qquad G_{h\lambda}:=\Big[\int_{h(\lambda-t_n)}^{h(\lambda-t_0)}\phi_i(t;h\,s_{0:m})\,B\big(t\,\big|\,h\lambda-h\,t^{\,r}_{n-j:j}\big)\,dt\Big]_{i\in I,\ j\in J}. \tag{29}$$

Since, by Eq. (3), $\phi_i(ht;h\,s_{0:m})=\phi_i(t;s_{0:m})$, the change of variable $t=hs$ yields

$$G_{h\lambda}(i,j)=\int_{\lambda-t_n}^{\lambda-t_0}\phi_i(hs;h\,s_{i:i+d+1})\,B\big(hs\,\big|\,h\lambda-h\,t^{\,r}_{n-j:j}\big)\,h\,ds=G_{\lambda}(i,j).$$

By Theorem 3.1

$$f_{r:0}=A\,f_{0:r}=A\,M^{-1}_{0,t_{0:n},J}\,\mathrm{diag}\Big(\Big[(-1)^{\ell}\tbinom{\ell+k+1}{\ell}\Big]_{\ell=0:r}\Big)\,\mathrm{diag}(h^{-(0:r)})\,\big((x-h\lambda)^{0:r}\big)^{t}. \tag{30}$$

Eq. (29) with Eq. (30) substituted equals Eq. (27). ∎

The factored representation implies that, instead of recomputing the filter coefficients afresh for each point of the convolved output as in the established numerical approach, we simply compute the coefficients corresponding to one prototype knot sequence, scale as needed and, at runtime, pre-multiply with the data and post-multiply with the vector of shifted monomials as stated in Eq. (27).

Increased multiplicity of an inner knot of the SIAC kernel reduces its smoothness, and this, in turn, reduces the smoothness of the filtered output. By contrast, Theorem 4.1 shows that when the PSIAC knots are shifted along with the evaluation point, PSIAC convolution yields a polynomial, i.e. infinite smoothness, regardless of the knot multiplicity. We may view position-dependent filtering as a form of polynomial approximation.

Additionally, the polynomial characterization directly provides a symbolic expression for the derivatives of the convolved DG output.

###### Corollary 4.1 (Derivatives of PSIAC-filtered DG output).
$$\frac{d^{\ell}}{dx^{\ell}}(u*f_x)(x)=u_I\,Q_{\lambda}\,\mathrm{diag}(h^{-(0:r)})\,\Big(\frac{d^{\ell}}{dx^{\ell}}(x-h\lambda)^{0:r}\Big)^{t}. \tag{31}$$

The following corollary shows that for rational DG break points and rational kernel knots, we can pre-compute and store the prototype matrix $Q_\lambda$ stably in terms of integer fractions.

###### Corollary 4.2 (Rational PSIAC convolution coefficients).

With the assumptions and notation of Theorem 4.1, if the basis functions are piecewise polynomials with rational coefficients, the shift $\lambda$ is rational, and the sequences $t_{0:n}$ and $s_{0:m}$ are rational, then the matrix $Q_\lambda$ has rational entries.

###### Proof.

By Lemma 3.1, the reproduction matrix has rational entries. Since the integral of a polynomial with rational coefficients over an interval with rational end points is rational, Eq. (28) implies that $G_\lambda$ also has rational entries. This implies that the entries of $Q_\lambda$ are rational. ∎

### 4.2 Application to filtering at boundaries

We now derive explicit forms of the matrix $G_\lambda$ of Theorem 4.1 when the DG break points are uniform and the PSIAC filters are one-sided. For one-sided filters, $\lambda$ is replaced by $\lambda_L$ for the left-sided and by $\lambda_R$ for the right-sided kernel. Recalling that convolution reverses the direction of the filter kernel (cf. Fig. 2), it is natural to assume that the right-most knot of the left boundary kernel vanishes when evaluating at the left endpoint $a$. Together with the analogous assumption for right boundary kernels, this determines

$$\lambda_L=t_n+\frac{a}{h},\qquad \lambda_R=t_0+\frac{b}{h}. \tag{32}$$
###### Corollary 4.3 (PSIAC convolution coefficients for uniform DG knots).

Assume that the DG break point sequence is uniform, hence after scaling consists of consecutive integers. Without loss of generality, the DG output on each interval is defined in terms of Bernstein-Bézier polynomials $B^{(i)}_{\ell}$, $\ell=0{:}d$, of degree $d$, i.e.

$$B^{(i)}_{\ell}(x):=\begin{cases}\binom{d}{\ell}\,(x-s_i)^{\ell}(s_{i+1}-x)^{d-\ell} & \text{if } x\in[s_i,s_{i+1}],\\ 0 & \text{otherwise.}\end{cases}$$

Let be the smallest integer greater than or equal to . Then, for , , and

$$G_{\lambda_L}\big((d{+}1)i+\ell,\,j\big)=\int_{i}^{i+1}B^{(i)}_{\ell}(t)\,B\big(t\,\big|\,t_n-t^{\,r}_{n-j:j}\big)\,dt,\qquad G_{\lambda_R}\big((d{+}1)i+\ell,\,j\big)=\int_{n_0-i}^{n_0-i+1}B^{(n_0-i)}_{d-\ell}(t)\,B\big(t\,\big|\,t_{j:j+k+1}-t_0\big)\,dt. \tag{33}$$
###### Proof.

First we consider . In Eq. (28), we change to the variable . Abbreviating , since the B-splines are translation invariant, for ,

$$G_{\lambda_L}(i,j)=\int_{0}^{t_n-t_0}\phi_i(t;s_{0:m}-\lambda_L+t_n)\,B\big(t\,\big|\,\hat t_{j:j+k+1}\big)\,dt. \tag{34}$$

Since , Eq. (32) implies that the first point of the sequence of translated DG break points equals , i.e., . Since the break points are consecutive integers starting from and