Nonuniform Discontinuous Galerkin
Filters via Shift and Scale
Abstract
Convolving the output of Discontinuous Galerkin (DG) computations with symmetric Smoothness-Increasing Accuracy-Conserving (SIAC) filters can improve both smoothness and accuracy. To extend convolution to the boundaries, several one-sided spline filters have recently been developed. We interpret these filters as instances of a general class of position-dependent spline filters that we abbreviate as PSIAC filters. These filters may have a non-uniform knot sequence and may leave out some B-splines of the sequence.
For general position-dependent filters, we prove that rational knot sequences result in rational filter coefficients. We derive symbolic expressions for prototype knot sequences, typically integer sequences that may include repeated entries, and corresponding B-splines, some of which may be skipped. Filters for shifted or scaled knot sequences are easily derived from these prototype filters, so that a single filter can be reused in different locations and at different scales. Moreover, the convolution itself reduces to executing a single dot product, making it more stable and efficient than the existing approaches based on numerical integration. The construction is demonstrated for several established boundary filters and one new one.
1 Introduction
Since the output of Discontinuous Galerkin (DG) computations often captures higher-order moments of the true solution [ML78], post-processing DG output by convolution can improve both smoothness and accuracy [BS77, CLSS03]. In the interior of the domain of computation, symmetric Smoothness-Increasing Accuracy-Conserving (SIAC) filters have been demonstrated to provide optimal accuracy [CLSS03]. However, the symmetric footprint precludes using these filters near boundaries of the computational domain.
To address this problem, Ryan and Shu [RS03] pioneered the use of one-sided spline filters. The Ryan-Shu filters improve the error, but not necessarily the pointwise errors. In fact, the filters are observed to increase the pointwise error near the boundary, motivating the design of the SRV filter [vSRV11], a filter kernel of increased support. Due to near-singular calculations, a stable numerical derivation of the SRV filter requires quadruple precision. Indeed, the coefficients of the boundary filters [RS03, vSRV11, MRK12, RLKV15, MRK15] are computed by inverting a matrix whose entries are computed by Gaussian quadrature; and, as pointed out in [RLKV15], SRV filter matrices are close to singular. [RLKV15] therefore introduced the RLKV filter, which augments the filter of [RS03] by a single additional B-spline. This improves stability and retains the support size. However, RLKV filter errors on canonical test problems are higher than those of symmetric filters, they have suboptimal superconvergence rates [RLKV15], and their derivative approximation is poorer [LRKV16] than that of SRV filters. Consequently, SRV filters merit attention if their stability can be improved.
The contribution of this paper is to reinterpret the published one-sided filters in the framework of position-dependent spline filters with general knot sequences [Pet15] (a framework inspired by the partial symbolic expression of coefficients for uniform knot sequences in [MRK15]). This reinterpretation allows expressing and pre-solving the filters in symbolic form, preempting instability so that these filters can reach their full potential. Specifically, this paper

proves general properties of SIAC filters, when their knots are shifted or scaled;

uses the properties to express the kernels in a factored, semiexplicit form that becomes explicit for given knot patterns and yields rational coefficients for rational knot sequences;

characterizes a class of position-dependent SIAC filters, PSIAC filters for short, based on splines with general knot sequences; the coefficients of PSIAC filters are polynomial expressions in the position;

illustrates the general framework by comparing the established numerical approach with the new symbolic filter derivation and by adding a new, effective, stably computed filter of lower degree than SRV or RLKV.
Figure 1: (a) DG output error; (b) SRV filter, numerical approach; (c) SRV filter, symbolic approach; (b') left zoom of (b); (c') left zoom of (c).
The new characterization allows us, in standard double precision, to replace the current three-step numerical approach (approximate computation of the matrix by Gauss quadrature, its inversion, and its application to the data) by the single-step symbolic approach derived in Theorem 4.1. Fig. 1 contrasts, for standard double precision, the noisy error of numerical SRV filtering with the error of the new symbolic formulation.
Not only is the symbolic approach more stable, but

scaled and shifted versions of the filter are easily obtained from one symbolic prototype filter;

computation is more efficient: filtering the DG output reduces to a single dot product of two vectors of small size;

the so-filtered DG output is proven to be infinitely smooth.
The last point is of interest since [RS03, vSRV11] observed and conjectured that the smoothness of the filtered DG computation is the same as the smoothness in the domain interior, where the symmetric filter is applied. Not only does this conjecture hold true, but Theorem 4.1 implies that the filtered output is infinitely smooth.
Organization.
Section 2 introduces the canonical test equation, B-splines, convolution, and a generalization of the formula of [Pet15] for convolution with splines based on arbitrary knot sequences. Section 3 reformulates the generalized convolution formula and derives the convolution coefficients when the knot sequence is a scaled or shifted copy of a prototype sequence. Section 4, in particular Theorem 4.1, summarizes the resulting efficient convolution with PSIAC filters based on scaled and shifted copies of a prototype sequence. Section 5 shows that SRV and RLKV are PSIAC filters and demonstrates their true potential by comparing their computation using the numerical approach to using the more stable symbolic approach. To illustrate the generality of the setup, a new multiple-knot linear filter is added to the picture.
2 Background
Here we establish the notation, distinguishing between filters and DG output, explain the canonical test problem, the DG method, B-splines and reproducing filters, and review one-sided and position-dependent SIAC filters in the literature.
2.1 Notation
Here and in the following, we abbreviate the sequences
We denote by the convolution of a function with a function , i.e.
for every where the integral exists and we use the terms filter, filter kernel and kernel interchangeably, even though, strictly speaking, filtering means convolving a function with a kernel. We reserve the following symbols:
The notation is illustrated by the following example.
Example 2.1.
Consider a degree-one spline filter, i.e. , defined over the knot sequence and associated with the index set . That is, the two B-splines defined over the knot sequences and are omitted, , and . A linear ( ) DG output on 200 uniform segments of the interval implies , and .
2.2 The canonical test problem, the Discontinuous Galerkin method and Bsplines
To demonstrate the performance of the filters on a concrete example, [RS03] used the following univariate hyperbolic wave equation:
(1)  
subject to periodic boundary conditions, , or Dirichlet boundary conditions where, depending on the sign of , is either or . Subsequent work [RS03, vSRV11, RLKV15] adopted the same differential equation to test their new onesided filters and to compare to the earlier work. Eq. (1) is therefore considered the canonical test problem. We note, however, that SIAC filters apply more widely, for example to FEM and elliptic equations [BS77].
In the DG method, the domain is partitioned into intervals by a sequence of break points . Assuming that the sequence is rational, introducing will allow us later to consider the prototype sequence of integers. The DG method approximates the time-dependent solution to Eq. (1) by
(2) 
where the scalar-valued functions , , are linearly independent and satisfy the scaling relations
(3) 
Examples of functions are non-uniform B-splines [dB05], Chebyshev polynomials and Lagrange polynomials. Relation (3) is typically used for refinement in FEM, DG or isogeometric PDE solvers.
Setting , the weak form of Eq. (1) for ,
(4) 
forms a system of ordinary differential equations with the coefficients , , as unknowns.
The goal of SIAC filtering is to smooth in by convolution in with a linear combination of B-splines. Typically filtering is applied after the last time step, when . Following [Pet15], we characterize B-splines in terms of divided differences [CS66]. For a sufficiently smooth univariate real-valued function with th derivative , divided differences are defined by
(5) 
If is a non-decreasing sequence, we call its elements knots, and the classical definition of the B-spline of degree with knot sequence is
(6) 
Here acts on the function for a given . Consequently, a B-spline is a non-negative piecewise polynomial function in with support on the interval . If is the multiplicity of the number in the sequence , then is at least times continuously differentiable at . This definition of agrees, after scaling, with the definition of the B-spline by recurrence [dB02]:
(7) 
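The recurrence translates directly into code. The following sketch evaluates a single B-spline from its local knot vector in the standard normalization of [dB02] (which, as noted above, agrees with the divided-difference definition up to scaling); the function name `bspline` and the recursive slicing of the knot vector are illustrative choices, not notation from the paper.

```python
def bspline(d, knots, x):
    """Evaluate the degree-d B-spline with local knot vector `knots`
    (length d + 2, non-decreasing) at x via the Cox-de Boor recurrence,
    in the standard normalization of [dB02]."""
    if d == 0:
        # Characteristic function of the half-open knot interval.
        return 1.0 if knots[0] <= x < knots[1] else 0.0
    left = right = 0.0
    # Terms whose denominator vanishes (repeated knots) are dropped.
    if knots[d] > knots[0]:
        left = (x - knots[0]) / (knots[d] - knots[0]) * bspline(d - 1, knots[:-1], x)
    if knots[d + 1] > knots[1]:
        right = (knots[d + 1] - x) / (knots[d + 1] - knots[1]) * bspline(d - 1, knots[1:], x)
    return left + right
```

On uniform integer knots these B-splines form a partition of unity, which gives a quick sanity check of the implementation.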
2.3 SIAC filter kernel coefficients
A piecewise polynomial is said to be a SIAC spline kernel of reproduction degree if convolution of with monomials reproduces the monomials up to degree , i.e., if
(8) 
Mirzargar et al. [MRK15] derived semi-explicit formulas when the filter has uniform knots, while [Pet15] gives semi-explicit formulas for the coefficients of spline kernels over general knot sequences. The following definition further generalizes these formulas by allowing some B-splines to be skipped when constructing the kernel.
Definition 2.1 (SIAC spline filter kernel).
Let be a sequence of strictly increasing integers between and . A SIAC spline kernel of degree and reproduction degree with index sequence and knot sequence is a spline
of degree with coefficients chosen so that
(8’) 
Lemma 2.1 (SIAC coefficients).
The vector of B-spline coefficients of the SIAC filter with index sequence and knot sequence is
(9) 
Proof.
We prove invertibility of . The remaining claims of the lemma then follow as in [Pet15].
Let be a left null vector of , i.e. for all , and
(10) 
Let be the interpolant of at , i.e. spanning two consecutive, hence overlapping, knot sequences. If knots repeat, is a Hermite interpolant. By Rolle’s theorem, the derivative vanishes at a set of knots interlaced with , i.e. . The inequality is strict unless represents a multiple root. Then (Hermite) interpolates at and, by the relation between divided differences and derivatives,
Induction yields
for . That is, the th divided difference of each sequence equals the th derivative at points respectively ; and the shift of the subsequence of knots from to implies that , where is either strictly to the right of or is a multiple root. Then (10) implies that , a polynomial of degree at most , has roots counting multiplicity, and hence is the zero polynomial. Given the factor , this can only hold if , i.e. has no non-trivial left null vector and, as a square matrix, is invertible.
Since all sequences , can be obtained by repeated shifts to the right, the conclusion holds for and is well-defined. ∎
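To make the reproduction conditions concrete, the following sketch assembles and solves a small instance of the coefficient system, in a deliberately simple setting that is not one of the paper's kernels: three degree-1 B-splines (hats centered at -1, 0, 1) with reproduction degree 2. Reproduction of monomials up to degree r by convolution is equivalent to the moment conditions that the zeroth moment of the kernel is 1 and moments 1 through r vanish; the exact hat moments 1, j, j^2 + 1/6 used below are elementary.

```python
import numpy as np

# Moment matrix for hats centered at j = -1, 0, 1; row m holds the m-th
# moments. Solving against (1, 0, 0) enforces reproduction up to degree 2.
centers = [-1, 0, 1]
M = np.array([[1.0] * 3,
              [float(j) for j in centers],
              [j**2 + 1 / 6 for j in centers]])
c = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))   # -> [-1/12, 7/6, -1/12]

# Cross-check: convolution with the kernel reproduces a quadratic.
def hat(t):
    """Unit hat B-spline centered at 0."""
    return np.maximum(0.0, 1.0 - np.abs(t))

t = np.linspace(-2.0, 2.0, 40001)
K = sum(cj * hat(t - j) for cj, j in zip(c, centers))
h = t[1] - t[0]
x = 0.5
integrand = (x - t) ** 2 * K          # p(x - t) K(t) with p(s) = s^2
conv = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
# conv should equal p(x) = 0.25 up to quadrature error
```

The resulting coefficients (-1/12, 7/6, -1/12) exhibit the typical oscillating sign pattern of SIAC kernels.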
2.4 Review of Symmetric and Boundary SIAC filters
We split the DG data at any known discontinuities and treat the domains separately. Then convolution can be applied throughout a given closed interval .
A SIAC spline kernel with knot sequence is symmetric (about the origin in ) if for . Convolution of a symmetric SIAC kernel with a function at then requires to be defined in a two-sided neighborhood of . Near boundaries, Ryan and Shu [RS03] therefore suggested convolving the DG output with a one-sided kernel whose support is shifted to one side of the origin: for near the left domain endpoint , the one-sided SIAC kernel is defined over , where is the degree of the DG output. The Ryan-Shu position-dependent one-sided kernel yields optimal convergence, but its pointwise error near can be larger than that of the DG output.
In [vSRV11], Ryan et al. improved the one-sided kernel by increasing its monomial reproduction from degree to degree . This one-sided kernel reduces the boundary error when , but its support is increased by additional knot intervals, and numerical round-off requires high-precision calculations to determine the kernel’s coefficients. ([vSRV11] did not draw conclusions for degrees .)
Ryan-Li-Kirby-Vuik [RLKV15] suggested an alternative position-dependent one-sided kernel that has the same support size as the symmetric kernel and whose reproduction degree is higher by only one. The new idea is that the spline space defining the kernel is enriched by one B-spline. The new kernel computation is stable up to degree in double precision. When , the RLKV kernel’s pointwise error on the canonical test problem is as low as that of the symmetric SIAC kernel applied in the interior. However, when , the RLKV error is higher than that of the symmetric kernel.
In [RS03, vSRV11, MRK12, RLKV15], convolution with position-dependent one-sided kernels is computed as follows. For each domain position ,

Calculate kernel coefficients:
compute the entries of the position-dependent reproduction matrix by Gaussian quadrature; solve a corresponding linear system for the kernel coefficients so as to match the monomials to be reproduced, collected in the vector . As pointed out in [RLKV15], may be close to singular (for example, when for the SRV kernels), so that higher numerical precision (e.g. quadruple precision) is required to assemble and solve the linear system.
Convolve the kernel with the DG output by Gaussian quadrature.
Note that, unlike those of the (position-independent) classical symmetric SIAC filter, the position-dependent boundary kernel coefficients have to be determined afresh for each point .
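The near-singularity that forces quadruple precision can be illustrated without reproducing the SRV matrices themselves. The sketch below builds an analogous one-sided moment (reproduction) matrix from hat splines (an illustrative stand-in, not the SRV construction) and shows the same qualitative effect: the condition number grows rapidly with the reproduction degree.

```python
import numpy as np
from math import comb

def hat_moment(m, j):
    """Exact m-th moment of the unit hat spline centered at integer j,
    i.e. the integral of t^m * max(0, 1 - |t - j|) over the real line.
    Odd central moments of the hat vanish; even ones are 2/((k+1)(k+2))."""
    return sum(comb(m, k) * float(j) ** (m - k) * 2.0 / ((k + 1) * (k + 2))
               for k in range(0, m + 1, 2))

def cond_one_sided(r):
    """Condition number of the moment matrix for a one-sided kernel of
    r + 1 hat splines centered at 0, ..., r, reproduction degree r."""
    M = np.array([[hat_moment(m, j) for j in range(r + 1)]
                  for m in range(r + 1)])
    return np.linalg.cond(M)
```

In this toy setting the condition number already grows by many orders of magnitude between moderate and larger reproduction degrees, mirroring the ill-conditioning reported in [RLKV15] for the SRV matrices.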
3 Coefficients of shifted and scaled filters
To calculate filters for DG output more efficiently and more stably, we reformulate Lemma 2.1 in multi-index notation:
as follows.
Lemma 3.1 (SIAC reproduction matrix).
The matrix Eq. (9) has the alternative form
(11) 
Proof.
Denote the reproduction matrix associated with the shifted knot sequence as
(12) 
Since each entry is a polynomial of degree in , the determinant of is a polynomial of degree in . Using, for example, Cramer’s rule, the entries of are rational functions in whose numerator and denominator are polynomials of degree in . Since the convolution coefficients are the entries of the first column of , it is remarkable that we can show that the coefficients are not rational but polynomial, and of degree rather than , in .
To prove this claim, we employ the following technical result, which generalizes a binomial identity to multiple indices.
Lemma 3.2 (technical lemma).
We abbreviate and write to indicate that, for each component, . Then for ,
(13) 
Proof.
Lemma 3.3 (Reproduction matrix for shifted knots).
The matrices and are related by
(17) 
(In (17), only nonzero entries are shown.) Note that is the result of deleting the first rows and columns of the lower triangular Pascal matrix of order .
Proof.
Next we consider the effect of scaling knots, as might be done to refine a DG computation.
Lemma 3.4 (Reproduction matrix for scaled knots).
Let denote the square matrix with diagonal and zero otherwise. Then for
(20) 
Proof.
Altogether, we obtain the following semi-explicit formula for the filter coefficients.
Theorem 3.1 (Scaled and shifted SIAC coefficients).
The SIAC filter coefficients associated with the knot sequence are polynomials of degree in :
(21) 
Proof.
By Lemma 3.3 and Lemma 3.4, the matrix corresponding to the scaled and shifted knot sequence has the inverse
(22) 
According to [ES04], and hence . Since Lemma 2.1 requires only the first column of , we replace, in (22), by its first column and obtain
(23) 
Eq. (21) follows, because the diagonal matrices commute. ∎
Compared to (23), formula (21) has the advantage that it groups together two matrices that can be precomputed independently of and .
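The polynomial dependence of the coefficients on the shift can be checked numerically in a toy setting (three hat splines with reproduction degree r = 2, standing in for the paper's general kernels): sampling the coefficients at r + 1 shifts and fitting a degree-r polynomial per coefficient predicts the coefficients at any other shift exactly, which would fail if the coefficients were genuinely rational functions.

```python
import numpy as np

def coeffs(x):
    """Filter coefficients for three hat B-splines centered at x-1, x, x+1
    with reproduction degree r = 2 (illustrative stand-in only)."""
    centers = [x - 1.0, x, x + 1.0]
    M = np.array([[1.0, 1.0, 1.0],
                  centers,
                  [c * c + 1 / 6 for c in centers]])   # exact hat moments
    return np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))

# Sample the coefficients at r + 1 = 3 shifts and fit one degree-r
# polynomial in the shift to each coefficient ...
xs = np.array([0.0, 0.5, 1.0])
samples = np.array([coeffs(x) for x in xs])            # shape (3, 3)
fits = [np.polyfit(xs, samples[:, i], 2) for i in range(3)]
# ... then evaluate at an unseen shift: the prediction is exact because
# each coefficient is a polynomial of degree <= r in the shift.
x_new = 2.0
predicted = np.array([np.polyval(f, x_new) for f in fits])
```

The exact agreement at the unseen shift is the practical content of the theorem: one prototype solve plus polynomial evaluation replaces a fresh linear solve per position.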
The following corollary implies that the kernel coefficients , can be computed stably, as scaled integers.
Corollary 3.1 (Rational SIAC filter coefficients ).
If the knots are rational, then the filter coefficients are polynomials in and with rational coefficients.
Proof.
Lemma 3.1 implies that the entries of are rational if the knots are rational. Since the determinant of a matrix with rational entries is rational, Cramer’s rule, for example, implies that the convolution coefficients are rational. ∎
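The corollary suggests exact rational arithmetic: for rational knots, the reproduction system can be solved over the rationals, e.g. with Python's fractions module. The hat-spline system below is again only an illustrative stand-in for the general kernels; the point is that every intermediate quantity, and hence every coefficient, is an exact fraction.

```python
from fractions import Fraction

def solve_exact(M, b):
    """Gauss-Jordan elimination over the rationals: all arithmetic is
    exact, so no round-off occurs."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        pv = A[col][col]
        A[col] = [a / pv for a in A[col]]               # normalize pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

# Moment matrix for three hat splines centered at -1, 0, 1 with reproduction
# degree 2: rational entries in, rational coefficients out.
centers = [-1, 0, 1]
M = [[Fraction(1)] * 3,
     [Fraction(j) for j in centers],
     [Fraction(j) ** 2 + Fraction(1, 6) for j in centers]]
c = solve_exact(M, [Fraction(1), Fraction(0), Fraction(0)])
# c == [Fraction(-1, 12), Fraction(7, 6), Fraction(-1, 12)]
```

Storing such exact fractions for a prototype sequence, as the corollary licenses, sidesteps the precision issues of the numerical pipeline entirely.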
4 Positiondependent (PSIAC) filtering
In this section, we first derive a general factored expression for the convolution of PSIAC filters with DG data. Then we specialize the setup to onesided PSIAC filters when the DG breakpoint sequence is uniform.
4.1 Positiondependent (PSIAC) filters
While symmetric SIAC filtering increases the smoothness of the DG output, the result is in general a piecewise function. One may expect the same of any one-sided kernel. However, this section proves that convolution with position-dependent PSIAC filters yields a single polynomial piece over their interval of application. We start by defining position-dependent kernels.
Definition 4.1 (PSIAC kernel).
A PSIAC kernel at position has the form
(24) 
Here we introduced the scaling in anticipation of using a prototype knot sequence, typically a subset of integers, to create one filter and then apply its shifted and scaled versions, which can be cheaply computed as explained in the previous section. That is, the DG output will be convolved with a PSIAC kernel of reproduction degree , associated with an index sequence and defined over the scaled and shifted knot sequence , where the constant adjusts to the left or right boundary. Substituting for in Eq. (24), changing the variable of integration from to and recalling that , we obtain an alternative spline representation of with -dependent coefficients over the shifted and reversed knot sequence :
(25) 
Now consider the DG output with break point sequence as in Eq. (2). Convolving with yields, after a change of variable , the filtered DG output
(26)  
We note that, in the last expression, the integral no longer depends on . Rather, the coefficient depends on : by Theorem 3.1 is a polynomial in . This yields the following factored representation of the convolution.
Theorem 4.1 (Efficient PSIAC filtering of DG output).
Let be a PSIAC kernel of reproduction degree with index sequence and knot sequence . Let and , be the DG output. Let be the set of indices of basis functions with support overlapping . Then the filtered DG approximation is a polynomial in of degree :
(27)  
(28) 
and the reversal matrix (with ones on the antidiagonal and zeros elsewhere).
Proof.
The factored representation implies that, instead of recomputing the filter coefficients afresh for each point of the convolved output as in the established numerical approach, we simply compute the coefficients corresponding to one prototype knot sequence , scale by as needed, and, at runtime, pre-multiply with the data and post-multiply with the vector of shifted monomials, as stated in Eq. (27).
Increased multiplicity of an inner knot of the SIAC kernel reduces its smoothness, and this, in turn, reduces the smoothness of the filtered output. By contrast, Theorem 4.1 shows that when the PSIAC knots are shifted along with the evaluation point, PSIAC convolution yields a polynomial, i.e. infinite smoothness regardless of the knot multiplicity. We may view position-dependent filtering as a form of polynomial approximation.
Additionally, the polynomial characterization directly provides a symbolic expression for the derivatives of the convolved DG output.
Corollary 4.1 (Derivatives of PSIACfiltered DG output).
(31) 
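Since the filtered output is a single polynomial in the evaluation point, its derivatives are obtained by differentiating the coefficient vector. A generic sketch, with a hypothetical coefficient vector standing in for the output of Theorem 4.1:

```python
import numpy as np

# Hypothetical coefficient vector (lowest power first) of a PSIAC-filtered
# DG output on one boundary interval; any polynomial serves to illustrate.
a = [0.3, -1.2, 0.5, 2.0]
p = np.polynomial.Polynomial(a)
dp = p.deriv()                   # exact derivative via the coefficient vector

x = 0.25
fd = (p(x + 1e-6) - p(x - 1e-6)) / 2e-6    # finite-difference cross-check
```

No numerical differentiation of the DG data is needed: all derivatives of the filtered output are available symbolically from the same precomputed coefficients.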
The following corollary shows that for rational DG break points and rational kernel knots, we can precompute and store the prototype matrix stably, in terms of integer fractions.
Corollary 4.2 (Rational PSIAC convolution coefficients).
With the assumptions and notation of Theorem 4.1, if the basis functions are piecewise polynomials with rational coefficients, the shift is rational, and the sequences and are rational, then the matrix has rational entries.
4.2 Application to filtering at boundaries
We now derive explicit forms of the matrix of Theorem 4.1 when the DG break points are uniform and the PSIAC filters are one-sided. For one-sided filters, is replaced by and for the left-sided and right-sided kernels, respectively. Recalling that convolution reverses the direction of the filter kernel (cf. Fig. 2), it is natural to assume that the rightmost knot of the left boundary kernel vanishes when evaluating at the left endpoint , i.e. . Together with the analogous assumption for right boundary kernels, this determines
(32) 
Corollary 4.3 (PSIAC convolution coefficients for uniform DG knots).
Assume that the DG break point sequence is uniform, hence after scaling consists of consecutive integers. Without loss of generality, the DG output on each interval is defined in terms of Bernstein-Bézier polynomials , , of degree , i.e.
Let be the smallest integer greater than or equal to . Then, for , , and
(33)  
Proof.
First we consider . In Eq. (28), we change to the variable . Abbreviating , since the B-splines are translation invariant, for ,
(34) 
Since , Eq. (32) implies that the first point of the sequence of translated DG break points equals , i.e., . Since the break points are consecutive integers starting from and