
# Preconditioning rectangular spectral collocation

## Abstract

Rectangular spectral collocation (RSC) methods have recently been proposed to solve linear and nonlinear differential equations with general boundary conditions and/or other constraints. The involved linear systems in RSC become extremely ill-conditioned as the number of collocation points increase. By introducing suitable Birkhoff-type interpolation problems, we present pseudospectral integration preconditioning matrices for the ill-conditioned linear systems in RSC. The condition numbers of the preconditioned linear systems are independent of the number of collocation points. Numerical examples are given.

**Key words.** Lagrange interpolation, Birkhoff-type interpolation, rectangular spectral collocation, integration preconditioning

**AMS subject classifications.** 65L60, 41A05, 41A10

## 1 Introduction

Rectangular spectral collocation methods [3] have recently been demonstrated to be a convenient means of solving problems for which the row replacement or 'boundary bordering' strategy of standard spectral collocation methods [4, 11, 1, 2, 9] becomes ambiguous. Specifically, an $m$th-order differential operator is discretized directly by a rectangular matrix, allowing constraints to be appended to form an invertible square system. However, the involved linear systems become extremely ill-conditioned as the number of collocation points increases: typically, the condition number grows like $O(N^{2m})$, where $N$ is the number of collocation points. Efficient preconditioners are therefore highly desirable when solving these linear systems by an iterative method.

Recently, Wang, Samson, and Zhao [12] proposed a well-conditioned collocation method to solve linear differential equations with various types of boundary conditions. By introducing a suitable Birkhoff interpolation problem [10], they constructed a pseudospectral integration preconditioning matrix, which is the exact inverse of the pseudospectral discretization matrix of the $m$th-order derivative operator together with boundary conditions. In this paper, we employ a similar idea to construct a pseudospectral integration matrix that is the exact inverse of the discretization matrix arising in the rectangular spectral collocation method for the $m$th-order derivative operator together with general linear constraints. When the new pseudospectral integration matrix is used as a right preconditioner for an $m$th-order linear differential operator together with the same constraints, the condition number of the resulting linear system is independent of the number of collocation points.

The rest of the paper is organized as follows. In §2, we review several topics required in the following sections. In §3, we introduce the new pseudospectral integration matrix by a suitable Birkhoff-type interpolation problem. In §4, we present the preconditioning rectangular spectral collocation method. Numerical examples are reported in §5. We present brief concluding remarks in §6.

## 2 Preliminaries

### 2.1 Barycentric resampling matrix

Let $\{x_j\}_{j=0}^{N}$ be a set of $N+1$ distinct interpolation points satisfying

$$-1 \le x_0 < x_1 < \cdots < x_N \le 1. \tag{2.1}$$

The associated barycentric weights are defined by

$$w_{j,N} = \prod_{n=0,\,n\ne j}^{N} (x_j - x_n)^{-1}, \qquad j = 0, 1, \ldots, N. \tag{2.2}$$

Let $\{y_i\}_{i=0}^{M}$ be another set of $M+1$ distinct interpolation points satisfying

$$-1 \le y_0 < y_1 < \cdots < y_M \le 1. \tag{2.3}$$

The barycentric resampling matrix [3], $P_{x\mapsto y} \in \mathbb{R}^{(M+1)\times(N+1)}$, which interpolates from the points $\{x_j\}_{j=0}^{N}$ to the points $\{y_i\}_{i=0}^{M}$, is defined by

$$P_{x\mapsto y} = \big[p^{x\mapsto y}_{ij}\big]_{i=0,\,j=0}^{M,\,N},$$

where

$$p^{x\mapsto y}_{ij} = \begin{cases} \dfrac{w_{j,N}}{y_i - x_j}\left(\displaystyle\sum_{l=0}^{N} \frac{w_{l,N}}{y_i - x_l}\right)^{-1}, & y_i \ne x_j, \\[6pt] 1, & y_i = x_j. \end{cases}$$
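As an illustration, the resampling matrix can be assembled directly from the barycentric formula above. The following Python sketch (the helper names are ours, not from [3]) builds $P_{x\mapsto y}$ for arbitrary distinct point sets:

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights w_{j,N} of (2.2) for distinct points x_0, ..., x_N."""
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)          # skip the n = j factor in the product
    return 1.0 / d.prod(axis=1)

def resampling_matrix(x, y):
    """Barycentric resampling matrix P_{x->y}: maps the values of a degree-N
    polynomial at the points x to its values at the points y."""
    w = bary_weights(x)
    P = np.zeros((len(y), len(x)))
    for i, yi in enumerate(y):
        hit = np.isclose(x, yi)
        if hit.any():                 # y_i coincides with some x_j
            P[i, np.argmax(hit)] = 1.0
        else:
            t = w / (yi - x)
            P[i] = t / t.sum()
    return P
```

Each row of $P_{x\mapsto y}$ sums to one, and applying it to the samples of any polynomial of degree at most $N$ reproduces the values of that polynomial at $\{y_i\}_{i=0}^{M}$ exactly.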
**Lemma 2.1.** If $p \in \mathcal{P}_N$, then

$$P_{x\mapsto y}\,[p(x_0), \ldots, p(x_N)]^T = [p(y_0), \ldots, p(y_M)]^T.$$

### 2.2 Pseudospectral differentiation matrices

The Lagrange interpolation basis polynomials of degree $N$ associated with the points $\{x_j\}_{j=0}^{N}$ are defined by

$$\ell_{j,N}(x) = w_{j,N} \prod_{n=0,\,n\ne j}^{N} (x - x_n), \qquad j = 0, 1, \ldots, N,$$

where $w_{j,N}$ is the barycentric weight (2.2). Define the pseudospectral differentiation matrices:

$$D^{(m)}_{x\mapsto x} = \big[\ell^{(m)}_{j,N}(x_i)\big]_{i,j=0}^{N}, \qquad D^{(m)}_{x\mapsto y} = \big[\ell^{(m)}_{j,N}(y_i)\big]_{i=0,\,j=0}^{M,\,N}.$$

There hold

$$D^{(m)}_{x\mapsto x} = \big(D^{(1)}_{x\mapsto x}\big)^{m}, \qquad m \ge 1,$$

and

$$D^{(m)}_{x\mapsto y} = P_{x\mapsto y}\, D^{(m)}_{x\mapsto x}.$$

The matrix $D^{(m)}_{x\mapsto y}$ is called a rectangular $m$th-order differentiation matrix; it maps the values of a polynomial on $\{x_j\}_{j=0}^{N}$ to the values of its $m$th-order derivative on $\{y_i\}_{i=0}^{M}$. Explicit formulae and recurrences for rectangular differentiation matrices are given in [13].
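For concreteness, here is a minimal Python sketch of the square first-order differentiation matrix built from the barycentric weights; the "negative-sum trick" for the diagonal is a standard stabilization not spelled out above, and a rectangular matrix then follows from $D^{(m)}_{x\mapsto y} = P_{x\mapsto y}\big(D^{(1)}_{x\mapsto x}\big)^m$:

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights (2.2)."""
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)
    return 1.0 / d.prod(axis=1)

def diff_matrix(x):
    """First-order pseudospectral differentiation matrix D^{(1)}_{x->x},
    with entries D[i, j] = l_{j,N}'(x_i); the diagonal entries are set by
    the negative-sum trick so that each row annihilates constants."""
    w = bary_weights(x)
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()
    return D
```

Higher-order square matrices are then powers of `diff_matrix(x)`, as in the identity above.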

### 2.3 Chebyshev polynomials and Chebyshev points

The most widely used spectral methods for non-periodic problems are those based on Chebyshev polynomials and Chebyshev points. In this paper, we focus on these polynomials and points. However, everything we discuss can be easily generalized to the case of Jacobi polynomials and corresponding points.

The Chebyshev points of the first kind (also known as Gauss-Chebyshev points) are given by

$$\nu_{j,N} = -\cos\frac{(2j+1)\pi}{2N+2}, \qquad j = 0, 1, \ldots, N.$$

In this case, the Gauss-Chebyshev quadrature weights are given by [5]

$$\omega^{\nu}_{j,N} = \frac{\pi}{N+1}, \qquad j = 0, 1, \ldots, N,$$

and the barycentric weights are given by [6]

$$w^{\nu}_{j,N} = (-1)^{N-j}\,\frac{2^{N}}{N+1}\,\sin\frac{(2j+1)\pi}{2N+2}, \qquad j = 0, 1, \ldots, N.$$

Let $\mathcal{P}_K$ denote the set of all algebraic polynomials of degree at most $K$. We have

$$\int_{-1}^{1} \frac{p(x)}{\sqrt{1-x^2}}\,dx = \sum_{j=0}^{N} \omega^{\nu}_{j,N}\, p(\nu_{j,N}), \qquad \forall\, p \in \mathcal{P}_{2N+1}. \tag{2.4}$$

The Chebyshev points of the second kind (also known as Gauss-Chebyshev-Lobatto points) are given by

$$\tau_{j,N} = -\cos\frac{j\pi}{N}, \qquad j = 0, 1, \ldots, N.$$

In this case, the Gauss-Chebyshev-Lobatto quadrature weights are given by [5]

$$\omega^{\tau}_{j,N} = \frac{\pi}{\rho_j N}, \qquad j = 0, \ldots, N,$$

and the barycentric weights are given by [8]

$$w^{\tau}_{j,N} = \frac{(-1)^{N-j}\,2^{N-1}}{\rho_j N}, \qquad j = 0, 1, \ldots, N,$$

where

$$\rho_0 = \rho_N = 2, \qquad \rho_1 = \rho_2 = \cdots = \rho_{N-1} = 1.$$

We have

$$\int_{-1}^{1} \frac{p(x)}{\sqrt{1-x^2}}\,dx = \sum_{j=0}^{N} \omega^{\tau}_{j,N}\, p(\tau_{j,N}), \qquad \forall\, p \in \mathcal{P}_{2N-1}.$$
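The closed forms above are easy to sanity-check numerically. The short Python sketch below (variable names are ours) compares the closed-form barycentric weights with the defining product (2.2), and tests the Gauss-Chebyshev quadrature (2.4) on a polynomial:

```python
import numpy as np

N = 8
j = np.arange(N + 1)

def bary_from_product(x):
    """Barycentric weights computed directly from the product definition (2.2)."""
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)
    return 1.0 / d.prod(axis=1)

# Chebyshev points of the first kind, quadrature weights, barycentric weights
nu = -np.cos((2*j + 1) * np.pi / (2*N + 2))
quad_nu = np.full(N + 1, np.pi / (N + 1))
w_nu = (-1.0)**(N - j) * 2**N / (N + 1) * np.sin((2*j + 1) * np.pi / (2*N + 2))

# Chebyshev points of the second kind and their barycentric weights
tau = -np.cos(j * np.pi / N)
rho = np.ones(N + 1); rho[0] = rho[N] = 2.0
w_tau = (-1.0)**(N - j) * 2**(N - 1) / (rho * N)
```

The sign factor $(-1)^{N-j}$ reflects the ascending ordering of the points used throughout this paper.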

Let $\{T_n(x)\}$ be the Chebyshev polynomials (see, for example, [5]) given by

$$T_n(x) = \cos(n \arccos x), \qquad x \in [-1, 1].$$

They are mutually orthogonal:

$$\int_{-1}^{1} \frac{T_k(x)\,T_j(x)}{\sqrt{1-x^2}}\,dx = \frac{\varrho_k\,\pi}{2}\,\delta_{kj}, \tag{2.5}$$

where

$$\varrho_k = \begin{cases} 2, & k = 0, \\ 1, & k \ge 1, \end{cases} \qquad \delta_{kj} = \begin{cases} 1, & k = j, \\ 0, & k \ne j. \end{cases}$$

Let $\{\ell_{j,M}(x)\}_{j=0}^{M}$ denote the Lagrange interpolation basis polynomials of degree $M$ associated with the points $\{y_i\}_{i=0}^{M}$. The polynomial $\ell_{j,M}(x)$ can be rewritten as

$$\ell_{j,M}(x) = \sum_{k=0}^{M} \beta_{kj}\,T_k(x), \qquad j = 0, 1, \ldots, M. \tag{2.6}$$

If $\{y_i\}_{i=0}^{M}$ is a subset of $\{\nu_{j,N}\}_{j=0}^{N}$ or $\{\tau_{j,N}\}_{j=0}^{N}$, the coefficients $\beta_{kj}$ can be obtained with ease. For example, suppose that $\{y_i\}_{i=0}^{M}$ is a proper subset of $\{\nu_{j,N}\}_{j=0}^{N}$. Let $\varpi$ denote the map such that $y_j = \nu_{\varpi(j),N}$, and let $I$ denote the index set such that $i \in I$ if and only if $\nu_{i,N} \notin \{y_j\}_{j=0}^{M}$. By (2.4) and (2.5), we have, for $k = 0, 1, \ldots, M$,

$$\beta_{kj} = \frac{2}{\varrho_k \pi}\left(T_k(y_j)\,\omega^{\nu}_{\varpi(j),N} + \sum_{i\in I} T_k(\nu_{i,N})\,\omega^{\nu}_{i,N}\,\ell_{j,M}(\nu_{i,N})\right).$$

Here the values $\ell_{j,M}(\nu_{i,N})$, $i \in I$, can be obtained by solving the following linear system:

$$T_k(y_j)\,\omega^{\nu}_{\varpi(j),N} + \sum_{i\in I} T_k(\nu_{i,N})\,\omega^{\nu}_{i,N}\,\ell_{j,M}(\nu_{i,N}) = 0, \qquad k = M+1, \ldots, N.$$

In particular, if $\{\ell^{\nu}_{j,N}(x)\}_{j=0}^{N}$ denote the Lagrange interpolation basis polynomials of degree $N$ associated with $\{\nu_{j,N}\}_{j=0}^{N}$, we have

$$\ell^{\nu}_{j,N}(x) = \sum_{k=0}^{N} \beta^{\nu}_{kj}\,T_k(x), \qquad j = 0, 1, \ldots, N,$$

where, for $j = 0, 1, \ldots, N$,

$$\beta^{\nu}_{0j} = \frac{1}{N+1}, \qquad \beta^{\nu}_{kj} = \frac{2\,T_k(\nu_{j,N})}{N+1}, \quad k = 1, 2, \ldots, N-1, \qquad \beta^{\nu}_{Nj} = \frac{2\,(-1)^{N-j}}{N+1}\,\sin\frac{(2j+1)\pi}{2N+2}.$$

If $\{\ell^{\tau}_{j,N}(x)\}_{j=0}^{N}$ denote the Lagrange interpolation basis polynomials of degree $N$ associated with $\{\tau_{j,N}\}_{j=0}^{N}$, we have

$$\ell^{\tau}_{j,N}(x) = \sum_{k=0}^{N} \beta^{\tau}_{kj}\,T_k(x), \qquad j = 0, 1, \ldots, N,$$

where, for $j = 0, 1, \ldots, N$,

$$\beta^{\tau}_{0j} = \frac{1}{\rho_j N}, \qquad \beta^{\tau}_{kj} = \frac{2\,T_k(\tau_{j,N})}{\rho_j N}, \quad k = 1, 2, \ldots, N-1, \qquad \beta^{\tau}_{Nj} = \frac{(-1)^{N-j}}{\rho_j N}.$$

Define the integral operators:

$$\partial_x^{-1} v(x) = \int_{-1}^{x} v(t)\,dt; \qquad \partial_x^{-k} v(x) = \partial_x^{-1}\big(\partial_x^{-(k-1)} v(x)\big), \quad k \ge 2.$$

By

$$T_n(x) = \frac{T'_{n+1}(x)}{2(n+1)} - \frac{T'_{n-1}(x)}{2(n-1)}, \qquad n \ge 2,$$

and

$$T_n(\pm 1) = (\pm 1)^n, \qquad T'_n(\pm 1) = \pm(\pm 1)^n n^2,$$

we have

$$\partial_x^{-1} T_0(x) = 1 + x, \qquad \partial_x^{-1} T_1(x) = \frac{x^2 - 1}{2}, \qquad \partial_x^{-1} T_n(x) = \frac{T_{n+1}(x)}{2(n+1)} - \frac{T_{n-1}(x)}{2(n-1)} - \frac{(-1)^n}{n^2-1}, \quad n \ge 2, \tag{2.7}$$

and

$$\begin{aligned} \partial_x^{-2} T_0(x) &= \frac{(x+1)^2}{2}, \qquad \partial_x^{-2} T_1(x) = \frac{(x-2)(x+1)^2}{6}, \qquad \partial_x^{-2} T_2(x) = \frac{x\,(x-2)(x+1)^2}{6}, \\ \partial_x^{-2} T_n(x) &= \frac{T_{n+2}(x)}{4(n+1)(n+2)} - \frac{T_n(x)}{2(n^2-1)} + \frac{T_{n-2}(x)}{4(n-1)(n-2)} - \frac{(-1)^n (1+x)}{n^2-1} - \frac{3\,(-1)^n}{(n^2-1)(n^2-4)}, \quad n \ge 3. \end{aligned} \tag{2.8}$$
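The closed forms (2.7) can be checked with NumPy's Chebyshev-series arithmetic: each antiderivative should vanish at $x = -1$ and differentiate back to $T_n$. A small sketch (the function name is ours):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def antideriv_T(n):
    """Closed form (2.7): the antiderivative of T_n vanishing at x = -1, n >= 2."""
    return (Chebyshev.basis(n + 1) / (2 * (n + 1))
            - Chebyshev.basis(n - 1) / (2 * (n - 1))
            - (-1.0)**n / (n * n - 1))
```

Working directly in the Chebyshev basis, as here, is what makes the assembly of the integration matrices in §3 stable for large degrees.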

## 3 Pseudospectral integration matrices

Given the points $\{y_i\}_{i=0}^{M}$ and the data $\{c_j\}_{j=0}^{M+m}$ with $m \ge 1$, we consider the Birkhoff-type interpolation problem: find $p \in \mathcal{P}_{M+m}$ such that

$$p^{(m)}(y_i) = c_i, \quad i = 0, 1, \ldots, M; \qquad L_i\big(p, \ldots, p^{(m-1)}\big) = c_{M+i}, \quad i = 1, \ldots, m,$$

where each $L_i$ is a linear functional. Let $\{\ell_{j,M}(x)\}_{j=0}^{M}$ be the Lagrange interpolation basis polynomials of degree $M$ associated with the points $\{y_i\}_{i=0}^{M}$. Then the Birkhoff-type interpolation polynomial takes the form

$$p(x) = \sum_{j=0}^{M} c_j\,\partial_x^{-m}\ell_{j,M}(x) + \sum_{i=0}^{m-1} \alpha_i x^i, \tag{3.1}$$

where the coefficients $\{\alpha_i\}_{i=0}^{m-1}$ are determined by the linear constraints $L_i(p, \ldots, p^{(m-1)}) = c_{M+i}$, $i = 1, \ldots, m$. Obviously, the existence and uniqueness of the Birkhoff-type interpolation polynomial is equivalent to that of $\{\alpha_i\}_{i=0}^{m-1}$. After obtaining $\{\alpha_i\}_{i=0}^{m-1}$, we can rewrite (3.1) as

$$p(x) = \sum_{j=0}^{M+m} c_j\,B_j(x), \qquad B_j(x) \in \mathcal{P}_{M+m}.$$

Let $N = M + m$ and let $\{x_j\}_{j=0}^{N}$ be the points as in (2.1). Define the $m$th-order pseudospectral integration matrix (PSIM) as

$$B^{(-m)}_{y\mapsto x} = \big[B_j(x_i)\big]_{i,j=0}^{N}.$$

Define also the matrices

$$B^{(k-m)}_{y\mapsto x} = \big[B^{(k)}_j(x_i)\big]_{i,j=0}^{N}, \qquad k \ge 1.$$

It is easy to show that

$$B^{(k-m)}_{y\mapsto x} = D^{(k)}_{x\mapsto x}\,B^{(-m)}_{y\mapsto x}, \qquad k \ge 1. \tag{3.2}$$

Let $\mathbf{L}_m$ be the discretization of the linear constraints $L_i$, $i = 1, \ldots, m$. We have the following theorem.

**Theorem 3.1.** If, for any $p \in \mathcal{P}_N$,

$$\begin{bmatrix} L_1\big(p, \ldots, p^{(m-1)}\big) \\ \vdots \\ L_m\big(p, \ldots, p^{(m-1)}\big) \end{bmatrix} = \mathbf{L}_m \begin{bmatrix} p(x_0) \\ \vdots \\ p(x_N) \end{bmatrix},$$

then

$$\begin{bmatrix} D^{(m)}_{x\mapsto y} \\ \mathbf{L}_m \end{bmatrix} B^{(-m)}_{y\mapsto x} = I_{N+1}.$$

*Proof.* The result follows from the defining conditions of the Birkhoff-type interpolation basis,

$$B^{(m)}_j(y_i) = \delta_{ij}, \quad i = 0, \ldots, M, \qquad L_k\big(B_j, \ldots, B^{(m-1)}_j\big) = \delta_{(M+k)j}, \quad k = 1, \ldots, m,$$

and Lemma 2.1.

Now we give concrete examples. Consider the non-separable linear constraint

$$a\,p(-1) + b\,p(1) = \sigma, \tag{3.3}$$

and the global linear constraint

$$\int_{-1}^{1} p(x)\,dx = \sigma, \tag{3.4}$$

where $a$, $b$, and $\sigma$ are given constants. They are straightforward to discretize: for (3.3),

$$[\,a \ \ 0 \ \ \cdots \ \ 0 \ \ b\,]\,\mathbf{p} = \sigma,$$

and for (3.4),

$$\mathbf{q}^T \mathbf{p} = \sigma,$$

where

$$\mathbf{p} = [\,p(x_0) \ \ p(x_1) \ \ \cdots \ \ p(x_N)\,]^T,$$

and $\mathbf{q}$ is the column vector of Clenshaw-Curtis quadrature weights [5].

The first-order ($m = 1$) Birkhoff-type interpolation problem takes the form: find $p \in \mathcal{P}_{M+1}$ such that

$$p'(y_i) = c_i, \quad i = 0, 1, \ldots, M,$$

together with the constraint (3.3) or (3.4), where $\sigma = c_{M+1}$.

- Given the constraint (3.3) with $a + b \ne 0$, we have

$$B_j(x) = \partial_x^{-1}\ell_{j,M}(x) - \frac{b}{a+b}\int_{-1}^{1}\ell_{j,M}(x)\,dx, \quad j = 0, 1, \ldots, M, \qquad B_{M+1}(x) = \frac{1}{a+b}.$$

- Given the constraint (3.4), we have

$$B_j(x) = \partial_x^{-1}\ell_{j,M}(x) - \frac{1}{2}\int_{-1}^{1}\partial_x^{-1}\ell_{j,M}(x)\,dx, \quad j = 0, 1, \ldots, M, \qquad B_{M+1}(x) = \frac{1}{2}.$$

By (2.6) and (2.7), the matrix $B^{(-1)}_{y\mapsto x}$ can be computed stably even for thousands of collocation points.
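As a sanity check, the Birkhoff basis for the constraint (3.4) can be built with NumPy's polynomial arithmetic and tested against its defining conditions: $B'_j(y_i) = \delta_{ij}$ for $j \le M$ (with $B'_{M+1} \equiv 0$) and $\int_{-1}^{1} B_j\,dx = \delta_{j,M+1}$. The sketch below (helper names are ours, and monomial arithmetic is used only because $M$ is small here) takes Chebyshev points of the first kind as $\{y_i\}$:

```python
import numpy as np
from numpy.polynomial import Polynomial

M = 6
y = -np.cos((2 * np.arange(M + 1) + 1) * np.pi / (2 * M + 2))  # first-kind points

def lagrange(j):
    """Lagrange basis polynomial l_{j,M} on the points y."""
    p = Polynomial([1.0])
    for n in range(M + 1):
        if n != j:
            p = p * Polynomial([-y[n], 1.0]) / (y[j] - y[n])
    return p

B = []
for j in range(M + 1):
    a = lagrange(j).integ(lbnd=-1)               # d_x^{-1} l_{j,M}
    B.append(a - 0.5 * a.integ(lbnd=-1)(1.0))    # minus (1/2) int_{-1}^1 d^{-1} l_j
B.append(Polynomial([0.5]))                      # B_{M+1}(x) = 1/2
```

Evaluating these polynomials at the points $\{x_j\}_{j=0}^{N}$ gives the columns of $B^{(-1)}_{y\mapsto x}$; a production code would work in the Chebyshev basis via (2.6) and (2.7) instead.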

The second-order ($m = 2$) Birkhoff-type interpolation problem takes the form: find $p \in \mathcal{P}_{M+2}$ such that

$$p''(y_i) = c_i, \quad i = 0, 1, \ldots, M,$$

together with two linear constraints.

- Given the constraints (3.3) with $\sigma = c_{M+1}$ and (3.4) with $\sigma = c_{M+2}$, where $a \ne b$, we have

$$\begin{aligned} B_j(x) &= \partial_x^{-2}\ell_{j,M}(x) - \frac{b\,x}{b-a}\int_{-1}^{1}\partial_x^{-1}\ell_{j,M}(x)\,dx \\ &\quad + \left(\frac{(a+b)\,x}{2(b-a)} - \frac{1}{2}\right)\int_{-1}^{1}\partial_x^{-2}\ell_{j,M}(x)\,dx, \quad j = 0, 1, \ldots, M, \\ B_{M+1}(x) &= \frac{x}{b-a}, \qquad B_{M+2}(x) = \frac{1}{2} - \frac{(a+b)\,x}{2(b-a)}, \end{aligned}$$

and

$$\begin{aligned} B'_j(x) &= \partial_x^{-1}\ell_{j,M}(x) - \frac{b}{b-a}\int_{-1}^{1}\partial_x^{-1}\ell_{j,M}(x)\,dx \\ &\quad + \frac{a+b}{2(b-a)}\int_{-1}^{1}\partial_x^{-2}\ell_{j,M}(x)\,dx, \quad j = 0, 1, \ldots, M, \\ B'_{M+1}(x) &= \frac{1}{b-a}, \qquad B'_{M+2}(x) = -\frac{a+b}{2(b-a)}. \end{aligned}$$

By (2.6), (2.7), and (2.8), the matrices $B^{(-2)}_{y\mapsto x}$ and $B^{(1-2)}_{y\mapsto x}$ can be computed stably even for thousands of collocation points.

## 4 Preconditioning rectangular spectral collocation

Consider the $m$th-order linear differential equation of the form

$$a_m(x)\,u^{(m)}(x) + \cdots + a_1(x)\,u'(x) + a_0(x)\,u(x) = f(x), \tag{4.1}$$

together with the linear constraints

$$L_i\big(u, \ldots, u^{(m-1)}\big) = c_{M+i}, \qquad i = 1, 2, \ldots, m. \tag{4.2}$$

Let $\{x_j\}_{j=0}^{N}$ (with $N = M + m$) and $\{y_i\}_{i=0}^{M}$ be the points as defined in (2.1) and (2.3), respectively. The rectangular spectral collocation discretization [3] of (4.1) is given by

$$A_{M+1}\,\mathbf{u} = \mathbf{f},$$

where

$$A_{M+1} = \mathrm{diag}\{\mathbf{a}_m\}\,D^{(m)}_{x\mapsto y} + \cdots + \mathrm{diag}\{\mathbf{a}_1\}\,D^{(1)}_{x\mapsto y} + \mathrm{diag}\{\mathbf{a}_0\}\,D^{(0)}_{x\mapsto y}.$$

Here we use boldface letters to indicate a column vector obtained by discretizing the corresponding function at the points $\{y_i\}_{i=0}^{M}$, except for the unknown $\mathbf{u}$, which collects the values of $u$ at the points $\{x_j\}_{j=0}^{N}$. For example,

$$\mathbf{a}_0 = [\,a_0(y_0) \ \ a_0(y_1) \ \ \cdots \ \ a_0(y_M)\,]^T, \qquad \mathbf{f} = [\,f(y_0) \ \ f(y_1) \ \ \cdots \ \ f(y_M)\,]^T.$$

Let

$$\mathbf{L}_m\,\mathbf{u} = \mathbf{c}_m$$

be the discretization of the linear constraints (4.2), satisfying the condition in Theorem 3.1, where

$$\mathbf{c}_m = [\,c_{M+1} \ \ c_{M+2} \ \ \cdots \ \ c_{M+m}\,]^T.$$

The global collocation system is given by

$$A\,\mathbf{u} = \mathbf{g}, \tag{4.3}$$

where

$$A = \begin{bmatrix} A_{M+1} \\ \mathbf{L}_m \end{bmatrix}, \qquad \mathbf{g} = \begin{bmatrix} \mathbf{f} \\ \mathbf{c}_m \end{bmatrix}.$$

Consider the pseudospectral integration matrix $B^{(-m)}_{y\mapsto x}$ of §3 as a right preconditioner for the linear system (4.3). We need to solve the right-preconditioned linear system

$$A\,B^{(-m)}_{y\mapsto x}\,\mathbf{v} = \mathbf{g}.$$

By (see Theorem 3.1)

$$\mathbf{L}_m\,B^{(-m)}_{y\mapsto x} = [\,0 \ \ I_m\,],$$

we have

$$A_{M+1}\,B^{(-m)}_{y\mapsto x}\begin{bmatrix} I_{M+1} \\ 0 \end{bmatrix}\mathbf{v}_{M+1} = \mathbf{f} - A_{M+1}\,B^{(-m)}_{y\mapsto x}\begin{bmatrix} 0 \\ I_m \end{bmatrix}\mathbf{c}_m. \tag{4.4}$$

There hold

$$A_{M+1}\,B^{(-m)}_{y\mapsto x}\begin{bmatrix} I_{M+1} \\ 0 \end{bmatrix} = \mathrm{diag}\{\mathbf{a}_m\} + \mathrm{diag}\{\mathbf{a}_{m-1}\}\,\widetilde{B}^{(m-1-m)}_{y\mapsto y} + \cdots + \mathrm{diag}\{\mathbf{a}_0\}\,\widetilde{B}^{(0-m)}_{y\mapsto y},$$

and

$$A_{M+1}\,B^{(-m)}_{y\mapsto x}\begin{bmatrix} 0 \\ I_m \end{bmatrix} = \mathrm{diag}\{\mathbf{a}_{m-1}\}\,\widehat{B}^{(m-1-m)}_{y\mapsto y} + \cdots + \mathrm{diag}\{\mathbf{a}_0\}\,\widehat{B}^{(0-m)}_{y\mapsto y},$$

where, for $k = 0, 1, \ldots, m-1$,

$$\widetilde{B}^{(k-m)}_{y\mapsto y} = \big[B^{(k)}_j(y_i)\big]_{i,j=0}^{M}, \qquad \widehat{B}^{(k-m)}_{y\mapsto y} = \big[B^{(k)}_j(y_i)\big]_{i=0,\,j=M+1}^{M,\,M+m}.$$

After solving (4.4), we obtain $\mathbf{u}$ by

$$\mathbf{u} = B^{(-m)}_{y\mapsto x}\begin{bmatrix} \mathbf{v}_{M+1} \\ \mathbf{c}_m \end{bmatrix}.$$
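To make the whole pipeline concrete, here is a self-contained Python sketch for $m = 1$: it solves $u' + u = 0$ with the non-separable constraint $u(-1) + u(1) = e + e^{-1}$ (exact solution $u(x) = e^{-x}$). Because each $B_j$ is a polynomial of degree at most $N$, the product $A_{M+1} B^{(-1)}_{y\mapsto x}$ can be evaluated exactly as $[B'_j(y_i) + B_j(y_i)]$, so no explicit differentiation or resampling matrices are needed in this illustration (the helper names are ours, and monomial arithmetic is used only because $M$ is modest):

```python
import numpy as np
from numpy.polynomial import Polynomial

M = 16
N = M + 1                                                        # N = M + m, m = 1
y = -np.cos((2 * np.arange(M + 1) + 1) * np.pi / (2 * M + 2))    # first kind (collocation)
x = -np.cos(np.arange(N + 1) * np.pi / N)                        # second kind (solution grid)

def lagrange(j):
    p = Polynomial([1.0])
    for n in range(M + 1):
        if n != j:
            p = p * Polynomial([-y[n], 1.0]) / (y[j] - y[n])
    return p

# Birkhoff basis for constraint (3.3) with a = b = 1:
# B_j = d^{-1} l_j - (1/2) int_{-1}^1 l_j,  B_{M+1} = 1/2
Bpoly = []
for j in range(M + 1):
    lj = lagrange(j)
    Bpoly.append(lj.integ(lbnd=-1) - 0.5 * lj.integ(lbnd=-1)(1.0))
Bpoly.append(Polynomial([0.5]))

# Preconditioned system (4.4): column j of A B^{(-1)} is B_j'(y) + B_j(y),
# and the constraint row becomes [0 ... 0 1] by Theorem 3.1
top = np.array([[b.deriv()(yi) + b(yi) for b in Bpoly] for yi in y])
bot = np.array([b(-1.0) + b(1.0) for b in Bpoly])
A = np.vstack([top, bot])
g = np.r_[np.zeros(M + 1), np.e + np.exp(-1.0)]                  # f = 0, then sigma
v = np.linalg.solve(A, g)
u = np.array([[b(xi) for b in Bpoly] for xi in x]) @ v           # u = B^{(-1)} v
```

With $M = 16$ the error against $e^{-x}$ is already at the level of rounding, and the condition number of the preconditioned matrix stays moderate as $M$ grows, in line with the experiments of §5.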

## 5 Numerical results

In this section, we compare the rectangular spectral collocation (RSC) scheme (4.3) with the preconditioned rectangular spectral collocation (P-RSC) scheme (4.4). In all computations, the Chebyshev points of the second kind are chosen as the points $\{x_j\}_{j=0}^{N}$ and the Chebyshev points of the first kind are chosen as the points $\{y_i\}_{i=0}^{M}$.

### Example 1

We consider the equation

$$u'(x) + a_0(x)\,u(x) = f(x) \tag{5.1}$$

with the linear constraint

$$u(-1) + u(1) = \sigma, \tag{5.2}$$

or the linear constraint

$$\int_{-1}^{1} u(x)\,dx = \sigma, \tag{5.3}$$

where $\sigma$ is a given constant. We report in Table 1 the condition numbers of the linear systems in RSC and P-RSC for various $N$. We observe that the condition numbers of P-RSC are independent of $N$, while those of RSC behave like $O(N^2)$.

We next consider (5.1) with the linear constraint (5.2). The functions $a_0$, $f$ and the constant $\sigma$ are chosen such that an oscillatory solution of (5.1) is

$$u(x) = 100\,e^{-x^2}\int_{-1}^{x} e^{t^2}\sin(2000\,t^2)\,dt.$$

In Figure 1(a) we plot the exact solution against the numerical solutions obtained by RSC and P-RSC. In Figure 1(b) we plot the maximum point-wise errors of RSC and P-RSC. The results indicate that, for this example, both RSC and P-RSC are stable even for very large $N$.

### Example 2

We consider the equation

$$\varepsilon\,u''(x) - x\,u'(x) - u(x) = f(x) \tag{5.4}$$

with the linear constraints

$$u(-1) - u(1) = \sigma_1, \qquad \int_{-1}^{1} u(x)\,dx = \sigma_2.$$

The function $f$ and the constants $\sigma_1$ and $\sigma_2$ are chosen such that the exact solution of (5.4) is

$$u(x) = \exp\left(\frac{x^2 - 1}{2\varepsilon}\right).$$

In Tables 2-4, we present the condition numbers, the maximum point-wise errors, and the number of iterations of the GMRES algorithm [7] (with a fixed relative tolerance and restart number) for three values of $\varepsilon$. We observe that the condition numbers of P-RSC are independent of $N$, while those of RSC behave like $O(N^4)$.

## 6 Concluding remarks

We have proposed a preconditioned rectangular spectral collocation scheme for $m$th-order ordinary differential equations together with general linear constraints. The condition number of the resulting linear system is typically independent of the number of collocation points, and the linear system can be solved by an iterative solver within a few iterations. The application of the preconditioning scheme to nonlinear problems is straightforward.

## Acknowledgment

We thank Prof. Li-Lian Wang (Nanyang Technological University, Singapore) for providing the MATLAB codes used in [12].

### References

1. J. P. Boyd, Chebyshev and Fourier spectral methods, Dover Publications, Inc., Mineola, NY, second ed., 2001.
2. C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral methods: Fundamentals in single domains, Scientific Computation, Springer-Verlag, Berlin, 2006.
3. T. A. Driscoll and N. Hale, Rectangular spectral collocation, IMA J. Numer. Anal., to appear (2015).
4. B. Fornberg, A practical guide to pseudospectral methods, vol. 1 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 1996.
5. D. Funaro, Polynomial approximation of differential equations, vol. 8 of Lecture Notes in Physics. New Series m: Monographs, Springer-Verlag, Berlin, 1992.
6. P. Henrici, Essentials of numerical analysis with pocket calculator demonstrations, John Wiley & Sons, Inc., New York, 1982.
7. Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
8. H. E. Salzer, Lagrangian interpolation at the Chebyshev points $x_{n,\nu} \equiv \cos(\nu\pi/n)$, $\nu = 0(1)n$; some unnoted advantages, Comput. J., 15 (1972), pp. 156–159.
9. J. Shen, T. Tang, and L.-L. Wang, Spectral methods, vol. 41 of Springer Series in Computational Mathematics, Springer, Heidelberg, 2011. Algorithms, analysis and applications.
10. Y. G. Shi, Theory of Birkhoff interpolation, Nova Science Publishers, Inc., Hauppauge, NY, 2003.
11. L. N. Trefethen, Spectral methods in MATLAB, vol. 10 of Software, Environments, and Tools, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000.
12. L.-L. Wang, M. D. Samson, and X. Zhao, A well-conditioned collocation method using a pseudospectral integration matrix, SIAM J. Sci. Comput., 36 (2014), pp. A907–A929.
13. K. Xu and N. Hale, Explicit construction of rectangular differentiation matrices, IMA J. Numer. Anal., to appear (2015).