Preconditioning rectangular spectral collocation


Abstract

Rectangular spectral collocation (RSC) methods have recently been proposed to solve linear and nonlinear differential equations with general boundary conditions and/or other constraints. The linear systems involved in RSC become extremely ill-conditioned as the number of collocation points increases. By introducing suitable Birkhoff-type interpolation problems, we present pseudospectral integration preconditioning matrices for the ill-conditioned linear systems in RSC. The condition numbers of the preconditioned linear systems are independent of the number of collocation points. Numerical examples are given.

Keywords: Lagrange interpolation, Birkhoff-type interpolation, rectangular spectral collocation, integration preconditioning

AMS subject classifications: 65L60, 41A05, 41A10

1 Introduction

Rectangular spectral collocation methods [3] have recently been demonstrated to be a convenient means of solving problems for which the row replacement or ‘boundary bordering’ strategy of standard spectral collocation methods [4, 11, 1, 2, 9] becomes ambiguous. Specifically, an mth-order differential operator is discretized directly by a rectangular matrix, allowing constraints to be appended to form an invertible square system. However, the resulting linear systems become extremely ill-conditioned as the number of collocation points increases; typically, the condition number grows algebraically. Efficient preconditioners are therefore essential when the linear systems are solved by an iterative method.

Recently, Wang, Samson, and Zhao [12] proposed a well-conditioned collocation method to solve linear differential equations with various types of boundary conditions. By introducing a suitable Birkhoff interpolation problem [10], they constructed a pseudospectral integration preconditioning matrix, which is the exact inverse of the pseudospectral discretization matrix of the mth-order derivative operator together with the boundary conditions. In this paper, we employ a similar idea to construct a pseudospectral integration matrix that is the exact inverse of the discretization matrix arising in the rectangular spectral collocation method for the mth-order derivative operator together with general linear constraints. When this new pseudospectral integration matrix is used as a right preconditioner for an mth-order linear differential operator with the same constraints, the condition number of the resulting linear system is independent of the number of collocation points.

The rest of the paper is organized as follows. In §2, we review several topics required in the following sections. In §3, we introduce the new pseudospectral integration matrix by a suitable Birkhoff-type interpolation problem. In §4, we present the preconditioning rectangular spectral collocation method. Numerical examples are reported in §5. We present brief concluding remarks in §6.

2 Preliminaries

2.1 Barycentric resampling matrix

Let be a set of distinct interpolation points satisfying

(2.1)

The associated barycentric weights are defined by

(2.2)

Let be another set of distinct interpolation points satisfying

(2.3)

The barycentric resampling matrix [3], , which interpolates between the points and , is defined by

where
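As an illustration, the barycentric resampling matrix can be sketched in a few lines of Python. The function names `bary_weights` and `resampling_matrix` are ours (not from [3]), and the sketch assumes the target points are disjoint from the source points:

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def resampling_matrix(x, y):
    """Matrix P with (P u)_i = p(y_i), p the interpolant of u on x."""
    w = bary_weights(x)
    C = w[None, :] / (y[:, None] - x[None, :])   # assumes y_i != x_j
    return C / C.sum(axis=1, keepdims=True)

# Resampling a polynomial of admissible degree is exact:
x = np.cos(np.pi * np.arange(9) / 8)                 # 9 second-kind points
y = np.cos((2 * np.arange(8) + 1) * np.pi / 16)      # 8 first-kind points
P = resampling_matrix(x, y)
u = x**7 - 2 * x**3 + 1                              # degree 7 <= 8
err = np.max(np.abs(P @ u - (y**7 - 2 * y**3 + 1)))
print(err)
```

The final check confirms that resampling reproduces the interpolant exactly, up to rounding, whenever the degree does not exceed the number of source points minus one.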

Lemma 2.1.

If , then

2.2 Pseudospectral differentiation matrices

The Lagrange interpolation basis polynomials of degree associated with the points are defined by

where is the barycentric weight (2.2). Define the pseudospectral differentiation matrices:

There hold

and

The matrix is called a rectangular mth-order differentiation matrix, which maps values of a polynomial defined on one set of points to the values of its mth-order derivative on the other. Explicit formulae and recurrences for rectangular differentiation matrices are given in [13].
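One simple way to realize a rectangular first-order differentiation matrix, though not necessarily the stable construction of [13], is to compose the square barycentric differentiation matrix on the source points with the resampling matrix to the target points. A minimal Python sketch (all names are ours):

```python
import numpy as np

def bary_weights(x):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def diff_matrix(x):
    """Square first-order differentiation matrix via the barycentric formula."""
    w = bary_weights(x)
    D = (w[None, :] / w[:, None]) / (x[:, None] - x[None, :] + np.eye(len(x)))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))   # "negative sum trick" for the diagonal
    return D

def resampling_matrix(x, y):
    w = bary_weights(x)
    C = w[None, :] / (y[:, None] - x[None, :])
    return C / C.sum(axis=1, keepdims=True)

n = 10
x = np.cos(np.pi * np.arange(n + 1) / n)               # n+1 second-kind points
y = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # n first-kind points
D_rect = resampling_matrix(x, y) @ diff_matrix(x)      # maps u on x to u' on y
u, du = x**8, 8 * y**7                                 # degree 8 <= n: exact
err = np.max(np.abs(D_rect @ u - du))
print(err)
```

Differentiation of a polynomial of admissible degree is exact up to rounding, as the check shows.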

2.3 Chebyshev polynomials and Chebyshev points

The most widely used spectral methods for non-periodic problems are those based on Chebyshev polynomials and Chebyshev points. In this paper, we focus on these polynomials and points. However, everything we discuss can be easily generalized to the case of Jacobi polynomials and corresponding points.

The Chebyshev points of the first kind (also known as Gauss-Chebyshev points) are given by

In this case, the Gauss-Chebyshev quadrature weights are given by [5]

and the barycentric weights are given by [6]

Let be the set of all algebraic polynomials of degree at most . We have

(2.4)

The Chebyshev points of the second kind (also known as Gauss-Chebyshev-Lobatto points) are given by

In this case, the Gauss-Chebyshev-Lobatto quadrature weights are given by [5]

and the barycentric weights are given by [8]

where

We have

Let be the Chebyshev polynomials (see, for example, [5]) given by

They are mutually orthogonal:

(2.5)

where

Let denote the Lagrange interpolation basis polynomials of degree associated with the points . The polynomial can be rewritten as

(2.6)

If is a subset of or , can be obtained with ease. For example, suppose that is a proper subset of . Let denote the map such that . Let denote the set such that if then . By (2.4) and (2.5), we have, for

Here , , can be obtained by solving the following linear system

In particular, if denote the Lagrange interpolation basis polynomials of degree associated with , we have

where, for

If denote the Lagrange interpolation basis polynomials of degree associated with , we have

where, for

Define the integral operators:

By

and

we have

(2.7)

and

(2.8)
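Integration formulas of this type rest on the standard antiderivative recurrence for Chebyshev polynomials, d/dx [ T_{n+1}/(2(n+1)) - T_{n-1}/(2(n-1)) ] = T_n for n >= 2, which can be checked quickly with NumPy's Chebyshev module (our choice of tool, not one used in the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 5
Tn = np.zeros(n + 1)
Tn[n] = 1.0                          # Chebyshev coefficients of T_n
# Candidate antiderivative from the recurrence (valid for n >= 2):
rhs = np.zeros(n + 2)
rhs[n + 1] = 1.0 / (2 * (n + 1))
rhs[n - 1] = -1.0 / (2 * (n - 1))
ok = np.allclose(C.chebder(rhs), Tn) # differentiating it recovers T_n
print(ok)                            # True
```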

3 Pseudospectral integration matrices

Given and with , we consider the Birkhoff-type interpolation problem:

where each is a linear functional. Let be the Lagrange interpolation basis polynomials of degree associated with the points . Then the Birkhoff-type interpolation polynomial takes the form

(3.1)

where can be determined by the linear constraints Obviously, the existence and uniqueness of the Birkhoff-type interpolation polynomial is equivalent to that of . After obtaining , we can rewrite (3.1) as

Let and be the points as in (2.1). Define the mth-order pseudospectral integration matrix (PSIM) as:

Define the matrices

It is easy to show that

(3.2)

Let be the discretization of the linear constraints , . We have the following theorem.

Theorem 3.

If for any ,

then

Proof.

The result follows from

and Lemma 2.1.

Now we give concrete examples. Consider the non-separable linear constraint

(3.3)

and the global linear constraint

(3.4)

where , and are given constants. They are straightforward to discretize: for (3.3),

and for (3.4),

where

and is a column vector of Clenshaw-Curtis quadrature weights [5].
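The Clenshaw-Curtis weights can be generated directly from their definition as exact integrals of the Lagrange basis polynomials. The following brute-force sketch does so via NumPy's Chebyshev module (FFT-based algorithms are preferable in practice; all names here are ours):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw_curtis_weights(n):
    """Weights w_k = integral over [-1,1] of the k-th Lagrange basis
    polynomial on the n+1 Chebyshev points of the second kind."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    w = np.empty(n + 1)
    for k in range(n + 1):
        e = np.zeros(n + 1); e[k] = 1.0      # l_k in value space
        a = C.chebfit(x, e, n)               # its Chebyshev coefficients
        A = C.chebint(a)                     # an antiderivative
        w[k] = C.chebval(1.0, A) - C.chebval(-1.0, A)
    return x, w

x, w = clenshaw_curtis_weights(8)
e1 = abs(w.sum() - 2.0)          # integrates 1 exactly
e2 = abs(w @ x**4 - 2.0 / 5.0)   # integrates x^4 exactly (degree 4 <= 8)
print(e1, e2)
```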

The first-order Birkhoff-type interpolation problem takes the form:

  • Given with , we have

  • Given , we have

By (2.6) and (2.7), the matrix can be computed stably even for thousands of collocation points.
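To make the construction concrete, here is a hedged sketch of a first-order PSIM for the homogeneous constraint p(-1) = 0, built column by column as antiderivatives of the Lagrange basis polynomials on the first-kind points. This dense construction is ours and is not the stable evaluation referred to above; it only illustrates the Birkhoff-interpolation idea:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def psim_first_order(n):
    """Sketch of a first-order PSIM Q: given p'(y_j) at the n first-kind
    points and p(-1) = 0, Q recovers p at the n+1 second-kind points."""
    x = np.cos(np.pi * np.arange(n + 1) / n)              # second kind
    y = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # first kind
    Q = np.empty((n + 1, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        a = C.chebfit(y, e, n - 1)    # Lagrange basis l_j on first-kind points
        A = C.chebint(a)              # its antiderivative
        Q[:, j] = C.chebval(x, A) - C.chebval(-1.0, A)
    return x, y, Q

n = 12
x, y, Q = psim_first_order(n)
p = lambda t: t**5 - t**3 + t + 1     # satisfies p(-1) = 0
dp = 5 * y**4 - 3 * y**2 + 1
err = np.max(np.abs(Q @ dp - p(x)))   # Q maps p' on y back to p on x
print(err)
```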

The second-order Birkhoff-type interpolation problem takes the form:

  • Given with , and , we have

    and

By (2.6), (2.7) and (2.8), the matrices and can be computed stably even for thousands of collocation points.

4 Preconditioning rectangular spectral collocation

Consider the th-order differential equations of the form

(4.1)

together with linear constraints

(4.2)

Let (with ) and be the points as defined in (2.1) and (2.3), respectively. The rectangular spectral collocation discretization [3] of (4.1) is given by

where

Here we use boldface letters to indicate a column vector obtained by discretizing at the points except for the unknown . For example,

Let

be the discretization of the linear constraints (4.2) and satisfy the condition in Theorem 3, where

The global collocation system is given by

(4.3)

where

Consider the pseudospectral integration matrix (3) as a right preconditioner for the linear system (4.3). We need to solve the right preconditioned linear system

By (see Theorem 3)

we have

(4.4)

There hold

and

where, for ,

After solving (4.4), we obtain by
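The mechanics of right preconditioning can be illustrated on a generic ill-conditioned system; the matrices below are stand-ins of our own, not the RSC matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
d = np.logspace(0, 8, n)                         # wide spectrum: ill-conditioned
A = np.diag(d) + 0.01 * rng.standard_normal((n, n))
M = np.diag(1.0 / d)                             # cheap approximate inverse of A
cond_A = np.linalg.cond(A)
cond_AM = np.linalg.cond(A @ M)                  # A M is close to the identity
print(cond_A, cond_AM)

# Right preconditioning: solve (A M) y = b, then recover x = M y.
b = A @ np.ones(n)
y = np.linalg.solve(A @ M, b)                    # in practice: GMRES on A M
x = M @ y
err = np.max(np.abs(x - 1.0))
print(err)
```

In practice the well-conditioned operator A M is handed to GMRES and the solution is recovered as x = M y; the direct solve here merely stands in for the iterative solver.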

5 Numerical results

In this section, we compare the rectangular spectral collocation (RSC) scheme (4.3) and the preconditioned rectangular spectral collocation (P-RSC) scheme (4.4). In all computations, the Chebyshev points of the second kind are chosen as and the Chebyshev points of the first kind are chosen as .
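For orientation, a toy instance of the rectangular collocation system (4.3) can be assembled end to end. The example below is our own (not one of the examples that follow): it discretizes u' + u = f on [-1, 1] with u(-1) prescribed, collocating the equation at first-kind points while the unknown lives on second-kind points, and appends the constraint row to close the square system:

```python
import numpy as np

def bary_weights(x):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def resampling_matrix(x, y):
    w = bary_weights(x)
    Cm = w[None, :] / (y[:, None] - x[None, :])
    return Cm / Cm.sum(axis=1, keepdims=True)

def diff_matrix(x):
    w = bary_weights(x)
    D = (w[None, :] / w[:, None]) / (x[:, None] - x[None, :] + np.eye(len(x)))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

n = 24
x = np.cos(np.pi * np.arange(n + 1) / n)              # unknowns live here
y = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # equation collocated here
P = resampling_matrix(x, y)
L = P @ diff_matrix(x) + P                            # discretizes u' + u (n rows)
c = np.zeros(n + 1); c[n] = 1.0                       # constraint row: u(-1); x[n] = -1
A = np.vstack([L, c])                                 # square (n+1) x (n+1) system

exact = lambda t: np.sin(2 * t)
f = 2 * np.cos(2 * y) + np.sin(2 * y)                 # so that u = sin(2x)
rhs = np.concatenate([f, [exact(-1.0)]])
u = np.linalg.solve(A, rhs)
err = np.max(np.abs(u - exact(x)))                    # spectrally small
print(err)
```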

Example 1

We consider the equation

(5.1)

with the linear constraint

(5.2)

or the linear constraint

(5.3)

where is a given constant. We report in Table 1 the condition numbers of the linear systems in RSC and P-RSC for various numbers of collocation points. We observe that the condition numbers of P-RSC are independent of the number of collocation points, while those of RSC grow algebraically.

          Constraint (5.2)    Constraint (5.3)    Constraint (5.2)    Constraint (5.3)
   N      RSC       P-RSC     RSC       P-RSC     RSC       P-RSC     RSC       P-RSC
   128    6.86e+04  3.19      3.37e+04  2.54      3.04e+04  1.95      4.07e+04  1.95
   256    3.87e+05  3.19      1.91e+05  2.54      1.72e+05  1.95      2.29e+05  1.95
   512    2.19e+06  3.19      1.08e+06  2.54      9.68e+05  1.95      1.30e+06  1.95
   1024   1.24e+07  3.19      6.10e+06  2.54      5.47e+06  1.95      7.32e+06  1.95
Table 1: Comparison of condition numbers.


Figure 1: (a) exact solution versus numerical solutions; (b) comparison of numerical errors.

We next consider (5.1) with and the linear constraint (5.2). The function and are chosen such that an oscillatory solution of (5.1) is

In Figure 1(a) we plot the exact solution against the numerical solutions obtained by RSC and P-RSC. In Figure 1(b) we plot the maximum point-wise errors of RSC and P-RSC. For this example, both RSC and P-RSC remain stable even for very large numbers of collocation points.

Example 2

We consider the equation

(5.4)

with the linear constraints

The function , and are chosen such that the exact solution of (5.4) is

          RSC                                P-RSC
   N      Condition  Error     Iterations    Condition  Error     Iterations
   128    1.95e+08   8.41e-10  1000          2.73       6.66e-16  8
   256    4.39e+09   9.23e-09  1000          2.73       6.66e-16  8
   512    9.94e+10   7.84e-08  1000          2.73       8.88e-16  8
   1024   2.25e+12   2.49e-06  1000          2.73       1.11e-15  8
Table 2: Comparison of condition numbers and iterations of different schemes for .

In Tables 2-4, we present the condition numbers, the maximum point-wise errors, and the iteration counts of the GMRES algorithm [7] (with fixed relative tolerance and restart number) for three choices of the parameters, respectively. We observe that the condition numbers of P-RSC are independent of the number of collocation points, while those of RSC grow algebraically.

          RSC                                P-RSC
   N      Condition  Error     Iterations    Condition  Error     Iterations
   128    6.74e+07   2.65e-10  1000          5.11e+02   1.14e-14  16
   256    1.50e+09   5.95e-10  1000          5.11e+02   1.62e-14  16
   512    3.35e+10   4.12e-09  1000          5.11e+02   1.58e-14  16
   1024   7.55e+11   1.69e-07  1000          5.11e+02   1.49e-14  16
Table 3: Comparison of condition numbers and iterations of different schemes for .
          RSC                                P-RSC
   N      Condition  Error     Iterations    Condition  Error     Iterations
   128    4.47e+07   2.23e-10  1000          3.70e+05   3.11e-13  64
   256    9.77e+08   2.20e-09  1000          3.70e+05   1.04e-12  65
   512    2.16e+10   7.95e-09  1000          3.70e+05   1.34e-12  67
   1024   4.84e+11   4.69e-07  1000          3.70e+05   5.35e-13  67
Table 4: Comparison of condition numbers and iterations of different schemes for .

6 Concluding remarks

We have proposed a preconditioning rectangular spectral collocation scheme for mth-order ordinary differential equations with general linear constraints. The condition number of the resulting linear system is typically independent of the number of collocation points, and the system can be solved by an iterative solver within a few iterations. The application of the preconditioning scheme to nonlinear problems is straightforward.

Acknowledgment

We thank Prof. Li-Lian Wang (Nanyang Technological University, Singapore) for providing the MATLAB codes used in [12].

References

  1. J. P. Boyd, Chebyshev and Fourier spectral methods, Dover Publications, Inc., Mineola, NY, second ed., 2001.
  2. C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral methods: Fundamentals in single domains, Scientific Computation, Springer-Verlag, Berlin, 2006.
  3. T. A. Driscoll and N. Hale, Rectangular spectral collocation, IMA Journal of Numerical Analysis, to appear (2015).
  4. B. Fornberg, A practical guide to pseudospectral methods, vol. 1 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 1996.
  5. D. Funaro, Polynomial approximation of differential equations, vol. 8 of Lecture Notes in Physics. New Series m: Monographs, Springer-Verlag, Berlin, 1992.
  6. P. Henrici, Essentials of numerical analysis with pocket calculator demonstrations, John Wiley & Sons, Inc., New York, 1982.
  7. Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
  8. H. E. Salzer, Lagrangian interpolation at the Chebyshev points x_{n,v} = cos(v*pi/n), v = 0(1)n; some unnoted advantages, Comput. J., 15 (1972), pp. 156–159.
  9. J. Shen, T. Tang, and L.-L. Wang, Spectral methods, vol. 41 of Springer Series in Computational Mathematics, Springer, Heidelberg, 2011. Algorithms, analysis and applications.
  10. Y. G. Shi, Theory of Birkhoff interpolation, Nova Science Publishers, Inc., Hauppauge, NY, 2003.
  11. L. N. Trefethen, Spectral methods in MATLAB, vol. 10 of Software, Environments, and Tools, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000.
  12. L.-L. Wang, M. D. Samson, and X. Zhao, A well-conditioned collocation method using a pseudospectral integration matrix, SIAM J. Sci. Comput., 36 (2014), pp. A907–A929.
  13. K. Xu and N. Hale, Explicit construction of rectangular differentiation matrices, IMA Journal of Numerical Analysis, to appear (2015).