
# Decay of the Kolmogorov N-width for wave problems

Constantin Greif and Karsten Urban, Ulm University, Institute of Numerical Mathematics, Helmholtzstr. 20, D-89081 Ulm, Germany
###### Abstract.

The Kolmogorov $N$-width $d_N(\mathcal{M})$ describes the rate of the worst-case error (w.r.t. a subset $\mathcal{M}$ of a normed space $H$) arising from a projection onto the best-possible linear subspace of $H$ of dimension $N \in \mathbb{N}$. Thus, $d_N(\mathcal{M})$ sets a limit to any projection-based approximation such as determined by the reduced basis method. While it is known that $d_N(\mathcal{M})$ decays exponentially fast for many linear coercive parametrized partial differential equations, i.e., $d_N(\mathcal{M}) \le C e^{-\beta N}$, we show in this note that $d_N(\mathcal{M}) \ge \frac14 N^{-1/2}$ for initial-boundary-value problems of the hyperbolic wave equation with discontinuous initial conditions. This is aligned with the known slow decay of $d_N(\mathcal{M})$ for the linear transport problem.

###### Key words and phrases:
Kolmogorov $N$-width, wave equation
###### Mathematics Subject Classification:
41A46, 65D15

## 1. Introduction

The Kolmogorov $N$-width is a classical concept of (nonlinear) approximation theory as it describes the error arising from a projection onto the best-possible space of a given dimension $N \in \mathbb{N}$. This error is measured for a class $\mathcal{M}$ of objects in the sense that the worst error over $\mathcal{M}$ is considered. Here, we focus on subsets $\mathcal{M} \subset H$, where $H$ is some Banach or Hilbert space with norm $\|\cdot\|_H$. Then, the Kolmogorov $N$-width is defined as

$$d_N(\mathcal{M}) := \inf_{\substack{V_N \subset H \\ \dim V_N = N}}\; \sup_{u \in \mathcal{M}}\; \inf_{v_N \in V_N} \|u - v_N\|_H, \tag{1.1}$$

where $V_N$ are linear subspaces of $H$. The corresponding approximation scheme is nonlinear as one is looking for the best possible linear space of dimension $N$. Due to the infimum, the decay of $d_N(\mathcal{M})$ as $N \to \infty$ sets a lower bound for the best possible approximation of all elements in $\mathcal{M}$ by a linear approximation in an $N$-dimensional subspace.
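To make the sup-inf structure of (1.1) concrete, here is a small numerical sketch (ours, not part of the original text): for one fixed subspace $V_N$, the quantity $\sup_{u\in\mathcal{M}}\inf_{v_N\in V_N}\|u-v_N\|$ is simply a worst-case orthogonal-projection error, and $d_N(\mathcal{M})$ is its infimum over all $N$-dimensional subspaces. The helper name `worst_case_error` and the toy set below are our own choices.

```python
import numpy as np

# Worst-case projection error sup_{u in M} inf_{v in V_N} ||u - v||
# for ONE fixed subspace V_N spanned by the orthonormal columns of Q.
# The Kolmogorov N-width is the infimum of this quantity over all
# N-dimensional subspaces, so each evaluation is an upper bound on d_N.
def worst_case_error(M_set, Q):
    # M_set: (num_elements, dim); Q: (dim, N) with orthonormal columns
    residuals = M_set - (M_set @ Q) @ Q.T   # u - P_{V_N} u for every u
    return np.linalg.norm(residuals, axis=1).max()

# Toy example: M = canonical basis of R^4 and V_2 spanned by
# (e1+e2)/sqrt(2), (e3+e4)/sqrt(2), a construction that reappears
# in Lemma 4.1 below (here with N = 2).
M_set = np.eye(4)
Q = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]) / np.sqrt(2.0)
err = worst_case_error(M_set, Q)   # equals 1/sqrt(2) for this choice
```

Evaluating `worst_case_error` for any particular subspace only bounds $d_N$ from above; the infimum over all subspaces is what makes the $N$-width a nonlinear quantity.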

Particular interest arises if the set $\mathcal{M}$ is chosen as a set of solutions of certain equations such as partial differential equations (PDEs), which is the reason why $\mathcal{M}$ is sometimes (even though slightly misleadingly) termed 'solution manifold'. In that setting, one considers a parameterized PDE (PPDE) with a suitable solution $u_\mu$, where the parameter $\mu$ ranges over some parameter set $\mathcal{D}$, i.e., $\mathcal{M} := \{u_\mu : \mu \in \mathcal{D}\}$, where we will skip the dependence of $u_\mu$ on space and time for notational convenience. As a consequence, the decay of the Kolmogorov $N$-width is of particular interest for model reduction in terms of the reduced basis method. There, given a PPDE and a parameter set $\mathcal{D}$, one wishes to construct a possibly optimal linear subspace $V_N$ in an offline phase in order to highly efficiently compute a reduced approximation with $N$ degrees of freedom (in $V_N$) in an online phase. For more details on the reduced basis method, we refer the reader e.g. to the recent surveys [4, 5, 10].

It has been proven that for certain linear, coercive parameterized problems, the Kolmogorov $N$-width decays exponentially fast, i.e.,

$$d_N(\mathcal{M}) \le C e^{-\beta N}$$

with some constants $C > 0$ and $\beta > 0$, see e.g. [2, 8]. This extremely fast decay is at the heart of any model reduction strategy (based upon a projection to $V_N$) since it allows us to choose a very moderate $N$ to achieve small approximation errors. It is worth mentioning that this rate can in fact be achieved numerically by determining $V_N$ by a greedy-type algorithm.

However, the situation changes dramatically when leaving the elliptic and parabolic realm. In fact, it has been proven that $d_N(\mathcal{M})$ decays for certain first-order linear transport problems at most with the rate $N^{-1/2}$. This in turn implies that projection-based approximation schemes for transport problems severely lack efficiency [1, 3]. In this note, we consider hyperbolic problems and show in a similar way that

$$d_N(\mathcal{M}) \ge \tfrac14\, N^{-1/2}$$

(see Thm. 4.5 below) for an example of the second-order wave equation. In Section 2, we describe the Cauchy problem of a second-order wave equation with discontinuous initial conditions and review the distributional solution concept. Section 3 is devoted to the investigation of a corresponding initial-boundary-value problem and Section 4 contains the proof of Thm. 4.5.

## 2. Distributional solution of the wave equation on $\mathbb{R}$

We start by considering the univariate wave equation on the spatial domain $\Omega := \mathbb{R}$ and on the time interval $I := (0,1)$ (i.e., a Cauchy problem) for a real-valued parameter $\mu$ with discontinuous initial values, i.e.,

$$\partial_{tt} u_\mu(t,x) - \mu^2\, \partial_{xx} u_\mu(t,x) = 0 \quad \text{for } (t,x) \in \Omega_I := I \times \Omega, \tag{2.1a}$$
$$u_\mu(0,x) = u_0(x) := \begin{cases} 1, & \text{if } x < 0, \\ -1, & \text{if } x \ge 0, \end{cases} \qquad x \in \Omega, \tag{2.1b}$$
$$\partial_t u_\mu(0,x) = 0, \qquad x \in \Omega. \tag{2.1c}$$

This initial value problem has no classical solution, so we consider a weak solution concept; namely, we look for solutions in the distributional sense, which is known to be appropriate for hyperbolic problems.

###### Lemma 2.1.

A distributional solution of (2.1) is given, for $(t,x) \in \Omega_I$, by
$$u_\mu(t,x) = \frac12\big[u_0(x+\mu t) + u_0(x-\mu t)\big] = \begin{cases} 1, & \text{if } x < -\mu t, \\ -1, & \text{if } x \ge \mu t, \\ 0, & \text{else.} \end{cases}$$

###### Proof.

We start by considering the following initial value problem

$$\partial_{tt} G_\mu(t,x) - \mu^2\, \partial_{xx} G_\mu(t,x) = 0 \quad \text{for } (t,x) \in \Omega_I, \qquad G_\mu(0,x) = 0, \quad \partial_t G_\mu(0,x) = \delta(x), \quad x \in \Omega, \tag{2.2}$$

where $\delta$ denotes Dirac's $\delta$-distribution at $0$. A solution of (2.2) is called fundamental solution (see e.g. [11, Ch. 5]) and can easily be seen to read $G_\mu(t,x) = \frac{1}{2\mu}\big(H(x+\mu t) - H(x-\mu t)\big)$, where $H$ denotes the Heaviside step function with distributional derivative $H' = \delta$. Hence, the distributional derivative of $G_\mu$ w.r.t. $t$ reads

$$\partial_t G_\mu(t,x) = \frac12\big(\delta(x+\mu t) + \delta(x-\mu t)\big) \tag{2.3}$$

and it is obvious that $G_\mu(0,x) = 0$ as well as $\partial_t G_\mu(0,x) = \delta(x)$ for $x \in \Omega$. By using the properties of Dirac's $\delta$-distribution we observe that $\partial_{tt} G_\mu(t,x) = \frac{\mu}{2}\big(\delta'(x+\mu t) - \delta'(x-\mu t)\big)$ and $\partial_{xx} G_\mu(t,x) = \frac{1}{2\mu}\big(\delta'(x+\mu t) - \delta'(x-\mu t)\big)$ in the distributional sense, so that $\partial_{tt} G_\mu - \mu^2 \partial_{xx} G_\mu = 0$. Hence, $G_\mu$ satisfies (2.2).

Now, we consider the original problem (2.1). To this end, the following relation of the fundamental solution $G_\mu$ of (2.2) and the solution $u_\mu$ of (2.1) is well known:

$$u_\mu(t,x) = \int_{\mathbb{R}} \partial_t G_\mu(t,x-y)\, u_\mu(0,y)\, dy + \int_{\mathbb{R}} G_\mu(t,x-y)\, \partial_t u_\mu(0,y)\, dy.$$

Finally, inserting $\partial_t G_\mu$ from (2.3), the initial condition $u_\mu(0,\cdot) = u_0$ in (2.1b), and the initial condition $\partial_t u_\mu(0,\cdot) = 0$ in (2.1c), yields

$$u_\mu(t,x) = \frac12 \int_{\mathbb{R}} \big(\delta(x-y+\mu t) + \delta(x-y-\mu t)\big)\, u_0(y)\, dy = \frac12\big[u_0(x+\mu t) + u_0(x-\mu t)\big] = \begin{cases} 1, & \text{if } x < -\mu t, \\ -1, & \text{if } x \ge \mu t, \\ 0, & \text{else,} \end{cases}$$

which proves the claim. ∎
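As a quick pointwise sanity check of Lemma 2.1 (ours, not part of the paper), the d'Alembert-type formula $u_\mu(t,x) = \frac12[u_0(x+\mu t) + u_0(x-\mu t)]$ can be evaluated away from the characteristics $x = \pm\mu t$ and compared with the three piecewise values derived above; `mu` and the sample points are arbitrary choices.

```python
import numpy as np

# Pointwise check (away from the characteristics x = +-mu*t) that
# u_mu(t,x) = (u0(x + mu*t) + u0(x - mu*t)) / 2 reproduces the
# piecewise values 1, 0, -1 of Lemma 2.1; u0 is the step (2.1b).
def u0(x):
    return np.where(x < 0, 1.0, -1.0)

def u(mu, t, x):
    return 0.5 * (u0(x + mu * t) + u0(x - mu * t))

mu, t = 0.6, 0.5            # sample parameter and time, mu*t = 0.3
left = u(mu, t, -0.9)       # x < -mu*t        -> expect  1
mid = u(mu, t, 0.1)         # -mu*t <= x < mu*t -> expect  0
right = u(mu, t, 0.9)       # x >= mu*t        -> expect -1
```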

## 3. The wave equation on the interval

Let us consider the wave equation (2.1a), but now on the bounded space-time domain $\Omega_I := I \times \Omega$ with $\Omega := (-1,1)$, together with the Dirichlet boundary conditions

$$u_\mu(t,-1) = 1, \qquad u_\mu(t,1) = -1, \qquad \text{for } t \in I := (0,1), \tag{2.1d}$$

and the initial conditions (2.1b), (2.1c). It is readily seen that the functions $\varphi_\mu$ defined by
$$\varphi_\mu(t,x) := \begin{cases} 1, & \text{if } x < -\mu t, \\ -1, & \text{if } x \ge \mu t, \\ 0, & \text{else,} \end{cases} \qquad (t,x) \in \overline\Omega_I, \tag{3.1}$$
for $\mu \in \mathcal{D} := [0,1]$ are contained in the solution manifold of (2.1a-d), i.e.,

$$\{\varphi_\mu : \mu \in \mathcal{D}\} \subset \mathcal{M} \equiv \mathcal{M}(\mathcal{D}) := \{u_\mu : \mu \in \mathcal{D} := [0,1]\} \subset L_2(\Omega_I). \tag{3.2}$$

In fact, by Lemma 2.1, the $\varphi_\mu$ solve (2.1a-c) on $\Omega_I$ and they also satisfy the boundary conditions (2.1d). The next step is the consideration of a specific family of functions to be defined now. For some $M \in \mathbb{N}$ and $1 \le m \le M$, let

$$\psi_{M,m}(t,x) := \begin{cases} 1, & \text{if } x \in \big[-\tfrac{m}{M}t,\, -\tfrac{m-1}{M}t\big), \\ -1, & \text{if } x \in \big[\tfrac{m-1}{M}t,\, \tfrac{m}{M}t\big), \\ 0, & \text{else,} \end{cases} \qquad \text{for } (t,x) \in \overline\Omega_I, \tag{3.3}$$

and we collect all $\psi_{M,m}$, $1 \le m \le M$, in

$$\Psi_M := \{\psi_{M,m} : 1 \le m \le M\}. \tag{3.4}$$

Note that $\Psi_M$ can be generated by

$$\Phi_M := \big\{\varphi_{\frac{m}{M}} : 0 \le m \le M\big\} \subset \{\varphi_\mu : \mu \in \mathcal{D}\}, \tag{3.5}$$

as follows: $\psi_{M,m} = \varphi_{\frac{m-1}{M}} - \varphi_{\frac{m}{M}}$, $1 \le m \le M$, which in fact can be easily seen; see also Figure 1.
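The difference relation $\psi_{M,m} = \varphi_{\frac{m-1}{M}} - \varphi_{\frac{m}{M}}$ can also be spot-checked numerically. The sketch below (our own, using the piecewise definitions (3.1) and (3.3) verbatim) compares both sides on a grid of $\Omega_I$; the parameters `M`, `m`, and the grid are arbitrary choices.

```python
import numpy as np

# Numerical spot-check of psi_{M,m} = phi_{(m-1)/M} - phi_{m/M},
# using the piecewise definitions (3.1) and (3.3) directly.
def phi(mu, t, x):                       # (3.1)
    return np.where(x < -mu * t, 1.0, np.where(x >= mu * t, -1.0, 0.0))

def psi(M, m, t, x):                     # (3.3)
    val = np.zeros_like(x, dtype=float)
    val[(x >= -m / M * t) & (x < -(m - 1) / M * t)] = 1.0
    val[(x >= (m - 1) / M * t) & (x < m / M * t)] = -1.0
    return val

M, m = 4, 2
t = np.full(1000, 0.7)                   # fixed time slice t = 0.7
x = np.linspace(-0.99, 0.99, 1000)       # spatial grid in (-1, 1)
diff = phi((m - 1) / M, t, x) - phi(m / M, t, x)
agree = np.allclose(diff, psi(M, m, t, x))
```

Both sides use the same half-open interval conventions, so the agreement holds even at the jump locations.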

We will see later (Proposition 4.4) that $d_N(\Psi_M) \le 2\, d_N(\Phi_M)$. Moreover, $\|\psi_{M,m}\|_{L_2(\Omega_I)}^2 = \frac1M$ and these functions are pairwise orthogonal, i.e.,

$$(\psi_{M,m_1}, \psi_{M,m_2})_{L_2(\Omega_I)} = \int_0^1\!\!\int_{-1}^1 \psi_{M,m_1}(t,x)\, \psi_{M,m_2}(t,x)\, dx\, dt = \frac1M\, \delta_{m_1,m_2},$$

where $\delta_{m_1,m_2}$ denotes the Kronecker-$\delta$ for $1 \le m_1, m_2 \le M$. Thus,

$$\tilde\Psi_M := \{\tilde\psi_{M,m} : 1 \le m \le M\}, \qquad \tilde\psi_{M,m} := \sqrt{M}\, \psi_{M,m}, \quad 1 \le m \le M, \tag{3.6}$$

is a set of orthonormal functions.
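The Gram identity $(\psi_{M,m_1}, \psi_{M,m_2})_{L_2(\Omega_I)} = \frac1M \delta_{m_1,m_2}$ can be verified by quadrature. The following sketch (ours; the midpoint rule and grid sizes are arbitrary choices) assembles the Gram matrix of $\Psi_M$ on a fine grid of $\Omega_I = (0,1) \times (-1,1)$.

```python
import numpy as np

# Midpoint-rule check of (psi_{M,m1}, psi_{M,m2})_{L2} = delta_{m1,m2}/M
# over Omega_I = (0,1) x (-1,1); supports for different m are disjoint,
# so off-diagonal entries vanish exactly.
def psi(M, m, t, x):                     # piecewise definition (3.3)
    val = np.zeros_like(x, dtype=float)
    val[(x >= -m / M * t) & (x < -(m - 1) / M * t)] = 1.0
    val[(x >= (m - 1) / M * t) & (x < m / M * t)] = -1.0
    return val

M = 3
nt, nx = 800, 800
t = (np.arange(nt) + 0.5) / nt                  # midpoints of (0, 1)
x = -1.0 + 2.0 * (np.arange(nx) + 0.5) / nx     # midpoints of (-1, 1)
T, X = np.meshgrid(t, x, indexing="ij")
w = (1.0 / nt) * (2.0 / nx)                     # quadrature cell area

G = np.zeros((M, M))                            # Gram matrix
for m1 in range(1, M + 1):
    for m2 in range(1, M + 1):
        G[m1 - 1, m2 - 1] = w * np.sum(psi(M, m1, T, X) * psi(M, m2, T, X))
```

Up to the $O(h)$ quadrature error caused by the jump lines, `G` reproduces $\frac1M I$, consistent with the orthonormality of $\tilde\Psi_M$ in (3.6).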

## 4. Kolmogorov N-width of sets of orthonormal elements

Let us start by introducing the notation $\mathcal{V}_N := \{V_N \subset H \text{ linear subspace} : \dim V_N = N\}$, so that the Kolmogorov $N$-width in (1.1) can be rephrased as

$$d_N(\mathcal{M}) = \inf_{V_N \in \mathcal{V}_N}\; \sup_{u \in \mathcal{M}}\; \inf_{v_N \in V_N} \|u - v_N\|_H.$$

We are going to determine either the exact value or lower bounds of $d_N$ for certain sets of functions.

###### Lemma 4.1.

The canonical orthonormal basis $\{e_1, \dots, e_{2N}\}$ of $\mathbb{R}^{2N}$ has the Kolmogorov $N$-width $d_N(\{e_1, \dots, e_{2N}\}) = \frac{1}{\sqrt2}$.

###### Proof.

Let $V_N = \operatorname{span}\{d_1, \dots, d_N\}$, with $\{d_1, \dots, d_N\}$ being an arbitrary set of orthonormal vectors in $\mathbb{R}^{2N}$. Thus, $V_N$ is an arbitrary linear subspace of $\mathbb{R}^{2N}$ of dimension $N$. Then, for any $k \in \{1, \dots, 2N\}$ and the canonical basis vector $e_k$, we get
$$P_{V_N}(e_k) = \sum_{j=1}^N \langle e_k, d_j\rangle\, d_j = \sum_{j=1}^N (d_j)_k\, d_j,$$
where $P_{V_N}$ is the orthogonal projection of $\mathbb{R}^{2N}$ onto $V_N$. Then,

$$\|P_{V_N}(e_k)\|_2^2 = \Big\langle \sum_{j=1}^N (d_j)_k\, d_j,\, \sum_{l=1}^N (d_l)_k\, d_l \Big\rangle = \sum_{j=1}^N (d_j)_k \Big\langle d_j,\, \sum_{l=1}^N (d_l)_k\, d_l \Big\rangle = \sum_{j=1}^N (d_j)_k^2.$$

Next, for the projection error $\sigma_{V_N}(k) := \|e_k - P_{V_N}(e_k)\|_2$ we get (we also refer to [6, 12], where it was proven that $\|I - P\| = \|P\|$ for any idempotent operator $P \ne 0, I$, i.e., $P^2 = P$)

$$\sigma_{V_N}(k)^2 = \|e_k - P_{V_N}(e_k)\|_2^2 = \|P_{V_N}(e_k)\|_2^2 - \big(P_{V_N}(e_k)\big)_k^2 + \big(1 - (P_{V_N}(e_k))_k\big)^2$$
$$= \sum_{j=1}^N (d_j)_k^2 - \Big(\sum_{j=1}^N (d_j)_k^2\Big)^2 + 1 - 2\sum_{j=1}^N (d_j)_k^2 + \Big(\sum_{j=1}^N (d_j)_k^2\Big)^2 = 1 - \sum_{j=1}^N (d_j)_k^2. \tag{4.1}$$

Let us now assume that

$$\sum_{j=1}^N (d_j)_k^2 > \frac12 \quad \text{for all } k \in \{1, \dots, 2N\}. \tag{4.2}$$

Then, we would have that

$$N = \sum_{j=1}^N \|d_j\|_2^2 = \sum_{j=1}^N \sum_{k=1}^{2N} (d_j)_k^2 = \sum_{k=1}^{2N} \sum_{j=1}^N (d_j)_k^2 > 2N \cdot \frac12 = N,$$

which is a contradiction, so that (4.2) must be wrong and we conclude that there exists a $k^* \in \{1, \dots, 2N\}$ such that $\sum_{j=1}^N (d_j)_{k^*}^2 \le \frac12$. This yields by (4.1) that $\sigma_{V_N}(k^*)^2 \ge \frac12$. Since $V_N$ was arbitrary, using this $k^*$ leads us to

$$d_N(\{e_1, \dots, e_{2N}\}) = \inf_{V_N \in \mathcal{V}_N}\, \sup_{k \in \{1, \dots, 2N\}}\, \inf_{v \in V_N} \|e_k - v\|_2 \ge \inf_{V_N \in \mathcal{V}_N} \sigma_{V_N}(k^*) \ge \frac{1}{\sqrt2}.$$

To show equality, we consider $V_N := \operatorname{span}\{d_1, \dots, d_N\}$ generated by the orthonormal vectors $d_j := \frac{1}{\sqrt2}(e_{2j-1} + e_{2j})$, $1 \le j \le N$. Then, for any even $k$ (and analogously for odd $k$) we get by (4.1) that

$$\sigma_{V_N}(k)^2 = 1 - \sum_{j=1}^N (d_j)_k^2 = 1 - \Big(\frac{1}{\sqrt2}(e_{k-1} + e_k)\Big)_k^2 = 1 - \Big(\frac{1}{\sqrt2}\Big)^2 = \frac12,$$

which proves the claim. ∎
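Both halves of Lemma 4.1 can be illustrated empirically (our sketch, not part of the proof): the pairing subspace spanned by $d_j = \frac{1}{\sqrt2}(e_{2j-1} + e_{2j})$ attains the worst-case error $\frac{1}{\sqrt2}$, and randomly drawn $N$-dimensional subspaces never beat it, in line with the lower bound; the helper name and the sample count are arbitrary choices.

```python
import numpy as np

# Worst-case error sup_k inf_{v in V_N} ||e_k - v|| over the canonical
# basis of R^{2N}, for a subspace with orthonormal basis Q.
def worst_case_error(Q):
    E = np.eye(Q.shape[0])
    residuals = E - (E @ Q) @ Q.T
    return np.linalg.norm(residuals, axis=1).max()

N = 5
# Optimal subspace from the proof: d_j = (e_{2j-1} + e_{2j}) / sqrt(2).
Q_opt = np.zeros((2 * N, N))
for j in range(N):
    Q_opt[2 * j, j] = Q_opt[2 * j + 1, j] = 1.0 / np.sqrt(2.0)

# Random N-dimensional subspaces: by Lemma 4.1 none can do better
# than 1/sqrt(2).
rng = np.random.default_rng(0)
random_errors = []
for _ in range(200):
    A = rng.standard_normal((2 * N, N))
    Q, _ = np.linalg.qr(A)          # orthonormal basis of a random subspace
    random_errors.append(worst_case_error(Q))
```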

###### Remark 4.2.

We note that, more generally, for $K \ge N$ it holds that $d_N(\{e_1, \dots, e_K\}) = \sqrt{1 - \frac{N}{K}}$, which can easily be proven following the above lines.

Having these preparations at hand, we can now estimate the Kolmogorov $N$-width for arbitrary orthonormal sets in Hilbert spaces.

###### Lemma 4.3.

Let $H$ be an infinite-dimensional Hilbert space and $\tilde\Psi := \{\tilde\psi_1, \dots, \tilde\psi_{2N}\} \subset H$ any orthonormal set of size $2N$. Then, $d_N(\tilde\Psi) \ge \frac{1}{\sqrt2}$.

###### Proof.

Since $\tilde\Psi \subset S := \operatorname{span}\,\tilde\Psi$, we can consider the subspace $S$ instead of the whole space $H$. The space $S$ with norm $\|\cdot\|_H$ can be isometrically mapped to $\mathbb{R}^{2N}$ with canonical orthonormal basis $\{e_1, \dots, e_{2N}\}$ and Euclidean norm $\|\cdot\|_2$. In fact, defining the map $f : S \to \mathbb{R}^{2N}$ by $f(v) := \sum_{i=1}^{2N} (v, \tilde\psi_i)_H\, e_i$, for $v, w \in S$ we get

$$\|f(w) - f(v)\|_2^2 = \Big\|\sum_{i=1}^{2N} (w - v, \tilde\psi_i)_H\, e_i\Big\|_2^2 = \sum_{i=1}^{2N} (w - v, \tilde\psi_i)_H^2\, \|e_i\|_2^2 = \sum_{i=1}^{2N} (w - v, \tilde\psi_i)_H^2$$
$$= \sum_{i=1}^{2N} (w - v, \tilde\psi_i)_H^2\, \|\tilde\psi_i\|_H^2 = \Big\|\sum_{i=1}^{2N} (w - v, \tilde\psi_i)_H\, \tilde\psi_i\Big\|_H^2 = \|w - v\|_H^2.$$

Choosing $v := \tilde\psi_i$, we have $f(\tilde\psi_i) = e_i$. Thus, Lemma 4.1 yields $d_N(\tilde\Psi) = d_N(\{e_1, \dots, e_{2N}\}) \ge \frac{1}{\sqrt2}$, which proves the claim. ∎
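The isometry used in Lemma 4.3 is easy to check numerically. In the sketch below (ours; $H$ is replaced by $\mathbb{R}^{50}$, but any Hilbert space behaves the same way), the coefficient map $f(v) = \sum_i (v, \tilde\psi_i)_H\, e_i$ preserves norms on the span of the orthonormal set and sends each $\tilde\psi_i$ to $e_i$.

```python
import numpy as np

# Isometry check for Lemma 4.3: with orthonormal columns Psi playing the
# role of the psi_i, the coefficient map f(v) = Psi^T v preserves norms
# on span(Psi) and maps psi_i to the canonical basis vector e_i.
rng = np.random.default_rng(1)
dim, twoN = 50, 8
A = rng.standard_normal((dim, twoN))
Psi, _ = np.linalg.qr(A)            # orthonormal columns: the psi_i
c = rng.standard_normal(twoN)
v = Psi @ c                         # an element of span(psi_1, ..., psi_2N)
f_v = Psi.T @ v                     # image under f: just the coefficients
e3 = Psi.T @ Psi[:, 2]              # image of psi_3, should be e_3
```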

###### Proposition 4.4.

Let $\mathcal{M}$ be the solution manifold of (2.1a-d) in (3.2) and let $\Psi_M$, $\Phi_M$ be defined in (3.4), (3.5), $M \in \mathbb{N}$. Then, $d_N(\mathcal{M}) \ge d_N(\Phi_M) \ge \frac12\, d_N(\Psi_M)$ for $N \in \mathbb{N}$.

###### Proof.

By (3.2), we have $\Phi_M \subset \mathcal{M}$, so that the first inequality is immediate. For the proof of the second inequality, we use the abbreviation $\|\cdot\| := \|\cdot\|_{L_2(\Omega_I)}$. First, we denote some optimizing spaces and functions,

$$V_N^{\Psi_M} := \operatorname*{arg\,inf}_{V_N \in \mathcal{V}_N}\, \sup_{\psi \in \Psi_M}\, \inf_{v \in V_N} \|\psi - v\|, \qquad \psi_{M,m^*} := \operatorname*{arg\,sup}_{\psi \in \Psi_M}\, \inf_{v \in V_N^{\Psi_M}} \|\psi - v\|,$$
$$V_N^m := \operatorname*{arg\,inf}_{V_N \in \mathcal{V}_N}\, \inf_{v \in V_N} \big\|\varphi_{\frac{m}{M}} - v\big\|, \qquad v_m := \operatorname*{arg\,inf}_{v \in V_N^m} \big\|\varphi_{\frac{m}{M}} - v\big\|.$$

With those notations, we get

$$d_N(\Psi_M) = \inf_{V_N \in \mathcal{V}_N}\, \sup_{\psi \in \Psi_M}\, \inf_{v \in V_N} \|\psi - v\| = \inf_{v \in V_N^{\Psi_M}} \|\psi_{M,m^*} - v\|$$
$$\le \|\psi_{M,m^*} - (v_{m^*-1} - v_{m^*})\| = \big\|\big(\varphi_{\frac{m^*-1}{M}} - \varphi_{\frac{m^*}{M}}\big) - (v_{m^*-1} - v_{m^*})\big\|$$
$$\le \big\|\varphi_{\frac{m^*-1}{M}} - v_{m^*-1}\big\| + \big\|\varphi_{\frac{m^*}{M}} - v_{m^*}\big\| = \inf_{v \in V_N^{m^*-1}} \big\|\varphi_{\frac{m^*-1}{M}} - v\big\| + \inf_{v \in V_N^{m^*}} \big\|\varphi_{\frac{m^*}{M}} - v\big\|$$
$$= \inf_{V_N \in \mathcal{V}_N}\, \inf_{v \in V_N} \big\|\varphi_{\frac{m^*-1}{M}} - v\big\| + \inf_{V_N \in \mathcal{V}_N}\, \inf_{v \in V_N} \big\|\varphi_{\frac{m^*}{M}} - v\big\| \le \inf_{v \in W_N} \big\|\varphi_{\frac{m^*-1}{M}} - v\big\| + \inf_{v \in W_N} \big\|\varphi_{\frac{m^*}{M}} - v\big\|,$$

where $W_N := \operatorname*{arg\,inf}_{V_N \in \mathcal{V}_N} \big( \inf_{v \in V_N} \|\varphi_{\frac{m^*-1}{M}} - v\| + \inf_{v \in V_N} \|\varphi_{\frac{m^*}{M}} - v\| \big) \in \mathcal{V}_N$. This gives

$$\inf_{v \in W_N} \big\|\varphi_{\frac{m^*-1}{M}} - v\big\| + \inf_{v \in W_N} \big\|\varphi_{\frac{m^*}{M}} - v\big\| = \inf_{V_N \in \mathcal{V}_N} \Big( \inf_{v \in V_N} \big\|\varphi_{\frac{m^*-1}{M}} - v\big\| + \inf_{v \in V_N} \big\|\varphi_{\frac{m^*}{M}} - v\big\| \Big)$$
$$\le \inf_{V_N \in \mathcal{V}_N} \Big( 2 \sup_{\varphi \in \Phi_M}\, \inf_{v \in V_N} \|\varphi - v\| \Big) = 2\, d_N(\Phi_M),$$

which proves the second inequality. ∎

We can now prove the main result of this note.

###### Theorem 4.5.

For $\mathcal{M}$ being defined as in (3.2), we have that $d_N(\mathcal{M}) \ge \frac14\, N^{-1/2}$.

###### Proof.

Using Proposition 4.4 with $M = 2N$ (which in fact maximizes the resulting lower bound) yields $d_N(\mathcal{M}) \ge \frac12\, d_N(\Psi_{2N})$. Since every $V_N \in \mathcal{V}_N$ is a linear space, $d_N$ scales linearly with the set, so that

$$d_N(\Psi_{2N}) = d_N(\{\psi_{2N,n} : 1 \le n \le 2N\}) = \frac{1}{\sqrt{2N}}\, d_N\big(\{\sqrt{2N}\, \psi_{2N,n} : 1 \le n \le 2N\}\big) = \frac{1}{\sqrt{2N}}\, d_N(\tilde\Psi_{2N}).$$

Applying now Lemma 4.3 to the orthonormal functions $\tilde\Psi_{2N}$ defined in (3.6) gives $d_N(\mathcal{M}) \ge \frac12 \cdot \frac{1}{\sqrt{2N}} \cdot \frac{1}{\sqrt2} = \frac14\, N^{-1/2}$, which completes the proof. ∎

Theorem 4.5 shows the same decay of $d_N(\mathcal{M})$ as for linear advection problems, namely $N^{-1/2}$. Thus, transport and hyperbolic parametrized problems are expected to admit a significantly slower decay than certain elliptic and parabolic problems, as mentioned in the introduction. We note that this result is not limited to the specific discontinuous initial conditions (2.1b). In fact, also for continuous initial conditions with a smoothed 'jump', one can construct similar orthogonal functions like (3.3) yielding the slow decay result.
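The slow decay can also be observed empirically. The sketch below (ours, and only an illustration: singular values measure a mean-square error rather than the worst-case error in (1.1), so this is not a proof) samples snapshots of $\varphi_\mu$ from (3.1) for many parameters $\mu \in [0,1]$ and inspects the singular value decay of the snapshot matrix; exponential $N$-width decay would force an exponential drop, whereas this jump family decays only algebraically.

```python
import numpy as np

# Snapshot-based illustration of Theorem 4.5: singular values of a
# snapshot matrix of phi_mu decay only algebraically (roughly like 1/k
# for this jump family), not exponentially as for coercive problems.
def phi(mu, T, X):                       # piecewise solution (3.1)
    return np.where(X < -mu * T, 1.0, np.where(X >= mu * T, -1.0, 0.0))

nt, nx, nmu = 100, 400, 64
t = np.linspace(0.0, 1.0, nt)
x = np.linspace(-1.0, 1.0, nx)
T, X = np.meshgrid(t, x, indexing="ij")
mus = np.linspace(0.0, 1.0, nmu)         # parameter samples in D = [0, 1]

# Rows = flattened space-time snapshots, one per parameter value.
S = np.stack([phi(mu, T, X).ravel() for mu in mus])
sv = np.linalg.svd(S, compute_uv=False)  # singular values, descending
ratio = sv[20] / sv[0]                   # stays far above e^{-20} ~ 2e-9
```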

## References

- [1] J. Brunken, K. Smetana, and K. Urban. (Parametrized) First Order Transport Equations: Realization of Optimally Stable Petrov–Galerkin Methods. SIAM J. Sci. Comput., 41(1):A592–A621, 2019.
- [2] A. Buffa, Y. Maday, A. T. Patera, C. Prud'homme, and G. Turinici. A priori convergence of the greedy algorithm for the parametrized reduced basis method. ESAIM Math. Model. Numer. Anal., 46(3):595–603, 2012.
- [3] W. Dahmen, C. Plesken, and G. Welper. Double greedy algorithms: reduced basis methods for transport dominated problems. ESAIM Math. Model. Numer. Anal., 48(3):623–663, 2014.
- [4] B. Haasdonk. Reduced Basis Methods for Parametrized PDEs — A Tutorial. In P. Benner, A. Cohen, M. Ohlberger, and K. Willcox, editors, Model Reduction and Approximation, chapter 2, pages 65–136. SIAM, Philadelphia, 2017.
- [5] J. S. Hesthaven, G. Rozza, and B. Stamm. Certified Reduced Basis Methods for Parametrized Partial Differential Equations. Springer International Publishing, 2016.
- [6] T. Kato. Estimation of iterated matrices, with application to the von Neumann condition. Numer. Math., 2:22–29, 1960.
- [7] D. Landers and L. Rogge. Nichtstandard Analysis. Springer-Lehrbuch [Springer Textbook]. Springer-Verlag, Berlin, 1994.
- [8] M. Ohlberger and S. Rave. Reduced basis methods: Success, limitations and future challenges. Proceedings of the Conference Algoritmy, pages 1–12, 2016.
- [9] A. Pinkus. $n$-Widths in Approximation Theory, volume 7 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Springer-Verlag, Berlin, 1985.
- [10] A. Quarteroni, A. Manzoni, and F. Negri. Reduced Basis Methods for Partial Differential Equations: An Introduction. Springer International Publishing, Cham; Heidelberg, 2016.
- [11] M. Renardy and R. C. Rogers. An Introduction to Partial Differential Equations, volume 13 of Texts in Applied Mathematics. Springer-Verlag, New York, second edition, 2004.
- [12] J. Xu and L. Zikatanov. Some observations on Babuška and Brezzi theories. Numer. Math., 94(1):195–202, 2003.