
# Discretization of the Poisson equation with non-smooth data and emphasis on non-convex domains

*The work was partially supported by Deutsche Forschungsgemeinschaft, priority program 1253 and IGDK 1754.*

Thomas Apel thomas.apel@unibw.de, Universität der Bundeswehr München, Institut für Mathematik und Bauinformatik, D-85579 Neubiberg, Germany    Serge Nicaise snicaise@univ-valenciennes.fr, LAMAV, Institut des Sciences et Techniques de Valenciennes, Université de Valenciennes et du Hainaut Cambrésis, B.P. 311, 59313 Valenciennes Cedex, France    Johannes Pfefferer johannes.pfefferer@unibw.de, Universität der Bundeswehr München, Institut für Mathematik und Bauinformatik, D-85579 Neubiberg, Germany
###### Abstract

Several approaches are discussed how to understand the solution of the Dirichlet problem for the Poisson equation when the Dirichlet data are non-smooth, such as if they are in $L^2(\Gamma)$ only. For the method of transposition (sometimes called very weak formulation) three spaces for the test functions are considered, and a regularity result is proved. An approach of Berggren is recovered as the method of transposition with the second variant of test functions. A further concept is the regularization of the boundary data combined with the weak solution of the regularized problem. The effect of the regularization error is studied.

The regularization approach is the simplest to discretize. The discretization error is estimated for a sequence of quasi-uniform meshes. Since this approach turns out to be equivalent to Berggren’s discretization, his error estimates are rendered more precise. Numerical tests show that the error estimates are sharp, in particular that the order becomes arbitrarily small when the maximal interior angle of the domain tends to $2\pi$.

Keywords: elliptic boundary value problem, method of transposition, very weak formulation, finite element method, discretization error estimate

AMS subject classification: 65N30; 65N15

## 1 Introduction

The motivation for this paper is to consider the boundary value problem

$$-\Delta y = f \ \text{ in } \Omega, \qquad y = u \ \text{ on } \Gamma:=\partial\Omega, \tag{1.1}$$

with right hand side $f$ and boundary data $u$. We assume $\Omega\subset\mathbb{R}^2$ to be a bounded polygonal domain with boundary $\Gamma$. Such problems arise in optimal control when the Dirichlet boundary control is considered in $L^2(\Gamma)$ only, see for example the papers by Deckelnick, Günther, and Hinze, French and King, May, Rannacher, and Vexler, and Apel, Mateos, Pfefferer, and Rösch. On the continuous level we even admit more irregular data.
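As a concrete point of reference, the following minimal numerical sketch (our own illustration, not part of the paper) solves problem (1.1) on the unit square with *smooth* data, using the classical five-point finite-difference stencil and Gauss–Seidel iteration; for non-smooth $u$ the point of the paper is precisely that such standard discretizations need the justification developed below. All names (`poisson_dirichlet`, `sweeps`) are our own choices.

```python
import numpy as np

def poisson_dirichlet(n, f, u, sweeps=4000):
    # five-point stencil on the unit square, mesh size h = 1/n;
    # Gauss-Seidel iteration on the interior nodes
    h = 1.0 / n
    xs = np.linspace(0.0, 1.0, n + 1)
    Y = np.zeros((n + 1, n + 1))
    for i in range(n + 1):          # impose the Dirichlet data on all four edges
        Y[i, 0] = u(xs[i], 0.0)
        Y[i, n] = u(xs[i], 1.0)
        Y[0, i] = u(0.0, xs[i])
        Y[n, i] = u(1.0, xs[i])
    for _ in range(sweeps):
        for i in range(1, n):
            for j in range(1, n):
                Y[i, j] = 0.25 * (Y[i - 1, j] + Y[i + 1, j]
                                  + Y[i, j - 1] + Y[i, j + 1]
                                  + h * h * f(xs[i], xs[j]))
    return xs, Y

# harmonic test: with f = 0 and u(x, y) = x^2 - y^2 the exact solution is u
# itself, and the five-point scheme reproduces it at the grid points
xs, Y = poisson_dirichlet(8, lambda x, y: 0.0, lambda x, y: x * x - y * y)
err = np.abs(Y - (xs[:, None] ** 2 - xs[None, :] ** 2)).max()
```

Since $x^2-y^2$ is harmonic and the stencil is exact on quadratics, the computed error is essentially the iteration error only.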

In Section 2 we analyze several ways how to understand the solution of the boundary value problem (1.1) for which we cannot expect a weak solution $y\in H^1(\Omega)$. The most popular method to solve problem (1.1) is the transposition method that goes back to Lions and Magenes and that is based on the use of some integration by parts. This formally leads to the very weak formulation: Find $y\in Y$ such that

$$(y,\Delta v)_\Omega=(u,\partial_n v)_\Gamma-(f,v)_\Omega\quad\forall v\in V \tag{1.2}$$

with $(\cdot,\cdot)$ denoting the $L^2$ scalar product or an appropriate duality product. The main issue is to find the appropriate trial space $Y$ and test space $V$. In the convex case, it turns out that a good choice is $Y=L^2(\Omega)$ and $V=H^2(\Omega)\cap H^1_0(\Omega)$. This case is investigated by many authors, some of them are mentioned in Subsection 2.1, but we will see that such a choice is not appropriate in the non-convex case (loss of uniqueness) and present some remedies (enlarged test spaces) in Subsections 2.2 and 2.3. We will further show in Subsection 2.2 that in the correct setting the very weak solution corresponds to the one obtained by an integral equation technique and hence possesses $H^{1/2}(\Omega)$ regularity.

The main drawback of the very weak formulation is the fact that a conforming discretization of the test space would have to be made by $C^1$-elements. Hence Berggren proposed to introduce two new variables, $\varphi:=-\Delta v$ and $\zeta:=\partial_n v$, allowing to perform a simpler numerical analysis, see Subsection 2.4. In Subsection 2.5 we propose another method that consists of regularizing the Dirichlet datum $u$ by approximating it by a sequence of functions $u_h$ in $H^{1/2}(\Gamma)$, using for example an interpolation operator. This allows us to compute a sequence of weak solutions $y_h\in H^1(\Omega)$, and we show that they converge to the very weak solution $y$ with an explicit convergence rate.

A negative result about the well-posedness of the weak formulation with $L^2(\Gamma)$-data completes the discussion on the continuous level.

Section 3 is devoted to the numerical analysis. We start with Berggren’s numerical approach and recall his error estimates in Subsection 3.1 for completeness. Next we perform in Subsection 3.2 a numerical analysis of our regularization approach and prove error estimates for the piecewise linear approximation on a family of conforming, quasi-uniform finite element meshes. Notice that it turns out that on the discrete level Berggren’s approach is a particular case of our regularization strategy. The convergence order obtained in the convex case is reduced in the non-convex case. This reduction can be explained by the singular behaviour of the solution of the dual problem. In a companion paper we investigate the singular complement method to remedy the suboptimality of the standard finite element method in non-convex domains.

Finally, in Section 4 we present numerical tests in order to illustrate that our error estimates are sharp. The paper ends with some remarks about the three-dimensional case and about data with different regularity than assumed above.

## 2 Analysis of the boundary value problem

In this section we analyze several ways how to understand the solution of the boundary value problem (1.1) for which we cannot expect a weak solution $y\in H^1(\Omega)$.

For keeping the notation succinct we assume that the polygonal domain $\Omega$ has at most one non-convex corner, with interior angle denoted by $\omega$. Let $(r,\theta)$ be the corresponding polar coordinates and define $\lambda:=\pi/\omega$. The boundary segments of $\Gamma$ are denoted by $\Gamma_j$, $j=1,\ldots,N$.

### 2.1 Method of transposition in convex polygonal domains

The method of transposition goes back at least to Lions and Magenes, and is used by several other authors including French and King, Casas and Raymond, Deckelnick, Günther, and Hinze, and May, Rannacher, and Vexler. Since by partial integration the derivation

$$(f,v)_\Omega=-(\Delta y,v)_\Omega=(\nabla y,\nabla v)_\Omega \qquad\text{for } v\in H^1_0(\Omega),$$
$$(\nabla y,\nabla v)_\Omega=(y,\partial_n v)_\Gamma-(y,\Delta v)_\Omega \qquad\text{for } v\in H^2(\Omega)\cap H^1_0(\Omega),$$

is valid we get the very weak formulation: Find

$$y\in L^2(\Omega):\quad (y,\Delta v)_\Omega=(u,\partial_n v)_\Gamma-(f,v)_\Omega\quad\forall v\in H^2(\Omega)\cap H^1_0(\Omega). \tag{2.1}$$

May, Rannacher, and Vexler proved the existence of a solution in $L^2(\Omega)$ in the case of a convex polygonal domain $\Omega$.

###### Lemma 2.1.

If the domain $\Omega$ is convex there exists a unique solution of problem (2.1) that satisfies the a priori estimate

$$\|y\|_{L^2(\Omega)}\le c\Big(\|u\|_{\big(\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)\big)'}+\|f\|_{\big(H^2(\Omega)\cap H^1_0(\Omega)\big)'}\Big) \tag{2.2}$$

provided that $u\in\big(\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)\big)'$ and $f\in\big(H^2(\Omega)\cap H^1_0(\Omega)\big)'$.

Recall that $H^{1/2}_{00}(\Gamma_j)$ is the space of functions whose extension by zero to $\Gamma$ is in $H^{1/2}(\Gamma)$.

###### Proof.

For being self-contained we sketch here the proof of [17, Lemma 2.1]. The idea is first to assume more regular data such that a weak solution $y\in H^1(\Omega)$ exists, then to show (2.2), and finally to use a standard density argument, since such regular data are dense in the data spaces above.

The estimate (2.2) is proven by using a duality argument. Due to the convexity of the domain there exists a solution $w\in H^2(\Omega)\cap H^1_0(\Omega)$ of the auxiliary problem

$$-\Delta w = y \ \text{ in } \Omega, \qquad w = 0 \ \text{ on } \Gamma, \tag{2.3}$$

such that

$$\|y\|^2_{L^2(\Omega)}=(y,-\Delta w)_\Omega=(f,w)_\Omega-(u,\partial_n w)_\Gamma\le c\Big(\|u\|_{(\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j))'}+\|f\|_{(H^2(\Omega)\cap H^1_0(\Omega))'}\Big)\|w\|_{H^2(\Omega)},$$

where one uses that the mapping $w\mapsto\partial_n w$, $H^2(\Omega)\cap H^1_0(\Omega)\to\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)$, is surjective due to [13, Thm. 1.5.2.8]. The desired estimate (2.2) is then obtained by using the a priori estimate $\|w\|_{H^2(\Omega)}\le c\,\|y\|_{L^2(\Omega)}$ and by division by $\|y\|_{L^2(\Omega)}$. ∎

The method of proof of this lemma will be revisited in Section 2.5, where we start the discussion of the regularization approach. However, there we will work with different function spaces such that non-convex domains are included in the theory as well. The given proof of Lemma 2.1 is even restricted to convex domains since the isomorphism

$$\Delta w\in L^2(\Omega),\ w|_\Gamma=0\quad\Longleftrightarrow\quad w\in H^2(\Omega)\cap H^1_0(\Omega)$$

is used. If the domain is non-convex one loses at least the uniqueness of the solution of (2.1). For example, take the sector $\Omega=\{(r\cos\theta,r\sin\theta):\,0<r<1,\ 0<\theta<\omega\}$ with $\omega\in(\pi,2\pi)$ and $\lambda=\pi/\omega$; then both $y_1=r^{\lambda}\sin(\lambda\theta)$ and $y_2=r^{-\lambda}\sin(\lambda\theta)$ are harmonic in $\Omega$ and vanish on $\theta=0$ and $\theta=\omega$. Both satisfy (2.1) with $f=0$ and the same boundary datum, since $y_1=y_2=\sin(\lambda\theta)$ on the arc $r=1$. Hence, one needs a larger test space in order to rule out $y_2$.

### 2.2 Method of transposition in general polygonal domains

In a first instance we replace the test space $H^2(\Omega)\cap H^1_0(\Omega)$ by

$$V:=H^1_0(\Omega)\cap H^1_\Delta(\Omega)\qquad\text{with}\qquad H^1_\Delta(\Omega):=\{v\in H^1(\Omega):\,\Delta v\in L^2(\Omega)\}. \tag{2.4}$$

Since $V\subset H^1_0(\Omega)$, the graph norm in $H^1_\Delta(\Omega)$, that is $\big(\|v\|^2_{H^1(\Omega)}+\|\Delta v\|^2_{L^2(\Omega)}\big)^{1/2}$, is equivalent on $V$ to $\|\Delta v\|_{L^2(\Omega)}$, such that we will use henceforth

$$\|v\|_V=\|\Delta v\|_{L^2(\Omega)}.$$

Furthermore, let us denote by $V^\Gamma$ the space of normal derivatives $\partial_n v$ of functions $v\in V$. According to [13, Theorem 1.5.3.10] this space is well-defined and a subspace of $L^2(\Gamma)$. The natural norm in $V^\Gamma$ is given by

$$\|g\|_{V^\Gamma}:=\inf\{\|v\|_V:\,v\in V,\ \partial_n v=g\}.$$

In particular, we have for $v\in V$ that

$$\|\partial_n v\|_{V^\Gamma}\le\|v\|_V. \tag{2.5}$$

Since the previous definitions of the spaces $V$ and $V^\Gamma$ are rather formal let us discuss the structure of these spaces.

###### Remark 2.2.

The spaces $V$ and $V^\Gamma$ can be characterized as follows:

1. If $\Omega$ is convex the spaces $V$ and $H^2(\Omega)\cap H^1_0(\Omega)$ coincide. However, in non-convex domains the situation is different. In this case there is the splitting

$$V=\big(H^2(\Omega)\cap H^1_0(\Omega)\big)\oplus\operatorname{Span}\{\xi(r)\,r^{\lambda}\sin(\lambda\theta)\},$$

where $\xi$ denotes a smooth cut-off function which is equal to one in a neighborhood of the non-convex corner. For more details we refer to [14, Sections 1.5, 2.3 and 2.4] and [13, Theorem 4.4.3.7].

2. The mapping $w\mapsto\partial_n w$, $H^2(\Omega)\cap H^1_0(\Omega)\to\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)$, is surjective due to [13, Thm. 1.5.2.8]. Accordingly, in the convex case $V^\Gamma$ is just $\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)$, whereas in the non-convex case there holds

$$V^\Gamma=\Big(\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)\Big)\oplus\operatorname{Span}\{\xi(r)\,r^{\lambda-1}\}.$$
3. In the non-convex case there is $\xi(r)\,r^{\lambda-1}\in H^s(\Gamma)$ for $s<\lambda-1/2$ but not for $s\ge\lambda-1/2$. This implies $V^\Gamma\not\subset\prod_{j=1}^N H^{1/2}_{00}(\Gamma_j)$ and $L^2(\Gamma)\hookrightarrow (V^\Gamma)'$.

###### Lemma 2.3.

Let $u\in (V^\Gamma)'$ and $f\in V'$. Then there exists a unique solution

$$y\in L^2(\Omega):\quad (y,\Delta v)_\Omega=(u,\partial_n v)_\Gamma-(f,v)_\Omega\quad\forall v\in V \tag{2.6}$$

with

$$\|y\|_{L^2(\Omega)}\le\|u\|_{(V^\Gamma)'}+\|f\|_{V'}.$$
###### Proof.

The proof of this lemma is based on the Babuška–Lax–Milgram theorem. Due to (2.5) the right hand side of (2.6) defines a linear functional on $V$. Moreover, the bilinear form $(y,\Delta v)_\Omega$ is bounded on $L^2(\Omega)\times V$. The inf-sup conditions are proved by using the isomorphism

$$\Delta v\in L^2(\Omega),\ v|_\Gamma=0\quad\Longleftrightarrow\quad v\in V.$$

In particular, we obtain by taking $y=\Delta v$

$$\sup_{y\in L^2(\Omega)}\frac{|(y,\Delta v)_\Omega|}{\|y\|_{L^2(\Omega)}}\ \ge\ \frac{(\Delta v,\Delta v)_\Omega}{\|\Delta v\|_{L^2(\Omega)}}=\|\Delta v\|_{L^2(\Omega)}=\|v\|_V,$$

and by taking the solution $v\in V$ of $\Delta v=y$

$$\sup_{v\in V}\frac{|(y,\Delta v)_\Omega|}{\|v\|_V}\ \ge\ \frac{(y,y)_\Omega}{\|y\|_{L^2(\Omega)}}=\|y\|_{L^2(\Omega)}. \tag{2.7}$$

The existence of the unique solution of problem (2.6) follows from the standard Babuška–Lax–Milgram theorem, see for example [3, Theorem 2.1]. The a priori estimate follows from (2.7), (2.6) and (2.5),

$$\|y\|_{L^2(\Omega)}\ \le\ \sup_{v\in V}\frac{|(y,\Delta v)_\Omega|}{\|v\|_V}=\sup_{v\in V}\frac{|(u,\partial_n v)_\Gamma-(f,v)_\Omega|}{\|v\|_V}\le\|u\|_{(V^\Gamma)'}+\|f\|_{V'}.$$

Note that if a weak solution $y\in H^1(\Omega)$ exists, then with the help of the Green formula

$$(\partial_n v,\chi)_\Gamma=(\nabla v,\nabla\chi)_\Omega+(\Delta v,\chi)_\Omega\quad\forall v\in H^1_\Delta(\Omega)\supset V,\ \forall\chi\in H^1(\Omega), \tag{2.8}$$

see Lemma 3.4 in the paper by Costabel, it is also a very weak solution. (Set $\chi=y$ and use $y=u$ on $\Gamma$ as well as $(\nabla y,\nabla v)_\Omega=(f,v)_\Omega$.)

A possible difficulty with this formulation is that a conforming discretization of $V$ with a finite-dimensional space would require the use of $C^1$-functions. This is simple for one-dimensional domains but requires a lot of degrees of freedom in two (and more) dimensions.

We finish this subsection with a regularity result.

###### Lemma 2.4.

The unique solution of problem (2.6) with $u\in L^2(\Gamma)$ and $f=0$ belongs to $H^{1/2}(\Omega)$ and to

$$\widetilde W^{1,2}(\Omega):=\{z\in L^2(\Omega):\,\delta^{1/2}\nabla z\in L^2(\Omega)^2\} \tag{2.9}$$

where $\delta(x)$ is the distance of $x$ to the boundary $\Gamma$. Furthermore, there exists a positive constant $c$ such that

$$\|y\|_{H^{1/2}(\Omega)}+\|\delta^{1/2}\nabla y\|_{L^2(\Omega)^2}\le c\,\|u\|_{L^2(\Gamma)}. \tag{2.10}$$
###### Remark 2.5.

The book by Chabrowski deals exclusively with the Dirichlet problem with $L^2$ boundary data for elliptic linear equations. The solution is searched there in the Sobolev space (2.9), but only smooth domains were considered.

###### Proof of Lemma 2.4.

In a first step, we use an integral representation and some properties of the layer potentials to get a solution with the appropriate regularity. Theorem 4.2 of Verchota’s paper shows that the operator $\frac12 I+K$, $K$ being the boundary double layer potential, is an isomorphism from $L^2(\Gamma)$ into itself (see also Corollary 4.5 there). According to the trace property for the double layer potential (see Section 1 and Corollary 3.2 of that paper), there exists a unique harmonic function $z$ such that

$$z\to u\quad\text{a.e. in nontangential cones}$$

with the notation from that paper, and $z$ is given by

$$z=K\big(\tfrac12 I+K\big)^{-1}u. \tag{2.11}$$

But due to the above mentioned isomorphism property and Theorem 1 of Costabel’s paper (and the remark following it), we obtain that

$$\|z\|_{H^{1/2}(\Omega)}\le c\,\|u\|_{L^2(\Gamma)}. \tag{2.12}$$

Note that Theorems 5.3, 5.4 and Corollary 5.5 of the paper by Jerison and Kenig also yield

$$\|\delta^{1/2}\nabla z\|_{L^2(\Omega)^2}\le c\,\|u\|_{L^2(\Gamma)}. \tag{2.13}$$

The second step is to show that $z$ is the very weak solution; hence by uniqueness we will get $y=z$, and the regularity of $y$ and the estimate (2.10) follow from our first step. For that last purpose, we use a density argument. Indeed let $(u_n)_{n\in\mathbb N}$ be a sequence of smooth functions such that

$$u_n\to u\ \text{ in } L^2(\Gamma),\ \text{ as } n\to\infty. \tag{2.14}$$

Consider $z_n$ given by (2.11) with datum $u_n$, and let $y_n$ be the unique solution of (2.6) with boundary datum $u_n$ and right hand side $f=0$. Then by the estimate (2.12) and Lemma 2.3, we get

$$y_n\to y\ \text{ in } L^2(\Omega)\quad\text{and}\quad z_n\to z\ \text{ in } H^{1/2}(\Omega),\ \text{ as } n\to\infty.$$

Furthermore, by Theorem 5.15 of the above mentioned paper, $z_n$ satisfies

$$\gamma z_n=u_n\ \text{ on } \Gamma,$$

where $\gamma$ is the trace operator from $H^1(\Omega)$ into $L^2(\Gamma)$. Hence we directly deduce that $y_n=z_n$, and by the above convergence property we conclude that $y=z$. ∎

###### Corollary 2.6.

The unique solution of problem (2.6) with $u\in L^2(\Gamma)$ and $f\in H^{-1}(\Omega)$ belongs to $H^{1/2}(\Omega)$ and to $\widetilde W^{1,2}(\Omega)$ from (2.9). There exists a positive constant $c$ such that

$$\|y\|_{H^{1/2}(\Omega)}+\|\delta^{1/2}\nabla y\|_{L^2(\Omega)^2}\le c\big(\|u\|_{L^2(\Gamma)}+\|f\|_{H^{-1}(\Omega)}\big).$$

### 2.3 Method of transposition employing weighted Sobolev spaces

Alternatively to the space $V$, one can use the test space

$$V^{2,2}_\beta(\Omega)\cap H^1_0(\Omega),\qquad\beta>1-\frac{\pi}{\omega},$$

in the non-convex case, where $V^{2,2}_\beta(\Omega)$ is a weighted Sobolev space of the class

$$V^{k,p}_\beta(\Omega):=\big\{v\in D'(\Omega):\,\|v\|_{V^{k,p}_\beta(\Omega)}<\infty\big\},\qquad \|v\|^p_{V^{k,p}_\beta(\Omega)}:=\sum_{|\alpha|\le k}\int_\Omega\big|r^{\beta-k+|\alpha|}D^\alpha v\big|^p, \tag{2.15}$$

where we use standard multi-index notation. For later use we also introduce

$$L^2_\beta(\Omega):=V^{0,2}_\beta(\Omega).$$

First derivatives of $V^{2,2}_\beta(\Omega)$-functions belong to $V^{1,2}_\beta(\Omega)$ by definition. The trace space of $V^{1,2}_\beta(\Omega)$ on $\Gamma_j$ is $V^{1/2,2}_\beta(\Gamma_j)$, see [18, Lemma 1.2] or [20, Theorem 1.31]. In the next lemma, we will use the spaces

$$V_\beta:=\begin{cases}V^{2,2}_\beta(\Omega)\cap H^1_0(\Omega)&\text{for }\omega>\pi,\\ H^2(\Omega)\cap H^1_0(\Omega)&\text{for }\omega<\pi,\end{cases}\qquad Y_\beta:=\begin{cases}L^2_{-\beta}(\Omega)&\text{for }\omega>\pi,\\ L^2(\Omega)&\text{for }\omega<\pi,\end{cases}$$

for $\beta>1-\pi/\omega$. We endow $V_\beta$ with the $V^{2,2}_\beta(\Omega)$-norm for $\omega>\pi$ and the $H^2(\Omega)$-norm for $\omega<\pi$, as well as $Y_\beta$ with the $L^2_{-\beta}(\Omega)$-norm for $\omega>\pi$ and the $L^2(\Omega)$-norm for $\omega<\pi$.

###### Remark 2.7.

Let us discuss the definition of the space $V_\beta$ and the restriction of the weight to the range $\beta>1-\pi/\omega$ in the non-convex case:

1. We require $\beta>1-\pi/\omega$ in order to have the isomorphism

$$\Delta v\in L^2_\beta(\Omega),\ v|_\Gamma=0\quad\Longleftrightarrow\quad v\in V_\beta. \tag{2.16}$$

2. It is possible to use the test space $V^{2,2}_\beta(\Omega)\cap H^1_0(\Omega)$ in convex domains as well. However, this implies a loss of information about the solution since this test space is smaller than $H^2(\Omega)\cap H^1_0(\Omega)$ due to the fact that the weight $\beta$ can be negative.

3. Note that $V^{1/2,2}_\beta(\Gamma_j)\hookrightarrow L^2(\Gamma_j)$ for $\beta\le 1/2$. This implies $L^2(\Gamma)\hookrightarrow\big(\prod_{j=1}^N V^{1/2,2}_\beta(\Gamma_j)\big)'$. This means that $L^2$-boundary data are included in the following discussion if $\beta\le 1/2$.

###### Lemma 2.8.

Let $\beta>1-\pi/\omega$ and assume that $f\in V_\beta'$ and $u\in\big(\prod_{j=1}^N V^{1/2,2}_\beta(\Gamma_j)\big)'$. Then there exists a unique solution

$$y\in Y_\beta\hookrightarrow L^2(\Omega):\quad (y,\Delta v)_\Omega=(u,\partial_n v)_\Gamma-(f,v)_\Omega\quad\forall v\in V_\beta \tag{2.17}$$

with

$$\|y\|_{Y_\beta}\le c\Big(\|f\|_{V_\beta'}+\|u\|_{(\prod_{j=1}^N V^{1/2,2}_\beta(\Gamma_j))'}\Big).$$
###### Proof.

The convex case was already treated in Lemma 2.1, hence we focus on the non-convex case. We proceed as in Lemma 2.3. The right hand side of (2.17) defines a continuous functional on $V_\beta$. Furthermore, the bilinear form $(y,\Delta v)_\Omega$ is bounded on $Y_\beta\times V_\beta$,

$$|(y,\Delta v)_\Omega|\le\|r^{-\beta}y\|_{L^2(\Omega)}\|r^{\beta}\Delta v\|_{L^2(\Omega)}\le\|y\|_{Y_\beta}\|v\|_{V_\beta}.$$

The inf-sup conditions for the bilinear form are proved by using the isomorphism (2.16), in particular $\|v\|_{V_\beta}\le c\,\|\Delta v\|_{L^2_\beta(\Omega)}$; we obtain by taking $y=r^{2\beta}\Delta v$

$$\sup_{y\in Y_\beta}\frac{|(y,\Delta v)_\Omega|}{\|y\|_{Y_\beta}}\ \ge\ \frac{(r^{\beta}\Delta v,r^{\beta}\Delta v)_\Omega}{\|r^{2\beta}\Delta v\|_{L^2_{-\beta}(\Omega)}}=\frac{\|\Delta v\|^2_{L^2_\beta(\Omega)}}{\|\Delta v\|_{L^2_\beta(\Omega)}}\ge c^{-1}\|v\|_{V_\beta},$$

and by taking the solution $v\in V_\beta$ of $\Delta v=r^{-2\beta}y$ with $\|v\|_{V_\beta}\le c\,\|y\|_{L^2_{-\beta}(\Omega)}$

$$\sup_{v\in V_\beta}\frac{|(y,\Delta v)_\Omega|}{\|v\|_{V_\beta}}\ \ge\ \frac{(r^{-\beta}y,r^{-\beta}y)_\Omega}{c\,\|y\|_{L^2_{-\beta}(\Omega)}}=c^{-1}\|y\|_{L^2_{-\beta}(\Omega)}. \tag{2.18}$$

The existence of the unique solution of problem (2.17) follows now from the standard Babuška–Lax–Milgram theorem, see for example [3, Theorem 2.1]. The a priori estimate is obtained with (2.18) and (2.17),

$$\|y\|_{Y_\beta}\le c\sup_{v\in V_\beta}\frac{|(y,\Delta v)_\Omega|}{\|v\|_{V_\beta}}=c\sup_{v\in V_\beta}\frac{|(u,\partial_n v)_\Gamma-(f,v)_\Omega|}{\|v\|_{V_\beta}}.$$

This ends the proof since we already noticed that the numerator defines a continuous functional on $V_\beta$. ∎

### 2.4 Berggren’s approach

Berggren’s approach avoids test functions in $V$ in an explicit way. It can be explained as if we substitute $\varphi:=-\Delta v$ and $\zeta:=\partial_n v$ in (2.6),

$$y\in L^2(\Omega):\quad (y,\varphi)_\Omega=-(u,\zeta)_\Gamma+(f,v)_\Omega\quad\forall\varphi\in L^2(\Omega). \tag{2.19}$$

The relationship between $\varphi$ and both $v$ and $\zeta$ can be expressed by the weak formulation of the Poisson equation,

$$v\in V:\quad (\nabla v,\nabla\psi)_\Omega=(\varphi,\psi)_\Omega\quad\forall\psi\in H^1_0(\Omega) \tag{2.20}$$

and a reformulation of the Green formula (2.8) in the form

$$\zeta\in V^\Gamma:\quad (\zeta,\chi)_\Gamma=(\nabla v,\nabla\chi)_\Omega-(\varphi,\chi)_\Omega\quad\forall\chi\in H^1(\Omega)\setminus H^1_0(\Omega). \tag{2.21}$$

Note that Berggren’s formulation is not a system with three unknown functions since the second and third equations compute actions on the test function $\varphi$. Indeed, let $S$ and $F$ be the solution operators of (2.20) and (2.21), respectively, defined by $v=S\varphi$ and $\zeta=F\varphi$; then we could also write

$$y\in L^2(\Omega):\quad (y,\varphi)_\Omega=-(u,F\varphi)_\Gamma+(f,S\varphi)_\Omega\quad\forall\varphi\in L^2(\Omega).$$

###### Lemma 2.9.

Berggren’s formulation (2.19), (2.20), (2.21) is equivalent to the formulation (2.6).

###### Proof.

We first assume that $y$ satisfies (2.6) and show (2.19)–(2.21). For any $\varphi\in L^2(\Omega)$ let $v$ be the variational solution of $-\Delta v=\varphi$ defined by (2.20); hence $v\in V$ and $\Delta v=-\varphi$. Based on the formula (2.8), we obtain (2.21) with $\zeta=\partial_n v$. With (2.6) we finally get also (2.19).

Let now $y$ satisfy (2.19)–(2.21). Since $\varphi$ ranges over $L^2(\Omega)$ we get from (2.20) that $v\in V$ with $\Delta v=-\varphi$. Moreover, we obtain $\zeta=\partial_n v$ from (2.21). Hence equation (2.19) becomes

$$-(y,\Delta v)_\Omega=-(u,\partial_n v)_\Gamma+(f,v)_\Omega\quad\forall v\in V$$

due to the isometry between $L^2(\Omega)$ and $V$ given by $\varphi\mapsto v$. ∎

###### Remark 2.10.

Berggren used slightly higher regularity of the data and test functions. However, he did not consider the maximal domain of the elliptic operator, i.e., the spaces $V$ and $V^\Gamma$ introduced in Subsection 2.2. But with the explanations of Subsection 2.2 these regularities should be obvious. Thus the result of Lemma 2.9 is slightly more general than that of Berggren.

### 2.5 The regularization approach

A further idea is to regularize the boundary data and then to apply standard methods. This approach has already been considered within the proof of Lemma 2.1. In contrast, we do not use the isomorphism

$$\Delta w\in L^2(\Omega),\ w|_\Gamma=0\quad\Longleftrightarrow\quad w\in H^2(\Omega)\cap H^1_0(\Omega),$$

which can only be employed in the case of convex domains, but the isomorphism

$$\Delta v\in L^2(\Omega),\ v|_\Gamma=0\quad\Longleftrightarrow\quad v\in V.$$

This allows us to apply the regularization approach in the non-convex case as well. Moreover, we propose two different strategies for constructing the regularized Dirichlet boundary data in an explicit way. Thereby we will be able in Subsection 3.2 to calculate approximate solutions of the regularized problems based on a finite element method. For the data we assume henceforth $u\in L^2(\Gamma)$ and $f\in H^{-1}(\Omega)$. This is not only for simplicity but also due to the fact that already for Dirichlet boundary data in $L^2(\Gamma)$ the convergence rates of the approximate solutions in Subsection 3.2 tend to zero as the maximal interior angle tends to $2\pi$.

We start with general convergence results for the regularized solutions. To this end let $(u_h)_{h>0}\subset H^{1/2}(\Gamma)$ be a sequence of functions such that

$$\lim_{h\to0}\|u-u_h\|_{L^2(\Gamma)}=0.$$
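Such a sequence can be obtained, for instance, by projecting $u$ onto continuous piecewise linear functions on a boundary mesh. The following numerical sketch (our own construction, with $\Gamma$ modeled as the unit interval and all names hypothetical) illustrates that the $L^2$-projection of a merely square-integrable, discontinuous datum indeed converges in $L^2$ under mesh refinement:

```python
import numpy as np

def l2_projection_error(u, n, fine=4096):
    # uniform P1 mesh on [0, 1] with n elements; fine quadrature grid
    nodes = np.linspace(0.0, 1.0, n + 1)
    s = np.linspace(0.0, 1.0, fine + 1)
    ds = s[1] - s[0]
    # hat (nodal P1) basis functions evaluated on the fine grid
    hats = np.maximum(0.0, 1.0 - np.abs(s[None, :] - nodes[:, None]) * n)
    us = u(s)
    M = (hats * ds) @ hats.T      # mass matrix (quadrature approximation)
    b = (hats * ds) @ us          # load vector (u, lambda_x)
    coeffs = np.linalg.solve(M, b)
    uh = coeffs @ hats            # the projection u_h on the fine grid
    return np.sqrt(np.sum((us - uh) ** 2) * ds)

# a step function: in L^2 but too rough for a direct weak formulation
u = lambda s: np.where(s < 0.5, 0.0, 1.0)
errs = [l2_projection_error(u, n) for n in (8, 16, 32)]
```

The errors decrease under refinement even though $u$ itself has a jump; the convergence is slow (roughly like $h^{1/2}$ for this datum), matching the spirit of the estimates below.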

Let now $y_h\in Y_h^*:=\{v\in H^1(\Omega):\,v=u_h\ \text{on }\Gamma\}$ be the variational solution,

$$y_h\in Y_h^*:\quad (\nabla y_h,\nabla v)_\Omega=(f,v)_\Omega\quad\forall v\in H^1_0(\Omega). \tag{2.22}$$
###### Lemma 2.11.

Let $u\in L^2(\Gamma)$ and $f\in H^{-1}(\Omega)$. Then the limit $y:=\lim_{h\to0}y_h$ exists in $L^2(\Omega)$ and is the very weak solution, that means it satisfies (2.6).

###### Proof.

First we show that $(y_h)_{h>0}$ is a Cauchy sequence in $L^2(\Omega)$. From (2.22) and Green’s formula, we have for any $v\in V$,

$$(f,v)_\Omega=(\nabla y_h,\nabla v)_\Omega=-(y_h,\Delta v)_\Omega+(y_h,\partial_n v)_\Gamma,$$
$$(f,v)_\Omega=(\nabla y_{h'},\nabla v)_\Omega=-(y_{h'},\Delta v)_\Omega+(y_{h'},\partial_n v)_\Gamma.$$

Hence, due to $y_h=u_h$ and $y_{h'}=u_{h'}$ on $\Gamma$, we deduce that

$$(y_h-y_{h'},\Delta v)_\Omega=(u_h-u_{h'},\partial_n v)_\Gamma\quad\forall v\in V. \tag{2.23}$$

Now for any $z\in L^2(\Omega)$, let $v_z\in V$ be such that

$$\Delta v_z=z, \tag{2.24}$$

which clearly satisfies

$$\|\partial_n v_z\|_{L^2(\Gamma)}\le c\,\|v_z\|_{H^s(\Omega)}\le c\,\|z\|_{L^2(\Omega)} \tag{2.25}$$

with some $s\in(3/2,1+\lambda)$. Finally, we obtain with (2.23) and (2.25)

$$\|y_h-y_{h'}\|_{L^2(\Omega)}=\sup_{z\in L^2(\Omega),z\ne0}\frac{(y_h-y_{h'},z)_\Omega}{\|z\|_{L^2(\Omega)}}=\sup_{z\in L^2(\Omega),z\ne0}\frac{(u_h-u_{h'},\partial_n v_z)_\Gamma}{\|z\|_{L^2(\Omega)}}\le\|u_h-u_{h'}\|_{L^2(\Gamma)}\sup_{z\in L^2(\Omega),z\ne0}\frac{\|\partial_n v_z\|_{L^2(\Gamma)}}{\|z\|_{L^2(\Omega)}}\le c\,\|u_h-u_{h'}\|_{L^2(\Gamma)}.$$

Since $(u_h)_{h>0}$ converges in $L^2(\Gamma)$, it is a Cauchy sequence and hence $(y_h)_{h>0}$ is a Cauchy sequence as well and converges in $L^2(\Omega)$ by the completeness of $L^2(\Omega)$.

From $y_h=u_h$ on $\Gamma$ we obtain by (2.22) and the Green formula (2.8)

$$(f,v)_\Omega=(\nabla y_h,\nabla v)_\Omega=-(\Delta v,y_h)_\Omega+(\partial_n v,y_h)_\Gamma=-(\Delta v,y_h)_\Omega+(\partial_n v,u_h)_\Gamma\quad\forall v\in V.$$

Since $y_h\to y$ in $L^2(\Omega)$ and $u_h\to u$ in $L^2(\Gamma)$ we can pass to the limit and obtain that the limit function satisfies (2.6). ∎

We can estimate the regularization error by a similar technique.

###### Lemma 2.12.

Let $s\le 1/2$ if $\Omega$ is convex and $s<\lambda-1/2$ if $\Omega$ is non-convex. Then the estimate

$$\|y-y_h\|_{L^2(\Omega)}\le c\,\|u-u_h\|_{H^{-s}(\Gamma)}$$

holds.

###### Proof.

We use the approach of the proof of Lemma 2.11 and just replace $y_{h'}$ by $y$ and $u_{h'}$ by $u$ to get

$$(y-y_h,\Delta v)_\Omega=(u-u_h,\partial_n v)_\Gamma\quad\forall v\in V. \tag{2.26}$$

Again, for any $z\in L^2(\Omega)$, we let $v_z\in V$ be such that $\Delta v_z=z$, but estimate now in a sharper way

$$\|\partial_n v_z\|_{H^s(\Gamma)}\le c\,\|v_z\|_{H^{s+3/2}(\Omega)}\le c\,\|z\|_{L^2(\Omega)}. \tag{2.27}$$

As in the previous proof we get

$$\|y-y_h\|_{L^2(\Omega)}=\sup_{z\in L^2(\Omega),z\ne0}\frac{(u-u_h,\partial_n v_z)_\Gamma}{\|z\|_{L^2(\Omega)}}\le\|u-u_h\|_{H^{-s}(\Gamma)}\sup_{z\in L^2(\Omega),z\ne0}\frac{\|\partial_n v_z\|_{H^s(\Gamma)}}{\|z\|_{L^2(\Omega)}}\le c\,\|u-u_h\|_{H^{-s}(\Gamma)}.$$

Actually, the proof is for the largest admissible $s$, but of course the statement holds when $s$ is decreased. ∎

A choice for the construction of the regularized function $u_h$ could be the use of the $L^2$-projection into a piecewise polynomial space on the boundary (introduced in Section 3) or the use of the Carstensen interpolant $C_h u$, see the paper by Carstensen. Namely, if $N_\Gamma$ is the set of nodes of the triangulation on the boundary, we set

$$C_h u=\sum_{x\in N_\Gamma}\pi_x(u)\,\lambda_x,$$

where $\lambda_x$ is the standard hat function related to $x$ and

$$\pi_x(u)=\frac{\int_\Gamma u\,\lambda_x}{\int_\Gamma\lambda_x}=\frac{(u,\lambda_x)_\Gamma}{(1,\lambda_x)_\Gamma}.$$

The advantages of the interpolant $C_h u$ in comparison with the $L^2$-projection are its local definition and the property

$$u\in[a,b]\ \Rightarrow\ C_h u\in[a,b];$$

a disadvantage for our application in optimal control is that $C_h u_h\ne u_h$ in general even for piecewise linear $u_h$. We prove now regularization error estimates for the case that the regularized function $u_h$ is constructed via the Carstensen interpolant or the $L^2$-projection.
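The bound-preservation property can be observed numerically. The following sketch (our own, with $\Gamma$ modeled as the unit interval, a uniform mesh, and a quadrature grid of our choosing) evaluates the weights $\pi_x(u)$ and checks that they stay within the range of $u$:

```python
import numpy as np

def carstensen_coefficients(u, n, fine=4096):
    # uniform mesh of [0, 1] with n elements; hat functions on a fine grid
    nodes = np.linspace(0.0, 1.0, n + 1)
    s = np.linspace(0.0, 1.0, fine + 1)
    ds = s[1] - s[0]
    hats = np.maximum(0.0, 1.0 - np.abs(s[None, :] - nodes[:, None]) * n)
    us = u(s)
    # pi_x(u) = (u, lambda_x) / (1, lambda_x), node by node
    return (hats @ (us * ds)) / (hats @ np.full_like(us, ds))

u = lambda s: np.sin(2.0 * np.pi * s) ** 2   # takes values in [0, 1]
coeffs = carstensen_coefficients(u, 10)
```

Since every weight $\pi_x(u)$ is a nonnegative average of values of $u$, the coefficients of $C_h u$, and hence $C_h u$ itself, stay in $[0,1]$, exactly as the property above states.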

###### Lemma 2.13.

If $u_h$ is the piecewise linear Carstensen interpolant of $u$ or the $L^2$-projection of $u$ into a space of piecewise linear functions, then there holds

 ∥u−uh∥H−s(Γ)≤chs∥u∥L2(Γ),s∈[