Optimization Methods for Dirichlet Control Problems


Mariano Mateos, Dpto. de Matemáticas, Universidad de Oviedo, Campus de Gijón, 33203 Gijón, Spain. Email: mmateos@uniovi.es.
Abstract

We discuss several optimization procedures to solve finite element approximations of linear-quadratic Dirichlet optimal control problems governed by an elliptic partial differential equation posed on a 2D or 3D Lipschitz domain. The control is discretized explicitly using continuous piecewise linear approximations. Unconstrained, control-constrained, state-constrained and control-and-state constrained problems are analyzed. A preconditioned conjugate gradient method for a reduced problem in the control variable is proposed to solve the unconstrained problem, whereas semismooth Newton methods are discussed for the solution of constrained problems. State constraints are treated via a Moreau-Yosida penalization. Convergence is studied for both the continuous problems and the finite dimensional approximations. In the finite dimensional case, we are able to show convergence of the optimization procedures even in the absence of a Tikhonov regularization parameter. Computational aspects are also treated and several numerical examples are included to illustrate the theoretical results.

Keywords: Dirichlet optimal control, discretization, constrained optimization, preconditioned conjugate gradient, semismooth Newton methods

AMS subject classifications: 49J20, 49M05, 49M15, 65N30

1 Introduction

Only in the last ten years has it been possible to develop a systematic study of Dirichlet optimal control problems governed by elliptic equations, which started with the seminal paper [1]. In that work a 2D control-constrained problem governed by a semilinear equation posed on a convex polygonal domain is studied. Several other works have been published about numerical error estimates; see [2] or [3, Ch 2.1] for a variational approach to control-constrained problems posed on smooth 2D or 3D domains, [4] for a superconvergence result on the state approximation for unconstrained 2D problems, and [5] for control-and-state constrained problems posed on convex polygonal domains. In [6] the authors study the control-constrained problem in a 2D smooth convex domain taking into account the difficulties arising from the boundary approximation, and in [7] an apparent paradox between [2] and [6] is explained. The regularity of the solution in possibly nonconvex polygonal plane domains is studied in [8]; see also the introduction of that paper for further references about related problems. In the recent publication [9], error estimates for a Dirichlet control problem governed by a parabolic equation are obtained. In that work, the spatial discretization of the control is studied both for continuous piecewise linear finite elements and for the variational approach.

Just before that, several works dealing with efficient optimization methods for control- or state-constrained problems had appeared; in [10, 11, 12, 13, 14, 15, 16] the semismooth Newton method is thoroughly studied. Since all these papers about optimization predate the papers about the numerical analysis of Dirichlet control problems, very little or no reference is made in them to their applicability to the problems we are going to deal with. Only in [16] are two different Dirichlet control problems, with more regular solutions than the ones we treat here, studied in the infinite-dimensional case. Let us also mention that in [17] the Dirichlet boundary condition is replaced by a Robin penalization. For this kind of penalization the methods developed in the aforementioned references are directly applicable. In [18], the semismooth Newton method is studied in the context of the variational approach to the control. Although the authors only exemplify their results through distributed and Robin boundary control, a combination of their results and some of the results we present in Section 4 can be applied to Dirichlet control. See Remark 4 below.

In this work we describe optimization methods for Dirichlet control problems in both the infinite and finite dimensional cases. Convergence proofs, examples and practical implementation details are discussed for all the algorithms through the paper.

In Section 2 we state the problem and prove that it is well posed in Lipschitz domains; see Lemma 2.1. So far, only smooth, convex, polygonal or polyhedral domains have been studied.

Next, we discretize the problem. As is usual in control problems, we have three choices to discretize the control: the first option is not to discretize the control, using a variational discretization as introduced in [19] for distributed problems; as a second option we may use piecewise constant functions; and finally we can use continuous piecewise linear functions.

The choice is not trivial because first order optimality conditions for Dirichlet control problems involve the normal derivative of the adjoint state.

If we discretized the optimality system directly, using e.g. continuous piecewise linear finite elements, we would obtain two different approximations of the control: the trace of the discrete state would be continuous piecewise linear and the normal derivative of the discrete adjoint state would be piecewise constant. In [20, Ch. 3.2.7.3] and in [3, Example 2.1.11] this difficulty is solved using a mixed formulation of the state equation, which is discretized with the lowest order Raviart-Thomas element. In [21] a convergence proof for this kind of discretization is given. In this way, both the trace of the discrete state and the normal derivative of the discrete adjoint state are piecewise constant functions, so the identification of both with the discrete control is meaningful. The discretization of the control in the presence of control constraints is carried out in these papers using a variational discretization. This kind of approximation may be convenient when the gradient of the state is the variable of interest, since this quantity is treated as one of the variables of the problem.

Another approach, in finite dimension, is to use the variational discrete normal derivative of the discrete adjoint state introduced in [1], which is a continuous piecewise linear function. Doing so, both the trace of the discrete state and the normal derivative of the discrete adjoint state are continuous piecewise linear functions, so the identification of both with the discrete control is meaningful. Following this idea, in [2] the control is not discretized for the control-constrained case, but a variational approach is followed.

In this work we investigate optimization methods for the case where the control is discretized explicitly using continuous piecewise linear functions.

The end of Section 2 is devoted to describing in some detail computational aspects that will be important in the rest of the work.

We next provide in Section 3 an efficient method to solve the unconstrained problem. We propose in Subsection 3.2 a preconditioned conjugate gradient (pcg in the rest of the work) method for a reduced problem in the spirit of [22, Section 5]. We are able to prove convergence of the conjugate gradient even for the case where the Tikhonov regularization parameter vanishes.

In Sections 4, 5 and 6 we study the convergence of the semismooth Newton method for the constrained problems and write practical algorithms for the solution of the finite dimensional approximations. In Section 4 we deal with the control-constrained problem. In many of the aforementioned references about the semismooth Newton method for control-constrained problems, the authors study convergence for an abstract problem or an infinite dimensional problem. For distributed and Neumann control problems the results are immediately applicable to the finite dimensional approximations because the controls can be discretized using piecewise constant functions, and therefore the variational inequality arising from first order necessary optimality conditions can be written in an element-wise form. The same idea applies when we are dealing with a variational approach as proposed in [2, 3, 19] or a mixed formulation, as studied in [21, 20, 3]. When we use continuous piecewise linear elements, the variational inequality cannot be written in a point-wise or element-wise form; see (36d) and Remark 5. We include the analysis of Newton methods for both the original problem and the discretized one.

In Section 5 we study the state-constrained problem using a Moreau-Yosida penalization. Since there are no control constraints, the analysis of the semismooth Newton method for the infinite dimensional problem is applicable to the finite dimensional one, so we do not need to repeat it. We prove that the finite element approximations of the penalized problem converge to the solution of the penalized problem. This result cannot be deduced straightforwardly from the ones in the literature since the penalized functional is not of class . A continuation strategy as proposed in [16] is developed. Finally, in Section 6 we discuss the problem with both control and state constraints.

It is well known that the main difficulty with Dirichlet control problems is the low regularity of the solutions. This regularity and related error estimates are, so far, well established in 2D polygonal domains [1, 5, 8] and 2D or 3D smooth domains [2], but there is not, to the best of our knowledge, a general study in 3D polyhedral domains. Although the main focus of this work is on optimization methods, we also study the regularity of the solution and error estimates of the approximations in one example case in a polyhedron; see Example 3.6.

2 Statement of the problem and first properties

Let , or , be a bounded domain with Lipschitz boundary and in this domain consider a target state . Consider also the continuous linear operator such that if and only if is the solution in the transposition sense (see Definition 2.2 below) of

(1)

Let be a domain such that and define two pairs of functions and such that for all and for all . For some fixed regularization parameter , define

and consider the sets

and

In this work we will study optimization procedures for the following four Dirichlet control problems:

namely the unconstrained, control-constrained, state-constrained and control-and-state constrained problems.

Remark 1.

Almost all the literature related to these problems is written using the state equation (1). There would be no problem in taking into account an equation of the form

with regular enough , , and satisfying a uniform ellipticity condition.

Let us state precisely what we mean by solution in the transposition sense. Consider the space

This is a Banach space with the graph norm . Further, the functions in this space satisfy . This is known to be true for smooth domains; convex domains, see [17, Lemma A.1]; plane polygonal domains, see [8, Corollary 2.3]; or polyhedral domains, see [23, Theorem 2.6.7] and the usual trace theorem. We have not been able to find a proof of this fact for general Lipschitz domains. In [24, Theorem B.2] the regularity is proved. Nevertheless, as the authors notice on page 165 of that reference, the trace theorem is valid neither in nor in , so we cannot deduce immediately that . The results in [25, 24] imply that the usual trace result can be extended to harmonic functions in the limit cases. We show next how to take advantage of this to prove that in Lipschitz domains. Regarding the analysis of semismooth Newton methods (see Lemma 4.1 below), we also prove regularity for some .

Lemma 2.1.

Let , or , be a bounded domain with Lipschitz boundary and consider . Then, there exists depending on the domain such that, for , we have and

(2)

If, further, is smooth or convex or polygonal or polyhedral, then there exists such that and

(3)
Proof.

Denote , extend by zero to , consider the Newtonian potential centered at the origin and define , the convolution product of and . Then and , so it is clear that for all if and all if , since the dimension of is . This implies that:

(a) because the boundary is Lipschitz and therefore it has a unit normal vector defined almost everywhere; and there exists such that

(4)

(b) due precisely to the definition of , and there exists such that

(5)

Define now the unique solution of in , on . Using [24, Theorem 5.6], we have that there exists such that if , then the nontangential maximal function of the gradient of satisfies and there exists such that

(6)

As is pointed out in [26, p. 438], this implies that has nontangential limit a.e. on and we can define the normal derivative of at a point as the nontangential limit as of . For a precise definition of nontangential limit and nontangential maximal function see, e.g., the introduction of the work [26]. This, together with inequalities (6) and (5), implies that and

(7)

So we have that and we can define in a natural way . The estimate (2) follows from (4) and (7).

For smooth, convex, polygonal or polyhedral domains, the second result follows from the regularity and the usual trace theorem for . See [17, Lemma A.1] for convex domains, [8, Corollary 2.3] for plane polygonal domains and [23, Theorem 2.6.7] for polyhedral domains. ∎

Definition 2.2.

We will say that is the solution in the transposition sense of (1) if

(8)

Here and in the rest of the work stands for the standard inner product in .

The adjoint operator of is defined by , where is the unique weak solution of

(9)

We can write now

where is a constant.

From this expression, we can easily compute the derivative of at a point in the direction :

For later use, we will define now for every , and the unique solution of

Discretization.

To discretize the problems, consider a regular family of triangulations of . To simplify the notation, we will suppose that is polygonal or polyhedral. Related to the mesh, we will call the number of nodes and the nodes of the mesh. We define the sets of interior indexes, boundary indexes and indexes in as , and . For the discretization of the state and the adjoint state we use the space of linear finite elements ,

As usual, we will abbreviate . For the control we use the space of continuous piecewise linear functions on

Notice that the elements of are the traces of the elements of . We will denote or the interpolation operator related to these spaces and or the projection onto these spaces in the sense. For all and all :
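As a small illustration, the projection onto the boundary space can be computed by assembling a load vector by quadrature and solving a system with the boundary mass matrix. The following MATLAB sketch assumes the projection is taken in the L2 sense; the names Mgg, loadOnBoundary, g and mesh are hypothetical and not part of the paper's code.

% Sketch: L2 projection of a function g onto the discrete boundary space.
% Mgg is the boundary mass matrix; loadOnBoundary is a hypothetical helper
% that integrates g against each boundary basis function by quadrature.
bg = loadOnBoundary(g, mesh);   % load vector with entries int_Gamma g*phi_i
ug = Mgg \ bg;                  % coefficients of the projection of g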

We discretize the state equation following the work by Berggren [27]: define such that for , if and only if is the unique solution of

(10)

where is the bilinear form associated to the operator in the PDE. In the case of the Laplace operator . The discrete functional is thus defined as

We define now

and

The discrete problems thus read as

The adjoint operator of is given by the discrete variational normal derivative (see [1]): and if satisfies

(11)

where is the unique solution of

(12)

It is customary to write . For and , we have

We can then write

The computation of the derivative of at a point in the direction is then obtained with

Since our final objective is to optimize in , let us see in more detail how to make the computations in this space.

Consider the nodal basis in , where is the dimension of and satisfies , the latter being respectively the number of interior and boundary nodes. With an abuse of notation, we will also denote the restriction of to . Define the usual finite element stiffness, mass and boundary mass matrices by

We will also use for the identity matrix, for the zero matrix and will usually refer to submatrices as or , defined by taking the rows or columns designated by the sets of indexes in the subscripts, the semicolon meaning “all the indexes”. For instance, the usual boundary mass matrix is .
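In MATLAB this notation corresponds directly to submatrix extraction. A brief sketch, assuming intIdx and bdIdx are vectors holding the interior and boundary node indexes:

% Submatrices are obtained by selecting rows and columns with the index sets.
Agg  = A(bdIdx, bdIdx);    % boundary-boundary block of the stiffness matrix
Aig  = A(intIdx, bdIdx);   % interior-boundary coupling block
Mgg  = M(bdIdx, bdIdx);    % the usual boundary mass matrix
Mint = M(intIdx, :);       % ':' plays the role of "all the indexes"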

Given , we denote the vector and for we denote the vector . Using (10), we have that iff

(13)

Since is nonsingular, we can write this as

(14)

If we define as

we have that if and only if .

Given , let be the solution of (12) for . Denoting the corresponding vector, it can be computed as the solution of

(15)

Then we could compute , the vector whose components are the coefficients of the (minus) discrete normal derivative, by solving the system

(16)

Formally, we can also write that . To finish this section, we also define the matrix and the vector by

We have that

(17)

We also note that

(18)

To compute the vector , we consider the projection of onto in the sense, and denote , the vector whose -th component is , and

(19)

So for , the latter represented by the vector , we can compute

Notice that applying the equality (16), the explicit computation of is not necessary, and we can write

3 Unconstrained problem

Problem has a unique solution that satisfies . For every , problem also has a unique solution that satisfies . The problems being convex, these conditions are also sufficient. Moreover, it is known that strongly in , and error estimates in terms of the mesh size are also available if and the domain is either 2D and polygonal or smooth. See [1, 2, 6, 4, 5, 28].

Let us comment on two different approaches for the solution of the FEM approximation of .

3.1 Solve the optimality system for the state, the adjoint state and the control.

We write first order optimality conditions for . There exists some such that for every , there exist unique , and satisfying the optimality system:

(20a)
(20b)
(20c)
(20d)

Taking into account the definition of discrete variational normal derivative and relations (14)–(16), we can write this optimality system as

We may eliminate with the boundary condition, and reorder the equations to get the linear system

(21)

Notice that system (21) is solvable even for . Solving this equation would completely solve the problem for the unconstrained case. When the discretization is very fine, the number of unknowns may make the solution of the system by direct methods too difficult. The preconditioned conjugate gradient method for this kind of linear systems is studied by Schöberl and Zulehner in [29] and Herzog and Sachs in [30]. A preconditioner can be built using matrices representing scalar products in and . Following Algorithm 1 in the last-mentioned reference, at each iteration three linear systems must be solved: two of size and one of size . In [30, Algorithm 3], the systems are solved using a multigrid method.

Reduced problems in the adjoint state variable and related pcg methods have also been studied in [31]. Nevertheless, the structure of the matrix we have in (21) is different from the one treated in that reference and the application of their results is not straightforward.

3.2 Use an iterative method to solve a reduced problem in the control variable.

Let us see another way of solving . Using (17), first order optimality conditions can also be written as

(22)
Lemma 3.1.

The matrix is symmetric and positive definite for all and there exists independent of and such that its condition number is bounded by

(23)

where and are respectively the smallest and greatest eigenvalues of the matrices and .

Proof.

It is clear from (18) that is symmetric. The mass matrices and are symmetric and positive definite and therefore and . Since the boundary components of are exactly , we have that and hence

(24)

so is positive definite.

From [4, Equation (3.8)], we know that there exists some such that . Since

and

we obtain

The bound for the condition number follows directly from the last inequality and (24). ∎

For a graded family of meshes with grading parameter (see [32]), we usually define , where is the number of nodes of the mesh and the dimension of . The case corresponds to that of a quasi-uniform mesh family.

Corollary 3.2.

If is a family of graded meshes with grading parameter , then there exists independent of and such that

(25)

In particular, for a quasi-uniform family, there exists such that

(26)
Proof.

From [33] and [32], we have that there exist constants

Estimate (25) follows then from (23). Estimate (26) follows from (25) for . ∎

The explicit computation of the matrix is out of the question, since it requires the solution of systems of size . Much better performance can be achieved using an iterative method which only requires the computation of . This is shown in Algorithm 1.

Algorithm 1 (Function ):
1. Solve (14) for .
2. Solve (15) for .
3. Set .
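A possible matrix-free MATLAB realization of Algorithm 1 is sketched below. The helpers solveState, solveAdjoint and normalDerivative are hypothetical and stand for the solves (14), (15) and (16); the final line is only an assumed combination mimicking (17), not a literal transcription of the paper's formulas.

function Su = applyReduced(u, fe)
% Evaluate the product of the reduced matrix with the control vector u
% without ever forming the matrix (Algorithm 1). fe collects the FE data.
  y   = solveState(u, fe);          % step 1: solve (14) for the discrete state
  phi = solveAdjoint(y, fe);        % step 2: solve (15) for the discrete adjoint state
  d   = normalDerivative(phi, fe);  % step 3: solve (16) for the variational normal derivative
  Su  = fe.nu*(fe.Mgg*u) + fe.Mgg*d;  % assumed combination following (17)
end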

Preconditioned conjugate gradient methods can be carried out at the cost of one evaluation of per iteration. The computation of can be done in a similar way, but of course it only needs to be computed once; see Algorithm 2.

Algorithm 2 (Computation of ):
1. Define by for .
2. Set in (15) and solve for .
3. Set .

To finish this section, let us say a word about a preconditioner to solve (22). In practice, matrices like or make acceptable preconditioners. This is especially important when using graded meshes; see Example 3.4 below. Notice that at each pcg iteration, we only need to solve two linear systems, each of size , to compute , plus another one of size to compute .
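A minimal sketch of the resulting call to Matlab's pcg, reusing the hypothetical applyReduced above and the boundary mass matrix Mgg as preconditioner; the tolerance is the one used in Example 3.3, while the iteration cap and the helper reducedRhs (standing for Algorithm 2) are assumptions.

% Solve the reduced problem (22) by preconditioned conjugate gradients.
tol   = 1e-10;                         % relative tolerance used in the experiments
maxit = 500;                           % assumed iteration cap
b     = reducedRhs(fe);                % hypothetical helper implementing Algorithm 2
[uh, flag, relres, iters] = pcg(@(v) applyReduced(v, fe), b, tol, maxit, fe.Mgg);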

Example 3.3.

All the code for the examples has been written by the author using Matlab R2015b and has been run on a desktop PC with Windows 10 Pro 64 bits, 16GB RAM DIMM 1333MHz and an Intel Core i7 CPU 870 @ 2.93GHz.

In this example we compare the different execution times, and , that we obtain when we solve the optimality system using a direct solver for equation (21) or when we use an iterative method, the preconditioned conjugate gradient method in our case, to solve the reduced problem (22). We have used, respectively, the Matlab built-in commands mldivide and pcg.

We use the example in [5], where the domain is the interior of the convex hull of the points , , , , ; and .

A rough initial mesh is built with the Matlab PDEToolbox (version 2.1) command initmesh with 'hmax' set to , and subsequent nested meshes are obtained by regular dyadic refinement using refinemesh. We use our own code to assemble the matrices , and .

For the pcg we use the null vector as initial guess, a relative tolerance of 1E-10 and the preconditioner . At each iteration we have to solve the linear systems (14) and (15). To do this, we first obtain the Cholesky factors of so that at each iteration we only have to solve triangular systems. The interior mesh nodes were ordered using a symmetric approximate minimum degree permutation, obtained with the Matlab command symamd, in order to minimize the number of nonzero elements in the Cholesky factors of . An analogous reordering of the boundary nodes is done to optimize the sparsity pattern of the Cholesky factors of the preconditioner. These reorderings and factorizations take roughly the same CPU time as a single iteration of the pcg. This time is included in .
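The reordering and factorization step just described can be sketched as follows, assuming Aii denotes the interior-interior block of the stiffness matrix; the factorization is done once, before the pcg loop.

% Factorize once; every pcg iteration then needs only sparse triangular solves.
perm = symamd(Aii);                 % symmetric approximate minimum degree ordering
R    = chol(Aii(perm, perm));       % upper triangular factor: R'*R = Aii(perm,perm)
% Inside each operator evaluation, a solve Aii*x = r becomes:
xp      = R \ (R' \ r(perm));       % forward and backward triangular solves
x       = zeros(size(r));
x(perm) = xp;                       % undo the reordering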

For reference and possible double-checking, we also report the optimal value of the functional. We have marked with the experiments in which we ran out of RAM.

From the data in Table 3.3 it is clear that the proposed iterative method for the reduced problem (22) has an overall better performance for Dirichlet control problems.

Table 3.3: Execution times for the 2D problem in Example 3.3.

Total nodes   Interior nodes   Boundary nodes   Direct (s)   pcg (s)   pcg iterations   Functional value
10657         10337            320              0.5          0.2       7                0.3470689275
42305         41665            640              4.1          1.0       7                0.3471023330
168577        167297           1280             46.1         5.9       7                0.3471129922
673025        670465           2560             453.0        31.0      7                0.3471163823
2689537       2684417          5120             out of RAM   177.5     7                0.3471174585

Remark 2.

We have also coded Algorithm 1 in [30], though the practical comparison with the other two methods is less clear.

The main difficulty is the choice of appropriate scalar products. For instance, we have tried (using the notation in [30]) , or for the scalar product in and for the scalar product in , but we have observed that the number of iterations needed to achieve the prescribed accuracy grows as the number of unknowns increases. For the first three meshes in the 2D example described in Table 3.3 and we obtain 126, 173 and 236 iterations respectively.

Example 3.4.

In this example we show the effect of the use of as a preconditioner to solve (22). In this case , , and we use a mesh family graded at the origin with parameter ; see [32]. In Table 3.4 we compare the number of iterations needed to reach the prescribed accuracy without and with the preconditioner. This example shows very clearly the effectiveness of this preconditioning strategy.

Table 3.4: Number of conjugate gradient iterations without and with preconditioner in Example 3.4. Graded mesh and .

Total nodes   Interior nodes   Boundary nodes   Iterations (no preconditioner)   Iterations (preconditioner)   Functional value
289           225              64               46                               26                            0.0228920077
1089          961              128              103                              24                            0.0230219261
4225          3969             256              214                              24                            0.0231321072
16641         16129            512              421                              24                            0.0231982941
66049         65025            1024             733                              26                            0.0232287479

Example 3.5.

In this example we show that the number of pcg iterations for each fixed is independent of the value of for all . We take the same domain and meshes as in the 2D problem in Example 3.3 and set , so that even for . The results are summarized in Table 3.5.

Table 3.5: Dependence of the number of pcg iterations w.r.t. the regularization parameter in Example 3.5.

           1E4   1E2   –    1E-2   1E-4   1E-6   0
mesh 1:    2     3     7    26     49     49     49
mesh 2:    2     3     7    28     63     65     65
mesh 3:    2     3     7    28     81     86     86

Example 3.6.

A 3D example in a polyhedron. To the best of our knowledge, there is no general theory about the regularity of the solutions or approximation results for 3D Dirichlet control problems posed on polyhedral domains. In [2], the authors study a semi-discrete or variational approach to 3D problems on regular domains. Although the semi-discrete approach coincides with the full approach for unconstrained problems, we cannot profit from their results since the regularity of the solutions in a smooth domain (see [2, Lemma 2.1]) is much higher than the one we may obtain in polyhedral domains.

For this example we will take the unit cube , and . First, we obtain a regularity result for the solution of our problem.

Proposition 3.7.

Let be the interior of a rectangular parallelepiped and for some . Consider the solution of problem . Then, , and . Moreover, on the edges and corners of .

Proof.

Since , using transposition and interpolation it is clear that . Classical Sobolev embedding in 3D leads to . Since is a parallelepiped and is regular enough, [34, Theorem 1] states that , and hence , where , , are the faces of . This does not imply immediately that belongs to because , which is the topological dimension of , and some integral compatibility condition should be imposed on the edges; but for all , it is true that , and therefore . Using again Sobolev embeddings, we have that for all and hence . Applying once again [34, Theorem 1], we have that .

Now we have that and we can prove that if we define on the edges of , then we obtain a continuous function. To do this, we use a similar argument to the one used in [7, Section 4] for 2D problems. Since , . Consider two faces and with a common edge and let , be two linearly independent vectors tangent to face ( or ) such that is also tangent to the edge . Since on , we have that for every , and for every , . So for , we have that and therefore can be seen as a continuous function if we set its value to on the edges, despite the jump discontinuities of the normal vector .

So is continuous and hence . The regularity of the optimal state follows from the trace theorem; see e.g. [35]. ∎

Using this regularity, an error estimate can be proved for the example problem.

Proposition 3.8.

Let be a rectangular parallelepiped and for some . Let be the solution of problem and be the solution of . Then there exist some and such that for all

The proof follows the same guidelines as those of [1] or [5, Section 6.1] and thus will be omitted. An interesting remark is that for 2D problems, we can deduce uniform convergence for the controls using the estimate and an inverse inequality, since the boundary is 1D. This does not work for 3D problems with the error estimate at hand, since now is 2D.

To solve the problem we have used a regular mesh of identical cubes of size , each of them divided into 6 tetrahedra according to the Kuhn triangulation of the cube (see e.g. [36]). To the best of our knowledge, the current version of the Matlab PDEToolbox (version 2.1) computes an approximation of using the barycenter formula. Although this does not affect the order of convergence of the FEM, this matrix plays a central role in Dirichlet control problems (for instance, is not singular, but the barycenter approximation may be singular), so we have computed it in an exact way (using the mid-side formula). Mesh data, computation times and optimal values are displayed in Table 3.6.
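For linear elements on a boundary triangle, the mid-side (edge-midpoint) quadrature rule integrates the products of two basis functions exactly, while the barycenter rule yields a rank-one local matrix. A sketch of the two local boundary mass matrices, where faceArea is an assumed input holding the area of the triangle:

% Local boundary mass matrices on a surface triangle (P1 elements).
Mloc_midside = faceArea/12 * [2 1 1; 1 2 1; 1 1 2];  % mid-side rule: exact
Mloc_bary    = faceArea/9  * ones(3);                % barycenter rule: rank one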

Table 3.6: Execution times for the 3D problem in Example 3.6.

Total nodes   Interior nodes   Boundary nodes   Direct (s)   pcg (s)   pcg iterations   Functional value
4913          3375             1538             0.7          0.2       6                0.4142332683
35937         29791            6146             123.2        5.2       6                0.4159847757
274625        250047           24578            out of RAM   183.7     6                0.4164610762

4 Control constrained problems

Problem has a unique solution that satisfies for all . For every , problem also has a unique solution that satisfies