Optimization Methods for Dirichlet Control Problems
We discuss several optimization procedures to solve finite element approximations of linear-quadratic Dirichlet optimal control problems governed by an elliptic partial differential equation posed on a 2D or 3D Lipschitz domain. The control is discretized explicitly using continuous piecewise linear approximations. Unconstrained, control-constrained, state-constrained and control-and-state-constrained problems are analyzed. A preconditioned conjugate gradient method for a reduced problem in the control variable is proposed to solve the unconstrained problem, whereas semismooth Newton methods are discussed for the solution of the constrained problems. State constraints are treated via a Moreau-Yosida penalization. Convergence is studied for both the continuous problems and the finite dimensional approximations. In the finite dimensional case, we are able to show convergence of the optimization procedures even in the absence of a Tikhonov regularization parameter. Computational aspects are also treated, and several numerical examples are included to illustrate the theoretical results.
Dirichlet optimal control, discretization, constrained optimization, preconditioned conjugate gradient, semismooth Newton methods
49J20, 49M05, 49M15, 65N30
Only in the last ten years has it been possible to develop a systematic study of Dirichlet optimal control problems governed by elliptic equations, which started with the seminal paper . In that work, a 2D control-constrained problem governed by a semilinear equation posed on a convex polygonal domain is studied. Several other works have been published about numerical error estimates; see  or [3, Ch 2.1] for a variational approach to control-constrained problems posed on smooth 2D or 3D domains,  for a superconvergence result on the state approximation for unconstrained 2D problems, and  for control-and-state-constrained problems posed on convex polygonal domains. In  the authors study the control-constrained problem in a 2D smooth convex domain, taking into account the difficulties arising from the boundary approximation, and in  an apparent paradox between  and  is explained. The regularity of the solution in possibly nonconvex polygonal plane domains is studied in ; see also the introduction of that paper for further references on related problems. In the recent publication , error estimates for a Dirichlet control problem governed by a parabolic equation are obtained. In that work, the spatial discretization of the control is studied both for continuous piecewise linear finite elements and for the variational approach.
Before that, several works dealing with efficient optimization methods for control- or state-constrained problems had appeared; in [10, 11, 12, 13, 14, 15, 16] the semismooth Newton method is thoroughly studied. Since all these papers on optimization predate the papers on the numerical analysis of Dirichlet control problems, very little or no reference is made in them to their applicability to the problems we deal with here. Only in  are two Dirichlet control problems, with more regular solutions than the ones we treat here, studied in the infinite-dimensional case. Let us also mention that in  the Dirichlet boundary condition is replaced by a Robin penalization. For this kind of penalization the methods developed in the aforementioned references are directly applicable. In , the semismooth Newton method is studied in the context of the variational approach to the control. Although the authors only exemplify their results through distributed and Robin boundary control, a combination of their results and some of the results we present in Section 4 can be applied to Dirichlet control. See Remark 4 below.
In this work we describe optimization methods for Dirichlet control problems in both the infinite and finite dimensional cases. Convergence proofs, examples and practical implementation details are discussed for all the algorithms throughout the paper.
Next, we discretize the problem. As is usual in control problems, we have three choices to discretize the control: the first option is not to discretize the control, using a variational discretization as introduced in  for distributed problems; the second is to use piecewise constant functions; and the third is to use continuous piecewise linear functions.
The choice is not trivial because first order optimality conditions for Dirichlet control problems involve the normal derivative of the adjoint state.
If we discretized the optimality system directly using, e.g., continuous piecewise linear finite elements, we would obtain two different approximations of the control: the trace of the discrete state would be continuous piecewise linear, while the normal derivative of the discrete adjoint state would be piecewise constant. In [20, Ch. 126.96.36.199] and in [3, Example 2.1.11] this difficulty is overcome using a mixed formulation of the state equation, which is discretized with the lowest order Raviart-Thomas element. In  a convergence proof for this kind of discretization is given. In this way, both the trace of the discrete state and the normal derivative of the discrete adjoint state are piecewise constant functions, so the identification of both with the discrete control is meaningful. The discretization of the control in the presence of control constraints is carried out in these papers using a variational discretization. This kind of approximation may be convenient when the gradient of the state is the variable of interest, since this quantity is treated as one of the variables of the problem.
Another approach, in finite dimension, is to use the variational discrete normal derivative of the discrete adjoint state introduced in , which is a continuous piecewise linear function. Doing so, both the trace of the discrete state and the normal derivative of the discrete adjoint state are continuous piecewise linear functions, so the identification of both with the discrete control is meaningful. Following this idea, in  the control is not discretized for the control-constrained case, but a variational approach is followed.
In this work we investigate optimization methods for the case of discretizing the control explicitly using continuous piecewise linear functions.
The end of Section 2 is devoted to describing in some detail several computational aspects that will be important in the rest of the work.
We next provide in Section 3 an efficient method to solve the unconstrained problem. We propose in Subsection 3.2 a preconditioned conjugate gradient (pcg in the rest of the work) method for a reduced problem in the spirit of [22, Section 5]. We are able to prove convergence of the conjugate gradient even for the case where the Tikhonov regularization parameter vanishes.
In sections 4, 5 and 6 we study the convergence of the semismooth Newton method for the constrained problems and write practical algorithms for the solution of the finite dimensional approximations. In Section 4 we deal with the control-constrained problem. In many of the aforementioned references about the semismooth Newton method for control-constrained problems, the authors study convergence for an abstract problem or an infinite dimensional problem. For distributed and Neumann control problems the results are immediately applicable to the finite dimensional approximations because the controls can be discretized using piecewise constant functions, and therefore the variational inequality arising from first order necessary optimality conditions can be written in an element-wise form. The same idea applies when we are dealing with a variational approach as proposed in [2, 3, 19] or a mixed formulation, as studied in [21, 20, 3]. When we use continuous piecewise linear elements, the variational inequality cannot be written in a point-wise or element-wise form; see (36d) and Remark 5. We include the analysis of Newton methods for both the original problem and the discretized one.
In Section 5 we study the state-constrained problem using a Moreau-Yosida penalization. Since there are no control constraints, the analysis of the semismooth Newton method for the infinite dimensional problem is applicable to the finite dimensional one, so we do not need to repeat it. We prove that the finite element approximations of the penalized problem converge to its solution. This result cannot be deduced straightforwardly from the ones in the literature, since the penalized functional is not of class . A continuation strategy as proposed in  is developed. Finally, in Section 6 we discuss the problem with both control and state constraints.
It is well known that the main difficulty with Dirichlet control problems is the low regularity of the solutions. This regularity and related error estimates are, so far, well established in 2D polygonal domains [1, 5, 8] and 2D or 3D smooth domains , but, to the best of our knowledge, there is no general study in 3D polyhedral domains. Although the main focus of this work is on optimization methods, we also study the regularity of the solution and error estimates of the approximations in one example case in a polyhedron; see Example 3.6.
2 Statement of the problem and first properties
Let , or , be a bounded domain with Lipschitz boundary and in this domain consider a target state . Consider also the continuous linear operator such that if and only if is the solution in the transposition sense (see Definition 2.2 below) of
Let be a domain such that and define two pairs of functions and such that for all and for all . For some fixed regularization parameter , define
and consider the sets
In this work we will study optimization procedures for the following four Dirichlet control problems:
namely the unconstrained, control-constrained, state-constrained and control-and-state constrained problems.
Almost all the literature related to these problems is written using the state equation (1). There would be no problem in taking into account an equation of the form
with regular enough , , and satisfying a uniform ellipticity condition.
Let us state precisely what we mean by solution in the transposition sense. Consider the space
This is a Banach space with the graph norm . Further, the functions in this space satisfy . This is known to be true for smooth domains; convex domains, see [17, Lemma A.1]; plane polygonal domains, see [8, Corollary 2.3]; or polyhedral domains, see [23, Theorem 2.6.7] and the usual trace theorem. We have not been able to find a proof of this fact for general Lipschitz domains. In [24, Theorem B.2] the regularity is proved. Nevertheless, as the authors notice in page 165 of this reference, the trace theorem is not valid neither in nor in , so we cannot deduce immediately that . The results in [25, 24] imply that the usual trace result can be extended to harmonic functions in the limit cases. We show next how to take advantage of this to prove that in Lipschitz domains. Regarding the analysis of semismooth Newton methods –see Lemma 4.1 below– we also prove regularity for some .
Let , or , be a bounded domain with Lipschitz boundary and consider . Then, there exists depending on the domain such that, for , we have and
If, further, is smooth or convex or polygonal or polyhedral, then there exists such that and
Denote , extend by zero to , consider the Newtonian potential centered at the origin and define , the convolution product of and . Then and , so it is clear that for all if and all if , since the dimension of is . This implies that:
(a) because the boundary is Lipschitz and therefore it has a unit normal vector defined almost everywhere; and there exists such that
(b) due precisely to the definition of , and there exists such that
Define now the unique solution of in , on . Using [24, Theorem 5.6], we have that there exists such that if , then the nontangential maximal function of the gradient of satisfies and there exists such that
As is pointed out in [26, p. 438], this implies that has nontangential limit a.e. on and we can define the normal derivative of at a point as the nontangential limit as of . For a precise definition of nontangential limit and nontangential maximal function see, e.g., the introduction of the work . This, together with inequalities (5) and (6), implies that and
We will say that is the solution in the transposition sense of (1) if
Here and in the rest of the work stands for the standard inner product in .
The adjoint operator of is defined by , where is the unique weak solution of
We can write now
where is a constant.
From this expression, we can easily compute the derivative of at a point in the direction :
For later use, we will define now for every , and the unique solution of
To discretize the problems, consider a regular family of triangulations of . To simplify the notation, we will suppose that is polygonal or polyhedral. Related to the mesh, we will call the number of nodes and the nodes of the mesh. We define the sets of interior indexes, boundary indexes and indexes in as , and . For the discretization of the state and the adjoint state we use the space of linear finite elements ,
As usual, we will abbreviate . For the control we use the space of continuous piecewise linear functions on
Notice that the elements of are the traces of the elements of . We will denote by or the interpolation operator related to these spaces and by or the projection onto these spaces in the sense. For all and all :
We discretize the state equation following the work by Berggren : define such that for , if and only if is the unique solution of
where is the bilinear form associated to the operator in the PDE. In the case of the Laplace operator . The discrete functional is thus defined as
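Since the P1 basis functions have constant gradients on each element, the bilinear form of the Laplace operator can be assembled element by element in closed form. The following sketch is ours, written in Python for illustration (the paper's computations use Matlab); the function name is hypothetical:

```python
import numpy as np

def local_stiffness_p1(verts):
    """Element stiffness matrix for the Laplacian on one triangle.

    verts: (3, 2) array of vertex coordinates.  Returns the 3x3 matrix
    with entries  int_T grad(phi_i) . grad(phi_j) dx,  where phi_i are
    the P1 (barycentric) basis functions of the triangle T.
    """
    v = np.asarray(verts, dtype=float)
    J = np.column_stack((v[1] - v[0], v[2] - v[0]))  # Jacobian of the affine map
    area = 0.5 * abs(np.linalg.det(J))
    # Reference gradients of the three basis functions, mapped to T.
    G = np.linalg.inv(J).T @ np.array([[-1.0, 1.0, 0.0],
                                       [-1.0, 0.0, 1.0]])
    return area * (G.T @ G)
```

Summing these element matrices over all triangles of the mesh yields the global matrix associated with the bilinear form.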
We define now
The discrete problems reads thus as
The adjoint operator of is given by the discrete variational normal derivative; see . We have and if satisfies
where is the unique solution of
It is customary to write . For and , we have
We can then write
The computation of the derivative of at a point in the direction is then obtained with
Since our final objective is to optimize in , let us see in more detail how to make the computations in this space.
Consider the nodal basis in , where is the dimension of and satisfies , the latter being respectively the number of interior and boundary nodes. With an abuse of notation, we will also denote the restriction of to . Define the usual finite element stiffness, mass and boundary mass matrices by
We will also use for the identity matrix, for the zero matrix and usually refer to submatrices as or defined by taking the rows or columns designated by the index sets in the subscripts, the semicolon meaning “all the indexes”. For instance, the usual boundary mass matrix is .
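The submatrix notation just introduced maps directly onto index arrays in code. A small illustrative sketch in Python (the matrix and index sets here are made up for the example):

```python
import numpy as np

# A symmetric "stiffness-like" matrix on 5 nodes; nodes 0-2 are taken
# as interior (I) and nodes 3-4 as boundary (B).  Purely illustrative.
K = np.array([[ 4., -1.,  0., -1.,  0.],
              [-1.,  4., -1.,  0., -1.],
              [ 0., -1.,  4.,  0., -1.],
              [-1.,  0.,  0.,  2., -1.],
              [ 0., -1., -1., -1.,  3.]])
I = np.array([0, 1, 2])          # interior indexes
B = np.array([3, 4])             # boundary indexes

K_II = K[np.ix_(I, I)]           # rows and columns in I
K_IB = K[np.ix_(I, B)]           # rows in I, columns in B
K_B_all = K[B, :]                # "semicolon" notation: all columns
```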
Given , we denote the vector and for we denote the vector . Using (10), we have that iff
Since is nonsingular, we can write this as
If we define as
we have that if and only if .
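The elimination of the interior unknowns just described can be sketched as follows: for the Laplacian with homogeneous right-hand side, fixing the boundary values and solving for the interior ones amounts to a discrete harmonic extension. This Python sketch is ours; the full discrete state equation also carries data terms, which are omitted here:

```python
import numpy as np

def discrete_harmonic_extension(K, I, B, u_B):
    """Interior values of the discrete state for boundary data u_B.

    Eliminating the boundary unknowns of K y = 0 with y_B = u_B fixed
    leaves the interior system  K_II y_I = -K_IB u_B.  Load terms of
    the full discrete state equation are omitted in this sketch.
    """
    K_II = K[np.ix_(I, I)]
    K_IB = K[np.ix_(I, B)]
    return np.linalg.solve(K_II, -K_IB @ u_B)
```

On a uniform 1D mesh this reproduces the linear interpolant of the boundary data, as expected for a discrete harmonic function.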
Given , let be the solution of (12) for . Denoting the corresponding vector, it can be computed as the solution of
Then we could compute , the vector whose components are the coefficients of the (minus) discrete normal derivative solving the system
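The computation of the discrete variational normal derivative can be sketched in the same spirit: the residual of the discrete equation, tested against the boundary basis functions, defines the normal derivative weakly through a system with the boundary mass matrix. The following Python sketch is ours; `rhs` stands for the assembled load vector, whose exact form depends on the problem data:

```python
import numpy as np

def variational_normal_derivative(K, M_BB, B, p, rhs):
    """Discrete variational normal derivative of a discrete function p.

    Solves  M_BB d = (K p - rhs)|_B : the PDE residual of p, tested
    against the boundary basis functions, defines d weakly.  The exact
    right-hand side `rhs` depends on the problem data (assumption).
    """
    residual = K @ p - rhs
    return np.linalg.solve(M_BB, residual[B])
```

In 1D with p(x) = x this returns the outward derivatives -1 and 1 at the two endpoints, matching the classical normal derivative.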
Formally, we can also write . To finish this section, we also define the matrix and the vector by
We have that
We also note that
To compute the vector , we consider the projection of onto in the sense, and denote , the vector whose -th component is , and
So for , the latter represented by the vector , we can compute
Notice that applying the equality (16), the explicit computation of is not necessary, and we can write
3 Unconstrained problem
Problem has a unique solution that satisfies . For every , problem also has a unique solution that satisfies . The problems being convex, these conditions are also sufficient. Moreover, it is known that strongly in , and error estimates in terms of the mesh size are also available if and the domain is either 2D polygonal or smooth. See [1, 2, 6, 4, 5, 28].
Let us comment on two different approaches for the solution of the FEM approximation of .
3.1 Solve the optimality system for the state, the adjoint state and the control.
We write first order optimality conditions for . There exists some such that for every , there exist unique , and satisfying the optimality system:
We may eliminate with the boundary condition, and reorder the equations to get the linear system
Notice that system (21) is solvable even for . Solving this equation would completely solve the problem for the unconstrained case. When the discretization is very fine, the number of unknowns may make the solution of the system by direct methods too difficult. The preconditioned conjugate gradient method for this kind of linear system is studied by Schöberl and Zulehner in  and Herzog and Sachs in . A preconditioner can be built using matrices representing scalar products in and . Following Algorithm 1 in the last-mentioned reference, at each iteration three linear systems must be solved: two of size and one of size . In [30, Algorithm 3], the systems are solved using a multigrid method.
3.2 Use an iterative method to solve a reduced problem in the control variable.
Let us see another way of solving . Using (17), first order optimality conditions can also be written as
The matrix is symmetric and positive definite for all and there exists independent of and such that its condition number is bounded by
where and are respectively the smallest and greatest eigenvalues of the matrices and .
It is clear from (18) that is symmetric. The mass matrices and are symmetric and positive definite and therefore and . Since the boundary components of are exactly , we have that and hence
so is positive definite.
For a graded family of meshes with grading parameter (see ), we usually define , where is the number of nodes of the mesh and is the dimension of . The case corresponds to a quasi-uniform mesh family.
If is a family of graded meshes with grading parameter , then there exists independent of and such that
In particular, for a quasi-uniform family, there exists such that
The explicit computation of the matrix is out of the question, since it requires the solution of systems of size . Much better performance can be achieved using an iterative method which only requires the computation of . This is shown in Algorithm 1.
Preconditioned conjugate gradient methods can be carried out at the cost of one evaluation of per iteration. The computation of can be done in a similar way, but of course it only must be computed once; see Algorithm 2.
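The structure of such a matrix-free pcg can be sketched as follows. Here `apply_S` stands for the action of the reduced matrix (two stiffness solves plus a boundary mass multiplication per call, as described above) and `solve_prec` for the preconditioner application; both names, and the textbook recursion itself, are illustrative rather than a transcription of Algorithms 1 and 2:

```python
import numpy as np

def pcg_reduced(apply_S, b, solve_prec, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient with a matrix-free operator."""
    x = np.zeros_like(b)
    r = b - apply_S(x)            # initial residual
    z = solve_prec(r)             # preconditioned residual
    d = z.copy()                  # search direction
    rz = r @ z
    for _ in range(maxit):
        Sd = apply_S(d)           # the only operator application per iteration
        alpha = rz / (d @ Sd)
        x = x + alpha * d
        r = r - alpha * Sd
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = solve_prec(r)
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x
```

Any symmetric positive definite operator can be plugged in; with the identity as preconditioner this reduces to the plain conjugate gradient method.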
To finish this section, let us say a word about a preconditioner to solve (22). In practice, matrices like or make acceptable preconditioners. This is especially important when using graded meshes; see Example 3.4 below. Notice that at each pcg iteration, we only need to solve two linear systems, each of size , to compute , plus another one of size to compute .
All the code for the examples has been written by the author using Matlab R2015b and has been run on a desktop PC with Windows 10 Pro 64 bits with 16 GB RAM DIMM 1333 MHz and an Intel Core i7 CPU.
In this example we compare the different execution times, and , obtained when we solve the optimality system using a direct solver for equation (21) or using an iterative method, in our case the preconditioned conjugate gradient method, to solve the reduced problem (22). We have used, respectively, the Matlab built-in commands mldivide and pcg.
We use the example in , where the interior of the convex hull of the points , , , , ; and .
A rough initial mesh is built with the Matlab PDEToolbox (version 2.1) command initmesh with ’hmax’ set to , and subsequent nested meshes are obtained by regular dyadic refinement using refinemesh. We use our own code to assemble the matrices , and .
For the pcg we use as initial guess the null vector, a relative tolerance of 1E-10 and the preconditioner . At each iteration we have to solve the linear systems (14) and (15). To do this, we first obtain the Cholesky factors of , so that at each iteration we only have to solve triangular systems. The interior mesh nodes were ordered using a symmetric approximate minimum degree permutation, via the Matlab command symamd, in order to minimize the number of nonzero elements in the Cholesky factors of . An analogous reordering of the boundary nodes is done to optimize the sparsity pattern of the Cholesky factors of the preconditioner. These reorderings and factorizations take roughly the same CPU time as a single iteration of the pcg. This time is included in .
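The pattern of factoring once and reusing the factors at every iteration can be sketched as follows (dense numpy instead of sparse Matlab factors, purely to keep the example self-contained; a production code would combine a sparse Cholesky with a fill-reducing reordering such as symamd, as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
A = X @ X.T + 6.0 * np.eye(6)       # SPD stand-in for the stiffness block
L = np.linalg.cholesky(A)           # A = L L^T, computed once up front

def solve_with_factor(b):
    """Solve A x = b reusing the Cholesky factor L (two triangular solves)."""
    y = np.linalg.solve(L, b)       # forward substitution  L y = b
    return np.linalg.solve(L.T, y)  # back substitution     L^T x = y

b = rng.standard_normal(6)
x = solve_with_factor(b)
```

Each pcg iteration then costs only the two triangular solves, not a fresh factorization.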
For reference and possible double-check, we also report on the optimal value of the functional. We have marked with the experiments in which we have run out of RAM.
We have also coded Algorithm 1 in , though the practical comparison with the other two methods is less clear.
The main difficulty is the choice of appropriate scalar products. For instance, we have tried (using the notation in ) , or for the scalar product in and for the scalar product in , but we have observed that the number of iterations needed to achieve the prescribed accuracy grows as the number of unknowns increases. For the first three meshes in the 2D example described in Table 3.3 and we obtain 126, 173 and 236 iterations respectively.
In this example we show the effect of the use of as a preconditioner to solve (22). In this case , , and we use a mesh family graded at the origin with parameter ; see . In Table 3.4 we compare the number of iterations needed to reach the prescribed accuracy of E without and with the preconditioner. This example shows very clearly the effectiveness of this preconditioning strategy.
A 3D example in a polyhedron. To the best of our knowledge, there is no general theory about the regularity of solutions or approximation results for 3D Dirichlet control problems posed on polyhedral domains. In , the authors study a semi-discrete or variational approach to 3D problems on regular domains. Although the semi-discrete approach coincides with the full approach for unconstrained problems, we cannot take advantage of their results, since the regularity of the solutions in a smooth domain is (see [2, Lemma 2.1]) much higher than the one we may obtain in polyhedral domains.
For this example we will take the unit cube , and . First, we obtain a regularity result for the solution of our problem.
Let be the interior of a rectangular parallelepiped and for some . Consider the solution of problem . Then, , and . Moreover, on the edges and corners of .
Since , using transposition and interpolation it is clear that . Classical Sobolev embedding in 3D leads to . Since is a parallelepiped and is regular enough, [34, Theorem 1] states that , and hence , where , , are the faces of . This does not immediately imply that belongs to because , which is the topological dimension of , and some integral compatibility condition should be imposed on the edges; but for all , it is true that , and therefore . Using again Sobolev embeddings, we have that for all and hence . Applying once again [34, Theorem 1], we have that .
Now we have that and we can prove that if we define on the edges of , then we obtain a continuous function. To do this, we use a similar argument to the one used in [7, Section 4] for 2D problems. Since , . Consider two faces and with a common edge and let , be two linearly independent vectors tangent to face ( or ) such that is also tangent to the edge . Since on , we have that for every , and for every , . So for , we have that and therefore can be seen as a continuous function if we set its value to on the edges, despite the jump discontinuities of the normal vector .
So is continuous and hence . The regularity of the optimal state follows from the trace theorem; see e.g. . ∎
Using this regularity, an error estimate can be proved for the example problem.
Let be a rectangular parallelepiped and for some . Let be the solution of problem and the solution of . Then there exist some and such that for all
The proof follows the same guidelines as those of  or [5, Section 6.1] and thus will be omitted. An interesting remark is that for 2D problems, we can deduce uniform convergence for the controls using the estimate and an inverse inequality, since the boundary is 1D. This does not work for 3D problems with the error estimate at hand, since now is 2D.
To solve the problem, we have used a regular mesh of identical cubes of size , each of them divided into 6 tetrahedra according to the Kuhn triangulation of the cube (see e.g. ). To the best of our knowledge, the current version of the Matlab PDEToolbox (version 2.1) computes an approximation of using the barycenter formula. Although this does not affect the order of convergence of the FEM, this matrix plays a central role in Dirichlet control problems (for instance, is not singular, but the barycenter approximation may be singular), so we have computed it in an exact way (using the mid-sides formula). Mesh data, computation times and optimal values are displayed in Table 3.6.
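The Kuhn triangulation mentioned above can be generated explicitly: each permutation of the three coordinate axes yields one tetrahedron, and all six share the main diagonal of the cube. A Python sketch (ours, purely illustrative):

```python
import numpy as np
from itertools import permutations

def kuhn_tetrahedra():
    """The six tetrahedra of the Kuhn triangulation of the unit cube.

    The permutation (i, j, k) of the axes gives the tetrahedron with
    vertices 0, e_i, e_i + e_j, e_i + e_j + e_k; every tetrahedron
    contains the main diagonal from (0,0,0) to (1,1,1).
    """
    e = np.eye(3)
    tets = []
    for i, j, k in permutations(range(3)):
        v0 = np.zeros(3)
        tets.append(np.array([v0, e[i], e[i] + e[j], e[i] + e[j] + e[k]]))
    return tets

def tet_volume(t):
    """Volume of a tetrahedron given as a (4, 3) array of vertices."""
    return abs(np.linalg.det(t[1:] - t[0])) / 6.0
```

The six tetrahedra have equal volume 1/6 and tile the cube, which makes this subdivision convenient for regular meshes of identical cubes.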
4 Control constrained problems
Problem has a unique solution that satisfies for all . For every , problem also has a unique solution that satisfies