HJB equations in infinite dimension and optimal control of stochastic evolution equations via generalized Fukushima decomposition

Giorgio Fabbri¹ (¹Aix-Marseille Univ. (Aix-Marseille School of Economics), CNRS & EHESS. 5, Boulevard Maurice Bourdet, 13205 Marseille Cedex 01, France. E-mail: giorgio.fabbri@univ-amu.fr. The work of this author has been developed in the framework of the center of excellence LABEX MME-DII (ANR-11-LABX-0023-01).)   and   Francesco Russo² (²ENSTA ParisTech, Université Paris-Saclay, Unité de Mathématiques appliquées, 828, Boulevard des Maréchaux, F-91120 Palaiseau, France. E-mail: francesco.russo@ensta-paristech.fr. The financial support of this author was partially provided by the DFG through the CRC “Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their application”.)
August 1st 2017

A stochastic optimal control problem driven by an abstract evolution equation in a separable Hilbert space is considered. Thanks to the identification of the mild solution of the state equation as a χ-weak Dirichlet process, the value process is proved to be a real weak Dirichlet process. The uniqueness of the corresponding decomposition is used to prove a verification theorem.

Thanks to this technique, several of the required assumptions are milder than those employed in previous contributions concerning non-regular solutions of Hamilton-Jacobi-Bellman equations.

KEY WORDS AND PHRASES: Weak Dirichlet processes in infinite dimension; Stochastic evolution equations; Generalized Fukushima decomposition; Stochastic optimal control in Hilbert spaces.

2010 AMS MATH CLASSIFICATION: 35Q93, 93E20, 49J20

1 Introduction

The goal of this paper is to show that, if we carefully exploit some recent developments in stochastic calculus in infinite dimension, we can weaken some of the hypotheses typically demanded in the literature on non-regular solutions of Hamilton-Jacobi-Bellman (HJB) equations to prove verification theorems and optimal syntheses of stochastic optimal control problems in Hilbert spaces.

As is well known, the study of a dynamic optimization problem can be linked, via the dynamic programming principle, to the analysis of the related HJB equation, which is, in the context we are interested in, a second-order PDE in infinite dimension. When this approach can be successfully applied, one can prove a verification theorem and express the optimal control in feedback form (that is, at any time, as a function of the state) using the solution of the HJB equation. In this case the latter can be identified with the value function of the problem.

In the regular case (i.e. when the value function is of class C^{1,2}, see for instance Chapter 2 of [16]) the standard proof of the verification theorem is based on Itô's formula. In this paper we show that some recent results in stochastic calculus, in particular Fukushima-type decompositions explicitly suited for the infinite dimensional context, can be used to prove the same kind of result for less regular solutions of the HJB equation.

The idea is the following. In a previous paper ([17]) the authors introduced the class of χ-weak Dirichlet processes (the definition is recalled in Section 2; χ is a Banach space continuously embedded into the dual of a suitable tensor product space) and showed that convolution type processes, and in particular mild solutions of infinite dimensional stochastic evolution equations (see e.g. [8], Chapter 4), belong to this class. By applying this result to the solution of the state equation of a class of stochastic optimal control problems in infinite dimension, we are able to show that the value process, that is the value of any given solution of the HJB equation computed along the trajectory taken into account³ (³The expression value process is sometimes used to denote the value function computed along the trajectory; the two definitions often coincide, but this is not always the case.), is a (real-valued) weak Dirichlet process (with respect to a given filtration), a notion introduced in [14] and subsequently analyzed in [30]. Such a process can be written as the sum of a local martingale and a martingale-orthogonal process, i.e. a process having zero covariation with every continuous local martingale. This decomposition is unique and, in Theorem 3.7, we exploit the uniqueness property to characterize the martingale part of the value process as a suitable stochastic integral with respect to a Girsanov-transformed Wiener process, which allows us to obtain a substitute of the Itô-Dynkin formula for solutions of the Hamilton-Jacobi-Bellman equation. This is possible when the value process associated to the optimal control problem can be expressed as a function of the state process, with however a stronger regularity on the first derivative. We finally use this expression to prove the verification result stated in Theorem 4.1⁴ (⁴A similar approach is used, when the state space is finite-dimensional, in [29]. In that case things are simpler and there is no need to use the notion of χ-weak Dirichlet process and the results that are specifically suited for the infinite dimensional case; in that case χ will be isomorphic to the full space.)

We think the interest of our contribution is twofold. On the one hand we show that recent developments in stochastic calculus in Banach spaces, see for instance [11, 12], from which we adopt the framework related to generalized covariations and Itô-Fukushima formulae, but also other approaches such as [6, 32, 41], may have important counterpart applications in control theory. On the other hand the method we present allows us to improve some previous verification results by weakening a series of hypotheses.

We discuss this second point in detail here. There are several ways to introduce non-regular solutions of second-order HJB equations in Hilbert spaces. They are surveyed more precisely in [16], but they are essentially viscosity solutions, strong solutions, and the study of the HJB equation through backward SDEs. Viscosity solutions are defined, as in the finite-dimensional case, using test functions that locally “touch” the candidate solution. The viscosity solution approach was first adapted to second-order Hamilton-Jacobi equations in Hilbert spaces in [33, 34, 35] and then, for the “unbounded” case (i.e. including a possibly unbounded generator of a strongly continuous semigroup in the state equation, see e.g. equation (6)), in [40]. Several improvements of those pioneering studies have been published, including extensions to several specific equations but, differently from what happens in the finite-dimensional case, no verification theorems are available at the moment for stochastic problems in infinite dimension that use the notion of viscosity solution. The backward SDE approach can be applied when the mild solution of the HJB equation can be represented using the solution of a forward-backward system. It was introduced in [38] in the finite dimensional setting and developed in several works, among them [9, 19, 20, 21, 22]. This method only allows one to find optimal feedbacks in classes of problems satisfying a specific “structural condition”, imposing, roughly speaking, that the control acts within the image of the noise. The same limitation concerns the approach introduced and developed in [1] and [24].

In the strong solutions approach, first introduced in [2], the solution is defined as a proper limit of solutions of regularized problems. Verification results in this framework are given in [25, 26, 27, 28]. They are collected and refined in Chapter 4 of [16]. The results obtained using strong solutions are the main term of comparison for ours, both because in this context the verification results are more developed and because we partially work in the same framework, approximating the solution of the HJB equation by solutions of regularized problems. With respect to them our method has some advantages⁵ (⁵Results for specific cases, such as boundary control problems and reaction-diffusion equations (see [4, 5]), cannot be treated at the moment with the method we present here.): (i) the assumptions on the cost structure are milder; notably they do not include any continuity assumption on the running cost, which is only asked to be a measurable function; moreover the admissible controls are only asked to satisfy, together with the related trajectories, a quasi-integrability condition on the functional, see Hypothesis 3.3 and the subsequent paragraph; (ii) we work with a bigger set of approximating functions because we do not require the approximating functions and their derivatives to be uniformly bounded; (iii) the convergence of the derivatives of the approximating solutions is not necessary and is replaced by the weaker condition (17). This convergence, in its various possible forms, is unavoidable in the standard structure of the strong solutions approach, and it is avoided here only thanks to the use of the Fukushima decomposition in the proof. In terms of the last two points, our notion of solution is weaker than those used in the mentioned works; we nevertheless need to assume that the gradient of the solution of the HJB equation is continuous as a function with values in a suitable Hilbert space.

Even if it is rather simple, the example we present is of some interest in itself because, as far as we know, no explicit (i.e. with explicit expressions of the value function and of the approximating sequence) example of a strong solution of a second-order HJB equation in infinite dimension has been published so far.

The paper proceeds as follows. Section 2 is devoted to some preliminary notions, notably the definition of χ-weak-Dirichlet process and some related results. Section 3 focuses on the optimal control problem and the related HJB equation. It includes the key decomposition Theorem 3.7. Section 4 concerns the verification theorem. In Section 5 we provide an example of an optimal control problem that can be solved by using the developed techniques.

2 Some preliminary definitions and results

Consider a complete probability space (Ω, F, P). Fix T > 0 and s ∈ [0, T]. Let (F_t)_{t ∈ [s,T]} be a filtration satisfying the usual conditions. Each time we use expressions such as “adapted”, “martingale”, etc. we always mean “with respect to the filtration (F_t)”.

Given a metric space S we denote by B(S) the Borel σ-field on S. Consider two real separable Hilbert spaces H and G. By default we assume that all the H-valued processes are Bochner measurable functions with respect to the product σ-algebra B([s, T]) ⊗ F with values in (H, B(H)). Continuous processes are clearly Bochner measurable processes. Similar conventions hold for G-valued processes. We denote by H ⊗̂_π G the projective tensor product of H and G, see [39] for details.

Definition 2.1.

A continuous real process X = (X_t)_{t ∈ [s,T]} is called a weak Dirichlet process if it can be written as X = M + A, where M is a continuous local martingale and A is a martingale-orthogonal process, in the sense that A_s = 0 and [A, N] = 0 for every continuous local martingale N.
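In symbols (the letters X, M, A, N are our own placeholders, since the original notation was lost in extraction), the decomposition of Definition 2.1 reads:

```latex
% Weak Dirichlet decomposition of a continuous real process X on [s,T]
% (notation ours): M is a continuous local martingale and A has zero
% covariation with every continuous local martingale N.
X_t = M_t + A_t, \qquad A_s = 0, \qquad [A, N] \equiv 0
\quad \text{for every continuous local martingale } N .
```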

The following result is proved in Remarks 3.5 and 3.2 of [30].

Theorem 2.2.
  1. The decomposition described in Definition 2.1 is unique.

  2. A semimartingale is a weak Dirichlet process.

The notion of weak Dirichlet process constitutes a natural generalization of that of semimartingale. To see this, one can start by considering a real continuous semimartingale S = M + V, where M is a local martingale and V is a bounded variation process vanishing at zero. Given a function f of class C^{1,2}, Itô's formula shows that

f(t, S_t) = f(s, S_s) + M^f_t + A^f_t,     (1)

so that f(·, S) is a semimartingale, where M^f is a local martingale and A^f is a bounded variation process, both expressed in terms of the partial derivatives of f. If f is only of class C^{0,1} then (1) still holds with the same M^f, but now A^f is only a martingale-orthogonal process; in this case f(·, S) is generally no longer a semimartingale but only a weak Dirichlet process, see [30], Corollary 3.11. For this reason (1) can be interpreted as a generalized Itô formula.
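To spell out the two regimes, the display below sketches the martingale part M^f and the remainder A^f for a semimartingale S = M + V in standard notation (our reconstruction, not the authors' original formulas):

```latex
% C^{1,2} case: classical Itô formula gives a semimartingale decomposition.
f(t,S_t) = f(s,S_s)
  + \underbrace{\int_s^t \partial_x f(r,S_r)\,\mathrm{d}M_r}_{M^f_t}
  + \underbrace{\int_s^t \partial_r f(r,S_r)\,\mathrm{d}r
  + \int_s^t \partial_x f(r,S_r)\,\mathrm{d}V_r
  + \tfrac{1}{2}\int_s^t \partial^2_{xx} f(r,S_r)\,\mathrm{d}[M]_r}_{A^f_t}
% C^{0,1} case: M^f_t is unchanged, while
% A^f := f(\cdot,S) - f(s,S_s) - M^f is only martingale-orthogonal.
```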

Another aspect to be emphasized is that a semimartingale is also a finite quadratic variation process. Some authors, see e.g. [36, 13], have extended the notion of quadratic variation to the case of stochastic processes taking values in a Hilbert (or even Banach) space. The difficulty is that the notion of finite quadratic variation process (but also that of semimartingale or weak Dirichlet process) is not suitable in several contexts, in particular in the analysis of mild solutions of evolution equations, which in general cannot be expected to be either semimartingales or finite quadratic variation processes. A way to remain in this spirit is to introduce a notion of quadratic variation which is associated with a subspace χ (called a Chi-subspace) of the dual of a tensor product space. In the rare cases when the process does have a finite quadratic variation, the corresponding χ would be allowed to be the full dual space.

We recall that, following [10, 12], a Chi-subspace (of (H ⊗̂_π G)*) is defined as any Banach subspace χ which is continuously embedded into (H ⊗̂_π G)* and, following [17], given a Chi-subspace χ, we introduce the notion of χ-covariation as follows.

Definition 2.3.

Given two processes X and Y, with values in H and G respectively, we say that (X, Y) admits a χ-covariation if the two following conditions are satisfied.


For any sequence of positive real numbers ε_n converging to zero there exists a subsequence (ε_{n_k}) such that


where J is the canonical injection of a space into its bidual.


If we denote by [X, Y]^ε the map


the following two properties hold.

(i) There exists an application, denoted by [X, Y], defined on [s, T] × Ω with values in χ*, satisfying⁶ (⁶Given a separable Banach space B and a probability space, a family of processes is said to converge in the ucp (uniform convergence in probability) sense to a process Z, when ε goes to zero, if the supremum over t of the distance to Z converges to zero in probability, i.e. if, for any γ > 0, the probability that this supremum exceeds γ tends to zero.)


for every .

(ii) There exists a Bochner measurable process, denoted [X, Y], such that

  • for almost all ω, [X, Y](ω) is a (càdlàg) bounded variation process,

  • [X, Y] coincides a.s., for all t ∈ [s, T], with the limit appearing in (i).

If (X, Y) admits a χ-covariation, we call [X, Y] the χ-covariation of (X, Y). If [X, Y] vanishes we also write [X, Y] = 0. We say that a process X admits a χ-quadratic variation if (X, X) admits a χ-covariation. In that case [X, X] is called the χ-quadratic variation of X.

Definition 2.4.

Let H and G be two separable Hilbert spaces. Let χ ⊆ (H ⊗̂_π G)* be a Chi-subspace. A continuous adapted H-valued process A is said to be χ-martingale-orthogonal if [A, N] = 0 for any G-valued continuous local martingale N.

Lemma 2.5.

Let H and G be two separable Hilbert spaces and let V be an H-valued bounded variation process.
For any Chi-subspace χ ⊆ (H ⊗̂_π G)*, V is χ-martingale-orthogonal.


We will prove that, given any continuous process Y and any Chi-subspace χ, we have [V, Y] = 0. This will hold in particular if Y is a continuous local martingale.

By Lemma 3.2 of [17] it is enough to show that

in probability (the processes are extended to [s, T + 1] by setting, for instance, their value at T for any later time). Now, since χ is continuously embedded in (H ⊗̂_π G)*, there exists a constant C such that the norm in (H ⊗̂_π G)* is bounded by C times the norm in χ, so that


where the last step follows from Proposition 2.1, page 16, of [39]. Now, denoting by the same symbol the real total variation function of an H-valued bounded variation function defined on the interval [s, T], we get

So, by using Fubini’s theorem in (5),

where ω_Y denotes the modulus of continuity of Y. Finally this converges to zero almost surely, and hence in probability. ∎

Definition 2.6.

Let H and G be two separable Hilbert spaces. Let χ ⊆ (H ⊗̂_π G)* be a Chi-subspace. A continuous H-valued process X is called a χ-weak-Dirichlet process if it is adapted and there exists a decomposition X = M + A where

  • M is an H-valued continuous local martingale,

  • A is a χ-martingale-orthogonal process with A_s = 0.
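Schematically, with placeholder symbols M and A for the two components of Definition 2.6 (the original symbols were lost in extraction), the requirement is:

```latex
% chi-weak-Dirichlet decomposition of a continuous adapted H-valued X:
X = M + A, \qquad M \text{ an } H\text{-valued continuous local martingale},
\qquad [A, N] = 0 \ \text{ (in the } \chi\text{-covariation sense)}
\ \text{for every continuous local martingale } N .
```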

The theorem below was the object of Theorem 3.19 of [17]; it extends Corollary 3.11 of [30].

Theorem 2.7.

Let ν₀ be a Banach subspace continuously embedded in H and let χ := ν₀ ⊗̂_π ν₀, assumed to be a Chi-subspace. Let F : [s, T] × H → ℝ be a C^{0,1}-function. Denote by ∇F the Fréchet derivative of F with respect to the space variable and assume that the mapping (t, x) ↦ ∇F(t, x) is continuous from [s, T] × H to ν₀. Let X = (X_t), t ∈ [s, T], be a χ-weak-Dirichlet process with finite χ-quadratic variation. Then F(·, X) is a real weak Dirichlet process with local martingale part M^F_t = F(s, X_s) + ∫_s^t ⟨∇F(r, X_r), dM_r⟩, where M is the local martingale part of X.

3 The setting of the problem and HJB equation

In this section we introduce a class of infinite dimensional optimal control problems and we prove a decomposition result for the strong solutions of the related Hamilton-Jacobi-Bellman equation. We refer the reader to [42] and [8] respectively for the classical notions of functional analysis and stochastic calculus in infinite dimension we use.

3.1 The optimal control problem

Assume from now on that H and U are real separable Hilbert spaces. Assume that W is a U-valued (F_t)-Q-Wiener process (with W_s = 0, P-a.s.) and denote by L₂(U₀; H) the Hilbert space of the Hilbert-Schmidt operators from U₀ := Q^{1/2}(U) to H.

We denote by A the generator of the C₀-semigroup e^{tA} (for t ≥ 0) on H. A* denotes the adjoint of A. Recall that D(A) and D(A*) are Banach spaces when endowed with the graph norm. Let Λ be a Polish space (the control space).

We formulate the following standard assumptions that will be needed to ensure the existence and the uniqueness of the solution of the state equation.

Hypothesis 3.1.

b : [s, T] × H × Λ → H is a continuous function and satisfies, for some C > 0,

for all t ∈ [s, T], x, y ∈ H, a ∈ Λ. The diffusion coefficient σ : [s, T] × H → L₂(U₀; H) is continuous and, for some C > 0, satisfies

for all t ∈ [s, T], x, y ∈ H.

Given an adapted control process a = (a_t), we consider the state equation


The solution of (6) is understood in the mild sense: an H-valued adapted process X = (X_t) is a solution if

X_t = e^{(t−s)A} x + ∫_s^t e^{(t−r)A} b(r, X_r, a_r) dr + ∫_s^t e^{(t−r)A} σ(r, X_r) dW_r,

P-a.s. for every t ∈ [s, T]. Thanks to Theorem 3.3 of [23], given Hypothesis 3.1, there exists a unique (up to modifications) continuous (mild) solution X of (6).

Proposition 3.2.

Set ν₀ := D(A*) and χ := ν₀ ⊗̂_π ν₀. The process X is a χ-weak-Dirichlet process admitting a χ-quadratic variation, with decomposition X = M + A, where M is the local martingale defined by M_t := x + ∫_s^t σ(r, X_r) dW_r and A is a χ-martingale-orthogonal process.


See Corollary 4.6 of [17]. ∎

Hypothesis 3.3.

Let l (the running cost) be a measurable function and g (the terminal cost) a continuous function.

We consider the class of admissible controls constituted by the adapted processes a such that the integrand of the cost functional below is quasi-integrable. This means that either its positive or its negative part is integrable.

We consider the problem of minimizing, over all , the cost functional


The value function of this problem is defined, as usual, as


As usual, we say that the control a is optimal at (t, x) if it minimizes (8) among the admissible controls, i.e. if the corresponding cost equals the value function at (t, x). In this case we denote by X* the process which is then the corresponding optimal trajectory of the system.
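For orientation, the cost functional (8) and the value function can be written in the standard notation of the literature (a reconstruction: the symbols J, V, l, g, U are ours and may differ from the authors' original choices):

```latex
J(t,x;a) \;=\; \mathbb{E}\!\left[\int_t^T l\big(r, X_r, a_r\big)\,\mathrm{d}r
  \;+\; g\big(X_T\big)\right],
\qquad
V(t,x) \;=\; \inf_{a \in \mathcal{U}} J(t,x;a),
```

where U denotes the class of admissible controls introduced after Hypothesis 3.3.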

3.2 The HJB equation

The HJB equation associated with the minimization problem above is


In the above equation ∇v (respectively ∇²v) denotes the first (respectively second) Fréchet derivative of v with respect to the x variable. For fixed (t, x), ∇v(t, x) is identified (via the Riesz Representation Theorem, see [42], Theorem III.3) with an element of H, while ∇²v(t, x), which is a priori an element of (H ⊗̂_π H)*, is naturally associated with a symmetric bounded operator on H, see [18], statement 3.5.7, page 192. ∂_t v is the derivative with respect to the time variable.
The function


is called the current value Hamiltonian of the system, and its infimum over a ∈ Λ


is called the Hamiltonian. Using this notation, the HJB equation (10) can be rewritten as


We introduce the operator L₀, defined on sufficiently regular functions as


so that the HJB equation (13) can be formally rewritten as


Recalling that we assume the validity of Hypothesis 3.3, we consider the two following definitions of solution of the HJB equation.
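Before the definitions, the objects of (11)-(15) can be sketched in standard notation (a hedged reconstruction, since the precise displays were lost in extraction; the symbols F_{CV}, F, L₀, b, σ, l, g, Λ follow the usual conventions of this literature):

```latex
% Current value Hamiltonian and Hamiltonian (standard form):
F_{CV}(t,x,p;a) := \langle b(t,x,a),\, p\rangle_H + l(t,x,a),
\qquad
F(t,x,p) := \inf_{a\in\Lambda} F_{CV}(t,x,p;a).
% Linear and second-order part of the generator:
L_0 v(t,x) := \partial_t v(t,x)
 + \tfrac12\,\mathrm{Tr}\!\big[\sigma(t,x)\,\sigma^*(t,x)\,\nabla^2 v(t,x)\big]
 + \langle x,\, A^*\nabla v(t,x)\rangle_H .
% Formal HJB equation with terminal condition:
\begin{cases}
L_0 v(t,x) + F\big(t,x,\nabla v(t,x)\big) = 0, & (t,x)\in [s,T[\,\times H,\\[2pt]
v(T,x) = g(x). &
\end{cases}
```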

Definition 3.4.

We say that v is a classical solution of (15) if

  • The function

    is well-defined and finite for all (t, x) and is continuous in the two variables

  • (15) is satisfied at any (t, x).

Definition 3.5.

We say that v, with ∇v belonging to the space described in the footnote⁷ (⁷The space of uniformly continuous functions on each ball, with values in a suitable Hilbert space.), is a strong solution of (15) if the following properties hold.

  • The function is finite for all (t, x), it is continuous in the two variables and admits a continuous extension to the closure of its domain.

  • There exist three sequences (v_n), (h_n) and (g_n) fulfilling the following.

    1. For any n, v_n is a classical solution of the problem

    2. The following convergences hold:

      where the convergences are meant in the sense of uniform convergence on compact sets.

Remark 3.6.

The notion of classical solution as defined in Definition 3.4 is well established in the literature on second-order infinite dimensional Hamilton-Jacobi equations, see for instance Section 6.2 of [7], page 103. Conversely, the denomination strong solution is used for a certain number of definitions where the solution of the Hamilton-Jacobi equation is characterized by the existence of an approximating sequence (having certain properties and) converging to the candidate solution. The chosen functional spaces and the prescribed convergences depend on the classes of equations, see for instance [2, 4, 26, 27, 37]. In this sense the solution defined in Definition 3.5 is a form of strong solution of (15) but, differently from all the other papers we know of⁸ (⁸Except [29], but there the HJB equation and the optimal controls are finite dimensional.), we do not require any form of convergence of the derivatives of the approximating functions to the derivative of the candidate solution. Moreover, all the results we are aware of use sequences of bounded approximating functions (i.e. the v_n in the definition are bounded), and this is not required in our definition. All in all, the set of approximating sequences that we can manage is bigger than those used in the previous literature, and so our definition of strong solution is weaker.

3.3 Decomposition for solutions of the HJB equation

Theorem 3.7.

Suppose Hypothesis 3.1 is satisfied. Suppose that v is a strong solution of (15). Let X be the solution of (6) starting at time s from some x ∈ H and driven by some control a. Assume that ∇v is of the form


where and satisfy the following conditions.

  • is bounded (being the pseudo-inverse of );

  • satisfies


    for each .



We fix a control a. We denote by (v_n) the sequence of smooth solutions of the approximating problems prescribed by Definition 3.5, which converges to v. Thanks to the Itô formula for convolution type processes (see e.g. Corollary 4.10 in [17]), every v_n verifies


Using Girsanov’s Theorem (see [8] Theorem 10.14) we can observe that

is a Wiener process with respect to a probability measure equivalent to P on the whole interval [s, T]. We can rewrite (20) as


Since v_n is a classical solution of (16), the expression above gives


Since we wish to take the limit as n → ∞, we define


is a sequence of real local martingales converging ucp, thanks to the definition of strong solution and Hypothesis (18), to


Since the space of real continuous local martingales equipped with the ucp topology is closed (see e.g. Proposition 4.4 of [30]), the limit is a continuous local martingale indexed by [s, T].

We have now gathered all the ingredients to conclude the proof. We set ν₀ := D(A*) and χ := ν₀ ⊗̂_π ν₀. Proposition 3.2 ensures that X is a χ-weak Dirichlet process admitting a χ-quadratic variation, with a decomposition whose local martingale part, with respect to the original probability, is the stochastic integral of σ against W, the remainder being a χ-martingale-orthogonal process. Now

where the first term is a local martingale under the equivalent probability, thanks to [31], Theorem 2.14, pages 14-15, and the second term is a bounded variation process; then, thanks to Lemma 2.5, the latter is a χ-martingale-orthogonal process. So the remainder is again (one can easily verify that the sum of two χ-martingale-orthogonal processes is again a χ-martingale-orthogonal process) a χ-martingale-orthogonal process, and X is a χ-weak Dirichlet process, with respect to the equivalent probability, with the transformed local martingale part. Still under that probability, since ∇v is continuous from [s, T] × H to ν₀, Theorem 2.7 ensures that the process v(·, X) is a real weak Dirichlet process on [s, T], whose local martingale part is equal to

On the other hand, with respect to the equivalent probability, (24) implies that

is a decomposition of v(·, X) as a semimartingale, which is, in particular, also a weak Dirichlet process. By Theorem 2.2 such a decomposition is unique on [s, T], so the two local martingale parts coincide.



Example 3.8.

The decomposition (17), with the validity of Hypotheses (i) and (ii) in Theorem 3.7, is satisfied if v is a strong solution of the HJB equation in the sense of Definition 3.5 and, moreover, the sequence of derivatives of the corresponding approximating functions converges to ∇v. In that case we simply choose the trivial decomposition in (17), with vanishing second term. This is the typical assumption required in the standard strong solutions literature.

Example 3.9.

Again, the decomposition (17), with the validity of Hypotheses (i) and (ii) in Theorem 3.7, is fulfilled if the following assumption is satisfied:

for all choices of admissible controls. In this case we apply Theorem 3.7 with the two terms of (17) chosen accordingly.

4 Verification Theorem

In this section, as anticipated in the introduction, we use the decomposition result of Theorem 3.7 to prove a verification theorem.

Theorem 4.1.

Assume that Hypotheses 3.1 and 3.3 are satisfied and that the value function is finite for any (t, x). Let v be a strong solution of (10) and suppose that there exist two constants for which the corresponding growth bound holds for all (t, x).
Assume that, for all initial data and every control, ∇v can be written as in (17), with the two terms satisfying hypotheses (i) and (ii) of Theorem 3.7. Then we have the following.

  1. v ≤ V on [s, T] × H.

  2. Suppose that, for some initial datum, there exists a predictable process a* such that, denoting the corresponding trajectory of (6) simply by X*, we have


    a.e. Then a* is optimal at the given initial datum; moreover