Pointwise second-order necessary conditions for stochastic optimal controls, Part I: The case of convex control constraint

This work is partially supported by the National Basic Research Program of China (973 Program) under grant 2011CB808002, by the NSF of China under grant 11231007, and by the PCSIRT (from the Chinese Education Ministry) under grant IRT1273.

Haisen Zhang, School of Mathematics, Sichuan University, Chengdu 610064, Sichuan Province, China. E-mail: haisenzhang@yeah.net.    Xu Zhang, School of Mathematics, Sichuan University, Chengdu 610064, Sichuan Province, China. E-mail: zhangxu@scu.edu.cn.
Abstract

This paper is the first part of a series of works establishing pointwise second-order necessary conditions for stochastic optimal controls. In this part, both the drift and the diffusion terms may contain the control variable, but the control region is assumed to be convex. Under some assumptions stated in terms of Malliavin calculus, we establish the desired necessary condition for stochastic optimal controls that are singular in the classical sense.

Key words. Stochastic optimal control, Malliavin calculus, pointwise second-order necessary condition, variational equation, adjoint equation.

AMS subject classifications. Primary 93E20; Secondary 60H07, 60H10.

1 Introduction

Let $T>0$ and let $(\Omega,\mathcal{F},\mathbf{F},P)$, with $\mathbf{F}=\{\mathcal{F}_t\}_{t\ge 0}$, be a complete filtered probability space (satisfying the usual conditions), on which a $d$-dimensional standard Wiener process $W(\cdot)$ is defined such that $\mathbf{F}$ is the natural filtration generated by $W(\cdot)$ (augmented by all of the $P$-null sets).

In this paper, we shall consider the following controlled stochastic differential equation

\[
\left\{
\begin{aligned}
dx(t) &= b(t,x(t),u(t))\,dt + \sigma(t,x(t),u(t))\,dW(t), \quad t\in[0,T],\\
x(0) &= x_0,
\end{aligned}
\right.
\tag{1.1}
\]

with a cost functional

\[
J(u(\cdot)) = \mathbb{E}\Big[\int_0^T f(t,x(t),u(t))\,dt + h(x(T))\Big].
\tag{1.2}
\]

Here $u(\cdot)$ is the control variable valued in a set $U\subset\mathbb{R}^m$ (for some $m\in\mathbb{N}$), $x(\cdot)$ is the state variable valued in $\mathbb{R}^n$ (for some $n\in\mathbb{N}$), and $b$, $\sigma$, $f$ and $h$ are given functions (satisfying some conditions to be given later). As usual, when the context is clear, we omit the argument $\omega$ in the defined functions.

Denote by $\mathcal{B}(X)$ the Borel $\sigma$-field of a metric space $X$, and by $\mathcal{U}_{ad}$ the set of $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable and $\mathbf{F}$-adapted stochastic processes valued in $U$. Any $u(\cdot)\in\mathcal{U}_{ad}$ is called an admissible control. The stochastic optimal control problem considered in this paper is to find a control $\bar{u}(\cdot)\in\mathcal{U}_{ad}$ such that

\[
J(\bar{u}(\cdot)) = \inf_{u(\cdot)\in\mathcal{U}_{ad}} J(u(\cdot)).
\tag{1.3}
\]

Any $\bar{u}(\cdot)\in\mathcal{U}_{ad}$ satisfying (1.3) is called an optimal control. The corresponding state $\bar{x}(\cdot)$ (of (1.1)) is called an optimal state, and $(\bar{x}(\cdot),\bar{u}(\cdot))$ is called an optimal pair.
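As a simple orientation (an illustrative instance of our own choosing, not one of the paper's examples), consider the one-dimensional special case in which both the drift and the diffusion depend on the control:
\[
dx(t) = u(t)\,dt + u(t)\,dW(t),\quad x(0)=x_0,\qquad
J(u(\cdot)) = \mathbb{E}\Big[\int_0^1 x(t)^2\,dt\Big],\qquad U=[-1,1].
\]
Here $n=m=d=1$, $b(t,x,u)=\sigma(t,x,u)=u$, $f(t,x,u)=x^2$ and $h\equiv 0$; the control region $U$ is nonempty, bounded, and convex, so this problem fits exactly the setting studied below.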

In optimal control theory, one of the central topics is to establish first-order necessary conditions for optimal controls. We refer to [15] for an early study of first-order necessary conditions for stochastic optimal controls. Since then, many authors have contributed to this topic; see [2, 3, 12] and the references cited therein. Compared with the deterministic setting, new phenomena and difficulties appear when the diffusion term of the stochastic control system contains the control variable and the control region is nonconvex. The corresponding first-order necessary condition for this general case was established in [18].

For some optimal controls, it may happen that the first-order necessary conditions turn out to be trivial. For deterministic control systems, there are two types of such optimal controls. One of them, called a singular optimal control in the classical sense, is an optimal control for which the gradient and the Hessian of the corresponding Hamiltonian with respect to the control variable vanish/degenerate. The other, called a singular optimal control in the sense of the Pontryagin-type maximum principle, is an optimal control for which the corresponding Hamiltonian is constant over the control region. When an optimal control is singular, the first-order necessary condition provides no useful information for theoretical analysis or numerical computation, and therefore one needs to study second-order necessary conditions. In the deterministic setting, one can find many references in this direction (see [1, 7, 9, 10, 11, 13, 14, 16] and the references cited therein).
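To see how singularity in the classical sense arises, consider the following standard deterministic example (added here for illustration):
\[
\dot{x}(t) = u(t),\quad x(0)=0,\qquad J(u(\cdot))=\int_0^1 x(t)^4\,dt,\qquad u(t)\in[-1,1].
\]
Clearly $J\ge 0$ and $J(0)=0$, so $\bar{u}\equiv 0$ (with $\bar{x}\equiv 0$) is optimal. The Hamiltonian is $H=p\,u-x^4$, and the adjoint equation $\dot{p}(t)=4\bar{x}(t)^3=0$, $p(1)=0$ yields $p\equiv 0$. Hence $H_u=p\equiv 0$ and $H_{uu}\equiv 0$ along the optimal pair: both the first-order condition and the second-order condition in $u$ degenerate, and neither detects the optimality of $\bar{u}$.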

Compared with deterministic control systems, there are only two papers ([4, 19]) addressing second-order necessary conditions for stochastic optimal controls. In [19], a pointwise second-order maximum principle for stochastic optimal controls that are singular in the sense of the Pontryagin-type maximum principle was established for the case in which the diffusion term does not depend on the control variable; in [4], an integral-type second-order necessary condition for stochastic optimal controls was derived under the assumption that the control region is convex.

The main purpose of this paper is to establish a pointwise second-order necessary condition for stochastic optimal controls. In this work, both the drift and the diffusion terms, i.e., $b$ and $\sigma$, may contain the control variable $u$, and we assume that the control region $U$ is convex. The key difference between [4] and our work is that we consider here a pointwise second-order necessary condition, which is easier to verify in practical applications. We remark that, quite differently from the deterministic setting, there exist essential difficulties in deriving a pointwise second-order necessary condition from an integral-type one when the diffusion term of the control system contains the control variable, even in the case of a convex control constraint (see the first four paragraphs of Subsection 3.2 for a detailed explanation). We overcome these difficulties by means of techniques from Malliavin calculus. The method developed in this work can be adapted to establish a pointwise second-order necessary condition for stochastic optimal controls in the general case where the control region is nonconvex, but the analysis is much more complicated; we shall therefore give the details in another paper [21].

The rest of this paper is organized as follows. In Section 2, we list some notations, spaces and preliminary results from Malliavin calculus. In Section 3, we introduce the main results of this paper and give some examples. Finally, in Section 4 we give the proofs of the main results.

2 Some preliminaries

In this section, we present some preliminaries.

2.1 Some notations and spaces

We introduce some notations and spaces which will be used in the sequel.

Denote by $\langle\cdot,\cdot\rangle$ and $|\cdot|$, respectively, the inner product and the norm in $\mathbb{R}^n$ or $\mathbb{R}^m$, which can be identified from the context. Let $\mathbb{R}^{n\times m}$ be the space of all $n\times m$ real matrices. For any $A\in\mathbb{R}^{n\times m}$, denote by $A^\top$ the transpose of $A$ and by $|A|=\sqrt{\operatorname{tr}(AA^\top)}$ the norm of $A$. Also, write $\mathbb{S}^{n}=\{A\in\mathbb{R}^{n\times n}\mid A=A^\top\}$.

Let $\varphi=\varphi(t,x,u)$ be a given function. For a.e. $t\in[0,T]$, we denote by $\varphi_x(t,x,u)$, $\varphi_u(t,x,u)$ the first-order partial derivatives of $\varphi$ with respect to $x$ and $u$ at $(t,x,u)$, by $\varphi_{xx}(t,x,u)$ the Hessian of $\varphi$ with respect to $x$ at $(t,x,u)$, and by $\varphi_{xu}(t,x,u)$, $\varphi_{uu}(t,x,u)$ the second-order partial derivatives of $\varphi$ with respect to $(x,u)$ and $u$ at $(t,x,u)$.

For any $t\in[0,T]$ and $p,q\in[1,\infty)$, we denote by $L^p_{\mathcal{F}_t}(\Omega;\mathbb{R}^n)$ the space of $\mathbb{R}^n$-valued, $\mathcal{F}_t$-measurable random variables $\xi$ such that $\mathbb{E}|\xi|^p<\infty$; by $L^p(\Omega;L^q(0,T;\mathbb{R}^n))$ the space of $\mathbb{R}^n$-valued, $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable processes $\varphi$ such that $\mathbb{E}\big(\int_0^T|\varphi(t)|^q\,dt\big)^{p/q}<\infty$; by $L^p_{\mathbb{F}}(\Omega;L^q(0,T;\mathbb{R}^n))$ the space of $\mathbb{R}^n$-valued, $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable, $\mathbf{F}$-adapted processes $\varphi$ such that $\mathbb{E}\big(\int_0^T|\varphi(t)|^q\,dt\big)^{p/q}<\infty$; by $L^p_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^n))$ the space of $\mathbb{R}^n$-valued, $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable, $\mathbf{F}$-adapted continuous processes $\varphi$ such that $\mathbb{E}\big(\sup_{t\in[0,T]}|\varphi(t)|^p\big)<\infty$; by $L^q_{\mathbb{F}}(0,T;L^p(\Omega;\mathbb{R}^n))$ the space of $\mathbb{R}^n$-valued, $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable processes $\varphi$ such that $\int_0^T\big(\mathbb{E}|\varphi(t)|^p\big)^{q/p}dt<\infty$; and by $L^2(0,T;L^2_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n)))$ the space of $\mathbb{R}^n$-valued, measurable functions $\varphi(\cdot,\cdot)$ on $[0,T]^2\times\Omega$ such that for any $s\in[0,T]$, $\varphi(s,\cdot)$ is $\mathbf{F}$-adapted and $\int_0^T\mathbb{E}\int_0^T|\varphi(s,t)|^2\,dt\,ds<\infty$.

2.2 Some concepts and results from Malliavin calculus

In this subsection, we recall some concepts and results from Malliavin calculus (See [17] for a detailed discussion on this topic).

Denote by $C_b^\infty(\mathbb{R}^{dn};\mathbb{R})$ the set of $C^\infty$-smooth functions with bounded partial derivatives. For any $n\in\mathbb{N}$ and $t_1,\dots,t_n\in[0,T]$, write $W(t_1,\dots,t_n)=(W(t_1),\dots,W(t_n))$. Define

\[
\mathcal{J} = \big\{ F = \varphi(W(t_1,\dots,t_n)) \;\big|\; n\in\mathbb{N},\ t_1,\dots,t_n\in[0,T],\ \varphi\in C_b^\infty(\mathbb{R}^{dn};\mathbb{R}) \big\}.
\tag{2.1}
\]

Clearly, $\mathcal{J}$ is a linear subspace of $L^2_{\mathcal{F}_T}(\Omega;\mathbb{R})$. For any $F\in\mathcal{J}$ (in the form given in (2.1)), its Malliavin derivative is defined as follows:

\[
D_sF = \sum_{j=1}^{n} \partial_{x_j}\varphi(W(t_1,\dots,t_n))\,\chi_{[0,t_j]}(s),\qquad s\in[0,T].
\]

Write

\[
\|F\|_{1,2} = \Big( \mathbb{E}|F|^2 + \mathbb{E}\int_0^T |D_sF|^2\,ds \Big)^{1/2},\qquad F\in\mathcal{J}.
\]

Obviously, $\|\cdot\|_{1,2}$ is a norm on $\mathcal{J}$. It is shown in [17] that the operator $D$ has a closed extension to the space $\mathbb{D}^{1,2}$, the completion of $\mathcal{J}$ with respect to the norm $\|\cdot\|_{1,2}$. When $F\in\mathbb{D}^{1,2}$, the following Clark–Ocone representation formula holds:

\[
F = \mathbb{E}F + \int_0^T \mathbb{E}\big(D_tF \,\big|\, \mathcal{F}_t\big)\,dW(t).
\tag{2.2}
\]

Furthermore, if $F\in\mathbb{D}^{1,2}$ is $\mathcal{F}_s$-measurable, then $D_tF=0$ for any $t\in(s,T]$.
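For instance (a standard computation, recorded here for illustration), take $d=1$ and $F=W(T)^2\in\mathbb{D}^{1,2}$. Then $D_tF=2W(T)$ for $t\in[0,T]$, so that
\[
\mathbb{E}\big(D_tF\,\big|\,\mathcal{F}_t\big)=2\,\mathbb{E}\big(W(T)\,\big|\,\mathcal{F}_t\big)=2W(t),
\]
and (2.2) recovers the familiar Itô expansion
\[
W(T)^2 = T + \int_0^T 2W(t)\,dW(t).
\]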

Define $\mathbb{L}^{1,2}(\mathbb{R}^m)$ to be the space of processes $\varphi\in L^2(\Omega;L^2(0,T;\mathbb{R}^m))$ such that

  1. For a.e. $t\in[0,T]$, $\varphi(t)\in\mathbb{D}^{1,2}(\mathbb{R}^m)$;

  2. The function $(s,t,\omega)\mapsto D_s\varphi(t,\omega)$ admits a measurable version; and

  3. $\|\varphi\|_{\mathbb{L}^{1,2}}^2 := \mathbb{E}\int_0^T|\varphi(t)|^2\,dt + \mathbb{E}\int_0^T\!\!\int_0^T|D_s\varphi(t)|^2\,ds\,dt<\infty$.

Denote by $\mathbb{L}^{1,2}_{\mathbb{F}}(\mathbb{R}^m)$ the set of all adapted processes in $\mathbb{L}^{1,2}(\mathbb{R}^m)$.

In addition, write

\[
\mathbb{L}^{1,2}_{2}(\mathbb{R}^m) = \bigg\{ \varphi\in\mathbb{L}^{1,2}(\mathbb{R}^m) \;\bigg|\; \exists\,\nabla\varphi\in L^2(\Omega;L^2(0,T;\mathbb{R}^m)):\ \lim_{\varepsilon\to0^+}\int_0^T \frac{1}{\varepsilon}\int_t^{(t+\varepsilon)\wedge T} \mathbb{E}\big|D_t\varphi(s)-\nabla\varphi(t)\big|^2\,ds\,dt = 0 \bigg\}.
\]

For any $\varphi\in\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$, the process $\nabla\varphi$ above is uniquely determined up to $dt\times dP$-null sets.

When $\varphi$ is adapted, $D_s\varphi(t)=0$ for a.e. $s\in(t,T]$. In this case, only the values $D_t\varphi(s)$ with $s\ge t$ enter the above limit, and $\nabla\varphi(t)$ can be viewed as the trace $D_t\varphi(t^+)$ of the Malliavin derivative on the diagonal. Denote by $\mathbb{L}^{1,2}_{2,\mathbb{F}}(\mathbb{R}^m)$ the set of all adapted processes in $\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$.

Roughly speaking, an element $\varphi\in\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$ is a stochastic process whose Malliavin derivative $D_s\varphi(t)$ has suitable continuity in a neighbourhood of the diagonal $\{s=t\}$. Examples of such processes can be found in [17]. In particular, if $(s,t)\mapsto D_s\varphi(t)$ is continuous from $\{(s,t)\in[0,T]^2 : 0\le t-s\le\delta\}$ (for some $\delta>0$) to $L^2(\Omega;\mathbb{R}^m)$, then $\varphi\in\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$ and $\nabla\varphi(t)=D_t\varphi(t)$ for a.e. $t\in[0,T]$.
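As a simple concrete case (our illustration, consistent with the definitions above): take $m=d=1$ and $\varphi(t)=W(t)$. Then $D_s\varphi(t)=\chi_{[0,t]}(s)$, so $D_t\varphi(s)=1$ whenever $s\ge t$, and
\[
\frac{1}{\varepsilon}\int_t^{(t+\varepsilon)\wedge T} \mathbb{E}\big|D_t W(s) - 1\big|^2\,ds = 0 \qquad\text{for every } t\in[0,T),
\]
whence $W(\cdot)\in\mathbb{L}^{1,2}_{2,\mathbb{F}}(\mathbb{R})$ with $\nabla W(t)\equiv 1$.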

To end this section, we show the following technical result, which will be used in the sequel.

Lemma 2.1

Let $\varphi\in\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$. Then, there exists a sequence of positive numbers $\{\varepsilon_n\}_{n=1}^\infty$ such that $\varepsilon_n\to 0^+$ as $n\to\infty$ and

\[
\lim_{n\to\infty} \frac{1}{\varepsilon_n}\int_t^{(t+\varepsilon_n)\wedge T} \mathbb{E}\big|D_t\varphi(s)-\nabla\varphi(t)\big|^2\,ds = 0,\qquad \text{a.e. } t\in[0,T].
\tag{2.3}
\]

Proof. For any $\varphi\in\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$, we take the convention that $D_t\varphi(s)=0$ whenever $s\notin[0,T]$. From the definition of $\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$, it follows that

\[
\lim_{\varepsilon\to0^+}\int_0^T \frac{1}{\varepsilon}\int_t^{(t+\varepsilon)\wedge T} \mathbb{E}\big|D_t\varphi(s)-\nabla\varphi(t)\big|^2\,ds\,dt = 0.
\]

Since the integrand (in $t$) is nonnegative and converges to zero in $L^1(0,T)$, one can extract a sequence $\varepsilon_n\to 0^+$ along which it converges for a.e. $t\in[0,T]$, which implies (2.3).     □
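For orientation, Lemma 2.1 may be viewed as a Malliavin-calculus counterpart of the classical Lebesgue differentiation theorem:
\[
\lim_{\varepsilon\to 0^+}\frac{1}{\varepsilon}\int_t^{t+\varepsilon} g(s)\,ds = g(t)\qquad\text{for a.e. } t\in(0,T),\quad g\in L^1(0,T).
\]
Here the averaged quantity is $s\mapsto \mathbb{E}|D_t\varphi(s)-\nabla\varphi(t)|^2$ near the diagonal $s=t$, and the sequence $\{\varepsilon_n\}$ is needed because the convergence in the definition of $\mathbb{L}^{1,2}_{2}(\mathbb{R}^m)$ is only in $L^1(0,T)$ with respect to $t$.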

3 Second-order necessary conditions

In this section, we shall present several second-order necessary conditions for stochastic optimal controls.

To begin with, we assume the following two conditions:

  (C1) The control region $U\subset\mathbb{R}^m$ is nonempty, bounded, and convex.

  (C2) The functions $b:[0,T]\times\mathbb{R}^n\times U\times\Omega\to\mathbb{R}^n$, $\sigma=(\sigma^1,\dots,\sigma^d):[0,T]\times\mathbb{R}^n\times U\times\Omega\to\mathbb{R}^{n\times d}$, $f:[0,T]\times\mathbb{R}^n\times U\times\Omega\to\mathbb{R}$, and $h:\mathbb{R}^n\times\Omega\to\mathbb{R}$ satisfy the following:

    1. For any $(x,u)\in\mathbb{R}^n\times U$, the stochastic processes $b(\cdot,x,u)$ and $\sigma(\cdot,x,u)$ are $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable and $\mathbf{F}$-adapted. For a.e. $(t,\omega)\in[0,T]\times\Omega$, the functions $b(t,\cdot,\cdot)$ and $\sigma(t,\cdot,\cdot)$ are continuously differentiable up to order 2, and all of their partial derivatives are uniformly bounded (with respect to $(t,\omega)$). There exists a constant $C>0$ such that for a.e. $(t,\omega)\in[0,T]\times\Omega$ and for any $x\in\mathbb{R}^n$ and $u\in U$,
\[
|b(t,x,u)| + |\sigma(t,x,u)| \le C(1+|x|+|u|).
\]

    2. For any $(x,u)\in\mathbb{R}^n\times U$, the stochastic process $f(\cdot,x,u)$ is $\mathcal{B}([0,T])\otimes\mathcal{F}$-measurable and $\mathbf{F}$-adapted, and the random variable $h(x)$ is $\mathcal{F}_T$-measurable. For a.e. $(t,\omega)\in[0,T]\times\Omega$, the functions $f(t,\cdot,\cdot)$ and $h(\cdot)$ are continuously differentiable up to order 2, and for any $x\in\mathbb{R}^n$ and $u\in U$,
\[
\begin{cases}
|f(t,x,u)| + |h(x)| \le C(1+|x|^2),\\
|f_x(t,x,u)| + |f_u(t,x,u)| + |h_x(x)| \le C(1+|x|),\\
|f_{xx}(t,x,u)| + |f_{xu}(t,x,u)| + |f_{uu}(t,x,u)| + |h_{xx}(x)| \le C.
\end{cases}
\]

When condition (C2) is satisfied, the state $x(\cdot)$ (of (1.1)) is uniquely determined by any given initial datum $x_0\in\mathbb{R}^n$ and admissible control $u(\cdot)\in\mathcal{U}_{ad}$, and the cost functional (1.2) is well-defined on $\mathcal{U}_{ad}$. In what follows, $C$ represents a generic constant, depending on $T$ and $x_0$ but independent of any other parameter, which can be different from line to line.
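Although the developments below are purely theoretical, the objects defined so far are straightforward to simulate. The following is a minimal sketch (our illustration, not part of the paper) of a Monte Carlo evaluation of the cost functional (1.2) along an Euler–Maruyama discretization of (1.1) in the scalar case $n=m=d=1$; the arguments `b_fn`, `sigma_fn`, `f_fn`, `h_fn`, `u_fn` are hypothetical placeholders to be supplied by the user.

```python
import numpy as np

def cost_functional(b_fn, sigma_fn, f_fn, h_fn, u_fn,
                    x0=0.0, T=1.0, n_steps=200, n_paths=10_000, seed=0):
    """Monte Carlo estimate of J(u) = E[ int_0^T f(t, x, u) dt + h(x(T)) ],
    where dx = b(t, x, u) dt + sigma(t, x, u) dW (scalar case)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)              # all paths start from x0
    running_cost = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        u = u_fn(t, x)                    # feedback control evaluated pathwise
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        running_cost += f_fn(t, x, u) * dt
        x = x + b_fn(t, x, u) * dt + sigma_fn(t, x, u) * dW  # Euler-Maruyama step
    return np.mean(running_cost + h_fn(x))

# Example: the illustrative problem from Section 1 with the zero control.
if __name__ == "__main__":
    J = cost_functional(b_fn=lambda t, x, u: u,
                        sigma_fn=lambda t, x, u: u,
                        f_fn=lambda t, x, u: x**2,
                        h_fn=lambda x: 0.0 * x,
                        u_fn=lambda t, x: np.zeros_like(x))
    print(J)  # approximately 0, since x stays at x0 = 0 under u = 0
```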

3.1 Integral-type second-order conditions

Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be an optimal pair, and let $u(\cdot)\in\mathcal{U}_{ad}$ be any given admissible control. Let $\varepsilon\in(0,1)$, and write

\[
\delta u(\cdot) = u(\cdot)-\bar{u}(\cdot),\qquad u^\varepsilon(\cdot) = \bar{u}(\cdot)+\varepsilon\,\delta u(\cdot).
\tag{3.1}
\]

Since $U$ is convex, $u^\varepsilon(\cdot)\in\mathcal{U}_{ad}$. Denote by $x^\varepsilon(\cdot)$ the state of (1.1) with respect to the control $u^\varepsilon(\cdot)$, and put $\delta x^\varepsilon(\cdot)=x^\varepsilon(\cdot)-\bar{x}(\cdot)$. For $\psi=b,\ \sigma^1,\dots,\sigma^d,\ f$, denote

\[
\begin{aligned}
&\psi(t)=\psi(t,\bar{x}(t),\bar{u}(t)), &&\psi_x(t)=\psi_x(t,\bar{x}(t),\bar{u}(t)), &&\psi_u(t)=\psi_u(t,\bar{x}(t),\bar{u}(t)),\\
&\psi_{xx}(t)=\psi_{xx}(t,\bar{x}(t),\bar{u}(t)), &&\psi_{xu}(t)=\psi_{xu}(t,\bar{x}(t),\bar{u}(t)), &&\psi_{uu}(t)=\psi_{uu}(t,\bar{x}(t),\bar{u}(t)).
\end{aligned}
\]
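At least formally, the variational equations below are obtained by substituting the second-order ansatz (recorded here as a heuristic; Proposition 3.1 below makes it precise)
\[
x^\varepsilon(t) = \bar{x}(t) + \varepsilon\,y_1(t) + \varepsilon^2 y_2(t) + o(\varepsilon^2)
\]
into (1.1) with the control $u^\varepsilon(\cdot)$, Taylor-expanding $b$ and $\sigma$ around $(\bar{x}(t),\bar{u}(t))$, and matching the terms of order $\varepsilon$ and $\varepsilon^2$.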

First, similar to [4], we introduce the following two variational equations:

\[
\left\{
\begin{aligned}
dy_1(t) &= \big[b_x(t)y_1(t) + b_u(t)\delta u(t)\big]dt + \sum_{j=1}^d \big[\sigma_x^j(t)y_1(t) + \sigma_u^j(t)\delta u(t)\big]dW^j(t),\quad t\in[0,T],\\
y_1(0) &= 0,
\end{aligned}
\right.
\tag{3.2}
\]

and

\[
\left\{
\begin{aligned}
dy_2(t) &= \Big[b_x(t)y_2(t) + \tfrac12 b_{xx}(t)\big(y_1(t),y_1(t)\big) + b_{xu}(t)\big(y_1(t),\delta u(t)\big) + \tfrac12 b_{uu}(t)\big(\delta u(t),\delta u(t)\big)\Big]dt\\
&\quad + \sum_{j=1}^d \Big[\sigma_x^j(t)y_2(t) + \tfrac12 \sigma_{xx}^j(t)\big(y_1(t),y_1(t)\big) + \sigma_{xu}^j(t)\big(y_1(t),\delta u(t)\big) + \tfrac12 \sigma_{uu}^j(t)\big(\delta u(t),\delta u(t)\big)\Big]dW^j(t),\\
y_2(0) &= 0.
\end{aligned}
\right.
\tag{3.3}
\]

By (3.2)–(3.3), and arguing as in [4, Lemmas 3.5 and 3.11], one has the following estimates.

Proposition 3.1

Let (C2) hold. Then, for any $k\ge 1$,

\[
\sup_{t\in[0,T]}\mathbb{E}|\delta x^\varepsilon(t)|^{2k} = O(\varepsilon^{2k}),\qquad
\sup_{t\in[0,T]}\mathbb{E}\big|\delta x^\varepsilon(t)-\varepsilon y_1(t)\big|^{2k} = O(\varepsilon^{4k}),
\]
\[
\sup_{t\in[0,T]}\mathbb{E}\big|\delta x^\varepsilon(t)-\varepsilon y_1(t)-\varepsilon^2 y_2(t)\big|^{2k} = o(\varepsilon^{4k}),\qquad \text{as } \varepsilon\to 0^+.
\]

Proof. The proof is very close to that of [4, Lemmas 3.5 and 3.11] (standard applications of Hölder's, Gronwall's, and the Burkholder–Davis–Gundy inequalities to the equations satisfied by $\delta x^\varepsilon$, $\delta x^\varepsilon-\varepsilon y_1$ and $\delta x^\varepsilon-\varepsilon y_1-\varepsilon^2 y_2$), and therefore we omit the details.     □

Next, define the Hamiltonian

\[
H(t,x,u,p,q) = \langle p, b(t,x,u)\rangle + \sum_{j=1}^d \langle q^j, \sigma^j(t,x,u)\rangle - f(t,x,u),
\tag{3.4}
\]
for $(t,x,u,p,q)\in[0,T]\times\mathbb{R}^n\times U\times\mathbb{R}^n\times\mathbb{R}^{n\times d}$, where $q=(q^1,\dots,q^d)$.
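For later reference, the first- and second-order derivatives of $H$ in $u$ follow directly from (3.4) (spelled out here, under the sign convention adopted above, for the reader's convenience):
\[
H_u(t,x,u,p,q) = b_u(t,x,u)^\top p + \sum_{j=1}^d \sigma_u^j(t,x,u)^\top q^j - f_u(t,x,u),
\]
\[
H_{uu}(t,x,u,p,q) = \langle p, b_{uu}(t,x,u)\rangle + \sum_{j=1}^d \langle q^j, \sigma_{uu}^j(t,x,u)\rangle - f_{uu}(t,x,u),
\]
where $\langle p, b_{uu}\rangle := \sum_{i=1}^n p_i\,(b^i)_{uu}$ is understood componentwise.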

We introduce, respectively, the following two adjoint equations for (3.2)–(3.3):

\[
\left\{
\begin{aligned}
dP_1(t) &= -\Big[b_x(t)^\top P_1(t) + \sum_{j=1}^d \sigma_x^j(t)^\top Q_1^j(t) - f_x(t)\Big]dt + \sum_{j=1}^d Q_1^j(t)\,dW^j(t),\quad t\in[0,T],\\
P_1(T) &= -h_x(\bar{x}(T)),
\end{aligned}
\right.
\tag{3.5}
\]

and

\[
\left\{
\begin{aligned}
dP_2(t) &= -\Big[b_x(t)^\top P_2(t) + P_2(t)b_x(t) + \sum_{j=1}^d \sigma_x^j(t)^\top P_2(t)\sigma_x^j(t)\\
&\qquad\quad + \sum_{j=1}^d \big(\sigma_x^j(t)^\top Q_2^j(t) + Q_2^j(t)\sigma_x^j(t)\big) + H_{xx}(t)\Big]dt + \sum_{j=1}^d Q_2^j(t)\,dW^j(t),\quad t\in[0,T],\\
P_2(T) &= -h_{xx}(\bar{x}(T)),
\end{aligned}
\right.
\tag{3.6}
\]

where $H_{xx}(t) = H_{xx}(t,\bar{x}(t),\bar{u}(t),P_1(t),Q_1(t))$.

From [8], it is easy to check that the equation (3.5) admits a unique strong solution $(P_1(\cdot),Q_1(\cdot))\in L^2_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^n))\times \big(L^2_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))\big)^d$, and (3.6) admits a unique strong solution $(P_2(\cdot),Q_2(\cdot))\in L^2_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^{n\times n}))\times \big(L^2_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^{n\times n}))\big)^d$.

Also, we define

\[
\mathbb{M}(t) = H_{ux}(t) + b_u(t)^\top P_2(t) + \sum_{j=1}^d \sigma_u^j(t)^\top\big(P_2(t)\sigma_x^j(t) + Q_2^j(t)\big),
\tag{3.7}
\]

and denote

\[
\mathbb{S}(t) = H_{uu}(t) + \sum_{j=1}^d \sigma_u^j(t)^\top P_2(t)\,\sigma_u^j(t),
\tag{3.8}
\]

where $H_{ux}(t)$ and $H_{uu}(t)$ stand for $H_{ux}(t,\bar{x}(t),\bar{u}(t),P_1(t),Q_1(t))$ and $H_{uu}(t,\bar{x}(t),\bar{u}(t),P_1(t),Q_1(t))$, respectively.

We have the following result.

Proposition 3.2

Let (C1)–(C2) hold. Then, the following variational equality holds for any $\varepsilon\in(0,1)$:

\[
J(u^\varepsilon(\cdot)) - J(\bar{u}(\cdot)) = -\,\varepsilon\,\mathbb{E}\int_0^T \langle H_u(t), \delta u(t)\rangle\,dt
- \varepsilon^2\,\mathbb{E}\int_0^T \Big[\big\langle \mathbb{M}(t)y_1(t), \delta u(t)\big\rangle + \tfrac12\big\langle \mathbb{S}(t)\delta u(t), \delta u(t)\big\rangle\Big]dt + o(\varepsilon^2),
\]

as $\varepsilon\to 0^+$, where $H_u(t)=H_u(t,\bar{x}(t),\bar{u}(t),P_1(t),Q_1(t))$, and $y_1(\cdot)$ is the solution to (3.2).

Proof. By (3.1), using Taylor's formula and Proposition 3.1, and proceeding as in [4, Subsection 3.2], we have

\[
\begin{aligned}
J(u^\varepsilon(\cdot)) - J(\bar{u}(\cdot))
&= \varepsilon\,\mathbb{E}\Big[\int_0^T\big(\langle f_x(t),y_1(t)\rangle + \langle f_u(t),\delta u(t)\rangle\big)dt + \langle h_x(\bar{x}(T)), y_1(T)\rangle\Big]\\
&\quad + \varepsilon^2\,\mathbb{E}\Big[\int_0^T\Big(\langle f_x(t),y_2(t)\rangle + \tfrac12 f_{xx}(t)\big(y_1(t),y_1(t)\big) + f_{xu}(t)\big(y_1(t),\delta u(t)\big) + \tfrac12 f_{uu}(t)\big(\delta u(t),\delta u(t)\big)\Big)dt\\
&\qquad\qquad + \langle h_x(\bar{x}(T)), y_2(T)\rangle + \tfrac12\langle h_{xx}(\bar{x}(T))y_1(T), y_1(T)\rangle\Big] + o(\varepsilon^2).
\end{aligned}
\]

By Itô's formula, we have

\[
\mathbb{E}\langle P_1(T), y_1(T)\rangle = \mathbb{E}\int_0^T \Big[\langle f_x(t), y_1(t)\rangle + \Big\langle b_u(t)^\top P_1(t) + \sum_{j=1}^d \sigma_u^j(t)^\top Q_1^j(t), \delta u(t)\Big\rangle\Big]dt,
\]

and (noting that $P_1(T) = -h_x(\bar{x}(T))$ and