# Shadow price of information in discrete time stochastic optimization

## Abstract

The shadow price of information has played a central role in stochastic optimization ever since its introduction by Rockafellar and Wets in the mid-seventies. This article studies the concept in an extended formulation of the problem and gives relaxed sufficient conditions for its existence. We allow for general adapted decision strategies, which enables one to establish the existence of solutions and the absence of a duality gap e.g. in various problems of financial mathematics where the usual boundedness assumptions fail. As applications, we calculate conjugates and subdifferentials of integral functionals and conditional expectations of normal integrands. We also give a dual form of the general dynamic programming recursion that characterizes shadow prices of information.

Dedicated to R. T. Rockafellar on his 80th Birthday

## 1 Introduction

Let be a probability space with a filtration and consider the multistage stochastic optimization problem

(SP) |

where denotes the space of decision strategies adapted to the filtration, is a convex normal integrand on and denotes the associated integral functional on . Here and in what follows, and denotes the linear space of equivalence classes of -valued -measurable functions. As usual, two functions are equivalent if they are equal -almost surely. Throughout, we define the expectation of a measurable function as unless its positive part is integrable.
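The setup can be sketched as follows, in the standard notation of [19] (an assumed reconstruction for illustration; the probability space, filtration and dimensions below are not fixed by the text):

```latex
% Sketch of problem (SP); (\Omega,\mathcal{F},P) is a probability space
% with a filtration (\mathcal{F}_t)_{t=0}^T and n = n_0 + \dots + n_T.
\begin{equation*}
  \text{(SP)}\qquad \operatorname*{minimize}_{x\in\mathcal{N}}\quad
  Eh(x) := \int_\Omega h(x(\omega),\omega)\,dP(\omega),
\end{equation*}
where
\begin{equation*}
  \mathcal{N} := \bigl\{(x_t)_{t=0}^T \,\bigm|\,
  x_t \in L^0(\Omega,\mathcal{F}_t,P;\mathbb{R}^{n_t})\bigr\}
\end{equation*}
is the space of adapted decision strategies.
```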

Problems of the form (SP) have been extensively studied since their introduction in the mid-70s; see [16, 17, 19]. Despite its simple appearance, problem (SP) is a very general format of stochastic optimization. Indeed, various pointwise (almost sure) constraints can be incorporated in the objective by assigning the value when the constraints are violated. Several examples can be found in the above references. Applications to financial mathematics are given in [8, 10, 9]. Our formulation of problem (SP) extends its original formulations by allowing for general filtrations as well as general adapted strategies instead of bounded ones. This somewhat technical extension turns out to be quite convenient, e.g., in financial applications.

We will use the shorthand notation and define the function by

We assume throughout that is finite and that is proper on . Clearly is the optimum value of (SP) while, in general, gives the optimum value that can be achieved in combination with an essentially bounded nonadapted strategy . Note also that for all .

The space is in separating duality with under the bilinear form

A is said to be a shadow price of information for problem (SP) if it is a subgradient of at the origin, i.e., if
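Concretely, under standard notation (an assumed reconstruction, with $\varphi$ the value function defined above and $\mathcal{N}$ the space of adapted strategies):

```latex
\begin{equation*}
  \varphi(z) := \inf_{x\in\mathcal{N}} Eh(x+z), \qquad z\in L^\infty,
\end{equation*}
and $q\in L^1$ is a shadow price of information if $q\in\partial\varphi(0)$, i.e.,
\begin{equation*}
  \varphi(z) \ge \varphi(0) + \langle z,q\rangle
  \qquad\text{for all } z\in L^\infty.
\end{equation*}
```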

The following result, the proof of which is given in the appendix, shows that the shadow price of information has the same fundamental properties here as in Rockafellar and Wets [19] where the primal solutions were restricted to be essentially bounded. Here and in what follows, denotes the conjugate of defined for each as

The annihilator of will be denoted by .

###### Theorem 1.

We have . In particular, is a shadow price of information if and only if it solves the dual problem

and the optimum value equals . In this case, an is optimal if and only if and it minimizes the function almost surely.
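A sketch of the dual problem in Theorem 1, assuming the notation $\varphi$ for the value function, $\mathcal{N}^\perp$ for the annihilator and $Eh^*$ for the conjugate integral functional (our reconstruction, not the verbatim display):

```latex
\begin{equation*}
  \operatorname*{minimize}_{q\in\mathcal{N}^\perp}\quad Eh^*(q),
\end{equation*}
```

so that $q$ is a shadow price of information exactly when it solves this problem and the optimum value equals $-\varphi(0)$.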

The notion of a shadow price of information first appeared in a general single period model in Rockafellar [15, Example 6 in Section 10] and Rockafellar and Wets [18, Section 4]. The extension to finite discrete time was given in [19]. Continuous-time extensions have been studied in Wets [24], Back and Pliska [1], Davis [4] and Davis and Burstein [5] under various structural assumptions. The shadow price of information has been found useful in formulating dual problems and deriving optimality conditions in general parametric stochastic optimization problems; see e.g. [20, 1, 2]. The shadow price of information is also useful in subdifferential calculus involving conditional expectations; see [21] and Section 3.2 below. As a further application, we give a dual formulation of the general dynamic programming recursion from [19] and [6]; see Section 3.3.

The main result of this paper gives new, more general sufficient conditions for the existence of a shadow price of information for the discrete-time problem (SP). Its proof is obtained by extending the original argument of [19] and by relaxing some of the technical assumptions made there. As already noted, we do not require the decision strategies to be essentially bounded. This allows one to establish the existence of solutions and the absence of a duality gap, e.g., in various problems in financial mathematics; see [10, 11]. We also relax the assumptions made in [19] on the normal integrand .

We will denote the adapted projection of an by , that is, , where denotes the conditional expectation with respect to . We will also use the notation .

###### Assumption 1.

For every and every , there exists such that .

It was assumed in [19] (conditions C and D, respectively) that the sets are closed, uniformly bounded, and “nonanticipative” and that there exists a such that for all . The nonanticipativity means the projection mappings are -measurable for all . These conditions imply Assumption 1. Indeed, if , then almost surely and, by Jensen’s inequality, almost surely as well. By the measurable selection theorem (see [22, Corollary 14.6]), there exists a such that and almost surely. The uniform boundedness of implies that while the upper bound gives .

We will also use the following.

###### Assumption 2.

There exists such that, for every , there exists such that .

Assumption 2 holds, in particular, if for all . In the single-step case where , this latter condition coincides with Assumption 1. Assumption 2 is also implied by the strict feasibility assumption made in [19, Theorem 2]. Indeed, strict feasibility implies that contains an open ball so that .

In order to clarify the structure and the logic of its proof, we have split our main result into two statements of independent interest, Theorems 4 and 5 below. Combining them gives the following extension of [19, Theorem 2].

###### Theorem 2.

## 2 Existence of a shadow price of information

Our main results are derived by analyzing the auxiliary value function defined by

Here decision strategies are restricted to be essentially bounded, as in [19]. Clearly . Our strategy is to establish the existence of a subgradient of at the origin, much as in [19]. By the following simple lemma, this will then serve as a shadow price of information for the general problem (SP). Following [13], we denote the biconjugate of a function by .

###### Lemma 3.

We have . If is nonempty, then .

###### Proof.

The general idea in [19] was first to prove the existence of a subgradient for with respect to the pairing of with its Banach dual . This was then modified to get a subgradient with respect to the pairing of with . By [25], any can be expressed as where and is such that there is a decreasing sequence of sets such that and

for any that vanishes on . The representation is known as the Yosida–Hewitt decomposition of . In order to control the singular component , we have introduced Assumption 1.
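The decomposition can be sketched as follows, in standard notation (an assumed reconstruction; see [25]):

```latex
% Yosida--Hewitt decomposition of v in the Banach dual of L^\infty:
\begin{equation*}
  v = v^a + v^s, \qquad v^a \in L^1,
\end{equation*}
where the singular part $v^s$ comes with a decreasing sequence of sets
$A^\nu \in \mathcal{F}$ with $P(A^\nu)\searrow 0$ such that
\begin{equation*}
  \langle x, v^s\rangle = 0
  \quad\text{for every } x\in L^\infty \text{ vanishing on } A^\nu.
\end{equation*}
```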

Below, the strong topology will refer to the norm topology of .

###### Theorem 4.

Let Assumption 1 hold. If is proper and strongly closed at the origin, then is closed at the origin and . If is strongly subdifferentiable at the origin, then .

###### Proof.

By Lemma 3, the first claim holds as soon as , while the second holds if . Strong closedness of at the origin means that for every there is a such that , or equivalently,

which means that and

(1) |

Similarly, is strongly subdifferentiable at the origin iff and (1) holds with .

We will prove the existence of a which has and satisfies (1) with multiplied by . Similarly to the above, this means that is closed (if (1) holds with all ) or subdifferentiable (if ) at the origin with respect to the weak topology. The existence will be proved recursively by showing that if satisfies (1) and for (this holds for as noted above), then there exists a which satisfies (1) with multiplied by and for .

Thus, assume that for and let and be such that . Combined with (1) and noting that , we get

Let and let be as in Assumption 1. By Theorem 14 in the appendix,

(2) |

and

(3) |

Since and for by assumption, (3) means that

Each term in the sum can be written as , where denotes the adjoint of . Moreover, since , we have so, in the last term, . Thus, combining (3) and (2) gives

where

It is easily checked that we still have but now for every as desired. Since was arbitrary and , we see that satisfies (1) with multiplied by . This completes the proof since was arbitrary. ∎

The general idea of the above proof is from [19, Theorem 2] where the imposed assumptions guarantee the strong continuity of at the origin, which in turn guarantees subdifferentiability. The following two results give more general conditions under which the subdifferentiability holds.

###### Theorem 5.

Let Assumption 2 hold. If is strongly continuous at a point of relative to , then is strongly subdifferentiable at the origin.

###### Proof.

We may assume without loss of generality that there exist such that for all with . It is straightforward to check that and . Assumption 2 implies that if , then for some with . Indeed, each can be expressed as , where and , while Assumption 2 gives the existence of a such that . Setting , we have and as claimed.

Now, if is such that , then , so . Since is finite by assumption, this implies that is strongly continuous and thus subdifferentiable on ; see [15, Theorem 11]. By the Hahn–Banach theorem, relative subgradients on can be extended to subgradients on . ∎

If is closed, proper and convex with closed, then is continuous on , the relative strong interior of (recall that the relative interior of a set is defined as its interior with respect to its affine hull). Indeed, is a Banach space whenever it is closed, and then is strongly continuous relative to ; see e.g. [15, Corollary 8B].

The following result gives sufficient conditions for to be strongly closed and to be nonempty. Its proof, contained in the appendix, is obtained by modifying the proof of [14, Theorem 2] which required that almost surely. Recall that the set-valued mappings and are measurable; see [22, Proposition 14.8 and Exercise 14.12].

###### Theorem 6.

Assume that the set

is nonempty and contained in . Then is closed proper and convex, is closed and . In particular, is strongly continuous throughout relative to .

###### Remark 1.

Under the assumptions of Theorem 6, is subdifferentiable throughout . Indeed, the construction of in the proof shows that , since almost surely.

###### Example 1.

The extension of the integrability condition of [14, Theorem 2] in Theorem 6 is needed, for example, in problems of the form

where is a convex normal integrand such that for every , is a measurable matrix and is a measurable vector of appropriate dimensions such that the problem is feasible. Indeed, this fits the general format of (SP) with

so that and .

## 3 Calculating conjugates and subgradients

This section applies the results of the previous sections to calculate subdifferentials and conjugates of certain integral functionals and conditional expectations of normal integrands.

### 3.1 Integral functionals on

Let be a normal integrand and consider the associated integral functional with respect to the pairing . We assume throughout this section that .

If and , then for all , so

(4) |

The following theorem gives sufficient conditions for this to hold as an equality. We will use the convention that the subdifferential of a function at a point is empty unless the function is finite at the point.

###### Theorem 7.

Assume that is such that the function ,

is closed at the origin. Then

If is subdifferentiable at the origin, then the infimum is attained. If this holds for every , then

###### Proof.

To prove the conjugate formula, note first that . By the Fenchel inequality, we always have for all , so we may assume that is proper. In this case we have the expression ; see the proof of Lemma 3.

Assume now that is subdifferentiable at the origin for . Then the infimum in the expression for is attained and , so there is a such that , and thus . Clearly, . Thus, while the reverse inclusion is always valid by (4). ∎
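On a finite sample space, and in the simplest case where the supremum runs over all measurable strategies (no adaptedness constraint), the conjugate of an integral functional reduces to the expectation of the scenario-wise conjugate by the interchange rule. A minimal numerical sketch; the quadratic integrand and all data below are hypothetical:

```python
# Conjugate of an integral functional on a finite sample space.
# With h(x, w) = 0.5 * (x - c[w])**2, the scenario-wise conjugate is
# h*(v, w) = v * c[w] + 0.5 * v**2 and, by the interchange rule,
# (Eh)*(v) = sup_x { E[v x] - E[h(x)] } = E[h*(v)]
# when the supremum ranges over all measurable x.

probs = [0.4, 0.6]                      # P on Omega = {0, 1}
c = [1.0, -1.0]                         # hypothetical scenario data
v = [0.5, 0.2]                          # a fixed dual variable v(w)

grid = [-3.0 + 0.01 * i for i in range(601)]   # contains maximizers c + v

# Left side: maximize scenario-wise (a measurable x decouples over w).
lhs = sum(p * max(v[w] * x - 0.5 * (x - c[w]) ** 2 for x in grid)
          for w, p in enumerate(probs))

# Right side: expectation of the closed-form conjugate.
rhs = sum(p * (v[w] * c[w] + 0.5 * v[w] ** 2) for w, p in enumerate(probs))

print(abs(lhs - rhs) < 1e-9)  # True: the two values agree
```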

Combining the previous theorem with the results of Section 2, we obtain global conditions under which the subdifferential of coincides with the optional projection of the subdifferential of with respect to the pairing .

###### Corollary 8.

###### Proof.

Let . Since , we have . If , then is trivially closed at the origin. Assume now that . The assumed properties of imply that Assumptions 1 and 2 are satisfied by and that is continuous at a point of relative to . By Theorem 5 and Theorem 4, is subdifferentiable at the origin. If , Fenchel’s inequality implies . The assumptions of Theorem 7 are thus satisfied. ∎

### 3.2 Conditional expectation of a normal integrand

Results of the previous section allow for a simple proof of the interchange rule for subdifferentiation and conditional expectation of a normal integrand. Commutation of the two operations has been extensively studied ever since the introduction of the notion of a conditional expectation of a normal integrand in Bismut [3]; see Rockafellar and Wets [21], Truffert [23] and the references therein. The results of the previous section allow us to relax some of the continuity assumptions made in earlier works.

Given a sub-sigma-algebra , the -conditional expectation of a normal integrand is a -measurable normal integrand such that

for all such that either or . If , then the conditional expectation exists and is unique in the sense that if is another function with the above property, then almost surely; see e.g. [23, Theorem 2.1.2].
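On a finite probability space with the sub-sigma-algebra generated by a partition, the defining property can be checked directly: for a strategy constant on each partition cell, averaging the integrand over cells before or after composing does not change the expectation. A minimal sketch with hypothetical integrand and data:

```python
# Defining property of the G-conditional expectation of a normal
# integrand on a finite sample space. G is generated by the partition
# `cells`; (E^G h)(y, w) averages h(y, .) over the cell containing w.
# For a G-measurable strategy x (constant on each cell),
# E[(E^G h)(x)] = E[h(x)]. Integrand and data are hypothetical.

probs = [0.1, 0.3, 0.4, 0.2]        # P on Omega = {0, 1, 2, 3}
cells = [[0, 1], [2, 3]]            # partition generating G
c = [1.0, -2.0, 0.5, 3.0]           # scenario data in the integrand

def h(y, w):
    return (y - c[w]) ** 2 + abs(y)  # convex in y for each scenario

def cond_h(y, w):
    # (E^G h)(y, w): conditional average of h(y, .) given the cell of w.
    A = next(cell for cell in cells if w in cell)
    pA = sum(probs[u] for u in A)
    return sum(probs[u] / pA * h(y, u) for u in A)

x = [0.7, 0.7, -1.5, -1.5]          # G-measurable: constant on each cell

lhs = sum(probs[w] * cond_h(x[w], w) for w in range(4))
rhs = sum(probs[w] * h(x[w], w) for w in range(4))
print(abs(lhs - rhs) < 1e-12)       # True: the defining property holds
```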

The -conditional expectation of an -measurable set-valued mapping is a -measurable closed-valued mapping such that

The conditional expectation is well-defined and unique as soon as admits at least one integrable selection; see Hiai and Umegaki [7, Theorem 5.1].

The general form of “Jensen’s inequality” in the following lemma is from [23, Corollary 2.1.2]. We give a direct proof for completeness.

###### Lemma 9.

If is a convex normal integrand such that and , then

almost surely for all and

for every .

###### Proof.

Fenchel’s inequality and the assumption imply that is well defined for all . To prove the first claim, assume, for contradiction, that there is a and a set with on which the inequality is violated. Passing to a subset of if necessary, we may assume that and thus,

This cannot happen since, by Fenchel’s inequality

where the equality follows by applying the interchange rule in .

Given , we have

almost surely. Let so that is bounded. Since , Fenchel's inequality implies that is integrable. Taking conditional expectations,

so by the first part,

which means that almost surely on . This finishes the proof since was arbitrary. ∎

###### Remark 3.

If, in Lemma 9, is a normal -integrand, then the inequality can be written in the more familiar form .

The following gives conditions for the equalities in Lemma 9 to hold.

###### Theorem 10.

Let be a convex normal integrand such that and . If is such that the function ,

is subdifferentiable at the origin, then there is a such that and

If and the above holds for every , then

###### Proof.

Applying Theorem 7 with and gives the existence of a such that and

On the other hand, by definition, so , by [12, Theorem 2]. The first claim now follows from the fact that almost surely, by Lemma 9.

If , we have

By the first part, there is a such that and

It follows that

which, by Fenchel's inequality, implies so . Combining this with Lemma 9 completes the proof. ∎

Sufficient conditions for the subdifferentiability condition are again obtained from Theorems 5 and 6.

###### Corollary 11.

Let be a convex normal integrand such that , for all and is strongly continuous at a point of relative to . Then for every there is a such that and

Moreover,

for every .

###### Proof.

The above subdifferential formula was obtained in [21] while the expression for the conjugate was given in [23, Corollary 2.2.3]. Both assumed the stronger condition that be continuous at a point relative to all of . A more abstract condition (not requiring the relative continuity assumed here) for the subdifferential formula is given in the corollary in Section 2.2.2 of [23].

Let be a convex normal integrand. The -conditional expectation of the epigraphical mapping is also an epigraphical mapping of some normal integrand as soon as has an integrable selection; see [23, p. 136 and 140]. We denote by the normal integrand whose epigraphical mapping is the -conditional expectation of the epigraphical mapping of . We get from [23, Theorem 2.1.2 and Corollary 2.1.1.1] that

(5) |

whenever there exists and . Thus, the results of this section concerning can also be expressed in terms of .

### 3.3 Dynamic programming

Consider again problem (SP) and define extended real-valued functions by the recursion

(6) |
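One possible form of the recursion, in the notation of [19] and [10] (a hedged reconstruction, not the verbatim display): with $h_T := E^{\mathcal{F}_T} h$,

```latex
\begin{equation*}
  h_{t-1}(x^{t-1},\omega)
  := \Bigl(E^{\mathcal{F}_{t-1}}\Bigl[\inf_{x_t\in\mathbb{R}^{n_t}}
     h_t(x^{t-1},x_t,\cdot)\Bigr]\Bigr)(\omega),
  \qquad t = T,\dots,1,
\end{equation*}
```

where $x^{t-1} = (x_0,\dots,x_{t-1})$ and the conditional expectation is taken in the sense of normal integrands.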

This far-reaching generalization of the classical dynamic programming recursion for control systems was introduced in [19] and [6]. The following result from [10] relaxes the compactness assumptions made in [19] and [6]. In the context of financial mathematics, this allows for various extensions of certain fundamental results; see [10] for details.

###### Theorem 12 ([10]).

Assume that for an and that

is a linear space. The functions are then well-defined normal integrands and we have for every that

(7) |

Optimal solutions exist and they are characterized by the condition

which is equivalent to having equalities in (7).
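The recursion can be illustrated numerically on a two-stage instance with a finite sample space: minimize the integrand over the last-stage decision scenario-wise, take the (here trivial) conditional expectation at time 0, then minimize over the first-stage decision; the value must agree with a brute-force search over adapted strategies. The cost function and data below are hypothetical:

```python
# Two-stage dynamic programming on a finite sample space. Stage-0
# decision x0 is deterministic; stage-1 decision x1 may depend on the
# scenario w revealed at time 1. The backward recursion value is
# compared against brute force over all adapted strategies.
import itertools

probs = [0.3, 0.7]                      # P on Omega = {0, 1}
b = [1.0, -1.0]                         # scenario data revealed at t = 1

def h(x0, x1, w):
    return (x0 - 0.5) ** 2 + (x1 - b[w]) ** 2 + 0.1 * x0 * x1

grid = [-2.0 + 0.1 * i for i in range(41)]

# Backward recursion: h0(x0) = E[ inf_{x1} h(x0, x1, .) ].
def h0(x0):
    return sum(p * min(h(x0, x1, w) for x1 in grid)
               for w, p in enumerate(probs))

dp_value = min(h0(x0) for x0 in grid)

# Brute force: deterministic x0 plus one x1 value per scenario.
bf_value = min(
    probs[0] * h(x0, y0, 0) + probs[1] * h(x0, y1, 1)
    for x0, y0, y1 in itertools.product(grid, repeat=3)
)

print(abs(dp_value - bf_value) < 1e-12)  # True: the values coincide
```

The equality holds because, for each fixed first-stage decision, the minimization over the last-stage decision decouples scenario by scenario, which is exactly what the recursion exploits.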

Consider now the dual problem

from Theorem 1. We know that the optimum dual value is at least and that if the values are equal, the shadow prices of information are exactly the dual solutions. Note also that when the functions and in the dynamic programming equations are well-defined, their conjugates solve the dual dynamic programming equations

(8) |

Much like Theorem 12 characterizes optimal primal solutions in terms of the dynamic programming equations (6), the following result characterizes optimal dual solutions in terms of the dual recursion (8).

###### Theorem 13.

Assume that the dual problem is proper and that there is a feasible for the primal problem. Then the dual dynamic programming equations are well-defined and we have for every that

(9) |

In the absence of a duality gap, optimal dual solutions are characterized by having equalities in (9) while and are primal and dual optimal, respectively, if and only if , and

which is equivalent to having

###### Proof.

Let be feasible for the dual problem. We first show inductively that and which implies, in particular, that each is well-defined. For , this is trivial. Assume that the claim holds for some . Then, for every , we have