Multi-Objective Linear Quadratic Team Optimization

Ather Gattami

A. Gattami is with the Electrical Engineering School, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden. gattami@kth.se

This work was supported by the Swedish Research Council.
Abstract

In this paper, we consider linear quadratic team problems with an arbitrary number of quadratic constraints in both stochastic and deterministic settings. The team consists of players with different measurements of the state of nature. The objective of the team is to minimize a quadratic cost subject to an additional finite number of quadratic constraints. We first consider the Gaussian case, where the state of nature is assumed to have a Gaussian distribution, and show that linear decisions are optimal and can be found by solving a semidefinite program. We then consider the problem of minimizing a quadratic objective for the worst-case scenario, subject to an arbitrary number of deterministic quadratic constraints. We show that the optimal linear decisions can be found by solving a semidefinite program.

Key words. Team Decision Theory, Game Theory, Convex Optimization.

AMS subject classifications. 99J04, 49K04

1 Introduction

We consider the problem of distributed decision making with information constraints in a linear quadratic setting. Information constraints appear naturally, for instance, when making decisions over networks. These problems can be formulated as team problems. The team problem is an optimization problem with several decision makers possessing different information and aiming to optimize a common objective. Early results in [11] considered static team theory in stochastic settings, and a more general framework was introduced by Radner [12], where existence and uniqueness of solutions were shown. Connections to dynamic team problems for control purposes were introduced in [9]. In [4], the team problem with two team members was solved. The solution cannot easily be extended to more than two players, since it uses the fact that the two members have common information, a property that does not necessarily hold for more than two players. Also, a nonlinear team problem with two team members was considered in [2], where one of the team members is assumed to have full information whereas the other member has only access to partial information about the state of the world. Related team problems with an exponential cost criterion were considered in [10]. Optimizing team problems over affine decisions with a minimax quadratic cost was shown to be equivalent to stochastic team problems with exponential cost; see [5]. The connection is not clear when the optimization is carried out over nonlinear decision functions. The deterministic version (minimizing the worst-case scenario) of the linear quadratic team decision problem was solved in [8].

In this paper, we will consider both Gaussian and deterministic settings (worst-case scenario) for team decision problems under additional quadratic constraints. It is well known that additional constraints, although convex, can give rise to complex optimization problems when the optimization variables are functions (as opposed to real numbers). For instance, linear functions, that is, functions of the form $\mu(y) = Ky$ where $K$ is a real matrix, are no longer optimal. We will illustrate this fact by the following example:

Example 1

For , we want to minimize the objective function

subject to

Some Hilbert space theory shows that the optimal is given by

and

Obviously, the optimal decision is a nonlinear function of the measurement.

Increasing the dimension of the problem, and adding constraints on the structure of the decision function, certainly makes the constrained optimization more complicated. The example above shows that, in spite of the optimization being convex and carried out over a Hilbert space, the optimal decision function is nonlinear. However, we show in the upcoming sections that multi-objective problems behave nicely when considering the expected values of the objectives in the Gaussian case, in the sense that linear decisions are optimal. For the deterministic counterpart, which is not an optimization problem over a Hilbert space, we show how to find the optimal linear decisions by semidefinite programming. However, the optimality of linear decisions remains an open question.

2 Notation

The following list gives the notation we are going to use throughout the text:

$\mathcal{S}^n$: the set of $n \times n$ symmetric matrices.
$\mathcal{S}^n_+$: the set of $n \times n$ symmetric positive semidefinite matrices.
$\mathcal{S}^n_{++}$: the set of $n \times n$ symmetric positive definite matrices.
$\mathcal{M}$: the set of measurable functions.
$\mathcal{M}(p, m)$: the set of measurable functions $\mu = (\mu_1, \dots, \mu_N)$ with $\mu_i : \mathbb{R}^{p_i} \rightarrow \mathbb{R}^{m_i}$, $i = 1, \dots, N$.
$[A]_{ij}$: the element of $A$ in position $(i, j)$.
$A \otimes B$: the Kronecker product of the matrices $A$ and $B$.
$\mathbf{Tr}\, A$: the trace of the matrix $A$.
$\mathcal{N}(m, X)$: the set of Gaussian variables with mean $m$ and covariance $X$.

3 Linear Quadratic Gaussian Team Theory

In this section we review some classical results in stochastic team theory, with new and simpler proofs for the linear quadratic case, which first appeared in [6] and [7].

In the static team decision problem, one would like to solve

$$\begin{aligned} &\underset{\mu}{\text{minimize}} && \mathbf{E} \begin{bmatrix} x \\ u \end{bmatrix}^T Q \begin{bmatrix} x \\ u \end{bmatrix} \\ &\text{subject to} && y_i = C_i x + v_i, \\ &&& u_i = \mu_i(y_i), \quad i = 1, \dots, N. \end{aligned} \tag{3.1}$$

Here, $x$ and $v$ are independent Gaussian variables taking values in $\mathbb{R}^n$ and $\mathbb{R}^p$, respectively, with $x \in \mathcal{N}(0, V_x)$ and $v \in \mathcal{N}(0, V_v)$. Also, $y$ and $u$ will be stochastic variables taking values in $\mathbb{R}^p$ and $\mathbb{R}^m$, respectively, with $u = \mu(y)$. We assume that

$$Q = \begin{bmatrix} Q_{xx} & Q_{xu} \\ Q_{ux} & Q_{uu} \end{bmatrix} \succeq 0 \tag{3.2}$$

and $Q_{uu} \succ 0$, $u_i \in \mathbb{R}^{m_i}$.

If full state information about $x$ is available to each decision maker $i$, the minimizing $u$ can be found easily by completion of squares. It is given by $u = Lx$, where $L$ is the solution to

$$Q_{uu} L = -Q_{ux}.$$

Then, the cost function in (3.1) can be rewritten as

$$\mathbf{E}\, x^T \left( Q_{xx} - Q_{xu} Q_{uu}^{-1} Q_{ux} \right) x + \mathbf{E}\, (u - Lx)^T Q_{uu} (u - Lx). \tag{3.3}$$

Minimizing the cost function (3.3) is equivalent to minimizing

$$\mathbf{E}\, (u - Lx)^T Q_{uu} (u - Lx),$$

since nothing can be done about the first term (the cost when $u$ has full information).
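To make the completion-of-squares step concrete, here is a minimal numerical sketch in Python. The partition of $Q$ and the gain $L = -Q_{uu}^{-1} Q_{ux}$ follow the reconstruction above; the numerical data are hypothetical placeholders.

```python
import numpy as np

# Hypothetical problem data: a 2-dimensional state and 2 decisions.
rng = np.random.default_rng(0)
n, m = 2, 2
A = rng.standard_normal((n + m, n + m))
Q = A @ A.T                                  # Q >= 0 (generically Q > 0 here)
Qxx, Qxu = Q[:n, :n], Q[:n, n:]
Qux, Quu = Q[n:, :n], Q[n:, n:]

# Full-information team decision u = L x by completion of squares.
L = -np.linalg.solve(Quu, Qux)

# The cost splits as E x'(Qxx - Qxu Quu^{-1} Qux)x + E (u - Lx)' Quu (u - Lx);
# with u = Lx the second term vanishes, leaving the irreducible part.
Vx = np.eye(n)                               # covariance of x (hypothetical)
irreducible = np.trace((Qxx - Qxu @ np.linalg.solve(Quu, Qux)) @ Vx)
print("cost with full information:", irreducible)
```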

The next theorem is due to Radner [12], but we give a different formulation and a simpler proof that relies on the structure of the linear quadratic Gaussian setting:

Theorem 1

Let $x$ and $y$ be Gaussian variables with zero mean, taking values in $\mathbb{R}^n$ and $\mathbb{R}^p$, respectively. Also, let $u$ be a stochastic variable taking values in $\mathbb{R}^m$, with $u_i = \mu_i(y_i) \in \mathbb{R}^{m_i}$, $L \in \mathbb{R}^{m \times n}$, and $R \in \mathcal{S}^m_{++}$, for $i = 1, \dots, N$. Then, the optimal decision for the optimization problem

$$\begin{aligned} &\underset{\mu}{\text{minimize}} && \mathbf{E}\, (u - Lx)^T R (u - Lx) \\ &\text{subject to} && u_i = \mu_i(y_i), \quad i = 1, \dots, N, \end{aligned} \tag{3.4}$$

is unique and linear in $y$.

Proof. Let $\mathcal{L}$ be the linear space of functions such that $\mu \in \mathcal{L}$ if $\mu(y)$ is a linear transformation of $y$, that is, $\mu(y) = Ky$ for some real matrix $K$. Since $\mathbf{E}\|\mu(y)\|^2 < \infty$, $\mathcal{L}$ is a linear space under the inner product

$$\langle \mu, \nu \rangle = \mathbf{E}\, \mu(y)^T R\, \nu(y)$$

and norm

$$\|\mu\| = \langle \mu, \mu \rangle^{1/2}.$$

The optimization problem in (3.4), where we search for the optimal linear decision, can be written as

$$\min_{\mu \in \mathcal{L}} \; \|\mu - Lx\|^2. \tag{3.5}$$

Finding the best linear decision in the above problem is equivalent to finding the shortest distance from the subspace $\mathcal{L}$ to the element $Lx$, where the minimizing $\mu^\star$ is the projection of $Lx$ on $\mathcal{L}$, and hence unique. Also, since $\mu^\star$ is the projection, we have

$$\langle \mu^\star - Lx, \eta \rangle = 0$$

for all $\eta \in \mathcal{L}$. In particular, for $\eta(y) = Ky$, we have

$$\mathbf{E}\, (\mu^\star(y) - Lx)^T R K y = 0$$

for all real matrices $K$. The Gaussian assumption implies that $\mu^\star(y) - Lx$ is independent of $Ky$, for all linear transformations $K$. This gives in turn that $\mu^\star(y) - Lx$ is independent of $y$. Hence, for any decision $\mu$, linear or nonlinear, we have that

$$\mathbf{E}\, (\mu(y) - \mu^\star(y))^T R\, (\mu^\star(y) - Lx) = 0$$

and

$$\mathbf{E}\, (\mu(y) - Lx)^T R\, (\mu(y) - Lx) \geq \mathbf{E}\, (\mu^\star(y) - Lx)^T R\, (\mu^\star(y) - Lx),$$

with equality if and only if $\mu = \mu^\star$. This concludes the proof.
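The orthogonality step in the proof can be checked numerically. The following sketch uses hypothetical scalar data ($x \in \mathcal{N}(0,1)$, $v \in \mathcal{N}(0, 0.5)$, $L = 1$, $R = 1$) and verifies that the residual of the best linear decision is uncorrelated with the measurement.

```python
import numpy as np

# Monte Carlo check of the key step in the proof, on hypothetical scalar data:
# with x ~ N(0,1), v ~ N(0,0.5), y = x + v, L = 1, the projection of Lx onto
# linear functions of y is mu*(y) = k*y with k = E[xy]/E[y^2].
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
v = np.sqrt(0.5) * rng.standard_normal(1_000_000)
y = x + v
k = np.mean(x * y) / np.mean(y * y)
residual = k * y - x                    # mu*(y) - Lx with L = 1
print("E[residual * y] =", np.mean(residual * y))   # ~0: orthogonality
# For jointly Gaussian variables, zero correlation implies independence,
# which is what makes mu* optimal among all (nonlinear) decisions.
```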
    

Proposition 1

Let $x$ and $v$ be independent Gaussian variables taking values in $\mathbb{R}^n$ and $\mathbb{R}^p$, respectively, with $x \in \mathcal{N}(0, V_x)$ and $v \in \mathcal{N}(0, V_v)$. Also, let $y$ be a stochastic variable taking values in $\mathbb{R}^p$, with $y_i = C_i x + v_i$, $u_i = K_i y_i \in \mathbb{R}^{m_i}$, for $i = 1, \dots, N$, $L \in \mathbb{R}^{m \times n}$, and $R \in \mathcal{S}^m_{++}$. Set $u = Ky$. Then, the optimal solution to the optimization problem

$$\begin{aligned} &\underset{\{K_i\}}{\text{minimize}} && \mathbf{E}\, (u - Lx)^T R (u - Lx) \\ &\text{subject to} && u_i = K_i y_i, \quad i = 1, \dots, N, \end{aligned} \tag{3.6}$$

is the solution of the linear system of equations

$$\sum_{j=1}^{N} R_{ij} K_j\, \mathbf{E}[y_j y_i^T] = [RL]_i\, \mathbf{E}[x y_i^T], \quad i = 1, \dots, N. \tag{3.7}$$

Proof. Let $W = \mathbf{E}[y y^T]$ and $G = \mathbf{E}[x y^T]$. The problem of finding the optimal linear feedback law can be written as

$$\begin{aligned} &\underset{K}{\text{minimize}} && \mathbf{E}\, (Ky - Lx)^T R (Ky - Lx) \\ &\text{subject to} && u_i = K_i y_i, \quad i = 1, \dots, N. \end{aligned} \tag{3.8}$$

Now

$$\mathbf{E}\, (Ky - Lx)^T R (Ky - Lx) = \mathbf{Tr}\left( K^T R K W \right) - 2\, \mathbf{Tr}\left( K^T R L G \right) + \mathbf{Tr}\left( L^T R L V_x \right). \tag{3.9}$$

A minimizing $K$ is obtained by solving $\partial J / \partial K_i = 0$ for $i = 1, \dots, N$:

$$\mathbf{E}\left[ \left( R (Ky - Lx) \right)_i y_i^T \right] = 0. \tag{3.10}$$

Since $y_i = C_i x + v_i$, we get that

$$\mathbf{E}[y_j y_i^T] = C_j V_x C_i^T + \mathbf{E}[v_j v_i^T], \qquad \mathbf{E}[x y_i^T] = V_x C_i^T,$$

and the equality in (3.10) is equivalent to the linear system (3.7), and the proof is complete.
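For a concrete computation, the sketch below assembles and solves the normal equations in the form reconstructed in (3.7), for a hypothetical two-player problem with scalar decisions; all data are made up for illustration.

```python
import numpy as np

# Hypothetical two-player data: scalar decisions u_i = k_i * y_i,
# measurements y_i = C_i x + v_i of a 2-dimensional Gaussian state x.
Vx = np.array([[2.0, 0.5], [0.5, 1.0]])   # covariance of x
Vv = np.diag([0.1, 0.2])                  # covariance of v (independent of x)
C = np.eye(2)                             # C_i = i-th row
R = np.array([[2.0, 1.0], [1.0, 3.0]])    # R > 0
L = np.array([[1.0, -1.0], [0.5, 1.0]])   # full-information gain u = L x

Eyy = C @ Vx @ C.T + Vv                   # E[y_j y_i]
Exy = Vx @ C.T                            # E[x y_i] (columns)

# Normal equations (3.7): sum_j R_ij k_j E[y_j y_i] = [R L]_i E[x y_i], i = 1, 2.
A = R * Eyy.T                             # A[i, j] = R_ij * E[y_j y_i]
b = np.array([(R @ L)[i, :] @ Exy[:, i] for i in range(2)])
k = np.linalg.solve(A, b)
print("optimal team gains:", k)
```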

In general, separation does not hold for the static team problem when constraints are imposed on the information available to each decision maker. That is, the optimal decision is not obtained by combining the optimal full-information decision $u = Lx$ with each decision maker's optimal estimate of $x$. We illustrate this with the following example.

Example 2

Consider the team problem

minimize
subject to

The data we will consider is:

The best decision with full information is given by

The optimal estimate of of decision maker 1 is

and of decision maker 2

Hence, the decision where each decision maker combines the best deterministic decision with her best estimate of is given by

for . This policy gives a cost equal to . However, solving the team problem yields , and hence the optimal team decision is given by

The cost obtained from the team problem is . Clearly, separation does not hold in team decision problems.
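The failure of separation can also be reproduced numerically. The following sketch compares the cost of the estimate-then-act policy with the cost of the team-optimal linear gains. The data are hypothetical (they continue the two-player setup of the previous sketch, not the data of the example above).

```python
import numpy as np

# Hypothetical two-player setup, as in the previous sketch.
Vx = np.array([[2.0, 0.5], [0.5, 1.0]])
Vv = np.diag([0.1, 0.2])
C = np.eye(2)
R = np.array([[2.0, 1.0], [1.0, 3.0]])
L = np.array([[1.0, -1.0], [0.5, 1.0]])

Eyy = C @ Vx @ C.T + Vv
Exy = Vx @ C.T

def cost(k):
    """E (u - Lx)' R (u - Lx) for the decisions u_i = k_i y_i."""
    K = np.diag(k)
    return (np.trace(K.T @ R @ K @ Eyy)
            - 2 * np.trace(K.T @ R @ L @ Exy)
            + np.trace(L.T @ R @ L @ Vx))

# Separation policy: u_i = [L xhat_i]_i with xhat_i = E[x | y_i].
k_sep = np.array([L[i, :] @ Exy[:, i] / Eyy[i, i] for i in range(2)])

# Team-optimal gains from the normal equations (3.7).
A = R * Eyy.T
b = np.array([(R @ L)[i, :] @ Exy[:, i] for i in range(2)])
k_team = np.linalg.solve(A, b)

print("separation cost:", cost(k_sep), " team cost:", cost(k_team))
```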

4 Team Decision Problems with Power Constraints

Consider the modified version of the optimization problem (3.1):

$$\begin{aligned} &\underset{\mu}{\text{minimize}} && \mathbf{E} \begin{bmatrix} x \\ u \end{bmatrix}^T Q \begin{bmatrix} x \\ u \end{bmatrix} \\ &\text{subject to} && u_i = \mu_i(y_i), \\ &&& \mathbf{E}\, \|u_i\|^2 \leq \gamma_i, \quad i = 1, \dots, N. \end{aligned} \tag{4.1}$$

The difference from Radner's original formulation is that we have added power constraints on the decision functions, $\mathbf{E}\, \|u_i\|^2 \leq \gamma_i$.

In optimization (minimization) problems, the value is defined to be infinite if there does not exist any feasible decision variable satisfying the constraints. Therefore, one usually assumes that there is a feasible point, and hence that the value is finite. Existence conditions are usually hard to derive, even for convex problems. In practice, one runs the algorithm and either obtains a finite number or the algorithm runs indefinitely. Conditions under which one can decide whether the problem is feasible are of course of great interest; this is a nontrivial problem that is outside the scope of this paper.

In the sequel, we will prove a more general theorem, where we consider power constraints on a set of quadratic forms in both the state and the decision function.

Theorem 2

Let $x$ be a Gaussian variable with zero mean and given covariance matrix $V_x$, taking values in $\mathbb{R}^n$. Also, let $z = (x, u)$, $u_i = \mu_i(y_i) \in \mathbb{R}^{m_i}$, $y_i = C_i x$, $Q_k \in \mathcal{S}^{n+m}_+$, and $b_k > 0$, for $k = 1, \dots, K$ and $i = 1, \dots, N$. Assume that the optimization problem

$$\begin{aligned} &\underset{\mu}{\text{minimize}} && \mathbf{E}\, z^T Q_0 z \\ &\text{subject to} && \mathbf{E}\, z^T Q_k z \leq b_k, \quad k = 1, \dots, K, \\ &&& u_i = \mu_i(y_i), \quad i = 1, \dots, N, \end{aligned} \tag{4.2}$$

is feasible. Then, linear decisions given by $u_i = K_i y_i$, with $K_i \in \mathbb{R}^{m_i \times p_i}$, are optimal.

Proof. Consider the expression

$$\sup_{\lambda_1, \dots, \lambda_K \geq 0}\; \mathbf{E}\, z^T Q_0 z + \sum_{k=1}^{K} \lambda_k \left( \mathbf{E}\, z^T Q_k z - b_k \right).$$

Suppose that the expectation of the quadratic form with index $k$ is larger than $b_k$. Then, $\lambda_k \rightarrow \infty$ makes the value of the expression above infinite. On the other hand, if the expectation of the quadratic form with index $k$ is smaller than $b_k$, then the maximizer $\lambda_k = 0$ is optimal for that term.

Now let $p^\star$ be the optimal value of the optimization problem (4.2), and consider the objective function above. The objective is nonnegative, since the relevant block is the Schur complement in a positive semidefinite matrix. A necessary condition for the objective function to be zero is then that the decision is linear, and so $\mu$ must be linear (in order for $u$ to have the required structure, $\mu$ must also satisfy the information constraints).

Now assume that . We have

(4.3)

Now introduce and the matrix

and consider the minimax problem

(4.4)

Note that a maximizing must be positive, since implies that , while gives . We can always recover the optimal solutions of (4.3) from those of (4.4) by dividing all variables by , that is , , and . Now we have the obvious inequality

For any fixed values of , we have , so Theorem 1 gives the equality

where the minimizing is unique. Thus,

The objective function is radially unbounded in , since . Hence, the optimization can be restricted to a compact subset of . Thus,

where the first equality is obtained by applying the Proposition in the Appendix, the second inequality follows from the fact that the set of linear decisions , , is a subset of , and the second equality follows from the definition of . Hence, linear decisions are optimal, and the proof is complete.

Remark: Although Theorem 2 is stated and proved for noiseless state measurements $y_i = C_i x$, it extends easily to the case of noisy measurements $y_i = C_i x + v_i$ for any matrix $C_i$, which is often the case in applications.

5 Computation of The Optimal Team Decisions

The optimization problem that we would like to solve when assuming linear decisions is

$$\begin{aligned} &\underset{K}{\text{minimize}} && \mathbf{E}\, z^T Q_0 z \\ &\text{subject to} && \mathbf{E}\, z^T Q_k z \leq b_k, \quad k = 1, \dots, K, \\ &&& z = \begin{bmatrix} x \\ u \end{bmatrix}, \quad u = Kx, \end{aligned} \tag{5.1}$$

where $K$ satisfies the information structure constraints. Note that we can write the constraints as

$$\mathbf{E}\, z^T Q_k z = \mathbf{Tr}\left( \begin{bmatrix} I \\ K \end{bmatrix}^T Q_k \begin{bmatrix} I \\ K \end{bmatrix} V_x \right) \leq b_k, \tag{5.2}$$

where we used that $\mathbf{E}[x x^T] = V_x$. Hence, we obtain a set of convex quadratic inequalities in $K$ (convex since $Q_k \succeq 0$ for all $k$).

There are many existing computational methods to solve convex quadratic optimization problems (see [3]).
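As one possible computational route, the sketch below solves a problem of the form (5.1)-(5.2), in the form reconstructed above, with the convex modeling package cvxpy. The problem data and the information structure imposed on $K$ are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: 2 states, 2 decisions, one power-type constraint.
n, m = 2, 2
Vx = np.array([[2.0, 0.5], [0.5, 1.0]])
A0 = np.random.default_rng(1).standard_normal((n + m, n + m))
Q0 = A0 @ A0.T                                   # objective weight, Q0 >= 0
Q1 = np.block([[np.zeros((n, n)), np.zeros((n, m))],
               [np.zeros((m, n)), np.eye(m)]])   # encodes E||u||^2 <= b1
b1 = 1.0

K = cp.Variable((m, n))
K_structure = [K[0, 1] == 0, K[1, 0] == 0]       # hypothetical information structure

Z = cp.vstack([np.eye(n), K])                    # z = [x; Kx] = Z x
Vx_half = np.linalg.cholesky(Vx)

def expect_quad(Qk):
    # E z'Q_k z = Tr(Z' Q_k Z Vx) = ||Q_k^{1/2} Z Vx^{1/2}||_F^2, convex in K.
    Qh = np.linalg.cholesky(Qk + 1e-9 * np.eye(n + m))
    return cp.sum_squares(Qh.T @ Z @ Vx_half)

prob = cp.Problem(cp.Minimize(expect_quad(Q0)),
                  K_structure + [expect_quad(Q1) <= b1])
prob.solve()
print("optimal cost:", prob.value, "\nK =", K.value)
```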

Alternatively, we can formulate the optimization problem as a set of linear matrix inequalities as follows. For simplicity, we will assume that $Q_k \succ 0$ for all $k$ (the case $Q_k \succeq 0$ is analogous, with some technical conditions).

Theorem 3

The team optimization problem (5.1) is equivalent to the semidefinite program

$$\begin{aligned} &\underset{K, W_0, \dots, W_K}{\text{minimize}} && \mathbf{Tr}(W_0 V_x) \\ &\text{subject to} && \begin{bmatrix} W_k & \begin{bmatrix} I \\ K \end{bmatrix}^T \\ \begin{bmatrix} I \\ K \end{bmatrix} & Q_k^{-1} \end{bmatrix} \succeq 0, \quad k = 0, 1, \dots, K, \\ &&& \mathbf{Tr}(W_k V_x) \leq b_k, \quad k = 1, \dots, K. \end{aligned} \tag{5.3}$$

Proof. Introduce the slack matrices $W_k \in \mathcal{S}^n$, and write the given constraints as

$$\mathbf{Tr}(W_k V_x) \leq b_k, \qquad \begin{bmatrix} I \\ K \end{bmatrix}^T Q_k \begin{bmatrix} I \\ K \end{bmatrix} \preceq W_k. \tag{5.4}$$

Now we have that

$$\mathbf{Tr}\left( \begin{bmatrix} I \\ K \end{bmatrix}^T Q_k \begin{bmatrix} I \\ K \end{bmatrix} V_x \right) \leq \mathbf{Tr}(W_k V_x) \leq b_k. \tag{5.5}$$

Since $Q_k \succ 0$, the quadratic inequality above can be transformed into a linear matrix inequality using the Schur complement ([3]), which is given by

$$\begin{bmatrix} W_k & \begin{bmatrix} I \\ K \end{bmatrix}^T \\ \begin{bmatrix} I \\ K \end{bmatrix} & Q_k^{-1} \end{bmatrix} \succeq 0.$$

Hence, our optimization problem to be solved is given by

$$\begin{aligned} &\underset{K, W_0, \dots, W_K}{\text{minimize}} && \mathbf{Tr}(W_0 V_x) \\ &\text{subject to} && \begin{bmatrix} W_k & \begin{bmatrix} I \\ K \end{bmatrix}^T \\ \begin{bmatrix} I \\ K \end{bmatrix} & Q_k^{-1} \end{bmatrix} \succeq 0, \quad k = 0, 1, \dots, K, \\ &&& \mathbf{Tr}(W_k V_x) \leq b_k, \quad k = 1, \dots, K, \end{aligned} \tag{5.6}$$

which proves our theorem.
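The following sketch implements the semidefinite program in the form reconstructed above with cvxpy; again, all data are hypothetical, and the LMI is the Schur complement form from the proof.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data as in the previous sketch; here Q0, Q1 > 0.
n, m = 2, 2
Vx = np.array([[2.0, 0.5], [0.5, 1.0]])
rng = np.random.default_rng(1)

def random_pd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + 0.1 * np.eye(d)

Q = [random_pd(n + m), random_pd(n + m)]   # Q[0] objective, Q[1] constraint
b = [None, 1.0]

K = cp.Variable((m, n))
W = [cp.Variable((n, n), symmetric=True) for _ in Q]
Z = cp.vstack([np.eye(n), K])

constraints = [K[0, 1] == 0, K[1, 0] == 0]   # hypothetical information structure
for k, Qk in enumerate(Q):
    Qk_inv = np.linalg.inv(Qk)
    Qk_inv = (Qk_inv + Qk_inv.T) / 2         # symmetrize numerically
    # Schur complement LMI: [[W_k, Z'], [Z, Q_k^{-1}]] >> 0  <=>  Z' Q_k Z << W_k.
    constraints.append(cp.bmat([[W[k], Z.T], [Z, Qk_inv]]) >> 0)
constraints.append(cp.trace(W[1] @ Vx) <= b[1])

prob = cp.Problem(cp.Minimize(cp.trace(W[0] @ Vx)), constraints)
prob.solve(solver=cp.SCS)
print("optimal cost:", prob.value)
```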

6 Minimax Team Theory

We considered the static stochastic team decision problem in the previous sections. This section treats an analogous version of the deterministic (or worst-case) problem. Although the problem formulation is very similar, the ideas behind the solution are considerably different, and in a sense more difficult.

The deterministic problem considered is a quadratic game between a team of players and nature. Each player has limited information that could be different from the other players in the team. This game is formulated as a minimax problem, where the team is the minimizer and nature is the maximizer.

6.1 Deterministic Team Problems

Consider the following team decision problem

$$\begin{aligned} &\underset{\mu}{\text{min}}\; \underset{w}{\text{max}} && c(w, u) \\ &\text{subject to} && u_i = \mu_i(y_i), \quad y_i = C_i w, \quad i = 1, \dots, N, \end{aligned} \tag{6.1}$$

where $w \in \mathbb{R}^n$, $u_i \in \mathbb{R}^{m_i}$, $y_i \in \mathbb{R}^{p_i}$.
$c(w, u)$ is a quadratic cost given by

$$c(w, u) = \begin{bmatrix} w \\ u \end{bmatrix}^T Q \begin{bmatrix} w \\ u \end{bmatrix},$$

where

$$Q = \begin{bmatrix} Q_{ww} & Q_{wu} \\ Q_{uw} & Q_{uu} \end{bmatrix}.$$

We will be interested in the case $Q_{uu} \succ 0$. The players $u_1, \dots, u_N$ make up a team, which plays against nature represented by the vector $w$, using the measurements $y_i = C_i w$, that is,

$$u_i = \mu_i(y_i) = \mu_i(C_i w).$$

Theorem 4

If the value of the game (6.1) is equal to $c^\star$, then there is a linear decision $\mu(y) = Ky$, with $u_i = K_i y_i$, achieving that value.

Proof. See [8].

6.2 Relation to the Stochastic Minimax Team Decision Problem

Now consider the stochastic minimax team decision problem, where the deterministic vector $w$ is replaced by a zero-mean stochastic variable and the team minimizes the worst-case expected cost.

Taking the expectation of the cost in the stochastic problem above yields an equivalent problem in which the cost appears as $\mathbf{Tr}(QW)$, where $Q$ is a positive semi-definite matrix and $W$ is the covariance matrix of $w$, i.e., $W = \mathbf{E}[w w^T]$. Hence, we see that the stochastic minimax team problem is equivalent to the deterministic minimax team problem, where nature maximizes with respect to all covariance matrices $W$ of the stochastic variable $w$.
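The expectation step rests on a standard identity for quadratic forms of zero-mean random vectors, stated here in the notation used above:

$$\mathbf{E}\, w^T Q w = \mathbf{E}\, \mathbf{Tr}\left( Q w w^T \right) = \mathbf{Tr}(Q W), \qquad W = \mathbf{E}[w w^T].$$

Hence, maximizing the expected quadratic cost over distributions of $w$ is the same as maximizing the linear function $\mathbf{Tr}(QW)$ over the admissible set of covariance matrices $W \succeq 0$.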

7 Deterministic Team Problems with Quadratic Constraints

Consider the team problem (6.1). An equivalent condition for the existence of a decision function that achieves the value $\gamma$ of the game is that

$$c(w, \mu(y)) \leq \gamma \|w\|^2$$

for all $w$, which is equivalent to

$$\gamma \|w\|^2 - c(w, \mu(y)) \geq 0$$

for all $w$. This is an example of a power constraint. We could also have a set of power constraints that have to be mutually satisfied. For instance, in addition to the minimization of the worst-case quadratic cost, we could have constraints on the induced norms of the decision functions,

$$\|\mu_i\| = \sup_{y_i \neq 0} \frac{\|\mu_i(y_i)\|}{\|y_i\|} \leq \gamma_i,$$

or equivalently given by the quadratic inequalities

$$\|\mu_i(y_i)\|^2 \leq \gamma_i^2 \|y_i\|^2 \quad \text{for all } y_i.$$

Also, the team members could share a common power source, where the power is proportional to the squared norm of the decisions $u_i = \mu_i(y_i)$:

$$\sum_{i=1}^{N} \|\mu_i(y_i)\|^2 \leq p\, \|w\|^2$$

for some positive real number $p$.

It is not clear whether linear decisions are optimal, since the example given in the introduction indicates that, in deterministic settings, nonlinear decisions can be optimal. However, the next result shows how to obtain the optimal linear decisions by solving a semidefinite program.

Theorem 5

Let , for . Let for , and for