# Multidimensional Rational Covariance Extension with Approximate Covariance Matching

†This work was supported by the Swedish Research Council (VR), the Swedish Foundation for Strategic Research (SSF), and the ACCESS Linnaeus Center, KTH Royal Institute of Technology.

###### Abstract

In our companion paper [54] we discussed the multidimensional rational covariance extension problem (RCEP), which has important applications in image processing and in spectral estimation for radar, sonar, and medical imaging. This is an inverse problem in which a power spectrum with a rational absolutely continuous part is reconstructed from a finite set of moments. However, in most applications these moments are determined from observed data and are therefore only approximate, so the RCEP may not have a solution. In this paper we extend the results of [54] to handle approximate covariance matching. We consider two problems, one with a soft constraint and one with a hard constraint, and show that they are connected via a homeomorphism. We also demonstrate that the problems are well-posed and illustrate the theory with examples from spectral estimation and texture generation.

Key words. Approximate covariance extension, trigonometric moment problem, convex optimization, multidimensional spectral estimation, texture generation.

## 1 Introduction

Trigonometric moment problems are ubiquitous in systems and control, arising in spectral estimation, signal processing, system identification, image processing, and remote sensing [5, 20, 59]. In the (truncated) multidimensional trigonometric moment problem we seek a nonnegative measure on satisfying the moment equation

(1.1) |

where , , and is the scalar product in . Here is a finite index set satisfying and . A necessary condition for (LABEL:eq:Cov) to have a solution is that the sequence

(1.2) |

satisfies the symmetry condition . The space of sequences (LABEL:eq:covariances) with this symmetry will be denoted and will be represented by vectors , formed by ordering the coefficients in some prescribed manner, e.g., lexicographically. Note that is isomorphic to , where is the cardinality of . However, as we shall see below, not all are bona fide moments of nonnegative measures .
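The symmetry condition in question is the familiar conjugate symmetry of covariance sequences, c_{-k} = conj(c_k). The following minimal numerical sketch illustrates it in one dimension; the signal, index set, and biased sample estimator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: estimate the trigonometric moments (covariances) of a
# zero-mean discrete-time process by biased sample covariances and verify the
# conjugate symmetry c_{-k} = conj(c_k) required of any member of the moment
# space. Signal and lag range are assumptions for the example.

rng = np.random.default_rng(0)
N = 4096
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # zero-mean path

def sample_covariance(y, k):
    """Biased estimate c_k = (1/N) * sum_t y(t+k) * conj(y(t))."""
    N = len(y)
    if k >= 0:
        return np.vdot(y[:N - k], y[k:]) / N   # vdot conjugates its 1st arg
    return np.vdot(y[-k:], y[:N + k]) / N      # same sum, shifted for k < 0

n = 4
c = {k: sample_covariance(y, k) for k in range(-n, n + 1)}
for k in range(1, n + 1):
    assert np.isclose(c[-k], np.conj(c[k]))    # conjugate symmetry
```

Note that the symmetry holds exactly for the biased estimator, since the two sums for lags k and -k are conjugates term by term.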

In many of the applications mentioned above there is a natural complexity constraint prescribed by design specifications. In the context of finite-dimensional systems these constraints often arise from the requirement that transfer functions be rational. This leads to the rational covariance extension problem, which has been studied in various degrees of generality in [25, 26, 36, 53, 54] and can be posed as follows.

Define and let \cref@addtoresetequationparentequation

(1.3a) | |||

be the (unique) Lebesgue decomposition of (see, e.g., [56, p. 121]), where | |||

is the (normalized) Lebesgue measure and is a singular measure. Then given a , we are interested in parameterizing solutions to (LABEL:eq:Cov) such that the absolutely continuous part of the measure (LABEL:eq:decomp) takes the form | |||

(1.3b) |

where is the closure of the convex cone of the coefficients corresponding to trigonometric polynomials

(1.4) |

that are positive for all .

The reason for referring to this problem as a rational covariance extension problem is that the numbers (LABEL:eq:covariances) correspond to covariances of a discrete-time, zero-mean, homogeneous stochastic process , where homogeneity generalizes stationarity in the case . The corresponding power spectrum, representing the energy distribution across frequencies, is defined as the nonnegative measure on whose Fourier coefficients are the covariances (LABEL:eq:covariances).
A scalar version of this problem () was first posed by Kalman [34] and has been extensively studied and solved in the literature [24, 12, 6, 21, 48, 13, 41, 7, 61, 47].
It has been extended to more general scalar moment problems [8, 27, 9] and to the multidimensional setting [26, 25, 54, 53, 36]. Also worth mentioning is the work by Lang and McClellan [39, 40, 45, 46, 38, 37] on the multidimensional maximum entropy problem, which has some overlap with the literature above.

The multidimensional rational covariance extension problem posed above has a solution if and only if , where is the open convex cone

where is the inner product in (Theorem LABEL:thm:rational). However, the covariances are generally determined from statistical data, so the condition may not be satisfied; moreover, testing this condition is difficult in the multidimensional case. We may therefore want to find a positive measure and a corresponding , namely

(1.5) |

so that is close to in some norm, e.g., the Euclidean norm . This is an ill-posed inverse problem which in general has infinitely many solutions . As already mentioned, we are interested in rational solutions (LABEL:eq:ratinaldmu), and to obtain such solutions we use regularization as in [54]. Hence we seek a that minimizes

subject to (LABEL:eq:r), where is a regularization parameter and

(1.6) |

is the normalized Kullback-Leibler divergence [33, ch. 4], [15, 61]. As will be explained in Section LABEL:sec:prel, is always nonnegative and has the property .
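A discretized sketch may clarify the role of the mass-correction term in the normalized divergence. Since the displayed formula was lost in extraction, we assume the standard form D(Φ‖Ψ) = ∫(Φ log(Φ/Ψ) − Φ + Ψ) dm used in this literature; the grid and test spectra below are our choices.

```python
import numpy as np

# Discretized sketch (assumed standard form) of the normalized
# Kullback-Leibler divergence on a uniform grid of the circle:
#   D(phi || psi) = ∫ [ phi*log(phi/psi) - phi + psi ] dm .
# The linear correction term makes D >= 0 for arbitrary positive
# integrable functions, not only for densities of total mass one.

theta = np.linspace(-np.pi, np.pi, 2000, endpoint=False)

def normalized_kl(phi, psi):
    """D(phi || psi) with mass correction; phi, psi > 0 on the grid.
    The mean approximates integration against normalized Lebesgue measure."""
    return np.mean(phi * np.log(phi / psi) - phi + psi)

phi = 1.0 + 0.5 * np.cos(theta)   # positive, total mass 1
psi = 2.0 + np.sin(theta)         # positive, total mass 2 (not a density)

assert normalized_kl(phi, psi) >= 0.0            # nonnegative despite mass 2
assert np.isclose(normalized_kl(psi, psi), 0.0)  # zero iff the two agree
```

The integrand is pointwise nonnegative (since x log(x/y) ≥ x − y for x, y > 0), which is why no normalization of total mass is needed.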

In this paper we shall consider a more general problem in the spirit of [22]. To this end, for any Hermitian, positive definite matrix , we define the weighted vector norm and consider the problem

(1.7) | ||||

which is the same as the problem above with . We shall refer to as the weight matrix.

Using the same principle as in [57], we shall also consider the problem to minimize subject to (LABEL:eq:r) and the hard constraint

(1.8) |

Since (LABEL:eq:r) are bona fide moments and hence , while in general, this problem will not have a solution if the distance from to is greater than . Hence the choice of must be made with some care. In analogy with the soft-constrained rational covariance extension problem (LABEL:eq:primal_relax), we shall consider the more general problem

(1.9) | ||||

to which we shall refer as the rational covariance extension problem with hard constraints. Again this problem reduces to the simpler problem by setting .

As we shall see, the soft-constrained problem (LABEL:eq:primal_relax) always has a solution, while the hard-constrained problem (LABEL:eq:primal_relax_new) may fail to have a solution for some weight matrices . However, in Section LABEL:sec:homeomorphism we show that the two problems are in fact equivalent in the sense that whenever (LABEL:eq:primal_relax_new) has a solution there is a corresponding in (LABEL:eq:primal_relax) that gives the same solution, and any solution of (LABEL:eq:primal_relax) can also be obtained from (LABEL:eq:primal_relax_new) by a suitable choice of . The reason for considering both formulations is that one formulation might be more suitable than the other for the particular application at hand. For example, an absolute error estimate for the covariances is more naturally incorporated in the formulation with hard constraints. A possible choice of the weight matrix in either formulation would be the covariance matrix of the estimated moments, as suggested in [22]. This corresponds to the Mahalanobis distance and could be a natural way to incorporate uncertainty of the covariance estimates in the spectral estimation procedure.

Previous work in this direction can be found in [58, 22, 10, 57, 35], where [58, 35, 10] consider the problem of selecting an appropriate covariance sequence to match within a given confidence region. The two approximation problems considered here are similar to those considered in [57] and [22]. (For more details, see also [3, Ch. B].)

We begin in Section LABEL:sec:prel by reviewing the regular multidimensional rational covariance extension problem for exact covariance matching in a broader perspective. In Section LABEL:sec:approxsoft we present our main results on approximate rational covariance extension with soft constraints, and in Section LABEL:sec:wellposed we show that the dual solution is well-posed. In Section LABEL:sec:no_singular_part we investigate conditions under which there are solutions without a singular part. The approximate rational covariance extension with hard constraints is considered in Section LABEL:sec:approxhard, and in Section LABEL:sec:homeomorphism we establish a homeomorphism between the weight matrices in the two problems, showing that the problems are actually equivalent when solutions exist. We also show that under certain conditions the homeomorphism can be extended to hold between all sets of parameters, allowing us to carry over results from the soft-constrained setting to the hard-constrained one. In Section LABEL:sec:covest we discuss the properties of various covariance estimators; in Section LABEL:sec:example we give a 2D example from spectral estimation; and in Section LABEL:sec:ex_texture we apply our theory to system identification and texture reconstruction. Some of the results of this paper were announced in [55] without proofs.

## 2 Rational covariance extension with exact matching

The trigonometric moment problem to determine a positive measure satisfying (LABEL:eq:Cov) is an inverse problem that has a solution if and only if [36, Theorem 2.3], where is the closure of , and then in general it has infinitely many solutions. However, the nature of possible rational solutions (LABEL:eq:ratinaldmu) will depend on the location of in . To clarify this point we need the following lemma.

###### Lemma 2.1.

.

###### Proof.

Obviously the inner product can be expressed in the integral form

(2.1) |

and therefore for all , as and can have zeros only on sets of measure zero. Hence the statement of the lemma follows.

Therefore, under certain conditions, the multidimensional rational covariance extension problem has a very simple solution with a polynomial spectral density, namely

(2.2) |

###### Proposition 2.2.

The multidimensional rational covariance extension problem has a unique polynomial solution (LABEL:eq:dmu=Pdm) if and only if , namely , where

The proof of Proposition LABEL:prop:polsolution is immediate by noting that any such is a bona fide spectral density and that .
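A quick one-dimensional numerical check of this observation: if the trigonometric polynomial built from the covariance data itself is positive, then the absolutely continuous measure with that polynomial as density matches the moments, since the Fourier coefficients of a trigonometric polynomial are its own coefficients. The coefficient values and grid below are assumptions for the example.

```python
import numpy as np

# 1-D check (data assumed): build C(theta) = sum_k c_k e^{ik theta} from a
# conjugate-symmetric sequence c, verify C > 0, and recover the c_k as the
# Fourier coefficients of C, i.e., dmu = C dm solves the moment problem.

theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
n = 3
c = np.array([0.4 + 0.1j, 0.5 - 0.2j, 0.3, 3.0,
              0.3, 0.5 + 0.2j, 0.4 - 0.1j])     # c_{-3},...,c_3, c_{-k}=conj(c_k)

k_idx = np.arange(-n, n + 1)
C = np.real(sum(c[i] * np.exp(1j * k * theta) for i, k in enumerate(k_idx)))
assert C.min() > 0                   # c yields a positive trig polynomial

# Fourier coefficients of C via the (exact, for degree < grid size) quadrature
c_rec = np.array([np.mean(C * np.exp(-1j * k * theta)) for k in k_idx])
assert np.allclose(c_rec, c)         # dmu = C dm matches the moments
```

The positivity check is essential: for coefficient vectors outside the polynomial cone, C takes negative values and no such polynomial solution exists.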

As seen from the following result presented in [36, Section 6], the other extreme occurs for , when only singular solutions exist.

###### Proposition 2.3.

For any there is a solution of (LABEL:eq:Cov) with support in at most points. There is no solution with an absolutely continuous part .

However, for any , there is a rational solution (LABEL:eq:ratinaldmu) parametrized by , as demonstrated in [54] by considering a primal-dual pair of convex optimization problems. In that paper the primal problem is a weighted maximum entropy problem, but as also noted in [54, Sec. 3.2], it is equivalent to

(2.3) | ||||

where is the absolutely continuous part of . This amounts to minimizing the (regular) Kullback-Leibler divergence between and , subject to matching the given data [27, 54]. In the present case of exact covariance matching, this problem is equivalent to minimizing (LABEL:eq:normalizedKullbackLeibler) subject to (LABEL:eq:Cov), since is fixed and the total mass of is determined by the zeroth moment . Hence both and are constants in this case, and problem (LABEL:eq:primal_relax) is the natural extension of (LABEL:eq:primal) to the case where the covariance sequence is not known exactly.

The primal problem (LABEL:eq:primal) is an infinite-dimensional problem with a finite number of constraints. Its dual therefore has a finite number of variables but an infinite number of constraints and is given by

(2.4) |

In particular, Theorem 2.1 in [54], based on corresponding analysis in [36], reads as follows.

###### Theorem 2.4.

Problem (LABEL:eq:primal) has a solution if and only if . For every and the functional in (LABEL:eq:dual) is strictly convex and has a unique minimizer . Moreover, there exists a unique and a (not necessarily unique) nonnegative singular measure with support

(2.5) |

such that \cref@addtoresetequationparentequation

(2.6a) | |||

(2.6b) |

For any such , the measure

(2.7) |

is an optimal solution to the problem (LABEL:eq:primal). Moreover, can be chosen with support in at most points, where is the cardinality of the index set .

If , only a singular measure with finite support would match the moment condition (Proposition LABEL:prop:singular). In this case, the problem (LABEL:eq:primal) makes no sense, since any feasible solution has infinite objective value.

In [36] we also derived the KKT conditions \cref@addtoresetequationparentequation

(2.8a) | |||

(2.8b) |

which are necessary and sufficient for optimality of the primal and dual problems.

Since (LABEL:eq:primal) is an inverse problem, we are interested in how the solution depends on the parameters of the problem. From Propositions 7.3 and 7.4 in [36] we have the following result.

###### Proposition 2.5.

Let , and be as in Theorem LABEL:thm:rational. Then the map is continuous.

To get a full description of well-posedness of the solution we would like to extend this continuity result to the map . However, such a generalization is only possible under certain conditions. The following result is a consequence of Proposition LABEL:prop:cp2qhat and [54, Corollary 2.3].

###### Proposition 2.6.

Let , , and be as in Theorem LABEL:thm:rational. Then, for and all , the mapping is continuous.

Corollary 2.3 in [54] actually ensures that for and . However, in Section LABEL:ssec:qhat2chat we present a generalization of Proposition LABEL:prop:c2qhatchat2 to cases with , where then may be nonzero. (The proof of this generalization can be found in [52].) Here we shall also consider an example where continuity fails when belongs to the boundary , i.e., when the corresponding nonnegative trigonometric polynomial vanishes at at least one point.

## 3 Approximate covariance extension with soft constraints

To handle the case with noisy covariance data, when may not even belong to , we relax the exact covariance matching constraint (LABEL:eq:Cov) in the primal problem (LABEL:eq:primal) to obtain the problem (LABEL:eq:primal_relax). In this case it is natural to reformulate the objective function in (LABEL:eq:primal) to include a term that also accounts for changes in the total mass of . Consequently, we have replaced the objective function in (LABEL:eq:primal) by the normalized Kullback-Leibler divergence (LABEL:eq:normalizedKullbackLeibler) plus a term that ensures approximate data matching.

Using the normalized Kullback-Leibler divergence, as proposed in [33, ch. 4], [15, 61], is advantageous in the approximate covariance matching problem, since this divergence is always nonnegative, precisely as in the case of probability densities. To see this, observe that, in view of the basic inequality ,

since is a nonnegative measure. Moreover, , as can be seen by taking in (LABEL:eq:normalizedKullbackLeibler).
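The nonnegativity argument can be written out as follows; since the symbols were stripped in extraction, we write Φ and Ψ for the two nonnegative densities involved (our notation, assumed, not necessarily the paper's):

```latex
% Sketch of the nonnegativity argument (assumed notation \Phi, \Psi).
% From \log x \le x - 1 with x = \Psi/\Phi:
\Phi \log\frac{\Phi}{\Psi}
  = -\Phi \log\frac{\Psi}{\Phi}
  \;\ge\; -\Phi\Bigl(\frac{\Psi}{\Phi} - 1\Bigr)
  = \Phi - \Psi .
% Integrating with respect to the normalized Lebesgue measure dm gives
\mathbb{D}(\Phi \,\|\, \Psi)
  = \int \Bigl( \Phi \log\frac{\Phi}{\Psi} - \Phi + \Psi \Bigr)\, dm
  \;\ge\; 0 ,
% with equality if and only if \Phi = \Psi almost everywhere,
% since \log x = x - 1 only at x = 1.
```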

The problem under consideration is to find a nonnegative measure minimizing

subject to (LABEL:eq:r). To derive the dual of this problem we consider the corresponding maximization problem and form the Lagrangian

where are Lagrange multipliers and is the corresponding trigonometric polynomial (LABEL:Ptrigpol). However,

(3.1) |

and therefore

(3.2) |

where , and for , and hence .

In deriving the dual functional

to be minimized, we only need to consider , since takes infinite values for . In fact, following along the lines of [54, p. 1957], we note that if , then (LABEL:eq:Lagrangian) tends to infinity as . Moreover, since , there is a neighborhood where ; letting tend to infinity on this neighborhood, (LABEL:eq:Lagrangian) tends to infinity if . We also note that the nonnegative function can only be zero on a set of measure zero; otherwise the first term in (LABEL:eq:Lagrangian) would be .

The directional derivative (formally, the Gateaux differential [44]) of the Lagrangian (LABEL:eq:Lagrangian) in any feasible direction , i.e., any direction such that for all sufficiently small , is easily seen to be

In particular, the direction is feasible since for . Therefore, any maximizing must satisfy and hence (LABEL:eq:rat). Moreover, a maximizing choice of will require that

(3.3) |

as this nonnegative term can be made zero by the simple choice , and consequently (LABEL:supp(dnu)) must hold. Finally, the directional derivative

is zero for all if

(3.4) |

Inserting this together with (LABEL:eq:rat) and (LABEL:eq:intQdnu=0) into (LABEL:eq:Lagrangian) then yields the dual functional

Consequently the dual of the (primal) optimization problem (LABEL:eq:primal_relax) is equivalent to

(3.5) |
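To make the soft-constrained problem concrete, the following is a heavily simplified, discretized one-dimensional sketch solved by a generic smooth optimizer. The grid, flat prior, identity weight, regularization parameter, and the restriction to real even spectra (cosine moments only) are all illustrative assumptions; no claim is made that this matches the paper's dual-based method.

```python
import numpy as np
from scipy.optimize import minimize

# Discretized 1-D sketch of the soft-constrained primal: minimize over
# phi >= 0 the quantity
#   D(phi || psi) + (1/(2*lam)) * (r - c)^T W (r - c),
# where r_k = ∫ phi(theta) cos(k*theta) dm are the (real, even) moments of
# phi. Positivity is maintained by the substitution phi = exp(u).

M, n, lam = 512, 2, 0.1
theta = np.linspace(-np.pi, np.pi, M, endpoint=False)
A = np.cos(np.outer(np.arange(n + 1), theta))   # rows of the moment map
psi = np.ones(M)                                # flat prior density
c = np.array([1.0, 0.4, 0.1])                   # noisy target moments (assumed)
W = np.eye(n + 1)                               # weight matrix (assumed)

def moments(phi):
    return A @ phi / M                          # r_k = mean(phi * cos(k*theta))

def obj_grad(u):
    phi = np.exp(u)
    d = moments(phi) - c
    f = np.mean(phi * np.log(phi / psi) - phi + psi) + d @ W @ d / (2 * lam)
    g_phi = np.log(phi / psi) / M + (A.T @ (W @ d)) / (lam * M)
    return f, g_phi * phi                       # chain rule for u = log(phi)

res = minimize(obj_grad, np.zeros(M), jac=True, method="L-BFGS-B")
phi_opt = np.exp(res.x)
r_opt = moments(phi_opt)
```

After optimization, r_opt lies between the prior's moments and the target c, with the trade-off governed by lam: smaller lam enforces tighter data matching, larger lam pulls the solution toward the prior.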

###### Theorem 3.1.

For every the functional in (LABEL:eq:dual_relax) is strictly convex and has a unique minimizer . Moreover, there exists a unique , a unique and a (not necessarily unique) nonnegative singular measure with support

(3.6) |

such that \cref@addtoresetequationparentequation

(3.7a) | |||

(3.7b) |

and the measure

(3.8) |

is an optimal solution to the primal problem (LABEL:eq:primal_relax). Moreover, can be chosen with support in at most points.

###### Proof.

The objective functional of the dual problem (LABEL:eq:dual_relax) can be written as the sum of two terms, namely

where . The functional is strictly convex (Theorem LABEL:thm:rational), and the same trivially holds for , since it is a positive definite quadratic form. Consequently, is strictly convex, as claimed. Moreover, is lower semicontinuous [54, Lemma 3.1] with compact sublevel sets [54, Lemma 3.2]. Likewise, is continuous with compact sublevel sets. Hence is lower semicontinuous with compact sublevel sets and thus has a minimum , which is unique by strict convexity.

In view of (LABEL:eq:q2r), the optimal value of is given by

(3.9) |

and is hence unique. Since the linear term in the gradient of therefore takes the value at the optimal point, the analysis in [54, sect. 3.1.5] applies with obvious modifications, showing that there is a , which then must be unique, such that

Moreover, there is a discrete measure with support in at most points such that (LABEL:chat) holds; see, e.g., [36, Proposition 2.4]. Then (LABEL:rhat) holds as well. In view of (LABEL:eq:intQdnu=0),

(3.10) |

and consequently , and the support of must satisfy (LABEL:supp(dnu)soft).

Finally, let be given in terms of by (LABEL:eq:r), and let be the corresponding primal functional in (LABEL:eq:primal_relax). Then, for any such ,

and hence is an optimal solution to the primal problem (LABEL:eq:primal_relax), as claimed.

We collect the KKT conditions in the following corollary.

###### Corollary 3.2.

The conditions \cref@addtoresetequationparentequation

(3.11a) | |||

(3.11b) | |||

(3.11c) |

are necessary and sufficient conditions for optimality of the dual pair (LABEL:eq:primal_relax) and (LABEL:eq:dual_relax) of optimization problems.

## 4 On the well-posedness of the soft-constrained problem

In the previous sections we have shown that the primal and dual optimization problems are well-defined. Next we investigate the well-posedness of the primal problem as an inverse problem. To this end, we first establish continuity of the solutions with respect to the parameters , and .

### 4.1 Continuity of with respect to , and

We start by considering the continuity of the optimal solution with respect to the parameters. The parameter set of interest is

(4.1) |

###### Theorem 4.1.

Let

(4.2) |

Then the map is continuous on .

###### Proof.

Following the procedure in [36, Proposition 7.3] we use the continuity of the optimal value (Lemma LABEL:lem:W2Jcont) to show continuity of the optimal solution. To this end, let be a sequence of parameters in converging to as . Moreover, defining and for simplicity of notation, let and . By Lemma LABEL:lem:W2Jcont, is bounded, and hence there is a subsequence, which for simplicity we also call , converging to a limit . If we can show that , then the theorem follows. To this end, choosing a , we have

Consequently, by Lemma LABEL:lem:W2Jcont,

However , and, since is continuous in , we obtain

(4.3) |

Letting in (LABEL:Jpestimate), we obtain the inequality . By strict convexity of the optimal solution is unique, and hence .

### 4.2 Continuity of with respect to

We have now established continuity from to . In the same way as in Proposition LABEL:prop:c2qhatchat2 we are also interested in continuity of the map . This would follow if we could show that the map from to is continuous. From the KKT condition (LABEL:eq:opt_cond_relax_matchError), it is seen that is continuous in , and . In view of (LABEL:eq:opt_cond_relax_KKT), i.e.,

continuity of would follow if is continuous in whenever it is finite. If , this follows from the continuity of the map in . For the case , this is trivial, since if is finite, then and is bounded away from zero (cf. Proposition LABEL:prop:c2qhatchat2). However, for the case the optimal may belong to the boundary , i.e., vanishes at some point. The following proposition establishes continuity of in certain cases.

###### Proposition 4.2.

For , let and suppose that the Hessian is positive definite at each point where vanishes. Then , and the mapping from the coefficient vector to is continuous at the point .

The proof of this proposition is given in [52]. From Propositions LABEL:prp:qQinvLt and LABEL:prop:c2qhatchat2 the following continuity result follows directly.

###### Corollary 4.3.

For all , the mapping is continuous at any point for which the Hessian is positive definite at each point where vanishes.

The condition is needed, since we may have pole-zero cancellations in when , in which case may be finite even if . The following example shows that this can lead to discontinuities in the map (cf. Example 3.8 in [36]).

###### Example 1

Let

where and is the singular measure with support in . Since is positive, . Moreover, since

we have that (see, e.g., [41, p. 2853]). Thus we know [54, Corollary 2.3] that for each we have a unique such that matches , and hence . However, for we have that and (Theorem LABEL:thm:rational). Then, for the sequence , where , we have , so

which shows that the mapping is not continuous.

## 5 Tuning to avoid a singular part

In many situations we prefer solutions in which there is no singular measure . An interesting question is therefore for which prior and weight we obtain . The following result provides a sufficient condition.

###### Proposition 5.1.

Let and let be the Fourier coefficients of the prior . If the weight satisfies (here denotes the subordinate, i.e., induced, matrix norm)

(5.1) |

then the optimal solution of (LABEL:eq:primal_relax) is of the form

i.e., the singular part vanishes.

###### Remark 1

Note that for a scalar weight, the bound (LABEL:eq:supnormbound) simplifies to

(5.2) |

where is the cardinality of the index set .

For the proof of Proposition LABEL:boundlprop we need the following lemma.

###### Lemma 5.2.

Condition (LABEL:eq:supnormbound) implies

(5.3) |

where is the optimal value of in problem (LABEL:eq:primal_relax).

###### Proof.

Let

(5.4) |

be the cost function of problem (LABEL:eq:primal_relax), and let be the optimal solution. Clearly, , and consequently

since and . Therefore,

which is less than one by (LABEL:eq:supnormbound). Hence (LABEL:eq:supnormbound) implies (LABEL:eq:newnormbound).

###### Proof (Proof of Proposition LABEL:boundlprop).

Suppose the optimal solution has a nonzero singular part , and form the directional derivative of (LABEL:eq:primalcost) at in the direction . Then in (LABEL:eq:decomp) does not vary, and

where

Then for all , and hence

by (LABEL:eq:newnormbound) (Lemma LABEL:boundlemma). Consequently,

whenever , which contradicts optimality. Hence must be zero.

The condition of Proposition LABEL:boundlprop is only sufficient and is in general conservative. To illustrate this, we consider a simple one-dimensional example ().

###### Example 2

Consider a covariance sequence , where , and a prior , and set . Then, since

the sufficient condition (LABEL:eq:scalarbound) for an absolutely continuous solution is

(5.5) |

We want to investigate how restrictive this condition is.

Clearly we will have a singular part if and only if , in which case we have