
# Approximating the Real Structured Stability Radius with Frobenius Norm Bounded Perturbations

N. Guglielmi Dipartimento di Matematica Pura e Applicata, Università dell’Aquila, via Vetoio (Coppito), I- L’Aquila, Italy (guglielm@univaq.it). Supported in part by the Italian Ministry of Education, Universities and Research (M.I.U.R.) and by Istituto Nazionale di Alta Matematica - Gruppo Nazionale di Calcolo Scientifico (INdAM-G.N.C.S).     M. Gürbüzbalaban Department of Management Science and Information Systems, Rutgers University, New Brunswick, NJ, USA (mgurbuzbalaban@business.rutgers.edu)    T. Mitchell Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, 39106 Germany (mitchell@mpi-magdeburg.mpg.de). Previously supported by National Science Foundation Grant DMS-1317205 at New York University.    M. L. Overton Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012, USA (overton@cims.nyu.edu). Supported in part by National Science Foundation Grant DMS-1620083.
Dec. 30th, 2016
###### Abstract

We propose a fast method to approximate the real stability radius of a linear dynamical system with output feedback, where the perturbations are restricted to be real valued and bounded with respect to the Frobenius norm. Our work builds on a number of scalable algorithms that have been proposed in recent years, ranging from methods that approximate the complex or real pseudospectral abscissa and radius of large sparse matrices (and generalizations of these methods for pseudospectra to spectral value sets) to algorithms for approximating the complex stability radius (the reciprocal of the H∞ norm). Although our algorithm is guaranteed to find only upper bounds to the real stability radius, it seems quite effective in practice. As far as we know, this is the first algorithm that addresses the Frobenius-norm version of this problem. Because the cost mainly consists of computing the eigenvalue with maximal real part for continuous-time systems (or modulus for discrete-time systems) of a sequence of matrices, our algorithm remains very efficient for large-scale systems provided that the system matrices are sparse.

## 1 Introduction

Consider a linear time-invariant dynamical system with output feedback defined, for continuous-time systems, by matrices A∈Rn×n, B∈Rn×p, C∈Rm×n and D∈Rm×p as

 ˙x =Ax+Bw (1)
 z =Cx+Dw (2)

where w=Δz is a disturbance feedback depending linearly on the output z [HP05, p. 538]. For simplicity, we restrict our attention to continuous-time systems for most of the paper, but we briefly explain how to extend our results and methods to discrete-time systems in Section 6.

The real stability radius, sometimes called the real structured stability radius, is a well-known quantity for measuring the robust stability of linear dynamical systems with output feedback [HP90a, HP90b, HK94, HP05, ZGD95, Kar03]. It measures stability under a certain class of real perturbations where the size of the perturbations is measured by a given norm ∥⋅∥. Most of the literature has focused on spectral-norm bounded perturbations, for which there exist a characterization in terms of an explicit formula [QBR95] and a level-set algorithm [SVDT96]. This algorithm has been proven to be convergent; however, it is not practical for systems where large and sparse matrices arise, as it requires a sequence of Hamiltonian eigenvalue decompositions, each with a complexity of O(n3).

As an alternative to the spectral norm, Frobenius-norm bounded perturbations have also been of interest to the control community [LKL96, BS99, BS98, BB01, Bob99, BBD01]. It has been argued that the Frobenius norm is easier to compute and is more advantageous to consider in certain types of control systems [Bob99, BBD01], admitting natural extensions to infinite-dimensional systems [BB01]. In the special case B=C=I, D=0, there exists an algorithm [Bob99, BBD01] that gives upper and lower bounds for the Frobenius-norm bounded real stability radius; however, there is no algorithm to our knowledge that is applicable in the general case. Indeed, [BV14] describes this as an unsolved research problem. In this paper, we present the first method to our knowledge that provides good approximations to the Frobenius-norm bounded real stability radius.

Our method relies on two foundations. The first is the theory of spectral value sets associated with the dynamical system (1)–(2) as presented in [HP05, Chapter 5]. The second is the appearance of a number of recent iterative algorithms that find rightmost points of spectral value sets of various sorts, beginning with the special case of matrix pseudospectra (the case B=C=I, D=0) [GO11], followed by a related method for pseudospectra [KV14] and extensions to real-structured pseudospectra [GL13, GM15, Ros15, Gug16], and to spectral value sets associated with (1)–(2) [GGO13, MO16] and related descriptor systems [BV14].

The paper is organized as follows. In Section 2 we introduce spectral value sets, establishing a fundamental relationship between the spectral value set abscissa and the stability radius. In Section 3 we introduce an ordinary differential equation whose equilibria are generically associated with rightmost points of Frobenius-norm bounded real spectral value sets. In Section 4 we present a practical iterative method to compute these points. This leads to our method for approximating the Frobenius-norm bounded real stability radius, presented in Section 5. We outline extensions to discrete-time systems in Section 6, present numerical results in Section 7, and make concluding remarks in Section 8.

## 2 Fundamental Concepts

Throughout the paper, ∥⋅∥2 denotes the matrix 2-norm (maximum singular value), whereas ∥⋅∥F denotes the Frobenius norm (associated with the trace inner product). The usage ∥⋅∥ means that the norm may be either ∥⋅∥2 or ∥⋅∥F, or both when they coincide, namely for vectors or rank-one matrices. We use the notation C− to denote the open left half-plane {s∈C: Re s<0} and C+ to denote the closed upper half-plane {s∈C: Im s≥0}.

### 2.1 Spectral Value Sets and μ-Values

Given real matrices A, B, C, D defining the linear dynamical system (1)–(2), linear feedback w=Δz leads to a perturbed system matrix M(Δ) with the linear fractional form

 M(Δ)=A+BΔ(I−DΔ)−1C for Δ∈Kp×m (3)

where the field K is either R or C. Note that since

 ∥DΔ∥2≤∥D∥2∥Δ∥2≤∥D∥2∥Δ∥F (4)

we can ensure that I−DΔ is invertible, and hence that M(Δ) is well defined, by assuming ∥Δ∥≤ε and ε∥D∥2<1, regardless of whether ∥⋅∥ is ∥⋅∥2 or ∥⋅∥F.
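As an illustration of the linear fractional form (3) and the well-posedness condition above, the following NumPy sketch (with randomly generated matrices, purely for illustration) forms M(Δ) for a Frobenius-norm bounded Δ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 5, 2, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, p))

def M(Delta):
    """Perturbed system matrix A + B Delta (I - D Delta)^{-1} C from (3)."""
    return A + B @ Delta @ np.linalg.solve(np.eye(m) - D @ Delta, C)

# Choose eps with eps*||D||_2 < 1; then any ||Delta||_F <= eps is admissible,
# since ||D Delta||_2 <= ||D||_2 ||Delta||_F < 1 makes I - D Delta invertible.
eps = 0.5 / np.linalg.norm(D, 2)
Delta = rng.standard_normal((p, m))
Delta *= eps / np.linalg.norm(Delta, 'fro')
print(np.linalg.eigvals(M(Delta)))  # spectrum of one perturbed system matrix
```

Here only the matrix dimensions and the random data are assumptions; the formula for M(Δ) is (3) itself.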

###### Definition 2.1.

Let ε≥0 with ε∥D∥2<1. Define the spectral value set with respect to the norm ∥⋅∥ and the field K as

 σK,∥⋅∥ε(A,B,C,D)=⋃{σ(M(Δ)):Δ∈Kp×m,∥Δ∥≤ε}.

Here σ(⋅) denotes spectrum. Note that

 σK,∥⋅∥2ε(A,B,C,D)⊇σK,∥⋅∥Fε(A,B,C,D)⊇σK,∥⋅∥0(A,B,C,D)=σ(A).

It is well known that when K=C, the spectral value set can equivalently be defined as the set of points s∈C∖σ(A) for which the spectral norm, i.e., the largest singular value, of the transfer matrix

 G(s)=C(sI−A)−1B+D

takes a value that is at least ε [HP05, Chap. 5]. Furthermore, it is well known that

 σC,∥⋅∥ε(A,B,C,D)=⋃{σ(M(Δ)):Δ∈Cp×m,∥Δ∥≤ε and rank(Δ)≤1}. (5)

As a consequence of this rank-one property, it is clear that

 σC,∥⋅∥2ε(A,B,C,D)=σC,∥⋅∥Fε(A,B,C,D).
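The transfer-matrix characterization can be checked numerically: if λ∈σ(M(Δ)) and λ∉σ(A), then I−G(λ)Δ must be singular. A small NumPy sketch with random illustrative data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 6, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = 0.1 * rng.standard_normal((m, p))
Delta = 0.1 * rng.standard_normal((p, m))

# An eigenvalue of the perturbed system matrix M(Delta)
Mpert = A + B @ Delta @ np.linalg.solve(np.eye(m) - D @ Delta, C)
lam = np.linalg.eigvals(Mpert)[0]

# Transfer matrix G(lam) = C (lam I - A)^{-1} B + D
G = C @ np.linalg.solve(lam * np.eye(n) - A, B) + D

# Smallest singular value of I - G(lam) Delta: numerically zero
sigma_min = np.linalg.svd(np.eye(m) - G @ Delta, compute_uv=False)[-1]
print(sigma_min)
```

The underlying fact, that s∈σ(M(Δ))∖σ(A) if and only if det(I−G(s)Δ)=0, is [HP05, Lemma 5.2.7], used repeatedly below.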

In contrast, when perturbations are restricted to be real, the inclusion

 σR,∥⋅∥2ε(A,B,C,D)⊇σR,∥⋅∥Fε(A,B,C,D)

is generically strict. Instead of ordinary singular values, we must consider real structured singular values or real μ-values, that is, μ-values with the perturbations Δ restricted to be real matrices. This topic is discussed at length in [HP05, Section 4.4], allowing additional structure to be imposed on Δ beyond simply Δ∈Rp×m, and treating a general class of operator norms, including the spectral norm, but not, however, the Frobenius norm. See also [Kar03, Chapter 6].

###### Definition 2.2.

The μ-value of a matrix H∈Cm×p with respect to the field K and the norm ∥⋅∥ is defined by

 μ∥⋅∥K(H)=[inf{∥Δ∥: Δ∈Kp×m, det(I−HΔ)=0}]−1 (6)

We use the convention that taking the infimum over the empty set always yields +∞ and that (+∞)−1=0, so that μ∥⋅∥K(H)=0 when I−HΔ is nonsingular for all Δ∈Kp×m. Definition 2.2 defines the real μ-value when K=R for complex matrices H as well as real matrices.

The following lemma is well known in the case of the spectral norm, where it is usually known as the Eckart-Young theorem. See [QBR95, Lemma 1] and [HP05, Prop. 4.4.11] for extensions to other structures and other operator norms. A key point to note here is that while this result holds both for ∥⋅∥2 and ∥⋅∥F, it does not hold for the real μ-value when H is complex.

###### Lemma 2.3.

Let K be either R or C, let ∥⋅∥ be either ∥⋅∥2 or ∥⋅∥F, and let H∈Km×p. Then, μ∥⋅∥K(H)=∥H∥2.

###### Proof.

If H=0, the result is clear. Assume H≠0. If det(I−HΔ)=0, then 1 is an eigenvalue of HΔ, so, as in (4), 1≤∥HΔ∥2≤∥H∥2∥Δ∥. This shows that μ∥⋅∥K(H)≤∥H∥2. For the reverse inequality, let H have the singular value decomposition H=UΣV∗, where U and V are unitary and

 Σ=diag{σ1(H),σ2(H),…,σmin{p,m}(H)}

is a diagonal matrix with the singular values on the diagonal in descending order. Define

 Δ=Vdiag{σ1(H)−1,0,…,0}U∗.

Then ∥Δ∥2=∥Δ∥F=σ1(H)−1 and det(I−HΔ)=0. Furthermore, if K=R, then since H is real, Δ is also real. This shows that μ∥⋅∥K(H)≥∥H∥2. ∎
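The rank-one construction in the proof is easy to verify numerically; the sketch below (random real H, assumed data) checks that Δ is real, that its spectral and Frobenius norms both equal σ1(H)−1, and that I−HΔ is singular:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 4, 3
H = rng.standard_normal((m, p))           # real H, so the K = R case applies

U, s, Vt = np.linalg.svd(H)               # H = U diag(s) V^T
Delta = np.outer(Vt[0], U[:, 0]) / s[0]   # v1 sigma1^{-1} u1^T, real, p x m

# rank one, so spectral and Frobenius norms coincide: both sigma1^{-1}
assert np.isclose(np.linalg.norm(Delta, 'fro'), 1 / s[0])
assert np.isclose(np.linalg.norm(Delta, 2), 1 / s[0])

# H Delta = u1 u1^T, so I - H Delta is singular
print(abs(np.linalg.det(np.eye(m) - H @ Delta)))
```

Only the dimensions and the random H are assumptions; the perturbation is exactly the one constructed in the proof.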

Combining [HP05, Lemma 5.2.7] with Lemma 2.3 results in the following corollary. This may be compared with [HP05, Corollary 5.2.8], which treats a more general class of structured perturbations, but is restricted to operator norms.

###### Corollary 2.4.

Let K be either R or C and let ∥⋅∥ be either ∥⋅∥2 or ∥⋅∥F. Let ε∥D∥2<1 and s∈C∖σ(A), and define ε(s)=inf{∥Δ∥: Δ∈Kp×m, s∈σ(M(Δ))}.

Then,

 ε(s)=(μ∥⋅∥K(G(s)))−1.

This leads to the following theorem, which may be compared with [HP05, Theorem 5.2.9]; the latter again does not treat the Frobenius norm.

###### Theorem 2.5.

Let K be either R or C and let ∥⋅∥ be either ∥⋅∥2 or ∥⋅∥F. Suppose ε>0 and ε∥D∥2<1. Then

 σK,∥⋅∥ε(A,B,C,D)=σ(A) ⋃ {s∈C∖σ(A) : μ∥⋅∥K(G(s))≥ε−1}.
###### Proof.

Suppose λ∈σ(M(Δ)) for some Δ∈Kp×m with ∥Δ∥≤ε, and λ∉σ(A). By [HP05, Lemma 5.2.7], we have μ∥⋅∥K(G(λ))≥∥Δ∥−1≥ε−1. Conversely, if λ∉σ(A) and μ∥⋅∥K(G(λ))≥ε−1, then we have ε(λ)≤ε and, by Corollary 2.4, there exists Δ∈Kp×m with ∥Δ∥≤ε such that λ∈σ(M(Δ)). ∎

Note that even when A, B, C, D are real, the transfer matrix G(s) is normally complex for s∉R, so it is not generally the case that μ∥⋅∥R(G(s))=∥G(s)∥2. For real spectral value sets defined by the spectral norm, the optimal perturbation Δ that appears in Definition 2.2 of the μ-value can in fact always be chosen to have rank at most two [QBR95, Section 2], leading to the formula

 σR,∥⋅∥2ε(A,B,C,D)=⋃{σ(M(Δ)):Δ∈Rp×m,∥Δ∥2≤ε and rank(Δ)≤2}.

We make the assumption in this paper that the same property holds for the Frobenius norm, but, for brevity, we leave a detailed justification of this to future work.

### 2.2 The Stability Radius

Because we are focusing on the continuous-time dynamical system (1)–(2), the stability region of interest is the open left half-plane C−. We say that A is stable if σ(A)⊂C−, in which case, for sufficiently small ε, the spectral value set σK,∥⋅∥ε(A,B,C,D) is also contained in C−. The stability radius measures the size of the minimal perturbation Δ that destabilizes the matrix M(Δ) or results in M(Δ) being undefined [HP05, Def. 5.3.1].

###### Definition 2.6.

The stability radius is defined with respect to the field K and the norm ∥⋅∥ as

 r∥⋅∥K(A,B,C,D)=inf{∥Δ∥:Δ∈Kp×m,det(I−DΔ)=0 or σ(M(Δ))⊄C−}.

The characterization

 r∥⋅∥K(A,B,C,D)=min([μ∥⋅∥K(D)]−1,infω∈R[μ∥⋅∥K(G(iω))]−1) (7)

is well known for operator norms [HP05, Theorem 5.3.3]. Corollary 2.4 and Lemma 2.3 extend [HP05, Theorem 5.3.3] beyond operator norms to the Frobenius norm, leading to the similar formula

 r∥⋅∥K(A,B,C,D)=min(∥D∥−12, infω∈R[μ∥⋅∥K(G(iω))]−1). (8)
###### Remark 2.7.

As the function H↦μ∥⋅∥K(H) is upper semi-continuous both for operator norms and the Frobenius norm (see [Kar03, Lemma 1.7.1]), we have limω→∞G(iω)=D but only

 lim supω→∞ μ∥⋅∥K(G(iω)) ≤ μ∥⋅∥K(D) (9)

with a possible strict inequality (see [HP05, Remark 5.3.17 (i)] and [HP05, Example 5.3.18] for an example with K=R). Therefore, when D≠0, we cannot eliminate the first term in (8). Either r∥⋅∥K(A,B,C,D)=∥D∥−12, or the infimum in (8) is strictly less than ∥D∥−12, in which case it has to be attained at a finite ω; otherwise, we would obtain a contradiction as ω→∞ by the inequality (9). However, in the special case when D=0, we can interpret ∥D∥−12 as +∞ (see the paragraph after Definition 2.2) and dispense with the first term in (8).

In the complex case K=C, the spectral and Frobenius norms define the same stability radius r∥⋅∥C(A,B,C,D). In this case also we can eliminate the first term in (8), since (9) holds with equality, and the second term is simply the reciprocal of the H∞ norm of the transfer matrix on the boundary of the stability region. The standard method to compute it is the Boyd-Balakrishnan-Bruinsma-Steinbuch (BBBS) algorithm [BB90, BS90]. This algorithm is globally and quadratically convergent, but it is not practical when n is large due to its computational complexity: it requires repeated computation of all eigenvalues of 2n×2n Hamiltonian matrices. The first constructive formula to compute μ∥⋅∥2R(H), and hence r∥⋅∥2R(A,B,C,D), the real μ-value and the real stability radius for the spectral norm, was given in [QBR95]; this led to a practical level-set algorithm [SVDT96]. However, this is significantly more involved than the BBBS algorithm and hence is also impractical in the large-scale case. To our knowledge, no efficient algorithm to compute r∥⋅∥2R(A,B,C,D) is known when n is large. As noted in the introduction, much less attention has been given to the Frobenius-norm case, though it is clearly of interest in applications. As far as we know, no constructive method has been given to approximate μ∥⋅∥FR(H) or r∥⋅∥FR(A,B,C,D), even if n is small.

### 2.3 The Spectral Value Set Abscissa

The spectral value set abscissa measures how far the spectral value set extends rightwards into the complex plane for a prescribed value of ε.

###### Definition 2.8.

For ε≥0 with ε∥D∥2<1, the spectral value set abscissa (w.r.t. the norm ∥⋅∥ and the field K) is

 αK,∥⋅∥ε(A,B,C,D)=max{Re λ:λ∈σK,∥⋅∥ε(A,B,C,D)} (10)

with αK,∥⋅∥0(A,B,C,D)=α(A), the spectral abscissa of A.

In the case K=R, αR,∥⋅∥ε(A,B,C,D) is called the real spectral value set abscissa.

###### Definition 2.9.

A rightmost point of a set S⊂C is a point where the maximal value of the real part of the points in S is attained. A locally rightmost point of a set S⊂C is a point λ which is a rightmost point of S∩N for some open neighborhood N of λ.

###### Remark 2.10.

Since σK,∥⋅∥ε(A,B,C,D) is compact, its rightmost points, that is, the maximizers of the optimization problem in (10), lie on its boundary. When K=C, there can only be a finite number of these [GGO13, Remark 2.14]. However, when K=R, there can be an infinite number of points on the boundary with the same real part, as shown by the following example.

###### Example 2.11.

Let

 A=(0 −1; 1 0), B=(0; 1), C=(1 0), D=0.

For small ε>0, σR,∥⋅∥Fε(A,B,C,D) consists of two line segments on the imaginary axis, which merge into one line segment when ε=1.

###### Remark 2.12.

Since A, B, C, D are real, σK,∥⋅∥ε(A,B,C,D) is symmetric with respect to the real axis, so, without loss of generality, when we refer to a rightmost point λ in σK,∥⋅∥ε(A,B,C,D), we imply that λ∈C+, the closed upper half-plane, and when we say that the rightmost point is unique, we mean considering only points in C+. The same convention applies to the spectrum, so that a rightmost eigenvalue is understood to be in C+.

There is a key relationship between the spectral value set abscissa and the stability radius that is a consequence of Theorem 2.5:

###### Corollary 2.13.
 r∥⋅∥K(A,B,C,D) =inf{ε: ε∥D∥2≥1 or αK,∥⋅∥ε(A,B,C,D)≥0} (11) =min(∥D∥−12, inf{ε: αK,∥⋅∥ε(A,B,C,D)≥0}). (12)
###### Proof.

That the right-hand sides of (11) and (12) are the same is immediate. Hence, it suffices to show that both of the infimum terms in (8) and (12) are attained and are equal when the upper bound ∥D∥−12 is not active. The infimum in (12) is attained because αK,∥⋅∥ε(A,B,C,D) is a monotonically increasing continuous function of ε (see [Kar03, Chapter 2] for continuity properties of real spectral value sets), and the infimum in (8) is attained at a finite ω by Remark 2.7. Finally, the infimal values are equal by Theorem 2.5. ∎

The algorithm developed in this paper for approximating the real stability radius when n is large depends on the fundamental characterization (12). This was also true of the recent algorithms developed in [GGO13, BV14, MO16] for the complex stability radius when n is large, for which the equivalence (12) is more straightforward and well known. (For a different approach to approximating the complex stability radius when n is large, namely the "implicit determinant" method, see [FSVD14].)

## 3 An Ordinary Differential Equation

This section extends the method of Guglielmi and Lubich [GL13, Sec. 2.1] for approximating the real pseudospectral abscissa (the real spectral value set abscissa in the case B=C=I, D=0) to the spectral value set abscissa for general A, B, C, D, using the Frobenius norm. As we shall see, the extension is not straightforward, as additional subtleties arise in the general case that are not present in the pseudospectral case.

We consider ε to be fixed with 0<ε∥D∥2<1 throughout this section and the next section. We start by considering the variational behavior of eigenvalues of the perturbed system matrix M(Δ) defined in (3). It is convenient to assume a smooth parametrization t↦Δ(t) mapping R to Rp×m, with ∥Δ(t)∥F=ε for all t. We will use ˙Δ to denote the derivative (d/dt)Δ(t).

We need the following lemma.

###### Lemma 3.1.

Given a smooth parametrization Δ(t) with I−DΔ(t) invertible, we have

 ddt(Δ(t)(I−DΔ(t))−1)=(I−Δ(t)D)−1˙Δ(t)(I−DΔ(t))−1. (13)
###### Proof.

For conciseness, we omit the dependence on t, differentiate, and regroup terms as

 ddt(Δ(I−DΔ)−1) =˙Δ(I−DΔ)−1+Δddt(I−DΔ)−1 =˙Δ(I−DΔ)−1+Δ(I−DΔ)−1D˙Δ(I−DΔ)−1 =(I+Δ(I−DΔ)−1D)˙Δ(I−DΔ)−1. (14)

We then observe that

 I+Δ(I−DΔ)−1D=I+Δ(∞∑k=0(DΔ)k)D=I+∞∑k=1(ΔD)k=(I−ΔD)−1. (15)

Combining (14) and (15) yields the result. ∎
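The derivative formula (13) can be sanity-checked against a central finite difference; the following sketch uses random illustrative matrices (all names here are assumptions for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(3)
p, m = 3, 2
D = 0.2 * rng.standard_normal((m, p))
Delta0 = rng.standard_normal((p, m))      # base point Delta(0)
Ddot = rng.standard_normal((p, m))        # direction \dot{Delta}

def F(Delta):
    """Delta (I - D Delta)^{-1}, the map differentiated in Lemma 3.1."""
    return Delta @ np.linalg.inv(np.eye(m) - D @ Delta)

# Right-hand side of (13): (I - Delta D)^{-1} Ddot (I - D Delta)^{-1}
rhs = (np.linalg.inv(np.eye(p) - Delta0 @ D) @ Ddot
       @ np.linalg.inv(np.eye(m) - D @ Delta0))

h = 1e-6
fd = (F(Delta0 + h * Ddot) - F(Delta0 - h * Ddot)) / (2 * h)
print(np.max(np.abs(fd - rhs)))  # small: central difference matches (13)
```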

The following definition from [GO11, MO16] is useful.

###### Definition 3.2.

Let λ be a simple eigenvalue of a matrix M with associated right eigenvector x satisfying Mx=λx and left eigenvector y satisfying y∗M=λy∗. We refer to (λ, x, y) as an RP-compatible eigentriple of M if y∗x is real and positive and ∥x∥=∥y∥=1, and as a rightmost eigentriple if λ is a rightmost eigenvalue of M in C+.

Note that if (λ, x, y) is an RP-compatible eigentriple of M, so is (λ, eiθx, eiθy) for any θ∈R.

Then we have:

###### Lemma 3.3.

Given a smooth parametrization Δ(t) with ∥Δ(t)∥F=ε, let λ(t) be a continuously varying simple eigenvalue of

 M(Δ(t))=A+BΔ(t)(I−DΔ(t))−1C.

Then is differentiable with

 Re˙λ(t)=1y(t)∗x(t)Re(u(t)∗˙Δ(t)v(t))

where (λ(t), x(t), y(t)) is an RP-compatible eigentriple of M(Δ(t)) and

 u(t)=(I−Δ(t)D)−TBTy(t),v(t)=(I−DΔ(t))−1Cx(t).
###### Proof.

Applying standard eigenvalue perturbation theory [HJ90, Theorem 6.3.12], together with Lemma 3.1, we find that is differentiable with

 ˙λ=(y∗B(I−ΔD)−1˙Δ(I−DΔ)−1Cx)/(y∗x)=(u∗˙Δv)/(y∗x) (16)

where we omitted the dependence on for conciseness. The result is then immediate. ∎

In what follows, it is convenient to define Δ(t)=εE(t), so that ∥E(t)∥F=1 for all t. Consequently, we have

 Re˙λ(t)=εy(t)∗x(t)Re(u(t)∗˙E(t)v(t))=εy(t)∗x(t)⟨Re(u(t)v(t)∗),˙E(t)⟩, (17)

where, for R, S∈Rp×m,

 ⟨R,S⟩=TrRTS=∑i,jRijSij,

the trace inner product on Rp×m associated with the Frobenius norm. The condition that ∥E(t)∥F is constant is equivalent to

 ⟨E(t),˙E(t)⟩=0. (18)

Our aim, given E(t), is to choose ˙E(t) to maximize (17) subject to the constraint (18), leading to an optimization problem whose solution is given by the following lemma. The proof is a straightforward application of first-order optimality conditions; see also [GL13, Lemma 2.4].

###### Lemma 3.4.

Let E∈Rp×m have unit Frobenius norm, and let u∈Cp, v∈Cm be given complex vectors such that Re(uv∗) is not a multiple of E. A solution to the optimization problem

 ~Z=argmaxZ∈Ω Re(u∗Zv),Ω={Z∈Rp×m,∥Z∥F=1,⟨E,Z⟩=0} (19)

exists and it satisfies

 τ~Z=(Re(uv∗)−⟨E,Re(uv∗)⟩E), (20)

where τ is the Frobenius norm of the matrix on the right-hand side in (20).
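A quick numerical check of Lemma 3.4 (random illustrative data): the direction (20) is feasible for (19), and no randomly drawn feasible direction gives a larger value of Re(u∗Zv):

```python
import numpy as np

rng = np.random.default_rng(4)
p, m = 3, 4
E = rng.standard_normal((p, m))
E /= np.linalg.norm(E, 'fro')
u = rng.standard_normal(p) + 1j * rng.standard_normal(p)
v = rng.standard_normal(m) + 1j * rng.standard_normal(m)

R = np.real(np.outer(u, np.conj(v)))      # Re(u v*)
G = R - np.tensordot(E, R) * E            # right-hand side of (20)
Z = G / np.linalg.norm(G, 'fro')          # optimal feasible direction

def obj(W):
    """Re(u* W v), which equals <Re(u v*), W> for real W."""
    return np.real(np.conj(u) @ W @ v)

assert abs(np.tensordot(E, Z)) < 1e-12    # feasibility: <E, Z> = 0
for _ in range(100):                      # random feasible competitors
    W = rng.standard_normal((p, m))
    W -= np.tensordot(E, W) * E
    W /= np.linalg.norm(W, 'fro')
    assert obj(W) <= obj(Z) + 1e-12
print(obj(Z))  # equals the multiplier tau in (20)
```

The Cauchy-Schwarz argument behind the check: for any feasible W, Re(u∗Wv)=⟨G,W⟩≤∥G∥F, with equality at W=Z.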

This suggests consideration of the following ordinary differential equation (ODE) on the manifold of real matrices of unit Frobenius norm:

 ˙E(t)=Re(u(t)v(t)∗)−⟨E(t),Re(u(t)v(t)∗)⟩E(t), (21)

with and defined by

 u(t)=(I−εE(t)D)−TBTy(t),v(t)=(I−εDE(t))−1Cx(t) (22)

where (λ(t), x(t), y(t)) is a rightmost RP-compatible eigentriple for the matrix M(εE(t)) (see Definition 3.2). Assume the initial condition E(0)=E0, a given matrix with unit Frobenius norm, chosen so that M(εE0) has a unique rightmost eigenvalue (considering only eigenvalues in C+), and that this eigenvalue, λ(0), is simple.

### 3.1 Equilibrium Points of the ODE

We now focus on the properties of the ODE (21), in particular, characterizing equilibrium points.

###### Theorem 3.5.

Let E(t) with unit Frobenius norm satisfy the differential equation (21), initialized as described above. There exists tf>0 such that, for all t∈[0,tf):

• The eigenvalue λ(t) is the unique rightmost eigenvalue of M(εE(t)) in C+ and this eigenvalue is simple, so the ODE is well defined.

• ∥E(t)∥F=1.

• Re˙λ(t)≥0.

Furthermore, at a given value t∈[0,tf), the following three conditions are equivalent:

• Re˙λ(t)=0.

• One of the following two mutually exclusive conditions holds:

 Re(u(t)v(t)∗)=0  or  E(t)=Re(u(t)v(t)∗)∥Re(u(t)v(t)∗)∥F. (23)
• ˙E(t)=0.

Finally, if the second alternative in (23) holds at t, then there does not exist any locally differentiable path of perturbations Δ(s) with ∥Δ(s)∥F=ε and Δ(0)=εE(t) for which the rightmost eigenvalue of M(Δ(s)), say λ(s), has Re˙λ(0)>0.

###### Proof.

(1) Because the rightmost eigenvalue λ(0) is unique and simple for t=0, the same property must hold for sufficiently small positive t, establishing the existence of tf>0 such that the ODE is well defined for all t∈[0,tf). (2) Taking the trace inner product of E(t) with the ODE (21), we find that (18) holds for ˙E(t), so the norm is preserved by the ODE. (3) Substituting the ODE (21) into (17), we obtain from the Cauchy-Schwarz inequality that

 Re˙λ(t)=εy(t)∗x(t)[∥Re(u(t)v(t)∗)∥2F−⟨Re(u(t)v(t)∗),E(t)⟩2]≥0, (24)

establishing (i). For (ii), equality holds in (24) at a given t if and only if one of the two alternatives in (23) holds. That (iii) is an equivalent condition follows directly from the ODE (21). The final statement follows from the optimality property of Lemma 3.4. ∎

Thus, equilibria of the ODE come in two flavors. When the second alternative in (23) holds, a first-order optimality condition for λ(t) to be a rightmost point of σR,∥⋅∥Fε(A,B,C,D) holds, implying in particular that λ(t) is on the boundary of σR,∥⋅∥Fε(A,B,C,D). However, when the first alternative holds, we cannot make any such claim. In the special case of pseudospectra, that is, with B=C=I and D=0, the outer product u(t)v(t)∗ reduces to y(t)x(t)∗, whose real part cannot be zero, as was shown in [GL13, Sec. 2.1.6]: in fact, the proof given there shows that Re(y(t)x(t)∗) has rank one when λ(t) is real and rank two when it is complex.

If Re(u(t)v(t)∗)=0, we say that E(t) is a static point. Note that this generalizes the notions of uncontrollability and unobservability, because if λ(t) is unobservable, then Cx(t)=0, implying v(t)=0, while if it is uncontrollable, then BTy(t)=0, implying u(t)=0. Example 2.11 given earlier shows that it is possible that Re(u(t)v(t)∗)=0 even if λ(t) is controllable and observable: there, u(t) and v(t) are both nonzero but Re(u(t)v(t)∗)=0. However, we do not know whether it is possible for the solution of the ODE to converge to a static point if it is not initialized there.

The next lemma explicitly states formulas for and bounds on its rank.

###### Lemma 3.6.

Fix t, and let u=u(t) and v=v(t) be defined by (22) for some eigenvectors x=x(t) and y=y(t). If λ=λ(t) is real, then we can choose x, y, and hence u, v, to be real, with Re(uv∗)=uvT having rank 1. If λ is not real, set X=(Rex, Imx), Y=(Rey, Imy), so Re(yx∗)=YXT. Then

 Re(uv∗)=(I−εED)−TBTYXTCT(I−εDE)−T

with

 rank(Re(uv∗))=rank(BTYXTCT)≤2.

Furthermore, since (I−εED)−T and (I−εDE)−T are invertible, Re(uv∗)=0 if and only if BTYXTCT=0.

###### Proof.

The first statement follows from the definition (22), noting that B, C, D, and E are real. The rank results follow since the rank of a product is at most the minimum rank of its factors, and rank is preserved by the invertible factors. ∎

As already mentioned, the argument given in [GL13, Sec. 2.1.6] shows that when λ is not real, the matrix Re(yx∗)=YXT has rank two, so when min(p,m)≥2, we can expect that Re(uv∗) will also have rank two for generic B and C.

If the ODE is initialized so that, for all t≥0, the rightmost eigenvalue λ(t) of M(εE(t)) is unique (considering only eigenvalues in C+) and is simple, then we can take tf=∞ in Theorem 3.5. If we further suppose that the number of equilibrium points of the ODE is finite, then since (21) is a gradient system (with Lyapunov function −Reλ(t)), we can apply LaSalle's theorem [LR14], allowing us to state that E(t) converges to an equilibrium point ¯E and hence λ(t) converges to some ¯λ. Suppose that ¯λ is a unique rightmost eigenvalue of M(ε¯E) and is simple, with associated RP-compatible eigentriple (¯λ, ¯x, ¯y), and define

 ~u=(I−ε~ED)−TBT~y,~v=(I−εD~E)−1C~x,

by analogy with (22). Then, if Re(¯u¯v∗)≠0, we have, by taking limits in Theorem 3.5, that ¯E=Re(¯u¯v∗)/∥Re(¯u¯v∗)∥F, and that there does not exist any locally differentiable path of perturbations Δ(s) with ∥Δ(s)∥F=ε and Δ(0)=ε¯E for which the rightmost eigenvalue of M(Δ(s)), say λ(s), has Re˙λ(0)>0.

To summarize this section: we have characterized the equilibria of the ODE (21) as those which have a first-order local optimality property with respect to rightmost points of σR,∥⋅∥Fε(A,B,C,D), with exceptions that are apparently nongeneric. A natural idea would be to attempt to approximate αR,∥⋅∥Fε(A,B,C,D) by integrating the ODE (21) numerically to determine its equilibria, guaranteeing monotonicity by step-size control. Such a method would generally find locally rightmost points of σR,∥⋅∥Fε(A,B,C,D), although it could not be guaranteed to find globally rightmost points. However, a serious drawback of this approach is that the solution E(t) (and hence most likely its discretization) does not preserve the low-rank structure, even if the initial point E(0) and the limit of E(t) as t→∞ both have rank two. Although it is possible to consider an ODE defined on the manifold of rank-two matrices, as done in [GL13] and [GM15] for the special case B=C=I, D=0, we instead develop an efficient discrete iteration that is nonetheless based on the ODE (21).
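To make the discussion concrete, here is a minimal sketch that integrates (21) on a small random system. It is illustrative only: the data are assumed, D=0 for simplicity, and a plain explicit Euler step with renormalization stands in for proper step-size control:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, m = 8, 2, 3
A = rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # shifted to the left
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
eps = 0.1                                           # D = 0 here

def rightmost(Mat):
    """Rightmost eigentriple, rotated so that y*x is real and positive."""
    w, Vr = np.linalg.eig(Mat)
    i = int(np.argmax(w.real))
    lam, x = w[i], Vr[:, i]
    wl, Vl = np.linalg.eig(Mat.conj().T)            # left eigenvectors
    j = int(np.argmin(np.abs(wl - np.conj(lam))))
    y = Vl[:, j]
    s = y.conj() @ x
    return lam, x, y * (s / abs(s))                 # now y*x = |s| > 0

E = rng.standard_normal((p, m))
E /= np.linalg.norm(E, 'fro')
h = 0.01
reals = []
for _ in range(200):
    lam, x, y = rightmost(A + eps * B @ E @ C)
    reals.append(lam.real)
    u, v = B.T @ y, C @ x                           # (22) with D = 0
    R = np.real(np.outer(u, np.conj(v)))
    E = E + h * (R - np.tensordot(E, R) * E)        # Euler step on (21)
    E /= np.linalg.norm(E, 'fro')                   # stay on the unit sphere
print(reals[0], reals[-1])  # Re(lambda) grows along the flow
```

As the section notes, a practical method replaces this dense integration by a low-rank discrete iteration.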

## 4 An Iterative Method to Approximate the Frobenius-norm Real Spectral Value Set Abscissa

As in the previous section, assume ε is fixed with 0<ε∥D∥2<1. Following an idea briefly mentioned in [GM15], we consider the following implicit-explicit Euler discretization of (21) with a variable step-size hk:

 Ek+1=Ek+hk(Re(uk+1v∗k+1)−⟨Ek+1,Re(uk+1v∗k+1)⟩Ek) (25)

where

 uk+1=(I−εEkD)−TBTyk,vk+1=(I−εDEk)−1Cxk

and (λk, xk, yk) is a rightmost RP-compatible eigentriple of M(εEk). The method is clearly consistent and converges with order 1 with respect to hk.

###### Lemma 4.1.

Let xk, yk be given complex vectors with ∥xk∥=∥yk∥=1, let uk+1, vk+1 be defined as above with Re(uk+1v∗k+1)≠0, and set hk=∥Re(uk+1v∗k+1)∥−1F. Then the difference equation (25) has the solution

 Ek+1=Re(uk+1v∗k+1)∥Re(uk+1v∗k+1)∥F (26)

as long as the rightmost eigenvalue of M(εEk) is simple and unique (considering only those in C+) and as long as Re(uk+1v∗k+1)≠0, for all k.

###### Proof.

The result is easily verified by substituting (26) into (25). The assumptions ensure that the difference equation is well defined. ∎

Equivalently, let εEk=εUkVTk be the current perturbation, with ∥Ek∥F=1, and let λk be a rightmost eigenvalue of M(εEk) with RP-compatible eigenvectors xk, yk. Then by setting

 Xk=(Rexk,Imxk), Yk=(Reyk,Imyk)

we can write (26) in the form

 Ek+1=Uk+1VTk+1 with ∥Uk+1VTk+1∥F=1 (27)

where

 ˆUk+1 =(I−εUkVTkD)−TBTYk, (28) ˆVk+1 =(I−εDUkVTk)−1CXk, (29) βk+1 =∥ˆUk+1ˆVTk+1∥−1F, (30) Uk+1 =√βk+1ˆUk+1, (31) Vk+1 =√βk+1ˆVk+1. (32)

Since Ek=UkVTk has rank at most two, we can simplify these expressions using the Sherman-Morrison-Woodbury formula [GV83] as follows:

 (I−εUkVTkD)−1 =I+εUk(I−εVTkDUk)−1VTkD, (33) (I−εDUkVTk)−1 =I+εDUk(I−εVTkDUk)−1VTk. (34)

Note that VTkDUk∈R2×2 and I−εVTkDUk is invertible since ε∥D∥2<1 by assumption. The second formula (34) can also be used to simplify the definition of the perturbed system matrix M(Δk) in (3) as follows:

 M(Δk) =M(εEk)=M(εUkVTk) =A+εBUkVTk(I−εDUkVTk)−1C =A+εBUkVTk(I+εDUk(I−εVTkDUk)−1VTk)C =A+(εBUk)[I+ε(VTkDUk)(I−εVTkDUk)−1](VTkC) (35)

The product UkVTk is never computed explicitly, but retained in factored form, so that the eigenvalues of M(Δk) with largest real part can be computed efficiently by an iterative method. The Frobenius norm of the product UVT can be obtained using the following equivalence:

 ∥UVT∥F=[Tr(VUTUVT)]12=[Tr((UTU)(VTV))]12

which requires only inner products to compute the 2×2 matrices UTU and VTV.
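Both computational shortcuts, the rank-2 Sherman-Morrison-Woodbury identity (33) and the trace formula for ∥UVT∥F, are easy to verify numerically (random illustrative factors; the block below is a sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(6)
p, m, r = 5, 4, 2
U = rng.standard_normal((p, r))
V = rng.standard_normal((m, r))
D = 0.2 * rng.standard_normal((m, p))
eps = 0.3

# (33): a p x p inverse reduced to an r x r (here 2 x 2) inverse
lhs = np.linalg.inv(np.eye(p) - eps * U @ V.T @ D)
rhs = np.eye(p) + eps * U @ np.linalg.inv(np.eye(r) - eps * V.T @ D @ U) @ V.T @ D
print(np.max(np.abs(lhs - rhs)))

# ||U V^T||_F from two r x r Gram matrices, without forming U V^T
norm_fast = np.sqrt(np.trace((U.T @ U) @ (V.T @ V)))
norm_direct = np.linalg.norm(U @ V.T, 'fro')
print(norm_fast, norm_direct)
```

For large sparse systems, only the Gram-matrix route is affordable, since U V^T is a dense p×m matrix.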

As with the spectral value set abscissa (SVSA) iteration for complex-valued spectral value sets given in [GGO13], there is no guarantee that the full update step for the real Frobenius-norm bounded case will satisfy monotonicity, that is, Reλk+1≥Reλk may or may not hold, where λk+1 is a rightmost eigenvalue of (35). However, the line search approach used to make a monotonic variant [GGO13, Sec. 3.5] does extend to the real rank-2 iteration described above, although, as the derivation is quite lengthy [Mit14, Sec. 6.3.3], we only outline the essential components here. Let the pair Uk and Vk define the current perturbation, with ∥UkVTk∥F=1, and let the pair Uk+1 and Vk+1 define the updated perturbation described above, with ∥Uk+1VTk+1∥F=1. Consider the evolution of a continuously varying simple rightmost eigenvalue λ(t), defined on t∈[0,1], of the perturbed system matrix. The interpolated perturbation is defined using U(t) and V(t), normalized such that the interpolated perturbation also has unit Frobenius norm; that is, λ(t) is an eigenvalue of

 M(Δ(t))=A+BΔ(t)(I−DΔ(t))−1C, (36)

where

 Δ(t)=εU(t)V(t)T/∥U(t)V(t)T∥F (37)

where U(t)=(1−t)Uk+tUk+1 and V(t)=(1−t)Vk+tVk+1. In [Mit14, Sec. 6.3.3] it is shown that, as long as Ek+1=±Ek does not hold, Reλ′(0)>0 can be ensured, though it may require flipping the signs of both Uk+1 and Vk+1. This result allows a line search to be employed to find a t∈(0,1] such that Reλ(t)>Reλ(0) is guaranteed in an actual implementation of the iteration. We now have all the essential pieces necessary to describe our method for approximating αR,∥⋅∥Fε(A,B,C,D); these are given in Algorithm SVSA-RF.
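The basic update (27)–(32) can be sketched densely as follows. This is an illustrative toy (random assumed data, dense eigensolvers, no line search safeguard), not the paper's Algorithm SVSA-RF, which uses sparse iterative eigensolvers and the monotonicity line search just described:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, m = 10, 3, 4
A = rng.standard_normal((n, n)) - 2.5 * np.eye(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = 0.1 * rng.standard_normal((m, p))
eps = 0.5 / np.linalg.norm(D, 2)          # ensures eps*||D||_2 < 1

def rightmost(Mat):
    """Rightmost eigentriple normalized so that y*x is real and positive."""
    w, Vr = np.linalg.eig(Mat)
    i = int(np.argmax(w.real))
    lam, x = w[i], Vr[:, i]
    wl, Vl = np.linalg.eig(Mat.conj().T)
    j = int(np.argmin(np.abs(wl - np.conj(lam))))
    y = Vl[:, j]
    s = y.conj() @ x
    return lam, x, y * (s / abs(s))

# E_k = U_k V_k^T, normalized to unit Frobenius norm via the trace formula
U = rng.standard_normal((p, 2))
V = rng.standard_normal((m, 2))
nrm = np.sqrt(np.trace((U.T @ U) @ (V.T @ V)))
U, V = U / np.sqrt(nrm), V / np.sqrt(nrm)

lams = []
for _ in range(30):
    E = U @ V.T
    Mk = A + eps * B @ E @ np.linalg.solve(np.eye(m) - eps * D @ E, C)
    lam, x, y = rightmost(Mk)
    lams.append(lam.real)
    X = np.column_stack((x.real, x.imag))                        # X_k
    Y = np.column_stack((y.real, y.imag))                        # Y_k
    Uh = np.linalg.solve((np.eye(p) - eps * E @ D).T, B.T @ Y)   # (28)
    Vh = np.linalg.solve(np.eye(m) - eps * D @ E, C @ X)         # (29)
    beta = 1.0 / np.sqrt(np.trace((Uh.T @ Uh) @ (Vh.T @ Vh)))    # (30)
    U, V = np.sqrt(beta) * Uh, np.sqrt(beta) * Vh                # (31)-(32)
print(lams[0], lams[-1])
```

In the large-scale setting, the dense eig calls would be replaced by a sparse iterative solver applied to the factored form (35).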