
Differential equations for real-structured (and unstructured) defectivity measures

Abstract.

Let $A$ be a complex or real $n \times n$ matrix with all distinct eigenvalues. We propose a new method for the computation of both the unstructured and the real-structured (if the matrix is real) distance $w_{\mathbb{K}}(A)$ (where $\mathbb{K} = \mathbb{C}$ if general complex matrices are considered and $\mathbb{K} = \mathbb{R}$ if only real matrices are allowed) of the matrix $A$ from the set of defective matrices, that is the set of those matrices with at least one multiple eigenvalue whose algebraic multiplicity is larger than its geometric multiplicity. For $\varepsilon \le w_{\mathbb{K}}(A)$, this problem is closely related to the computation of the most ill-conditioned $\varepsilon$-pseudoeigenvalues of $A$, that is points in the $\varepsilon$-pseudospectrum of $A$ characterized by the highest condition number. The method we propose couples a system of differential equations on a low-rank (possibly structured) manifold, which computes the $\varepsilon$-pseudoeigenvalue of $A$ which is closest to coalesce, with a fast Newton-like iteration aiming to determine the minimal value $\varepsilon$ such that such an $\varepsilon$-pseudoeigenvalue becomes defective. The method has a local behaviour; this means that in general we find upper bounds for $w_{\mathbb{K}}(A)$. However, they usually provide good approximations, in those (simple) cases where we can check this. The methodology can be extended to a structured matrix, where it is required that the distance is computed within some manifold defining the structure of the matrix. In this paper we extensively examine the case of real matrices but we also consider pattern structures. As far as we know there do not exist methods in the literature able to compute such a structured distance.

Key words and phrases:
Pseudospectrum, structured pseudospectrum, low rank differential equations, defective eigenvalue, distance to defectivity, Wilkinson problem.
2010 Mathematics Subject Classification:
15A18, 65K05

1. Introduction

Let $A \in \mathbb{K}^{n,n}$ be a complex ($\mathbb{K} = \mathbb{C}$) or real ($\mathbb{K} = \mathbb{R}$) matrix with all distinct eigenvalues. We are interested in computing the following distance,

(1.1)  $w_{\mathbb{K}}(A) = \inf \left\{ \|E\|_F \,:\, E \in \mathbb{K}^{n,n} \ \text{and} \ A + E \ \text{is defective} \right\},$

where $\|\cdot\|_F$ is the Frobenius norm (when $\mathbb{K} = \mathbb{C}$ this turns out to be equivalent to considering the $2$-norm). We recall that a matrix is defective if its Jordan canonical form has at least one non-diagonal block associated to an eigenvalue $\lambda$. Let us introduce the $\varepsilon$-pseudospectrum of $A$,

(1.2)  $\Lambda_\varepsilon(A) = \left\{ \lambda \in \mathbb{C} \,:\, \lambda \in \Lambda(A + E) \ \text{for some} \ E \in \mathbb{K}^{n,n} \ \text{with} \ \|E\|_F \le \varepsilon \right\},$

where $\Lambda(M)$ denotes the spectrum of a matrix $M$; we refer to the classical monograph by Trefethen and Embree [TE05] for an extensive treatise. In his seminal work [Wil65], Wilkinson defined the condition number of a simple eigenvalue $\lambda$ of $A$ as

$\kappa(\lambda, A) = \frac{1}{|y^* x|}\,,$

$x$ and $y$ being the normalized right and left eigenvectors associated to $\lambda$.

Observing that $\kappa(\lambda, A + E) = \infty$ for a defective eigenvalue $\lambda$ since $y^* x = 0$, the search for the closest defective matrix can be pursued by looking for the minimal value $\varepsilon$ such that there exists $\lambda \in \Lambda_\varepsilon(A)$ with $y^* x = 0$, where $\lambda \in \Lambda(A + E)$ for some $E$ of norm $\varepsilon$, $y$ and $x$ being the normalized left and right eigenvectors associated to $\lambda$.
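As a concrete illustration of this quantity (not part of the algorithm developed below), the following Python sketch computes the Wilkinson condition number of every eigenvalue of a given matrix; it assumes SciPy is available and the routine name is ours.

```python
import numpy as np
from scipy.linalg import eig

def eigen_condition_numbers(A):
    """Wilkinson condition numbers 1/|y^* x| of the eigenvalues of A."""
    lam, Y, X = eig(A, left=True, right=True)  # columns: left/right eigenvectors
    kappa = np.empty(lam.size)
    for k in range(lam.size):
        x = X[:, k] / np.linalg.norm(X[:, k])
        y = Y[:, k] / np.linalg.norm(Y[:, k])
        kappa[k] = 1.0 / abs(np.vdot(y, x))    # grows unboundedly near defectivity
    return lam, kappa
```

Large values of $\kappa$ signal eigenvalues close to coalescence, which is the regime the method below explores.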

The distance $w_{\mathbb{C}}(A)$ was introduced by Demmel [Dem83] in his well-known PhD thesis and has been studied by several authors, not only for its theoretical interest but also for its practical one (see, e.g., [Ala06] for the bounds on $w_{\mathbb{C}}(A)$ presented in the literature, and [AFS13] and references therein for physical applications). An interesting formula for computing this distance in the $2$-norm has been given by Malyshev [M99].

In the recent and very interesting paper by Alam, Bora, Byers and Overton [ABBO11], which provides an extensive historical analysis of the problem, the authors have shown that when $\mathbb{K} = \mathbb{C}$ the infimum in (1.1) is actually a minimum. Furthermore, in the same paper, the authors have proposed a computational approach to approximate the nearest defective matrix by a variant of Newton's method. Such a method is well suited to dense problems of moderate size and - even for a real matrix - computes a nearby defective complex matrix, that is a minimizer for (1.1) in $\mathbb{C}^{n,n}$. A recent fast algorithm has been given in [AFS13], which is based on an extension of the implicit determinant method proposed in [PS05]. This method also deals with the unstructured distance $w_{\mathbb{C}}(A)$ and provides a nearby defective complex matrix.

The aim of this paper is to provide a different approach to the approximation of $w_{\mathbb{K}}(A)$ for both $\mathbb{K} = \mathbb{C}$ and $\mathbb{K} = \mathbb{R}$, which may be extended to the approximation of more general structured distances, for example when restricting the admissible perturbations of $A$ to the set of matrices with a prescribed nonzero pattern. However, a rigorous analysis of general structures is beyond the scope of this paper and we limit the discussion to complex and real perturbations.

The methodology we propose is split into two parts. First, for a given $\varepsilon > 0$ we are interested in computing the following quantity,

(1.3)  $m(\varepsilon) = \min_{E \in \mathcal{S}_1} \left| y(\varepsilon E)^* x(\varepsilon E) \right|,$

where $\mathcal{S}_1$ denotes the unit sphere of $\mathbb{K}^{n,n}$ with respect to the Frobenius norm and $x(\varepsilon E)$, $y(\varepsilon E)$ are normalized right and left eigenvectors of $A + \varepsilon E$ associated to a (suitably selected) simple eigenvalue $\lambda(\varepsilon E)$.

Secondly, we are interested in finding the smallest solution $\varepsilon$ to the equation $m(\varepsilon) = 0$, or more in general, possibly introducing a small threshold $\delta > 0$, to the equation

(1.4)  $m(\varepsilon) = \delta.$

Note that - in this second case - we obtain anyway, as a byproduct of our method, an estimate of a solution of $m(\varepsilon) = 0$. We remark that any solution to $m(\varepsilon) = 0$ gives in general an upper bound to $w_{\mathbb{K}}(A)$.
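To fix ideas, here is a minimal driver for the second phase, assuming a routine `m_of_eps` that evaluates (1.3) (e.g. by integrating the differential equations of Sections 3-6 to a stationary point). The name and the safeguarded secant iteration are ours, a simple placeholder for the faster Newton-like iteration developed in Section 7.

```python
def smallest_eps(m_of_eps, eps_lo, eps_hi, delta=1e-8, tol=1e-10):
    """Solve m(eps) = delta for the smallest eps in [eps_lo, eps_hi],
    assuming m decreases monotonically across the bracket."""
    m_lo, m_hi = m_of_eps(eps_lo), m_of_eps(eps_hi)
    assert m_lo > delta > m_hi, "bracket must straddle the target value"
    while eps_hi - eps_lo > tol:
        # secant step on m(eps) - delta, kept strictly inside the bracket
        eps = eps_lo + (delta - m_lo) * (eps_hi - eps_lo) / (m_hi - m_lo)
        eps = min(max(eps, eps_lo + 0.1 * (eps_hi - eps_lo)),
                  eps_hi - 0.1 * (eps_hi - eps_lo))
        m = m_of_eps(eps)
        if m > delta:
            eps_lo, m_lo = eps, m
        else:
            eps_hi, m_hi = eps, m
    return eps_hi
```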

By the results of Alam and Bora [AB05], we deduce that the distance we compute for $\mathbb{K} = \mathbb{C}$ is the same one obtains by replacing the Frobenius norm by the $2$-norm. Instead, for the real case, the distance we compute is in general larger than the corresponding distance in the $2$-norm.

The paper is organized as follows. In Section 2 we analyze the case of general complex perturbations and in Section 3 we derive a system of differential equations which forms the basis of our computational framework. Exploiting a low-rank property of the stationary points of the system of ODEs, which identify the matrices allowing to compute approximations of (1.1), in Section 4 we consider the projected system of ODEs on the corresponding low-rank manifold and prove some peculiar properties of the corresponding flow. In Section 5 we pass to consider the case of real matrices with real perturbations and obtain a new system of ODEs for the computation of (1.3), which is discussed in Section 6. In Section 7 we present some theoretical results which allow us to obtain a fast method to solve (1.4) and compute approximations to $w_{\mathbb{K}}(A)$. In the same section we present the complete algorithm. Afterwards, in Section 8 we focus our attention on a few implementation issues and in Section 9 we show some numerical examples. Finally, in Section 10 we conclude the paper by providing an extension of the method to matrices with a prescribed sparsity pattern.

2. The complex case

We denote by $\|B\|_F = \langle B, B \rangle^{1/2}$ the Frobenius norm of the matrix $B \in \mathbb{C}^{n,n}$, where, for any given $X, Y \in \mathbb{C}^{n,n}$, $\langle X, Y \rangle = \mathrm{Re}\,\mathrm{trace}(X^* Y)$.

We also need the following definition.

Definition 2.1.

Let $B \in \mathbb{C}^{n,n}$ be a singular matrix with a simple zero eigenvalue. The group inverse (reduced resolvent) of $B$, denoted by $B^\#$, is the unique matrix satisfying $B B^\# B = B$, $B^\# B B^\# = B^\#$ and $B B^\# = B^\# B$.
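For a singular $B$ with a simple zero eigenvalue the group inverse can be obtained from an ordinary linear solve. The following sketch (the routine name is ours) uses the standard identity $B^\# = (B + P)^{-1} - P$, where $P$ is the spectral projector onto the null space; the three conditions of Definition 2.1 can be checked directly.

```python
import numpy as np

def group_inverse(B, x, y):
    """Group inverse B^# of a singular B with a simple zero eigenvalue.

    x, y: right/left null vectors (B x = 0, y^* B = 0, y^* x != 0).
    Uses B^# = (B + P)^{-1} - P with the spectral projector
    P = x y^* / (y^* x).
    """
    P = np.outer(x, y.conj()) / np.vdot(y, x)
    return np.linalg.solve(B + P, np.eye(B.shape[0])) - P
```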

Given a matrix function $M(t)$, smoothly depending on the real parameter $t$, we recall results concerning the derivatives of the right and left eigenvectors $x(t)$ and $y(t)$, respectively, associated to a simple eigenvalue $\lambda(t)$ of $M(t)$. We denote by $G(t)$ the group inverse of $M(t) - \lambda(t) I$ and assume $x(t)$ and $y(t)$ smoothly depending on $t$ and normalized such that $\|x(t)\| = \|y(t)\| = 1$, $x(t)^* \dot x(t) = 0$ and $y(t)^* \dot y(t) = 0$. In the sequel, we shall often omit the explicit dependence on $t$ in the notation. The following expressions for the derivatives can be found in [MS88, Theorem 2],

(2.1)  $\dot x = \left( x x^* - I \right) G\, \dot M\, x, \qquad \dot y^* = y^*\, \dot M\, G \left( y y^* - I \right).$

Given $\varepsilon > 0$, let $\lambda_0$ be a simple eigenvalue of $A$. We denote by $\mathcal{S}_1$ the unit hyper-sphere in $\mathbb{C}^{n,n}$ with respect to the Frobenius norm. By continuity, there exists $\varepsilon^* > 0$ such that for any $\varepsilon \le \varepsilon^*$ and $E \in \mathcal{S}_1$ the matrix $A + \varepsilon E$ has a simple eigenvalue $\lambda = \lambda(\varepsilon E)$ close to $\lambda_0$.

Lemma 2.2.

Given $E \in \mathcal{S}_1$, let

(2.2)  $Z = Z(E) = y y^* G^* + G^* x x^*,$

$G$ being the group inverse of $A + \varepsilon E - \lambda I$, whose left and right null vectors are $y^*$ and $x$ (recall we assume $\|x\| = \|y\| = 1$, and note that $y^* x \neq 0$ as $\lambda$ is simple). Then, for any smooth path $E(t) \subset \mathcal{S}_1$ we have,

(2.3)  $\frac{d}{dt}\, |y^* x| = \varepsilon\, |y^* x|\, \langle Z, \dot E \rangle.$

In particular, the steepest descent direction is given by

(2.4)  $\dot E = -\nu \left( Z - \langle Z, E \rangle E \right),$

where $\nu > 0$ is such that the right-hand side has unit Frobenius norm.

Proof.

By (2.1) and using that $y^* G = 0$, $G x = 0$, we get, with $M(t) = A + \varepsilon E(t)$,

$\frac{d}{dt}(y^* x) = \dot y^* x + y^* \dot x = \varepsilon\, (y^* x) \left( y^* \dot E\, G\, y + x^* G\, \dot E\, x \right),$

that is, recalling the definition (2.2), $\frac{d}{dt}(y^* x) = \varepsilon\, (y^* x)\, \mathrm{trace}(Z^* \dot E)$, from which (2.3) follows using that $\frac{d}{dt}|y^* x| = \mathrm{Re}\big( \overline{y^* x}\; \frac{d}{dt}(y^* x) \big) / |y^* x|$. The steepest descent direction is then given by the solution of the variational problem $\min \{ \langle Z, D \rangle : \|D\|_F = 1, \ \langle E, D \rangle = 0 \}$, whose solution is the right-hand side of (2.4). ∎

Remark 2.3.

A consequence of Lemma 2.2 is that the gradient of the function $E \mapsto |y(E)^* x(E)|$ is proportional to $Z(E)$.
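The following Python sketch makes these formulas concrete: it assembles $Z$ for the tracked eigenvalue and verifies (2.3) against a finite difference. It reuses `group_inverse` from the sketch above; the routine name and the eigenvalue tracking by distance are our own conventions, not the paper's.

```python
import numpy as np
from scipy.linalg import eig

def coupling_and_Z(A, eps, E, lam_target):
    """Return |y^* x|, the matrix Z of (2.2), and the eigenvalue of
    M = A + eps*E closest to lam_target."""
    M = A + eps * E
    lam, Y, X = eig(M, left=True, right=True)
    k = np.argmin(abs(lam - lam_target))
    x = X[:, k] / np.linalg.norm(X[:, k])
    y = Y[:, k] / np.linalg.norm(Y[:, k])
    G = group_inverse(M - lam[k] * np.eye(M.shape[0]), x, y)
    Gh = G.conj().T
    Z = np.outer(y, y.conj()) @ Gh + Gh @ np.outer(x, x.conj())
    return abs(np.vdot(y, x)), Z, lam[k]

# finite-difference check of (2.3): d/dt |y^* x| = eps |y^* x| <Z, Edot>
rng = np.random.default_rng(0)
n, eps, h = 6, 0.1, 1e-7
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n)); E /= np.linalg.norm(E)
D = rng.standard_normal((n, n)); D -= np.trace(E.T @ D) * E  # tangent to S_1
D /= np.linalg.norm(D)
c0, Z, lam = coupling_and_Z(A, eps, E, np.linalg.eigvals(A)[0])
c1, _, _ = coupling_and_Z(A, eps, E + h * D, lam)
print((c1 - c0) / h, eps * c0 * np.real(np.trace(Z.conj().T @ D)))
```

The two printed values should agree up to the finite-difference error.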

Let $\mathcal{M}_2$ be the manifold of $n \times n$ complex matrices of rank-$2$. Any $E \in \mathcal{M}_2$ can be (non-uniquely) represented in the form $E = U S V^*$, where $U, V \in \mathbb{C}^{n,2}$ have orthonormal columns and $S \in \mathbb{C}^{2,2}$ is nonsingular. We will use instead a unique decomposition in the tangent space: every tangent matrix $\delta E \in T_E \mathcal{M}_2$ is of the form,

(2.5)  $\delta E = \delta U\, S\, V^* + U\, \delta S\, V^* + U\, S\, \delta V^*,$

where $\delta S \in \mathbb{C}^{2,2}$, and $\delta U, \delta V \in \mathbb{C}^{n,2}$ are such that

(2.6)  $U^* \delta U = 0, \qquad V^* \delta V = 0.$

This representation is discussed in [KL07] for the case of real matrices, but the extension to the case of complex matrices is straightforward.

Under the assumptions above, the orthogonal projection of a matrix $Z \in \mathbb{C}^{n,n}$ onto the tangent space $T_E \mathcal{M}_2$ is given by

(2.7)  $P_E(Z) = Z - (I - U U^*)\, Z\, (I - V V^*).$
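In code, (2.7) can be applied without forming the $n \times n$ projectors; a minimal sketch (function name ours):

```python
import numpy as np

def project_tangent(U, V, Z):
    """Orthogonal projection (2.7) of Z onto the tangent space at E = U S V^*."""
    UhZ = U.conj().T @ Z                 # contributes U U^* Z
    ZV = Z @ V                           # contributes Z V V^*
    # P_E(Z) = U U^* Z + Z V V^* - U U^* Z V V^*
    return U @ UhZ + (ZV - U @ (U.conj().T @ ZV)) @ V.conj().T
```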

3. System of ODEs

Let $\varepsilon \le \varepsilon^*$, $E \in \mathcal{S}_1$, and $\lambda$, $x$, $y$, $G$ as before. The following theorem characterizes an evolution on $\mathcal{S}_1$ governed by the steepest descent direction of $E \mapsto |y^* x|$, where $x$ and $y$ are unit-norm right and left eigenvectors, respectively, associated to the simple eigenvalue $\lambda$ of $A + \varepsilon E$.

Theorem 3.1.

Given $E(0) \in \mathcal{S}_1$, consider the differential system,

(3.1)  $\dot E = -Z + \langle Z, E \rangle E,$

where $Z = Z(E)$ is defined in (2.2).

  1. The right-hand side of (3.1) is antiparallel to the projection onto the tangent space $T_E \mathcal{S}_1$ of the gradient of $|y^* x|$. More precisely,

    (3.2)  $\frac{d}{dt}\, |y^* x| = -\varepsilon\, |y^* x| \left( \|Z\|_F^2 - \langle Z, E \rangle^2 \right) \le 0.$

    In particular, $\frac{d}{dt}|y^* x| = 0$ if and only if $\dot E = 0$.

  2. The matrix $Z$ defined in (2.2) has rank $2$ if $x$ and $y$ are linearly independent, whereas $Z = 0$ (the zero matrix) if $y = \gamma x$ for some $\gamma \in \mathbb{C}$. As a consequence, the equilibria of system (3.1) for which $Z \neq 0$ (in particular, the minimizers of $|y^* x|$ on $\mathcal{S}_1$) are rank-$2$ matrices.

Proof.

1) The assertion is an immediate consequence of Lemma 2.2. In particular, the derivative (3.2) is obtained by plugging the right-hand side of (3.1) into (2.3); it is non-positive by the Cauchy-Schwarz inequality since $\|E\|_F = 1$.

2) By (2.2) the matrix $Z$ has rank not greater than $2$. Moreover, it has rank less than $2$ if and only if $G^* x = \alpha y$ or $G y = \alpha x$ for some $\alpha \in \mathbb{C}$. We claim the constant $\alpha$ must vanish in both cases. Indeed, since $G x = 0$ and $y^* G = 0$, we have $0 = x^* G^* x = \alpha\, x^* y$ or $0 = y^* G y = \alpha\, y^* x$, whence $\alpha = 0$ as $y^* x \neq 0$. Therefore, the rank of $Z$ is less than $2$ if and only if $G^* x = 0$ or $G y = 0$. As $G$ has rank $n-1$, both conditions are equivalent to having $y = \gamma x$ for some $\gamma \neq 0$, that is to say $Z = 0$. Finally, the assertion concerning the equilibria comes from the fact that $E$ is proportional to $Z$ at a stationary point. ∎
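A crude integrator for (3.1) - normalized explicit Euler with eigenvalue tracking, reusing `coupling_and_Z` from the Section 2 sketch - can serve as a reference implementation; step-size control, omitted here, is needed in practice.

```python
import numpy as np

def minimize_coupling(A, eps, E0, lam0, h=0.05, steps=2000, tol=1e-12):
    """Integrate (3.1) until stationarity; returns (E, |y^* x|, lambda)."""
    E, lam = E0 / np.linalg.norm(E0), lam0
    for _ in range(steps):
        c, Z, lam = coupling_and_Z(A, eps, E, lam)
        Edot = -Z + np.real(np.trace(Z.conj().T @ E)) * E
        if np.linalg.norm(Edot) < tol:   # stationary point: E proportional to Z
            break
        E = E + h * Edot
        E /= np.linalg.norm(E)           # re-normalize onto S_1
    return E, c, lam
```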

Theorem 3.2.

Let $x_0$ and $y_0$ be unit-norm right and left eigenvectors, respectively, associated to the simple eigenvalue $\lambda_0$ of the matrix $A$. If $x_0$ and $y_0$ are linearly independent then, for $\varepsilon$ small enough, the system (3.1) has only two stationary points, that correspond to the minimum and the maximum of $|y^* x|$ on $\mathcal{S}_1$, respectively.

Proof.

As $x_0$ and $y_0$ are linearly independent, by item 2) of Theorem 3.1 we have that $Z(E) = Z_0 + \varepsilon R(E)$ with $Z_0 \neq 0$ a constant matrix (namely, the matrix (2.2) evaluated at $\varepsilon = 0$) and

(3.3)  $\sup_{E \in \mathcal{S}_1} \|R(E)\|_F \le C$

for some $C > 0$ and all $\varepsilon$ small enough. The equation for the equilibria reads $F(E, \varepsilon) = 0$, where

$F(E, \varepsilon) = -Z(E) + \langle Z(E), E \rangle E.$

It is readily seen that $F(E, 0) = 0$ if and only if $E = \pm E_0$, with $E_0 = Z_0 / \|Z_0\|_F$. Moreover, the Jacobian matrix of $F(\cdot, 0)$ with respect to $E$ at the points $\pm E_0$ is given by the linear operator $L_\pm$, such that

$L_\pm D = \pm \|Z_0\|_F \left( D + \langle E_0, D \rangle E_0 \right).$

We shall prove below that $L_\pm$ is invertible. By the Implicit Function Theorem this implies that there are $\varepsilon_0 > 0$, $\rho > 0$, and smooth branches $\varepsilon \mapsto E_\pm(\varepsilon)$ with $E_\pm(0) = \pm E_0$, such that for $\varepsilon < \varepsilon_0$ and $\|E \mp E_0\|_F < \rho$ we have $F(E, \varepsilon) = 0$ if and only if $E = E_\pm(\varepsilon)$. On the other hand, by (3.3), if $F(E, \varepsilon) = 0$ then $\|E - E_0\|_F = O(\varepsilon)$ or $\|E + E_0\|_F = O(\varepsilon)$. We conclude that $E_\pm(\varepsilon)$ are the unique equilibria on the whole hyper-sphere for $\varepsilon$ small enough. Clearly, $E_+(\varepsilon)$ [resp. $E_-(\varepsilon)$] is the maximizer [resp. minimizer] of $|y^* x|$.

To prove the invertibility of $L_\pm$ we observe that, recalling $\|E_0\|_F = 1$, the equation $L_\pm D = 0$ reads, $D = -\langle E_0, D \rangle E_0$, which gives $\langle E_0, D \rangle = -\langle E_0, D \rangle$, hence $\langle E_0, D \rangle = 0$ and therefore $D = 0$. ∎
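Theorem 3.2 also suggests a natural initial datum for the flow: the small-$\varepsilon$ minimizer $-Z_0/\|Z_0\|_F$. A sketch reusing `coupling_and_Z` (the routine name is ours):

```python
import numpy as np

def initial_datum(A, lam_target):
    """E_-(0) = -Z_0/||Z_0||_F, the small-eps minimizer of Theorem 3.2."""
    _, Z0, _ = coupling_and_Z(A, 0.0, np.zeros_like(A, dtype=complex), lam_target)
    return -Z0 / np.linalg.norm(Z0)
```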

4. Projected system of ODEs

By Theorem 3.1, the minimizers of $|y^* x|$ on $\mathcal{S}_1$ are rank-$2$ matrices. This suggests to use a rank-$2$ dynamics, obtained as a suitable projection of (3.1) onto $\mathcal{M}_2$.

Theorem 4.1 (The projected system).

Given $E(0) \in \mathcal{M}_2 \cap \mathcal{S}_1$, consider the differential system,

(4.1)  $\dot E = -P_E(Z) + \langle Z, E \rangle E,$

where the orthogonal projection $P_E$ is defined in (2.7). Then, the right-hand side of (4.1) is antiparallel to the projection onto the tangent space $T_E(\mathcal{M}_2 \cap \mathcal{S}_1)$ of the gradient of $|y^* x|$. More precisely,

(4.2)  $\frac{d}{dt}\, |y^* x| = -\varepsilon\, |y^* x| \left( \|P_E(Z)\|_F^2 - \langle Z, E \rangle^2 \right) \le 0.$

In particular, $\frac{d}{dt}|y^* x| = 0$ if and only if $\dot E = 0$.

Proof.

We remark that the definition is well posed, i.e., the right-hand side of (4.1) belongs to $T_E \mathcal{M}_2$. Indeed, as $E \in \mathcal{M}_2$, $P_E(E) = E$, and

$\langle P_E(Z), E \rangle = \langle Z, P_E(E) \rangle = \langle Z, E \rangle$

as $P_E$ is an orthogonal projection. We next observe that, by (2.3), the steepest descent direction is given by the variational problem,

$\min \left\{ \langle Z, D \rangle \,:\, D \in T_E \mathcal{M}_2, \ \|D\|_F = 1, \ \langle E, D \rangle = 0 \right\}.$

Since $\langle Z, D \rangle = \langle P_E(Z), D \rangle$ for any $D \in T_E \mathcal{M}_2$, the solution to this problem is

$D = -\nu \left( P_E(Z) - \langle Z, E \rangle E \right), \qquad \nu > 0.$

This proves the first assertion, while the derivative (4.2) is obtained by plugging the right-hand side of (4.1) into (2.3) and using $\langle Z, P_E(Z) \rangle = \|P_E(Z)\|_F^2$; the inequality follows from $\langle Z, E \rangle = \langle P_E(Z), E \rangle \le \|P_E(Z)\|_F$. ∎

Remark 4.2.

It is worthwhile to notice that along the solutions to (3.1) or (4.1) we have $\frac{d}{dt}\|E\|_F^2 = 2 \langle E, \dot E \rangle = 0$, so that the condition $\|E\|_F = 1$ is preserved by the dynamics.

In the sequel, the following rewriting of the projected system, in terms of the representations of $E$ and $\dot E$ discussed at the end of Section 2, will be useful. By introducing the notation,

(4.3)  $\mu = \langle Z, E \rangle,$

we can write (4.1) as

(4.4)  $\dot S = -U^* Z V + \mu\, S, \qquad \dot U = -(I - U U^*)\, Z\, V S^{-1}, \qquad \dot V = -(I - V V^*)\, Z^*\, U S^{-*}.$
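The factored form (4.4) only requires inverting the $2 \times 2$ matrix $S$; a sketch of the right-hand side (naming ours):

```python
import numpy as np

def factored_rhs(U, S, V, Z):
    """Right-hand side of (4.4) for the rank-2 factors of E = U S V^*."""
    mu = np.real(np.trace(Z.conj().T @ (U @ S @ V.conj().T)))  # (4.3)
    Si = np.linalg.inv(S)                                      # 2 x 2 only
    ZV, ZhU = Z @ V, Z.conj().T @ U
    Sdot = -U.conj().T @ ZV + mu * S
    Udot = -(ZV - U @ (U.conj().T @ ZV)) @ Si
    Vdot = -(ZhU - V @ (V.conj().T @ ZhU)) @ Si.conj().T
    return Udot, Sdot, Vdot
```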

4.1. Stationary points of the projected ODEs

We start by providing a characterization of the stationary points of system (4.1).

Lemma 4.3.

Given $\varepsilon > 0$, assume $E = U S V^* \in \mathcal{M}_2 \cap \mathcal{S}_1$ is a stationary point of (4.1) (or equivalently of (4.4)) such that $\lambda$ is not an eigenvalue of $A$ and $Z \neq 0$. Then $\langle Z, E \rangle \neq 0$.

Proof.

Assume by contradiction that $\langle Z, E \rangle = 0$; then, from (4.4) and the nonsingularity of $S$, we would get

$U^* Z V = 0, \qquad (I - U U^*)\, Z V = 0, \qquad (I - V V^*)\, Z^* U = 0.$

The previous implies

$Z V = 0, \qquad Z^* U = 0,$

since $Z V = U (U^* Z V) + (I - U U^*) Z V$ and $Z^* U = V (V^* Z^* U) + (I - V V^*) Z^* U$, with $V^* Z^* U = (U^* Z V)^* = 0$. Inserting the expression (2.2) of $Z$ into these formulæ we would obtain

(4.5)  $y \left[ (G y)^* V \right] + G^* x \left[ x^* V \right] = 0, \qquad G y \left[ y^* U \right] + x \left[ (G^* x)^* U \right] = 0.$

In order that (4.5) holds it has to hold necessarily $(G y)^* V = 0$, $x^* V = 0$, $y^* U = 0$ and $(G^* x)^* U = 0$. Indeed, since $Z \neq 0$ implies that $x$ and $y$ are linearly independent and that $G^* x \neq 0$ and $G y \neq 0$, arguing as in the proof of Theorem 3.1 the pairs $\{y, G^* x\}$ and $\{x, G y\}$ are linearly independent. The previous relations imply $V^* x = 0$ and $U^* y = 0$, so that $E x = U S V^* x = 0$ and $y^* E = 0$. As a consequence, $\lambda$ would be an eigenvalue of $A$, since $(A + \varepsilon E) x = A x = \lambda x$, which contradicts the assumptions. This means that $\langle Z, E \rangle \neq 0$. ∎

At a stationary point the equation $\dot E = 0$ reads,

(4.6)  $P_E(Z) = \mu\, E$

for some nonzero $\mu$. In this case, as

$\langle P_E(Z), E \rangle = \langle Z, P_E(E) \rangle = \langle Z, E \rangle,$

assuming (4.6), we get

(4.7)  $\mu = \langle Z, E \rangle \neq 0.$

Recalling (2.7), we are interested in studying

(4.8)  $Z^\perp := (I - U U^*)\, Z\, (I - V V^*) = Z - P_E(Z).$
Theorem 4.4.

Given $\varepsilon > 0$, assume $E = U S V^* \in \mathcal{M}_2 \cap \mathcal{S}_1$ is a stationary point of (4.1) (or equivalently of (4.4)) such that $\lambda$ is not an eigenvalue of $A$ and $Z \neq 0$. Then it holds $Z = \mu E$ for some real $\mu \neq 0$.

Proof.

In order to prove the theorem we show that $Z^\perp = 0$. From the nonsingularity of $S$ we get at a stationary point,

(4.9)  $(I - U U^*)\, Z\, V = 0,$
(4.10)  $(I - V V^*)\, Z^*\, U = 0.$

The assumption that $\lambda$ is not an eigenvalue of $A$ implies $V^* x \neq 0$ and $U^* y \neq 0$: indeed, $V^* x = 0$ would give $E x = 0$ and hence $A x = \lambda x$, while $U^* y = 0$ would give $y^* E = 0$ and hence $y^* A = \lambda y^*$. Moreover, by Lemma 4.3 the condition (4.7) is satisfied in our case. In order that (4.9) and (4.10) are fulfilled we have several possible cases. Recalling (2.2), set

$a = (I - U U^*)\, y, \quad c = (I - U U^*)\, G^* x, \quad p = (I - V V^*)\, G y, \quad q = (I - V V^*)\, x,$

and $\sigma = U^* y$, $\tau = U^* G^* x$, $\beta = V^* G y$, $\delta = V^* x$, so that (4.9) and (4.10) read

$a\, \beta^* + c\, \delta^* = 0, \qquad p\, \sigma^* + q\, \tau^* = 0,$

while $Z^\perp = a\, p^* + c\, q^*$ and, from the first equation of (4.4), $\mu S = U^* Z V = \sigma \beta^* + \tau \delta^*$.

Consider (4.9). Since $\delta \neq 0$, the following are the possible cases.

  • (1-i) $a = 0$. This would imply $c\, \delta^* = 0$ and thus $c = 0$.

  • (1-ii) $a \neq 0$, $c = \kappa\, a$ with $\kappa = -(\beta^* \delta)/\|\delta\|^2$ (obtained by multiplying (4.9) on the right by $\delta$). This would imply

    $a \left( \beta + \bar\kappa\, \delta \right)^* = 0,$

    and thus $\beta = -\bar\kappa\, \delta$.

Now consider (4.10). Since $\sigma \neq 0$, the following are the possible cases.

  • (2-i) $q = 0$. This would imply $p\, \sigma^* = 0$ and thus $p = 0$.

  • (2-ii) $q \neq 0$, $p = \eta\, q$ with $\eta = -(\tau^* \sigma)/\|\sigma\|^2$. This would imply $\tau = -\bar\eta\, \sigma$.

Assume that (1-i) or (2-i) holds. Then $Z^\perp = a\, p^* + c\, q^* = 0$, which is the claim.

Assume now that (1-ii) and (2-ii) hold. This would imply

$\mu S = \sigma \beta^* + \tau \delta^* = -(\kappa + \bar\eta)\, \sigma\, \delta^*,$

which would contradict (4.7) and the invertibility of $S$, since the left-hand side has rank $2$ while the right-hand side has rank at most $1$. We conclude that $Z^\perp = 0$, whence $Z = P_E(Z) = \mu E$ with $\mu = \langle Z, E \rangle$ real and nonzero. ∎

To summarize, if $\lambda$ is not an eigenvalue of $A$ the stationary points of the projected and the unprojected ODEs coincide (recall that if $Z = 0$ this is obvious, as in this case $\dot E = 0$ for both systems, see Theorem 3.1). Moreover, since $Z$ is proportional to $E$ at such points, by the same arguments leading to Theorem 3.2 we obtain the following result.

Corollary 4.5.

Under the same assumptions of Theorem 4.4, for sufficiently small $\varepsilon$ the projected ODE (4.1) has only two stationary points.

5. The real structured case

We now assume that the matrix $A$ is real and we restrict the perturbations to be real as well. To our knowledge there are no methods to compute the most ill-conditioned eigenvalue in the real $\varepsilon$-pseudospectrum,

(5.1)  $\Lambda_\varepsilon^{\mathbb{R}}(A) = \left\{ \lambda \in \mathbb{C} \,:\, \lambda \in \Lambda(A + E) \ \text{for some} \ E \in \mathbb{R}^{n,n} \ \text{with} \ \|E\|_F \le \varepsilon \right\},$

that is the eigenvalue $\lambda \in \Lambda_\varepsilon^{\mathbb{R}}(A)$ to which corresponds the minimal value of $|y^* x|$.

We denote by $\mathcal{S}_1^{\mathbb{R}}$ the unit hyper-sphere in $\mathbb{R}^{n,n}$ and fix $\varepsilon^* > 0$ such that for any $\varepsilon \le \varepsilon^*$ and $E \in \mathcal{S}_1^{\mathbb{R}}$ the matrix $A + \varepsilon E$ has a simple eigenvalue $\lambda$ close to a given simple eigenvalue $\lambda_0$ of $A$. The same reasoning as in the proof of Lemma 2.2 gives, for a smooth path $E(t) \subset \mathcal{S}_1^{\mathbb{R}}$,

(5.2)  $\frac{d}{dt}\, |y^* x| = \varepsilon\, |y^* x|\, \langle \mathrm{Re}\, Z, \dot E \rangle,$

from which the steepest descent direction is given by the solution of the variational problem,

$\dot E = -\nu \left( \mathrm{Re}\, Z - \langle \mathrm{Re}\, Z, E \rangle E \right),$

where $\nu > 0$ is the normalization constant. Note that the matrix $\mathrm{Re}\, Z$ has rank not greater than $4$. Clearly, for a real eigenvalue $\lambda$ the matrix $Z$ is real and the situation is identical to the one considered in the unstructured case; so the peculiar difference arises when we consider non-real eigenvalues.
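A sketch of this descent direction for real perturbations (naming ours):

```python
import numpy as np

def real_descent_direction(Z, E):
    """Normalized steepest-descent direction under real perturbations:
    Re Z (rank at most 4) replaces Z in (2.4)."""
    W = Z.real - np.trace(Z.real.T @ E) * E   # project onto the tangent of S_1
    return -W / np.linalg.norm(W)
```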

Let $\mathcal{M}_4$ be the manifold of the real $n \times n$ matrices of rank-$4$. The matrix representations both in $\mathcal{M}_4$ and in the tangent space are analogous to (2.5), (2.6), provided that $U, V$ have orthonormal columns and $S$ is nonsingular, see [KL07, Sect. 2.1]. More precisely, any rank-$4$ matrix $E$ of order $n$ can be written in the form,

(5.3)  $E = U S V^\top,$

with, now, $U, V \in \mathbb{R}^{n,4}$ such that $U^\top U = I_4$ and $V^\top V = I_4$, where $I_4$ is the identity matrix of order 4, and $S \in \mathbb{R}^{4,4}$ is nonsingular. As before, since this decomposition is not unique, we use a unique decomposition on the tangent space. For a given choice of $U$, $S$, $V$, any tangent matrix $\delta E \in T_E \mathcal{M}_4$ can be uniquely written as

$\delta E = \delta U\, S\, V^\top + U\, \delta S\, V^\top + U\, S\, \delta V^\top,$

with $U^\top \delta U = 0$, $V^\top \delta V = 0$. Accordingly, the orthogonal projection of a matrix $Z \in \mathbb{R}^{n,n}$ onto the tangent space $T_E \mathcal{M}_4$ is defined by

(5.4)  $P_E(Z) = Z - (I - U U^\top)\, Z\, (I - V V^\top),$

where $U U^\top$ and $V V^\top$ are the orthogonal projections onto the ranges of $U$ and $V$, respectively.
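For instance, a normalized rank-$4$ starting point for the real dynamics can be obtained from a truncated SVD (e.g. of $\mathrm{Re}\, Z$); a sketch with our naming:

```python
import numpy as np

def rank4_factors(W):
    """Factors (U, S, V) of the best rank-4 approximation of a real W,
    scaled so that ||U S V^T||_F = 1."""
    u, s, vt = np.linalg.svd(W)
    S = np.diag(s[:4]) / np.linalg.norm(s[:4])
    return u[:, :4], S, vt[:4].T
```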

6. System of ODEs in the real case

Given $E \in \mathcal{S}_1^{\mathbb{R}}$, the role of the differential system in (3.1) is now played by

(6.1)  $\dot E = -\mathrm{Re}\, Z + \langle \mathrm{Re}\, Z, E \rangle E.$

More precisely, the right-hand side of (6.1) is antiparallel to the projection onto the tangent space $T_E \mathcal{S}_1^{\mathbb{R}}$ of the gradient of $|y^* x|$ and

(6.2)  $\frac{d}{dt}\, |y^* x| = -\varepsilon\, |y^* x| \left( \|\mathrm{Re}\, Z\|_F^2 - \langle \mathrm{Re}\, Z, E \rangle^2 \right) \le 0.$

The proof of Theorem 3.2 is easily adapted to the real case. Therefore, under the same hypothesis, the system (6.1) has only two stationary points, that correspond to the minimum and the maximum of $|y^* x|$ on $\mathcal{S}_1^{\mathbb{R}}$, respectively.

A natural question is concerned with the possibility for $\mathrm{Re}\, Z$ to vanish. We have the following result concerning the matrix $\mathrm{Re}\, Z$.

Theorem 6.1.

Assume that the matrix $B \in \mathbb{R}^{n,n}$ is real and has a pair of simple complex conjugate eigenvalues $\lambda$ and $\bar\lambda$, with $\lambda \notin \mathbb{R}$. Let $y$ and $x$ be its left and right eigenvectors associated to $\lambda$, normalized such that $\|x\| = \|y\| = 1$. Let $G$ be the G-inverse of $B - \lambda I$ and $Z$ be given by (2.2). Then $\mathrm{Re}\, Z$ is different from zero.

Proof.

First observe that the eigenvectors $x$ and $y$ are necessarily genuinely complex vectors, that is, writing $x = x_R + \mathrm{i}\, x_I$ and $y = y_R + \mathrm{i}\, y_I$, the pairs $x_R, x_I$ and $y_R, y_I$ are linearly independent. Let us denote the range of a matrix $M$ as $\mathcal{R}(M)$. By definition of $Z$ (see (2.2)) we have $\mathcal{R}(Z) = \mathrm{span}\{y, G^* x\}$.

We prove the result by contradiction. Assume that $Z$ is purely imaginary, that is $\mathrm{Re}\, Z = 0$. Under this assumption, recalling that $Z$ is a rank-$2$ matrix, we have that $\mathrm{Re}\, Z = 0$ implies $\bar Z = -Z$, and therefore

$\mathcal{R}(Z) = \mathcal{R}(\bar Z) = \mathrm{span}\{\bar y, \overline{G^* x}\}$

has dimension $2$. Being $y$ and $G^* x$ linearly independent, we get $\bar y = a\, y + b\, G^* x$ for some $a, b \in \mathbb{C}$. A left premultiplication by $x^*$ gives

$0 = a\, x^* y,$

which is due to the fact that (i) $x^* \bar y = 0$, by well-known bi-orthogonality of left and right eigenvectors associated to the distinct eigenvalues $\bar\lambda$ and $\lambda$, (ii) $x^* G^* x = (G x)^* x = 0$, a property of the group inverse $G$, and (iii) $x^* y \neq 0$ by simplicity of $\lambda$. This implies $a = 0$, that is $\bar y = b\, G^* x$. In a specular way, we have that

$\mathcal{R}(Z^*) = \mathrm{span}\{G y, x\} = \mathrm{span}\{\overline{G y}, \bar x\}$

has dimension $2$. Proceeding in the same way we obtain that $\bar x = b'\, G y$ for some $b' \in \mathbb{C}$.

Now recall that (see [MS88]) a vector $v$ is an eigenvector of $B$ corresponding to the eigenvalue $\mu$ if and only if $v$ is an eigenvector of $G$ corresponding to the eigenvalue $\theta(\mu)$, where $\theta(\mu) = 0$ if $\mu = \lambda$ and $\theta(\mu) = (\mu - \lambda)^{-1}$ if $\mu \neq \lambda$. To recap we have

(6.3)  $G y = \alpha\, \bar x,$
(6.4)  $G^* x = \beta\, \bar y,$

with $\alpha = 1/b' \neq 0$ and $\beta = 1/b \neq 0$.

Since $x$ and $y$ are the only vectors (up to scalar multiples) in the kernels of $G$ and $G^*$, respectively, we deduce by (6.3) and (6.4),

$y = c_1\, x + c_2\, \bar x, \qquad x = d_1\, y + d_2\, \bar y,$

with $c_2 = \alpha (\bar\lambda - \lambda) \neq 0$ and $d_2 = \beta (\lambda - \bar\lambda) \neq 0$ (indeed, by (6.3), $G\big( y - \alpha (\bar\lambda - \lambda) \bar x \big) = 0$ since $G \bar x = \theta(\bar\lambda) \bar x$, and similarly for $x$), which imply $\mathrm{span}\{x, \bar x\} = \mathrm{span}\{y, \bar y\}$. The previous implies, by bi-orthogonality of left and right eigenvectors, that the set $\{x, \bar x\}$ is orthogonal to the set consisting of the remaining right eigenvectors of $B$ and similarly the set $\{y, \bar y\}$ is orthogonal to the set of the remaining left eigenvectors of $B$. Note that $\mathcal{X} := \mathrm{span}\{x, \bar x\} = \mathrm{span}\{y, \bar y\}$ is an invariant subspace of both $B$ and $B^\top$.

Denote by $\{v_1, v_2\}$ a real orthonormal basis for the real subspace $\mathrm{span}\{x_R, x_I\}$ associated to $\mathcal{X}$ and define (as determined by the procedure to get the Schur canonical form of $B$) $V_1 = (v_1\ v_2) \in \mathbb{R}^{n,2}$ and $V_2 \in \mathbb{R}^{n,n-2}$, whose columns form an orthonormal basis of the orthogonal complement. Set $Q = (V_1\ V_2)$, which implies $Q$ is an orthogonal matrix. Now consider the similarity transformation associated to $Q$,

$Q^\top B\, Q = \begin{pmatrix} B_1 & 0 \\ 0 & B_2 \end{pmatrix},$

where $0$ stands for the zero matrix of the appropriate dimension, $B_1 \in \mathbb{R}^{2,2}$ and $B_2 \in \mathbb{R}^{n-2,n-2}$; the block structure follows from the invariance of $\mathcal{X}$ under both $B$ and $B^\top$. This means that $Q^\top B Q$ is block-diagonal and the matrix

(6.5)  $B_1 = \begin{pmatrix} a_0 & \beta_1 \\ -\beta_2 & a_0 \end{pmatrix}$

is such that $a_0 = \mathrm{Re}\,\lambda$ and $\beta_1 \beta_2 = (\mathrm{Im}\,\lambda)^2$ with $\beta_1, \beta_2 > 0$, so that $B_1$ has eigenvalues $\lambda$ and $\bar\lambda$. If $\beta_1 = \beta_2$ then $B_1$ is normal, which implies that the pair of right and left eigenvectors associated to $\lambda$, say $x_1$ and $y_1$ (scaled to have unit $2$-norm and real and positive Hermitian scalar product) is such that $y_1 = x_1$. Since $x = V_1 x_1$ and $y = V_1 y_1$, the orthonormality of the columns of $V_1$ implies $y = x$, whence $G y = G x = 0$, which gives a contradiction with (6.3). As a consequence we can assume $\beta_1 \neq \beta_2$.

By the properties of the G-inverse we have that

$Q^\top G\, Q = \begin{pmatrix} G_1 & 0 \\ 0 & G_2 \end{pmatrix},$

where $G_1$ is the group inverse of $B_1 - \lambda I_2$ and $G_2$ is the inverse of $B_2 - \lambda I_{n-2}$, which is nonsingular. It is direct to verify the following formula for the G-inverse, by simply checking the three conditions in Definition 2.1,

$G_1 = \frac{B_1 - \lambda I_2}{(\bar\lambda - \lambda)^2}.$

It follows that also $Q^\top Z\, Q$ is block triangular so that we write

(6.6)  $Q^\top Z\, Q = \begin{pmatrix} Z_1 & 0 \\ 0 & 0 \end{pmatrix},$

with

(6.7)  $Z_1 = y_1 y_1^* G_1^* + G_1^* x_1 x_1^*,$

where $x_1$ and $y_1$ are the projections on $\mathbb{C}^2$ (the subspace spanned by the first two vectors of the canonical basis) of the eigenvectors of $Q^\top B Q$ associated to $\lambda$, that is $x_1 = V_1^\top x$ and $y_1 = V_1^\top y$, where $x_1$ and $y_1$ are such that $B_1 x_1 = \lambda x_1$ and $y_1^* B_1 = \lambda y_1^*$. Finally, we obtain