Fast QR iterations for unitary plus low rank matrices
Abstract
Some fast algorithms for computing the eigenvalues of a block companion matrix , where is unitary block circulant and , have recently appeared in the literature. Most of these algorithms rely on the decomposition of as a product of scalar companion matrices which turns into a factored representation of the Hessenberg reduction of . In this paper we generalize the approach to encompass Hessenberg matrices of the form where is a general unitary matrix. A remarkable case is unitary diagonal, which makes it possible to deal with interpolation techniques for root-finding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix obtained by a certain embedding of the Hessenberg reduction of suitable to maintain its structural properties. We show that can be factored as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first rows and, moreover, that such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
AMS classification: 65F15
Keywords: unitary matrix, low-rank modification, rank structure, QR eigenvalue algorithm, complexity.
1 Introduction
Let be a rank-structured matrix where is unitary and with . Such matrices arise commonly in the numerical treatment of structured (generalized) eigenvalue problems. Recently, in [2] some fast eigenvalue algorithms have been developed for the case where is block companion. These algorithms exploit the decomposition of as a product of scalar companion matrices, which provides a suitable factored representation of the Hessenberg reduction of to be used in the QR/QZ iterative process.
In this paper we generalize the approach pursued in [2] to deal with Hessenberg matrices where is a general unitary matrix. Our extension considers a larger matrix determined by a certain embedding of the Hessenberg reduction of suitable to keep its essential properties, that is, the Hessenberg form as well as the unitary plus low rank structure. Then we introduce a factored representation of in compressed form as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first rows. Specifically, is factored as , where is the product of unitary lower Hessenberg matrices, is the product of unitary upper Hessenberg matrices and the middle factor is a unitary upper Hessenberg matrix perturbed in the first rows. The representation thus involves data storage consisting of vectors of length and Givens rotations.
Rank-structured matrices of the form can be generated directly in the process of reducing a unitary diagonal plus a rank matrix to Hessenberg form by means of the algorithm introduced in [16]. Perturbed unitary diagonal matrices arise from the application of interpolation techniques for solving linear and nonlinear eigenproblems [1, 5, 21, 6] as well as in fast schemes for updating the spectral decomposition of a modified unitary matrix [17]. In the most basic case, where the eigenvalues sought are those in the unit disc, the roots of unity can be used as interpolation nodes when constructing the polynomial interpolant and the appropriate linearization [12]. Also, block companion matrices can easily be reduced to unitary diagonal plus low rank form by a fast Fourier transform. Other interesting examples of generalized companion matrices which can be specified in the format are obtained from the rearrangements of certain Fiedler pencils [11].
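To illustrate the reduction of a companion matrix to unitary diagonal plus low rank form by a fast Fourier transform, the following sketch (the function name and the splitting C = Z + w e_n^T are our own illustrative choices, not the paper's notation) writes the companion matrix of a monic polynomial as the unitary circulant downshift plus a rank-one correction and then diagonalizes the circulant part by a DFT similarity:

```python
import numpy as np

def companion_to_diag_plus_rank1(a):
    """Write the companion matrix C of the monic polynomial
    p(z) = z^n + a[n-1] z^{n-1} + ... + a[0] as C = Z + w e_n^T,
    with Z the unitary circulant downshift, and apply a DFT
    similarity so that F^H C F = diag(d) + x y^H."""
    a = np.asarray(a, dtype=complex)
    n = a.size
    Z = np.roll(np.eye(n), 1, axis=0)            # unitary downshift
    C = Z.copy()
    C[:, -1] = -a                                # companion last column
    w = C[:, -1] - Z[:, -1]                      # rank-one correction
    j = np.arange(n)
    F = np.exp(2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)  # unitary
    d = np.exp(-2j * np.pi * j / n)              # F^H Z F = diag(d)
    x = F.conj().T @ w                           # transformed correction
    y = F[-1, :].conj()                          # so that e_n^T F = y^H
    return d, x, y, F.conj().T @ C @ F
```

The diagonal entries d are roots of unity, so the transformed matrix is indeed a unitary diagonal plus a rank-one matrix with the same eigenvalues as the companion matrix.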
We show that the representation can be efficiently exploited in the QR/QZ iteration for eigenvalue computation. The bulge-chasing scheme can be adjusted to work on the unitary structures only. Deflation can be revealed in the middle factor, which converges to an upper triangular matrix in the limit. Each iterate can be specified in the compressed format and the same holds for the limiting Schur normal form of . Therefore one single QR iteration requires ops only, and it is backward stable.
The paper is organized as follows. In Section 2 we recall some preliminary material about the structural properties of possibly perturbed unitary matrices. Section 3 gives the theoretical foundations of our algorithm which is presented and analyzed in Section 4. In Section 5 the backward stability of the algorithm is formally proved. Finally, in Section 6 we show the results of numerical experiments followed by some conclusions and future work in Section 7.
2 Preliminaries
We first recall some basic properties of unitary matrices which play an important role in the derivation of our methods. {lemma} Let be a unitary matrix of size . Then
where and and are subsets of . If and we have
This well-known symmetry in the rank structure of unitary matrices follows by a straightforward application of the nullity theorem [14].
Let be a unitary matrix of size , and let , such that , then
See Gantmacher [15], p. 21, property 2.
A matrix is called upper Hessenberg if when . Similarly, is called lower Hessenberg if when . In addition, when is upper Hessenberg (lower Hessenberg) and the outermost entries are nonzero, that is, (), , then the matrix is called proper.
Note that for , that is, when the matrix is in Hessenberg form, the notion of properness coincides with that of being unreduced. Also, an upper Hessenberg matrix is proper iff . Similarly, a lower Hessenberg matrix is proper iff .
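As a side illustration, the properness test in the Hessenberg case can be coded in a few lines; this is a minimal sketch with hypothetical names, using a numerical tolerance in place of exact zero tests:

```python
import numpy as np

def is_proper_upper_hessenberg(H, tol=1e-12):
    """Check that H is upper Hessenberg with all subdiagonal entries
    nonzero, i.e. proper (equivalently, unreduced) in the sense above."""
    H = np.asarray(H)
    # entries below the first subdiagonal must vanish
    if not np.allclose(np.tril(H, -2), 0.0, atol=tol):
        return False
    # every subdiagonal (outermost) entry must be nonzero
    return bool(np.all(np.abs(np.diag(H, -1)) > tol))
```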
It is well known [25] that, given a nonzero vector , we can build a zero-creating matrix from a product of Givens matrices , where and is a complex Givens rotation of the form such that , with . The subscript index indicates the active part of the matrix . The descending sequence of Givens rotations turns out to be a unitary upper Hessenberg matrix such that , and . Note that is proper if and only if all the Givens matrices appearing in its factorization are different from a unitary diagonal matrix [4]. Generalizing this result we obtain the following lemma. {lemma} Let , , with full rank. Then

there exists a unitary upper Hessenberg matrix and an upper triangular matrix with nonsingular such that
(2.1) 
The product of the outermost entries of is given by
(2.2) where are the singular values of .

Let be the maximum index such that , then for .
The existence of is proved by construction. Relation (2.1) defines a QR decomposition of the matrix . The unitary factor can be determined as a product of unitary upper Hessenberg matrices such that , and where , .
Now, let us split into blocks
where is and is upper triangular. The product of the outermost entries of is given by . Since is unitary, and we have
From Lemma 2 we have
We get relation (2.2) by observing that if is the SVD decomposition of , then is the SVD decomposition of and hence , .
Finally, let be the maximum index such that . Then and, moreover, from we obtain that since is nonsingular. Using Lemma 2 we have , meaning that has full rank equal to . Since is upper triangular this implies that for .
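The zero-creating construction recalled at the beginning of this section can be sketched as follows; the code builds the descending sequence of Givens rotations from the bottom up, so that the accumulated unitary factor is upper Hessenberg and maps the input vector to a multiple of the first coordinate vector (the function names and the 2x2 rotation convention are ours):

```python
import numpy as np

def givens(a, b):
    """2x2 unitary G with G @ [a, b] = [r, 0], r = sqrt(|a|^2 + |b|^2)."""
    r = np.hypot(abs(a), abs(b))
    if r == 0.0:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)],
                     [-b,         a         ]], dtype=complex) / r

def zero_creating(x):
    """Descending sequence of Givens rotations whose product Q is a
    unitary upper Hessenberg matrix with Q @ x = ||x|| e_1."""
    x = np.asarray(x, dtype=complex).copy()
    n = x.size
    Q = np.eye(n, dtype=complex)
    for i in range(n - 2, -1, -1):          # annihilate from the bottom
        G = givens(x[i], x[i + 1])
        x[i:i + 2] = G @ x[i:i + 2]         # now x[i+1] == 0
        Q[i:i + 2, :] = G @ Q[i:i + 2, :]   # accumulate Q = G_1 ... G_{n-1}
    return Q
```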
Remark \thetheorem
From the proof of Lemma 2 we know that can be factored as a product of upper Hessenberg matrices, i.e. . The th of these Hessenberg matrices is the one annihilating the th column of from row to row . Then each can be factorized as the product of Givens rotations. From this observation we get that , where each is a Givens rotation acting on rows . This decomposition of corresponds to progressively annihilating the lower subdiagonals of by means of rotations working on the left. Alternatively, we can proceed by zeroing the lower subdiagonals of by means of rotations working on the right and acting on the columns of . In this way we find a different factorization of the form where and is unitary diagonal.
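For illustration, the factorization of a unitary upper Hessenberg matrix into a descending chain of Givens rotations times a unitary diagonal, as used in the remark above, can be computed by peeling rotations off from the left; this is a sketch under our own naming conventions, not the paper's implementation:

```python
import numpy as np

def schur_parametrize(H, tol=1e-12):
    """Factor a unitary upper Hessenberg H as H = R_1 R_2 ... R_{n-1} D,
    where R_i is a Givens rotation acting on rows (i, i+1) and D is a
    unitary diagonal (phase) matrix."""
    H = np.asarray(H, dtype=complex).copy()
    n = H.shape[0]
    rotations = []
    for i in range(n - 1):
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(abs(a), abs(b))
        if r > tol:
            G = np.array([[np.conj(a), np.conj(b)], [-b, a]]) / r
        else:
            G = np.eye(2, dtype=complex)
        H[i:i + 2, :] = G @ H[i:i + 2, :]   # zero the subdiagonal entry
        rotations.append(G.conj().T)        # left factor of the remainder
    # what is left is unitary upper triangular, hence a phase matrix
    return rotations, np.diag(H).copy()
```

Applying the rotations from the left reduces H to a unitary upper triangular matrix, which is necessarily diagonal with unimodular entries; inverting the eliminations gives the descending chain.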
3 Representation
Generalizing the approach discussed in [4] for the companion matrix, it is useful to embed the unitary plus low-rank matrix into a larger matrix to guarantee the properness of some factors of the representation we are going to introduce.
Let be such that with unitary and full rank matrices. We can construct an matrix , , such that , with unitary, full rank matrices, , and such that
(3.3) 
for a suitable matrix . {proof} The proof is constructive. Let . Since is full rank, the matrix is Hermitian positive definite. Let and . We note that , while .
Define
(3.4) 
where and
(3.5) 
Note that has the structure described in (3.3) and, moreover, by direct calculation we can verify that is unitary.
From now on we denote by the dimension of the matrix . It is worth pointing out that in view of the block triangular structure in (3.3) the Hessenberg reduction of the original matrix can be easily adjusted to specify the Hessenberg reduction of the larger matrix . Thus, in the following it is always assumed that both and are in upper Hessenberg form.
Let be the upper Hessenberg matrix obtained by embedding an unreduced Hessenberg matrix as described in Theorem 3. Then we can factorize as follows
(3.6) 
where

is a proper unitary lower Hessenberg matrix.

is a unitary upper Hessenberg matrix. Moreover, the leading entries in the outermost diagonal of , , are nonzero.

, where is a block diagonal unitary Hessenberg matrix, , with proper, with upper triangular, and , with full rank.
If in addition is nonsingular, then is proper. {proof} First note that from the properness of it follows that . From Theorem 3 we have that has full rank, and , hence, by Lemma 2 we can find a proper and a nonsingular square triangular such that , with . From the properness of and , we get that is a proper upper Hessenberg matrix; moreover, the matrix is unitary and still a proper upper Hessenberg matrix because is zero below the th row.
Now the matrix can be factorized as , where is unitary upper Hessenberg, and is the unitary lower Hessenberg matrix obtained as the product of the Givens rotations annihilating from the top the entries in the outermost diagonal of , i.e. , where acts on rows . Since the first rows are not involved, the matrix has the structure , where is unitary Hessenberg. Moreover, since is proper, is proper as well.
From the definitions of , and we have:
where . The matrix has full rank, since is unitary and has full rank.
Now let us consider the submatrices , for and . In both cases, from the relation and the structural properties of the matrices involved therein, we have that
For , since is unreduced, the rank of that submatrix is . This implies that the entries , are nonzero. For , if is nonsingular, then the rank is , so is nonzero.
The following theorem proves that the product of factors having the properties stated in Theorem 3 is indeed an upper Hessenberg matrix with the last rows equal to zero. It also reveals that deflation can be performed only when one of the subdiagonal entries of approaches zero. {theorem} Let , where is a proper unitary lower Hessenberg matrix and is a unitary upper Hessenberg matrix. Let be a block diagonal unitary upper Hessenberg matrix of the form , with unitary Hessenberg and a nonsingular upper triangular matrix. Then

is nonsingular.

Setting , , and , we have that

the matrix is an upper Hessenberg matrix, with , that is

is a unitary plus rank matrix.


If is proper then the upper Hessenberg matrix is nonsingular. In this case is proper if and only if is proper.
To prove part 1, we have to show that is nonsingular. We apply Lemma 2 to the unitary lower Hessenberg matrix ; we have . Since is an lower triangular matrix and is proper, we get , and the thesis follows.
For part 2, let us consider the matrix . This matrix is unitary with a quasiseparable structure below the th upper diagonal. In fact, for any we have
Applying Lemma 2 we have , implying that also .
Since is nonsingular, we conclude that . From this observation we can then find a set of generators and an upper Hessenberg matrix such that , so that (see [8, 13]). Moreover, we have , which is nonsingular. Then we can recover the rank correction from the lower-left corner of , obtaining
Since , we get that , and hence . We conclude the proof of part 2 by noticing that is upper Hessenberg, as it is the product of an upper Hessenberg matrix by an upper Hessenberg matrix. Moreover, we find that since .
To prove part 3, as already observed in the proof of Theorem 3, we use the rank equation
thus, if is proper then is nonsingular. In this case, from the properness of and noticing that
(3.7) 
we get that iff .
Remark \thetheorem
From the previous theorem, one sees that when a matrix is represented in the form where and have the required structural properties, then is nonsingular if and only if is proper. Moreover, from equation (3.7) one deduces that one of the outermost entries can be zero only if we have either or . Vice versa, we can have without any subdiagonal entry of being equal to zero. This is the only case where is unreduced and singular.
The next theorem shows that the compressed representation is suitable for use within the QR eigenvalue algorithm for computing the eigenvalues of and, a fortiori, of .
Let be a Hessenberg matrix obtained by embedding an unreduced Hessenberg matrix as described in Theorem 3. Let be the unitary factor of the factorization of , where is a monic polynomial of degree . Let be the matrix obtained by applying a step of the multishifted algorithm to the matrix with shifts being the roots of . Then, we have that
where is the matrix generated by applying one step of the multishifted algorithm to the matrix with shifts being the roots of . Both and are upper Hessenberg and if is unreduced then the factorization of exists and has still the same properties stated in Theorem 3; in particular, is proper and, if is nonsingular also are proper. {proof} From equation (3.3) we have
Since is also block triangular, we can take
(3.8) 
where and are unitary. Hence,
where is the matrix generated by applying one step of the multishifted algorithm to the matrix with shifts being the roots of . We have , setting and . Because is unitary, we have that ; then the conditions given by Lemma 2 are satisfied and we can conclude that is proper. We note that and are upper Hessenberg by the well-known properties of the shifted algorithm. When is unreduced, we can apply Theorem 3, which guarantees the existence of the representation of .
The algorithm we propose is an implicitly shifted , and hence the factors are obtained by manipulating Givens rotations. In Section 4 we describe the algorithm and we show that the factors obtained with the implicit procedure meet the requirements given in Theorem 3. The implicit Q-theorem [18] guarantees that the matrix obtained after an implicit step is essentially the same as the one obtained with an explicit step. The next result gives a quantitative measure of the properness of the matrices and generated along the QR iterative method.
Let as described in Theorem 3 and let as in Theorem 3. Let , where are the singular values of . We have:

the modulus of the product of the outermost entries of is such that and is constant over steps. Moreover, for each outermost entry of we have .

the modulus of the product of the outermost entries of is and is constant over steps. Moreover, for each outermost entry of we have .
To prove part 1, we first observe that , because by construction. To prove that the product of the outermost entries remains unchanged over steps, we use Theorem 3, observing that and that and have the same singular values. We get the thesis by applying part 2 of Lemma 2.
We can also see that and that . Since we have .
The relation on is similarly deduced by applying the Binet rule to the equality . After a step the first rows of are orthonormal and, moreover, the submatrix in the upper-right corner of satisfies
Remark \thetheorem
As observed in [4, 2], for our representation as well it is possible to recover the structure of the matrix from the representation (3.6). In fact, we have . Since is nonsingular, we find that
(3.9) 
where , . Note that is nonsingular upper triangular because it is the product of two nonsingular upper triangular matrices. Indeed, the matrix is nonsingular upper triangular by construction (see Lemma 2). Also, if is maintained along the QR process, then turns out to be nonsingular upper triangular because of the particular structure of the Hessenberg factors in the factorization of .
4 The Algorithm
In this section we show how to perform a single step of Francis's implicitly shifted algorithm acting directly on the representation of the matrix described in Section 3. In the sequel it is assumed that is a proper upper Hessenberg matrix. In view of the previous sections this means that is nonsingular. Otherwise, if is singular, we can perform a step of the QR algorithm with zero shift to remove the singularity. In this way the parametrization of is automatically adjusted to specify a proper matrix in its active part.
It is convenient to describe the representation and the algorithm using a pictorial notation already introduced in several papers (compare with [3] and the references given therein). Specifically, the action of a Givens rotation acting on two consecutive rows of the matrix is depicted as . A chain of descending two-pointed arrows as below
represents a unitary upper Hessenberg matrix (in this case a matrix). Vice versa, since any unitary Hessenberg matrix of size can be factorized as , where , , with , and is unitary diagonal, we have that can be represented as follows
where the represent the entries of the diagonal phase matrix . Similarly, the chain reported below
represents a unitary lower Hessenberg matrix. As observed in Remark 2, the Hessenberg matrices and appearing in the representation of can be factorized as the product of unitary Hessenberg matrices, and any unitary Hessenberg matrix can be represented through its Schur parametrization [19] by ascending or descending chains of Givens rotations times a unitary diagonal matrix. In our case the unitary diagonal matrices that would be necessary to get the Schur parametrization in terms of Givens factors can all be accumulated in the unitary factor of . In the light of Theorem 3, the careful reader will not be surprised by the shape of the chains of Givens rotations in the factorization of the factors and , where some of the Givens rotations are missing. Hence, using our pictorial representation we can exemplify the case , , as follows
We have used the fact that
and are Givens matrices acting on rows and is a unitary diagonal matrix. Furthermore, the structure of takes into account that is maintained along the QR process.
Givens transformations can also interact with each other by means of the fusion or the turnover operations (see [24], pp. 112–115). The fusion operation will be depicted as follows:
and consists of the concatenation of two Givens transformations acting on the same rows. The turnover operation makes it possible to rearrange the order of some Givens transformations (see [24]).
Graphically we will depict this rearrangement of Givens transformations as follows.
Each fusion and turnover has a constant cost, as it involves computations on or matrices. Note that while the fusion of two Givens rotations can result in a trivial rotation, this is never the case when we perform a turnover between three nontrivial rotations.
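A possible way to realize the turnover is to form the 3x3 product of the three rotations explicitly and refactor it in the opposite pattern by two eliminations. This is only a didactic sketch under our own conventions (production codes work directly on the 2x2 parameters, and here a "rotation" is represented simply as a 2x2 unitary block acting on two adjacent rows):

```python
import numpy as np

def embed(G, i, n=3):
    """Embed the 2x2 unitary G on rows/columns (i, i+1) of the identity."""
    E = np.eye(n, dtype=complex)
    E[i:i + 2, i:i + 2] = G
    return E

def givens(a, b):
    """2x2 unitary G with G @ [a, b] = [r, 0], r = sqrt(|a|^2 + |b|^2)."""
    r = np.hypot(abs(a), abs(b))
    if r == 0.0:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)], [-b, a]]) / r

def turnover(f, g, h):
    """Rewrite f(rows 0,1) g(rows 1,2) h(rows 0,1) as
    a(rows 1,2) b(rows 0,1) c(rows 1,2), each a 2x2 unitary."""
    M = embed(f, 0) @ embed(g, 1) @ embed(h, 0)
    G1 = givens(M[1, 0], M[2, 0])      # annihilate M[2, 0]
    M = embed(G1, 1) @ M
    G2 = givens(M[0, 0], M[1, 0])      # annihilate M[1, 0]
    M = embed(G2, 0) @ M               # M is now diag(1, W) by unitarity
    return G1.conj().T, G2.conj().T, M[1:, 1:]
```

Since the first column of the unitary product has unit norm, the two eliminations leave a matrix of the form diag(1, W) with W unitary, and inverting them yields the rearranged pattern.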
4.1 Initialization and bulge chasing
As observed in Remark 3, we do not have to apply the Givens transformations to the rank one part, since the matrix can be recovered at the end of the QR process and the matrix is not affected by the transformations, which act on rows to . The implicit QR algorithm starts with the computation of the shift. Adopting a Wilkinson shift strategy, we need to reconstruct the lower-right corner of . This can be done by operating on the representation and it requires flops. Once the shift is computed, we retrieve the first two components of the first column of , i.e. , and we compute the Givens rotation such that
Let ; we have that the matrix becomes
Applying a series of turnover operations, we can pass through the ascending sequence of Givens transformations, and a new Givens transformation acting on rows and will appear before the parenthesis and is then fused with the first nontrivial rotation of .
Similarly, the Givens rotation on the right is shifted through the sequence of Givens transformations representing and applied to the columns of and on the right of . Then another turnover operation is applied and the resulting
At the end of the initialization phase the Givens rotation on the right of can be brought to the left, giving rise to the bulge, represented by a Givens rotation acting on rows 2 and 3. The bulge then needs to be chased down.
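The shift computation and the initial rotation that start this phase can be sketched as follows; the function names are ours, and for illustration the Hessenberg matrix is assumed to be available in dense form (in the actual algorithm the trailing 2x2 corner is reconstructed from the compressed representation):

```python
import numpy as np

def wilkinson_shift(B):
    """Eigenvalue of the trailing 2x2 block B closest to B[1, 1]."""
    t = B[0, 0] + B[1, 1]                      # trace
    d = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]  # determinant
    disc = np.sqrt(t * t / 4 - d + 0j)
    e1, e2 = t / 2 + disc, t / 2 - disc
    return e1 if abs(e1 - B[1, 1]) <= abs(e2 - B[1, 1]) else e2

def initial_rotation(A, sigma):
    """Givens rotation on rows (0, 1) annihilating the second entry of
    the first column of A - sigma*I, starting the implicit QR step."""
    a, b = A[0, 0] - sigma, A[1, 0]
    r = np.hypot(abs(a), abs(b))
    if r == 0.0:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)], [-b, a]]) / r
```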
At this point we have , where . Performing a similarity transformation by multiplying on the left by and on the right by , we have that the Givens matrix on the right can be brought to the right by applying turnover operations. Repeating the same reasoning times, we have to get rid of a Givens rotation acting on columns and .
With the application of turnover operations, we get that , where . The Givens rotation acts on the last two columns and will modify the last two columns of and then fuse with the matrix . At this point the Hessenberg structure has been restored, and the implicit step is concluded.
The graphical representation of the algorithm corresponds to the following updating of the matrices involved in the representation.
In particular, in are gathered the rotations needed to restore the Hessenberg structure of the full matrix, is the product of the Givens rotations that have been shifted through the factor when turnover operations are performed, and similarly is the product of the Givens matrices shifted through on the right.
To show that this actually corresponds to a QR step, it is sufficient to verify that we are under the hypotheses of Theorem 3, i.e. that and are Hessenberg matrices, that is still of the form , and that has the structure described in point 2 of Theorem 3.
From the description of the algorithm it is easy to see that the matrices and are block diagonal with the leading block of size equal to the identity matrix. We note that at the end of the chasing steps the Hessenberg structure of and is restored, and because . is still proper since the turnover operations cannot introduce trivial rotations. The matrix is still block diagonal with the leading block unitary diagonal and the trailing block Hessenberg. For we need to prove that , where . From (3.8), we have , while . Substituting we get