
Preconditioned Iterative Solves in Model Reduction of Second Order Linear Dynamical Systems

Navneet Pratap Singh Discipline of Computer Science and Engineering, Indian Institute of Technology Indore, India phd1301201002@iiti.ac.in Kapil Ahuja Discipline of Computer Science and Engineering, Indian Institute of Technology Indore, India kahuja@iiti.ac.in  and  Heike Fassbender Institut Computational Mathematics, Technische Universität Braunschweig, Germany h.fassbender@tu-braunschweig.de
Abstract.

Recently a new algorithm for model reduction of second order linear dynamical systems with proportional damping, the Adaptive Iterative Rational Global Arnoldi (AIRGA) algorithm [8], has been proposed. The main computational cost of the AIRGA algorithm lies in solving a sequence of linear systems. These linear systems change only slightly from one iteration step to the next. Here we focus on efficiently solving these systems by iterative methods and on the choice of an appropriate preconditioner. We propose the use of a suitable iterative algorithm and the Sparse Approximate Inverse (SPAI) preconditioner. A technique to cheaply update the SPAI preconditioner in each iteration step of the model order reduction process is given. Moreover, it is shown that under certain conditions the AIRGA algorithm is stable with respect to the error introduced by iterative methods. Our theory is illustrated by experiments. It is demonstrated that SPAI preconditioned Conjugate Gradient (CG) works well for model reduction of a one dimensional beam model with the AIRGA algorithm. Moreover, the computation time of the preconditioner with update is on average one-third of the computation time of the preconditioner without update. With average timings running into hours for very large systems, such savings are substantial.

Key words and phrases:
Model Order Reduction, Global Arnoldi Algorithm, Moment Matching, Iterative Methods, Preconditioner and Stability Analysis.
2010 Mathematics Subject Classification:
Primary 34C20, 65F10, 65L20
This work was supported by DAAD grant number A/14/04422 under the IIT-TU9 exchange of faculty program.

1. Introduction

A continuous time-invariant second order linear dynamical system is of the form

(1.1)   $M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Fu(t),$
        $y(t) = Cx(t),$

where $M, D, K \in \mathbb{R}^{n \times n}$ are mass, damping and stiffness matrices, respectively, and $F \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$ are constant matrices. In (1.1), $x(t) \in \mathbb{R}^{n}$ is the state, $u(t) \in \mathbb{R}^{m}$ is the input, and $y(t) \in \mathbb{R}^{p}$ is the output. If $m$ and $p$ both are one, then we have a Single-Input Single-Output (SISO) system. Otherwise ($m > 1$ and/or $p > 1$), the system is called Multi-Input Multi-Output (MIMO). We assume the case of proportional damping, i.e., $D = \alpha M + \beta K$, where the coefficients $\alpha$ and $\beta$ are chosen based on experimental results [4, 8]. For our derivations, the system matrices $M$, $D$ and $K$ need not hold any specific property (e.g., symmetry, positive definiteness, etc.).

It is assumed that the order of the system (1.1) is extremely high. The simulation of large dynamical systems can be unmanageable due to high demands on computational resources, which is the main motivation for model reduction. The goal of model reduction is to produce a low dimensional system that has, as best as possible, the same characteristics as the original system but whose simulation requires significantly less computational effort. The reduced system of (1.1) is described by

(1.2)   $\hat{M}\ddot{\hat{x}}(t) + \hat{D}\dot{\hat{x}}(t) + \hat{K}\hat{x}(t) = \hat{F}u(t),$
        $\hat{y}(t) = \hat{C}\hat{x}(t),$

where $\hat{M}, \hat{D}, \hat{K} \in \mathbb{R}^{r \times r}$, $\hat{F} \in \mathbb{R}^{r \times m}$, $\hat{C} \in \mathbb{R}^{p \times r}$, and $r \ll n$. The damping property of the original system needs to be reflected in the reduced system. That is, $\hat{D} = \alpha\hat{M} + \beta\hat{K}$ is required, where $\alpha$ and $\beta$ remain unchanged from the original system.

Model reduction can be done in many ways, see, e.g., [2]. We will focus on a projection based method, specifically Galerkin projection [8]. For this, a matrix $V \in \mathbb{R}^{n \times r}$ with orthonormal columns is chosen, and the system (1.1) is projected by approximating $x(t) \approx V\hat{x}(t)$:

(1.3)   $V^{T}MV\ddot{\hat{x}}(t) + V^{T}DV\dot{\hat{x}}(t) + V^{T}KV\hat{x}(t) = V^{T}Fu(t),$
        $\hat{y}(t) = CV\hat{x}(t).$

Comparing (1.3) with (1.2) yields

(1.4)   $\hat{M} = V^{T}MV, \quad \hat{D} = V^{T}DV, \quad \hat{K} = V^{T}KV, \quad \hat{F} = V^{T}F, \quad \hat{C} = CV.$

The matrix $V$ can be obtained in many ways, see, e.g., [2]. The focus in this paper will be on the Adaptive Iterative Rational Global Arnoldi (AIRGA) algorithm [8], in which $V$ is generated by an Arnoldi based approach.
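To make the projection (1.3)-(1.4) concrete, here is a minimal Python/NumPy sketch (our illustration, not from [8]; the function name galerkin_reduce is hypothetical). It assumes SciPy sparse system matrices, a dense basis $V$ with orthonormal columns, and proportional damping, so that the reduced damping matrix is assembled from the reduced mass and stiffness matrices rather than projected separately:

    import numpy as np

    def galerkin_reduce(M, K, F, C, V, alpha, beta):
        # Project a proportionally damped second order system (1.1)
        # onto span(V) as in (1.4); V has orthonormal columns.
        M_r = V.T @ (M @ V)             # reduced mass matrix V^T M V
        K_r = V.T @ (K @ V)             # reduced stiffness matrix V^T K V
        D_r = alpha * M_r + beta * K_r  # damping property carries over, cf. (1.2)
        F_r = V.T @ F                   # reduced input matrix V^T F
        C_r = C @ V                     # reduced output matrix C V
        return M_r, D_r, K_r, F_r, C_r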

The main contributions of this paper are as follows: Section 2 summarizes the AIRGA model reduction process, which uses a direct solver for solving the linear systems arising in each iteration step. In Section 3, we discuss the use of iterative solvers and preconditioners for these linear systems. Preconditioned iterative solvers are a good choice here since they scale well. They have time complexity $\mathcal{O}(n \cdot nnz)$, where $n$ is the size of the system and $nnz$ is the number of nonzeros in the system matrices, as compared to $\mathcal{O}(n^{3})$ for direct solvers [17]. The choice of iterative algorithm is problem dependent. We show that Sparse Approximate Inverse (SPAI) preconditioners are well suited for solving the linear systems arising in the model reduction process. These linear systems change at each model reduction iteration, but this change is small. Exploiting this fact, we propose a cheap preconditioner update. Using an iterative solver introduces additional errors in the computation since the linear systems are not solved exactly. Hence, we discuss the stability of AIRGA in Section 4. In Section 5, a numerical experiment is given to support our preconditioned iterative solver theory. The cheap updates to the SPAI preconditioner, with CG as the underlying iterative algorithm, cut the preconditioner computation time to about one-third. Finally, we give some conclusions and point out future work in Section 6.

For the rest of this paper, $\|\cdot\|_{F}$ denotes the Frobenius norm, $\|\cdot\|_{2}$ the 2-norm, $\|\cdot\|_{\mathcal{H}_{2}}$ the $\mathcal{H}_{2}$-norm, and $\|\cdot\|_{\mathcal{H}_{\infty}}$ the $\mathcal{H}_{\infty}$-norm [2]. Also, $QR$ denotes the QR factorization [21].

2. Arnoldi Based Projection Method

In this section, we first describe how to obtain $V$ such that the first few moments of the transfer functions of the original and of the reduced order system are matched. We then state the AIRGA algorithm [8] based on this approach.

The transfer function of (1.1) is given by

$H(s) = C\,X(s) \quad \text{with} \quad X(s) = (s^{2}M + sD + K)^{-1}F,$

where $X(s)$ is the state variable in the frequency domain. The power series expansion of the state variable around an expansion point $s_{0}$ is given as [24]

(2.1)   $X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_{0})\,(s - s_{0})^{j},$

where

(2.2)   $X^{(j)}(s_{0}) = -\big(s_{0}^{2}M + s_{0}D + K\big)^{-1}\big[(2s_{0}M + D)\,X^{(j-1)}(s_{0}) + M\,X^{(j-2)}(s_{0})\big] \quad \text{for } j \geq 1,$

with $X^{(0)}(s_{0}) = (s_{0}^{2}M + s_{0}D + K)^{-1}F$ and $X^{(-1)}(s_{0}) = 0$. Here, $X^{(j)}(s_{0})$ is called the $j$-th order system moment of $X(s)$ at $s_{0}$.
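As an illustration of the recursion (2.2), the following sketch (ours; the name moments is hypothetical) computes the first few moments at an expansion point $s_{0}$, assuming SciPy sparse matrices M, D, K and a dense F. It uses a direct factorization of the coefficient matrix, which is exactly the step that Section 3 replaces by preconditioned iterative solves; note that all moments at one expansion point share the same coefficient matrix, only the right-hand sides change:

    import scipy.sparse.linalg as spla

    def moments(M, D, K, F, s0, num_moments):
        # System moments X^(0), ..., X^(num_moments-1) at s0 via (2.2).
        A = (s0**2 * M + s0 * D + K).tocsc()
        lu = spla.splu(A)                   # one factorization, many solves
        X = [lu.solve(F)]                   # X^(0)(s0) = A(s0)^{-1} F
        for j in range(1, num_moments):
            rhs = -((2 * s0 * M + D) @ X[-1])
            if j >= 2:
                rhs -= M @ X[-2]            # the M X^(j-2) term of (2.2)
            X.append(lu.solve(rhs))
        return X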

Similarly, the transfer function of the reduced system (1.2) is given by

$\hat{H}(s) = \hat{C}\,\hat{X}(s) \quad \text{with} \quad \hat{X}(s) = (s^{2}\hat{M} + s\hat{D} + \hat{K})^{-1}\hat{F}.$

The power series expansion of the reduced state variable around the expansion point $s_{0}$ is

(2.3)   $\hat{X}(s) = \sum_{j=0}^{\infty} \hat{X}^{(j)}(s_{0})\,(s - s_{0})^{j}.$

Here, $\hat{X}^{(j)}(s_{0})$ is defined analogously to $X^{(j)}(s_{0})$. It is called the $j$-th order system moment of $\hat{X}(s)$ at $s_{0}$.

The goal of moment-matching approaches is to find a reduced order model such that the first few moments of (2.1) and (2.3) are matched, that is, $C\,X^{(j)}(s_{0}) = \hat{C}\,\hat{X}^{(j)}(s_{0})$ for $j = 0, 1, \ldots, t-1$, for some $t \geq 1$.

Define

$P(s_{0}) = -(s_{0}^{2}M + s_{0}D + K)^{-1}(2s_{0}M + D) \quad \text{and} \quad Q(s_{0}) = -(s_{0}^{2}M + s_{0}D + K)^{-1}M,$

then from (2.2) we have

$X^{(j)}(s_{0}) = P(s_{0})\,X^{(j-1)}(s_{0}) + Q(s_{0})\,X^{(j-2)}(s_{0})$

for $j = 1, 2, \ldots$, with $X^{(0)}(s_{0}) = (s_{0}^{2}M + s_{0}D + K)^{-1}F =: R(s_{0})$.

The second order Krylov subspace [3] is defined as

$\mathcal{G}_{j}(P, Q; R) = \operatorname{colspan}\{P_{0}, P_{1}, \ldots, P_{j-1}\},$

where $P_{0} = R$, $P_{1} = P\,P_{0}$, and $P_{i} = P\,P_{i-1} + Q\,P_{i-2}$ for $i = 2, 3, \ldots$

Note that $X^{(i)}(s_{0}) \in \mathcal{G}_{j}(P, Q; R)$ for $i = 0, \ldots, j-1$. For the special case of proportionally damped second-order systems, it has been observed in [4] that

$\mathcal{G}_{j}(P, Q; R) = \mathcal{K}_{j}\big((s_{0}^{2}M + s_{0}D + K)^{-1}M,\; R\big),$

where $\mathcal{K}_{j}(A, R)$ is the standard block Krylov subspace

$\mathcal{K}_{j}(A, R) = \operatorname{colspan}\{R, AR, A^{2}R, \ldots, A^{j-1}R\}.$

Thus, we need a good basis of $\mathcal{K}_{j}$. This can be obtained efficiently by, e.g., the block or the global Arnoldi algorithm [17, 14, 18].
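As an illustration, a global Arnoldi process for this block Krylov subspace can be sketched as follows (our code, not the AIRGA implementation of [8]; it uses the Frobenius inner product $\langle Y, Z\rangle_{F} = \operatorname{trace}(Y^{T}Z)$ to orthogonalize whole blocks, and a direct factorization where our modified AIRGA would use preconditioned iterative solves):

    import numpy as np
    import scipy.sparse.linalg as spla

    def global_arnoldi(M, D, K, F, s0, j):
        # F-orthonormal blocks spanning K_j(A(s0)^{-1} M, A(s0)^{-1} F).
        A = (s0**2 * M + s0 * D + K).tocsc()
        lu = spla.splu(A)
        W = lu.solve(F)                           # first block A^{-1} F
        V = [W / np.linalg.norm(W, 'fro')]
        for _ in range(1, j):
            W = lu.solve(M @ V[-1])               # next block A^{-1} M V_k
            for Vt in V:                          # global Gram-Schmidt
                W = W - np.tensordot(Vt, W) * Vt  # subtract <V_t, W>_F V_t
            V.append(W / np.linalg.norm(W, 'fro'))
        return np.hstack(V)                       # columns span the subspace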

The AIRGA algorithm, as proposed in [8], is one of the latest model reduction methods based on the global Arnoldi algorithm. It is given in Algorithm 1. In this method, moment matching is done at multiple expansion points rather than just at a single point $s_{0}$ as above. This ensures a better reduced model in the entire frequency domain of interest.

The initial selection and the subsequent computation of expansion points is discussed in [8] and [13]. We adopt the choices described in Section 5.0.1 of [8]. The initial expansion points can be either real or imaginary, both of which have their merits. This is problem dependent and is discussed in the results section. After the first AIRGA iteration, the expansion points are chosen from the eigenvalues of the quadratic eigenvalue problem $(\lambda^{2}\hat{M} + \lambda\hat{D} + \hat{K})\,x = 0$ (at line 33).

The method is adaptive, i.e., it automatically chooses the number of moments to be matched at each expansion point $s_{i}$. This is controlled by the while loop at line 9. The variable $j$ stores the number of moments matched. The upper bound on $j$ is $r_{max}/m$, where $r_{max}$ is the maximum dimension to which we want to reduce the state variable (input from the user), and $m$ is the dimension of the input; see [8] for a detailed discussion. For example, with $r_{max} = 30$ and $m = 3$, at most ten moments are matched per expansion point. At exit of this while loop, the dimension of the reduced model is $r = j \cdot m \leq r_{max}$.

1: Input: {$M, D, K, F, C$; $S = \{s_{1}, \ldots, s_{\ell}\}$ is the set of initial expansion points; $r_{max}$}
2: $z = 1$
3: while no convergence do
4:     for $i = 1, \ldots, \ell$ do
5:         Solve $(s_{i}^{2}M + s_{i}D + K)\,X^{(0)}(s_{i}) = F$
6:         Compute $\|X^{(0)}(s_{i})\|_{F}$
7:     end for
8:     $j = 1$
9:     while no convergence and $j \leq r_{max}/m$ do
10:        Let $\sigma_{j}$ be the expansion point corresponding to maximum moment error of the
11:   reduced system at iteration $j$
12:        $V_{j} = X^{(j-1)}(\sigma_{j}) / \|X^{(j-1)}(\sigma_{j})\|_{F}$
13:        for $i = 1, \ldots, \ell$ do
14:            if ($s_{i} == \sigma_{j}$) then
15:                Solve $(s_{i}^{2}M + s_{i}D + K)\,X^{(j)}(s_{i}) = M\,V_{j}$
16:            else Solve $(s_{i}^{2}M + s_{i}D + K)\,X^{(j)}(s_{i}) = M\,X^{(j-1)}(s_{i})$
17:            end if
18:            for $t = 1, \ldots, j$ do
19:                $\gamma = \langle V_{t}, X^{(j)}(s_{i})\rangle_{F}$
20:                $X^{(j)}(s_{i}) = X^{(j)}(s_{i}) - \gamma\,V_{t}$
21:            end for
22:        end for
23:        $X^{(j)}(s_{i}) = X^{(j)}(s_{i}) / \|X^{(j)}(s_{i})\|_{F}$ for $i = 1, \ldots, \ell$.
24:        $V = [V_{1}, \ldots, V_{j}]$.
25:        Compute the QR factorization $V = QR$ and set $V = Q$
26:        Compute reduced system matrices $\hat{M}$, $\hat{K}$, $\hat{F}$ and $\hat{C}$ with $V$ as in (1.4)
27:        $\hat{D} = \alpha\hat{M} + \beta\hat{K}$
28:        $j = j + 1$
29:    end while
30:    Set $r = j \cdot m$ and pick $\sigma$ corresponding to maximum moment error of the reduced
31:  system at iteration $j$
32:    $V = [V_{1}, \ldots, V_{j}]$ and $V = QR$ (set $V = Q$).
33:    Compute the eigenvalues of the quadratic eigenvalue problem $(\lambda^{2}\hat{M} + \lambda\hat{D} + \hat{K})\,x = 0$
34:    Compute reduced system matrices $\hat{M}$, $\hat{K}$, $\hat{F}$ and $\hat{C}$ with $V$ as in (1.4),
35:  and take $\hat{M}_{z} = \hat{M}$, $\hat{K}_{z} = \hat{K}$, $\hat{F}_{z} = \hat{F}$, $\hat{C}_{z} = \hat{C}$ for the next iteration.
36:    Choose new expansion points $S = \{s_{1}, \ldots, s_{\ell}\}$ from these eigenvalues
37:    $\hat{D}_{z} = \alpha\hat{M}_{z} + \beta\hat{K}_{z}$
38:    $z = z + 1$
39: end while
40: Compute the remaining reduced system matrices with $V$ as in (1.4)
Algorithm 1: Adaptive Iterative Rational Global Arnoldi (AIRGA) Algorithm [8]

At line 3, no convergence implies that the norm of the difference between two consecutive reduced systems, computed at line 34, is greater than a certain tolerance. Similarly, at line 9, no convergence implies that the norm of the difference between two consecutive intermediate reduced systems, computed at line 26, is greater than a certain tolerance.

This algorithm requires solving linear systems at line 5 and at lines 15-16. As the expansion points $s_{i}$ change in each iteration step, the linear systems to be solved change in each iteration step as well. As discussed in Section 1, since solving such systems by direct methods is quite expensive, we propose to use iterative methods. As the change in the expansion points (and hence in the coefficient matrices $s_{i}^{2}M + s_{i}D + K$) will be small (at least after the first iteration step), we can develop a cheap update of the necessary preconditioner.

3. Preconditioned Iterative Method

There are two types of methods for solving linear systems of equations: (a) direct methods and (b) iterative methods. For large systems, direct methods are not preferred because they are too expensive in terms of storage and operations. Iterative methods, on the other hand, require less storage and fewer operations than direct methods. For a large linear system $Ax = b$ with $A \in \mathbb{R}^{n \times n}$ and $x, b \in \mathbb{R}^{n}$, an iterative method finds a sequence of solution vectors $x_{1}, x_{2}, \ldots$ which (hopefully) converges to the desired solution. Krylov subspace based methods are an important and popular class of iterative methods. If $x_{0}$ is the initial solution guess and $r_{0} = b - Ax_{0}$ is the initial residual, then Krylov subspace methods find the approximate solution $x_{k}$ by projecting onto the Krylov subspace

$\mathcal{K}_{k}(A, r_{0}) = \operatorname{span}\{r_{0}, Ar_{0}, A^{2}r_{0}, \ldots, A^{k-1}r_{0}\}.$

There are many types of Krylov subspace algorithms [17]. Some popular ones include Conjugate Gradient (CG), Generalized Minimal Residual (GMRES), Minimum Residual (MINRES), and BiConjugate Gradient (BiCG). Block versions of these algorithms exist as well. The choice of algorithm is problem dependent. In the results section (Section 5), the coefficient matrices arising from the problem are symmetric positive definite (SPD). Hence, we use the CG algorithm, which is ideal for such systems.
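For instance, a single preconditioned CG solve in SciPy might look as follows. This is a sketch with a placeholder SPD matrix standing in for the AIRGA coefficient matrix $s_{i}^{2}M + s_{i}D + K$, and a trivial diagonal operator standing in for the SPAI preconditioner of Section 3.1 (it only demonstrates the call pattern; SciPy's M argument is an operator approximating $A^{-1}$, which is exactly what SPAI provides):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
    b = np.ones(n)

    # 'prec' plays the role of the (right) approximate inverse preconditioner.
    prec = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())

    x, info = spla.cg(A, b, M=prec)
    assert info == 0    # info == 0 means CG converged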

In Krylov subspace methods, the conditioning of the system is very important. “Conditioning pertains to the perturbation behavior of a mathematical problem [21]”. For example, in a well-conditioned problem, a small perturbation of the input leads to a small change in the output. This is not guaranteed for an ill-conditioned problem, where a small perturbation in the input may change the output drastically [21]. Preconditioning is a technique to well-condition an ill-conditioned problem. We discuss that next.

Preconditioning is a technique for improving the performance of iterative methods. It transforms a difficult (ill-conditioned) system into another system with more favorable properties for iterative methods. For example, a preconditioned matrix may have eigenvalues clustered around one. This means that the preconditioned matrix is close to the identity matrix, and hence, the iterative method will converge faster. For a symmetric positive definite (SPD) system, the convergence rate of iterative methods depends on the distribution of the eigenvalues of the coefficient matrix. However, for a non-symmetric system, the convergence rate may depend on the pseudo-spectrum as well [20, 16].
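The following small experiment (ours, on a synthetic ill-conditioned SPD matrix) illustrates this effect by counting CG iterations with and without a simple diagonal (Jacobi) preconditioner, which here clusters the preconditioned eigenvalues near one:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 500
    A = sp.csr_matrix(sp.diags(np.linspace(1e-4, 1.0, n))
                      + sp.diags([-1e-5, -1e-5], [-1, 1], shape=(n, n)))
    b = np.ones(n)

    counts = {'plain': 0, 'jacobi': 0}
    def counter(key):
        def cb(xk):
            counts[key] += 1
        return cb

    M = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())
    spla.cg(A, b, maxiter=5000, callback=counter('plain'))
    spla.cg(A, b, M=M, maxiter=5000, callback=counter('jacobi'))
    print(counts)   # the preconditioned run needs far fewer iterations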

If $P$ is a nonsingular matrix which approximates the inverse of $A$, that is, $P \approx A^{-1}$, then the system

$PAx = Pb$

may be faster to solve than the original one. The above system represents preconditioning from the left. Similarly, right and split preconditioning are given by the two equations below, respectively:

$APy = b, \quad x = Py,$ and

$P_{1}AP_{2}\,y = P_{1}b, \quad x = P_{2}y.$

The type of preconditioning technique to be used depends on the problem properties as well as on the choice of the iterative solver. For example, for SPD systems, the left, right, and split preconditioned matrices all have the same eigenvalue spectrum, and hence, left, right and split preconditioners behave the same way. For a general system, this need not be true [6].

Besides making the system easier to solve by an iterative method, a preconditioner should be cheap to construct and apply. Some existing preconditioning techniques include Successive Over Relaxation, Polynomial, Incomplete Factorizations, Sparse Approximate Inverse (SPAI), and Algebraic Multi-Grid [6, 7, 9, 10].

We use SPAI preconditioners here since these (along with incomplete factorizations) are known to work in the most general setting. Also, SPAI preconditioners are easily parallelizable, and hence have an edge over incomplete factorization based preconditioners [23].

In Section 3.1, we summarize the SPAI preconditioner from [10] and discuss the use of SPAI in the AIRGA algorithm. Since the change in the coefficient matrix of the linear system to be solved is small from one step of AIRGA to the next, we update the preconditioner from one step to the next. This aspect is covered in Section 3.2.

3.1. Sparse Approximate Inverse (SPAI) Preconditioner

In constructing a preconditioner for a coefficient matrix $A = s_{i}^{2}M + s_{i}D + K$ at the $z$-th outer AIRGA iteration, we would like $PA \approx I$ (for left preconditioning) or $AP \approx I$ (for right preconditioning). SPAI preconditioners find $P$ by minimizing the associated error norm $\|I - PA\|$ or $\|I - AP\|$ for a given sparsity pattern. If the norm used is the Frobenius norm, then the minimization problem is

$\min_{P \in \mathcal{P}} \|I - AP\|_{F},$

where $\mathcal{P}$ is a set of certain sparse matrices. The above approach produces a right approximate inverse. Similarly, a left approximate inverse can be computed by solving the minimization problem $\min_{P \in \mathcal{P}} \|I - PA\|_{F}$. For non-symmetric matrices, the distinction between left and right approximate inverses is important. There are some situations where it can be difficult to find a right approximate inverse but finding a left approximate inverse is easy. Whether left or right preconditioning should be used is problem dependent [17]. Since the SPAI preconditioner was originally proposed for right preconditioning [10], we focus on the same here. A similar derivation can be done for left preconditioning as well.

1: Input: {$A$, $\varepsilon$}
2: $P = P_{0}$, where $P_{0} = \alpha A^{T}$, $\alpha = \mathrm{tr}(AA^{T})\,/\,\mathrm{tr}\big((AA^{T})(AA^{T})^{T}\big)$, and $n$ is the dimension of $A$
3: for $k = 1, \ldots, n$ do
4:     Define $p_{k} = Pe_{k}$
5:     $r_{k} = e_{k} - Ap_{k}$
6:     while $\|r_{k}\|_{2} > \varepsilon$ do
7:         $q_{k} = Ar_{k}$
8:         $\beta_{k} = r_{k}^{T}q_{k} \,/\, q_{k}^{T}q_{k}$
9:         $p_{k} = p_{k} + \beta_{k}r_{k}$
10:        $r_{k} = r_{k} - \beta_{k}q_{k}$
11:        Set $Pe_{k} = p_{k}$
12:    end while
13: end for
Algorithm 2 : Sparse Approximate Inverse (SPAI) Preconditioner [10]

The minimization problem $\min_{P}\|I - AP\|_{F}$ can be rewritten as

(3.1)   $\min_{P} \|I - AP\|_{F}^{2} = \sum_{k=1}^{n} \min_{p_{k}} \|e_{k} - A\,p_{k}\|_{2}^{2},$

where $p_{k}$ and $e_{k}$ are the $k$-th columns of the matrix $P$ and of $I$ (the identity matrix), respectively. The minimization problem (3.1) is essentially just one least squares problem, to be solved for $n$ different right-hand sides. Here, it is solved iteratively.

The algorithm for computing a SPAI preconditioner for right preconditioning is given in Algorithm 2. The inputs to this algorithm are $A$ (the coefficient matrix) and $\varepsilon$ (the stopping residual of the minimization problem for each column); $\varepsilon$ is picked based on experience. This is ALGORITHM 2.5 of [10] with two minor differences.

First, we do not list the code related to sparsity pattern matching (for obtaining a sparse preconditioner), because the goal here is to motivate the SPAI update; moreover, for our problems the original matrix is very sparse, so the preconditioner stays sparse anyway. This aspect can be easily incorporated. Second, we use a while loop at line 6 of Algorithm 2 instead of a for loop. This is because with a for loop one has to decide the stopping count in advance (which is chosen heuristically). We use a more definite criterion, namely, that the residual of the minimization problem for each column be less than $\varepsilon$. This check has linear cost (for each column), and we are doing such computations anyway.

The initial guess for the approximate inverse is usually taken as $P_{0} = \alpha A^{T}$, where $\alpha = \mathrm{tr}(AA^{T})\,/\,\mathrm{tr}\big((AA^{T})(AA^{T})^{T}\big)$
(see line 2). This initial scaling factor $\alpha$ minimizes the spectral radius of $I - \alpha AA^{T}$ [7, 10, 15].
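A dense Python sketch of this minimal residual construction follows (our illustration of Algorithm 2; the sparsity pattern handling omitted from the listing is omitted here too, and the max_sweeps safeguard is our addition). For the sparse AIRGA matrices one would, of course, keep $P$ sparse:

    import numpy as np

    def spai_mr(A, eps=1e-2, max_sweeps=50):
        # Right approximate inverse P with ||I - A P||_F small,
        # built column by column with MR iterations, cf. Algorithm 2.
        n = A.shape[0]
        B = A @ A.T
        alpha = np.trace(B) / np.trace(B @ B.T)   # scaling of line 2
        P = alpha * A.T
        for k in range(n):
            p = P[:, k].copy()
            e = np.zeros(n); e[k] = 1.0
            r = e - A @ p                         # residual e_k - A p_k
            sweeps = 0
            while np.linalg.norm(r) > eps and sweeps < max_sweeps:
                q = A @ r
                beta = (r @ q) / (q @ q)          # MR step length
                p += beta * r
                r -= beta * q
                sweeps += 1
            P[:, k] = p
        return P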

The AIRGA algorithm with the SPAI preconditioner is given in Algorithm 3. Here, we only show those parts of the AIRGA algorithm that require changes.

1: while no convergence do
2:     for $i = 1$ to $\ell$ do
3:         Let $A(s_{i}) = s_{i}^{2}M + s_{i}D + K$
4:         Compute preconditioner $P_{i}$ by solving $\min_{P_{i}}\|I - A(s_{i})P_{i}\|_{F}$ (Algorithm 2)
5:         Solve $A(s_{i})\,X^{(0)}(s_{i}) = F$ with right preconditioner $P_{i}$
6:     end for
7:     $j = 1$
8:     while no convergence and $j \leq r_{max}/m$ do
9:         for $i = 1$ to $\ell$ do
10:             Only right hand sides are changing, so the above preconditioner can
11:      be applied as it is, i.e.,
12:      Solve $A(s_{i})\,X^{(j)}(s_{i}) = M\,X^{(j-1)}(s_{i})$ with right preconditioner $P_{i}$
13:         end for
14:         $j = j + 1$
15:     end while
16: end while
Algorithm 3 : AIRGA Algorithm with SPAI Preconditioner

3.2. SPAI Update Preconditioner

Let $A_{1} = \sigma_{1}^{2}M + \sigma_{1}D + K$ and $A_{2} = \sigma_{2}^{2}M + \sigma_{2}D + K$ be two coefficient matrices corresponding to different expansion points $\sigma_{1}$ and $\sigma_{2}$, respectively. These expansion points can belong to the same or to different AIRGA iterations. If the difference between $A_{1}$ and $A_{2}$ is small, then one can exploit this while building preconditioners for this sequence of matrices. This has been considered in the quantum Monte Carlo setting [1] and for model reduction of first order linear dynamical systems [12, 25].

Let $P_{1}$ be a good initial preconditioner for $A_{1}$, i.e., $A_{1}P_{1} \approx I$. As will be seen, a cheap preconditioner update can be obtained by asking for $A_{2}P_{2} = A_{1}P_{1}$; this is done for each of the expansion points $i = 1, \ldots, \ell$, where, as earlier, $\ell$ denotes the number of expansion points. Expressing $P_{2}$ in terms of $P_{1}$, we get

$P_{2} = A_{2}^{-1}A_{1}P_{1}.$

Now we enforce $A_{2}P_{2} = A_{1}P_{1}$ or

$A_{2}\,(QP_{1}) = A_{1}P_{1},$

where $Q = A_{2}^{-1}A_{1}$.

Let $P_{2} = QP_{1}$, then the above implies $A_{2}QP_{1} = A_{1}P_{1}$ or $A_{2}Q = A_{1}$. This leads us to the following idea: instead of solving for $P_{2}$ from $\min_{P_{2}}\|I - A_{2}P_{2}\|_{F}$, we solve a simpler problem

$\min_{Q}\|A_{1} - A_{2}Q\|_{F}^{2} = \sum_{k=1}^{n}\min_{q_{k}}\|a_{k} - A_{2}q_{k}\|_{2}^{2},$

where $q_{k}$ and $a_{k}$ denote the $k$-th columns of $Q$ and $A_{1}$, respectively. Compare this minimization problem with the one in SPAI (Equation (3.1) in Section 3.1). Earlier, we were finding the preconditioner for $A_{2}$ by solving $\min\|I - A_{2}P_{2}\|_{F}$. Here, we are finding the preconditioner (i.e., $Q$, from which $P_{2} = QP_{1}$) by solving $\min\|A_{1} - A_{2}Q\|_{F}$. The second formulation is much easier to solve since, in the first, $A_{2}$ could be very different from $I$, while, in the second, $A_{2}$ and $A_{1}$ are similar (they differ only in the expansion points).
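Under the same assumptions as the sketch in Section 3.1, the update amounts to running the identical MR iteration with the columns of $A_{1}$ as right-hand sides instead of the columns of $I$ (spai_update is again a hypothetical name, and the initial guess mirrors the one used before):

    import numpy as np

    def spai_update(A1, A2, eps=1e-2, max_sweeps=50):
        # Q with ||A1 - A2 Q||_F small, so that P2 = Q P1 preconditions
        # A2 whenever A1 P1 is approximately I.
        n = A2.shape[0]
        B = A2 @ A2.T
        alpha = np.trace(B) / np.trace(B @ B.T)
        Q = alpha * (A2.T @ A1)
        for k in range(n):
            q = Q[:, k].copy()
            r = A1[:, k] - A2 @ q                 # residual a_k - A2 q_k
            sweeps = 0
            while np.linalg.norm(r) > eps and sweeps < max_sweeps:
                t = A2 @ r
                beta = (r @ t) / (t @ t)
                q += beta * r
                r -= beta * t
                sweeps += 1
            Q[:, k] = q
        return Q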

The SPAI algorithm (Algorithm 2), adapted to finding the preconditioner update by minimizing this new expression, is given in Algorithm 4. The inputs to this algorithm include $A_{1}$, $A_{2}$, and $\varepsilon$ (the stopping residual of the minimization problem for each column); $\varepsilon$ is picked based on experience. The initial guess for the approximate inverse is usually taken as $Q_{0} = \alpha A_{2}^{T}A_{1}$. Similar to before, $\alpha$ is chosen to minimize the spectral radius of $I - \alpha A_{2}A_{2}^{T}$.

1: Input: {$A_{1}$, $A_{2}$, $\varepsilon$}
2: $Q = Q_{0}$,
3: where $Q_{0} = \alpha A_{2}^{T}A_{1}$, $\alpha = \mathrm{tr}(A_{2}A_{2}^{T})\,/\,\mathrm{tr}\big((A_{2}A_{2}^{T})(A_{2}A_{2}^{T})^{T}\big)$, and $n$ is the dimension of $A_{2}$
4: for $k = 1, \ldots, n$ do
5:     Define $q_{k} = Qe_{k}$ and $a_{k} = A_{1}e_{k}$
6:     $r_{k} = a_{k} - A_{2}q_{k}$
7:     while $\|r_{k}\|_{2} > \varepsilon$ do
8:         $t_{k} = A_{2}r_{k}$
9:         $\beta_{k} = r_{k}^{T}t_{k} \,/\, t_{k}^{T}t_{k}$
10:        $q_{k} = q_{k} + \beta_{k}r_{k}$
11:        $r_{k} = r_{k} - \beta_{k}t_{k}$
12:        Set $Qe_{k} = q_{k}$
13:    end while
14: end for
Algorithm 4 : SPAI Update Preconditioner
1: $z = 1$
2: while no convergence do
3:     for $i = 1$ to $\ell$ do
4:         if ($z == 1$) then
5:             $A_{i}^{(1)} = \big(s_{i}^{(1)}\big)^{2}M + s_{i}^{(1)}D + K$
6:             Compute initial preconditioner $P_{i}^{(1)}$ by solving $\min\|I - A_{i}^{(1)}P_{i}^{(1)}\|_{F}$ (Algorithm 2)
7:             Solve $A_{i}^{(1)}\,X^{(0)}(s_{i}^{(1)}) = F$ with right preconditioner $P_{i}^{(1)}$
8:         else
9:             $A_{i}^{(z)} = \big(s_{i}^{(z)}\big)^{2}M + s_{i}^{(z)}D + K$
10:            Compute $Q_{i}^{(z)}$ by solving $\min\|A_{i}^{(z-1)} - A_{i}^{(z)}Q_{i}^{(z)}\|_{F}$ (Algorithm 4)
11:            Solve $A_{i}^{(z)}\,X^{(0)}(s_{i}^{(z)}) = F$
12:      with right preconditioner $P_{i}^{(z)} = Q_{i}^{(z)}P_{i}^{(z-1)}$ (applied in factored form, see below)
13:         end if
14:     end for
15:     $j = 1$
16:     while no convergence and $j \leq r_{max}/m$ do
17:         for $i = 1$ to $\ell$ do
18:             Only right hand sides are changing, so the above preconditioners can be
19:      applied as they are, i.e.,
20:      Solve $A_{i}^{(z)}\,X^{(j)}(s_{i}^{(z)}) = M\,X^{(j-1)}(s_{i}^{(z)})$
21:      with right preconditioner $P_{i}^{(z)}$
22:         end for
23:         $j = j + 1$
24:     end while
25:     $z = z + 1$
26: end while
Algorithm 5 : AIRGA with SPAI Update Preconditioner
Outer AIRGA Itn | Exp Pnt 1     | Exp Pnt 2     | Exp Pnt 3     | ... | Exp Pnt $\ell$
1               | $s_{1}^{(1)}$ | $s_{2}^{(1)}$ | $s_{3}^{(1)}$ | ... | $s_{\ell}^{(1)}$
2               | $s_{1}^{(2)}$ | $s_{2}^{(2)}$ | $s_{3}^{(2)}$ | ... | $s_{\ell}^{(2)}$
Table 1. Change in Expansion Points

In AIRGA, this update to the preconditioner can be done in two ways. To understand these, let us look at how the expansion points change in AIRGA. Table 1 shows the changing expansion points (labeled Exp Pnt) for two iterations of the while loop at line 3 of Algorithm 1. First, we can update the preconditioner when the expansion points change from $s_{1}^{(1)}$ to $s_{2}^{(1)}$, from $s_{2}^{(1)}$ to $s_{3}^{(1)}$, and so on (horizontal update). Second, we can update the preconditioner when the expansion points change from $s_{1}^{(1)}$ to $s_{1}^{(2)}$, from $s_{2}^{(1)}$ to $s_{2}^{(2)}$, and so on (vertical update). Since in AIRGA the vertical change in the expansion points is smaller (which means that vertically the coefficient matrices are closer), we use this strategy. Thus, we update the preconditioner from $P_{i}^{(1)}$ (which belongs to $A_{i}^{(1)}$) to $P_{i}^{(2)}$ (which belongs to $A_{i}^{(2)}$).

The AIRGA algorithm with the SPAI update preconditioner is given in Algorithm 5. Here, we only show those parts of the AIRGA algorithm that require changes. At line 10 of Algorithm 5, after solving for $Q_{i}^{(z)}$, ideally one would obtain the preconditioner at the $z$-th AIRGA iteration as $P_{i}^{(z)} = Q_{i}^{(z)}P_{i}^{(z-1)}$. Since this involves a matrix-matrix multiplication, it would be expensive and would defeat the purpose of using a SPAI update. Instead, we never explicitly build $P_{i}^{(z)}$, except at the first AIRGA iteration, where we directly obtain $P_{i}^{(1)}$ (see line 6 of Algorithm 5). From the second AIRGA iteration onwards, we pass all the $Q_{i}$'s (until one reaches the current iteration $z$), together with $P_{i}^{(1)}$, to the iterative solver and apply the preconditioner in this factored form.
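For illustration, the factored application might be realized as follows (our sketch): the explicitly built $P_{i}^{(1)}$ is applied first, followed by the stored update factors $Q_{i}^{(2)}, \ldots, Q_{i}^{(z)}$, so that the iterative solver only ever performs matrix-vector products with sparse factors:

    import scipy.sparse.linalg as spla

    def factored_preconditioner(P1, Qs):
        # Apply P_z = Q_z ... Q_2 P_1 to a vector without forming the product.
        n = P1.shape[0]
        def matvec(v):
            w = P1 @ v          # the directly built first preconditioner
            for Q in Qs:        # then each update factor, oldest first
                w = Q @ w
            return w
        return spla.LinearOperator((n, n), matvec=matvec)

    # Usage with, e.g., the CG solver from earlier in this section:
    # x, info = spla.cg(A_z, f, M=factored_preconditioner(P1, [Q2, Q3]))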