# Preconditioned Linear Solves for Parametric Model Order Reduction

Navneet Pratap Singh and Kapil Ahuja, Computer Science and Engineering, Indian Institute of Technology Indore, India
###### Abstract

There exist many classes of algorithms for computing reduced-order models of parametric dynamical systems, commonly termed parametric model order reduction algorithms. The main computational cost of these algorithms is in solving sequences of very large and sparse linear systems of equations, which depend predominantly on slowly varying parameter values. We focus on efficiently solving these linear systems, arising while reducing second-order linear dynamical systems, by iterative methods with appropriate preconditioners. We propose that the choice of the underlying iterative solver is problem dependent. Since for many parametric model order reduction algorithms the right-hand sides of the linear systems are available together, we propose the use of a block variant of the underlying iterative method.

Due to the constant increase in input model sizes and the number of parameters in them, computing a preconditioner in a parallel setting is increasingly becoming the norm. Since the Sparse Approximate Inverse (SPAI) preconditioner is a general preconditioner that can be naturally parallelized, we propose its use. Our most novel contribution is a technique to cheaply update the SPAI preconditioner while solving the parametrically changing linear systems. We support our proposed theory by numerical experiments, where we first show that using a block variant of the underlying iterative solver saves 80% of the computation time over the non-block version. Further, and more importantly, SPAI with updates saves 70% of the time over SPAI without updates.

###### keywords:
Parametric Model Order Reduction, Parametrically Dependent Linear Systems, Iterative Methods, SPAI Preconditioner, and Preconditioner Updates.
###### Msc:
[2010] 34C20, 65F10
journal: International Journal of Computer Mathematics

## 1 Introduction

Dynamical systems arise while modelling many engineering and scientific applications Ogata2001 (); ANTOULAS200419 (). These dynamical systems depend upon parameters, which vary with different design stages or computer experiments. Substantial work has been done for first order linear systems Ogata2001 (); ANTOULAS200419 (), and hence, we focus on second order linear systems here. Higher order systems can also be considered, which is part of our future work.

A parameterized second order linear dynamical system is usually of the form

$$M(p_j)\,\ddot{x}(t) + D(p_j)\,\dot{x}(t) + K(p_j)\,x(t) = B\,u(t), \qquad y(t) = C_1(p_j)\,\dot{x}(t) + C_2(p_j)\,x(t), \qquad (1)$$

where $M(p_j), D(p_j), K(p_j) \in \mathbb{R}^{n \times n}$ are the system matrices, and $p_j$, for $j = 1, \ldots, w$, are the parameters. Also, $x(t) \in \mathbb{R}^{n}$ is the vector of all states, and $u(t)$ and $y(t)$ are the inputs and the outputs of the system, respectively.

Parameterized dynamical systems of this type are usually very large in size. Solving such systems by traditional simulation methods is often very time-consuming. In these systems, the parameters have to be provided as fixed values that cannot be changed during simulation. Moreover, since many runs with differing parameter values are required lihong2014 (), the overall simulation cost is enormous.

Model order reduction, traditionally developed for non-parametric systems Meyer1996 (); Beattie2005 (); BONIN20161 (), is a very popular technique to overcome such issues. A reduced system can be derived by model order reduction that can then be used for simulation instead of the full system. This process often saves substantial simulation time. Model order reduction for parametric systems, termed parametric model order reduction, preserves the parameters of the original system as symbolic quantities in the reduced system. Whenever there is a change in the parameters, we need not recompute the reduced system. Instead, we simply use the changed parameters while solving the reduced system.

Many algorithms exist for model order reduction of parametrized second order linear dynamical systems. Some common ones are as follows: the Robust Algorithm for Parametric Model Order Reduction (RPMOR) lihong2014 (), which is based on moment matching; the Data-Driven Parametrized Model Reduction algorithm in the Loewner Framework (PMOR-L) Ionita2014 (); and the Parametric Balanced Truncation Model Reduction algorithm (PBTMR) sonN2017 (), which is based on a greedy approach. Next, we summarize these three algorithms, and then abstract out the computational bottleneck step of solving linear systems.

### 1.1 Robust Algorithm for Parametric Model Order Reduction

RPMOR lihong2014 () is a projection based model order reduction algorithm and is mainly used for the reduction of parametric first and second order linear dynamical systems. Here, the state variable is projected onto a smaller dimensional subspace. Let $V \in \mathbb{R}^{n \times r}$ (with $r \ll n$) be a projection matrix determined by RPMOR. Using $x(t) \approx V\hat{x}(t)$ in (1), we obtain the following system:

$$M(p_j)V\ddot{\hat{x}}(t) + D(p_j)V\dot{\hat{x}}(t) + K(p_j)V\hat{x}(t) - B u(t) = r(t), \qquad \hat{y}(t) = C^T V \hat{x}(t),$$

where $r(t)$ is the residual after projection. Applying the Galerkin approach by multiplying the first equation above by $V^T$, we get

$$V^T\big(M(p_j)V\ddot{\hat{x}}(t) + D(p_j)V\dot{\hat{x}}(t) + K(p_j)V\hat{x}(t) - B u(t)\big) = 0, \qquad \hat{y}(t) = C^T V \hat{x}(t), \quad \text{or}$$
$$\hat{M}(p_j)\ddot{\hat{x}}(t) + \hat{D}(p_j)\dot{\hat{x}}(t) + \hat{K}(p_j)\hat{x}(t) - \hat{B}u(t) = 0, \qquad \hat{y}(t) = \hat{C}^T \hat{x}(t), \qquad (2)$$

where $\hat{M}(p_j) = V^T M(p_j) V$, $\hat{D}(p_j) = V^T D(p_j) V$, $\hat{K}(p_j) = V^T K(p_j) V$, $\hat{B} = V^T B$, and $\hat{C} = V^T C$. We want $\hat{y}(t)$ to be nearly equal to $y(t)$ for all acceptable inputs.

The projection matrix $V$ can be determined in many ways. One common way is by moment matching Benner2015 (); lihong2014 (); Serkan2002 (), which is discussed next. In the frequency domain, (1) with $C_1(p_j) = 0$ and $C_2(p_j) = C$ is given as

$$\big(s^2 M(p_j) + s D(p_j) + K(p_j)\big)x(s) = B u(s), \qquad y(s) = C^T x(s),$$

where $s$ is the new parameter (the frequency variable corresponding to time $t$). The above equation can also be rewritten as

$$A(s, p_j)x(s) = B u(s), \qquad y(s) = C^T x(s), \qquad (3)$$

where $A(s, p_j) = s^2 M(p_j) + s D(p_j) + K(p_j)$ is the parametrized matrix. Next, the system in (3) is transformed to an affine form as

$$\big(A_0 + \tilde{s}_1 A_1 + \cdots + \tilde{s}_w A_w + \tilde{s}_{w+1} A_{w+1}\big)x(s) = B u(s), \qquad y(s) = C^T x(s), \qquad (4)$$

where the new parameters $\tilde{s}_1, \ldots, \tilde{s}_w$ and $\tilde{s}_{w+1}$ are some functions (polynomial, rational, etc.) of the parameters $p_j$ and the frequency variable $s$, respectively. Next, the state $x(s)$ in (4) is computed at the initial expansion point $(\tilde{s}_1^1, \ldots, \tilde{s}_{w+1}^1)$ (from here onwards we represent a set of parameters as an expansion point) as

$$x(s) = \big[I - (\sigma_1 M_1 + \ldots + \sigma_w M_w + \sigma_{w+1} M_{w+1})\big]^{-1}\big(A^{(1)}\big)^{-1} B u(s), \qquad (5)$$

where $\sigma_i = \tilde{s}_i - \tilde{s}_i^1$ and $M_i = -\big(A^{(1)}\big)^{-1} A_i$ for $i = 1, \ldots, w+1$, and

$$A^{(1)} = A_0 + \tilde{s}_1^1 A_1 + \tilde{s}_2^1 A_2 + \ldots + \tilde{s}_{w+1}^1 A_{w+1}. \qquad (6)$$

Applying Taylor series expansion on (5) we get

$$x(s) = \sum_{h=0}^{\infty}\big[\sigma_1 M_1 + \ldots + \sigma_w M_w + \sigma_{w+1} M_{w+1}\big]^h \tilde{B} u(s) = \sum_{h=0}^{\infty} x^{(h)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1})\, u(s), \qquad (7)$$

where

$$\begin{aligned}
\tilde{B} &= \big(A^{(1)}\big)^{-1} B,\\
x^{(0)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1}) &= \tilde{B},\\
x^{(1)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1}) &= \big[\sigma_1 M_1 + \ldots + \sigma_w M_w + \sigma_{w+1} M_{w+1}\big]\,x^{(0)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1}),\\
&\;\;\vdots\\
x^{(h)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1}) &= \big[\sigma_1 M_1 + \ldots + \sigma_w M_w + \sigma_{w+1} M_{w+1}\big]\,x^{(h-1)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1}).
\end{aligned}$$

Here, $x^{(h)}(\sigma_1, \ldots, \sigma_{w+1})$ is called the $h^{th}$-order system moment at $(\tilde{s}_1^1, \ldots, \tilde{s}_{w+1}^1)$. Similarly, for the reduced system (2), the state variable can be written as

$$\hat{x}(s) = \sum_{h=0}^{\infty}\hat{x}^{(h)}(\sigma_1, \ldots, \sigma_w, \sigma_{w+1})\, u(s). \qquad (8)$$

In the reduced system, the $h^{th}$-order system moment $\hat{x}^{(h)}$ is defined similarly to $x^{(h)}$. The goal of the moment matching approach is to find a reduced system such that the first few moments of (7) and (8) are matched. This provides the orthogonal projection matrix $V$. The columns of $V$ are given by the moment vectors $x^{(h)}(\sigma_1, \ldots, \sigma_{w+1})$, where $h = 0, 1, \ldots$.
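The recursion above translates directly into a basis-building procedure: compute the moment vectors at one expansion point and orthonormalize them. Below is a dense sketch of this idea; the function name, the numeric choice of the $\sigma_i$, and the sign convention $M_i = -(A^{(1)})^{-1}A_i$ are our assumptions, and a real implementation would factorize $A^{(1)}$ once rather than invert it.

```python
import numpy as np

def moment_projection_basis(A1, A_terms, sigmas, B, h_max):
    """Columns of V from the moments x^(0), ..., x^(h_max):
    x^(0) = (A^(1))^{-1} B and x^(h) = [sum_i sigma_i M_i] x^(h-1),
    with M_i = -(A^(1))^{-1} A_i (assumed convention)."""
    A1_inv = np.linalg.inv(A1)          # sketch only; use a factorization in practice
    x = A1_inv @ B                      # x^(0), i.e. B-tilde
    cols = [x]
    for _ in range(h_max):
        x = sum(-s * (A1_inv @ (Ai @ x)) for s, Ai in zip(sigmas, A_terms))
        cols.append(x)
    V, _ = np.linalg.qr(np.column_stack(cols))  # orthonormalize the moment vectors
    return V
```

The QR step at the end is what makes the resulting $V$ an orthogonal projection basis.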

After obtaining the first few columns of $V$ corresponding to the current expansion point, the above process is repeated with a new expansion point $(\tilde{s}_1^2, \ldots, \tilde{s}_{w+1}^2)$, and we get

$$A^{(2)} = A_0 + \tilde{s}_1^2 A_1 + \tilde{s}_2^2 A_2 + \ldots + \tilde{s}_{w+1}^2 A_{w+1}. \qquad (9)$$

A similar process can be used for the subsequent expansion points $(\tilde{s}_1^{\ell}, \ldots, \tilde{s}_{w+1}^{\ell})$ for $\ell = 3, 4, \ldots$, and we get

$$A^{(\ell)} = A_0 + \tilde{s}_1^{\ell} A_1 + \tilde{s}_2^{\ell} A_2 + \ldots + \tilde{s}_{w+1}^{\ell} A_{w+1}. \qquad (10)$$
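Assembling $A^{(\ell)}$ for each expansion point, as in (6), (9), and (10), is a weighted sum of fixed matrices. A minimal sketch (the function name and data layout are our own):

```python
import numpy as np

def assemble_affine(A0, A_terms, s_tilde):
    """A^(l) = A0 + s~_1^l A_1 + ... + s~_{w+1}^l A_{w+1}.

    A_terms holds [A_1, ..., A_{w+1}]; s_tilde holds the expansion
    point (s~_1^l, ..., s~_{w+1}^l).  Only the scalars change between
    expansion points, so the A_i can be assembled once and reused."""
    A = A0.copy()
    for s, Ai in zip(s_tilde, A_terms):
        A = A + s * Ai
    return A
```

Since only the scalar weights change from one expansion point to the next, the coefficient matrices in the sequence differ slowly, which is the property exploited later for cheap preconditioner updates.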

The work in lihong2014 () proposes a popular algorithm based upon this theory (Algorithm 6.1 in lihong2014 ()).

### 1.2 Other Parametric Model Order Reduction Algorithms

PMOR-L Ionita2014 () is a Loewner framework based model order reduction algorithm and is used for the model reduction of all types of parametric dynamical systems (linear and nonlinear; first order and higher orders). One important step here is computing a matrix called the Loewner matrix. The computation of this matrix requires the transfer function of the dynamical system. Since we are focusing on second order linear systems, this function for (1) with $C_1(p_j) = 0$ and $C_2(p_j) = C(p_j)$ is given by

$$H(s_k, p_j) = C(p_j)^T A(s_k, p_j)^{-1} B(p_j) \quad \text{for } k = 1, \ldots, v, \text{ and } j = 1, \ldots, w, \qquad (11)$$

where

$$A(s_k, p_j) = s_k^2 M(p_j) + s_k D(p_j) + K(p_j), \qquad (12)$$

and $B(p_j)$ and $C(p_j)$ are the parametrized input and output matrices, respectively. The variables $s_k$ and $p_j$ are the frequency variables and the parameters, respectively.

PBTMR sonN2017 () is based upon balanced truncation theory and is used for model reduction of parametric first and second order linear dynamical systems. Here, the second order system is transformed to the first order as

$$E(p_j)\dot{z}(t) = A(p_j)z(t) + B(p_j)u(t), \qquad y(t) = C(p_j)z(t), \qquad (13)$$

where

$$E(p_j) = \begin{bmatrix} -K(p_j) & 0 \\ 0 & M(p_j) \end{bmatrix}, \quad A(p_j) = \begin{bmatrix} 0 & -K(p_j) \\ -K(p_j) & -D(p_j) \end{bmatrix}, \quad z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}, \quad B(p_j) = \begin{bmatrix} 0 \\ B \end{bmatrix}. \qquad (14)$$

Like in the case of RPMOR, here also one needs to build a projection matrix $V$. This requires solving a Lyapunov equation of the form below for every parameter $p_j$ sonN2017 ().

$$\big(-E(p_j) \otimes A(p_j) - A(p_j) \otimes E(p_j)\big)\,\text{vec}\big(P(p_j)\big) = \text{vec}\big(B(p_j)B(p_j)^T\big), \qquad (15)$$

where $\text{vec}(\cdot)$ denotes vectorization of a matrix into a column vector. The above equation can be rewritten as

$$\mathcal{A}(p_j)\,z(p_j) = b(p_j), \qquad (16)$$

where $\mathcal{A}(p_j) = -E(p_j) \otimes A(p_j) - A(p_j) \otimes E(p_j)$, $z(p_j) = \text{vec}\big(P(p_j)\big)$, $b(p_j) = \text{vec}\big(B(p_j)B(p_j)^T\big)$, and $\otimes$ denotes the standard Kronecker product. Finally, the matrix $V$ is obtained as follows:

$$V = \big[z(p_1),\, z(p_2),\, \cdots,\, z(p_j)\big].$$
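The Kronecker rewriting above can be checked numerically. Below is a small sketch under the column-stacking vec convention; all names are ours, and for realistic sizes one would never form these Kronecker products explicitly.

```python
import numpy as np

def lyapunov_as_linear_system(E, A, B):
    """Form the system of (16): Acal z = b, with
    Acal = -E kron A - A kron E and b = vec(B B^T), so the solution
    z = vec(P) satisfies A P E^T + E P A^T = -B B^T.
    (Column-stacking vec: (X kron Y) vec(P) = vec(Y P X^T).)"""
    Acal = -np.kron(E, A) - np.kron(A, E)
    b = (B @ B.T).reshape(-1, order="F")   # vec(B B^T)
    return Acal, b
```

Solving `Acal z = b` and reshaping `z` in Fortran (column-major) order recovers a matrix `P` satisfying the corresponding Lyapunov equation.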

### 1.3 Solving Sequences of Linear Systems

All these algorithms require solving sequences of linear systems, which is a key computational bottleneck when using them to reduce large dynamical systems. All three algorithms lead to linear system matrices that depend on parameters and have a similar form.

The linear systems arising in RPMOR lihong2014 () are of the following form:

$$A^{(1)}x^{(0)} = B \quad \text{and} \quad A^{(1)}\big[x^{(1)}\ \cdots\ x^{(w+1)}\big] = \big[A_1 x^{(0)}\ \cdots\ A_{w+1}x^{(0)}\big], \qquad (17a)$$
$$\vdots$$
$$A^{(\ell)}x^{(0)} = B \quad \text{and} \quad A^{(\ell)}\big[x^{(1)}\ \cdots\ x^{(w+1)}\big] = \big[A_1 x^{(0)}\ \cdots\ A_{w+1}x^{(0)}\big], \qquad (17b)$$

where $B$ is given in (1); $A_1, \ldots, A_{w+1}$ are given in (4); and $A^{(\ell)}$ for $\ell = 1, 2, \ldots$ are given in (6), (9), and (10).

To compute the transfer function in PMOR-L Ionita2014 (), one needs to solve sequences of linear systems as

$$\begin{cases} A(s_1,p_1)x^{(11)} = B(p_1),\\ A(s_1,p_2)x^{(12)} = B(p_2),\\ \quad\vdots\\ A(s_1,p_j)x^{(1j)} = B(p_j), \end{cases} \quad \begin{cases} A(s_2,p_1)x^{(21)} = B(p_1),\\ A(s_2,p_2)x^{(22)} = B(p_2),\\ \quad\vdots\\ A(s_2,p_j)x^{(2j)} = B(p_j), \end{cases} \quad \cdots \quad \begin{cases} A(s_k,p_1)x^{(k1)} = B(p_1),\\ A(s_k,p_2)x^{(k2)} = B(p_2),\\ \quad\vdots\\ A(s_k,p_j)x^{(kj)} = B(p_j), \end{cases} \qquad (18)$$

where $A(s_k, p_j)$ and $B(p_j)$, for $k = 1, \ldots, v$ and $j = 1, \ldots, w$, are given in (11)–(12). This gives us the transfer function $H(s_k, p_j)$.

Solving the Lyapunov equations in PBTMR sonN2017 () gives rise to the sequence of the linear systems as follows

$$\begin{cases} \mathcal{A}(p_1)z^{(1)} = b(p_1),\\ \mathcal{A}(p_2)z^{(2)} = b(p_2),\\ \quad\vdots\\ \mathcal{A}(p_j)z^{(j)} = b(p_j), \end{cases} \qquad (19)$$

where $\mathcal{A}(p_j)$ and $b(p_j)$, for $j = 1, \ldots, w$, are given in (13)–(16).

If the dimensions of the coefficient matrices above (i.e., $A^{(\ell)}$ for $\ell = 1, 2, \ldots$; $A(s_k, p_j)$ and $\mathcal{A}(p_j)$ for $k = 1, \ldots, v$ and $j = 1, \ldots, w$) are very large, one should use iterative methods (instead of direct methods) to solve the above linear systems, since they scale well. The time complexity of direct methods is $O(n^3)$, whereas for iterative methods it is $O(n \cdot nnz)$, where $n$ represents the number of unknowns and $nnz$ is the number of non-zeros in the system matrix. In RPMOR lihong2014 (), the right hand side vectors of the second equations in (17a) and (17b) are available together. Hence, one can easily solve these linear systems simultaneously. For this, we can use a block version of the relevant iterative method OLEARY1980293 (); SIMONCINI1996457 (); Parks2006 ().

Preconditioning is a technique commonly used to accelerate the performance of iterative methods. In the algorithms above, the linear system matrices change with the change in parameters; however, this change is small.

Since computing a new preconditioner for every new linear system is expensive, we propose a cheap preconditioner update that avoids this. Here, we compute a preconditioner for the initial linear system very accurately, and for the subsequent linear systems, we use this initial preconditioner along with a cheap update. Similar updates have been proposed for quantum Monte Carlo (QMC) ahuja2011improved (), model order reduction of non-parametric first order linear dynamical systems grim2015reusing (); Wyatt2012 (), and model order reduction of non-parametric second order linear dynamical systems Navneet2016 (), but not for parametric model order reduction (and hence, not specifically for model order reduction of parametric second order linear dynamical systems, which is our focus). The main challenge in this cheap-update approach is to generate the best sequence of preconditioners corresponding to the parametric coefficient matrices.

The main contributions of this paper are as follows: Section 2 discusses the use of iterative methods and preconditioners in this context. We propose our cheap preconditioner update techniques here as well. To support our theory, numerical results are provided in Section 3. Finally, we give conclusions and discuss future work in Section 4.

## 2 Our Approach

Here, we first discuss preconditioned iterative methods in Section 2.1. Next, for accelerating the iterative method, we discuss preconditioners in Section 2.2. We propose the theory of cheap preconditioner updates in Section 2.3. Finally, we discuss an application of preconditioner updates to the earlier discussed parametric model order reduction algorithms in Section 2.4.

### 2.1 Iterative Methods

For solving linear systems of equations, either direct methods or iterative methods are used. If a linear system is of large size, as discussed earlier, iterative methods are preferred over direct methods because the latter are too expensive in terms of both storage and operations.

Krylov subspace based methods are a very popular class of iterative methods Greenbaum1997 (); Saad2003 (); van2003iterative (). There are many types of Krylov subspace methods. Some commonly used ones are Conjugate Gradient (CG), Generalized Conjugate Residual with Orthogonalization (GCRO), Generalized Minimal Residual (GMRES), Minimum Residual (MINRES), and BiConjugate Gradient (BiCG) Greenbaum1997 (); Saad2003 (); van2003iterative (). The choice of method is problem dependent.
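As a concrete illustration, calling an off-the-shelf Krylov solver looks as follows. This is a minimal sketch using SciPy's GMRES on a toy diagonally dominant tridiagonal system; the matrix and sizes are ours, chosen only for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# toy sparse system: diagonally dominant tridiagonal matrix
n = 100
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b)   # info == 0 signals convergence
```

For well-conditioned systems like this one GMRES converges quickly; the systems arising in model order reduction are typically far less friendly, which motivates the preconditioners discussed below.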

As discussed in Section 1, if the linear systems have multiple right hand sides (available together), then one can solve such linear systems by block iterative methods. This concept was introduced for the first time with the Conjugate Gradient (CG) method OLEARY1980293 (). A similar study with GMRES was proposed in SIMONCINI1996457 (). Here, we give a brief overview of block iterative methods. Let the linear system with multiple right-hand sides be given as

$$AX = B,$$

where $A \in \mathbb{R}^{n \times n}$, $X \in \mathbb{R}^{n \times m}$, and $B \in \mathbb{R}^{n \times m}$. Given $X_0$ as the initial solution and $R_0 = B - AX_0$ (i.e., the initial residual), these methods build the block Krylov subspace $\mathcal{K}_q(A, R_0) = \text{span}\{R_0, AR_0, \ldots, A^{q-1}R_0\}$, and find the solution in it OLEARY1980293 (); SIMONCINI1996457 ().
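A dense sketch of this projection idea is given below, with $X_0 = 0$ so that $R_0 = B$; the names and the explicit Galerkin solve are our simplifications of what block CG/GMRES do implicitly and more efficiently.

```python
import numpy as np

def block_krylov_solve(A, B, q):
    """Galerkin projection of AX = B onto the block Krylov subspace
    K_q(A, R0) = span{R0, A R0, ..., A^{q-1} R0}, with X0 = 0 (R0 = B)."""
    blocks = [B]
    for _ in range(q - 1):
        blocks.append(A @ blocks[-1])
    V, _ = np.linalg.qr(np.hstack(blocks))       # orthonormal basis of K_q
    Y = np.linalg.solve(V.T @ A @ V, V.T @ B)    # Galerkin condition on the residual
    return V @ Y
```

All $m$ right-hand sides share one subspace, which is where the savings over $m$ separate solves come from; with $q$ large enough the projection becomes exact.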

### 2.2 Preconditioners

Preconditioning is used to accelerate the performance of iterative methods. If $P$ is a non-singular matrix that approximates the inverse of $A$ (that is, $AP \approx I$), then the system $APy = b$ with $x = Py$ may be faster to solve than the original one (i.e., $Ax = b$). Here, we use right preconditioning, i.e. the preconditioner is applied to the right of the linear system matrix; a similar analysis can be done with left preconditioning, i.e. with the preconditioner on the left of the linear system matrix. For most of the dynamical systems reduced by the earlier discussed algorithms lihong2014 (); Ionita2014 (); sonN2017 (), the iterative methods stagnate or converge slowly (see the Numerical Results section). Hence, we use a preconditioner.

Besides making the system easier to solve by an iterative method, a preconditioner should be cheap to construct and apply. Some existing preconditioning techniques include Successive Over Relaxation, Polynomial Based, Incomplete Factorizations, Sparse Approximate Inverse (SPAI), and Algebraic Multi-Grid Benzi2002 (). SPAI preconditioners are known to work in the most general setting and can be easily parallelized Benzi2002 (); saadchow (). Among the others, Incomplete Factorizations are also general, but they cannot be easily parallelized. Hence, we use a parallel version of SPAI. We briefly discuss the SPAI preconditioner next.

In constructing a preconditioner $P$ for a coefficient matrix $A$, we would like $AP \approx I$ ($I$ is the identity matrix). The SPAI preconditioner finds $P$ by minimizing the associated error norm $\|I - AP\|$. If the norm used is the Frobenius norm, then the minimization problem becomes

$$\min_P \|I - AP\|_F.$$

This minimization problem can be rewritten as

$$\min_P \|I - AP\|_F^2 = \sum_{i=1}^{n}\min_{p_i} \|e_i - A p_i\|_2^2,$$

where $e_i$ and $p_i$ are the $i^{th}$ columns of the $I$ and $P$ matrices, respectively. This minimization problem is just a least squares problem, to be solved for $n$ different right hand sides saadchow (); Alexander2007 ().
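A dense sketch of this column-wise least squares construction is given below. It is illustrative only: practical SPAI restricts each column to a chosen sparsity pattern so that every per-column problem stays small, and solves the independent problems in parallel.

```python
import numpy as np

def spai_dense(A):
    """Minimize ||I - A P||_F one column at a time:
    column p_i solves the independent problem min ||e_i - A p_i||_2."""
    n = A.shape[0]
    I = np.eye(n)
    P = np.empty((n, n))
    for i in range(n):
        P[:, i] = np.linalg.lstsq(A, I[:, i], rcond=None)[0]
    return P
```

With no sparsity constraint, as here, the minimizer is simply $A^{-1}$; the sparsity constraint is what makes SPAI cheap and the result sparse.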

### 2.3 Theory of Cheap Preconditioner Updates

In general, the sequences of linear systems (17a)-(17b), (18), and (19) can be written as

$$\begin{aligned} A^{(1)}X^{(1)} &= B^{(1)},\\ A^{(2)}X^{(2)} &= B^{(2)},\\ &\;\;\vdots\\ A^{(i)}X^{(i)} &= B^{(i)}, \end{aligned} \qquad (20)$$

where $A^{(\ell)} \in \mathbb{R}^{n \times n}$ and $X^{(\ell)}, B^{(\ell)} \in \mathbb{R}^{n \times m}$. Let $P_1$ be a good preconditioner for $A^{(1)}$ (i.e., computed by $\min_{P_1}\|I - A^{(1)}P_1\|_F$); then $P_\ell$ (the preconditioner corresponding to $A^{(\ell)}$, for $\ell = 2, \ldots, i$) can be computed as given in Table 1.

The approaches provided in Table 1 have competing trade-offs. In the first approach, the minimization is harder to solve because $A^{(\ell-1)}$ and $A^{(\ell)}$ would be closer than $A^{(1)}$ and $A^{(\ell)}$ (since the sequence of matrices in (20) changes slowly). However, $P_\ell$ is more accurate since $P_1$ is very accurate (see Table 1). In the second approach, the minimization is easier by the same argument as earlier, while the preconditioner at each step is less accurate ($P_\ell$ is formed from $P_{\ell-1}$, which already has approximation errors).

In non-parametric model order reduction, the relative difference between $A^{(1)}$ and $A^{(\ell)}$ is substantial, which means $A^{(1)}$ and $A^{(\ell)}$ are further apart (expansion points change rapidly; however, they are still "close" enough to apply a cheap update (see Wyatt2012 (); Navneet2016 ())). Hence, the first approach is not very efficient there, and the second approach fits well Navneet2016 (). In the parametric case, the change from $A^{(1)}$ to $A^{(\ell)}$ is such that $A^{(1)}$ and $A^{(\ell)}$ are not too far apart (or the relative difference between them is not substantial). Hence, the minimization problem $\min_{Q_\ell}\|A^{(1)} - A^{(\ell)}Q_\ell\|_F$ is almost as easy to solve as $\min_{Q_\ell}\|A^{(\ell-1)} - A^{(\ell)}Q_\ell\|_F$. Since the first approach has the extra advantage of less loss of accuracy while building the preconditioner after minimization ($P_\ell = Q_\ell P_1$ as compared to $P_\ell = Q_\ell P_{\ell-1}$), we propose its use. The experimental results support this argument as well.

To summarize, when using basic SPAI we need to solve $\min_{P_\ell}\|I - A^{(\ell)}P_\ell\|_F$, which we transform first to $\min_{P_\ell}\|A^{(1)}P_1 - A^{(\ell)}P_\ell\|_F$, and subsequently to $\min_{Q_\ell}\|A^{(1)} - A^{(\ell)}Q_\ell\|_F$ with $P_\ell = Q_\ell P_1$. This last formulation is usually much easier to solve since $A^{(1)}$ and $A^{(\ell)}$ are close to each other (they differ only through the expansion points), as compared to the first formulation, where $I$ and $A^{(\ell)}$ could be very different.

### 2.4 Application of Cheap Preconditioner Updates

Recall (17a)-(17b) in RPMOR lihong2014 (); let $A^{(1)}$ and $A^{(\ell)}$ be two coefficient matrices for different expansion points $(\tilde{s}_1^1, \ldots, \tilde{s}_{w+1}^1)$ and $(\tilde{s}_1^{\ell}, \ldots, \tilde{s}_{w+1}^{\ell})$, respectively. If the difference between $A^{(1)}$ and $A^{(\ell)}$ is small, then one can exploit this while building preconditioners for this sequence of matrices.

Let $P_1$ be a good initial preconditioner for $A^{(1)}$. Then, a cheap preconditioner update can be obtained by enforcing $A^{(\ell)}P_\ell = A^{(1)}P_1$, where $\ell = 2, 3, \ldots$, up to the number of expansion points. Expressing $A^{(\ell)}$ in terms of $A^{(1)}$ we get

$$A^{(\ell)} = A^{(1)}\Big[I + (\tilde{s}_1^{\ell} - \tilde{s}_1^1)\big(A^{(1)}\big)^{-1}A_1 + (\tilde{s}_2^{\ell} - \tilde{s}_2^1)\big(A^{(1)}\big)^{-1}A_2 + \cdots + (\tilde{s}_{w+1}^{\ell} - \tilde{s}_{w+1}^1)\big(A^{(1)}\big)^{-1}A_{w+1}\Big].$$

Now we enforce $A^{(\ell)}P_\ell = A^{(1)}P_1$, which gives

$$P_\ell = \Big[I + (\tilde{s}_1^{\ell} - \tilde{s}_1^1)\big(A^{(1)}\big)^{-1}A_1 + (\tilde{s}_2^{\ell} - \tilde{s}_2^1)\big(A^{(1)}\big)^{-1}A_2 + \cdots + (\tilde{s}_{w+1}^{\ell} - \tilde{s}_{w+1}^1)\big(A^{(1)}\big)^{-1}A_{w+1}\Big]^{-1}P_1.$$

Let

$$Q_\ell = \Big[I + (\tilde{s}_1^{\ell} - \tilde{s}_1^1)\big(A^{(1)}\big)^{-1}A_1 + \cdots + (\tilde{s}_{w+1}^{\ell} - \tilde{s}_{w+1}^1)\big(A^{(1)}\big)^{-1}A_{w+1}\Big]^{-1},$$

so that $P_\ell = Q_\ell P_1$; then the above implies $A^{(\ell)}Q_\ell = A^{(1)}$. This leads us to the idea that instead of solving $\min_{P_\ell}\|I - A^{(\ell)}P_\ell\|_F$ for $P_\ell$, we solve the simpler problem given below

$$\min_{Q_\ell}\big\|A^{(1)} - A^{(\ell)}Q_\ell\big\|_F^2 = \sum_{i=1}^{n}\min_{(q_\ell)_i}\Big\|\big(a^{(1)}\big)_i - A^{(\ell)}(q_\ell)_i\Big\|_2^2,$$

where $\big(a^{(1)}\big)_i$ and $(q_\ell)_i$ denote the $i^{th}$ columns of $A^{(1)}$ and $Q_\ell$, respectively.
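Putting the pieces together, a dense sketch of this update is given below; the names are ours, and in practice the least squares problems would be solved with sparsity constraints, as in SPAI.

```python
import numpy as np

def cheap_spai_update(A1, Al, P1):
    """Solve min_Q ||A^(1) - A^(l) Q||_F (n independent column-wise
    least squares problems) and set P_l = Q P_1, so that
    A^(l) P_l ~= A^(1) P_1 ~= I."""
    Q = np.linalg.lstsq(Al, A1, rcond=None)[0]
    return Q @ P1
```

Because $A^{(1)}$ and $A^{(\ell)}$ differ only through the expansion points, this minimization is much better behaved than recomputing the preconditioner from scratch against the identity.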

Next, we look at a cheap update for PMOR-L Ionita2014 (). Here, we can express the relation between the coefficient matrices of two consecutive linear systems in two ways: first, by capturing the changes in the frequency variables $s_k$, and second, by capturing the changes in the parameters $p_j$. In general, the savings in computation time are less in the first case. This is because, as earlier, frequency variables change more rapidly than parameters. We discuss both cases here in the above order.

Let $A(s_1, p_1)$ and $A(s_k, p_1)$ be two coefficient matrices for the parameter $p_1$ and different values of the frequency variable $s_k$ for $k = 2, \ldots, v$ (recall (12) and (18)). Now, expressing $A(s_k, p_1)$ in terms of $A(s_1, p_1)$ we get

$$A(s_k, p_1) = A(s_1, p_1)\big[I + (s_k^2 - s_1^2)A(s_1,p_1)^{-1}M(p_1) + (s_k - s_1)A(s_1,p_1)^{-1}D(p_1)\big].$$

Enforcing $A(s_k, p_1)P_{k1} = A(s_1, p_1)P_{11}$ we have

$$\begin{aligned} A(s_1,p_1)P_{11} &= A(s_1,p_1)\big[I + (s_k^2 - s_1^2)A(s_1,p_1)^{-1}M(p_1) + (s_k - s_1)A(s_1,p_1)^{-1}D(p_1)\big]\\ &\quad\cdot \big[I + (s_k^2 - s_1^2)A(s_1,p_1)^{-1}M(p_1) + (s_k - s_1)A(s_1,p_1)^{-1}D(p_1)\big]^{-1}P_{11}\\ &= A(s_k,p_1)P_{k1}, \end{aligned}$$

where $P_{k1} = \big[I + (s_k^2 - s_1^2)A(s_1,p_1)^{-1}M(p_1) + (s_k - s_1)A(s_1,p_1)^{-1}D(p_1)\big]^{-1}P_{11}$.
Let $Q_{k1}$ be such that $P_{k1} = Q_{k1}P_{11}$; then the above implies $A(s_k,p_1)Q_{k1} = A(s_1,p_1)$. This leads us to the idea that instead of solving $\min_{P_{k1}}\|I - A(s_k,p_1)P_{k1}\|_F$ for $P_{k1}$, we solve the simpler problem given below

$$\min_{Q_{k1}}\|A(s_1,p_1) - A(s_k,p_1)Q_{k1}\|_F^2.$$

Similarly, for any parameter $p_j$ we solve for $Q_{kj}$ from

$$\min_{Q_{kj}}\|A(s_1,p_j) - A(s_k,p_j)Q_{kj}\|_F^2.$$

In the second case, we are unable to express the two linear systems in terms of each other, i.e. $A(s_k, p_j)$ in terms of $A(s_k, p_1)$, unless we know how the matrices $M$, $D$, and $K$ from (1) depend on $p_j$. However, this can be easily worked out once the input dynamical system is known. We give this derivation for a commonly used example from Ionita2014 () (the paper that proposed PMOR-L) in Appendix I.

A cheap update for PBTMR sonN2017 () can be worked out as below. Let $\mathcal{A}(p_1)$ and $\mathcal{A}(p_j)$ be two coefficient matrices for the parameters $p_1$ and $p_j$, respectively, i.e.

$$\begin{aligned} \mathcal{A}(p_1) &= -E(p_1)\otimes A(p_1) - A(p_1)\otimes E(p_1)\\ &= -\begin{bmatrix}-K(p_1)&0\\0&M(p_1)\end{bmatrix}\otimes\begin{bmatrix}0&-K(p_1)\\-K(p_1)&-D(p_1)\end{bmatrix} - \begin{bmatrix}0&-K(p_1)\\-K(p_1)&-D(p_1)\end{bmatrix}\otimes\begin{bmatrix}-K(p_1)&0\\0&M(p_1)\end{bmatrix},\\ \mathcal{A}(p_j) &= -E(p_j)\otimes A(p_j) - A(p_j)\otimes E(p_j)\\ &= -\begin{bmatrix}-K(p_j)&0\\0&M(p_j)\end{bmatrix}\otimes\begin{bmatrix}0&-K(p_j)\\-K(p_j)&-D(p_j)\end{bmatrix} - \begin{bmatrix}0&-K(p_j)\\-K(p_j)&-D(p_j)\end{bmatrix}\otimes\begin{bmatrix}-K(p_j)&0\\0&M(p_j)\end{bmatrix}. \end{aligned}$$

Here, we are unable to express the $j^{th}$ linear system in terms of the first linear system (i.e. $\mathcal{A}(p_j)$ in terms of $\mathcal{A}(p_1)$). However, we can see that the two linear systems have structural similarities (only the parameters vary). We can abstract out the relationship between these linear systems if the structure of $M(p_j)$, $D(p_j)$, and $K(p_j)$ is more explicitly known. Hence, we have worked out a cheap preconditioner update for a popular example as used in the paper that proposed PBTMR, i.e. sonN2017 (), in Appendix II.

## 3 Numerical Results

We demonstrate our proposed preconditioned iterative solver theory using RPMOR lihong2014 () as our candidate parametric model order reduction algorithm and a micro-gyroscope model lihong2013 () as our test dynamical system (an earlier version of RPMOR in lihong2013 () is tested on a micro-gyroscope model). This model is a parametric Single Input Single Output (SISO) second order linear dynamical system, and is given as

$$s^2 M(d)x + s D(\theta, \alpha, \beta, d)x + K(d)x = B u(s), \qquad y = Cx,$$

where