Simulation Algorithms with Exponential Integration for Time-Domain Analysis of Large-Scale Power Delivery Networks

Hao Zhuang, Wenjian Yu, Shih-Hung Weng, Ilgweon Kang, Jeng-Hau Lin, Xiang Zhang, Ryan Coutts, and Chung-Kuan Cheng

Manuscript received in May 2015; revised in August 2015 and November 2015; accepted in January 2016. This work was supported in part by NSF-CCF 1017864, NSFC 61422402 and NSFC 61274033. H. Zhuang, I. Kang, J.-H. Lin, and C.-K. Cheng are with the Department of Computer Science and Engineering, University of California, San Diego, CA (e-mail: hao.zhuang@cs.ucsd.edu, igkang@ucsd.edu, jel252@eng.ucsd.edu, ckcheng@ucsd.edu). W. Yu is with Tsinghua National Laboratory for Information Science and Technology, the Department of Computer Science and Technology, Tsinghua University, Beijing, China (e-mail: yu-wj@tsinghua.edu.cn). X. Zhang and R. Coutts are with the Department of Electrical and Computer Engineering, University of California, San Diego, CA (e-mail: xiz110@eng.ucsd.edu, rmcoutts@eng.ucsd.edu). S.-H. Weng was with the Department of Computer Science and Engineering, University of California, San Diego, CA. He is now with Facebook, Inc., Menlo Park, CA, USA (e-mail: shweng@fb.com).
Abstract

We design an algorithmic framework using matrix exponentials for the time-domain simulation of power delivery networks (PDNs). Our framework can reuse factorized matrices to simulate a large-scale linear PDN system with variable step sizes. In contrast, conventional PDN simulation solvers must use a fixed step-size approach in order to reuse the factorized matrices generated by expensive matrix decomposition. Based on the proposed exponential integration framework, we design a PDN solver, R-MATEX, with flexible time-stepping capability. The key operation, the matrix exponential and vector product (MEVP), is computed by the rational Krylov subspace method.

To further improve the runtime, we also propose a distributed computing framework, DR-MATEX. DR-MATEX reduces the number of Krylov subspace generations caused by frequent breakpoints from a large number of current sources during simulation. By virtue of the superposition property of linear systems and the scaling invariance property of Krylov subspaces, DR-MATEX divides the whole simulation task into subtasks based on the alignments of breakpoints among those sources. The subtasks are processed in parallel at different computing nodes without any communication during the transient simulation. The final result is obtained by summing up the partial results among all the computing nodes after they finish their assigned subtasks. Therefore, our computation model belongs to the category known as the Embarrassingly Parallel model.

Experimental results show that R-MATEX and DR-MATEX can achieve substantial runtime speedups over the traditional trapezoidal integration based solver with the fixed time-step approach.

Index Terms: Circuit simulation, power delivery/distribution networks, power grid, time-domain simulation, transient simulation, matrix exponential, Krylov subspace method, parallel processing.

I Introduction

Modern VLSI design verification relies heavily on the analysis of the power delivery network (PDN) to estimate power supply noise [1, 2, 3, 4, 5, 6, 7, 8]. The performance of the power delivery network strongly impacts the quality of global, detailed and mixed-size placement [9, 10, 11], clock tree synthesis [12], global and detailed routing [13], as well as timing [14] and power optimization. Lowering supply voltages, increasing current densities, and tight design margins demand more accurate large-scale PDN simulation. Advanced technologies [15, 16], three-dimensional (3D) IC structures [17, 18, 19], and the increasing complexity of system designs all make VLSI PDNs extremely large and the simulation tasks time-consuming and computationally challenging. Due to the enormous size of modern designs and the long simulation runtime over many cycles, instead of general nonlinear circuit simulation [20, 21], the PDN is often modeled as a large-scale linear circuit with voltage supplies and time-varying current sources [22, 23, 24]. The linear matrices are obtained by a parasitic extraction process [25, 26, 27, 28, 4]. After these processes, we need time-domain large-scale linear circuit simulation to obtain the transient behavior of the PDN with the above inputs.

Traditional methods for linear circuit simulation solve differential algebraic equations (DAEs) numerically in explicit ways, e.g., forward Euler (FE), or implicit ways, e.g., backward Euler (BE) and trapezoidal (TR), which are all based on low order polynomial approximations of the DAEs [29]. Due to the stiffness of the system, which comes from the wide range of time constants of a circuit, explicit methods require extremely small time step sizes to ensure stability. In contrast, implicit methods can handle this problem with relatively large time steps because of their larger stability regions. However, at each time step, these methods have to solve a linear system, which is sparse and often ill-conditioned. Because a robust solution is required, direct methods [31] are often favored over iterative methods [30] for VLSI circuit simulation, and were thus adopted by the state-of-the-art power grid (PG) solvers in the TAU PG simulation contest [32, 33, 34]. Those solvers require only one matrix factorization (LU or Cholesky factorization) at the beginning of the transient simulation. Then, at each fixed time step, the transient computation requires only pairs of forward and backward substitutions, which achieves better efficiency than adaptive stepping methods by reusing the factorized matrix [34, 32, 24] in an implicit numerical integration framework. However, the maximum step size is limited by the smallest distance among the breakpoints [35]. Some engineering efforts break this limitation by sacrificing accuracy. In our work, we always obey the upper limit of the time step to maintain the fidelity of the model, which means the fixed time step cannot exceed this limit, lest breakpoints be missed.

Beyond traditional methods, a class of methods based on matrix exponential time integration has been embraced by MEXP [36]. The major complexity lies in the matrix exponential computations. MEXP utilizes the standard Krylov subspace method [37] to approximate the matrix exponential and vector product. MEXP can solve the DAEs with much higher order polynomial approximations than traditional methods [36, 37]. Nevertheless, when simulating stiff circuits with the standard Krylov subspace method, a large subspace dimension is required to preserve the accuracy of the MEXP approximation, which poses a memory bottleneck and degrades the adaptive stepping performance of MEXP.

Nowadays, emerging multi-core and many-core platforms bring powerful computing resources and opportunities for parallel computing. Moreover, cloud computing techniques [38] drive distributed systems to scale to thousands of computing nodes [39, 40, 41]. Distributed computing systems have been incorporated into the products of many leading EDA companies and in-house simulators [42, 43, 44, 45, 46]. However, building a scalable and efficient distributed algorithmic framework for transient linear circuit simulation that leverages these powerful computing tools is still a challenge. The papers [47, 48] show great potential for parallelizing matrix exponential based methods to improve runtime performance while maintaining high accuracy.

In this work, we develop a transient simulation framework using a matrix exponential integration scheme, MATEX, for PDN simulation. The following are the challenges we need to address. First, when the circuit is stiff, the standard Krylov subspace has convergence problems and slows down the computation of MEVP. Second, the frequent time breakpoints due to the transitions in the PDN current source models trigger repeated generations of Krylov subspaces. Therefore, we may gain performance where we can leverage large time steps, but we also lose runtime where breakpoints force small step sizes. Our contributions are listed below:

  • MEVP in MATEX is efficiently computed by the rational or invert Krylov subspace method. Compared to the commonly adopted framework using TR with fixed time steps (TR-FTS), the proposed MATEX can reuse the matrices factorized at the beginning of the transient simulation while performing flexible adaptive time stepping.

  • Among the different Krylov subspace methods, we find that the rational Krylov subspace is the best strategy for MEVP in PDN simulation. Therefore, we design R-MATEX based on it and achieve substantial runtime speedup on the benchmarks over the traditional TR-FTS method with good accuracy.

  • Furthermore, DR-MATEX is designed to improve R-MATEX with distributed computing resources.

    • First, PDN’s current sources are partitioned into groups based on their alignments. They are assigned to different computing nodes. Each node runs its corresponding PDN transient simulation task and has no communication overhead with other nodes.

    • After all nodes finish the simulation computations, the results are summed up based on the linear superposition property of the PDN system.

    • The proposed current source partition reduces the chances of generating Krylov subspaces and prolongs the time periods over which each node reuses its computed subspace, which brings a huge computational advantage and achieves substantial speedup over the traditional TR-FTS method.

The rest of this paper is organized as follows. Sec. II introduces the background of linear circuit simulation and matrix exponential formulations. Sec. III illustrates the Krylov techniques to accelerate matrix exponential and vector product computation. Sec. IV presents MATEX circuit solver and the parallel framework DR-MATEX. Sec. V shows numerical results and Sec. VI concludes this paper.

II Background

II-A Transient Simulation of Linear Circuits

The transient simulation of linear circuits is the foundation of modern PDN simulation. It is formulated as a DAE via modified nodal analysis (MNA),

$$C\frac{dx(t)}{dt} = -G x(t) + B u(t), \qquad (1)$$

where $C$ is the matrix for capacitive and inductive elements, $G$ is the matrix for conductance and resistance, and $B$ is the input selector matrix. $x(t)$ is the vector of time-varying node voltages and branch currents, and $u(t)$ is the vector of supply voltages and current sources. In a PDN, such current sources are often characterized as pulse or piecewise-linear inputs [22, 24] to represent the activities in the network. To solve Eq. (1) numerically, the system is discretized with a time step and transformed into a linear algebraic system. Given an initial condition $x(t)$ from DC analysis or a previous time step and a time step $h$, $x(t+h)$ can be obtained by traditional low order approximation methods [29].
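The MNA formulation above can be made concrete with a tiny example. The following sketch (in Python/NumPy rather than the paper's MATLAB; all element values are hypothetical) stamps $C$, $G$ and $B$ for a 3-node RC ladder and checks the DC operating point $0 = -Gx + Bu$:

```python
import numpy as np

# Toy MNA stamp, C dx/dt = -G x + B u, for a 3-node RC ladder:
# 1-ohm resistors between supply/node1, node1/node2, node2/node3,
# a grounded capacitor at each node, and a current source at node 3.
g = 1.0   # conductance of each resistor (hypothetical value)
c = 1e-3  # capacitance at each node (hypothetical value)

G = np.array([[2*g, -g, 0.0],
              [-g, 2*g, -g],
              [0.0, -g,  g]])
C = np.diag([c, c, c])
B = np.array([[0.0], [0.0], [1.0]])  # current injected at node 3

u = np.array([1e-3])                 # 1 mA input
# DC operating point: 0 = -G x + B u  =>  x_dc = G^{-1} B u
x_dc = np.linalg.solve(G, B @ u)
print(x_dc)                          # node voltages rise toward the source
```

The same $(C, G, B)$ triple then feeds any of the integration schemes discussed below.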

Ii-B Traditional Low Order Time Integration Schemes

II-B1 BE

Backward Euler based time integration (Eq. (2)) is a robust implicit first-order method:

$$\left(\frac{C}{h} + G\right) x(t+h) = \frac{C}{h}\, x(t) + B u(t+h). \qquad (2)$$

II-B2 TR

Trapezoidal based time integration (Eq. (3)) is a popular implicit second-order method:

$$\left(\frac{C}{h} + \frac{G}{2}\right) x(t+h) = \left(\frac{C}{h} - \frac{G}{2}\right) x(t) + B\,\frac{u(t) + u(t+h)}{2}. \qquad (3)$$

It is probably the most commonly used strategy for large-scale circuit simulation, since it has higher accuracy than BE.

II-B3 BE-FTS and TR-FTS

BE and TR with a fixed time step (FTS) are efficient approaches, adopted by the top PG solvers in the 2012 TAU PG simulation contest [34, 32, 33, 24]. If a single step size $h$ is used for the entire simulation, its choice is limited by the minimum distance between breakpoints [35] among all the input sources. In Fig. 1(a), that minimum distance is the upper limit for $h$ in BE-FTS and TR-FTS. When the alignments of the inputs change as shown in Fig. 1(b), with one source shifted relative to another, the resulting upper limit for $h$ becomes even smaller for the fixed step-size approaches. If $h$ is larger than the limit, accuracy cannot be guaranteed, since pivot points of the inputs may be skipped.
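The defining property of TR-FTS is that the left-hand-side matrix is factorized once and every subsequent step costs only one pair of forward/backward substitutions. A minimal sketch (Python/SciPy; the 2-node RC values and the step input are hypothetical):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# TR-FTS sketch: factorize (C/h + G/2) once, then time-march using only
# forward/backward substitutions (lu_solve) with a fixed step h.
G = np.array([[2.0, -1.0], [-1.0, 1.0]])   # toy conductance matrix
C = np.diag([1e-3, 1e-3])                  # toy capacitance matrix
B = np.eye(2)
h = 1e-4                                   # fixed step <= min breakpoint gap

u = lambda t: np.array([0.0, 1e-3])        # constant 1 mA source at node 2

lhs = lu_factor(C / h + G / 2)             # one-time LU factorization
x = np.zeros(2)                            # zero initial condition
t = 0.0
for _ in range(200):
    rhs = (C / h - G / 2) @ x + B @ ((u(t) + u(t + h)) / 2)
    x = lu_solve(lhs, rhs)                 # reuse the factors every step
    t += h
print(x)   # settles toward the DC solution G^{-1} B u
```

The trade-off discussed in the text is visible here: changing `h` would require re-factorizing `lhs`, which is why FTS solvers keep `h` fixed at the smallest breakpoint distance.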

Fig. 1: Example: interleaving two input sources creates a smaller transition time. (a) The smallest transition time of the input sources before interleaving; (b) after interleaving, the smallest transition time shrinks.

II-C Matrix Exponential Time Integration Scheme

The solution of Eq. (1) can be obtained analytically [29]. For a simple illustration, when $C$ is not singular,111The assumption is to simplify the explanation in this section. After Sec. III-B, we use I-MATEX, R-MATEX and DR-MATEX to compute the solution of the DAE without inversion of $C$. Therefore, the methods are suitable for a general DAE system, i.e., Eq. (1), without the assumption here. we convert Eq. (1) into

$$\frac{dx(t)}{dt} = A x(t) + b(t), \qquad (4)$$

where $A = -C^{-1}G$ and $b(t) = C^{-1}B u(t)$.

Given a solution $x(t)$ at time $t$ and a time step $h$, the solution at $t+h$ is

$$x(t+h) = e^{hA} x(t) + \int_{0}^{h} e^{(h-\tau)A}\, b(t+\tau)\, d\tau. \qquad (5)$$

Assuming that the input $u(t)$ is a piecewise linear (PWL) function of $t$, we can integrate the last term of Eq. (5) analytically, turning the solution into one with the matrix exponential operator:

$$x(t+h) = e^{hA}\!\left(x(t) + A^{-1} b(t) + A^{-2}\frac{b(t+h)-b(t)}{h}\right) - A^{-1} b(t+h) - A^{-2}\frac{b(t+h)-b(t)}{h}. \qquad (6)$$

For the time step choice, breakpoints (also known as input transition spots (TS) [47]) are the time points where the slopes of the input vector $u(t)$ change. Therefore, for Eq. (6), the maximum time step starting from $t$ is $t_s - t$, where $t_s$ is the smallest breakpoint larger than $t$. In the matrix exponential based framework, the limitation on the time step size is not the local truncation error (LTE), but the activities among all input sources.
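The analytic PWL integration above can be checked numerically. The sketch below (hypothetical 2x2 system; `expm` stands in for the Krylov-based MEVP of Sec. III) implements one exponential-integrator step for $x' = Ax + b(t)$ with $b(t+s) = b_0 + s\,b_1$ on the segment:

```python
import numpy as np
from scipy.linalg import expm, solve

# One exponential step for x' = A x + b(t), b piecewise linear on [t, t+h]:
# x(t+h) = e^{hA} x + A^{-1}(e^{hA}-I) b0 + (A^{-2}(e^{hA}-I) - h A^{-1}) b1,
# which is an equivalent rearrangement of the Eq.-(6)-style update.
def matex_step(A, x, b0, b1, h):
    E = expm(h * A)
    I = np.eye(A.shape[0])
    term0 = solve(A, (E - I) @ b0)                       # A^{-1}(e^{hA}-I) b0
    term1 = solve(A, solve(A, (E - I) @ b1)) - h * solve(A, b1)
    return E @ x + term0 + term1

A = np.array([[-3.0, 1.0], [1.0, -2.0]])                 # hypothetical system
x0 = np.array([1.0, 0.0])
b0, b1 = np.array([0.5, 0.0]), np.array([0.0, 2.0])      # value and slope
h = 0.3
x1 = matex_step(A, x0, b0, b1, h)
```

Because the input is integrated exactly, the step size is limited only by the next breakpoint, matching the claim that LTE is not the constraint here.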

III Krylov Subspace Methods for Matrix Exponential and Vector Product (MEVP)

In PDN simulation, the dimension $n$ is usually in the millions, which makes the direct computation of the matrix exponential infeasible. The alternative is to compute the product $e^{hA}v$ with a Krylov subspace method [37]. In this section, we first introduce the background of the standard Krylov subspace for MEVP. Then, we discuss the invert (I-MATEX) and rational (R-MATEX) Krylov subspace methods, which greatly improve the runtime performance of MEVP.

III-A MEXP: MEVP Computation via Standard Krylov Subspace Method

The complexity of computing $e^{hA}v$ can be reduced using the Krylov subspace method while still maintaining a high order polynomial approximation [37]. MEXP [36] uses the standard Krylov subspace, which uses $A$ directly to generate the subspace basis through the Arnoldi process (Algorithm 1). First, we reformulate Eq. (6) into

$$x(t+h) = e^{hA}\, v(t,h) - F(t,h), \qquad (7)$$

where

$$v(t,h) = x(t) + A^{-1} b(t) + A^{-2}\frac{b(t+h)-b(t)}{h} \qquad (8)$$

and

$$F(t,h) = A^{-1} b(t+h) + A^{-2}\frac{b(t+h)-b(t)}{h}. \qquad (9)$$

The standard Krylov subspace

$$K_m(A, v) = \mathrm{span}\{v, Av, A^2 v, \ldots, A^{m-1} v\} \qquad (10)$$

obtained by the Arnoldi process has the relation

$$A V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}, \qquad (11)$$

where $H_m$ is the upper Hessenberg matrix

$$H_m = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,m} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,m} \\ & \ddots & \ddots & \vdots \\ & & h_{m,m-1} & h_{m,m} \end{bmatrix}, \qquad (12)$$

$V_m$ is an $n \times m$ matrix whose columns form an orthonormal basis of $K_m(A,v)$, and $e_m$ is the $m$-th unit vector of dimension $m$. MEVP is computed via

$$e^{hA} v \approx \|v\|\, V_m e^{hH_m} e_1. \qquad (13)$$

The posterior error term is

$$\epsilon \approx \|v\|\, h_{m+1,m} \left| e_m^{T} e^{hH_m} e_1 \right|, \qquad (14)$$

where $e_1$ is the first unit vector. However, for an autonomous system in circuit simulation, we consider the residual between $A\tilde{x}(t)$ and $d\tilde{x}(t)/dt$ of the approximation $\tilde{x}(t) = \|v\| V_m e^{tH_m} e_1$, which is

$$r(t) = A\tilde{x}(t) - \frac{d\tilde{x}(t)}{dt}$$

instead of the forward error of $e^{hA}v$ itself. This leads to

$$r(h) = \|v\|\, h_{m+1,m} \left( e_m^{T} e^{hH_m} e_1 \right) v_{m+1} \qquad (15)$$

and helps us mitigate the overestimation of the error bound.

To generate the subspace by Algorithm 1, we use the LU factorization

$$[L, U] = \mathrm{LU\_Decompose}(C), \qquad (16)$$

so that each Arnoldi step applies $A = -C^{-1}G$ via one pair of forward and backward substitutions, $w = -U^{-1}(L^{-1}(G v_j))$, as inputs for the standard Krylov subspace. The error budget $\epsilon$ and Eq. (14) are used to determine convergence when the time step is $h$ and the Krylov subspace dimension is $m$ (the convergence check in Algorithm 1).

Input: $L$, $U$, time step $h$, vector $v$, error budget $\epsilon$
Output: $V_m$, $H_m$, and $x(t+h)$
1 $v_1 = v / \|v\|$;
2 for $j = 1, 2, \ldots$ do
       $w = U^{-1}(L^{-1}(M v_j))$;
       /* a pair of forward and backward substitutions; the factors $L$, $U$ and the matrix $M$ depend on the chosen Krylov variant (Secs. III-A to III-C). */
3      for $i = 1, \ldots, j$ do
4            $h_{i,j} = w^{T} v_i$;
5            $w = w - h_{i,j} v_i$;
6      end ;
7      $h_{j+1,j} = \|w\|$; $v_{j+1} = w / h_{j+1,j}$;
8      Compute the error estimate $\epsilon_j$ for step size $h$ and dimension $j$;
9      if $\epsilon_j < \epsilon$ then
10           $m = j$; compute $x(t+h)$ from $V_m$ and $H_m$;
11           break;
12     end
13 end ;
Algorithm 1 MATEX_Arnoldi
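A compact executable sketch of the standard-Krylov variant of Algorithm 1 follows (Python/NumPy; the dense matrix-vector product `A @ v` stands in for the LU-based substitutions, and the test matrix is a hypothetical non-stiff example):

```python
import numpy as np
from scipy.linalg import expm

# Arnoldi process on A, then Eq.-(13)-style approximation:
#   exp(hA) v  ~=  ||v|| * V_m * exp(h H_m) e_1
def arnoldi_expv(A, v, h, m):
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]                   # in MATEX: forward/backward solves
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-10:           # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ expm(h * H[:m, :m])[:, 0]

rng = np.random.default_rng(0)
n = 50
A = -np.diag(rng.uniform(0.1, 5.0, n))    # mildly stiff diagonal test matrix
v = rng.standard_normal(n)
approx = arnoldi_expv(A, v, h=0.5, m=20)
```

For this well-conditioned spectrum a modest `m` already matches the dense `expm` result; the next subsections address the stiff case where this is no longer true.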

The standard Krylov subspace may not be efficient for simulating stiff circuits [36, 49]. To approximate $e^{hA}v$ accurately, a large Krylov subspace basis dimension is required, which not only adds computational complexity but also consumes a huge amount of memory. The reason is that the Hessenberg matrix of the standard Krylov subspace tends to approximate the large magnitude eigenvalues of $A$ first [50]. Due to the exponential decay of the corresponding terms in the Taylor expansion, such components are not the crux of the circuit system's behavior [51, 50]. Therefore, to simulate a stiff circuit, we need to gather more vectors into the subspace basis and increase the size of $H_m$ to fetch the useful components, which adds both memory overhead and computational cost to the Krylov subspace generation at each time step. In the following subsections, we adopt ideas from spectral transformation [51, 50] to effectively capture the small magnitude eigenvalues of $A$, leading to fast and accurate MEVP computation.

III-B I-MATEX: MEVP Computation via Invert Krylov Subspace Method

Instead of $A$, we use $A^{-1}$ (i.e., $-G^{-1}C$) as our target matrix to form

$$K_m(A^{-1}, v) = \mathrm{span}\{v, A^{-1}v, \ldots, A^{-(m-1)}v\}. \qquad (17)$$

Intuitively, by inverting $A$, the small magnitude eigenvalues become the large ones of $A^{-1}$. The resulting subspace is likely to capture these eigenvalues first. Based on the Arnoldi algorithm, the invert Krylov subspace has the relation

$$A^{-1} V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}. \qquad (18)$$

The matrix exponential is calculated as

$$e^{hA} v \approx \|v\|\, V_m e^{h H_m^{-1}} e_1. \qquad (19)$$

To put this method into Algorithm 1, we only modify the inputs for the LU decomposition in Eq. (16) to factorize $G$ instead of $C$; each Arnoldi step then computes

$$w = -U^{-1}(L^{-1}(C v_j))$$

for the invert Krylov version. The posterior error approximation [47] is

(20)

which is derived from the residual based error approximation in [51]. However, as mentioned in Sec. III-A, we consider the residual of the autonomous system $dx/dt = Ax$, instead of the forward error, which leads to

$$\|r(h)\| = \|v\|\, h_{m+1,m} \left| e_m^{T} H_m^{-1} e^{h H_m^{-1}} e_1 \right| \cdot \|A v_{m+1}\|. \qquad (21)$$

We use Eq. (21) in the convergence check of Algorithm 1.
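The invert-Krylov idea can be sketched in a few lines (Python/SciPy; the diagonal test matrix is hypothetical, chosen with only four distinct eigenvalues so that the Arnoldi process hits a "happy breakdown" and the small subspace is exact despite a stiffness of 1000):

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

# I-MATEX sketch: Arnoldi on A^{-1} (one LU solve per step), then
#   exp(hA) v  ~=  ||v|| * V_m * exp(h H_m^{-1}) e_1        (Eq. (19) form)
def invert_krylov_expv(A, v, h, m):
    n = len(v)
    lu = lu_factor(A)                     # factorize once, reuse every step
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = lu_solve(lu, V[:, j])         # w = A^{-1} v_j
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-10:           # happy breakdown: subspace exact
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hk = H[:k, :k]
    return beta * V[:, :k] @ expm(h * np.linalg.inv(Hk))[:, 0]

rng = np.random.default_rng(0)
d = np.tile([1.0, 10.0, 100.0, 1000.0], 10)   # stiffness |1000/1| = 1000
A = -np.diag(d)
v = rng.standard_normal(40)
approx = invert_krylov_expv(A, v, h=0.01, m=10)
```

Note how the slow (small magnitude) eigenvalues of $A$ dominate the inverted spectrum, which is exactly the mechanism the text describes.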

III-C R-MATEX: MEVP Computation via Rational Krylov Subspace Method

The shift-and-invert Krylov subspace basis [50] is designed to confine the spectrum of $A$. We generate the Krylov subspace via

$$K_m\!\left((I - \gamma A)^{-1}, v\right) = \mathrm{span}\{v, (I-\gamma A)^{-1} v, \ldots, (I-\gamma A)^{-(m-1)} v\}, \qquad (22)$$

where $\gamma$ is a predefined parameter. With this shift, all the eigenvalue magnitudes of $I - \gamma A$ are larger than one; the inverse then limits the eigenvalue magnitudes of $(I - \gamma A)^{-1}$ to be smaller than one. According to [51, 50], the shift-and-invert basis for matrix exponential based transient simulation is not very sensitive to $\gamma$, once it is set around the order of the time steps used in the transient simulation. A similar idea has been applied to simple power grid simulation with a matrix exponential method [52]. Here, we generalize this technique and integrate it into MATEX. The Arnoldi process constructs $V_m$ and $H_m$ with

$$(I - \gamma A)^{-1} V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}. \qquad (23)$$

We can project $e^{hA}v$ onto the rational Krylov subspace as follows:

$$e^{hA} v \approx \|v\|\, V_m e^{h \tilde{H}_m} e_1, \quad \text{where } \tilde{H}_m = \frac{I - H_m^{-1}}{\gamma}. \qquad (24)$$

In the corresponding step of Algorithm 1, $x(t+h)$ is computed with Eq. (24). Following the same procedure as [47, 51], the posterior error approximation is derived as

(25)

Note that, in practice, instead of computing $(I - \gamma A)^{-1}$ directly, $(C + \gamma G)^{-1} C$ is utilized, since $(I - \gamma A)^{-1} = (I + \gamma C^{-1}G)^{-1} = (C + \gamma G)^{-1} C$. The corresponding Arnoldi process shares the same skeleton as Algorithm 1, with the input matrices

$$[L, U] = \mathrm{LU\_Decompose}(C + \gamma G)$$

for the LU decomposition of Eq. (16), and each step computing

$$w = U^{-1}(L^{-1}(C v_j)).$$

The residual estimate is

$$\|r(h)\| = \frac{\|v\|\, h_{m+1,m}}{\gamma} \left| e_m^{T} H_m^{-1} e^{h \tilde{H}_m} e_1 \right| \cdot \left\| (I - \gamma A)\, v_{m+1} \right\|. \qquad (26)$$

Then, we plug Eq. (26) into the convergence check of Algorithm 1.
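The rational variant works directly at the $(C, G)$ level: only $(C + \gamma G)$ is factorized. A runnable sketch (hypothetical diagonal test system with four distinct modes, so the subspace becomes exact after a happy breakdown):

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

# R-MATEX sketch: Arnoldi on (C + gamma*G)^{-1} C, then recover exp(hA)v
# with A = -C^{-1}G via the projected matrix (I - H_m^{-1}) / gamma.
def rational_expv(C, G, v, h, gamma, m):
    n = len(v)
    lu = lu_factor(C + gamma * G)         # single factorization
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = lu_solve(lu, C @ V[:, j])     # (C + gamma*G)^{-1} C v_j
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-10:           # happy breakdown: subspace exact
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    A_proj = (np.eye(k) - np.linalg.inv(H[:k, :k])) / gamma
    return beta * V[:, :k] @ expm(h * A_proj)[:, 0]

rng = np.random.default_rng(0)
C = np.eye(40)
G = np.diag(np.tile([1.0, 10.0, 100.0, 1000.0], 10))   # stiff toy system
v = rng.standard_normal(40)
approx = rational_expv(C, G, v, h=0.01, gamma=0.01, m=10)
```

Setting `gamma` on the order of the step size, as the text recommends, keeps the transformed spectrum well inside the unit disk.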

III-D Regularization-Free MEVP Computation

When $C$ is a singular matrix, MEXP [36] needs a regularization process [53] to remove the singularity of the DAE in Eq. (1), because MEXP must factorize $C$ directly to form the input for Algorithm 1. This brings extra computational overhead when the case is large [53]. It is unnecessary if we can obtain the generalized eigenvalues and corresponding eigenvectors of the matrix pencil $(-G, C)$. Based on [54], we derive the following lemma.

Lemma 1.

Consider a homogeneous system

$$C\frac{dx(t)}{dt} = -G x(t).$$

If $v$ and $\lambda$ are an eigenvector and eigenvalue of the matrix pencil $(-G, C)$, then

$$x(t) = e^{\lambda t} v$$

is a solution of the system.

Proof.
222We repeat the proof from [54] with some modifications for our formulation.

If $\lambda$ and $v$ are an eigenvalue and eigenvector of the generalized eigenvalue problem

$$-G v = \lambda C v,$$

then $C\frac{d}{dt}\!\left(e^{\lambda t} v\right) = \lambda e^{\lambda t} C v = e^{\lambda t}(-G v) = -G\!\left(e^{\lambda t} v\right)$. Therefore, $x(t) = e^{\lambda t} v$ is a solution of the system. ∎

Because we do not need to compute $C^{-1}$ explicitly during the Krylov subspace generation, I-MATEX and R-MATEX are regularization-free. Instead, we factorize $G$ for the invert Krylov subspace basis generation (I-MATEX), or $C + \gamma G$ for the rational Krylov subspace basis (R-MATEX).333It also applies to the later work on DR-MATEX in Sec. IV-B. Besides, their Hessenberg matrices, Eq. (12), are invertible; they contain the corresponding important generalized eigenvalues/eigenvectors of the matrix pencil $(-G, C)$, which define the behavior of the linear dynamical system of interest in Eq. (1).

III-E Comparisons among Different Krylov Subspace Algorithms for MEVP Computation

To observe the error distribution versus the dimensions of the standard, invert, and rational Krylov subspace methods for MEVP, we construct an RC circuit with stiffness

$$\kappa = \frac{|\lambda_{max}|}{|\lambda_{min}|},$$

where $\lambda_{max}$ and $\lambda_{min}$ are the eigenvalues of $A$ with the maximum and minimum magnitudes. Fig. 2 shows the relative error reduction along the increasing Krylov subspace dimension. The error reduction rate of the rational Krylov subspace is the best, while the standard Krylov subspace requires a huge dimension to reach the same level of error: it needs a far larger subspace than the invert and rational Krylov subspace methods to achieve a comparable relative error. The relative error is

$$\frac{\|\tilde{x} - x\|}{\|x\|},$$

where $\tilde{x}$ is the Krylov approximation and $x$ the reference. The matrix $A$ here is relatively small, and the reference $e^{hA}v$ is computed by a MATLAB built-in function; it serves as the baseline for accuracy. The relative error is the real relative difference compared to the analytical solution of the ODE

$$\frac{dx(t)}{dt} = A x(t)$$

with an initial vector $v$ generated by a MATLAB built-in function.

The error reduction rate of the standard Krylov subspace is the worst, while that of the rational Krylov subspace is the best; this is why we prefer the rational Krylov subspace (R-MATEX). For reference, we also measure the relative errors of BE, TR and FE at the same step size. The large error of FE is due to the instability of its low order explicit time integration scheme. As Fig. 2 shows, at the same subspace dimension, the invert and rational Krylov subspace methods achieve far smaller errors than the standard method, which illustrates the power of the matrix exponential method. Our proposed methods are all stable and achieve improved error numbers.

To observe the effects of different stiffness on the Krylov subspace methods, we change the entries in $C$ and $G$ to produce different stiffness values $\kappa$. Fig. 3 illustrates the stable reduction rate of the rational method. Stiffness degrades the performance of the standard Krylov subspace method. Both the invert and rational Krylov subspace methods are good candidates for stiff circuit systems.

Fig. 2: The relative error vs. dimension of different Krylov subspace methods. Note: the relative error is the difference compared to the analytical solution of the ODE $dx/dt = Ax$ with an initial vector $v$ generated by a MATLAB built-in function, with positive entries.
Fig. 3: The relative error vs. dimension of different Krylov subspace methods under varying stiffness. The rational Krylov subspace has a very stable error reduction rate. The number in the bracket represents the stiffness value of the system.

Regarding the relative error distributions vs. time step $h$ and dimension $m$: Fig. 4, Fig. 5, and Fig. 6 are computed by the standard, invert, and rational Krylov subspaces, respectively. Fig. 4 shows that the errors generated by the standard Krylov subspace method have a flat region of high error values in the time-step range of interest; only the small ('unrealistic') time step range has small error values. Compared to Fig. 4, the invert (Fig. 5) and rational (Fig. 6) Krylov subspace methods reduce the errors quickly for large $h$. The explanation is that a relatively small portion of the eigenvalues and corresponding invariant subspaces determines the final result (vector) when the time step is large [50], and these are efficiently captured by the invert and rational Krylov subspace methods.

Fig. 4: The error of MEVP via the standard Krylov subspace vs. time step $h$ and dimension $m$ of the basis. The standard Krylov subspace approximates the solution well for extremely small $h$, since it captures the important eigenvalues and eigenvectors of $A$ in that region. However, such small $h$ is not useful for circuit simulation. For large $h$, it costs a large $m$ to reduce the error.
Fig. 5: The error of MEVP via the invert Krylov subspace vs. time step $h$ and dimension $m$ of the basis. Compared to Fig. 4, the invert Krylov subspace method reduces the errors for large $h$.
Fig. 6: The error of MEVP via the rational Krylov subspace vs. time step $h$ and dimension $m$ of the basis. Compared to Fig. 4, the rational Krylov subspace method reduces the errors for large $h$, as in Fig. 5.

The error of the rational Krylov subspace is relatively insensitive to $\gamma$ when it is selected within the time-step range of interest (Fig. 7). Above all, the rational (R-MATEX) and invert (I-MATEX) Krylov subspace methods perform much better than the standard version. When we deal with stiff cases, the standard Krylov subspace is not a feasible choice, because the large subspace dimension causes huge memory consumption and poor runtime performance.

Fig. 7: The error of MEVP via the rational Krylov subspace vs. $\gamma$. The flat region shows the error is relatively insensitive to $\gamma$ when $\gamma$ is in the range of step sizes of interest.

IV MATEX Framework

IV-A MATEX Circuit Solver

We incorporate the matrix exponential based integration scheme with the Krylov subspace method into our MATEX framework, which is summarized in Algorithm 2. We set the LU factors $L$ and $U$ based on the choice of Krylov subspace method as follows:

  • I-MATEX: $[L, U] = \mathrm{LU\_Decompose}(G)$;

  • R-MATEX: $[L, U] = \mathrm{LU\_Decompose}(C + \gamma G)$.

For the linear PDN system, the matrix factorization is performed only once, at the beginning, and the matrices $L$ and $U$ are reused throughout the while loop of Algorithm 2. Inside the loop, the Arnoldi process with the corresponding inputs constructs the Krylov subspace, as shown in Algorithm 1.

Input: $C$, $G$, $B$, $u(t)$, error budget $\epsilon$, and time span $[0, T]$.
Output: The set of solutions $x(t)$ from $t = 0$ to $T$.
1 Set $t = 0$;
2 $[L, U] = \mathrm{LU\_Decompose}(\cdot)$, based on the chosen Krylov variant;
3 $x(0) = $ DC_analysis;
4 while $t < T$ do
5       Compute the maximum allowed step size $h$;
6       Update the vector $v = v(t, h)$ by Eq. (8);
7       Obtain $V_m$, $H_m$, $x(t+h)$ by Algorithm 1 with inputs $(L, U, h, v, \epsilon)$;
8       $t = t + h$;
end
Algorithm 2 MATEX Circuit Solver
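The stepping policy of Algorithm 2 can be sketched end-to-end on a small system (Python/SciPy; the dense `expm` update stands in for the Krylov-based Algorithm 1, and the breakpoint times and element values are hypothetical):

```python
import numpy as np
from scipy.linalg import expm, solve

# The step size always jumps to the next input breakpoint; within a PWL
# segment the exponential update is exact, so 4 segments need only 4 steps.
def pwl(t, ts, vs):                        # piecewise-linear input u(t)
    return np.interp(t, ts, vs)

A = np.array([[-2.0, 1.0], [1.0, -3.0]])
ts = np.array([0.0, 0.2, 0.5, 0.6, 1.0])   # breakpoints of the source
vs = np.array([0.0, 1.0, 1.0, 0.0, 0.0])   # PWL values at the breakpoints
bvec = np.array([1.0, 0.0])                # input enters node 1

def exp_step(x, t, h):                     # exact PWL update, Eq.-(6) form
    b0 = bvec * pwl(t, ts, vs)
    b1 = bvec * (pwl(t + h, ts, vs) - pwl(t, ts, vs)) / h
    E = expm(h * A)
    I = np.eye(2)
    return (E @ x + solve(A, (E - I) @ b0)
            + solve(A, solve(A, (E - I) @ b1)) - h * solve(A, b1))

x = np.zeros(2)
steps = 0
t = 0.0
for t_next in ts[1:]:                      # one exponential step per segment
    x = exp_step(x, t, t_next - t)
    t = t_next
    steps += 1
print(steps, x)
```

A fixed-step solver would instead take `T / h_min` steps, which is the runtime gap the paper exploits.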

IV-B DR-MATEX (Distributed R-MATEX Framework) by Decomposition of Input Sources, Linear Superposition, and Parallel Computation Model

IV-B1 Motivation

There are usually many input sources in a PDN, and their transition activities might narrow the regions available for the stepping of the matrix exponential based method due to unaligned breakpoints. In other words, the region before the next transition may be shortened when there is a lot of activity among the input sources. This leads to more occasions for generating new Krylov subspace bases. We want to reduce the number of subspace generations and improve the runtime performance.444The breakpoints put the same constraint on TR-FTS and BE-FTS. However, their time steps are fixed already, so they never face this problem in the first place.

IV-B2 Treatment and Methodology

In the matrix exponential based integration framework, once a Krylov subspace basis has been computed at time $t$, we can obtain the solution at any time spot $t + s$ before the next breakpoint by scaling the existing Hessenberg matrix with the time step $s$:

$$x(t+s) = \|v\|\, V_m e^{s \tilde{H}_m} e_1 - F(t, s). \qquad (27)$$

This is an important feature for computing the solutions at intermediate time points without regenerating the Krylov subspace basis when there is no current transition. Besides, since the PDN is a linear dynamical system, we can utilize the well-known superposition property of linear systems and a distributed computing model to tackle this challenge.
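The scaling invariance can be demonstrated directly: the Arnoldi pair $(V, H)$ is generated once, and each later time point re-evaluates only the small exponential. A sketch (Python/NumPy; a small hypothetical symmetric matrix with $m = n$ so the subspace is exact):

```python
import numpy as np
from scipy.linalg import expm

# Eq.-(27)-style reuse: (V, H) is generated ONCE; each snapshot time s
# only recomputes exp(s*H) on the small Hessenberg matrix.
rng = np.random.default_rng(1)
n = m = 8
M = rng.standard_normal((n, n))
A = -(M @ M.T) / n - np.eye(n)             # symmetric negative definite
v = rng.standard_normal(n)

beta = np.linalg.norm(v)
V = np.zeros((n, m))
H = np.zeros((m, m))
V[:, 0] = v / beta
for j in range(m):                         # plain Arnoldi, run once
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = w @ V[:, i]
        w -= H[i, j] * V[:, i]
    if j + 1 < m:
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]

# Three snapshot times share the same V and H; only exp(s*H) changes.
snapshots = {s: beta * V @ expm(s * H)[:, 0] for s in (0.05, 0.1, 0.4)}
```

In DR-MATEX each node exploits exactly this: Snapshot points between two LTS points cost only a small dense exponential, not a new subspace.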

To illustrate our distributed version of the MATEX framework, we first define three terms to categorize the breakpoints of the input sources:

  • Local Transition Spot (LTS): the set of breakpoints of a single input source to the PDN.

  • Global Transition Spot (GTS): the union of the LTS of all the input sources to the PDN.

  • Snapshot: the set GTS∖LTS of one input source.

If we simulate the PDN with respect to all the input sources, the points in GTS are the places where generations of the Krylov subspace cannot be avoided. For example, consider a PDN with three input sources (Fig. 8), whose waveforms are shown in Fig. 9. The first line is the GTS, contributed by the union of the LTS of input sources #1, #2 and #3. However, we can partition the task into subtasks by simulating each input source individually. Then, each subtask generates Krylov subspaces based only on its own LTS and keeps track of its Snapshot for the later summation via linear superposition. Between two LTS points, the Snapshot points can reuse the Krylov subspace generated at the preceding LTS point. For each node, the number of Krylov subspace generations is reduced, and the time periods for reusing the latest Krylov subspace are enlarged locally, which brings the runtime improvement. Besides, once the subtasks are assigned, there is no communication among the computing nodes, which leads to the so-called Embarrassingly Parallel computation model.
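The correctness of the per-source decomposition rests on linear superposition, which can be checked in a few lines (Python/SciPy; a hypothetical 2-node system with two constant sources and zero initial state):

```python
import numpy as np
from scipy.linalg import expm

# Superposition behind DR-MATEX: simulate each source alone (zero-state
# response) and sum; the sum equals the response to all sources together.
A = np.array([[-2.0, 0.5], [0.5, -1.0]])
B = np.eye(2)
h = 0.25

def zero_state_step(u_const):              # x(h) for x(0) = 0, constant input
    E = expm(h * A)
    return np.linalg.solve(A, (E - np.eye(2)) @ (B @ u_const))

u1 = np.array([1.0, 0.0])                  # source 1 only
u2 = np.array([0.0, 3.0])                  # source 2 only
x_sum = zero_state_step(u1) + zero_state_step(u2)
x_all = zero_state_step(u1 + u2)
```

In the full framework each node additionally carries the shared initial condition once, so the summed partial results reproduce the single-machine solution exactly.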

Fig. 8: Part of a PDN model with input sources from Fig. 9.
Fig. 9: Illustration of input transitions. GTS: Global Transition Spots; LTS: Local Transition Spots; Snapshot: the crossing positions of the dashed lines with each LTS line, excluding its solid points.
Fig. 10: Grouping of "bump" shape transitions for sub-task simulation. The matrix exponential based method can utilize adaptive stepping within each group and reuse the Krylov subspace basis generated at the latest LTS point. However, traditional methods (TR, BE, etc.) still need to march in time, either by pairs of forward and backward substitutions with a fixed time step, or by re-factorizing the matrix and solving the linear system for adaptive stepping. (Pulse input parameters: delay time, rise time, pulse width, fall time, and period.)
Fig. 11: DR-MATEX: the distributed MATEX framework using the R-MATEX circuit solver.

IV-B3 More Aggressive Task Decomposition

We divide the simulation task based on the alignments of the input sources. More aggressively, we can decompose the task according to the "bump" shapes of the input sources.555IBM power grid benchmarks provide the pulse input model in SPICE format. We group the input sources that share the same pulse parameters (delay time, rise time, pulse width, fall time, and period) into one set. For example, input source #1 of Fig. 9 is divided into #1.1 and #1.2 in Fig. 10, and input source #2 in Fig. 9 is divided into #2.1 and #2.2 in Fig. 10. The resulting sub-waveforms are then clustered into the four groups shown in Fig. 10. Our proposed framework MATEX is shown in Fig. 11. After pre-computing the GTS and decomposing the sources based on the "bump" shapes (Fig. 10), we group them and form the per-group LTS.666There are alternative decomposition strategies, and it is easy to extend the work to deal with different input waveforms. We keep this part as simple as possible to emphasize our framework.

IV-B4 MATEX Scheduler in DR-MATEX

In DR-MATEX, the role of the MATEX scheduler is simply to send the subtasks out to different MATEX slave nodes and to collect the final results after all the subtasks of the transient simulation are finished. The number of nodes matches the total number of subtasks, which is the number of groups after the PDN source decomposition. The simulation computations are then performed in parallel, and each node has its own inputs: node #k receives its own group of input sources, together with the corresponding LTS and Snapshot for node k. The scheduler does not need to do anything during the transient simulation, since there are no communications among the nodes before the "write back" stage (Fig. 11), by which time all nodes have completed their transient simulations.

Within each slave node, the circuit solver (Algorithm 3) computes the transient response with varied time steps. Solutions are obtained without re-factorizing any matrix during the transient simulation. The computing nodes write back their results and inform the MATEX scheduler after finishing their own transient simulations.

Input: node index $k$, $C$, $G$, $B$, the node's input sources $u_k(t)$, error tolerance $\epsilon$, and simulation time span $[0, T]$.
Output: Local solution $x_k(t)$ along $[0, T]$ at node $k$, where $k = 1, \ldots, K$ and $K$ is the number of nodes
1 $t = 0$; load $LTS_k$ and $Snapshot_k$;
2 $[L, U] = \mathrm{LU\_Decompose}(C + \gamma G)$;
3 $x_k(0) = $ DC_analysis;
4 while $t < T$ do
5       Compute the maximum allowed step size $h$ based on $LTS_k \cup Snapshot_k$;
6       if $t \in LTS_k$ then
             /* Generate the Krylov subspace for the point at $t$ and compute $x_k(t+h)$ */
7             Obtain $V_m$, $H_m$, $x_k(t+h)$ by Algorithm 1;
8             $t = t + h$;
9       end else
             /* Obtain $x_k(t+h)$ with the previously computed Krylov subspace */
10            Compute $x_k(t+h)$ by Eq. (27);
11            $t = t + h$;
12      end ;
end
Algorithm 3 DR-MATEX: the distributed MATEX framework using R-MATEX at node #$k$.

IV-C Runtime Analysis of the MATEX PDN Solver

Suppose we have the dimension of Krylov subspace basis on average for each time step and one pair of forward and backward substitutions consumes runtime . The total time of serial parts is , which includes matrix factorizations, result collection, etc. For , the evaluation of matrix exponential with is , which is in proportion to the time complexity . Besides, we need extra to form , which is proportional to by .

Given l points of GTS, without decomposition of the input sources, the runtime is

R_s = T_se + l (m + 1) T_fb.   (28)

After dividing the input transitions and sending them to enough computing nodes, we have k points of LTS for each node based on feature extraction and grouping (e.g., the few breakpoints of one "bump" shape feature). The total computation runtime is

R_d = T'_se + k (m + 1) T_fb,   (29)

where T'_se contains the serial portion of computing in DR-MATEX mode. The speedup of DR-MATEX over single MATEX is

R_s / R_d = (T_se + l (m + 1) T_fb) / (T'_se + k (m + 1) T_fb).   (30)

For R-MATEX, the average Krylov subspace dimension m is small. Besides, the stepping term l (m + 1) T_fb is relatively larger than the serial time T_se in our targeted problems. Therefore, the dominating part is the term l (m + 1) T_fb in Eq. (28). We can always decompose the input source transitions and make the per-node point count k smaller than l.

In contrast, suppose the traditional method with fixed step size takes S steps for the entire simulation; its runtime is

R_t = T_se + S T_fb.

Then, the speedup of distributed DR-MATEX over the traditional method is

R_t / R_d = (T_se + S T_fb) / (T'_se + k (m + 1) T_fb).   (31)

Note that when the minimum distance among input source breakpoints decreases, or when a large time span (many cycles) is required to simulate the PDN, schemes with such a uniform step size degrade further in runtime performance due to the increase of the total step count. In contrast, in the MATEX PDN solver, the number of time points per node is not very sensitive to such constraints. Besides, it can be kept small by the decomposition strategy. Therefore, the speedups of our proposed methods tend to be larger as the simulation requirements become harsher.
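Under the cost model of this subsection, serial time T_se plus, per time point, (m + 1) substitution pairs of cost T_fb each, the speedups can be sanity-checked numerically. The sketch below uses purely illustrative numbers, not measured data.

```python
def matex_runtime(n_points, m, T_fb, T_se):
    # n_points steps, each needing an m-dimensional rational Krylov basis
    # (m substitution pairs) plus one extra pair for the right-hand side.
    return T_se + n_points * (m + 1) * T_fb

def tr_fts_runtime(n_steps, T_fb, T_se):
    # Fixed-step TR: one substitution pair per step, matrix factored once.
    return T_se + n_steps * T_fb

T_se, T_fb, m = 10.0, 0.01, 5
l, k, S = 10_000, 100, 100_000     # GTS points, per-node LTS points, TR steps

single = matex_runtime(l, m, T_fb, T_se)      # Eq. (28)
per_node = matex_runtime(k, m, T_fb, T_se)    # Eq. (29)
tr = tr_fts_runtime(S, T_fb, T_se)

assert per_node < single < tr                 # decomposition helps; TR worst
print(f"DR-MATEX over single MATEX: {single / per_node:.1f}x")   # Eq. (30)
print(f"DR-MATEX over TR-FTS:       {tr / per_node:.1f}x")       # Eq. (31)
```

As the model predicts, shrinking the per-node point count k (by source decomposition) and keeping m small are what drive the speedup; increasing S only hurts the fixed-step baseline.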

V Experimental Results

We implement all the algorithms in MATLAB R2014b (the measurements reported are on MATLAB implementations; they are subject to its limitations and are not directly comparable to C++ implementations reported in the literature, such as [44]) and use the UMFPACK package for LU factorization. First, we compare I-MATEX, R-MATEX and TR to show our runtime improvements in the single-machine setting in Table II. Second, we show that our distributed framework DR-MATEX achieves large speedups in Table III. The experiments are conducted on a server with an Intel(R) Xeon(R) E5-2640 v3 2.60GHz processor and 125GB memory.

V-A Performance of I-MATEX and R-MATEX in Sec. IV-A

We compare our proposed I-MATEX and R-MATEX against the popular TR-FTS on the IBM power grid benchmarks [22]. Among the current sources, the smallest interval between two breakpoints puts the upper limit on TR's step size. All of these cases have very large numbers of input current sources. Table I shows the details of each benchmark circuit, whose sizes range from 54K up to 3.2M nodes. From ibmpg1t to ibmpg6t, TR uses a fixed step size set by this smallest interval. We also modify the IBM power grid benchmarks to shrink the smallest distance among breakpoints by interleaving the input sources' breakpoints (similar to Fig. 1); the fixed-step method is then forced to use a correspondingly smaller step. The names of those benchmarks are ibmpg1t_new, ibmpg2t_new, ibmpg3t_new, ibmpg4t_new, ibmpg5t_new and ibmpg6t_new.

Design #R #C #L #I #V #Nodes
ibmpg1t 41K 11K 277 11K 14K 54K
ibmpg2t 245K 37K 330 37K 330 165K
ibmpg3t 1.6M 201K 955 201K 955 1.0M
ibmpg4t 1.8M 266K 962 266K 962 1.2M
ibmpg5t 1.6M 473K 277 473K 539K 2.1M
ibmpg6t 2.4M 761K 381 761K 836K 3.2M
TABLE I: Specifications of IBM power grid benchmarks.

After DC analysis in TR-FTS, we LU-factorize the matrix once for the subsequent transient simulation, which then involves only time stepping. Alternatively, multiple factorized matrices can be deployed [55, 56], choosing one of them during the stepping; the problem is the memory and runtime overhead of the multiple matrix factorizations. Moreover, if a large time step is chosen, the standard low-order scheme cannot maintain accuracy.
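The factor-once, step-many pattern can be sketched as follows. This is an illustrative, dependency-free Python toy, not the paper's solver: a closed-form 2x2 solve stands in for the sparse LU factors, and the trapezoidal update below is the standard TR step for C dx/dt + G x = u(t) with fixed step h, namely (C/h + G/2) x_{k+1} = (C/h - G/2) x_k + (u_k + u_{k+1})/2.

```python
def lu_like_factor(a, b, c, d):
    """Return a solver for [[a, b], [c, d]] x = r (stand-in for LU factors)."""
    det = a * d - b * c
    return lambda r: ((d * r[0] - b * r[1]) / det,
                      (-c * r[0] + a * r[1]) / det)

C, G, h = 1.0, 1.0, 0.1
# Factor the stepping matrix (C/h + G/2) ONCE after DC analysis ...
solve = lu_like_factor(C / h + G / 2, 0.0, 0.0, C / h + G / 2)

x = (1.0, 1.0)
for _ in range(100):                 # ... then every step is substitution only
    rhs = ((C / h - G / 2) * x[0], (C / h - G / 2) * x[1])   # u = 0 here
    x = solve(rhs)

assert abs(x[0] - x[1]) < 1e-12      # symmetric decoupled states stay equal
assert 0.0 < x[0] < 1.0              # RC-style decay toward zero
```

Changing h would require a new factorization, which is exactly why TR-FTS is locked to a fixed step while MATEX is not.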

The experiment is conducted on a single computing node. In Table II, we record the total simulation runtime Total(s), which includes DC and transient simulation but excludes the non-numerical computation before DC, e.g., netlist parsing and matrix stamping. We also record the transient-simulation part Tran(s), excluding DC analysis and LU decompositions. The speedup of I-MATEX is not as large as that of R-MATEX, because I-MATEX, facing a large spectrum, generates Krylov subspaces of large dimension. Meanwhile, its step size is not large enough to fully harvest the gain from time marching with large steps. In contrast, R-MATEX needs only a small rational Krylov subspace dimension, which ranges from 5 to 6 in these cases. Therefore, it benefits from large time stepping, shown as SPDP. For ibmpg4t, R-MATEX achieves the maximum speedup, resulting from the relatively small number of breakpoints in that benchmark compared to the majority of the others.

In Table II, our single-mode R-MATEX achieves a large average speedup over TR-FTS. Note that the average speedup of single-mode R-MATEX over TR-FTS on the original IBM benchmarks (ibmpg1t-ibmpg6t) is smaller than on the new test cases (ibmpg1t_new-ibmpg6t_new). As mentioned before, ibmpg1t_new-ibmpg6t_new have harsher input constraints, which shrink the step size available to TR-FTS. Therefore, the adaptive stepping of R-MATEX is more beneficial to runtime performance on ibmpg1t_new-ibmpg6t_new than on ibmpg1t-ibmpg6t.

Design        DC(s)  TR Tran(s)  TR Total(s)  m_I  I-MATEX Tran(s)  I-MATEX Total(s)  m_R  R-MATEX Tran(s)  R-MATEX Total(s)
ibmpg1t       0.2    5.7         6.00         30   28.8             28.9              5    10.1             10.3
ibmpg2t       0.8    40.0        41.9         28   130.0            130.9             5    35.6             37.4
ibmpg3t       16.4   263.2       295.0        29   1102.5           1115.1            5    275.5            301.0
ibmpg4t       13.5   460.8       501.9        29   433.8            458.2             5    200.5            239.1
ibmpg5t       9.0    476.6       498.0        30   1934.4           1944.5            5    383.1            401.9
ibmpg6t       15.3   716.0       749.1        25   2698.9           2713.7            5    773.5            800.5
ibmpg1t_new   0.2    51.3        51.7         30   27.2             27.4              5    11.7             12.1
ibmpg2t_new   0.9    431.4       433.5        28   114.9            115.7             5    43.3             44.9
ibmpg3t_new   16.3   3716.5      3749.0       29   1219.3           1232.6            5    481.7            508.2
ibmpg4t_new   18.3   5044.6      5085.3       29   753.5            776.4             6    350.9            387.2
ibmpg5t_new   10.5   5065.9      5110.1       30   2494.0           2504.7            5    746.2            766.4
ibmpg6t_new   13.1   7015.3      7059.7       25   3647.9           3663.1            6    895.1            923.1

TABLE II: Performance comparisons (single computing node): TR-FTS, I-MATEX, and R-MATEX. DC(s): runtime of DC analysis (seconds); m_I: the maximum Krylov subspace dimension in I-MATEX; Tran(s): runtime of transient simulation after DC (seconds), excluding the matrix factorization runtime; Total(s): runtime of the overall transient simulation (seconds); Df(uV): maximum and average voltage differences compared to the provided solutions (uV); m_R: the maximum Krylov subspace dimension in R-MATEX; SPDP: speedup of R-MATEX over TR-FTS with respect to Tran(s); SPDP: speedup of R-MATEX over I-MATEX with respect to Tran(s).

V-B Performance of DR-MATEX in Sec. IV-B

We test our distributed DR-MATEX in the following experiments with the same IBM power grid benchmarks. These cases have many input transitions that limit the step sizes of R-MATEX. We divide the simulation region before the computation starts: we decompose the input sources by the approach discussed in Sec. IV-B3 and obtain far fewer transitions for each computing node. The original input source numbers are over ten thousand in the benchmarks; however, grouping by the "bump" feature (as shown in Fig. 10) leaves a fairly small number of sub-tasks, shown as Group # in Table III. (Using hundreds of machines to process in parallel is quite normal in industry nowadays [57, 38].) We pre-compute the transition-spot groups and assign the sub-tasks to the corresponding nodes. (Given the features of the available input sources, this preprocessing is very efficient; it takes linear time to obtain GTS and LTS and to separate the sources into different groups.) The MATEX scheduler is only responsible for a simple superposition calculation at the end of the simulation. Since the slave nodes are in charge of all the computing procedures (Fig. 11) for their own transient simulation tasks and have no communication with the others, our framework falls into the embarrassingly parallel category. We can therefore easily emulate the multiple-node environment: we simulate each group using the command "matlab -singleCompThread" on our server. We record the runtime of each process (slave node) and report the maximum as the total runtime "Total(s)" of DR-MATEX in Table III. We also record the "pure transient simulation" time as "Tran(s)", which is the maximum of the corresponding runtimes among all computing nodes.
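The preprocessing just described can be sketched as a single linear pass. This is a hedged Python illustration: the source data are made up, and grouping by identical breakpoint patterns is our simplification of the "bump" feature matching.

```python
# Toy current-source breakpoint lists (illustrative, not benchmark data).
sources = {
    "i1": [0.0, 1.0, 2.0, 3.0, 4.0],   # one bump shape
    "i2": [0.0, 1.0, 2.0, 3.0, 4.0],   # same feature -> same group
    "i3": [0.5, 1.5, 2.5, 3.5, 4.5],   # shifted bump -> its own group
}

# Group sources sharing the same breakpoint pattern (one linear scan).
groups = {}
for name, bps in sources.items():
    groups.setdefault(tuple(bps), []).append(name)

# GTS: union of all breakpoints; LTS: breakpoints seen by each group's node.
gts = sorted({b for bps in sources.values() for b in bps})
lts = {i: sorted(set(bps)) for i, bps in enumerate(groups)}

assert len(groups) == 2                # two sub-tasks for two slave nodes
assert len(gts) == 10                  # undecomposed run sees 10 breakpoints
assert all(len(v) == 5 for v in lts.values())   # each node sees only 5
```

The undecomposed solver must regenerate its Krylov subspace at every GTS point, while each node only does so at its own (much sparser) LTS points.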

For TR-FTS, we use a step size such that there are 1,000 pairs of forward and backward substitutions during the pure transient simulation for ibmpg1t-ibmpg6t, and a smaller step size for ibmpg1t_new-ibmpg6t_new, giving 10,000 pairs of forward and backward substitutions. In DR-MATEX, the circuit solver uses R-MATEX with a shift chosen to lie on the order of the varied time steps during the simulation (Sec. III-E discusses the insensitivity to this choice around the step sizes of interest). TR-FTS is not distributed, because it gains nothing from dividing the current sources as we do for DR-MATEX: TR-FTS cannot avoid the repeated pairs of forward and backward substitutions. Besides, adaptive stepping for TR-FTS would only degrade its performance, since that process requires extra matrix factorizations.

In Table III, our distributed mode gains large speedups for the pure transient computing. The average peak dimension of the rational Krylov subspace is 6.7. The memory overhead for each node (around 1.6x over TR-FTS on average) is slightly larger, which is worthwhile given the large runtime improvement. With the huge reduction of runtime for Krylov subspace generation, the serial parts, including LU and DC, play more dominant roles in DR-MATEX; they can be further improved using advanced matrix solvers such as [58].

Design        Group #  Tran(s)  Total(s)  Max Df.(V)  Avg Df.(V)  Peak m  Mem. Ratio over TR-FTS
ibmpg1t 100 1.4 1.9 5.3e-5 8.6e-6 6 1.9
ibmpg2t 100 8.9 11.4 4.6e-5 8.6e-6 7 1.9
ibmpg3t 100 91.7 129.9 9.6e-5 19.7e-6 6 1.5
ibmpg4t 15 52.3 112.2 9.9e-5 27.9e-6 8 1.4
ibmpg5t 100 148.4 178.9 9.0e-5 1.1e-6 7 1.5
ibmpg6t 100 189.9 234.2 3.4e-5 7.2e-6 7 1.5
ibmpg1t_new 100 2.4 2.8 5.3e-5 8.6e-6 6 1.9
ibmpg2t_new 100 5.6 7.0 4.6e-5 8.6e-6 7 1.9
ibmpg3t_new 100 103.0 140.9 9.8e-5 19.9e-6 7 1.5
ibmpg4t_new 15 51.5 108.4 9.9e-5 27.6e-6 8 1.4
ibmpg5t_new 100 185.6 227.8 9.9e-5 2.2e-6 7 1.5
ibmpg6t_new 100 274.8 317.7 3.4e-5 7.1e-6 7 1.5
Average       -        -        -         7.1e-5      12.3e-6     6.7     1.6
TABLE III: The performance of DR-MATEX (distributed R-MATEX). Group #: group number of the test cases; this number represents the total number of simulation sub-tasks for the design; Tran(s): runtime of transient simulation after DC (seconds); Total(s): runtime of the overall transient simulation (seconds); Max Df.(V) and Avg Df.(V): maximum and average differences compared to the solutions at all output nodes provided by the IBM power grid benchmarks; SPDP: speedup over TR-FTS's Tran(s) in Table II; SPDP: speedup over R-MATEX's Tran(s) in Table II; Peak m: the peak Krylov subspace dimension used in DR-MATEX for MEVP; Mem. Ratio over TR-FTS: the ratio of the maximum memory consumption of DR-MATEX to that of TR-FTS in Table II.

VI Conclusions and Future Directions

In this work, we propose MATEX, an efficient framework for accurate PDN time-domain simulation based on the exponential integration scheme. We visualize the error distributions to show the advantages of using the rational (R-MATEX) and invert (I-MATEX) Krylov subspace methods for the matrix exponential and vector product (MEVP) over the standard Krylov subspace method (MEXP). For PDN simulation, our time integration scheme can perform adaptive time stepping without repeating matrix factorizations, which cannot be achieved by traditional methods using implicit numerical integration with fixed time steps. Compared to the commonly adopted TR framework with fixed time step (TR-FTS), our single-mode framework (R-MATEX) gains considerable runtime speedup. We also show that the distributed MATEX framework (DR-MATEX) leverages the superposition property of linear systems and decomposes the task based on the features of the input sources, so that we reduce the number of Krylov subspace generations for each node and achieve further runtime improvement.

We show that exponential integration with Krylov subspace methods maintains high-order accuracy and flexible time stepping. The exponential integration framework was in fact mentioned in very early work on circuit simulation algorithms [29], but it did not attract much attention at the time due to the high computational complexity of the matrix exponential. Nowadays, the progress of Krylov subspace methods provides efficient ways to compute the matrix exponential and vector product, so that we can utilize features of exponential integration that are hard to obtain with traditional time integration schemes. Exponential integration can also serve as a stable explicit scheme [59, 60] for general dynamical systems. It is a promising framework for future circuit simulation algorithms and software. The opportunities of parallel and distributed computing with cutting-edge multi-core and many-core hardware are also worth exploring for further parallelism and runtime improvement.

Acknowledgment

We thank Prof. Scott B. Baden, Prof. Mike Botchev, Dr. Quan Chen, Dr. Peng Du, Dr. Martin Gander, Prof. Nicholas Higham, Prof. Marlis Hochbruck, Junkai Jiang, Dr. John Loffeld, Dr. Jingwei Lu, Mulong Luo, Prof. Alexander Ostermann, Prof. Mayya Tokman and Chicheng Zhang for the informative discussions. Hao Zhuang thanks the support of UCSD's Powell Fellowship and the Qualcomm FMA Fellowship. Last but not least, we thank the reviewers for all their insightful comments.

References

  • [1] D. Kouroussis and F. N. Najm, “A static pattern-independent technique for power grid voltage integrity verification,” in Proc. IEEE/ACM Design Autom. Conf., pp. 99–104, 2003.
  • [2] S. R. Nassif and J. N. Kozhaya, “Fast power grid simulation,” in Proc. IEEE/ACM Design Autom. Conf., pp. 156–161, 2000.
  • [3] M. S. Gupta, J. L. Oatley, R. Joseph, G.-Y. Wei, and D. M. Brooks, “Understanding voltage variations in chip multiprocessors using a distributed power-delivery network,” in Proc. DATE, pp. 1–6, 2007.
  • [4] S. Lin, M. Nagata, K. Shimazake, K. Satoh, M. Sumita, H. Tsujikawa, and A. T. Yang, “Full-chip vectorless dynamic power integrity analysis and verification against 100uv/100ps-resolution measurement,” in Proc. IEEE CICC, pp. 509–512, 2004.
  • [5] S. Lin and N. Chang, “Challenges in power-ground integrity,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 651–654, 2001.
  • [6] R. Zhang, B. H. Meyer, W. Huang, K. Skadron, and M. R. Stan, “Some limits of power delivery in the multicore era,” WEED, 2012.
  • [7] R. Zhang, K. Wang, B. H. Meyer, M. R. Stan, and K. Skadron, “Architecture implications of pads as a scarce resource,” in Proc. IEEE/ACM Int’l Symp. Computer Architecture, pp. 373–384, 2014.
  • [8] K. Wang, B. H. Meyer, R. Zhang, K. Skadron, and M. R. Stan, “Walking pads: Fast power-supply pad-placement optimization.,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., vol. 20, p. 4, 2014.
  • [9] J. Lu, P. Chen, C.-C. Chang, L. Sha, D. Huang, C.-C. Teng, and C.-K. Cheng, “ePlace: Electrostatics based placement using Nesterov’s method,” in Proc. IEEE/ACM Design Autom. Conf., 2014.
  • [10] M. Pan, N. Viswanathan, and C. Chu, “An efficient and effective detailed placement algorithm,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 48–55, 2005.
  • [11] J. Lu, H. Zhuang, P. Chen, H. Chang, C.-C. Chang, Y.-C. Wong, L. Sha, D. Huang, Y. Luo, C.-C. Teng, and C. K. Cheng, “ePlace-MS: Electrostatics based placement for mixed-size circuits,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 34, no. 5, pp. 685–698, 2015.
  • [12] L. Xiao, Z. Xiao, Z. Qian, Y. Jiang, T. Huang, H. Tian, and E. F. Y. Young, “Local clock skew minimization using blockage-aware mixed tree-mesh clock network,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 458–462, 2010.
  • [13] Y. Zhang and C. Chu, “GDRouter: Interleaved global routing and detailed routing for ultimate routability,” in Proc. IEEE/ACM Design Autom. Conf., pp. 597–602, 2012.
  • [14] A. B. Kahng, S. Kang, H. Lee, I. L. Markov, and P. Thapar, “High-performance gate sizing with a signoff timer,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 450–457, 2013.
  • [15] C. Zhuo, G. Wilke, R. Chakraborty, A. Aydiner, S. Chakravarty, and W.-K. Shih, “A silicon-validated methodology for power delivery modeling and simulation,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 255–262, 2012.
  • [16] Z. Zeng, X. Ye, Z. Feng, and P. Li, “Tradeoff analysis and optimization of power delivery networks with on-chip voltage regulation,” in Proc. IEEE/ACM Design Autom. Conf., pp. 831–836, 2010.
  • [17] H. Zhuang, J. Lu, K. Samadi, Y. Du, and C. K. Cheng, “Performance-driven placement for design of rotation and right arithmetic shifters in monolithic 3D ICs,” in Proc. IEEE Intl. Conf. Communications, Circuits and Systems, vol. 2, pp. 509–513, 2013.
  • [18] S. K. Samal, K. Samadi, P. Kamal, Y. Du, and S. K. Lim, “Full chip impact study of power delivery network designs in monolithic 3D ICs,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 565–572, 2014.
  • [19] D. Xiang and K. Shen, “A thermal-driven test application scheme for pre-bond and post-bond testing of three-dimensional ICs,” ACM Journal on Emerging Technologies of Computing Systems, vol. 10, no. 2, p. Art. no. 18, 2014.
  • [20] H. Zhuang, X. Wang, I. Kang, J.-H. Lin, and C. K. Cheng, “Dynamic analysis of power delivery network with nonlinear components using matrix exponential method,” in IEEE Intl. Symp. EMC&SI, 2015.
  • [21] H. Zhuang, W. Yu, I. Kang, X. Wang, and C. K. Cheng, “An algorithmic framework for efficient large-scale circuit simulation using exponential integrators,” in Proc. IEEE/ACM Design Autom. Conf., 2015.
  • [22] S. R. Nassif, “Power grid analysis benchmarks,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., pp. 376–381, 2008.
  • [23] Z. Ye, Z. Zhu, and J. R. Phillips, “Sparse implicit projection (SIP) for reduction of general many-terminal networks,” in Proc. Intl. Conf. Computer-Aided Design, pp. 736–741, 2008.
  • [24] Z. Li, R. Balasubramanian, F. Liu, and S. Nassif, “2012 tau power grid simulation contest: benchmark suite and results,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 643–646, 2012.
  • [25] C. Zhuo, H. Gan, and W.-K. Shih, “Early-stage power grid design: Extraction, modeling and optimization,” in Proc. IEEE/ACM Design Autom. Conf., pp. 1–6, 2014.
  • [26] H. Zhuang, W. Yu, G. Hu, Z. Liu, and Z. Ye, “Fast floating random walk algorithm for multi-dielectric capacitance extraction with numerical characterization of Green’s functions,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., pp. 377–382, 2012.
  • [27] C. Zhang and W. Yu, “Efficient space management techniques for large-scale interconnect capacitance extraction with floating random walks,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 32, no. 10, pp. 1633–1637, 2013.
  • [28] W. Yu, H. Zhuang, C. Zhang, G. Hu, and Z. Liu, “RWCap: A floating random walk solver for 3-D capacitance extraction of very-large-scale integration interconnects,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 32, no. 3, pp. 353–366, 2013.
  • [29] L. O. Chua and P.-M. Lin, Computer Aided Analysis of Electric Circuits: Algorithms and Computational Techniques. Prentice-Hall, 1975.
  • [30] Y. Saad, Iterative Methods for Sparse Linear Systems. SIAM, 2003.
  • [31] T. A. Davis, Direct Methods for Sparse Linear Systems. SIAM, 2006.
  • [32] T. Yu and M. D.-F. Wong, “PGT_SOLVER: An efficient solver for power grid transient analysis,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 647–652, 2012.
  • [33] J. Yang, Z. Li, Y. Cai, and Q. Zhou, “Powerrush: Efficient transient simulation for power grid analysis,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 653–659, 2012.
  • [34] X. Xiong and J. Wang, “Parallel forward and back substitution for efficient power grid simulation,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 660–663, 2012.
  • [35] L. Nagel, SPICE2: A computer program to simulate semiconductor circuits. Ph.D. dissertation, Univ. of California, Berkeley, 1975.
  • [36] S.-H. Weng, Q. Chen, and C. K. Cheng, “Time-domain analysis of large-scale circuits by matrix exponential method with adaptive control,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 31, no. 8, pp. 1180–1193, 2012.
  • [37] Y. Saad, “Analysis of some krylov subspace approximations to the matrix exponential operator,” SIAM J. Numer. Anal., vol. 29, no. 1, pp. 209–228, 1992.
  • [38] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, 2010.
  • [39] J. Dean and S. Ghemawat, “Mapreduce: simplified data processing on large clusters,” CACM, vol. 51, no. 1, pp. 107–113, 2008.
  • [40] D. Xiang, Y. Zhang, and Y. Pan, “Practical deadlock-free fault-tolerant routing in meshes based on the planar network fault model,” IEEE Trans. Computers, vol. 58, no. 5, pp. 620–633, 2009.
  • [41] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark: cluster computing with working sets,” in HotCloud, pp. 10–10, 2010.
  • [42] Q. He, W. Au, A. Korobkov, and S. Venkateswaran, “Parallel power grid analysis using distributed direct linear solver,” in IEEE Intl. Symp. EMC, pp. 866–871, 2014.
  • [43] N. Gupte and J. Wang, “Secure power grid simulation on cloud,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 34, no. 3, pp. 422–432, 2015.
  • [44] J. Wang and X. Xiong, “Scalable power grid transient analysis via MOR-assisted time-domain simulations,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, 2013.
  • [45] X.-X. Liu, H. Wang, and S. X.-D. Tan, “Parallel power grid analysis using preconditioned gmres solver on CPU-GPU platforms,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 561–568, IEEE, 2013.
  • [46] Z. Feng and P. Li, “Multigrid on GPU: tackling power grid analysis on parallel simt platforms,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 647–654, 2008.
  • [47] H. Zhuang, S.-H. Weng, J.-H. Lin, and C. K. Cheng, “MATEX: A distributed framework of transient simulation of power distribution networks,” in Proc. IEEE/ACM Design Autom. Conf., 2014.
  • [48] M. J. Gander and S. Güttel, “PARAEXP: A parallel integrator for linear initial-value problems,” SIAM Journal on Scientific Computing, vol. 35, no. 2, pp. C123–C142, 2013.
  • [49] S.-H. Weng, Q. Chen, N. Wong, and C. K. Cheng, “Circuit simulation via matrix exponential method for stiffness handling and parallel processing,” in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, pp. 407–414, 2012.
  • [50] J. van den Eshof and M. Hochbruck, “Preconditioning Lanczos approximations to the matrix exponential,” SIAM J. Sci. Comput., vol. 27, no. 4, pp. 1438–1457, 2006.
  • [51] M. A. Botchev, “A short guide to exponential krylov subspace time integration for maxwell’s equations,” Dept. of Applied Mathematics, Univ. of Twente, 2012.
  • [52] H. Zhuang, S.-H. Weng, and C. K. Cheng, “Power grid simulation using matrix exponential method with rational krylov subspaces,” in Proc. IEEE Intl. Conf. ASIC, 2013.
  • [53] Q. Chen, S.-H. Weng, and C. K. Cheng, “A practical regularization technique for modified nodal analysis in large-scale time-domain circuit simulation,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 31, no. 7, pp. 1031–1040, 2012.
  • [54] J. Wilkinson, “Kronecker’s canonical form and the QZ algorithm,” Linear Algebra and its Applications, vol. 28, pp. 285–303, 1979.
  • [55] P. Li, “Parallel circuit simulation: A historical perspective and recent developments,” Foundations and Trends in Electronic Design Automation, vol. 5, no. 4, pp. 211–318, 2012.
  • [56] X. Ye, M. Zhao, R. Panda, P. Li, and J. Hu, “Accelerating clock mesh simulation using matrix-level macromodels and dynamic time step rounding,” in ISQED, pp. 627–632, 2008.
  • [57] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica, “Mesos: A platform for fine-grained resource sharing in the data center,” in NSDI, pp. 22–22, 2011.
  • [58] X. Chen, Y. Wang, and H. Yang, “NICSLU: An adaptive sparse matrix solver for parallel circuit simulation,” IEEE Trans. Comput.-Aided Design Integr. Circuits Syst, vol. 32, no. 2, pp. 261–274, 2013.
  • [59] M. Hochbruck and A. Ostermann, “Exponential integrators,” Acta Numerica, vol. 19, pp. 209–286, 2010.
  • [60] M. Hochbruck, C. Lubich, and H. Selhofer, “Exponential integrators for large systems of differential equations,” SIAM J. Sci. Comput., vol. 19, no. 5, pp. 1552–1574, 1998.

Hao Zhuang (S'11) is a Ph.D. candidate at the Department of Computer Science and Engineering, University of California, San Diego, CA, USA (UC San Diego). He received his C.Phil. degree in Computer Science from UC San Diego in June 2015. His current research interests include numerical and optimization algorithms, with applications in design automation, very large-scale integration systems, and network and data analysis.

He has industrial experience at several companies, including Synopsys, Inc., Mountain View, CA, USA, and Ansys, Inc., San Jose, CA, USA, where he worked on large-scale circuit analysis and dynamic power network simulation algorithms within several products. He was one of the main software developers of RWCap, a parallel program for VLSI capacitance extraction using the floating random walk algorithm at Tsinghua University. At UC San Diego, he designed numerical algorithms for large-scale circuit simulation using matrix exponentials, and optimization algorithms for electrostatics-based VLSI global placement. Mr. Zhuang was the recipient of the Charles Lee Powell Fellowship and the Qualcomm FMA Fellowship. He is a student member of IEEE, ACM and SIAM. He serves as a reviewer for IEEE Transactions on Computer-Aided Design (TCAD), and as an external reviewer for the Design Automation Conference (DAC) and the International Symposium on Physical Design (ISPD).

Wenjian Yu (S’01-M’04-SM’10) received the B.S. and Ph.D. degrees in computer science from Tsinghua University, Beijing, China, in 1999 and 2003, respectively.

In 2003, he joined Tsinghua University, where he is an Associate Professor with the Department of Computer Science and Technology. He was a Visiting Scholar with the Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA, twice during the period from 2005 to 2008. He has served as an associate editor of IEEE Transactions on Computer-Aided Design since 2016. His current research interests include numerical modeling and simulation techniques for designing complex systems (such as integrated circuits, touchscreens, and cyber-physical systems), and matrix computation algorithms for big data analysis. He has authored three books and over 130 papers in refereed journals and conferences. Dr. Yu was a recipient of the Distinguished Ph.D. Award from Tsinghua University in 2003 and the Excellent Young Scholar Award from the National Science Foundation of China in 2014.

Shih-Hung Weng is currently a Research Scientist with Facebook, working on large-scale distributed systems for payment services. He received the B.S. and M.S. degrees in Computer Science from National Tsing Hua University, Hsinchu, Taiwan, in 2006 and 2008, and the Ph.D. degree from the University of California, San Diego, La Jolla, CA, USA, in 2013 under the supervision of Dr. Chung-Kuan Cheng. His thesis focused on parallel circuit simulation for power-grid noise analysis in very large-scale integration (VLSI) systems.

Ilgweon Kang (S'12) received the B.S. and M.S. degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2006 and 2008, respectively. He is currently pursuing the Ph.D. degree at the University of California, San Diego (UCSD). He was with the Research and Development Division, SK Hynix, Icheon, Korea, from 2008 to 2012, where he was involved in the development of 3D stacked DRAM with through-silicon-via (TSV) technology.

His current research interests include 3D IC design and optimization, CAD, and design for testability.

Jeng-Hau Lin (S'15) is a Ph.D. student at the Department of Computer Science and Engineering, University of California, San Diego, CA, USA. He received his B.S. degree in Electrical Engineering in 2005 and M.S. degree in Communication Engineering in 2007 from National Taiwan University.

His research interests include time-frequency analysis of signal integrity and the consensus of multi-agent systems.

Xiang Zhang (M'13) received the B.Eng. degree in Information Engineering from Shanghai Jiao Tong University (SJTU), China, and the M.S. degree in Electrical and Computer Engineering from the University of Arizona, Tucson, AZ, in 2010.

From 2011 to 2014, he was a Senior Hardware Engineer with Qualcomm Technologies, Inc. In 2014, he joined Apple Inc. as an iPhone Hardware System Engineer. Since 2012, he has been working towards the Ph.D. degree in the Department of Electrical and Computer Engineering at the University of California, San Diego, CA, USA. His current research interests include power distribution network design and optimization, circuit simulation, and system-on-chip design.

Ryan Coutts received the B.S. degree in electrical engineering from the University of California, Riverside, in 2005, and the M.S. degree in electrical engineering from Stanford University in 2006. He then worked as a signal and power integrity engineer at NVIDIA Inc., specializing in FSB, DDR and power integrity. He is currently with Qualcomm Inc., where he works on low-power design methodologies for mobile, covering power integrity, thermal optimization and on-chip timing. Mr. Coutts has filed 13 patents related to his innovations and is currently pursuing a Ph.D. at the University of California, San Diego.

Chung-Kuan Cheng (S’82-M’84-SM’95-F’00) received the B.S. and M.S. degrees in electrical engineering from National Taiwan University, and the Ph.D. degree in electrical engineering and computer sciences from University of California, Berkeley in 1984.

From 1984 to 1986, he was a senior CAD engineer at Advanced Micro Devices Inc. In 1986, he joined the University of California, San Diego, where he is a Distinguished Professor in the Computer Science and Engineering Department and an Adjunct Professor in the Electrical and Computer Engineering Department. He served as a principal engineer at Mentor Graphics in 1999. He was an associate editor of IEEE Transactions on Computer-Aided Design from 1994 to 2003. He is a recipient of best paper awards from IEEE Transactions on Computer-Aided Design in 1997 and 2002, the NCR Excellence in Teaching Award, School of Engineering, UCSD, in 1991, IBM Faculty Awards in 2004, 2006, and 2007, and the Distinguished Faculty Certificate of Achievement, UJIMA Network, UCSD, in 2013. He was appointed an Honorary Guest Professor of Tsinghua University from 2002 to 2008, and a Visiting Professor of National Taiwan University in 2011 and 2015. His research interests include medical modeling and analysis, network optimization, and design automation of microelectronic circuits.