Simulation Algorithms with Exponential Integration for Time-Domain Analysis of Large-Scale Power Delivery Networks
Abstract
We design an algorithmic framework using matrix exponentials for time-domain simulation of power delivery networks (PDNs). Our framework can reuse factorized matrices to simulate large-scale linear PDN systems with variable step sizes. In contrast, conventional PDN simulation solvers have to adopt a fixed step-size approach in order to reuse the factorized matrices generated by the expensive matrix decomposition. Based on the proposed exponential integration framework, we design a PDN solver, R-MATEX, with flexible time-stepping capability. The key operation, the matrix exponential and vector product (MEVP), is computed by the rational Krylov subspace method.
To further improve the runtime, we also propose a distributed computing framework, DR-MATEX. DR-MATEX reduces the number of Krylov subspace generations caused by the frequent breakpoints from the large number of current sources during simulation. By virtue of the superposition property of linear systems and the scaling-invariance property of the Krylov subspace, DR-MATEX divides the whole simulation task into subtasks based on the alignments of breakpoints among those sources. The subtasks are processed in parallel at different computing nodes without any communication during the transient simulation. The final result is obtained by summing up the partial results among all the computing nodes after they finish their assigned subtasks. Therefore, our computation model belongs to the category known as the Embarrassingly Parallel model.
Experimental results show that R-MATEX and DR-MATEX can achieve significant runtime speedups over a traditional trapezoidal-integration-based solver with the fixed time-step approach.
I. Introduction
Modern VLSI design verification relies heavily on the analysis of the power delivery network (PDN) to estimate power supply noise [1, 2, 3, 4, 5, 6, 7, 8]. The performance of the power delivery network strongly impacts the quality of global, detailed and mixed-size placement [9, 10, 11], clock tree synthesis [12], global and detailed routing [13], as well as timing [14] and power optimization. Lowering supply voltages, increasing current densities, and tight design margins all demand more accurate large-scale PDN simulation. Advanced technologies [15, 16], three-dimensional (3D) IC structures [17, 18, 19], and the increasing complexity of system designs make VLSI PDNs extremely large and their simulation time-consuming and computationally challenging. Due to the enormous size of modern designs and the long simulation runtimes over many cycles, instead of general nonlinear circuit simulation [20, 21], the PDN is often modeled as a large-scale linear circuit with voltage supplies and time-varying current sources [22, 23, 24]. The linear matrices are obtained by the parasitic extraction process [25, 26, 27, 28, 4]. After these processes, time-domain large-scale linear circuit simulation is required to obtain the transient behavior of the PDN under the above inputs.
Traditional methods in linear circuit simulation solve differential algebraic equations (DAEs) numerically in explicit ways, e.g., forward Euler (FE), or implicit ways, e.g., backward Euler (BE) and trapezoidal (TR), which are all based on low-order polynomial approximations of the DAEs [29]. Due to the stiffness of the systems, which comes from the wide range of time constants in a circuit, explicit methods require extremely small time steps to ensure stability. In contrast, implicit methods can handle this problem with relatively large time steps because of their larger stability regions. However, at each time step, these methods have to solve a linear system, which is sparse and often ill-conditioned. Due to the requirement of a robust solution, direct methods [31] are often favored over iterative methods [30] for VLSI circuit simulation, and are thus adopted by state-of-the-art power grid (PG) solvers in the TAU PG simulation contest [32, 33, 34]. Those solvers require only one matrix factorization (LU or Cholesky) at the beginning of the transient simulation. Then, at each fixed time step, the subsequent transient computation requires only pairs of forward and backward substitutions, which achieves better efficiency than adaptive stepping methods by reusing the factorized matrix [34, 32, 24] in their implicit numerical integration framework. However, the maximum step size is limited by the smallest distance among the breakpoints [35]. Some engineering efforts break this limitation by sacrificing accuracy. In our work, we always obey the upper limit of the time step to maintain the fidelity of the model, which means the fixed time step cannot exceed the smallest breakpoint distance, lest breakpoints be missed.
Beyond traditional methods, a class of methods based on matrix exponential time integration has been embraced by MEXP [36]. The major complexity lies in the matrix exponential computations. MEXP utilizes the standard Krylov subspace method [37] to approximate the matrix exponential and vector product. MEXP can solve the DAEs with much higher order polynomial approximations than the traditional schemes [36, 37]. Nevertheless, when simulating stiff circuits with the standard Krylov subspace method, a large subspace dimension is required to preserve the accuracy of the MEXP approximation, which poses a memory bottleneck and degrades the adaptive stepping performance of MEXP.
Nowadays, emerging multi-core and many-core platforms bring powerful computing resources and opportunities for parallel computing. Moreover, cloud computing techniques [38] drive distributed systems scaling to thousands of computing nodes [39, 40, 41]. Distributed computing systems have been incorporated into the products of many leading EDA companies and in-house simulators [42, 43, 44, 45, 46]. However, building a scalable and efficient distributed algorithmic framework for transient linear circuit simulation that leverages these powerful computing tools is still a challenge. The papers [47, 48] show great potential for parallelizing matrix exponential based methods to improve runtime performance while maintaining high accuracy.
In this work, we develop a transient simulation framework using a matrix exponential integration scheme, MATEX, for PDN simulation. The following are the challenges we need to address. First, when the circuit is stiff, the standard Krylov subspace has convergence problems and slows down the computation of MEVP. Second, the frequent time breakpoints in the modeling of PDN current sources trigger repeated generations of Krylov subspaces; we may thus gain performance from large time stepping, yet lose runtime to the small steps forced by the breakpoints. Our contributions are listed below:

MEVP in MATEX is efficiently computed by the rational or invert Krylov subspace method. Compared to the commonly adopted framework using TR with a fixed time step (TR-FTS), the proposed MATEX can reuse the matrix factorized at the beginning of the transient simulation while performing flexible adaptive time stepping.

Among the different Krylov subspace methods, we find the rational Krylov subspace to be the best strategy for MEVP in PDN simulation. Therefore, we design R-MATEX based on it and achieve significant runtime speedups on the benchmarks over the traditional TR-FTS method with good accuracy.

Furthermore, DR-MATEX is designed to improve R-MATEX using distributed computing resources.

First, the PDN's current sources are partitioned into groups based on their alignments and assigned to different computing nodes. Each node runs its corresponding PDN transient simulation task and has no communication overhead with the other nodes.

After all nodes finish their simulation computations, the results are summed up based on the linear superposition property of the PDN system.

The proposed current source partition reduces the number of Krylov subspace generations and prolongs the time periods over which a computed subspace is reused at each node, which brings a huge computational advantage and achieves large speedups over the traditional TR-FTS method.

The rest of this paper is organized as follows. Sec. II introduces the background of linear circuit simulation and matrix exponential formulations. Sec. III illustrates the Krylov subspace techniques that accelerate the matrix exponential and vector product computation. Sec. IV presents the MATEX circuit solver and the parallel framework DR-MATEX. Sec. V shows numerical results and Sec. VI concludes this paper.
II. Background
II-A. Transient Simulation of Linear Circuits
Transient simulation of linear circuits is the foundation of modern PDN simulation. The circuit is formulated as a DAE via modified nodal analysis (MNA),
$C \frac{dx(t)}{dt} = -G x(t) + B u(t),$    (1)
where C is the matrix for capacitive and inductive elements, G is the matrix for conductance and resistance, and B is the input selector matrix. x(t) is the vector of time-varying node voltages and branch currents, and u(t) is the vector of supply voltages and current sources. In PDNs, such current sources are often characterized as pulse or piecewise-linear inputs [22, 24] to represent the activities under the networks. To solve Eq. (1) numerically, the system is discretized with a time step h and transformed into a linear algebraic system. Given an initial condition x(t) from DC analysis or the previous time step and a time step h, x(t + h) can be obtained by traditional low-order approximation methods [29].
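To make the MNA formulation of Eq. (1) concrete, the following sketch stamps C, G, and B for a tiny three-node RC ladder with one current source. All element values, the leakage term, and the topology are hypothetical, chosen only for illustration (the paper's implementation is in MATLAB; Python/NumPy is used here as a stand-in):

```python
import numpy as np

# Illustrative 3-node RC ladder (hypothetical values, not from the paper):
# resistors between neighboring nodes, a capacitor to ground at each node,
# and one current source drawing current from the last node.
n = 3
R = 1.0       # inter-node resistance (ohms), assumed
Cap = 1e-12   # per-node capacitance to ground (farads), assumed

G = np.zeros((n, n))           # conductance matrix
for i in range(n - 1):         # stamp each resistor between nodes i and i+1
    g = 1.0 / R
    G[i, i] += g
    G[i + 1, i + 1] += g
    G[i, i + 1] -= g
    G[i + 1, i] -= g
G += 1e-3 * np.eye(n)          # small leakage to ground keeps G nonsingular

C = Cap * np.eye(n)            # capacitance matrix (diagonal: caps to ground)
B = np.zeros((n, 1))
B[2, 0] = -1.0                 # current source loading node 2

# These matrices plug into Eq. (1): C x'(t) = -G x(t) + B u(t)
```

The stamped G is symmetric positive definite and C is diagonal, which is the structure the direct solvers discussed below rely on.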
II-B. Traditional Low-Order Time Integration Schemes
II-B1. BE
The backward Euler (BE) time integration scheme (Eq. (2)) is a robust implicit first-order method.
$\left(\frac{C}{h} + G\right) x(t+h) = \frac{C}{h} x(t) + B u(t+h).$    (2)
II-B2. TR
The trapezoidal (TR) time integration scheme (Eq. (3)) is a popular implicit second-order method:

$\left(\frac{C}{h} + \frac{G}{2}\right) x(t+h) = \left(\frac{C}{h} - \frac{G}{2}\right) x(t) + B \frac{u(t) + u(t+h)}{2}.$    (3)

It is probably the most commonly used strategy for large-scale circuit simulation and has higher accuracy than BE.
II-B3. BE-FTS and TR-FTS
The BE and TR methods with a fixed time step (FTS) are efficient approaches, which were adopted by the top PG solvers in the 2012 TAU PG simulation contest [34, 32, 33, 24]. If only one h is used for the entire simulation, its choice is limited by the minimum breakpoint distance [35] among all the input sources. Fig. 1(a) illustrates the resulting upper limit for h in BE-FTS and TR-FTS. When the alignments of the inputs shift (as shown in Fig. 1(b)), the resulting upper limit for h becomes even smaller for the fixed-step approaches. If h is larger than the limit, it is impossible to guarantee accuracy, since pivot points of the inputs may be skipped.
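The efficiency of TR-FTS comes from factorizing the left-hand-side matrix once and then doing only one pair of forward/backward substitutions per fixed step. A minimal sketch of this loop, with toy matrices and a unit-step input (all values assumed, not from the paper):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Minimal TR with a fixed step h: factor (C/h + G/2) once, then every step
# costs only one pair of forward/backward substitutions. Toy system.
n = 4
G = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD conductance
C = 1e-3 * np.eye(n)
B = np.eye(n, 1)                                        # source at node 0
u = lambda t: np.array([1.0]) * (t >= 0.0)              # unit step input

h = 1e-4
lhs = C / h + G / 2.0
rhs_mat = C / h - G / 2.0
lu, piv = lu_factor(lhs)            # one-time factorization

x = np.zeros(n)                     # zero initial condition
t = 0.0
for _ in range(1000):               # fixed-step marching reuses lu/piv
    rhs = rhs_mat @ x + B @ ((u(t) + u(t + h)) / 2.0)
    x = lu_solve((lu, piv), rhs)
    t += h

# With a constant input, x should settle to the DC solution G^{-1} B u
x_dc = np.linalg.solve(G, B @ u(1.0))
```

The fixed point of the TR recurrence satisfies G x = B u exactly, so the marched solution converges to the DC answer; changing h, however, would require a new factorization, which is exactly the limitation MATEX removes.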
II-C. Matrix Exponential Time Integration Scheme
The solution of Eq. (1) can be obtained analytically [29]. For a simple illustration, we convert Eq. (1) into
$\frac{dx(t)}{dt} = A x(t) + b(t), \quad A = -C^{-1} G, \quad b(t) = C^{-1} B u(t),$    (4)
when C is not singular. (Footnote 1: This assumption is only to simplify the explanation in this section. After Sec. III-B, we use I-MATEX, R-MATEX and DR-MATEX to compute the solution of the DAE without inversion of C. Therefore, the methods are suitable for general DAE systems, i.e., Eq. (1) without the assumption here.)
Given a solution x(t) at time t and a time step h, the solution at t + h is
$x(t+h) = e^{hA} x(t) + \int_0^h e^{(h-\tau)A} b(t+\tau)\, d\tau.$    (5)
Assuming that the input u(t) is a piecewise-linear (PWL) function of t, we can integrate the last term of Eq. (5) analytically, turning the solution into a form with the matrix exponential operator:
$x(t+h) = e^{hA}\left(x(t) + A^{-1} b(t) + A^{-2}\frac{b(t+h)-b(t)}{h}\right) - \left(A^{-1} b(t+h) + A^{-2}\frac{b(t+h)-b(t)}{h}\right).$    (6)
For the time step choice, breakpoints (also known as input transition spots (TS) [47]) are the time points where the slopes of the input vector u(t) change. Therefore, for Eq. (6), the maximum time step starting from t is the distance to the smallest breakpoint larger than t. In the matrix exponential based framework, the limitation on the time step size is not the local truncation error (LTE), but the activities among the input sources.
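One exponential-integrator step of the form in Eq. (6) can be checked numerically. The sketch below uses a small random stable system and a hypothetical PWL segment (Python in place of the paper's MATLAB) and compares the closed-form step against a fine RK4 reference:

```python
import numpy as np
from scipy.linalg import expm

# One exponential-integrator step for x' = A x + b(t) with PWL b(t),
# following Eq. (6). Toy stable A and input endpoints are assumed.
rng = np.random.default_rng(0)
n = 5
A = -(np.diag(rng.uniform(1.0, 3.0, n)) + 0.1 * rng.standard_normal((n, n)))
A = (A + A.T) / 2.0 - 2.0 * np.eye(n)     # symmetric, negative definite
x0 = rng.standard_normal(n)
b0 = rng.standard_normal(n)               # b(t)
b1 = rng.standard_normal(n)               # b(t + h)
h = 0.3
c = (b1 - b0) / h                         # PWL slope of the input

Ainv = np.linalg.inv(A)
# Eq. (6): x(t+h) = e^{hA}(x + A^{-1}b0 + A^{-2}c) - (A^{-1}b1 + A^{-2}c)
x_exp = expm(h * A) @ (x0 + Ainv @ b0 + Ainv @ Ainv @ c) \
        - (Ainv @ b1 + Ainv @ Ainv @ c)

# Reference: high-resolution RK4 on the same ODE over [0, h]
f = lambda tt, xx: A @ xx + b0 + c * tt
x_ref, steps = x0.copy(), 20000
dt = h / steps
for k in range(steps):
    t = k * dt
    k1 = f(t, x_ref)
    k2 = f(t + dt / 2, x_ref + dt / 2 * k1)
    k3 = f(t + dt / 2, x_ref + dt / 2 * k2)
    k4 = f(t + dt, x_ref + dt * k3)
    x_ref = x_ref + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The two answers agree to near machine precision, illustrating that the PWL term of Eq. (5) really is integrated exactly rather than approximated by LTE-limited stepping.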
III. Krylov Subspace Methods for Matrix Exponential and Vector Product (MEVP)
In PDN simulation, the dimension n is usually in the millions, which makes the direct computation of the matrix exponential infeasible. The alternative is to compute the product e^{hA}v through the Krylov subspace method [37]. In this section, we first introduce the background of the standard Krylov subspace for MEVP. Then, we discuss the invert (I-MATEX) and rational (R-MATEX) Krylov subspace methods, which highly improve the runtime performance of MEVP.
III-A. MEXP: MEVP Computation via the Standard Krylov Subspace Method
The complexity of computing e^{hA}v can be reduced using the Krylov subspace method while still maintaining a high-order polynomial approximation [37]. MEXP [36] uses the standard Krylov subspace, which uses A directly to generate the subspace basis through the Arnoldi process (Algorithm 1). First, we reformulate Eq. (6) into
$x(t+h) = e^{hA} v - w,$    (7)
where
$v = x(t) + A^{-1} b(t) + A^{-2}\frac{b(t+h)-b(t)}{h},$    (8)
and
$w = A^{-1} b(t+h) + A^{-2}\frac{b(t+h)-b(t)}{h}.$    (9)
The standard Krylov subspace
$K_m(A, v) = \mathrm{span}\{v, Av, A^2 v, \ldots, A^{m-1} v\},$    (10)
obtained by the Arnoldi process, has the relation
$A V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T},$    (11)
where H_m is the upper Hessenberg matrix
$H_m = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,m} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,m} \\ & \ddots & \ddots & \vdots \\ & & h_{m,m-1} & h_{m,m} \end{bmatrix},$    (12)
$V_m = [v_1, v_2, \ldots, v_m]$ is an n-by-m matrix, and e_m is the m-th unit vector of dimension m. MEVP is computed via
$e^{hA} v \approx \|v\|\, V_m\, e^{h H_m} e_1.$    (13)
The posterior error term is
$\epsilon(h, m) = \|v\|\, h_{m+1,m} \left| e_m^{T}\, e^{h H_m} e_1 \right|.$    (14)
However, for an autonomous system in circuit simulation, we consider the residual of the approximate solution with respect to the underlying ODE, rather than the truncation error of the matrix exponential itself. This leads to a residual-based estimate
(15)
which helps us mitigate the overestimation of the error bound.
To generate V_m and H_m by Algorithm 1, we use the LU factors of C,
(16)
as inputs for the standard Krylov subspace, so that each product with A = -C^{-1} G is applied through one pair of forward and backward substitutions. The error budget and Eq. (14) are used to determine convergence when the time step is h and the Krylov subspace dimension is m (the convergence check in Algorithm 1).
The standard Krylov subspace may not be efficient when simulating stiff circuits [36, 49]. To approximate e^{hA}v accurately, a large Krylov subspace dimension is required, which not only increases the computational cost but also consumes a huge amount of memory. The reason is that the Hessenberg matrix of the standard Krylov subspace tends to approximate the large-magnitude eigenvalues of A first [50]. Due to the exponential decay of the corresponding terms in the Taylor expansion, such components are not the crux of the circuit system's behavior [51, 50]. Therefore, to simulate a stiff circuit, we must gather more vectors into the subspace basis and increase m to fetch the useful small-magnitude components, which adds both memory overhead and computational cost to the Krylov subspace generation at each time step. In the following subsections, we adopt ideas from spectral transformation [51, 50] to effectively capture the small-magnitude eigenvalues of A, leading to fast and accurate MEVP computation.
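The standard-Krylov MEVP of Eqs. (10)-(13) can be sketched as follows. The `arnoldi` helper and the mildly conditioned test matrix are illustrative only (they are not the paper's Algorithm 1, and Python replaces the paper's MATLAB):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    """Arnoldi process: returns V (n x m) and upper-Hessenberg H (m x m)
    satisfying the relation of Eq. (11) up to the residual term."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # lucky breakdown
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Toy NON-stiff system, where the standard Krylov method works well
rng = np.random.default_rng(1)
n, m, h = 200, 30, 0.1
D = -np.diag(rng.uniform(0.5, 5.0, n))       # mild spectrum
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ D @ Q.T                               # dense, well-conditioned
v = rng.standard_normal(n)

V, H = arnoldi(A, v, m)
e1 = np.zeros(m); e1[0] = 1.0
approx = np.linalg.norm(v) * V @ (expm(h * H) @ e1)   # Eq. (13)
exact = expm(h * A) @ v
```

With this benign spectrum, m = 30 already reproduces e^{hA}v to high accuracy; the point of the discussion above is that a stiff spectrum would destroy this fast convergence.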
III-B. I-MATEX: MEVP Computation via the Invert Krylov Subspace Method
Instead of A, we use $A^{-1} = -G^{-1}C$ as our target matrix to form
$K_m(A^{-1}, v) = \mathrm{span}\{v, A^{-1} v, \ldots, A^{-(m-1)} v\}.$    (17)
Intuitively, by inverting A, the small-magnitude eigenvalues become the large-magnitude ones of A^{-1}, so the resulting subspace is likely to capture these eigenvalues first. Based on the Arnoldi algorithm, the invert Krylov subspace has the relation
$A^{-1} V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}.$    (18)
The matrix exponential is calculated as
$e^{hA} v \approx \|v\|\, V_m\, e^{h H_m^{-1}} e_1.$    (19)
To put this method into Algorithm 1, we only modify the inputs for the LU decomposition in Eq. (16): G is factorized, so that each product with $A^{-1} = -G^{-1}C$ is applied through one pair of forward and backward substitutions; correspondingly, Algorithm 1 uses $e^{h H_m^{-1}}$ in place of $e^{h H_m}$ for the invert Krylov version.
(20) 
which is derived from the residual-based error approximation in [51]. However, as mentioned in Sec. III-A, we consider the residual of the approximate solution with respect to the underlying ODE instead, which leads to
(21) 
III-C. R-MATEX: MEVP Computation via the Rational Krylov Subspace Method
The shift-and-invert Krylov subspace basis [50] is designed to confine the spectrum of A. We generate the Krylov subspace via

$K_m\!\left((I - \gamma A)^{-1}, v\right) = \mathrm{span}\{v, (I-\gamma A)^{-1} v, \ldots, (I-\gamma A)^{-(m-1)} v\},$    (22)
where γ is a predefined shift parameter. With this shift, all the shifted eigenvalues' magnitudes are larger than one, and the inversion then confines their magnitudes to less than one. According to [51, 50], the shift-and-invert basis for matrix exponential based transient simulation is not very sensitive to γ, once it is set around the order of the time steps used in the transient simulation. A similar idea has been applied to simple power grid simulation with the matrix exponential method [52]. Here, we generalize this technique and integrate it into MATEX. The Arnoldi process constructs V_m and H_m, and we have
$(I - \gamma A)^{-1} V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}.$    (23)
We can project e^{hA}v onto the rational Krylov subspace as follows:
$e^{hA} v \approx \|v\|\, V_m\, \exp\!\left(\frac{h}{\gamma}\left(I - H_m^{-1}\right)\right) e_1.$    (24)
Following the same procedure [47, 51], the posterior error approximation is derived as
(25) 
Note that in practice, instead of computing $(I - \gamma A)^{-1} = (I + \gamma C^{-1} G)^{-1}$ directly, $(C + \gamma G)^{-1} C$ is utilized. The corresponding Arnoldi process shares the same skeleton as Algorithm 1, with $(C + \gamma G)$ as the input matrix for the LU decomposition in Eq. (16) and C applied in the matrix-vector product.
The residual estimation is
(26) 
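A shift-and-invert MEVP sketch following Eqs. (22)-(24) is shown below on a severely stiff diagonal test system (stiffness 10^6, values assumed). The single LU factorization of (I - γA) is reused for every Arnoldi step, mirroring how R-MATEX reuses the factors of (C + γG); the heuristic γ = h follows the insensitivity discussion above:

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

# Shift-and-invert (rational) Krylov for e^{hA} v on a stiff system.
rng = np.random.default_rng(2)
n, m = 400, 25
lam = -np.logspace(0, 6, n)          # stiffness ratio 1e6
A = np.diag(lam)                      # diagonal keeps the reference exact
v = rng.standard_normal(n)
h = 1e-3
gamma = h                             # heuristic: gamma near the step size

M = np.eye(n) - gamma * A
lu, piv = lu_factor(M)                # factor once, reuse every Arnoldi step

V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = v / np.linalg.norm(v)
for j in range(m):                    # Arnoldi on (I - gamma*A)^{-1}
    w = lu_solve((lu, piv), V[:, j])
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

Hm = H[:m, :m]
e1 = np.zeros(m); e1[0] = 1.0
# Eq. (24): e^{hA} v ~= ||v|| V_m exp((h/gamma)(I - H_m^{-1})) e_1
Htil = (np.eye(m) - np.linalg.inv(Hm)) / gamma
approx = np.linalg.norm(v) * V[:, :m] @ (expm(h * Htil) @ e1)
exact = np.exp(h * lam) * v
```

Despite the 10^6 stiffness, a small basis already delivers good accuracy, which is why the rational method is the preferred MEVP engine in R-MATEX.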
III-D. Regularization-Free MEVP Computation
When C is a singular matrix, MEXP [36] needs the regularization process [53] to remove the singularity of the DAE in Eq. (1), because MEXP must factorize C directly to form the input for Algorithm 1. This brings extra computational overhead when the case is large [53]. It is not necessary if we can obtain the generalized eigenvalues and corresponding eigenvectors of the matrix pencil (-G, C). Based on [54], we derive the following lemma.
Lemma 1.
Consider a homogeneous system
$C \frac{dx(t)}{dt} = -G x(t).$
If v and λ are an eigenvector and eigenvalue of the matrix pencil (-G, C), then
$x(t) = e^{\lambda t} v$
is a solution of the system.
Proof.
(Footnote 2: We repeat the proof from [54] with some modifications for our formulation.) If λ and v are an eigenvalue and eigenvector of the generalized eigenvalue problem $-G v = \lambda C v$, then $C \frac{d}{dt}\left(e^{\lambda t} v\right) = \lambda e^{\lambda t} C v = e^{\lambda t}(-G v) = -G\left(e^{\lambda t} v\right)$. Hence $x(t) = e^{\lambda t} v$ is a solution of the system. ∎
Because we do not need to compute C^{-1} explicitly during Krylov subspace generation, I-MATEX and R-MATEX are regularization-free. Instead, we factorize G for the invert Krylov subspace basis generation (I-MATEX), or (C + γG) for the rational Krylov subspace basis generation (R-MATEX). (Footnote 3: This also applies to the later work of DR-MATEX in Sec. IV-B.) Besides, their Hessenberg matrices (Eq. (12)) are invertible and contain the corresponding important generalized eigenvalues/eigenvectors of the matrix pencil (-G, C), which define the behavior of interest of the linear dynamical system in Eq. (1).
III-E. Comparisons among Different Krylov Subspace Algorithms for MEVP Computation
In order to observe the error distribution versus subspace dimension for the standard, invert, and rational Krylov subspace methods for MEVP, we construct an RC circuit with stiffness defined as $|\lambda_{max}| / |\lambda_{min}|$, where λ_max and λ_min are the maximum- and minimum-magnitude eigenvalues of A. Fig. 2 shows the relative error reduction along the increasing Krylov subspace dimension. The error reduction rate of the rational Krylov subspace is the best, while the standard Krylov subspace requires a very large dimension to reach the same error level as the invert and rational methods. The relative error is measured against a reference computed directly by the MATLAB function expm; this is feasible because the test matrix is relatively small, and the result serves as the baseline for accuracy. The reported relative error is the real relative difference compared to the analytical solution of the ODE $\frac{dx}{dt} = A x$ with a randomly generated initial vector.
The error reduction rate of the standard Krylov subspace is the worst, while that of the rational Krylov subspace is the best; this is the reason we prefer the rational Krylov subspace (R-MATEX). The relative errors of BE, TR and FE are all substantially larger; the large error of FE is due to the instability of its low-order explicit time integration scheme. In Fig. 2, even with small subspace dimensions, the standard, invert and rational Krylov subspace methods achieve far smaller errors, which illustrates the power of the matrix exponential method. Our proposed methods are all stable and achieve improved error numbers.
In order to observe the effect of different stiffness values on the Krylov subspace methods, we change the entries in C and G to produce different stiffness values. Fig. 3 illustrates the stable reduction rate of the rational method. Stiffness degrades the performance of the standard Krylov subspace method, while both the invert and rational Krylov subspace methods remain good candidates for stiff circuit systems.
Regarding the relative error distributions vs. time step h and dimension m, Fig. 4, Fig. 5, and Fig. 6 are computed by the standard, invert, and rational Krylov subspaces, respectively. Fig. 4 shows that the error generated by the standard Krylov subspace method has a flat region with high error values across the time-step range of interest; only the small ("unrealistic") time-step range has small error values. Compared to Fig. 4, the invert (Fig. 5) and rational (Fig. 6) Krylov subspace methods reduce the error quickly for large h. The explanation is that a relatively small portion of the eigenvalues and corresponding invariant subspaces determines the final result (vector) when the time step is large [50], and these are efficiently captured by the invert and rational Krylov subspace methods.
The error of the rational Krylov subspace is relatively insensitive to γ when it is selected within the time-step range of interest (Fig. 7). Above all, the rational (R-MATEX) and invert (I-MATEX) Krylov subspace methods have much better performance than the standard version. When dealing with stiff cases, the standard Krylov subspace is not a feasible choice due to the large subspace dimension required, which causes huge memory consumption and poor runtime performance.
IV. MATEX Framework
IV-A. MATEX Circuit Solver
IV-B. DR-MATEX (Distributed R-MATEX Framework) by Decomposition of Input Sources, Linear Superposition, and a Parallel Computation Model
IV-B1. Motivation
There are usually many input sources in PDNs, and their transition activities might narrow the regions available for the stepping of matrix exponential based methods due to unaligned breakpoints. In other words, the region before the next transition may be shortened when there is a lot of activity among the input sources. This leads to more frequent generation of new Krylov subspace bases. We want to reduce the number of subspace generations and improve the runtime performance. (Footnote 4: The breakpoints put the same constraint on TR-FTS and BE-FTS; however, their time steps are fixed already, which exempts them from this problem in the first place.)
IV-B2. Treatment and Methodology
In the matrix exponential based integration framework, once the Krylov subspace basis has been computed at time t, we can obtain the solution at any intermediate time spot t + h' (0 < h' ≤ h) before the next breakpoint. The solution x(t + h') is computed by scaling the existing Hessenberg matrix with the time step, as below:
$x(t+h') \approx \|v\|\, V_m\, \exp\!\left(\frac{h'}{\gamma}\left(I - H_m^{-1}\right)\right) e_1 - w(h'),$    (27)

where w(h') is assembled from the unchanged input slope as in Eq. (9); only the small Hessenberg exponential is re-evaluated.
This is an important feature: solutions at intermediate time points are computed without generating a new Krylov subspace basis, as long as there is no current transition. Besides, since the PDN is a linear dynamical system, we can utilize the well-known superposition property of linear systems and a distributed computing model to tackle this challenge.
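This scaling-invariance property can be demonstrated with a standard-Krylov sketch on the homogeneous part: one Arnoldi pass produces V_m and H_m, and solutions at several intermediate step sizes then cost only a small m-by-m matrix exponential each. The test system and step sizes are assumed for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Build the Krylov basis for (A, v) ONCE, then evaluate e^{h'A} v at
# several intermediate step sizes h' by re-scaling the Hessenberg matrix.
rng = np.random.default_rng(3)
n, m = 150, 25
A = -np.diag(rng.uniform(0.5, 4.0, n))    # toy stable system
v = rng.standard_normal(n)

V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = v / np.linalg.norm(v)
for j in range(m):                         # one Arnoldi pass, done once
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
Hm = H[:m, :m]
e1 = np.zeros(m); e1[0] = 1.0

errors = []
for hp in (0.05, 0.1, 0.2, 0.4):           # no new subspace per time point
    approx = np.linalg.norm(v) * V[:, :m] @ (expm(hp * Hm) @ e1)
    exact = expm(hp * A) @ v
    errors.append(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Every intermediate evaluation reuses the same n-by-m basis; only expm of the tiny Hessenberg matrix changes with h', which is what makes subspace reuse between breakpoints cheap.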
To illustrate our distributed version of the MATEX framework, we first define three terms to categorize the breakpoints of the input sources:

Local Transition Spot (LTS): the set of breakpoints of a single input source (or group of sources) to the PDN.

Global Transition Spot (GTS): the union of the LTS among all the input sources to the PDN.

Snapshot: the set of points of the GTS that are not in the LTS of a given input source.
If we simulate the PDN with respect to all the input sources, the points in the GTS are the places where generations of the Krylov subspace cannot be avoided. For example, there are three input sources in a PDN (Fig. 8), with the input waveforms shown in Fig. 9. The first line is the GTS, which is contributed by the union of the LTS of input sources #1, #2 and #3. However, we can partition the task into subtasks by simulating each input source individually. Then, each subtask generates Krylov subspaces based only on its own LTS and keeps track of the GTS for the later summation via linear superposition. Between two consecutive LTS points, the Snapshot points
can reuse the Krylov subspace generated at the earlier LTS point. For each node, the chances of generating Krylov subspaces are reduced, and the time periods over which the latest Krylov subspace is reused are enlarged locally, which brings runtime improvement. Besides, once the subtasks are assigned, there is no communication among the computing nodes, which leads to the so-called Embarrassingly Parallel computation model.
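The superposition argument behind this partition can be sketched directly. Below, a toy linear system with two constant sources (values assumed) is simulated monolithically and as two subtasks, one carrying the initial condition and source 1, the other starting from zero with source 2; the exact exponential step stands in for the per-node solver:

```python
import numpy as np
from scipy.linalg import expm

# Superposition sketch: simulate x' = A x + b1 + b2 as two subtasks.
rng = np.random.default_rng(4)
n = 6
A = -(2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n)))  # stable toy A
x0 = rng.standard_normal(n)
b1 = rng.standard_normal(n)        # constant source 1 (PWL with zero slope)
b2 = rng.standard_normal(n)        # constant source 2
h = 0.2

Ainv = np.linalg.inv(A)

def step(x, b):
    """Exact step for x' = A x + b with constant b (Eq. (6), zero slope)."""
    return expm(h * A) @ (x + Ainv @ b) - Ainv @ b

x_full = step(x0, b1 + b2)                       # monolithic simulation
x_sub = step(x0, b1) + step(np.zeros(n), b2)     # subtasks, then summation
```

Because the step is affine-linear in (x, b), the summed partial responses reproduce the monolithic answer exactly; this is the identity each slave node's result relies on.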
IV-B3. More Aggressive Task Decomposition
We divide the simulation task based on the alignments of the input sources. More aggressively, we can decompose the task according to the "bump" shapes of the input sources. (Footnote 5: The IBM power grid benchmarks provide the pulse input model in SPICE format.) We group the input sources that share the same bump timing feature into one set. For example, input source #1 of Fig. 9 is divided into #1.1 and #1.2 in Fig. 10, and input source #2 of Fig. 9 is divided into #2.1 and #2.2 in Fig. 10. Therefore, there are four groups in Fig. 10, each containing the sub-sources that share the same bump timing. Our proposed framework MATEX is shown in Fig. 11. After precomputing the GTS and decomposing the sources based on the "bump" shape (Fig. 10), we group them and form the LTS for each node. (Footnote 6: There are alternative decomposition strategies, and it is easy to extend the work to deal with different input waveforms. We keep this part as simple as possible to emphasize the framework.)
IV-B4. MATEX Scheduler in DR-MATEX
In DR-MATEX, the role of the MATEX scheduler is simply to send the LTS and the corresponding grouped input sources to the different MATEX slave nodes, and to collect the final results after all subtasks of the transient simulation are finished. The number of nodes equals the total number of subtasks, i.e., the number of groups after the PDN source decomposition. The simulation computations are then performed in parallel, each node with its own inputs containing the corresponding sources and breakpoints for that node. The scheduler does not need to do anything during the transient simulation, since there is no communication among the nodes before the stage of "write back" (Fig. 11), by which time all nodes have completed their transient simulations.
Within each slave node, the circuit solver (Algorithm 3) computes the transient response with varied time steps. Solutions are obtained without refactorizing the matrix during the transient simulation. The computing nodes write back the results and inform the MATEX scheduler after finishing their own transient simulations.
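The scheduler/slave interaction can be sketched as follows. A thread pool stands in for the distributed machines (the real framework runs separate MATLAB processes on separate nodes), the per-node solver is reduced to an exact constant-source exponential step, and all sizes and values are assumed; the essential points, independent subtasks with no communication and a final summation, carry over:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.linalg import expm

# Embarrassingly-parallel sketch: each "node" simulates the system with
# only its own source group; the scheduler just sums the partial results.
rng = np.random.default_rng(5)
n, n_groups = 8, 4
A = -(3.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n)))  # stable toy A
x0 = rng.standard_normal(n)
sources = rng.standard_normal((n_groups, n))   # constant source per group
h, steps = 0.05, 10
Ainv = np.linalg.inv(A)
E = expm(h * A)                                # shared propagator

def subtask(k):
    """Node k: only node 0's task carries the initial condition."""
    x = x0.copy() if k == 0 else np.zeros(n)
    b = sources[k]
    for _ in range(steps):                     # exact steps, constant b
        x = E @ (x + Ainv @ b) - Ainv @ b
    return x

with ThreadPoolExecutor(max_workers=n_groups) as pool:
    partials = list(pool.map(subtask, range(n_groups)))
x_sum = np.sum(partials, axis=0)               # scheduler's write-back sum

# Monolithic reference with the total source
x_ref = x0.copy()
b_tot = sources.sum(axis=0)
for _ in range(steps):
    x_ref = E @ (x_ref + Ainv @ b_tot) - Ainv @ b_tot
```

Since each step is linear, the summed subtask trajectories equal the monolithic one at every step, so the workers genuinely never need to exchange state.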
IV-C. Runtime Analysis of the MATEX PDN Solver
Suppose the Krylov subspace basis has dimension m on average at each time step, and one pair of forward and backward substitutions consumes runtime T_fb. The total time of the serial parts is T_ser, which includes the matrix factorizations, result collection, etc. For R-MATEX, the evaluation of the matrix exponential of the small Hessenberg matrix costs T_H, proportional to m^3; besides, we need an extra T_V to form $\|v\| V_m e^{h\tilde{H}_m} e_1$, proportional to mn.

Given ℓ points of GTS, without decomposition of the input sources, the runtime is

$T_{MATEX} = T_{ser} + \ell\,(m\,T_{fb} + T_H + T_V).$    (28)

After dividing the input transitions and sending them to enough computing nodes, each node has k points of LTS based on the feature extraction and grouping (k is small, e.g., for one "bump" shape feature). The total computation runtime is

$T_{DR\text{-}MATEX} = T'_{ser} + k\,(m\,T_{fb} + T_H + T_V),$    (29)

where T'_ser contains the serial portion of computing in DR-MATEX mode. The speedup of DR-MATEX over single-node MATEX is

$T_{MATEX} / T_{DR\text{-}MATEX}.$    (30)

For R-MATEX, m is small; besides, ℓ is relatively large in our targeted problems. Therefore, the dominating part of Eq. (28) is $\ell\,(m\,T_{fb} + T_H + T_V)$. We can always decompose the input source transitions to make k much smaller than ℓ.

In contrast, suppose the traditional method with a fixed step size takes s steps for the entire simulation; its runtime is

$T_{TR\text{-}FTS} = T_{ser} + s\,T_{fb}.$

Then, the speedup of distributed DR-MATEX over the traditional method is

$(T_{ser} + s\,T_{fb}) / (T'_{ser} + k\,(m\,T_{fb} + T_H + T_V)).$    (31)

Note that when the minimum distance among the input source breakpoints decreases, or a large time span or many cycles are required to simulate the PDN, schemes with a uniform step size degrade further in runtime due to the increase of s. In contrast, in the MATEX PDN solver, ℓ is not sensitive to such constraints; besides, k can be kept small by the decomposition strategy. Therefore, the speedups of our proposed methods tend to be larger as the simulation requirements become harsher.
V. Experimental Results
We implement all the algorithms in MATLAB R2014b (Footnote 7: The measurements reported are on MATLAB implementations. They are subject to its limitations and are not directly comparable to the C++ implementations reported in the literature, such as [44].) and use the UMFPACK package for LU factorization. First, we compare I-MATEX, R-MATEX and TR to show the runtime improvements of our single-machine framework in Table II. Second, we show that our distributed framework DR-MATEX achieves large speedups in Table III. The experiments are conducted on a server with an Intel(R) Xeon(R) E5-2640 v3 2.60GHz processor and 125GB memory.
V-A. Performance of I-MATEX and R-MATEX in Sec. IV-A
We compare our proposed I-MATEX and R-MATEX against the popular TR-FTS on the IBM power grid benchmarks [22]. Among the current sources, the smallest interval between two breakpoints puts the upper limit on TR's step size. All of these cases have very large numbers of input current sources. Table I shows the details of each benchmark circuit, with sizes ranging from 54K up to 3.2M nodes. From ibmpg1t to ibmpg6t, TR uses a fixed step size at this upper limit. We also modify the IBM power grid benchmarks to shrink the smallest distance among breakpoints by interleaving the input sources' breakpoints (similar to Fig. 1), so that the fixed-step method must use a correspondingly smaller step. The names of these benchmarks are ibmpg1t_new, ibmpg2t_new, ibmpg3t_new, ibmpg4t_new, ibmpg5t_new and ibmpg6t_new.
Design  #R  #C  #L  #I  #V  #Nodes 
ibmpg1t  41K  11K  277  11K  14K  54K 
ibmpg2t  245K  37K  330  37K  330  165K 
ibmpg3t  1.6M  201K  955  201K  955  1.0M 
ibmpg4t  1.8M  266K  962  266K  962  1.2M 
ibmpg5t  1.6M  473K  277  473K  539K  2.1M 
ibmpg6t  2.4M  761K  381  761K  836K  3.2M 
After DC analysis, TR-FTS LU-factorizes its left-hand-side matrix once for the subsequent transient simulation, which then contains only time stepping. Actually, multiple factorized matrices can be deployed [55, 56] and one of them chosen during the stepping; the problem is the memory and runtime overhead of the multiple matrix factorizations. Another point is that if a large time step is chosen, the standard low-order scheme cannot maintain accuracy.
The experiment is conducted on a single computing node. In Table II, we record the total simulation runtime Total(s), which includes the DC and transient simulation but excludes the non-numerical computation before DC, e.g., netlist parsing and matrix stamping. We also record the transient part Tran(s), excluding DC analysis and LU decompositions. The speedup of I-MATEX is not as large as that of R-MATEX, because I-MATEX, facing a large spectrum, generates a large Krylov subspace dimension; meanwhile, the step size is not large enough to let it fully harvest the gain from adaptive time marching. In contrast, R-MATEX needs only small rational Krylov subspace dimensions in these cases, and therefore benefits from large time stepping, shown as SPDP. For ibmpg4t, R-MATEX achieves its maximum speedup, resulting from the relatively small number of breakpoints in that benchmark compared to the others.
In Table II, our single-node R-MATEX achieves a substantial average speedup over TR-FTS. Note that the average speedup of single-node R-MATEX over TR-FTS on the original IBM benchmarks (ibmpg1t-ibmpg6t) is less than that on the new test cases (ibmpg1t_new-ibmpg6t_new). As mentioned before, ibmpg1t_new-ibmpg6t_new have harsher input constraints, making the available fixed step size smaller. Therefore, the adaptive stepping of R-MATEX is more beneficial to runtime performance on ibmpg1t_new-ibmpg6t_new than on ibmpg1t-ibmpg6t.
V-B. Performance of DR-MATEX in Sec. IV-B
We test our distributed DR-MATEX in the following experiments with the same IBM power grid benchmarks. These cases have many input transitions (GTS) that limit the step sizes of R-MATEX. We divide the simulation task before the computation starts: we decompose the input sources by the approach discussed in Sec. IV-B3 and obtain far fewer transitions of LTS per computing node. The original input source numbers are over ten thousand in the benchmarks; however, based on the "bump" feature (as shown in Fig. 10), we obtain a fairly small number of groups for the computing nodes, shown as Group # in Table III. (Using a hundred machines to process in parallel is quite normal in industry nowadays [57, 38].) We precompute the LTS groups and assign the subtasks to the corresponding nodes. (Footnote 8: Based on the features of the available input sources, the preprocessing is very efficient; it takes linear time to obtain the GTS and LTS and to separate the sources into different groups.) The MATEX scheduler is only responsible for the simple superposition calculation at the end of the simulation. Since the slave nodes are in charge of all the computing procedures (Fig. 11) for their own transient simulation tasks and have no communication with the others, our framework falls into the category of the Embarrassingly Parallel model. We can easily emulate the multiple-node environment: we simulate each group using the command "matlab -singleCompThread" on our server. We record the runtime of each process (slave node) and report the maximum as the total runtime "Total(s)" of DR-MATEX in Table III. We also record the "pure transient simulation" time as "Tran(s)", which is the maximum among the counterparts of all computing nodes.
For TRFTS, we use a fixed time step such that there are 1,000 pairs of forward and backward substitutions during the pure transient simulation of ibmpg1t–ibmpg6t; for ibmpg1t_new–ibmpg6t_new, there are 10,000 pairs of forward and backward substitutions for the stepping. In DRMATEX, the circuit solver uses RMATEX with a shift parameter set to sit among the order of the varied time steps during the simulation (Sec. III-E discusses the insensitivity of this choice around the step sizes of interest). TRFTS is not distributed because it gains nothing from dividing the current sources as we do for DRMATEX: TRFTS cannot avoid the repeated pairs of forward and backward substitutions. Besides, adaptive stepping would only degrade the performance of TRFTS, since the process requires extra matrix factorizations.
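The reason RMATEX can vary its step size without refactorizing can be seen in a small shift-and-invert Krylov sketch: once (I − γA) is factorized for a fixed shift γ, approximating exp(hA)v for any step h costs only forward/backward substitutions plus a tiny m-by-m matrix exponential. The code below is a minimal dense illustration in that spirit, not the paper's implementation; the function name and defaults are ours, and a production PDN solver would operate on the sparse C, G matrices with a sparse LU instead.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def shift_invert_krylov_expm(A, v, steps, gamma=0.1, m=10):
    """Approximate exp(h*A) @ v for several step sizes h while reusing ONE
    LU factorization of (I - gamma*A).  The expensive factorization is done
    once; each subsequent (possibly adaptive) step costs only substitutions
    and a small m-by-m matrix exponential."""
    n = len(v)
    lu = lu_factor(np.eye(n) - gamma * A)          # factorize once
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):                              # Arnoldi on (I - gamma*A)^{-1}
        w = lu_solve(lu, V[:, j])                   # one forward/backward pair
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                     # happy breakdown: basis exhausted
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    Bm = (np.eye(m) - np.linalg.inv(Hm)) / gamma    # project A back from the basis
    e1 = np.zeros(m)
    e1[0] = 1.0
    # One small matrix exponential per requested step size; no new factorization.
    return [beta * V[:, :m] @ (expm(h * Bm) @ e1) for h in steps]
```

Here the basis is built once and every entry of `steps` reuses the same factorization, which is exactly why adaptive stepping is nearly free in this framework.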
In Table III, our distributed mode gains substantial speedup in the pure transient computing. The average peak dimension of the rational Krylov subspace is 6.7. The memory overhead for each node (around 1.6x over TRFTS on average) is slightly larger, which is worthwhile given the large runtime improvement. With the huge reduction in the runtime of Krylov subspace generation, the serial parts, including LU and DC, play more dominant roles in DRMATEX; these can be further improved using advanced matrix solvers, such as [58].
Design       | Group # | Tran(s) | Total(s) | Max Df.(V) | Avg Df.(V) | SPDP | SPDP | Peak | Mem. Ratio over TRFTS
ibmpg1t      | 100     | 1.4     | 1.9      | 5.3e-5     | 8.6e-6     |      |      | 6    | 1.9
ibmpg2t      | 100     | 8.9     | 11.4     | 4.6e-5     | 8.6e-6     |      |      | 7    | 1.9
ibmpg3t      | 100     | 91.7    | 129.9    | 9.6e-5     | 19.7e-6    |      |      | 6    | 1.5
ibmpg4t      | 15      | 52.3    | 112.2    | 9.9e-5     | 27.9e-6    |      |      | 8    | 1.4
ibmpg5t      | 100     | 148.4   | 178.9    | 9.0e-5     | 1.1e-6     |      |      | 7    | 1.5
ibmpg6t      | 100     | 189.9   | 234.2    | 3.4e-5     | 7.2e-6     |      |      | 7    | 1.5
ibmpg1t_new  | 100     | 2.4     | 2.8      | 5.3e-5     | 8.6e-6     |      |      | 6    | 1.9
ibmpg2t_new  | 100     | 5.6     | 7.0      | 4.6e-5     | 8.6e-6     |      |      | 7    | 1.9
ibmpg3t_new  | 100     | 103.0   | 140.9    | 9.8e-5     | 19.9e-6    |      |      | 7    | 1.5
ibmpg4t_new  | 15      | 51.5    | 108.4    | 9.9e-5     | 27.6e-6    |      |      | 8    | 1.4
ibmpg5t_new  | 100     | 185.6   | 227.8    | 9.9e-5     | 2.2e-6     |      |      | 7    | 1.5
ibmpg6t_new  | 100     | 274.8   | 317.7    | 3.4e-5     | 7.1e-6     |      |      | 7    | 1.5
Average      | —       | —       | —        | 7.1e-5     | 12.3e-6    |      |      | 6.7  | 1.6
VI Conclusions and Future Directions
In this work, we propose MATEX, an efficient framework for accurate PDN time-domain simulation based on the exponential integration scheme. We visualize the error distributions to show the advantages of the rational (RMATEX) and invert (IMATEX) Krylov subspace methods for matrix exponential and vector product (MEVP) over the standard Krylov subspace method (MEXP). For PDN simulation, our time integration scheme performs adaptive time stepping without repeating matrix factorizations, which cannot be achieved by traditional methods using implicit numerical integration with a fixed time-step scheme. Compared to the commonly adopted framework of TR with a fixed time step (TRFTS), our single mode framework (RMATEX) gains substantial runtime speedup. We also show that the distributed MATEX framework (DRMATEX) leverages the superposition property of linear systems and decomposes the task based on the features of the input sources, so that we reduce the number of Krylov subspace generations at each node and achieve further runtime improvement.
We show that exponential integration with Krylov subspace methods maintains high-order accuracy and flexible time-stepping ability. Exponential integration was actually mentioned in very early work on circuit simulation algorithms [29], but it did not attract much attention at the time due to the high computational complexity of the matrix exponential. Nowadays, the progress of Krylov subspace methods provides an efficient way to compute the matrix exponential and vector product, so we can exploit features of exponential integration that are hard to obtain with traditional time integration schemes. Exponential integration can also serve as a stable explicit scheme [59, 60] for general dynamical systems. It is a promising framework for future circuit simulation algorithms and software. The opportunities for parallel and distributed computing with cutting-edge multicore and manycore hardware are also worth exploring for further parallelism and runtime improvement.
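As a concrete illustration of the accuracy claim, consider x' = Ax + b with a piecewise-constant input: the exponential-integrator update is exact for any step size, while the trapezoidal rule carries an O(h^3) local error. The sketch below is our own minimal example (assuming A is nonsingular, as in a regularized PDN model), contrasting the two updates.

```python
import numpy as np
from scipy.linalg import expm, solve

def expm_step(A, x, b, h):
    """One exponential-integrator step for x' = A x + b with constant b:
    x(t+h) = e^{hA} x(t) + A^{-1} (e^{hA} - I) b, exact for ANY step h
    (requires A nonsingular)."""
    E = expm(h * A)
    return E @ x + solve(A, (E - np.eye(len(x))) @ b)

def trapezoidal_step(A, x, b, h):
    """One trapezoidal-rule step: second-order accurate, so its error
    grows with h while the exponential step stays exact."""
    I = np.eye(len(x))
    return solve(I - (h / 2) * A, (I + (h / 2) * A) @ x + h * b)
```

On a small 2-by-2 stable system, a single large exponential step agrees with a finely resolved trapezoidal reference, whereas a single trapezoidal step of the same size shows a visible error; this is the flexibility that lets the exponential framework take long steps between breakpoints.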
Acknowledgment
We thank Prof. Scott B. Baden, Prof. Mike Botchev, Dr. Quan Chen, Dr. Peng Du, Dr. Martin Gander, Prof. Nicholas Higham, Prof. Marlis Hochbruck, Junkai Jiang, Dr. John Loffeld, Dr. Jingwei Lu, Mulong Luo, Prof. Alexander Ostermann, Prof. Mayya Tokman, and Chicheng Zhang for the informative discussions. Hao Zhuang thanks the support from UCSD’s Powell Fellowship and the Qualcomm FMA Fellowship. Last but not least, we thank the reviewers for all their insightful comments.
References
 [1] D. Kouroussis and F. N. Najm, “A static patternindependent technique for power grid voltage integrity verification,” in Proc. IEEE/ACM Design Autom. Conf., pp. 99–104, 2003.
 [2] S. R. Nassif and J. N. Kozhaya, “Fast power grid simulation,” in Proc. IEEE/ACM Design Autom. Conf., pp. 156–161, 2000.
 [3] M. S. Gupta, J. L. Oatley, R. Joseph, G.Y. Wei, and D. M. Brooks, “Understanding voltage variations in chip multiprocessors using a distributed powerdelivery network,” in Proc. DATE, pp. 1–6, 2007.
 [4] S. Lin, M. Nagata, K. Shimazake, K. Satoh, M. Sumita, H. Tsujikawa, and A. T. Yang, “Fullchip vectorless dynamic power integrity analysis and verification against 100uv/100psresolution measurement,” in Proc. IEEE CICC, pp. 509–512, 2004.
 [5] S. Lin and N. Chang, “Challenges in powerground integrity,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 651–654, 2001.
 [6] R. Zhang, B. H. Meyer, W. Huang, K. Skadron, and M. R. Stan, “Some limits of power delivery in the multicore era,” WEED, 2012.
 [7] R. Zhang, K. Wang, B. H. Meyer, M. R. Stan, and K. Skadron, “Architecture implications of pads as a scarce resource,” in Proc. IEEE/ACM Int’l Symp. Computer Architecture, pp. 373–384, 2014.
 [8] K. Wang, B. H. Meyer, R. Zhang, K. Skadron, and M. R. Stan, “Walking pads: Fast powersupply padplacement optimization.,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., vol. 20, p. 4, 2014.
 [9] J. Lu, P. Chen, C.C. Chang, L. Sha, D. Huang, C.C. Teng, and C.K. Cheng, “ePlace: Electrostatics based placement using Nesterov’s method,” in Proc. IEEE/ACM Design Autom. Conf., 2014.
 [10] M. Pan, N. Viswanathan, and C. Chu, “An efficient and effective detailed placement algorithm,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 48–55, 2005.
 [11] J. Lu, H. Zhuang, P. Chen, H. Chang, C.C. Chang, Y.C. Wong, L. Sha, D. Huang, Y. Luo, C.C. Teng, and C. K. Cheng, “ePlaceMS: Electrostatics based placement for mixedsize circuits,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst., vol. 34, no. 5, pp. 685–698, 2015.
 [12] L. Xiao, Z. Xiao, Z. Qian, Y. Jiang, T. Huang, H. Tian, and E. F. Y. Young, “Local clock skew minimization using blockageaware mixed treemesh clock network,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 458–462, 2010.
 [13] Y. Zhang and C. Chu, “GDRouter: Interleaved global routing and detailed routing for ultimate routability,” in Proc. IEEE/ACM Design Autom. Conf., pp. 597–602, 2012.
 [14] A. B. Kahng, S. Kang, H. Lee, I. L. Markov, and P. Thapar, “Highperformance gate sizing with a signoff timer,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 450–457, 2013.
 [15] C. Zhuo, G. Wilke, R. Chakraborty, A. Aydiner, S. Chakravarty, and W.K. Shih, “A siliconvalidated methodology for power delivery modeling and simulation,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 255–262, 2012.
 [16] Z. Zeng, X. Ye, Z. Feng, and P. Li, “Tradeoff analysis and optimization of power delivery networks with onchip voltage regulation,” in Proc. IEEE/ACM Design Autom. Conf., pp. 831–836, 2010.
 [17] H. Zhuang, J. Lu, K. Samadi, Y. Du, and C. K. Cheng, “Performancedriven placement for design of rotation and right arithmetic shifters in monolithic 3D ICs,” in Proc. IEEE Intl. Conf. Communications, Circuits and Systems, vol. 2, pp. 509–513, 2013.
 [18] S. K. Samal, K. Samadi, P. Kamal, Y. Du, and S. K. Lim, “Full chip impact study of power delivery network designs in monolithic 3D ICs,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 565–572, 2014.
 [19] D. Xiang and K. Shen, “A thermaldriven test application scheme for prebond and postbond testing of threedimensional ICs,” ACM Journal on Emerging Technologies in Computing Systems, vol. 10, no. 2, Art. no. 18, 2014.
 [20] H. Zhuang, X. Wang, I. Kang, J.H. Lin, and C. K. Cheng, “Dynamic analysis of power delivery network with nonlinear components using matrix exponential method,” in IEEE Intl. Symp. EMC&SI, 2015.
 [21] H. Zhuang, W. Yu, I. Kang, X. Wang, and C. K. Cheng, “An algorithmic framework for efficient largescale circuit simulation using exponential integrators,” in Proc. IEEE/ACM Design Autom. Conf., 2015.
 [22] S. R. Nassif, “Power grid analysis benchmarks,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., pp. 376–381, 2008.
 [23] Z. Ye, Z. Zhu, and J. R. Phillips, “Sparse implicit projection (SIP) for reduction of general manyterminal networks,” in Proc. Intl. Conf. ComputerAided Design, pp. 736–741, 2008.
 [24] Z. Li, R. Balasubramanian, F. Liu, and S. Nassif, “2012 tau power grid simulation contest: benchmark suite and results,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 643–646, 2012.
 [25] C. Zhuo, H. Gan, and W.K. Shih, “Earlystage power grid design: Extraction, modeling and optimization,” in Proc. IEEE/ACM Design Autom. Conf., pp. 1–6, 2014.
 [26] H. Zhuang, W. Yu, G. Hu, Z. Liu, and Z. Ye, “Fast floating random walk algorithm for multidielectric capacitance extraction with numerical characterization of Green’s functions,” in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf., pp. 377–382, 2012.
 [27] C. Zhang and W. Yu, “Efficient space management techniques for largescale interconnect capacitance extraction with floating random walks,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 32, no. 10, pp. 1633–1637, 2013.
 [28] W. Yu, H. Zhuang, C. Zhang, G. Hu, and Z. Liu, “RWCap: A floating random walk solver for 3D capacitance extraction of verylargescale integration interconnects,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 32, no. 3, pp. 353–366, 2013.
 [29] L. O. Chua and P.M. Lin, Computer Aided Analysis of Electric Circuits: Algorithms and Computational Techniques. PrenticeHall, 1975.
 [30] Y. Saad, Iterative Methods for Sparse Linear Systems. SIAM, 2003.
 [31] T. A. Davis, Direct Methods for Sparse Linear Systems. SIAM, 2006.
 [32] T. Yu and M. D.F. Wong, “PGT_SOLVER: An efficient solver for power grid transient analysis,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 647–652, 2012.
 [33] J. Yang, Z. Li, Y. Cai, and Q. Zhou, “Powerrush: Efficient transient simulation for power grid analysis,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 653–659, 2012.
 [34] X. Xiong and J. Wang, “Parallel forward and back substitution for efficient power grid simulation,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 660–663, 2012.
 [35] L. Nagel, SPICE2: A computer program to simulate semiconductor circuits. Ph.D. dissertation, Univ. of California, Berkeley, 1975.
 [36] S.H. Weng, Q. Chen, and C. K. Cheng, “Timedomain analysis of largescale circuits by matrix exponential method with adaptive control,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 31, no. 8, pp. 1180–1193, 2012.
 [37] Y. Saad, “Analysis of some Krylov subspace approximations to the matrix exponential operator,” SIAM J. Numer. Anal., vol. 29, no. 1, pp. 209–228, 1992.
 [38] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, 2010.
 [39] J. Dean and S. Ghemawat, “Mapreduce: simplified data processing on large clusters,” CACM, vol. 51, no. 1, pp. 107–113, 2008.
 [40] D. Xiang, Y. Zhang, and Y. Pan, “Practical deadlockfree faulttolerant routing in meshes based on the planar network fault model,” IEEE Trans. Computers, vol. 58, no. 5, pp. 620–633, 2009.
 [41] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark: cluster computing with working sets,” in HotCloud, pp. 10–10, 2010.
 [42] Q. He, W. Au, A. Korobkov, and S. Venkateswaran, “Parallel power grid analysis using distributed direct linear solver,” in IEEE Intl. Symp. EMC, pp. 866–871, 2014.
 [43] N. Gupte and J. Wang, “Secure power grid simulation on cloud,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 34, no. 3, pp. 422–432, 2015.
 [44] J. Wang and X. Xiong, “Scalable power grid transient analysis via MORassisted timedomain simulations,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, 2013.
 [45] X.X. Liu, H. Wang, and S. X.D. Tan, “Parallel power grid analysis using preconditioned gmres solver on CPUGPU platforms,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 561–568, IEEE, 2013.
 [46] Z. Feng and P. Li, “Multigrid on GPU: tackling power grid analysis on parallel simt platforms,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 647–654, 2008.
 [47] H. Zhuang, S.H. Weng, J.H. Lin, and C. K. Cheng, “MATEX: A distributed framework of transient simulation of power distribution networks,” in Proc. IEEE/ACM Design Autom. Conf., 2014.
 [48] M. J. Gander and S. Güttel, “PARAEXP: A parallel integrator for linear initial-value problems,” SIAM Journal on Scientific Computing, vol. 35, no. 2, pp. C123–C142, 2013.
 [49] S.H. Weng, Q. Chen, N. Wong, and C. K. Cheng, “Circuit simulation via matrix exponential method for stiffness handling and parallel processing,” in Proc. IEEE/ACM Int. Conf. Comput.Aided Design, pp. 407–414, 2012.
 [50] J. van den Eshof and M. Hochbruck, “Preconditioning Lanczos approximations to the matrix exponential,” SIAM J. Sci. Comput., vol. 27, no. 4, pp. 1438–1457, 2006.
 [51] M. A. Botchev, “A short guide to exponential Krylov subspace time integration for Maxwell’s equations,” Dept. of Applied Mathematics, Univ. of Twente, 2012.
 [52] H. Zhuang, S.H. Weng, and C. K. Cheng, “Power grid simulation using matrix exponential method with rational Krylov subspaces,” in Proc. IEEE Intl. Conf. ASIC, 2013.
 [53] Q. Chen, S.H. Weng, and C. K. Cheng, “A practical regularization technique for modified nodal analysis in largescale timedomain circuit simulation,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 31, no. 7, pp. 1031–1040, 2012.
 [54] J. Wilkinson, “Kronecker’s canonical form and the QZ algorithm,” Linear Algebra and its Applications, vol. 28, pp. 285–303, 1979.
 [55] P. Li, “Parallel circuit simulation: A historical perspective and recent developments,” Foundations and Trends in Electronic Design Automation, vol. 5, no. 4, pp. 211–318, 2012.
 [56] X. Ye, M. Zhao, R. Panda, P. Li, and J. Hu, “Accelerating clock mesh simulation using matrixlevel macromodels and dynamic time step rounding,” in ISQED, pp. 627–632, 2008.
 [57] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica, “Mesos: A platform for finegrained resource sharing in the data center,” in NSDI, pp. 22–22, 2011.
 [58] X. Chen, Y. Wang, and H. Yang, “NICSLU: An adaptive sparse matrix solver for parallel circuit simulation,” IEEE Trans. Comput.Aided Design Integr. Circuits Syst, vol. 32, no. 2, pp. 261–274, 2013.
 [59] M. Hochbruck and A. Ostermann, “Exponential integrators,” Acta Numerica, vol. 19, pp. 209–286, 2010.
 [60] M. Hochbruck, C. Lubich, and H. Selhofer, “Exponential integrators for large systems of differential equations,” SIAM J. Sci. Comput., vol. 19, no. 5, pp. 1552–1574, 1998.
Hao Zhuang (S’11) is a Ph.D. candidate at the Department of Computer Science and Engineering, University of California, San Diego, CA, USA (UC San Diego). He received his C.Phil. degree in Computer Science from UC San Diego in June 2015. His current research interests include numerical and optimization algorithms, with applications in design automation, very large-scale integration systems, and network and data analysis. He has industrial experience at several companies, including Synopsys, Inc., Mountain View, CA, USA, and Ansys, Inc., San Jose, CA, USA, where he worked on large-scale circuit analysis and dynamic power network simulation algorithms within several products. He was one of the main software developers of RWCap, a parallel program for VLSI capacitance extraction using the floating random walk algorithm, at Tsinghua University. At UC San Diego, he designed numerical algorithms for large-scale circuit simulation using matrix exponentials, and optimization algorithms for electrostatics-based VLSI global placement. Mr. Zhuang was the recipient of the Charles Lee Powell Fellowship and the Qualcomm FMA Fellowship. He is a student member of IEEE, ACM, and SIAM. He serves as a reviewer for IEEE Transactions on Computer-Aided Design (TCAD), and as an external reviewer for the Design Automation Conference (DAC) and the International Symposium on Physical Design (ISPD).
Wenjian Yu (S’01-M’04-SM’10) received the B.S. and Ph.D. degrees in computer science from Tsinghua University, Beijing, China, in 1999 and 2003, respectively. In 2003, he joined Tsinghua University, where he is an Associate Professor with the Department of Computer Science and Technology. He was a Visiting Scholar with the Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA, twice during the period from 2005 to 2008. He has served as an associate editor of IEEE Transactions on Computer-Aided Design since 2016. His current research interests include numerical modeling and simulation techniques for designing complex systems (such as integrated circuits, touchscreens, and cyber-physical systems), and matrix computation algorithms for big data analysis. He has authored three books and over 130 papers in refereed journals and conferences. Dr. Yu was a recipient of the Distinguished Ph.D. Award from Tsinghua University in 2003 and the Excellent Young Scholar Award from the National Science Foundation of China in 2014.
ShihHung Weng is currently with Facebook as a Research Scientist, working on large-scale distributed systems for payments services. He received the B.S. and M.S. degrees in Computer Science from National Tsing Hua University, Hsinchu, Taiwan, in 2006 and 2008, and the Ph.D. degree from the University of California, San Diego, La Jolla, CA, USA, in 2013 under the supervision of Dr. ChungKuan Cheng. His thesis focused on parallel circuit simulation for power-grid noise analysis in very large-scale integration (VLSI) systems.
Ilgweon Kang (S’12) received the B.S. and M.S. degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2006 and 2008, respectively. He is currently pursuing the Ph.D. degree at the University of California, San Diego (UCSD). He was with the Research and Development Division, SK Hynix, Icheon, Korea, from 2008 to 2012, where he was involved in the development of 3D stacked DRAM with through-silicon via (TSV) technology. His current research interests include 3D IC design and optimization, CAD, and design for testability.
JengHau Lin (S’15) is a Ph.D. student at the Department of Computer Science and Engineering, University of California, San Diego, CA, USA. He received his B.S. degree in Electrical Engineering in 2005 and M.S. degree in Communication Engineering in 2007 from National Taiwan University. His research interests include time-frequency analyses of signal integrity and the consensus of multi-agent systems.
Xiang Zhang (M’13) received the B.Eng. in Information Engineering from Shanghai Jiao Tong University (SJTU), China, and the M.S. in Electrical and Computer Engineering from the University of Arizona, Tucson, AZ, in 2010. From 2011 to 2014, he was a Senior Hardware Engineer with Qualcomm Technologies, Inc. In 2014, he joined Apple Inc. as an iPhone Hardware System Engineer. Since 2012, he has been working towards the Ph.D. degree in the Department of Electrical and Computer Engineering at the University of California, San Diego, CA, USA. His current research interests include power distribution network design and optimization, circuit simulation, and system-on-chip design.
Ryan Coutts received a B.S. in electrical engineering from the University of California, Riverside in 2005, and an M.S. in electrical engineering from Stanford University in 2006. After receiving his M.S. degree, he worked as a signal and power integrity engineer at NVIDIA Inc., specializing in FSB, DDR, and power integrity. He currently works at Qualcomm Inc. on low-power design methodologies for mobile, covering power integrity, thermal optimization, and on-chip timing. Mr. Coutts has filed 13 patents related to his innovations and is currently pursuing a Ph.D. at the University of California, San Diego.
ChungKuan Cheng (S’82-M’84-SM’95-F’00) received the B.S. and M.S. degrees in electrical engineering from National Taiwan University, and the Ph.D. degree in electrical engineering and computer sciences from the University of California, Berkeley in 1984. From 1984 to 1986 he was a senior CAD engineer at Advanced Micro Devices Inc. In 1986, he joined the University of California, San Diego, where he is a Distinguished Professor in the Computer Science and Engineering Department and an Adjunct Professor in the Electrical and Computer Engineering Department. He served as a principal engineer at Mentor Graphics in 1999. He was an associate editor of IEEE Transactions on Computer-Aided Design from 1994 to 2003. He is a recipient of best paper awards from IEEE Trans. on Computer-Aided Design in 1997 and 2002, the NCR excellence in teaching award from the School of Engineering, UCSD in 1991, IBM Faculty Awards in 2004, 2006, and 2007, and the Distinguished Faculty Certificate of Achievement, UJIMA Network, UCSD in 2013. He was appointed an Honorary Guest Professor of Tsinghua University from 2002 to 2008, and a Visiting Professor of National Taiwan University in 2011 and 2015. His research interests include medical modeling and analysis, network optimization, and design automation of microelectronic circuits.