# Exponential Adams-Bashforth integrators for stiff ODEs, application to cardiac electrophysiology

###### Abstract

Models in cardiac electrophysiology are coupled systems of reaction-diffusion PDEs and ODEs. The ODE system displays very stiff behavior. It is nonlinear, and its update at each time step accounts for a preponderant share of the computational cost. The issue is to develop high-order explicit and stable methods to cope with this situation.

This article analyzes the use of exponential Adams-Bashforth (EAB) integrators in cardiac electrophysiology. The method is presented in the framework of a general and varying stabilizer, which is well suited to this context. Stability under perturbation (or 0-stability) is proven, providing a new approach for the convergence analysis of the method. The Dahlquist stability properties of the method are then analyzed, in a new framework that incorporates the discrepancy between the stabilizer and the system Jacobian matrix. Provided this discrepancy is small enough, the method is shown to be A(alpha)-stable. This result is interesting for an explicit time-stepping method. Numerical experiments are presented for two classes of stiff models in cardiac electrophysiology, including performance comparisons with several classical methods. The EAB method is observed to be as stable as implicit solvers, and cheaper at an equal level of accuracy.

Keywords:
stiff equations,
explicit high-order multistep methods,
exponential integrators,
stability and convergence, Dahlquist stability

Acknowledgments.
This study received financial support from the French Government as part of the
“Investissement d’avenir” program managed by the National Research Agency
(ANR), Grant reference ANR-10-IAHU-04. It also received funding from the project
ANR-13-MONU-0004-04.

## Introduction

Computations in cardiac electrophysiology face two constraints. The first is the stiffness due to heterogeneous time and space scales, which is usually dealt with by considering very fine grids. That strategy entails large computational costs, still challenging in dimension 3. The second is the cost of resolving the reaction terms that appear in the ionic models. That resolution occurs at each grid node, so the total number of evaluations of the reaction terms has to be kept as low as possible by the numerical method; implicit solvers are therefore usually avoided. Exponential integrators are well adapted to cope with these two constraints: they allow an explicit resolution of the reaction term while displaying strong stability properties. This article studies and analyzes exponential time-stepping methods dedicated to the resolution of the reaction equations.

Models for the propagation of the cardiac action potential are evolution reaction-diffusion equations coupled with ODE systems. The widely used monodomain model [CLEMENS-NENONEN-04, FRANZONE-PAVARINO-TACCARDI-05.1, FRANZONE-PAVARINO-TACCARDI-05.2] reads:

with space and time variables and . The unknowns are the functions (the transmembrane voltage) and (a vector that gathers variables describing pointwise the electrophysiological state of the heart cells). In the monodomain model, the diffusion operator is (), and the reaction terms are the nonlinear functions , . These functions model the cellular electrophysiology; they are called ionic models. Ionic models are of complex nature, see e.g. [beeler-reuter, LR2a, tnnp, winslow]. Special attention has to be paid to the number of evaluations of the functions and , and implicit solvers are usually avoided. Though we ultimately use an implicit/explicit method to solve the PDE, we need an efficient, fast and robust method to integrate the reaction terms. Therefore, this article focuses on the time integration of the stiff ODE system

(1) |

in the special case where is an ionic model from cellular electrophysiology. In that case, stiffness is due to the coexistence of fast and slow variables. Fast variables are given in (1) by equations of the form,

(2) |

Here is provided by the model. This scalar rate of variation will be inserted into the numerical method to stabilize its resolution.

Exponential integrators are a class of explicit methods that nevertheless exhibit strong stability properties. They have motivated many studies over the past 15 years, among which we quote e.g. [hochbruck-98, Cox_Mat, oster_ex_RK, exp-rozenbrock-2009, Tokman-2012, Ostermann-ERK-2014], and we refer to [Bor_W, Hochbruck-Ostermann-2010, hochbruck-2015] for general reviews. They have already been used in cardiac electrophysiology, e.g. in [perego-2009, borgers-2013]. Exponential integrators are based on a reformulation of (1) as,

(3) |

(with ) where the linear part is used to stabilize the resolution. Basically, is assumed to capture the stiffest modes of the Jacobian matrix of system (1). Stabilization is brought by performing an exact integration of these modes. This exact integration involves the computation of the exponential at the considered point; this computation is the extra cost of exponential integrators as compared to other time-stepping methods.
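To make this stabilization mechanism concrete, here is a minimal sketch, not taken from the article, of the simplest exponential integrator, the exponential Euler method, for a splitting y' = Ay + b(t, y) with a constant matrix A. All names and the stiff test problem are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def phi1(Z):
    # phi_1(Z) = Z^{-1} (exp(Z) - I); valid when Z is nonsingular
    return np.linalg.solve(Z, expm(Z) - np.eye(Z.shape[0]))

def exponential_euler_step(y, t, h, A, b):
    """One step for y' = A y + b(t, y): the stiff linear part A is
    integrated exactly, the nonlinear part b is frozen at (t, y)."""
    Z = h * A
    return expm(Z) @ y + h * (phi1(Z) @ b(t, y))

# Stiff scalar test y' = -100 y + sin(t): the scheme stays stable with
# h = 0.1, far beyond the explicit Euler restriction h < 2/100.
A = np.array([[-100.0]])
b = lambda t, y: np.array([np.sin(t)])
y = np.array([1.0])
for n in range(10):
    y = exponential_euler_step(y, 0.1 * n, 0.1, A, b)
```

When b vanishes identically, the step reduces to the exact propagator of the linear part, which is the source of the method's stability.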

Exponential integrators of Adams type are explicit multistep exponential integrators. They were first introduced by Certaine [Cer_J] in 1960 and Nørsett [Nors_EAB] in 1969 for a constant linear part in (3). The schemes are derived using a polynomial interpolation of the nonlinear term . They have recently received increasing interest [Tokman-2006, Cal_Pal, Oster_class_Exp], and various convergence analyses have been completed in this particular case [EAB_M_O, Auzinger-2011, oster-2013]. Non-constant linear parts have been less studied. Lee and Preiser [Lee_Stan] in 1978 and Chu [CHU] in 1983 first suggested rewriting equation (1) at each time instant as,

(4) |

with and . In the sequel, is referred to as the stabilizer. It is updated after each time step. Recently, Ostermann et al. [EAB_M_O, oster-2013] analyzed the linearized exponential Adams method, where the stabilizer is set to the Jacobian matrix of in (1). This choice requires the computation of a matrix exponential at every time step. Moreover, when the fast variables of the system are known, stabilization can be performed on these variables only; considering the full Jacobian as the stabilizer then implies unnecessary computational effort. To avoid these problems, an alternative is to set the stabilizer to a part, or to an approximation, of the Jacobian. This has been analyzed in [Tranquilli-Sandu-2014] and in [Rainwater-Tokman-2016] for exponential Rosenbrock and exponential Runge-Kutta type methods, respectively. That strategy is well adapted to cardiac electrophysiology, where a diagonal stabilizer associated with the fast variables is directly provided by the model through equation (2). The present contribution is to analyze a general varying in (3) for exponential integrators of Adams type, referred to as exponential Adams-Bashforth and shortly denoted EAB. Together with the EAB scheme, we introduce a new variant, called the integral exponential Adams-Bashforth and denoted I-EAB.

The convergence analysis carried out in [EAB_M_O] extends to the case of general varying stabilizers. However, there is a lack of results concerning stability in that case: for instance, consider the simplest such method, the exponential Euler method, defined by

Stability under perturbation (also called 0-stability) can easily be proven provided that the scheme generator is globally Lipschitz in with a constant bounded by . Stability under perturbation is therefore classically studied by analyzing the partial derivative . This can be done when is either a constant operator or a varying diagonal matrix. In the general case, however, things turn out to be more complicated. The reason is that we do not have the following general series expansion, as ,

unless the two matrices and commute. As a consequence, differentiating in cannot be done without very restrictive assumptions on . We present here a stability analysis for general varying stabilizers, by introducing relaxed stability conditions on the scheme generator . Together with a consistency analysis, it provides a new proof of the convergence of the EAB schemes, in the spirit of [EAB_M_O].
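The failure of that expansion when the matrices do not commute is easy to observe numerically. The following snippet, purely illustrative and not from the article, checks that exp(A + B) differs from exp(A) exp(B) for two non-commuting matrices.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

# A and B do not commute ...
commutator = A @ B - B @ A

# ... and the scalar identity exp(a + b) = exp(a) exp(b) fails for them:
gap = np.linalg.norm(expm(A + B) - expm(A) @ expm(B))
```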

Stability under perturbation provides results of a qualitative nature. In addition, the Dahlquist stability analysis strengthens these results. It is a practical tool for dimensioning the time step with respect to the variations of in equation (1). The analysis is made by setting in (1). For exponential integrators with a general varying stabilizer, the analysis must incorporate the decomposition of used in (3). The stability domain of the considered method then depends on the relationship between and , following a concept first introduced in [perego-2009]. We numerically establish that EAB methods are stable provided that the stabilizer is sufficiently close to the system Jacobian matrix (precise definitions are given in section 4). Moreover, the angle approaches when the stabilizer goes to the system Jacobian matrix. In contrast, there exists no A(alpha)-stable explicit linear multistep method (see [Hairer-ODE-II, chapter V.2]). This property is remarkable for explicit methods.

Numerical experiments with the EAB and I-EAB schemes are provided in section 6, in the context of cardiac electrophysiology. Robustness to stiffness is studied there. The methods are numerically shown to be comparable to implicit methods both in terms of accuracy and of stability condition on the time step. We conclude that EAB methods are well suited for solving stiff differential problems. In particular, they allow computations at large time steps with good accuracy and low cost.

The article is organized as follows. The EAB and I-EAB methods are introduced
in section 1. The general stability and convergence results are
stated and proved in section 2. The EAB and I-EAB stability
under perturbation and convergence are proved in
section 3. The Dahlquist stability is investigated in
section 4, and the numerical experiments end the article, in
section 6.

Throughout this paper, is a constant time step and are the time instants associated with the numerical approximation of the solution of the ODE (1).

## 1 Scheme definitions

### 1.1 The EAB method

The exact solution at time of equation (4) (with ) is given by the variation of constants formula:

(5) |

Using the approximations for , we build the Lagrange polynomial of degree at most that satisfies,

(6) |

It provides the numerical approximation as

(7) |

The Taylor expansion of the polynomial is , where the coefficients are uniquely determined by (6); they are given in table 1 for . The integral in equation (7) can be computed exactly:

(8) |

where the functions , originally introduced in [Nors_EAB], are recursively defined (for ) by

(9) |

The equation (8) defines the Exponential Adams-Bashforth method of order , denoted by EAB.

Table 1: the coefficients for k = 1, 2, 3, 4.
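As an illustration of formulas (8) and (9), the code below is a sketch; the function names and the two-step formula written here are assumptions consistent with the standard exponential Adams literature, not the article's own implementation. The scalar phi-functions can be evaluated by the recursion phi_{j+1}(z) = (phi_j(z) - 1/j!)/z with phi_0(z) = exp(z), switching to the Taylor series phi_j(z) = sum_k z^k/(k+j)! near z = 0 to avoid cancellation. With a scalar (or diagonal, applied component-wise) stabilizer a, a two-step EAB step then reads:

```python
import math

def phi(j, z):
    """phi_j(z) via phi_{j+1}(z) = (phi_j(z) - 1/j!)/z, phi_0(z) = exp(z);
    a truncated Taylor series is used near z = 0 to avoid cancellation."""
    if abs(z) < 1e-4:
        return sum(z**k / math.factorial(k + j) for k in range(8))
    p = math.exp(z)
    for i in range(j):
        p = (p - 1.0 / math.factorial(i)) / z
    return p

def eab2_step(y, h, a, Nn, Nnm1):
    """Two-step exponential Adams-Bashforth step for y' = a y + N(t, y):
    y_{n+1} = e^{ha} y_n + h [phi_1(ha) N_n + phi_2(ha) (N_n - N_{n-1})]."""
    z = h * a
    return math.exp(z) * y + h * (phi(1, z) * Nn + phi(2, z) * (Nn - Nnm1))
```

For a = 0 the step degenerates to the classical Adams-Bashforth formula y_{n+1} = y_n + h (3 N_n - N_{n-1})/2, as it should.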

###### Remark 1.

When is a diagonal matrix, can be computed component-wise; its computation is then straightforward.
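This remark can be checked directly: for a diagonal matrix the exponential reduces to scalar exponentials of the diagonal entries. The snippet below is illustrative, with sample rates chosen arbitrarily.

```python
import numpy as np
from scipy.linalg import expm

d = np.array([-50.0, -1.0, 0.5])       # sample diagonal entries (rates)
E_componentwise = np.exp(d)            # component-wise computation
E_dense = np.diag(expm(np.diag(d)))    # generic dense matrix exponential
```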

###### Remark 2.

### 1.2 A variant: the I-EAB method

If the matrix is diagonal, we can take advantage of the following version of the variation of constants formula:

where . An attempt to improve on the EAB formula (8) is to replace and in the integral above by their Lagrange interpolation polynomials. At time , we define the two polynomials and of degree at most so that:

and the primitive . The resulting approximate solution at time is finally given by the formula

(10) |

The method is denoted I-EAB (for Integral EAB). Unlike for formula (7), no exact integration formula is available, because of the term . A quadrature rule is therefore required for the actual numerical computation of the integral in formula (10). Implementation details are given in section 6.1.
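A possible realization of one I-EAB step in the scalar case is sketched below under assumptions: the names are hypothetical, and a 6-point Gauss-Legendre rule stands in for the quadrature discussed in section 6.1. It builds the interpolation polynomials from the past values of a and b, takes a primitive of the first one, and integrates formula (10) numerically.

```python
import numpy as np

def ieab_step(y, ts, avals, bvals, h):
    """One I-EAB step for the scalar equation y' = a(t, y) y + b(t, y).
    ts: the k past time instants; avals, bvals: values of a and b there."""
    k = len(ts)
    tn, t1 = ts[-1], ts[-1] + h
    pa = np.polynomial.Polynomial.fit(ts, avals, k - 1)   # interpolated a
    pb = np.polynomial.Polynomial.fit(ts, bvals, k - 1)   # interpolated b
    A = pa.integ()                                        # primitive of pa
    # Gauss-Legendre quadrature of the integral of
    # exp(A(t1) - A(s)) * pb(s) over [tn, t1]
    x, w = np.polynomial.legendre.leggauss(6)
    s = tn + (x + 1.0) * h / 2.0
    integral = (h / 2.0) * np.sum(w * np.exp(A(t1) - A(s)) * pb(s))
    return np.exp(A(t1) - A(tn)) * y + integral
```

On the linear test y' = -y + 1 with initial value 1, whose exact solution is constant, the step reproduces the value 1 up to quadrature accuracy.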

## 2 Stability conditions and convergence

Equation (1) is considered on a finite dimensional vector space with norm . We fix a final time and assume that equation (1) has a solution on . We adopt the general setting for the analysis of -multistep methods following [Hairer-ODE-I]. The space is equipped with the maximum norm with . A -multistep scheme is defined by a mapping,

For instance, the EAB scheme rewrites as with in the formula (8). The scheme generator is the mapping given by

A numerical solution is a sequence in for so that

(11) |

being its initial condition. A perturbed numerical solution is a sequence in for such that

(12) |

with for . The scheme is said to be stable under perturbation (or 0-stable) if, given a numerical solution as in (11), there exists a (stability) constant such that for any perturbation as defined in (12) we have,

(13) |

###### Proposition 1.

Assume that there exist constants and such that

(14) | |||

(15) |

for , and for . Then, the numerical scheme is stable under perturbation with the constant in (13) given by

(16) |

###### Proof.

As in the classical case, stability under perturbation together with consistency ensures convergence. Let us make this point precise. For the considered solution of problem (1) on , we define,

(17) |

The local error at time is,

(18) |

The scheme is said to be consistent of order if there exists a (consistency) constant only depending on such that

###### Corollary 1.

###### Remark 4.

Note that the stability constant in (16) depends on , and thus on . This is not a problem since can be bounded uniformly as for in a neighbourhood of .

## 3 EAB and I-EAB schemes analysis

The space is assumed to be with its canonical basis and with the maximum norm. The space of operators on is equipped with the associated operator norm, and operators are identified with matrices. Thus is a matrix and its norm is the matrix norm associated with the maximum norm on .

It is commonly assumed for the numerical analysis of ODE solvers that in equation (1) is uniformly Lipschitz in its second component . With the formulation (3), the following supplementary hypothesis will be needed: on ,

(20) |

We denote by , and the Lipschitz constants of , and respectively.

###### Theorem 1.

The stability and consistency are proved in sections 3.3 and 3.4, respectively. Preliminary tools and definitions are provided in sections 3.1 and 3.2.

### 3.1 Interpolation results

Consider a function and a triplet with . We set to be the polynomial of degree less than such that

We then extend this definition component-wise to vector-valued or matrix-valued functions (e.g. the functions , or ).

###### Lemma 1.

There exists an (interpolation) constant such that, for any function ,

(21) | ||||

(22) |

Consider a function and assume that and have a regularity. Then, when ,

(23) |

with defined in (17).

For a vector valued function in the previous inequalities hold when considering the max norm on .
For a matrix valued function in this is also true for the operator norm on when multiplying the constants in the inequalities (21), (22) and (23) by .

###### Proof.

The space of polynomials of degree less than is equipped with the norm .
On is considered the max norm.
To is associated uniquely determined by for .
The mapping is linear.
Let be its continuity constant (it only depends on ).

We fix and .
Consider the vector and set the vector
by .
We have .
Relation (21) is exactly the continuity of and .

Consider , and the associated vectors , as above.
We have
. Again, relation (22) comes from the continuity of .

Let be a function,
its interpolation polynomial at the points is considered.
A classical result on Lagrange interpolation applied to states that,
for all , there exists
, such that
,
where .
For
, we have .
This proves (23) by setting .

For a vector-valued function , these three inequalities hold by proceeding component-wise and considering the max norm on .

For a matrix valued function , the extension is direct when considering the max norm on (i.e. the max norm on the matrix entries).
The operator norm is retrieved with the inequality
∎

### 3.2 Scheme generators

Let us consider with . With the notations of the previous subsection, we introduce the interpolations and of the functions and in (3). Following the definition (10), the I-EAB scheme generator is defined by

(24) |

We introduce the polynomial of degree less than that satisfies,

The function in (6) is given by with . With the definition (7), the EAB scheme generator is defined by

(25) |

We will use the fact that is the interpolator of the function , precisely:

These scheme generator definitions will allow us to use the following Gronwall inequality (see [gronwall-monograph, Lemma 196, p. 150]).

###### Lemma 2.

Suppose that is a function on . If there exist and such that for all , then:

(26) |

### 3.3 Stability

By proposition 1, we have to prove the stability conditions (14) and (15). Note that it is sufficient to prove these relations for for some constant , since the limit is of interest here.

#### 3.3.1 Case of the I-EAB scheme

Consider and a vector . We simply denote and . The scheme generator is given by (24). We first have to bound where is given by

On the one hand, with the interpolation bound (21),

On the other hand, the function is globally Lipschitz in and thus can be bounded as , for and for some constant only depending on and on . Then, with the bound (21),

By applying the Gronwall inequality (26), for ,

Thus, there exists a constant only depending on , , and so that, for and for ,

(27) |

This gives the condition (14) taking .

For = 1, 2, we consider and denote by and the interpolations of the functions and . With the definition (24) of the I-EAB scheme we have: with and with given by

We then have,

Using that and are Lipschitz in and with the interpolation bound (22),

With the upper bound (27), for and for ,

(28) |

for a constant only depending on , , , and . We finally apply the Gronwall inequality (26), which yields,

This implies the second stability condition (15) for .

#### 3.3.2 Case of the EAB scheme

Consider and a vector . Following the definition of the EAB scheme given in section 1.1, we denote , the function and . We have that is the Lagrange interpolation polynomial of , specifically