A self-adjusting multirate algorithm based on the TRBDF2 method
Abstract
We propose a self-adjusting multirate method based on the TRBDF2 solver. The potential advantages of using TRBDF2 as the key component of a multirate framework are highlighted. A linear stability analysis of the resulting approach is presented and its stability features are analyzed. The analysis framework is completely general and makes it possible to study along the same lines the stability of self-adjusting multirate methods based on a generic one step solver. A number of numerical experiments demonstrate the efficiency and accuracy of the resulting approach, also for the time discretization of hyperbolic partial differential equations.
MOX – Modelling and Scientific Computing
Dipartimento di Matematica, Politecnico di Milano
Via Bonardi 9, 20133 Milano, Italy
luca.bonaventura@polimi.it, ludovica.delpopolo@polimi.it
Dipartimento di Elettronica, Informazione e Bioingegneria
Politecnico di Milano
Via Ponzio 34/35, 20133 Milano, Italy
francesco.casella@polimi.it
Department of Computer Science
University College Cork
Western Road
Cork, Ireland
akshay.ranade@cs.ucc.ie
Keywords: Multirate methods, stiff ODE problems, Implicit Runge Kutta methods, SDIRK methods
AMS Subject Classification: 65L04, 65L05, 65L07, 65M12, 65M20
1 Introduction
The concept of multirate methods was first proposed in [17] and considerable attention has since been devoted to these methods, see e.g. [1], [8] and the more comprehensive review reported in [16]. The basic idea of multirate methods is to integrate each component using the time step most appropriate to its timescale. Slowly varying components are integrated with larger time steps, while smaller time steps are used only for the fast components. Thus, multirate methods can avoid a significant amount of the computation that is necessary in the standard, single rate approach. In other words, in the multirate approach, the most appropriate time resolution is employed for each state variable of the system. In the context of multirate algorithms, the faster components are often referred to as active, while the slow ones are called latent. It is also to be remarked that conceptually similar approaches have been introduced in the literature on numerical weather prediction, see e.g. [13], for the purpose of avoiding the time step stability restrictions related to fast propagating acoustic waves.
The specific way in which the partitioning is performed and the degree of coupling allowed between latent and active components determine the efficiency and the stability of the resulting methods. The stability of multirate methods has been investigated, among others, in [2] and [10], while the stability of split-explicit methods has been analyzed in [3], [20]. In general, it is seen that the stability region of multirate methods shrinks as the strength of the coupling between the partitions or the stiffness of the system increases. In spite of this, there is ample evidence in the literature that multirate algorithms can offer a promising path for the numerical integration of large systems with widely varying time scales or with highly localised periods of activity. In order to address the potential problems of multirate methods, the so-called compound step multirate methods have been proposed in [10]. A self-adjusting, recursive time stepping strategy has been proposed in [19]. In this kind of approach, a tentative global step is first taken for all components, using a robust, unconditionally stable method. The time step is then reduced only for those components for which some local error estimator is greater than the specified tolerance. In this way, automatic detection of the latent components is achieved. In [18], an algorithm based on this recursive strategy was shown to be more stable than the original compound step algorithm, while a stability analysis of a self-adjusting algorithm of this kind has been proposed in [12].
In this paper, we propose a self-adjusting multirate method based on the same concept as [19], that relies on the TRBDF2 method as the fundamental single rate solver. The TRBDF2 method was originally introduced in [4] and more thoroughly analyzed in [11]. It is a second order, one step, L-stable implicit method endowed with a number of interesting properties, as discussed in [11]. For example, it allows for cheap error estimation via an embedded third order method, it can provide continuous output via a cubic Hermite interpolant and it has an explicit second order companion with which it can be combined to form a second order, implicit-explicit additive Runge Kutta method. Due to its favourable properties, it has recently been applied for the efficient discretization of high order finite element methods for numerical weather forecasting in [9], [22]. Its appealing features also make it an interesting candidate for multirate approaches targeted at large scale stiff problems, resulting either from the simulation of industrial systems or from the spatial discretization of partial differential equations. In particular, the cubic Hermite interpolant can be employed to maintain higher accuracy also for stiff problems in the framework of a multirate approach. The proposed self-adjusting multirate TRBDF2 method is equipped here with a partitioning and time step selection criterion based on the technique proposed in [7].
The stability of the method has been analyzed in the framework of the classical linear model problem. Even though no complete theoretical results can be achieved, numerical computation of the stability function norm for a range of relevant model problems shows that the method has good stability properties. This is true not only for contractive problems with strictly dissipative behaviour, but also for problems with purely imaginary eigenvalues, which suggests that the method could be advantageous also for application to hyperbolic PDEs and structural mechanics problems. The numerical results obtained on several benchmarks show that the application of the proposed method leads to significant efficiency gains, analogous to those achieved by other self-adjusting multirate approaches with respect to their single rate counterparts. In particular, the method appears to perform well when applied to the time discretization of nonlinear, hyperbolic partial differential equations, making it possible to achieve automatic detection of complex localized phenomena, such as shock waves, as well as significant reductions in computational cost.
In section 2, the self-adjusting implicit multirate approach is described. The time step refinement criterion and the resulting partitioning principle between active and latent components are presented in section 3. The TRBDF2 method is reviewed in section 4. The interpolation procedures we have considered are described in section 5. A linear stability analysis is then presented in section 6. The approach proposed for such analysis is rather general and allows one to study generic multirate methods based on any one step solver and interpolation procedure. Numerical simulations are presented in section 7, while some conclusions and perspectives for future work are presented in section 8.
2 A self-adjusting implicit multirate approach
We focus on multirate methods for the solution of the Cauchy initial value problem
(2.1) $\quad y'(t) = f(t, y(t)), \qquad t \in (0, T], \qquad y(0) = y_0.$
We consider time discretizations associated to discrete time levels $t_n$, $n = 0, \dots, N$, such that $t_0 = 0$ and $t_N = T$, and we will denote by $u_n$ the numerical approximation of $y(t_n)$. We will also denote by $\mathcal{S}(\cdot, \Delta t)$ the implicitly defined operator whose application is equivalent to the computation of one step of size $\Delta t$ of a given single step method. While here only implicit methods will be considered, the whole framework can be extended to explicit and IMEX methods. Notice that, if $\mathcal{P}$ is the projector onto a linear subspace of the state space, an operator representing the solution of the subsystem obtained by freezing the components of the unknown belonging to the complementary subspace can be defined analogously. Furthermore, we will denote by $\mathcal{I}$ an interpolation operator, that provides an approximation of the numerical solution at intermediate time levels $t \in (t_n, t_{n+1})$. Linear interpolation is often employed, but, for multistage methods, knowledge of the intermediate stages also allows the application of more accurate interpolation procedures without substantially increasing the computational cost.
We propose a self-adjusting multirate algorithm that is a generalization of the method introduced in [19], in which the choice of the local time step is left entirely to the time step selection criterion. The multirate algorithm can be described as follows.

1) Perform a tentative global (or macro) time step of size $\Delta t$ with the standard single rate method. The specific way in which error control mechanisms are applied to the choice of the global and local time steps is crucial for the efficiency of the algorithm and will be discussed in detail in section 3.

2) Compute the error estimator and partition the state space into active and latent variables, based on the value of the error estimate. The projection onto the subspace of the active variables is denoted by $\mathcal{P}_a$, while the projection onto the complementary subspace of the latent variables is denoted by $\mathcal{P}_l$.

3.1) Choose a local (or micro) time step for the active variables, based on the value of the error estimator.

3.2) Update the latent variables by interpolation.

3.3) Update the active variables by solving the subsystem of the active components, with the latent components frozen at their interpolated values.

4) Compute the error estimator for the active variables only and partition again into latent and active variables, defining the new subspace of active variables and the corresponding projection accordingly.

5) Repeat steps 3.1) to 3.3) until the end of the global time step is reached.

Clearly, the effectiveness of the above procedure depends in a crucial way on the accuracy and stability of the basic ODE solver, as well as on the time step refinement and partitioning criterion. Furthermore, as is well known for multirate methods, unconditional stability of the basic solver does not necessarily entail that the same property holds for the derived multirate solver. The refinement and partitioning criterion will be described in detail in section 3. Here, we only remark that, as will be shown by the numerical experiments in section 7, the approach outlined above is able to reduce the computational cost significantly with respect to the equivalent single rate methods, without a major reduction in stability for most of the envisaged applications.
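As a concrete illustration, the loop above can be sketched for a linear system $y' = Ay$ as follows. Backward Euler stands in for the TRBDF2 solver, a crude step-doubling indicator stands in for the embedded error estimator, and a single refinement level with a fixed number of micro steps is used; all of these are simplifying assumptions, not the actual components described in the following sections.

```python
import numpy as np

def be_step(A, u, dt):
    """Backward Euler step for u' = A u (placeholder for the implicit solver)."""
    return np.linalg.solve(np.eye(len(u)) - dt * A, u)

def multirate_macro_step(A, u, dt, tol, nsub=4):
    """One macro step of a self-adjusting multirate scheme (linear-case sketch)."""
    # 1) tentative global (macro) step for all components
    u_macro = be_step(A, u, dt)
    # 2) crude per-component error indicator: one dt-step vs. two dt/2-steps
    u_half = be_step(A, be_step(A, u, dt / 2), dt / 2)
    active = np.abs(u_macro - u_half) > tol     # partition: active vs. latent
    if not active.any():
        return u_macro                          # effectively a single rate step
    # 3) re-integrate the active subsystem with nsub micro steps, with the
    #    latent components frozen at linearly interpolated macro-step values
    v = u.astype(float).copy()
    h = dt / nsub
    n_a = int(active.sum())
    A_aa = A[np.ix_(active, active)]
    A_al = A[np.ix_(active, ~active)]
    for k in range(1, nsub + 1):
        theta = k / nsub
        v_lat = (1 - theta) * u[~active] + theta * u_macro[~active]
        rhs = v[active] + h * (A_al @ v_lat)
        v[active] = np.linalg.solve(np.eye(n_a) - h * A_aa, rhs)
    v[~active] = u_macro[~active]               # latent keep the macro-step value
    return v
```

For a stiff/non-stiff pair such as $A = \mathrm{diag}(-1000, -1)$, only the fast component is flagged as active and recomputed with micro steps, while the slow component keeps its macro-step value.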
3 The time step refinement and partitioning criterion
We now describe in detail the time step refinement and partitioning strategy that we have used in the multirate algorithm described in section 2. Our approach is based on the strategy proposed for an explicit Runge Kutta multirate method in [7], where the time steps for refinement are obtained from the error estimates of the global step. The user specified tolerance plays an important role in the partitioning of the system. In [7], a simple absolute error tolerance was considered. However, in most engineering system simulations, whenever the typical values of different components can vary greatly, the tolerance used is in general a combination of absolute and relative error tolerances, see e.g. [21]. We have thus extended the strategy proposed in [7] in order to employ such a combination of absolute and relative error tolerances.
More specifically, denote by $\tau_r$ and $\tau_a$ the user defined error tolerances for relative and absolute errors, respectively. Furthermore, assume that the tentative global step has been computed and that an error estimator is available. The first task of the time step selection criterion is to assess whether the global time step has been properly chosen. Denoting by $e_i$ and $u_{n+1,i}$ the $i$-th components of the error estimator and of the tentative global step numerical solution, respectively, we define a vector of normalized errors with components
$$ \eta_i = \frac{|e_i|}{\tau_a + \tau_r\, |u_{n+1,i}|}. $$
Since clearly the condition $\max_i \eta_i \le 1$ has to be enforced, before proceeding to the partitioning into active and latent variables this condition is checked, and the global step is repeated with a smaller value of $\Delta t$ whenever it is violated. Numerical experiments have shown that, while an increase in efficiency with respect to the single rate version of the method is always achieved, independently of the choice of the tentative step, the greatest improvements are only possible if the global time step does not have to be repeated too often.
Once the condition $\max_i \eta_i \le 1$ is satisfied, the set of indices of the components flagged for the first level of time step refinement is identified by
(3.1) $\quad \mathcal{A} = \left\{\, i \,:\, \eta_i > \beta \,\right\},$
where $\beta \in (0, 1]$ is a user defined coefficient. The smaller the value of $\beta$, the larger the fraction of components marked for refinement. Notice that, if $\beta$ is set to unity, the algorithm effectively operates in single rate mode. For each iteration of the algorithm described in section 2, active variable sets are then defined analogously. In each iteration the time step is chosen according to the standard criterion (see e.g. [14], [19])
(3.2) $\quad \Delta t_{\mathrm{new}} = \nu\, \Delta t \left( \max_i \eta_i \right)^{-1/(p+1)},$
where $p$ is the convergence order of the solver and $\nu$ is a user defined safety parameter.
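A minimal sketch of the normalization, flagging and step-size update just described; the scaling and controller forms are the standard ones, while the particular constants (safety factor, tolerances) below are illustrative assumptions.

```python
import numpy as np

def normalized_errors(err, u, atol, rtol):
    """Per-component errors scaled by a mixed absolute/relative tolerance."""
    return np.abs(err) / (atol + rtol * np.abs(u))

def refinement_set(eta, beta):
    """Indices flagged for refinement: eta_i > beta, with beta in (0, 1]."""
    return np.flatnonzero(eta > beta)

def new_time_step(dt, eta_max, order, nu=0.9):
    """Standard step-size controller: nu * dt * eta_max**(-1/(order+1))."""
    return nu * dt * eta_max ** (-1.0 / (order + 1))
```

With `beta = 1.0`, no component of an accepted step (all `eta_i <= 1`) is flagged, which is exactly the single rate mode noted above.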
4 The single rate TRBDF2 method
The TRBDF2 method was originally proposed in [4]. It is a composite one step, two stage method, consisting of one stage of the trapezoidal method followed by one of the BDF2 method. The stages are adjusted so that both the trapezoidal and the BDF2 stages use the same Jacobian matrix. This composite method has been reinterpreted in [11] as a one step Diagonally Implicit Runge Kutta (DIRK) method with two internal stages. The TRBDF2 method is also, in some sense, the optimal method among all three stage DIRK methods, owing to the following properties:

1) it is first same as last (FSAL), so that only two implicit stages need to be evaluated;

2) the Jacobian matrix is the same for both implicit stages;

3) it has an embedded third order companion that allows for a cheap error estimator;

4) the method is strongly S-stable;

5) all the stages are evaluated within the time step;

6) it is endowed with a cubic Hermite interpolation algorithm for dense output that yields globally continuous trajectories.
Features 3), 5) and 6) will play an important role in the multirate method proposed here. Furthermore, as shown in [9], the method has an explicit second order companion with which it can be combined to form a second order implicitexplicit additive Runge Kutta method. Due to its favourable properties, it has been recently applied for efficient discretization of high order finite element methods for numerical weather forecasting in [9], [22].
The TRBDF2 method, considered as a composite method consisting of a step with the trapezoidal method followed by a step of the BDF2 method, can be written as
(4.1)
$$ u_{n+\gamma} = u_n + \frac{\gamma \Delta t}{2} \left[ f(t_n, u_n) + f(t_{n+\gamma}, u_{n+\gamma}) \right], $$
$$ u_{n+1} = \frac{1}{\gamma(2-\gamma)}\, u_{n+\gamma} - \frac{(1-\gamma)^2}{\gamma(2-\gamma)}\, u_n + \frac{1-\gamma}{2-\gamma}\, \Delta t\, f(t_{n+1}, u_{n+1}), $$
where $t_{n+\gamma} = t_n + \gamma \Delta t$ and $\gamma \in (0, 2)$.
The TRBDF2 method viewed as a DIRK method has the following Butcher tableau:
$$
\begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\gamma & \gamma/2 & \gamma/2 & 0 \\
1 & w & w & d \\ \hline
 & w & w & d \\
* & (1-w)/3 & (3w+1)/3 & d/3
\end{array}
$$
where $\gamma = 2 - \sqrt{2}$, $d = \gamma/2$ and $w = \sqrt{2}/4$, and the row denoted by * corresponds to the embedded third order method that can be used to build a convenient error estimator. Here, the value $\gamma = 2 - \sqrt{2}$ is chosen so that the two implicit stages have the same Jacobian matrix, as well as in order to achieve L-stability, as will be shown shortly. It can be seen that the method has the First Same As Last (FSAL) property, i.e., the first stage of any step coincides with the last stage of the previous step. Thus, in any step, the first explicit stage need not be computed.
The implementation of the two implicit stages is done as suggested in [11]. The two stages are given by equation (4.1). Instead of iterating directly on the stage values, an auxiliary variable proportional to the value of $\Delta t\, f$ at the stage is introduced, and the Newton iteration is performed on this variable; the iteration matrix, which involves the Jacobian $J$ of $f$, is the same for both implicit stages. The reason for doing so is that, if the problem is stiff, a function evaluation amplifies the numerical error in the stiff components; iterating on the auxiliary variable, instead, ensures that it is computed to within the specified tolerance. For a more detailed discussion the interested reader is referred to [11]. An important point to be noted is that, although the TRBDF2 method is L-stable, its third order companion formula is not. Therefore, the error estimate $e_{n+1}$ at time level $n+1$, given by the difference between the embedded third order solution and the second order solution, cannot be expected to be accurate for stiff problems. To overcome this problem, it is suggested in [11] to modify the error estimate by considering as error estimator the quantity $\hat e_{n+1}$ defined as the solution of the linear system
(4.2) $\quad \left( I - \frac{\gamma \Delta t}{2}\, J \right) \hat e_{n+1} = e_{n+1}.$
This modification of the error estimate improves it for stiff components, while preserving its accuracy in the limit of small time steps. Notice that the stability function of the TRBDF2 method is given by
(4.3) $\quad R(z) = \dfrac{(2-\gamma) + \tfrac{1}{2}\left(\gamma^2 - 2\gamma + 2\right) z}{\left(1 - \tfrac{\gamma z}{2}\right)\left[(2-\gamma) - (1-\gamma)\, z\right]},$
whence it can be seen that $R(z) \to 0$ as $z \to -\infty$, so that the method is L-stable for the choice $\gamma = 2 - \sqrt{2}$.
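The stated stability properties can be checked numerically from the stability function (4.3) as reconstructed above: $R(0) = 1$, $|R(iy)| \le 1$ on the imaginary axis, $R(z) \to 0$ for $z \to -\infty$, and $R(z)$ agrees with $e^z$ to second order.

```python
import numpy as np

gamma = 2.0 - np.sqrt(2.0)

def R(z):
    """Stability function of TRBDF2, as given in (4.3)."""
    num = (2.0 - gamma) + 0.5 * (gamma**2 - 2.0 * gamma + 2.0) * z
    den = (1.0 - 0.5 * gamma * z) * ((2.0 - gamma) - (1.0 - gamma) * z)
    return num / den

# consistency at the origin, decay at -infinity, A-stability on a sample
# of the imaginary axis (numerator degree 1, denominator degree 2)
checks = [abs(R(0.0) - 1.0), abs(R(-1e8)),
          max(abs(R(1j * y)) for y in np.linspace(-50.0, 50.0, 101))]
```

The numerator is linear while the denominator is quadratic in $z$, which is what forces $R(z) \to 0$ and hence the L-stable damping of stiff components.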
5 The interpolation procedures
An essential component of any multirate algorithm is the procedure employed to reconstruct the values of the latent variables at those intermediate time levels at which only the active variables are computed. Self-adjusting multirate procedures based on implicit methods, such as the one proposed in [19] and that presented in this paper, can avoid the use of extrapolation, thus increasing their overall stability. The simplest and most commonly used procedure is linear interpolation, which is defined, for $t_n \le t \le t_{n+1}$, as
(5.1) $\quad u(t) \approx \dfrac{t_{n+1} - t}{\Delta t}\, u_n + \dfrac{t - t_n}{\Delta t}\, u_{n+1}.$
In case a multistage solver is employed, higher order interpolation can also be employed at no extra cost. Assuming that a sufficiently accurate approximation of the solution is available at the intermediate time $t_{n+\gamma} = t_n + \gamma \Delta t$ and setting $\theta = (t - t_n)/\Delta t$, quadratic Lagrange interpolation can then be written as
$$ u(t_n + \theta \Delta t) \approx \frac{(\theta - \gamma)(\theta - 1)}{\gamma}\, u_n + \frac{\theta(\theta - 1)}{\gamma(\gamma - 1)}\, u_{n+\gamma} + \frac{\theta(\theta - \gamma)}{1 - \gamma}\, u_{n+1}. $$
An interesting feature of the TRBDF2 method in the multirate framework is that the method, as explained in [11], is endowed with a globally continuous cubic Hermite interpolation algorithm for dense output. Using the notation of section 4, on each of the two subintervals $[t_n, t_{n+\gamma}]$ and $[t_{n+\gamma}, t_{n+1}]$ the interpolant is the cubic Hermite polynomial matching the values of the solution and of $f$ at the two endpoints of the subinterval, all of which are available as stages and stage derivatives of the method.
This interpolant yields accurate approximations of the latent variables without extra computational cost or memory storage requirements, since it employs the intermediate stages of the TRBDF2 method in order to achieve higher order accuracy. This contributes to making the TRBDF2 method an attractive choice as the basis for a multirate approach.
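A sketch of the dense-output evaluation on one subinterval, using the standard cubic Hermite basis; the endpoint slopes stand for the stage values of $f$, which the TRBDF2 method provides at no extra cost.

```python
def hermite(t0, t1, u0, u1, f0, f1, t):
    """Cubic Hermite interpolant on [t0, t1] from endpoint values u0, u1
    and endpoint slopes f0, f1, evaluated at t."""
    h = t1 - t0
    s = (t - t0) / h                  # normalized coordinate in [0, 1]
    h00 = 2 * s**3 - 3 * s**2 + 1     # standard Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * u0 + h * h10 * f0 + h01 * u1 + h * h11 * f1
```

Since the interpolant matches values and slopes at both endpoints, it reproduces any cubic polynomial exactly, which is the source of its higher accuracy compared to linear interpolation.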
6 Linear stability analysis
In this section, we study the linear stability of the multirate algorithm outlined in the previous sections in the case of a generic one step solver. Following the approach employed in [12], we consider the special case in which the macro time step is constant and a single refinement is carried out, with a fixed refined time step used for the active components. As a result, a single subspace of active variables is introduced. As in section 2, $\mathcal{P}_a$ will denote the projector onto this subspace, while $\mathcal{P}_l$ will denote the corresponding projector onto the complementary subspace of the latent variables, with the corresponding natural embedding operators also defined. Suitable compositions of these projections and embeddings then represent the zeroing of the components of a given vector belonging to either subspace. It is to be remarked that this special setting is a major simplification with respect to the generality of the algorithm described in section 2, so that the results of the analysis may not fully explain the behaviour of the algorithm in practical applications. In particular, large values of the stability function norm in this context do not necessarily imply that the method is unable to achieve the desired accuracy when applied in its most general form.
We now consider the linear model problem $y' = Ay$ and set $z = \Delta t\, A$. We assume that the amplification matrix of the basic one step solver can be written as a rational function of $z$, i.e., as the product of the inverse of a matrix polynomial in $z$ and another matrix polynomial in $z$. Notice that this includes most standard one step solvers, either explicit or implicit.
Notice that, in the special case considered here, the interpolation operator can be represented by the application to the data of a matrix whose entries only depend on the interpolation weights and on the amplification matrix of the single rate method employed. This holds, in particular, both for the linear interpolant (5.1) and for the cubic Hermite interpolant of section 5.
Given these definitions, one step of the self-adjusting multirate algorithm can be rewritten in this special setting as a coupled linear system (6.1) for the latent and active components. The equation for the active components can be solved explicitly; substituting the resulting expression into the last equation of (6.1), one obtains the reduced relation (6.2). The stability matrix of the complete multirate method can then be obtained by deriving from (6.2) an explicit expression for the updated solution in terms of the solution at the beginning of the macro step. Introducing a compact notation for the resulting matrix products, one can then express the stability matrix explicitly.
We will not attempt a complete theoretical analysis based on the previously derived expression of the stability matrix. However, the same expression can be employed to compute numerically the stability matrix norm and spectral radius for relevant model problems. We have done so for three classes of linear ODE systems. Notice that, for simplicity, all the systems have been rearranged so that the latent variables are the first entries of the vector of unknowns.
Firstly, we consider a system with the same characteristics as those employed in [12], defined by
(6.3) 
In particular, these coefficient values correspond to one of the cases considered in [12]. The amplification matrix and its spectral and matrix norms have been computed for different values of the time step; the maximum time step considered is approximately 100 times larger than that allowed by the customary stability restriction for explicit schemes. The values of the matrix norms are reported in figures 1, 2, for the spectral norm and the other matrix norms, respectively. It can be observed that no stability problems arise and that the multirate algorithm appears to be insensitive to the choice of the interpolation method.
We have then considered the dynamical system defined by
(6.4) 
The system represents two masses, the first tied to a wall by a spring and the second tied to the first by another spring, with linear friction also acting on both masses. Similar systems have often been used to analyze empirically the stability of numerical methods for structural mechanics, see e.g. [6]. We have considered as an example a representative choice of the parameter values. The values of the matrix norms are reported in figures 3, 4, for the spectral norm and the other matrix norms, respectively. We observe again that, even though the stability function norm can achieve large values, the method is still stable in the spectral norm sense. Furthermore, it does not appear to be sensitive to the choice of the interpolation operator. In figure 5, the same quantities are plotted for the same system with a different choice of the parameter values. The good stability behaviour displayed in these tests seems to justify further work on the application of this multirate approach to structural mechanics problems with multiple time scales, such as those considered e.g. in [6]. As a comparison, we also plot in figure 6 the corresponding norms for the single rate TRBDF2 algorithm, which displays an entirely analogous behaviour.
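For the single rate method, such amplification matrix norms can be computed directly from the composite TR + BDF2 form of the scheme. The sketch below does this for a two-mass spring-damper system; the parameter values are illustrative assumptions, not those used in the paper.

```python
import numpy as np

GAMMA = 2.0 - np.sqrt(2.0)

def trbdf2_amplification(A, dt):
    """Amplification matrix M with u_{n+1} = M u_n for u' = A u,
    built from the composite trapezoidal + BDF2 form (4.1)."""
    n = A.shape[0]
    I = np.eye(n)
    # trapezoidal stage over gamma * dt
    TR = np.linalg.solve(I - 0.5 * GAMMA * dt * A, I + 0.5 * GAMMA * dt * A)
    # BDF2 stage over the remaining (1 - gamma) * dt
    lhs = I - (1.0 - GAMMA) / (2.0 - GAMMA) * dt * A
    rhs = (TR - (1.0 - GAMMA) ** 2 * I) / (GAMMA * (2.0 - GAMMA))
    return np.linalg.solve(lhs, rhs)

# illustrative two-mass spring-damper system, state (x1, x2, v1, v2)
k1, k2, c1, c2 = 100.0, 1.0, 0.1, 0.1
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-(k1 + k2), k2, -c1, 0.0],
              [k2, -k2, 0.0, -c2]])
M = trbdf2_amplification(A, 0.5)
rho = max(abs(np.linalg.eigvals(M)))   # spectral radius of the step map
```

Since the eigenvalues of a damped system of this kind lie in the left half plane, A-stability of TRBDF2 guarantees a spectral radius below one even for large time steps.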
Finally, we have considered a linear ODE system resulting from the discretization of a linear heat equation by second order finite differences, with a diffusivity parameter that is much larger for the last 20 variables than for the first 20. All matrix norms are essentially identical to one over the whole range of time step values. Analogous results are obtained considering the corresponding system for a linear advection-diffusion equation, taking in this case centered finite differences for the advection discretization and assuming the velocity and diffusivity parameters to be much larger for the last 20 variables than for the first 20. The values of the matrix norms are reported in figure 7 for the cubic interpolation case. In figure 8, the analogous quantities are plotted instead in the case of a pure advection system with the same velocity values as in the previous case. The good stability properties that result in this case seem to justify further work on the application of this multirate approach to hyperbolic equations with multiple time scales, such as those considered e.g. in [22].
7 Numerical experiments
Several benchmark problems already considered in the literature have been employed to assess the performance of the multirate algorithm outlined in section 2. The self adjusting multirate approach based on the TRBDF2 solver has been compared to its single rate counterpart, considering several performance indicators, such as the CPU time required by the codes, the number of scalar function calls and the number of components being computed at any given time step, as an estimate of the computational workload for each time step. In all the numerical experiments, the TRBDF2 multirate algorithm employed the cubic Hermite interpolant and a nonlinear solver based on the Newton method with approximate computation of the Jacobian to solve the nonlinear systems involved.
7.1 The Inverter Chain Problem
For the first test we consider the inverter chain problem, which is an important benchmark from the field of electrical circuits. It has also been considered e.g. in [19], [23]. The system of equations is given by
(7.1)  
where the unknowns represent the output voltages of a chain of inverters, $U_{op}$ is the operating voltage corresponding to the logical value 1, and a large coupling coefficient serves as a stiffness parameter. The function $g$ is defined by
(7.2) 
With the chosen parameter values, the initial condition was defined as
(7.3) 
while the input signal was given by
(7.4) 
In this test case, the values of all the components vary in the same range, so that only the absolute tolerance was used.
Figure 9 shows the solution of the inverter chain problem described above. The input signal can be seen to be propagating across the chain of inverters. It is seen that the input signal is also smoothed by the inverter chain. The reference solution was computed by the ode15s MATLAB solver with stringent tolerance and small maximum time step. The same reference solution was employed to assess the accuracy of the multirate approach with respect to its single rate counterpart. The maximum norm errors at the final time of the simulation are shown in figure 10. Even though the multirate approach introduces a larger error than that of the single rate counterpart, the errors committed are in general close to the specified tolerances.
The performance of the algorithms with respect to the different metrics was analyzed for different values of the tolerance. Figure 11 shows the CPU time taken by the multirate and single rate TRBDF2 algorithms: speedups up to a factor 3 were achieved. Figure 12 shows the total number of scalar function evaluations at different tolerances. Again, it can be seen that the performance of the multirate algorithm is better than that of the corresponding single rate algorithm, with the number of function evaluations reduced by up to a factor of 3 for the multirate TRBDF2. At the loosest tolerances, however, the multirate TRBDF2 performed more function evaluations than at tighter ones. Indeed, if the tolerance is not too stringent, the time step adaptation mechanism causes the algorithm to try larger steps, which makes it harder for the Newton solver to converge.
In figure 13, a portion of the so-called space-time diagram of the multirate algorithm is shown, marking all the active components at each time step. For clarity, only a zoomed-in view of 200 inverters for the first 500 time steps is shown. Since the simulation is started with a very conservative value of the time step, the first few time steps involve all the degrees of freedom. The time step is then gradually increased by the adaptation mechanism, so that at a later stage only a few components are refined at each time step. The set of refined components moves forward across the inverters as the simulation goes on. This is as expected, because the signal is propagating across the inverter chain and the active components are being refined, while the others are being integrated with larger time steps. The number of space-time points in figure 13 can be regarded as a measure of the computational workload of the algorithm. Obviously, for a single rate algorithm this would simply be equal to the number of components times the number of time steps. Figure 14 shows this estimate of the computational workload for the inverter chain problem. With regard to this metric, the multirate algorithm leads to a gain in efficiency of 3.4 times with respect to the single rate algorithm.
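For reference, a sketch of the inverter chain right-hand side (7.1). The form follows the standard formulation of this benchmark (see e.g. [19], [23]); the threshold, operating voltage and stiffness values below are illustrative assumptions, since the actual values were not recoverable from the text.

```python
import numpy as np

def g(u, u_thresh=1.0):
    """Inverter characteristic (7.2); the threshold value is an assumption."""
    return max(u - u_thresh, 0.0) ** 2

def inverter_chain_rhs(t, y, u_in, u_op=5.0, stiffness=100.0):
    """Right-hand side of the inverter chain ODE system: each inverter is
    driven by the operating voltage and discharged through the output of
    the previous inverter (u_in drives the first one)."""
    m = len(y)
    dy = np.empty(m)
    dy[0] = u_op - y[0] - stiffness * y[0] * g(u_in(t))
    for j in range(1, m):
        dy[j] = u_op - y[j] - stiffness * y[j] * g(y[j - 1])
    return dy
```

With a localized input signal, only a handful of consecutive inverters are switching at any given time, which is exactly the structure the self-adjusting partitioning detects in figure 13.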
7.2 A reaction diffusion problem
We then consider a test in which the ODE problem to be solved results from the space discretization of a partial differential equation. In particular, we consider the reaction diffusion problem also discussed in [19], whose associated PDE can be written as
(7.5) 
in the computational domain. The initial and boundary conditions are given by
With the chosen parameter values, the equation was discretized in space by simple second order finite differences on a mesh of constant spacing, for several values of the mesh size. Even though no analytic solution is available, the solution is well known to consist of a wave moving in the positive direction and connecting the two stable states of the nonlinear reaction potential. Figure 15 shows the solution as a function of space at different time instants.
As in the previous section, we have compared the accuracy of the proposed algorithm with respect to the reference solution computed by the MATLAB ode15s solver with tight tolerance and small maximum time step values. Figure 16 shows the errors at the final simulation time for different values of the user specified tolerance. The performance of the algorithms with respect to the different metrics was analyzed for different values of the tolerance. Figure 17 shows the CPU time and the number of function evaluations required by the multirate and single rate TRBDF2 algorithms, respectively. For the tolerances considered, the speedup of the multirate algorithm with respect to the single rate algorithm reached a factor 2.7.
7.3 Linear Advection Equation
As a first example of hyperbolic equations we consider the linear problem
(7.6) $\quad \partial_t u + c\, \partial_x u = 0,$
with a Gaussian initial datum. To discretize in space we used a first order upwind method with periodic boundary conditions on a uniform grid. Fixed values were prescribed for the relative and absolute error tolerances and for the initial size of the time step. The upwind discretization entails a relevant amount of numerical diffusion, which is responsible for the spreading of the initial profile. This effect also entails that the number of computed components increases as time progresses, see Figure 18.
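The semi-discretization in question is the standard first order upwind scheme with periodic boundary conditions; a minimal sketch for constant positive velocity:

```python
import numpy as np

def upwind_rhs(u, c, dx):
    """Semi-discrete first order upwind scheme for u_t + c u_x = 0, c > 0,
    with periodic boundaries: du_j/dt = -c (u_j - u_{j-1}) / dx."""
    return -c * (u - np.roll(u, 1)) / dx
```

The one-sided difference is first order accurate and introduces the numerical diffusion mentioned above, proportional to $c\,\Delta x / 2$, which progressively spreads the Gaussian profile.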
Table 1. CPU time and number of time steps, linear advection test.
Method | CPU time | Number of time steps
Single rate TRBDF2 | |
Multirate TRBDF2 | |

Table 2. Relative errors at selected times, linear advection test.
Time [s] | Cubic Interp. | Linear Interp. | Single rate

Table 3. Relative errors with respect to the time-accurate reference solution.
Time [s] | Cubic Interp. | Linear Interp. | Single rate

The CPU times and total number of computed time steps are reported in Table 1 for the multirate and single rate TRBDF2 methods, respectively. For the multirate method, cubic interpolation was found to be necessary in order to achieve an improvement over the performance of the single rate method, while linear interpolation led to an excessive number of rejections of macro steps. In order to assess also the accuracy of the multirate approach, the relative errors obtained in this test case are reported in Table 2. More specifically, the errors obtained by the multirate TRBDF2 method with the cubic Hermite interpolant, the multirate TRBDF2 method with the linear interpolant and the single rate TRBDF2 method are compared. It can be seen that all three approaches yield comparable errors. Since the first order upwind space discretization is responsible for the largest part of the error, we also compute the error with respect to the reference solution obtained by discretizing in time with the MATLAB ode45 solver set to a suitably stringent tolerance. The results, reported in Table 3, show that also in this case the errors obtained with the three variants are entirely analogous, albeit smaller by several orders of magnitude due to the factoring out of the spatial discretization error.
7.4 Burgers equation
Finally, as a nonlinear test for hyperbolic equations, we consider the Riemann problem for the inviscid Burgers equation
\[ \frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\left(\frac{u^2}{2}\right) = 0 \qquad (7.7) \]
which amounts to assigning a piecewise constant initial datum $u(x,0) = u_l$ for $x < 0$ and $u(x,0) = u_r$ for $x > 0$. We first consider the case $u_l > u_r$, for which a shock wave solution is obtained. The space discretization is given by a finite volume method with the Rusanov numerical flux, see e.g. [15]. The time interval is , with a number of cells equal to . The relative error tolerance was set to , while the absolute error tolerance was set to , and the tolerance for the Newton solver required by the implicit TRBDF2 method was taken to be . We took the initial size of the time step equal to . For the Riemann problem we used in this first case , while .
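A minimal sketch of the finite volume right-hand side with the Rusanov (local Lax–Friedrichs) flux for the Burgers equation follows. The grid and the Riemann states $u_l = 1$, $u_r = 0$ are hypothetical placeholders, since the values used in the test are not reproduced here, and transmissive boundaries are assumed.

```python
import numpy as np

def burgers_rusanov_rhs(u, dx):
    # Finite volume semi-discretization of u_t + (u^2/2)_x = 0 with
    # the Rusanov numerical flux at the interface i+1/2:
    #   F = 0.5*(f(u_i) + f(u_{i+1})) - 0.5*s*(u_{i+1} - u_i),
    #   s = max(|u_i|, |u_{i+1}|).
    f = 0.5 * u ** 2
    ul, ur = u[:-1], u[1:]
    s = np.maximum(np.abs(ul), np.abs(ur))
    flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * s * (ur - ul)
    # Transmissive boundaries: the boundary fluxes equal the physical
    # flux in the first and last cell.
    flux = np.concatenate(([f[0]], flux, [f[-1]]))
    return -(flux[1:] - flux[:-1]) / dx

# Hypothetical Riemann datum (shock case, u_l > u_r).
N = 200
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
u0 = np.where(x < 0.0, 1.0, 0.0)
rhs0 = burgers_rusanov_rhs(u0, dx)
```

Away from the jump the states are constant and the right-hand side vanishes, which is precisely why only the few cells crossed by the shock need to be flagged as active components.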
[Table 4: CPU time and number of time steps for the single rate and multirate TRBDF2 methods in the shock wave case.]
[Table 5: Relative errors at selected times for the multirate TRBDF2 method with cubic interpolation, with linear interpolation, and for the single rate method.]
[Table 6: Relative errors with respect to the ode45 reference solution for the same three variants.]
Figure 19 shows how the shock wave solution evolves in space as time progresses. It can be seen how the automatically selected set of active components moves along with the shock wave and does not increase in size. The same result is apparent from the space time diagram in Figure 20. The CPU times and total number of computed time steps are reported in Table 4 for the multirate and single rate TRBDF2 methods, respectively. For the multirate method, cubic interpolation was found to be necessary in order to achieve an improvement over the performance of the single rate method, while linear interpolation led to an excessive number of rejections of macro steps. In order to also assess the accuracy of the multirate approach, the relative errors obtained in this test case are reported in Table 5. More specifically, the errors obtained by the multirate TRBDF2 method with cubic Hermite interpolator, the multirate TRBDF2 method with linear interpolator and the single rate TRBDF2 method are compared. It can be seen that all three approaches yield comparable errors. Since the first order space discretization is responsible for the largest part of the error, we also compute the error with respect to the reference solution obtained by discretizing in time with the MATLAB ode45 solver set to a suitably stringent tolerance. The results, reported in Table 6, show that also in this case the errors obtained with the three variants are entirely analogous, albeit smaller by several orders of magnitude due to the factoring out of the spatial discretization error.
In order to highlight the capability of the adaptive strategy to select automatically a time step that is compatible with the underlying dynamics, we have also reported the maximum Courant numbers at each time step, computed as
\[ C^n = \max_{1 \le i \le N} \frac{|u_i^n|\,\Delta t^n}{\Delta x} \qquad (7.8) \]
where $N$ is the total number of cells. In Figure 21 a), the Courant number values associated with the global time steps are represented by red circles, while those associated with local, refined time steps are represented by blue symbols. It can be seen that, while large Courant numbers are used for the latent components, much smaller values are employed for the active components that correspond to the locations actually crossed by the shock wave. In Figure 21 b) the corresponding fractions of active components are displayed, highlighting the significant reduction in computational cost in this case.
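The quantity in (7.8) can be computed per time step as in the following sketch; the step sizes and the shock-like profile are illustrative assumptions.

```python
import numpy as np

def max_courant(u, dt, dx):
    # Maximum Courant number over the cells for a step of size dt,
    # as in (7.8): C = max_i |u_i| * dt / dx.
    return float(np.max(np.abs(u))) * dt / dx

# Illustrative shock-like profile with max |u| = 1.
u = np.where(np.linspace(-1.0, 1.0, 200) < 0.0, 1.0, 0.0)
dx = 2.0 / 200
c_macro = max_courant(u, dt=10.0 * dx, dx=dx)  # large global (latent) step
c_micro = max_courant(u, dt=0.5 * dx, dx=dx)   # refined (active) step
```

Here the macro step corresponds to a Courant number of about 10, while the refined step stays below 1, mirroring the behaviour displayed in Figure 21 a).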
We then consider the case $u_l < u_r$, for which a rarefaction wave solution is obtained. More specifically, we consider and . The other parameters are the same as in the previous case.
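The two cases are distinguished by the exact entropy solution of the Burgers Riemann problem: a shock travelling at the Rankine–Hugoniot speed $(u_l + u_r)/2$ when $u_l > u_r$, and a rarefaction fan otherwise. A minimal sketch for $t > 0$, with arbitrary illustrative states:

```python
def burgers_riemann_exact(x, t, ul, ur):
    # Exact entropy solution of the Riemann problem for the inviscid
    # Burgers equation, evaluated at (x, t) with t > 0.
    if ul > ur:
        # Shock: Rankine-Hugoniot speed s = (ul + ur) / 2.
        s = 0.5 * (ul + ur)
        return ul if x < s * t else ur
    # Rarefaction fan between the characteristics x = ul*t and x = ur*t.
    if x <= ul * t:
        return ul
    if x >= ur * t:
        return ur
    return x / t

# Shock case with ul = 1, ur = 0: the jump travels at speed 1/2,
# so at t = 1 the point x = 0.4 still carries the left state.
left_value = burgers_riemann_exact(0.4, 1.0, 1.0, 0.0)
```

In the rarefaction case the solution varies over the whole fan, not just at a single front, which explains why the set of active components is larger there than in the shock case.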
[Table 7: CPU time and number of time steps for the single rate and multirate TRBDF2 methods in the rarefaction wave case.]
[Table 8: Relative errors at selected times for the multirate TRBDF2 method with cubic interpolation, with linear interpolation, and for the single rate method.]
[Table 9: Relative errors with respect to the ode45 reference solution for the same three variants.]
Figure 22 shows how the rarefaction wave solution evolves in space as time progresses. It can be seen how the automatically selected set of active components moves along with the wave. The same result is apparent from the space time diagram in Figure 23. Since in this case the solution undergoes significant changes in a larger number of cells, the number of active components is correspondingly larger than in the shock wave case.
The CPU times and total number of computed time steps are reported in Table 7 for the multirate and single rate TRBDF2 methods, respectively. For the multirate method, cubic interpolation was found to be necessary in order to achieve an improvement over the performance of the single rate method, while linear interpolation led to an excessive number of rejections of macro steps. In order to also assess the accuracy of the multirate approach, the relative errors obtained in this test case are reported in Table 8. More specifically, the errors obtained by the multirate TRBDF2 method with cubic Hermite interpolator, the multirate TRBDF2 method with linear interpolator and the single rate TRBDF2 method are compared. It can be seen that all three approaches yield comparable errors. Since the first order space discretization is responsible for the largest part of the error, we also compute the error with respect to the reference solution obtained by discretizing in time with the MATLAB ode45 solver set to a suitably stringent tolerance. The results, reported in Table 9, show that also in this case the errors obtained with the three variants are entirely analogous, albeit smaller by several orders of magnitude due to the factoring out of the spatial discretization error.
In Figure 24 a) we have reported the Courant number values associated with each time step. Compared to the shock wave case, we can see that larger Courant numbers are allowed by the error estimator also for refined time steps, as a consequence of the different dynamics of this kind of solution. In Figure 24 b) the corresponding fractions of active components are displayed, highlighting the smaller reduction in computational cost in this case with respect to the shock wave solution.
8 Conclusions
We have proposed a self-adjusting multirate method that relies on the TRBDF2 method as the fundamental single rate solver. The TRBDF2 method has a number of attractive properties, such as cheap error estimation via an embedded third order method and continuous output via a cubic Hermite interpolant, that can be exploited to increase the efficiency and accuracy of a multirate approach.
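For the scalar linear model problem $u' = \lambda u$, the two implicit stages of a TRBDF2 step (a trapezoidal stage over $\gamma h$ with $\gamma = 2 - \sqrt{2}$, followed by a BDF2 stage) can be solved in closed form. The following sketch uses the standard stage coefficients from the literature [4, 11]; it illustrates the single rate building block only, not the multirate machinery.

```python
import math

GAMMA = 2.0 - math.sqrt(2.0)  # standard TRBDF2 stage fraction

def trbdf2_step_linear(u, z):
    # One TRBDF2 step for u' = lam*u, with z = h*lam; both implicit
    # stages are solved in closed form for this scalar linear problem.
    g = GAMMA
    # TR (trapezoidal) stage from t_n to t_n + g*h.
    ug = u * (1.0 + 0.5 * g * z) / (1.0 - 0.5 * g * z)
    # BDF2 stage from t_n + g*h to t_n + h, combining u_n and u_{n+g}.
    a = 1.0 / (g * (2.0 - g))
    b = (1.0 - g) ** 2 / (g * (2.0 - g))
    c = (1.0 - g) / (2.0 - g)
    return (a * ug - b * u) / (1.0 - c * z)

# One step of size h = 0.01 for u' = -u, u(0) = 1; the result should
# be close to exp(-0.01), with a local error of order h^3.
u1 = trbdf2_step_linear(1.0, -0.01)
```

For very stiff components the step is strongly damped (e.g. `trbdf2_step_linear(1.0, -1e6)` is close to zero), reflecting the L-stability that makes the method attractive as the core of a multirate framework.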
Our self-adjusting multirate TRBDF2 method has been coupled to a partitioning and time step selection criterion based on the technique proposed in [7]. The stability of the method has been analyzed in the framework of the classical linear model problem. Even though the results achieved are only based on the numerical computation of the stability function norm, we show that for a range of relevant stiff model problems the method has remarkable stability properties. This is true not only for contractive problems with strictly dissipative behaviour, but also for problems with purely imaginary eigenvalues, which suggests that the method could be advantageous also for application to hyperbolic PDEs and structural mechanics problems. The numerical results obtained on several standard benchmarks show that the proposed method leads to significant efficiency gains, analogous to those achieved by other self-adjusting approaches when compared to their corresponding single rate variants. In particular, the method has been applied to the time discretization of nonlinear, hyperbolic partial differential equations, allowing automatic detection of complex localized phenomena such as shock waves and achieving significant reductions in computational cost.
In a companion work, we will focus on the extensive application of the method described in this paper to nonlinear hyperbolic equations and on the derivation of mass conservative versions of this multirate approach.
Acknowledgements
This paper contains developments of the results presented in the PhD thesis by A.R. [16] and in the Master thesis by L.D. [5], both at Politecnico di Milano. A.R.’s PhD grant was supported by the Erasmus Mundus Heritage project, while he is presently supported by the Science Foundation Ireland grant 13/RC/2094. L.B. was partially supported by the INdAM-GNCS 2015 project Metodi numerici semi-impliciti e semi-Lagrangiani per sistemi iperbolici di leggi di bilancio. Useful discussions with Luca Formaggia are gratefully acknowledged, as well as the comments by Nicola Guglielmi and Claus Führer on the first draft of the thesis by Akshay Ranade.
References
 [1] J.F. Andrus. Numerical solution of systems of ordinary differential equations separated into subsystems. SIAM Journal on Numerical Analysis, 16:605–611, 1979.
 [2] J.F. Andrus. Stability of a multirate method for numerical integration of ODEs. Computers and Mathematics with Applications, 25:3–14, 1993.
 [3] M. Baldauf. Linear stability analysis of Runge-Kutta-based partial time-splitting schemes for the Euler equations. Monthly Weather Review, 138:4475–4496, 2010.
 [4] R.E. Bank, W.M. Coughran, W. Fichtner, E.H. Grosse, D.J. Rose, and R.K. Smith. Transient simulation of silicon devices and circuits. IEEE Transactions on Electron Devices, 32:1992–2007, 1985.
 [5] L. Delpopolo Carciopolo. Application of the multirate TRBDF2 method to the time discretization of nonlinear conservation laws. Master’s thesis, Degree in Mathematical Engineering, Politecnico di Milano, 2015.
 [6] S. Erlicher, L. Bonaventura, and O. S. Bursi. The analysis of the generalized-α method for nonlinear dynamic problems. Computational Mechanics, 28:83–104, 2002.
 [7] P.K. Fok. A linearly fourth order multirate Runge–Kutta method with error control. Journal of Scientific Computing, pages 1–19, 2015.
 [8] C.W. Gear and D.R. Wells. Multirate linear multistep methods. BIT Numerical Mathematics, 24:484–502, 1984.
 [9] F.X. Giraldo, J.F. Kelly, and E.M. Constantinescu. Implicit-explicit formulations of a three-dimensional nonhydrostatic unified model of the atmosphere (NUMA). SIAM Journal on Scientific Computing, 35(5):1162–1194, 2013.
 [10] M. Günther, A. Kvaernø, and P. Rentrop. Multirate partitioned Runge-Kutta methods. BIT Numerical Mathematics, 41:504–514, 2001.
 [11] M.E. Hosea and L.F. Shampine. Analysis and implementation of TRBDF2. Applied Numerical Mathematics, 20:21–37, 1996.
 [12] W. Hundsdorfer and V. Savcenco. Analysis of a multirate theta-method for stiff ODEs. Applied Numerical Mathematics, 59:693–706, 2009.
 [13] J.B. Klemp and R.B. Wilhelmson. The simulation of three-dimensional convective storm dynamics. Journal of the Atmospheric Sciences, 35:1070–1096, 1978.
 [14] J.D. Lambert. Numerical methods for ordinary differential systems: the initial value problem. Wiley, 1991.
 [15] R.J. LeVeque. Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, Cambridge, UK, 2002.
 [16] A. Ranade. Multirate Algorithms Based On DIRK Methods For Large Scale System Simulation. PhD thesis, Politecnico di Milano, 2016.
 [17] J.R. Rice. Split Runge-Kutta methods for simultaneous equations. Journal of Research of the National Bureau of Standards, 60, 1960.
 [18] V. Savcenco. Comparison of the asymptotic stability properties for two multirate strategies. Journal of Computational and Applied Mathematics, 220:508–524, 2008.
 [19] V. Savcenco, W. Hundsdorfer, and J.G. Verwer. A multirate time stepping strategy for stiff ordinary differential equations. BIT Numerical Mathematics, 47:137–155, 2007.
 [20] W.C. Skamarock and J.B. Klemp. The stability of time-split numerical methods for the hydrostatic and the nonhydrostatic elastic equations. Monthly Weather Review, 120:2109–2127, 1992.
 [21] G. Söderlind and L. Wang. Evaluating numerical ODE/DAE methods, algorithms and software. Journal of Computational and Applied Mathematics, 185:244–260, 2006.
 [22] G. Tumolo and L. Bonaventura. A semi-implicit, semi-Lagrangian, DG framework for adaptive numerical weather prediction. Quarterly Journal of the Royal Meteorological Society, 141:2582–2601, 2015.
 [23] A. Verhoeven, B. Tasić, T.G.J. Beelen, E.J.W. ter Maten, and R.M.M. Mattheij. Automatic partitioning for multirate methods. In Scientific Computing in Electrical Engineering, pages 229–236. Springer, 2007.