Comparing different muscle activation dynamics using sensitivity analysis
In this paper, we mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze (1977) and Zajac (1989). Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). The models incorporate some parameters that directly represent known physiological properties, while other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis as a mathematical tool for investigating the influence of model parameters on the solution of the ODEs. To this end, we adopted an earlier approach (Lehman and Stark, 1982) for calculating such (first order) sensitivities. Additionally, we expanded it to treat initial conditions as parameters and to calculate second order sensitivities. The latter quantify the non-linearly coupled effect of any combination of two parameters. To complete the picture, we used a global sensitivity analysis approach from Chan et al. (1997) to take the variability of parameters into account. The method we suggest has numerous uses. A theoretician striving for model reduction may use it to identify particularly low sensitivities and thereby detect superfluous parameters. An experimenter may use it to identify particularly high sensitivities, which facilitates determining the corresponding parameter values with maximal precision.
We found that, in comparison to Zajac’s linear model, Hatze’s non-linear model incorporates a set of parameters to which activation dynamics is clearly more sensitive. Unlike Zajac’s model, Hatze’s model can moreover reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze’s model that combines best with a particular muscle force-length relation. We also give an outlook on how sensitivity analysis could be used for optimising parameter sets in future work.
Keywords: biomechanical model, direct dynamics, ordinary differential equation
List of symbols
|contractile element (CE) length||time-dependent|
|contraction velocity||first time derivative of the CE length|
|optimal CE length||muscle-specific|
|relative CE length||(dimensionless)|
|maximum isometric force of the CE||muscle-specific|
|neural muscle stimulation||time-dependent; here: a fixed parameter|
|muscle activity (bound Ca-ion concentration)||time-dependent|
|basic activity according to Hatze (1981)|
|activity according to Hatze (1977)||time- and length-dependent|
|initial condition for Hatze’s activation ODE||mutable|
|activity according to Zajac (1989)||time-dependent|
|initial condition for Zajac’s activation ODE||mutable|
|activation time constant in Zajac (1989)||here:|
|deactivation time constant in Zajac (1989)||here: or|
|corresponding deactivation boost (Zajac, 1989)|
|exponent in Hatze’s formulation||2 or 3|
|activation frequency constant in Hatze (1977)||range: ; here:|
|maximal Ca-ion concentration in Hatze (1978)|
|representation of free Ca-ion concentration (Hatze, 1977, 1981)||time-dependent|
|length dependency of Hatze (1978) activation dynamics|
|pole in Hatze’s length dependency function|
|factor in van Soest (1992); Hatze (1977)||() or ()|
|merging of and||; here: () or ()|
|model parameter set|
In science, knowledge is gained by an interplay between quantitative real-world measurements of physical, chemical, or biological phenomena and the development of mathematical models accounting for the dynamical processes behind them. In general, such phenomena are observed as spatio-temporal patterns of physical measures (state variables). Modelling consists of distinguishing the surrounding world from a system that yields the phenomena, and of formulating a mathematical description of the system, a model, that can calculate its state variables. The calculations depend on model parameters and often on prescribing measured input variables. By changing parameter values and analysing the resulting changes in the values of the state variables, the model may then be used as a predictive tool. This way, the model’s validity can be verified. If the mathematical model description is moreover derived from first principles, the model can potentially even explain the phenomena in a causal sense.
Calculating the sensitivities of a model’s predicted output, i.e., the system’s state variables, with respect to model parameters is a means of eliminating redundancy and indeterminacy from models, and thus helps to identify valid models. Sensitivity analyses can be helpful both in model-based experimental approaches and in purely theoretical work. A modelling theoretician may be looking for parameters to which all state variables are insensitive. Such parameters may be superfluous. An experimenter may inspect the model that represents their working hypothesis and analyse which of the model’s state variables are specifically sensitive to a selected parameter. It would then make sense to measure exactly this state variable in order to identify the value of the selected parameter.
In a biomechanical study, Scovil and Ronsky (2006) applied sensitivity analysis to examine the dynamics of a mechanical multi-body system (a runner’s skeleton) coupled to muscle activation-contraction dynamics. They calculated specific sensitivity coefficients in three slightly different ways. A sensitivity coefficient is the difference quotient obtained by dividing the change in a state variable by the change in a model parameter value, evaluated in a selected system state (Lehman and Stark, 1982). The corresponding partial derivative may simply be called “sensitivity”. A sensitivity function is then the time evolution of a sensitivity (Lehman and Stark, 1982). Thus, Lehman and Stark (1982) had proposed a more general and unified approach than Scovil and Ronsky (2006), one which allows the sensitivities of any dynamical system described in terms of ordinary differential equations to be calculated systematically. As an example, Lehman and Stark (1982) applied their proposed method to a muscle-driven model of saccadic eye movement. By calculating a percentage change in a state variable value per percentage change in a parameter value, all sensitivities can be made comprehensively comparable, even across models. A sensitivity as defined so far is of first order. Methodologically, we aim to go one step beyond, namely to calculate second order sensitivities. These measures are suited to quantifying how much the sensitivity of a state variable with respect to (w.r.t.) one model parameter depends on changing another parameter. This way, the strength of their interdependent influence on model dynamics can be determined.
In addition to this so-called local sensitivity analysis, we furthermore take the variability of the parameters into account, aiming for a global sensitivity analysis as presented in Chan et al. (1997) and Saltelli (2000). This approach makes it possible to translate sensitivities with respect to parameters into importances of parameters.
In this study, we will apply the sensitivity analysis to models that predict how the activity of a muscle (its chemical state) changes when the muscle is stimulated by neural signals (electrical excitation). Such models are used for simulations of muscles’ contractions coupled to their activation dynamics. Models for coupled muscular dynamics are often part of neuro-musculo-skeletal models of biological movement systems. In particular, we aim to assess and compare two specific model variants of activation dynamics formulated by Zajac (1989) and by Hatze (1977). As a first result, we present an example of a simplified version of the Zajac (1989) model, in which sensitivity functions can even be calculated in closed form. Then, we calculate the sensitivities numerically with respect to all model parameters in both models, aiming at an increased understanding of the influence of changes in model parameters on the solutions of the underlying ordinary differential equations (ODEs). Additionally, we discuss which of the two models may be the more physiological one. The arguments come from a mixture of three different aspects: sensitivity analysis, others’ experimental findings, and an additional attempt to best fit different combinations of activation dynamics and force-length relations of the contractile element (CE) in a muscle to known data on shifts in optimal CE length with muscle activity (Kistemaker et al., 2005).
2 Two models for muscle activation dynamics
Macroscopically, a muscle fibre or an assembly of muscle fibres (muscle belly) is often mapped mathematically by a one-dimensional massless thread called “contractile component” or “contractile element” (CE) (van Soest, 1992; van Soest and Bobbert, 1993; Cole et al., 1996; Günther et al., 2007; Haeufle et al., 2014). Its absolute length is which may be normalised to the optimal fibre length by . In macroscopic muscle models, the CE muscle force is usually modelled as a function of a force-(CE-)length relation, a force-(CE-)velocity relation, and (CE-)activity . A common view is that muscle activity represents the number of attached cross-bridges within the muscle, normalised to the maximum number available (). It can also be considered as the concentration of bound Ca-ions in the muscle sarcoplasma relative to its physiological maximum. The parameter represents the minimum activity that is assumed to occur without any stimulation (Hatze, 1977).
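The product structure described above can be illustrated with a small sketch. Note that the bell-shaped force-length factor, the hyperbolic force-velocity factor, and all numerical values below are generic textbook choices for illustration only, not the specific relations or parameters of the models cited in this paper.

```python
# Sketch of the common Hill-type product ansatz for the CE force,
#   F_CE = a * F_max * f_l(l_rel) * f_v(v_rel),
# with illustrative (assumed) force-length and force-velocity factors.

import math

def f_l(l_rel, width=0.45):
    """Bell-shaped force-length factor, maximal at optimal CE length."""
    return math.exp(-((l_rel - 1.0) / width) ** 2)

def f_v(v_rel, curvature=0.25):
    """Hill-type hyperbolic force-velocity factor (concentric side)."""
    return (1.0 - v_rel) / (1.0 + v_rel / curvature)

def ce_force(a, l_rel, v_rel, f_max=1500.0):
    """CE force as the product of activity and both factors."""
    return a * f_max * f_l(l_rel) * f_v(v_rel)

# Isometric contraction (v_rel = 0) at optimal length (l_rel = 1)
# and full activity yields the maximum isometric force.
print(ce_force(a=1.0, l_rel=1.0, v_rel=0.0))  # -> 1500.0
```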
We analyse two different formulations of muscle activation dynamics, i.e., the time (its symbol: ) evolution of muscle activity . One formulation of muscle activation dynamics was suggested by Zajac (1989) which we modified slightly to take into account:
Here, is meant to represent the (electrical) stimulation of the muscle, thus, a parameter for controlling muscle dynamics. It represents the output of the nervous system’s dynamics applied to the muscle which in turn interacts with the skeleton, the body mass distribution, the external environment, and so with the nervous system in a feedback loop. Electromyographic (EMG) signals can be seen as a compound of such neural stimulations collected in a finite volume (being the input to a number of muscle fibres), over a frequency range, and coming from a number of (moto-)neurons. The parameter denotes the activation time constant, and is the ratio of activation to deactivation time constants (deactivation boost).
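Since the equation itself is not reproduced in this extract, the following sketch uses a common reduced form of a Zajac-type first-order activation dynamics, in which activation proceeds with time constant tau and deactivation is slowed by a boost factor beta. The concrete rate expression and all parameter values are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a Zajac-type activation dynamics: a first-order
# ODE driving activity a toward stimulation q, with deactivation slowed
# by a boost factor beta <= 1 (assumed form, for illustration only).

def dadt(a, q, tau=0.01, beta=0.5):
    """Rate of change of activity a for stimulation q in [0, 1]."""
    # effective rate: activation at 1/tau, deactivation at beta/tau
    rate = (q + (1.0 - q) * beta) / tau
    return rate * (q - a)

def integrate(a0, q, t_end=0.1, dt=1e-4):
    """Forward-Euler integration of the activation ODE."""
    a, t = a0, 0.0
    while t < t_end:
        a += dt * dadt(a, q)
        t += dt
    return a

# After a step to full stimulation, activity saturates near 1.
print(round(integrate(a0=0.05, q=1.0), 3))  # -> 1.0
```

With these assumed values, removing the stimulation (q = 0) lets activity decay at the slower rate beta/tau, mimicking the boosted (slowed) deactivation described in the text.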
An alternative formulation of muscle activation dynamics was introduced by Hatze (1977):
Here, we divided the original equation from Hatze (1977) by the parameter which represents the maximum concentration of free Ca-ions in the muscle sarcoplasma. Thus, the values of the corresponding normalised concentration are . The activity is then finally calculated by the function
and the parameter is shifted to the accordingly renormalised function
with and . Two cases have been suggested by Hatze (1981): (i.e. ) for and (i.e. ) for , which has been applied in the literature (van Soest, 1992; Kistemaker et al., 2005, 2006, 2007). By substituting equations (2) and (3) into and resubstituting the inverse of (3) afterwards, Hatze’s formulation of an activation dynamics can be transformed into a non-linear differential equation directly in terms of the activity:
with initial condition .
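The two-stage structure described above can be sketched as follows. The equations are not reproduced in this extract, so the sketch uses the commonly cited form of a Hatze-type activation dynamics: a linear ODE for the normalised free Ca-ion concentration followed by a non-linear (sigmoid) activation function. The length dependency is collapsed into a single constant, and all parameter values are illustrative assumptions.

```python
# Sketch of a Hatze-type activation dynamics in a commonly cited form:
# a linear ODE for the normalised free Ca-ion concentration gamma,
# followed by a non-linear sigmoid mapping to activity. The values of
# m, rho_bar, nu, and q0 are assumptions for illustration.

def hatze_activity(gamma, q0=0.005, rho_bar=5.0, nu=3):
    """Sigmoid mapping from normalised Ca-ion concentration to activity."""
    g = (rho_bar * gamma) ** nu
    return (q0 + g) / (1.0 + g)

def simulate(u, gamma0=0.0, m=11.3, t_end=1.0, dt=1e-4):
    """Euler integration of d(gamma)/dt = m * (u - gamma)."""
    gamma, t = gamma0, 0.0
    while t < t_end:
        gamma += dt * m * (u - gamma)
        t += dt
    return hatze_activity(gamma)

# Full stimulation drives gamma toward 1 and activity close to 1;
# zero stimulation leaves activity at the basic level q0.
print(simulate(u=1.0) > 0.95 and abs(simulate(u=0.0) - 0.005) < 1e-3)  # -> True
```

The sigmoid stage is what makes the dynamics non-linear in the activity itself, which is the property exploited when the ODE is rewritten directly in terms of the activity.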
3 Local first and second order sensitivity of ODE systems regarding their parameters
Let and . We then consider a system of ordinary, first order initial value problems (IVP)
where denotes the vector of state variables, the vector of right-hand sides of the ODE, and the set of parameters which the ODE depends on. The vector of initial conditions is abbreviated by
Then, the first order solution sensitivity with respect to the parameter set is defined as the matrix
For simplicity, we denote , , but keep the dependencies in mind. Because the solution might only be obtained numerically rather than as a closed-form expression, we have to apply the well-known theory of sensitivity analysis as stated in Vukobratovic (1962); Gelinas (1976); Lehman and Stark (1982); ZivariPiran (2009). Differentiating equation (8) w.r.t. and applying the chain rule yields
|with being the gradient of state variables. Hence, we obtain the following ODE for the first order solution sensitivity|
or in short terms
where is the sensitivity matrix, is the Jacobian matrix with , furthermore the -matrix containing the partial derivatives and the -matrix consisting of zeros only.
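For a scalar example, the sensitivity ODE above can be integrated alongside the state. The sketch below uses the linear model x' = (u - x)/tau (the reduced activation dynamics of the analytical example further down; the symbol names u and tau are illustrative), integrates the sensitivity ODE s' = (df/dx)·s + df/du with s(0) = 0, and checks the result against the known closed form of dx/du.

```python
# Sketch: integrating the first order sensitivity ODE
#   s' = (df/dx) * s + df/dp,  s(0) = 0,
# alongside the state, for the scalar linear model x' = (u - x)/tau.

import math

def simulate(x0=0.0, u=1.0, tau=0.04, t_end=0.2, dt=1e-6):
    x, s_u = x0, 0.0          # state and sensitivity dx/du
    t = 0.0
    while t < t_end:
        dfdx = -1.0 / tau     # Jacobian of f(x) = (u - x)/tau
        dfdu = 1.0 / tau      # partial derivative of f w.r.t. u
        x += dt * (u - x) / tau
        s_u += dt * (dfdx * s_u + dfdu)
        t += dt
    return x, s_u

x, s_u = simulate()
# Closed form of the sensitivity: dx/du = 1 - exp(-t/tau)
print(abs(s_u - (1.0 - math.exp(-0.2 / 0.04))) < 1e-3)  # -> True
```

This is exactly the inexpensive augmentation referred to later: the sensitivity equation reuses the Jacobian of the right-hand side and is solved by the same integrator as the model ODE itself.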
By analogy, the second order sensitivity of with respect to is defined as the following -tensor
assuming for all . That is, we assume that the prerequisites of Schwarz’ Theorem (symmetry of the second derivatives) are fulfilled throughout. Again, differentiating w.r.t. and applying the chain rule leads to the ODE
with . For purposes beyond the aim of this paper, a condensed notation introducing the concept of tensor (or Kronecker) products as in ZivariPiran (2009) may be helpful. For a practical implementation in MATLAB see Kolda (2006).
Differentiating this equation w.r.t. yields
and differentiating again w.r.t. results in a homogeneous ODE for each component , namely
The parameters of our analysed models are meant to represent physiological processes, and therefore bear physical dimensions. For example, and are frequencies measured in [Hz], whereas is measured in [mol/l]. Accordingly, would be measured in [Hz] and in [s] (note that each of our models consists of a single ODE, and therefore we do not need a second index). Normalisation provides a comprehensive comparison between all sensitivities, even across models. For any parameter, the value fixed for a specific simulation is a natural choice. For any state variable, we chose its current value at each point in time of the corresponding ODE solution. Hence, we normalise each sensitivity by multiplying it with the ratio to get the relative sensitivity
A relative sensitivity thus quantifies the percentage change in the -th state variable value per percentage change in the -th parameter value. This applies accordingly to the second order sensitivity
It can be shown that this method is valid and mathematically equivalent to another common method in which the whole model is non-dimensionalised a priori (Scherzer, 2009). A non-normalised model formulation has the additional advantage of usually being more immediately accessible and transparent to experimenters. In the remainder of this manuscript, we always present and discuss relative sensitivity values normalised in this way.
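The normalisation just described amounts to one multiplication per sensitivity sample; a minimal sketch (function and example values are illustrative):

```python
# Sketch: normalising a raw sensitivity dx/dp into a relative
# (percentage-per-percentage) sensitivity, as described above.

def relative_sensitivity(raw, p_value, x_value):
    """Percentage change in state x per percentage change in parameter p."""
    return raw * p_value / x_value

# Example: raw sensitivity 0.5 at parameter value 2.0 and state value 4.0
print(relative_sensitivity(0.5, 2.0, 4.0))  # -> 0.25
```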
4 Variance-based global sensitivity analysis
The previously presented differential sensitivity analysis is called a local method because it does not take the physiological range of parameter values into account. If we imagine the parameter space as a -dimensional cuboid , where are the minimal and maximal parameter values, we can only fix a certain point and calculate the local gradient of the solution w.r.t. . By changing only one parameter at a time, the investigated star-shaped area lies within a ball around whose volume vanishes in comparison to for an increasing number of parameters, as shown in Saltelli and Annoni (2010).
To take the range of parameter values into account, Saltelli (2000) gave a detailed elaboration of so-called global methods. The main idea behind most global methods is to include a statistical component to scan the whole parameter space and to combine the percentage change of the state variable per percentage change of the parameters with the variability of the parameters themselves. For an overview of the numerous methods such as ANOVA, FAST, regression, or Sobol’ indexing, we refer the reader to Saltelli (2000) and Frey (2003).
In this paper, we sketch the main idea of the variance-based sensitivity analysis approach presented in Chan et al. (1997), which is based on Sobol’ indexing. We chose this method because of its transparency and low computational cost. The aim of this method is to calculate two measures of sensitivity w.r.t. parameter : the variance-based sensitivity function denoted by and the total sensitivity index function denoted by . The former gives a normalised first order sensitivity quite similar to from the previous section, but includes the parameter range. The latter, however, even includes higher order sensitivities and gives a measure of interactions of parameter influences.
A recipe for calculating and can be given as follows. First, set boundaries for all model parameters, either by model assumptions or by literature reference, thus resulting in . Second, generate two sets of sample points w.r.t. the underlying probability distribution of each parameter, in our case the uniform distribution. Third, calculate sets of new sample points, where consists of all sample points in with the -th component of . Consequently, consists of the -th component of and every other component of . Fourth, evaluate the model from Eqn. (6) at each of the sample points, resulting in a family of solutions. For this family, perform the following calculations:
Compute the variance of the family of all solutions as a function of time, namely . This variance function indicates the general model output variety throughout the whole parameter range.
Compute the variance of the family of all solutions resulting from an evaluation of the model at and . Again, the variance is a function of time, namely , which indicates the model output variety if only one parameter value is changed.
Compute the variance of the family of all solutions resulting from an evaluation of the model at and , namely , which indicates the model output variety if only one parameter value is kept fixed.
Note that in Chan et al. (1997) the computations are done via an approximation by Monte-Carlo integrals. The and can finally be calculated as
As a consequence of the normalisation we can give additional properties of and that can be comprehended in (Chan et al., 1997, Fig. 1):
In other words, gives the normalised first order sensitivity function of the solution w.r.t. in relation to the model output range. Accordingly, gives the relative influence of parameter on the model output regarding all interactions with the other parameters. Chan et al. (1997) suggested relating the value to the “importance” of .
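The recipe above can be sketched in code. The example below applies it to a toy additive algebraic model rather than the activation ODEs, with two parameters uniform on [0, 1]; the Saltelli and Jansen Monte-Carlo estimators stand in for the formulas of Chan et al. (1997), so all names and values here are illustrative assumptions.

```python
# Sketch of variance-based (Sobol'-type) sensitivity estimation via
# Monte-Carlo sampling, applied to a toy two-parameter model.

import random

def sobol_indices(model, n=50000, seed=0):
    """Estimate first order (S_i) and total (T_i) sensitivity indices
    for a two-parameter model with parameters uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random(), rng.random()] for _ in range(n)]
    B = [[rng.random(), rng.random()] for _ in range(n)]
    fA = [model(p) for p in A]
    fB = [model(p) for p in B]
    mean = sum(fA) / n
    V = sum((y - mean) ** 2 for y in fA) / n      # total output variance
    first, total = [], []
    for i in range(2):
        # A_B^i: parameter i sampled from B, all other parameters from A
        fABi = [model([b[i] if j == i else a[j] for j in range(2)])
                for a, b in zip(A, B)]
        Si = sum(yb * (yi - ya)
                 for yb, yi, ya in zip(fB, fABi, fA)) / (n * V)
        Ti = sum((ya - yi) ** 2
                 for ya, yi in zip(fA, fABi)) / (2 * n * V)
        first.append(Si)
        total.append(Ti)
    return first, total

# Toy additive model: parameter 1 carries almost all output variance.
first, total = sobol_indices(lambda p: p[0] + 0.1 * p[1])
print(first[0] > 0.9 and total[1] < 0.1)  # -> True
```

For the time-dependent ODE solutions in the paper, the same computation would be repeated at each output time point, yielding the sensitivity functions of time discussed in the text.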
5 An analytical example for local sensitivity analysis including a link between Zajac’s and Hatze’s formulations
By further simplifying Zajac’s formulation of an activation dynamics (1), assuming a deactivation boost (activation and deactivation time constants are equal) and a basic activity , we obtain a linear ODE for this specific case, which is equivalent to Hatze’s equation (2) modelling the time evolution of the free Ca-ion concentration:
By analysing this specific case, we aim at making the above described sensitivity analysis method more transparent for the reader. Solving equation (17) yields
depending on just two parameters (stimulation: control parameter) and (time constant of activation: internal parameter) in addition to the initial value . The solution equals the value after about .
Although we already know the solution (18) explicitly in terms of time and both parameters, for the sake of transparency we still apply the more generally applicable, implicit method (9),(12) to determine the derivatives of the solution w.r.t. the parameters (the sensitivities). For that, we calculate the gradient of the right-hand side of the ODE (17)
and insert these partial derivatives into equations (9) and (12). Solving the respective three ODEs for the three parameters (, ) and normalising them according to (13) gives the relative sensitivities of w.r.t. , , and as functions of time (see Fig. 1):
A straightforward result is that the time constant has its maximum effect on the solution (Fig. 1: see ) at time . In case of a step in stimulation, the sensitivity vanishes in the initial situation and exponentially approaches zero again after a few further multiples of the typical period . Note that is negative, which means that an increase in decelerates activation. That is, for a fixed initial value , the solution value decreases at a given point in time if is increased. After a step in stimulation , the time in which the solution bears some memory of its initial value is equal to the period of being non-sensitive to any further step in (compare to and (19) to (21)). After about , the sensitivity has already fallen to about and to about , accordingly.
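These qualitative statements can be checked against the closed-form solution x(t) = u + (x0 - u)·exp(-t/tau) of the linear first-order ODE, which is its standard solution; the symbol names and parameter values below are illustrative stand-ins for the elided symbols of this section.

```python
# Sketch: relative sensitivities of the closed-form solution
#   x(t) = u + (x0 - u) * exp(-t / tau)
# of the reduced linear activation dynamics, w.r.t. u and tau.

import math

def solution(t, x0=0.05, u=1.0, tau=0.04):
    return u + (x0 - u) * math.exp(-t / tau)

def rel_sens(t, x0=0.05, u=1.0, tau=0.04):
    """Relative sensitivities of x(t) w.r.t. u and tau."""
    x = solution(t, x0, u, tau)
    dx_du = 1.0 - math.exp(-t / tau)
    dx_dtau = (x0 - u) * (t / tau ** 2) * math.exp(-t / tau)
    return u * dx_du / x, tau * dx_dtau / x

# Increasing tau decelerates activation: the tau-sensitivity is
# negative and transient, vanishing again after a few time constants,
# while the u-sensitivity approaches 1.
s_u_late, s_tau_late = rel_sens(10 * 0.04)
print(s_tau_late < 0 and abs(s_tau_late) < 1e-2
      and abs(s_u_late - 1.0) < 1e-2)  # -> True
```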
6 The numerical approach and results
Typically, biological dynamics are represented by non-linear ODEs. The linear ODE used for describing activation dynamics in the Zajac (1989) case (1) is thus rather an exception: a closed-form solution can be given, of which (18) in the previous section is an example for the reduced case of non-boosted deactivation (17).
In general, however, non-linear ODEs used in biomechanical modelling, as in the Hatze (1977) case (5) for describing activation dynamics, can only be solved numerically. It is understood that any explicit formulation of a model in terms of ODEs allows one to provide the partial derivatives of their right-hand sides w.r.t. the model parameters in closed form. Fortunately, this is exactly what is required as part of the sensitivity analysis approach presented in section 3, in particular in equation (9).
As an application of this approach, we now present a comparison of both formulations of activation dynamics. The example indicates that the approach may be of general value because it is common practice in biomechanical modelling to (i) formulate the ODEs in closed form and (ii) integrate the ODEs numerically. Adding further sensitivity ODEs for model parameters is then an inexpensive enhancement of the procedure used to solve the problem anyway.
including the initial conditions. The numerical solutions for these ODEs were computed within the MATLAB environment (The MathWorks, Natick, USA; version R2013b) using the pre-implemented numerical solver which is a Runge-Kutta algorithm of order 5 (for details see (Benker, 2005)).
6.1 Results for Zajac’s activation dynamics: sensitivity functions
We simulated activation dynamics for the parameter set (22) leaving two of the values constant (, ) and varying the other three (initial condition , stimulation , and deactivation boost ). The time courses of the relative sensitivities w.r.t. all parameters are plotted in Fig. 2. In the left column of Fig. 2, is used, in the right column . Pairs of the parameter values and are specified in the legend of Fig. 2, with increasing values of both parameters from top to bottom.
Solutions are insensitive to the choice except if both initial activity and stimulation (also approximating the final activity if and ) are very low, near itself.
The memory (influence on solution) of the initial value is lost after about , almost independently of all other parameters. This loss in memory is obviously slower than in the extreme case (initial value) and (for and exactly the final value; see section 5 and Fig. 1). In that extreme case, the influence (relative sensitivity) of the lowest possible initial value () on the most rapidly increasing solution (maximum possible final value: ) is lost earlier.
The influence of the time constant on the solution is reduced with decreasing difference between initial and final activity values (compare maximum values in Figs. 1 and 2) and, no matter the value, with jointly raised levels of initial activity and , the latter determining the final activity value if . When deactivation is slower than activation (: right column in Fig. 2), is higher than in the case , both in its maximum amplitude and for longer times after the step in stimulation, especially at low activity levels (upper rows in Fig. 2).
Across all parameters, the solution is in general most sensitive to . However, the influence of the deactivation boost parameter is usually comparable. In some situations, this holds also for the activation time constant (see below). For (Fig. 2, left), the solution becomes a little less sensitive to with decreasing activity level (), which reflects that the final solution value is not determined by alone but by and as much. If deactivation is much slower than activation (: Fig. 2, right), we find the opposite of the case : the more the activity level rises, the less determines the solution. Additionally, stimulation somehow competes with both deactivation boost and time constant (see further below). The term “compete” is meant to illustrate the idea that any single parameter has, in a sense, an individual interest in influencing the dynamics as much as possible in order not to be considered superfluous.
Sensitivity w.r.t. generally decreases with increasing activity and stimulation levels, and vanishes at maximum stimulation .
(Figure 2: relative sensitivities of Zajac’s activation dynamics w.r.t. its parameters.)
At submaximal stimulation levels , the final solution value is determined to almost the same degree by stimulation and deactivation boost , yet with opposite tendencies (, ). Both parameters compete, in the above explained meaning, for their impact on the final solution value. Only at maximum stimulation (lowest row in Fig. 2) is this parameter competition resolved in favour of . In this specific case, does not influence the solution at all. For , the competition about influencing the solution is intermittently but only slightly biased by : the sensitivity peaks at comparably low magnitude around . This influence comes likewise intermittently at the cost of the influence: the absolute value of rises a little more slowly than . In the case , this competition becomes even more differentiated and spread out in time. Again at submaximal stimulation and activity levels, the absolute value of is lower than that of but higher than that of , making all three parameters , , and compete to comparable degrees for an impact on the solution until about . Also, does not vanish before about .
6.2 Results for Hatze’s activation dynamics: sensitivity functions
We also simulated activation dynamics for the parameter set (23), leaving now four of the values constant (, , , ) and again varying three others (initial condition , stimulation , and non-linearity ), keeping in mind that the eighth parameter () is assumed to depend on . Again, time courses of the relative sensitivities w.r.t. all parameters (set ) are plotted (see Fig. 3). In the left column of Fig. 3, is used, in the right column . Here again, the same pairs of the parameter values ( and , increasing from top to bottom; see legend of Fig. 3) are used as in the previous section 6.1 (Fig. 2).
Hatze’s activation dynamics (5) are non-linear unlike Zajac’s activation dynamics (1). This non-linearity manifests particularly in a changeful influence of the parameter . Additionally, the parameter is just roughly comparable to the inverse of the exponential time constant in Zajac’s linear activation dynamics.
In Zajac’s linear differential equation (1), establishes a distinct time scale independent of all other parameters. The parameter in Hatze’s activation dynamics (5) is just formally equivalent to the reciprocal of : the sensitivity does not peak stringently at but rather diffusely between about and in both cases and . At first sight, this is not a surprise because the scaling factor in Hatze’s dynamics is rather than just . However, does not fix an invariant time scale for Hatze’s non-linear differential equation either. This fact becomes particularly prominent at extremely low activity levels for (Fig. 3, left, top row) and up to moderately submaximal activity levels for (Fig. 3, right, top two rows). Here, is negative, which means that increasing the parameter results in less steeply increasing activity. This observation is counter-intuitive to identifying with a reciprocal of a time constant like . Moreover, as might be expected from the product , the exponent does not linearly scale the time behaviour because peaks do not occur systematically earlier in the case as compared to .
Losing the memory of the initial condition confirms the above analysis of time behaviour based on . At high activity levels (Fig. 3, bottom row), Hatze’s activation dynamics loses memory after practically identical time horizons, no matter the value, seemingly even slower for higher at intermediate levels (Fig. 3, two middle rows), and clearly faster at very low levels (Fig. 3, top row). Still, the parameter does roughly determine the time horizon in which the memory of the initial condition is lost and the influence of all other parameters is continuously switched on from zero influence at .
As in Zajac’s dynamics, the solution is generally only sensitive to at very low stimulation levels (Fig. 3, top row). Only at such levels does the case show, however, the peculiarity that the solution becomes strikingly insensitive to any other parameter than itself (and ). The time evolution of the solution is more or less determined by just these minimum () and initial () activities, with determining the approximate switching time horizon between both. In particular, the dependency, constituting a crucial property of Hatze’s activation dynamics, is practically suppressed for at very low activities and stimulations. For , in contrast, remains on a low but still significant level of about a fourth of the three dominating quantities , , and .
The latter sensitivity w.r.t. itself is extraordinarily high at low activities and stimulations around , both for and (Fig. 3, second row from top), additionally at extremely low levels for (Fig. 3, left, top row). At moderately submaximal levels (Fig. 3, third row from top), the solution is then influenced with an already inverted tendency ( changes sign to positive) after around an time horizon for . Here, however, the solution is practically insensitive to for any . At high levels (Fig. 3, bottom row) then, we find that there is no change in the character of the time evolution of the solution, whatever the specific value. The degree of non-linearity does not matter because the time evolution and the ranking of all other sensitivities are hardly influenced by . In both cases anyway, the rise in activity is sped up by increasing (), as opposed to low activity and stimulation levels where rises in activity are slowed down (; see also above).
(Figure 3: relative sensitivities of Hatze’s activation dynamics w.r.t. its parameters.)
Of all the remaining parameters, stimulation , scaled maximum free Ca-ion concentration , relative CE length , and the pole of the length dependency in Hatze’s activation dynamics, the latter has the lowest influence on the solution, whereas the characters of influence of these four parameters are completely identical. That is, their sensitivities are always positive and coupled by fixed scaling ratios due to all of them occurring within just one product on the right side of (5). and are identical, and the sensitivity w.r.t. is the highest, with a ratio and . Except at very low activity, where plays a dominating role, and except for the generally changeful influence, these are the four parameters that dominate the solution after an initial phase in which the initial activity determines its evolution. The parameter does not have a strong direct influence on the solution. As said above, it nevertheless defines the approximate time horizon in which the influence gets lost and all other parameters’ influence is switched on from zero at .
6.3 Variance-based sensitivity and total sensitivity indices for Zajac’s and Hatze’s activation dynamics
In the table below we give the lower and upper boundaries for every parameter in and used in our calculations. We do not relate any individual boundary value to a literature source but refer to Hatze (1978), Zajac (1989) or Günther et al. (2007) for traceability of our choices.
The left-hand side of Fig. 4 shows the variance-based sensitivity functions of every parameter in of Zajac’s model. We compare these results to our previously computed relative first order sensitivity functions from Fig. 2: at first sight, and look equal, but the variance-based sensitivity function increases the duration of influence of a little. For , the also peaks at the typical time from , but with a smaller amplitude. The behaviour of and is also comparable to and from the second and third rows of Fig. 2. Additionally, we plotted the sum of all first order sensitivities. This sum indicates how much of the total variance is covered by first order sensitivities: the closer the sum is to 1, the fewer second and higher order sensitivities occur.
On the right-hand side of Fig. 4, we see the total sensitivity index functions of every parameter in of Zajac’s model. It is noticeable that the look similar to the previously mentioned . Following Chan et al. (1997), we interpret these graphs as a measure of the importance of each parameter. Hence, the importance of is only measurable at the beginning of the activation. At , the importance is nearly , but it vanishes exponentially. The parameter is of only minor importance while the activation build-up is in progress. After saturation at a constant activation level, the importance is shared only between (major importance) and (minor importance).
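The variance-based first-order indices and total sensitivity indices can be estimated by Monte Carlo sampling. A minimal sketch using the Saltelli (first-order) and Jansen (total) estimators on a Zajac-type linear activation ODE; the parameterisation with `u`, `tau`, `beta`, their ranges, and the closed-form solution are illustrative assumptions, not the exact model or boundaries of the paper:

```python
import numpy as np

# Activity of a Zajac-type linear activation ODE
#   da/dt = (u - (beta + (1 - beta) * u) * a) / tau,  a(0) = 0,
# evaluated in closed form (assumed parameterisation).
def activity(u, tau, beta, t):
    c = (beta + (1.0 - beta) * u) / tau          # effective rate constant
    return (u / (tau * c)) * (1.0 - np.exp(-c * t))

def sobol_indices(model, bounds, t, n=4096, seed=0):
    """First-order (S_i, Saltelli estimator) and total (S_Ti, Jansen
    estimator) Sobol indices, estimated by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    k = len(bounds)
    A = lo + (hi - lo) * rng.random((n, k))      # two independent
    B = lo + (hi - lo) * rng.random((n, k))      # sample matrices
    yA, yB = model(*A.T, t), model(*B.T, t)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # resample only column i
        yABi = model(*ABi.T, t)
        S[i] = np.mean(yB * (yABi - yA)) / var
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var
    return S, ST

bounds = [(0.05, 1.0), (0.01, 0.06), (0.1, 1.0)]  # u, tau, beta (assumed)
S, ST = sobol_indices(activity, bounds, t=0.2)
```

The sum of the `S[i]` approximates the fraction of the total variance covered by first-order effects, and `ST[i] >= S[i]` holds up to Monte Carlo noise, which matches the interpretation of the two panels of Fig. 4.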
The left-hand side of Fig. 5 shows the variance-based sensitivity functions of every parameter in of Hatze’s model. The curve shape of is similar to and . For , the is again comparable to the second and third rows in Fig. 3. peaks at a small value and is similar to . The main differences are , , and , which are significantly lower than the respective relative sensitivity functions.
On the right-hand side, we again see the total sensitivity indices of Hatze’s model. As above, the and look alike and allow an interpretation in terms of importance. Hence, is as important for Hatze’s model as is for Zajac’s. In the steady state of Hatze’s dynamics, which is reached after a longer time period than in Zajac’s, the importance is again split almost exclusively between (major importance) and (minor importance). All other parameters are of almost negligible importance.
7 Consequences, discussion, conclusions
7.1 A bottom line for comparing Zajac’s and Hatze’s activation dynamics: second order sensitivities
At first sight, Zajac’s activation dynamics (Zajac, 1989) is more transparent, because it is descriptive in the sense that it captures the physiological behaviour of activity rise and fall in an apparently simple way. To this end, it utilises a linear differential equation with well-known properties, allowing for a closed-form solution. It needs only a minimal number of parameters to describe the Ca-ion influx into the muscle as a response to electrical stimulation: the stimulation itself as a control parameter, the time constant for an exponential response to a step increase in stimulation, and a third parameter (deactivation boost) biasing both the rise time and the saturation value of activity depending on stimulation and activity levels. The smaller (deactivation in fact slowed down compared to activation), the faster the very activity level is reached at which saturation would occur for . Saturation for occurs at a level that is higher than . Altogether, in Zajac’s as compared to Hatze’s activation dynamics, the outcome of setting a control parameter value, in terms of how fast and at which level the activity saturates, seems easier for a controller to handle.
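The effect of the deactivation boost on rise time and saturation level can be checked directly from the closed-form solution. A minimal sketch, assuming the first-order parameterisation `da/dt = (u - (beta + (1 - beta) * u) * a) / tau` (the names `u`, `tau`, `beta` are ours, not necessarily the paper's symbols):

```python
import math

def saturation_level(u, beta):
    # Steady state of da/dt = (u - (beta + (1 - beta) * u) * a) / tau:
    # a_inf = u / (beta + (1 - beta) * u) >= u for beta <= 1.
    return u / (beta + (1.0 - beta) * u)

def time_to_level_u(u, beta, tau=0.04):
    """Time at which a(t) = a_inf * (1 - exp(-t * x / tau)) crosses a = u,
    where x = beta + (1 - beta) * u. For beta = 1 the level a = u is the
    saturation value itself and is only reached asymptotically."""
    x = beta + (1.0 - beta) * u
    if x >= 1.0:
        return float("inf")
    return -tau * math.log(1.0 - x) / x

u = 0.3
levels = {b: saturation_level(u, b) for b in (1.0, 0.5, 0.2)}
times = {b: time_to_level_u(u, b) for b in (1.0, 0.5, 0.2)}
# smaller beta: saturation lies above u, and the level a = u is crossed earlier
```

This reproduces the statement in the text: decreasing the deactivation boost both raises the saturation level above the stimulation value and shortens the time needed to reach that stimulation level.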
A worse controllability of Hatze’s activation dynamics (Hatze, 1977) may be expected from its non-linearity, its higher number of parameters, and their interdependent influence on the model dynamics. Additionally, Hatze’s formulation depends on the CE length , which makes the mutual coupling of activation and contraction dynamics more interwoven. So, at first sight, this seems to be a less transparent construct for a controller that has to deal with a muscle as the biological actuator. Regarding the non-linearity exponent , the solution sensitivity furthermore depends non-monotonically on the activity level, in parts even with the strongest influence, in parts without any influence. We also found that the solution is more sensitive to its parameters , , than Zajac’s activation dynamics is to any of its parameters.
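The structure behind this entanglement, a linear Ca-ion influx stage followed by a non-linear, length-dependent saturation, can be sketched numerically. All constants below, the form of the length scaling `rho(l)` with its pole, and the first-order influx equation are illustrative placeholders, not Hatze's actual values or exact formulation:

```python
import numpy as np

def hatze_like_activity(gamma, l_ce, nu=3.0, q0=0.005, rho0=6.0, l_pole=2.9):
    """Hatze-type activation: activity q as a saturating power-law function
    of the free Ca-ion concentration gamma, scaled by a CE-length-dependent
    factor rho(l) with a pole at l_pole (all constants illustrative)."""
    rho = rho0 * l_ce / (l_pole - l_ce)   # pole of the length dependency
    g = (rho * gamma) ** nu               # non-linearity exponent nu
    return (q0 + g) / (1.0 + g)           # saturates between q0 and 1

def integrate(sigma, l_ce, m=10.0, dt=1e-3, T=1.0):
    """Explicit-Euler integration of gamma' = m * (sigma - gamma), an
    assumed first-order form of the Ca-ion influx stage."""
    gamma = 0.0
    for _ in range(int(T / dt)):
        gamma += dt * m * (sigma - gamma)
    return hatze_like_activity(gamma, l_ce)

q_long = integrate(sigma=0.5, l_ce=1.1)
q_short = integrate(sigma=0.5, l_ce=0.9)
# the same stimulation yields different activity levels at different CE lengths
```

Even in this toy version, the output activity depends jointly on the stimulation, the exponent, the length scaling, and the CE length, which is exactly the interdependence that the second-order sensitivities quantify.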
This higher complexity of Hatze’s dynamics becomes even more evident when analysing the second-order sensitivities (see (10) as well as (14) for their relative values). They express how a first-order sensitivity changes upon variation of any other model parameter. In other words, they are a measure of model entanglement and complexity. Here, we found that the highest values amongst all relative second-order sensitivities in Zajac’s activation dynamics are about () and (). In Hatze’s activation dynamics, the highest relative second-order sensitivities are those with respect to or (in particular for , and themselves), with maximum values between about (, ) and (, , , at submaximal activity). That is, they are an order of magnitude higher than in Zajac’s activation dynamics.
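For any scalar model output, such second-order sensitivities can also be approximated by central finite differences. The helper below is a generic numerical sketch for cross-checking; the paper itself integrates sensitivity ODEs rather than differencing:

```python
import numpy as np

def second_order_sensitivity(f, p, i, j, h=1e-4):
    """Central finite-difference estimate of d^2 f / (dp_i dp_j), i.e. how
    the first-order sensitivity w.r.t. p_i changes when p_j is varied.
    Works for i == j as well (reduces to a second difference with step 2h).
    f maps a parameter vector to a scalar model output."""
    hi, hj = h * abs(p[i]), h * abs(p[j])
    def shifted(di, dj):
        q = np.asarray(p, dtype=float).copy()
        q[i] += di * hi
        q[j] += dj * hj
        return f(q)
    return (shifted(1, 1) - shifted(1, -1)
            - shifted(-1, 1) + shifted(-1, -1)) / (4.0 * hi * hj)

# check against an analytic mixed derivative: for f = p0^2 * p1,
# d^2 f / (dp0 dp1) = 2 * p0 = 4 at p = (2, 3)
s2 = second_order_sensitivity(lambda q: q[0] ** 2 * q[1], [2.0, 3.0], 0, 1)
```

A relative second-order sensitivity in the spirit of (14) would then follow by scaling with the parameter values and the output, e.g. `p[i] * p[j] * s2 / f(p)` (assuming that normalisation).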
Yet, we have to acknowledge that Hatze’s activation dynamics contains crucial physiological features that go beyond Zajac’s description.
7.2 A plus for Hatze’s approach: length dependency
It has been established that the length dependency of activation dynamics is both physiological (Kistemaker et al., 2005) and functionally vital (Kistemaker et al., 2007), because it largely contributes to low-frequency muscle stiffness. It has also been verified that Hatze’s model approach provides a good approximation of experimental data (Kistemaker et al., 2005). In that study, was used without comparison to the case. There seem to be arguments in favour of from a mathematical point of view: especially the less changeful scaling of the activation dynamics’ characteristics down to very low activity and stimulation levels, and particularly a remaining CE length sensitivity of the dynamics, seem to be advantages compared to the case. This applies provided that is in accordance with physiological reality. Experimental data resolving activation dynamics as a response to very low muscular stimulation levels seem to be missing from the literature so far: the lowest level analysed in Kistemaker et al. (2005) was , i.e., comparable to the second rows from the top in Figs. 2 and 3.
7.3 An optimal parameter set for Hatze’s activation dynamics plus CE force-length relation
Sensitivity analysis allows us to rate Hatze’s approach as an entangled construct. Additionally, Kistemaker et al. (2005) decided to choose without giving a reason for discarding . Also, it seems that they did not perform an algorithmic optimisation of their muscle parameters to fit known shifts in optimal CE length at submaximal stimulation levels, i.e., the CE length value at which the submaximal isometric force