On the Kolmogorov-Sinai entropy of many-body Hamiltonian systems

Arul Lakshminarayan Department of Physics, Indian Institute of Technology Madras, Chennai, 600036, India    Steven Tomsovic Department of Physics, Indian Institute of Technology Madras, Chennai, 600036, India Department of Physics and Astronomy, Washington State University, Pullman, Washington 99164-2814, USA (permanent address)

The Kolmogorov-Sinai (K-S) entropy is a central measure of complexity and chaos. Its calculation for many-body systems is an interesting and important challenge. In this paper, the evaluation is formulated by considering 2N-dimensional symplectic maps and deriving a transfer matrix formalism for the stability problem. This approach makes explicit a duality relation that is exactly analogous to one found in a generalized Anderson tight-binding model, and leads to a formally exact expression for the finite-time K-S entropy. Within this formalism there is a hierarchy of approximations, the final one being a diagonal approximation that only makes use of instantaneous Hessians of the potential to find the K-S entropy. By way of a non-trivial illustration, the K-S entropy of N identically coupled kicked rotors (standard maps) is investigated. The validity of the various approximations with kicking strength, particle number, and time is elucidated. An analytic formula for the K-S entropy within the diagonal approximation is derived and its range of validity is also explored.


I Introduction

The Kolmogorov-Sinai (K-S) entropy is a widely used measure of chaos in dynamical systems Lichtenberg and Lieberman (1983); Ott (2002) and indicates the exponential rate at which information is produced due to the complex dynamics. It is also the exponential rate at which certain N-dimensional phase space differential surfaces expand, where N is the number of degrees of freedom. It is positive only for chaotic systems and vanishes otherwise. According to Pesin’s theorem Eckmann and Ruelle (1985), in a closed fully chaotic system it is also the sum of all positive Lyapunov exponents, which measure the local exponential divergence of phase-space trajectories. In an open or scattering system, the difference between the sum of the positive Lyapunov exponents and the K-S entropy may be related to transport properties like diffusion coefficients Gaspard and Nicolis (1990), thus providing a link between the macroscopic and microscopic dynamics. Given its central importance, it has not only been calculated in low-dimensional and model systems, but also in many-body systems such as dilute hard sphere/disk gases van Beijeren et al. (1997); Dorfman et al. (1998); de Wijn (2005) and dilute wet granular gases Fingerle et al. (2007). It transplants the notion of entropy, originally introduced as a measure of disorder in statistical mechanics, into the context of low-dimensional deterministic dynamical systems.
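As a numerical aside, Pesin's relation can be illustrated in the simplest setting of one degree of freedom, where the K-S entropy reduces to the single positive Lyapunov exponent. The following sketch estimates that exponent for the Chirikov standard map by propagating a tangent vector; the map form and parameter values here are illustrative and are not drawn from this paper's many-body model.

```python
import numpy as np

def standard_map_lyapunov(K, steps=50_000, theta0=0.3, p0=0.2):
    """Largest Lyapunov exponent of the Chirikov standard map
        p'     = p + K sin(theta)
        theta' = theta + p'   (mod 2*pi),
    estimated by propagating a single tangent vector and renormalizing.
    For one degree of freedom this is also the K-S entropy (Pesin)."""
    theta, p = theta0, p0
    v = np.array([1.0, 0.0])        # tangent-space vector
    log_growth = 0.0
    for _ in range(steps):
        c = K * np.cos(theta)
        # one-step Jacobian d(theta', p')/d(theta, p); det = 1 (symplectic)
        J = np.array([[1.0 + c, 1.0],
                      [c,       1.0]])
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
        v = J @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v = v / norm
    return log_growth / steps
```

For strong kicking the result should approach the Chirikov estimate ln(K/2) discussed later in the paper.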

The finite-time K-S entropy turns out to be the natural measure that appears in semiclassical mechanics and provides, along with the principle of uniformity of periodic orbits and periodic-orbit information (i.e. actions, periods, and Maslov indices), the route to inferring quantum spectra from classical dynamics via the Gutzwiller trace formula Gutzwiller (1971). It appears to have important connections to the rate of entropy or entanglement production in strongly chaotic, but weakly coupled systems, the rate being proportional to the K-S entropy Miller and Sarkar (1999). Finite-time Lyapunov exponents have also proven useful in many contexts and were recently used for instance as indicators of the presence of very small regular islands in an otherwise chaotic phase space Tomsovic and Lakshminarayan (2007); Vallejo et al. (2008); Manchein et al. (2009). In high-dimensional systems, it seems more natural to study a finite-time K-S entropy and look at the fluctuations of this quantity in an ensemble of trajectories as a potential measure that can indicate non-ergodicity.

This paper addresses the issue of calculating the K-S entropy from these motivations and restricts attention to Hamiltonian systems in the form of symplectic maps Lichtenberg and Lieberman (1983). In particular, the connection to a transfer-matrix formulation is made explicit, wherein a (transformed) Jacobian plays this role, and the problem is formulated in terms of the spectra of block tridiagonal Hermitian matrices. A general duality relation emerges that has an exact mapping to previous work on Anderson models of transport Molinari (1998, 2003). A generalization of the Thouless relation gives an exact formula for the finite-time K-S entropy in terms of an average over a phase at the boundary. Although the connection between the Lyapunov exponents of transfer matrices and localization lengths has been well known for quite some time Pichard and Sarma (1981), to our knowledge the generalization to the K-S entropy of classical symplectic maps is absent. Nevertheless, the ingredients from the localization and generalized transfer matrix literature are indeed there Molinari (1998).

In this paper the focus is on the infinite time limit (asymptotic) entropy. The formalism is used to derive a simply calculable upper bound for the K-S entropy that becomes better the stronger the chaos as well as with the number of particles in the system. Furthermore, it is possible to argue for a series of increasingly restrictive approximations culminating in the “diagonal approximation” of the K-S entropy that gives estimates often much better than the upper bound in regimes of fully developed chaos. The diagonal approximation was discussed in some detail recently in the context of two-dimensional maps via the standard map Tomsovic and Lakshminarayan (2007). A generalization to higher dimensions is provided herein. The treatment, though restricted to symplectic maps or Poincaré sections of Hamiltonian flows, can conceivably be usefully extended to continuous flows.

Both the diagonal approximation and the upper-bound are defined in terms of spectra of instantaneous Hessians of a potential (through the Hessian of the action). Studying stability problems using the Hessians of potentials to calculate quantities of a similar nature as the K-S entropy is not new. They appear naturally in stability problems as part of a curvature. For instance, they appear in using Riemannian geometry to study stability, in particular to find the largest Lyapunov exponent Casetti et al. (1996). Also, a Hamilton-Jacobi formulation of the K-S entropy Partovi (2002) uses similar quantities. The approach presented in this paper, though, is different as it is based on generating functions and generalized transfer matrices. It provides routes for several approximations and bounds in both the chaotic and moderately chaotic regimes.

The diagonal approximation is applied to the K-S entropy of a system of strongly interacting kicked rotors, which may be considered to be a kicked version of the well-studied Hamiltonian mean field model Antoni and Ruffo (1995), and which has also been studied for some time Konishi and Kaneko (1992). That interest continues; for example, Ref. Woellner et al. (2009), which calculates the K-S entropy numerically among other results, is related to the present work. An analytical expression for the K-S entropy is derived in this paper by finding the density of states of the potential Hessian. This is perhaps the first analytical evaluation of the K-S entropy in a nontrivial model of a many-body chaotic system of this kind, whereas for the two-dimensional standard map (kicked rotor) the Chirikov estimate of the Lyapunov exponent or the K-S entropy has long been known Chirikov (1979).

II From transfer matrices to the K-S entropy

Consider an N-degree-of-freedom system with phase space coordinates. The discrete time evolution, indicated by the subscripts, is derived from a generating function of the old and new coordinates,


This follows from the stationarity of the action at the interior points of trajectories of length t. The action derivatives at the edges of the trajectory are non-zero, giving the momenta at these points. The analogy with transfer matrices becomes more complete ahead with the introduction of the auxiliary generating function


where the added parameter is a real number. This is equivalent to adding an isotropic harmonic oscillator to the potential; setting its frequency to zero recovers the original dynamics. Denoting the discrete time as subscripts, the Jacobian matrix propagates phase space variations, or is the tangent map:


Define the transfer matrix as


and let be the matrix such that


The following relation then holds


Defining the matrices and as






Note that while is a symmetric matrix, may be, but generally is not. In the following, it is assumed that is a positive semidefinite matrix, which implies that the symplectic map generated by it is a twist map. The Jacobian and transfer matrices can be written in terms of the matrices introduced above with the addition of defined as


Denoting the dependence on the real parameter as the Jacobian is


and the transfer matrix is


The Jacobian and transfer matrices over a time are the products and , respectively. By construction, is a symplectic matrix, but the transfer matrix generally is not. To relate the spectral properties of , which is of interest, to those of transfer matrices, a closely related matrix


is introduced. This matrix must be isospectral with since the pair are related by a similarity transformation, . It follows also that is symplectic,


Therefore the characteristic polynomial of , is identical to that of , and is reflexive, i.e. . Note that in the case of the central orbit being periodic, or of the dynamics being independent of time, this matrix coincides with the transfer matrix. The latter is the case for many multi-dimensional symplectic maps considered in the literature so far.

The advantage of the transfer matrix approach, rather than working directly with the quantity of interest, i.e. the Jacobian, is that it immediately motivates the construction of a “Hamiltonian”, not to be confused with the one which may be generating the dynamics. Before constructing this fictitious Hamiltonian, the “boundary” conditions need to be ascertained. Let be an eigenvector of with eigenvalue , then


which implies that


Again, if the orbit were periodic or the dynamics independent of time, the conditions would be simpler. Using the transfer matrix, a vector can be built out of the initial variation that is an eigenvector. Using the above boundary conditions it follows that this vector satisfies the eigenvalue equation


where is the dimensional matrix:


Thus there is a duality in the sense that if and only if has an eigenvalue , has the eigenvalue . The “Hamiltonian” above is the Hessian of the action when .

The matrix is Hermitian when the parameter lies on the unit circle. It is a block-tridiagonal matrix with corner blocks. When , such a formulation for periodic orbits has been known for some time and used by Bountis, Helleman Bountis and Helleman (1981) and Greene Greene and Kim (1987) to study stability of orbits in the Chirikov-Taylor standard map. For an extension was discussed, again for periodic orbits, by Kook and Meiss Kook and Meiss (1989). In all these cases, was the relevant parameter. The extension here is valid for arbitrary orbits and includes a parameter that makes the analogy with the literature on generalized tight-binding models direct. In this context, the works of Molinari Molinari (1998, 2003) discuss the extension beyond one dimension and applications include multichannel scattering in mesoscopic systems. There, is the energy and the eigenvalues of the transfer matrix relate to localization. The relation of Eq. (18) follows by considering how variations evolve under a Newtonian formulation of dynamics in terms of second difference operators; however, the above method explicates the structure of the transfer matrix and its relationship to the Jacobian. Classical mechanics is naturally equipped with these structures.

There is a simple relationship between the characteristic polynomial of and , the characteristic polynomial of and . This is essentially the “duality” relation that was discussed in the context of transfer matrices by Molinari and is a generalization of that stated by Kook and Meiss for periodic orbits Kook and Meiss (1989). The simultaneous vanishing of the polynomials implies their proportionality: . is a polynomial in of order , with the leading term being , and runs from to . Thus the coefficient of in provides the proportionality constant. Using the basic definition of the determinant as a signed sum of products of matrix elements whose row and column indices are permutations, it is possible to conclude that this is Molinari (2008). This gives the duality relation stated in terms of the original Jacobian matrix polynomial as:


It is consistent with the relations mentioned for periodic orbits in Kook and Meiss (1989) as well as in the work on transfer matrices.

There are several consequences of this duality relation that can be derived, and are potentially useful in estimating entropies of dynamical systems. First consider the case , which corresponds to the generating function of interest. Let the set of eigenvalues of be . From the reflexive property it follows that the reciprocal is also an eigenvalue and from the reality of the polynomial (even if it is always assumed real here) that the complex conjugate is also an eigenvalue. Thus the eigenvalues either come in pairs, in the case or is zero, or in quartets. If the orbits are periodic, these different cases correspond to elliptic, hyperbolic and loxodromic orbits respectively. The numbers are important and indicate the exponential instability; they are finite-time stability exponents and are closely allied to the well-studied finite-time Lyapunov exponents. In the infinite time limit for generic systems they comprise the complete spectrum of Lyapunov exponents. The sum of the finite-time positive elements may therefore be considered more or less a finite-time K-S entropy.

An exact formula for this finite-time K-S entropy follows by applying Jensen’s formula Conway (1978) to the analytic function , where is now considered as a complex variable. Jensen’s formula is a generalization of the Mean Value Principle of harmonic functions, and although is not a harmonic function on the unit circle centered at the origin due to the zeros of in the interior, these zeros can be removed in a suitably redefined function. That results in:


From this exact relation for comes the first level of approximation. Under some circumstances, such as long times and/or chaos that is not too weak, the -dependence of becomes negligible. Without going into details just yet (to be discussed ahead), for the many-body kicked rotors there are very interesting, but not fully understood, regimes of behavior which emerge on this point. In particular and unexpectedly, for some initial conditions, values of interaction strength, times, and to the accuracy of the numerical calculations done,


where indicates that the time transient contribution due to a center of mass coordinate is first subtracted from . In those cases, effectively is not an approximation and it is possible to study finite-time K-S entropy fluctuations via the simpler quantity . It could be useful for finding non-ergodic dynamics following the methods of Refs. Tomsovic and Lakshminarayan (2007); Vallejo et al. (2008); Manchein et al. (2009), but is outside the scope of this paper.
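The exact boundary-phase average can be spot-checked numerically in the simplest one degree of freedom case: by Jensen's formula, averaging ln|det(J_t − e^{iφ}I)| over the phase φ reproduces t times the sum of the finite-time stability exponents whose eigenvalues lie outside the unit circle. The following sketch does this for the standard map; the default initial condition (a hyperbolic fixed point) is an assumption made here to give a closed-form answer, and is not special to the argument.

```python
import numpy as np

def jensen_check(K=8.0, t=10, theta0=0.0, p0=0.0, n_phase=512):
    """Compare the boundary-phase average
        (1/(2 pi t)) * Integral d(phi) ln|det(J_t - e^{i phi} I)|
    with the direct sum (1/t) * sum over |lambda| > 1 of ln|lambda|
    for the t-step Jacobian J_t of the standard map; Jensen's formula,
    applied to the characteristic polynomial, says they are equal."""
    theta, p = theta0, p0
    Jt = np.eye(2)
    for _ in range(t):
        c = K * np.cos(theta)
        Jt = np.array([[1.0 + c, 1.0], [c, 1.0]]) @ Jt  # one-step tangent map
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
    # phase average via the trapezoid rule on a periodic grid
    phis = 2.0 * np.pi * np.arange(n_phase) / n_phase
    vals = [np.log(abs(np.linalg.det(Jt - np.exp(1j * ph) * np.eye(2))))
            for ph in phis]
    phase_avg = np.mean(vals) / t
    # direct sum over eigenvalues outside the unit circle
    direct = sum(max(np.log(abs(lam)), 0.0)
                 for lam in np.linalg.eigvals(Jt)) / t
    return phase_avg, direct
```

For the fixed point at the origin the exponent is ln[(2 + K + sqrt((2 + K)² − 4))/2] per step, and the two evaluations agree to high accuracy.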

In the limit of large times, reaches its asymptotic value, which is the K-S entropy


( averaging no longer needed). The quantity defined herein, , appears in many considerations of a semiclassical kind and the study of its behaviors and fluctuations is of considerable interest Gutzwiller (1990); Elton et al. (2010). If the normalized density of eigenvalues of is denoted as , the entropy can be written as


Although the spectral problem seems to have been compounded by now focusing attention on an Nt-dimensional matrix instead of products of 2N-dimensional ones, in fact the eigenvalues of the large matrix are usually bounded, whereas those of the product increase exponentially.

A second level of approximation suggests itself here and is denoted the banded approximation. As the spectrum depends little on the nature of the corner blocks, they may as well be neglected entirely. In that case, is truly banded and its determinant and spectrum can replace that of in Eqs. (22,23). This approximation can be extremely useful numerically as the diagonalization can be performed much more efficiently. Very little is lost in making this approximation and it is expected to be valid even well into the weakly chaotic regime. It may yet be possible to study finite-time fluctuations at this level of approximation.
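The insensitivity to the corner blocks can be illustrated in the simplest scalar setting. The toy below (a free, zero-curvature second-difference matrix, not the kicked-rotor Hessian itself) compares the spectrum of a tridiagonal chain with and without the corner entries that close it; the maximum deviation between the sorted spectra shrinks like 1/t.

```python
import numpy as np

t = 400  # "time", i.e. the matrix dimension in this scalar (N = 1) toy
# second-difference matrix: 2 on the diagonal, -1 on the off-diagonals
H_banded = (np.diag(2.0 * np.ones(t))
            + np.diag(-np.ones(t - 1), 1)
            + np.diag(-np.ones(t - 1), -1))
H_full = H_banded.copy()
H_full[0, -1] = H_full[-1, 0] = -1.0   # corner entries closing the chain

ev_banded = np.sort(np.linalg.eigvalsh(H_banded))  # 2 - 2 cos(k pi/(t+1))
ev_full = np.sort(np.linalg.eigvalsh(H_full))      # 2 - 2 cos(2 pi m/t)
max_gap = np.max(np.abs(ev_banded - ev_full))      # shrinks like 1/t
```

For large t the two spectral densities are indistinguishable, which is the content of the banded approximation; the truly banded form also admits much faster specialized eigensolvers.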

III An upper bound, the diagonal approximation, and an estimate

The following considerations are for a specialized class of generating functions, mechanical-type, that are still general enough to contain many of the Hamiltonians studied in the literature on coupled maps. The idea is to focus on the essential details and facilitate interpretation, as the generalizations are straightforward. Let


where is a constant symmetric positive semidefinite matrix. Then the symplectic map is


In this case the matrices are independent of time , and the instantaneous transfer matrix is a similarity transform of the Jacobian matrix . The “Hamiltonian” is


where is the instantaneous Hessian matrix of the potential. Thus in this case the first block of the matrix remains as if the orbit were periodic.

The matrix, although Hermitian, is not positive semidefinite. Consider the diagonal blocks of the positive definite matrix; these are themselves clearly positive semidefinite. A generalization of the well-known Hadamard determinant inequality may be applied: if a positive semidefinite matrix is partitioned into square diagonal blocks that are themselves positive semidefinite, then its determinant is bounded above by the product of the determinants of those blocks. In the instance that the blocks are simply the diagonal elements, this is the usual Hadamard inequality. The generalization provides a better upper bound and may be compared with other generalizations of the inequality such as Fischer’s Horn and Johnson (1985). Applying this inequality results in an upper-bound for the K-S entropy

where is the unique positive semidefinite square-root of the matrix . The product of the mass matrix with the potential Hessian is written such that the resulting matrix remains symmetric; however, this is not necessary and the form may be used instead. The angular brackets indicate replacing the time-average by a space-average, assuming ergodicity as in the case of chaotic dynamics. The final equality is written in terms of , the normalized density of eigenvalues of the dimensional symmetric matrices . An ensemble is formed out of the instantaneous potential Hessians to construct such a density. For many systems this is an easily calculable upper-bound depending on the Hessian of the potential. In terms of an interpretation, the local instantaneous Hessians provide an equivalent set of harmonic oscillators and the instability is bounded by this system. The larger the potential curvature is, the larger the upper-bound can be. Surprisingly, the upper-bound can almost be reached, and it gets better with increased chaos and/or particle number.
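The block generalization of Hadamard's inequality invoked above is easy to test numerically: for a positive semidefinite matrix partitioned into square diagonal blocks, the determinant is bounded by the product of the block determinants. A quick check on a random positive definite matrix (block sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n_blocks, m = 4, 3                     # 4 diagonal blocks of size 3
dim = n_blocks * m
A = rng.standard_normal((dim, dim))
P = A @ A.T                            # generic positive definite matrix

# work in log-determinants to avoid overflow/underflow
sign, logdet_P = np.linalg.slogdet(P)
block_logdet_sum = 0.0
for b in range(n_blocks):
    blk = P[b * m:(b + 1) * m, b * m:(b + 1) * m]
    s, ld = np.linalg.slogdet(blk)     # each diagonal block is also PSD
    block_logdet_sum += ld
# block Hadamard: det(P) <= prod_b det(P_bb), i.e. in logs:
inequality_holds = logdet_P <= block_logdet_sum + 1e-9
```

The inequality is generically strict, and iterating Fischer's inequality over the partition gives exactly this bound.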

As the upper-bound is dominated by the diagonal blocks with the Hessian of the potential it may be expected in strongly chaotic cases that just the diagonal blocks provide an approximation to the whole matrix. This constitutes the “diagonal approximation” which can be written as the assertion that:


In fact for the diagonal approximation to be valid it must turn out that , thus ignoring the different ways the mass matrices enter the definitions.

A detailed study of this approximation was made for the case of the Chirikov-Taylor standard map in Tomsovic and Lakshminarayan (2007). Note that this is a “local” approximation in the sense that it is built out of adding contributions at each instant of time. The above derivation does not estimate errors and may be regarded at this stage as heuristic. However, recall that for the one-dimensional case it is possible to write an expansion of the determinant and estimate errors. In fact it is worth pointing out that for the one-dimensional standard map with , the upper bound is given by


which is close to the widely used estimate of Chirikov Chirikov (1979) that coincides with the diagonal approximation:


A more detailed and careful analysis of this case gave


thus showing that the diagonal approximation is closer to the “exact” entropy (in the one-dimensional case this is the same as the Lyapunov or stability exponent) than is the upper-bound. Nevertheless, both the approximation and the bound are indeed close to the actual value.
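The Chirikov estimate quoted here rests on the exact angle average ⟨ln|K cos θ|⟩ = ln(K/2), which follows from the classical integral of ln|cos| over a period being −ln 2. A direct quadrature sketch of just that average (not of the full Lyapunov computation):

```python
import numpy as np

def avg_log_abs_K_cos(K, n=200_001):
    """Midpoint-rule evaluation of the uniform-angle average of
    ln|K cos(theta)| over [0, 2 pi); the exact value is ln(K/2).
    The log singularities at the zeros of cos are integrable and the
    midpoint grid never lands exactly on them."""
    theta = (np.arange(n) + 0.5) * (2.0 * np.pi / n)
    return np.mean(np.log(np.abs(K * np.cos(theta))))
```

Agreement with ln(K/2) to three or more digits is obtained already with a modest grid.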

Consider for a moment that under the action of completely chaotic dynamics, the entries of the product of the inverse mass matrix with the Hessian become effectively random. The determinant in Eq. (29) could be treated as a random variable with normalized probability density and a variance . In that case


where and has only weak dependence on particle number or interaction strength (assuming the system is chaotic). Thus, the result is the logarithm of the width of the density plus a remaining integral that is nearly a constant. In the case of a Gaussian density for , the constant is , where is the Euler-Mascheroni constant. In the case of a lognormal density, the constant is unity. If the distribution is not exactly zero-centered in its mean, then corrections emerge ordered in powers of the ratio of the mean to standard deviation. For example, the positive-off-centered Gaussian density gives


a formula that is used ahead with minor modifications. Two forms are given for the result of the integral. The first is the convergent series, whereas the second is the asymptotic series. The latter is far more useful due to slow convergence of the former if the width is much smaller than the mean. Finally, note that some significant information is lost in making the diagonal approximation. There is little reason to expect it to work well enough numerically to study finite-time fluctuations just by limiting the number of consecutive instantaneous Hessians (the analytical expression is already fully ergodically averaged and carries no fluctuation information), and it cannot be expected to hold in the weakly chaotic regime as well as the banded approximation does.
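The zero-centered Gaussian case above can be spot-checked by Monte Carlo against the standard result ⟨ln|x|⟩ = ln σ − (γ + ln 2)/2 for x ~ N(0, σ²), with γ the Euler-Mascheroni constant; this is quoted here as a textbook fact, to be matched against whichever width convention the text adopts.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.5
x = sigma * rng.standard_normal(2_000_000)   # zero-centered Gaussian samples
mc_avg = np.mean(np.log(np.abs(x)))          # Monte Carlo <ln|x|>

gamma = 0.5772156649015329                   # Euler-Mascheroni constant
exact = np.log(sigma) - 0.5 * (gamma + np.log(2.0))
```

With two million samples the two agree to roughly three decimal places, the Monte Carlo error scaling as the inverse square root of the sample size.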

IV K-S entropy of a coupled map lattice

Consider N identical particles, each of whose configuration space is a circle, interacting via two-body forces that depend only on the angular distance; in this section the notation is changed such that subscripts label particles rather than time. Assume the mass matrix is diagonal with unit elements. The potential can be represented as and there is a center of mass degree of freedom that can be removed from the calculation for any number of particles. Conveniently, the two particle case reduces to the one particle case with a renormalized interaction strength and can be used as a check. Before removing the center of mass, the Hessian matrix has the property that the sums of row or column elements vanish.

In order to work through a concrete system in detail, a system of coupled Chirikov-type maps is introduced that has been studied for some time and has many interesting features Konishi and Kaneko (1992). It can also be considered as the kicked version of the well-studied Hamiltonian mean field model Antoni and Ruffo (1995). The Hamiltonian is


and the resulting map connecting variables just before subsequent kicks is


The kicking strength has been renormalized with the particle number so that the final K-S entropy will be extensive. In much of the previous research with this model, this normalization has been used primarily so that, with the same kicking strength, an increasing number of particles produces the same order of change in momentum for each of the particles. This follows as the sum over the sines will be of the order of √N if the angles mimic a random distribution. There is an additional closely related system, for which it is also useful to make a few brief comments. It has particles on a ring, but only the nearest neighbors (NN) interact via the above forces. For this NN model, the renormalization of the interaction strength by the particle number is not helpful and is omitted. All remarks on this system are clearly preceded by a reference to the NN model.
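One kick of the all-to-all model can be sketched as follows; the sign convention of the force and the order of the updates are assumptions about the map, chosen so that the total momentum (the center of mass degree of freedom mentioned above) is an exact invariant.

```python
import numpy as np

def coupled_rotor_kick(theta, p, K):
    """One step of N all-to-all coupled kicked rotors (assumed form):
        p_i'     = p_i + (K/N) * sum_j sin(theta_j - theta_i)
        theta_i' = theta_i + p_i'   (mod 2*pi)
    The antisymmetry of sin under i <-> j makes the total momentum
    an exact invariant of the kick."""
    N = len(theta)
    force = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    p_new = p + force
    theta_new = (theta + p_new) % (2.0 * np.pi)
    return theta_new, p_new
```

The K/N prefactor is the normalization discussed in the text; dropping it recovers the NN-style unnormalized coupling.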

For zero kicking strength the system is integrable since all the momenta are constants of the motion. For more than a few particles, as the kicking strength is changed away from zero even by small amounts, it is found numerically that the system develops significant chaotic dynamics. Even for values on the order of, but less than, unity, it is already extremely difficult to detect any remnants of stable dynamics. For a given configuration of particles the Hessian is given by:


and the mass matrix is always the identity matrix.
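The instantaneous Hessian can be assembled directly; with the assumed form V = −(K/N) Σ_{i<j} cos(θ_i − θ_j) of the potential, the vanishing row sums (and hence the exact center of mass zero mode) come out automatically.

```python
import numpy as np

def rotor_hessian(theta, K):
    """Instantaneous Hessian of the (assumed) potential
    V = -(K/N) * sum_{i<j} cos(theta_i - theta_j):
        off-diagonal  -(K/N) cos(theta_i - theta_j),
        diagonal      +(K/N) sum_{j != i} cos(theta_i - theta_j).
    Row sums vanish, so the uniform vector is an exact zero mode."""
    N = len(theta)
    H = -(K / N) * np.cos(theta[:, None] - theta[None, :])
    np.fill_diagonal(H, 0.0)
    np.fill_diagonal(H, -H.sum(axis=1))  # diagonal enforces zero row sums
    return H
```

An ensemble of such matrices over ergodically sampled angles supplies the density of states used in the diagonal approximation.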

Figure 1: The dependence of . The upper panel shows for three distinct initial conditions compared with the values shown as horizontal dashed lines; in each case, the known transient contribution is subtracted from . All three values here happen to match the average nearly exactly (to at least 5 decimal places). The solid flat line is the average over initial conditions, thus giving some idea of the scale of the finite-time entropy fluctuations. In the lower panel, a less well converged case, i.e. smaller and , is shown. There is more structure as a function of and the value is not equal in this case to the -average shown here as the solid line. Note that has an exact reflection symmetry about and it is not necessary to show the range .

The first step is to check the validity of averaging over in obtaining the finite time K-S entropy. It is straightforward to show for , i.e. , that the center of mass coordinate generates a transient contribution to the finite time entropy of and this term is subtracted from for the purposes of this discussion. Figure 1 illustrates the behavior of in two regimes, one for stronger chaos and longer times and its opposite. Remarkably, for each of the three initial conditions pictured in the upper panel, equals the average of . The -dependence is essentially identical for each initial condition except for an overall displacement, and it is this displacement which contains all the information contained in the fluctuations of the finite-time K-S entropy. For these parameters, the fluctuations are a small fraction of the infinite-time K-S entropy. The lower panel shows the more complicated -dependence found at shorter times and weaker chaos. In this case, does not equate to the -average. Curiously, after checking several other initial conditions, it appears as though the difference takes on only an approximately discrete set of values. Overall, it is found that replacing by is an excellent approximation for large times.

Figure 2: The difference with time in the K-S entropy estimate calculated using the full and its banded counterpart in which the corner blocks are neglected. The transient of at time and the transient of the banded approximation were both subtracted before taking the difference between the two approximations.

These results raise the question of how well the banded approximation compares to . Just as has a center of mass transient term, so also does the banded matrix. However, it is straightforward to show that its value is not the same as for . Rather, it is and likewise is subtracted for the comparisons discussed here. In Fig. 2, the difference between the two approximations for is plotted as a function of time for a reasonably strongly chaotic case. There is no detectable difference beyond a random-like statistical error. In fact, this statistical error is smaller than the finite-size fluctuations in , and it may therefore be possible to get good estimates of the finite-time fluctuations even using the banded approximation, at least for strongly chaotic systems. Though not shown here, the case had a very similar appearance except with a larger statistical error.

It remains only to understand its density of states under the ergodic assumption that the are uniformly distributed in over time and uncorrelated with respect to each other. The density of states is not that of a full random symmetric matrix, namely the Wigner semicircle, as there are only random numbers input in determining the Hessian matrix elements. In addition, the diagonal elements are a sum of random contributions whereas the off-diagonal elements have just one. Thus the spectra must somehow reflect this diagonal dominance.

From Eq. (33), in the diagonal approximation finding this system’s K-S entropy requires only the normalized spectrum of the potential’s instantaneous Hessian. In fact, the density of states has a very interesting and curious structure in which there are three exceptional eigenvalues, whereas the remaining eigenvalues form a Gaussian sea centered near zero. One of the three exceptional eigenvalues is the exact eigenvalue corresponding to the center of mass and the uniform mode, and the remaining two exceptional eigenvalues are “outliers”, which for large particle number are well separated from the Gaussian sea formed by the others.

The outliers’ origins may be seen by considering the matrix , which results from the superposition of two one-dimensional projectors:


where , and , ensure normalization. Then the element is identical to the element of the Hessian if the scaling factor . Choosing this value of , the diagonal elements of are constant and equal to . As is the sum of two one-dimensional projectors, it has exactly two non-zero eigenvalues, say and , which can be found from and . The relations are:


whose solutions are


These eigenvalues move toward infinity with increasing particle number and are each always close in value to . In fact, their ensemble averaged root mean square displacement from this value is lower order and given by . Furthermore, it is possible to deduce a full probability density for these eigenvalues assuming the set behaves as uniformly random variables on the interval , i.e. the ergodic expectation for a fully chaotic dynamics. One method is to calculate the moments,


which are the moments of an exponential probability density. Another method is to realize that the sum is actually the sum of squares of two simple sums, to each of which a central limit theorem applies, and that leads to a probability density. Again, this is exponential and consistent with the moment method. Therefore, for


Thus, the initial scaling of the parameter as leads to the two nonzero eigenvalues of being of the order of , and their fluctuations to be of order and hence independent of .
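Both structural claims above lend themselves to a quick numerical sketch, under the assumptions used here (the cosine-of-differences matrix and uniform, independent angles): the matrix cos(θ_i − θ_j) is the sum of two rank-one projectors and so has exactly two nonzero eigenvalues, fixed by a 2×2 Gram matrix; and the scaled squared resultant [(Σcos θ)² + (Σsin θ)²]/N tends to an exponential distribution by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(11)

# (i) rank-2 structure: C_ij = cos(theta_i - theta_j) = u u^T + v v^T
N = 50
theta = rng.uniform(0.0, 2.0 * np.pi, N)
u, v = np.cos(theta), np.sin(theta)
C = np.outer(u, u) + np.outer(v, v)
eig = np.sort(np.linalg.eigvalsh(C))
# the two nonzero eigenvalues come from the 2x2 Gram matrix of u and v
gram = np.array([[u @ u, u @ v], [u @ v, v @ v]])
pair = np.sort(np.linalg.eigvalsh(gram))   # trace is exactly N

# (ii) exponential density of the scaled squared resultant
samples, M = 50_000, 100
ang = rng.uniform(0.0, 2.0 * np.pi, (samples, M))
s = (np.cos(ang).sum(axis=1) ** 2 + np.sin(ang).sum(axis=1) ** 2) / M
# CLT: each sum is ~ Normal(0, M/2), so s -> Exp(1):
# mean ~ 1 and P(s < 1) ~ 1 - e^{-1}
```

The two nonzero eigenvalues each sit near N/2, their sum being exactly N, consistent with the outlier scaling described in the text.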

The eigenvectors, and , require a little bit more algebra to find because the two projector states are not orthogonal, i.e. . However, it turns out that for a given just ahead, they can be expressed as


The determination of follows from imposing the conditions and , which require that




Applying to the eigenvectors generates a relation between the value of and the eigenvalue difference,




Although the matrix is not the instantaneous Hessian, , that is needed for the K-S entropy problem, much of its spectral structure described above survives. The critical distinctions between the two matrices are entirely related to the diagonal elements as the off-diagonal elements are connected exclusively by a sign change. Alternatively, one could write that , where is a diagonal matrix whose diagonal elements are given by


Relative to the spectrum of , the identity operator introduces a trivial shift by two of the entire spectrum and can be accounted for at the very end. The matrix, on the other hand, is responsible for ensuring the exact center of mass eigenvalue (zero) and eigenvector (constant coefficients ). Note that the center of mass can have nothing to do with the K-S entropy. However, unlike the two previous levels of approximation, in which the transient was decreasing with increasing time, the center of mass eigenvalue here leads to a constant contribution independent of time. Its eigenvalue must be excluded from the spectrum of . also introduces three more effects: i) the two projector states cease being exact eigenstates, ii) they have non-vanishing overlaps with the center of mass eigenvector, and iii) the remaining formerly zero eigenvalues all take on non-zero values.

Nevertheless, adding the diagonal elements is found neither to shift the two special levels significantly nor to alter the width of their variation. An educated guess as to the approximate structure of the perturbed projector eigenstates is that the overlap with the center of mass eigenstate should be subtracted, slightly altered to maintain orthogonality, and the normalization constants recalculated, i.e.


where, for example, would satisfy the relation


which would give a new rotation differing from the previous one by an shift. As the center of mass eigenvalue is zero, removing its small eigenstate contribution to the previous projector eigenstates has little to no effect on the eigenvalues. A priori, one might have anticipated that the loss of the term from the diagonal might have moved these levels, but in fact that does not happen.

That leaves the eigenvalues to be understood. They are related to the introduction of the diagonal elements, which are of higher order than the off-diagonal elements, and those elements are mostly accounted for by the projector states. It is reasonable to expect, and it is found, that the remaining eigenstates are rather localized in nature, except for having to be orthogonal to the three special extended states. The diagonal elements behave like Gaussian random numbers by a central limit theorem, and one can anticipate that the eigenvalues form a Gaussian spectral sea. Thus, to summarize, the spectrum of has a center of mass zero eigenvector with a uniform mode, which must be removed from the spectrum for entropy calculations, two delocalized modes with large outlier eigenvalues of the order of , and the remainder of the spectrum’s eigenvalues lie in a Gaussian sea.
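This three-part spectral structure is easy to reproduce numerically. The sketch below uses an illustrative mean-field Hessian whose conventions (overall sign, unit coupling, and 1/N scaling) are assumptions for demonstration only: off-diagonal elements −cos(θᵢ − θⱼ)/N and diagonal elements chosen so that every row sums to zero, which enforces the exact center of mass zero mode.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
theta = rng.uniform(0.0, 2.0 * np.pi, N)

# Illustrative mean-field Hessian (sign, coupling, and 1/N scaling are
# assumptions): off-diagonal -cos(theta_i - theta_j)/N, diagonal chosen
# so that every row sums to zero.
H = -np.cos(theta[:, None] - theta[None, :]) / N
np.fill_diagonal(H, 0.0)
np.fill_diagonal(H, -H.sum(axis=1))

eigs = np.sort(np.linalg.eigvalsh(H))

# Exact center of mass zero mode: the constant vector is annihilated.
zero_residual = np.abs(H @ np.ones(N)).max()

# Two outliers near -1/2 (rank-2 projector part) and a narrow sea near zero.
outliers = eigs[:2]
sea = eigs[2:]
```

With this construction the two outliers sit near −1/2 (the rank-2 projector contribution), while the remaining levels form a narrow sea around zero.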

The Gaussian sea has only two parameters to be determined, its centroid and width. Its properties can be determined from the first and second moments of the instantaneous Hessian, as done in Eq. (39), except assuming ergodicity in the dynamics, which leads to ensemble averaging. Given that


and that the sum of the three special eigenvalues equals (now fully accounting for the identity operator also), the mean location of the remaining levels is


By the same method, it is possible to also calculate the variance of . Consider


and subtract the contribution to this quantity from the three special levels to get the contribution of the remaining levels to the mean square operator trace. Dividing by and subtracting the square of gives the variance as


Clearly, the separation into a Gaussian sea and three special eigenvalues does not make sense if , as the variance becomes negative, which is impossible. For large , the width of the Gaussian sea approaches , which is the same order as the width of the distribution of the two outliers, and is again independent of due to the scaling of the kicking strength. The centroid of the Gaussian sea is slightly more complicated. For fixed and increasing , it approaches . On the other hand, for a fixed, sufficiently large , as increases (), it approaches . Nevertheless, in either case for reasonably large , the Gaussian sea does not overlap with the distribution of the two outliers, which lie further out.
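The moment bookkeeping can be mimicked numerically. In the sketch below, an illustrative mean-field Hessian (assumed conventions: off-diagonal −cos(θᵢ − θⱼ)/N, diagonal fixed so rows sum to zero, giving the exact center of mass zero mode) is used; the centroid and variance of the sea are obtained by removing the three special levels from tr H and tr H², and agree with the bulk eigenvalues computed directly:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300
theta = rng.uniform(0.0, 2.0 * np.pi, N)

# Illustrative mean-field Hessian (assumed conventions).
H = -np.cos(theta[:, None] - theta[None, :]) / N
np.fill_diagonal(H, 0.0)
np.fill_diagonal(H, -H.sum(axis=1))

eigs = np.sort(np.linalg.eigvalsh(H))

# Three special levels: the two outliers and the exact zero mode.
special = np.array([eigs[0], eigs[1], 0.0])

# Moment method: remove the special levels from tr H and tr H^2, then
# the remaining N - 3 levels determine the sea's centroid and variance.
sea_mean = (np.trace(H) - special.sum()) / (N - 3)
sea_var = (np.trace(H @ H) - (special**2).sum()) / (N - 3) - sea_mean**2

# Direct check against the bulk eigenvalues themselves.
bulk = eigs[2:]
bulk = np.delete(bulk, np.argmin(np.abs(bulk)))  # drop the zero mode
direct_mean = bulk.mean()
direct_var = bulk.var()
```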

All the ingredients for an evaluation of the K-S entropy within the diagonal approximation, Eq. (29), are in place. The two outliers contribute:


where is given by the density in Eq. (42). This integral can be evaluated exactly,


where is the exponential integral. The contribution from the Gaussian sea follows from a slight modification of Eq. (34) and the final expression for the K-S entropy is


where is the width given by Eq. (54) and is the center of the Gaussian given in Eq. (52). As in Eq. (34), the upper form is the convergent series and the lower form is the asymptotic series.
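Although the exact outlier contribution involves the exponential integral, its simplest building block can be checked independently: the average of ln λ over an exponential density with mean λ̄ is exactly ln λ̄ − γ, with γ the Euler–Mascheroni constant. The sketch below (the mean value 3.0 is chosen arbitrarily, not a parameter of the text) confirms this by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(4)

# Average of ln(lam) over an exponential density with mean mu: the exact
# answer is ln(mu) - gamma, with gamma the Euler-Mascheroni constant.
mu = 3.0  # illustrative mean only
gamma = 0.5772156649015329

lam = rng.exponential(mu, size=200_000)
mc_estimate = np.log(lam).mean()
exact = np.log(mu) - gamma
```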

Figure 3: The diagonal approximation of . In the upper panel, the upper bound (highest lying curve) and the analytical form for the diagonal approximation (dashed curve) are compared to the banded approximation using and ten initial conditions. In the lower rectangular box, the difference between the numerical diagonal approximation and the analytical form is shown.

The comparison of the upper bound, numerical diagonal approximation, and analytical diagonal approximation versus the banded approximation, which is taken to be essentially exact, as a function of interaction strength is shown in Fig. 3. It is seen that the upper bound does indeed lie above the other curves, and as increases it gets closer and closer to the true K-S entropy. Nevertheless, it is nowhere near as good as the diagonal approximation. The diagonal approximation is equally good whether evaluated analytically or numerically, independent of the strength of the chaos, as shown in the lower panel of the figure. It has a pair of unusual double-well-like variations in its behavior for weakly chaotic systems (one of the double-well-like variations is on too small a scale to show up clearly in the figure). In this regime (small ), it is unable to follow the true K-S entropy. The K-S entropy is seen to be an asymmetric function of interaction strength. The diagonal approximation does a bit better on the positive side. It is impressive that the diagonal approximation works on an absolute scale; on an entropy per particle basis, it would have even greater accuracy. Overall, for moderate and the formula with all the details is necessary for the quality of the agreement with the K-S entropy shown.

Note that it is well known that the usual standard map (with one degree of freedom) has a symmetry with respect to , as the two phase spaces are shifts of each other by one-half along the position direction. For the fully interacting maps, this symmetry is broken. This is seen in the -dependence of the eigenvalues of the two special outlier states, which changes sign with the sign change (although the shift by two from the identity matrix does not). Likewise, the off-centered density of the remaining states changes sign as well. The NN model retains the symmetry exactly for even numbers of particles on the ring, as can be demonstrated by shifting every other particle by one-half. We have not identified a similar transformation for an odd number of particles, but have confirmed numerically that the symmetry remains.

The entropy per particle for large and becomes [neglecting and contributions]


The leading term is the same as for the one-degree-of-freedom kicked rotor noted in Eq. (32), although the correspondence cannot be pushed too far because the meaning of is somewhat renormalized by removing the center of mass. It is interesting to note that the leading term odd in the sign of goes a long way toward explaining the difference in values of for positive and negative . For example, from the last term in the above equation the difference of at the edges of Fig. 3 () is expected to be , whereas it is actually .

Figure 4: The -dependence of the analytical diagonal approximation. The solid curves are the banded approximation using ten initial conditions. The dashed curves are the analytical diagonal approximation.

Figure 4 shows the diagonal approximation for fixed interaction strength and varying numbers of particles. For values of or , the diagonal approximation works extremely well. In these regimes, above some value of , the diagonal approximation matches the banded approximation to within sample-size fluctuations (ten initial conditions were used). In the upper panel, a -value is shown where the approximation is working well. As increases, the diagonal approximation converges to the K-S entropy quickly with particle number. The lower panel shows a weaker chaos case where the approximation struggles a bit; there the two curves slowly diverge. For , increasing does not improve the estimate of the K-S entropy in an absolute sense, although it would still improve on an entropy per particle basis.

Figure 5: The full, banded, and diagonal eigenvalue densities compared to the analytical form of the diagonal approximation. The three histograms result from using initial conditions. It is essentially impossible to see the difference between the full and banded results. Small differences can be seen with respect to the diagonal case.

Finally, in Fig. 5 the eigenvalue densities are shown as histograms for the full, banded, and diagonal approximations, and compared to the analytical expression for the instantaneous Hessians (diagonal approximation). Excellent agreement is seen throughout the spectrum, including the details of the two outliers in the negative tail. In the case of the NN model, the density of states does not have outliers and is symmetric about zero. The absence of outliers follows from the lack of a rank-2 projector structure for the off-diagonal part of the Hessian matrix. The density of states in the bulk is also not normal, as the diagonal elements do not look like sums of many random variables. Thus, while the mean field model has more features, such as the outliers, analytical approaches to the density of states and hence the K-S entropy were possible, in contrast to the NN model, which needs other special methods not considered here.

V Summary and conclusions

This paper presents a formalism to calculate the K-S entropy of symplectic maps of arbitrary dimension. Constructing a transfer matrix, and using a duality relation, a formally exact expression for the finite-time K-S entropy is given as an average over a phase in the [logarithm of a] determinant of an effective Hamiltonian. The Hamiltonian has a tridiagonal block-banded structure with corner blocks, which alone include the phase. A series of approximations naturally follows in this scheme. First, the variation with the phase is neglected, and the resultant K-S entropy is found to improve with longer times. Second, the corner blocks of the Hamiltonian are themselves neglected, resulting in a banded approximation that proves very good: it is useful numerically, works even for weakly chaotic systems, and may still be valid for studying finite-time fluctuations. The third level of approximation involves dropping all off-diagonal blocks in the Hamiltonian and constitutes the diagonal approximation; this is expected to hold only in regimes of strong chaos. A fourth estimate also follows as an exact upper bound to the finite-time K-S entropy. The last two, namely the diagonal approximation and the upper bound, can be found from the spectra of the instantaneous Hessians of the potential and are therefore extremely easy to evaluate numerically, vastly simpler and faster than even the banded approximation.

One of the long-term motivations was to study fluctuations in the finite-time K-S entropy, and we believe that, neglecting phase averaging, one can still study such fluctuations in moderately chaotic regimes. More investigation is needed to determine under what circumstances phase averaging can be left undone. Curiously, many initial conditions, though not all, led to a zero angle value, , exactly equal to the phase average (exact here meaning to the accuracy of our calculations). It appeared that differences, when they occurred, took on only a discrete set of values. Furthermore, the banded approximation also appears to be a useful and powerful tool for studying fluctuations numerically in a dynamically interesting range of parameter values. The diagonal approximation seems highly unlikely to reflect these fluctuations faithfully, especially in regimes of weak to moderate chaos.

A thorough study of a kicked version of the Hamiltonian Mean Field model, or coupled standard maps, has been undertaken wherein the above approximations have been tested in detail. It has been possible to find analytically the spectral density of the instantaneous Hessians and hence provide an analytical expression for the K-S entropy within the diagonal approximation. For strongly chaotic systems, the diagonal approximation is an excellent estimate of the K-S entropy itself, and there is no real difference between the numerical and analytical diagonal approximations. For weakly chaotic systems, the diagonal approximation is not good, but this is natural; even so, it is more accurate than the upper bound in all dynamical regimes. It has strange double-well-like oscillations that are not an artifact of the extra approximations made in obtaining an analytic expression, as they are present in the numerical diagonal results as well. It is also interesting that the K-S entropy is not an even function of . It turns out that the details of the analytical expression cannot be neglected for moderate values of , and . For increasing particle number, the absolute error in the diagonal approximation is negligible except where it slowly grows in the weak chaos regime. Nevertheless, even there the per-particle entropy would improve for most values of .

The analytical results pertaining to the diagonal approximation were facilitated by deriving the density of states for the instantaneous Hessians, which had the peculiar structure of a Gaussian sea along with two large outliers, apart from the center of mass mode. The density of states histograms for the full and banded approximations are difficult to distinguish, whereas the diagonal case is a bit further away; all of them match the analytic prediction for the level density extremely well. It is interesting that, given the long existence of the Chirikov estimate for the Lyapunov exponent or K-S entropy of the two-dimensional standard map, this paper has finally provided a nontrivial many-body generalization of it. In addition, it is curious that an analytic diagonal approximation appears to be more difficult for the NN model even though it has a symmetry, and thus its instantaneous Hessian possesses a symmetric density of states and no outliers in its spectrum.

The techniques developed herein have several natural generalizations, for instance to systems with dissipation as well as to continuous time Hamiltonian flows. Even as it stands, this scheme will provide a pathway into the maze that is high dimensional chaotic phase spaces by allowing one to access relatively easily the finite-time K-S entropy and its fluctuations. The analytical structure behind the finite-time K-S entropy is also intriguing and needs to be more fully investigated.

The authors are indebted to the Max-Planck-Institut für Physik komplexer Systeme in Dresden, Germany, where the work began, for its hospitality and financial support. One of us (ST) gratefully acknowledges financial support from the US National Science Foundation grant number PHY-0855337.


  • Lichtenberg and Lieberman (1983) A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer, Berlin, 1983).
  • Ott (2002) E. Ott, Chaos in Dynamical Systems (Cambridge University Press, Cambridge, 2002).
  • Eckmann and Ruelle (1985) J. P. Eckmann and D. Ruelle, Rev. Mod. Phys., 57, 617 (1985).
  • Gaspard and Nicolis (1990) P. Gaspard and G. Nicolis, Phys. Rev. Lett., 65, 1693 (1990).
  • van Beijeren et al. (1997) H. van Beijeren, J. R. Dorfman, H. A. Posch,  and C. Dellago, Phys. Rev. E, 56, 5272 (1997).
  • Dorfman et al. (1998) J. R. Dorfman, A. Latz,  and H. van Beijeren, Chaos, 8, 444 (1998).
  • de Wijn (2005) A. S. de Wijn, Phys. Rev. E, 71, 046211 (2005).
  • Fingerle et al. (2007) A. Fingerle, S. Herminghaus,  and V. Y. Zaburdaev, Phys. Rev. E, 75, 061301 (2007).
  • Gutzwiller (1971) M. C. Gutzwiller, J. Math. Phys., 12, 343 (1971), and references therein.
  • Miller and Sarkar (1999) P. A. Miller and S. Sarkar, Phys. Rev. E, 60, 1542 (1999).
  • Tomsovic and Lakshminarayan (2007) S. Tomsovic and A. Lakshminarayan, Phys. Rev. E, 76, 036207 (2007), arXiv:0708.0176 [nlin.CD].
  • Vallejo et al. (2008) J. C. Vallejo, R. L. Viana,  and M. A. F. Sanjuan, Phys. Rev. E, 78, 066204 (2008).
  • Manchein et al. (2009) C. Manchein, M. W. Beims,  and J.-M. Rost, Phys. Lett. A (2009), submitted; arXiv:0907.4181.
  • Molinari (1998) L. Molinari, J. Phys. A, 31, 8553 (1998).
  • Molinari (2003) L. Molinari, J. Phys. A, 36, 4081 (2003).
  • Pichard and Sarma (1981) J. L. Pichard and G. Sarma, J. Phys. C: Solid State Phys., 14, L127 (1981).
  • Casetti et al. (1996) L. Casetti, C. Clementi,  and M. Pettini, Phys. Rev. E, 54, 5969 (1996).
  • Partovi (2002) M. H. Partovi, Phys. Rev. Lett., 89, 144101 (2002).
  • Antoni and Ruffo (1995) M. Antoni and S. Ruffo, Phys. Rev. E, 52, 2361 (1995).
  • Konishi and Kaneko (1992) T. Konishi and K. Kaneko, J. Phys. A, 25, 6283 (1992).
  • Woellner et al. (2009) C. F. Woellner, S. R. Lopes, R. L. Viana,  and I. L. Caldas, Chaos, Solitons & Fractals, 41, 2201 (2009).
  • Chirikov (1979) B. V. Chirikov, Phys. Rep., 52, 263 (1979).
  • Bountis and Helleman (1981) T. Bountis and R. H. G. Helleman, J. Math. Phys., 22, 1867 (1981).
  • Greene and Kim (1987) J. M. Greene and J.-S. Kim, Physica D, 24, 213 (1987).
  • Kook and Meiss (1989) H. T. Kook and J. D. Meiss, Physica D, 35, 65 (1989).
  • Molinari (2008) L. Molinari, Linear Algebra Appl., 429, 2221 (2008).
  • Conway (1978) J. H. Conway, Functions of One Complex Variable I (Springer, New York, 1978).
  • Gutzwiller (1990) M. C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer-Verlag, New York, 1990).
  • Elton et al. (2010) J. R. Elton, A. Lakshminarayan,  and S. Tomsovic, Phys. Rev. E, 82, 046223 (2010), arXiv:0905.3523 [nlin.CD].
  • Horn and Johnson (1985) R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, New York, 1985).