Numerical test for hyperbolicity in chaotic systems with multiple time delays
We develop an extension of the fast method of angles for hyperbolicity verification in chaotic systems with an arbitrary number of time-delay feedback loops. The adopted method is based on the theory of covariant Lyapunov vectors and provides an efficient algorithm applicable to systems with a high-dimensional phase space. Three particular examples of time-delay systems are analyzed, and in all cases the expected hyperbolicity is confirmed.
Keywords: hyperbolic chaos, hyperbolicity test, fast method of angles, delay differential equations
Hyperbolic theory Smale67 (); Anosov95 (); KatHas95 () studies invariant sets in phase space of dynamical systems, including those with chaotic dynamics, composed exclusively of saddle trajectories. For all points on such a trajectory, in the space of small perturbations (tangent space), one can define a subspace of vectors, which exponentially decrease in norm under the forward time evolution, and a subspace of vectors, which exponentially decrease under the backward time evolution. In flow systems, in addition, there is a one-dimensional neutral subspace of perturbations along a trajectory that neither increase nor decrease on average. An arbitrary vector of small perturbation is a linear combination of vectors belonging to the indicated subspaces. A set of states that approach a given trajectory during time evolution is called the contracting (or stable) manifold of this trajectory. Similarly, the expanding (unstable) manifold is a set of states tending to the reference trajectory under the backward time evolution. Tangencies of stable and unstable manifolds should be excluded; only intersections at nonzero angles are admitted.
Hyperbolic chaos plays a special role among other types of chaotic dynamics. Systems of this type, like, for example, the Smale-Williams solenoid, manifest deterministic chaos justified in a rigorous mathematical sense. They demonstrate strong and structurally stable stochastic properties and are insensitive to variations of functions and parameters in the dynamical equations, to noise, interference, etc. Moreover, hyperbolic chaotic dynamics in such systems allows a detailed mathematical analysis Smale67 (); Anosov95 (); KatHas95 ().
Due to their great potential importance for applications, structurally stable chaotic systems with hyperbolic attractors obviously have to be a subject of priority interest, like rough systems with regular dynamics in the classic theory of oscillations AndrPontr (); AndrKhaikVitt (). However, for many years hyperbolic attractors were commonly regarded only as purified abstract mathematical images of chaos rather than something intrinsic to real-world systems. Certain progress in this field has been achieved recently, when many examples of physically realizable systems with hyperbolic attractors were purposefully constructed using a toolbox of physics (oscillators, particles, fields, interactions, feedback loops, etc.) HyperBook12 (); KuzUFN11 ().
Physical and technical devices employed in the proposed systems are surely not well suited for rigorous mathematical analysis. However, strong evidence of hyperbolicity is needed to properly exploit the relevant theoretical results. In this situation, numerical verification of the hyperbolicity of chaos becomes an essential ingredient of the studies.
Systems with time-delay feedback combine relative simplicity of implementation, almost like low-dimensional systems modeled by ordinary differential equations, with rich complexity of dynamics, comparable to infinite-dimensional systems associated with partial differential equations. Examples of such systems are widespread in electronics, laser physics, acoustics and other fields VihLaf13 (). Their mathematical description is based on differential equations with delays BellCook63 (); Myshkis72 (); ElsNor73 (). A number of time-delay systems with chaotic dynamics have been explored DorGram87 (); Chrost83 (); Lepri94 (); Mackey77 (); Farmer82 (); GrassProc84 (); KuzPonom08 (); Autonom10 (); KuzPik08 (); Baranov10 (); KuzKuz13 (), and several examples were suggested as realizable devices for the generation of rough hyperbolic chaos KuzPonom08 (); Autonom10 (); KuzPik08 (); Baranov10 (); KuzKuz13 (); Arzh14 (). However, no direct verification of hyperbolicity has been performed for them.
There are two different approaches to the numerical testing of hyperbolicity. One of them, the method of cones, is based on a mathematical theorem on expanding and contracting cones Sinai79 (); KatHas95 (). It has been adopted for computer verification and applied to some low-dimensional systems KuzSat07 (); KuzKuzSat10 (); Wilcz10 ().
The second approach, the method of angles, directly employs the definition of a hyperbolic attractor: its orbits are of saddle type, and their expanding and contracting manifolds do not have tangencies but can only intersect transversally. The method involves a computation of angles between the manifolds along trajectories. In the case of hyperbolicity, the distributions of these angles are well separated from zero. This method was used initially in Ref. LaiGeb93 (), and in Ref. FastHyp12 () its fast and economical reformulation was suggested. This approach may be regarded as an extension of Lyapunov analysis, well established and applied successfully not only to low-dimensional systems but to spatiotemporal systems too Hirata99 (); Anishch00 (); Kuznetsov05 (); KuzSelez06 (); KupKuz09 (); Kuznetsov15 (); KuzKrug16 (); Kuz16 ().
In our recent brief report KupKuz16 () we adapted the fast method of angles FastHyp12 () to perform the hyperbolicity test for systems with a single time delay. In the present paper we extend this method to systems with an arbitrary number of time-delay feedbacks. Three particular examples considered in Refs. Baranov10 (); KuzPik08 (); Arzh14 () are analyzed, and in all cases the expected hyperbolicity is confirmed.
The paper is organized as follows. In Sec. 2 we briefly review the theory behind the fast method of angles. The adjoint variational equation for time-delay systems, required for this method, is derived in Sec. 3, and its numerical approximation is considered in Sec. 4. In Sec. 5 we discuss a numerical method for solving delay differential equations, which is applied in Sec. 6 for hyperbolicity tests of particular time-delay systems. Finally, in Sec. 7 the results of the paper are summarized.
2 Numerical verification of hyperbolic chaos. Fast numerical algorithm for the method of angles
In this section we review the theoretical background of the fast method of angles. For more details see Refs. CLV2012 (); FastHyp12 (). The method is based on the concept of covariant Lyapunov vectors (CLVs) GinCLV (); WolfCLV (); CLV2012 (); PikPol16 (). The adjective “covariant” means that these vectors are in one-to-one correspondence with the Lyapunov exponents and evolve in such a way that each CLV at a given moment of time is mapped by the tangent dynamics to the corresponding CLV at any later moment. On average they grow or decay exponentially, each with the corresponding Lyapunov exponent.
CLVs form a natural non-orthogonal basis for the subspaces tangent to the expanding, neutral and contracting manifolds. Analysis of these vectors can reveal the presence or violation of hyperbolicity. As follows from the definition Smale67 (); Anosov95 (); KatHas95 (), a chaotic system is hyperbolic when the manifolds never have tangencies. So, the idea is to compute angles between CLVs related to the positive, zero and negative Lyapunov exponents. The system is hyperbolic if these angles never vanish.
There are various methods for the computation of CLVs. The fast method of angles employs the one suggested in Ref. CLV2012 (). Its idea goes back to the work WolfCLV (), with a later essential supplement from PazoSzendro08 (). Another method for CLVs can be found in Ref. GinCLV (). See also the book PikPol16 () for an extended review.
CLVs can be computed both for continuous and discrete time systems. Consider a system represented by an ordinary differential equation

dx/dt = f(x, t), (1)

where x is an N-dimensional state vector and f is a nonlinear function of x and, for non-autonomous systems, of time t. Infinitesimal, or so-called tangent, perturbations to trajectories of the system (1) obey the variational equation

dv/dt = J(t) v, (2)

where v is a tangent vector and J = ∂f/∂x is the Jacobian matrix, i.e., the matrix of derivatives of f with respect to x. Its time dependence can be implicit via x(t) and explicit in the non-autonomous case.
Deriving the variational equation (2), we do not automatically find out the way of computing tangent vector norms. In other words, the basis for these vectors remains unknown. We have to define it by introducing a metric tensor G. Usually the identity matrix is taken, but we will consider the more general case of a real symmetric positive definite metric G. Once such a metric is selected, an inner product of two arbitrary vectors u and v is defined as Horn2012 (); Hogben2013 ()

⟨u, v⟩ = u^T G v, (3)

where “T” stands for transposition. In turn, the inner product allows evaluating the vector norms:

‖u‖ = √⟨u, u⟩ = √(u^T G u). (4)
Vectors of the basis itself are represented as e1 = (1, 0, …, 0)^T, e2 = (0, 1, 0, …, 0)^T, and so on. It can easily be checked that for a generic metric these vectors are neither orthogonal nor normalized. The following transformation provides the orthonormalization:

u′ = L u, (5)

where L is such that L^T L = G. The matrix L always exists since G is assumed to be positive definite and symmetric. Under this transformation the metric changes into an identity matrix, so that the inner product takes the form of the standard dot product: ⟨u, v⟩ = (L u)^T (L v) = u′ ⋅ v′.
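As a minimal numerical sketch of this transformation (the metric G below is an arbitrary illustrative choice, not taken from the paper), L can be obtained from a Cholesky factorization of G, after which the metric inner product equals the standard dot product of the transformed vectors:

```python
import numpy as np

# Illustrative positive definite metric (not from the paper).
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# numpy returns C with G = C C^T; taking L = C^T gives G = L^T L.
L = np.linalg.cholesky(G).T

u = np.array([1.0, 2.0])
v = np.array([-1.0, 3.0])

lhs = u @ G @ v          # inner product with respect to the metric G
rhs = (L @ u) @ (L @ v)  # standard dot product after the transformation
```

Any factorization satisfying L^T L = G works here; Cholesky is merely a convenient choice.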
Lyapunov exponents as well as CLVs do not depend on the choice of the metric. That is why usually the simplest case is considered, i.e., the identity matrix is taken as the metric and the standard dot product is used. However, as we will see in the subsequent sections, time-delay systems require a more careful approach.
Evolution of a tangent vector from one time moment to another can be expressed as the action of a linear operator called the propagator:
For the continuous-time system (2) the propagator is related to the Jacobian matrix through the Magnus expansion CLV2012 (). In actual numerical computations it merely means that we solve Eq. (2) over the corresponding time interval to find the result of the propagator action.
To compute CLVs and find the corresponding angles, we employ the standard algorithm for Lyapunov exponents Benettin80 (); Shimada79 (). First, a required number of tangent vectors are initialized and written as columns of a matrix Q. Then the propagator is applied to this matrix, which actually means solving the basic system together with the corresponding number of copies of the variational equation.
Due to the action of the propagator, any arbitrarily chosen vector tends to line up along the most expanding direction, growing exponentially at a rate equal to the largest Lyapunov exponent. Similarly, any non-collinear pair of vectors tends to the most expanding plane, and the spanned area grows exponentially at a rate determined by the sum of the first two Lyapunov exponents. Any three vectors approach the most expanding three-dimensional volume with the growth rate equal to the sum of the first three Lyapunov exponents, and so on.
Each previous alignment shades the next one: without a special treatment we will not see the most expanding plane, since the expanding direction will absorb all its vectors. But it becomes available when an orthogonalization procedure is applied to the vector-columns of Q in the course of the computations. The first vector preserves its direction; the second one is rotated up to the orthogonal position, always staying in the plane spanned by its initial direction and the first vector; the third vector is rotated to become orthogonal to the first two but strictly within the space spanned by the first three vectors, and so on. The rotations are accompanied by normalization of the vector lengths. This procedure is known as Gram-Schmidt orthogonalization or QR factorization GolubLoan (); Hogben2013 (). The latter means representation of a matrix as a product of an orthogonal matrix Q and an upper triangular matrix R.
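In practice the Gram-Schmidt step is usually delegated to a library QR routine. A small sketch with random vectors (the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))  # three tangent vectors as columns

# V = Q R: columns of Q are orthonormal, R is upper triangular.
Q, R = np.linalg.qr(V)

# The k-th column of Q lies in the span of the first k columns of V,
# which is exactly the Gram-Schmidt property described above.
```

Note that library routines fix signs only up to convention, so the diagonal of R may come out negative; this is why absolute values are taken before logarithms in the Lyapunov exponent computation below.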
Altogether, the iteration step includes the evolution over one time interval and the orthogonalization:
The orthogonal matrix computed in this way is used for the next step of the algorithm.
After skipping transients, we can start to accumulate the logarithms of the absolute values of the diagonal elements of R. Averaged over the iteration time, they become the Lyapunov exponents. The computation of CLVs requires the matrix Q instead. After the transient, the columns of this orthogonal matrix become the backward Lyapunov vectors. This name emphasizes the fact that these vectors arrive at the current time after a long evolution from the far past. They are also known as Gram-Schmidt vectors. Being norm-dependent, they nevertheless contain essential information about the tangent structure of the trajectory manifolds. If the CLVs are gathered as columns of a matrix, they can be represented via the backward Lyapunov vectors CLV2012 ():
where the coefficient matrix is upper triangular. It means that the first CLV coincides with the first backward Lyapunov vector, the second one belongs to the plane spanned by the first two backward vectors, the third one belongs to the three-dimensional space of the first three backward vectors, and so on.
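Returning to the accumulation of Lyapunov exponents from the R factors, the procedure can be sketched as follows. The constant matrix J is a stand-in for the propagator over one step and is chosen so that the exact exponents, ln 2 and −ln 2, are known in advance (an illustrative toy, not one of the systems of the paper):

```python
import numpy as np

# Toy one-step propagator with known Lyapunov exponents ln 2 and -ln 2.
J = np.array([[2.0, 1.0],
              [0.0, 0.5]])

Q = np.eye(2)
acc = np.zeros(2)
steps = 200
for _ in range(steps):
    Q, R = np.linalg.qr(J @ Q)             # evolve, then re-orthogonalize
    acc += np.log(np.abs(np.diag(R)))      # logs of absolute diagonal of R

lyap = acc / steps  # time averages: estimates of the Lyapunov exponents
```

For a genuine dynamical system J would be recomputed along the trajectory at every step; the structure of the loop stays the same.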
If m is the dimension of the expanding tangent subspace, i.e., the number of positive Lyapunov exponents, from iterations of (7), (8) with m vectors we obtain the basis for this subspace. To carry out the hyperbolicity check we also need the basis for the remaining contracting subspace, whose dimension is N − m. At the moment we ignore the possible existence of the neutral subspace associated with zero Lyapunov exponents.
The missing basis can be computed in the course of iterations of tangent vectors backward in time. The straightforward idea is to perform steps analogous to (7) and (8) but with the inverted propagator. In other words, we can integrate N − m copies of the variational equation with a negative time step. This approach was used in early papers, when the theory of CLVs was not yet well developed and, in particular, no efficient algorithms for their computation were known. It worked well for low-dimensional systems, but for high dimensions it became inapplicable due to very high consumption of computational resources.
The preferred way is to perform the backward steps with an adjoint propagator P⁺ satisfying the identity ⟨P u, v⟩ = ⟨u, P⁺ v⟩ Horn2012 (); Hogben2013 (), where u and v are arbitrary vectors and P is the propagator. Taking (3) into account we obtain P⁺ = G^{-1} P^T G. [1]

[1] Notice that in Ref. CLV2012 () the adjoint propagator is introduced in a slightly different form.
The implementation of this propagator includes solving the adjoint variational equation
backward in time, where the coefficient matrix is the adjoint Jacobian matrix. As discussed in Refs. AGuide (); CLV2012 (), the iterations with the adjoint and with the inverted propagator generate identical sets of vectors; but since the inverted propagator has reciprocal singular values with respect to the direct one, in its iterations the smallest Lyapunov exponent dominates, so that the resulting vectors come in the reverse order. The use of the adjoint propagator instead of the inverted one is the key point of the fast method of angles FastHyp12 ().
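The relation between the propagator, the metric and the adjoint propagator can be checked numerically. In the sketch below all matrices are random illustrative stand-ins; the adjoint of A with respect to a metric G is G^{-1} A^T G, and the defining identity holds up to rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))      # stand-in for a propagator
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)          # a positive definite metric (illustrative)

# Adjoint with respect to G: <A u, v>_G = <u, A_adj v>_G.
A_adj = np.linalg.solve(G, A.T @ G)  # A_adj = G^{-1} A^T G

u = rng.standard_normal(n)
v = rng.standard_normal(n)
lhs = (A @ u) @ (G @ v)              # <A u, v>_G
rhs = u @ (G @ (A_adj @ v))          # <u, A_adj v>_G
```

For the identity metric this reduces to the ordinary matrix transpose.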
Consider the following backward time iterations with the adjoint propagator:
Here an orthogonal matrix contains the columns in exactly the same way as in Eqs. (7), (8). After the transient, the columns become the forward Lyapunov vectors, i.e., the vectors arriving at the current time after a long iteration initiated in the far future. The full set of these vectors is related to the CLVs as follows, cf. Eq. (9):
where the coefficient matrix is lower triangular CLV2012 (). It means that the first forward vectors form an orthogonal complement to the sought subspace of the last CLVs. The important point is that typically the complement has a much lower dimension, and the corresponding computational routine, including the iterations (12), (13) with the columns of the orthogonal matrix, is much more economical and fast than the straightforward iteration of N − m vectors with the inverted propagator.
Altogether, the first backward Lyapunov vectors span the subspace holding the first CLVs, and the first forward Lyapunov vectors provide the basis for the orthogonal complement to the subspace holding the last CLVs. To check for tangencies we need to find the principal angles between these two subspaces GolubLoan (). Given their orthogonal bases, the cosines of the principal angles are found as the singular values of the matrix of pairwise inner products of the basis vectors:
The second subspace is the orthogonal complement to the subspace of interest. It means that a tangency is signaled by the largest principal angle, which corresponds to the smallest singular value. Once the matrix (15) has been computed, we gain access to a whole series of angles. Taking its top left square submatrix of a given size and finding its smallest singular value, we can compute the angle between the subspace of the corresponding number of first CLVs and the subspace of the remaining CLVs as follows:
The smallest singular value, as well as the angle, vanishes when a tangency between the corresponding subspaces occurs. Because trajectories with exact tangencies are rather untypical, in actual computations we register a tangency between subspaces if the distribution of the corresponding angle extends arbitrarily close to zero.
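The computation of principal angles via singular values can be illustrated on a pair of subspaces with a known mutual angle (a hypothetical three-dimensional example, not tied to any system in the paper):

```python
import numpy as np

a = 0.3  # the angle built into the example

# Orthonormal basis of the plane span{e1, e2}.
Q1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
# A line tilted out of that plane by the angle a.
Q2 = np.array([[np.cos(a)],
               [0.0],
               [np.sin(a)]])

P = Q1.T @ Q2                                    # pairwise inner products
s = np.linalg.svd(P, compute_uv=False)           # cosines of principal angles
angle = np.arccos(np.clip(s.min(), -1.0, 1.0))   # largest principal angle
```

Here the largest principal angle recovers the built-in tilt. In the method described above, where the second basis spans an orthogonal complement, a tangency corresponds to the smallest singular value dropping to zero.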
Since the angle computation involves the norm-dependent forward and backward Lyapunov vectors, particular values of the angles depend on the norm (4), i.e., on the choice of the metric G. However, the related topological properties, i.e., vanishing or non-vanishing of the angles, are norm-independent.
Typical examples of application of the described approach are the following. When a chaotic system has a single positive Lyapunov exponent and zero exponents are absent, e.g., for a discrete-time system or for a periodically driven non-autonomous system, the expanding subspace is one-dimensional, and we need to perform the forward steps (7), (8) and the backward steps (12), (13) monitoring one backward and one forward Lyapunov vector, respectively. The matrix (15) is reduced to the inner product of these vectors, which is substituted into Eq. (16) to compute the angle. Hyperbolicity is confirmed when the distribution of this angle is clearly separated from zero.
Another example is an autonomous continuous-time chaotic system with one positive and one zero Lyapunov exponent. In this case the forward (7), (8) and backward (12), (13) iterations are performed with two vectors, and two angles are computed. For the first angle we again take the top left element of the matrix (15), and for the second one the smallest singular value of its 2×2 submatrix is found. Hyperbolicity is confirmed if the system corresponds to an Anosov flow Anosov95 (); KatHas95 (): its expanding, neutral and contracting subspaces never touch each other. This is the case when the distributions of both angles are well separated from the origin.
3 Adjoint variational equation for a system with multiple time delays
Consider a system with k time delays:

dx/dt = f(x(t), x(t − τ1), x(t − τ2), …, x(t − τk), t), (17)

Here x is a vector variable of finite dimension. The delay times are assumed to be labeled in ascending order, so that τ1 is the shortest one and τk is the longest one.
The full dimension of the phase space is nevertheless infinite: one needs to know a trajectory segment of duration equal to the longest delay (a continuum of data) to determine each new infinitesimal time step.
The corresponding variational equation reads

dv/dt = J0(t) v(t) + Σi Ji(t) v(t − τi), (18)

where J0 and Ji are the derivative matrices composed of the partial derivatives of f with respect to the components of x(t) and x(t − τi), respectively.
To apply the fast method of angles to the system (17), we need the adjoint variational equation for Eq. (18). Note that arbitrary solutions of the variational and the adjoint equations (2) and (11), respectively, must fulfill the identity
as can be verified by direct substitution. We will find the adjoint equation for Eq. (18) by requiring fulfillment of the analogous identity.
In an actual physical implementation, the system (17) may be thought of as endowed with a delay line providing the retarded variables. A natural way to take it into account explicitly is to introduce a wave system with a delta-function source at the origin:
Here the delay line variable depends on the coordinate and on time; the subscripts stand for the corresponding partial derivatives. The solution to Eq. (20) is a wave propagating in the positive direction:
Now, the main equation (17) can be rewritten as
Respectively, the variational equation takes the form
where the matrices are the same as used in (18).
A state vector for Eq. (25) has a mixed discrete-continuous form. The inner product for two such vectors can be introduced as
Now we will construct the adjoint variational equation by requiring fulfillment of the identity with respect to the inner product (26). The desired equation reads
Here the delay line variable belongs to the first segment of a compound delay line that includes the following parts:
The solution to the i-th segment reads:
The delay line (23) contains a source at the origin where the signal is injected. The retarded signals are read at certain points and returned back to the system. The delay line (28) for the adjoint system (27) is a chain of segments coupled via a kind of sinks at their right boundaries. The signal is injected through each sink, being multiplied by the corresponding Jacobian matrix. The advanced wave solution of the first chain segment is read at the origin and returned back to the system.
Verification of (30) is rather straightforward, taking into account the preceding equalities.
In the theory of differential equations with deviating arguments, Eq. (31) belongs to the class of equations of advanced (leading) type BellCook63 (); Myshkis72 (); ElsNor73 (). Such equations are regarded as poorly defined with respect to the existence of solutions to initial value problems. In the context of our study, however, we will solve such equations in backward time only, so that they behave as well as equations of retarded type do in forward time.
4 Numerical approximation of the adjoint variational equations
In the previous section we have shown that the adjoint companion of the time-delay variational Eq. (18) is Eq. (31), provided that the inner product is defined by Eq. (26). In this section we will show that Eq. (31) agrees with the adjoint numerical Jacobian matrix, i.e., performing numerical simulations, we can either solve the adjoint equation (31) directly or find the numerical Jacobian matrix for Eq. (18) and then compute its adjoint form. For the sake of simplicity, we consider here the first-order Euler numerical scheme.
Let h be the time step and let us assume that each delay is an integer multiple of h. Since the delays are ordered, the corresponding integer multipliers are ordered too. Accepting these assumptions, we obtain the Euler numerical approximation of the variational Eq. (25) as follows:
where the delay line variable satisfies a discrete form of Eq. (23):
Equations (32) and (33) can be recast in a matrix form, where the numerical Jacobian matrix plays the role of the propagator applicable for the forward time iterations (7), (8), see Sec. 2. For example, for small particular values of the delay steps, the numerical Jacobian matrix reads:
In general, this is a block matrix that contains the derivative matrices at certain sites of the first block row, identity blocks on the first subdiagonal, and zeros elsewhere.
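For a scalar example, the structure of this matrix can be sketched explicitly. The coefficients a and b below stand for the derivative matrices with respect to the current and the delayed variable, and the delay equals three time steps (all values are illustrative):

```python
import numpy as np

h = 0.1            # time step
n1 = 3             # the delay equals n1 * h
a, b = -0.5, 0.8   # stand-ins for the derivative matrices

# Euler step of the scalar variational equation
#   dv/dt = a v(t) + b v(t - tau)
# acting on the extended state (v_k, v_{k-1}, ..., v_{k-n1}).
N = n1 + 1
J = np.zeros((N, N))
J[0, 0] = 1.0 + h * a        # contribution of the current value
J[0, n1] = h * b             # the delayed value enters the first row
J[1:, :-1] = np.eye(N - 1)   # identity blocks on the first subdiagonal

state = np.array([1.0, 0.5, 0.25, 0.125])
new = J @ state              # one Euler step plus the shift of the history
```

The first row performs the Euler update, while the subdiagonal identities merely shift the stored history one step into the past.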
Given the numerical Jacobian matrix, we can find an explicit form of its adjoint, applicable for the backward iterations (12), (13). Since the adjoint variational equation (31) is constructed with respect to the inner product (26), we now need to discretize the latter as follows
where a diagonal matrix plays the role of the metric for the discrete state vectors,
Taking into account the definition of the adjoint propagator (10) we obtain the adjoint numerical Jacobian matrix:
When two manifolds of a trajectory have a tangency, i.e., the corresponding angle vanishes, this property is preserved under a change of the metric. It means that we could also detect this situation taking the identity matrix as a metric and using the standard dot product instead of Eq. (35). However, the angles computed in this way would depend on the discretization step h. The inner product (35) provides the correct asymptotic behavior of the angles as h → 0, when the numerical scheme converges to the original differential equations. This is the case because Eq. (37) with the metric (36) produces the adjoint numerical Jacobian matrix that corresponds to the Euler discretization of the adjoint variational equation: the equation for the backward time iterations is a linear map whose matrix is the adjoint numerical Jacobian, and the Euler discretization of the adjoint variational Eq. (27) for the backward time solution exactly corresponds to this map. It can be illustrated using the matrix (38), whose iteration step reads:
The segments of the compound delay line are not labeled here as in Eqs. (28); however, it is easy to see which discrete variables belong to the first segment and which form the second one.
Altogether, we have shown that the analytically derived adjoint variational equation (31) agrees with the numerical one obtained via straightforward computation of the adjoint matrix. In doing so, the inner product (26) has to be used for analytical treatments, and Eq. (35) is its numerical counterpart.
5 Numerical procedure
For actual computations, a numerical scheme based on the Euler discretization is not the best choice due to its known poor accuracy. We will employ Heun’s method to solve both the main system (17) and the variational equation (18). This method belongs to the class of second-order Runge-Kutta methods with a constant time step NumericalDDE (). In the course of the computations, in addition to the current state, we have to keep the data for previous steps along a trajectory to provide the retarded variables. The advantage of Heun’s method for our problem is that these stored values are sufficient for the computations, and no additional data are required at intermediate steps.
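A minimal sketch of such a Heun step for a scalar delay equation is given below. The right-hand side f is an arbitrary illustrative choice, not one of the systems studied here; the point is that the delayed argument needed at the corrector stage, x(t + h − τ), is already present in the stored history when τ is a multiple of h:

```python
import numpy as np

def f(x, x_del):
    # Illustrative right-hand side of x' = f(x(t), x(t - tau)).
    return -x + 0.5 * np.sin(x_del)

h = 0.05
n = 20                        # tau = n * h
hist = list(np.ones(n + 1))   # grid values x_{k-n}, ..., x_k

for _ in range(100):
    x = hist[-1]
    k1 = f(x, hist[-1 - n])               # uses x(t - tau) from the history
    x_pred = x + h * k1                   # Euler predictor
    k2 = f(x_pred, hist[-n])              # x(t + h - tau) is already stored
    hist.append(x + 0.5 * h * (k1 + k2))  # Heun corrector
```

No interpolation between grid points is needed, which is exactly the property of Heun’s method emphasized above.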
The Heun step for the main system reads:
and the variational equation (18) is solved as follows:
Though the backward time tangent space dynamics can be implemented via straightforward solving of Eq. (31), it is more efficient to reuse the data computed on the forward pass. Let us define the following block matrix, whose entries are zeros except for the first row:
where an entry of the first row is the corresponding derivative matrix if its site coincides with one of the delays, and zero otherwise. Also we will need the matrix
Using these matrices, the forward time Heun step can be represented as a single matrix acting on a state vector. We recall that the local dimension is the dimension of a single vector variable.
The computations discussed so far require the non-standard inner product (35) with the metric (36). However, the traditional routines for linear algebra manipulations in standard numerical software libraries are usually implemented with respect to the standard dot product. To bypass this obstacle, we can orthonormalize the tangent space basis using the transformation matrix, see Eq. (5) and the related discussion. Thus, instead of a “raw” numerical Jacobian matrix, a modified one will be used, defined with respect to the orthonormal basis:
If the forward time tangent space iterations (7), (8) are performed with this modified matrix, the adjoint matrix for the backward time iterations (12), (13) is merely its transposition, and the inner products of the involved tangent vectors are computed via the standard dot product.
Regardless of the high dimension of the phase space, the numerical Jacobian matrix contains a rather small number of nontrivial, trajectory-dependent values. Its full size is large, but the non-constant values are supplied only by the derivative matrices. The upper estimate of their total number is rather small and, more importantly, it does not depend on the computation accuracy that determines the matrix size. It means that the data for the numerical Jacobian matrices can be stored along the trajectory without risk of exhausting computer memory and then reused on the backward pass.
Thus, the computations are organized as follows. The forward steps (7) are implemented via solving Eq. (40) together with the required number of copies of Eq. (41) over certain time intervals. The solutions of the variational equations, treated as block vectors, are transformed to the original basis before each step and back to the orthonormalized basis after the step; this corresponds to the iteration in the orthonormalized basis, see Eq. (45). After the time evolution step (7), the vectors gathered as columns of a matrix are QR-factorized, as required by Eq. (8). The resulting matrices containing the backward Lyapunov vectors are stored. The nontrivial values of the derivative matrices are also stored. The backward pass (12), (13) is performed with the transposed modified Jacobian matrix, which is recovered via Eq. (44) using the stored derivative matrices. After each QR factorization (13), the forward Lyapunov vectors sitting in the columns of the orthogonal matrix are used together with the stored backward Lyapunov vectors to compute the matrix of inner products, see Eq. (15), and the angles, as explained in Sec. 2.
6 Hyperbolicity tests of particular systems
First we consider a generator of robust chaos based on the van der Pol oscillator with two delayed feedbacks Baranov10 ():
Here the dynamical variable is scalar; the parameter controlling the strength of the delayed feedback is supposed to be small; the excitation parameter is modulated with a certain amplitude around its middle level. The period of modulation is assumed to be large compared with the period of the natural oscillations. Note that this system was implemented as a real electronic circuit and studied experimentally Baranov10 ().
The oscillator is alternately activated and damped with the modulation period. The delay times are selected in such a way that every new activation stage is initiated by signals from the two previous activation stages. Then, the nonlinear transformation in the right-hand side of Eq. (46) provides a resonant term:
which stimulates the oscillations arising at the new activation stage, imposing its own phase on them. It means that from stage to stage the oscillation phase transforms according to the relation
and, respectively, the phase difference obeys the chaotic Bernoulli map
Since the only mechanism responsible for chaos in the system (46) is uniformly hyperbolic, the stroboscopic map of the system (46), sampled at multiples of the modulation period, is also expected to demonstrate robust hyperbolic chaos. A detailed analysis of its chaotic properties can be found in Ref. Baranov10 (). Below we verify its hyperbolicity using the fast method of angles. For this system, as well as for the two other systems discussed below, while doing the iterations (7) and (12) in the tangent space, we perform the QR factorizations (8) and (13) at each step of the discrete time.
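The hallmark of a Bernoulli-type phase map is exponential divergence of nearby phases. As an illustration (the doubling factor here is an assumption made for the sketch; the concrete expansion factor of the map in the source is fixed by the feedback construction), two phase-difference orbits started 10⁻⁹ apart separate by a factor of 2 per stage:

```python
import numpy as np

def bernoulli(psi):
    # Illustrative doubling-type Bernoulli map on the circle.
    return (2.0 * psi) % (2.0 * np.pi)

psi_a, psi_b = 0.1234, 0.1234 + 1e-9
for _ in range(20):
    psi_a, psi_b = bernoulli(psi_a), bernoulli(psi_b)

# Circular distance between the two orbits after 20 stages.
diff = abs(psi_b - psi_a)
d = min(diff, 2.0 * np.pi - diff)  # grew from 1e-9 by the factor 2**20
```

The per-stage expansion factor directly gives the positive Lyapunov exponent of the phase map as its logarithm.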
Consider the system (46) at two sets of the parameter values:
where the other parameters are fixed. In round brackets we specify the four largest Lyapunov exponents obtained numerically for the stroboscopic map of the system (46). In agreement with the previous discussion, the first Lyapunov exponents for both parameter sets are close to the value expected for the Bernoulli map, and the second ones are zero within the numerical error.
Due to the presence of a zero Lyapunov exponent, the tangent space of the considered stroboscopic map splits into three distinct subspaces: expanding, neutral and contracting. [2] Figure 1 provides the numerical confirmation that these three subspaces are disjoint. This and all subsequent figures have been plotted using the Matplotlib graphics package Hunter:2007 (). All diagrams are shown for three different time steps to illustrate the correspondence and convergence of the computational data to the continuous limit.

[2] Notice that, unlike autonomous systems, where the neutral subspace is related to invariance with respect to time shifts and is eliminated by reformulation in terms of the Poincaré section map, in our case the neutral subspace appears due to the symmetry of the map itself.
Figures 1(a,b) correspond to the parameter set (50a). Figure 1(a) shows that the distribution of the first angle is well separated from the origin, which indicates that the expanding subspace never touches the neutral and the contracting ones. Figure 1(b) shows that the second angle is also separated from the origin; it means that the sum of the expanding and neutral subspaces has no tangencies with the contracting subspace. Together, Figs. 1(a,b) show that the expanding, neutral and contracting tangent subspaces never touch each other, so that at any trajectory point the full tangent space can be represented as their direct sum, which corresponds to the main statement of the hyperbolicity concept.
The strict mathematical definition of uniform hyperbolicity for discrete-time systems (diffeomorphisms) requires the existence of the expanding and contracting subspaces only. Due to the presence of the neutral subspace, the stroboscopic map of the system (46) can technically be categorized as partially hyperbolic Pesin04 (). However, the strict isolation of the subspaces from each other indicates that the system nevertheless possesses the most important property, the robustness of chaos.
Let us now turn to another system with robust chaos, introduced in Ref. KuzPik08 () on the basis of an oscillator of the Stuart-Landau type
having in mind a possible implementation as a laser device. Here is a complex dynamical variable; and control the excitation, which is slowly modulated with the frequency ; and define the delay durations; the small parameter controls the strength of the delayed feedback; the asterisk denotes complex conjugation.
This system operates similarly to the previous one: it demonstrates activation and damping with the period . Since is small, the effect of the delayed signals is essential only at the beginnings of the activation stages. A proper choice of the delays provides transfer of the excitation by the signals from the two previous activation stages. Being nonlinearly transformed, these signals produce a resonant term whose phase depends on the two previous phases as
Here is the phase of oscillations at the th activation stage. The phase difference evolves according to the Bernoulli map:
This map has one positive Lyapunov exponent , and the map (52) additionally has a zero one due to the symmetry related to an arbitrary phase shift. As shown in Ref. KuzPik08 (), the stroboscopic map of the system (51) demonstrates robust chaos.
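To illustrate how the positive exponent of such a phase map is estimated, here is a sketch for a smooth expanding circle map close to the doubling map (the parameter a and the starting point are illustrative; the actual Bernoulli-type map of the phase difference is not reproduced here):

```python
import numpy as np

# A smooth expanding circle map close to the doubling map x -> 2x mod 1
# (a = 0 recovers it exactly); for this illustrative choice of a the
# derivative stays in [1.37, 2.63], so the map is uniformly expanding.
a = 0.1

def f(x):
    return (2.0 * x + a * np.sin(2.0 * np.pi * x)) % 1.0

def df(x):
    return 2.0 + 2.0 * np.pi * a * np.cos(2.0 * np.pi * x)

# Lyapunov exponent as the orbit average of ln|f'(x)|
x, acc, n = 0.1234567, 0.0, 100000
for _ in range(n):
    acc += np.log(abs(df(x)))
    x = f(x)
lyap = acc / n   # close to ln 2 for small a
```

For a = 0 the exponent is exactly ln 2; the small smooth perturbation shifts it only slightly, which is itself a manifestation of the structural stability discussed in the text.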
We consider the system (51) stroboscopically for two parameter sets:
The other parameters are , , , . The first Lyapunov exponents are close to . The second exponents are zero within the numerical error.
Because of the presence of a zero Lyapunov exponent, we again have to consider three tangent subspaces: the expanding, neutral and contracting ones. Figures 2(a,b) show the computational data for the parameter set (54a). Both and are well separated from the origin, so we can conclude that the three subspaces have no tangencies. As for the system (46), this means that the stroboscopic map of Eq. (51) possesses robust chaos and can be categorized as partially hyperbolic Pesin04 (). A similar analysis for the second parameter set (54b) confirms the same conclusion.
Our final example is an autonomous generator of robust chaos with two delays suggested in Ref. Arzh14 ():
Here and are dynamical variables, is a bifurcation parameter controlling the excitation, is the natural frequency, and the small parameter determines the strength of the delayed feedback.
The key idea of operation of this system is similar to the previous ones: activation stages alternate with damping ones, and the delayed signals provide the resonant seeds for every new excitation stage from the two preceding stages. In this case, however, the nonlinear transformation of the delayed signals is such that the evolution of the phases from stage to stage takes place according to a chaotic Anosov torus map. One more difference is that this system is autonomous: the alternation of the activation and damping stages occurs due to the internal dynamics of the system, without external modulation of parameters.
The chaotic properties of Eq. (55) are studied in detail in Ref. Arzh14 (). In particular, it is shown that the phase at each new excitation stage depends on the two previous phases according to the Fibonacci map
This map has two Lyapunov exponents, equal to the logarithms of the golden ratio and of the reciprocal golden ratio, respectively: .
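These two exponents can be read off directly from the constant Jacobian of the Fibonacci map; assuming its standard form (phi_{n+1}, phi_n) -> (phi_{n+1} + phi_n, phi_{n+1}) mod 2*pi, a quick check is:

```python
import numpy as np

# Constant Jacobian of the Fibonacci torus map in its standard form;
# the Lyapunov exponents are the logarithms of the moduli of its
# eigenvalues, (1 +/- sqrt(5)) / 2.
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])
eig = np.linalg.eigvals(M)
exponents = np.sort(np.log(np.abs(eig)))[::-1]
golden = (1.0 + np.sqrt(5.0)) / 2.0
# exponents approx [ln(golden), -ln(golden)]
```

Since det M = -1, the two exponents sum to zero, so the contracting exponent is exactly minus the logarithm of the golden ratio.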
To check the hyperbolicity of the system (55), we construct a Poincaré section map whose iterations correspond to successive excitation stages. We define this map using a section surface in the state space determined by the relation
Consider two sets of the parameter values:
The other parameters are , , . In round brackets we specify the four largest Lyapunov exponents obtained numerically for the corresponding Poincaré map of the system. Observe that the largest Lyapunov exponents in both cases correspond, as expected, to the logarithm of the golden ratio. Moreover, among the negative exponents one is approximately equal to the logarithm of the reciprocal golden ratio ( for the first case and for the second one). The second Lyapunov exponents are zero, as is typical for autonomous flow systems. (Though we consider the Poincaré map, i.e., a discrete-time system, the zero exponent still exists, since in the course of the computations we evaluate the Lyapunov vectors within the full tangent space. To eliminate the zero exponent we could project the Lyapunov vectors onto the section surface; however, this is not needed in the context of our consideration.)
For comparison, we also have computed the first four Lyapunov exponents for the original flow system (55):
Rows (59a) and (59b) correspond to the parameters (58a) and (58b), respectively. Observe that the largest exponents do not coincide, since the periods of excitation in the two cases are different. If we considered the trajectory and found the average return periods , we would obtain for both cases, in agreement with the computations for the Poincaré map.
Let us now turn to the hyperbolicity check. The initial system is autonomous, with a single zero Lyapunov exponent related to the symmetry with respect to shifts in continuous time. Introducing the Poincaré section, we exclude this symmetry, so the zero exponent disappears. This means that the hyperbolicity test in this case must include the computation of only one angle, between the expanding and contracting tangent subspaces. However, as already mentioned, this would require computing the projections of the Lyapunov vectors onto the section surface, which is computationally inefficient. Instead, we check whether the system (55) corresponds to an Anosov flow Anosov95 (); KatHas95 () on the section surface, which automatically implies the hyperbolicity of the corresponding Poincaré map.
For this purpose we again obtain numerically the distribution of the angles between the expanding and the neutral plus contracting subspaces, and the distribution of the angles between the expanding plus neutral and the contracting subspaces. Figures 3(a,b) and (c,d) show that the system is indeed hyperbolic in both cases (58a) and (58b).
Systems with hyperbolic chaotic attractors, including those with time delays, are of great potential importance for applications because of their intrinsic structural stability, which implies insensitivity of the generated chaos to parameters and functional characteristics of the components, to perturbations, noises, interferences, fabrication imperfections and so on. In this paper a method of computer verification of the hyperbolic nature of chaotic attractors has been developed in a form appropriate for systems with multiple time delays. The method is based on the calculation of the angles between the expanding, contracting and neutral manifolds of phase trajectories (the “angle criterion”).
Since for time-delay systems the phase space is infinite-dimensional, the contracting manifold is also infinite-dimensional. Therefore, the procedure is based on the use of the complement to the contracting tangent subspace. For tangent vectors related to the expanding and neutral subspaces, linearized equations with retarded argument are integrated along a reference trajectory on the attractor in forward time. The contracting subspace is identified with the use of the adjoint system of equations with deviating argument of advanced type, formulated within the framework of a specially developed mathematical justification of the technique. The integration of the adjoint equations is performed along the reference trajectory in reverse time. The obtained data make it possible to collect and analyze the statistics of the distributions of the intersection angles of the expanding, contracting and neutral subspaces in the tangent space of perturbation vectors of the reference trajectory. The absence of angles close to zero indicates hyperbolicity, while a nonzero probability of zero angles implies its violation. With the help of the proposed algorithm, the hyperbolic nature of chaos is substantiated for three examples of time-delay systems with two delays by presenting histograms of the angular distributions.
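The final step of the angle criterion, once the angles along the trajectory have been collected into an array, reduces to testing whether their distribution is separated from the origin. A minimal sketch, with an illustrative threshold and synthetic samples, is:

```python
import numpy as np

def looks_hyperbolic(angles, threshold=0.05):
    """True if no observed angle falls below the threshold (radians),
    i.e. the distribution is bounded away from zero."""
    return float(np.min(angles)) > threshold

# Synthetic illustration: angles clustered near pi/4 pass the test,
# while a sample that touches zero fails it.
rng = np.random.default_rng(1)
good = np.pi / 4 + 0.1 * rng.random(1000)
bad = np.concatenate([good, [0.001]])
```

In practice one inspects the histogram of the angles rather than a single threshold, but the decision is the same: support separated from zero indicates hyperbolicity, support reaching zero indicates its violation.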
The work of SPK on the theoretical formulations was supported by the Russian Science Foundation, grant No. 15-12-20035. The work of PVK on elaborating the computer routines and performing the numerical computations was supported by RFBR grant No. 16-02-00135.
- (1) S. Smale, Differentiable dynamical systems, Bull. Amer. Math. Soc. (NS) 73 (1967) 747–817.
- (2) D. V. Anosov (Ed.), Dynamical Systems 9: Dynamical Systems with Hyperbolic Behaviour, Vol. 9 of Encyclopaedia Math. Sci., Berlin: Springer, 1995.
- (3) A. Katok, B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, 1st Edition, Vol. 54 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, 1995.
- (4) A. A. Andronov, L. S. Pontryagin, Structurally stable systems, Dokl. Akad. Nauk SSSR 14 (1937) 247–250.
- (5) A. A. Andronov, S. E. Khaikin, A. A. Vitt, Theory of Oscillators, Pergamon Press, 1966.
- (6) S. P. Kuznetsov, Hyperbolic Chaos: A Physicist’s View, Higher Education Press: Beijing and Springer-Verlag: Berlin, Heidelberg, 2012.
- (7) S. P. Kuznetsov, Dynamical chaos and uniformly hyperbolic attractors: From mathematics to physics, Phys. Uspekhi. 54 (2) (2011) 119–144.
- (8) T. Vyhlídal, J. F. Lafay, R. Sipahi (Eds.), Delay Systems: From Theory to Numerics and Applications, Vol. 1, Springer Science & Business Media, 2013.
- (9) R. Bellman, C. Cooke, Differential-difference equations, Acad. Press, 1963.
- (10) A. Myshkis, Linear differential equations with retarded argument, Moscow, Nauka, 1972, in Russian.
- (11) L. El’sgol’ts, S. Norkin, Introduction to the theory and application of differential equations with deviating arguments, Acad. Press, 1973.
- (12) B. Dorizzi, B. Grammaticos, M. LeBerre, Y. Pomeau, E. Ressayre, A. Tallet, Statistics and dimension of chaos in differential delay systems, Phys. Rev. A 35 (1) (1987) 328–339.
- (13) J. Chrostowski, R. Vallee, C. Delisle, Self-pulsing and chaos in acoustooptic bistability, Can. J. Phys. 61 (8) (1983) 1143–1148.
- (14) S. Lepri, G. Giacomelli, A. Politi, F. T. Arecchi, High-dimensional chaos in delayed dynamical systems, Phys. D 70 (3) (1994) 235–249.
- (15) M. Mackey, L. Glass, Oscillation and chaos in physiological control systems, Science 197 (4300) (1977) 287–289.
- (16) J. D. Farmer, Chaotic attractors of an infinite-dimensional dynamical system, Phys. D 4 (1982) 366–393.
- (17) P. Grassberger, I. Procaccia, Dimensions and entropies of strange attractors from a fluctuating dynamics approach, Phys. D 13 (1) (1984) 34–54.
- (18) S. P. Kuznetsov, V. I. Ponomarenko, Realization of a strange attractor of the Smale-Williams type in a radiotechnical delay-feedback oscillator, Tech. Phys. Lett. 34 (9) (2008) 771–773.
- (19) S. P. Kuznetsov, A. Pikovsky, Attractor of Smale-Williams type in an autonomous time-delay system, Preprint arXiv:1011.5972 [nlin].
- (20) S. P. Kuznetsov, A. Pikovsky, Hyperbolic chaos in the phase dynamics of a q-switched oscillator with delayed nonlinear feedbacks, Europhys. Lett. 84 (2008) 10013.
- (21) S. V. Baranov, S. P. Kuznetsov, V. I. Ponomarenko, Chaos in the phase dynamics of Q-switched van der Pol oscillator with additional delayed-feedback loop, Izvestiya VUZ. Applied Nonlinear Dynamics (Saratov) 18 (1) (2010) 11–23, in Russian.
- (22) A. S. Kuznetsov, S. P. Kuznetsov, Parametric generation of robust chaos with time-delayed feedback and modulated pump source, Commun. Nonlinear Sci. Numer. Simul. 18 (2013) 728–734.
- (23) D. S. Arzhanukhina, S. P. Kuznetsov, Robust chaos in autonomous time-delay system, Izvestiya VUZ. Applied Nonlinear Dynamics (Saratov) 22 (2) (2014) 36–49.
- (24) Y. G. Sinai, Stochasticity of dynamical systems, in: A. V. Gaponov-Grekhov (Ed.), Nonlinear Waves, Nauka, Moscow, 1979, pp. 192–212, in Russian.
- (25) S. P. Kuznetsov, I. R. Sataev, Hyperbolic attractor in a system of coupled non-autonomous van der Pol oscillators: Numerical test for expanding and contracting cones, Phys. Lett. A 365 (1-2) (2007) 97–104.
- (26) A. S. Kuznetsov, S. P. Kuznetsov, I. R. Sataev, Parametric generator of hyperbolic chaos based on two coupled oscillators with nonlinear dissipation, Tech. Phys. 55 (12) (2010) 1707–1715.
- (27) D. Wilczak, Uniformly hyperbolic attractor of the Smale-Williams type for a Poincaré map in the Kuznetsov system, SIAM J. Applied Dynamical Systems 9 (2010) 1263–1283.
- (28) Y.-C. Lai, C. Grebogi, J. A. Yorke, I. Kan, How often are chaotic saddles nonhyperbolic?, Nonlinearity 6 (1993) 779–798.
- (29) P. V. Kuptsov, Fast numerical test of hyperbolic chaos, Phys. Rev. E 85 (2012) 015203.
- (30) Y. Hirata, K. Nozaki, T. Konishi, The intersection angles between N-dimensional stable and unstable manifolds in 2N-dimensional symplectic mappings, Prog. Theor. Phys. 102 (1999) 701–706.
- (31) V. S. Anishchenko, A. S. Kopeikin, J. Kurths, T. E. Vadivasova, G. I. Strelkova, Studying hyperbolicity in chaotic systems, Phys. Lett. A 270 (2000) 301–307.
- (32) S. P. Kuznetsov, Example of a physical system with a hyperbolic attractor of the Smale-Williams type, Phys. Rev. Lett. 95 (2005) 144101.
- (33) S. P. Kuznetsov, E. P. Seleznev, A strange attractor of the Smale-Williams type in the chaotic dynamics of a physical system, JETP 102 (2006) 355–364.
- (34) P. V. Kuptsov, S. P. Kuznetsov, Violation of hyperbolicity in a diffusive medium with local hyperbolic attractor, Phys. Rev. E 80 (2009) 016205.
- (35) S. P. Kuznetsov, Hyperbolic chaos in self-oscillating systems based on mechanical triple linkage: Testing absence of tangencies of stable and unstable manifolds for phase trajectories, Regular and Chaotic Dynamics 20 (6) (2015) 649–666.
- (36) S. P. Kuznetsov, V. P. Kruglov, Verification of hyperbolicity for attractors of some mechanical systems with chaotic dynamics, Regular and Chaotic Dynamics 21 (2) (2016) 160–174.
- (37) S. P. Kuznetsov, From Anosov dynamics on a surface of negative curvature to electronic generator of robust chaos, Izv. Saratov Univ. (N.S.), Ser. Physics 16 (2016) 131–144, in Russian.
- (38) P. V. Kuptsov, S. P. Kuznetsov, Numerical test for hyperbolicity of chaotic dynamics in time-delay systems, Phys. Rev. E 94 (2016) 010201. doi:10.1103/PhysRevE.94.010201.
- (39) P. V. Kuptsov, U. Parlitz, Theory and computation of covariant Lyapunov vectors, J. Nonlinear. Sci. 22 (5) (2012) 727–762.
- (40) F. Ginelli, P. Poggi, A. Turchi, H. Chaté, R. Livi, A. Politi, Characterizing dynamics with covariant Lyapunov vectors, Phys. Rev. Lett. 99 (2007) 130601.
- (41) C. L. Wolfe, R. M. Samelson, An efficient method for recovering Lyapunov vectors from singular vectors, Tellus A 59 (2007) 355–366.
- (42) A. Pikovsky, A. Politi, Lyapunov Exponents: A Tool to Explore Complex Dynamics, Cambridge University Press, 2016.
- (43) D. Pazó, I. G. Szendro, J. M. López, M. A. Rodríguez, Structure of characteristic Lyapunov vectors in spatiotemporal chaos, Phys. Rev. E 78 (1) (2008) 016209. doi:10.1103/PhysRevE.78.016209.
- (44) R. A. Horn, C. R. Johnson, Matrix analysis, Cambridge University Press, 2012.
- (45) L. Hogben (Ed.), Handbook of linear algebra, Chapman and Hall/CRC, 2013. doi:10.1201/b16113.
- (46) G. Benettin, L. Galgani, A. Giorgilli, J.-M. Strelcyn, Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems: a method for computing all of them. Part 1: Theory, Meccanica 15 (1) (1980) 9–20.
- (47) I. Shimada, T. Nagashima, A numerical approach to ergodic problem of dissipative dynamical systems, Prog. Theor. Phys. 61 (6) (1979) 1605–1616.
- (48) G. H. Golub, C. F. Van Loan, Matrix computations, Vol. 3, JHU Press, 2012.
- (49) B. Legras, R. Vautard, A guide to Lyapunov vectors, in: T. Palmer (Ed.), Predictability Seminar Proc., Vol. 1 of ECWF Seminar, European Centre for Medium-Range Weather Forecasts, Reading, United Kingdom, 1996, pp. 135–146.
- (50) A. Bellen, M. Zennaro, Numerical methods for delay differential equations, Oxford University Press, 2013.
- (51) J. D. Hunter, Matplotlib: A 2D graphics environment, Computing in Science & Engineering 9 (3) (2007) 90–95. doi:10.1109/MCSE.2007.55.
- (52) Y. B. Pesin, Lectures on partial hyperbolicity and stable ergodicity, European Mathematical Society, 2004.