Modern Geometry in Not-So-High Echelons of Physics:
Case Studies


In this mostly pedagogical tutorial article, a brief introduction to the modern geometrical treatment of fluid dynamics and electrodynamics is provided. The main technical tool is the standard theory of differential forms. In fluid dynamics, the approach is based on the general theory of integral invariants (due to Poincaré and Cartan). Since this material is still not considered common knowledge, the first chapter is devoted to an introductory and self-contained exposition of both the Poincaré version and Cartan’s extension of the theory. The main emphasis in the fluid dynamics part of the text is on explaining basic classical results on the vorticity phenomenon (vortex lines, vortex filaments etc.) in an ideal fluid. In the electrodynamics part, we stress the aspect of how different (in particular, rotating) observers perceive the same space-time situation. A suitable decomposition technique for differential forms proves to be useful for that. As a representative (and simple) example, we analyze Faraday’s law of induction (and explicitly compute the induced voltage) from this point of view.

02.40.-k, 47.10.A-, 47.32.-y, 03.50.De

Department of Theoretical Physics and Didactics of Physics,
Comenius University, Bratislava, Slovakia


Ideal fluid, barotropic flow, vortex lines, transport theorem, Helmholtz theorem, lines of solenoidal field, integral invariant, 3+1 decomposition, rotating frame, Faraday’s law


1 Introduction

Among theoretical physicists, modern differential geometry is typically associated with its “higher echelons”, like advanced general relativity, string theory, topologically non-trivial solutions in gauge theories, Kaluza-Klein theories and so on.

However, geometrical methods have also proved to be highly effective in several other branches of physics which are usually treated as more “mundane” or, to put it differently, as “not-so-high echelons” of theoretical physics. Good old fluid dynamics (or, more generally, dynamics of continuous media) and electrodynamics may serve as prominent examples.

Nowadays, some education in modern differential geometry (manifolds, differential forms, Lie derivatives, …) has become a standard part of the theoretical physics curriculum. After learning these things, however, the potential strength of this mathematics is rarely demonstrated in real physics courses.

Although I definitely do not advocate introducing modern geometry into “first round” physics courses (of, say, the above-mentioned fluid dynamics and electrodynamics), it seems to me that showing how it is really used in some “second round” courses might be quite a good idea. First, in this way some more advanced material in the particular subject may be explained in the simple and lucid way so typical of modern geometry. Second, from the opposite perspective, such an exposition is the best way to show how differential geometry itself really works.

If, on the contrary, this is not done, modern geometry remains segregated from real life and forced out into the above-mentioned “higher echelons”, with the natural consequence that, for the majority of students who put considerable energy into grasping this material in mathematics courses, all their work is completely in vain.

Now, a few words about the structure of this tutorial article.

In the fluid dynamics part, we restrict ourselves to an ideal (inviscid) fluid and, in addition, are only interested in the barotropic case (except for Ertel’s theorem, which is more general). Our exposition rests on the theory of integral invariants due to Poincaré and Cartan. I think this approach is well suited for treating the classical material concerning vorticity (like the Helmholtz and Kelvin theorems). Perhaps it is worth noting that we treat fluid dynamics in terms of “extended” space as the underlying manifold (i.e. the space where time is a full-fledged dimension rather than just a parameter).

In the electrodynamics part, we first derive the Maxwell equations in terms of differential forms in 4-dimensional space-time (this is achieved by learning the structure of general forms in Minkowski space and checking it against the standard 3-dimensional version of the equations). Then, in the second step, we introduce the concept of an observer field in space-time, intimately connected to the concept of a reference frame. Using an appropriate technique of 3+1 (space + time) decomposition of forms (and of operations on them) with respect to the observer field, we can easily compute what various observers “see” when they “look at the invariant space-time electromagnetic reality”. As an elementary example of this approach, we explicitly compute the relativistically induced electric field seen in the rotating frame of a wire rim, as well as its line integral along the rim (the induced voltage), in the Faraday’s law demonstration setting.

The reader is supposed to have a basic working knowledge of the modern geometry mentioned above (manifolds, differential forms, Lie derivatives, …; if not, perhaps the crash course [Fecko 2007] might help as first aid). Trickier material is explained in the text (and mostly a detailed computation is given when needed).

In Appendix A we collect useful formulas which relate expressions in the language of differential forms in 3-dimensional Euclidean space (as well as in 4-dimensional Minkowski space) to their counterparts in ordinary vector analysis. The ability to translate various expressions from one language to the other (to go back and forth at any moment) is essential for effective use of forms in both fluid dynamics and electrodynamics.

Appendices B and C are devoted to answering the question whether field lines of a solenoidal (divergence-free) vector field indeed cannot end (or start) inside the domain where they are defined (they can :-).

2 Integral invariants - what Poincaré and what Cartan

2.1 Motivation - why the topic appears here

The theory of integral invariants is a well-established part of geometry with various applications. It is probably best known from Hamiltonian mechanics.

Integral invariants were first formally introduced and studied by Poincaré in his celebrated memoir [Poincaré 1890]. He explained them in more detail in the book [Poincaré 1899]. Then the concept was developed by Cartan and summarized in his monograph [Cartan 1922].

What was Cartan’s contribution? Roughly speaking, while Poincaré considered invariants in phase space, Cartan studied these objects in extended phase space. This led him to a true generalization: one can associate, with each Poincaré invariant, a corresponding Cartan invariant. The latter proves to be invariant with respect to a “wider class of operations” (see more below). In addition, and this point of view will be of particular interest for us, going from the Poincaré version to the Cartan one may be regarded, in a sense, as going from a time-independent situation to a time-dependent one.

Since the theory of integral invariants is both instructive in its own right and used in Chapter 3, we placed it at the very beginning of the paper.

In Chapter 3, we first use its original Poincaré version in Section 3.2 and then Cartan’s extension, as a tool providing the non-stationary fluid dynamics equations from the known form of the stationary case in Sec. 3.3 and for gaining useful information from them in Sec. 3.4.

Remarkably, if we do it in this way, the resulting more general non-stationary equation looks simpler than the stationary one. In addition, its consequences, like the Helmholtz theorem on the behavior of vortex lines in an inviscid fluid, look very natural in this picture.

2.2 Poincaré

Let’s start with Poincaré invariants.

Consider a manifold endowed with dynamics given by a vector field


The field generates the dynamics (time evolution) via its flow . We will call the structure phase space


In this situation, let’s have a -form and consider its integrals over various -chains (-dimensional surfaces) on . Due to the flow corresponding to , the -chains flow away, . Compare the value of the integral of over the original and the integral over . If, for any chain , the two integrals are equal, this clearly reflects a remarkable property of the form with respect to the field . We call it an integral invariant:


Let’s see what this says for infinitesimal . Then


(plus, of course, higher order terms in ; here is Lie derivative along ). So, in the case of integral invariant, the condition


is to be fulfilled. Since this is to be true for each , the form (under the integral sign) itself in (2.2.5) is to vanish


This is the differential version of the statement (2.2.3).

There is, however, an important subclass of -chains, namely -cycles. These are chains whose boundary vanishes:


In specific situations, it may be enough that some integral only behaves invariantly when restricted to cycles. If this is the case, the condition (2.2.6) is overly strong. It can be weakened to the requirement that the form under the integral sign in (2.2.5) is exact, i.e.


for some form .

Indeed, in one direction, Eqs. (2.2.7) and (2.2.8) then give


so that (2.2.5) is fulfilled. In the opposite direction, if the integral (2.2.5) is to vanish for each cycle, the form under the integral sign is to be exact (due to de Rham theorem), so (2.2.8) holds.

According to whether the integrals of forms are invariant for arbitrary -chains or just for -cycles, integral invariants are divided into absolute invariants (for any -chains) and relative ones (just for -cycles). We can summarize what we have learned so far as follows:


Let’s see, now, what else we can say about relative integral invariants. The condition (2.2.8) may be rewritten (using ) as


(where ). Therefore it holds, trivially,


and so also the following main statement on relative invariants (reformulation of (2.2.11)) is true:


So we can identify the presence of a relative integral invariant in its differential version: on phase space , we see a form fulfilling either of the two equations mentioned in Eq. (2.2.13).

[Perhaps we should stress how the second equivalence sign is to be interpreted. There is no under the integral sign. Therefore, from the rightmost statement of Eq. (2.2.14), it is not possible to reconstruct any particular , present in the leftmost statement. So one should read the second equivalence sign, in particular its right-to-left direction, as the assertion that, provided the rightmost statement holds, there exists a form such that the leftmost statement is true. (And, of course, one should adopt the same attitude with respect to the middle statement and .)]

Notice that, as a consequence of Eq. (2.2.8), we also get the equation


This says, however (see Eq. (2.2.10)), that the integral of is an absolute integral invariant. So, if we find a relative invariant given by , then provides an absolute invariant:


(here , whereas may not vanish).

Conversely, if we find an absolute invariant then it is, clearly, also a relative one (if something is true for all chains then it is, in particular, true for closed chains, i.e. for cycles). Absolute invariants thus form a subset of relative invariants, and the exterior derivative maps relative invariants into absolute invariants.

[Notice that whenever we find a “good” triple (i.e. holds), we can generate, for the same dynamics , a series of additional “good” triples


(check) so that we get a series of relative invariants


(Here are cycles of appropriate dimensions. For deg = odd and we get, clearly, vacuous statements, since .)]
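Though the text works with general forms, the simplest instance of an absolute invariant is easy to check numerically. The following sketch is ours, not from the text: we pick the divergence-free rotation field v = (-y, x) on the plane and an off-center ellipse as the boundary of a 2-chain, and verify that the integral of the area 2-form over the evolving chain stays constant.

```python
import numpy as np

# Minimal numerical sketch (illustrative choices of ours): for the rotation
# field v = (-y, x) on the plane, which is divergence-free, the area 2-form
# is an absolute integral invariant -- the area enclosed by ANY evolving
# 1-cycle stays constant along the flow.

def flow(points, t):
    """Exact flow of v = (-y, x): rotation by angle t."""
    c, s = np.cos(t), np.sin(t)
    x, y = points[:, 0], points[:, 1]
    return np.column_stack((c * x - s * y, s * x + c * y))

def enclosed_area(cycle):
    """Signed area via the shoelace formula (integral of the 2-form dx^dy)."""
    x, y = cycle[:, 0], cycle[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# an off-center ellipse as the boundary of the initial 2-chain
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
cycle0 = np.column_stack((1.5 + 0.8 * np.cos(theta), 0.3 + 0.4 * np.sin(theta)))

area0 = enclosed_area(cycle0)
area1 = enclosed_area(flow(cycle0, t=0.9))
print(area0, area1)  # the two areas agree
```

For a field with non-vanishing divergence the two areas would differ, in accordance with the differential criterion for absolute invariance.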

Example 2.2.1: Consider Hamiltonian mechanics (for now the autonomous case, i.e. with the Hamiltonian independent of time). Here the dynamical field is the Hamiltonian field given by the equation


(see Ch. 14 in [Fecko 2006]). Comparison with Eq. (2.2.12)


reveals that a good is the 1-form . The role of the corresponding form (potential) is played by the (minus) Hamiltonian .

[Notice that this property of is actually true w.r.t. the field for arbitrary , i.e. w.r.t. a whole family of dynamical fields on . So, in this particular realization of the triple , a single is good for a whole family of dynamical vector fields (namely, for all Hamiltonian fields).]

According to Eqs. (2.2.17) - (2.2.20), we have also additional triples , given as

etc. (2.2.27)

(where ) and, consequently, relative integral invariants


Because of Eq. (2.2.16), we can also deduce that


are absolute integral invariants. The end of Example 2.2.1.
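The relative invariant of Example 2.2.1, the circulation of p dq over a 1-cycle, can be checked numerically. In the sketch below the harmonic oscillator H = (p^2 + q^2)/2 is our own illustrative choice of autonomous Hamiltonian (the flow is then an exact rotation of the phase plane):

```python
import numpy as np

# Hedged numerical sketch of Example 2.2.1: for the harmonic oscillator
# H = (p^2 + q^2)/2 (our illustrative choice), the relative invariant --
# the circulation of p dq over a 1-cycle -- is preserved when the cycle
# is dragged along by the Hamiltonian flow.

def hamiltonian_flow(q, p, t):
    """Exact solution of q' = p, p' = -q."""
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

def loop_integral_p_dq(q, p):
    """Midpoint-rule approximation of the circulation of p dq over a closed curve."""
    dq = np.roll(q, -1) - q
    p_mid = 0.5 * (p + np.roll(p, -1))
    return np.sum(p_mid * dq)

theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
q0 = 1.0 + 0.5 * np.cos(theta)          # a cycle NOT centered at the origin
p0 = 0.2 + 0.5 * np.sin(theta)

q1, p1 = hamiltonian_flow(q0, p0, t=1.7)

I0 = loop_integral_p_dq(q0, p0)
I1 = loop_integral_p_dq(q1, p1)
print(I0, I1)  # both equal (minus the enclosed area, here -pi * 0.5**2)
```

The common value is (minus) the phase-plane area enclosed by the cycle, which is exactly what the corresponding absolute invariant measures.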

2.3 Life on

In order to clearly understand Cartan’s contribution to the field of integral invariants (i.e. Sections 2.4 and 2.5), a small technical digression might be useful. What we need to understand is how differential forms (as well as vector fields) on and are related.

It is useful to interpret the -factor as time axis added to . Then, if is phase space (see Eq. (2.2.2)), we call extended phase space


On , a -form may be uniquely decomposed as


where both and are spatial, i.e. they do not contain the factor in their coordinate presentation (the property of being spatial is denoted by the hat symbol here). Simply, after writing the form in adapted coordinates , i.e. in those where come from and comes from , one groups together all terms which do contain once and, similarly, all terms which do not contain at all (there is no other possibility :-).

Since spatial forms and do not contain , they look at first sight (when written in coordinates) as if they lived on (rather than on , where they actually live).

Notice, however, that still can enter components of any form. (And spatial forms are no exception.) We say that and are, in general, time-dependent.

Therefore, when performing the exterior derivative of a spatial form, say , there is a part, , which does not take into account the -dependence of the components (if any; as if were performed just on ), plus a part which, on the contrary, sees the variable alone. (In Sec. 4, we encounter a more complicated version of .) Putting this together, we have


Then, for a general form (2.3.2), we get


Notice that the resulting form also has the general structure given in Eq. (2.3.2).

Consider now an important particular case. There is a natural projection


We can use it to pull-back forms from onto


From the coordinate presentation of we see that any form on which results from such a pull-back from is

1. spatial

2. time-independent

And also the converse is clearly true: if a form on is both spatial and time-independent, then there is a unique form on such that the form under consideration can be obtained as pull-back of the form on (just think in coordinates; the coordinate presentation of the two forms, in adapted coordinates and , coincides).

[The two properties may also be expressed more invariantly:


(notice that the vector field as well as the -form are canonical on ).]

Take two such forms. Since they are spatial, we can denote them by and (let their degrees be and , respectively; the un-hatted forms and live on ) and compose a form on according to Eq. (2.3.2).

Is the resulting -form , for the most general choice of and on , i.e. the form


the most general -form on ? No, it is not, because of property 2. of the forms and . The forms and obtained in this particular way (as pull-backs of some and on ) are necessarily time-independent, whereas in general the two forms which figure in the decomposition (2.3.2) need not be such; all that is strictly needed is property 1., that they be spatial.

We can summarize the message of this part of the section by the following statements: St.2.3.1.: Any form on decomposes according to Eq. (2.3.2)

St.2.3.2.: The forms and , resulting from Eq. (2.3.2), are spatial

St.2.3.3.: The forms and are not necessarily time-independent

St.2.3.4.: A form on is both spatial and time-independent
iff it is pull-back from

If and (in the decomposition (2.3.2) of a general form on ) are time-dependent, a useful point of view (especially in physics) is to regard them as time-dependent objects living on .

[In this case, however, is no longer a coordinate; it becomes “just a” parameter. The point of Sec. 2.3 is, on the contrary, that going from to may simplify life in that we get standard forms on rather than forms on carrying “parameter-like” labels.]

And what about vector fields on ? The situation is similar to forms: a general vector field may be uniquely decomposed into temporal and spatial parts


If and do not depend on time, the field on corresponds to a pair consisting of a scalar field and a vector field on ; otherwise a useful point of view is to regard as a pair consisting of a time-dependent scalar field and a time-dependent vector field, respectively, on .

In particular, consider a vector field of the structure


with time-independent components . Its flow, taking place on the extended phase space , combines a trivial flow along the temporal factor with an independent flow on the phase-space factor , given by the vector field living on . This can also be used in the opposite direction: the dynamics on given by a vector field on (the situation considered in Sec. 2.2) may be equivalently replaced by dynamics on , governed by the vector field (2.3.11). (If is the solution on , the solution of the original dynamics on is simply given by the projection of the result onto the factor, i.e. as .)
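This equivalence is easy to demonstrate with a toy computation. In the sketch below the one-dimensional field v(x) = x is our own illustrative choice; we integrate the dynamics once on the phase-space factor alone and once on extended phase space with the extra unit component along the time axis, and check that projecting the latter reproduces the former.

```python
import numpy as np

# Sketch of the correspondence at the end of Sec. 2.3: dynamics given by a
# vector field on phase space can equivalently be run on extended phase
# space, where the lifted field has an extra unit component along the time
# axis.  The 1-d field v(x) = x is our illustrative choice (not from the
# text); both versions are integrated with plain Euler steps.

def step_phase_space(x, dt):
    return x + dt * x                    # x' = v(x) = x

def step_extended(t, x, dt):
    return t + dt * 1.0, x + dt * x      # extra equation: t' = 1

x_ps = 1.0
t_ext, x_ext = 0.0, 1.0
dt, n = 1e-4, 20000                      # integrate up to t = 2

for _ in range(n):
    x_ps = step_phase_space(x_ps, dt)
    t_ext, x_ext = step_extended(t_ext, x_ext, dt)

print(t_ext, x_ps, x_ext)  # t = 2, and the two x's coincide
```

The projection onto the spatial factor (dropping t_ext) gives exactly the phase-space solution, and both approximate the exact value exp(2).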

2.3.1 Digression: Reynolds transport theorem(s)

Let’s use the formalism introduced in Section 2.3 for a proof of a classical theorem (see [Reynolds 1903]), which is still widely used in applications.

Consider a spatial (possibly time-dependent) -form on (i.e. a -form in Eq. (2.3.2) with ). Fix a spatial -chain in the hyper-plane (a -dimensional surface whose points lie in the hyper-plane) and let be its image w.r.t. the flow , where with spatial (possibly time-dependent) . (Notice that is spatial as well; it lies in the hyper-plane with time coordinate fixed to .) Then the integral of over is a function of time (because of the time-dependence of both and ) and one may be interested in its time derivative. By a standard computation (for the last-but-one equality sign, see 4.4.2 in [Fecko 2006]) we get



The details of (…) are of no interest, since the term does not survive integration over the spatial surface (because of the presence of the factor ). Therefore, when this expression is plugged into Eq. (2.3.12) and the Stokes theorem is applied to the last term, we immediately get the desired general “transport theorem” in the form


Let us specify the result for the usual -dimensional Euclidean space, . Here, we have the following expressions representing general spatial -forms


for and , respectively. Therefore, we have as many as four versions of the transport theorem here (a separate version for each ). Namely, using well-known formulas from vector analysis in the language of differential forms in (see Tab. 2.1), Eq. (2.3.13) takes the following four forms (so we get the classical Reynolds transport theorems):

Table 2.1: Relevant operations on (possibly time dependent) differential forms in (see Appendix A or, in more detail, Sections 8.5 and 16.1 in [Fecko 2006]).


For Eq. (2.3.15), recall that integral of a -form over a point is defined as (evaluation of at ). So, the integral at the l.h.s. of Eq. (2.3.13) reduces to evaluation of at .

In (2.3.16), is a (spatial) curve (at time ) connecting and , so that .

In Eq. (2.3.17), is a (spatial) surface (at time ) with boundary ; see e.g. §13.5 in [Nearing 2010] and the end of Sec. 4.4.3 here.

In Eq. (2.3.18), is a volume (at time ) with boundary ; in fluid dynamics, it is often referred to as material volume (no mass is transported across the surface that encloses the volume).
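A one-dimensional toy version of the last (material volume) transport theorem can be verified numerically. All concrete ingredients below (the field v(x) = x, the function f = x^2 + t and the material interval) are our own illustrative choices, picked so that the flow x -> x e^t is exact.

```python
import numpy as np

# Numerical sketch of a 1-d instance of the transport theorem: for a
# material interval carried by the velocity field v(x) = x, the time
# derivative of the integral of f(x,t) = x**2 + t equals the integral of
# df/dt + d(f*v)/dx over the instantaneous interval.

def trap(y, x):
    """Trapezoid rule (kept explicit to stay self-contained)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def material_integral(t, n=20001):
    a, b = np.exp(t), 2.0 * np.exp(t)          # endpoints dragged by the flow
    x = np.linspace(a, b, n)
    return trap(x**2 + t, x)                   # integral of f over the moving interval

def rhs(t, n=20001):
    a, b = np.exp(t), 2.0 * np.exp(t)
    x = np.linspace(a, b, n)
    return trap(1.0 + 3.0 * x**2 + t, x)       # df/dt + d(f v)/dx = 1 + 3 x^2 + t

t0, h = 0.3, 1e-5
lhs = (material_integral(t0 + h) - material_integral(t0 - h)) / (2.0 * h)
print(lhs, rhs(t0))  # the two sides agree
```

The left-hand side is computed by a central difference of the transported integral; analytically both sides equal 7 e^{3t} + (1 + t) e^t at time t.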

2.4 Poincaré from Cartan’s perspective

In this section, we present Cartan’s point of view on (2.2.12) and (2.2.14).

First, we switch to extended phase space and just retell, there, the story considered in Sec. 2.2. In the end, surprisingly, even at this stage of the game, we get more than we learned in Sec. 2.2.

We know from Eq. (2.3.11) that rather than to study the (dynamics given by the) vector field on , we may (equivalently) study, on , the (dynamics given by the) field


Now, our on satisfies , i.e. Eq. (2.2.12). Cartan succeeded in finding an equation on which, in terms of the field , says the same. Construction of the resulting equation is as follows:

First, pull-back the forms and (w.r.t. the natural projection ) and get spatial and time-independent forms and on (see Eq. (2.3.6) and the text following the equation).

Second, combine them to produce the -form (à la Eq. (2.3.9)):


Third, check that


holds on if and only if is true on .

Recall that vanishes on and . Then, using Eq. (2.3.4),

and, due to Eq. (2.2.12),

since we get from (2.2.12)

So indeed



So far, we have just rewritten Eq. (2.2.12), which is a statement about something happening on phase space, into the form given in (2.4.3), which is a statement about something happening on extended phase space.

And what is the switch from phase space to extended phase space good for?

In the first step, it reveals (as early as here, in Sec. 2.4) that, using Poincaré’s assumptions alone, a statement about invariance more general than (2.2.14) already holds.

And in addition, in the second step (which we study in detail in Sec. 2.5), the structure of Eq. (2.4.3) provides a hint to further generalization of Eq. (2.2.12), such that the new, more general, statement still will be true.

So let us proceed to the first step. In extended phase space , consider integral curves of the field , i.e. the time development curves.

[Formally, time development of points in extended phase space is meant, here. In applications, the points may have various concrete interpretations. In fluid dynamics, as an example, the points correspond to positions of infinitesimal amounts of mass of the fluid, so the curves correspond to the “real” motion of the fluid, whereas in Hamiltonian mechanics the points correspond to (abstract, pure) states of the Hamiltonian system.]

Concentrate on a family of such integral curves given as follows: Let their “left ends” emanate from a -cycle on (i.e. the points of the -cycle serve as initial values needed for the first-order dynamics given by ) and “right ends” terminate at a -cycle on . The family of such curves forms a -chain (surface) , whose boundary consists of precisely the two cycles (closed surfaces) and


We say that the integral curves “connecting” the cycles and form a tube, and the cycle encircles the tube. Then, clearly, the cycle encircles the same tube that does (see Fig. 2.1).

Figure 2.1: The cycles and encircle the same tube of integral curves of the vector field on extended phase space ; in general, they do not lie in hyper-planes of constant time.

[Here is an example of how such a surface may be constructed (first a very special one, then its reshaping into a general one): take, at time , a -cycle in phase space . We regard it as a -cycle in the extended phase space which, by accident, lies completely in the hyperplane . Now we let all its points evolve in time (according to the dynamics given by ). At time the family of curves produces, clearly, a new -cycle in extended phase space , now lying completely in the hyperplane . The points of the curves of time evolution between times and together form a -dimensional surface (still rather special; see Fig. 2.2).

Figure 2.2: The cycles and lie in hyper-planes of constant time and encircle the same tube of integral curves of the vector field on extended phase space .

If we proceed along the lines above, the two boundary cycles do lie in the hyper-surfaces of constant time. In general, however, this is not required: the boundary cycle (as well as ) may be any cycle in , i.e. it may contain points at different times. Such a more general surface may be produced from the particular one described above as follows. We let the points of the particular flow along integral curves of the field , with the parameter of the flow, however, depending (smoothly) on the point on . What we get in this way still remains a cycle; its points, however, do not, in general, have the same value of the time coordinate (see Fig. 2.1).]

And the statement (already due to Cartan) is that the integral of the form is a relative integral invariant, which now means the following:


where and are any two cycles encircling a common tube.

The proof is amazingly simple:


The second equality (saying that the surface integral actually vanishes) results from a clever observation about what an elementary contribution to the integral looks like: at each point, is locally spanned by two vectors tangent to the surface, and one of them may be chosen to be the vector . So, in the process of integration of over , one sums terms of the structure


Any such term, however, vanishes due to the key equation (2.4.3).

Therefore, the analogue of Eq. (2.2.14) is the statement:


If, already at this stage, we compare the statement of Poincaré (2.2.14) with the corresponding one due to Cartan, (2.4.11) and (2.4.7), we see that Cartan’s is stronger.

For, if both cycles in Cartan’s statement are special, namely such that they lie in hyper-surfaces of constant time, we simply return to the Poincaré statement (from the form , it is enough to take seriously the part , since the factor vanishes on the special integration domains under consideration). If we use, however, the general cycles allowed by Cartan, we get a brand new statement, not mentioned at all by Poincaré.

Actually, in Sec. 2.5 we will see that the statement encoded in Eq. (2.4.11) can be given an even stronger meaning.

Example 2.4.1: Let’s return to Hamiltonian mechanics once again (still the autonomous case, i.e. with the Hamiltonian independent of time). Putting together the concrete objects from (2.2.24) and the general recipe from (2.4.2), we get the form as follows


The dynamical field becomes


and Hamilton equations take the form


Cartan’s general statement (2.4.7) is realized as follows:


(where and encircle the same tube of solutions, so the situation is represented by Fig. 2.1)

If we choose the cycles and in constant time hyperplanes (then results from time development of the cycle ), we get the original Poincaré statement


(here, Fig. 2.2 is appropriate). The end of Example 2.4.1.
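Cartan's statement (2.4.7) for this example can be tested numerically with a cycle that does not lie in a constant-time hyperplane (the situation of Fig. 2.1). The concrete Hamiltonian H = (p^2 + q^2)/2 and the tilt function below are our own illustrative choices.

```python
import numpy as np

# Hedged numerical check of Cartan's statement (2.4.7) for the harmonic
# oscillator (H = (p^2 + q^2)/2, our illustrative choice).  Cycle 1 lies in
# the t = 0 hyperplane; cycle 2 is obtained by sliding each point of cycle 1
# along its own solution curve for a point-dependent time s(theta), so its
# points have DIFFERENT time coordinates.  The integrals of the form
# p dq - H dt over the two cycles should coincide.

def flow(q0, p0, s):
    """Exact harmonic-oscillator flow: q' = p, p' = -q."""
    return q0 * np.cos(s) + p0 * np.sin(s), p0 * np.cos(s) - q0 * np.sin(s)

def cartan_integral(t, q, p):
    """Midpoint approximation of the circulation of p dq - H dt."""
    H = 0.5 * (p**2 + q**2)
    mid = lambda a: 0.5 * (a + np.roll(a, -1))
    dq, dt = np.roll(q, -1) - q, np.roll(t, -1) - t
    return np.sum(mid(p) * dq - mid(H) * dt)

theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
q0 = 1.0 + 0.5 * np.cos(theta)
p0 = 0.5 * np.sin(theta)

# cycle 1: constant time t = 0
I1 = cartan_integral(np.zeros_like(theta), q0, p0)

# cycle 2: point-dependent flow parameter -> a "tilted" cycle on the same tube
s = 0.4 * np.sin(theta) + 0.7
q2, p2 = flow(q0, p0, s)
I2 = cartan_integral(s, q2, p2)

print(I1, I2)  # equal up to discretization error
```

For the constant-time cycle the dt term drops out, so I1 reduces to the Poincaré circulation of p dq; the tilted cycle reproduces the same value.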

2.5 Cartan from Cartan’s perspective

At the end of Sec. 2.4 we learned that Cartan’s first generalization of the statement of Poincaré consisted in the observation that switching from phase space to extended phase space and, at the same time, augmenting the differential form under the integral sign


(where is from ) enables one to extend the class of cycles for which the integral is invariant (namely from cycles which reside completely in hyper-planes of constant time, à la Fig. 2.2, to cycles whose points may have different values of the time coordinate, à la Fig. 2.1; what remains compulsory is just that both cycles encircle a common tube of trajectories in extended phase space).

However, according to Cartan, there is still a further way in which the situation may be generalized.

Recall that the forms and on , occurring in the formula (2.4.2), were just the forms and (defined in Eq. (2.2.14)) pulled back from


w.r.t. the natural projection


(So, no new input was added in comparison with the situation in Sec. 2.2 considered by Poincaré.) Because of this fact, the forms and are both spatial and time-independent (see the discussion near Eq. (2.3.9)).

Let us now focus our attention on the role of the time-independence of the forms. Imagine that the forms and in the decomposition (2.4.2) were time-dependent (i.e., according to Eq. (2.3.2), that was a general -form on the extended phase space ). Does it mean that the integrals of the form over cycles encircling a common tube of solutions cease to be equal?

When we return to the (“amazingly simple”) proof given in Eqs. (2.4.8) and (2.4.9), we see that the only fact used was the validity of Eq. (2.4.3), i.e. (see the l.h.s. of (2.4.11)). Therefore Cartan’s variant of the statement concerning integral invariants still holds.

The “decomposed version” of the equation , however, now gets a bit more complex than . Namely, if we (re)compute the expression (not assuming time-independence, contrary to the computation between (2.4.2) and (2.4.4), where time-independence was used!) and equate it to zero, we get


So, we can say that


Notice that a new term,


emerges in the equation, in comparison with the time-independent case (2.4.4), (2.4.5). It is also worth noticing that the time derivative of the other form, , is absent from the resulting equation.

Repeating once more the computation between (2.4.2) and (2.4.4), not assuming, however, the validity of (2.3.8), we get:


Equating this to zero is equivalent to writing down as many as two spatial equations


The second equation is, however, a simple consequence of the first one (just apply on the first), so it is enough to consider the first equation alone.

Thus what Cartan added (as the second generalization of Poincaré) was the possible dependence of spatial forms on time. Then, however, one must not forget, when writing the spatial version of the elegant equation , to add the time-derivative term .

So we conclude the section by stating the final Cartan’s result:


where the last statement means, in detail,


Similarly, one can write down a corresponding statement concerning absolute invariant obtained by integration of the exterior derivative of :


Proof 1.: Plug , into Eq. (2.5.15) and use Stokes theorem.
Proof 2.: Start from scratch: consider a dynamical vector field on a manifold . (So integral curves of are “solutions” and they define the dynamics on .) Let satisfy where is a -form on . Now, consider , the solid tube of solutions. By this we mean the -dimensional domain enclosed by the hollow -dimensional tube of solutions and two -dimensional “cross section” surfaces and , see Fig. 2.3. So (and ). Then


But the integral over vanishes (due to the argument mentioned in Eq. (2.4.10)) and we get Eq. (2.5.16).

Figure 2.3: is a solid cylinder (the solid tube inside) made of solutions emanating from the left cap and entering the right cap . The boundary of the solid cylinder consists of 3 parts: the hollow cylinder (the “side” of the solid cylinder) and the two caps, and . The cycles and are the boundaries of the caps, and . They encircle the same tube of integral curves of the vector field on a manifold .

Example 2.5.1: Third time is the charm - let’s return again to Hamiltonian mechanics. But now, for the first time, let’s graciously allow a time-dependent Hamiltonian , i.e. let’s consider the general, non-autonomous case.

From the identification (cf. (2.2.24))

we see that, in spite of our generous offer, the form shows a complete lack of interest in depending on time. This is not the case, however, for : there we see a sincere interest to firmly grasp the chance of a lifetime. But since only the time dependence of matters for the resulting equation (2.5.4), the spatial version of the Hamilton equations


remains, formally, completely intact,


(Its actual time dependence is unobtrusively hidden inside , and it penetrates, via equation (2.5.19), into the vector field and, in the end, into the dynamics itself.)

[We know that if we write down Hamilton equations “normally”, as


there is no visible formal difference, in the time-dependent Hamiltonian case, with respect to the case when the Hamiltonian does not depend on time. Of course, after unwinding the equations (performing explicitly the partial derivatives) the equations get more complicated (since they are non-autonomous), but prior to the unwinding there is no extra term because of time-dependent Hamiltonian.]

Cartan’s general statement (2.4.7) is still (also in the non-autonomous case) realized as follows:


if and encircle a common tube of solutions. And Eq. (2.5.16) adds that


if and cut (enclose) a common solid tube of solutions. The end of Example 2.5.1.
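The non-autonomous claim can be tested numerically too. In the sketch below the driven oscillator H = (p^2 + q^2)/2 - q cos 2t is our own illustrative choice (the text fixes no particular Hamiltonian); the non-autonomous Hamilton equations are integrated with a standard RK4 step whose total time depends on the point, producing a tilted cycle on the same tube of solutions.

```python
import numpy as np

# Sketch of Example 2.5.1's point: Cartan's invariance survives a
# time-DEPENDENT Hamiltonian.  We take the driven oscillator
# H = (p^2 + q^2)/2 - q*cos(2 t) (an illustrative choice of ours) and
# compare the integral of p dq - H dt over a constant-time cycle with that
# over a "tilted" cycle obtained by flowing each point for its own time.

def hamilton_rhs(t, q, p):
    return p, -q + np.cos(2.0 * t)       # q' = dH/dp, p' = -dH/dq

def H(t, q, p):
    return 0.5 * (p**2 + q**2) - q * np.cos(2.0 * t)

def flow(q, p, s, m=400):
    """RK4 with a point-dependent total time s (array), m steps per point."""
    t, h = np.zeros_like(s), s / m
    for _ in range(m):
        k1q, k1p = hamilton_rhs(t, q, p)
        k2q, k2p = hamilton_rhs(t + h/2, q + h/2*k1q, p + h/2*k1p)
        k3q, k3p = hamilton_rhs(t + h/2, q + h/2*k2q, p + h/2*k2p)
        k4q, k4p = hamilton_rhs(t + h, q + h*k3q, p + h*k3p)
        q = q + h/6 * (k1q + 2*k2q + 2*k3q + k4q)
        p = p + h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        t = t + h
    return t, q, p

def cartan_integral(t, q, p):
    mid = lambda a: 0.5 * (a + np.roll(a, -1))
    dq, dt = np.roll(q, -1) - q, np.roll(t, -1) - t
    return np.sum(mid(p) * dq - mid(H(t, q, p)) * dt)

theta = np.linspace(0.0, 2.0 * np.pi, 8000, endpoint=False)
q0, p0 = 1.0 + 0.5 * np.cos(theta), 0.5 * np.sin(theta)

I1 = cartan_integral(np.zeros_like(theta), q0, p0)      # cycle at t = 0
t2, q2, p2 = flow(q0, p0, s=0.5 + 0.3 * np.sin(theta))  # tilted cycle
I2 = cartan_integral(t2, q2, p2)

print(I1, I2)  # equal up to discretization error
```

Note that, exactly as the text says, no extra term appears in the Hamilton equations themselves; the time dependence enters only through the Hamiltonian.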

And finally, let us make the following remark concerning absolute integral invariants. Recall that, still at the level of Poincaré (i.e. of Sec. 2.2), absolute and relative invariants differ in that the Lie derivative either vanishes (for absolute invariants, Eq. (2.2.6)) or is just exact, (for relative ones, Eq. (2.2.8)). The relative case was then rewritten into the form using the identity . Notice, however, that the same identity enables one to write the “absolute” condition in the form of the “relative” one ; one just needs to put


Then, when switching to Cartan’s approach (including time dependence of spatial forms), we are to make the corresponding changes in all formulas of interest. We get, in this way, the following “absolute invariant” version of the original “relative invariant” statement given in Eqs. (2.5.5) and (2.5.6):


where the following abbreviation


was introduced.

For the new definition one just replaces the corresponding form; the rest remains intact. For the new spatial version we get

Warning: notice that

(since whereas ; the hat matters :-). Therefore

i.e. the operator acting on in Eq. (2.5.24) should not be confused with .

[Like in the computation of the spatial exterior derivative (see Eq. (2.3.3)), the spatial Lie derivative (of a spatial form) simply does not take into account the time dependence of the components (if any; it acts as if it were performed just on the spatial manifold). Here, however, we speak of the time dependence of the components of both objects involved.]
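In coordinates this distinction can be checked by brute force. A minimal sketch (our own illustrative choice of $f = t\,x^2$ and $v = x$, not from the text): the spatial Lie derivative of the 1-form $f\,dx$ along $v\,\partial_x$ has component $v\,\partial_x f + f\,\partial_x v$ with $t$ frozen, while the time-dependent operator adds the explicit $\partial f/\partial t$ on top of it.

```python
# pure-stdlib finite-difference check; f = t*x**2 and v = x are our own
# illustrative choices (not from the text)
def f(x, t): return t * x**2     # component of the spatial 1-form f dx
def v(x):    return x            # spatial vector field v d/dx

def d(g, u, h=1e-6):             # central-difference derivative
    return (g(u + h) - g(u - h)) / (2*h)

x0, t0 = 2.0, 1.0
fx = d(lambda x: f(x, t0), x0)   # df/dx with t frozen (t is a mere parameter)
ft = d(lambda t: f(x0, t), t0)   # df/dt
vx = d(v, x0)

L_spatial = v(x0)*fx + f(x0, t0)*vx   # spatial Lie derivative: v f_x + f v_x
L_full    = ft + L_spatial            # operator with the extra d/dt term

# here L_spatial = 3 t x^2 = 12, whereas L_full = x^2 + 3 t x^2 = 16
```
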

2.6 Continuity equation

Let’s start with the time-independent case.

On $M$ one often encounters a volume form $\Omega$, i.e. an everywhere non-vanishing differential form of maximum degree. Then we define the volume of a domain $D \subset M$ as

$$\operatorname{vol} D := \int_D \Omega \qquad (2.6.1)$$
Let $\rho$ be the density of some scalar quantity on $M$. For concreteness, let’s speak of mass density. Then

$$(\text{mass in } D) = \int_D \rho\,\Omega \qquad (2.6.2)$$
(Clearly, we can treat other scalar quantities, like, say, electric charge, entropy, number of states etc., in the same way.)

Now, what do we mean by the statement that mass (or the scalar quantity in question) is conserved? Well, precisely that the integral in Eq. (2.6.2) is to be promoted, in the particular theory under discussion, to an absolute integral invariant:

$$\int_{\Phi_t(D)} \rho\,\Omega = \int_D \rho\,\Omega \qquad (2.6.3)$$
[Notice that it is the integral in Eq. (2.6.2), rather than that in Eq. (2.6.1), which is to be treated as an integral invariant. The volume of some particular domain may change in time (except for very special cases, see Eq. (2.6.16)), but the mass encompassed by the domain is to be constant, since the velocity field is assumed to be identified with the motion of the “mass particles”, so the domain moves together with these “particles”:


(Here $\Phi_t$ denotes the flow of the velocity field. Keep in mind, however, that “mass” is not to be interpreted literally here. As an example, it may be, as we already mentioned above, a quantity like an appropriate probability or the number of particles in Hamiltonian phase space, see Example 2.6.1.)]
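As a sanity check of this invariance statement, consider a hypothetical one-dimensional example (our own, not from the text): for the velocity field $v = x\,\partial_x$ the flow is $\Phi_t(x) = x e^t$, and the density $\rho = 1/x$ satisfies the time-independent continuity equation, so the mass of $D = [1,2]$ should not change even though the volume of the transported domain grows.

```python
import math

# hypothetical 1D illustration: velocity field v = x d/dx, flow Phi_t(x) = x e^t,
# volume form Omega = dx, density rho(x) = 1/x (then (rho*v)' = 0, i.e.
# the time-independent continuity equation holds)
def flow(x, t): return x * math.exp(t)

def mass(a, b, rho, n=10000):
    # simple midpoint-rule integral of rho over [a, b]
    h = (b - a) / n
    return sum(rho(a + (i + 0.5)*h) for i in range(n)) * h

rho = lambda x: 1.0 / x
m0 = mass(1.0, 2.0, rho)                         # mass in D = [1, 2]  (= ln 2)
m1 = mass(flow(1.0, 0.7), flow(2.0, 0.7), rho)   # mass in Phi_t(D), t = 0.7

# the volume of the domain grows by the factor e^0.7, yet the mass stays ln 2
```
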

As we know from Sec. 2.2 (see Eq. (2.2.6)), the differential version of the statement that Eq. (2.6.3) represents an absolute integral invariant reads

$$\mathcal{L}_v(\rho\,\Omega) = 0 \qquad (2.6.6)$$
This is nothing but the continuity equation for the time-independent case. It can also be expressed in several alternative (and more familiar) ways.

First, recall that the divergence of a vector field $v$ is defined by

$$\mathcal{L}_v \Omega =: (\operatorname{div} v)\,\Omega \qquad (2.6.7)$$
(see 8.2.1 and 14.3.7 in [Fecko 2006]). Then Eq. (2.6.6) is equivalent to

$$\operatorname{div}(\rho v) = 0 \qquad (2.6.8)$$

or, in a bit longer form, to

$$v\rho + \rho\,\operatorname{div} v = 0 \qquad (2.6.9)$$
Indeed, first notice that

$$\mathcal{L}_v(\rho\,\Omega) = \mathcal{L}_{\rho v}\Omega = (\operatorname{div}(\rho v))\,\Omega$$

So, combining Eq. (2.6.6) with Eq. (2.6.7), we get Eq. (2.6.8). On the other hand,

$$\mathcal{L}_v(\rho\,\Omega) = (v\rho)\,\Omega + \rho\,\mathcal{L}_v\Omega = (v\rho + \rho\,\operatorname{div} v)\,\Omega$$

So the vanishing of $\mathcal{L}_v(\rho\,\Omega)$ also leads to Eq. (2.6.9).
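The equivalence of the last two versions rests on the pointwise Leibniz identity $\operatorname{div}(\rho v) = v\rho + \rho\,\operatorname{div} v$. A quick numerical spot-check of this identity in Cartesian coordinates on $\mathbb{R}^3$ (the particular $\rho$ and $v$ below are arbitrary illustrative choices of ours):

```python
# numerical spot-check of div(rho*v) = v(rho) + rho*div(v) at one point of R^3
import math

def rho(x, y, z): return math.exp(x) * (1 + y*y) + z
def v(x, y, z):   return (math.sin(y), x*z, x + y*z)   # components of v

def partial(g, p, i, h=1e-5):
    # central-difference partial derivative of g at point p w.r.t. coordinate i
    q1, q2 = list(p), list(p)
    q1[i] += h; q2[i] -= h
    return (g(*q1) - g(*q2)) / (2*h)

p = (0.3, -0.7, 1.1)

# left-hand side: div(rho v) = sum_i d_i (rho * v_i)
lhs = sum(partial(lambda *q: rho(*q) * v(*q)[i], p, i) for i in range(3))

# right-hand side: v(rho) + rho * div v
vp = v(*p)
rhs = sum(vp[i] * partial(rho, p, i) for i in range(3)) \
    + rho(*p) * sum(partial(lambda *q: v(*q)[i], p, i) for i in range(3))
```
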

Thus we can write the continuity equation (in the time-independent case) in any of the following four versions:

$$\mathcal{L}_v(\rho\,\Omega) = 0 \qquad \mathcal{L}_{\rho v}\,\Omega = 0 \qquad \operatorname{div}(\rho v) = 0 \qquad v\rho + \rho\,\operatorname{div} v = 0 \qquad (2.6.10)$$
This reduces, in the incompressible case (when the volume itself is conserved, so that $\operatorname{div} v = 0$), to either of the two versions:

$$v\rho = 0 \qquad \text{i.e.} \qquad \mathcal{L}_v\rho = 0 \qquad (2.6.11)$$
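A hypothetical illustration of the incompressible case (our own choice, not from the text): the rigid rotation field $v = (-y, x, 0)$ has $\operatorname{div} v = 0$, and any density of the form $\rho = g(x^2 + y^2)$ is constant along its flow, i.e. $v\rho = 0$.

```python
# incompressible example on R^3: rigid rotation v = (-y, x, 0); any density
# depending only on x^2 + y^2 is carried along unchanged
import math

def v(x, y, z):   return (-y, x, 0.0)
def rho(x, y, z): return math.exp(x*x + y*y)   # some function of x^2 + y^2

def partial(g, p, i, h=1e-5):
    # central-difference partial derivative at point p w.r.t. coordinate i
    q1, q2 = list(p), list(p)
    q1[i] += h; q2[i] -= h
    return (g(*q1) - g(*q2)) / (2*h)

p = (0.4, 0.9, -0.2)
div_v = sum(partial(lambda *q: v(*q)[i], p, i) for i in range(3))   # -> 0
v_rho = sum(v(*p)[i] * partial(rho, p, i) for i in range(3))        # -> 0
```
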
Now we proceed to the general, possibly time-dependent, case. In order to achieve this goal we can simply use the general procedure described in Sec. 2.5. In particular, since our integral invariant is absolute, we are to use the version based on Eqs. (2.5.24) and (2.5.25).

Namely, on $M \times \mathbb{R}$, we define


Then, according to Eq. (2.5.24), the full, time-dependent version of the continuity equation reads