# A rigorous demonstration of the validity of Boltzmann’s scenario for the spatial homogenization of a freely expanding gas and the equilibration of the Kac ring

###### Abstract

Boltzmann provided a scenario to explain why individual macroscopic systems composed of a large number of microscopic constituents are inevitably (i.e., with overwhelming probability) observed to approach a unique macroscopic state of thermodynamic equilibrium, and why, after having done so, they are then observed to remain in that state, apparently forever. We provide here rigorous new results that mathematically prove the basic features of Boltzmann’s scenario for two classical models: a simple boundary-free model for the spatial homogenization of a non-interacting gas of point particles, and the well-known Kac ring model. Our results, based on concentration inequalities that go back to Hoeffding, and which focus on the typical behaviour of individual macroscopic systems, improve upon previous results by providing estimates, exponential in the particle number N, of the probabilities and time scales involved.

## 1 Introduction

It is a fundamental tenet of thermodynamics, well-grounded in common experience, that an individual isolated macroscopic system will, starting from an arbitrary initial state, evolve irreversibly towards a unique stationary state of thermodynamic equilibrium, in which it will then remain. Boltzmann, assuming macroscopic matter to be composed of a large number of elementary constituents (atoms and/or molecules), provided a compelling scenario to explain how this irreversible behaviour arises from the manifestly reversible laws of mechanics. His arguments, which have been detailed and re-explained by numerous authors [Eh59, Ka59, Fe67, Le93, Br96, Pe90, Le08], involve several crucial elements.

First, Boltzmann identified states of thermodynamic equilibrium with those macrostates of a system which correspond to the largest number of microstates that are actually consistent with the physical constraints imposed upon it.

Next he argued that, because the microstates corresponding to equilibrium occupy such an overwhelmingly large fraction of the available phase space, such a macroscopic system would almost certainly evolve from any microstate that does not correspond to equilibrium to one that does.

This scenario allowed Boltzmann to both explain the inevitably observed approach to equilibrium, and to refute objections to his ideas raised in the form of paradoxes famously posed by Zermelo and by Loschmidt.

Crucial to Boltzmann’s arguments is the fact that the number of constituents making up macroscopic collections of matter is overwhelmingly large.

This fact, in his view, strongly suggested that those initial microstates of an initially constrained macroscopic system that do exhibit an approach to equilibrium when the constraints are removed (i.e., those that are, in fact, consistent with thermodynamics) are themselves overwhelmingly numerous, and therefore “typical” of the initial states in which such an initially constrained system is likely to be prepared in the absence of (perhaps extraordinary) measures taken to prevent it.

The enormously large number of constituents of macroscopic systems should also, he suggested, allow for estimates to be made of the enormously long interval of time over which a typical individual system will appear to remain in the equilibrium macrostate it eventually reaches.

Obviously, he reasoned, this equilibrium residence time must be longer than the duration of humanly-possible observation times, although necessarily shorter than the recurrence times, also predicted by the laws of mechanics, at which the system must necessarily pass arbitrarily close to its initial state.

Thus, according to Boltzmann, it is the overwhelming fraction of typical initial conditions and the enormity of typical equilibrium residence times that allow the aforementioned paradoxes to be resolved.

Despite these compelling but generally unproven arguments, Boltzmann’s scenario continues to meet resistance, and various claims can be found in the literature contesting its validity. We refer to [Br96, Le08] for a detailed analysis of the continuing controversy. As an example, although the claim has been thoroughly refuted, it is still commonly argued that ergodicity or mixing is a necessary and sufficient ingredient for a macroscopic dynamical system to demonstrate an approach to equilibrium. This situation is perhaps a consequence of the fact that a full mathematical and even physical treatment of Boltzmann’s ideas is not yet available [Ka59, Ru91, Le08, Vi14].

It is the view of the authors that a useful step in clearing up the controversy that persists regarding the approach to equilibrium and the irreversibility of individual macroscopic systems is the identification of deterministic, time-reversible dynamical models for which Boltzmann’s scenario can be pushed through with complete mathematical rigour. Such models can help to eliminate all doubts as to what, precisely, is being claimed and how, exactly, the various longstanding objections to Boltzmann’s scenario come to be resolved.

To this end we revisit here two classical models that allow such a program to be fully and explicitly carried out in the context of the approach to equilibrium of individual macroscopic systems.

We study first a very natural, simple, and therefore tractable model [Fr58, Be10, Be12] for what is arguably the most common textbook example of approach to equilibrium: the free expansion of a gas. The microscopic dynamics, assumed to be that of a gas of non-interacting point particles, is Hamiltonian and time-reversible. To simplify the situation further, we envision the dynamics of the gas particles as taking place on a d-dimensional torus, so that there are no interactions, even, with the walls of any hypothetical container (see Fig. 1). As a result, the dynamics of the N-particle system is recurrent and neither mixing nor ergodic on the energy surface; it is in fact completely integrable. Since there are no energy exchanges between the particles themselves, and no interactions with any environment, the gas in our model can obviously not approach (although it will maintain) a thermal distribution of particle velocities as it expands. What we focus on here, therefore, for individual N-particle systems, is the irreversible approach of an initially inhomogeneous coarse-grained gas density profile of independent particles to a state of uniform spatial homogeneity (see Fig. 1).
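Though no substitute for the theorem, the model is straightforward to simulate. The following sketch (our own illustration, in dimension one, with hypothetical parameter choices) draws N independent initial conditions concentrated in a subinterval, evolves them freely on the unit torus, and tracks the fraction of particles found in a fixed subset:

```python
import random

def fraction_in_interval(q0, p, t, a=0.0, b=0.5):
    """Fraction of particles whose position q0_i + t*p_i (mod 1) lies in [a, b)."""
    count = 0
    for q, v in zip(q0, p):
        x = (q + t * v) % 1.0
        if a <= x < b:
            count += 1
    return count / len(q0)

random.seed(0)
N = 100_000
# Out-of-equilibrium start: all particles in [0, 0.25), thermal-like momenta.
q0 = [0.25 * random.random() for _ in range(N)]
p = [random.gauss(0.0, 1.0) for _ in range(N)]

f0 = fraction_in_interval(q0, p, t=0.0)    # initially all mass lies in [0, 0.5)
f_late = fraction_in_interval(q0, p, t=50.0)
print(f0, f_late)  # f0 == 1.0; f_late close to Leb([0, 0.5)) = 0.5
```

At later times the fraction fluctuates about the measure of the subset, with fluctuations of order N^(-1/2) that do not decay in time, as in Fig. 2.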

Our main result, stated as Theorem 4.1, asserts that, with overwhelmingly large probability, to be made more precise in what follows, the coarse-grained density profile of a typical individual N-particle system of this type will evolve towards a spatially homogeneous distribution and that, after having achieved such a spatially homogeneous macrostate, it will again be observed to be in that state for a very long sequence of times, provided N is sufficiently large. We stress again the feature of our results that is crucial to fully implementing Boltzmann’s scenario: they concern the evolution of typical individual N-particle systems, and not just the average behaviour associated with a statistical ensemble. Our main result improves upon previous work by providing estimates, exponential in N, of the probabilities and time scales involved. Moreover, as our proofs use little detailed information on the one-particle dynamics, our results generalize readily to other geometries, as explained in Section 5.

Another model that has often been used to illustrate the validity of Boltzmann’s scenario is the Kac ring model [Ka59, Th72, Br96, GoOl09, MaNeSh09], in which black or white balls, in discrete time steps, move in the same direction around a ring, on which there are randomly-placed obstacles that reverse the colour of each ball that passes. The microscopic dynamics of the model is deterministic, time-periodic (hence not ergodic) and time-reversible. It is ideal, therefore, for illustrating how irreversibility emerges in the macroscopic limit. In previous works on this model it has been established, e.g., that, on average, the fractional difference between the number of black and white balls tends to zero for times that are large, but smaller than the period (which depends on the size of the ring), and that the variance of this observable also vanishes in this limit.

This, however, does not completely establish the validity of Boltzmann’s full scenario, even for this simple model system. Indeed, it only establishes the behaviour of ensemble-averaged quantities and does not prove that this irreversible approach to equilibrium emerges for any individual system that starts from one of an overwhelmingly numerous set of typical initial states. Nor does it show that, once so equilibrated, such a system will almost certainly remain in equilibrium for times that are, like the number of balls itself, exceedingly large. As in our treatment of the expanding gas, we focus in the present paper on this latter situation, and in particular we prove in Theorem 6.1 that it does indeed occur. For the Kac ring, we improve upon previously obtained algebraic bounds for the deviation from unity of the relevant equilibration probabilities, by providing (sub)-exponential bounds for them.
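For concreteness, the Kac ring dynamics can be sketched in a few lines (our own illustration, with an assumed ring size and marker density). The deterministic dynamics returns exactly to its initial state after a full period of twice the ring size, since each ball then crosses every marked edge exactly twice, while the black–white imbalance becomes small long before that:

```python
import random

def kac_step(colors, markers):
    """One time step: every ball moves one site clockwise; a ball crossing
    a marked edge has its colour flipped (+1 <-> -1)."""
    n = len(colors)
    new = [0] * n
    for i in range(n):
        c = colors[i]
        if markers[i]:          # the edge leaving site i is marked
            c = -c
        new[(i + 1) % n] = c
    return new

random.seed(1)
n = 500
markers = [random.random() < 0.3 for _ in range(n)]  # marker density 0.3 (illustrative)
colors0 = [1] * n                                    # all balls black: far from equilibrium
colors = colors0[:]
delta = []                                           # fractional black/white difference
for t in range(2 * n):
    colors = kac_step(colors, markers)
    delta.append(sum(colors) / n)

print(abs(delta[n // 4]))   # small well before the period
print(colors == colors0)    # True: exact recurrence after 2n steps
```

The exact recurrence illustrates why the model is time-periodic and not ergodic, even though the imbalance equilibrates on much shorter time scales.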

The paper is organized as follows. In Section 2 we describe our model of the expanding gas and give an informal mathematical statement of our main result. In Section 3 we provide a first set of results that show that an ensemble of non-interacting freely-expanding gases will, on average, develop a flat density profile. We then present in Section 4 a more precise definition of those microstates that exhibit “typical” behaviour, after which we state and prove our main result on the expanding gas. Section 5 contains numerical examples illustrating the main theorem along with a discussion of its natural generalizations to other geometries. Section 6 contains entirely analogous results on the Kac ring model. Further discussion and a summary are provided in Section 7.

Acknowledgements S.D.B. is supported by the Labex CEMPI (ANR-11-LABX-0007-01) and by the Nord-Pas de Calais Regional Council and the Fonds Européen de Développement Économique Régional (grant CPER Photonics for Society). P.E.P. thanks the University of Lille and the Labex CEMPI, where part of this work was performed, for their hospitality. The authors thank H. Spohn for bringing the work of J. Beck to their attention, and the latter for communicating his recent unpublished work to them.

## 2 A freely expanding gas

Consider N particles moving on a d-dimensional torus, identified with the cube [0,1)^d in what follows. The phase space of one particle is the product of the torus with the d-dimensional momentum space, and that of the N particles is the N-fold product of this one-particle phase space.

A point in the phase space of the N-particle system will be written x = (q_1, p_1, …, q_N, p_N). The dynamics is that of uniform rectilinear motion: the particles are not subject to any force and do not interact.

Initial conditions are constructed by first choosing an a priori initial probability measure on the one-particle phase space, and then drawing, for each of the N particles, an initial phase space point independently from this probability measure. The only randomness in the model is in this choice of the initial condition, drawn from the product measure induced on the N-particle phase space.

The focus of our analysis will be on the following coarse-grained observables for the system. Given any measurable subset A of the configuration space, we consider the fraction

(2.1)  f_t(A; x) = (1/N) Σ_{i=1}^{N} χ_A(q_i + t p_i)

of particles inside A at time t, which is expressed above in terms of the periodicized characteristic function χ_A of A; the latter vanishes everywhere except at points y such that {y} ∈ A, where it is equal to unity, and where {y} denotes the fractional part of y taken componentwise. Letting the sets A_k form a partition of the configuration space, we associate a coarse-grained time-dependent density profile (f_t(A_k; x))_k to each initial state x.

Our main result is as follows: given an initial probability measure satisfying suitable conditions spelled out in the next section, we show that for an overwhelming majority of initial conditions there exists a relatively short time, depending only on the initial measure, and a comparatively long time that grows exponentially with N, such that, at all observation times between the two, the fraction of particles that finds itself in any given subset of the partition is close to the measure of that subset. Such initial conditions, the probability of which we will show to differ from unity by an amount that is exponentially small in N, we will call “typical”.

In other words, for typical initial conditions, the particles are, after a relatively brief equilibration time, uniformly distributed on the torus after which they will then be observed to be so for a very long time. In this sense, the gas exhibits an approach to equilibrium.

We only consider out-of-equilibrium initial states in which the particles are uncorrelated. In general out-of-equilibrium initial states, one expects there to be correlations between the particles. For some such states, equilibration will not occur, for example if all velocities are identical and aligned with an axis of the torus. We have not dealt with the more difficult question of identifying the general class of initial states for which equilibration does indeed occur.

## 3 Mean values

We start our analysis by studying the mean and the variance of the coarse-grained observables as a function of N and t, in order to establish, at the outset, the asymptotic values that these observables are expected to approach at sufficiently long times. The main result of this section is stated in Proposition 3.2.

We will obtain results for two classes of the initial probability measure . We first consider the case where

(3.1) |

so that, in particular, is absolutely continuous with respect to Lebesgue measure. In addition, we need the density to be sufficiently smooth in the momentum variables and to satisfy the following condition:

(3.2) |

This allows, e.g., for a product distribution that has all particles initially confined to a measurable subset of the torus, but which has a sufficiently smooth (e.g., thermal) momentum distribution. Obviously, this is a generalization of the situation that takes place in an already thermalized gas when the volume to which it is constrained is suddenly expanded.

To proceed, we first need the following simple estimate, which is the only place where the one-particle dynamics plays a role. We will write

(3.3) |

###### Lemma 3.1.

Suppose the initial measure is as above. Then there exists a constant such that, for all measurable subsets A of the torus and for all t,

(3.4) |

where .

Here the expectation is taken with respect to the initial measure, and so, in particular,

(3.5)  E[χ_A(q + t p)] → Leb(A) as t → ∞,

where Leb(A) denotes the Lebesgue measure of A.

To put this result in perspective, recall that, with respect to Lebesgue measure, for almost every momentum p, the free flow q ↦ q + t p on the d-torus is ergodic. Hence, for almost every initial condition (q, p),

(3.6)  (1/T) ∫_0^T χ_A(q + t p) dt → Leb(A) as T → ∞.

This statement means that, for almost every initial condition, a single free particle spends approximately a fraction Leb(A) of its time inside A. It is therefore true pointwise in (q, p) but averaged in time. Equation (3.5), on the other hand, holds pointwise in t, but averaged in (q, p).
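The ergodic statement (3.6) is easy to visualize numerically. The following sketch (our own, with illustrative parameters) computes the time average of the indicator of an interval along a single free trajectory with irrational velocity:

```python
import math

def time_average_in_A(q0, p, T, dt=0.01, a=0.0, b=0.3):
    """Time average over [0, T] of the indicator that q0 + t*p (mod 1) lies in [a, b)."""
    steps = int(T / dt)
    hits = 0
    for k in range(steps):
        x = (q0 + k * dt * p) % 1.0
        if a <= x < b:
            hits += 1
    return hits / steps

# A single particle with irrational velocity equidistributes on the circle,
# so the time average approaches Leb([0, 0.3)) = 0.3.
avg = time_average_in_A(q0=0.123, p=math.sqrt(2), T=2000.0)
print(avg)
```

This is the pointwise-in-(q, p), averaged-in-time statement; the convergence rate depends strongly on the arithmetic properties of the velocity.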

Although the free dynamics of the single particle is not mixing, the limit in (3.5) has a “mixing” flavour to it. Indeed, the expectation of χ_A(q + t p) defines, for each t, a measure on the torus, and Lemma 3.1 shows that these measures converge to Lebesgue measure as t → ∞. Notice that the estimate in (3.4) deteriorates when the density becomes more concentrated and breaks down for any fixed initial condition (q, p), as does (3.5). We will come back to this point below, when dealing with the second class of measures for which our results hold (see (3.14)-(3.16)).
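This “mixing” flavour can be illustrated by a Monte Carlo sketch (ours, with an assumed Gaussian momentum distribution): all particles start at one and the same position, and only the momentum is random, as in the second class of measures considered below in (3.14)-(3.16):

```python
import random

def ensemble_fraction(q0, t, n_samples=200_000, a=0.0, b=0.4, seed=2):
    """Monte Carlo estimate of E[chi_[a,b)(q0 + t*p)] with p ~ N(0,1):
    every sample starts at the SAME position q0; only the momentum is random."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        x = (q0 + t * p) % 1.0
        if a <= x < b:
            hits += 1
    return hits / n_samples

early = ensemble_fraction(q0=0.1, t=0.05)  # density still concentrated near q0
late = ensemble_fraction(q0=0.1, t=20.0)   # spread over many windings: nearly uniform
print(early, late)
```

At short times the expectation remembers the concentrated initial position; at long times it approaches Leb([0, 0.4)) = 0.4, pointwise in t but averaged over the momentum.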

###### Proof.

Let be as above, and define, for ,

Then, for all non-zero , Now, defining, ,

we have

(3.7)

Hence, using the fact that , (3.4) follows. ∎

We now turn to the -particle problem. Let

(3.8) |

We will denote by E the expected value with respect to the product probability measure induced on the N-particle phase space. A first formulation of the approach to a uniform spatial density for the gas is then as follows.

###### Proposition 3.2.

Let the initial measure satisfy (3.4). Then, for all measurable subsets A of the one-particle configuration space, we have

(3.9)

(3.10)

The limits are uniform in .

###### Proof.

Since the initial conditions are drawn independently from the single-particle probability measure, the coarse-grained observable is a normalized sum of i.i.d. random variables. One then trivially obtains, for all N and t,

(3.11) |

where

so that, using (3.4), ,

(3.12)

(3.13)

where we have used the fact that for all and . This implies (3.9) and (3.10). ∎
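The variance statement in Proposition 3.2 reflects the ordinary 1/N decay for averages of i.i.d. variables. A quick numerical sketch (our own, with illustrative parameters) compares the empirical variance of the coarse-grained fraction at two particle numbers:

```python
import random

def mu_t(N, t, rng, a=0.0, b=0.5):
    """Fraction of N particles in [a, b) at time t; uniform initial positions
    in [0, 0.25), Gaussian momenta (illustrative choices)."""
    hits = 0
    for _ in range(N):
        x = (0.25 * rng.random() + t * rng.gauss(0.0, 1.0)) % 1.0
        if a <= x < b:
            hits += 1
    return hits / N

def variance_of_mu(N, t=30.0, trials=400, seed=3):
    """Empirical variance of the coarse-grained fraction over an ensemble."""
    rng = random.Random(seed)
    vals = [mu_t(N, t, rng) for _ in range(trials)]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / trials

v_small, v_large = variance_of_mu(100), variance_of_mu(1600)
print(v_small, v_large)  # ratio roughly 16, i.e. variance ~ 1/N
```

The 16-fold reduction of the variance under a 16-fold increase of N is the content of the 1/N factor in (3.13).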

We now present a second class of measures for which (3.4), and hence Proposition 3.2, can be proven to hold. Let the initial measure be a product measure

(3.14) |

where is an arbitrary probability measure on and where is a one-particle momentum distribution. Introducing

we will suppose there exist constants such that

(3.15) |

This allows, e.g., the application of our results to situations such as

(3.16) |

in which all particles are initially located at the same position and in which, therefore, the only randomness that appears is in the choice of the initial momenta.

We now show how to establish (3.4) for this class of measures; the constant appearing there will then depend on the constants introduced above. For that purpose, it is in turn sufficient to establish the analog of (3.7), i.e.,

(3.17) |

where

To that end, we consider a sequence of functions that approximate in the -norm

(3.18) |

Writing for their Fourier coefficients, it follows that and the smoothness of the implies that the Fourier coefficients decay fast in . Then, for all ,

The convergence of this series is uniform in because of the fast decay of the Fourier coefficients. Then, for all ,

Let , be the family of cubes centered at and of linear size . Then we can write, for all ,

Inserting this into the preceding expansion, one finds, for all ,

Hence

Using condition (3.15) and bounded convergence, one finds finally (3.17) and hence (3.4).

The model we study here is a variation of one that was introduced in [Fr58], where a result equivalent to (3.9) is proven. In [Be10] (Theorem 3), a result similar to (3.10) is proven for a closely related model. Equations (3.9)-(3.10) mean that for large enough N and t, the coarse-grained random variable measuring the fraction of particles in A has a mean close to the measure of A and a very small variance. This implies that for any partition of the configuration space, the coarse-grained time-dependent density profile is, on average, and for sufficiently large N and t, close to being spatially homogeneous. This provides a weak sense of the approach to spatial homogeneity. Indeed, the idea is that when a random variable has a very small variance it is “almost constant”. In that sense, one may claim that the fraction of particles in A is “typically” equal to the measure of A, provided N and t are large enough. Note however that (3.9)-(3.10) do not as such give any information about the approach of the coarse-grained observables to their equilibrium values for any particular initial condition, i.e., about the evolution of individual macroscopic systems. Figure 2, on the other hand, illustrates, for single trajectories of individual N-particle systems, how typical fluctuations of the coarse-grained observable about its equilibrium value decrease with increasing particle number N, but do not decay to zero as a function of time. In this figure, individual trajectories, after a relatively brief initial equilibration time that does not appear to depend on N, display values that fluctuate about the expected mean value.

Note that, according to (3.9), the average of the deviation from equilibrium over many such trajectories will approach zero with increasing t. On the other hand, as is evident from the data in Fig. 2, and as is confirmed below in (3.13), the variance of that quantity at fixed N approaches, for large times, a finite value, which can be estimated from the size of the temporal fluctuations appearing in the figure. Numerical results evaluated at times very much longer than the relatively short initial equilibration period, i.e., at times much longer than those shown in Fig. 2, reveal fluctuations statistically equivalent to those appearing in that figure.

For these simulations, which were performed with one-dimensional systems with particle numbers as indicated, the individual initial conditions for each trajectory displayed were chosen so that all particles were randomly located in the same subinterval, with momenta independently chosen from a thermal distribution with average value equal to unity. The value displayed in these figures is then the fraction of particles in that same interval as a function of time.

Note that the estimates (3.4) and (3.11) can already be used to obtain some information on individual trajectories, via the Markov inequality, in the spirit of a quantitative weak law of large numbers. To see this, let us introduce, for all measurable subsets and all times, the set

(3.20) |

of initial conditions for which the fraction of particles inside the given subset at the given time is close to the measure of that subset. For any subset of the N-particle phase space, we will denote by P the probability that an initial condition, drawn from the product probability measure, lies in it. One then obtains readily that, for large enough t, depending on the subset and on the accuracy required, but not on N,

However, the control in N on this probability is unsatisfactory. A more detailed mathematical analysis of the main features of individual trajectories, as observed in Fig. 2, will be provided in the next section, where an exponential estimate on the above probability will be obtained.

## 4 Typical behaviour

To fully implement Boltzmann’s scenario, we now show that for an overwhelming majority of initial conditions drawn from the product probability measure, there exists an exponentially long sequence of times at which the fraction of particles in each member of a partition of the torus is arbitrarily close to the measure of that subset. The partition is as described in Section 2.

To state our result, we introduce a sequence of equally spaced observation times whose spacing is macroscopically small enough that a series of observations of the system at these times could be considered quasi-continuous. The total duration over which the system is observed is then the number of observation times multiplied by this spacing. Define, then, the set

of “good” initial conditions in which the fraction of particles in each member of the partition is close to its measure at each instant of time in the sequence. Note that the dependence on the various parameters has been suppressed in the notation. Let P denote the probability that an initial condition, chosen as described, lies in this set. We will show, see (4.2), that for numbers of observation times that are exponentially large in N, this probability differs from unity by an amount that is exponentially small in N.

###### Theorem 4.1.

Let the initial measure be a probability measure on the one-particle phase space and suppose there exist constants such that

(4.1) |

Let

Then, for all ,

In particular, if , then

(4.2) |

Note that it follows from (4.2) that

(4.3) |

In Section 3 we identified two classes of measures for which condition (4.1) is satisfied. For the proof of Theorem 4.1, we will use a particular case of Theorem 2 of [Ho63]:

###### Theorem 4.2.

Let X_i, i = 1, …, N, be a family of independent, identically distributed random variables with 0 ≤ X_i ≤ 1 and mean m. Let S_N = X_1 + ⋯ + X_N. Then, for all ε > 0,

P(|S_N/N − m| ≥ ε) ≤ 2 exp(−2Nε²).

With this result, then:

Proof of Theorem 4.1. It follows from (4.1) that,
for all ,

It follows that, for all , implies that

Hence, by Theorem 4.2,

(4.4) |

and hence

(4.5) |

which proves the result. ∎
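As a sanity check on Theorem 4.2, one can compare the Hoeffding bound with the empirical frequency of large deviations for Bernoulli variables, the case relevant to the coarse-grained observables (a sketch with parameters of our choosing; the empirical frequency is itself random, so only orders of magnitude are meaningful):

```python
import math
import random

def empirical_exceedance(N, eps, trials, p=0.5, seed=4):
    """Fraction of trials in which |S_N/N - p| >= eps for Bernoulli(p) samples."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        s = sum(1 for _ in range(N) if rng.random() < p)
        if abs(s / N - p) >= eps:
            exceed += 1
    return exceed / trials

N, eps = 400, 0.1
bound = 2 * math.exp(-2 * N * eps ** 2)   # Hoeffding: 2 exp(-2 N eps^2)
freq = empirical_exceedance(N, eps, trials=2000)
print(freq, bound)
```

Already at N = 400 the deviation probability is far below a part in a thousand; the true probability is in fact considerably smaller than the bound.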

Note that the equilibration time is independent of N, as observed also in the numerics presented in Section 3. It is directly linked to the estimate in (3.4) and is in no clear way related to the ergodic convergence time in (3.6), which depends strongly on the initial condition.

After we completed an initial version of this paper, Prof. J. Beck kindly communicated to us recent unpublished work [Be17] in which he proves, using detailed dynamical information, combinatorial estimates, and Fourier analysis, a result similar to Theorem 4.1, for initial conditions

in which the particle positions are initially non-random, as in (3.16), and their velocities are specifically drawn from a Maxwell distribution.

## 5 Examples and generalizations

### 5.1 Examples

The plot in Fig. 3 illustrates, in one dimension, the exponential scaling with the particle number N and the linear scaling with the number of observation times of the fluctuation probabilities implied by (4.5). It shows the fraction (normalized by the total number of trials) of initial conditions, randomly chosen as described below, that exhibited a deviation of the observed particle fraction of magnitude greater than or equal to a fixed threshold, at least once over a sequence of fixed observation times, for several values of N. For each initial condition, particle positions were independently and uniformly chosen from the same interval, and momenta were independently chosen from a thermal distribution with a mean thermal velocity of unity. The six closely bunched dotted curves include the fraction of such initial conditions for each value of N indicated, excluding those (at large N and small threshold) for which no such fluctuations were obtained during the gas histories, over the time scale investigated. Open circles indicate an average over these curves. The dot-dashed line near the top of the figure represents the function that, according to (4.5), serves as an upper bound for the normalized probability. The solid line that follows the numerical data for large N is an exponential fit to the data, showing that the exponential bound implied by (4.5) is a conservative one.

In fact, the proof of (4.5) can easily be adapted to show that, for all , for , and for all , one has

(5.1) |

In the proof above, we used a particular choice of the parameters. If, on the other hand, one takes the relevant parameter small, one gets a bound much closer to what is observed numerically. Constraints on computing power make it infeasible to do simulations with values of N much larger than those used here to illustrate the theorem: for substantially larger N, the probability of finding even a single fluctuation is so small that its identification would require the generation of an unrealistically large number of different initial conditions, exceeding realistic computational capabilities.
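For order-of-magnitude explorations of this kind, the estimate can be tabulated directly. The sketch below assumes the Hoeffding/union-bound form 2·N_T·exp(−2Nε²) suggested by Theorem 4.2, which is only a stand-in for the precise constants appearing in (4.5):

```python
import math

def union_hoeffding_bound(N, eps, N_T):
    """Assumed form of the estimate: a Hoeffding bound 2*exp(-2*N*eps^2) per
    observation time, union-bounded over N_T observation times."""
    return 2 * N_T * math.exp(-2 * N * eps ** 2)

# Illustrative: even modest particle numbers make sizable deviations
# essentially unobservable over very many observation times.
print(union_hoeffding_bound(N=10_000, eps=0.05, N_T=10**6))  # 2e6 * e^{-50}
```

The exponential decay in N overwhelms the linear growth in the number of observation times, which is the mechanism behind the exponentially long equilibrium residence times.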

A realistic numerical simulation showing the applicability of Theorem 4.1 to actual macroscopic systems being, therefore, out of the question, we instead now use the theorem to obtain simple estimates for a more-or-less realistic hypothetical macroscopic system.

Thus, we consider in three dimensions a gas cell occupying a cubic domain of total volume 1 cm³ that is filled with an ideal gas at standard temperature and pressure. The equilibrium particle number density of such a gas is well known to be of the order of 10¹⁹ per cm³. Thus, ignoring fluctuations for the moment, let us assume the actual number of gas particles in the cell to be

Consider, next, a cubic sub-region, lying entirely within the cell, and having a linear dimension one-tenth that of the total domain. It occupies, therefore, a volume of 1 mm³, i.e., a fraction 10⁻³ of the total volume of the gas cell.

In thermal equilibrium at temperature , the average number

of particles in such a sub-region corresponds to a fraction 10⁻³ of all the particles in the system, while the average pressure in this sub-region, being an intensive quantity, is the same as in the bulk, i.e.,

In what follows, we will denote by

the instantaneous pressure in , which is reduced or increased according to the number of particles in this sub-region of interest at time , and in which . We will also denote by

the ratio of the instantaneous pressure in to its equilibrium value.

For the purposes of illustrating the consequences of our main theorem, suppose it were possible to make localized pressure measurements over millimeter length scales in a gas of the sort described above, to a relative accuracy of one part per million (ppm).

Suppose, furthermore, that all particles were adiabatically compressed, e.g., by a movable rigid membrane, into a region of the cell that contains the millimeter-cubed sub-region described above, but which itself occupies only half of the total cell volume. The pressure in this half of the cell would therefore have increased to two atmospheres, while the pressure in the other half vanishes. Imagine, finally, that the membrane confining the particles to this region were suddenly to burst, allowing for a free expansion of the compressed gas into the full volume of the cell.

Starting from an arbitrary initial condition drawn from the product distribution implied by the initial conditions described, Theorem 4.1 implies that at a time greater than a relatively short equilibration time defined in the theorem, the probability that the relative pressure in will be found to exhibit a deviation from unity by a measurable amount is bounded from above by the relation

in which . Using this value for we thus obtain the numerical bound

As indicated, this absurdly small bound applies to the probability of observing such a deviation at a single instant of time.

Consider then the probability of observing a deviation of this magnitude at least once at some point over a quasi-continuous sequence of times with , that span a total time interval . According to (5.1) this quantity is bounded by the relation

where we ignored the factor which can at any rate be taken arbitrarily close to .

Of course this upper bound on depends implicitly on the time between successive measurements and the duration of the total time interval considered. As an extreme limit, suppose it were possible to obtain a measurement of the pressure in this gas cell once every femtosecond over a total time interval .

Then, over the entire sequence of such measurements, spanning an interval of time roughly comparable to the age of the earth, the probability of observing at least one measurable deviation in the relative pressure of this gas would be less than , giving the numerical bound

This example shows that the main theorem we have proven confirms the conventional wisdom regarding the approach of individual macroscopic systems to a state of thermodynamic equilibrium, i.e., that it will almost certainly happen, and that once it has, the macroscopic system will almost certainly remain in the equilibrium macrostate for an extraordinarily long duration of time.
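The arithmetic behind estimates of this kind can be sketched as follows; the numbers are our own illustrative assumptions (a Loschmidt-type density, a 1 cm³ cell, a 1 mm³ sub-region, 1 ppm accuracy), and the crude Hoeffding form of the bound is used in place of the sharper estimate of the text:

```python
import math

# Illustrative inputs (assumptions, not taken verbatim from the text): an
# ideal gas at standard conditions has number density ~2.7e19 per cm^3.
N = 2.7e19           # particles in a 1 cm^3 cell
f = 1e-3             # fraction of the volume in the 1 mm^3 sub-region
rel_accuracy = 1e-6  # 1 ppm relative pressure accuracy
eps = f * rel_accuracy  # corresponding absolute deviation in the particle fraction

# Crudest Hoeffding-type single-time bound, 2*exp(-2*N*eps^2); the estimate
# in the text is presumably sharper, so this is only a conservative ceiling.
single_time = 2 * math.exp(-2 * N * eps ** 2)
print(single_time)  # already of order 1e-23
```

A variance-sensitive (Bernstein-type) bound would be dramatically smaller still, since each particle lies in the sub-region with probability only 10⁻³.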

At the same time, the theorem itself can, in principle, be used as a tool to provide insight into situations in which it might be possible to actually measure deviations from the equilibrium state.

To illustrate this second aspect of our result, we consider a situation similar to that just outlined, in which the initial particle gas is not at standard temperature and pressure, but is contained in a vacuum chamber maintained at a very low pressure at room temperature, so that the number of particles in the chamber is correspondingly much smaller. Taking, in addition, a more realistic value of the measurement accuracy, one then finds

If one is now able to measure the pressure in this gas every second for an hour, one finds

which suggests the possibility of detecting measurable fluctuations at the rate of one every three hours.

The presence of the factor Nε² in the exponent appearing in our main results clearly indicates that the fluctuations that regularly occur in the gas density are of order N^(−1/2), as expected. Indeed, this can already be observed in Fig. 2. The first example above illustrates that such small fluctuations are not easily measured in a macroscopic system of this sort under normal ambient circumstances. The second example suggests that it should be possible to produce laboratory conditions in which the measurement accuracy is indeed of the order N^(−1/2). Fluctuations of this sort should, therefore, be observable.

We note in passing that experimentalists working on cold atom systems are regularly able to generate atomic gases containing on the order of atoms, confined to millimeter-cubed sized regions. Moreover, it is also possible through the application of magnetic fields to tune the inter-particle scattering length of some atomic species to zero, thus producing a gas of effectively non-interacting particles. The local particle density in such a gas can then be monitored through fluctuations in the intensity of a finely collimated laser beam passing through it.

### 5.2 Generalizations

As already pointed out in the introduction, because our proofs use very little information on the one-particle dynamics, Theorem 4.1 is easily generalized to other geometries. As a first set of examples, suppose that the particles move in a configuration space , which can be a bounded set with boundary in or a manifold. In the first case we suppose the particle executes a billiard motion, in the second that it moves along geodesics. We write for the corresponding particle trajectories. Given a one-particle probability measure on the corresponding one-particle phase space , suppose one can prove there exists a probability measure on such that

(5.2) |

This is the equivalent of (3.5). Once such a result is obtained, the rest of the argument leading to Theorem 4.1 goes through unaltered, with replaced by .

This will be the case, for example, if is a negatively curved manifold of finite volume, so that the geodesic flow is mixing with respect to the Liouville measure on an energy surface. One can then consider any initial probability measure which is absolutely continuous with respect to the latter, and the probability measure is then the normalized Riemannian volume element on . Note that it therefore does not depend on the initial probability measure . In other words, particles initially distributed according to in phase space spatially homogenize according to the measure on , with overwhelming probability and for long periods of time.

A similar situation occurs if is a chaotic billiard. It also occurs when is a cubic billiard: the standard trick of “unfolding” single trajectories (see for example [Be10]) reduces this case to straight-line motion on a torus, which is explicitly treated in this paper.
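The unfolding trick can be made concrete in the simplest case of an interval billiard: reflections at the walls are removed by letting the particle move freely on a circle of circumference twice the interval length, and folding back with a triangle wave. A small sketch (function names are ours; we assume $v\neq 0$):

```python
def billiard_position(x0, v, t):
    """Position at time t of a free particle bouncing elastically
    in [0, 1], computed by explicit event-by-event reflection."""
    x, u, remaining = x0, v, t
    while True:
        # time until the next wall is hit
        hit = (1.0 - x) / u if u > 0 else x / (-u)
        if hit >= remaining:
            return x + u * remaining
        x = 1.0 if u > 0 else 0.0  # reach the wall ...
        u = -u                     # ... and reflect
        remaining -= hit

def unfolded_position(x0, v, t):
    """Same position via the unfolding trick: straight-line motion on
    a circle of circumference 2, folded back by a triangle wave."""
    y = (x0 + v * t) % 2.0
    return y if y <= 1.0 else 2.0 - y
```

The two functions agree to floating-point accuracy, which is precisely the content of the unfolding reduction in this one-dimensional setting.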

But other measures may arise that do depend on . For example, if is a disk in two dimensions, and if we take supported close to the boundary, with a distribution of momenta such that there is a minimal impact parameter, then none of the particles will enter a small disk close to the origin. This will not happen with a physical ideal gas, of course, since then the collisions of the gas particles with each other will change their direction. Such collisions have not been accounted for in the model considered here, and they become critically important in, e.g., the situation in which an external potential is added to the one-particle Hamiltonian. Consider for example noninteracting particles bouncing back and forth in the closed interval , subject to a constant force . Such particles will spend less time close to the right edge of the interval than to the left edge, since they move faster on the right than on the left. This is clearly not what happens in a physical gas, where interparticle collisions allow for equipartition of the energy. Indeed, in a physical gas the mean speed of the particles is constant throughout, and particles spend more time in the region of lower potential energy, on the right rather than on the left.

The results of the present paper, obtained for independent, non-interacting particles, clearly rely heavily on the statistical independence of those particles, which implies a law of large numbers. Thus, while they provide useful insight into the mechanism of spatial homogenization for this class of models, it is not clear that similar techniques could be usefully applied to the much more difficult, physically relevant case, in which the particles interact and exchange energy.
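The occupation-time asymmetry described above is easy to quantify for a single bouncing particle: the time spent near a point $x$ is proportional to $1/v(x)$. The sketch below (our own illustration, with parameter values of our choosing, taking the potential $V(x)=F(1-x)$ on $[0,1]$ so the particle moves faster near the right edge) computes the fraction of a period spent in the left half of the interval:

```python
import math

def fraction_of_period_in_left_half(energy, force, n=100000):
    """A particle bounces elastically in [0, 1] with potential
    V(x) = force * (1 - x), so it moves faster near the right edge.
    The time spent near x is proportional to 1/v(x), with
    v(x) = sqrt(2 * (energy - V(x))).  Returns the fraction of a
    period spent in [0, 1/2], via a midpoint-rule quadrature."""
    def inv_v(x):
        return 1.0 / math.sqrt(2.0 * (energy - force * (1.0 - x)))
    dx = 1.0 / n
    t_left = sum(inv_v((i + 0.5) * dx) for i in range(n // 2)) * dx
    t_total = sum(inv_v((i + 0.5) * dx) for i in range(n)) * dx
    return t_left / t_total

# Energy comfortably above the potential maximum:
frac = fraction_of_period_in_left_half(energy=2.0, force=1.0)
```

For the values chosen one finds a fraction of about $0.54$ (analytically, $(\sqrt3-\sqrt2)/(2-\sqrt2)$): the particle spends measurably more time in the slower, left half of the interval, in contrast with the behaviour expected of an interacting gas.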

## 6 The Kac ring model

The Kac ring model, introduced in [Ka59], describes the dynamics of $N$ balls placed on $N$ equidistant sites on a circle, one ball per site. There are, at each integer time $t$, $B_t$ black balls and $W_t$ white balls, so that $B_t+W_t=N$. Among the $N$ sites, $m$ are marked. The system undergoes a discrete-time deterministic dynamics, defined as follows. At each integer time $t$, all balls move one step counterclockwise. A ball will change color if it starts on a marked site, and not otherwise. For all $i$, $\epsilon_i=-1$ if site $i$ is marked, $\epsilon_i=+1$ otherwise. Also, $\eta_i(t)=+1$ if at time $t$ the ball on site $i$ is white, and $\eta_i(t)=-1$ otherwise. One therefore has the following dynamics:

$$\eta_{i}(t+1)=\epsilon_{i-1}\,\eta_{i-1}(t).$$

For the sake of the discussion here, we will suppose $\eta_i(0)=+1$ for all $i$, so all balls are white initially. The indices $i$ are taken modulo $N$. We are interested in the evolution of the macroscopic variable

$$\Delta_N(t)=\frac{W_t-B_t}{N}=\frac{1}{N}\sum_{i=1}^{N}\eta_i(t).$$

We have

$$\Delta_N(t)=\frac{1}{N}\sum_{i=1}^{N}\prod_{k=1}^{t}\epsilon_{i-k}. \tag{6.1}$$

We will study this quantity for choices of marked sites constructed as follows. We consider the $\epsilon_i$ as independent Bernoulli random variables with, for some $0<\mu<\tfrac12$,

$$\mathbb{P}(\epsilon_i=-1)=\mu,\qquad \mathbb{P}(\epsilon_i=+1)=1-\mu. \tag{6.2}$$

We will show that, with an overwhelmingly large probability, and for $t$ within an appropriate $N$-dependent time range, the random variable $\Delta_N(t)$ tends to zero. This generalizes, as we will explain, known results on the model, recalled below and proven in [Ka59]; see also [GoOl09]. We first remark that, provided $t\le N$,

$$\mathbb{E}\big(\Delta_N(t)\big)=(1-2\mu)^{t},$$

since, for $t\le N$,

$$\mathbb{E}\Big(\prod_{k=1}^{t}\epsilon_{i-k}\Big)=\prod_{k=1}^{t}\mathbb{E}(\epsilon_{i-k})=(1-2\mu)^{t}. \tag{6.3}$$

This follows since, for $t\le N$, the sites $i-1,\dots,i-t$ are all distinct, so that all factors occurring in the product are independent. Hence, in analogy with (3.9),

$$\lim_{\substack{N,\,t\to\infty\\ t\le N}}\mathbb{E}\big(\Delta_N(t)\big)=0. \tag{6.4}$$

In addition, it is proven in [Ka59, GoOl09] that

$$\mathbb{E}\big(\Delta_N(t)^{2}\big)\le(1-2\mu)^{2t}+\frac{1}{N}\,\frac{1+(1-2\mu)^{2}}{1-(1-2\mu)^{2}},\qquad t\le\frac{N}{2}. \tag{6.5}$$

Hence, in analogy with (3.10), we have here

$$\lim_{\substack{N,\,t\to\infty\\ t\le N/2}}\mathbb{E}\big(\Delta_N(t)^{2}\big)=0. \tag{6.6}$$

So, in the Kac ring model, the difference between the number of white and black balls, as a fraction of the total number of balls, tends on average to zero for large $t$ and $N$. These statements are the analogues of (3.9)-(3.10) in the expanding gas. Note however the restriction $t\le N$ in (6.4), which is a consequence here of the fact that $\eta_i(t+N)=(-1)^{m}\eta_i(t)$, where $m$ is the number of markers. Hence $\Delta_N(t+2N)=\Delta_N(t)$, so that the system is time-periodic, with the same period $2N$ for all configurations of markers. In addition, the variance of $\Delta_N(t)$ tends to zero on the slightly shorter time scale $t\le N/2$. As in the case of the expanding gas, these two statements do not suffice to say that “typical” configurations of markers will result in the system converging to the equilibrium value, meaning with “overwhelming” probability. Of course, Markov’s inequality together with the bound in (6.5) can be used to yield a weak law of large numbers and a lower bound on this probability, but only one algebraic in $N$. As an example of such an approach, it is proven in [MaNeSh09] that, for all $\delta>0$ and $t$ fixed, the probability that $\Delta_N(t)$ deviates from its mean by more than $\delta$ tends to zero algebraically in $N$.
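The statements above are straightforward to observe in a direct simulation of the ring. The following sketch (our own minimal implementation, with illustrative parameter values) evolves one realization of the model and records the macroscopic variable; one can check both that it tracks $(1-2\mu)^t$ up to fluctuations of order $1/\sqrt{N}$, and that every realization is exactly $2N$-periodic:

```python
import random

def kac_delta_trajectory(n_sites, mu, t_max, seed=0):
    """Simulate one Kac ring with markers drawn i.i.d. (P(marked) = mu)
    and all balls white at time 0.  Returns [Delta(0), ..., Delta(t_max)]
    with Delta(t) = (# white - # black) / N."""
    rng = random.Random(seed)
    eps = [-1 if rng.random() < mu else +1 for _ in range(n_sites)]
    eta = [+1] * n_sites            # +1 = white, -1 = black
    deltas = [sum(eta) / n_sites]
    for _ in range(t_max):
        # the ball arriving at site i comes from site i-1 and flips
        # its color exactly when site i-1 is marked (eps[i-1] == -1)
        eta = [eps[i - 1] * eta[i - 1] for i in range(n_sites)]
        deltas.append(sum(eta) / n_sites)
    return deltas

# One large ring: Delta(t) relaxes towards 0, tracking (1 - 2*mu)**t
traj = kac_delta_trajectory(2000, 0.1, 10)
# A small ring run for 2N steps: the dynamics is exactly 2N-periodic
small = kac_delta_trajectory(200, 0.1, 400)
```

Since each marker is crossed exactly twice in $2N$ steps, the recurrence at $t=2N$ is exact for every configuration of markers, not merely approximate.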

We now show how to apply a further result of [Ho63] to obtain a (sub-)exponential estimate. For that purpose, we introduce, as before, for each $N$, for each $t$ and $\delta>0$, the “good” set of markers $G_N(t,\delta)$,

$$G_N(t,\delta)=\Big\{\epsilon\in\{-1,+1\}^{N}\;:\;\big|\Delta_N(t)\big|\le\delta\Big\}, \tag{6.7}$$

for which the difference between the number of white and black balls, as a fraction of $N$, is within $\delta$ from its equilibrium value $0$ at the time $t$. Also,

$$G_N(\delta;T_1,T_2)=\bigcap_{t=T_1}^{T_2}G_N(t,\delta). \tag{6.8}$$

Our main result is then:

###### Theorem 6.1.

Let $0<\mu<\tfrac12$ and $\delta>0$. Let $0<\alpha<1$, and $\bar t(\delta,\mu)=\ln(2/\delta)/|\ln(1-2\mu)|$. Then, for all $N$ large enough, for all $\bar t(\delta,\mu)\le t\le N^{1-\alpha}$, one has

$$\mathbb{P}\big(G_N(t,\delta)\big)\ge 1-2t\,\exp\Big(-\frac{\delta^{2}N}{16\,t}\Big)\ge 1-2N\,\exp\Big(-\frac{\delta^{2}N^{\alpha}}{16}\Big). \tag{6.9}$$

Hence

$$\mathbb{P}\Big(G_N\big(\delta;\bar t(\delta,\mu),N^{1-\alpha}\big)\Big)\ge 1-2N^{2}\,\exp\Big(-\frac{\delta^{2}N^{\alpha}}{16}\Big). \tag{6.10}$$

Note that the time scale $N^{1-\alpha}$ is now only a power law in $N$, and not exponentially large. This is inherent in the model, which is, as remarked above, $2N$-periodic. So Poincaré recurrences occur trivially at $t=2N$, and hence the time over which the system stays in macroscopic equilibrium cannot be longer than $cN$, for some $c>0$. Note also that the estimate on the probability of the “good” initial conditions gets worse as $\delta$ approaches $0$.

###### Proof.

Given $t$, we define $p$ and $q$ by $N=pt+q$, with $0\le q<t$. It follows from the definition of the $\eta_i(t)$ in (6.1) that $\eta_i(t)$ and $\eta_j(t)$ are independent provided $d(i,j)\ge t$, where the distance $d(i,j)$ between the sites $i$ and $j$ is measured modulo $N$. We then rewrite $\Delta_N(t)$ as follows:

$$\Delta_N(t)=\frac{1}{N}\sum_{s=1}^{t}Y_s,$$

with

$$Y_s=\sum_{\substack{1\le i\le N\\ i\equiv s\ (\mathrm{mod}\ t)}}\eta_i(t).$$

Note that, in view of what precedes, each $Y_s$ is a sum of at most $p+1$ independent random variables taking values in $\{-1,+1\}$, and that, by the inequality of [Ho63], for all $\lambda>0$,

$$\mathbb{P}\Big(\big|Y_s-\mathbb{E}(Y_s)\big|\ge\lambda\Big)\le 2\exp\Big(-\frac{\lambda^{2}}{2(p+1)}\Big). \tag{6.11}$$

It follows that