
# Central limit theorem for a class of one-dimensional kinetic equations

Federico Bassetti
Università degli Studi di Pavia, via Ferrata 1, 27100 Pavia, Italy
piazza Leonardo da Vinci 32, 20133 Milano, Italy
also affiliated to CNR-IMATI, Milano, Italy

and Daniel Matthes
Institut für Analysis und Scientific Computing, Technische Universität Wien
Wiedner Hauptstraße 8–10, 1040 Wien, Austria
###### Abstract.

We introduce a class of Boltzmann equations on the real line, which constitute extensions of the classical Kac caricature. The collisional gain operators are defined by smoothing transformations with quite general properties. By establishing a connection to the central limit problem, we are able to prove long-time convergence of the equation's solutions towards a limit distribution. If the initial condition for the Boltzmann equation belongs to the domain of normal attraction of a certain stable law, then the limit is a scale mixture of that stable law. Under some additional assumptions, explicit exponential rates for the convergence to equilibrium in Wasserstein metrics are calculated, and strong convergence of the probability densities is shown.

###### Key words and phrases:
Central limit theorem; Boltzmann equation; Domains of normal attraction; Kac model; Smoothing transformations; Stable law; Sums of weighted independent random variables
###### 1991 Mathematics Subject Classification:
Primary: 60F05; Secondary: 82C40

## 1. Introduction

In a variety of recent publications, intimate relations between the central limit theorem of probability theory and the celebrated Kac caricature of the Boltzmann equation from statistical physics have been revealed. The idea to represent the solutions of the Kac equation in a probabilistic way dates back at least to the works of McKean in the 1960s, see e.g. McKean (1966), but has been fully formalized and employed in the derivation of analytic results only in the last decade. For instance, probabilistic methods have been used to estimate the quality of approximation of solutions by truncated Wild sums in Carlen et al. (2000), to study necessary and sufficient conditions for the convergence to a steady state in Gabetta and Regazzini (2006b), to study the blow-up behavior of solutions of infinite energy in Carlen et al. (2007, 2008), and to obtain rates of convergence to equilibrium of the solutions both in strong and weak metrics, see Gabetta and Regazzini (2006c); Dolera et al. (2007); Dolera and Regazzini (2007). The power of the probabilistic approach is illustrated, for instance, by the fact that in Dolera et al. (2007) very refined estimates for the classical central limit theorem enabled the authors to deliver the first proof of a conjecture that had been formulated by McKean about forty years earlier.

The applicability of probabilistic methods is not restricted to the classical Kac equation, but extends to the inelastic Kac model, proposed by Pulvirenti and Toscani (2004). In the inelastic model, the energy (second moment) of the solution is not conserved but dissipated, and hence infinite energy is needed initially to obtain a non-trivial long-time limit. In Bassetti et al. (2008) probabilistic methods have been used to study the speed of approach to equilibrium under the assumption that the initial condition belongs to the domain of normal attraction of a suitable stable law. In this context, indeed, the steady states are the corresponding stable laws.

In the current paper, we continue in the spirit of the aforementioned results. By means of the central limit theorem for triangular arrays, we are able to study the long time behavior of solutions of a wide class of one-dimensional Boltzmann equations, which contains (essentially) the classical and the inelastic Kac model as special cases.

To be more specific, recall that the Kac equation describes the evolution of a time-dependent probability measure μ(t) on the real axis, and is most conveniently written as an evolution equation for the characteristic function ϕ(t;⋅) of μ(t). The equation has the form

 (1) ∂_t ϕ(t;ξ) + ϕ(t;ξ) = ˆQ+[ϕ(t;⋅), ϕ(t;⋅)](ξ)  (t > 0, ξ ∈ ℝ),  ϕ(0;ξ) = ϕ0(ξ),

where the collisional gain operator is given by

 (2) ˆQ+[ϕ(t;⋅),ϕ(t;⋅)](ξ):=E[ϕ(t;Lξ)ϕ(t;Rξ)].

Above, (L, R) is a random vector defined on a probability space (Ω, F, P), and E denotes the expectation with respect to P. The initial condition ϕ0 is the characteristic function of a prescribed real random variable X0; by abuse of notation, we shall also refer to X0, to its probability distribution function F0, or to its law as the initial condition.

For the classical Kac equation, one writes (L, R) = (sin(Θ), cos(Θ)), with Θ uniformly distributed on [0, 2π), and hence L² + R² = 1 a.s. The inelastic Kac equation is obtained by

 (L,R)=(sin(Θ)|sin(Θ)|p,cos(Θ)|cos(Θ)|p),

with p > 0, and hence |L|^α + |R|^α = 1 a.s., if α = 2/(1+p). It is worth recalling that the study of the respective initial value problems can be reduced to the study of the same problems under the additional assumption that the initial distribution is symmetric, i.e. the initial characteristic function is real, and that L and R are non-negative. See Section 2.1.
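Both rules can be checked numerically. The following sketch (our own illustration, not part of the paper; the function names are ours) samples Θ uniformly on [0, 2π) and verifies that the classical rule conserves L² + R² = 1 a.s., while the inelastic rule satisfies |L|^α + |R|^α = 1 a.s. under the standard choice α = 2/(1+p):

```python
import math
import random

def kac_classical(theta):
    # Classical Kac rule: (L, R) = (sin(theta), cos(theta)).
    return math.sin(theta), math.cos(theta)

def kac_inelastic(theta, p):
    # Inelastic Kac rule: (L, R) = (sin|sin|^p, cos|cos|^p).
    s, c = math.sin(theta), math.cos(theta)
    return s * abs(s) ** p, c * abs(c) ** p

random.seed(0)
p = 0.5
alpha = 2.0 / (1.0 + p)   # exponent with |L|^alpha + |R|^alpha = 1 a.s.

for _ in range(1000):
    theta = random.uniform(0.0, 2.0 * math.pi)
    L, R = kac_classical(theta)
    assert abs(L**2 + R**2 - 1.0) < 1e-12       # elastic: energy conserved
    L, R = kac_inelastic(theta, p)
    assert abs(abs(L)**alpha + abs(R)**alpha - 1.0) < 1e-12
```

Since |sin Θ|^{(1+p)·α} = sin²Θ for α = 2/(1+p), every sample satisfies the identity exactly, up to floating-point rounding.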

In this paper, we consider the problem (1), where the random variables L and R in the definition of the collision operator in (2) are non-negative and satisfy the condition

 (3) E[Lα+Rα]=1,

for some α in (0, 2]. The therewith defined bilinear operators are examples of smoothing transformations, which have been extensively studied in the context of branching random walks, see e.g. Kahane (1976); Durrett and Liggett (1983); Guivarc’h (1990); Liu (1998); Iksanov (2004) and the references therein.

Our motivation, however, originates from applications to statistical physics. These applications are discussed in Section 2.1. At this point, we just mention the two main examples.

1. Passing from the Kac condition L² + R² = 1 a.s. to (3) with α = 2, the model retains its crucial physical property to conserve the second moment of the solution. However, the variety of possible steady states grows considerably: depending on the law of (L, R), the latter may exhibit heavy tails.

2. For certain distributions of (L, R) satisfying (3) with α = 1, equation (1) has been used to model the redistribution of wealth in simplified market economies, which conserve the society’s total wealth (first moment). Whereas the condition L + R = 1 a.s. would correspond to deterministic trading and lead eventually to a fair but unrealistic distribution of wealth in the long time limit, the relaxed condition (3) allows trade mechanisms that involve randomness (corresponding to risky investments) and lead to a realistic, highly unequal distribution of wealth.

Our main results from Theorems 3.2 and 3.4 can be rephrased as follows:

Assume that (3) holds with α ∈ (0, 2], but α ≠ 1, and in addition that E[L^p + R^p] < 1 for some p > α. Let μ(t), for t > 0, be the probability measure on ℝ whose characteristic function is the unique solution to the associated Boltzmann equation (1). Assume further that the initial datum lies in the normal domain of attraction of some α-stable law, and that it is centered if α > 1. Then, as t → +∞, the probability measures μ(t) converge weakly to a limit distribution, which is a non-trivial scale mixture of the stable law.

The results in the case α = 1 are more involved; see Theorems 3.3 and 3.5.

Under the previous general hypotheses, no more than weak convergence can be expected. However, slightly more restrictive assumptions on the initial condition suffice to obtain exponentially fast convergence in some Wasserstein distance. Finally, if the initial condition possesses a density with finite Linnik-Fisher functional and the condition L^δ + R^δ ≤ 1 holds a.s. for some δ > 0, then the probability density of μ(t) exists for every t > 0 and converges strongly in the Lebesgue spaces L¹(ℝ) and L²(ℝ) as t → +∞.

The largest part of the paper deals with the proofs of weak convergence, which are obtained in application of the central limit theorem for triangular arrays. Consequently, the core element of the proof is to establish a suitable probabilistic interpretation of the solution to (1). The link to probability theory is provided by a semi-explicit solution formula: the Wild sum,

 (4) ϕ(t)=e−t∞∑n=0(1−e−t)n^qn,

represents the solution as a convex combination of characteristic functions q̂_n, which are obtained by iterated application of the gain operator to the initial condition — see formula (11). Following Gabetta and Regazzini (2006b) we consider a sequence (W_n)_{n≥1} of random variables such that W_{n+1} has q̂_n as its characteristic function, and possesses the representation

 (5) Wn=n∑j=1βj,nXj,

where the X_j are independent and identically distributed random variables with common characteristic function ϕ0. The weights β_{j,n} are random variables themselves and are obtained in a recursive way, see (12).

The behavior of ϕ(t) in (4) as t → +∞ is obviously determined by the behavior of the law of W_n as n → ∞. It is important to note that a direct application of the central limit theorem to the study of W_n is inadmissible since the weights β_{j,n} in (5) are not independent. However, one can apply the central limit theorem to study the conditional law of W_n, given the array of weights (β_{1,n}, …, β_{n,n}).

Representations in the form (4) with (5) are known for the (classical and inelastic) Kac equation, see Gabetta and Regazzini (2006b) and Bassetti et al. (2008). The situation here is more involved, since (3) only implies that

 (6) E[β^α_{1,n} + β^α_{2,n} + ⋯ + β^α_{n,n}] = 1,

whereas for the Kac equation,

 (7) β^α_{1,n} + β^α_{2,n} + ⋯ + β^α_{n,n} = 1  a.s.

In order to be able to apply the central limit theorem, one needs to prove that max_{1≤j≤n} β_{j,n} converges in probability to zero, and that Σ_{j=1}^{n} β^α_{j,n} converges (almost surely) to a random variable. Thanks to (7), the latter condition is immediately satisfied for the Kac equation, while it is not always true for the general model considered here. We stress that the generality of (6) in comparison to (7) is the origin of the richness of possible steady states in (1).

The paper is organized as follows. In Section 2, we recall some basic facts about the Boltzmann equation under consideration, present a couple of examples to which the theory applies, and derive the stochastic representation of solutions. Section 3 contains the statements of our main theorems. The results are classified into those on convergence in distribution (Section 3.1), convergence in Wasserstein metrics at quantitative rates (Section 3.2) and strong convergence of the probability densities (Section 3.3). All proofs are collected in Section 4.

## 2. Examples and preliminary results

One-dimensional kinetic equations of type (1)-(2), like the Kac equation and its variants, provide simplified models for a spatially homogeneous gas, in which particles move only in one spatial direction. The measure μ(t), whose characteristic function is the solution ϕ(t;⋅) of (1), describes the probability distribution of the velocity of a molecule at time t. The basic assumption is that particles change their velocities only because of binary collisions. When two particles collide, then their velocities change from v and w, respectively, to

 (8) v′=L1v+R1wandw′=R2v+L2w

with (L1, R1) = (L2, R2) = (L, R).

More generally, one can consider binary interactions obeying (8), where (L1, R1) and (L2, R2) are two identically distributed random vectors (not necessarily independent) with the same law as (L, R). This leads, at least formally, to equation (1).

### 2.1. Examples

The following applications are supposed to serve to motivate the study of the Boltzmann equation (1) with the condition (3). The first two examples are taken from gas dynamics, while the third originates from econophysics.

#### Kac like models

Instead of discussing the physical relevance of the Kac model, we simply remark that it constitutes the most sensible one-dimensional caricature of the Boltzmann equation for elastic Maxwell molecules in three dimensions. A comprehensive review on the mathematical theory of the latter is found e.g. in Bobylëv (1988). The term “elastic” refers to the fact that the kinetic energy of two interacting molecules – which is proportional to the square of the particles’ velocities – is preserved in their collisions. Indeed, since the pre- and post-collisional velocities are related by a rotation, one obtains v′² + w′² = v² + w².

We shall not detail any of the numerous results available in the extensive literature on the Kac equation, but simply summarize some basic properties that are connected with our investigations here. First, we remark that the microscopic conservation of the particles’ kinetic energy implies the conservation of the average energy, which is the second moment of the solution to (1). Moreover, it is easily proven that the average velocity, i.e. the first moment of μ(t), converges to zero exponentially fast. For t → +∞, the solution converges weakly to a Gaussian measure that is determined by the conserved second moment.

As already mentioned in the introduction, the study of the original Kac model can be reduced to the study of a particular case of the model we are considering. Indeed, it is well-known that the solution of the Kac equation can be written as

 (9) ϕ(t;ξ) = e^{−t} i Im(ϕ0(ξ)) + ϕ∗(t;ξ)

where ϕ∗ is the solution to problem (1) with Re(ϕ0) in the place of ϕ0, and with (L, R) = (|sin Θ|, |cos Θ|). Hence, we can invoke Theorem 3.4, which provides another proof of the large-time convergence of solutions to a Gaussian law. In fact, also Theorem 3.8 is applicable, which shows that the densities of μ(t) converge in L¹(ℝ) and L²(ℝ), provided X0 possesses a density with finite Linnik functional.

These consequences are weak in comparison to the various extremely refined convergence estimates for the solutions to the Kac equation available in the literature. See, e.g., the review Regazzini (2008). On the other hand, our proofs do not rely on any of the symmetry properties that are specific for the Kac model. Thus, our aforementioned results extend — word by word — to the wide class of problems (1)-(2) with L² + R² = 1 a.s.

The variant of the Kac equation, introduced in Pulvirenti and Toscani (2004), is called inelastic because the total kinetic energy of two colliding particles is not preserved in the collision mechanism, but decreases in general. Consequently, if the second moment of the initial condition is finite, then the second moment of the solution converges to zero exponentially fast in t. Non-trivial long-time limits are thus necessarily obtained from initial conditions with infinite energy. In Bassetti et al. (2008), it is shown that the solution converges weakly to an α-stable law if the initial condition belongs to the normal domain of attraction of that stable law.

As for the Kac model, the study of the inelastic Kac model too can be reduced to the framework of the present paper. Hence, Theorem 3.8 and Proposition 3.6 yield new results concerning strong convergence of densities in L¹(ℝ) and L²(ℝ) and convergence with respect to the Wasserstein metrics.

#### Inelastic Maxwell molecules

We shall now consider a variant of the Kac model in which the energy is not conserved in the individual particle collisions, but gains and losses balance in such a way that the average kinetic energy is conserved. This is achieved by relaxing the condition L² + R² = 1 a.s. to E[L² + R²] = 1, which is (3) with α = 2.

Just as the Kac equation is a caricature of the Boltzmann equation for elastic Maxwell molecules, the model at hand can be thought of as a caricature of a Boltzmann equation for inelastic Maxwell molecules in three dimensions. For the definition of the corresponding model, its physical justification, and a collection of relevant references, see Carrillo et al. (2008). We stress, however, that the Kac caricature of inelastic Maxwell molecules is not the same as the inelastic Kac model from the preceding paragraph.

Conservation of the total energy can be proven for centered solutions; like symmetry, also centering is propagated from the initial condition to μ(t) for any t > 0 by (1). The argument leading to energy conservation is given in the remarks following Theorem 3.4.

Relaxation from strict energy conservation to conservation in the mean affects the possibilities for the large-time dynamics of μ(t). It follows from Theorem 3.4 that if E[L^p + R^p] < 1 for some p > 2, then any solution, which is centered and of finite second moment initially, converges weakly to a non-trivial steady state μ∞. However, unless L² + R² = 1 a.s., μ∞ is not a Gaussian. In fact, (L, R) can be chosen in such a way that μ∞ possesses only a finite number of moments. In physics, such velocity distributions are referred to as “high energy tailed”, and typically appear when the molecular gas is connected to a thermal bath.

An example leading to high energy tails is the following: let (L, R) be such that E[L² + R²] = 1 with P{L² + R² = 1} < 1. One verifies that E[L^p + R^p] < 1 for all p ∈ (2, 6) and E[L⁶ + R⁶] ≥ 1 for a suitable choice of (L, R), so Theorem 3.4 guarantees the existence of a non-degenerate steady state μ∞. Moreover, one concludes further from Theorem 3.4 that the sixth moment of μ∞ diverges, whereas all lower moments are finite.

#### Wealth distribution

Recently, an alternative interpretation of the equation (1) has become popular. The homogeneous gas of colliding molecules is replaced by a simple market with a large number of interacting agents. The current “state” of each individual is characterized by a single number, his or her wealth v ≥ 0. Correspondingly, the measure μ(t) represents the distribution of wealth among the agents at time t. The collision rule (8) describes how wealth is exchanged between agents in binary trade interactions. See, e.g., Slanina (2004); Cordier et al. (2005).

Typically, it is assumed that μ(t) is supported on the positive semi-axis. In fact, the first moment of μ(t) represents the total wealth of the society and plays the same rôle as the energy in the previous discussion. In particular, it is conserved by the evolution.

In the first approaches, see e.g. Angle (1986), conservation of wealth in each trade was required, i.e. v′ + w′ = v + w. Hence, assuming (L1, R1) = (L2, R2) = (L, R) in (8), this yields L + R = 1 a.s. However, the obtained results were unsatisfactory: in the long time limit, the wealth distribution concentrates on the society’s average wealth, so that asymptotically, all agents possess the same amount of money. This also follows from our Theorem 3.3.

More realistic results have been obtained by Matthes and Toscani (2008), where trade rules have been introduced that satisfy (3) with α = 1, but in general L + R ≠ 1. Thus wealth can be increased or diminished in individual trades, but the society’s total wealth, i.e. the first moment of μ(t), remains constant in time. The proof of conservation of the mean wealth can also be found in the remarks after Theorem 3.3.

A typical example for trade rules is the following. Each of the two interacting agents buys from the other one some risky asset at the price of the γ-th fraction of the respective buyer’s current wealth, where γ ∈ (0, 1) is the relative price of the investment; these investments either pay off and produce some additional wealth, or lose value, both in proportion to their original price, with a random risk factor. Over-simplified as this model might be, it is able to produce (for suitable choices of the price and the risk) steady distributions with only finitely many moments, as are typical wealth distributions for western countries; see Matthes and Toscani (2008) for further discussion.

An example is provided by a suitable choice of the price and risk parameters. One easily verifies that for such a choice E[L + R] = 1, E[L^p + R^p] < 1 for p ∈ (1, 3), and E[L³ + R³] ≥ 1. By Theorem 3.3, it follows that there exists a non-degenerate steady distribution that possesses all moments up to the third, whereas the third moment diverges.

### 2.2. Probabilistic representation of the solution

As already mentioned, a convenient way to represent the solution to the problem (1) is

 (10) ϕ(t;ξ)=∞∑n=0e−t(1−e−t)n^qn(ξ)(t≥0,ξ∈R)

where is recursively defined by

 (11) q̂_0(ξ) := ϕ0(ξ),  q̂_n(ξ) := (1/n) Σ_{j=0}^{n−1} E[q̂_j(Lξ) q̂_{n−1−j}(Rξ)]  (n = 1, 2, …).

The series in (10) is referred to as Wild sum, since the representation (10) has been derived in Wild (1951) for the solution of the Kac equation. In this section, we shall rephrase the Wild sum in a probabilistic way. The idea goes back to McKean (1966, 1967), where McKean relates the Wild series to a random walk on a class of binary trees, the so–called McKean trees. It is not hard to verify that each of the expressions q̂_n in the Wild series is indeed a characteristic function. Now, following Gabetta and Regazzini (2006b), we shall define a sequence of random variables (W_n)_{n≥1} such that E[e^{iξW_{n+1}}] = q̂_n(ξ).
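As a sanity check on the Wild recursion (11), one can verify numerically that for the classical Kac kernel the standard Gaussian characteristic function is a fixed point of the gain operator, so that the q̂_n all coincide with ϕ0 when ϕ0 is Gaussian. A minimal Monte Carlo sketch (our own illustration, not from the paper):

```python
import math
import random

def phi0(xi):
    # Characteristic function of a standard Gaussian.
    return math.exp(-0.5 * xi * xi)

def gain_q1(xi, samples=5000):
    # Monte Carlo evaluation of q1(xi) = E[phi0(L xi) phi0(R xi)]
    # for the classical Kac kernel (L, R) = (sin T, cos T).
    acc = 0.0
    for _ in range(samples):
        t = random.uniform(0.0, 2.0 * math.pi)
        acc += phi0(math.sin(t) * xi) * phi0(math.cos(t) * xi)
    return acc / samples

random.seed(1)
for xi in (0.0, 0.7, 2.0):
    # Since sin^2 + cos^2 = 1, every sample equals phi0(xi): the Gaussian
    # is a fixed point of the gain operator, hence q_n = phi0 for all n.
    assert abs(gain_q1(xi) - phi0(xi)) < 1e-9
```

Here the Monte Carlo average has no statistical error at all, because each sample equals exp(−ξ²/2) exactly; for a non-Gaussian ϕ0 the same routine would give a genuine stochastic approximation of q̂_1.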

On a sufficiently large probability space (Ω, F, P), let the following be given:

• a sequence (X_n)_{n≥1} of independent and identically distributed random variables with common distribution function F0;

• a sequence ((L_n, R_n))_{n≥1} of independent and identically distributed random vectors, distributed as (L, R);

• a sequence (I_n)_{n≥1} of independent integer random variables, where each I_n is uniformly distributed on the indices {1, …, n};

• a stochastic process (ν_t)_{t>0} with values in ℕ = {1, 2, …} and P{ν_t = n} = e^{−t}(1 − e^{−t})^{n−1}.

We assume further that

 (In)n≥1,(Ln,Rn)n≥1,(Xn)n≥1and(νt)t>0

are stochastically independent. The random array of weights is recursively defined as follows:

 β1,1:=1,(β1,2,β2,2):=(L1,R1)

and, for any n ≥ 2,

 (12) (β1,n+1,…,βn+1,n+1):=(β1,n,…,βIn−1,n,LnβIn,n,RnβIn,n,βIn+1,n,…,βn,n).

Finally set

 (13) Wn:=n∑j=1βj,nXjandVt:=Wνt=νt∑j=1βj,νtXj.

There is a direct interpretation of this construction in terms of McKean trees. For an introduction to McKean trees, see, e.g., Carlen et al. (2000). Each finite sequence (I_1, …, I_{n−1}) corresponds to a McKean tree with n leaves. The tree associated to (I_1, …, I_n) is obtained from the tree associated to (I_1, …, I_{n−1}) upon replacing the I_n-th leaf (counting from the left) by a binary branching with two new leaves. The left of the new branches is labelled with L_n, and the right one with R_n. Finally, the weights β_{j,n} are associated to the leaves of the n-tree; namely, β_{j,n} is the product of the labels assigned to the branches along the ascending path connecting the j-th leaf to the root. The trees for two small values of n are displayed in Figure 1.
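The recursion (12) is straightforward to implement. The sketch below (our own illustration, assuming the classical Kac kernel (L, R) = (sin Θ, cos Θ)) builds one sample of the weight array (β_{1,n}, …, β_{n,n}) and confirms the almost sure identity (7), i.e. that the squares of the weights sum to one:

```python
import math
import random

def sample_weights(n, sample_LR, rng):
    # Recursion (12): pick a uniform index I among the current leaves and
    # replace beta_I by the pair (L * beta_I, R * beta_I); after n - 1
    # steps the list holds the n weights (beta_{1,n}, ..., beta_{n,n}).
    beta = [1.0]
    for m in range(1, n):
        i = rng.randrange(m)          # I_m, uniform on the m current leaves
        L, R = sample_LR(rng)
        b = beta.pop(i)
        beta[i:i] = [L * b, R * b]    # split the chosen leaf into two
    return beta

def kac_LR(rng):
    t = rng.uniform(0.0, 2.0 * math.pi)
    return math.sin(t), math.cos(t)

rng = random.Random(2)
for _ in range(200):
    beta = sample_weights(12, kac_LR, rng)
    # For the classical Kac model, (7) holds: sum_j beta_{j,n}^2 = 1 a.s.
    assert abs(sum(b * b for b in beta) - 1.0) < 1e-9
```

In the tree picture, each list entry is exactly the product of the branch labels along the path from the corresponding leaf to the root.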

In the Wild construction (11), McKean trees with n leaves are obtained by joining pairs of trees with j + 1 and n − j − 1 leaves, respectively, at a new common root. Our construction, in contrast, produces the (n+1)-leaved trees from the n-leaved trees by replacing a leaf by a binary branching. In a way, the second construction is much more natural — or, at least, more biological! The next proposition shows that both constructions indeed lead to the same result.

In the rest of the paper, expectations with respect to P will be denoted by E.

###### Proposition 2.1 (Probabilistic representation).

Equation (1) has a unique solution ϕ(t;⋅), which coincides with the characteristic function of V_t, i.e.

 ϕ(t,ξ)=E[eiξVt]=∞∑n=0e−t(1−e−t)nE[eiξWn+1](t>0,ξ∈R).
###### Proof.

The respective proof for the Kac case is essentially already contained in McKean (1966). See Gabetta and Regazzini (2006b) for a more complete proof. Here, we extend the argument to the problem (1). First of all, it is easy to prove, following Wild (1951) and McKean (1966), that formulas (10) and (11) produce the unique solution to problem (1). See also Sznitman (1986). Hence, comparing the Wild sum representation (10) and the definition of V_t in (13), it obviously suffices to prove that

 (14) ^qℓ−1(ξ)=E[eiξWℓ],

which we will show by induction on ℓ. First, note that E[e^{iξW_1}] = E[e^{iξX_1}] = q̂_0(ξ) and

 E[eiξW2]=E[eiξ(L1X1+R1X2)]=E[E[eiξ(L1X1+R1X2)|L1,R1]]=^q1(ξ),

which shows (14) for ℓ = 1 and ℓ = 2. Let n ≥ 3, and assume that (14) holds for all ℓ ≤ n − 1; we prove (14) for ℓ = n.

Recall that the weights β_{j,n} are products of random variables L_i and R_i. By the recursive definition in (12), one can define a random index K_n such that all products β_{j,n} with j ≤ K_n contain L_1 as a factor, while the remaining products β_{j,n} with j > K_n contain R_1. (In terms of McKean trees, K_n is the number of leaves in the left sub-tree, and n − K_n the number of leaves in the right one.) By induction it is easy to see that

 P{K_n = i} = 1/(n − 1),  i = 1, …, n − 1;

cf. Lemma 2.1 in Carlen et al. (2000). Now,

 A_{K_n} := Σ_{j=1}^{K_n} (β_{j,n}/L_1) X_j,  B_{K_n} := Σ_{j=K_n+1}^{n} (β_{j,n}/R_1) X_j  and  (L_1, R_1)

are conditionally independent given K_n. By the recursive definition of the weights in (12), the following is easily deduced: the conditional distribution of A_{K_n}, given {K_n = k}, is the same as the (unconditional) distribution of Σ_{j=1}^{k} β_{j,k} X_j, which clearly is the same distribution as that of W_k. Analogously, the conditional distribution of B_{K_n}, given {K_n = k}, equals the distribution of Σ_{j=1}^{n−k} β_{j,n−k} X_j, which further equals the distribution of W_{n−k}. Hence,

 E[e^{iξW_n}] = (1/(n−1)) Σ_{k=1}^{n−1} E[e^{iξ(L_1 A_k + R_1 B_k)} | {K_n = k}]
  = (1/(n−1)) Σ_{k=1}^{n−1} E[ E[e^{iξL_1 W_k} | L_1, R_1] E[e^{iξR_1 W_{n−k}} | L_1, R_1] ]
  = (1/(n−1)) Σ_{k=1}^{n−1} E[q̂_{k−1}(L_1 ξ) q̂_{n−k−1}(R_1 ξ)] = (1/(n−1)) Σ_{j=0}^{n−2} E[q̂_{n−2−j}(L_1 ξ) q̂_j(R_1 ξ)],

which is q̂_{n−1}(ξ) by the recursive definition in (11). ∎

## 3. Convergence results

In order to state our results we need to review some elementary facts about the central limit theorem for stable distributions. Let us recall that a probability distribution is said to be a centered stable law of exponent α (with 0 < α ≤ 2) if its characteristic function ĝ_α is of the form

 (15) ĝ_α(ξ) = exp{−k|ξ|^α (1 − iη tan(πα/2) sign ξ)}  if α ∈ (0,1) ∪ (1,2),
   ĝ_α(ξ) = exp{−k|ξ| (1 + 2iη/π log|ξ| sign ξ)}  if α = 1,
   ĝ_α(ξ) = exp{−σ²|ξ|²/2}  if α = 2,

where k ≥ 0, η ∈ [−1, 1] and σ² ≥ 0.

By definition, a distribution function F belongs to the domain of normal attraction of a stable law of exponent α if for any sequence (X_n)_{n≥1} of independent and identically distributed real-valued random variables with common distribution function F, there exists a sequence (c_n)_{n≥1} of real numbers such that the law of

 n^{−1/α} Σ_{i=1}^{n} X_i − c_n

converges weakly to a stable law of exponent α.

It is well-known that, provided α < 2, F belongs to the domain of normal attraction of an α-stable law if and only if F satisfies

 (16) lim_{x→+∞} x^α (1 − F(x)) = c+ < +∞,  lim_{x→−∞} |x|^α F(x) = c− < +∞.

Typically, one also requires that c+ + c− > 0 in order to exclude convergence to the probability measure concentrated in 0, but here we shall include the situation c+ = c− = 0 as a special case. The parameters k and η of the associated stable law in (15) are identified from c+ and c− by

 (17) k = (c+ + c−) π / (2 Γ(α) sin(πα/2)),  η = (c+ − c−)/(c+ + c−),

with the convention that η = 0 if c+ + c− = 0. In contrast, if α = 2, F belongs to the domain of normal attraction of a Gaussian law if and only if it has finite variance σ².

For more information on stable laws and central limit theorem see, for example, Chapter 2 of Ibragimov and Linnik (1971) and Chapter 17 of Fristedt and Gray (1997).

### 3.1. Convergence in distribution

We return to our investigation of solutions to the initial value problem (1)-(2). For definiteness, let the two non-negative random variables L and R, which define the dynamics in (2), be fixed from now on. We assume that they satisfy

 (18) E[Lα+Rα]=1

for some number α ∈ (0, 2]. We introduce the convex function S by

 S(s) = E[L^s + R^s] − 1  (s ≥ 0),

where we adopt the convention 0⁰ := 0. From (18) it follows that S(α) = 0. Recall that F0 is the probability distribution function of the initial condition X0 for (1), and its characteristic function is ϕ0.

The main results presented below show that if the initial condition belongs to the domain of normal attraction of an α-stable law, then the solution ϕ(t;⋅) to the problem (1)-(2) converges, as t → +∞, to the characteristic function of a mixture of stable distributions of exponent α. The mixing distribution is given by the law of the limit for n → ∞ of the random variables

 M(α)n=n∑j=1βαj,n,

which are defined in terms of the random weights β_{j,n} from (12). The content of the following lemma is that M^(α)_n converges almost surely to a random variable M^(α)_∞.

###### Lemma 3.1.

Under condition (18),

 (19) E[M(α)n]=E[M(α)νt]=1for all n≥1 and t>0,

and M^(α)_n converges almost surely to a non-negative random variable M^(α)_∞.

In particular,

• if L^α + R^α = 1 a.s., then M^(α)_n = 1 for every n and M^(α)_{ν_t} = 1 for every t > 0. Moreover, M^(α)_∞ = 1 almost surely;

• if S′(α) ≥ 0 and if S(p) < +∞ for some p > α, then M^(α)_∞ = 0 almost surely;

• if P{L^α + R^α = 1} < 1 and if S(p) < 0 for some p > α, then M^(α)_∞ is a non-degenerate random variable with E[M^(α)_∞] = 1 and P{M^(α)_∞ > 0} > 0. Moreover, the characteristic function ψ of M^(α)_∞ is the unique solution of

 (20) ψ(ξ)=E[ψ(ξLα)ψ(ξRα)](ξ∈R)

with ψ(0) = 1 and −iψ′(0) = 1. Finally, for any p > 1, the moment E[(M^(α)_∞)^p] is finite if and only if S(αp) < 0.

We are eventually in the position to formulate our main results. The first statement concerns the case where α ≠ 1 and α ≠ 2.

###### Theorem 3.2.

Assume that (18) holds with α ∈ (0, 1) ∪ (1, 2) and that S(p) < 0 for some p > α. Moreover, let condition (16) be satisfied for F0 and let X0 be centered if α > 1. Then V_t converges in distribution, as t → +∞, to a random variable V∞ with the following characteristic function

 (21) ϕ∞(ξ)=E[exp(iξV∞)]=E[exp{−|ξ|αkM(α)∞(1−iηtan(πα/2)signξ)}](ξ∈R),

where the parameters k and η are defined in (17). In particular, V∞ is a non-degenerate random variable if c+ + c− > 0 and M^(α)_∞ is non-degenerate, whereas V∞ = 0 a.s. if c+ + c− = 0, or if M^(α)_∞ = 0 a.s. Moreover, if L^α + R^α = 1 a.s., then the distribution of V∞ is an α-stable law. Finally, if V∞ is non-degenerate, then E[|V∞|^p] < +∞ if and only if p < α.

If η = 0, then the limit distribution is a mixture of symmetric stable distributions. For instance this is true if F0 is the distribution function of a symmetric random variable.

If α ∈ (0, 1) and c− = 0, then clearly η = 1, and the limit distribution is a mixture of positive stable distributions. Recall that a positive stable distribution is characterized by its Laplace transform; hence, in this case,

 E[exp(−sV∞)] = E[exp{−s^α k̄ M^(α)_∞}]  for all s > 0, with k̄ = c+ ∫_0^{+∞} (1 − e^{−y}) y^{−α−1} dy.

A consequence of Theorem 3.2 is that if c+ = c− = 0, then the limit V∞ is zero almost surely, since k = 0. The situation is different in the cases α = 1 and α = 2, where V∞ is non-trivial provided that the first respectively second moment of X0 is finite.

###### Theorem 3.3.

Assume that (18) holds with α = 1 and that S(p) < 0 for some p > 1. If the initial condition possesses a finite first moment m0 = E[X0], then V_t converges in distribution, as t → +∞, to V∞ = m0 M^(1)_∞. In particular, V∞ is non-degenerate if m0 ≠ 0 and M^(1)_∞ is non-degenerate, whereas V∞ = 0 a.s. if m0 = 0. Moreover, if L + R = 1 a.s., then V∞ = m0 a.s. Finally, if V∞ is non-degenerate and p > 1, then E[|V∞|^p] < +∞ if and only if S(p) < 0.

We remark that under the hypotheses of the previous theorem, the first moment of the solution is preserved in time. Indeed one has,

 E[V_t] = E[E[Σ_{j=1}^{ν_t} β_{j,ν_t} X_j | ν_t, β_{1,ν_t}, …, β_{ν_t,ν_t}]] = m0 E[M^(1)_{ν_t}] = m0,

where the last equality follows from (19).
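The conservation of the first moment can also be illustrated by simulation. In the sketch below (our own illustration; the choice of L and R as independent uniform variables on (0, 1) is a hypothetical example satisfying E[L + R] = 1 without L + R = 1 a.s.), V_t is sampled as in (13) and the Monte Carlo mean is compared with m0:

```python
import math
import random

def sample_weights_uniform(n, rng):
    # Weight recursion (12) with L, R independent uniform on (0, 1):
    # E[L + R] = 1, so (3) holds with alpha = 1, while L + R != 1 a.s.
    beta = [1.0]
    for m in range(1, n):
        i = rng.randrange(m)
        L, R = rng.random(), rng.random()
        b = beta.pop(i)
        beta[i:i] = [L * b, R * b]
    return beta

def sample_Vt(t, rng):
    # nu_t is geometric on {1, 2, ...}: P{nu_t = n} = e^-t (1 - e^-t)^(n-1).
    p = math.exp(-t)
    n = 1
    while rng.random() > p:
        n += 1
    beta = sample_weights_uniform(n, rng)
    # X_j ~ Exponential(1), so m0 = E[X_j] = 1.
    return sum(b * rng.expovariate(1.0) for b in beta)

rng = random.Random(3)
mean = sum(sample_Vt(1.0, rng) for _ in range(20000)) / 20000
# The first moment is conserved: E[V_t] = m0 = 1 for every t.
assert abs(mean - 1.0) < 0.05
```

The tolerance accounts for Monte Carlo error only; the identity E[V_t] = m0 itself is exact, as shown above.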

Theorem 3.3 above is the most natural generalization of the results in Matthes and Toscani (2008), where an additional moment condition on (L, R) has been assumed. The respective statement for α = 2 reads as follows.

###### Theorem 3.4.

Assume that (18) holds with α = 2 and that S(p) < 0 for some p > 2. If E[X0] = 0 and σ² := E[X0²] < +∞, then V_t converges in distribution, as t → +∞, to a random variable V∞ with characteristic function

 (22) ϕ∞(ξ)=E[exp(iξV∞)]=E[exp(−ξ2σ22M(2)∞)](ξ∈R).

In particular, V∞ is a non-degenerate random variable if σ² > 0 and M^(2)_∞ is non-degenerate, whereas V∞ = 0 a.s. if σ² = 0. Moreover, if L² + R² = 1 a.s., then V∞ is a Gaussian random variable. Finally, if V∞ is non-degenerate and p > 2, then E[|V∞|^p] < +∞ if and only if S(p) < 0.

Some additional properties of the solution should be mentioned: centering is obviously propagated from the initial condition to the solution at all later times t > 0. Moreover, under the hypotheses of the theorem, the second moment of the solution is preserved in time. Indeed, taking into account that the X_j are independent and centered,

 E[V_t²] = E[E[Σ_{j,k=1}^{ν_t} β_{j,ν_t} β_{k,ν_t} X_j X_k | ν_t, β_{1,ν_t}, …, β_{ν_t,ν_t}]] = σ² E[M^(2)_{ν_t}] = σ²,

where we have used (19) in the last step.

The technically most difficult result concerns the situation α = 1 for an initial condition of infinite first moment. Weak convergence to a limit can still be proven if E[|X0|] = +∞, but the law of X0 belongs to the domain of normal attraction of a 1-stable distribution. However, a suitable time-dependent centering needs to be applied to the random variables V_t.

###### Theorem 3.5.

Assume that (18) holds with α = 1 and that S(p) < 0 for some p > 1. Moreover, let the condition (16) be satisfied for α = 1. Then the random variable

 (23) V∗t:=Vt−νt∑j=1qj,νt,whereqj,n:=∫Rsin(βj,nx)dF0(x),

converges in distribution to a limit with characteristic function

 (24) ϕ∞(ξ)=E[exp(iξV∗∞)]=E[exp{−|ξ|kM(1)∞(1+2iη/πlog|ξ|signξ)}](ξ∈R)

where the parameters k and η are defined in (17). In particular, V*∞ is a non-degenerate random variable if c+ + c− > 0 and M^(1)_∞ is non-degenerate, whereas V*∞ = 0 a.s. if c+ + c− = 0, or if M^(1)_∞ = 0 a.s. Moreover, if L + R = 1 a.s., then the distribution of V*∞ is a 1-stable law. Finally, if V*∞ is non-degenerate, then E[|V*∞|^p] < +∞ if and only if p < 1.

### 3.2. Rates of convergence in Wasserstein metrics

Recall that the Wasserstein distance of order γ > 0 between two random variables X and Y is defined by

 (25) W_γ(X, Y) := inf E[|X′ − Y′|^γ]^{min(1, 1/γ)}.

The infimum is taken over all pairs (X′, Y′) of real random variables whose marginal distribution functions are the same as those of X and Y, respectively. In general, the infimum in (25) may be infinite; a sufficient (but not necessary) condition for finite distance is that both E[|X|^γ] < +∞ and E[|Y|^γ] < +∞. For more information on Wasserstein distances see, for example, Rachev (1991).

Recall that the V_t are random variables whose characteristic functions solve the initial value problem (1) for the Boltzmann equation, and that V∞ is the limit in distribution of V_t as t → +∞.

###### Proposition 3.6.

Assume (18) and S(γ) < 0, for some γ with α < γ ≤ 1 or 1 < γ ≤ 2. Assume further that (16) holds if α < 2, or that E[X0²] < +∞ if α = 2, respectively. Then

 (26) W_γ(V_t, V∞) ≤ A W_γ(X0, V∞) e^{−Bt|S(γ)|},

with A = B = 1 if γ ≤ 1, and A = 1, B = 1/γ otherwise.

Clearly, the content of Proposition 3.6 is void unless

 (27) Wγ(X0,V∞)<∞.

In the case α = 2, the hypothesis E[X0²] < +∞ guarantees (27). In all other cases, (27) is a non-trivial requirement since, by Theorem 3.2, either V∞ = 0 a.s. or E[|V∞|^γ] = +∞. The following Lemma provides a sufficient criterion for (27), tailored to the situation at hand.

###### Lemma 3.7.

Assume, in addition to the hypotheses of Proposition 3.6, that α < 2 and that F0 satisfies hypothesis (16) in the more restrictive sense that there exists a constant C and some ε > 0 with

 (28) |1 − c+ x^{−α} − F0(x)| ≤ C x^{−α−ε}  for all x > 0,
 (29) |F0(x) − c− (−x)^{−α}| ≤ C |x|^{−α−ε}  for all x < 0.

Provided that ε is small enough, it follows that W_γ(X0, V∞) < +∞ for some γ > α, and then estimate (26) is non-trivial.

### 3.3. Strong convergence of densities

As already mentioned in the introduction, under suitable hypotheses, the probability densities of the V_t exist and converge strongly in the Lebesgue spaces L¹(ℝ) and L²(ℝ).

###### Theorem 3.8.

For given α, let the hypotheses of Theorem 3.2 or Theorem 3.4 hold. Assume further that (16) holds with c+ + c− > 0 if α < 2, so that V_t converges in distribution, as t → +∞, to a non-degenerate limit V∞. Moreover assume also that

1. L^δ + R^δ ≤ 1 a.s. for some δ > 0,

2. X0 possesses a density h with finite Linnik-Fisher functional; equivalently, its Fourier transform ĥ satisfies

 ∫R|ξ|2∣∣ˆh(ξ)∣∣2dξ<+∞.

Then, the random variable V_t possesses a density f(t;⋅) for all t > 0, V∞ has a density f∞, and f(t;⋅) converges, as t → +∞, to f∞ in any L^p(ℝ) with 1 ≤ p ≤ 2, that is

 limt→+∞∫R|f(t;v)−f∞(v)|pdv=0.
###### Remark 1.

Some comments on the hypotheses (H1) and (H2) are in order.

• In view of (18), condition (H1) can be satisfied only if L ≤ 1 and R ≤ 1 a.s. Notice that (H1) becomes the weaker the larger δ is; in fact, the sets {(ℓ, r) : ℓ^δ + r^δ ≤ 1} increase towards the unit square as δ → +∞.

• The smoothness condition (H2) is not quite as restrictive as it may seem. For instance, recall that the convolution of any probability density with an arbitrary “mollifier” of finite Linnik-Fisher functional (e.g. a Gaussian) produces again a probability density of finite Linnik-Fisher functional.

## 4. Proofs

We continue to assume that the law of the random vector (L, R) is given and satisfies (18) with some α ∈ (0, 2], implying S(α) = 0.

### 4.1. Properties of the weights βj,n (Lemma 3.1)

In this subsection we shall prove a generalization of a useful result obtained in Gabetta and Regazzini (2006a). Set

 (30) G_n = (I_1, …, I_{n−1}, L_1, R_1, …, L_{n−1}, R_{n−1}),

and denote by 𝒢_n the σ-algebra generated by G_n.

###### Proposition 4.1.

If S(s) < +∞ for some s > 0, then

 E[M^(s)_n] = E[Σ_{j=1}^{n} β^s_{j,n}] = Γ(n + S(s)) / (Γ(n) Γ(S(s) + 1))

and

 E[M^(s)_{ν_t}] = Σ_{n≥1} e^{−t}(1 − e^{−t})^{n−1} E[M^(s)_n] = e^{S(s)t}.
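The Gamma-function formula of Proposition 4.1 can be tested by Monte Carlo simulation. In the sketch below (our own illustration) we take L = √U and R = √V with U, V independent and uniform on (0, 1), so that (18) holds with α = 2 and S(s) = 2/(s/2 + 1) − 1, and compare the empirical mean of M^(s)_n with the exact expression:

```python
import math
import random

def sample_M(n, s, rng):
    # M_n^(s) = sum_j beta_{j,n}^s for weights built by recursion (12),
    # here with L = sqrt(U), R = sqrt(V), U, V independent uniform(0,1),
    # so that E[L^2 + R^2] = 1 (condition (18) with alpha = 2).
    beta = [1.0]
    for m in range(1, n):
        i = rng.randrange(m)
        L, R = math.sqrt(rng.random()), math.sqrt(rng.random())
        b = beta.pop(i)
        beta[i:i] = [L * b, R * b]
    return sum(b ** s for b in beta)

def S(s):
    # S(s) = E[L^s + R^s] - 1 = 2 E[U^(s/2)] - 1 = 2/(s/2 + 1) - 1.
    return 2.0 / (s / 2.0 + 1.0) - 1.0

n, s = 5, 4.0
exact = math.gamma(n + S(s)) / (math.gamma(n) * math.gamma(S(s) + 1.0))
rng = random.Random(4)
mc = sum(sample_M(n, s, rng) for _ in range(40000)) / 40000
# Proposition 4.1: E[M_n^(s)] = Gamma(n + S(s)) / (Gamma(n) Gamma(S(s)+1)).
assert abs(mc - exact) / exact < 0.1
```

Here S(4) = −1/3, and since all weights are bounded by one, the Monte Carlo estimate has small variance and matches the Gamma-function value closely.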