From the totally asymmetric simple exclusion process to the KPZ fixed point

Jeremy Quastel and Konstantin Matetski
University of Toronto, 40 St. George Street, Toronto, Ontario, Canada M5S 2E4
quastel@math.toronto.edu, matetski@math.toronto.edu
Abstract

These notes are based on the article [Matetski, Quastel, Remenik, The KPZ fixed point, 2016] and give a self-contained exposition of the construction of the KPZ fixed point, which is a Markov process at the centre of the KPZ universality class. Starting from Schütz’s formula for the transition probabilities of the totally asymmetric simple exclusion process, the method proceeds by rewriting them as the biorthogonal ensemble/non-intersecting path representation found by Borodin, Ferrari, Prähofer and Sasamoto. We derive an explicit formula for the correlation kernel which involves transition probabilities of a random walk forced to hit a curve defined by the initial data. This in particular yields a Fredholm determinant formula for the multipoint distribution of the height function of the totally asymmetric simple exclusion process with arbitrary initial condition. In the 1:2:3 scaling limit the formula leads in a transparent way to a Fredholm determinant formula for the KPZ fixed point, in terms of an analogous kernel based on Brownian motion. The formula readily reproduces known special self-similar solutions such as the Airy$_1$ and Airy$_2$ processes.

Keywords: TASEP, growth process, biorthogonal ensemble, determinantal point process, KPZ fixed point

2010 Mathematics Subject Classification. Primary 60K35; Secondary 82C27.

1 The totally asymmetric simple exclusion process

The totally asymmetric simple exclusion process (TASEP) is a basic interacting particle system studied in non-equilibrium statistical mechanics. The system consists of particles performing totally asymmetric nearest neighbour random walks on the one-dimensional integer lattice with the exclusion rule. Each particle independently attempts to jump to the neighbouring site to the right at rate $1$, the jump being allowed only if that site is unoccupied. More precisely, if we denote by $\eta \in \{0,1\}^{\mathbb{Z}}$ a particle configuration (where $\eta(x) = 1$ if there is a particle at the site $x$, and $\eta(x) = 0$ if the site is empty), then TASEP is a Markov process with infinitesimal generator acting on cylinder functions by

where $\eta^{x,x+1}$ denotes the configuration $\eta$ with the values at $x$ and $x+1$ interchanged.

See [ligg1] for the proof of the non-trivial fact that this process is well-defined.
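For concreteness, the generator and the swapped configuration can be written explicitly as follows (this is the standard form, with jump rate $1$ and the usual notation $L$ and $\eta^{x,x+1}$):
\[
L f(\eta) \;=\; \sum_{x \in \mathbb{Z}} \eta(x)\bigl(1 - \eta(x+1)\bigr)\bigl(f(\eta^{x,x+1}) - f(\eta)\bigr),
\qquad
\eta^{x,x+1}(y) \;=\;
\begin{cases}
\eta(x+1), & y = x,\\
\eta(x), & y = x+1,\\
\eta(y), & y \neq x,\,x+1.
\end{cases}
\]
Each term in the sum corresponds to a particle at $x$ attempting, at rate $1$, to jump to the empty site $x+1$.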

Exercise 1

Prove that the following measures are invariant for TASEP, i.e. that if the configuration is initially distributed according to the measure then it has the same distribution at all later times:

  1. the Bernoulli product measures with any density in $[0,1]$,

  2. the Dirac measure concentrated on any configuration in which all sites to the right of some given site (including that site) are occupied and all sites to its left are empty.

It is known [ligg1] that these are the only invariant measures.

The TASEP dynamics preserves the order of particles. Let us denote the positions of the particles at time $t$ by
\[
X_t(1) > X_t(2) > X_t(3) > \cdots,
\]
where $X_t(k)$ is the position of the $k$-th particle. Adding $\pm\infty$ to the state space and placing a necessarily infinite number of particles at infinity allows for left- or right-finite data with no change of notation (the particles at infinity play no role in the dynamics). We follow the standard practice of ordering particles from the right; for right-finite data the rightmost particle is labelled $1$.

TASEP is a particular case of the asymmetric simple exclusion process (ASEP) introduced by Spitzer in [Spitzer]. Particles in this model jump to the right with rate $p$ and to the left with rate $q$ such that $p+q=1$, following the exclusion rule. In the case $q=0$ we recover TASEP. When $q>0$ the model becomes significantly more complicated compared to TASEP; for example, Schütz’s formula described in Section 2 below can no longer be written as a determinant, which prevents the analysis below from going through in the general case. ASEP is important because of its weakly asymmetric limit: diffusively rescaling the growth process introduced below while at the same time sending the asymmetry $p-q$ to zero yields the KPZ equation [berGiaco].

1.1 The growth process.

Of special interest in non-equilibrium physics is the growth process associated to TASEP. More precisely, let

denote the label of the rightmost particle which sits to the left of, or at, $x$ at time $t$. The TASEP height function associated to $X_t$ is given for $x \in \mathbb{Z}$ by

(2)

which fixes the value of the height function at the origin. The height function is a random walk path, increasing by one across each site occupied by a particle at time $t$ and decreasing by one across each empty site. We can also easily extend the height function to a continuous function of $x \in \mathbb{R}$ by linearly interpolating between the integer points.
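Both the particle dynamics and the height function are straightforward to simulate. The following Python sketch is an illustration only (a finite segment with a closed right boundary, and ad hoc function names); it runs the continuous-time dynamics and builds the height function with increments $+1$ across occupied sites and $-1$ across empty ones.

```python
import numpy as np

def simulate_tasep(eta, t_max, rng):
    """Continuous-time TASEP on a finite segment (no jumps past the right end).

    eta : 0/1 array, eta[x] = 1 if site x is occupied.
    Every particle whose right neighbour is empty jumps there at rate 1.
    """
    eta = eta.copy()
    t = 0.0
    while True:
        movable = np.where((eta[:-1] == 1) & (eta[1:] == 0))[0]   # allowed jumps
        if len(movable) == 0:
            return eta                                            # frozen configuration
        t += rng.exponential(1.0 / len(movable))                  # time of the next jump
        if t > t_max:
            return eta
        x = rng.choice(movable)               # the jump is uniform among the allowed ones
        eta[x], eta[x + 1] = 0, 1

def height(eta, h0=0):
    """Height function: +1 across occupied sites, -1 across empty ones."""
    return h0 + np.cumsum(2 * eta - 1)

rng = np.random.default_rng(0)
eta0 = (np.arange(50) < 25).astype(int)       # step-type data: left half occupied
eta = simulate_tasep(eta0, t_max=10.0, rng=rng)
print(height(eta0)[:10])
print(height(eta)[:10])
```

Each individual jump in the simulation flips exactly one local maximum of the height function into a local minimum, which is the content of Exercise 3 below.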

Exercise 3

Show that the dynamics of the height function is that local maxima become local minima at rate $1$; i.e. if the height function has a local maximum at $x$, then at rate $1$ it decreases there by $2$, the rest of the height function remaining unchanged (see Figure 1). What happens if we consider ASEP?

Figure 1: Evolution of TASEP and its height function.

Two standard examples of initial data for TASEP are the step initial data (in which the particles initially occupy exactly the negative integer sites) and the $d$-periodic initial data (in which they occupy every $d$-th site, $d \geq 2$). The analysis of TASEP with either of these initial conditions is much easier than in the general case. In particular, the results presented in Sections 5 and 6 below were known from [borFerPrahSasam, bfp, ferrariMatr] and served as a starting point for our work.

2 Distribution function of TASEP

If there are finitely many particles, we can alternatively denote their positions by
\[
X_t = (X_t(1), \dots, X_t(N)) \in \Omega_N := \{x = (x_1,\dots,x_N) \in \mathbb{Z}^N : x_1 > x_2 > \cdots > x_N\},
\]
where $\Omega_N$ is called the Weyl chamber. The transition probabilities for TASEP with a finite number of particles were first obtained in [MR1468391] using the (coordinate) Bethe ansatz.

Proposition 4 (Schütz’s formula)

The transition probability for $N$ TASEP particles has the determinantal form

(5)

with , and

(6)

where the integration contour is any simple loop, oriented anticlockwise, which encloses $0$ and $1$.
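For orientation, one standard way of writing (5)–(6), following the conventions of the article on which these notes are based (the contour there is denoted $\Gamma_{0,1}$), is
\[
P\bigl(X_t = x \,\big|\, X_0 = y\bigr) \;=\; \det\Bigl[F_{i-j}\bigl(x_{N+1-i} - y_{N+1-j},\, t\bigr)\Bigr]_{1 \le i,j \le N},
\qquad
F_n(x,t) \;=\; \frac{(-1)^n}{2\pi\mathrm{i}} \oint_{\Gamma_{0,1}} \mathrm{d}w\, \frac{(1-w)^{-n}}{w^{x-n+1}}\, e^{t(w-1)},
\]
with $x, y$ in the Weyl chamber and $\Gamma_{0,1}$ a simple anticlockwise loop enclosing $0$ and $1$.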

In the rest of this section we provide a proof of this result using the Bethe ansatz, and in Section 2.2 we show that Schütz’s formula can alternatively be easily checked to satisfy the Kolmogorov forward equation.

2.1 Proof of Schütz’s formula using Bethe ansatz.

In this section we will prove Proposition 4 following the argument of [MR2824604]. We will consider particles in TASEP and derive the master (Kolmogorov forward) equation for the process , where is the Weyl chamber defined above. For a function we introduce the operator

where and is the discrete derivative

(7)

acting on the -th argument of . One can see that this is the infinitesimal generator of TASEP in the variables . Thus, if

is the transition probability of particles of TASEP from to , then the master equation (=Kolmogorov forward equation) is

(8)

The idea of [Bethe] was to rewrite (8) as a differential equation with constant coefficients and boundary conditions, i.e. if solves

(9)

with the boundary conditions

(10)

then for one has

(11)
Exercise 12

Prove this by induction on .

The strategy is now to find a general solution to the master equation (9) and then a particular one which satisfies the boundary and initial conditions. The method is known as the (coordinate) Bethe ansatz.

Solution to the master equation.

For a fixed , we are going to find a solution to the equation (9). For this we will consider indistinguishable particles, so that the state space of the system is given by

where is the symmetric group and . With this in mind we define the generating function

where , and . Since we would like the identity (11) to hold, it is reasonable to assume that , which guarantees local absolute convergence of the sum above and justifies all of the following computations. Then (9) yields

where for . From the last identity we conclude that

for a function which is independent of , but can depend on . Then Cauchy’s integral theorem gives a solution to the master equation

(13)

where and is a contour in around the origin. Our next goal is to find and such that this solution satisfies the initial and boundary conditions for (9).

Satisfying the boundary conditions.

We are going to find functions and a contour such that the solution (13) satisfies the boundary conditions (10). We will look for a solution in a more general form than (13). More precisely, we will consider functions depending on , which gives us the Bethe ansatz solution

(14)

In the case , the boundary condition (10) yields

In particular, this identity holds if for we have

for all . Let be the transposition , i.e. it interchanges the elements and . Then the last identity holds if we have

In particular, one can see that the following functions satisfy this identity

(15)

for any function . Thus we need to find a specific function so that the initial condition in (9) is satisfied.

Satisfying the initial condition.

Since the equation (9) preserves the Weyl chamber, it is sufficient to check the initial condition only for . Combining (14) with (15), the initial condition at is given by

(16)

If is the identity permutation and then obviously

For this to hold we need to choose the function in (15) to be

Thus, a candidate for the solution is given by

which can be written as Schütz’s formula (5). It is obvious that the contour should go around $0$ and $1$, since otherwise the determinant in (5) would vanish when the initial and final positions are far enough apart.

In order to complete the proof we still need to show that this solution satisfies the initial condition. To this end we notice that for we have

which in particular implies that for and , and . In the case , we have for all , and since . This yields and the determinant in (5) vanishes, because the matrix contains a row of zeros. If , then we have for all , and all entries of the first column of the matrix in (5) vanish, except the first entry, which equals . Repeating this argument for , and so on, we obtain that the matrix is upper-triangular with delta functions on the diagonal, which gives the claim.
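The formula can also be checked numerically for small systems. The sketch below is illustrative only; it assumes the standard form of $F_n$ recalled after Proposition 4 and evaluates the contour integral on the circle $|w|=2$. It verifies that at $t=0$ the determinant reduces to the indicator of $x=y$, and that for $t>0$ the resulting probabilities over a window of ordered configurations sum to approximately $1$.

```python
import itertools
import numpy as np

def F(n, x, t, radius=2.0, M=4000):
    """F_n(x,t) via numerical contour integration over the circle |w| = radius (> 1)."""
    theta = 2 * np.pi * np.arange(M) / M
    w = radius * np.exp(1j * theta)
    integrand = (1 - w) ** (-n) / w ** (x - n + 1) * np.exp(t * (w - 1))
    # (1/(2*pi*i)) \oint f(w) dw equals the average of f(w) * w over the circle
    return ((-1) ** n * np.mean(integrand * w)).real

def schuetz(x, y, t):
    """det[ F_{i-j}(x_{N+1-i} - y_{N+1-j}, t) ] for tuples x, y ordered from the right."""
    N = len(x)
    A = [[F(i - j, x[N - 1 - i] - y[N - 1 - j], t) for j in range(N)] for i in range(N)]
    return float(np.linalg.det(np.array(A)))

y = (2, 0, -1)                        # initial positions, rightmost particle first
# t = 0: the determinant should equal 1 at x = y and 0 otherwise
for x in [(2, 0, -1), (3, 0, -1), (3, 2, 0)]:
    print(x, round(schuetz(x, y, 0.0), 6))
# small t > 0: the probabilities over a window of ordered configurations sum to ~1
t = 0.5
configs = itertools.combinations(range(7, -2, -1), 3)   # all x with 7 >= x1 > x2 > x3 >= -1
print("total mass:", round(sum(schuetz(x, y, t) for x in configs), 4))
```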

Remark 17

Similar computations lead to the distribution function of ASEP [MR2824604]. Unfortunately, this distribution function does not have a determinantal form like (5), which makes its analysis significantly more complicated.

2.2 Direct check of Schütz’s formula.

We will show that the determinant in (5) satisfies the master equation (9) with the boundary conditions (10), providing an alternative to the proof given in Section 2.1. To this end we will use only the following properties of the functions , which are easily proved:

(18)

where has been defined in (7) and . Furthermore, it will be convenient to define the vectors

(19)

Then, denoting by the right-hand side of (5), we can write

where the operators in the first and second sums are applied only to the -th column, and where we made use of the first identity in (19) and multi-linearity of determinants. Here, is as before the operator acting on .

Now, we will check the boundary conditions (10). If , then using again multi-linearity of determinants and the second identity in (19) we obtain

The latter determinant vanishes, because the matrix has two equal columns. A proof of the initial condition was provided at the end of the previous section.

3 Determinantal point processes

In this section we provide some results on determinantal point processes, which can be found e.g. in [Bor11, borodinRains, johansson]. These processes were first studied in [Macchi1975] as ‘fermion’ processes, and the name ‘determinantal’ was introduced in [borOlsh00].

Definition 20

Let be a discrete space and let be a measure on . A determinantal point process on the space with correlation kernel is a signed¹ measure on (the power set of ), integrating to $1$ and such that for any points one has the identity

(21)

where the sum runs over finite subsets of .

¹ In our analysis of TASEP we will use only the counting measure, which assigns a unit mass to each element of . However, a determinantal point process can be defined in full generality on a locally compact Polish space with a Radon measure (see [BHKPV]). Moreover, in contrast to the usual definition, we allow the measure to be signed rather than a probability measure. This fact will be crucial in Section 4.1 below, and we note that none of the properties of determinantal point processes which we will use requires the measure to be positive.

The determinants on the right-hand side are called $n$-point correlation functions or joint intensities and are denoted by

(22)

One can easily see that these functions have the following properties: they are symmetric under permutations of their arguments, and they vanish whenever two of the arguments coincide.
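Concretely, in the usual notation (with the counting measure as reference measure, correlation kernel $K$ and correlation functions $\rho_n$), the first two correlation functions read
\[
\rho_1(x) = K(x,x), \qquad
\rho_2(x,y) = \det\begin{pmatrix} K(x,x) & K(x,y) \\ K(y,x) & K(y,y) \end{pmatrix}
= K(x,x)K(y,y) - K(x,y)K(y,x),
\]
and $\rho_n(x_1,\dots,x_n)$ has the interpretation of the ‘probability’ (a genuine probability when the measure is positive) that the random configuration contains all of the points $x_1,\dots,x_n$.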

Exercise 23

In the case that is a positive measure, show that if the correlation kernel is the kernel of the orthogonal projection onto a subspace of dimension $n$, then the number of points in the process is almost surely equal to $n$.

Usually, it is non-trivial to show that a process is determinantal. Below we provide several examples of determinantal point processes (in these examples the underlying measures are genuine probability measures, not signed).

Example 24 (Non-intersecting random walks)

Let , , be independent time-homogeneous Markov chains on with one-step transition probabilities satisfying the identity (i.e. at each time step every random walk makes one unit step either to the left or to the right). Assume furthermore that the chains are reversible with respect to a probability measure on , i.e. for all and . Then, conditioned on the event that the values of the random walks at times and are fixed, i.e. for all where each is even, and that no two of them intersect on the time interval , the configuration of mid-positions is a determinantal point process on with respect to the measure , i.e.

(25)

where the probability is conditioned on the event described above (assuming of course that this event has non-zero probability). Here, the correlation kernel is given by

(26)

where the functions and are defined by

with the matrix having the entries . Invertibility of the matrix follows from the fact that the probability of the conditioning event is non-zero, together with the Karlin-McGregor formula (see Exercise 28 below). This result is a particular case of a more general result of [johansson], and it can be obtained from the Karlin-McGregor formula similarly to [BHKPV, Cor. 4.3.3].

Exercise 27

Prove that the mid-positions of the random walks defined in the previous example form a determinantal process with the correlation kernel (26).

Exercise 28 (Karlin-McGregor formula [karlinMcGregor])

Let , , be i.i.d. (time-inhomogeneous) Markov chains on with transition probabilities satisfying for all and . Let us fix initial states for such that each is even. Then the probability that at time the Markov chains are at the states , and that no two of the chains intersect up to time , equals .

Hint (this idea is due to S.R.S. Varadhan): for a permutation and , define the process

(29)

which is a martingale with respect to the filtration generated by the Markov chains . This implies that the process is also a martingale. Obtain the Karlin-McGregor formula by applying the optional stopping theorem to for a suitable stopping time.
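The Karlin-McGregor formula is easy to test numerically. The sketch below is an illustration with simple random walks on $\mathbb{Z}$ started from even sites (all function names are ad hoc); it compares the determinant $\det[P_t(x_i,y_j)]$ of single-walk transition probabilities with a Monte Carlo estimate of the probability of reaching the prescribed endpoints without any two walks ever meeting.

```python
import math
import numpy as np

def srw_transition(t, x, y):
    """P(simple random walk is at y at time t | it is at x at time 0)."""
    k = t + y - x
    if k % 2 or not 0 <= k // 2 <= t:
        return 0.0
    return math.comb(t, k // 2) / 2 ** t

def karlin_mcgregor(t, xs, ys):
    """det[P_t(x_i, y_j)]: endpoints reached with no two walks ever meeting."""
    A = [[srw_transition(t, x, y) for y in ys] for x in xs]
    return float(np.linalg.det(np.array(A)))

def monte_carlo(t, xs, ys, n_samples, rng):
    steps = rng.choice([-1, 1], size=(n_samples, len(xs), t))
    paths = np.cumsum(steps, axis=2) + np.array(xs)[None, :, None]
    start = np.broadcast_to(np.array(xs)[None, :, None], (n_samples, len(xs), 1))
    paths = np.concatenate([start, paths], axis=2)
    # with +/-1 steps from even starting points, 'never meeting' is the same as
    # the walks staying strictly ordered at every time
    ordered = np.all(np.diff(paths, axis=1) > 0, axis=(1, 2))
    at_target = np.all(paths[:, :, -1] == np.array(ys)[None, :], axis=1)
    return float(np.mean(ordered & at_target))

rng = np.random.default_rng(1)
t, xs, ys = 6, (0, 2, 4), (0, 2, 4)
print("determinant :", karlin_mcgregor(t, xs, ys))
print("monte carlo :", monte_carlo(t, xs, ys, 200_000, rng))
```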

Example 30 (Gaussian unitary ensemble)

The most famous example of a determinantal point process is the Gaussian unitary ensemble (GUE) introduced by Wigner. Let us define the matrix to have i.i.d. standard complex Gaussian entries and let . Then the eigenvalues of form a determinantal point process on with the correlation kernel

with respect to the Gaussian measure , where are Hermite polynomials which are orthonormal on . A proof of this result can be found in [mehta, Ch. 3].
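As a numerical illustration of this example (a sketch only; the normalisation below, $M = (A+A^*)/2$ with standard complex Gaussian entries, corresponds to the weight $e^{-x^2}$, and the one-point function is then $e^{-x^2}\sum_{k=0}^{n-1} h_k(x)^2$ with $h_k$ the orthonormal Hermite polynomials), one can compare a histogram of sampled eigenvalues with this density:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def sample_gue_eigenvalues(n, n_samples, rng):
    """Eigenvalues of M = (A + A^*)/2 with i.i.d. standard complex Gaussian entries of A."""
    out = []
    for _ in range(n_samples):
        A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        M = (A + A.conj().T) / 2
        out.append(np.linalg.eigvalsh(M))
    return np.concatenate(out)

def rho1(x, n):
    """One-point function e^{-x^2} * sum_{k<n} h_k(x)^2, h_k orthonormal for e^{-x^2} dx."""
    total = np.zeros_like(x, dtype=float)
    for k in range(n):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0
        hk = hermval(x, coeffs) / np.sqrt(2.0 ** k * math.factorial(k) * np.sqrt(np.pi))
        total += hk ** 2
    return total * np.exp(-x ** 2)

rng = np.random.default_rng(2)
n, n_samples = 4, 20_000
eigs = sample_gue_eigenvalues(n, n_samples, rng)
hist, edges = np.histogram(eigs, bins=80, range=(-4, 4), density=True)
centres = (edges[:-1] + edges[1:]) / 2
grid = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print("empirical  :", np.round(n * np.interp(grid, centres, hist), 3))  # eigenvalue density
print("exact rho1 :", np.round(rho1(grid, n), 3))
```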

Example 31 (Aztec diamond tilings)

The Aztec diamond is a diamond-shaped union of lattice squares (see Figure 2). Let us now color some of the squares gray following the pattern of a chess board, in such a way that all of the bottom-left squares are colored. It is easy to see that the Aztec diamond can be perfectly covered by dominos, i.e. $1 \times 2$ or $2 \times 1$ rectangles, and that the number of such tilings grows exponentially in the width of the diamond. Let us draw a tiling uniformly at random from all possible tilings and mark the gray left squares of horizontal dominos and the gray bottom squares of vertical dominos. This random set is a determinantal point process on the lattice  [MR2118857].

Figure 2: Aztec diamond tiling.

3.1 Probability of an empty region.

A useful property of determinantal point processes is that the ‘probability’ (recall that the measure in Definition 20 is signed) of having an empty region is given by a Fredholm determinant.

Lemma 32

Let be a determinantal point process on a discrete set with a measure and with a correlation kernel . Then for a Borel set one has

where the latter is the Fredholm determinant defined by

(33)
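In the discrete setting with the counting measure, the Fredholm determinant appearing in (33) is given by the standard series (the notation $\det(I-K)_{\ell^2(B)}$ is the usual one; for a general reference measure the sums become integrals):
\[
\det\bigl(I - K\bigr)_{\ell^2(B)} \;=\; \sum_{n \ge 0} \frac{(-1)^n}{n!} \sum_{x_1,\dots,x_n \in B} \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^n,
\]
where the $n=0$ term is equal to $1$.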
Proof.

Using Definition 20 and the correlation functions (22) we can write

which is exactly our claim. Note that the condition can be omitted, because vanishes on the diagonals.

Exercise 34

Prove that if is finite and is the counting measure, then the Fredholm determinant (33) coincides with the usual determinant.

3.2 $L$-ensembles of signed measures.

A more restrictive definition of a determinantal process was introduced in [borodinRains]. In order to simplify our notation, in this section we take the measure to be the counting measure and we omit it from the notation below.

With the notation of Definition 20, let us be given a function . For any finite subset we define the symmetric minor . Then one can define a (signed) measure on , called the $L$-ensemble, by

(35)

for , if the Fredholm determinant is non-zero (recall the definition (33)).

Exercise 36

Check that the measure defined in (35) integrates to $1$.

The requirement guarantees that there exists a unique function such that , where is the convolution on and is the identity kernel, which is non-vanishing only on the diagonal. Furthermore, it was proved in [Macchi1975] that the $L$-ensemble is a determinantal point process:

Proposition 37

The measure defined in (35) is a determinantal point process with correlation kernel .
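The content of Proposition 37 (and of Lemma 32) is easy to verify by brute force on a small ground set. The sketch below assumes the standard relation $K = L(1+L)^{-1}$ between the two kernels (the relation indicated above in terms of the convolution); the matrix $L$ is chosen symmetric positive semi-definite only so that the resulting measure is a genuine probability measure.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(3)
n = 5
ground = list(range(n))

B = rng.standard_normal((n, n))
L = B @ B.T                                   # symmetric PSD, so the measure is positive
K = L @ np.linalg.inv(np.eye(n) + L)          # assumed relation K = L(1+L)^{-1}

def minor(M, idx):
    idx = list(idx)
    return M[np.ix_(idx, idx)]

def det(M):
    return 1.0 if M.size == 0 else float(np.linalg.det(M))

Z = float(np.linalg.det(np.eye(n) + L))
prob = {Y: det(minor(L, Y)) / Z for r in range(n + 1) for Y in combinations(ground, r)}
print("total mass:", round(sum(prob.values()), 10))            # should equal 1

# correlation functions (Proposition 37): sum over Y containing A of P(Y) = det K_A
for A in [(0,), (1, 3), (0, 2, 4)]:
    lhs = sum(p for Y, p in prob.items() if set(A) <= set(Y))
    print(A, round(lhs, 8), round(det(minor(K, A)), 8))

# gap probability (Lemma 32): P(no points in B0) = det(I - K_{B0})
B0 = (1, 2)
lhs = sum(p for Y, p in prob.items() if not set(B0) & set(Y))
rhs = float(np.linalg.det(np.eye(len(B0)) - minor(K, B0)))
print("gap:", round(lhs, 8), round(rhs, 8))
```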

Example 38 (Non-intersecting random walks)

It is not difficult to see that the distribution of the mid-positions of the random walks from Example 24 is the $L$-ensemble with the function

The correlation kernel can be computed from Proposition 37, and it coincides with (26).

Exercise 39

Perform the computations from the previous example.

3.3 Conditional $L$-ensembles.

An $L$-ensemble can be conditioned by fixing certain values of the determinantal process. More precisely, consider a nonempty subset and a given $L$-ensemble on . We define a measure on , called the conditional $L$-ensemble, in the following way:

(40)

for any