Bi-orthogonal Polynomials and the Five parameter Asymmetric Simple Exclusion Process

R. Brak   and W. Moore
School of Mathematics,
The University of Melbourne
Parkville, Victoria 3052,
Australia
rb1@unimelb.edu.au
Abstract

We apply the bi-moment determinant method to compute a representation of the matrix product algebra – a quadratic algebra satisfied by the operators $D$ and $E$ – for the five parameter ($\alpha$, $\beta$, $\gamma$, $\delta$ and $q$) Asymmetric Simple Exclusion Process. This method requires an $LDU$ decomposition of the “bi-moment matrix”. The decomposition defines a new pair of basis vector sets, the ‘boundary basis’. This basis is defined by the action of polynomials $\{P_n\}$ and $\{Q_n\}$ on the quantum oscillator basis (and its dual). These polynomials are orthogonal to themselves (i.e. each satisfies a three term recurrence relation) and are orthogonal to each other (with respect to the same linear functional defining the stationary state). Hence they are termed ‘bi-orthogonal’. With respect to the boundary basis the bi-moment matrix is diagonal and the representation of the operator $D + E$ is tri-diagonal. This tri-diagonal matrix defines another set of orthogonal polynomials very closely related to the Askey-Wilson polynomials (they have the same moments).

Keywords:

Askey-Wilson polynomials, bi-orthogonal polynomials, orthogonal polynomials, totally asymmetric simple exclusion process, $LDU$-decomposition, diffusion algebra, quadratic algebra.

1 Introduction

The ASEP is a continuous time Markov process defined by particles hopping along a line of $N$ sites – see Figure 1. Particles hop onto the line at the left (resp. right) end with rate $\alpha$ (resp. $\delta$), off at the right (resp. left) with rate $\beta$ (resp. $\gamma$), and they hop to neighbouring sites to the left with rate $q$ and to the right with rate one, with the constraint that at most one particle can occupy a site.

Figure 1: Five parameter ASEP hopping model
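For concreteness, the following minimal sketch (ours, not from the paper; the function name asep_step and the 0/1 list encoding of configurations are our own choices) performs one continuous-time (Gillespie) update of the dynamics just described:

```python
import random

def asep_step(config, alpha, beta, gamma, delta, q):
    """One continuous-time (Gillespie) update of the five parameter ASEP.
    `config` is a list of 0/1 site occupations, mutated in place; returns
    the elapsed time.  Our own illustration of the dynamics in Figure 1."""
    n = len(config)
    moves = []                                    # (rate, action) pairs
    if config[0] == 0:
        moves.append((alpha, ("set", 0, 1)))      # enter on the left, rate alpha
    else:
        moves.append((gamma, ("set", 0, 0)))      # exit on the left, rate gamma
    if config[-1] == 1:
        moves.append((beta, ("set", n - 1, 0)))   # exit on the right, rate beta
    else:
        moves.append((delta, ("set", n - 1, 1)))  # enter on the right, rate delta
    for i in range(n - 1):
        if config[i] == 1 and config[i + 1] == 0:
            moves.append((1.0, ("swap", i)))      # hop to the right, rate 1
        elif config[i] == 0 and config[i + 1] == 1:
            moves.append((q, ("swap", i)))        # hop to the left, rate q
    total = sum(r for r, _ in moves)
    dt = random.expovariate(total)                # exponential waiting time
    u = random.uniform(0.0, total)
    for rate, action in moves:                    # pick a move proportional to its rate
        u -= rate
        if u <= 0:
            if action[0] == "swap":
                i = action[1]
                config[i], config[i + 1] = config[i + 1], config[i]
            else:
                _, i, val = action
                config[i] = val
            break
    return dt
```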

The matrix product Ansatz [1] expresses the stationary distribution of a given state as an inner product of a certain product of matrices $D$ and $E$ which satisfy the relation

$D E - q E D = D + E$

and requires two vectors $\langle W|$ and $|V\rangle$ which satisfy

$\langle W| \left( \alpha E - \gamma D \right) = \langle W|$ (1.1a)
$\left( \beta D - \delta E \right) |V\rangle = |V\rangle$ (1.1b)

We will call $\langle W|$ and $|V\rangle$ the boundary vectors. The vectors $\langle W|$ and $|V\rangle$ are used to define a linear functional $\mathbb{L}$ which maps any (non-commutative) polynomial in the matrices $D$ and $E$ to the ring of (commuting) polynomials in the model parameters, via

$\mathbb{L}[X] = \langle W| X |V\rangle.$ (1.2)

The representations of the matrices $D$ and $E$ fall into three natural cases:

Two parameter: $\alpha, \beta$ (with $\gamma = \delta = q = 0$)
Three parameter: $\alpha, \beta, q$ (with $\gamma = \delta = 0$)
Five parameter: $\alpha, \beta, \gamma, \delta, q$

The two parameter case is (algebraically) simple. The three parameter case has been studied in [2]. In this paper we apply the method introduced in [2] to the five parameter case. This generalisation is not a simple extension of the three parameter case – several new difficulties appear. The more important ones are discussed in the Concluding Remarks section.

For the five parameter case, new parameters $a$, $b$, $c$ and $d$ are defined from the five hopping parameters, leading to the following change in the set of parameters, $\{\alpha, \beta, \gamma, \delta, q\} \to \{a, b, c, d, q\}$:

$a = \kappa^{+}(\alpha, \gamma)$ (1.3a)
$b = \kappa^{+}(\beta, \delta)$ (1.3b)
$c = \kappa^{-}(\alpha, \gamma)$ (1.3c)
$d = \kappa^{-}(\beta, \delta)$ (1.3d)

where
$\kappa^{\pm}(x, y) = \dfrac{1 - q - x + y \pm \sqrt{(1 - q - x + y)^2 + 4xy}}{2x}.$

This change is motivated by the parameters that occur in the Askey-Wilson polynomials [3] discussed further below.

Rather than using $D$ and $E$, the algebra is simplified by working with the standard shifted variables,

$D = \lambda \left( 1 + \mathbf{d} \right)$ (1.4a)
$E = \lambda \left( 1 + \mathbf{e} \right)$ (1.4b)

where $\lambda = 1/(1-q)$. In these variables the commutation relations of $\mathbf{d}$ and $\mathbf{e}$ become [4],

Two parameter: $\mathbf{d}\mathbf{e} = 1$ (1.5a)
Three and Five parameter: $\mathbf{d}\mathbf{e} - q\,\mathbf{e}\mathbf{d} = 1 - q$ (1.5b)

and (1.1) can be written in the form,

$\langle W| \left( \alpha \mathbf{e} - \gamma \mathbf{d} \right) = (1 - q - \alpha + \gamma) \langle W|$ (1.6a)
$\left( \beta \mathbf{d} - \delta \mathbf{e} \right) |V\rangle = (1 - q - \beta + \delta) |V\rangle$ (1.6b)
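As a consistency check (this computation is ours, using the relations as written above), substituting (1.4) into $DE - qED = D + E$ recovers (1.5b):

$$DE - qED = \lambda^{2}\big[(1+\mathbf{d})(1+\mathbf{e}) - q(1+\mathbf{e})(1+\mathbf{d})\big] = \lambda^{2}\big[(1-q)(1+\mathbf{d}+\mathbf{e}) + \mathbf{d}\mathbf{e} - q\,\mathbf{e}\mathbf{d}\big],$$

while $D + E = \lambda(2+\mathbf{d}+\mathbf{e}) = \lambda^{2}(1-q)(2+\mathbf{d}+\mathbf{e})$ since $\lambda = 1/(1-q)$; comparing the two expressions gives $\mathbf{d}\mathbf{e} - q\,\mathbf{e}\mathbf{d} = 1 - q$.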

Each matrix representation of $D$ and $E$ is associated with a basis for the vector space upon which the matrices act. The standard quantum oscillator basis is the set $\{|n\rangle\}_{n \ge 0}$. If the linear operators $\mathbf{d}$ and $\mathbf{e}$ in the respective cases are defined by their action on the basis vectors by:

Two parameter: $\mathbf{d}|n\rangle = |n-1\rangle$, $\mathbf{e}|n\rangle = |n+1\rangle$
Three and Five parameter: $\mathbf{d}|n\rangle = (1 - q^{n})|n-1\rangle$, $\mathbf{e}|n\rangle = |n+1\rangle$

(with $\mathbf{d}|0\rangle = 0$), then it is simple to show that $\mathbf{d}\mathbf{e} = 1$ (two parameter) and $\mathbf{d}\mathbf{e} - q\,\mathbf{e}\mathbf{d} = 1 - q$ (three and five parameter respectively). Thus the basis $\{|n\rangle\}_{n \ge 0}$, in conjunction with the action of $\mathbf{d}$ and $\mathbf{e}$ above, gives the standard, [5], matrix representation for $\mathbf{d}$ and $\mathbf{e}$ which satisfy the appropriate commutation relations. In this representation the matrix of $\mathbf{d} + \mathbf{e}$ is tri-diagonal and for the three and five parameter cases gives a three term recurrence related to $q$-Hermite polynomials [6].
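A quick illustration (ours): truncating the oscillator basis to $N$ vectors, the matrices below satisfy the three and five parameter relation on all basis vectors except the last, where the cut-off interferes:

```python
import numpy as np

def oscillator_rep(N, q):
    """Truncated matrices for d and e in the oscillator basis |0>,...,|N-1>:
    e raises, d lowers with coefficient (1 - q^n).  Our own illustration; the
    relation d e - q e d = (1 - q) 1 can only hold away from the truncation."""
    d = np.zeros((N, N))
    e = np.zeros((N, N))
    for n in range(N - 1):
        e[n + 1, n] = 1.0                 # e|n> = |n+1>
        d[n, n + 1] = 1.0 - q**(n + 1)    # d|n+1> = (1 - q^{n+1})|n>
    return d, e

q = 0.6
d, e = oscillator_rep(8, q)
R = d @ e - q * e @ d
assert np.allclose(R[:7, :7], (1 - q) * np.eye(7))   # fails only at the cut-off corner
```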

To find the vector $|V\rangle$ (resp. $\langle W|$) there are (at least) two approaches. The first is to express $|V\rangle$ (resp. $\langle W|$) as a linear combination of the standard basis vectors, i.e. $|V\rangle = \sum_{n \ge 0} c_n |n\rangle$ (resp. $\langle W| = \sum_{n \ge 0} \bar{c}_n \langle n|$), and then compute the coefficients $c_n$ (resp. $\bar{c}_n$). For example, in the three parameter case this leads to

$|V\rangle = \sum_{n \ge 0} \frac{b^{n}}{(q; q)_n} |n\rangle.$ (1.7)

The second approach (used in this paper) is to find a new pair of bases $\{\langle x_n|\}_{n \ge 0}$ and $\{|y_n\rangle\}_{n \ge 0}$ such that $\langle x_0| = \langle W|$ and $|y_0\rangle = |V\rangle$. This pair of bases is constructed such that the “bimoment matrix”, $B$, is diagonal. The matrix elements of $B$ are defined by a linear functional (see Definition 1) via

$B_{i,j} = \mathbb{L}[\mathbf{d}^{i} \mathbf{e}^{j}].$ (1.8)

We will call $\{\langle x_n|\}$ and $\{|y_n\rangle\}$ the boundary basis. This method of finding a basis (and hence representation) reduces to computing determinants and ultimately to finding an $LDU$ decomposition.

Representations of the $D$ and $E$ matrices for the five parameter model can be found in [6] (and references therein), with one of those representations reproduced in (1.17a). If the matrices associated with a given representation have sufficiently simple structure (e.g. bi- or tri-diagonal) then they can be usefully interpreted as transfer matrices for lattice path models [7]. This leads to combinatorial methods for computing the inner product (or linear functional, $\mathbb{L}$).

The primary objective of this paper is to find the change of basis associated with the five parameter model representation obtained by Uchiyama et al. [6], in which the tri-diagonal matrix gives a three term recurrence related to the Askey-Wilson polynomials. As will be shown, this change of basis is effected by the action of sequences of polynomials (with matrix argument) acting on the boundary vectors.

The Askey-Wilson polynomials play a prominent role in the representation of the $D$ and $E$ matrices. The polynomials also motivate the choice of the parameters $a$, $b$, $c$ and $d$ (rather than the hopping rates) defined above, and several other choices defined below. We thus briefly discuss the Askey-Wilson polynomials.

The Askey-Wilson polynomials [8, 9, 10] are ‘$q$-orthogonal’ polynomials with four parameters, $a$, $b$, $c$ and $d$ (and base $q$). They are at the top of the Askey scheme of $q$-orthogonal, one variable polynomials. The basic hypergeometric function ${}_4\phi_3$ gives a compact expression for the Askey-Wilson polynomial, $p_n(x) = p_n(x; a, b, c, d \,|\, q)$, $n \ge 0$, which is given by

$p_n(x; a, b, c, d \,|\, q) = a^{-n} (ab, ac, ad; q)_n \; {}_4\phi_3\!\left(\begin{matrix} q^{-n},\; abcdq^{n-1},\; ae^{i\theta},\; ae^{-i\theta} \\ ab,\; ac,\; ad \end{matrix};\; q, q\right)$ (1.9)

with $x = \cos\theta$, and the basic hypergeometric function is

${}_4\phi_3\!\left(\begin{matrix} a_1, a_2, a_3, a_4 \\ b_1, b_2, b_3 \end{matrix};\; q, z\right) = \sum_{k \ge 0} \frac{(a_1, a_2, a_3, a_4; q)_k}{(q, b_1, b_2, b_3; q)_k}\, z^{k}$ (1.10)

where the $q$-shifted factorial is

$(a; q)_n = \prod_{k=0}^{n-1} \left(1 - a q^{k}\right), \qquad (a_1, a_2, \dots, a_m; q)_n = (a_1; q)_n (a_2; q)_n \cdots (a_m; q)_n.$ (1.11)
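These definitions are straightforward to evaluate numerically. The following sketch (ours; the function names are our own) computes $p_n(x; a, b, c, d \,|\, q)$ from the terminating series (1.9)–(1.11):

```python
import cmath
import math

def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n = prod_{k=0}^{n-1} (1 - a q^k), eq. (1.11)."""
    out = 1.0
    for k in range(n):
        out = out * (1 - a * q**k)
    return out

def askey_wilson(n, x, a, b, c, d, q):
    """Evaluate p_n(x; a, b, c, d | q) from the terminating 4phi3 series
    (1.9)-(1.10).  Our own numerical sketch; x = cos(theta), |x| <= 1."""
    theta = math.acos(x)
    num = [q**(-n), a * b * c * d * q**(n - 1),
           a * cmath.exp(1j * theta), a * cmath.exp(-1j * theta)]
    den = [a * b, a * c, a * d]
    s = 0.0
    for k in range(n + 1):                  # the series terminates at k = n
        term = q**k / qpoch(q, q, k)
        for t in num:
            term *= qpoch(t, q, k)
        for t in den:
            term /= qpoch(t, q, k)
        s += term
    prefactor = qpoch(a * b, q, n) * qpoch(a * c, q, n) * qpoch(a * d, q, n) / a**n
    return (prefactor * s).real

# Example: askey_wilson(2, 0.3, 0.5, 0.4, -0.3, 0.2, 0.6)
```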

The Askey-Wilson polynomial satisfies a three-term recurrence relation

$2x\, \tilde{p}_n(x) = A_n\, \tilde{p}_{n+1}(x) + B_n\, \tilde{p}_n(x) + C_n\, \tilde{p}_{n-1}(x)$ (1.12)

with $\tilde{p}_{-1}(x) = 0$, $\tilde{p}_0(x) = 1$, and

$\tilde{p}_n(x) = \dfrac{a^{n}\, p_n(x; a, b, c, d \,|\, q)}{(ab, ac, ad; q)_n}$ (1.13)
$A_n = \dfrac{(1 - abq^{n})(1 - acq^{n})(1 - adq^{n})(1 - abcdq^{n-1})}{a\,(1 - abcdq^{2n-1})(1 - abcdq^{2n})}$ (1.14)
$C_n = \dfrac{a\,(1 - q^{n})(1 - bcq^{n-1})(1 - bdq^{n-1})(1 - cdq^{n-1})}{(1 - abcdq^{2n-2})(1 - abcdq^{2n-1})}$ (1.15)

and

$B_n = a + a^{-1} - A_n - C_n.$ (1.16)
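The recurrence can be checked against the series evaluation (our sketch, continuing the previous block; it requires qpoch and askey_wilson from above, with the coefficients as in (1.13)–(1.16)):

```python
# Continues the previous sketch (requires qpoch and askey_wilson from above).
def A_coef(n, a, b, c, d, q):
    s = a * b * c * d
    return ((1 - a*b*q**n) * (1 - a*c*q**n) * (1 - a*d*q**n) * (1 - s*q**(n - 1))
            / (a * (1 - s*q**(2*n - 1)) * (1 - s*q**(2*n))))

def C_coef(n, a, b, c, d, q):
    s = a * b * c * d
    return (a * (1 - q**n) * (1 - b*c*q**(n - 1)) * (1 - b*d*q**(n - 1))
            * (1 - c*d*q**(n - 1)) / ((1 - s*q**(2*n - 2)) * (1 - s*q**(2*n - 1))))

def p_tilde(n, x, a, b, c, d, q):
    """Normalised polynomial of (1.13): the bare 4phi3 sum."""
    norm = qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
    return a**n * askey_wilson(n, x, a, b, c, d, q) / norm

a, b, c, d, q, x = 0.5, 0.4, -0.3, 0.2, 0.6, 0.35
for n in range(1, 6):
    An, Cn = A_coef(n, a, b, c, d, q), C_coef(n, a, b, c, d, q)
    Bn = a + 1/a - An - Cn                     # eq. (1.16)
    lhs = 2 * x * p_tilde(n, x, a, b, c, d, q)
    rhs = (An * p_tilde(n + 1, x, a, b, c, d, q)
           + Bn * p_tilde(n, x, a, b, c, d, q)
           + Cn * p_tilde(n - 1, x, a, b, c, d, q))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```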

Uchiyama et al. [6] found a representation of $D$ and $E$ related to the Askey-Wilson polynomials. The matrices are tridiagonal and given by

(1.17a)
where
(1.17b)
(1.17c)
(1.17d)
and
(1.17e)
(1.17f)

We have introduced these parameters as they will recur in computations in the rest of this paper.

2 The Linear Functional

In this section we set up the tensor algebra used to represent the ASEP [11]. Let $R$ be the ring of integer coefficient commutative polynomials in the model parameters, and $M$ the $R$-module (or tensor algebra)

$M = \bigoplus_{n \ge 0} V^{\otimes n}$ (2.1)

where $V$ is a free rank two $R$-module with generators $\mathbf{d}$ and $\mathbf{e}$. Here $R$ denotes the ring of the module and $V^{\otimes n} = V \otimes \cdots \otimes V$ ($n$ factors). The homogeneous submodule $V^{\otimes n}$, of degree $n$, is generated by the standard monomial basis elements $X_1 \otimes X_2 \otimes \cdots \otimes X_n$ where $X_i \in \{\mathbf{d}, \mathbf{e}\}$. For brevity we will frequently omit the tensor product symbol; thus $\mathbf{d}\mathbf{e}$ denotes $\mathbf{d} \otimes \mathbf{e}$, etc.

We use the five parameter version of the original matrix Ansatz algebra equations of Derrida et al. [1] as modified in [12]. The latter form allows for arbitrary monomial pre- and post-factors ($w_1$ and $w_2$ in the equations below). The original algebra (in [1]) was stated in terms of matrices and vectors. Here we give a slightly more abstract version by using a linear functional in terms of $\mathbf{d}$ and $\mathbf{e}$.

Definition 1.

Let $w_1, w_2$ be any monomial basis elements of $M$. The $R$-module homomorphism $\mathbb{L} : M \to R$ is defined by the following equations:

$\mathbb{L}[w_1 (\mathbf{d}\mathbf{e} - q\,\mathbf{e}\mathbf{d}) w_2] = (1 - q)\, \mathbb{L}[w_1 w_2]$ (2.2a)
$\mathbb{L}[w_1 (\mathbf{d} + bd\, \mathbf{e})] = (b + d)\, \mathbb{L}[w_1]$ (2.2b)
$\mathbb{L}[(\mathbf{e} + ac\, \mathbf{d})\, w_1] = (a + c)\, \mathbb{L}[w_1]$ (2.2c)

where $a$, $b$, $c$ and $d$ are defined in (1.3), $\mathbb{L}[1] = 1$, and $\mathbb{L}$ is extended linearly to other elements of $M$.
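As a small worked example (our own computation, using the relations as written above), setting $w_1 = 1$ in (2.2b) and (2.2c) already determines $\mathbb{L}$ on the degree one monomials:

$$\mathbb{L}[\mathbf{e}] + ac\,\mathbb{L}[\mathbf{d}] = a + c, \qquad \mathbb{L}[\mathbf{d}] + bd\,\mathbb{L}[\mathbf{e}] = b + d,$$

and solving this linear system gives

$$\mathbb{L}[\mathbf{d}] = \frac{b + d - bd(a + c)}{1 - abcd}, \qquad \mathbb{L}[\mathbf{e}] = \frac{a + c - ac(b + d)}{1 - abcd},$$

which requires $abcd \ne 1$. The same elimination pattern, applied to longer monomials, is what drives the recursions of Theorem 2 below.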

The matrix product Ansatz of [1] for the stationary state can now be (trivially) restated using the linear functional $\mathbb{L}$.

Theorem 1 (Derrida, Evans, Hakim and Pasquier [1]).

The stationary state probability distribution, $P_N(\tau)$, of the five parameter ASEP for the system in state $\tau = (\tau_1, \dots, \tau_N)$, is given by

$P_N(\tau) = \frac{1}{Z_N}\, \mathbb{L}\!\left[\prod_{i=1}^{N} \big(\tau_i D + (1 - \tau_i) E\big)\right]$ (2.3)

where

$Z_N = \mathbb{L}\!\left[(D + E)^{N}\right]$ (2.4)

with $D$ and $E$ given by (1.4), and $\tau_i = 1$ if site $i$ is occupied and zero otherwise.
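Theorem 1 can be checked for small systems without any algebra, by solving for the null vector of the Markov generator directly. A brute-force sketch (ours; the name asep_stationary is our own choice):

```python
import itertools
import numpy as np

def asep_stationary(N, alpha, beta, gamma, delta, q):
    """Stationary distribution of the five parameter ASEP on N sites by
    brute force: build the 2^N x 2^N generator and extract its null vector.
    Our own cross-check of the object in Theorem 1; no matrix Ansatz used."""
    states = list(itertools.product([0, 1], repeat=N))
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((2**N, 2**N))
    for s in states:
        i = idx[s]
        def jump(t, rate):
            Q[i, idx[t]] += rate
            Q[i, i] -= rate
        if s[0] == 0:
            jump((1,) + s[1:], alpha)            # particle enters on the left
        else:
            jump((0,) + s[1:], gamma)            # particle exits on the left
        if s[-1] == 1:
            jump(s[:-1] + (0,), beta)            # particle exits on the right
        else:
            jump(s[:-1] + (1,), delta)           # particle enters on the right
        for j in range(N - 1):
            if s[j] == 1 and s[j + 1] == 0:      # hop right, rate 1
                jump(s[:j] + (0, 1) + s[j + 2:], 1.0)
            elif s[j] == 0 and s[j + 1] == 1:    # hop left, rate q
                jump(s[:j] + (1, 0) + s[j + 2:], q)
    w, v = np.linalg.eig(Q.T)                    # stationary pi solves pi Q = 0
    pi = np.real(v[:, np.argmin(np.abs(w))])
    pi /= pi.sum()
    return {s: pi[idx[s]] for s in states}

# Example: pi = asep_stationary(3, 1.0, 0.7, 0.2, 0.1, 0.5)
```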

3 Bi-Orthogonal Pair of Polynomial Sequences

In this section we define a pair of polynomial sequences. These polynomials are then used to construct the boundary basis which leads to a matrix representation of $\mathbf{d}$ and $\mathbf{e}$.

Consider the pair of sequences,

$\{P_n(x)\}_{n \ge 0}, \qquad \{Q_n(x)\}_{n \ge 0}$ (3.1)

of monic polynomials where $P_n$ and $Q_n$ are degree $n$. We wish to determine if it is possible to find such a pair which are orthogonal with respect to $\mathbb{L}$ (as defined in Definition 1), that is, $\mathbb{L}[P_m(\mathbf{d})\, Q_n(\mathbf{e})] = 0$ for $m \ne n$?

In order to show such a pair of sequences does indeed exist we consider the infinite dimensional ‘bimoment matrix’, $B$, whose matrix elements are defined to be

$B_{i,j} = \mathbb{L}[\mathbf{d}^{i} \mathbf{e}^{j}], \qquad i, j \ge 0.$ (3.2)

The bimoment matrix elements satisfy a pair of partial difference equations as given in the following theorem.

Theorem 2.

The bimoment matrix elements, (3.2), satisfy the recursions

$B_{i,j} = q^{i}\big[(a+c)\,B_{i,j-1} - ac\,B_{i+1,j-1}\big] + (1-q^{i})\,B_{i-1,j-1}$ (3.3a)
$B_{i,j} = q^{j}\big[(b+d)\,B_{i-1,j} - bd\,B_{i-1,j+1}\big] + (1-q^{j})\,B_{i-1,j-1}$ (3.3b)

for $i, j \ge 1$, with boundary values $B_{i,0}$, $B_{0,j}$ satisfying

$(1 - abcd\,q^{i-1})\,B_{i,0} = \big[b + d - bd(a+c)\,q^{i-1}\big]\,B_{i-1,0} - bd\,(1-q^{i-1})\,B_{i-2,0}$ (3.4a)
$(1 - abcd\,q^{j-1})\,B_{0,j} = \big[a + c - ac(b+d)\,q^{j-1}\big]\,B_{0,j-1} - ac\,(1-q^{j-1})\,B_{0,j-2}$ (3.4b)

for $i, j \ge 1$ (with $B_{-1,0} = B_{0,-1} = 0$) and $B_{0,0} = 1$.

Note, the bimoment matrix elements satisfy both (3.3a) and (3.3b); however, to generate the matrix elements it is sufficient to use only one of (3.3a) or (3.3b) together with both boundary recurrences. The reason both (3.3a) and (3.3b) are stated is that this makes explicit a symmetry of the matrix $B$ which we will use below.
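The recurrences translate directly into a short program. The following sketch (ours; the name bimoment and all conventions are our own, and it assumes the forms of (3.3a) and (3.4) as stated above) tabulates the bimoment matrix numerically:

```python
import numpy as np

def bimoment(nmax, a, b, c, d, q):
    """Tabulate B[i, j] = L[d^i e^j] for 0 <= i, j <= nmax using the
    boundary recurrences (3.4a)/(3.4b) and the bulk recurrence (3.3a).
    A numerical sketch, not paper code."""
    m = 2 * nmax                     # extra room: (3.3a) consumes B[i+1, j-1]
    B = np.zeros((m + 1, m + 1))
    B[0, 0] = 1.0
    for i in range(1, m + 1):        # boundary column B[i, 0], eq. (3.4a)
        prev2 = B[i - 2, 0] if i >= 2 else 0.0
        B[i, 0] = ((b + d - b * d * (a + c) * q**(i - 1)) * B[i - 1, 0]
                   - b * d * (1 - q**(i - 1)) * prev2) / (1 - a * b * c * d * q**(i - 1))
    for j in range(1, m + 1):        # boundary row B[0, j], eq. (3.4b)
        prev2 = B[0, j - 2] if j >= 2 else 0.0
        B[0, j] = ((a + c - a * c * (b + d) * q**(j - 1)) * B[0, j - 1]
                   - a * c * (1 - q**(j - 1)) * prev2) / (1 - a * b * c * d * q**(j - 1))
    for j in range(1, m + 1):        # interior via (3.3a), column by column
        for i in range(1, m + 1 - j):
            B[i, j] = (q**i * ((a + c) * B[i, j - 1] - a * c * B[i + 1, j - 1])
                       + (1 - q**i) * B[i - 1, j - 1])
    return B[: nmax + 1, : nmax + 1]
```

As a check of the transpose symmetry discussed below, bimoment(n, a, b, c, d, q) agrees numerically with the transpose of bimoment(n, b, a, d, c, q).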

Proof.

The idea of the proof is similar to [2]. However, eliminating an $\mathbf{e}$ (resp. $\mathbf{d}$) from the left (resp. right) side of $B_{i,j}$ is more complicated due to the more complicated boundary vector equations (1.6). Using (2.2c) (resp. (2.2b)), an $\mathbf{e}$ (resp. $\mathbf{d}$) on the left (resp. right) side of a monomial can be removed giving,

$\mathbb{L}[\mathbf{e}\, w] = (a+c)\,\mathbb{L}[w] - ac\,\mathbb{L}[\mathbf{d}\, w]$ (3.5)
$\mathbb{L}[w\, \mathbf{d}] = (b+d)\,\mathbb{L}[w] - bd\,\mathbb{L}[w\, \mathbf{e}]$ (3.6)

Similar to the proof in [2], commuting an $\mathbf{e}$ all the way to the left (using (2.2a)) and then eliminating the $\mathbf{e}$ on the left using (3.5) gives the recurrence (3.3a),

$B_{i,j} = q^{i}\,\mathbb{L}[\mathbf{e}\,\mathbf{d}^{i}\mathbf{e}^{j-1}] + (1-q^{i})\,B_{i-1,j-1} = q^{i}\big[(a+c)\,B_{i,j-1} - ac\,B_{i+1,j-1}\big] + (1-q^{i})\,B_{i-1,j-1}.$ (3.7)

Commuting a $\mathbf{d}$ all the way to the right and then eliminating it using (3.6) gives the recurrence (3.3b),

$B_{i,j} = q^{j}\,\mathbb{L}[\mathbf{d}^{i-1}\mathbf{e}^{j}\mathbf{d}] + (1-q^{j})\,B_{i-1,j-1} = q^{j}\big[(b+d)\,B_{i-1,j} - bd\,B_{i-1,j+1}\big] + (1-q^{j})\,B_{i-1,j-1}.$ (3.8)

By rearranging (2.2b) and (2.2c), a $\mathbf{d}$ can be eliminated from the left and an $\mathbf{e}$ eliminated from the right, as distinct from the 3-parameter case, giving,

$\mathbb{L}[\mathbf{d}\, w] = \frac{1}{ac}\big[(a+c)\,\mathbb{L}[w] - \mathbb{L}[\mathbf{e}\, w]\big]$ (3.9)
$\mathbb{L}[w\, \mathbf{e}] = \frac{1}{bd}\big[(b+d)\,\mathbb{L}[w] - \mathbb{L}[w\, \mathbf{d}]\big]$ (3.10)

Finally, eliminating the rightmost $\mathbf{d}$ of $B_{i,0}$ using (3.6), and the leftmost $\mathbf{e}$ of $B_{0,j}$ using (3.5), gives the following expressions in terms of the boundary values,

$B_{i,0} = (b+d)\,B_{i-1,0} - bd\,B_{i-1,1}$ (3.11)
$B_{0,j} = (a+c)\,B_{0,j-1} - ac\,B_{1,j-1}$ (3.12)

Combining these with the recursions (3.3a) (at $j = 1$) and (3.3b) (at $i = 1$) gives the recurrences (3.4a) and (3.4b) for the boundary value terms. ∎

The existence of the pair of polynomial sequences (3.1) requires that the determinant of the sub-matrix

$B_n = (B_{i,j})_{0 \le i, j \le n-1}$

be non-zero for all $n$ (see [2] for further details).

In the case of the three parameter model the corresponding determinant was evaluated using theorems from [13] and [14]. In the five parameter case we have been unable to find any similar theorems and thus attempted an $LDU$ decomposition directly.

For small values of $n$ the determinant, $\det B_n$, can be found by computer by iterating the recurrence relations (3.3a) to construct $B_n$. From these values a product form for $\det B_n$ (stated below in (3.27)) can be conjectured. It is similarly possible to conjecture the $LDU$ decomposition of $B$, that is, find upper and lower triangular matrices, $U$ and $L$ respectively, such that

$B = L\, \mathsf{D}\, U$ (3.13)

(with $\mathsf{D}$ diagonal). The product of the first $n$ diagonal elements of $\mathsf{D}$ then gives the determinant, $\det B_n$. These small computations lead us to define a lower triangular matrix $L$ via a recurrence relation for the matrix elements given by

(3.14)

where

(3.15)
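The workflow just described can be mimicked numerically. A minimal sketch (ours), reusing the bimoment function from the earlier sketch; here ldu is a plain Gaussian-elimination factorisation, not the conjectured closed form (3.14):

```python
import numpy as np

def ldu(B):
    """Factor B = L @ D @ U with L unit lower triangular, U unit upper
    triangular and D diagonal, by Gaussian elimination without pivoting.
    Assumes all leading principal minors are non-zero, as needed for (3.13)."""
    n = len(B)
    A = np.array(B, dtype=float)
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    D = np.diag(np.diag(A))
    U = A / np.diag(A)[:, None]      # rescale each row by its pivot
    return L, D, U

# det B_n is the product of the first n diagonal entries of D (Theorem 4).
B = bimoment(6, 0.5, 0.4, -0.3, 0.2, 0.6)    # from the earlier sketch
L, D, U = ldu(B)
assert np.allclose(L @ D @ U, B)
for n in range(1, 8):
    assert np.isclose(np.linalg.det(B[:n, :n]), np.prod(np.diag(D)[:n]))
```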

For small values of $n$ this matrix (and a similar one for $U$) gives the $LDU$ decomposition of $B$, but unfortunately we have not been able to prove the decomposition for arbitrary $n$. Thus we make the following conjecture.

Conjecture 1.

The bimoment matrix, $B$ of (3.2), has an $LDU$ decomposition with lower triangular matrix, $L$, given by (3.14).

We make two remarks. First, one of the final results of this conjecture is a representation for $\mathbf{d}$ and $\mathbf{e}$. Having obtained a candidate representation it is then straightforward to verify that it is a representation by substituting back into the defining algebra – Definition 1. This has been done and hence the representation verified. Assuming the logic of the calculation can be reversed, this would prove the conjecture. However, one of the primary aims of this paper is to derive the representation, and thus from this perspective it is more satisfactory if the conjecture is proved directly.

Secondly, it is only necessary to conjecture $L$ since, assuming (3.14) is valid, we can compute the corresponding recurrence relations for the upper triangular matrix $U$ using a symmetry of $B$. The bimoment matrix is invariant under taking the transpose together with the substitutions $a \leftrightarrow b$ and $c \leftrightarrow d$. Under this action, the equations (3.3a) and (3.3b) swap, as do the equations (3.4a) and (3.4b). Thus the upper triangular matrix can be obtained from the lower triangular matrix by performing these substitutions.

Corollary 1.

Assuming Conjecture 1 is true. The upper triangular matrix elements of the $LDU$ decomposition of the bimoment matrix are given by the recurrence relation

(3.16)

The diagonal matrix of the $LDU$ decomposition can be calculated using the inverse of the $L$ matrix.

Corollary 2.

Assuming Conjecture 1 is true. Let $L$ be the lower triangular matrix of the $LDU$ decomposition of the bimoment matrix $B$. Then the elements of the inverse of $L$ satisfy

(3.17)

Let $U$ be the upper triangular matrix of the $LDU$ decomposition of the bimoment matrix $B$. Then the elements of the inverse of $U$ satisfy

(3.18)
Proof.

We will show that $L^{-1} L = I$. Substituting (3.17) and then using (3.14) gives,

(3.19)

All entries above the main diagonal are zero since the matrix is lower triangular. On the diagonal (3.19) gives,

(3.20)

All that needs to be shown is that all the other diagonals contain only zeros. The recurrence (3.19) gives a matrix element in terms of elements from its own diagonal and the two diagonals above it. The only non-zero elements above the first off-diagonal are on the main diagonal. Therefore,

(3.21)

Similarly for the second off-diagonal,

(3.22)

Since the first two off-diagonals are zero, we get, for the remaining off-diagonals,

(3.23)

The proof for $U^{-1}$ follows similarly. ∎

We can now compute the elements of the diagonal matrix $\mathsf{D}$.

Theorem 3.

Assuming Conjecture 1 is true. The diagonal matrix elements of the matrix $\mathsf{D}$ of (3.13) satisfy a first order recurrence relation giving

(3.24)

for $n \ge 1$, where the coefficient is given by (3.15) and $\mathsf{D}_{0,0} = 1$.

Proof.

This proof follows similarly to the proof of Corollary 2. Assuming the conjecture is true, $L^{-1} B$ is an upper triangular matrix. Using (3.17), the recurrence for $L^{-1}$, gives,

(3.25)

Using (3.3b), the recurrence for the bimoment matrix, gives

(3.26)

which, together with $B_{0,0} = 1$ and the fact that the matrix product $L^{-1} B$ is upper triangular, gives the stated result. ∎

The value of the determinant, $\det B_n$, is simple to calculate from the $LDU$-decomposition of the bimoment matrix, it being the product of the elements of the diagonal matrix.

Theorem 4.

Assuming Conjecture 1 is true. Let $B_n$ be the truncated bimoment matrix whose elements are defined by Theorem 2. Then

(3.27)

We now use the bimoment matrix to show the existence and uniqueness of the polynomial sequences (3.1). For these we require the bi-orthogonality condition

$\mathbb{L}[P_m(\mathbf{d})\, Q_n(\mathbf{e})] = \Delta_n\, \delta_{m,n}$ (3.28)

where $(\Delta_n)_{n \ge 0}$ is a sequence of non-zero normalisation factors determined by $\mathbb{L}$ and the monic constraint.

If this bi-orthogonality is translated into the form of the original matrix product Ansatz, then the equation is asking the question: do there exist polynomials $P_m$ and $Q_n$ in the matrices $\mathbf{d}$ and $\mathbf{e}$ such that

$\langle W|\, P_m(\mathbf{d})\, Q_n(\mathbf{e})\, |V\rangle = \Delta_n\, \delta_{m,n}$ (3.29)

for the boundary vectors $\langle W|$ and $|V\rangle$ satisfying (1.6)? If the sequences exist we get a new pair of basis vectors and their orthonormal (with respect to $\mathbb{L}$) duals $\langle \tilde{x}_n|$, given by

$\langle \tilde{x}_n| = \frac{1}{\Delta_n}\, \langle W|\, P_n(\mathbf{d}), \qquad |y_n\rangle = Q_n(\mathbf{e})\, |V\rangle$ (3.30)

where $|y_0\rangle = |V\rangle$ and $\langle \tilde{x}_0| = \langle W|$. We normalise so that $\langle W | V \rangle = 1$. From these basis vectors, we get matrix representations for $\mathbf{d}$ and $\mathbf{e}$ by computing

$(\mathbf{d})_{m,n} = \langle \tilde{x}_m|\, \mathbf{d}\, |y_n\rangle, \qquad (\mathbf{e})_{m,n} = \langle \tilde{x}_m|\, \mathbf{e}\, |y_n\rangle.$ (3.31)

Returning to the question of the existence of bi-orthogonal polynomials, we have the following theorem stating that a unique pair of sequences exists.

Theorem 5.

Assuming Conjecture 1 is true. Let $\{P_n\}_{n \ge 0}$ and $\{Q_n\}_{n \ge 0}$ be a pair of sequences of monic polynomials satisfying

$\mathbb{L}[P_m(\mathbf{d})\, Q_n(\mathbf{e})] = \Delta_n\, \delta_{m,n}$ (3.32)

where the linear functional $\mathbb{L}$ is defined by equations (2.2). Then $P_n$ and $Q_n$ exist and are unique with

$\Delta_n = \frac{\det B_{n+1}}{\det B_n}$ (3.33)

for $n \ge 0$ (with $\det B_0 = 1$).

The proof is exactly the same as that which appears in [2] (except for the different value of $\Delta_n$) and thus we omit it.

To find the explicit form of the polynomials we need to evaluate two determinants (see [2] for their derivation),

$P_n(x) = \frac{1}{\det B_n} \det \begin{pmatrix} B_{0,0} & \cdots & B_{0,n-1} & 1 \\ B_{1,0} & \cdots & B_{1,n-1} & x \\ \vdots & & \vdots & \vdots \\ B_{n,0} & \cdots & B_{n,n-1} & x^{n} \end{pmatrix}$ (3.34)

and

$Q_n(x) = \frac{1}{\det B_n} \det \begin{pmatrix} B_{0,0} & B_{0,1} & \cdots & B_{0,n} \\ \vdots & \vdots & & \vdots \\ B_{n-1,0} & B_{n-1,1} & \cdots & B_{n-1,n} \\ 1 & x & \cdots & x^{n} \end{pmatrix}$ (3.35)

The two determinants can be evaluated using the $LDU$ decomposition of the two matrices, leading to the following result.

Theorem 6.

Assuming Conjecture 1 is true. The pair of sequences of monic polynomials $P_n$ and $Q_n$ satisfy

$x^{n} = \sum_{k=0}^{n} L_{n,k}\, P_k(x)$ (3.36a)
$x^{n} = \sum_{k=0}^{n} U_{k,n}\, Q_k(x)$ (3.36b)

where $L_{i,j}$ and $U_{i,j}$ are the matrix elements of the lower triangular matrix $L$ and upper triangular matrix $U$ given by (3.14) and (3.16) respectively.

Proof.

Since the two determinant matrices are very similar to the bimoment matrix, once the $LDU$ decomposition of the bimoment matrix is known, the decompositions of (3.34) and (3.35) are readily obtained. Thus, assuming Conjecture 1, we get the following:

(3.37)

and

(3.38)

Expanding (3.37) along the bottom row leaves a sub-matrix determinant which reduces to a determinant of the same form as (3.37) but with $n$ replaced by $n-1$. Thus we get (3.36a). Similarly for (3.36b). ∎

Corollary 3.

Assuming Conjecture 1 is true. The pair of sequences of monic polynomials $P_n$ and $Q_n$ can be expressed as

$P_n(x) = \sum_{k=0}^{n} (L^{-1})_{n,k}\, x^{k}$ (3.39a)
$Q_n(x) = \sum_{k=0}^{n} (U^{-1})_{k,n}\, x^{k}$ (3.39b)

where $(L^{-1})_{i,j}$ and $(U^{-1})_{i,j}$ are the matrix elements of the inverse lower triangular and inverse upper triangular matrices given by (3.17) and (3.18) respectively.
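Numerically, (3.39) and the bi-orthogonality (3.28) can be checked together (our sketch, reusing the bimoment and ldu functions from the earlier sketches):

```python
import numpy as np

# Check (3.39) and the bi-orthogonality (3.28) numerically, reusing the
# bimoment and ldu sketches above.  Rows of inv(L) hold the coefficients of
# the monic P_n; columns of inv(U) hold those of the monic Q_n.
B = bimoment(6, 0.5, 0.4, -0.3, 0.2, 0.6)
L, D, U = ldu(B)
P = np.linalg.inv(L)        # P[n, k] = coefficient of x^k in P_n(x)
Q = np.linalg.inv(U).T      # Q[n, k] = coefficient of x^k in Q_n(x)
assert np.allclose(np.diag(P), 1) and np.allclose(np.diag(Q), 1)   # monic
# L[P_m(d) Q_n(e)] = sum_{i,k} P[m,i] B[i,k] Q[n,k] = (inv(L) B inv(U))_{mn}
G = P @ B @ np.linalg.inv(U)
assert np.allclose(G, D)    # diagonal: Delta_n = D[n, n], as in (3.28)
```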

We now use (3.39a) and (3.39b) to find a recursive formulation for $P_n$ and $Q_n$.

Theorem 7.

Assuming Conjecture 1 is true.  The pair of sequences of monic polynomials