# Properties of powers of functions satisfying second-order linear differential equations with applications to statistics

## Abstract

We derive properties of powers of a function satisfying a second-order linear differential equation. In particular we prove that the $n$-th power of the function satisfies an $(n+1)$-th order differential equation and give a simple method for obtaining the differential equation. Also we determine the exponents of the differential equation and derive a bound for the degree of the polynomials which are coefficients in the differential equation. The bound corresponds to the order of the differential equation satisfied by the $n$-fold convolution of the Fourier transform of the function. These results are applied to some probability density functions used in statistics.

Keywords and phrases: characteristic function, exponents, holonomic function, indicial equation, skewness

## 1 Introduction

In statistics it is important to study the distribution of a sum (i.e. convolution) of independent random variables. Usually the distribution is studied through the characteristic function, because the convolution of probability density functions corresponds to the product of characteristic functions. If the random variables are identically distributed, then we study the $n$-th power of a characteristic function. The central limit theorem is proved by analyzing the limiting behavior of the $n$-th power of a characteristic function as $n \to \infty$. Often the technique of asymptotic expansion is employed to improve the approximation for large $n$. However for finite $n$, the exact distribution of the sum of $n$ random variables is often difficult to treat. Hence it is important to develop methodology for studying properties of the $n$-th power of a function.
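The convolution–product correspondence is easy to see numerically. The following stdlib-only sketch (the Uniform(0,1) example and all function names are illustrative choices, not taken from the text) compares the characteristic function of the sum of two independent Uniform(0,1) variables, computed directly from the triangular density of the sum, with the square of the uniform characteristic function:

```python
import cmath

def phi_uniform(t):
    # Characteristic function of Uniform(0,1): (e^{it} - 1) / (it).
    if t == 0:
        return 1.0 + 0j
    return (cmath.exp(1j * t) - 1) / (1j * t)

def phi_sum_numeric(t, m=4000):
    # Characteristic function of U1 + U2, computed from the triangular
    # density p(s) = s on [0,1], 2 - s on [1,2], via a midpoint
    # Riemann sum of E[e^{itS}] = \int_0^2 e^{its} p(s) ds.
    h = 2.0 / m
    total = 0j
    for k in range(m):
        s = (k + 0.5) * h
        p = s if s <= 1.0 else 2.0 - s
        total += cmath.exp(1j * t * s) * p * h
    return total

# Convolution of densities <-> product of characteristic functions:
for t in (0.5, 1.0, 3.0):
    assert abs(phi_sum_numeric(t) - phi_uniform(t) ** 2) < 1e-4
```

The same check works for any pair of distributions whose densities and characteristic functions are available in closed form.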

Recently techniques based on holonomic functions ([9], Chapter 6 of [6]) have been introduced to statistics and successfully applied to some difficult distributional problems (e.g. [12], [5]). In this paper we investigate the case that the function satisfies a second-order linear differential equation with rational function coefficients, which we call a holonomic differential equation. In Section 2 we prove that the $n$-th power satisfies an $(n+1)$-th order differential equation and give a simple method for obtaining the differential equation. Also we determine the exponents of the differential equation and derive a bound for the degree of the polynomials which appear as coefficients of the differential equation.

As shown in Section 3, there are some important examples in statistics which fall into this case. We discuss the sum of beta random variables and the sum of cubes of standard normal random variables. The differential equations reveal many interesting properties of the characteristic function and the probability density function of the sum of $n$ random variables. These properties are hard to obtain by other methods. We end the paper with some discussion in Section 4.

## 2 Main results

In this section we present our main results in Theorems 2.4, 2.8 and 2.12. Theorem 2.4 gives the differential equation satisfied by the $n$-th power. Theorem 2.8 bounds the degree of its coefficient polynomials. Theorem 2.12 derives the exponents of the differential equation.

Let $\mathbb{C}(x)$ denote the field of rational functions in $x$ with complex coefficients and let

$R = \mathbb{C}(x)\langle \partial \rangle$

denote the ring of differential operators with rational function coefficients. In $R$, the product of $\partial$ and $a \in \mathbb{C}(x)$ is defined as $\partial a = a\partial + a'$, where $a'$ is the derivative of $a$ with respect to $x$. In order to distinguish the product in $R$ and the action of an operator in $R$ on a function, we denote the latter by the symbol $\bullet$.

###### Example 2.1.

If we write $\partial a$, both $\partial$ and $a$ are elements of $R$. Hence $\partial a = a\partial + a'$. On the other hand, if we write $\partial \bullet a$, this is a function. Hence $\partial \bullet a = a'$.

In this paper we study a holonomic function $f = f(x)$ satisfying a second-order differential equation:

(1) $f'' = a\,f' + b\,f, \qquad a, b \in \mathbb{C}(x).$

### 2.1 Order of the differential equation of the $n$-th power and its Fourier transform

Let $u = (f^n,\ f^{n-1}f',\ \dots,\ (f')^n)^\top$ be an $(n+1)$-dimensional column vector and let

(2) $A = \begin{pmatrix} 0 & n & & & \\ b & a & n-1 & & \\ & 2b & 2a & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & nb & na \end{pmatrix}$

be an $(n+1)\times(n+1)$ tridiagonal matrix with entries from $\mathbb{C}(x)$; by (1) we have $u' = Au$, where $u'$ denotes the entrywise derivative. Furthermore define

(3) $g_0 = (1, 0, \dots, 0)^\top, \qquad g_{i+1} = g_i' + A^\top g_i, \quad i = 0, 1, \dots, n,$

with entries from $\mathbb{C}(x)$. Let

(4) $G = (g_0, g_1, \dots, g_{n+1})$

be an $(n+1)\times(n+2)$ matrix with entries from $\mathbb{C}(x)$. If we write $G = (g_{kj})$, $0 \le k \le n$, $0 \le j \le n+1$, then

$\partial^j \bullet f^n = g_j^\top u, \qquad j = 0, 1, \dots, n+1,$

or writing down the elements we have

(5) $g_{k,j+1} = g_{kj}' + (n-k+1)\,g_{k-1,j} + k\,a\,g_{kj} + (k+1)\,b\,g_{k+1,j},$

where $g_{kj} = 0$ for $k < 0$ or $k > n$. Hence it is easy to compute the elements of the columns of $G$ recursively, starting from the first column.
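The column recursion (5) is easy to implement with exact rational arithmetic. Below is a minimal stdlib-only Python sketch that builds the columns of $G$; the instance $a = -1/x$, $b = -1$, $n = 2$ (satisfied, e.g., by the Bessel function $J_0$) and all helper names (`lp_add`, `next_column`, …) are illustrative assumptions, not taken from the text:

```python
from fractions import Fraction as F

# Laurent polynomials over Q, stored as {exponent: coefficient} dicts.
def lp_add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, F(0)) + c
        if r[e] == 0:
            del r[e]
    return r

def lp_scale(p, s):
    return {e: c * s for e, c in p.items() if c * s != 0}

def lp_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, F(0)) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def lp_diff(p):
    return {e - 1: c * e for e, c in p.items() if e != 0}

def next_column(g, n, a, b):
    # One step of the recursion: column j of G -> column j+1,
    # new[k] = g[k]' + (n-k+1) g[k-1] + k a g[k] + (k+1) b g[k+1].
    new = []
    for k in range(n + 1):
        t = lp_diff(g[k])
        if k >= 1:
            t = lp_add(t, lp_scale(g[k - 1], F(n - k + 1)))
        t = lp_add(t, lp_scale(lp_mul(a, g[k]), F(k)))
        if k + 1 <= n:
            t = lp_add(t, lp_scale(lp_mul(b, g[k + 1]), F(k + 1)))
        new.append(t)
    return new

# Illustrative instance (an assumption): f'' = -f'/x - f,
# i.e. a = -1/x, b = -1, with n = 2.
n = 2
a = {-1: F(-1)}
b = {0: F(-1)}
cols = [[{0: F(1)}] + [{} for _ in range(n)]]      # column g_0 = e_1
for _ in range(n + 1):
    cols.append(next_column(cols[-1], n, a, b))

# The first n+1 columns form an upper-triangular matrix with
# non-zero constant diagonal entries.
for j in range(n + 1):
    for k in range(j + 1, n + 1):
        assert cols[j][k] == {}
assert cols[1][1] == {0: F(2)} and cols[2][2] == {0: F(2)}
```

The triangularity asserted at the end is exactly the phenomenon exploited in the back-substitution for the kernel vector below.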

Define

(6) $\bar G = (g_0, g_1, \dots, g_n),$

the square matrix consisting of the first $n+1$ columns of $G$. From (5) we can easily prove that $\bar G$ is an upper-triangular matrix with non-zero diagonal elements, although $G$ itself is not a square matrix (cf. Example 2.3 below).

###### Lemma 2.2.

$g_{kj} = 0$ if $k > j$. $g_{jj} = n(n-1)\cdots(n-j+1) \neq 0$ for $0 \le j \le n$.

###### Proof.

We use induction on $j$. The result is trivial for $j = 0$. Assume $g_{kj} = 0$ for $k > j$ and $g_{jj} = n(n-1)\cdots(n-j+1)$. Then by (5) we have $g_{k,j+1} = 0$ for $k > j+1$ and $g_{j+1,j+1} = (n-j)\,g_{jj}$. ∎

This lemma implies $\operatorname{rank} G = n+1$, or $\dim \ker G = 1$. Hence the element of $\ker G$ is unique up to multiplication by a rational function. Here note that we are using linear algebra over $\mathbb{C}(x)$.

Let

(7) $c = (c_0, c_1, \dots, c_{n+1})^\top \in \ker G,$

where $c_j \in \mathbb{C}(x)$, $j = 0, 1, \dots, n+1$. Once we set $c_{n+1}$, then by the triangularity of $\bar G$, $c_n, c_{n-1}, \dots, c_0$ are successively determined. Moreover, if we set $c_{n+1} = 0$, then we obtain $c = 0$. Hence $c_{n+1} \neq 0$ for any non-zero $c \in \ker G$. Often we set $c_{n+1} = 1$. For theoretical investigation it is convenient to clear the common denominators of the $c_j$'s and take the $c_j$'s as polynomials.

###### Example 2.3.

Let $n = 2$ and let $f'' = -\dfrac{1}{x}f' - f$, i.e. $a = -1/x$ and $b = -1$ in (1) (satisfied, for example, by the Bessel function $J_0$). Then

(8) $G = \begin{pmatrix} 1 & 0 & -2 & \dfrac{2}{x} \\ 0 & 2 & -\dfrac{2}{x} & \dfrac{4}{x^2} - 8 \\ 0 & 0 & 2 & -\dfrac{6}{x} \end{pmatrix}.$

If we set $c_3 = 1$, we successively obtain

(9) $c_2 = \dfrac{3}{x}, \qquad c_1 = 4 + \dfrac{1}{x^2}, \qquad c_0 = \dfrac{4}{x}.$

Multiplying by $x^2$ we obtain $c = (4x,\ 4x^2 + 1,\ 3x,\ x^2)^\top$ with polynomial elements.
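The back-substitution can be checked mechanically. The sketch below hardcodes a $3 \times 4$ matrix $G$ for the assumed instance $a = -1/x$, $b = -1$, $n = 2$ (the instance and all names are illustrative), sets $c_3 = 1$, and solves upward using the fact that the diagonal entries of $\bar G$ are non-zero constants:

```python
from fractions import Fraction as F

# Laurent polynomials over Q as {exponent: coefficient} dicts.
def lp_add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, F(0)) + c
        if r[e] == 0:
            del r[e]
    return r

def lp_scale(p, s):
    return {e: c * s for e, c in p.items() if c * s != 0}

def lp_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, F(0)) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

# G for the assumed instance (rows k = 0..2, columns j = 0..3).
G = [
    [{0: F(1)}, {},        {0: F(-2)}, {-1: F(2)}],
    [{},        {0: F(2)}, {-1: F(-2)}, {-2: F(4), 0: F(-8)}],
    [{},        {},        {0: F(2)},  {-1: F(-6)}],
]

n = 2
c = [None] * (n + 2)
c[n + 1] = {0: F(1)}                      # set c_{n+1} = 1
for k in range(n, -1, -1):
    s = {}
    for j in range(k + 1, n + 2):
        s = lp_add(s, lp_mul(G[k][j], c[j]))
    diag = G[k][k][0]                     # non-zero constant diagonal
    c[k] = lp_scale(s, F(-1) / diag)      # solve row k for c_k

assert c[2] == {-1: F(3)}                 # 3/x
assert c[1] == {-2: F(1), 0: F(4)}        # 1/x^2 + 4
assert c[0] == {-1: F(4)}                 # 4/x
```

Multiplying each entry by $x^2$ (i.e. shifting every exponent up by two) then yields the polynomial coefficient vector.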

We now derive a holonomic differential equation satisfied by the $n$-th power of the holonomic function $f$.

###### Theorem 2.4.

The $n$-th power $f^n$ of $f$ satisfies the following $(n+1)$-th order holonomic differential equation:

(10) $\bigl(c_{n+1}\,\partial^{n+1} + c_n\,\partial^n + \cdots + c_1\,\partial + c_0\bigr)\bullet f^n = 0,$

where the $c_j$'s are given in (7).

###### Proof.

By (3) and induction on $j$, we have $\partial^j \bullet f^n = g_j^\top u$ for $j = 0, 1, \dots, n+1$: indeed, $f^n = g_0^\top u$, and $(g_j^\top u)' = (g_j')^\top u + g_j^\top A u = g_{j+1}^\top u$. Hence

$\Bigl(\sum_{j=0}^{n+1} c_j\,\partial^j\Bigr)\bullet f^n = \sum_{j=0}^{n+1} c_j\, g_j^\top u = (Gc)^\top u = 0. \qquad ∎$
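The theorem can be spot-checked numerically. Take the instance $f'' = -f$ (i.e. $a = 0$, $b = -1$ in (1), an illustrative assumption) with $f = \sin x$ and $n = 2$; solving $Gc = 0$ by hand for this instance gives the third-order operator $\partial^3 + 4\partial$, and a finite-difference check confirms that it annihilates $\sin^2 x$:

```python
import math

def y(x):
    # y = f^2 with f = sin x, the assumed instance of (1) (a = 0, b = -1).
    return math.sin(x) ** 2

def d1(g, x, h=1e-3):
    # Central difference for the first derivative.
    return (g(x + h) - g(x - h)) / (2 * h)

def d3(g, x, h=1e-3):
    # Central difference for the third derivative.
    return (g(x + 2 * h) - 2 * g(x + h) + 2 * g(x - h) - g(x - 2 * h)) / (2 * h ** 3)

# The (n+1) = 3rd order equation for this instance: y''' + 4 y' = 0.
for x in (0.3, 1.1, 2.5):
    assert abs(d3(y, x) + 4 * d1(y, x)) < 1e-4
```

The identity can also be verified in closed form: $\sin^2 x = (1 - \cos 2x)/2$, so $y' = \sin 2x$ and $y''' = -4\sin 2x$.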

###### Remark 2.5.

If we just want to show the existence of a holonomic differential equation of order $n+1$, we have only to consider

(17) $V = \sum_{k=0}^{n} \mathbb{C}(x)\, f^{n-k}(f')^k.$

Then $V$ is a left $R$-module as well as a vector space over $\mathbb{C}(x)$ of dimension at most $n+1$. Hence the $n+2$ elements $f^n$, $\partial \bullet f^n$, …, $\partial^{n+1} \bullet f^n$, which belong to $V$, are linearly dependent over $\mathbb{C}(x)$. Similarly, we see that when $f$ satisfies a holonomic differential equation of order $r$, $f^n$ satisfies a holonomic differential equation of order at most $\binom{n+r-1}{r-1}$.
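The dimension count behind this remark is that the monomials of degree $n$ in $r$ symbols (here $f, f', \dots, f^{(r-1)}$) number $\binom{n+r-1}{r-1}$; for $r = 2$ this is $n+1$, matching Theorem 2.4. A quick enumeration check (stdlib only):

```python
from itertools import combinations_with_replacement
from math import comb

# Degree-n monomials in r symbols, represented as size-n multisets of
# the symbol indices 0..r-1, number C(n+r-1, r-1).
for r in (2, 3, 4):
    for n in (1, 2, 5):
        monomials = set(combinations_with_replacement(range(r), n))
        assert len(monomials) == comb(n + r - 1, r - 1)
```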

There exists a function $f$ satisfying a second-order holonomic differential equation such that $f^n$ does not satisfy any holonomic differential equation of order less than $n+1$.

###### Example 2.6.

Let $f = \cos x$, with $f'' = -f$. We prove by contradiction that $f^n, f^{n-1}f', \dots, (f')^n$ are linearly independent over $\mathbb{C}(x)$. It is obvious for $n = 0$. Let $m \ge 1$ be the smallest integer such that $f^m, f^{m-1}f', \dots, (f')^m$ are linearly dependent. Then, there exist rational functions $r_0, r_1, \dots, r_m$, not all zero, such that

(18) $r_0 f^m + r_1 f^{m-1}f' + \cdots + r_m (f')^m = 0.$

By putting $x = j\pi$, $j \in \mathbb{Z}$, we see that $r_0$ has an infinite number of zeros, and therefore is identically zero. Divide the equation (18) by $f'$, and we obtain a non-trivial linear dependence among $f^{m-1}, f^{m-2}f', \dots, (f')^{m-1}$, which is a contradiction.

Since $f^n, f^{n-1}f', \dots, (f')^n$ are linearly independent and the matrix $\bar G$ is non-singular over $\mathbb{C}(x)$ by Lemma 2.2, $f^n, \partial \bullet f^n, \dots, \partial^n \bullet f^n$ are linearly independent over $\mathbb{C}(x)$. Thus, there does not exist a holonomic differential equation of order less than $n+1$ satisfied by $f^n$.

We have already remarked that we can take $c_j$, $j = 0, 1, \dots, n+1$, as polynomials in (10). Also we can cancel common factors in them. Hence we can assume that they are coprime polynomials. We now investigate the highest degree of these polynomials, which is important when the differential equation is Fourier transformed, because it is equal to the order of the transformed equation.

For the rest of this subsection we assume that $a$ and $b$ in (1) are Laurent polynomials. Here, we define mindeg and maxdeg of a Laurent polynomial.

###### Definition 2.7.

For a non-zero Laurent polynomial $p(x) = \sum_k p_k x^k$, we define

(19) $\operatorname{mindeg} p = \min\{k : p_k \neq 0\}, \qquad \operatorname{maxdeg} p = \max\{k : p_k \neq 0\}.$

We define $\operatorname{mindeg} 0 = +\infty$, $\operatorname{maxdeg} 0 = -\infty$.

Note that for a non-zero polynomial $p$, $\operatorname{maxdeg} p = \deg p$.
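On a sparse representation of Laurent polynomials, mindeg and maxdeg are one-liners (the dict encoding is an implementation choice, not taken from the text):

```python
import math
from fractions import Fraction as F

# Laurent polynomials as {exponent: coefficient} dicts.
def mindeg(p):
    return min(p) if p else math.inf       # mindeg 0 = +infinity

def maxdeg(p):
    return max(p) if p else -math.inf      # maxdeg 0 = -infinity

p = {-2: F(3), 5: F(1)}                    # p(x) = 3 x^{-2} + x^5
assert mindeg(p) == -2 and maxdeg(p) == 5
assert mindeg({}) == math.inf and maxdeg({}) == -math.inf
```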

Now we state the following theorem on the largest degree of the polynomials $c_j$ in (10).

###### Theorem 2.8.

###### Proof.

We prove degree bounds of the form

(21) |

for the entries $g_{kj}$, $j = 0, 1, \dots, n+1$, by induction on $j$. It is easy to check them for $j = 0$. Assuming them up to $j$, by (5), we have

(22) | ||||

(23) | ||||

(24) | ||||

(25) | ||||

(26) |

Thus, the results are shown by induction.

Let $D = \mathbb{C}\langle x, \partial \rangle$ denote the ring of differential operators polynomial in $x$ and $\partial$ with complex coefficients. The Fourier transform $\mathcal{F}$, which is a ring isomorphism of $D$, is defined by (Section 6.10 of [6])

(29) $\mathcal{F}(x) = -\sqrt{-1}\,\partial_y, \qquad \mathcal{F}(\partial) = -\sqrt{-1}\,y.$

Hence the Fourier transform of an operator $P(x, \partial) \in D$ is given by $\mathcal{F}(P) = P(-\sqrt{-1}\,\partial_y,\ -\sqrt{-1}\,y)$.

This definition is based on the fact that if a function $f$ satisfies the differential equation $P \bullet f = 0$, then the Fourier transform $\hat f$ satisfies the differential equation $\mathcal{F}(P) \bullet \hat f = 0$ under some regularity conditions. If $f$ is a rapidly decreasing holonomic function, then the correspondence (29) is immediate (Section 5.1.4 of [16]). The correspondence can be justified in the class of slowly increasing functions. See Chapter 5 of [4].
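The correspondence can be spot-checked numerically. Assuming the convention $\hat f(y) = \int f(x)\,e^{\sqrt{-1}\,xy}\,dx$ (an assumption made here for the sketch), the Gaussian $f = e^{-x^2/2}$ satisfies $(\partial + x) \bullet f = 0$, whose image operator is $-\sqrt{-1}\,(\partial_y + y)$, so $\hat f$ should satisfy $\hat f'(y) + y\,\hat f(y) = 0$:

```python
import math

def f(x):
    return math.exp(-x * x / 2)

def fhat(y, L=10.0, m=4000):
    # \hat f(y) = \int f(x) e^{ixy} dx; f is even, so the integral
    # reduces to \int f(x) cos(xy) dx (midpoint rule on [-L, L]).
    h = 2 * L / m
    total = 0.0
    for k in range(m):
        x = -L + (k + 0.5) * h
        total += f(x) * math.cos(x * y)
    return total * h

# f satisfies (d/dx + x) f = 0, so fhat should satisfy
# fhat'(y) + y fhat(y) = 0  (indeed fhat = sqrt(2 pi) exp(-y^2/2)).
eps = 1e-4
for y in (0.5, 1.5):
    deriv = (fhat(y + eps) - fhat(y - eps)) / (2 * eps)
    assert abs(deriv + y * fhat(y)) < 1e-4
```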

### 2.2 Exponents for the differential equation of the $n$-th power and the Fourier transformed equation

Consider an $m$-th order differential equation

(30) $\bigl(\partial^m + p_1(x)\,\partial^{m-1} + \cdots + p_{m-1}(x)\,\partial + p_m(x)\bigr)\bullet y = 0.$

If $x\,p_1(x), x^2 p_2(x), \dots, x^m p_m(x)$ are all analytic at $x = 0$, then $x = 0$ is said to be a regular singular point for the equation. If, after the substitution $x = 1/t$, the corresponding functions are all analytic at $t = 0$, then $x = \infty$ is said to be a regular singular point for the equation.

When the equation (30) is holonomic, $x = 0$ is a regular singular point if the denominators of $x^j p_j(x)$, $j = 1, \dots, m$, do not have the factor $x$, and $x = \infty$ is a regular singular point if $x\,p_1(x), \dots, x^m p_m(x)$ are all proper.

When $x = 0$ is a regular singular point for the equation, the $m$-th degree equation

(31) $[s]_m + \tilde p_1(0)\,[s]_{m-1} + \cdots + \tilde p_{m-1}(0)\,[s]_1 + \tilde p_m(0) = 0,$

where $[s]_j = s(s-1)\cdots(s-j+1)$ and $\tilde p_j(x) = x^j p_j(x)$, is called the indicial equation (Section 9.5 of [7], Chapter 15 of [8]) for (30) relative to the regular singular point $x = 0$. The roots of the indicial equation are called the exponents.

The case of a regular singular point $x = c \neq 0$ can be reduced to the case $x = 0$ by the transform $x \mapsto x + c$, and the case $x = \infty$ can be reduced to $x = 0$ by $x \mapsto 1/x$. Hence in the following we put the regular singular point at $x = 0$.

The equation (30) is equal to

(32) $x^{-m}\bigl([\theta]_m + \tilde p_1(x)\,[\theta]_{m-1} + \cdots + \tilde p_m(x)\bigr)\bullet y = 0,$

where $\theta = x\partial$ is the Euler operator, since $x^j \partial^j = [\theta]_j = \theta(\theta-1)\cdots(\theta-j+1)$. This shows that the indicial equation is obtained by expressing the differential equation in terms of $x$ and $\theta$, and substituting $s$ for $\theta$ and $0$ for $x$ formally.
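The recipe "rewrite in $x$ and $\theta$, then substitute" is easy to automate for the indicial polynomial at $x = 0$. A sketch with exact rational arithmetic (function names are ad hoc), checked on the Bessel equation $x^2 y'' + x y' + (x^2 - 4)y = 0$, whose exponents are $\pm 2$:

```python
from fractions import Fraction as F

def falling_factorial_poly(j):
    # Coefficients (low degree first) of [s]_j = s(s-1)...(s-j+1).
    poly = [F(1)]
    for i in range(j):
        shifted = [F(0)] + poly                     # s * poly
        scaled = [c * (-i) for c in poly] + [F(0)]  # (-i) * poly
        poly = [u + v for u, v in zip(shifted, scaled)]
    return poly

def indicial_poly(ptilde0):
    # ptilde0 = [ptilde_1(0), ..., ptilde_m(0)] for the equation written
    # as [s]_m + ptilde_1(0)[s]_{m-1} + ... + ptilde_m(0) = 0.
    m = len(ptilde0)
    acc = [F(0)] * (m + 1)
    for j, coef in enumerate([F(1)] + list(ptilde0)):
        for d, c in enumerate(falling_factorial_poly(m - j)):
            acc[d] += coef * c
    return acc                                      # low degree first

# Bessel equation x^2 y'' + x y' + (x^2 - 4) y = 0:
# ptilde_1(x) = 1, ptilde_2(x) = x^2 - 4, so the values at 0 are (1, -4).
assert indicial_poly([F(1), F(-4)]) == [F(-4), F(0), F(1)]   # s^2 - 4
```

The resulting polynomial $s^2 - 4$ has roots $\pm 2$, the exponents of the Bessel functions of order $2$.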

In this subsection we assume that $x = 0$ is a regular singular point for the equation (1) satisfied by $f$. Let $\lambda_1, \lambda_2$ be the exponents for (1) relative to the regular singular point $x = 0$.

We show the following lemma on the eigenvalues of a matrix before the proof of Theorem 2.12 on the exponents for (10) relative to $x = 0$.

###### Lemma 2.10.

The eigenvalues of the $(n+1)\times(n+1)$ tridiagonal matrix

(33) $T = \begin{pmatrix} 0 & n & & & \\ -\lambda_1\lambda_2 & \lambda_1+\lambda_2 & n-1 & & \\ & -2\lambda_1\lambda_2 & 2(\lambda_1+\lambda_2) & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & -n\lambda_1\lambda_2 & n(\lambda_1+\lambda_2) \end{pmatrix}$

are

$(n-k)\lambda_1 + k\lambda_2, \qquad k = 0, 1, \dots, n.$

###### Proof.

The eigenvalues of $T$ are equal to those of the matrix

(34) |

because the determinant of a tridiagonal matrix depends only on the diagonal elements and the products of paired off-diagonal elements $m_{k,k+1}\,m_{k+1,k}$.

If $\lambda_1 \lambda_2 = 0$, say $\lambda_1 = 0$, the matrix is triangular and it is obvious that the eigenvalues are $k(\lambda_1 + \lambda_2)$, $k = 0, 1, \dots, n$, which equal $(n-k)\lambda_1 + k\lambda_2$. Otherwise, putting $\mu = \lambda_2/\lambda_1$, we prove that the eigenvalues of the matrix $T/\lambda_1$ are $(n-k) + k\mu$, $k = 0, 1, \dots, n$.

For $\lambda_1 \neq \lambda_2$, all of the values $\mu_k = (n-k)\lambda_1 + k\lambda_2$, $k = 0, 1, \dots, n$, are different. We show that the eigenvector corresponding to $\mu_k$ is $v_k = (v_{k,0}, \dots, v_{k,n})^\top$ where

(35) |

Here, the summation for $v_{k,l}$ is over a finite interval of indices.

The $l$-th entry of $T v_k$ equals

(36) | ||||

(37) |

The first two terms equal

(38) | |||

(39) |

by the relations among $\mu$, $\lambda_1 + \lambda_2$ and $\lambda_1 \lambda_2$. Similarly for the last two terms. These show that $T v_k = \mu_k v_k$.

For $\lambda_1 = \lambda_2$, all of the $\mu_k$'s are identical. Let $v_0, v_1, \dots, v_n$ be $(n+1)$-dimensional vectors where

(40) |

Then, we can show $T v_0 = \mu_0 v_0$ and $T v_k = \mu_k v_k + v_{k-1}$, $k \ge 1$, as above. Hence the $v_k$'s, which are linearly independent, are the generalized eigenvectors of the matrix. ∎
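Both ingredients of the proof can be spot-checked numerically: the determinant of a tridiagonal matrix is unchanged when each off-diagonal pair is replaced by a pair with the same product, and the claimed eigenvalues $(n-k)\lambda_1 + k\lambda_2$ make the shifted matrix singular. The tridiagonal form used below (diagonal $k(\lambda_1+\lambda_2)$, superdiagonal $n-k$, subdiagonal $-k\lambda_1\lambda_2$) is a reconstruction consistent with the indicial data of (1), so it is an assumption:

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    total = F(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def tridiag(d, sup, sub):
    m = len(d)
    M = [[F(0)] * m for _ in range(m)]
    for i in range(m):
        M[i][i] = d[i]
        if i + 1 < m:
            M[i][i + 1] = sup[i]
            M[i + 1][i] = sub[i]
    return M

# 1. det depends only on the diagonal and off-diagonal pair products.
d = [F(1), F(2), F(3), F(4)]
sup, sub = [F(2), F(5), F(7)], [F(3), F(1), F(2)]
M1 = tridiag(d, sup, sub)
M2 = tridiag(d, [s * t for s, t in zip(sup, sub)], [F(1)] * 3)
assert det(M1) == det(M2)

# 2. (n-k) l1 + k l2 are eigenvalues of the assumed indicial matrix.
n, l1, l2 = 2, F(2), F(3)
M = tridiag([F(k) * (l1 + l2) for k in range(n + 1)],
            [F(n - k) for k in range(n)],
            [-F(k + 1) * l1 * l2 for k in range(n)])
for k in range(n + 1):
    mu = (n - k) * l1 + k * l2
    shifted = [[M[i][j] - (mu if i == j else F(0)) for j in range(n + 1)]
               for i in range(n + 1)]
    assert det(shifted) == 0
```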

###### Remark 2.11.

We now show the following theorem on the exponents for (10).

###### Theorem 2.12.

The exponents for the equation (10) relative to the regular singular point $x = 0$ are

$(n-k)\lambda_1 + k\lambda_2, \qquad k = 0, 1, \dots, n.$

###### Proof.

We put the regular singular point at $x = 0$ without loss of generality by translation. Then, the equation (1) can be rearranged to

$\theta(\theta - 1)\bullet f = \alpha(x)\,\theta \bullet f + \beta(x)\,f,$

where $\alpha(x) = x\,a(x)$ and $\beta(x) = x^2\,b(x)$ are analytic at $x = 0$.

Let