# Orthogonality of Quasi-Orthogonal Polynomials

###### Abstract.

A result of Pólya states that a sequence of quadrature formulas with positive weights converges to the integral of every continuous function provided each formula is exact on a space of algebraic polynomials whose degree grows with the number of nodes. The classical case, when the algebraic degree of precision is the highest possible, is well known: the quadrature formulas are the Gaussian ones, whose nodes coincide with the zeros of the corresponding orthogonal polynomials and whose weights are expressed in terms of the so-called kernel polynomials. In many cases it is reasonable to relax the requirement of the highest possible degree of precision in order to gain the possibility either to approximate integrals of more specific continuous functions that contain a polynomial factor or to include additional fixed nodes. The construction of such quadrature processes is related to quasi-orthogonal polynomials. Given a sequence $\{P_n\}_{n\geq 0}$ of monic orthogonal polynomials and a fixed integer $r$, we establish necessary and sufficient conditions so that the quasi-orthogonal polynomials $\{Q_n\}_{n\geq 0}$ defined by

$$Q_n(x) = P_n(x) + \sum_{k=1}^{r} a_{n,k}\, P_{n-k}(x), \qquad n \geq 0,$$

with $a_{n,r} \neq 0$ for $n \geq r$, also constitute a sequence of orthogonal polynomials. Therefore we solve the inverse problem for linearly related orthogonal polynomials. The characterization turns out to be equivalent to some nice recurrence formulas for the coefficients $a_{n,k}$. We employ these results to establish explicit relations between various types of quadrature rules arising from the above relations. A number of illustrative examples are provided.

###### Key words and phrases:

Orthogonal polynomials, quasi-orthogonal polynomials, positive quadrature formulas, Gaussian quadrature formulas, Christoffel numbers, inverse problems.

###### 2010 Mathematics Subject Classification:

33C45; 42C05

## 1. Introduction

Some results obtained during the early development of the theory of orthogonal polynomials were motivated by the desire to build quadrature formulas with positive Christoffel numbers whose nodes are zeros of known polynomials. Nowadays these quadratures are succinctly denominated positive quadrature formulas. The study of this kind of problem was inspired by Gauss's theorem on quadrature with the highest algebraic degree of precision, with nodes at the zeros of the polynomials orthogonal with respect to the measure of integration, as well as by the result of Pólya [38] on convergence of quadrature rules. This led Riesz, Fejér and Shohat to investigate the properties of certain linear combinations of orthogonal polynomials, and the further developments led to deep results. The most convincing example is the proof by Askey and Gasper [7, 8] of the positivity of certain sums of Jacobi polynomials, which played a key role in the final stage of de Branges' proof of the Bieberbach conjecture. We refer to the nice survey of Askey [6] for the motivation to study positive Jacobi polynomial sums, coming from positive quadratures, and for further information about these natural connections.

The construction of positive quadrature rules is connected with the so-called quasi-orthogonal polynomials. Let $\{P_n\}_{n\geq 0}$ be a given sequence of monic orthogonal polynomials, generated by the three-term recurrence relation

(1.1) $$P_{n+1}(x) = (x - b_n)\,P_n(x) - c_n\,P_{n-1}(x), \qquad n \geq 0,$$

with $P_{-1}(x) = 0$, $P_0(x) = 1$ and $c_n \neq 0$. Then, given an integer $r \geq 0$, the polynomials defined by

(1.2) $$Q_n(x) = P_n(x) + \sum_{k=1}^{r} a_{n,k}\, P_{n-k}(x), \qquad n \geq 0,$$

are said to be a sequence of quasi-orthogonal polynomials of order $r$ or, simply, $r$-quasi-orthogonal polynomials if $a_{n,r} \neq 0$. Here the $a_{n,k}$, $1 \leq k \leq r$, are real numbers, with the convention that $a_{n,k} = 0$ whenever $k > n$. Notice that for $r = 0$ we have the standard orthogonality. This notion was introduced by Riesz while studying the moment problem, and the reason for this nomenclature is rather simple: $Q_n$ is orthogonal to every polynomial of degree not exceeding $n - r - 1$ with respect to the functional of orthogonality of $\{P_n\}_{n\geq 0}$. M. Riesz himself considered only the case $r = 1$, while Fejér [22] concentrated his attention on the specific case when the $P_n$ are the Legendre polynomials and $r = 2$. It seems that Shohat [41] was the first who studied the general case. The renewed recent interest in quasi-orthogonal polynomials has brought a large number of interesting results. Peherstorfer [34, 35, 36] and Xu [44] obtained results concerning the location of the zeros of the quasi-orthogonal polynomials and the positivity of the Christoffel numbers when the $P_n$ are orthogonal on $[-1,1]$ with respect to a measure that belongs to Szegő's class. Xu [45] established general properties of quasi-orthogonal polynomials and, under the assumption that $\{Q_n\}_{n\geq 0}$ is also orthogonal, studied the relation between the Jacobi matrices associated with both sequences. The zeros of some quasi-orthogonal polynomials were studied recently by Beardon and Driver [9] and Brezinski, Driver and Redivo-Zaglia [12].
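As a concrete numerical illustration of this definition (a sketch only: the Legendre choice, the helper names `monic_legendre` and `inner`, and the sample coefficients are ours, not taken from the paper), the following Python snippet builds monic Legendre polynomials from the recurrence with $c_n = n^2/(4n^2-1)$, forms a quasi-orthogonal combination of order $r = 2$, and verifies numerically that it is orthogonal to all powers $x^m$ with $m \leq n - r - 1$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Monic Legendre polynomials via the three-term recurrence
# P_{n+1}(x) = x P_n(x) - c_n P_{n-1}(x),  c_n = n^2 / (4n^2 - 1).
def monic_legendre(N):
    polys = [np.array([1.0]), np.array([0.0, 1.0])]
    for n in range(1, N):
        c = n * n / (4.0 * n * n - 1.0)
        polys.append(P.polysub(P.polymulx(polys[n]), c * polys[n - 1]))
    return polys

def inner(p, q):
    # <p, q> = integral of p(x) q(x) over [-1, 1]
    ip = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, ip) - P.polyval(-1.0, ip)

N, r = 8, 2
polys = monic_legendre(N)

# Q_N = P_N + a_1 P_{N-1} + a_2 P_{N-2} with arbitrary real a_k, a_r != 0
a = [0.7, -0.3]
Q = polys[N]
for k in range(1, r + 1):
    Q = P.polyadd(Q, a[k - 1] * polys[N - k])

# Quasi-orthogonality of order r: <Q_N, x^m> = 0 for all m <= N - r - 1
for m in range(N - r):
    xm = np.zeros(m + 1)
    xm[m] = 1.0
    assert abs(inner(Q, xm)) < 1e-10
```

Here orthogonality is tested through exact integration of polynomials, so the tolerance only absorbs floating-point roundoff.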

Motivated by the relation between positive quadrature rules and quasi-orthogonal polynomials, we provide necessary and sufficient conditions in order that the sequence of polynomials $\{Q_n\}_{n\geq 0}$ obeying (1.2) is also orthogonal. The latter problem is purely algebraic in nature. We solve it via a constructive approach, by taking into account classical results on Sturm sequences. It becomes evident then that one may look at the solution in terms of a relation between the Jacobi matrices associated with the two sequences of orthogonal polynomials. As a result, the solution is explicit in the sense that we establish the connection between the three-term recurrence relations that generate the sequences $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$, as well as between the linear functionals related to them. These results allow us to draw conclusions about the nodes of two Gaussian-type quadrature formulas, which coincide with the zeros of the polynomials $P_n$ and $Q_n$. Moreover, the Christoffel numbers of the quadrature rules are obtained explicitly as a consequence of the closed forms of the corresponding kernel polynomials, which are also derived from our general approach.

The structure of the paper is as follows. In Section 2 we state the necessary and sufficient conditions for the orthogonality of a sequence of quasi-orthogonal polynomials of order $r$, as well as the expression of the polynomial associated with the Geronimus transformation of the initial linear functional. In Section 3 the proofs of those theorems are given, together with an algorithm to deduce the sequence of connection coefficients. Section 4 is focused on the relation between the corresponding Jacobi matrices. Thus, we have a computational approach to the zeros of the polynomials $Q_n$, since they are the eigenvalues of the leading principal submatrices of the corresponding Jacobi matrix, while the Christoffel numbers are obtained from the associated normalized eigenvectors. We also prove some results concerning the zeros of the polynomial $\phi$, as well as the expression of the kernel polynomials in terms of the initial ones. In Section 5 we analyze some examples illustrating the problems considered in the previous sections. First, the case when the orthogonality functional is symmetric is considered; the results are implemented for Chebyshev polynomials of the second kind. Second, the non-symmetric case is studied and implemented for Laguerre polynomials. Finally, we study the case of constant coefficients. In that case we solve a problem posed in [3] in such a way that, in a symmetric case, periodic sequences for the parameters of the three-term recurrence relation appear.
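The computational approach just mentioned can be sketched with the classical Golub–Welsch procedure. In this illustrative Python fragment (the helper name `gauss_from_jacobi` and the Legendre data, with first moment $\mu_0 = 2$, are our own assumptions, not taken from the paper), the quadrature nodes are the eigenvalues of the symmetrized Jacobi matrix and the Christoffel numbers come from the first components of the normalized eigenvectors:

```python
import numpy as np

# Golub-Welsch sketch for the monic Legendre recurrence: b_n = 0 and
# c_n = n^2 / (4n^2 - 1).  The Gauss nodes are the eigenvalues of the
# symmetrized n x n Jacobi matrix, and the Christoffel numbers are
# mu_0 times the squared first components of the normalized eigenvectors.
def gauss_from_jacobi(n, mu0=2.0):
    k = np.arange(1, n)
    off = np.sqrt(k * k / (4.0 * k * k - 1.0))   # sqrt(c_k)
    J = np.diag(off, 1) + np.diag(off, -1)       # diagonal b_k = 0
    nodes, V = np.linalg.eigh(J)                 # eigh: orthonormal eigenvectors
    weights = mu0 * V[0, :] ** 2
    return nodes, weights

nodes, weights = gauss_from_jacobi(5)
ref_nodes, ref_weights = np.polynomial.legendre.leggauss(5)
assert np.allclose(nodes, ref_nodes)
assert np.allclose(weights, ref_weights)
```

The comparison against `leggauss` confirms that, for the Legendre case, the eigenvalue route reproduces the classical Gauss nodes and weights.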

## 2. Orthogonality of quasi-orthogonal polynomials

The characterization of those quasi-orthogonal polynomials (1.2) which form a sequence of orthogonal polynomials themselves can be approached from a general point of view. Let $\mathbb{P}$ be the linear space of algebraic polynomials with complex coefficients. Then $\langle u, p \rangle$ denotes the action of the linear functional $u \in \mathbb{P}'$ on the polynomial $p \in \mathbb{P}$, where $\mathbb{P}'$ denotes the algebraic dual of the linear space $\mathbb{P}$. The sequence of monic orthogonal polynomials (SMOP) $\{P_n\}_{n\geq 0}$ with respect to the linear functional $u$ obeys the conditions $\langle u, P_n P_m \rangle = K_n \delta_{n,m}$, where $K_n \neq 0$ for all $n \geq 0$ and $\delta_{n,m}$ is the Kronecker delta. A linear functional $u$ is said to be regular or quasi-definite (see [16]) when the leading principal submatrices $H_n$ of the Hankel matrix composed of the moments $\mu_n = \langle u, x^n \rangle$, $n \geq 0$, are non-singular for each $n$. When the determinants of $H_n$ are positive for all nonnegative integers $n$, the functional is called positive-definite. If the linear functional $u$ is regular, then the SMOP satisfies the three-term recurrence relation (1.1) with $c_n \neq 0$, and if $u$ is positive-definite then $c_n > 0$. Conversely, if a sequence of polynomials is generated by the recurrence relation (1.1) with $c_n \neq 0$, then there is a linear functional $u$ such that $\{P_n\}_{n\geq 0}$ is a sequence of polynomials orthogonal with respect to $u$; this is the statement of Favard's theorem ([16]). Moreover, if $c_n > 0$ for every $n$, then the linear functional is positive-definite and it has an integral representation $\langle u, p \rangle = \int_{\mathbb{R}} p(x)\, d\mu(x)$, $p \in \mathbb{P}$, where $\mu$ is a positive Borel measure supported on an infinite subset of the real line (see [16]).
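Positive-definiteness can be tested numerically through the Hankel determinants. In this small sketch we choose, as our own example, the Lebesgue measure on $[-1,1]$, whose moments are $\mu_k = 2/(k+1)$ for even $k$ and $0$ for odd $k$; all leading principal minors of the Hankel matrix come out positive, as the theory predicts:

```python
import numpy as np

# Moments of the Lebesgue measure on [-1, 1]:
# mu_k = integral of x^k over [-1, 1] = 2/(k+1) for even k, 0 for odd k.
def moment(k):
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

# Determinants of the m x m leading principal Hankel submatrices,
# for m = 1, ..., n.
def hankel_dets(n):
    mus = [moment(k) for k in range(2 * n)]
    return [np.linalg.det(np.array([[mus[i + j] for j in range(m)]
                                    for i in range(m)]))
            for m in range(1, n + 1)]

dets = hankel_dets(6)
assert all(d > 0 for d in dets)   # the functional is positive-definite
```

The determinants decay quickly but remain strictly positive, matching the characterization of positive-definite functionals above.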

The linear functional $v$ is called a rational perturbation of $u$ if there exist non-zero polynomials $A$ and $B$ such that

$$A(x)\, v = B(x)\, u.$$

Detailed information about the direct problems, studied from several points of view, can be found in [2, 13, 23, 30, 46]. In particular, the connection formula between the polynomials orthogonal with respect to $u$ and $v$ is called the generalised Christoffel formula (see [23]). The relation between the corresponding Jacobi matrices was studied in [20].

Let $\{P_n\}_{n\geq 0}$ be a SMOP and let $M$ and $N$ be positive integers. Let us consider another sequence of monic polynomials $\{Q_n\}_{n\geq 0}$ related to $\{P_n\}_{n\geq 0}$ by

(2.1) $$Q_n(x) + \sum_{j=1}^{M-1} s_{n,j}\, Q_{n-j}(x) = P_n(x) + \sum_{k=1}^{N-1} t_{n,k}\, P_{n-k}(x), \qquad n \geq 0,$$

with $s_{n,M-1} \neq 0$ and $t_{n,N-1} \neq 0$. Then the problem of finding necessary and sufficient conditions so that $\{Q_n\}_{n\geq 0}$ is also a SMOP, and of obtaining the relation between the corresponding regular linear functionals, is called an inverse problem. Observe that we adopt the convention that when either $M$ or $N$ is equal to one the corresponding sum does not appear, that is, we interpret it as an empty one. A vast number of interesting results have been obtained on topics related to the inverse problem (see [1, 3, 4, 5, 10, 11, 25, 26, 32, 37]).

In the present contribution we focus our attention on the quasi-orthogonal polynomials defined by (1.2) under the only natural restriction $a_{n,r} \neq 0$ for $n \geq r$. This corresponds to the very general situation obtained by taking a single term on the left-hand side of (2.1) and $r+1$ terms on the right-hand side. Therefore, in what follows we consider this setting. Many particular results concerning the relation between the functionals $u$ and $v$ with respect to which the polynomial sequences $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$ are orthogonal are known [13, 15, 18, 19, 29, 46], but the general case that we discuss in the present contribution has not yet been approached in the literature. In this paper we provide necessary and sufficient conditions so that the sequence of monic polynomials $\{Q_n\}_{n\geq 0}$ is also orthogonal.

Let $\{P_n\}_{n\geq 0}$ be a SMOP corresponding to a regular linear functional $u$. Now we give the necessary and sufficient conditions ensuring the orthogonality of a monic polynomial sequence $\{Q_n\}_{n\geq 0}$ that satisfies the three-term recurrence relation

$$Q_{n+1}(x) = (x - \beta_n)\,Q_n(x) - \gamma_n\,Q_{n-1}(x), \qquad n \geq 0,$$

with the initial conditions $Q_{-1}(x) = 0$ and $Q_0(x) = 1$, and the condition $\gamma_n \neq 0$ for $n \geq 1$.

###### Theorem 2.1.

Let $\{Q_n\}_{n\geq 0}$ be a sequence of monic polynomials defined by (1.2). Then $\{Q_n\}_{n\geq 0}$ is a SMOP with recurrence coefficients $\beta_n$ and $\gamma_n$ if and only if the coefficients $a_{n,k}$, $1 \leq k \leq r$, satisfy the following conditions

(2.2)

(2.3)

(2.4)

and

(2.5)

for the admissible ranges of the indices $n$ and $k$.

Moreover, the recurrence coefficients of $\{Q_n\}_{n\geq 0}$ are given by

(2.6)

(2.7)

and the coefficients $a_{n,k}$ also satisfy

(2.8)

The above relations provide a complete characterization of the orthogonality of the polynomial sequence $\{Q_n\}_{n\geq 0}$. Under the appropriate specialization of the parameters one recovers Theorem 1 in [3].

On the other hand, a natural question arises about the relation between the regular linear functionals $u$ and $v$ such that $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$ are the corresponding SMOP. In this case, the functional $v$ which describes the orthogonality of the sequence $\{Q_n\}_{n\geq 0}$ is a Geronimus spectral transformation of degree $r$ of the linear functional $u$. In other words, $\phi(x)\, v = u$, where $\phi$ is a polynomial of degree $r$ (see [32]). Our next result furnishes a method to determine $\phi$.

###### Theorem 2.2.

The coefficients of the polynomial

(2.9) $$\phi(x) = d_r\, x^r + d_{r-1}\, x^{r-1} + \cdots + d_1\, x + d_0$$

such that $\phi(x)\, v = u$ are the unique solution of a system of linear equations, where the entries of the corresponding matrix depend only on the sequences of connection coefficients $a_{n,k}$, $1 \leq k \leq r$.

A detailed description of the linear system and of the explicit form of the coefficients will be given in the sequel.

It is worth pointing out that an alternative way to compute the coefficients of $\phi$ is via a relation between the Jacobi matrices associated with the sequences $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$. We discuss this method in Section 4.

Since the quasi-orthogonal polynomials arise naturally in the context of quadrature formulae of Gaussian type, many of their properties that can be classified as analytic rather than algebraic, such as the behaviour of their zeros and the positivity of the Christoffel numbers, have been analysed. Most of these results deal with rather specific particular cases, when either $r$ is a small integer or the orthogonal polynomials belong to classical families. In Section 4.2 we obtain some results about the zeros of the polynomials $Q_n$ and $\phi$.

Many illustrative examples are analysed, both when the linear functional $u$ is symmetric and when one deals with constant connection coefficients. The latter problem is motivated by a result in [24], where the initial sequence consists of Chebyshev polynomials.

## 3. Proofs of Theorems 2.1 and 2.2 and the direct problem

### 3.1. Proof of Theorem 2.1

The core of the overall approach is a classical result of Sturm [42] on counting the number of real zeros of an algebraic polynomial. We refer to [39, Section 10.5] and [33, Sections 2.4, 2.5] for detailed information about various versions of Sturm's result as well as about the historical background. We state the general version of Sturm's theorem in the setting we need. Let $f_0$ and $f_1$ be monic polynomials of exact degrees $n$ and $n-1$, respectively. Execute the Euclidean algorithm

(3.1) $$f_{k-1}(x) = (x - b_k)\, f_k(x) - c_k\, f_{k+1}(x), \qquad k = 1, \ldots, n-1,$$

where each remainder is normalized so that $f_{k+1}$ is monic of degree $n-k-1$.

A careful inspection of the general version of Sturm’s theorem shows that the following holds:

###### Theorem A.

(Sturm) Under the above assumptions, the polynomials $f_0$ and $f_1$ have real and strictly interlacing zeros if and only if $c_1, \ldots, c_{n-1}$ are positive real numbers. In that case the zeros of each polynomial $f_k$ are all real and the zeros of two consecutive polynomials are strictly interlacing.

It follows immediately from Theorem A and Favard's theorem that, given two polynomials $f_0$ and $f_1$ of consecutive degrees, with positive leading coefficients and with real and strictly interlacing zeros, the Euclidean algorithm (3.1) generates the polynomials $f_2, f_3, \ldots, f_{n-1}$, and these are, in reverse order, the first terms of a sequence of orthogonal polynomials, which can then be extended by using the standard three-term recurrence relation. In other words, any two polynomials of consecutive degrees and interlacing zeros may be "embedded" in a sequence of orthogonal polynomials. This straightforward but beautiful observation was pointed out by Wendroff [43], and the statement is nowadays called Wendroff's theorem. Observe that $f_0$ and $f_1$ generate $f_2, \ldots, f_{n-1}$ uniquely "backwards" via (3.1), while the sequence can be extended "forward" in various ways. The complete characterization of the sequences of orthogonal polynomials $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$ related by (1.2) is obtained via Theorem A.
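This "backward" generation can be sketched in Python (NumPy): starting from two monic polynomials of consecutive degrees with interlacing zeros, each Euclidean division step recovers one pair of recurrence coefficients together with the next monic polynomial. The helper name `backward_recurrence` and the Legendre test data are our own illustrative choices, and the normalization of each remainder assumes the monic setting described above.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Backward Euclidean algorithm: from monic p_n, p_{n-1} with interlacing
# zeros, each division p_k = (x - b_k) p_{k-1} - c_k p_{k-2} yields
# (b_k, c_k) with c_k > 0 and the next monic polynomial p_{k-2}.
def backward_recurrence(pn, pn1):
    coeffs, p, q = [], pn, pn1
    while len(q) > 1:
        quo, rem = P.polydiv(p, q)
        b = -quo[0]                    # quotient is x - b_k
        if len(rem) == 1 and abs(rem[0]) < 1e-12:
            coeffs.append((b, 0.0))    # exact division: degenerate case
            break
        c = -rem[-1]                   # rem = -c_k * p_{k-2}, p_{k-2} monic
        coeffs.append((b, c))
        p, q = q, rem / (-c)
    return coeffs

# Demo with monic Legendre P_5 and P_4 (recurrence c_n = n^2/(4n^2 - 1))
polys = [np.array([1.0]), np.array([0.0, 1.0])]
for n in range(1, 5):
    c = n * n / (4.0 * n * n - 1.0)
    polys.append(P.polysub(P.polymulx(polys[n]), c * polys[n - 1]))

for (b, c), n in zip(backward_recurrence(polys[5], polys[4]), [4, 3, 2, 1]):
    assert abs(b) < 1e-10
    assert abs(c - n * n / (4.0 * n * n - 1.0)) < 1e-10
```

The recovered pairs $(b_k, c_k)$ match the known Legendre recurrence coefficients, with all $c_k > 0$, as Theorem A requires for interlacing zeros.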

Proof of Theorem 2.1.

Applying the Euclidean algorithm (3.1) with "initial" polynomials $Q_n$ and $Q_{n-1}$, we obtain

where is a polynomial of degree at most . Using (1.2) together with the recurrence relation (1.1) we conclude that

(3.2)

where and . Moreover, when , we have for all .

Now we can determine necessary and sufficient conditions in order that these two polynomials coincide, i.e.,

(3.3)

Comparing the coefficients that multiply and in (3.2) and (3.3) we derive the conditions

and the latter obviously correspond to (2.6) and (2.7). This means that

(3.4)

Since , we obtain the constraint

which is exactly (2.2).

Similarly, comparing the coefficients of in (3.2) and (3.3), we obtain the following conditions:

(3.5)

(3.6)

and

(3.7)

Now (2.3) follows from (3.6) and (3.7) while (2.4) is a consequence of (3.4) and (3.7). Finally, (3.4) and (3.5) imply (2.5).

Finally, it is important to check that at the last step the coefficient $a_{n,r}$ is different from zero, in order to be consistent with the quasi-orthogonality condition. This completes the proof.

Theorem 2.1 also provides a forward algorithm to compute the coefficients $a_{n,k}$. Starting with the initial coefficients, from the linear combination

we choose the coefficients for and write

Then we compute , for , using equation (2.3) and , , , and (see the first scheme in Fig. 1). We compute , for , using equation (2.4) and , , , and (see the second scheme in Fig. 1).

We compute , for and , using equation (2.5) and , , , , and also , , , and . This is illustrated as the first scheme in Fig. 2. Alternatively, , for and , is given by

using , , , , and also , , , , (see the second scheme in Fig. 2).

As we have pointed out above, after the computations at each level it is necessary to verify that the corresponding coefficient does not vanish.

The initial coefficients , , for , starting from and , are uniquely determined by the “backward” process described by the Euclidean algorithm and by Theorem A.

Let us notice the key role played by the connection coefficients for the polynomials and as initial data to run the above algorithm.

In summary, one can generate the coefficients of the quasi-orthogonal polynomials in a recursive way, assuming some initial conditions.

### 3.2. Proof of Theorem 2.2

The dual basis of is defined, as usual, by the conditions (see [31])

It is easy to see that the elements of the basis, dual to SMOP with respect to the regular linear functional , are . Let us define the left-multiplication of a linear functional by any polynomial via

Let $\{Q_n\}_{n\geq 0}$, given by relation (1.2), be a SMOP with respect to a regular linear functional $v$. According to [31], if we use the expansion of the linear functional $u$ in terms of the dual basis of the SMOP $\{Q_n\}_{n\geq 0}$, then, in view of the orthogonality properties and relation (1.2), we obtain the following relation between the corresponding linear functionals.

###### Lemma 3.1.

(3.8)

where is a polynomial of degree because its leading coefficient is

Proof of Theorem 2.2.

For we have

For , we obtain

(3.9)

Since, for

assuming and using (1.2), we derive

Now we write the equations (3.2) as a system of linear equations where

The latter can be rewritten in the form

Solving this linear system by back substitution, we obtain

(3.10)

In order to simplify (3.10), let $J$ be the tridiagonal (Jacobi) matrix corresponding to the SMOP $\{P_n\}_{n\geq 0}$, that is,

$$J = \begin{pmatrix} b_0 & 1 & & \\ c_1 & b_1 & 1 & \\ & c_2 & b_2 & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$

where $b_n$ and $c_n$ are the coefficients of the three-term recurrence relation (1.1).

Notice that, for and we have

where $(J^k)_{i,j}$ denotes the $(i,j)$ entry of the matrix $J^k$. Then the equalities

hold for .

Now it is clear that these inner products can be expressed in terms of the connection coefficients $a_{n,k}$ and the recurrence coefficients. Indeed, we rewrite (1.2) in the form

which implies

for , so that

(3.12)

Using equations (3.12), for and including the equation we obtain the following system of equations:

(3.13)

Let us denote by the matrix of the latter system. Then the solution , , is obtained in terms of the coefficients , , and .

Replacing the solution of (3.13) into (3.2) we conclude that

where the inverse of the matrix of the system (3.13) is involved. Finally we solve the system (3.10) and find all the coefficients of the polynomial $\phi$ as functions of the connection coefficients and the recurrence coefficients. Thus, Theorem 2.2 is proved.

The above result shows that the sequences of connection coefficients defined in Theorem 2.1 must satisfy the constraints on the coefficients of the polynomial $\phi$ given in Theorem 2.2. In other words, these sequences, together with the coefficients of the three-term recurrence relation, determine the polynomial $\phi$ uniquely. Moreover, since the matrix of the system is nonsingular, any polynomial of the form (2.9) determines uniquely the coefficients $a_{n,k}$. We discuss this question thoroughly in the next section.

Notice that the latter observations provide not only an algorithm to calculate $\phi$, but also an alternative proof of the relation between the Geronimus transformation and the quasi-orthogonal polynomials.

It is easy to see from (3.10) that the leading coefficient of $\phi$ is given, in an alternative way, by

(3.14)

Considering and the normalization , we obtain

where are given by .