Algebraic Diagonals and Walks
The diagonal of a multivariate power series $F$ is the univariate power series $\operatorname{Diag} F$ generated by the diagonal terms of $F$. Diagonals form an important class of power series; they occur frequently in number theory, theoretical physics and enumerative combinatorics. We study algorithmic questions related to diagonals in the case where $F$ is the Taylor expansion of a bivariate rational function. It is classical that in this case $\operatorname{Diag} F$ is an algebraic function. We propose an algorithm that computes an annihilating polynomial for $\operatorname{Diag} F$. Generically, it is its minimal polynomial and is obtained in time quasi-linear in its size. We show that this minimal polynomial has an exponential size with respect to the degree of the input rational function. We then address the related problem of enumerating directed lattice walks. The insight given by our study leads to a new method for expanding the generating power series of bridges, excursions and meanders. We show that their first $N$ terms can be computed in quasi-linear complexity in $N$, without first computing a very large polynomial equation.
Inria, Laboratoire LIP (U. Lyon, CNRS, ENS Lyon, UCBL)
Categories and Subject Descriptors:
I.1.2 [Computing Methodologies]: Symbolic and Algebraic Manipulations — Algebraic Algorithms
General Terms: Algorithms, Theory.
Keywords: Diagonals, walks, algorithms.
Context. The diagonal of a multivariate power series $F$ with coefficients $(f_{i_1,\dots,i_n})$ is the univariate power series $\operatorname{Diag} F$ with coefficients $(f_{i,\dots,i})$. Particularly interesting is the class of diagonals of rational power series (i.e., Taylor expansions of rational functions). In particular, diagonals of bivariate rational power series are always roots of nonzero bivariate polynomials (i.e., they are algebraic series) [?, ?]. Since it is also classical that algebraic series are D-finite (i.e., they satisfy linear differential equations with polynomial coefficients), their coefficients satisfy linear recurrences, which leads to an optimal algorithm for the computation of their first $N$ terms [?, ?, ?]. In this article, we determine the degrees of these polynomials, the cost of their computation and related applications.
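A concrete instance (our illustration, not taken from the text): for $F = 1/(1-x-y)$, whose Taylor coefficients are $f_{i,j} = \binom{i+j}{i}$, the diagonal is $\sum_n \binom{2n}{n} t^n = 1/\sqrt{1-4t}$, an algebraic series satisfying $(1-4t)\,D(t)^2 = 1$. A quick Python check on truncated series:

```python
from math import comb

N = 8  # truncation order (illustrative choice)

# Taylor coefficients of 1/(1 - x - y): f_{i,j} = C(i+j, i)
f = [[comb(i + j, i) for j in range(N)] for i in range(N)]

# diagonal terms f_{n,n} = C(2n, n), the central binomial coefficients
diag = [f[n][n] for n in range(N)]
assert diag == [comb(2 * n, n) for n in range(N)]

# check the algebraic relation (1 - 4t) * D(t)^2 = 1 on truncations:
# D2 holds the coefficients of D(t)^2, lhs those of (1 - 4t) * D(t)^2
D2 = [sum(diag[k] * diag[n - k] for k in range(n + 1)) for n in range(N)]
lhs = [D2[n] - (4 * D2[n - 1] if n > 0 else 0) for n in range(N)]
assert lhs == [1] + [0] * (N - 1)
```

The check succeeds at any truncation order, reflecting the exact identity $(1-4t)D^2 = 1$.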
Previous work. The algebraicity of bivariate diagonals is classical. The same is true for the converse; moreover, the property persists for multivariate rational series in positive characteristic [?, ?, ?]. The first occurrence we are aware of in the literature is Pólya’s article [?], which deals with a particular class of bivariate rational functions; the proof uses elementary complex analysis. Along the lines of Pólya’s approach, Furstenberg [?] gave a (sketchy) proof of the general result, over the field of complex numbers; the same argument has been enhanced later [?], [?, §6.3]. Three further proofs exist: a purely algebraic one that works over arbitrary fields of characteristic zero [?, Th. 6.1] (see also [?, Th. 6.3.3]), one based on non-commutative power series [?, Prop. 5], and a combinatorial proof [?, §3.4.1]. Despite the richness of the topic and the fact that most proofs are constructive in essence, we were not able to find in the literature any explicit algorithm for computing a bivariate polynomial that cancels the diagonal of a general bivariate rational function.
Diagonals of rational functions appear naturally in enumerative combinatorics. In particular, the enumeration of unidimensional walks has been the subject of recent activity, see [?] and the references therein. The algebraicity of generating functions attached to such walks is classical as well, and related to that of bivariate diagonals. Beyond this structural result, several quantitative and effective results are known. Explicit formulas give the generating functions in terms of implicit algebraic functions attached to the set of allowed steps in the case of excursions [?, §4], [?], bridges and meanders [?]. Moreover, if $c$ and $d$ denote the upper and lower amplitudes of the allowed steps, a bound on the degrees of the equations for excursions has been obtained by Bousquet-Mélou, and shown to be tight for a specific family of step sets, as well as generically [?, §2.1]. From the algorithmic viewpoint, Banderier and Flajolet gave an algorithm (called the Platypus Algorithm) that computes a polynomial annihilating the generating function for excursions [?, §2.3].
Contributions. We design (Section Algebraic Diagonals and Walks) the first explicit algorithm for computing a polynomial equation for the diagonal of an arbitrary bivariate rational function. We analyze its complexity and the size of its output in Theorem LABEL:thm:bound_diagonals. The algorithm has two main steps. The first step is the computation of a polynomial equation for the residues of a bivariate rational function. We propose an efficient algorithm for this task, which is a polynomial-time version of Bronstein’s algorithm [?]; the corresponding size and complexity bounds are given in Theorem 10. The second step is the computation of a polynomial equation for the sums of a fixed number of roots of a given polynomial. We design an additive version of the Platypus algorithm [?, §2.3] and analyze it in Theorem 12. We show in Proposition LABEL:prop:generic that, generically, the size of the minimal polynomial for the diagonal of a rational function is exponential in the degree of the input, and that our algorithm computes it in quasi-optimal complexity (Theorem LABEL:thm:bound_diagonals).
In the application to walks, we show how to expand to high precision the generating functions of bridges, excursions and meanders. Our main message is that pre-computing a polynomial equation for them is too costly, since that equation might have exponential size in the maximal amplitude of the allowed steps. Our algorithms have quasi-linear complexity in the precision $N$ of the expansion, while keeping the pre-computation step in complexity polynomial in the amplitudes (Theorem LABEL:thm:walks).
Structure of the paper. After a preliminary section on background and notation, we first discuss several special bivariate resultants of independent interest in Section 9. Next, we consider diagonals, the size of their minimal polynomials and an efficient way of computing annihilating polynomials in Section Algebraic Diagonals and Walks.
In this section, which may be skipped at first reading, we introduce notation and technical results that will be used throughout the article.
In this article, $\mathbb{K}$ denotes a field of characteristic 0. We denote by $\mathbb{K}[x]_{<n}$ the set of polynomials in $x$ of degree less than $n$. Similarly, $\mathbb{K}(x)_{<n}$ stands for the set of rational functions in $x$ with numerator and denominator in $\mathbb{K}[x]_{<n}$, and $\mathbb{K}[[x]]_{<n}$ for the set of power series in $x$ truncated at precision $n$.
If $P$ is a polynomial in $\mathbb{K}[x, y]$, then its degree with respect to $x$ (resp. $y$) is denoted $\deg_x P$ (resp. $\deg_y P$), and the bidegree of $P$ is the pair $(\deg_x P, \deg_y P)$. The notation $\deg$ is used for univariate polynomials. Inequalities between bidegrees are component-wise. The set of polynomials in $\mathbb{K}[x, y]$ of bidegree less than $(n, m)$ is denoted by $\mathbb{K}[x]_{<n}[y]_{<m}$, and similarly for more variables.
The valuation of a polynomial or a power series $S$ is its smallest exponent with a nonzero coefficient. It is denoted $\operatorname{val} S$, with the convention $\operatorname{val} 0 = +\infty$.
The reciprocal of a polynomial $P \in \mathbb{K}[y]$ of degree $d$ is the polynomial $\operatorname{rev}(P) := y^d P(1/y)$. If $P \in \mathbb{K}[y]$, the notation $N(P)$ stands for the generating series of the Newton sums of $P$: $N(P) := \sum_{i \ge 0} p_i t^i$, where $p_i$ denotes the sum of the $i$th powers of the roots of $P$, counted with multiplicities.
A squarefree decomposition of a nonzero polynomial $Q \in \mathbb{A}[y]$, where $\mathbb{A} = \mathbb{K}$ or $\mathbb{A} = \mathbb{K}[x]$, is a factorization $Q = Q_1 Q_2^2 \cdots Q_m^m$, with each $Q_i$ squarefree, the $Q_i$’s pairwise coprime and $Q_m \neq 1$. The corresponding squarefree part of $Q$ is the polynomial $Q^\star = Q_1 Q_2 \cdots Q_m$. If $Q$ is squarefree then $Q^\star = Q$.
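The definition can be exercised in a computer algebra system; the sketch below (using SymPy’s `sqf_list`, on an example polynomial of our choosing) computes a squarefree decomposition and the corresponding squarefree part:

```python
import sympy as sp

y = sp.symbols('y')
# Q = Q_1 * Q_2^2 * Q_3^3 * ... in the notation above
Q = (y - 1) * (y + 2)**2 * (y**2 + 1)**3

content, factors = sp.sqf_list(Q)   # factors: list of pairs (Q_i, i)
reconstructed = content * sp.prod(qi**i for qi, i in factors)
assert sp.expand(reconstructed - Q) == 0

# squarefree part: product of the distinct (pairwise coprime) Q_i
squarefree_part = sp.prod(qi for qi, _ in factors)
assert sp.expand(squarefree_part - (y - 1) * (y + 2) * (y**2 + 1)) == 0
```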
The coefficient of $t^n$ in a power series $S$ is denoted $[t^n] S$. If $S = \sum_{i \ge 0} s_i t^i$, then $S \bmod t^n$ denotes the polynomial $s_0 + s_1 t + \cdots + s_{n-1} t^{n-1}$. The exponential series is denoted $\exp(t) := \sum_{i \ge 0} t^i / i!$. The Hadamard product of two power series $A$ and $B$ is the power series $A \odot B$ such that $[t^n](A \odot B) = ([t^n] A)\,([t^n] B)$ for all $n$.
If $F = \sum_{i, j \ge 0} f_{i,j}\, x^i y^j$ is a bivariate power series in $\mathbb{K}[[x, y]]$, the diagonal of $F$, denoted $\operatorname{Diag} F$, is the univariate power series in $\mathbb{K}[[t]]$ defined by $\operatorname{Diag} F := \sum_{i \ge 0} f_{i,i}\, t^i$.
In several places, we need bounds on degrees of coefficients of bivariate rational series. In most cases, these power series belong to and have a very constrained structure: there exists a polynomial and an integer such that the power series can be written
with and , for all . We denote by the set of such power series. Its main properties are summarized as follows.
Let , and .
The set is a subring of ;
Let with , then ;
The products obey
For (3), if and belong respectively to and , then the th coefficient of their product is a sum of terms of the form . Therefore, the degree of the numerator is bounded by , whence (3) is proved. Property (1) is proved similarly. In Property (2), the condition on makes well-defined. The result follows from (1). ∎
As consequences, we deduce the following two results.
Let with be such that . Let be a squarefree part of . Then
Write with . Then the result when is squarefree () follows from Part (2) of Lemma 1, with . The general case then follows from Parts (1,3). ∎
Let and be polynomials in , with , and . Then for all ,
The Taylor expansion of has as coefficients the derivatives of . We consider it either in or in . Corollary 2 applies directly for the degree in . The saving on the degree in follows from observing that in the first part of the proof of the corollary, the decomposition has the property that . This is then propagated along the proof thanks to Part (3) of Lemma 1. ∎
We recall classical complexity notation and facts for later use. Let $\mathbb{K}$ again be a field of characteristic zero. Unless otherwise specified, we estimate the cost of our algorithms by counting arithmetic operations in $\mathbb{K}$ (denoted “ops.”) at unit cost. The soft-O notation $\tilde{O}(\cdot)$ indicates that polylogarithmic factors are omitted in the complexity estimates. We say that an algorithm has quasi-linear complexity if its complexity is $\tilde{O}(n)$, where $n$ is the maximal arithmetic size (number of coefficients in $\mathbb{K}$ in a dense representation) of the input and of the output. In that case, the algorithm is said to be quasi-optimal.
Univariate operations. Throughout this article we will use the fact that most operations on polynomials, rational functions and power series in one variable can be performed in quasi-linear time. Standard references for these questions are the books [?] and [?]. The needed results are summarized in Fact 4 below.
The following operations can be performed in $\tilde{O}(n)$ ops. in $\mathbb{K}$:
addition, product and differentiation of elements in $\mathbb{K}[x]_{<n}$, $\mathbb{K}(x)_{<n}$ and $\mathbb{K}[[x]]_{<n}$; integration in $\mathbb{K}[x]_{<n}$ and $\mathbb{K}[[x]]_{<n}$;
extended gcd, squarefree decomposition and resultant in $\mathbb{K}[x]_{<n}$;
multipoint evaluation in $\mathbb{K}[x]_{<n}$, at $n$ points in $\mathbb{K}$; interpolation in $\mathbb{K}[x]_{<n}$ and $\mathbb{K}(x)_{<n}$ from $n$ (resp. $2n$) values at pairwise distinct points in $\mathbb{K}$;
inverse, logarithm, exponential in $\mathbb{K}[[x]]_{<n}$ (when defined);
conversions between a monic polynomial of degree $n$ and its Newton series at precision $n$.
Multivariate operations. Basic operations on polynomials, rational functions and power series in several variables are hard questions from the algorithmic point of view. For instance, no general quasi-optimal algorithm is currently known for computing resultants of bivariate polynomials, even though in several important cases such algorithms are available [?]. Multiplication is the most basic non-trivial operation in this setting. The following result can be proved using Kronecker’s substitution; it is quasi-optimal for a fixed number of variables $n$.
Polynomials in $\mathbb{K}[x_1]_{<d} \cdots [x_n]_{<d}$ and power series in $\mathbb{K}[[x_1, \dots, x_n]]$ truncated at precision $d$ in each variable can be multiplied using $\tilde{O}(d^n)$ ops.
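Kronecker’s substitution, mentioned above, packs a bivariate polynomial into a univariate one by mapping $y \mapsto x^K$ for $K$ larger than the $x$-degree of the product, so that exponents cannot collide. A minimal Python sketch (the sparse dict encoding is our choice, and a schoolbook univariate product stands in for the fast one):

```python
def kronecker_mul(A, B):
    """Multiply bivariate polynomials given as dicts {(i, j): coeff}."""
    dxA = max(i for i, _ in A)
    dxB = max(i for i, _ in B)
    K = dxA + dxB + 1                      # x-degree of the product is < K
    # pack: monomial x^i y^j  ->  univariate exponent i + K*j
    a, b = {}, {}
    for (i, j), c in A.items():
        a[i + K * j] = a.get(i + K * j, 0) + c
    for (i, j), c in B.items():
        b[i + K * j] = b.get(i + K * j, 0) + c
    # univariate product (schoolbook here; FFT-based in the fast version)
    prod = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            prod[ea + eb] = prod.get(ea + eb, 0) + ca * cb
    # unpack: exponent e  ->  (e mod K, e div K)
    return {(e % K, e // K): v for e, v in prod.items() if v}

# (1 + x + y) * (x*y) = x*y + x^2*y + x*y^2
A = {(0, 0): 1, (1, 0): 1, (0, 1): 1}
B = {(1, 1): 1}
print(kronecker_mul(A, B))
```

With a quasi-linear univariate product in place of the double loop, the cost matches the $\tilde{O}(d^n)$ bound of the fact above (for $n = 2$).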
A related operation is multipoint evaluation and interpolation. The simplest case is when the evaluation points form an $n$-dimensional tensor product grid $V_1 \times \cdots \times V_n$, where $V_i$ is a set of cardinality $d_i$.
[?] Polynomials in $\mathbb{K}[x_1]_{<d_1} \cdots [x_n]_{<d_n}$ can be evaluated and interpolated from the values they take on the $d_1 \cdots d_n$ points of an $n$-dimensional tensor product grid using $\tilde{O}(d_1 \cdots d_n)$ ops.
Again, the complexity in Fact 6 is quasi-optimal for fixed $n$.
A general (although non-optimal) technique to deal with more involved operations on multivariate algebraic objects (e.g., in $\mathbb{K}[x, y]$) is to use (multivariate) evaluation and interpolation on polynomials and to perform operations on the evaluated algebraic objects using Facts 4–6. To put this strategy into practice, the size of the output needs to be well controlled. We illustrate this philosophy on the example of resultant computation, based on the following easy variation of [?, Thm. 6.22].
Let $P$ and $Q$ be bivariate polynomials of respective bidegrees $(d_x, d_y)$ and $(e_x, e_y)$. Then,
$$\deg_x \operatorname{Res}_y(P, Q) \le d_x e_y + e_x d_y.$$
Let $P$ and $Q$ be polynomials in $\mathbb{K}[x, y]$ of respective bidegrees $(d_x, d_y)$ and $(e_x, e_y)$. Then $R := \operatorname{Res}_y(P, Q)$ belongs to $\mathbb{K}[x]_{<D+1}$, where $D = d_x e_y + e_x d_y$. Moreover, the coefficients of $R$ can be computed using $\tilde{O}(D\,(d_y + e_y))$ ops. in $\mathbb{K}$.
The degree estimates follow from Fact 7. To compute $R$, we use an evaluation-interpolation scheme: $P$ and $Q$ are evaluated at $D + 1$ points; $D + 1$ univariate resultants in $\mathbb{K}[y]$ are computed; $R$ is recovered by interpolation. By Fact 6, the evaluation and interpolation steps are performed in $\tilde{O}(D\,(d_y + e_y))$ ops. The second step has cost $\tilde{O}(D \max(d_y, e_y))$. Using the inequality $\max(d_y, e_y) \le d_y + e_y$ concludes the proof. ∎
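The scheme just described can be replayed in a few lines of SymPy (the example pair $P, Q$ is ours). One practical caveat, reflected in the sketch: the evaluation points must avoid the zeros of the leading coefficients in $y$, so that specializing $x$ commutes with taking the resultant:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = x * y**2 + (x + 1) * y + 1       # bidegree (1, 2)
Q = y**2 + x**2 * y + x              # bidegree (2, 2)

# Fact 7: deg_x Res_y(P, Q) <= d_x*e_y + e_x*d_y = 1*2 + 2*2 = 6
npts = 7
pts = list(range(1, npts + 1))       # avoid x = 0, where lc_y(P) vanishes

# evaluate, take univariate resultants, interpolate back in x
vals = [sp.resultant(P.subs(x, a), Q.subs(x, a), y) for a in pts]
R = sp.interpolate(list(zip(pts, vals)), x)

assert sp.expand(R - sp.resultant(P, Q, y)) == 0
```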
We conclude this section by recalling a complexity result for the computation of a squarefree decomposition of a bivariate polynomial.
[?] A squarefree decomposition of a polynomial in $\mathbb{K}[x, y]$ of bidegree $(d_x, d_y)$ can be computed using $\tilde{O}(d_x d_y)$ ops.
We are interested in a polynomial that vanishes at the residues of a given rational function. It is a classical result in symbolic integration that in the case of simple poles, there is a resultant formula for such a polynomial, first introduced by Rothstein [?] and Trager [?]. This was later generalized by Bronstein [?] to accommodate multiple poles as well. However, as mentioned by Bronstein, the complexity of his method grows exponentially with the multiplicity of the poles. Instead, we develop in this section an algorithm with polynomial complexity.
Let $F = A/B$ be a nonzero element of $\mathbb{K}(y)$, where $A, B$ are two coprime polynomials in $\mathbb{K}[y]$. Let $B = B_1 B_2^2 \cdots B_m^m$ be a squarefree decomposition of $B$. For $1 \le i \le m$, if $\beta$ is a root of $B_i$ in an algebraic extension of $\mathbb{K}$, then it is simple and the residue of $F$ at $\beta$ is the coefficient of $(y - \beta)^{-1}$ in the Laurent expansion of $F$ at $\beta$. If $C_i$ is the polynomial $B / B_i^i$, this residue is the coefficient of $(y - \beta)^{i-1}$ in the Taylor expansion at $\beta$ of the regular rational function $(y - \beta)^i F$, computed with rational operations only and then evaluated at $\beta$. If this coefficient is denoted $N_i(\beta)/D_i(\beta)$, with polynomials $N_i$ and $D_i$, the residue at $\beta$ is a root of $\operatorname{Res}_y(B_i, N_i - z D_i)$. When $i = 1$, this is exactly the Rothstein-Trager resultant. This computation leads to Algorithm 1, which avoids the exponential blowup of the complexity that would follow from a symbolic pre-computation of the Bronstein resultants.
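In the simple-pole case the Rothstein-Trager resultant can be checked directly on a small example of our choosing: for coprime $A, B$ with $B$ squarefree, the residues of $A/B$ are the roots in $z$ of $\operatorname{Res}_y(B, A - z B')$. A SymPy sketch (our illustration, not the paper’s Algorithm 1):

```python
import sympy as sp

y, z = sp.symbols('y z')
A = sp.Integer(1)
B = y**2 - 1                 # simple poles at y = 1 and y = -1

# Rothstein-Trager resultant: its roots in z are the residues of A/B
R = sp.resultant(B, A - z * sp.diff(B, y), y)
roots = sp.roots(sp.Poly(R, z))

# 1/(y^2 - 1) = (1/2)/(y - 1) - (1/2)/(y + 1), so the residues are +-1/2
assert set(roots) == {sp.Rational(1, 2), sp.Rational(-1, 2)}
```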
Let be an integer, and let be the rational function . The poles have order . In this example, the algorithm can be performed by hand for arbitrary : a squarefree decomposition has and , the other ’s being 1. Then and the next step is to expand
Expanding the binomial series gives the coefficient of as , with
The residues are then cancelled by , namely
Bounds. In our applications, as in the previous example, the polynomials and have coefficients that are themselves polynomials in another variable . Let then , , and be the bidegrees in of , , and , where is a squarefree part of . In Algorithm 1, has degree at most in and total degree in . Similarly, has degree in and total degree in . When , by Proposition 3, the coefficient in the power series expansion of has denominator of bidegree bounded by and numerator of bidegree bounded by . Thus by Fact 7, is at most
while its degree in is bounded by the number of residues . Summing over all leads to the bound
If , a direct computation gives the bound .
Let . Let be a squarefree part of wrt y. Let be bounds on the bidegree of . Then the polynomial computed by Algorithm 1 annihilates the residues of , has degree in bounded by and degree in bounded by
It can be computed in operations in .
Note that both bounds above (when and ) are upper bounded by , independently of the multiplicities. The complexity is also bounded independently of the multiplicities by .
The bounds on the bidegree of are easily derived from the previous discussion.
By Fact 9, a squarefree decomposition of can be computed using ops. We now focus on the computations performed inside the $i$th iteration of the loop. Computing requires an exact division of polynomials of bidegrees at most ; this division can be performed by evaluation-interpolation in ops. Similarly, the trivariate polynomial can be computed by evaluation-interpolation with respect to in time . By the discussion preceding Theorem 10, both and have bidegrees at most , where and . They can be computed by evaluation-interpolation in ops. Finally, the resultant has bidegree at most , and since the degree in of and is at most , it can be computed by evaluation-interpolation in ops by Lemma 8. The total cost of the loop is thus , where
Using the (crude) bounds , , and shows that is bounded by
which, by using the inequalities and , is seen to belong to .
Gathering together the various complexity bounds yields the stated bound and finishes the proof of the theorem. ∎
Remark. Note that one could also use Hermite reduction combined with the usual Rothstein-Trager resultant in order to compute a polynomial that annihilates the residues. Indeed, Hermite reduction computes an auxiliary rational function that admits the same residues as the input, while only having simple poles. A close inspection of this approach provides the same bound for the degree in of , but a less tight bound for its degree in , namely worse by a factor of . The complexity of this alternative approach appears to be (using results from [?]), to be compared with the complexity bound from Theorem 10.
Given a polynomial $P$ of degree $d$ with coefficients in a field $\mathbb{K}$ of characteristic 0, let $\alpha_1, \dots, \alpha_d$ be its roots in the algebraic closure of $\mathbb{K}$. For any positive integer $s \le d$, the polynomial $\Sigma^s P$ of degree $\binom{d}{s}$ defined by
$$\Sigma^s P(x) := \prod_{1 \le i_1 < \cdots < i_s \le d} \bigl(x - (\alpha_{i_1} + \cdots + \alpha_{i_s})\bigr)$$
has coefficients in $\mathbb{K}$. This section discusses the computation of $\Sigma^s P$, summarized in Algorithm 2, which can be seen as an additive analogue of the Platypus algorithm of Banderier and Flajolet [?].
We recall two classical formulas (see, e.g., [?, §2]), the second one being valid for monic $P$ only:
$$N(P) = \deg P - t\,\frac{\operatorname{rev}(P)'}{\operatorname{rev}(P)}, \qquad \operatorname{rev}(P) = \exp\left(-\int \frac{N(P) - \deg P}{t}\right).$$
Truncating these formulas at order $d + 1$ makes $N(P) \bmod t^{d+1}$ a representation of the polynomial $P$ (up to normalization), since both conversions above can be performed quasi-optimally by Newton iteration [?, ?, ?]. The key for Algorithm 2 is the following variant of [?, §2.3].
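Both conversions can be verified on a small monic example (ours). The sketch below uses plain truncated series in SymPy rather than the quasi-optimal Newton iteration, which suffices to check the identities term by term:

```python
import sympy as sp

t = sp.symbols('t')
d = 3
P = sp.prod(t - r for r in (1, 2, 3))          # monic, roots 1, 2, 3
revP = sp.expand(t**d * P.subs(t, 1 / t))      # (1-t)(1-2t)(1-3t)

# forward direction: N(P) = d - t * rev(P)'/rev(P)
order = 6
N = sp.series(d - t * sp.diff(revP, t) / revP, t, 0, order).removeO()
# coefficient of t^k must be the power sum 1^k + 2^k + 3^k
assert all(N.coeff(t, k) == 1**k + 2**k + 3**k for k in range(order))

# inverse direction: rev(P) = exp(-integral of (N(P) - d)/t)
back = sp.series(sp.exp(-sp.integrate(sp.expand((N - d) / t), t)),
                 t, 0, d + 1).removeO()
assert sp.expand(back - revP) == 0
```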
Let be a polynomial of degree , let denote the generating series of its Newton sums and let be the series . Let be the polynomial in defined by
Then the following equality holds
By construction, the series is
When applied to the polynomial , this becomes
This expression rewrites:
and the last expression equals . ∎
The correctness of Algorithm 2 follows from observing that the truncation orders in $t$ and in $u$ of the power series involved in the algorithm are sufficient to enable the reconstruction of $\Sigma^s P$ from its first Newton sums by (3).
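The object that Algorithm 2 computes can be checked by brute force on a tiny instance of our choosing (the algorithm itself avoids this exponential enumeration by working on Newton series):

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
roots = [1, 2, 4]
P = sp.prod(x - r for r in roots)        # polynomial with roots 1, 2, 4

# Sigma^s P: roots are the sums of s distinct roots of P
s = 2                                    # sums of pairs: 3, 5, 6
SigmaP = sp.expand(sp.prod(x - sum(c) for c in combinations(roots, s)))
assert SigmaP == sp.expand((x - 3) * (x - 5) * (x - 6))
```

The degree of the output, $\binom{3}{2} = 3$ here, grows as $\binom{d}{s}$ in general, which is why the series-based algorithm matters.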
Bivariate case. We now consider the case where $P$ is a polynomial in $\mathbb{K}[x][y]$. Then, the coefficients of $\Sigma^s P$ with respect to $y$ may have denominators. We follow the steps of Algorithm 2 (run on $P$ viewed as a polynomial in $y$ with coefficients in $\mathbb{K}(x)$) in order to compute bounds on the bidegree of the polynomial obtained by clearing out these denominators. We obtain the following result.
Let , let be a positive integer such that and let . Let denote the leading coefficient of wrt and let be defined as in Eq. (2). Then is a polynomial in of bidegree at most that cancels all sums of roots of , with . Moreover, this polynomial can be computed in ops.
This result is close to optimal. Experiments suggest that for generic of bidegree the minimal polynomial of has bidegree . In particular, our degree bound is precise in , and overshoots by a factor of only in . Similarly, the complexity result is quasi-optimal up to a factor of only.
The Newton series has the form
with . Since both factors belong to , Lemma 1 implies that . Applying this same lemma repeatedly, we get that (stability under the integration of Algorithm 2 is immediate). Since has degree wrt , we deduce that is a polynomial that satisfies the desired bound. By evaluation and interpolation at points, and Newton iteration for quotients of power series in (Fact 4), the power series can be computed in ops. The power series is then computed from in ops. To compute we use evaluation-interpolation wrt at points, and fast exponentials of power series (Fact 4). The cost of this step is ops. Then, is computed for additional ops. The last exponential is again computed by evaluation-interpolation and Newton iteration using ops. ∎
The relation between diagonals of bivariate rational functions and algebraic series is classical [?, ?]. We recall here the usual derivation when $\mathbb{K} = \mathbb{C}$, while setting our notation.
Let $F(x, y)$ be a rational function in $\mathbb{C}(x, y)$, whose denominator does not vanish at $(0, 0)$. Then the diagonal of $F$ is a convergent power series that can be represented for $t$ small enough by a Cauchy integral
$$\operatorname{Diag} F(t) = \frac{1}{2 \pi i} \oint_\gamma F\!\left(\frac{t}{y}, y\right) \frac{dy}{y},$$
where the contour $\gamma$ is for instance a circle of small radius inside an annulus where $(t/y, y)$ remains in the domain of convergence of $F$. This is the basis of an algebraic approach to the computation of the diagonal as a sum of residues of the rational function
$$G(t, y) := \frac{1}{y} F\!\left(\frac{t}{y}, y\right) = \frac{P(t, y)}{Q(t, y)},$$
with $P$ and $Q$ two coprime polynomials. For $t$ small enough, the circle can be shrunk around 0 and only the roots of $Q(t, \cdot)$ tending to 0 when $t \to 0$ lie inside the contour [?]. These are called the small branches. Thus the diagonal is given as
$$\operatorname{Diag} F(t) = \sum_i \operatorname{Res}_{y = y_i(t)} G(t, y),$$
where the sum is over the distinct roots $y_i(t)$ of $Q(t, \cdot)$ tending to 0. We call their number the number of small branches of $Q$.
Since the $y_i$’s are algebraic and finite in number, and residues are obtained by series expansion, which entails only rational operations, it follows that the diagonal is algebraic too. Combining the algorithms of the previous section gives Algorithm LABEL:algo:diagonal, which produces a polynomial equation for $\operatorname{Diag} F$. The correctness of this algorithm over an arbitrary field of characteristic 0 follows from an adaptation of the arguments of Gessel and Stanley [?, Th. 6.1], [?, Th. 6.3.3].
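The residue derivation can be replayed on the example $F = 1/(1-x-y)$. The SymPy sketch below (our illustration, not the paper’s Algorithm) isolates the single small branch of the denominator of $(1/y)F(t/y, y)$ and recovers the classical diagonal $1/\sqrt{1-4t}$:

```python
import sympy as sp

t, y = sp.symbols('t y')
# (1/y) * F(t/y, y) for F(x, y) = 1/(1 - x - y)
G = sp.cancel((1 / (1 - t / y - y)) / y)
num, den = sp.fraction(G)                    # G = P(t, y) / Q(t, y)

# roots of Q(t, .) in y; keep the branch tending to 0 with t
branches = sp.solve(sp.Eq(den, 0), y)
small = [b for b in branches if sp.limit(b, t, 0) == 0]
assert len(small) == 1

# residue at a simple pole b is num(b) / (dQ/dy)(b)
diag = sum(num.subs(y, b) / sp.diff(den, y).subs(y, b) for b in small)
assert sp.simplify(diag - 1 / sp.sqrt(1 - 4 * t)) == 0
```

The single small branch here is $y(t) = (1 - \sqrt{1-4t})/2$, and the residue at it is exactly the generating series of the central binomial coefficients.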