
Fast Computation of Common Left Multiples
of Linear Ordinary Differential Operators

We warmly thank the referees for their very helpful comments. This work was supported in part by the MSR–INRIA Joint Centre, and by two NSFC grants (91118001 and 60821002/F02).

Alin Bostan, Frédéric Chyzak, Bruno Salvy
Algorithms Project, INRIA (France)
Ziming Li
KLMM and AMSS (China)
{alin.bostan,frederic.chyzak,bruno.salvy}@inria.fr zmli@mmrc.iss.ac.cn
Abstract

We study tight bounds and fast algorithms for LCLMs of several linear differential operators with polynomial coefficients. We analyse the arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs. We propose a new algorithm that recasts the LCLM computation as a linear algebra problem on a polynomial matrix. This algorithm yields sharp bounds on the coefficient degrees of the LCLM, improving by one order of magnitude the best bounds obtained using previous algorithms. The complexity of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the output.


Categories and Subject Descriptors:
I.1.2 [Computing Methodologies]: Symbolic and Algebraic Manipulations — Algebraic Algorithms

General Terms: Algorithms, Theory.

Keywords: Algorithms, complexity, linear differential operators, common left multiples.

1 Introduction

The complexity of operations in the polynomial ring K[x] over a field K has been intensively studied in the computer algebra literature. It is well established that polynomial multiplication is a commutative complexity yardstick, in the sense that the complexity of operations in K[x] can be expressed in terms of the cost of multiplication, and for most of them, in a quasi-linear way.

Linear differential operators in the derivation ∂ = d/dx and with coefficients in the field K(x) of rational functions form a non-commutative ring, denoted K(x)⟨∂⟩, that shares many algebraic properties with the commutative ring K[x]. The structural analogy between polynomials and linear differential equations was discovered long ago by Libri and Brassinne [27, 9, 15]. They introduced the bases of a non-commutative elimination theory, by defining the notions of greatest common right divisor (GCRD) and least common left multiple (LCLM) for differential operators, and by designing a Euclidean-type algorithm for GCRDs. This was formalised by Ore [28, 29], who set up a common algebraic framework for polynomials and linear differential operators (and other skew polynomials, including difference and q-difference operators). Yet, the algorithmic study of linear differential operators is currently much less advanced than in the polynomial case. The cost of the product in K(x)⟨∂⟩ has been addressed only recently in [23, 7].

The general aim of this work is to take a step towards a systematic study of the complexity of operations in K(x)⟨∂⟩. We promote the idea that (polynomial) matrix multiplication may well become the common yardstick for measuring complexities in this non-commutative setting. The specific goal of the present article is to illustrate this idea for LCLMs. We focus on LCLMs since several higher-level algorithms rely crucially on the efficiency of this basic computational primitive. For instance, algorithms for manipulating D-finite functions represented by annihilating equations use common left multiples for performing addition [35, 33]. LCLMs of several operators are also needed as a basic task in various other higher-level algorithms [3, 25, 13]. Our approach is based on using complexity analysis as a tool for algorithmic design, and on producing tight size bounds on the various objects involved in the algorithms.

It is folklore that Ore’s non-commutative Euclidean algorithm is computationally expensive; various other algorithms for computing common left multiples of two operators have been proposed [22, 35, 33, 26, 6, 2, 24]. As opposed to Ore’s algorithm, these alternatives have the common feature that they reduce the problem of computing LCLMs to linear algebra. However, few complexity analyses [18, 19, 6, 24] and performance comparisons [26, 2] are available.

Algorithm Heffter’s + DAC Li’s + DAC van der Hoeven’s + DAC van Hoeij’s New
Complexity
Figure 1: Costs of various algorithms for the LCLM computation of  operators of bidegrees in . Algorithms marked by a star () also compute cofactors for the same complexity.

Main contributions. As a first contribution, we design a new algorithm for computing the LCLM of several operators. It reduces the LCLM computation to a linear algebra problem on a polynomial matrix. The new algorithm can be viewed as an adaptation of Heffter’s algorithm [22] to several operators. At the same time, we use modern linear-algebra algorithms [36, 38] to achieve a low arithmetic complexity. Our algorithm is similar in spirit to Grigoriev’s method [21, §5] for computing GCRDs of several operators.

Before stating our main results more precisely, we need the following conventions and notation, used throughout the paper. All algorithms take as input linear differential operators with polynomial coefficients, that is, belonging to K[x]⟨∂⟩, instead of rational function coefficients. From the computational viewpoint, this is not too severe a restriction, since the case of rational coefficients is easily reduced to that of polynomial coefficients by normalisation. For L_1, …, L_k in K[x]⟨∂⟩, we write LCLM(L_1, …, L_k) for their primitive LCLM in K[x]⟨∂⟩. We say that an operator has bidegree at most (r, d) in (∂, x) if it has order at most r and polynomial coefficients of degree at most d. The cost of our algorithms is measured by the number of arithmetic operations that they use in the base field K. The constant ω stands for a feasible exponent for matrix multiplication over K (see the definition in Section 2), and the soft-O notation Õ(·) indicates that polylogarithmic factors are neglected. Our main result is the following.

Theorem 1

Let L_1, …, L_k be operators in K[x]⟨∂⟩ of bidegrees at most (r, d) in (∂, x). Then LCLM(L_1, …, L_k) has order at most kr, degrees in x in O(k²rd), and it can be computed in Õ((k²r)^ω d) arithmetic operations in K.

The upper bound on the coefficient degrees is sharp, in the sense that it is reached on generic inputs. It improves by one order of magnitude the best bound obtained using previous algorithms. Moreover, for fixed k, the cost of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the LCLM.

As a second contribution, we analyse the worst-case arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs. For instance, we show that the extension to several operators of the "folklore" algorithm in [35, 33] has complexity Õ((kr)^{ω+1} d). We call this extension van Hoeij's algorithm, after the name of the implementor in Maple's package DEtools of one of its variants. These estimates are in accordance with our experiments, which show that our new algorithm performs faster for large orders r, while van Hoeij's algorithm is well suited for a large number k of operators.

Using our tight degree bounds, we also show that any algorithm that computes the LCLM of two operators of bidegree (r, d) in (∂, x) within a given complexity can be used as the building block of a divide-and-conquer (DAC) algorithm that computes the LCLM of k operators of bidegree (r, d), at a cost dominated by that of the top-level recursive call (Theorem 8). The costs of several algorithms are summarised in Figure 1, where the notation A + DAC indicates that algorithm A is used in a DAC scheme.

As a third contribution, we prove an upper bound on the total degree in (∂, x) of nonzero common left multiples (not necessarily of minimal order). This is a new instance of the philosophy, initiated in [8], of relaxing order minimality for linear differential operators in order to achieve better arithmetic size. While, by Theorem 1, the total arithmetic size of the LCLM is typically of order k³r²d, there exist common left multiples of strictly smaller total size.

A fourth contribution is a fast Magma implementation that outperforms Magma’s LCLM routine. Experimental results confirm that the practical complexity of the new algorithm behaves as predicted by our theoretical results.

Last, but not least, we have undertaken an extensive bibliographic search, which we now proceed to describe.

Previous work. Libri [27] and Brassinne [9] (see also [15]) defined the notions of GCRD and LCLM of linear differential operators, and sketched a Euclidean-type algorithm for GCRDs. Von Escherich [16] defined the related notion of differential resultant of two linear differential operators. Articles [9, 16] contain the embryo of an algorithm for the LCLM based on linear algebra; that algorithm was explicitly stated by Heffter [22], and later rediscovered by Poole in his classical book [32]. The roots of a subresultant theory for differential operators are in Pierce’s articles [30, 31]. Blumberg [4] gave one of the first systematic accounts of the algebraic properties of linear differential operators. Building on predecessors’ works, Ore [28, 29] extended the Euclidean-type theory to the more general framework of skew polynomials. He showed [29, Theorem 8, §3] that, while the LCLM is not related to the GCRD by a simple formula as in the commutative case, there nevertheless exists a formula expressing the LCLM in terms of the successive remainders in the Euclidean scheme. Almost simultaneously, Wedderburn [41, §7-8] showed that the LCLM can also be computed by an extended version of the Euclidean-Ore algorithm, that computes Bézout cofactors along the way.

In the computer algebra literature, algorithmic issues for skew polynomials emerged in the 1990s, and were popularised by Bronstein and Petkovšek [10, 11]. Grigoriev [21, §6] designed a fast algorithm for computing the GCRD of a family of linear differential operators; to do so, he proved tight bounds on the degree of the GCRD, by extending von Escherich's construction of the Sylvester matrix for two differential operators to an arbitrary number of operators. The bound is linear in the number of operators, in their maximal order and in their maximal degree [21, Lemma 5.1]. Giesbrecht analysed the complexity of the LCLM computation for two operators, but only in terms of their order [18, 19]. (Strictly speaking, his method was proposed for a different Ore ring, but it extends to more general settings, including the differential case.) For two operators of orders at most r, both the first (Heffter-style) algorithm [18, Lemma 5] and the second one [19, Lemma 2.1] (based on the extended Euclidean-Ore scheme) use a number of operations polynomial in r. To our knowledge, no algorithm currently exists similar to the Lehmer-Knuth-Schönhage half-gcd algorithm [17, Chapter 11], using a number of operations that is quasi-linear in r. Li [26] pointed out that algorithms for the LCLM computation that have good complexity with respect to the order, such as the naive Euclidean-Ore algorithm, do not necessarily behave well in practice, because of coefficient growth. He developed a generalisation of the classical subresultant theory to Ore polynomials, which provides determinantal formulas and degree bounds for the GCRD and the LCLM [26]. He also compared the practical efficiency of Maple implementations of several algorithms.

Giesbrecht and Zhang [20, Theorem 2.1] mention a complexity bound for the LCLM computation of two operators of bidegree (r, d) in (∂, x), based on an unpublished 2002 note of Li. Over fields of characteristic zero, Bostan [6, Chapter 10] sketched a general strategy for computing several constructions on differential operators (including LCLMs), based on an evaluation-interpolation approach on power series solutions. He stated, without proofs, several degree bounds and complexity results. For two operators of bidegree (r, d) in (∂, x), he announced that using fast Hermite-Padé approximation for the interpolation step yields a fast algorithm for computing their LCLM. The approach was enhanced by van der Hoeven [24], who showed that the costs of the basic operations on differential operators can be expressed in terms of the cost of multiplication of operators, and proved the complexity bound stated without proof in [6, §10.5].

2 Preliminaries

Let F be a differential field, that is, a field equipped with an additive map δ : F → F satisfying the Leibniz rule δ(ab) = δ(a)b + aδ(b) for all a, b in F. We denote by F⟨∂⟩ the ring of linear differential operators over the differential field F, in which multiplication is governed by the commutation rule ∂a = a∂ + δ(a) for a in F. A nonzero element L in F⟨∂⟩ is of the form

L = a_n ∂^n + ⋯ + a_1 ∂ + a_0,

where a_0, …, a_n lie in F with a_n ≠ 0. We call n the order of L, and denote it by ord(L). The noncommutative ring F⟨∂⟩ is a left (and right) principal ideal domain, for which a Euclidean algorithm exists [28, 29].

Let L_1, …, L_k be nonzero elements in F⟨∂⟩. Then L is called a common left multiple (CLM) of L_1, …, L_k if L = C_1 L_1 = ⋯ = C_k L_k for some C_1, …, C_k in F⟨∂⟩. A common left multiple of least order is called a least common left multiple (LCLM). Two LCLMs of L_1, …, L_k are F-linearly dependent.

Our main focus is on the particular case F = K(x), the field of rational functions with coefficients in a field K, with δ = d/dx, the usual derivation with respect to x. In this case, we use the notation K(x)⟨∂⟩ for F⟨∂⟩, and LCLM(L_1, …, L_k) for the primitive LCLM of L_1, …, L_k in K[x]⟨∂⟩, that is, the LCLM of L_1, …, L_k computed in K(x)⟨∂⟩ and normalised in K[x]⟨∂⟩ with trivial content. However, in order to keep the mathematical exposition as independent as possible of any particular case, we stick to the more general setting whenever we discuss mathematical properties and bird's-eye view descriptions of algorithms.
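As a small worked example (ours, not from the source text), consider L_1 = ∂ − 1, which annihilates e^x, and L_2 = ∂, which annihilates constants. Then

\[
\operatorname{LCLM}(\partial - 1,\ \partial) \;=\; \partial^2 - \partial,
\qquad\text{since}\qquad
\partial\,(\partial - 1) \;=\; (\partial - 1)\,\partial \;=\; \partial^2 - \partial,
\]

so the left cofactors are C_1 = ∂ and C_2 = ∂ − 1. The solution space of the LCLM is spanned by e^x and 1, and its order 2 equals the sum of the two orders, matching the bound of Corollary 4 below.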

Polynomial and matrix arithmetic

The cost of our algorithms will be measured by the number of field operations in K that they use. To simplify the presentation, we assume that polynomials of degree less than n in K[x] can be multiplied within Õ(n) operations in K, using the FFT-based algorithms in [34, 12]. Most basic operations on polynomials of degree less than n (division, extended gcd, interpolation, etc.) then have cost Õ(n) [17]. We suppose that ω is a feasible exponent for matrix multiplication over K, that is, a real constant 2 ≤ ω ≤ 3 such that two n × n matrices with coefficients in K can be multiplied in time O(n^ω). The current tightest upper bound is ω < 2.3727 [40], following work of Coppersmith and Winograd [14], and Stothers [39].

The following result, due to Storjohann and Villard [36, 38], will be helpful for estimating the complexity of solving the linear systems arising from LCLM computations. Note that this is currently the best complexity result on polynomial linear algebra. The probabilistic aspects of the algorithms described in this article are entirely inherited from it.

Theorem 2

[36, 38] Let A be an m × n matrix with entries in K[x] of degree at most d. The rank ρ of A can be computed, together with m − ρ linearly independent polynomial elements in the left kernel of A, within Õ(max(m, n)^ω d) operations in K by a (certified) randomised Las Vegas algorithm.

Moreover, if m = n, then the determinant of A can be computed using Õ(n^ω d) operations in K.

3 Linear formulation for common left multiples

In order to connect the computation of common left multiples with linear algebra, we introduce some more notation. For a nonnegative integer n, we denote by F⟨∂⟩_n the F-linear subspace of F⟨∂⟩ consisting of all linear differential operators whose orders are at most n. Moreover, we define an F-linear bijection v_n from F⟨∂⟩_n onto F^{1×(n+1)}, mapping an operator to the row vector of its coefficients on the basis (1, ∂, …, ∂^n):

v_n(a_0 + a_1 ∂ + ⋯ + a_n ∂^n) = (a_0, a_1, …, a_n).

For a nonzero element L of F⟨∂⟩ of order r, and for an integer n with n ≥ r, we define the Sylvester-type matrix M_n(L) whose rows are the coefficient vectors of L, ∂L, …, ∂^{n−r}L, that is, the matrix with i-th row v_n(∂^{i−1}L) for 1 ≤ i ≤ n − r + 1.

The matrix M_n(L) has n − r + 1 rows and n + 1 columns. In particular, M_n(1) is the identity matrix of size n + 1. This matrix enables one to express multiplication by L in F⟨∂⟩ as a vector-matrix product. Precisely, for C in F⟨∂⟩_{n−r},

v_n(CL) = v_{n−r}(C) M_n(L).

(1)
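For concreteness, here is a small instance (our own illustration of the convention above). Take L = ∂ − x over Q(x), of order r = 1, with n = 2; the rows of M_2(L) are the coefficient vectors of L and ∂L = ∂² − x∂ − 1 on the basis (1, ∂, ∂²):

\[
M_2(\partial - x) \;=\;
\begin{pmatrix}
 -x & 1 & 0 \\
 -1 & -x & 1
\end{pmatrix},
\]

so that v_2((c_0 + c_1 ∂)(∂ − x)) = (c_0, c_1) · M_2(∂ − x), as in (1).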

Let L_1, …, L_k be nonzero elements in F⟨∂⟩ of orders r_1, …, r_k. For n ≥ max(r_1, …, r_k), the matrix

M_n = ( diag(M_n(L_1), …, M_n(L_k)) )
      ( I_{n+1}   I_{n+1}  ⋯  I_{n+1} )

(2)

that is, the block-diagonal stacking of the M_n(L_i), completed at the bottom by k horizontally juxtaposed identity matrices I_{n+1}, has (n − r_1 + 1) + ⋯ + (n − r_k + 1) + (n + 1) rows and k(n + 1) columns.

The following theorem is the main result of this section.

Theorem 3

Let L_1, …, L_k be elements in F⟨∂⟩ of orders r_1, …, r_k, and let n ≥ max(r_1, …, r_k).

  1. If L is a common left multiple of L_1, …, L_k such that L = C_i L_i for 1 ≤ i ≤ k and ord(L) ≤ n, then the vector (v_{n−r_1}(C_1), …, v_{n−r_k}(C_k), −v_n(L)) belongs to the left kernel of the matrix M_n defined in (2).

  2. If the vector (u_1, …, u_k, w) is a nonzero vector in the left kernel of M_n, where u_i ∈ F^{1×(n−r_i+1)} for 1 ≤ i ≤ k and w ∈ F^{1×(n+1)}, then L = v_n^{−1}(−w) is nonzero, and L is a common left multiple of L_1, …, L_k with respective left cofactors v_{n−r_1}^{−1}(u_1), …, v_{n−r_k}^{−1}(u_k).

  3. If ρ is the rank of M_n and, moreover, n ≥ ord(LCLM(L_1, …, L_k)), then ord(LCLM(L_1, …, L_k)) = ρ − ((n − r_1 + 1) + ⋯ + (n − r_k + 1)).

Proof. Suppose that L = C_i L_i for 1 ≤ i ≤ k. By (1),

v_{n−r_i}(C_i) M_n(L_i) = v_n(C_i L_i) = v_n(L)   for 1 ≤ i ≤ k,

which is equivalent to (v_{n−r_1}(C_1), …, v_{n−r_k}(C_k), −v_n(L)) M_n = 0. Therefore the vector belongs to the left kernel of M_n. The first assertion is proved.

Conversely, suppose that (u_1, …, u_k, w) is a nonzero vector in the left kernel of M_n. Then u_i M_n(L_i) + w = 0 for all i with 1 ≤ i ≤ k. It follows from (1) that

v_{n−r_i}^{−1}(u_i) L_i = v_n^{−1}(−w)   for 1 ≤ i ≤ k.

Thus, L = v_n^{−1}(−w) is nonzero, for otherwise, since v_n is an isomorphism and F⟨∂⟩ is a domain, all the u_i and w would be equal to zero. The second assertion follows.

To prove the last assertion, we set L̃ = LCLM(L_1, …, L_k) and m = ord(L̃) ≤ n. Assume further that L̃ = D_i L_i for all i with 1 ≤ i ≤ k. Then L̃, ∂L̃, …, ∂^{n−m}L̃ are common left multiples of L_1, …, L_k of orders at most n, and such that ∂^j L̃ = (∂^j D_i) L_i. By the first assertion, for 0 ≤ j ≤ n − m, the vector

w_j = (v_{n−r_1}(∂^j D_1), …, v_{n−r_k}(∂^j D_k), −v_n(∂^j L̃))

belongs to the left kernel of M_n. These vectors are F-linearly independent because L̃, ∂L̃, …, ∂^{n−m}L̃ are. On the other hand, if u = (u_1, …, u_k, w) is a nonzero vector in the left kernel of M_n, where u_i ∈ F^{1×(n−r_i+1)} and w ∈ F^{1×(n+1)}, then v_n^{−1}(−w) is a common left multiple of L_1, …, L_k of order no greater than n, by the second assertion. Hence, v_n^{−1}(−w) is an F-linear combination of L̃, ∂L̃, …, ∂^{n−m}L̃, because it is a left multiple of L̃ of order at most n. Hence, there exist e_0, e_1, …, e_{n−m} in F such that v_n^{−1}(−w) = e_0 L̃ + e_1 ∂L̃ + ⋯ + e_{n−m} ∂^{n−m}L̃, which implies that the last n + 1 coordinates of the vector u − (e_0 w_0 + ⋯ + e_{n−m} w_{n−m}) are all equal to zero. Since this vector belongs to the left kernel of M_n, all its coordinates are zero by the second assertion. We conclude that (w_0, …, w_{n−m}) is an F-basis of the left kernel of M_n, and thus n − m + 1 is its dimension. Then (iii) follows from the rank-nullity theorem, because M_n has (n − r_1 + 1) + ⋯ + (n − r_k + 1) + (n + 1) rows.

Since the rank of M_n is at most its number of columns k(n + 1), a direct consequence of Theorem 3 (iii) is the following classical result.

Corollary 4

For nonzero L_1, …, L_k in F⟨∂⟩, ord(LCLM(L_1, …, L_k)) ≤ ord(L_1) + ⋯ + ord(L_k).

4 Computing LCLMs

In this section, we review a few known methods for computing LCLMs and present a new one based on Theorem 3.

4.1 Computing an LCLM of two operators

Given two nonzero elements L_1 and L_2 of F⟨∂⟩ of respective orders r_1 and r_2, we consider various methods for computing their LCLMs. The first methods compute left cofactor(s) of the given operator(s) first, and find an LCLM by a multiplication in F⟨∂⟩. The last method is specific to the case F = K(x).

4.1.1 Heffter’s algorithm

The first method can be traced back to Brassinne [9], von Escherich [16] and Heffter [22]. The sequence

L_1, ∂L_1, …, ∂^{r_2}L_1, L_2, ∂L_2, …, ∂^{r_1}L_2

has r_1 + r_2 + 2 elements, each of which is of order at most r_1 + r_2. Thus, these elements are F-linearly dependent. To compute LCLM(L_1, L_2), the strategy is to find the maximal integer j and corresponding elements a_0, …, a_{r_2−j}, b_0, …, b_{r_1−j} in F, not all zero, such that

a_0 L_1 + a_1 ∂L_1 + ⋯ + a_{r_2−j} ∂^{r_2−j}L_1 = b_0 L_2 + b_1 ∂L_2 + ⋯ + b_{r_1−j} ∂^{r_1−j}L_2.

Set C_1 = a_0 + a_1 ∂ + ⋯ + a_{r_2−j} ∂^{r_2−j}. Then ord(C_1 L_1) ≤ r_1 + r_2 − j. Therefore, the product C_1 L_1 is an LCLM of L_1 and L_2, due to the maximality of j.

This method can be reformulated using the notation introduced in Section 3. For a vector u represented by

u = u_0 e_0 + u_1 e_1 + ⋯ + u_m e_m

in a finite-dimensional F-vector space equipped with the standard basis (e_0, …, e_m), we define ind(u) to be the number of trailing zero coordinates of u, that is, m − max{i : u_i ≠ 0}. For n = r_1 + r_2, define

M_n(L_1, L_2) = ( M_n(L_1) )
                ( M_n(L_2) )

(3)

the vertical stacking of the two Sylvester-type blocks. Then, Heffter's method consists in computing a vector u in the left kernel of M_n(L_1, L_2) such that ind(u) is maximal.

The next lemma connects the order of LCLM(L_1, L_2) with the rank of M_n(L_1, L_2). It easily follows from the observation that a maximal subset of F-linearly independent elements among L_1, ∂L_1, …, ∂^{n−r_1}L_1, L_2, ∂L_2, …, ∂^{n−r_2}L_2 consists of L_1, ∂L_1, …, ∂^{n−r_1}L_1 and L_2, ∂L_2, …, ∂^{m−r_2−1}L_2, where m = ord(LCLM(L_1, L_2)).

Lemma 5

Let L_1, L_2 be two nonzero elements in F⟨∂⟩ of orders r_1 and r_2, and let n = r_1 + r_2. Then ord(LCLM(L_1, L_2)) = rank(M_n(L_1, L_2)) − 1.
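As a quick sanity check of Lemma 5 (on this reconstruction of its statement), take L_1 = ∂ − 1 and L_2 = ∂, so that n = r_1 + r_2 = 2 and, on the basis (1, ∂, ∂²),

\[
M_2(L_1, L_2) \;=\;
\begin{pmatrix}
 -1 & 1 & 0 \\
  0 & -1 & 1 \\
  0 & 1 & 0 \\
  0 & 0 & 1
\end{pmatrix},
\]

a matrix of rank 3 = ord(LCLM(L_1, L_2)) + 1, in accordance with LCLM(L_1, L_2) = ∂² − ∂.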

4.1.2 Euclidean algorithms

The second family of methods is based on the Euclidean-Ore algorithm for differential operators [29].

Ore’s algorithm. Assume that . Setting  , one can compute the Euclidean (right) divisions

for quotients , and remainders  satisfying for , and . Then, as in the commutative case, is shown to be the GCRD of and . Ore [29, §2] proved that the following product

(4)

is an LCLM of L_1 and L_2. (Here, the exact left quotient of A by B denotes the operator Q such that A = QB.)

Extended Euclidean-Ore algorithm. Wedderburn [41, §7-8] observed (see also [10]) that the computation of (4) can be avoided, if one replaces the Euclidean algorithm by its extended version. Precisely, letting U_0 = 1, U_1 = 0, and

U_{i+1} = U_{i−1} − Q_i U_i,   1 ≤ i ≤ ℓ,

the product U_{ℓ+1} L_1 is an LCLM of L_1 and L_2.

Li’s determinantal expression. As in the commutative case, a more efficient version of the extended Euclidean-Ore algorithm is based on subresultants [26, §5]. To avoid technicalities, we present an alternative, efficient, variant of the subresultant algorithm, based on a determinantal formulation [26, Proposition 6.1]. This method assumes that the order  of the GCRD of and is already known. Then, one constructs a square matrix of size whose first columns are the first columns of the matrix and whose last column is the transpose of the vector

If is denoted , then is an LCLM of and  .

4.1.3 Van der Hoeven’s algorithm

The algorithm that we very briefly mention now is specific to the case F = K(x), where the base field K has characteristic zero. It works by evaluation-interpolation. The idea, originating from [6], is to perform operations on differential operators by working on their fundamental systems of power series solutions. Due to space limitations, and in view of its complexity analysis, we do not give more details here, and refer the reader to the article [24].

4.2 Computing an LCLM of several operators

Given several nonzero operators L_1, …, L_k, we describe various ways to compute LCLM(L_1, …, L_k).

4.2.1 Iterative LCLMs

An obvious method is to compute an LCLM of k operators iteratively, that is,

LCLM(L_1, …, L_k) = LCLM(LCLM(L_1, …, L_{k−1}), L_k).

(5)

A computationally more efficient (though mathematically equivalent) method is by a divide-and-conquer algorithm, based on the repeated use of the formula

LCLM(L_1, …, L_k) = LCLM(LCLM(L_1, …, L_{⌈k/2⌉}), LCLM(L_{⌈k/2⌉+1}, …, L_k)).

(6)

Of course, the efficiency of an iterative algorithm depends on that of the algorithm used for the LCLM of two operators. This is quantified precisely in Section 5.
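In code, the DAC scheme of formula (6) is a small driver around any two-operator routine. The following Python sketch is ours, not the paper's; `lclm2` is a hypothetical callback computing (and normalising) the LCLM of two operators, whose representation is opaque to the driver.

def lclm_dac(ops, lclm2):
    """Divide-and-conquer LCLM of several operators, following formula (6).

    `ops` is a nonempty list of operators; `lclm2(P, Q)` is any routine
    returning an LCLM of P and Q (a real implementation would also remove
    the polynomial content of each intermediate result, cf. Theorem 8).
    """
    layer = list(ops)
    while len(layer) > 1:
        nxt = [lclm2(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2 == 1:
            nxt.append(layer[-1])  # an odd leftover is promoted unchanged
        layer = nxt
    return layer[0]

As analysed in Section 5, the cost of this driver is essentially that of its final, top-level call to `lclm2`.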

4.2.2 Van Hoeij’s algorithm

Another algorithm for computing the LCLM of k linear differential operators was implemented by van Hoeij as Maple's DEtools[LCLM] command; it seemingly was never published. For k = 2, the method is folklore; it is implicit, for instance, in the proof of [35, Theorem 2.3]. A variant of it is also implemented by the `diffeq+diffeq` command in Salvy and Zimmermann's gfun package for Maple [33].

Informally speaking, the method consists in considering a generic solution y_i of L_i for 1 ≤ i ≤ k, then in finding the first linear dependency between the successive derivatives of y = y_1 + ⋯ + y_k. In order to perform actual computations, these derivatives are represented by the canonical forms rem(∂^j, L_i), where rem(∂^j, L_i) denotes the remainder of the right Euclidean division of ∂^j by L_i. Let

L = c_n ∂^n + ⋯ + c_1 ∂ + c_0,   with n = r_1 + ⋯ + r_k,

where c_0, …, c_n are undetermined coefficients in K(x). For all i, j with 1 ≤ i ≤ k and 0 ≤ j ≤ n, let R_{i,j} be the right remainder of ∂^j in the division by L_i. Then L(y) = Σ_i Σ_j c_j R_{i,j}(y_i). Since each y_i is a generic solution of L_i, the least-order monic operator annihilating y is equal to LCLM(L_1, …, L_k), e.g., by [26, Lemma 2.3]. Note that the R_{i,j} are operators of orders less than r_i, whose entries depend linearly on the c_j's once assembled. There are n + 1 undetermined coefficients in L. Equating Σ_j c_j R_{i,j} to zero, for 1 ≤ i ≤ k, we obtain a linear system

(c_0, c_1, …, c_n) · T = 0,

where T is an (n + 1) × (r_1 + ⋯ + r_k) matrix over K(x). Thus, computing LCLM(L_1, …, L_k) amounts to computing a nontrivial vector c in the left kernel of T with ind(c) being maximal. The rank of T is equal to the order of LCLM(L_1, …, L_k), e.g., by [2, Proposition 4.3]. Note that the original version of van Hoeij's algorithm does not make use of this last fact, and potentially needs to solve more linear systems, thus being less efficient when the LCLM is not of maximal order.
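The canonical forms rem(∂^j, L_i) obey a simple first-order recurrence: multiplying a reduced form by ∂ differentiates its coefficients, shifts them, and reduces the overflowing ∂^{r_i} term. The following sympy sketch (our own toy illustration, not the paper's implementation) computes them by dense rational-function arithmetic.

import sympy as sp

x = sp.symbols('x')

def canonical_forms(a, n):
    """Return rem(d^j, L) for j = 0..n, where L = a[r]*d^r + ... + a[0].

    Each remainder is a length-r list of rational functions in x, its
    coefficients on the basis (1, d, ..., d^(r-1))."""
    r = len(a) - 1
    # d^r reduced modulo L: d^r = -(a[0]/a[r]) - ... - (a[r-1]/a[r]) d^(r-1)
    top = [sp.cancel(-a[i] / a[r]) for i in range(r)]
    c = [sp.Integer(1)] + [sp.Integer(0)] * (r - 1)  # rem(d^0, L) = 1
    forms = []
    for _ in range(n + 1):
        forms.append(list(c))
        overflow = c[r - 1]  # coefficient that spills onto d^r
        c = [sp.cancel(sp.diff(c[0], x) + overflow * top[0])] + [
            sp.cancel(sp.diff(c[i], x) + c[i - 1] + overflow * top[i])
            for i in range(1, r)
        ]
    return forms

# For instance, canonical_forms([-x, 0, 1], 3) reduces d^j modulo the Airy
# operator d^2 - x, giving [1,0], [0,1], [x,0], [1,x]; the rows of the
# matrix T are obtained by concatenating such forms over i = 1..k.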

4.2.3 The new algorithm

As a straightforward consequence of Theorem 3, the LCLM of L_1, …, L_k can be computed by determining a nontrivial vector u in the left kernel of the matrix M_n given in equation (2), with ind(u) being maximal. This method computes not only the LCLM, but also its left cofactors C_1, C_2, …, C_k, while van Hoeij's algorithm does not compute any cofactor.
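The following Python/sympy sketch makes the method concrete on toy sizes. It is our own illustration, not the paper's implementation: the matrix layout follows the reconstruction of (2) above, and dense symbolic linear algebra stands in for the fast polynomial-matrix kernels of Theorem 2. Operators are coefficient lists [a_0, …, a_r] over Q(x).

import sympy as sp

x = sp.symbols('x')

def d_times(c):
    """Coefficient list of d*C for C = sum c[i]*d^i, using d*a = a*d + a'."""
    out = [sp.diff(c[0], x)]
    out += [c[i - 1] + sp.diff(c[i], x) for i in range(1, len(c))]
    out.append(c[-1])
    return out

def sylvester_rows(c, n):
    """Rows v_n(L), v_n(dL), ..., v_n(d^(n-r)L) of M_n(L), padded to length n+1."""
    rows, cur = [], list(c)
    for _ in range(n - (len(c) - 1) + 1):
        rows.append(cur + [sp.Integer(0)] * (n + 1 - len(cur)))
        cur = d_times(cur)
    return rows

def block_matrix(ops, n):
    """The matrix of (2): block-diagonal blocks M_n(L_i), plus a bottom
    block of k horizontally juxtaposed identity matrices I_(n+1)."""
    k, rows = len(ops), []
    for i, c in enumerate(ops):
        for row in sylvester_rows(c, n):
            rows.append(sum((row if j == i else [sp.Integer(0)] * (n + 1)
                             for j in range(k)), []))
    for t in range(n + 1):
        e = [sp.Integer(0)] * (n + 1)
        e[t] = sp.Integer(1)
        rows.append(e * k)  # k concatenated copies of the t-th unit row
    return sp.Matrix(rows)

def lclm(ops):
    """LCLM over Q(x), following the steps of Figure 2: a rank computation
    at n = r_1 + ... + r_k gives the order m of the LCLM (Theorem 3(iii));
    the one-dimensional left kernel at n = m then yields its coefficients."""
    orders = [len(c) - 1 for c in ops]
    n = sum(orders)
    m = block_matrix(ops, n).rank() - sum(n - r + 1 for r in orders)
    v = block_matrix(ops, m).T.nullspace()[0]  # left kernel of M_m
    off = sum(m - r + 1 for r in orders)       # skip the cofactor blocks
    return [sp.cancel(-v[off + t]) for t in range(m + 1)]

# Example: lclm([[-1, 1], [0, 1]]) returns a multiple of [0, -1, 1], i.e.
# of d^2 - d, the LCLM of d - 1 and d; clearing denominators and content
# yields the primitive LCLM.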

5 Algorithms, bounds, complexity

Heffter’s algorithm Compute the matrix  defined in (3). Determine its rank ; set . Extract submatrix  of . Find the 1-dim kernel of . Construct from the first  coordinates of . Compute and return . van Hoeij’s algorithm For all  and , compute View the as rows in ; compute rank of . Extract submatrix  of . Find the 1-dim kernel of . Construct the LCLM from . Our new algorithm Compute  defined in (2). Determine its rank  ; set . Extract submatrix  of . Find the 1-dim kernel of . Construct the LCLM from the last  coordinates of . Return the LCLM.
Figure 2: Pseudo-code for Heffter’s algorithm, van Hoeij’s algorithm and our new algorithm.

In this section, we let F = K(x) be the field of rational functions with coefficients in a field K, and δ = d/dx be the usual derivation with respect to x. Recall that, in this case, we use the notation K(x)⟨∂⟩ for F⟨∂⟩, and LCLM(L_1, …, L_k) for the primitive LCLM of L_1, …, L_k in K[x]⟨∂⟩.

All algorithms analysed below are specialisations to K(x)⟨∂⟩ of the algorithms reviewed in the previous section. Moreover, we make the non-restrictive assumption that all algorithms take as input linear differential operators with polynomial coefficients, that is, belonging to K[x]⟨∂⟩.

The degree of a nonzero operator P in K[x]⟨∂⟩, denoted deg(P), is defined as the maximal degree of its polynomial coefficients. As in the case of usual commutative polynomials, ord(PQ) = ord(P) + ord(Q) and deg(PQ) ≤ deg(P) + deg(Q).

5.1 Tight degree bounds for the LCLM

First, we give a sharp degree bound for LCLMs. As we show later, this bound improves upon the one that can be derived from the analysis of van Hoeij's algorithm.

Theorem 6

Let L_1, …, L_k be operators in K[x]⟨∂⟩, and let d = max(deg(L_1), …, deg(L_k)). If n = ord(LCLM(L_1, …, L_k)), then deg(LCLM(L_1, …, L_k)) ≤ ((k − 1)n + k)d.

Proof. By Corollary 4, n ≤ r_1 + ⋯ + r_k. It follows from Theorem 3 and Cramer's rule that every nonzero coefficient of LCLM(L_1, …, L_k) is a quotient of two minors of M_n. Note that every square submatrix of M_n involves at most (k − 1)n + k rows with entries depending on x, since M_n has (n − r_1 + 1) + ⋯ + (n − r_k + 1) ≤ (k − 1)n + k such rows (as r_1 + ⋯ + r_k ≥ n), and its last n + 1 rows are free of x. Thus, the degree of the determinant of such a submatrix is bounded by ((k − 1)n + k)d, because every entry of M_n is of degree at most d.

As a consequence of Corollary 4 and Theorem 6, the first part of Theorem 1 is easily deduced.

5.2 LCLMs of two operators

The following result encapsulates complexity analyses of LCLM algorithms for two operators. Heffter’s, van Hoeij’s and our new algorithm are summarised in Figure 2.

Theorem 7

Let L_1, L_2 be operators of bidegrees at most (r, d) in (∂, x). Then it is possible to compute the LCLM of L_1 and L_2: (a) by Heffter's and van der Hoeven's algorithms, in Õ(r^ω d) operations plus the cost of one multiplication of operators of bidegrees (r, O(rd)) and (r, d); (b) in Õ(r^{ω+1} d) operations by Li's and by van Hoeij's algorithms; (c) in Õ(r^ω d) operations by the new algorithm.

Proof. By [24, Theorems 5, 8 & 23], and using the bounds from Theorem 6, the complexity of van der Hoeven's algorithm is dominated by multiplications of operators of the stated bidegrees. The most costly parts of Heffter's algorithm are Steps 2, 4 and 6. Since the matrix M_n(L_1, L_2) has size O(r) and polynomial coefficients of degree at most d, the rank and kernel computations involved in Steps 2 and 4 can be performed using Õ(r^ω d) operations, by Theorem 2. Step 6 consists in multiplying two operators in K[x]⟨∂⟩ of bidegrees at most (r, O(rd)) and (r, d) in (∂, x); its cost is discussed below. This proves (a).

The dominant parts of Li's algorithm are the computation of the order g of the GCRD, and the expansion of the minors of a polynomial matrix of size O(r) and degree at most d. By using [21, Lemma 5.1] and Theorem 2, g can be computed using Õ(r^ω d) operations in K, and the O(r) minors can be expanded in Õ(r^{ω+1} d) operations. The dominant parts of van Hoeij's algorithm are Steps 2 and 4. Since n = r_1 + r_2 ≤ 2r, the matrix T has size O(r). By an easy induction, its i-th row has polynomial coefficients of degrees at most id, thus T has degree O(rd). By Theorem 2, the rank and kernel computations have complexity Õ(r^ω · rd) = Õ(r^{ω+1} d). This proves (b).

The dominant parts of the new algorithm are Steps 2 and 4. Since n ≤ 2r, the polynomial matrix M_n has size O(r) and degree at most d. By Theorem 2 again, the rank and kernel computations have cost Õ(r^ω d). This completes the proof.

Quite surprisingly, the costs of Heffter's and of van der Hoeven's algorithms are penalised by the complexity of the multiplication of operators, which is not well understood yet for general bidegrees. Precisely, it is an open problem whether two operators of bidegree (r, d) in (∂, x) can be multiplied in nearly optimal time Õ(rd). If such an algorithm were discovered, then the costs of both algorithms would become Õ(r^ω d), improving the corresponding entries in Figure 1.

5.3 LCLMs of several operators

We analyse three algorithms for LCLMs of several operators: DAC, van Hoeij’s and our new algorithm.

5.3.1 LCLMs by divide-and-conquer

Theorem 8

Suppose that we are given an algorithm A computing the LCLM of two differential operators which, on input of bidegree at most (r, d) in (∂, x), computes the LCLM in complexity O(r^a d^b), for some constants a and b independent of r and d.

Then there exists an algorithm which, on input L_1, …, L_k of bidegrees at most (r, d) in (∂, x), computes LCLM(L_1, …, L_k) using Õ((kr)^a (k²rd)^b) operations in K.

Proof. Suppose without loss of generality that k is a power of 2. To compute LCLM(L_1, …, L_k), we use a strategy based on (6), similar to that of the subproduct tree [17, §10.1]: we partition the family into pairs, compute the LCLM of each pair using the algorithm A available for two operators, remove the polynomial content, then compute the LCLMs of pairs of these, and so on. At level 1, the algorithm computes (content-free versions of) the operators LCLM(L_1, L_2), LCLM(L_3, L_4), …; at level 2, the operators LCLM(L_1, …, L_4), LCLM(L_5, …, L_8), …; and so on, the last computation, at level log k, being that of LCLM(L_1, …, L_k) as the LCLM of LCLM(L_1, …, L_{k/2}) and LCLM(L_{k/2+1}, …, L_k). Let C(r, d) denote the complexity of algorithm A on inputs of bidegrees at most (r, d). By Theorem 6, the operators computed at level i have bidegree at most (2^i r, O(4^i rd)). Thus, the total cost of the DAC algorithm on inputs of bidegree at most (r, d) is bounded by the sum over i of (k/2^i) C(2^{i−1} r, O(4^{i−1} rd)), plus the cost of the content removals, which is negligible. Up to polylogarithmic factors, this sum is dominated by its last term, which is Õ((kr)^a (k²rd)^b). This concludes the proof.

The cost of the algorithm is essentially that of its last step; this is a typical feature of DAC algorithms. A similar analysis shows that the iterative algorithm based on formula (5) is less efficient, its complexity being larger by a factor of about k.

As a corollary of Theorems 7 and 8, we get a proof of the complexity estimates in the first three entries of Figure 1.

5.3.2 Van Hoeij’s and the new algorithm

Theorem 9

Let L_1, …, L_k have bidegrees at most (r, d) in (∂, x). One can compute LCLM(L_1, …, L_k): (a) in Õ((kr)^{ω+1} d) operations by van Hoeij's algorithm; (b) in Õ((k²r)^ω d) operations by the new algorithm.

Proof. The proof is similar to that of Theorem 7(b) and (c). The most costly parts of van Hoeij's algorithm are Steps 2 and 4. The matrix T has size O(kr) and polynomial coefficients of degree O(krd). By Theorem 2, the rank and kernel computations have complexity Õ((kr)^ω · krd) = Õ((kr)^{ω+1} d). This proves (a). The dominant parts of the new algorithm are Steps 2 and 4. The polynomial matrix M_n has size O(k²r) and degree at most d. By Theorem 2, the rank and kernel computations have cost Õ((k²r)^ω d).

As a corollary of Theorem 9, we get a proof of the complexity estimates in the last two entries of Figure 1. Note that Cramer's rule applied to the matrix T analysed in the previous proof yields the bound O(k²r²d) on the coefficient degrees of the LCLM. This bound is improved by Theorem 1.

6 Smaller common left multiples

Our approach to computing more common left multiples (CLMs), generally not of minimal order but smaller in total arithmetic size than the LCLM, is similar to the linear-algebraic approach used in Section 3. However, instead of considering a matrix encoding the operators ∂^j L_i, with polynomial coefficients, we turn our attention to a matrix encoding the operators x^i ∂^j L_ℓ, with constant coefficients.

Existence of smaller CLMs. The new building block to consider is, for an operator  in  of total degree  in