Using the cohomology theory of Dwork, as developed by Adolphson and Sperber, we exhibit a deterministic algorithm to compute the zeta function of a nondegenerate hypersurface defined over a finite field. This algorithm is particularly well-suited to work with polynomials in small characteristic that have few monomials (relative to their dimension). Our method covers toric, affine, and projective hypersurfaces and can also be used to compute the $L$-function of an exponential sum.
Computing zeta functions of nondegenerate hypersurfaces with few monomials

Mathematics Subject Classification: 11Y16 (primary), 11M38, 14D10, 14F30 (secondary). The second author was partially supported by the National Security Agency under Grant Number H98230-09-1-0037.
Let $p$ be prime and let $\mathbb{F}_q$ be a finite field with $q = p^a$ elements. Let $X$ be a variety defined over $\mathbb{F}_q$, described by the vanishing of a finite set of polynomial equations with coefficients in $\mathbb{F}_q$. We encode the number of points on $X$ over the extensions $\mathbb{F}_{q^r}$ of $\mathbb{F}_q$ in an exponential generating series, called the zeta function of $X$:
\[
Z(X, T) = \exp\left( \sum_{r=1}^{\infty} \#X(\mathbb{F}_{q^r}) \frac{T^r}{r} \right) \in 1 + T\,\mathbb{Z}[[T]].
\]
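As a sanity check on this definition, the following sketch (our own illustration, not part of the paper's algorithm) computes the coefficients of $Z(X,T)$ from point counts by formally exponentiating the logarithmic series, for $X = \mathbb{P}^1$ over $\mathbb{F}_5$, where $\#X(\mathbb{F}_{q^r}) = q^r + 1$ and $Z(X,T) = 1/((1-T)(1-qT))$:

```python
from fractions import Fraction

def zeta_series(point_counts, prec):
    """Coefficients z_0..z_prec of Z(T) = exp(sum_r N_r T^r / r)."""
    # l[r] = N_r / r are the coefficients of log Z
    l = [Fraction(0)] + [Fraction(point_counts[r - 1], r) for r in range(1, prec + 1)]
    # Z' = (log Z)' * Z gives the recurrence  n z_n = sum_{k=1}^n k l_k z_{n-k}
    z = [Fraction(1)] + [Fraction(0)] * prec
    for n in range(1, prec + 1):
        z[n] = sum(k * l[k] * z[n - k] for k in range(1, n + 1)) / n
    return z

q, prec = 5, 6
counts = [q**r + 1 for r in range(1, prec + 1)]  # points of P^1 over F_{q^r}
z = zeta_series(counts, prec)
# 1/((1-T)(1-qT)) has coefficients (q^{n+1} - 1)/(q - 1)
expected = [Fraction(q**(n + 1) - 1, q - 1) for n in range(prec + 1)]
```

The rationality of $Z(X,T)$ is reflected in the fact that the computed coefficients match those of the rational function exactly.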
The zeta function $Z(X, T)$ is a rational function in $T$, a fact first proved using $p$-adic methods by Dwork [16, 17]. The algorithmic problem of computing $Z(X, T)$ efficiently is of significant foundational interest, owing to many practical and theoretical applications (see e.g. Wan for a discussion).
From a modern point of view, we consider $Z(X, T)$ cohomologically: we build a $p$-adic cohomology theory that functorially associates to $X$ certain vector spaces $H^i$ over a $p$-adic field, each equipped with a (semi-)linear operator, such that $Z(X, T)$ is given by an alternating product of the characteristic polynomials of this operator acting on the spaces $H^i$. The theory of $\ell$-adic étale cohomology, for example, was used by Deligne to show that $Z(X, T)$ satisfies a Riemann hypothesis when $X$ is smooth and projective. Parallel developments have followed in the $p$-adic (de Rham) framework, including the theories of Monsky-Washnitzer, crystalline, and rigid cohomology (see Kedlaya for an introduction). In this paper, for a toric hypersurface $X$ defined by a (nondegenerate) Laurent polynomial $f$ in $n$ variables over $\mathbb{F}_q$, we employ the cohomology theory of Dwork, working with a space obtained as the quotient of a $p$-adic power series ring in $n$ variables by the subspace generated by the images of certain differential operators.
Efforts to make these cohomology theories computationally effective have been extensive. Schoof’s algorithm for counting points on an elliptic curve (generalized by Edixhoven and his coauthors to compute coefficients of modular forms) can be viewed in this light, using the theory of mod $\ell$ étale cohomology. A number of results on the $p$-adic side have also emerged in recent years. In early work, Wan and Lauder and Wan demonstrated that the $p$-adic methods of Dwork can be used to efficiently compute zeta functions in small (fixed) characteristic. Lauder and Wan use the Dwork trace formula and calculate the trace of Frobenius acting on a $p$-adic Banach space, following the original method of Dwork and working on the “chain level”. In this paper, we instead work with the extension of Dwork’s theory due to Adolphson and Sperber; this point of view was also pursued computationally by Lauder and Wan in the special case of Artin-Schreier curves [48, 49]. Under the hypothesis that the Laurent polynomial $f$ is nondegenerate (see below for the precise definition), the zeta function can be recovered from the action of Frobenius on a certain single cohomology space $H$. This method works with exponential sums and so extends naturally to the case of toric, affine, or projective hypersurfaces. (It suffices to consider the case of hypersurfaces to compute the zeta function of any variety defined over a finite field, using inclusion-exclusion or the Cayley trick.)
The method of Dwork takes into account the terms that actually occur in the Laurent polynomial $f$; these methods are especially well-suited when the monomial support of $f$ is small, so that certain combinatorial aspects are simple. This condition, that $f$ have few monomials in its support, in which case we say (loosely) that $f$ is fewnomial (a term coined by Kouchnirenko), is a natural one to consider. For example, many explicit families of hypersurfaces of interest, including the well-studied (projective) Dwork family of Calabi-Yau hypersurfaces (as well as more general monomial deformations of Fermat hypersurfaces), can be written with few monomials. In cryptographic applications, the condition of fewnomialness also often arises. Finally, the running time of algorithms on fewnomial input is interesting to study from the point of view of complexity theory: see, for example, work of Bates, Bihan, and Sottile.
To introduce our result precisely, we now set some notation. Let $X \subseteq \mathbb{T}^n = (\mathbb{G}_m^n)_{\mathbb{F}_q}$ be a toric hypersurface, the closed subset of the torus defined by the vanishing of a Laurent polynomial
\[
f = \sum_{\nu} a_\nu x^\nu \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}].
\]
We use multi-index notation, so $x^\nu = x_1^{\nu_1} \cdots x_n^{\nu_n}$. We sometimes write $f = \sum_{i=1}^{s} a_i x^{\nu_i}$, where $s = \#\operatorname{supp}(f)$. Let $\Delta = \Delta(f)$ be the Newton polytope of $f$, the convex hull of its support
\[
\operatorname{supp}(f) = \{ \nu \in \mathbb{Z}^n : a_\nu \neq 0 \}
\]
in $\mathbb{R}^n$. For simplicity, we assume throughout that $\dim \Delta = n$. For a face $\tau \subseteq \Delta$, let $f_\tau = \sum_{\nu \in \tau} a_\nu x^\nu$. Then we say $f$ is ($\Delta$-)nondegenerate if for all faces $\tau \subseteq \Delta$ (including $\Delta$ itself), the system of equations
\[
f_\tau = x_1 \frac{\partial f_\tau}{\partial x_1} = \dots = x_n \frac{\partial f_\tau}{\partial x_n} = 0
\]
has no solution in $(\overline{\mathbb{F}}_q^{\times})^n$, where $\overline{\mathbb{F}}_q$ is an algebraic closure of $\mathbb{F}_q$. The set of nondegenerate polynomials with respect to a polytope $\Delta$ forms an open subset in the affine space parameterizing their coefficients $(a_\nu)_\nu$: under mild hypotheses, such as when $\Delta$ contains a unimodular simplex, this subset is Zariski dense. (See Batyrev and Cox as a reference for this notion, as well as the work of Castryck and the second author for a detailed analysis of nondegenerate curves.) We distinguish here between $\Delta$ and $\Delta(f)$, which is the convex closure of $\operatorname{supp}(f)$: for $f$ a Laurent polynomial in $n$ variables, $f$ is $\Delta(f)$-nondegenerate if and only if $f$ is nondegenerate with respect to its Newton polytope in the sense of Kouchnirenko, Adolphson and Sperber, and others. Nondegenerate hypersurfaces are an attractive class to consider because many of their geometric properties can be deduced from the combinatorics of their Newton polytopes.
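The Newton polytope itself is a simple computational object. As a concrete aid (our own minimal sketch, using Andrew's monotone chain in the plane), here is the convex hull of the support of a fewnomial curve:

```python
def newton_polytope_2d(support):
    """Vertices of the convex hull of a set of exponent vectors in Z^2
    (Andrew's monotone chain), returned in counterclockwise order."""
    pts = sorted(set(support))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# support of f = x^3 + y^3 + xy + 1, a fewnomial plane curve
support = [(3, 0), (0, 3), (1, 1), (0, 0)]
hull = newton_polytope_2d(support)
```

Note that the interior monomial $xy$ contributes to $\operatorname{supp}(f)$ but not to the vertex set of $\Delta(f)$.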
Let $s = \#\operatorname{supp}(f)$ and let $A$ be the $n \times s$ matrix with entries in $\mathbb{Z}$ whose columns are the vectors $\nu_i$ for $i = 1, \dots, s$. Let $\rho$ be the rank of $A$ modulo $p$. Let $\operatorname{Vol}(\Delta)$ be the normalized volume of $\Delta$, so that a unit hypercube has normalized volume $n!$ and the unit simplex has normalized volume $1$.
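In the plane, the normalized volume is just twice the usual Euclidean area, computable by the shoelace formula; a quick sketch (our own notation):

```python
def normalized_volume_2d(vertices):
    """Normalized volume n! * vol for n = 2: twice the shoelace area of a
    polygon whose vertices are listed in order (either orientation)."""
    twice_area = 0
    for i in range(len(vertices)):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % len(vertices)]
        twice_area += x0 * y1 - x1 * y0
    return abs(twice_area)

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # normalized volume 2! = 2
unit_simplex = [(0, 0), (1, 0), (0, 1)]          # normalized volume 1
```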
We say that a polytope $\Delta$ is confined if $\Delta$ is contained in an orthotope (box) whose side lengths are bounded as in section 3. We say that $f$ is confined if $\Delta(f)$ is confined. A slight extension of a theorem of Lagarias and Ziegler shows that every polytope is $\operatorname{GL}_n(\mathbb{Z})$-equivalent to a confined polytope; this existence can also be made effective. (See section 3 for more detail.) In other words, for each Laurent polynomial $f$ there is a computable monomial change of basis of the torus, giving rise to an equality of zeta functions, under which $f$ is confined. (In the theorem below, at the expense of introducing an additional factor in the running time, one can remove the assumption that $f$ is confined.)
For functions $f, g: \mathbb{Z}_{>0}^{m} \to \mathbb{R}_{>0}$, we say that $f = O(g)$ if there exist $c, M > 0$ such that for every tuple $x$ with all entries at least $M$ we have $f(x) \le c\,g(x)$. (The reader is warned that not all properties familiar from big-Oh notation for functions of one variable extend to the multivariable case; see Howell. In fact, our analysis also holds with Howell’s more restrictive definition, but we will not pursue this further here.) We further use the “soft-Oh” notation, where $f = \widetilde{O}(g)$ if $f = O(g (\log g)^k)$ for some $k > 0$.
Our main result is as follows.
Theorem A. Let $n \in \mathbb{Z}_{>0}$. Then there exists an explicit algorithm that, on input a nondegenerate Laurent polynomial $f \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ with $\dim \Delta(f) = n$ and an integer $N \ge 1$, computes as output $P(T)$ modulo $p^N$. If further $f$ is confined, then this algorithm uses a number of bit operations bounded explicitly in terms of $s$, $p$, $a$, $N$, and $\operatorname{Vol}(\Delta)$.
To recover the zeta function $Z(X, T)$ (as an element of $\mathbb{Q}(T)$), if we fix both the dimension $n$ and the number of monomials $s$, we have the following result.
Theorem B. Let $n, s \in \mathbb{Z}_{>0}$. Then there exists an explicit algorithm that, on input a confined, nondegenerate Laurent polynomial $f \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ with $\dim \Delta(f) = n$ and $\#\operatorname{supp}(f) = s$, computes as output $Z(X, T)$ using a number of bit operations bounded explicitly in terms of $p$, $a$, and $\operatorname{Vol}(\Delta)$.
According to a theorem of Adolphson and Sperber, under the hypothesis that $f$ is nondegenerate, $P(T)$ is a polynomial of degree $n!\operatorname{vol}(\Delta) = \operatorname{Vol}(\Delta)$. Therefore, in the context of Theorem B, if $p$ is small (or fixed), then our algorithm runs in polynomial time in the (dense) output size, which is the best one could hope for (aside from minimizing the degree of this polynomial). (The fewnomial input size, on the other hand, is comparatively small for confined $f$ and fixed $n$.)
In our theorem, we require the dimension $n$ to be fixed for several reasons. First, we employ well-known algorithms for lattice polytopes which have only been analyzed assuming that the dimension is fixed. (They further assume that arithmetic operations in $\mathbb{Z}$ take time $O(1)$, which is nearly valid in the usual bit-complexity model if $\Delta$ is confined and $n$ is fixed; for a discussion of this point, see Section 3.) Second, it is often quite natural from a geometric point of view to consider the dimension to be fixed; one often considers families of hypersurfaces of a fixed dimension, for example. Finally, allowing fewnomial input and output and varying dimension, the problem of computing $Z(X, T)$ is harder than the NP-complete problem 3-SAT (indeed, for the latter one only wishes to know whether $X(\mathbb{F}_q) \neq \emptyset$ for an affine hypersurface of small degree). For these reasons, we restrict our analysis (continuing below) to fixed dimension.
Our method follows in the same vein as other recently introduced $p$-adic cohomological techniques. The methods of Lauder and Wan mentioned above compute $Z(X, T)$ for a polynomial of total degree $d$ in $n$ variables in time polynomial in $p$, $a$, and $d^n$. The dense input size of such an $f$ is roughly $d^n \log q$; consequently, if the prime $p$ (and dimension $n$) are fixed, then their algorithm runs in polynomial time in the dense input size. Their method, although apparently not practical, is completely general and does not require any hypothesis on $X$. Our method can be analyzed on dense input (see Section 5) as well, running in comparable time with no condition on the number of monomials in the support of $f$.
In a different direction, Kedlaya (see also the presentation by Edixhoven) used Monsky-Washnitzer cohomology to compute the zeta function of a hyperelliptic curve of genus $g$ over $\mathbb{F}_q$ in time roughly linear in $p$ and polynomial in $g$ and $a$. (Note here that the dense input size is $O(g a \log p)$.) This idea has been taken up by several others: see, for example, work of Abbott, Kedlaya, and Roe, who compute the zeta function of a projective hypersurface by working in the complement of the hypersurface and using Mumford reduction. (Indeed, Kedlaya has suggested that there should be a natural extension of his ideas to the realm of toric hypersurfaces.) Our method also mirrors the algorithm of Castryck, Denef, and Vercauteren, who tackle the case of nondegenerate curves. Their method has good asymptotic behavior but to be practical needs an optimized implementation [9, §1.2.4]. However, rather than following this vein and working with Monsky-Washnitzer ($p$-adic de Rham) cohomology, we employ the cohomology theory of Dwork, which has a more combinatorial flavor.
In a yet further direction, Lauder has used Dwork’s theory of $p$-adic differential equations to compute zeta functions using deformation and recursion. The Frobenius, acting on the members of a one-parameter family, satisfies a differential equation coming from the Gauss-Manin connection. Lauder uses this equation to solve for the action given an initial condition arising from the action on the cohomology of a simple variety which one can compute directly. Our method fits into this framework as it provides natural base varieties to deform from: indeed, the idea of deformation in the context of nondegenerate curves has been pursued by Tuitman. The methods of Lauder show that one can compute $Z(X, T)$ for a smooth projective hypersurface of degree $d$ with nonvanishing diagonal terms in time polynomial in $p$, $a$, and $d^n$. The deformation method has also been pursued fruitfully by Gerkmann and others in different contexts.
Adapting an idea of Chudnovsky and Chudnovsky and of Bostan, Gaudry, and Schost for accelerated reduction, Harvey has improved Kedlaya’s method for hyperelliptic curves, reducing the dependence of the runtime on $p$ to roughly $\sqrt{p}$. This approach appears to extend to higher dimensions as well, extending the method of Abbott, Kedlaya, and Roe: his method appears to give a comparable runtime under a smoothness hypothesis analogous to the condition of nondegeneracy (but somewhat weaker). It would be interesting to see how his ideas for lowering the exponent on $p$ might apply in our situation.
This paper is organized as follows. In section 1, we introduce the cohomology theory of Dwork and give an overview of our method. In section 2, we discuss each step of the algorithm in turn: computing the splitting function and the Jacobian ring, the computation of Frobenius (where the condition of sparsity enters), and the reduction theory in cohomology. In section 3, we give some algorithms for computing with polytopes. We then discuss running time and precision estimates for the complete algorithm in section 4. Finally, in section 5 we discuss the case $p = 2$ and consider some other possible modifications. We conclude in section 6 with some examples.
In this section, we give an overview of our algorithm. Our introduction will be concise; for a more complete treatment of the theory of Dwork, see Koblitz, Lauder and Wan, and Monsky.
In this section, we assume $p$ is odd; see section 5 for a discussion of the case $p = 2$.
Let $f \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ be a Laurent polynomial and let $X \subseteq \mathbb{T}^n$ be the toric hypersurface defined by the vanishing of $f$. Let $\Psi: \mathbb{F}_q \to R^{\times}$ be a nontrivial additive character (with $R$ a commutative ring of characteristic zero), so that
\[
\Psi(x + y) = \Psi(x)\Psi(y)
\]
for all $x, y \in \mathbb{F}_q$. A point of departure for the theory of Dwork is the observation that for $x \in \mathbb{T}^n(\mathbb{F}_q)$, we have
\[
\sum_{t \in \mathbb{F}_q} \Psi(t f(x)) =
\begin{cases} q, & \text{if } f(x) = 0; \\ 0, & \text{otherwise.} \end{cases}
\tag{1.1}
\]
In other words, counting the set of points $X(\mathbb{F}_q)$ can be achieved by instead evaluating an exponential sum (on either $\mathbb{T}^n \times \mathbb{A}^1$ or $\mathbb{T}^{n+1}$).
For $r \in \mathbb{Z}_{\ge 1}$, we define a system of nontrivial additive characters by $\Psi_r = \Psi \circ \operatorname{Tr}_{\mathbb{F}_{q^r}/\mathbb{F}_q}$, where $\operatorname{Tr}_{\mathbb{F}_{q^r}/\mathbb{F}_q}$ is the trace map, and we define the exponential sums
\[
S_r(f) = \sum_{x \in \mathbb{T}^n(\mathbb{F}_{q^r})} \Psi_r(f(x)).
\]
The $L$-function associated to $f$ over $\mathbb{T}^n$ is defined to be
\[
L(f, T) = \exp\left( \sum_{r=1}^{\infty} S_r(f) \frac{T^r}{r} \right).
\]
Then by (1.1), the zeta function $Z(X, T)$ can be expressed in terms of such $L$-functions.
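To make the definition of $S_r(f)$ concrete, here is a small numerical sketch over $\mathbb{C}$ (before any $p$-adic interpolation), using the complex character $x \mapsto \exp(2\pi i x / p)$ on $\mathbb{F}_p$; the test polynomial $x + x^{-1}$, giving a classical Kloosterman sum, is our illustrative choice:

```python
import cmath

def exp_sum(f, p):
    """S(f) = sum over x in F_p^* of exp(2 pi i f(x)/p), a complex exponential sum."""
    return sum(cmath.exp(2j * cmath.pi * (f(x) % p) / p) for x in range(1, p))

p = 7
s_linear = exp_sum(lambda x: x, p)                  # sums all nontrivial character values
s_kloos = exp_sum(lambda x: x + pow(x, -1, p), p)   # a Kloosterman sum
```

The linear case sums every nontrivial character value and so equals $-1$, while the Kloosterman sum is real and obeys the Weil bound $|S| \le 2\sqrt{p}$.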
The Dwork splitting function and interpolation
Complex characters are defined via the exponential map, but the theory takes off when the ring where the character takes values is a $p$-adic ring, and the exponential function does not have a large enough $p$-adic radius of convergence to be useful. We improve this radius of convergence by using a modified exponential function as follows. Let $\pi$ be an element of the algebraic closure of $\mathbb{Q}_p$ that satisfies $\pi^{p-1} = -p$; then $\mathbb{Q}_p(\pi) = \mathbb{Q}_p(\zeta_p)$, where $\zeta_p$ is a primitive $p$th root of unity. We define the function
\[
\theta(t) = \exp(\pi(t - t^p)) = \sum_{i=0}^{\infty} \lambda_i t^i,
\]
which is called a Dwork splitting function. It is sometimes denoted $\theta_1(t)$ to distinguish it from other splitting functions. We have
\[
\operatorname{ord}_p(\lambda_i) \ge \frac{i(p-1)}{p^2},
\tag{1.4}
\]
where $\operatorname{ord}_p$ is the $p$-adic valuation normalized so that $\operatorname{ord}_p(p) = 1$; thus, in fact, $\lambda_i \in \mathbb{Z}_p[\pi]$. We observe that $\theta(1)$ is a primitive $p$th root of unity, and so we obtain our additive characters via the maps
\[
\Psi_r: \mathbb{F}_{q^r} \to \mathbb{Q}_p(\pi)^{\times}, \qquad
\Psi_r(x) = \theta(1)^{\operatorname{Tr}_{\mathbb{F}_{q^r}/\mathbb{F}_p}(x)},
\]
where $\operatorname{Tr}_{\mathbb{F}_{q^r}/\mathbb{F}_p}$ is the absolute trace.
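The valuation estimate can be checked numerically. The following sketch (our own representation, not the paper's) computes the coefficients $\lambda_i$ exactly in $\mathbb{Z}_p[\pi]/(\pi^{p-1} + p)$ modulo $p^N$, storing an element by its coordinates on the basis $1, \pi, \dots, \pi^{p-2}$:

```python
from math import factorial

def vp(x, p, cap):
    """p-adic valuation of the integer x, capped at cap (0 maps to cap)."""
    if x == 0:
        return cap
    v = 0
    while x % p == 0 and v < cap:
        x //= p
        v += 1
    return v

def theta_coeffs(p, N, num_terms):
    """lambda_0..lambda_{num_terms-1} of theta(t) = exp(pi (t - t^p)), with
    pi^(p-1) = -p, each as coordinates on 1, pi, ..., pi^(p-2) over Z/p^N."""
    pN = p**N
    coeffs = []
    for i in range(num_terms):
        c = [0] * (p - 1)
        # lambda_i = sum over j + p k = i of pi^(j+k) (-1)^k / (j! k!)
        for k in range(i // p + 1):
            j = i - p * k
            e = j + k
            r, m = e % (p - 1), e // (p - 1)      # pi^e = (-p)^m * pi^r
            fact = factorial(j) * factorial(k)
            v = vp(fact, p, N + m)
            unit = fact // p**v
            # p^m / (j! k!) = p^(m - v) / unit; one checks m >= v always
            val = pow(unit, -1, pN) * p**(m - v) % pN
            if (k + m) % 2 == 1:                   # total sign (-1)^k (-1)^m
                val = (pN - val) % pN
            c[r] = (c[r] + val) % pN
        coeffs.append(c)
    return coeffs

p, N = 3, 5
lam = theta_coeffs(p, N, 8)
```

For instance, $\lambda_2 = \pi^2/2 = -3/2$ for $p = 3$, and every coefficient satisfies the stated valuation bound (with $\operatorname{ord}_p(\pi) = 1/(p-1)$).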
The values of the characters $\Psi_r$ can be $p$-adically interpolated in a way consistent with field extensions, as follows. Let $\mathbb{Q}_{q^r}$ be the unramified extension of $\mathbb{Q}_p$ of degree $ar$, and let $\mathbb{Z}_{q^r}$ denote its ring of integers, so that $\mathbb{Z}_{q^r}$ is the ring of Witt vectors over $\mathbb{F}_{q^r}$. Let $\sigma$ denote the $p$-power Frobenius (lifting the $p$th power map on the residue field $\mathbb{F}_{q^r}$). There is a canonical character of the multiplicative group $\mathbb{F}_{q^r}^{\times}$ taking values in $\mathbb{Z}_{q^r}^{\times}$, called the Teichmüller character, that takes an element $x$ to the element $\widehat{x}$ satisfying $\widehat{x}^{\,q^r} = \widehat{x}$ and such that $\widehat{x}$ reduces to $x$ in $\mathbb{F}_{q^r}$. For such a Teichmüller representative $\widehat{x}$ lifting $x$, we find that
\[
\Psi_1(x) = \prod_{i=0}^{a-1} \theta(\widehat{x}^{\,p^i}) \qquad \text{for } x \in \mathbb{F}_q,
\]
and extending this for $r \ge 1$ we have
\[
\Psi_r(x) = \prod_{i=0}^{ar-1} \theta(\widehat{x}^{\,p^i})
\tag{1.5}
\]
for $x \in \mathbb{F}_{q^r}$ and $\widehat{x}$ a Teichmüller lift of $x$. Here $\sigma$ denotes the $p$-power Frobenius, the ring automorphism of $\mathbb{Z}_{q^r}$ that reduces to the map $x \mapsto x^p$ modulo $p$. Then
\[
\widehat{x}^{\,p^i} = \sigma^i(\widehat{x}).
\]
We now consider these character values applied to values of our Laurent polynomial $f$. Write $f = \sum_{i=1}^{s} a_i x^{\nu_i}$ in multi-index notation; we assume that each $a_i \neq 0$. Let $\widehat{f} = \sum_{i=1}^{s} \widehat{a}_i x^{\nu_i}$, where $\widehat{a}_i$ is the Teichmüller lift of $a_i$. In light of (1.5), we are led to consider the power series
\[
F(x) = \prod_{i=1}^{s} \theta(\widehat{a}_i x^{\nu_i})
\quad \text{and} \quad
F_r(x) = \prod_{i=0}^{ar-1} F^{\sigma^i}(x^{p^i}),
\tag{1.6}
\]
where $F^{\sigma^i}$ denotes the power series obtained by applying $\sigma^i$ to the coefficients of $F$. (The abuse of notation which identifies a power series and its specializations will only occur in this paragraph.) It then follows from (1.5) and a straightforward calculation that
\[
\Psi_r(f(x)) = F_r(\widehat{x})
\]
for all $x \in \mathbb{T}^n(\mathbb{F}_{q^r})$, where $\widehat{x}$ denotes the Teichmüller lift. We have thereby extended the interpolation of the values of $\Psi_r$ to the power series (1.6).
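Teichmüller lifts are straightforward to compute in practice; here is a minimal sketch over $\mathbb{Z}/p^N$ (our own illustration), using the standard fact that $\widehat{x} = \lim_{k \to \infty} x^{p^k}$:

```python
def teichmuller(x, p, N):
    """Teichmuller lift of x in F_p to Z/p^N: the unique (p-1)st root of
    unity congruent to x mod p, computed as the stable value of x^(p^k)."""
    t, pN = x % p, p**N
    for _ in range(N):   # each iteration gains (at least) one p-adic digit
        t = pow(t, p, pN)
    return t

p, N = 7, 6
t = teichmuller(3, p, N)
```

The result satisfies $t^p \equiv t$ and $t^{p-1} \equiv 1 \pmod{p^N}$, with $t \equiv 3 \pmod{p}$.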
Dwork trace formula
So far, we have related the zeta function to the $L$-function of an exponential sum via a $p$-adic additive character arising from the Dwork splitting function, and we have interpolated these character values in a power series $F(x)$. In order to move this to cohomology, we define a space of $p$-adic analytic functions like $F$ with similar support and $p$-adic growth.
Let $\Delta$ be the Newton polytope of $f$, the convex hull of its support in $\mathbb{R}^n$. For $d \in \mathbb{Z}_{\ge 0}$, let $d\Delta$ denote the $d$th dilation of $\Delta$. Let $L$ be the ring of $p$-adic series supported on the lattice points of the dilations $d\Delta$, with coefficients whose valuations grow at the rate forced by the estimate (1.4). The estimate (1.4) implies that $F(x) \in L$, and so multiplication by $F(x)$ defines a linear operator on $L$, which we also denote $F$.
On the space $L$, we have a “left inverse of Frobenius” $\psi$ defined by
\[
\psi\Big( \sum_{\nu} c_{\nu} x^{\nu} \Big) = \sum_{\nu} c_{p\nu} x^{\nu};
\]
in multi-index notation, the condition $p \mid \nu$ means $p \mid \nu_j$ for all $j$. The map $\psi$, combined with $\sigma^{-1}$ on coefficients, is semilinear as a map of free $\mathbb{Z}_q$-modules.
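On finite approximations, $\psi$ is easy to realize on sparse (fewnomial) representations; a sketch with Laurent polynomials stored as dictionaries mapping exponent vectors to coefficients (our own data layout):

```python
def psi(g, p):
    """psi: sum_v c_v x^v  ->  sum_v c_(p v) x^v, i.e. keep only the terms
    whose exponent vector is divisible by p, and divide that vector by p."""
    return {tuple(e // p for e in u): c
            for u, c in g.items() if all(e % p == 0 for e in u)}

def frob_monomials(g, p):
    """The substitution x -> x^p on monomials: x^v -> x^(p v)."""
    return {tuple(p * e for e in u): c for u, c in g.items()}

p = 3
g = {(1, 0): 2, (0, 2): 5, (3, 3): 7, (-3, 0): 1}
```

The identity $\psi \circ [x \mapsto x^p] = \mathrm{id}$ is the sense in which $\psi$ is a left inverse of Frobenius.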
Finally, let $\alpha_1 = \sigma^{-1} \circ \psi \circ F$ and let $\alpha = \alpha_1^{a}$. Then $\alpha$ is $\mathbb{Z}_q$-linear, and another calculation reveals in fact that $\alpha = \psi^a \circ F_a$ (the composition taken $a$ times).
The Dwork trace formula then asserts that
\[
S_r(f) = (q^r - 1)^n \operatorname{Tr}(\alpha^r).
\]
It follows that
\[
L(f, T) = \prod_{i=0}^{n} \det(1 - q^i T \alpha)^{(-1)^{\,n-i+1} \binom{n}{i}}.
\]
We now proceed one step further and move to the level of cohomology.
We now consider a Koszul complex associated to $f$ as follows. To ease notation in this subsection, write $\partial_i = x_i\,\partial/\partial x_i$. For $i = 1, \dots, n$, let
\[
D_i = \partial_i + \pi\,\partial_i(\widehat{f})
\tag{1.9}
\]
and define the operator $D_i: L \to L$ by this formula (the latter summand is the operator given by multiplication by $\pi\,\partial_i(\widehat{f}) = \pi x_i\,\partial\widehat{f}/\partial x_i$). The operators $D_i$ commute. For $0 \le i \le n$, let $K^i$ denote the degree-$i$ term of the Koszul complex on $L$ associated to $D_1, \dots, D_n$. Let $K^{\bullet}$ be the complex
\[
K^{\bullet}: \quad 0 \to K^n \to \dots \to K^1 \to K^0 = L \to 0.
\]
Now $\alpha$ induces a map on the complex $K^{\bullet}$, since one checks that $D_i \circ \alpha = q\,(\alpha \circ D_i)$ for all $i$. (One similarly has a map induced by $\alpha_1$, replacing $q$ by $p$.)
We recall our notation from the introduction. For a face $\tau \subseteq \Delta$, let $f_\tau = \sum_{\nu \in \tau} a_\nu x^{\nu}$. Then we say $f$ is nondegenerate if for all faces $\tau \subseteq \Delta$ (of any dimension, including $\Delta$ itself), the system of equations
\[
f_\tau = x_1 \frac{\partial f_\tau}{\partial x_1} = \dots = x_n \frac{\partial f_\tau}{\partial x_n} = 0
\]
has no solution in $(\overline{\mathbb{F}}_q^{\times})^n$, where $\overline{\mathbb{F}}_q$ is an algebraic closure of $\mathbb{F}_q$.
Suppose $f$ is nondegenerate. Then all the cohomology spaces of $K^{\bullet}$ are trivial except in the top degree. Let
\[
H = L \Big/ \sum_{i=1}^{n} D_i L.
\]
Then by work of Adolphson and Sperber, the $\mathbb{Z}_q$-module $H$ is free of rank $\operatorname{Vol}(\Delta) = n!\operatorname{vol}(\Delta)$ (equal to the normalized volume of the cone over $\Delta$ in $\mathbb{R}^{n+1}$) and
\[
L(f, T)^{(-1)^{n-1}} = \det(1 - T\alpha \mid H).
\tag{1.12}
\]
Let $A$ (resp. $A_1$) be a matrix of $\alpha$ (resp. $\alpha_1$) acting on $H$. Then we have
\[
A = A_1^{\sigma^{a-1}} \cdots A_1^{\sigma} A_1,
\tag{1.13}
\]
where $A_1^{\sigma}$ denotes the matrix obtained by applying $\sigma$ to each entry of $A_1$.
An overview of the algorithm
We now describe how to effectively compute the terms in the formula (1.12), and in particular the matrix (1.13). We sketch an overview and wait to describe each of these steps and their running time in detail in the sections that follow.
The algorithm takes as input a nondegenerate (confined) Laurent polynomial $f$ and a precision $N$, and it produces as output the polynomial $P(T)$ modulo $p^N$. For $N$ large enough, we recover the coefficients of $P(T)$ in $\mathbb{Z}$, and then from (1.2) we recover $Z(X, T)$.
By carefully factoring out the algebraic element $\pi$, we may work in a smaller coefficient ring; see Lemma 2.10 and the accompanying discussion. In (1.7) we have worked with the power series ring $L$, but by the convergence behavior of elements of $L$, working modulo $p^N$ these power series become polynomials. So we define $R$ to be the corresponding ring of polynomials. The ring $R$ is the monoid algebra arising from the cone over $\Delta$ with coefficients in $\mathbb{Z}_q$, and it is naturally graded by dilation degree. Recall (1.9) that we have defined the operators $D_i$ for $i = 1, \dots, n$ (and the quotient $H$).
Our algorithm has 4 steps.
Step 1: Compute the Teichmüller lift $\widehat{f}$ of $f$. Using linear algebra over $\mathbb{Z}_q$, compute a monomial basis $B$ for the Jacobian ring.
(The monomial basis $B$ yields a basis for $H$.)
Step 2: For each monomial $x^{\nu} \in B$, compute the action $\alpha_1(x^{\nu})$ of the Frobenius using “fewnomial enumeration”.
Step 3: For each $x^{\nu} \in B$, reduce $\alpha_1(x^{\nu})$ in cohomology using the differential operators $D_i$ to an element in the $\mathbb{Z}_q$-span of $B$. (The matrices implicitly computed in Step 1 are used in this reduction.)
Step 4: Compute the resulting matrix $A_1$ modulo $p^N$, then compute $A$ using (1.13) and finally the characteristic polynomial of $A$.
2 The algorithm
We now describe each step of the algorithm announced in our main theorem and introduced in section 1. We retain the notation from section 1.
Step 1: Computing the Jacobian ring
We begin by describing the computation of a basis of the Jacobian ring;
as a result of this computation, we also obtain matrices which will be used in the reduction step in Step 3.
If $f$ is nondegenerate, then the Jacobian ring is a free $\mathbb{Z}_q$-module with a basis, of cardinality $n!\operatorname{vol}(\Delta)$, consisting of monomials of degree at most $n$.
The proof of Monsky (see work of Adolphson and Sperber [3, Appendix]) shows that under the nondegeneracy hypothesis, the associated Koszul complex is acyclic modulo $p$ and therefore lifts to an acyclic complex over $\mathbb{Z}_q$ (as the modules are complete, separated, and flat over $\mathbb{Z}_q$).
To compute with the ring $R$, we need some standard algorithms for computing with polytopes. For a set $S \subseteq \mathbb{R}^n$, we denote by $\operatorname{conv}(S)$ its convex hull.
Lemma 2.2. There exists an efficient algorithm that, given a finite set $S \subseteq \mathbb{Z}^n$, computes the set $\operatorname{conv}(S) \cap \mathbb{Z}^n$.
We discuss this algorithm and its running time in detail in the next section (Proposition 3.3); it will be treated as a black box for now.
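As an illustration of the lemma in the plane (a minimal brute-force sketch of our own, not the efficient algorithm of section 3), one can enumerate the lattice points of a convex lattice polygon directly from its bounding box:

```python
def lattice_points(vertices):
    """Lattice points of a convex lattice polygon (vertices listed in
    counterclockwise order), by brute force over the bounding box."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    def inside(px, py):
        for i in range(len(vertices)):
            ox, oy = vertices[i]
            ax, ay = vertices[(i + 1) % len(vertices)]
            # the point must lie on (or to the left of) every directed edge
            if (ax - ox) * (py - oy) - (ay - oy) * (px - ox) < 0:
                return False
        return True
    return [(x, y)
            for x in range(min(xs), max(xs) + 1)
            for y in range(min(ys), max(ys) + 1)
            if inside(x, y)]

pts = lattice_points([(0, 0), (2, 0), (0, 2)])  # twice the unit simplex
```

The dilation $2\Delta$ of the unit simplex contains six lattice points, matching its Ehrhart count.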
Let $\preceq$ be a term order on the monomials in $R$ that respects the grading. We begin by computing in each degree $d$ the set of monomials in $R_d$ using Lemma 2.2, and we order them by $\preceq$. Then, for each $d$, a spanning set for the degree-$d$ subspace of the Jacobian ideal is given by the products of the monomials in lower degree with the generators of the Jacobian ideal. Finally, for each $d$, let $M_d$ be the matrix whose columns are indexed by the monomial basis for $R_d$ ordered by $\preceq$, and whose rows record the coefficients of the spanning set for the degree-$d$ subspace.
We then compute the row-echelon form of $M_d$ for each $d$ using linear algebra over $\mathbb{Z}_q$. According to Lemma 2.1, every pivot in $M_d$ can be taken to be a unit in $\mathbb{Z}_q$; thus, a monomial basis for the Jacobian ring is then obtained from the non-pivot columns, i.e. a choice of monomial basis for the cokernels of the $M_d$. In particular, each $M_d$ has full rank and so has a maximal square submatrix with unit determinant.
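A minimal sketch of this unit-pivot reduction over $\mathbb{Z}/p^N$ (our own toy matrix; the non-pivot columns play the role of the cokernel's monomial basis):

```python
def row_echelon_unit_pivots(M, p, N):
    """Reduced row-echelon form over Z/p^N using only unit pivots (entries
    not divisible by p); returns the reduced matrix and the pivot columns."""
    pN = p**N
    M = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] % p != 0), None)
        if pr is None:
            continue               # no unit pivot available in this column
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], -1, pN)
        M[r] = [x * inv % pN for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [(a - M[i][c] * b) % pN for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

p, N = 3, 4
E, piv = row_echelon_unit_pivots([[2, 1, 0], [1, 2, 3], [0, 1, 1]], p, N)
```

In this toy example only the first two columns admit unit pivots; the remaining column corresponds to a basis element of the cokernel.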
Step 2: Computing the action of Frobenius
Recall that we have defined the Dwork splitting function
\[
\theta(t) = \exp(\pi(t - t^p)) = \sum_{i=0}^{\infty} \lambda_i t^i
\]
with $\operatorname{ord}_p(\lambda_i) \ge i(p-1)/p^2$. The image of $\theta(t)$ modulo $p^N$ is therefore a polynomial of degree less than $Np^2/(p-1)$. We compute it as the product of the truncations to this degree of $\exp(\pi t)$ and $\exp(-\pi t^p)$, two polynomials of this degree.
Here we have used that $p$ is odd. For $p = 2$, the splitting function above does not converge sufficiently fast for the algorithm described below to work. For the modifications necessary, see section 5.
We have $\widehat{f} = \sum_{i=1}^{s} \widehat{a}_i x^{\nu_i}$ with $\widehat{a}_i \in \mathbb{Z}_q$ and
\[
F(x) = \prod_{i=1}^{s} \theta(\widehat{a}_i x^{\nu_i}).
\tag{2.3}
\]
One option is to multiply out the product (2.3) naively. Instead, we seek to take advantage of the fewnomialness of $f$. We have
\[
\theta(\widehat{a}_i x^{\nu_i}) = \sum_{k=0}^{\infty} \lambda_k \widehat{a}_i^{\,k} x^{k \nu_i}.
\tag{2.4}
\]
Expanding (2.4) out, we obtain
\[
F(x) = \sum_{k_1, \dots, k_s \ge 0} \Big( \prod_{i=1}^{s} \lambda_{k_i} \widehat{a}_i^{\,k_i} \Big) x^{k_1 \nu_1 + \dots + k_s \nu_s}.
\tag{2.5}
\]
We make further abbreviations using multi-index notation as follows: for $k = (k_1, \dots, k_s) \in \mathbb{Z}_{\ge 0}^{s}$, which we abbreviate $k \ge 0$, we write $\lambda_k = \prod_{i=1}^{s} \lambda_{k_i}$ and $\widehat{a}^{\,k} = \prod_{i=1}^{s} \widehat{a}_i^{\,k_i}$, and we write $|k| = k_1 + \dots + k_s$, and finally $Ak = k_1 \nu_1 + \dots + k_s \nu_s$ for the matrix-vector product. Then (2.5) becomes simply
\[
F(x) = \sum_{k \ge 0} \lambda_k \widehat{a}^{\,k} x^{Ak}.
\tag{2.6}
\]
Write $F(x) = \sum_{\nu} F_{\nu} x^{\nu}$. Then from (2.6) we have
\[
F_{\nu} = \sum_{\substack{k \ge 0 \\ Ak = \nu}} \lambda_k \widehat{a}^{\,k}.
\tag{2.7}
\]
Working modulo $p^N$, by the valuation estimate for the $\lambda_i$, the set of indices contributing to the sum on the right side of (2.7) is then contained in the set
\[
\big\{ k \in \mathbb{Z}_{\ge 0}^{s} : Ak = \nu, \ |k| < Np^2/(p-1) \big\}.
\]
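Fewnomial enumeration thus reduces to listing multi-indices $k \in \mathbb{Z}_{\ge 0}^{s}$ of bounded size; there are $\binom{m+s}{s}$ of them with $|k| \le m$, polynomial in $m$ for fixed $s$. A sketch of the enumeration (our own helper, by stars and bars):

```python
def multi_indices(s, m):
    """All k in Z_{>=0}^s with |k| = k_1 + ... + k_s <= m."""
    if s == 0:
        return [()]
    return [(k0,) + rest
            for k0 in range(m + 1)
            for rest in multi_indices(s - 1, m - k0)]

ks = multi_indices(3, 4)  # binomial(4 + 3, 3) = 35 multi-indices
```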
When the set