Computing hypergeometric functions rigorously

Fredrik Johansson (LFANT, INRIA and Institut de Mathématiques de Bordeaux; fredrik.johansson@gmail.com). This research was partially funded by ERC Starting Grant ANTICS 278537.
Abstract

We present an efficient implementation of hypergeometric functions in arbitrary-precision interval arithmetic. The functions ${}_0F_1$, ${}_1F_1$, ${}_2F_1$ and ${}_2F_0$ (or the Kummer $U$-function) are supported for unrestricted complex parameters and argument, and by extension, we cover exponential and trigonometric integrals, error functions, Fresnel integrals, incomplete gamma and beta functions, Bessel functions, Airy functions, Legendre functions, Jacobi polynomials, complete elliptic integrals, and other special functions. The output can be used directly for interval computations or to generate provably correct floating-point approximations in any format. Performance is competitive with earlier arbitrary-precision software, and sometimes orders of magnitude faster. We also partially cover the generalized hypergeometric function ${}_pF_q$ and computation of high-order parameter derivatives.

1 Introduction

Naive numerical methods for evaluating special functions are prone to give inaccurate results, particularly in the complex domain. Ensuring just a few correct digits in double precision (53-bit) floating-point arithmetic can be hard, and many scientific applications require even higher accuracy [3]. To give just a few examples, Bessel functions are needed with quad precision (113-bit) accuracy in some electromagnetics and hydrogeophysics simulations [61, 40], with hundreds of digits in integer relation searches such as for Ising class integrals [4], and with thousands of digits in number theory for computing with L-functions and modular forms [8].

In this work, we address evaluation of the confluent hypergeometric functions ${}_0F_1$, ${}_1F_1$ and ${}_2F_0$ (equivalently, the Kummer $U$-function) and the Gauss hypergeometric function ${}_2F_1$ for complex parameters and argument, to any accuracy and with rigorous error bounds. Based on these very general functions, we are in turn able to compute incomplete gamma and beta functions, Bessel functions, Legendre functions, and others, covering a large portion of the special functions in standard references such as Abramowitz and Stegun [1] and the Digital Library of Mathematical Functions [21].

The implementation is part of the C library Arb [30]222https://github.com/fredrik-johansson/arb/, which is free software (GNU LGPL). The code is extensively tested and documented, thread-safe, and runs on common 32-bit and 64-bit platforms. Arb is a standard package in SageMath [54], which provides a partial high-level interface. Partial Python and Julia bindings are also available. Interfacing is easy from many other languages, including Fortran/C++.

2 Hypergeometric functions

A function $f(z) = \sum_{k=0}^{\infty} c(k) z^k$ is called hypergeometric if the Taylor coefficients $c(k)$ form a hypergeometric sequence, meaning that they satisfy a first-order recurrence relation $c(k+1) = R(k)\,c(k)$ where the term ratio $R(k)$ is a rational function of $k$.

The product (or quotient) of two hypergeometric sequences with respective term ratios $R_1(k)$ and $R_2(k)$ is hypergeometric with term ratio $R_1(k) R_2(k)$ (or $R_1(k)/R_2(k)$). Conversely, by factoring $R(k)$, we can write any hypergeometric sequence in closed form using powers $z^k$ and gamma functions $\Gamma(a+k)$, times a constant determined by the initial value of the recurrence. The rising factorial $(a)_k = a(a+1)\cdots(a+k-1) = \Gamma(a+k)/\Gamma(a)$ is often used instead of the gamma function, depending on the initial value. A standard notation for hypergeometric functions is offered by the generalized hypergeometric function (of order $(p,q)$)

$${}_pF_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z) = \sum_{k=0}^{\infty} T(k), \qquad T(k) = \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k} \frac{z^k}{k!} \tag{1}$$

or the regularized generalized hypergeometric function

$${}_p\tilde F_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z) = \frac{{}_pF_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z)}{\Gamma(b_1)\cdots\Gamma(b_q)} \tag{2}$$

where $a_1,\ldots,a_p$ and $b_1,\ldots,b_q$ are called (upper and lower) parameters, and $z$ is called the argument (see [21, Chapter 16] and [70]). Both (1) and (2) are solutions of the linear ODE

$$\left[\theta \prod_{i=1}^{q} (\theta + b_i - 1) - z \prod_{i=1}^{p} (\theta + a_i)\right] f(z) = 0, \qquad \theta = z\frac{\mathrm{d}}{\mathrm{d}z}. \tag{3}$$
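For concreteness (a routine verification added here), specializing (3) to $p = q = 0$ gives $\theta f = z f$, i.e. $f'(z) = f(z)$, recovering ${}_0F_0(z) = e^z$, while $p = q = 1$ expands to Kummer's equation:

$$\theta(\theta + b - 1) f = z(\theta + a) f \quad\Longleftrightarrow\quad z f''(z) + (b - z) f'(z) - a f(z) = 0.$$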

2.1 Analytic interpretation

Some clarification is needed to interpret the formal sums (1) and (2) as analytic functions. If any $a_i \in \mathbb{Z}_{\le 0}$, the series terminates as $\sum_{k=0}^{n} T(k)$ with $n = \max\{-a_i : a_i \in \mathbb{Z}_{\le 0}\}$, a polynomial in $z$ and a rational function of the other parameters. If any $b_i \in \mathbb{Z}_{\le 0}$, ${}_pF_q$ is generally undefined due to dividing by zero, unless some $a_j \in \mathbb{Z}_{\le 0}$ with $a_j > b_i$ terminates the series first, in which case it is conventional to use the truncated series.

If some $a_j = b_i \in \mathbb{Z}_{\le 0}$ and the series does not terminate earlier, (1) is ambiguous. One possible interpretation is that we can cancel $a_j$ against $b_i$ to get a generalized hypergeometric function of order $(p-1, q-1)$. Another interpretation is that $(a_j)_k$ becomes zero, terminating the series. Some implementations are inconsistent and may use either interpretation; Mathematica, for example, evaluates some instances according to the first interpretation and others according to the second. We do not need this case, and leave it undefined. Ambiguity can be avoided by working with ${}_p\tilde F_q$, which is well-defined for all values of the lower parameters, and with explicitly truncated hypergeometric series when needed.

With generic values of the parameters, the rate of convergence of the series (1) or (2) depends on the sign of $p - q - 1$, giving three distinct cases.

Case $p \le q$: the series converges for any $z$ and defines an entire function with an irregular (exponential) singularity at $z = \infty$. The trivial example is ${}_0F_0(z) = e^z$. The confluent hypergeometric functions ${}_0F_1$ and ${}_1F_1$ form exponential integrals, incomplete gamma functions, Bessel functions, and related functions.

Case $p = q + 1$: the series converges for $|z| < 1$. The function is analytically continued to the (cut) complex plane, with regular (algebraic or logarithmic) singularities at $z = 1$ and $z = \infty$. The principal branch is defined by placing a branch cut on the real interval $[1, \infty)$. We define the function value on the branch cut itself by continuity coming from the lower half plane, and the value at $z = 1$ as the limit from the left, if it exists. The Gauss hypergeometric function ${}_2F_1$ is the most important example, forming various orthogonal polynomials and integrals of algebraic functions. The principal branch is chosen to be consistent with elementary evaluations such as ${}_1F_0(a;\,z) = (1-z)^{-a}$ and with the usual convention that principal branches of logarithms and powers are used.

Case $p > q + 1$: the series only converges at $z = 0$, but can be viewed as an asymptotic expansion valid when $z \to 0$. Using resummation theory (Borel summation), it can be associated to an analytic function of $z$.

2.2 Method of computation

By (3), hypergeometric functions are D-finite (holonomic), i.e. satisfy linear ODEs with rational function coefficients. There is a theory of “effective analytic continuation” for general D-finite functions [19, 66, 65, 45, 46, 47]. Around any point $z_0 \in \mathbb{C} \cup \{\infty\}$, one can construct a basis of solutions of the ODE consisting of generalized formal power series whose terms satisfy a linear recurrence relation with rational function coefficients (the ${}_pF_q$ function arises in the special case where the recurrence at $z_0 = 0$ is hypergeometric, that is, has order 1). The expansions permit numerical evaluation of local solutions. A D-finite function defined by an ODE and initial values at an arbitrary point $z_0$ can be evaluated at any point $z_1$ by connecting local solutions along a path $z_0 \to z_1$.

This is essentially the Taylor method for integrating ODEs numerically, but D-finite functions are special. First, the special form of the series expansions permits rapid evaluation using reduced-complexity algorithms. Second, by use of generalized series expansions with singular prefactors, $z_0$ and $z_1$ can be singular points (including the point $z = \infty$), or arbitrarily close to singular points, without essential loss of efficiency.

The general algorithm for D-finite functions is quite complicated and has never been implemented fully with rigorous error bounds, even restricted to hypergeometric functions. The most difficult problem is to deal with irregular singular points, where the local series expansions become asymptotic (divergent), and resummation theory is needed. Even at regular points, it is difficult to perform all steps efficiently. The state of the art is Mezzarobba’s package [44], which covers regular singular points.

In this work, we implement ${}_pF_q$ and ${}_p\tilde F_q$ rigorously using the direct series expansion at $z = 0$. This is effective when $p \le q$ as long as $|z|$ is not too large, and when $p = q + 1$ as long as $|z|$ is bounded away from 1. Further, and importantly, we provide a complete implementation of the cases ${}_0F_1$, ${}_1F_1$, ${}_2F_0$ and ${}_2F_1$, permitting efficient evaluation for any $z$. The significance of these orders is that many second-order ODEs that arise in applications can be transformed to this case of (3). We are able to cover evaluation for any $z$ due to the fact that the expansions of these ${}_pF_q$'s at $z = \infty$ (and at $z = 1$ for ${}_2F_1$) can be expressed as finite linear combinations of other ${}_pF_q$'s of the same orders via explicit connection formulas. This includes the Borel regularized function ${}_2F_0$, which is related to the Kummer $U$-function. Evaluation of ${}_2F_0$ near $z = 0$ (which is used for the asymptotic expansions of ${}_0F_1$ and ${}_1F_1$ at $z = \infty$) is possible thanks to explicit error bounds already available in the literature. Analytic continuation via the hypergeometric ODE is used in one special case when computing ${}_2F_1$.

Using hypergeometric series as the main building block allows us to cover a range of special functions efficiently with reasonable effort. For ${}_pF_q$'s of higher order, this simplified approach is no longer possible. With some exceptions, expansions of ${}_pF_q$ at $z = \infty$ are not hypergeometric, and methods to compute rigorous error bounds for asymptotic series have not yet been developed into concrete algorithms. Completing the picture for the higher ${}_pF_q$ functions should be a goal for future research.

2.3 Parameter derivatives and limits

Differentiating ${}_pF_q$ with respect to $z$ simply shifts parameters, and batches of high-order $z$-derivatives are easy to compute using recurrence relations. In general, derivatives with respect to parameters have no convenient closed forms. We implement parameter derivatives using truncated power series arithmetic (automatic differentiation). In other words, to differentiate $f(a)$ with respect to a parameter $a$ up to order $n$, we compute $f(a + \varepsilon)$ using arithmetic in the ring $\mathbb{C}[[\varepsilon]]/\langle\varepsilon^n\rangle$. This is generally more efficient than numerical differentiation, particularly for large $n$. Since formulas involving analytic functions on $\mathbb{C}$ translate directly to $\mathbb{C}[[\varepsilon]]$, we can avoid symbolically differentiated formulas, which often become unwieldy.

The most important use for parameter derivatives is to compute limits with respect to parameters. Many connection formulas have removable singularities at some parameter values. For example, if $f(a) = g(a)/h(a)$ and $g(a) = h(a) = 0$ when $a \in \mathbb{Z}$, we compute $f(a)$ when $a \in \mathbb{Z}$ by evaluating $g(a+\varepsilon)/h(a+\varepsilon)$ in $\mathbb{C}[[\varepsilon]]/\langle\varepsilon^2\rangle$, formally cancelling the zero constant terms in the power series division.
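As a concrete illustration (our own toy example using Arb's acb_poly power series arithmetic, not taken from Arb's source), the following computes $\lim_{a \to 1} (\Gamma(a) - 1)/(a - 1) = \Gamma'(1) = -\gamma$ by expanding numerator and denominator in $\varepsilon = a - 1$ and formally cancelling the vanishing constant terms:

#include "acb_poly.h"

int main(void)
{
    slong prec = 64;
    acb_poly_t a, g;
    acb_t lim;

    acb_poly_init(a);
    acb_poly_init(g);
    acb_init(lim);

    /* a = 1 + eps */
    acb_poly_set_coeff_si(a, 0, 1);
    acb_poly_set_coeff_si(a, 1, 1);

    /* g = Gamma(1 + eps) = 1 - gamma*eps + O(eps^2) */
    acb_poly_gamma_series(g, a, 2, prec);

    /* (Gamma(a) - 1)/(a - 1): the constant term of the numerator is
       identically zero, and the denominator is exactly eps, so the
       division amounts to discarding the constant term and shifting
       the series down by one power of eps. */
    acb_poly_shift_right(g, g, 1);
    acb_poly_get_coeff_acb(lim, g, 0);

    acb_printn(lim, 15, 0);   /* contains -0.5772156649... = -gamma */
    flint_printf("\n");

    acb_poly_clear(a);
    acb_poly_clear(g);
    acb_clear(lim);
    flint_cleanup();
    return 0;
}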

2.4 Previous work

Most published work on numerical methods for special functions uses heuristic error estimates. Usually, only a subset of a function's domain is covered correctly. Especially if only machine precision is used, expanding this set is a hard problem that requires a patchwork of methods, e.g. integral representations, uniform asymptotic expansions, continued fractions, and recurrence relations. A good survey of methods for ${}_1F_1$ and ${}_2F_1$ is given by Pearson et al. [51, 52]. The general literature is too vast to summarize here; see [21] for a bibliography.

Rigorous implementations until now have only supported a small set of special functions on a restricted domain. The arbitrary-precision libraries MPFR, MPC and MPFI [28, 24, 53] provide elementary functions and a few higher transcendental functions of real variables, with guaranteed correct rounding. Other rigorous implementations of restricted cases include [22, 23, 72, 20]. Computer algebra systems and arbitrary-precision libraries such as Mathematica, Maple, Maxima, Pari/GP, mpmath and MPFUN [71, 42, 34, 63, 43, 2] support a wide range of special functions for complex variables, but all use heuristic error estimates and sometimes produce incorrect output. Performance can also be far from satisfactory.

Our contribution is to simultaneously support (1) a wide range of special functions, (2) complex variables, (3) arbitrary precision, and (4) rigorous error bounds, with (5) high speed. To achieve these goals, we use interval arithmetic to automate most error bound calculations, pay attention to asymptotics, and implement previously-overlooked optimizations.

The point of reference for speed is mainly other arbitrary-precision software, since the use of software arithmetic with variable precision and unlimited exponents adds perhaps a factor 100 baseline overhead compared to hardware floating-point arithmetic. The goal is first of all to maintain reasonable speed even when very high precision is required or when function arguments lie in numerically difficult regions. With further work, the overhead at low precision could also be reduced.

3 Arbitrary-precision interval arithmetic

Interval arithmetic provides a rigorous way to compute with real numbers [64]. Error bounds are propagated automatically through the whole computation, completely accounting for rounding errors as well as the effect of uncertainty in initial values.

Arb uses the midpoint-radius form of interval arithmetic (“ball arithmetic”) with an arbitrary-precision floating-point midpoint and a low-precision floating-point radius, as in $\pi \in [3.14159265358979 \pm 3.57 \cdot 10^{-15}]$. This allows tracking error bounds without significant overhead compared to floating-point arithmetic [67]. Of course, binary rather than decimal numbers are used internally. We represent complex numbers in rectangular form as pairs of real balls; in some situations, it would be better to use true complex balls (disks) with a single radius.

The drawback of interval arithmetic is that error bounds must be overestimated. Output intervals can be correct but useless, e.g. $[0 \pm 10^{100}]$. The combination of variable precision and interval arithmetic allows increasing the precision until the output is useful [53]. As in plain floating-point arithmetic, it helps to use algorithms that are numerically stable, giving tighter enclosures, but this is mainly a matter of efficiency (allowing lower precision to be used) and not of correctness.

3.1 Adaptivity

When the user asks for $p$ bits of precision, Arb chooses internal evaluation parameters (such as working precision and series cutoffs) to attempt to achieve $2^{-p}$ relative error, but stops after doing $O(\operatorname{poly}(p))$ work, always returning a correct interval within a predictable amount of time, where the output may be useless if convergence is too slow, cancellations too catastrophic, the input intervals too imprecise, etc. A wrapper program or the end user will typically treat the interval implementation as a black box and try with, say, 64 bits of precision, followed by precisions 128, 256, 512, … if necessary until the output error bound has converged to the desired tolerance. It is easy to abort and signal an error to the end user or perhaps try a different implementation if this fails after a reasonable number of steps. For example, the incomplete gamma function can be evaluated this way at 32, 64 and 128 bits with Arb, stopping as soon as the enclosure is tight enough.
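A minimal sketch of such a precision-doubling wrapper, calling Arb's acb_hypgeom_gamma_upper (the inputs $a = 5$, $z = 1.5 + 2i$ are sample values of our own choosing):

#include "acb_hypgeom.h"

int main(void)
{
    acb_t a, z, res;
    slong prec;

    acb_init(a); acb_init(z); acb_init(res);
    acb_set_ui(a, 5);              /* a = 5 (sample input) */
    acb_set_d_d(z, 1.5, 2.0);      /* z = 1.5 + 2i (sample input) */

    /* Try 32, 64, 128, ... bits until ~53 accurate bits are obtained. */
    for (prec = 32; prec <= 65536; prec *= 2)
    {
        acb_hypgeom_gamma_upper(res, a, z, 0, prec);
        if (acb_rel_accuracy_bits(res) >= 53)
            break;
    }

    acb_printn(res, 20, 0);        /* midpoint +/- radius, ~20 digits */
    flint_printf("\n");

    acb_clear(a); acb_clear(z); acb_clear(res);
    flint_cleanup();
    return 0;
}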

Increasing the precision is only effective if the user can provide exact input or intervals of width about $2^{-p}$. We do not address the harder problem of bounding a function tightly on a “wide” interval such as $[1, 2]$. Subdivision works, but the worst-case cost for a resolution of $2^{-p}$ increases exponentially with $p$. For some common special functions ($\operatorname{erf}$, etc.), one solution is to evaluate the function at the interval midpoint or endpoints and use monotonicity or derivative bounds to enclose the range on the interval. We have used such methods for a few particular functions, with good results, but do not treat the problem in general here.

We attempt to ensure heuristically that the output radius converges to 0 when $p \to \infty$, but this is not formally guaranteed, and some exceptions exist. For example, when parameters and argument are handled separately, convergence might fail for input such as an inexact parameter interval straddling an integer. Such limitations could be removed with further effort.

3.2 Floating-point output

It takes only a few lines of wrapper code around the interval version of a function to get floating-point output in any format, with prescribed guaranteed accuracy. A C header file is available to compute hypergeometric and other special functions accurately for the C99 double complex type.

The correct rounding of the last bit can normally be deduced by computing to sufficiently high precision. However, if the true value of the function is an exactly representable floating-point number (e.g. 3.25 = 3.250000…) and the algorithm used to compute it does not generate a zero-width interval, this iteration fails to terminate (“the table-maker's dilemma”). Exact points are easily handled in trivial cases (for example, $z = 0$ is the only exact point for $\operatorname{erf}$). However, hypergeometric functions can have nontrivial exact points, and detecting them in general is a hard problem.
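The following sketch (our own, subject to the caveat about exact points above) wraps Arb's arb_hypgeom_erf to produce an accurate double:

#include "arf.h"
#include "arb_hypgeom.h"

/* Sketch: a double-precision erf with provably tiny error, produced by
   increasing the precision until the ball is far narrower than one ulp.
   (A fully correctly-rounded routine must additionally check that the
   entire interval rounds to a single double; at a nontrivial exact
   point that check would loop forever -- the table-maker's dilemma.
   Here we simply give up at a precision cap.) */
double erf_d(double x)
{
    arb_t t, r;
    slong prec;
    double y = 0.0;

    arb_init(t);
    arb_init(r);

    for (prec = 64; prec <= 65536; prec *= 2)
    {
        arb_set_d(t, x);
        arb_hypgeom_erf(r, t, prec);
        if (arb_rel_accuracy_bits(r) >= 53 + 10)
        {
            y = arf_get_d(arb_midref(r), ARF_RND_NEAR);
            break;
        }
    }

    arb_clear(t);
    arb_clear(r);
    return y;
}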

3.3 Testing

Every function has a test program that generates many (on the order of $10^3$ to $10^6$) random inputs. Inputs are distributed non-uniformly, mixing real and imaginary parts that are exact, inexact, tiny, huge, integer, near-integer, etc., to trigger corner cases with high probability. For each input, a function value is computed in two different ways, by varying the precision, explicitly choosing different internal algorithms, or applying a recurrence relation to shift the input. The two computed intervals, say $X_1$ and $X_2$, must then satisfy $X_1 \cap X_2 \ne \emptyset$, which is checked. In our experience, this form of testing is very powerful, as one can see by inserting deliberate bugs to verify that the test code fails. Other forms of testing are also used.
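In the same spirit, a simplified consistency check (not Arb's actual test harness, which also varies internal algorithms and applies recurrences):

#include "acb_hypgeom.h"

/* Compute the same value two ways (here, at two precisions) and verify
   that the output balls overlap. Both must contain the true value, so
   an empty intersection proves a bug. */
int check_erf_consistency(const acb_t z)
{
    acb_t r1, r2;
    int ok;

    acb_init(r1);
    acb_init(r2);
    acb_hypgeom_erf(r1, z, 53);
    acb_hypgeom_erf(r2, z, 256);
    ok = acb_overlaps(r1, r2);

    acb_clear(r1);
    acb_clear(r2);
    return ok;
}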

4 Direct evaluation of hypergeometric series

The direct evaluation of ${}_pF_q$ via (1) involves three tasks: selecting the number of terms $N$, computing the truncated sum $S = \sum_{k=0}^{N-1} T(k)$, and (unless the series terminates at this point) bounding the error $|{}_pF_q - S|$, which is equal to $\left|\sum_{k=N}^{\infty} T(k)\right|$ when the series converges.

In Arb, $N$ is first selected heuristically with 53-bit hardware arithmetic (with some care to avoid overflow and other issues). In effect, for $p$-bit precision, linear search is used to pick the first $N$ such that $|T(N)| < 2^{-p} \max_{k \le N} |T(k)|$. If no such $N$ exists up to a precision-dependent limit $N_{\max}$, the $N$ that minimizes $|T(N)|$ subject to $N \le N_{\max}$ is chosen ($N_{\max}$ allows us to compute a crude bounding interval instead of getting stuck if the series converges too slowly).

Both $S$ and $T(N)$ are subsequently computed using interval arithmetic, and the value of $|T(N)|$ is used to compute a rigorous bound for the tail.
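A sketch of this heuristic in plain double arithmetic, for the ${}_1F_1$ series with real inputs (the exact stopping rule and overflow guards in Arb differ):

#include <math.h>

/* Heuristic selection of N for sum_k (a)_k/(b)_k z^k/k!.
   Picks the first N with |T(N)| < 2^-p max_{k<=N} |T(k)|, else the N
   minimizing |T(N)| subject to N <= Nmax. */
long choose_N(double a, double b, double z, long p, long Nmax)
{
    double t = 1.0, tmax = 1.0, tbest = 1.0;
    long k, Nbest = Nmax;

    for (k = 0; k < Nmax; k++)
    {
        if (t < ldexp(tmax, (int) -p))   /* |T(k)| < 2^-p * max term */
            return k;
        if (t < tbest) { tbest = t; Nbest = k; }
        t *= fabs(a + k) / (fabs(b + k) * (k + 1)) * fabs(z);
        if (t > tmax) tmax = t;
    }
    return Nbest;   /* converges too slowly: the caller computes a
                       crude bounding interval for the tail instead */
}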

4.1 Tail bounds for convergent series

If $N$ is so large that $|T(k+1)/T(k)|$ is small for all $k \ge N$, then $|T(N)|$ is a good estimate of the error. It is not hard to turn this observation into an effective bound. Here, we absorb the factorial $k!$ into the lower parameters by setting $b_{q+1} = 1$, so that $T(k) = z^k \prod_{i=1}^{p} (a_i)_k / \prod_{i=1}^{q+1} (b_i)_k$ without the separate factorial.

Theorem 1.

With $T(k)$ as in (1) (and the factorial absorbed via $b_{q+1} = 1$, so that $p \le q + 1$), if $|b_i + k| \ge |b_i + N| > 0$ for all $i$ and all $k \ge N$, and

$$C = |z| \prod_{i=1}^{p} \left(1 + \frac{|a_i - b_i|}{|b_i + N|}\right) \prod_{i=p+1}^{q+1} \frac{1}{|b_i + N|} < 1,$$

then $\left|\sum_{k=N}^{\infty} T(k)\right| \le \dfrac{|T(N)|}{1 - C}$.

Proof.

Looking at the ratio $T(k+1)/T(k)$ for $k \ge N$, we cancel out upper and lower parameters pairwise using $\left|\frac{a_i + k}{b_i + k}\right| \le 1 + \frac{|a_i - b_i|}{|b_i + k|} \le 1 + \frac{|a_i - b_i|}{|b_i + N|}$ and bound remaining lower parameter factors as $\frac{1}{|b_i + k|} \le \frac{1}{|b_i + N|}$. Bounding the tail by a geometric series gives the result. ∎

The same principle is used to get tail bounds for parameter derivatives of (1), i.e. bounds for each coefficient of the tail given parameters in $\mathbb{C}[[\varepsilon]]/\langle\varepsilon^n\rangle$. First, we fix some notation: if $f = \sum_{k} f_{[k]} \varepsilon^k \in \mathbb{C}[[\varepsilon]]$, then $f_{[k]}$ is the coefficient of $\varepsilon^k$, $|f|$ is the power series $\sum_k |f_{[k]}| \varepsilon^k$, which can be viewed as an element of $\mathbb{R}_{\ge 0}[[\varepsilon]]$, and $f \le g$ signifies that $f_{[k]} \le g_{[k]}$ holds for all $k$. Using $|f + g| \le |f| + |g|$ and $|fg| \le |f|\,|g|$, it is easy to check that the argument above carries over coefficientwise. Theorem 1 can now be restated for power series: if the majorant analogue of $C$ formed from these bounds satisfies $C_{[0]} < 1$, then $\left|\sum_{k=N}^{\infty} T(k)\right| \le |T(N)| \cdot (1 - C)^{-1}$, where the right-hand side is expanded as a power series with nonnegative coefficients.

To bound sums and products of power series with (complex) interval coefficients, we can use floating-point upper bounds with directed rounding for the absolute values instead of performing interval arithmetic throughout. For $(1 - C)^{-1}$, we must pick a lower bound for $1 - C_{[0]}$ and upper bounds for the coefficients of $C$.
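A sketch of the scalar case of this bound in plain C (following Theorem 1 as stated above; a rigorous implementation would use upper bounds computed with directed rounding rather than ordinary doubles):

#include <math.h>
#include <complex.h>

/* Parameters are given with the factorial absorbed as an extra lower
   parameter 1, so there are p upper and q1 = q+1 lower parameters, the
   first p of them paired. Requires |b[i]+k| >= |b[i]+N| for k >= N,
   e.g. N >= -creal(b[i]). Returns B with |sum_{k>=N} T(k)| <= B, or
   INFINITY if the hypothesis C < 1 fails. */
double tail_bound(const double complex * a, const double complex * b,
                  int p, int q1, double complex z, long N, double absTN)
{
    double C = cabs(z);
    int i;

    for (i = 0; i < p; i++)       /* paired upper/lower parameters */
        C *= 1.0 + cabs(a[i] - b[i]) / cabs(b[i] + N);
    for (i = p; i < q1; i++)      /* leftover lower parameters */
        C /= cabs(b[i] + N);

    return (C < 1.0) ? absTN / (1.0 - C) : INFINITY;
}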

4.2 Summation algorithms

Repeated use of the forward recurrence $T(k+1) = R(k)\,T(k)$, where $R(k) = z \prod_{i=1}^{p} (a_i + k) \,/\, \prod_{i=1}^{q+1} (b_i + k)$, with initial value $T(0) = 1$, yields $S = \sum_{k=0}^{N-1} T(k)$ and $T(N)$. This requires $O(N)$ arithmetic operations in $\mathbb{C}$, where we consider $p$ and $q$ fixed, or roughly $O(Np)$ bit operations (up to logarithmic factors) in the common situation where $N = O(p)$, $p$ being the precision in bits. When $N$ is large, Arb uses three different series evaluation algorithms to reduce the complexity, depending on the output precision and the bit lengths of the inputs. Here we only give a brief overview of the methods; see [14, 12, 18, 57, 29, 65, 73, 13, 5, 16, 31, 32] for background and theoretical analysis.

Binary splitting (BS) is used at high precision when all parameters as well as the argument have short binary representations, e.g. small rational numbers such as $z = 1/3$. The idea is to compute the matrix product $\prod_{k=0}^{N-1} M(k)$ where

$$M(k) = \begin{pmatrix} R(k) & 0 \\ 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} T(k+1) \\ S(k+1) \end{pmatrix} = M(k) \begin{pmatrix} T(k) \\ S(k) \end{pmatrix}, \qquad S(k) = \sum_{j=0}^{k-1} T(j),$$

using a divide-and-conquer strategy, and clearing denominators so that only a single final division is necessary. BS reduces the asymptotic complexity of computing $S$ exactly (or to $p$ bits) to quasi-linear in $N$ bit operations. BS is also used at low precision when $N$ is large, since the depth of operand dependencies often makes it more numerically stable than the forward recurrence.
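The following self-contained sketch (our own illustration of the technique, not Arb's internal interface) performs binary splitting for a series with term ratio $t(k+1)/t(k) = pp/(qq\,(k+1))$, e.g. $\exp(pp/qq)$, using FLINT integers:

#include "fmpz.h"

/* Compute integers P, Q, T over [m, n) such that
   sum_{k=m}^{n-1} t(k) = t(m) T / Q and t(n)/t(m) = P / Q,
   for the term ratio t(k+1)/t(k) = pp / (qq (k+1)). */
void bsplit(fmpz_t P, fmpz_t Q, fmpz_t T, slong m, slong n,
            slong pp, slong qq)
{
    if (n - m == 1)
    {
        fmpz_set_si(P, pp);
        fmpz_set_si(Q, qq);
        fmpz_mul_si(Q, Q, m + 1);
        fmpz_set(T, Q);           /* sum over [m, m+1) is just t(m) */
    }
    else
    {
        slong h = m + (n - m) / 2;
        fmpz_t P2, Q2, T2, tmp;

        fmpz_init(P2); fmpz_init(Q2); fmpz_init(T2); fmpz_init(tmp);
        bsplit(P, Q, T, m, h, pp, qq);     /* left half  [m, h) */
        bsplit(P2, Q2, T2, h, n, pp, qq);  /* right half [h, n) */

        fmpz_mul(tmp, T, Q2);  /* T(m,n) = T(m,h) Q(h,n) + P(m,h) T(h,n) */
        fmpz_mul(T2, P, T2);
        fmpz_add(T, tmp, T2);
        fmpz_mul(P, P, P2);    /* P and Q simply multiply */
        fmpz_mul(Q, Q, Q2);

        fmpz_clear(P2); fmpz_clear(Q2); fmpz_clear(T2); fmpz_clear(tmp);
    }
}
/* With t(0) = 1, bsplit(P, Q, T, 0, N, pp, qq) gives
   sum_{k=0}^{N-1} t(k) = T/Q, so a single final division at the target
   precision recovers the value, as described above. */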

Rectangular splitting (RS) is used at high precision when all parameters have short binary representations but the argument has a long binary representation, e.g. $z \approx \sqrt{2}$ approximated to high precision. The idea is to write $S = \sum_{j} \left(\sum_{i=0}^{m-1} c_{jm+i}\, z^i\right) (z^m)^j$ with $m \approx \sqrt{N}$, reducing the number of multiplications where both factors depend on $z$ to $O(\sqrt{N})$. One extracts the common factors from the coefficients so that all other multiplications and divisions only involve short coefficients. Strictly speaking, RS does not reduce the theoretical complexity except perhaps by a logarithmic factor, but the speedup in practice can grow to more than a factor 100, and there is usually some speedup even at modest precision and with small $N$. Another form of RS should be used when a single parameter has a long binary representation [31]; this is a planned future addition that has not yet been implemented in Arb for general hypergeometric series.
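A sketch of the basic scheme for small integer coefficients, using Arb types (an illustration of the technique only; Arb's actual implementation works with the rational coefficients of the hypergeometric recurrence and extracts common factors):

#include <math.h>
#include "acb.h"

/* Evaluate S = sum_{k=0}^{N-1} c[k] z^k using about 2 sqrt(N) full
   multiplications; the scalar multiply-adds on short coefficients are
   cheap by comparison. */
void rs_sum(acb_t S, const slong * c, slong N, const acb_t z, slong prec)
{
    slong m = (slong) sqrt((double) N) + 1;
    slong i, j;
    acb_ptr zp = _acb_vec_init(m + 1);
    acb_t blk;

    acb_init(blk);
    acb_one(zp + 0);
    for (i = 1; i <= m; i++)             /* powers z^0, ..., z^m */
        acb_mul(zp + i, zp + i - 1, z, prec);

    acb_zero(S);
    for (j = (N - 1) / m; j >= 0; j--)   /* Horner in w = z^m */
    {
        acb_mul(S, S, zp + m, prec);
        acb_zero(blk);
        for (i = FLINT_MIN(m, N - j * m) - 1; i >= 0; i--)
            acb_addmul_si(blk, zp + i, c[j * m + i], prec);  /* cheap */
        acb_add(S, S, blk, prec);
    }

    acb_clear(blk);
    _acb_vec_clear(zp, m + 1);
}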

Fast multipoint evaluation (FME) is used at high precision when not all parameters have short binary representations. Asymptotically, it reduces the complexity to $\tilde O(\sqrt{N})$ arithmetic operations. Taking $m \approx \sqrt{N}$, one computes $\prod_{k=0}^{m-1} M(x + k)$ as a matrix with entries in $\mathbb{C}[x]$, evaluates this matrix at $x = 0, m, 2m, \ldots, (m-1)m$, and multiplies the evaluated matrices together. FME relies on asymptotically fast FFT-based polynomial arithmetic, which is available in Arb. Unlike BS and RS, its high overhead and poor numerical stability [39] limit its use to very high precision.

Arb attempts to choose the best algorithm automatically. Roughly speaking, BS and RS are used at precision above 256 bits, provided that the argument and parameters are sufficiently short, and FME is used above 1500 bits otherwise. Due to the many variables involved, the automatic choice will not always be optimal.

We note that Arb uses an optimized low-level implementation of RS to compute elementary functions at precisions up to a few thousand bits [33]. Although the elementary function series expansions are hypergeometric, that code is separate from the implementation of generic hypergeometric series discussed in this paper. Such optimizations could be implemented for other instances.

4.2.1 Parameter derivatives

A benefit of using power series arithmetic instead of explicit formulas for parameter derivatives is that complexity-reduction techniques immediately apply. Arb implements both BS and RS over $\mathbb{C}[[\varepsilon]]/\langle\varepsilon^n\rangle$, with similar precision-based cutoffs as over $\mathbb{C}$. FME is not yet implemented.

RS is currently only used for small $n$, due to a minor technical limitation. Binary splitting is always used when $n$ is large, as it allows taking advantage of asymptotically fast polynomial multiplication; in particular, the number of arithmetic operations in $\mathbb{C}$ is then quasi-linear in the output size when all inputs have low degree as polynomials in $\varepsilon$.

4.2.2 Regularization

When computing ${}_p\tilde F_q$, the recurrence relation is the same, but the initial value changes to $T(0) = 1/(\Gamma(b_1)\cdots\Gamma(b_q))$. Above, we have implicitly assumed that no $b_i \in \mathbb{Z}_{\le 0}$, so that we never divide by zero in the recurrence. If some $b_i \in \mathbb{Z}_{\le 0}$ is part of the summation range (assuming that the series does not terminate on an upper parameter first), the recurrence must be started anew at $k = 1 - b_i$ to avoid dividing by zero. With arithmetic in $\mathbb{C}$, all the terms before this point are then zero. With arithmetic in $\mathbb{C}[[\varepsilon]]$, say with a parameter $b_i + \varepsilon$, the use of the recurrence must be restarted whenever the coefficient of $\varepsilon^0$ in $T(k)$ becomes zero. With several perturbed parameters, the terms between such points need not be identically zero, so the recurrence may have to be started and stopped several times. For nonexact intervals that intersect $\mathbb{Z}_{\le 0}$, the problematic terms should be skipped the same way. Arb handles such cases, but there is room for optimization in the implementation.

When computing ${}_0\tilde F_1$, ${}_1\tilde F_1$ or ${}_2\tilde F_1$ directly over $\mathbb{C}$ with a lower parameter that is a nonpositive integer $-n$, working in $\mathbb{C}[[\varepsilon]]$ is avoided by using a direct formula, e.g.

$${}_1\tilde F_1(a,\, -n,\, z) = \frac{(a)_{n+1}}{(n+1)!}\, z^{n+1}\, {}_1F_1(a + n + 1,\, n + 2,\, z).$$
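From the user's side this is transparent; for example, the regularized ${}_1\tilde F_1$ can be evaluated directly at an exact nonpositive integer $b$ (sample inputs of our own choosing):

#include "acb_hypgeom.h"

int main(void)
{
    acb_t a, b, z, r;

    acb_init(a); acb_init(b); acb_init(z); acb_init(r);
    acb_set_si(a, 2);                   /* a = 2 (sample input) */
    acb_set_si(b, -3);                  /* b = -3: 1F1 is undefined,
                                           but 1F1~ is finite */
    acb_one(z);
    acb_mul_2exp_si(z, z, -1);          /* z = 1/2 (sample input) */

    acb_hypgeom_m(r, a, b, z, 1, 128);  /* regularized = 1 */
    acb_printn(r, 20, 0);
    flint_printf("\n");

    acb_clear(a); acb_clear(b); acb_clear(z); acb_clear(r);
    flint_cleanup();
    return 0;
}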

5 The gamma function

Computation of the gamma function is crucial for hypergeometric functions. Arb contains optimized versions of each of the functions $\Gamma(z)$, $1/\Gamma(z)$ (avoiding division by zero when $z \in \mathbb{Z}_{\le 0}$), $\log \Gamma(z)$ (with the correct branch structure), and the digamma function $\psi(z)$, for each of the real, complex and power series domains.

Small integer and half-integer $z$ are handled directly. A separate function is provided for $\Gamma(p/q)$ with small rational $p/q$, with optimizations for the denominators $q = 2, 3, 4, 6$, including use of elliptic integral identities [10]. Euler's constant $\gamma$ is computed using the Brent-McMillan algorithm, for which rigorous error bounds were derived in [15].

Many algorithms for $\Gamma(z)$ have been proposed, including [41, 12, 60, 55], but in our experience, it is hard to beat the asymptotic Stirling series

$$\log \Gamma(z) = \left(z - \frac{1}{2}\right)\log z - z + \frac{\log(2\pi)}{2} + \sum_{k=1}^{n-1} \frac{B_{2k}}{2k(2k-1)\,z^{2k-1}} + R_n(z) \tag{4}$$

with argument reduction based on $\Gamma(z + r) = (z)_r\,\Gamma(z)$ and the reflection formula $\Gamma(z)\Gamma(1-z) = \pi/\sin(\pi z)$, where the error term $R_n(z)$ is bounded as in [49]. Conveniently, (4) is numerically stable for large $|z|$ and gives the correct branch structure for $\log \Gamma(z)$ on $\mathbb{C}$.

Fast evaluation of the rising factorial $(z)_r = z(z+1)\cdots(z+r-1)$ is important for the argument reduction when $|z|$ is small. Arb uses the RS algorithm described in [31], which builds on previous work by Smith [58]. It also uses BS when $z$ has a short binary representation, or when computing derivatives of high order [32].

The drawback of the Stirling series is that Bernoulli numbers are required. An alternative for moderate $|z|$ is to use the approximation by a lower incomplete gamma function, $\Gamma(z) \approx \gamma(z, N) = \frac{N^z e^{-N}}{z}\,{}_1F_1(1,\, 1+z,\, N)$, with a large $N$. The methods for fast evaluation of hypergeometric series apply. However, as noted in [31], this is usually slower than the Stirling series if Bernoulli numbers are cached for repeated evaluations.
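A sketch of this alternative using Arb's acb_hypgeom_gamma_lower (our own illustration; the choice of $N$ below is crude, and the dropped tail $\Gamma(z, N)$ is not added to the error bound here, which a rigorous routine must do):

#include "acb_hypgeom.h"

/* Gamma(z) approximated by the lower incomplete gamma function
   gamma(z, N). The omitted tail Gamma(z, N) is of order N^(z-1) e^(-N),
   so N slightly larger than the bit precision p makes it below 2^-p
   for moderate |z|; a rigorous version must widen the output ball by a
   bound on this tail. */
void gamma_via_lower(acb_t res, const acb_t z, slong p)
{
    acb_t N;

    acb_init(N);
    acb_set_ui(N, (ulong) (p + 10));           /* crude choice of N */
    acb_hypgeom_gamma_lower(res, z, N, 0, p);  /* gamma(z, N) */
    acb_clear(N);
}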

5.1 Bernoulli numbers and zeta constants

Arb caches Bernoulli numbers, but generating them the first time could be time-consuming if not optimized. The best method in practice uses the exact formula $|B_{2k}| = \frac{2\,(2k)!}{(2\pi)^{2k}}\,\zeta(2k)$ together with the von Staudt-Clausen theorem (which gives the denominator) to recover exact numerators. Instead of using the Euler product for $\zeta(n)$ as in [26], the L-series $\zeta(n) = \sum_{k \ge 1} k^{-n}$ is used directly. The observation made in [6] is that if one computes a batch of Bernoulli numbers in descending order $B_n, B_{n-2}, B_{n-4}, \ldots$, the powers $k^{-n}$ can be recycled, i.e. $k^{-(n-2)} = k^{-n} \cdot k^2$. On an Intel i5-4300U, Arb generates all the exact Bernoulli numbers up to $B_n$ in 0.005 s, 1 s and 10 min for three successively larger cutoffs $n$; this is far better than we have managed with other methods, including methods with lower asymptotic complexity.
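A double-precision illustration of the recycling idea only (Arb, of course, does the same recycling with rigorous tail bounds in arbitrary precision):

#include <math.h>

/* One pass over k accumulates truncated L-series for
   zeta(n), zeta(n-2), ..., updating k^(-n) to k^(-(n-2)) by a
   multiplication with k^2 instead of recomputing the power.
   No truncation bound is computed in this sketch. */
void zeta_batch(double * out, int n, int terms)
{
    int k, j;

    for (j = 0; n - 2 * j >= 2; j++)
        out[j] = 1.0;                         /* the k = 1 terms */

    for (k = 2; k <= terms; k++)
    {
        double p = pow((double) k, (double) -n);  /* k^(-n), once */
        for (j = 0; n - 2 * j >= 2; j++)
        {
            out[j] += p;                      /* adds to zeta(n - 2j) */
            p *= (double) k * (double) k;     /* recycle: k^(-(n-2j-2)) */
        }
    }
}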

For isolated values $\zeta(n)$, Arb uses the Euler product when $n$ is large compared to the precision, and in general the convergence acceleration method of [11], with BS at high precision when $n$ is small and with a term recurrence as in [62] otherwise. For multi-evaluation, term-recycling is used [32, Algorithm 4.7.1]; much like in the case of Bernoulli numbers, this has lower overhead than any other known method, including the algorithms proposed in [9]. The Stirling series with BS over $\mathbb{Q}$ could also be used for multi-evaluation of zeta constants, but this would only be competitive when computing thousands of constants at precisions of many thousands of bits. As noted in [32], the Stirling series is better for this purpose than the approximation used in [37, 9].

We mention zeta constants since they are of independent interest, e.g. for convergence acceleration of series [27]. In fact, such methods apply to slowly converging hypergeometric series [7, 56], though we do not pursue this approach here.

6 Confluent hypergeometric functions

The conventional basis of solutions of Kummer's equation $z f''(z) + (b - z) f'(z) - a f(z) = 0$ consists of the confluent hypergeometric functions (or Kummer functions) $M(a,b,z) = {}_1F_1(a,b,z)$ and $U(a,b,z)$, where $U(a,b,z) \sim z^{-a}$ as $z \to +\infty$.

Table 1 gives a list of functions implemented in Arb that are expressible via $M$ and $U$. In this section, we outline the evaluation approach. Other functions that could be implemented in the same way include the Whittaker functions, parabolic cylinder functions, Coulomb wave functions, spherical Bessel functions, Kelvin functions, and the Dawson and Faddeeva functions. Indeed, the user can easily compute these functions via the functions already available in Table 1, with the interval arithmetic automatically taking care of error bounds.

Function   Notation
Confluent hypergeometric function   $M(a,b,z) = {}_1F_1(a,b,z)$, $U(a,b,z)$
Confluent hypergeometric limit function   ${}_0F_1(b,z)$
Bessel functions   $J_\nu(z)$, $Y_\nu(z)$, $I_\nu(z)$, $K_\nu(z)$
Airy functions   $\operatorname{Ai}(z)$, $\operatorname{Bi}(z)$
Error function   $\operatorname{erf}(z)$, $\operatorname{erfc}(z)$, $\operatorname{erfi}(z)$
Fresnel integrals   $S(z)$, $C(z)$
Incomplete gamma functions   $\Gamma(s,z)$, $\gamma(s,z)$
Generalized exponential integral   $E_\nu(z)$
Exponential and logarithmic integral   $\operatorname{Ei}(z)$, $\operatorname{li}(z)$, $\operatorname{Li}(z)$
Trigonometric integrals   $\operatorname{Si}(z)$, $\operatorname{Ci}(z)$, $\operatorname{Shi}(z)$, $\operatorname{Chi}(z)$
Laguerre function (Laguerre polynomial)   $L_\nu^{\mu}(z)$
Hermite function (Hermite polynomial)   $H_\nu(z)$
Table 1: Implemented variants and derived cases of confluent hypergeometric functions.

6.1 Asymptotic expansion

It turns out to be convenient to define the function $U^*(a,b,z) = z^a\, U(a,b,z)$, which is asymptotic to 1 when $z \to \infty$. We have the important asymptotic expansion

$$U^*(a,b,z) = \sum_{k=0}^{n-1} \frac{(a)_k\,(a-b+1)_k}{k!}\,(-z)^{-k} + \varepsilon_n(z) \tag{5}$$

where $\varepsilon_n(z) \to 0$ for fixed $n$ when $|z| \to \infty$. The series is divergent when continued to infinity (unless $a \in \mathbb{Z}_{\le 0}$ or $a - b + 1 \in \mathbb{Z}_{\le 0}$, so that it reduces to a polynomial), but it is natural to define the function ${}_2F_0(a, b, z) = U^*(a,\, a - b + 1,\, -1/z)$ for all $z$ in terms of $U$ via (5). The choice between $U$ and $U^*$ (or ${}_2F_0$) is then just a matter of notation. It is well known that this definition is equivalent to Borel resummation of the series.

We use Olver’s bound for the error term $\varepsilon_n(z)$, implementing the formulas almost verbatim from [21, 13.7]. We do not reproduce the formulas here since lengthy case distinctions are needed. As with convergent hypergeometric series, the error is bounded by the magnitude of the first omitted term times an easily computed function. To choose $n$, the same heuristic is used as with convergent series.
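Arb exposes the truncated asymptotic series with its rigorous error bound directly; a minimal call (sample inputs of our own choosing):

#include "acb_hypgeom.h"

int main(void)
{
    acb_t a, b, z, r;

    acb_init(a); acb_init(b); acb_init(z); acb_init(r);
    acb_set_d_d(a, 0.5, 0.0);      /* sample parameters */
    acb_set_d_d(b, 1.25, 0.0);
    acb_set_d_d(z, 30.0, 10.0);    /* large |z| */

    /* U*(a,b,z) from n = 20 terms of (5); the output ball includes a
       rigorous bound on the truncation error eps_n(z). */
    acb_hypgeom_u_asymp(r, a, b, z, 20, 64);
    acb_printn(r, 15, 0);
    flint_printf("\n");

    acb_clear(a); acb_clear(b); acb_clear(z); acb_clear(r);
    flint_cleanup();
    return 0;
}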

We have not yet implemented parameter derivatives of the asymptotic series, since the main use of parameter derivatives is for computing limits in formulas involving non-asymptotic series. Parameter derivatives of $\varepsilon_n(z)$ could be bounded using the Cauchy integral formula.

6.2 Connection formulas

For all complex $a$, $b$ and all $z \ne 0$, writing $\tilde M(a,b,z) = M(a,b,z)/\Gamma(b)$ for the regularized function,

$$U(a,b,z) = \frac{\Gamma(1-b)}{\Gamma(a-b+1)}\, M(a,b,z) + \frac{\Gamma(b-1)}{\Gamma(a)}\, z^{1-b}\, M(a-b+1,\, 2-b,\, z) \tag{6}$$

$$\tilde M(a,b,z) = \frac{(-z)^{-a}}{\Gamma(b-a)}\, U^*(a,b,z) + \frac{z^{a-b}\, e^z}{\Gamma(a)}\, U^*(b-a,\, b,\, -z) \tag{7}$$

with the understanding that principal branches are used everywhere, and that $b \notin \mathbb{Z}$ in (6) (integer $b$ is handled as a limit; see below). Formula (6), which allows computing $U$ when $|z|$ is too small to use the asymptotic series, is [21, 13.2.42]. Formula (7), which in effect gives us the asymptotic expansion for $\tilde M$, can be derived from the connection formula between $M$ and $U$ given in [21, 13.2.41]. The advantage of using $U^*$ and the form (7) instead of $U$ is that it behaves continuously in interval arithmetic when the interval for $z$ straddles the real axis. The discrete jumps that occur in (7) when crossing branch cuts on the right-hand side only contribute by an exponentially small amount: when $|z|$ is large enough for the asymptotic expansion to be used, the term involving $z^{a-b}$ is continuous where $e^z$ dominates, and the term involving $(-z)^{-a}$ is continuous where $e^z$ is negligible.

6.3 Algorithm selection

Let $p$ be the target precision in bits. To support unrestricted evaluation of $M$ or $U$, the convergent series must be used for fixed $z$ when $p \to \infty$, and the asymptotic series must be used for fixed $p$ when $|z| \to \infty$. Assuming that the parameters $a$ and $b$ are small, the error term in the asymptotic series is smallest approximately when $n \approx |z|$, and the magnitude of the error then is approximately $e^{-|z|}$. In other words, the asymptotic series can give up to about $|z| \log_2 e$ bits of accuracy. Accordingly, to compute either $M$ or $U$, there are four main cases:

  • For $M$ when $|z|$ is small: use the series (1) directly.

  • For $M$ when $|z|$ is large: use (7) and evaluate two $U^*$ series (with error bounds for each).

  • For $U$ or $U^*$ when $|z|$ is large: use the series (5) directly.

  • For $U$ or $U^*$ when $|z|$ is small: use (6) and evaluate two $M$ series. If $b \in \mathbb{Z}$, compute the limit by substituting $b + \varepsilon$ and evaluating the right hand side of (6) using power series arithmetic.

The cutoff based on $|z|$ and $p$ is correct asymptotically, but somewhat naive since it ignores cancellation in the series and in the connection formula (6). In the transition region $|z| \approx p \log 2$, all significant bits may be lost. The algorithm selection can be optimized by estimating the amount of cancellation with either formula, and selecting the one that should give the highest final accuracy. For example, assuming that both $|a|$ and $|b|$ are small and that $z < 0$, $M(a,b,z)$ is of moderate size, while the terms in the series grow to about $e^{|z|}$, so roughly $|z| \log_2 e$ bits are lost to cancellation, while no cancellation occurs in the $U^*$ series. A more advanced scheme should take into account $a$ and $b$. The algorithm selection in Arb generally uses the simplified asymptotic estimate, although cancellation estimation has been implemented for a few specific cases. This is an important place for future optimization.

In general, the Kummer transformation $M(a,b,z) = e^z M(b-a,\, b,\, -z)$ is used to compute $M$ when $\Re(z) < 0$, so that worst-case cancellation occurs around the oscillatory region $\Re(z) \approx 0$, $|\Im(z)|$ large. An interesting potential optimization would be to use methods such as [17] to reduce cancellation there also.

6.4 Bessel functions

Bessel functions are computed via the ${}_0F_1$ series for small $|z|$ and via $U^*$ for large $|z|$. The formula $I_\nu(z) = \frac{(z/2)^\nu e^{-z}}{\Gamma(\nu+1)}\, M\!\left(\nu + \tfrac12,\, 2\nu + 1,\, 2z\right)$ together with (7) gives the asymptotic expansions of $I_\nu$ and, after rotating the argument, of $J_\nu$, $Y_\nu$ and $K_\nu$.

To compute $J_\nu(z)$ itself when $|z|$ is large, the expansions with prefactors $e^{iz}$ and $e^{-iz}$ are combined according to the sign of the midpoint of $\Im(z)$, to avoid evaluating square roots on the branch cut. For $Y_\nu(z)$, we use the analogous combination when $|z|$ is large, and

$$Y_\nu(z) = \frac{\cos(\pi\nu)\, J_\nu(z) - J_{-\nu}(z)}{\sin(\pi\nu)} \tag{8}$$

otherwise, with a limit computation when $\nu \in \mathbb{Z}$. Note that it would be a mistake to use (8) for all $z$, not only because it requires evaluating four asymptotic series instead of one, but because it would lead to exponentially large cancellation when $|\Im(z)|$ is large.
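All of these choices happen behind a single function call; for example (sample inputs of our own choosing):

#include "acb_hypgeom.h"

int main(void)
{
    acb_t nu, z, r;

    acb_init(nu); acb_init(z); acb_init(r);
    acb_set_d_d(nu, 0.5, 0.0);       /* sample order */
    acb_set_d_d(z, 100.0, 1.0);      /* large |z|: asymptotic regime */

    acb_hypgeom_bessel_y(r, nu, z, 64);  /* Arb selects the method */
    acb_printn(r, 15, 0);
    flint_printf("\n");

    acb_clear(nu); acb_clear(z); acb_clear(r);
    flint_cleanup();
    return 0;
}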

6.5 Other functions

The other functions in Table 1 are generally computed via the $M$, ${}_0F_1$ or $\operatorname{erf}$ series for small $|z|$ (in all cases, the $M$ series could be used, but the alternative series are better), and via $U^*$ for large $|z|$. Airy functions, which are related to Bessel functions with parameter $\nu = \pm 1/3$ (and $\pm 2/3$ for the derivatives), use a separate implementation that does not rely on the generic code for $M$ and $U^*$. This is done as an optimization since the generic code for hypergeometric series currently does not have an interface to input rational parameters with non-power-of-two denominators exactly.

In all cases, care is taken to use formulas that avoid cancellation asymptotically when $|z| \to \infty$, but cancellation can occur in the transition region $|z| \approx p \log 2$. Currently, the error functions automatically evaluate the function at the midpoint of the input interval and compute a near-optimal error bound based on derivatives, automatically increasing the internal working precision to compensate for cancellation. This could also be done for other functions in the future, most importantly for exponential integrals and Bessel functions of small order.

7 The Gauss hypergeometric function

Function   Notation
Hypergeometric function   ${}_2F_1(a,b,c,z)$
Chebyshev functions   $T_\nu(z)$, $U_\nu(z)$
Jacobi function   $P_\nu^{(a,b)}(z)$
Gegenbauer (ultraspherical) function   $C_\nu^{\mu}(z)$
Legendre functions   $P_\nu^{\mu}(z)$, $Q_\nu^{\mu}(z)$
Spherical harmonics   $Y_n^{m}(\theta, \phi)$
Incomplete beta function   $B_z(a,b)$, $I_z(a,b)$
Complete elliptic integrals   $K(m)$, $E(m)$
Table 2: Implemented variants and derived cases of the Gauss hypergeometric function.

The function ${}_2F_1$ is implemented for general parameters, together with various special cases shown in Table 2. For Legendre functions, two different branch cut variants are implemented. We list the Chebyshev functions and complete elliptic integrals for completeness; Chebyshev functions are generally computed using trigonometric formulas (or binary exponentiation when the order is an integer), and complete elliptic integrals are computed using arithmetic-geometric mean iteration. Some other functions in Table 2 also use direct recurrences to improve speed or numerical stability in special cases, with the ${}_2F_1$ representation used as a general fallback.

7.1 Connection formulas

The function ${}_2F_1$ can be rewritten using the Euler and Pfaff transformations

$${}_2F_1(a,b,c,z) = (1-z)^{c-a-b}\, {}_2F_1(c-a,\, c-b,\, c,\, z) = (1-z)^{-a}\, {}_2F_1\!\left(a,\, c-b,\, c,\, \frac{z}{z-1}\right). \tag{9}$$

It can also be written as a linear combination of two ${}_2F_1$'s of argument $1-z$, $1/z$, $1/(1-z)$, or $1-1/z$ (see [21, 15.8]). For example, the $1/z$ transformation reads

$$\frac{\sin(\pi(b-a))}{\pi}\, {}_2\tilde F_1(a,b,c,z) = \frac{(-z)^{-a}}{\Gamma(b)\Gamma(c-a)}\, {}_2\tilde F_1\!\left(a,\, a-c+1,\, a-b+1,\, \frac1z\right) - \frac{(-z)^{-b}}{\Gamma(a)\Gamma(c-b)}\, {}_2\tilde F_1\!\left(b,\, b-c+1,\, b-a+1,\, \frac1z\right). \tag{10}$$

The first step when computing ${}_2F_1(a,b,c,z)$ is to check whether the original series is terminating, or whether one of the transformations (9) results in a terminating series. Otherwise, we pick the linear fractional transformation among

$$z, \qquad \frac{z}{z-1}, \qquad 1-z, \qquad \frac{1}{z}, \qquad \frac{1}{1-z}, \qquad 1-\frac{1}{z}$$

that results in the argument of smallest absolute value, and thus the most rapid convergence of the hypergeometric series. In Arb, experimentally-determined tuning factors are used in this selection step to account for the fact that the first two transformations require half as many function evaluations. The coverage of the complex plane is illustrated in Figure 1. This strategy is effective for all complex $z$ except near the two points $z = e^{\pm i\pi/3} = \tfrac12 \pm \tfrac{\sqrt{3}}{2} i$, which we handle below.
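A sketch of the selection step in plain C99 complex arithmetic (omitting the terminating-series checks and Arb's tuning factors):

#include <complex.h>

/* Return the index (0-5) of the transformation, in the order listed
   above, whose transformed argument has the smallest absolute value. */
int choose_transformation(double complex z)
{
    double complex w[6] = { z, z / (z - 1.0), 1.0 - z,
                            1.0 / z, 1.0 / (1.0 - z), 1.0 - 1.0 / z };
    int i, best = 0;

    for (i = 1; i < 6; i++)
        if (cabs(w[i]) < cabs(w[best]))
            best = i;
    return best;
}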

Figure 1: Different shades correspond to different fractional transformations chosen depending on the location of $z$ in the complex plane. The coordinate axes and the unit circle are drawn in white. Black indicates the corner cases where no transformation is effective.

7.2 Parameter limits

The $1-z$ and $1-\frac1z$ transformations involve a division by $\sin(\pi(c-a-b))$, and the $\frac1z$ and $\frac1{1-z}$ transformations involve a division by $\sin(\pi(b-a))$, as in (10). Therefore, when $c-a-b$ or $b-a$ respectively is an integer, we compute the limit using $\mathbb{C}[[\varepsilon]]$ arithmetic.

The limit computation can only be done automatically when $c-a-b$ (or $b-a$) is an exact integer. If, for example, $a$ and $b$ are inexact intervals such that $b-a$ is a nonexact interval containing 1 (say $a \approx \sqrt2$ and $b \approx \sqrt2 + 1$), the output would not be finite. To solve this problem, Arb allows the user to pass boolean flags as input to the ${}_2F_1$ function, indicating that $b-a$ or $c-a-b$ represent exact integers despite being inexact.
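For example (sample inputs; ACB_HYPGEOM_2F1_AB is the flag declaring $a - b$ to be an exact integer):

#include "acb_hypgeom.h"

int main(void)
{
    slong prec = 128;
    acb_t a, b, c, z, r;

    acb_init(a); acb_init(b); acb_init(c); acb_init(z); acb_init(r);

    acb_set_ui(a, 2);
    acb_sqrt(a, a, prec);        /* a = sqrt(2) as an inexact ball */
    acb_sub_ui(b, a, 1, prec);   /* b = sqrt(2) - 1, so a - b = 1  */
    acb_set_ui(c, 3);            /* sample value */
    acb_set_d_d(z, 0.0, 2.0);    /* sample argument outside the unit disk */

    /* a - b is a nonexact ball containing 1: without the flag, the
       division by sin(pi(b-a)) in the connection formula would give
       an infinite enclosure; the flag asserts that a - b is an exact
       integer, so the limit is computed instead. */
    acb_hypgeom_2f1(r, a, b, c, z, ACB_HYPGEOM_2F1_AB, prec);
    acb_printn(r, 20, 0);
    flint_printf("\n");

    acb_clear(a); acb_clear(b); acb_clear(c); acb_clear(z); acb_clear(r);
    flint_cleanup();
    return 0;
}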

7.3 Corner case

When $z$ is near $e^{\pm i\pi/3}$, we use analytic continuation. The function $f(z) = {}_2F_1(a,b,c,z)$ satisfies the hypergeometric differential equation $z(1-z) f''(z) + [c - (a+b+1)z] f'(z) - ab\, f(z) = 0$.