
# Asymptotics of Landau constants with optimal error bounds

Yutian Li, Saiyu Liu, Shuaixia Xu and Yuqiu Zhao (corresponding author; e-mail address: stszyq@mail.sysu.edu.cn)
Institute of Computational and Theoretical Studies, and Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong
School of Mathematics and Computational Science, Hunan University of Science and Technology, Xiangtan 411201, Hunan, China

Institut Franco-Chinois de l'Energie Nucléaire, Sun Yat-sen University, Guangzhou 510275, China
Department of Mathematics, Sun Yat-sen University, Guangzhou 510275, China

Abstract: We study the asymptotic expansion for the Landau constants $G_n$,

\[ \pi G_n\sim \ln N+\gamma+4\ln 2+\sum_{s=1}^{\infty}\frac{\beta_{2s}}{N^{2s}},\qquad n\to\infty, \]

where $N=n+3/4$, $\gamma$ is Euler's constant, and $(-1)^{s+1}\beta_{2s}$ are positive rational numbers, given explicitly in an iterative manner. We show that the error due to truncation is bounded in absolute value by, and of the same sign as, the first neglected term for all nonnegative $n$. Consequently, we obtain optimal sharp bounds up to arbitrary orders of the form

\[ \ln N+\gamma+4\ln 2+\sum_{s=1}^{2m}\frac{\beta_{2s}}{N^{2s}}<\pi G_n<\ln N+\gamma+4\ln 2+\sum_{s=1}^{2k-1}\frac{\beta_{2s}}{N^{2s}} \]

for all $n=0,1,2,\cdots$, $m=1,2,\cdots$, and $k=1,2,\cdots$.

The results are proved by approximating the coefficients $\beta_{2s}$ with the Gauss hypergeometric functions involved, and by using the second-order difference equation satisfied by $G_n$, as well as an integral representation of the constants $\rho_k=(-1)^{k+1}\beta_{2k}/(2k-1)!$.

MSC2010: 39A60; 41A60; 41A17; 33C05

Keywords: Landau constants; second-order linear difference equation; asymptotic expansion; sharper bound.

## 1 Introduction and statement of results

A century ago, it was shown by Landau [7] that if a function $f(z)$ is analytic in the unit disc, such that $|f(z)|<1$, with the Maclaurin expansion

\[ f(z)=a_0+a_1z+a_2z^2+\cdots+a_nz^n+\cdots,\qquad |z|<1, \]

then it holds

\[ |a_0+a_1+a_2+\cdots+a_n|\le G_n,\qquad n=0,1,2,\cdots, \]

where $G_0=1$ and

\[ G_n=1+\left(\frac 12\right)^2+\left(\frac{1\cdot 3}{2\cdot 4}\right)^2+\cdots+\left(\frac{1\cdot 3\cdots(2n-1)}{2\cdot 4\cdots(2n)}\right)^2 \tag{1.1} \]

for $n=1,2,\cdots$, and the equal sign can be attained for each $n$. The constants $G_n$ are termed Landau's constants; see, e.g., Watson [15].
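The partial sums in (1.1) can be accumulated directly; below is a minimal sketch in exact rational arithmetic (the helper name `landau_G` is ours, not from the paper):

```python
from fractions import Fraction

def landau_G(n):
    """Landau constant G_n from (1.1): G_n = sum_{k=0}^{n} t_k, where
    t_0 = 1 and t_k = ((2k-1)/(2k))^2 * t_{k-1}."""
    t = Fraction(1)
    G = t
    for k in range(1, n + 1):
        t *= Fraction((2 * k - 1) ** 2, (2 * k) ** 2)
        G += t
    return G
```

For instance, $G_1=1+(1/2)^2=5/4$ and $G_2=5/4+(3/8)^2=89/64$, while for large $n$ the values grow logarithmically, in accordance with Landau's estimate quoted below.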

Efforts have been made to approximate these constants from the very beginning. Indeed, Landau himself [7] worked out the large-$n$ behavior

\[ G_n\sim\frac 1\pi\ln n,\qquad\text{as } n\to\infty. \]

Since then, the approximation of $G_n$ has gone in two related directions. One is to find sharper bounds of $G_n$ for all positive integers $n$, and the other is to obtain large-$n$ asymptotic approximations for the constants.

### 1.1 Sharper bounds

Many authors have worked on the sharp bounds of $G_n$. For example, in 1982, Brutman [2] obtains

\[ 1+\pi^{-1}\ln(n+1)\le G_n<1.0663+\pi^{-1}\ln(n+1),\qquad n=0,1,2,\cdots. \]

The result is improved in 1991 by Falaleev [4] to give the lower bound

\[ 1.0662+\pi^{-1}\ln(n+0.75)<G_n. \]

In 2000, an attempt is made by Cvijović & Klinowski [3] to use the digamma function $\psi(z)$ (see, e.g., [13, p.136, (5.2.2)]). They prove the lower bounds

\[ c_0+\pi^{-1}\psi(n+5/4)<G_n \]

and

\[ 0.9883+\pi^{-1}\psi(n+3/2)<G_n, \]

where $c_0=(\gamma+4\ln 2)/\pi=1.0662\cdots$, and $\gamma=0.5772\cdots$ is the Euler constant ([13, (5.2.3)]).

Inequalities of this type are revisited in a 2002 paper [1] of Alzer. In that paper, the problem is turned into the following: to find the largest $\alpha$ and the smallest $\beta$ such that

\[ c_0+\pi^{-1}\psi(n+\alpha)\le G_n\le c_0+\pi^{-1}\psi(n+\beta)\qquad\text{for all } n\ge 0. \]

The answer is that $\alpha=5/4$ and $\beta=\psi^{-1}\left(\pi(1-c_0)\right)=1.26\cdots$, appealing to the complete monotonicity of certain functions involving $\psi$.

In 2009, Zhao [19] starts seeking higher-order terms in the bounds. A formula in [19], holding for all positive integers $n$, reads

\[ \ln(16n)+\gamma-\frac{1}{4n}+\frac{5}{192n^2}<\pi G_{n-1}. \]

Several authors have made improvements. In a 2011 paper [9], Mortici gives an inequality of the above type involving a term of higher order. Terms of still higher orders are brought in by Granath in a recent paper [5] in 2012.

It seems possible to obtain sharper bounds involving terms of higher and higher orders. Accordingly, difficulties may arise. The case by case process of taking more and more terms might be endless.

### 1.2 Asymptotic approximations

Most of the above inequalities can be used to derive asymptotic approximations for $G_n$. Such approximations can also be obtained by employing integral representations, generating functions and relations with hypergeometric functions; see, e.g., [8]. Indeed, going back to Watson [15], a formula of asymptotic nature is derived by using a certain integral representation:

\[ G_n=\frac 1\pi\ln(n+1)+\frac{\{\Gamma(n+3/2)\}^2}{\pi^2\,\Gamma(n+1)}\sum_{l=0}^{m-1}\frac{\{\Gamma(l+1/2)\}^2}{\Gamma(l+1)\,\Gamma(n+l+2)}\left\{\psi(l+n+2)-\ln(n+1)+\psi(l+1)-2\psi(l+1/2)\right\}+O\left\{(n+1)^{1-m}\right\} \tag{1.3} \]

for large $n$ and positive integer $m$. Theoretically, an asymptotic expansion can be extracted from (1.3) by substituting the large-$n$ expansions of the gamma and digamma functions into it. In fact, Watson obtains

\[ G_n\sim\frac 1\pi\left[\ln(n+1)+\gamma+4\ln 2-\frac{1}{4(n+1)}+\frac{5}{192(n+1)^2}+\cdots\right], \]

of which (1.2) is an extended version.

We skip to some very recent progress in this direction. In the manuscript [6], Ismail, Li and Rahman derive a complete asymptotic expansion for the Landau constants $G_n$. The approach is based on a formula of Ramanujan, which connects the Landau constants with a hypergeometric function.

Several relevant papers are worth mentioning. In [11], Nemes and Nemes derive full asymptotic expansions using a formula in [3]. They also conjecture a symmetry property of the coefficients in the expansion. The conjecture has been proved by G. Nemes himself in [10].

###### Proposition 1.

(Nemes) Let $h$ be a real number. The Landau constants have the following asymptotic expansion

\[ G_n\sim\frac 1\pi\ln(n+h)+\frac 1\pi(\gamma+4\ln 2)-\sum_{k\ge 1}\frac{g_k(h)}{(n+h)^k} \tag{1.4} \]

as $n\to\infty$, where the coefficients $g_k(h)$ are certain computable constants that satisfy $g_k(3/2-h)=(-1)^kg_k(h)$ for every $k$.

As an important special case, Nemes [10] has further proved that

\[ \pi G_n\sim\ln(n+3/4)+\gamma+4\ln 2+\sum_{s=1}^{\infty}\frac{\beta_{2s}}{(n+3/4)^{2s}},\qquad n\to\infty, \tag{1.5} \]

where the coefficients $(-1)^{s+1}\beta_{2s}$ are positive rational numbers.

The argument in [10] is based on an integral representation of $G_n$ involving a Gauss hypergeometric function in the integrand, while in [8], the authors of the present paper study this asymptotic problem by using an entirely different approach, starting from the obvious observation that the Landau constants satisfy a difference equation

\[ G_{n+1}-G_n=\left[\frac{2n+1}{2n+2}\right]^2\left(G_n-G_{n-1}\right),\qquad n=0,1,\cdots, \tag{1.6} \]

as can be seen from the explicit formula (1.1), where $G_{-1}=0$.
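The recurrence (1.6) can be verified in exact arithmetic against the explicit sums from (1.1); a small sketch (helper names are ours):

```python
from fractions import Fraction

def landau_sequence(n_max):
    """Return [G_0, G_1, ..., G_{n_max}] from the explicit formula (1.1)."""
    t, G = Fraction(1), [Fraction(1)]
    for k in range(1, n_max + 1):
        t *= Fraction((2 * k - 1) ** 2, (2 * k) ** 2)
        G.append(G[-1] + t)
    return G

def check_recurrence(n_max):
    """Check G_{n+1} - G_n = ((2n+1)/(2n+2))^2 (G_n - G_{n-1}) with G_{-1} = 0."""
    G = [Fraction(0)] + landau_sequence(n_max + 1)   # G[0] plays the role of G_{-1}
    for n in range(0, n_max + 1):
        lhs = G[n + 2] - G[n + 1]
        rhs = Fraction(2 * n + 1, 2 * n + 2) ** 2 * (G[n + 1] - G[n])
        if lhs != rhs:
            return False
    return True
```

Since both sides are rational, the comparison is exact, not a floating-point approximation.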

By applying the theory of Wong and Li for second-order linear difference equations [17] to (1.6), the general expansion in (1.4) is obtained, and the conjecture of [11] is also confirmed. An advantage of this approach, compared with the previous ones, is that all coefficients in the expansion are given iteratively in an explicit manner.

### 1.3 A question and numerical evidences

As pointed out in [8], the case $h=3/4$ corresponding to (1.5) is numerically efficient since all odd terms in the expansion vanish. We will find that this expansion in terms of $N=n+3/4$ is even more special, both from asymptotic and sharper bound points of view.

From (1.5), as suggested by the alternating signs and by numerical calculations, there is a natural question as follows:

###### Question 1.

Is the error due to truncation of (1.5) bounded in absolute value by, and of the same sign as, the first neglected term? Or, more precisely, do we have the following?

\[ \frac{\varepsilon_l(N)}{\beta_{2l}/N^{2l}}\in(0,1)\qquad\text{for } n=0,1,2,\cdots\ \text{ and }\ l=1,2,\cdots, \tag{1.7} \]

where $N=n+3/4$, and

\[ \varepsilon_l(N)=\pi G_n-\left\{\ln N+\gamma+4\ln 2+\sum_{s=1}^{l-1}\frac{\beta_{2s}}{N^{2s}}\right\}. \tag{1.8} \]

Recalling that the $(-1)^{l+1}\beta_{2l}$ are positive, it is readily seen that a positive answer to (1.7) is equivalent to

\[ \varepsilon_{2k}(N)<0\qquad\text{and}\qquad\varepsilon_{2k-1}(N)>0 \tag{1.9} \]

for all $k=1,2,\cdots$ and $N=n+3/4$ with $n=0,1,2,\cdots$.

The question reminds us of an earlier work of Shivakumar and Wong [14], where an asymptotic expansion is obtained for the Lebesgue constants associated with the polynomial interpolation at the zeros of the Chebyshev polynomials, and the error in stopping the series at any time is shown to have the same sign as, and to be in absolute value less than, the first term neglected. A similar discussion can be found in, e.g., Olver [12, p.285], on the Euler-Maclaurin formula.

Numerical experiments agree with (1.7). The ratios $\varepsilon_l(N)\big/\left(\beta_{2l}/N^{2l}\right)$, $N=n+3/4$, are depicted in Figure 1 for the first few $l$.
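A numerical probe of (1.7) is straightforward. The sketch below tests the case $l=1$, where the first neglected term is $\beta_2/N^2$ with $\beta_2=11/192$; the helper names and the range of $n$ are ours:

```python
import math

GAMMA = 0.5772156649015329          # Euler's constant
BETA2 = 11 / 192                     # first coefficient in (1.5)

def landau_G(n):
    """G_n from (1.1), in floating point."""
    t, G = 1.0, 1.0
    for k in range(1, n + 1):
        t *= ((2 * k - 1) / (2 * k)) ** 2
        G += t
    return G

def ratio_l1(n):
    """eps_1(N) / (beta_2 / N^2) with N = n + 3/4; cf. (1.7) and (1.8)."""
    N = n + 0.75
    eps1 = math.pi * landau_G(n) - (math.log(N) + GAMMA + 4 * math.log(2))
    return eps1 / (BETA2 / N ** 2)

ratios = [ratio_l1(n) for n in range(0, 60)]
```

For $n=0$ the ratio is about $0.78$, and it increases towards $1$ as $n$ grows, in agreement with the conjectured membership in $(0,1)$.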

### 1.4 Statement of results

In the present paper, we will justify (1.9). In fact, we will prove the following theorem.

###### Theorem 1.

For $N=n+3/4$, it holds

\[ (-1)^{l+1}\varepsilon_l(N)>0 \tag{1.10} \]

for $n=0,1,2,\cdots$ and $l=1,2,\cdots$, where $\varepsilon_l(N)$ is defined in (1.8), and the coefficients $\beta_{2s}$ are determined iteratively in (2.4) below.

The above theorem has direct applications both in asymptotics and in sharp bounds. From the asymptotic point of view, we can obtain error bounds which in a sense are optimal. To be precise, we have the following.

###### Theorem 2.

The error due to truncation of (1.5) is bounded in absolute value by, and of the same sign as, the first neglected term for all nonnegative $n$. That is,

\[ 0<(-1)^{l+1}\varepsilon_l(N)=|\varepsilon_l(N)|<\frac{|\beta_{2l}|}{N^{2l}}=(-1)^{l+1}\frac{\beta_{2l}}{N^{2l}} \tag{1.11} \]

for $l=1,2,\cdots$ and $n=0,1,2,\cdots$, where $N=n+3/4$.

The error bound in (1.11) is the first neglected term in the asymptotic expansion, and hence is optimal and cannot be improved. The inequalities in (1.11) can be derived from Theorem 1 by noticing that $\varepsilon_l(N)=\varepsilon_{l+1}(N)+\beta_{2l}/N^{2l}$, as can be seen from (1.8).

Another application of Theorem 1 is the construction of sharp bounds up to arbitrary orders.

###### Theorem 3.

For $N=n+3/4$, it holds

\[ \ln N+\gamma+4\ln 2+\sum_{s=1}^{2m}\frac{\beta_{2s}}{N^{2s}}<\pi G_n<\ln N+\gamma+4\ln 2+\sum_{s=1}^{2k-1}\frac{\beta_{2s}}{N^{2s}} \tag{1.12} \]

for all $n=0,1,2,\cdots$, $m=1,2,\cdots$, and $k=1,2,\cdots$.

The inequalities in (1.12) are understood as sharp bounds on both sides up to arbitrary orders. In a sense, the bounds are optimal and cannot be improved.

The first few coefficients $\beta_{2s}$ are listed in Table 1, as they can be evaluated via (2.4).

## 2 Proof of Theorem 1

The proof is based on the difference equation (1.6) and an approximation of the coefficients $\beta_{2s}$. To justify Theorem 1, several lemmas are stated; all, except one, are proved in the present section, while the validity of Lemma 2 is the objective of the next section.

### 2.1 The coefficients $\beta_{2s}$ in (1.5)

Write the difference equation (1.6) in the symmetrical form

\[ \left(1+\frac{1}{4N}\right)^2w(N+1)-\left(2+\frac{1}{8N^2}\right)w(N)+\left(1-\frac{1}{4N}\right)^2w(N-1)=0, \tag{2.1} \]

in which $w(N)=\pi G_n$ for $N=n+3/4$. As mentioned earlier and as in the previous paper [8], the sequence $\pi G_n$ solves (2.1), having an asymptotic expansion

\[ \pi G_n\sim\ln N+\gamma+4\ln 2+\sum_{s=1}^{\infty}\frac{\beta_s}{N^s}, \tag{2.2} \]

and the coefficients $\beta_s$ are determined by a formal substitution of (2.2) into (2.1); see Wong and Li [17]. The following result then follows:

###### Lemma 1.

For $N=n+3/4$, the coefficients $\beta_s$ in the expansion (2.2) fulfill

\[ \beta_{2k+1}=0,\qquad k=0,1,2,\cdots, \tag{2.3} \]

and

\[ \beta_{2k}=-\frac{1}{4k^2}\left(d_{k-1,k+1}\beta_{2k-2}+d_{k-2,k+1}\beta_{2k-4}+\cdots+d_{1,k+1}\beta_2-d_{0,k+1}\right),\qquad k=1,2,\cdots, \tag{2.4} \]

where, for $j=1,2,\cdots$,

\[ d_{j,s}=\frac{(2s+2j-2)\,(2s-2)!}{(2s-2j)!\,(2j-1)!}+\frac{(2s-3)!}{8\,(2s-2j-2)!\,(2j-1)!}\qquad\text{for}\ \ s\ge j+2, \tag{2.5} \]

and

\[ d_{0,s}=\left(\frac 1s-\frac{1}{2s-1}\right)+\frac{1}{16(s-1)},\qquad s=2,3,\cdots. \tag{2.6} \]

Moreover,

\[ \beta_{2k}=(-1)^{k+1}|\beta_{2k}|,\qquad k=1,2,\cdots. \tag{2.7} \]

Part of this lemma, namely (2.3) and (2.7), has been proved in Nemes' recent paper [10]. Another part, namely (2.3) and an equivalent form of (2.4), has been proved in our earlier paper [8]. Following Wong and Li [17], (2.3) and (2.4) can be justified by substituting (2.2) into (2.1), expanding both sides in formal power series of $1/N$, and equating the coefficients of equal powers.

It is readily seen that $d_{j,s}>0$ for all $s\ge j+2$ and $j\ge 1$, and that $d_{0,s}>0$ for $s\ge 2$.
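Lemma 1 turns (2.4)-(2.6) into an effective algorithm. Below is a sketch in exact rational arithmetic (function names are ours; the value $\beta_2=11/192$ can be cross-checked against Watson's expansion quoted in Section 1.2):

```python
from fractions import Fraction
from math import factorial

def d(j, s):
    """Coefficients d_{j,s} from (2.5) for j >= 1, and d_{0,s} from (2.6)."""
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1))
            + Fraction(factorial(2 * s - 3),
                       8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def beta_list(k_max):
    """Return [beta_2, beta_4, ..., beta_{2 k_max}] via the recursion (2.4)."""
    beta = []                         # beta[j-1] holds beta_{2j}
    for k in range(1, k_max + 1):
        acc = -d(0, k + 1)            # the trailing -d_{0,k+1} term
        for j in range(1, k):         # the d_{j,k+1} * beta_{2j} terms
            acc += d(j, k + 1) * beta[j - 1]
        beta.append(-acc / (4 * k ** 2))
    return beta
```

With the formulas as reconstructed above, the first two values come out as $\beta_2=11/192$ and $\beta_4=-1541/122880$, and the signs alternate as stated in (2.7).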

### 2.2 Analysis of Rl(N)

Here,

\[ R_l(N)=\left(1+\frac{1}{4N}\right)^2\varepsilon_l(N+1)-\left(2+\frac{1}{8N^2}\right)\varepsilon_l(N)+\left(1-\frac{1}{4N}\right)^2\varepsilon_l(N-1), \tag{2.8} \]

with the error term $\varepsilon_l(N)$ being given in (1.8), and $N=n+3/4$.

There are several facts worth mentioning. It is readily seen from (1.8) that $\pi G_n$ satisfies the difference equation (2.1), and can then be removed from $\varepsilon_l$ in (2.8). If we write $x=1/N$, then the logarithmic singularity at $x=0$ is also cancelled in $R_l(N)$. Therefore, each $R_l(N)$ is an analytic function in $x$ for $|x|<1$. Hence the asymptotic expansion for $R_l(N)$, in descending powers of $N$, is actually a convergent Taylor expansion in $x$,

\[ R_l(N)=\sum_{s=l+1}^{\infty}r_{l,s}\,x^{2s},\qquad |x|<1, \tag{2.9} \]

where

\[ r_{l,s}=-\left(d_{l-1,s}\beta_{2l-2}+d_{l-2,s}\beta_{2l-4}+\cdots+d_{1,s}\beta_2-d_{0,s}\right) \tag{2.10} \]

for $s\ge l+1$, with the leading coefficient $r_{l,l+1}=4l^2\beta_{2l}$; cf. (2.4).

For later use, we estimate the ratio $\rho_k/\rho_{k+1}$ of consecutive coefficients. To this end, we introduce a sequence of positive constants

\[ \rho_0=1,\qquad\text{and}\qquad\rho_l=\frac{(-1)^{l+1}\beta_{2l}}{(2l-1)!},\qquad l=1,2,\cdots. \tag{2.11} \]

We shall use the following lemma and leave its proof to Section 3 below.

###### Lemma 2.

It holds

\[ \rho_k/\rho_{k+1}\le\frac{44}{9}\pi^2 \tag{2.12} \]

for $k=0,1,2,\cdots$.
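Lemma 2 is easy to probe numerically: the ratios $\rho_k/\rho_{k+1}$ increase towards $(2\pi)^2\approx 39.5$ and stay below $\frac{44}{9}\pi^2\approx 48.25$. A sketch, assuming the iterative formulas (2.4)-(2.6) and (2.11) as we read them (names are ours):

```python
import math
from fractions import Fraction
from math import factorial

def d(j, s):
    # d_{j,s} from (2.5) for j >= 1, and d_{0,s} from (2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1))
            + Fraction(factorial(2 * s - 3),
                       8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def rho_list(k_max):
    """rho_0 = 1 and rho_l = (-1)^{l+1} beta_{2l} / (2l-1)!; cf. (2.4), (2.11)."""
    beta, rho = [], [Fraction(1)]
    for k in range(1, k_max + 1):
        acc = -d(0, k + 1)
        for j in range(1, k):
            acc += d(j, k + 1) * beta[j - 1]
        beta.append(-acc / (4 * k ** 2))
        rho.append((-1) ** (k + 1) * beta[-1] / factorial(2 * k - 1))
    return rho

rho = rho_list(12)
ratios = [float(rho[k] / rho[k + 1]) for k in range(11)]
```

The first ratio is $\rho_0/\rho_1=192/11\approx 17.45$, comfortably below the bound.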

Now we proceed to analyze $R_l(N)$ (sometimes denoted by $R_l(x)$, it being understood that $x=1/N$), so as to show that $(-1)^{l+1}R_l(N)>0$ for $l=1,2,\cdots$. More precisely, we prove a much stronger result, as follows:

###### Lemma 3.

For $l=1,2,\cdots$, we have

\[ (-1)^{l+1}r_{l,s}>0,\qquad s\ge l+1, \tag{2.13} \]

where the $r_{l,s}$ are given in (2.9) and (2.10).

Proof: The lemma can be proved by using induction with respect to $l$. Initially, we have

\[ R_1(N)=\sum_{s=2}^{\infty}d_{0,s}\,x^{2s}. \]

Since $d_{0,s}>0$ for $s\ge 2$; cf. (2.6), we see that (2.13) holds for $l=1$.

In view of the fact that $\beta_2=\frac{11}{192}$; cf. Table 1, it is readily verified that

\[ R_2(N)=\sum_{s=3}^{\infty}r_{2,s}\,x^{2s}=\sum_{s=3}^{\infty}\left(-\frac{11}{192}d_{1,s}+d_{0,s}\right)x^{2s}, \]

with all coefficients being negative. Indeed, in view of (2.5) and (2.6), we have

\[ r_{2,s}=-\frac{11}{192}\left(2s+\frac{2s-3}{8}\right)+\left[\left(\frac 1s-\frac{1}{2s-1}\right)+\frac{1}{16(s-1)}\right]<-\frac{11}{192}\cdot(2s)+\frac 1s<0 \]

for $s\ge 3$. Thus (2.13) holds for $l=2$.

Similarly, we can verify (2.13) for $l=3$, recalling the value of $\beta_4$; cf. Table 1.

Assume that (2.13) holds for $l=k$; that is, $(-1)^{k+1}r_{k,s}>0$ for $s\ge k+1$. From (2.10), we can write

\[ (-1)^{k+3}r_{k+2,s}=(-1)^{k+1}r_{k,s}+(-1)^k\left(d_{k+1,s}\beta_{2k+2}+d_{k,s}\beta_{2k}\right) \tag{2.14} \]

for $s\ge k+3$. To show that (2.13) is valid for $l=k+2$, it suffices to show that

\[ (-1)^k\left(d_{k+1,s}\beta_{2k+2}+d_{k,s}\beta_{2k}\right)>0 \]

for $s\ge k+4$, since the validity of (2.13) for $s=k+3$ is trivial in view of (2.7). This is equivalent to showing that

\[ c_{k+1,s}-c_{k,s}\,\rho_k/\rho_{k+1}=\frac{(-1)^k\left(d_{k+1,s}\beta_{2k+2}+d_{k,s}\beta_{2k}\right)}{8(s-1)^2(2s-3)!\,\rho_{k+1}}>0, \tag{2.15} \]

where $\rho_k$ is defined in (2.11), and $c_{j,s}=\frac{(2j-1)!\,d_{j,s}}{8(s-1)^2(2s-3)!}$; cf. (3.2) below. In view of (2.5), we may write

\[ c_{k,s}=\left\{\frac{1}{2\,(2s-2k)!}+\frac{k}{2(s-1)\,(2s-2k)!}\right\}+\left\{\frac{1}{64(s-1)^2\,(2s-2k-2)!}\right\}:=A+B. \]

For $k\ge 1$ and $s\ge k+4$, we have

\[ c_{k+1,s}\ge(2s-2k-1)(2s-2k)A+(2s-2k-3)(2s-2k-2)B\ge 56A+30B>\frac{478}{9}\,c_{k,s}. \]

The last inequality holds since $A>8B$.

From (2.12) in Lemma 2, it is readily verified that

\[ \frac{478}{9}>\frac{44}{9}\pi^2\ge\frac{\rho_k}{\rho_{k+1}} \]

for $k=1,2,\cdots$. Then (2.15) holds for $s\ge k+4$, and it follows that $(-1)^k\left(d_{k+1,s}\beta_{2k+2}+d_{k,s}\beta_{2k}\right)>0$ for such $s$. Accordingly, from (2.14) we see that (2.13) holds for $l=k+2$. This completes the proof of Lemma 3.
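The sign pattern (2.13) can be spot-checked directly from (2.10) in exact arithmetic. A sketch, under the same reconstructed formulas as above (names are ours):

```python
from fractions import Fraction
from math import factorial

def d(j, s):
    # d_{j,s} from (2.5) for j >= 1, and d_{0,s} from (2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1))
            + Fraction(factorial(2 * s - 3),
                       8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def beta_list(k_max):
    # recursion (2.4)
    beta = []
    for k in range(1, k_max + 1):
        acc = -d(0, k + 1)
        for j in range(1, k):
            acc += d(j, k + 1) * beta[j - 1]
        beta.append(-acc / (4 * k ** 2))
    return beta

def r(l, s, beta):
    """Taylor coefficient r_{l,s} of R_l; cf. (2.10)."""
    acc = -d(0, s)
    for j in range(1, l):
        acc += d(j, s) * beta[j - 1]
    return -acc

BETA = beta_list(8)
signs_ok = all((-1) ** (l + 1) * r(l, s, BETA) > 0
               for l in range(1, 7) for s in range(l + 1, l + 9))
```

As an internal consistency check, the leading coefficient satisfies $r_{l,l+1}=4l^2\beta_{2l}$ exactly, as noted after (2.10).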

### 2.3 Proof of Theorem 1

Now Lemma 3 implies that $(-1)^{l+1}R_l(N)>0$ for all $l=1,2,\cdots$ and all $N=n+3/4$ with $n=1,2,\cdots$.

To show that $(-1)^{l+1}\varepsilon_l(N)>0$ for all $N$, we note first that $\varepsilon_l(N)\sim\beta_{2l}/N^{2l}$ as $N\to\infty$. Hence $\tilde\varepsilon_l(N):=(-1)^{l+1}\varepsilon_l(N)>0$ for $N$ large enough. Now assume that (1.10) is not true. Then there exists a finite $M$ defined as

\[ M=\max\left\{N=n+3/4:\ n\in\mathbb{Z}_{\ge 0}\ \text{and}\ \tilde\varepsilon_l(N)\le 0\right\}, \]

so that $M+1/4$ is a positive integer and $\tilde\varepsilon_l(M)\le 0$, while $\tilde\varepsilon_l(N)>0$ for all $N\ge M+1$. For simplicity, we denote $a_N=\left(1+\frac{1}{4N}\right)^2$ and $b_N=\left(1-\frac{1}{4N}\right)^2$, so that $a_N+b_N=2+\frac{1}{8N^2}$. From (2.8) we have

\[ a_{M+1}\,\tilde\varepsilon_l(M+2)=(a_{M+1}+b_{M+1})\,\tilde\varepsilon_l(M+1)+b_{M+1}\left(-\tilde\varepsilon_l(M)\right)+(-1)^{l+1}R_l(M+1). \]

The latter two terms on the right-hand side are non-negative; hence we obtain

\[ a_{M+1}\,\tilde\varepsilon_l(M+2)\ge(a_{M+1}+b_{M+1})\,\tilde\varepsilon_l(M+1), \]

which implies that

\[ \tilde\varepsilon_l(M+2)>\tilde\varepsilon_l(M+1). \tag{2.16} \]

Using (2.8) again, now for $N=M+2$, we have

\[ a_{M+2}\,\tilde\varepsilon_l(M+3)\ge(a_{M+2}+b_{M+2})\,\tilde\varepsilon_l(M+2)-b_{M+2}\,\tilde\varepsilon_l(M+1). \]

A combination of the previous two inequalities gives

\[ \tilde\varepsilon_l(M+3)>\tilde\varepsilon_l(M+2). \tag{2.17} \]

In general, we obtain

\[ \tilde\varepsilon_l(M+k+1)>\tilde\varepsilon_l(M+k) \tag{2.18} \]

by induction.

by induction. From the equalities in (2.16), (2.17) and (2.18), we conclude that

 ~εl(M+k)>~εl(M+1) (2.19)

for all . Recalling that for , letting will give . This contradicts the definition of . Thus we have proved Theorem 1.

## 3 Proof of Lemma 2

The idea is simple: to approximate the coefficients $\rho_l$, and then to work out the ratio $\rho_k/\rho_{k+1}$. Yet the procedure is complicated.

A brief outline of the proof is as follows: In Section 3.1, we bring in an ordinary differential equation (3.10) with a specific analytic solution $v(x)$, from which the $\rho_l$ can be read off as Maclaurin coefficients; cf. (3.7) and (3.9). The function $v(x)$ is then extended, in Section 3.2, and via the hypergeometric functions, to a function analytic in a cut strip. An integral representation is then obtained by using the Cauchy integral formula, and the integration path is deformed based on the analytic continuation procedure. In Section 3.3, the integral is split, approximated, and estimated, and hence bounds for $\rho_l$ on both sides are established in (3.25) for all $l$ large enough. Eventually, in Section 3.4, an upper bound for the ratio $\rho_k/\rho_{k+1}$ is obtained for all non-negative integers $k$.

### 3.1 Differential equation

In terms of the $\rho_l$ defined in (2.11), namely, $\rho_0=1$ and $\rho_l=(-1)^{l+1}\beta_{2l}/(2l-1)!$ for $l\ge 1$, formula (2.4) can be written as

\[ c_{l,l+1}\rho_l-c_{l-1,l+1}\rho_{l-1}+\cdots+(-1)^kc_{l-k,l+1}\rho_{l-k}+\cdots+(-1)^{l-1}c_{1,l+1}\rho_1+(-1)^lc_{0,l+1}\rho_0=0 \tag{3.1} \]

for $l=1,2,\cdots$, where $c_{l,l+1}=\frac 12$, and

\[ c_{l-k,l+1}=\frac{(2l-2k-1)!}{(2l-1)!}\cdot\frac{d_{l-k,l+1}}{8l^2}=\frac{1}{2\,(2k+2)!}+\frac{l-k}{2l\,(2k+2)!}+\frac{1}{64l^2\,(2k)!} \tag{3.2} \]

for $l=1,2,\cdots$ and $k=1,2,\cdots,l-1$. It can be verified from (2.6) that $c_{0,l+1}$ also takes the same form; that is, (3.2) is also valid for $k=l$, $l=1,2,\cdots$.

The idea now is to approximate the $\rho_l$, and then to estimate the ratio $\rho_k/\rho_{k+1}$.

Taking $l=1,2,3,\cdots$ in (3.1), we have

\[ \begin{aligned} a_1&:=c_{1,2}\rho_1-c_{0,2}\rho_0=0,\\ a_2&:=c_{2,3}\rho_2-c_{1,3}\rho_1+c_{0,3}\rho_0=0,\\ a_3&:=c_{3,4}\rho_3-c_{2,4}\rho_2+c_{1,4}\rho_1-c_{0,4}\rho_0=0,\\ &\ \ \cdots\cdots. \end{aligned} \]

Summing up, after multiplying $a_l$ by $x^{2l}$, gives

\[ \rho_0\left(-c_{0,2}x^2+c_{0,3}x^4-c_{0,4}x^6+\cdots\right)+\sum_{k=1}^{\infty}\rho_kx^{2k}\sum_{s=1}^{\infty}(-1)^{s-1}c_{k,k+s}\,x^{2s-2}=0. \tag{3.3} \]

In view of (3.2), it is readily verified by summing up the series that

\[ -c_{0,2}x^2+c_{0,3}x^4-c_{0,4}x^6+\cdots=-\frac 14+\frac{h(x)}{2}-\int_0^x\frac{\mathrm dt_1}{t_1}\int_0^{t_1}\frac{t\,h(t)}{16}\,\mathrm dt, \tag{3.4} \]

where

\[ h(x):=\frac{1-\cos x}{x^2}. \]

Also we have, for $k=1,2,\cdots$,

\[ \sum_{s=1}^{\infty}(-1)^{s-1}c_{k,k+s}\,x^{2s-2}=\frac{h(x)}{2}+\frac{k}{x^{2k}}\int_0^xt^{2k-1}h(t)\,\mathrm dt-\frac{1}{x^{2k}}\int_0^x\frac{\mathrm dt_1}{t_1}\int_0^{t_1}\frac{t^{2k+1}h(t)}{16}\,\mathrm dt. \tag{3.5} \]

Substituting (3.4) and (3.5) into (3.3), we obtain the equation

\[ -\frac 14+\frac 12h(x)u(x)+\int_0^x\frac 12h(t)u'(t)\,\mathrm dt-\int_0^x\frac{\mathrm dt_1}{t_1}\int_0^{t_1}\frac{t}{16}h(t)u(t)\,\mathrm dt=0, \tag{3.6} \]

where

\[ u(x):=\sum_{k=0}^{\infty}\rho_kx^{2k}. \tag{3.7} \]
###### Remark 1.

The existence of $u(x)$ defined above and the validity of (3.3) can be justified by showing that $|\rho_l|\le M_0\,\delta^{-2l}$ for all positive integers $l$, with $M_0$ and $\delta$ being positive constants. Indeed, from (3.2) it is readily seen that $|c_{l-k,l+1}|\le\frac{1}{(2k)!}$ for $1\le k\le l$. Now we assume that $|\rho_k|\le M_0\,\delta^{-2k}$ for $k\le l-1$, where $\delta$ is small enough such that $\cosh\delta-1\le\frac 12$. Then, by using (3.1) we have

\[ \frac 12|\rho_l|\le\sum_{k=1}^{l}|c_{l-k,l+1}||\rho_{l-k}|\le\sum_{k=1}^{l}\frac{M_0\,\delta^{2k-2l}}{(2k)!}\le\frac{M_0}{\delta^{2l}}\left(\cosh\delta-1\right). \]

Hence we have $|\rho_l|\le M_0\,\delta^{-2l}$ by induction.

Applying the operator $\frac{\mathrm d}{\mathrm dx}\,x\,\frac{\mathrm d}{\mathrm dx}$ to both sides of (3.6), we see that $u(x)$ solves the second-order differential equation

\[ \left[x\left(\frac 12h'(x)u(x)+h(x)u'(x)\right)\right]'-\frac{x\,h(x)\,u(x)}{16}=0 \tag{3.8} \]

in a neighborhood of $x=0$, with initial conditions $u(0)=1$ and $u'(0)=0$.

In the next few steps we derive a representation of $u(x)$ for later use. First, substituting

\[ v(x)=\sqrt{h(x)}\,u(x)=\frac{\sqrt 2\,\sin\frac x2}{x}\,u(x) \tag{3.9} \]

into equation (3.8) yields

\[ \sin\frac x2\,v''(x)+\frac 12\cos\frac x2\,v'(x)-\frac{1}{16}\sin\frac x2\,v(x)=0 \tag{3.10} \]

in a neighborhood of $x=0$, with $v(0)=\frac{1}{\sqrt 2}$ and $v'(0)=0$.

It is shown in Remark 1 that $u(x)$ is analytic at the origin. So is $v(x)$; cf. (3.9). What is more, near $x=0$, the function $v(x)$ can be represented as a hypergeometric function. Indeed, a change of variable

\[ t=\frac{1-\cos x}{2}=\sin^2\frac x2 \]

turns the equation into the hypergeometric equation

\[ t(1-t)\frac{\mathrm d^2v}{\mathrm dt^2}+\left(1-\frac 32t\right)\frac{\mathrm dv}{\mathrm dt}-\frac{1}{16}v=0. \tag{3.11} \]

Taking the initial conditions into account, it is easily verified that

\[ v(x)=\frac{1}{\sqrt 2}\,F\!\left(\frac 14,\frac 14;1;\sin^2\frac x2\right)=\frac{1}{\sqrt 2}\,F\!\left(\frac 12,\frac 12;1;\sin^2\frac x4\right), \tag{3.12} \]

initially for $x$ near $0$, and then analytically extended elsewhere; cf. [13, (15.2.1)]. The second equality follows from a quadratic hypergeometric transformation; see [13, (15.8.18)].
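Identity (3.12) ties the construction together: the coefficients $\rho_k$ produced by the recursion (2.4)/(2.11) must reproduce the hypergeometric function through (3.9). A numerical sketch, comparing $\sqrt{h(x)}\,u(x)$ with both forms in (3.12) via the defining Gauss series (names and truncation orders are ours):

```python
import math
from fractions import Fraction
from math import factorial

def d(j, s):
    # d_{j,s} from (2.5) for j >= 1, and d_{0,s} from (2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1))
            + Fraction(factorial(2 * s - 3),
                       8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def rho_list(k_max):
    # rho_l from (2.4) and (2.11)
    beta, rho = [], [Fraction(1)]
    for k in range(1, k_max + 1):
        acc = -d(0, k + 1)
        for j in range(1, k):
            acc += d(j, k + 1) * beta[j - 1]
        beta.append(-acc / (4 * k ** 2))
        rho.append((-1) ** (k + 1) * beta[-1] / factorial(2 * k - 1))
    return rho

def hyp2F1(a, b, c, z, terms=80):
    """Partial sum of the Gauss series F(a,b;c;z), |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (1 + n)) * z
    return total

x = 0.8
u = sum(float(r) * x ** (2 * k) for k, r in enumerate(rho_list(10)))
v_series = math.sqrt(2) * math.sin(x / 2) / x * u            # sqrt(h(x)) u(x); cf. (3.9)
v_hyp1 = hyp2F1(0.25, 0.25, 1.0, math.sin(x / 2) ** 2) / math.sqrt(2)
v_hyp2 = hyp2F1(0.5, 0.5, 1.0, math.sin(x / 4) ** 2) / math.sqrt(2)
```

The agreement of `v_hyp1` and `v_hyp2` also checks the quadratic transformation $F(a,b;a+b+\frac 12;4z(1-z))=F(2a,2b;a+b+\frac 12;z)$ underlying the second equality in (3.12).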

### 3.2 Analytic continuation

Well-known formulas for hypergeometric functions include