A new scheme for the running coupling constant in gauge theories using Wilson loops


Erek Bilgici (erek.bilgici@uni-graz.at), Antonino Flachi (flachi@yukawa.kyoto-u.ac.jp), Etsuko Itou (itou@yukawa.kyoto-u.ac.jp), Masafumi Kurachi (kurachi@lanl.gov), C.-J. David Lin (dlin@mail.nctu.edu.tw), Hideo Matsufuru (hideo.matsufuru@kek.jp), Hiroshi Ohki (ohki@yukawa.kyoto-u.ac.jp), Tetsuya Onogi (onogi@yukawa.kyoto-u.ac.jp) and Takeshi Yamazaki (yamazaki@ccs.tsukuba.ac.jp)
Preprint numbers: YITP-09-11, KU-PH-003, LA-UR 09-01034, KEK-CP-221, KUNS-2193, UTCCS-P-52

Abstract: We propose a new renormalization scheme for the running coupling constant in general gauge theories, defined using Wilson loops. The renormalized coupling constant is obtained from the Creutz ratio in lattice simulations and the corresponding perturbative coefficient at leading order. The latter can be calculated analytically by adopting zeta-function resummation techniques. We perform a benchmark test of our scheme in quenched QCD with the plaquette gauge action. The running of the coupling constant is determined by applying the step-scaling procedure. Using several methods to improve the statistical accuracy, we show that the running coupling constant can be determined over a wide range of energy scales with a relatively small number of gauge configurations.

1 Introduction

One of the key subjects upon which recent attention has been focused is the flavor dependence of Yang-Mills theories. In particular, given a number of flavors $N_f$, the question is whether the theory has an (approximate) infrared fixed point. This question is triggered by efforts to construct an alternative mechanism of electroweak symmetry breaking, assuming the existence of a new, strongly interacting sector beyond the electroweak scale [1]. The earliest model of this sort, the so-called technicolor [2, 3], gives rise to dynamical electroweak symmetry breaking by introducing a QCD-like sector scaled up to the TeV scale. While theoretically appealing, the simplest form of the technicolor model and its variants with QCD-like dynamics are ruled out or disfavored by electroweak precision measurements. However, the possibility of such a mechanism with a non-QCD-like theory [4, 5, 6, 7, 8, 9] is still open, and may provide observable signatures at the LHC. It is thus an important but challenging task to investigate the low-energy landscape of spontaneously broken, strongly interacting gauge theories [10].

Among the theoretical tools at hand, the numerical approach to lattice gauge theories has made it possible to gain quantitative information about the strong dynamics of gauge theories. The current understanding can be summarized as follows. A vector-like gauge theory, e.g. QCD, is known to exhibit confinement and dynamical chiral symmetry breaking for a small number of massless fermions, $N_f$, in the fundamental representation of the gauge group. When $N_f$ is just below the value at which asymptotic freedom sets in, the theory is conformal (unbroken chiral symmetry, no confinement) in the infrared. Such a theory is believed to remain conformal down to some critical value of $N_f$, where the coupling becomes strong enough and the transition to the confined, chirally broken phase occurs. The range between these two values of $N_f$ is called the conformal window.

It is thus essential to investigate strongly interacting gauge theories in a wide range of parameters, such as the number of colors, the number of flavors, and the fermion representations [11]. Several modern lattice studies in this research direction have recently been performed [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. In particular, the authors of Refs. [13, 23] calculated the running coupling constant using the Schrödinger functional scheme, and found evidence for an infrared fixed point. However, it is important to study the running coupling constant in different renormalization schemes, in order to conclude that the fixed point is not an artifact of a particular renormalization prescription but a physical one. For this purpose, we propose a new renormalization scheme which uses Wilson loops as the key ingredient. Such a scheme is applicable to general gauge theories as long as Wilson loops can be defined, and provides an efficient computational method for lattice gauge theories. Specifically, the renormalized coupling is defined from a ratio of Wilson loops, namely the Creutz ratio, together with its perturbative counterpart. The former can be evaluated non-perturbatively by Monte Carlo simulation, while the latter can be calculated analytically once the underlying theory is specified. By properly defining the non-perturbatively renormalized coupling constant, its scale dependence is extracted using the step-scaling procedure, i.e., from the volume dependence of the coupling [25].

Applying our scheme will provide not only an independent check on the extent of the conformal window, but also several computational advantages. The Creutz ratio can be obtained without $O(a)$ discretization errors, provided such errors are absent in the lattice action, so our scheme is in principle free from this source of systematic effects. Furthermore, the scheme only involves simple gluonic observables, and therefore does not introduce any particular kinematical setup which could worsen the discretization errors or break chiral symmetry. It can therefore be applied to simulations with dynamical fermions of any type, without restrictions on the number of flavors. For these reasons, this scheme may be an attractive alternative to the Schrödinger functional scheme or the twisted Polyakov loop scheme [26, 27, 28].

Before performing calculations for gauge theories with dynamical fermions, as a benchmark test, we apply this new scheme to the computation of the running coupling constant in quenched lattice QCD. The numerical calculation is performed using the plaquette gauge action with periodic boundary conditions. These boundary conditions are chosen for simplicity. Nevertheless, they introduce effects from degenerate vacua, known as "torons" [29]. Our scheme can, however, be applied in principle to any choice of boundary conditions, such as twisted boundary conditions, which avoid unwanted zero-mode contributions by inducing non-trivial background configurations. Adopting several methods to improve the statistical accuracy, we can determine the running of the coupling constant over a wide range of energy scales with a relatively small number of gauge configurations.

Another essential ingredient of our scheme is the perturbative calculation of the renormalization constant. This is performed analytically using zeta function resummation techniques, which prove to be quite convenient. First of all, zeta function techniques offer a natural method to study the analyticity (and regularity) properties of the perturbative counterpart of the Creutz ratio. In addition, some algebraic rearrangements of zeta functions, originally due to Chowla and Selberg [30], allow us to recast the expressions in terms of analytic functions accompanied by some exponentially converging series, whose evaluation is almost trivial and requires little computer power. The zeta function methods we apply can be easily extended to any boundary conditions and to the case of the Polyakov lines [31].

This paper is organized as follows. In the next section we give the definition of the new scheme. The perturbative calculation is illustrated in Sec. 3. Section 4 is devoted to the details of our numerical simulations, after a brief introduction to the step-scaling procedure. Section 5 contains a discussion of the numerical results and a comparison with other results in the literature. Finally, Sec. 6 summarizes our conclusions. The paper contains two appendices where technical details and simulation parameters are reported. Preliminary results of this work have been presented in Ref. [32].

2 Wilson Loop Scheme

In this section, we define a new renormalization scheme, the 'Wilson loop scheme'. Let us consider an amplitude $A$ whose tree-level approximation is

$A^{\rm tree} = k\, g_0^2 .$   (1)

where $g_0$ is the bare coupling constant, and $k$ is a coefficient of proportionality that does not depend on $g_0$ and can be explicitly calculated for a given underlying theory. With a non-perturbatively calculated amplitude $A(\mu)$ at the scale $\mu$, the renormalization constant $Z$ relates the renormalized coupling constant, $g^2(\mu)$, to the bare one, leading to the relation

$g^2(\mu) = Z(\mu; g_0)\, g_0^2 = \frac{A(\mu)}{k} .$   (2)

Although in particle physics an $S$-matrix element, i.e. a scattering amplitude, is usually adopted as $A$, to define $g^2(\mu)$ one can equivalently use any physical quantity that can be perturbatively expanded and is proportional to $g_0^2$ at the tree level.

We define the Wilson loop scheme by taking the ‘amplitude’ to be

(3)

where $W(R,T,L,a)$ is the Wilson loop with temporal and spatial sizes $T$ and $R$, on a lattice of physical size $L$. In this work, we take $T$ to be the same as $R$, and drop it in the argument of the Wilson loop. The scale $1/L$ will be identified as the renormalization scale later. On a finite lattice, $W$, and thus $A$, also depend on the lattice spacing $a$, which is determined by the bare coupling $g_0$. The dependence of $A$ on $a$ is removed by taking the continuum limit, $a/L \to 0$. A pictorial definition of the Wilson loop is shown in Fig. 1.
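As a rough guide to the construction (a schematic sketch only; the precise definition and normalization are those of Eq. (3)), the amplitude can be thought of as the second derivative of the logarithm of the Wilson-loop expectation value with respect to the two loop sizes, evaluated at $T=R$,
\[
A\Big(\tfrac{1}{L},\, r,\, \tfrac{a}{L}\Big) \;\sim\; -\,R\,T\,
\frac{\partial^{2}}{\partial R\,\partial T}\,
\ln \langle W(R,T,L,a)\rangle \Big|_{T=R=rL},
\]
which is proportional to $g_0^{2}$ at tree level and whose lattice discretization leads to the Creutz ratio of Eq. (5).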

Figure 1: Wilson loop defined on the latticized space-time box. , and , represent the size of the box and the Wilson loop in the temporal and spatial directions, respectively; is the lattice spacing.
Figure 2: Schematic illustration of the perturbative expansion of the Wilson loop.

Using lattice perturbation theory (see Fig. 2), $A$ can be shown to be proportional to $g_0^2$ at the lowest order. Thus, once the value of $k$ is calculated, relation (2) leads, after taking the continuum limit, to a prescription for obtaining the renormalized coupling:

(4)

In the above expression we have made explicit that, in the continuum limit, $k$ is a regular function of $r = R/L$ only. This will be proved in the next section. The remaining factor on the right-hand side of Eq. (4) can be evaluated on the lattice as the Creutz ratio,

$\chi(\hat R, \hat T) \;=\; -\ln\!\left[\frac{\langle W(\hat R, \hat T)\rangle\,\langle W(\hat R - 1, \hat T - 1)\rangle}{\langle W(\hat R - 1, \hat T)\rangle\,\langle W(\hat R, \hat T - 1)\rangle}\right] ,$   (5)

where $\hat R = R/a$ and $\hat T = T/a$. The value of $\chi$ is evaluated by a Monte Carlo simulation.
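For concreteness, a minimal sketch (in Python) of how the Creutz ratio can be formed from measured Wilson-loop expectation values is given below. The array layout and the function name are illustrative and not part of the paper; in practice the expectation values and their errors would be estimated with binning or jackknife, as discussed in Sec. 4.2.

\begin{verbatim}
import numpy as np

def creutz_ratio(W, R, T):
    """Creutz ratio chi(R, T) in lattice units, cf. Eq. (5).

    W is assumed to be a 2D array with W[R, T] = <W(R, T)>, the expectation
    value of an R x T Wilson loop (sizes in units of the lattice spacing).
    """
    return -np.log(W[R, T] * W[R - 1, T - 1] / (W[R - 1, T] * W[R, T - 1]))
\end{verbatim}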

The renormalized coupling constant in the Wilson loop scheme can be written as

(6)

The quantity defined above depends on three different scales, $L$, $a$, and $R$; by taking ratios with respect to $L$, we use $1/L$, $a/L$, and $r = R/L$ as the independent parameters. Fixing $r$ to a specific value means fixing the renormalization scheme. The ratio $a/L$ specifies the discretization of the box, and its effect can be removed by taking the continuum limit, $a/L \to 0$. After fixing the two dimensionless parameters $r$ and $a/L$, the coupling becomes a function of the single scale $1/L$. In our scheme, following the step-scaling procedure, $1/L$ is identified as the scale at which the renormalized coupling is defined.

Ref. [33] offers an alternative definition of the QCD running coupling related to Wilson loops and discusses its finite-size effects. Differently from ours, their scheme defines the coupling constant in a limit in which the renormalized coupling is related to the force between static quarks.

There are several advantages in using the Wilson loop scheme. An evident one is that our scheme does not contain $O(a)$ systematic effects as long as they are absent in the lattice action. This is because the Creutz ratio is free from $O(a)$ discretization errors, owing to the automatic improvement of the heavy quark propagator after the redefinition of the mass and the wave function [34]. This is in contrast to the case of the Schrödinger functional scheme, in which the boundary counterterms give rise to additional systematic errors. Such a particular kinematical setup also breaks chiral symmetry. Furthermore, this scheme only involves simple gluonic observables and can easily be applied to the case with any type of dynamical fermions, without restriction on the number of flavors.

3 Computation of the coefficient $k$

One of the indispensable ingredients of the scheme presented in the previous section is the calculation of the coefficient $k$ in Eq. (1). It can generically be split into two terms:

(7)

where represents the zero-mode contribution, while can be expressed as

(8)

where the summation is taken over integer values of $n_\mu$ ($\mu = 0,\dots,3$) except for the zero mode (indicated by the prime in the sum). The zero-mode contribution depends on the boundary conditions. In the following, we will concentrate on the case of periodic boundary conditions. In this case, the zero-mode contribution was initially calculated in Ref. [29]. For the SU($N$) gauge group, it is given by:

(9)

The scope of this section is to present a method to compute this quantity. The method we develop will be illustrated for the case of periodic boundary conditions, but it can be applied, with minor changes, to the case of twisted or mixed boundary conditions. As we have seen, the contribution from the zero mode can be separated from the rest and is obviously regular. Thus, to compute (and to prove the regularity of) the coefficient $k$, we only need to consider the remaining contribution. Our starting point is the quantity:

(10)

The contribution we need can be obtained from this quantity by differentiation, as will be made explicit at the end of this section.

Although it is not possible to find a closed form in terms of elementary functions, the use of zeta-function resummation techniques and basic analytic continuation allows us to recast the expression into a practically computable form, and to prove its regularity through an explicit calculation. The computation is carried out in a few steps. The first is the evaluation of one of the sums by using the Poisson summation formula. Then, the remaining summations are written in terms of Epstein zeta functions. After these steps, the expression becomes compact. However, without further rearrangements, it is of little practical use. To this aim, it is convenient to rewrite the Epstein zeta functions using the Chowla-Selberg formula, which renders the zeta functions into the form of elementary analytic functions plus some rapidly converging series. The subsequent step is to perform analytically the integrals introduced when using the Poisson summation formula, and finally to carry out the remaining summations numerically. Although the above procedure may seem involved, the actual implementation is rather simple. The method also has the bonus of providing a proof of the regularity of the Creutz ratio, as we will show explicitly in the following.
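The practical gain from this rearrangement can be seen in a toy example. The Gaussian sum below is not the actual summand of Eq. (10); it merely illustrates how a Poisson-resummed series converges with a handful of terms where the direct sum needs thousands.

\begin{verbatim}
import numpy as np

def theta_direct(t, nmax):
    """Direct evaluation of sum_{n=-nmax..nmax} exp(-t n^2)."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-t * n**2))

def theta_resummed(t, mmax):
    """Poisson-resummed form: sqrt(pi/t) * sum_m exp(-pi^2 m^2 / t)."""
    m = np.arange(-mmax, mmax + 1)
    return np.sqrt(np.pi / t) * np.sum(np.exp(-np.pi**2 * m**2 / t))

t = 1.0e-3                      # small t: the direct sum converges very slowly
print(theta_direct(t, 50))      # badly truncated
print(theta_direct(t, 5000))    # needs thousands of terms
print(theta_resummed(t, 2))     # a couple of terms per side already suffice
\end{verbatim}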

The first step of our procedure is to employ the Poisson summation formula:

$\displaystyle \sum_{n=-\infty}^{\infty} f(n) \;=\; \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty} dx\, f(x)\, e^{-2\pi i m x} .$   (11)

A straightforward application of the above relation to the function gives:

(12)

where we have used the standard definition of the generalized Epstein zeta function:

(13)

The parameter is a regulator, introduced to perform the necessary analytical continuations. The limit will be taken at the end of the calculation. It is interesting that the function can be entirely written in terms of the integral function

(14)

Although compact, the result Eq. (12) requires further manipulation. A useful way to handle these functions is to make use of the Chowla-Selberg formula. Refs. [35, 36] develop the appropriate formalism that allows us to express as the sum of analytic functions plus a rapidly converging series:

(15)

The other tool is the following integral formula (see Ref. [37]):

The procedure is now straightforward and consists in using the relations (15) and (16) in Eq. (12). Some algebra leads to

(17)

It can be easily checked that, in the above expression, the limit can be taken safely giving

(18)

It is evident that the quantity is regular. We can now substitute Eq. (18) into Eq. (10), and rearrange it as

(19)

where we have separated the contribution from the remaining part which is exponentially suppressed. This separation leads to the definition

(20)
(21)

where

(22)
(23)
(24)
(25)
(26)
(27)

Due to the exponential suppression, the terms (25), (26), and (27) are clearly regular. Therefore, to prove the regularity of the full coefficient, we only have to show that the remaining part is also regular. Showing that the terms (22), (23), and (24) also lead to a regular expression (and computing them) requires further manipulations.

The first term, (22), can be computed analytically:

(28)

where $\psi$ is the Euler psi function [37]. The remaining two terms, (23) and (24), can be rearranged by first performing one of the summations and then using the Chowla-Selberg formula. We leave the details to Appendix A and present the results here:

(29)
(30)

Written as above, it is straightforward to see that, after taking the required derivatives, the resulting expressions, and thus the coefficient itself, are well behaved, owing to the exponential fall-off of the Kelvin functions.

The last step of our procedure consists in evaluating the above expressions. The numerical computation of the sums in Eqs. (25), (26), (27), (29), and (30) does not present any problem, owing to the exponential suppression. Differentiating with respect to the loop sizes, substituting $T=R$, and combining the results according to Eqs. (8), (9), and (10) lead to the result for $k(r)$. Figure 3 shows the dependence of $k$ on $r$. Table 1 provides some indicative values of $k(r)$.

Figure 3: Dependence of $k$ on $r$. The solid line represents $k(r)$ in the continuum limit, according to the analytical expressions given in the text. The squares are results obtained by numerically evaluating the lattice sum in Eq. (8). As can be seen from the figure, the continuum limit of $k$ exists and is finite. Also, the convergence of the lattice computation to the continuum value is faster for larger values of $r$.
Table 1: Values of $k(r)$ from the continuum calculation.

4 Numerical Simulation

In this section, we will describe the details of our numerical simulations. For later use, we define the coupling squared, $g^2(\beta, L/a; r)$,

(31)

where $\beta = 6/g_0^2$. Note that we express the coupling as a function of $\beta$, $L/a$, and $r$ instead of $1/L$ and $a/L$. The above redefinition is chosen for convenience, since $\beta$, $L/a$, and $r$ are the actual input parameters for the simulations.

4.1 Step Scaling

We begin by briefly reviewing the step-scaling procedure (see Refs. [25, 38, 39] for details), which we use to evaluate the evolution of the running coupling over a wide range of energy scales on the lattice.

The first step is to fix a value of the renormalized coupling, $u$, and find a set of parameters, $(\beta, L/a)$, which reproduce this value of $g^2$ for several different choices of $L/a$:

$g^2(\beta, L/a; r) = u .$   (32)

We achieve this by tuning the value of $\beta$ in such a way that the physical volume is kept fixed for different values of $L/a$. We denote this fixed physical volume at the starting point of the step-scaling procedure by $L_0$.

The next step is to vary the physical volume from $L_0$ to $sL_0$, which gives the evolution of the running coupling from the energy scale $1/L_0$ to $1/(sL_0)$, where $s$ is the scaling factor. This step can be performed by changing the lattice size from $L/a$ to $s\,L/a$, leaving each value of $\beta$ unchanged. Values of $g^2$ calculated with these new parameter sets should be considered as the coupling at the energy scale $1/(sL_0)$, up to discretization errors, and the extrapolation to the continuum limit can be taken

(33)

where the renormalization factor is the one introduced below Eq. (1). The resultant value of the coupling should be considered as the renormalized coupling at the energy scale $1/(sL_0)$. This is the way to obtain a single discrete step of the evolution of the running coupling with scaling factor $s$.

Next, we find a new parameter set of $\beta$ which reproduces the value of $g^2$ obtained in the previous step. Here, we choose the parameter set in such a way that the new lattice size is equal to the original one, $L/a$. From here, we can repeat exactly the same procedure described so far: we calculate $g^2$ with the new parameter set on the scaled lattices. By iterating this procedure $n$ times, we obtain the evolution of the running coupling from the energy scale $1/L_0$ to $1/(s^n L_0)$.
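The logic of the whole procedure can be summarized by the sketch below. The three callables are placeholders for the $\beta$ tuning, the lattice measurement of the coupling, and the continuum extrapolation described in the following subsections; they are not actual library routines.

\begin{verbatim}
def step_scaling(u_start, lattice_sizes, s, n_steps,
                 tune_beta, measure_g2, extrapolate_to_continuum):
    """Skeleton of the step-scaling recursion (placeholder callables)."""
    u = u_start
    couplings = [u]
    for _ in range(n_steps):
        # 1) tune beta so that g^2(beta, L/a) = u on each starting lattice size
        betas = [tune_beta(u, L) for L in lattice_sizes]
        # 2) keep each beta fixed and enlarge the lattice: L/a -> s * L/a
        #    (lattice sizes are chosen such that s * L/a is an integer)
        g2_scaled = [measure_g2(b, int(round(s * L)))
                     for b, L in zip(betas, lattice_sizes)]
        # 3) extrapolate a/L -> 0; the result is the coupling at the scale 1/(s L)
        a_over_L = [1.0 / (s * L) for L in lattice_sizes]
        u = extrapolate_to_continuum(a_over_L, g2_scaled)
        couplings.append(u)
    return couplings    # couplings at scales 1/L0, 1/(s L0), ..., 1/(s^n L0)
\end{verbatim}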

4.2 Simulation Parameters

We use the standard Wilson plaquette gauge action defined on a four-dimensional Euclidean lattice with finite volume $L^4$. In this work, we adopt untwisted periodic boundary conditions. However, it is straightforward to use other boundary conditions (e.g., twisted boundary conditions) when necessary. Gauge configurations are generated by the pseudo-heatbath algorithm with over-relaxation, mixed in the ratio of 1:5. In the remainder of this paper, we use the word "sweep" to refer to the combination of one pseudo-heatbath update sweep followed by five over-relaxation sweeps. In order to eliminate the influence of autocorrelation, we either take a large enough number of sweeps between measurements, or adopt the method of binning with a large enough bin size, so that the statistical error is estimated reliably. We perform the numerical simulations based on the step-scaling procedure explained in the previous section for a fixed value of $r$. (The reason for this choice will be given in the next subsection.) We use the scaling parameter $s$ with five different starting lattice sizes, so that the lattice size after the scaling at each step is $s$ times the starting one. We take a fixed value of the renormalized coupling as the starting value of the first step of the step-scaling procedure. The tuning of the values of $\beta$ (namely, finding the values of $\beta$ which satisfy Eq. (32) for each of the five starting lattice sizes in the first step) is carried out by interpolating the data obtained from simulations for different values of $\beta$ and $L/a$, shown in Fig. 4.

Figure 4: $g^2$ obtained from simulations for different values of $\beta$ and $L/a$.

Each data point in the figure is calculated from 200 gauge configurations with 1000-sweep separation between configurations. Once we obtain the values of $\beta$ which reproduce the target coupling for each of the starting lattice sizes, we carry out the step-scaling simulations, namely simulations at the scaled lattice sizes with the tuned values of $\beta$. These results are used to take the continuum limit, and the resultant value of $g^2$ becomes the starting value for the next step. We iterate this procedure seven times. The combinations of $\beta$ and $L/a$ used for the simulations are shown in Table 2. (The values of $\beta$ in Table 2 have numerical ambiguities coming from the statistical errors of the data used for the interpolation. These ambiguities propagate to the error of the physical scale which we are trying to fix at each step. However, it turned out that the effect of this fluctuation on the result for $g^2$ at each lattice size was negligibly small compared to the statistical error of the simulation itself. Thus, we ignore those errors and the resultant fluctuation of the physical scale in the rest of our analysis.)

Table 2: Parameter sets, $\beta$ and $L/a$, used for the simulations in Steps 1-7.
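A sketch of the $\beta$ tuning is given below: the measured values of $g^2$ at fixed $L/a$ are interpolated in $\beta$, and the interpolation is inverted to find the $\beta$ that reproduces the target coupling. The quadratic interpolating form is only an illustrative assumption; the text merely states that the data of Fig. 4 are interpolated.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tune_beta(beta_vals, g2_vals, u_target):
    """Find beta such that the interpolated g^2(beta) equals u_target.

    beta_vals, g2_vals : simulated points at fixed L/a (cf. Fig. 4).
    The target value is assumed to lie inside the simulated range.
    """
    coeffs = np.polyfit(beta_vals, g2_vals, deg=2)   # illustrative quadratic form
    f = lambda b: np.polyval(coeffs, b) - u_target
    # g^2 decreases monotonically with beta, so a single root is expected
    return brentq(f, min(beta_vals), max(beta_vals))
\end{verbatim}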

4.3 Simulation Details

There are several practical steps required to calculate the renormalized coupling from numerical simulations. Here we explain various technical details of our computations.

We use the APE smearing [40] of link variables, defined by the following equation:

(34)

where $n$ and $\alpha$ denote the smearing level and the smearing parameter, respectively. The smearing is applied to the links in all four directions. The result does not depend significantly on the value of $\alpha$, and we use a single fixed value in the present study. Here, we need to find the optimal value of $r$ and of the smearing level $n$ by considering the following requirements. For better control of the discretization error, it is preferable to choose a larger value of $r$. Meanwhile, for the purpose of reducing the statistical error, it is better to take a smaller value of $r$ and a higher smearing level $n$. Fig. 5 shows the smearing-level dependence of $\chi$ for one representative combination of $\beta$ and $L/a$ as an example. From this figure, we find that the statistical error is notably reduced already at smearing level one. In order to avoid over-smearing, the extent of the smeared links should be smaller than the size of the Wilson loop. This condition leads to a lower bound on $R/a$ for a given smearing level, which we summarize in Table 3. We observe (see Fig. 5 for an example) that the data for the smallest loops at higher smearing levels are not reliable because of over-smearing. By considering all the above requirements, we arrive at the optimal choice of $r$ and smearing level used in the rest of this work.

Figure 5: The values of $\chi$ with statistical errors for several values of $R/a$ in a representative case of $\beta$ and $L/a$. Data connected by solid, dotted and dashed lines denote the data at the three smearing levels considered, respectively.
Table 3: The lower bound on $R/a$ to avoid over-smearing.
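To make the smearing step concrete, a single-link sketch is given below. The staple sum is assumed to be precomputed, and both the $(1-\alpha,\ \alpha/6)$ weighting and the projection by polar decomposition are common conventions that may differ in detail from Eq. (34).

\begin{verbatim}
import numpy as np

def project_su3(m):
    """Project a 3x3 complex matrix back onto SU(3) via polar decomposition
    (one common choice; iterative maximal-trace projection is another)."""
    u, _, vh = np.linalg.svd(m)
    w = u @ vh                                   # closest unitary matrix
    return w * np.linalg.det(w) ** (-1.0 / 3.0)  # remove the residual U(1) phase

def ape_smear_link(link, staple_sum, alpha):
    """One APE-smearing step for a single link variable.

    link       : original SU(3) link U_mu(x), complex array of shape (3, 3)
    staple_sum : precomputed sum of the six staples attached to that link
    alpha      : smearing parameter (weighting convention may differ from Eq. (34))
    """
    smeared = (1.0 - alpha) * link + (alpha / 6.0) * staple_sum
    return project_su3(smeared)
\end{verbatim}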

Once we fix the value of $r$ (to the value chosen in the current study), we need to estimate the value of $\chi$ at non-integer $R/a$. We interpolate the value of $\chi$ using a quadratic function:

$\chi = c_0 + c_1\,(R/a) + c_2\,(R/a)^2 ,$   (35)

with the interpolation ranges for each lattice size listed in Table 4. An example of such an interpolation is shown in Fig. 5.

Table 4: Ranges of $R/a$ used to interpolate the value of $\chi$. The rightmost column gives the value of $R/a$ that corresponds to the chosen $r$.
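As an illustration of Eq. (35), the quadratic interpolation can be carried out as follows, assuming the Creutz ratio has been measured at three or more loop sizes bracketing the target value of $R/a$.

\begin{verbatim}
import numpy as np

def interpolate_chi(loop_sizes, chi_vals, target):
    """Quadratic interpolation of the Creutz ratio in R/a, cf. Eq. (35)."""
    c = np.polyfit(loop_sizes, chi_vals, deg=2)   # c[0] x^2 + c[1] x + c[2]
    return np.polyval(c, target)
\end{verbatim}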

The last step of the calculation is to take the continuum limit of the coupling from the data obtained for the different combinations of $\beta$ and $L/a$ listed in each column of Table 2. We show two example plots in Fig. 6, which are the continuum extrapolations for Steps 1 and 7. Since our Wilson loop scheme does not contain $O(a)$ errors, we extrapolate to the continuum limit using a fit function linear in $(a/L)^2$. Four of the data points are used for this extrapolation (shown as red lines in Fig. 6), and the resultant value is adopted as the central value in the continuum limit. We also take the continuum limit by using a fit function quadratic in $(a/L)^2$ with all five data points (indicated by pink curves in Fig. 6), and the difference between the central values of the two fits is adopted as the systematic error coming from possible higher-order discretization effects. In Fig. 6, we also plot the extrapolation by a linear function with all five data points for comparison. In this figure, the resultant continuum-limit values obtained from the different fit functions are plotted at $(a/L)^2 = 0$. (For better visibility, we slightly displace the data obtained from the 5-point quadratic and 5-point linear extrapolations.) All the error bars shown in Fig. 6 are statistical only.

Figure 6: The continuum limit of . The left and right panels show Steps 1 and 7, respectively.
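A sketch of the continuum extrapolation described above is the following; the choice of $(a/L)^2$ as the fit variable and the use of the four finest lattices for the linear fit are assumptions made for illustration.

\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def continuum_limit(a_over_L, values, errors):
    """Continuum extrapolation (arrays ordered from finest to coarsest lattice)."""
    x = np.asarray(a_over_L) ** 2          # assumed extrapolation variable
    y = np.asarray(values)
    w = 1.0 / np.asarray(errors)           # polyfit weights ~ 1/sigma
    lin = P.polyfit(x[:4], y[:4], deg=1, w=w[:4])   # linear fit, four finest points
    quad = P.polyfit(x, y, deg=2, w=w)              # quadratic fit, all points
    central = lin[0]                       # intercept at (a/L)^2 = 0
    syst = abs(quad[0] - lin[0])           # fit-form difference -> systematic error
    return central, syst
\end{verbatim}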

4.4 Numerical Results

We now show the results of our simulations, which were performed using the parameters in Table 2 and the procedures explained in the previous section. Details of the parameter choices and the numerical results are summarized in Appendix B.

4.4.1 Running coupling

Figure 7: Results of the simulations and their continuum limits in Steps 1-7 (from bottom to top).

In Fig. 7, we plot the resulting values and their statistical errors for the various lattice sizes in Steps 1-7. The continuum limit was taken in the way explained in the previous section, and both the statistical and the systematic errors are estimated. In Fig. 7, the values in the continuum limit are shown with statistical and systematic errors added in quadrature.

The running coupling constant is extracted by dividing the continuum-extrapolated values by the perturbative coefficient $k(r)$. The evolution of the running coupling constant is obtained by connecting the resultant values for Steps 1-7 and assigning appropriate scales to these steps. We plot the results in Fig. 8.

Figure 8: Evolution of the running coupling constant in the Wilson loop scheme, obtained from the step-scaling procedure. The horizontal axis shows the energy scale in units of $1/L_0$. Three curves, from bottom to top, show the scheme-independent perturbative running coupling in the one-loop and two-loop approximations, as well as the three-loop approximation in the $\overline{\rm MS}$ scheme.

We define the starting energy scale of Step 1 as $1/L_0$, and the evolution of the running coupling constant is plotted as a function of energy in units of $1/L_0$. In this figure, errors are accumulated along the evolution of the running coupling in the same way as explained in Ref. [41]. (For the appropriate procedure of accumulating errors, we need values of the derivative of the step-scaling function, $\sigma'(u)$; here, we used the result of the global polynomial fit of $\sigma(u)$ to obtain approximate values of $\sigma'(u)$. See the next sub-subsection for details.) For comparison, we also plot the scheme-independent perturbative running coupling in the one-loop and two-loop approximations, as well as the three-loop approximation in the $\overline{\rm MS}$ scheme (from bottom to top). In the high-energy region, where the perturbative computation is reliable, the Wilson loop scheme result is consistent with perturbation theory. The figure also shows that our simulation reaches deep into the low-energy region, in which the perturbative calculation is no longer reliable.

4.4.2 Beta function

From the results of the simulation, we can also extract the non-perturbative $\beta$ function by using the method explained in Ref. [42]. To this end, it is useful to define the step-scaling function $\sigma(u)$ in the continuum limit as [41]

$\sigma(u) \;\equiv\; g^2\big(1/(sL)\big)\Big|_{g^2(1/L) = u} .$   (36)

We list the simulation results for the step scaling function in Table 5.

$u$      $\sigma(u)$
1.728    1.97(6)
1.97     2.23(7)
2.23     2.68(5)
2.68     3.29(2)
3.29     4.33(14)
4.33     6.89(22)
6.89     11.5(3)
Table 5: Simulation results for the step scaling function $\sigma(u)$. The values in parentheses represent total errors in units of the last digits.

In the weak coupling region, $\sigma(u)$ can be perturbatively expanded as [42]

(37)

with the coefficients

(38)
(39)

where $b_0$ and $b_1$ are the one-loop and the two-loop coefficients of the $\beta$ function, respectively. In Fig. 9, we plot the data listed in Table 5 together with the two-loop perturbative curve for $\sigma(u)$ (solid), as well as two curves which are the results of the following two kinds of polynomial fits.

Figure 9: Polynomial fits (dashed and dotted curves) to the data for the step scaling function $\sigma(u)$. The fit ansatz and the resulting fit parameters for the two curves are explained in the text. The two-loop perturbative curve (solid) is also plotted for comparison.

For the upper (dashed) curve, we fitted the polynomial ansatz to the data in the weak-coupling range and obtained the corresponding fit parameter. For the middle (dotted) curve, we fitted to all the data and obtained both fit parameters. As expected, in the weak coupling region the data are well described by the two-loop perturbative result, and the polynomial functions also fit the data well. Meanwhile, the figure clearly shows that neither the two-loop perturbative curve nor the simple polynomial fits can explain the behavior of the data at larger values of $u$. This is nothing but an indication of the emergence of non-perturbative effects.
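Using the numbers of Table 5 (the first column read as $u$, the second as $\sigma(u)$), the global polynomial fit can be reproduced along the following lines. The ansatz $\sigma(u) = u + c_1 u^2 + c_2 u^3$, constrained to reproduce $\sigma(u) \to u$ at weak coupling, is an illustrative reconstruction and not necessarily the exact form used for the curves in Fig. 9.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Step-scaling data from Table 5: u, sigma(u), total error
u     = np.array([1.728, 1.97, 2.23, 2.68, 3.29, 4.33, 6.89])
sigma = np.array([1.97,  2.23, 2.68, 3.29, 4.33, 6.89, 11.5])
err   = np.array([0.06,  0.07, 0.05, 0.02, 0.14, 0.22, 0.3])

def ansatz(x, c1, c2):
    """Polynomial ansatz with sigma(u) -> u at weak coupling (illustrative form)."""
    return x + c1 * x**2 + c2 * x**3

popt, pcov = curve_fit(ansatz, u, sigma, sigma=err, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print("c1 = %.3f +/- %.3f,  c2 = %.4f +/- %.4f"
      % (popt[0], perr[0], popt[1], perr[1]))
\end{verbatim}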

The formula to obtain the non-perturbative $\beta$ function from the step-scaling function was given in Ref. [42] as

(40)

By applying this formula recursively, we obtained the discrete $\beta$ function shown in Fig. 10.

Figure 10: Non-perturbative $\beta$ function in the Wilson loop scheme. One-loop (green dashed curve) and two-loop (blue dotted curve) perturbative $\beta$ functions are also plotted for comparison.

Here, while the values of $\sigma(u)$ are directly given by the results of the simulation (unlike in Ref. [42], we do not have to carry out an interpolation in $u$, since we have data exactly at the values of $u$ we need), we need to use the result of the fitting to obtain the values of the derivative $\sigma'(u)$. To obtain the latter, we adopted the global fit explained above. As we noted, it fails to describe the data at larger values of $u$; however, as can be seen from the figure, we can expect that it gives approximate values. Also, we used the two-loop perturbative approximation for the value of the $\beta$ function at the smallest coupling as the initial input for the recursive use of Eq. (40). In the figure, we also plot the one-loop and two-loop perturbative $\beta$ functions for comparison. From this result, we confirm again that in the smaller coupling region the data consistently reproduce the perturbative picture, while non-perturbative effects begin to emerge in the larger coupling region.

5 Discussion

We have so far concentrated on how the coupling constant runs under a relative change of the scale, without referring to its absolute value. The absolute scale can be estimated using the Sommer scale, $r_0$, defined by

$r_0^2\, F(r_0) = 1.65 ,$   (41)

where $F(r)$ is the force between static quarks. Based on phenomenological potential models, $r_0$ corresponds to about 0.5 fm. In this work, however, we use $r_0$ only to set the reference scale for comparison with other schemes. The formula relating $r_0/a$ and $\beta$ is given in Ref. [43],