A new scheme for the running coupling constant in gauge theories using Wilson loops\preprintnumber
UTCCS-P-52 \abstWe propose a new renormalization scheme for the running coupling constant in general gauge theories using Wilson loops. The renormalized coupling constant is obtained from the Creutz ratio in lattice simulations and the corresponding perturbative coefficient at the leading order. The latter can be calculated by adopting zeta-function resummation techniques. We perform a benchmark test of our scheme in quenched QCD with the plaquette gauge action. The running of the coupling constant is determined by applying the step-scaling procedure. Using several methods to improve the statistical accuracy, we show that the running coupling constant can be determined in a wide range of energy scales with a relatively small number of gauge configurations.
One of the key subjects upon which recent attention has been focused is the flavor dependence of Yang-Mills theories. In particular, given a number of flavors $N_f$, the question is whether the theory has an (approximate) infrared fixed point. This question is motivated by efforts to construct an alternative mechanism of electroweak symmetry breaking that assumes the existence of a new, strongly interacting sector beyond the electroweak scale. The earliest model of this sort, the so-called technicolor [2, 3], gives rise to dynamical electroweak symmetry breaking by introducing a QCD-like sector scaled up to the TeV scale. While theoretically appealing, the simplest form of the technicolor model and its variants with QCD-like dynamics are ruled out or disfavored by electroweak precision measurements. However, the possibility of such a mechanism with a non-QCD-like theory [4, 5, 6, 7, 8, 9] is still open, and may provide observable signatures at the LHC. It is thus an important but challenging task to investigate the low-energy landscape of spontaneously broken, strongly interacting gauge theories.
Among the theoretical tools at hand, the numerical approach to lattice gauge theories has made it possible to gain quantitative information about the strong dynamics of gauge theories. The current understanding can be summarized as follows. A vector-like gauge theory, e.g. QCD, is known to exhibit confinement and dynamical chiral symmetry breaking for a small number $N_f$ of massless fermions in the fundamental representation of the gauge group. When $N_f$ is just below the value at which asymptotic freedom is lost, the theory is conformal (unbroken chiral symmetry, no confinement) in the infrared. Such a theory is believed to remain conformal down to some critical value $N_f^{\rm c}$, at which the coupling becomes strong enough and the transition to the confined, chirally broken phase occurs. The range of $N_f$ between $N_f^{\rm c}$ and the loss of asymptotic freedom is called the conformal window.
It is thus essential to investigate strongly interacting gauge theories in a wide range of parameters, such as the number of colors, the number of flavors, and the fermion representations. Several modern lattice studies in this direction have recently been performed [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. In particular, the authors of Refs. [13, 23] calculated the running coupling constant using the Schrödinger functional scheme, and found evidence for an infrared fixed point in SU(3) gauge theory with twelve flavors. However, it is important to study the running coupling constant in different renormalization schemes, in order to conclude that the fixed point is physical rather than an artifact of a particular renormalization prescription. For this purpose, we propose a new renormalization scheme which uses Wilson loops as a key ingredient. Such a scheme is applicable to general gauge theories as long as Wilson loops can be defined, and provides an efficient computational method for lattice gauge theories. Specifically, a renormalized amplitude is defined from the ratio among Wilson loops, namely the Creutz ratio, and its perturbative counterpart. The former can be evaluated non-perturbatively by Monte Carlo simulation, while the latter is calculated analytically once the underlying theory is specified. After properly defining the non-perturbatively renormalized coupling constant, its scale dependence is extracted using the step-scaling procedure, i.e., from the volume dependence of the coupling.
Applying our scheme provides not only an independent check on the extent of the conformal window, but also several computational advantages. The Creutz ratio can be obtained without discretization errors, provided such errors are absent in the lattice action, so that our scheme is in principle free from this source of systematic effects. Furthermore, the scheme only involves simple gluonic observables, and therefore does not introduce any particular kinematical setup that could enhance discretization errors or break chiral symmetry. It can thus be applied to simulations with dynamical fermions of any type, without restrictions on $N_f$. For these reasons, this scheme may be an attractive alternative to the Schrödinger functional scheme or the twisted Polyakov loop scheme [26, 27, 28].
Before performing calculations for gauge theories with dynamical fermions, we apply this new scheme, as a benchmark test, to the computation of the running coupling constant in quenched lattice QCD. The numerical calculation is performed using the plaquette gauge action with periodic boundary conditions. These boundary conditions are chosen for simplicity; they nevertheless give rise to effects of degenerate vacua known as “torons”. Our scheme can, however, be applied in principle to any choice of boundary conditions, such as twisted boundary conditions, which remove unwanted zero-mode contributions by inducing non-trivial background configurations. Adopting several methods to improve the statistical accuracy, we can determine the running of the coupling constant in a wide range of energy scales with a relatively small number of gauge configurations.
Another essential ingredient of our scheme is the perturbative calculation of the renormalization constant. This is performed analytically using zeta-function resummation techniques, which prove to be quite convenient. First of all, zeta-function techniques offer a natural method to study the analyticity (and regularity) properties of the perturbative counterpart of the Creutz ratio. In addition, some algebraic rearrangements of zeta functions, originally due to Chowla and Selberg, allow us to recast the expressions in terms of analytic functions accompanied by exponentially converging series, whose evaluation is almost trivial and requires little computer power. The zeta-function methods we apply can easily be extended to any boundary conditions and to the case of Polyakov lines.
This paper is organized as follows. In the next section we give the definition of the new scheme. The perturbative calculation is illustrated in Sec. 3. Section 4 is devoted to the details of our numerical simulations, after a brief introduction to the step-scaling procedure. Section 5 contains a discussion of the numerical results and a comparison with other results in the literature. Finally, Sec. 6 summarizes our conclusions. The paper contains two appendices, where technical details and simulation parameters are reported. Preliminary results of this work have been presented in Ref. .
2 Wilson Loop Scheme
In this section, we define a new renormalization scheme, the ‘Wilson loop scheme’. Let us consider an amplitude $A$ whose tree-level approximation is
where $g_0$ is the bare coupling constant, and $k$ is a coefficient of proportionality that does not depend on $g_0$ and can be explicitly calculated for a given underlying theory. With a non-perturbatively calculated amplitude at the scale $\mu$, the renormalization constant relates the renormalized coupling constant $\bar{g}^2(\mu)$ to the bare one, leading to the relation
Although in particle physics an $S$-matrix element, i.e. a scattering amplitude, is usually adopted as $A$, one can equivalently use any physical quantity that can be perturbatively expanded and is proportional to $g_0^2$ at tree level.
We define the Wilson loop scheme by taking the ‘amplitude’ $A$ to be
where $W$ denotes the Wilson loop with temporal and spatial sizes $T$ and $R$, on a lattice of physical size $L^4$. In this work, we take $T$ to be the same as $R$, and drop it in the argument of the Wilson loop. This scale will be identified with the renormalization scale later. On a finite lattice, the Wilson loop, and thus the amplitude $A$, also depends on the lattice spacing $a$, which is determined by the bare coupling $g_0$. The dependence on $a$ is removed by taking the continuum limit, $a \to 0$. A pictorial definition of the Wilson loop is shown in Fig. 2.
Using lattice perturbation theory (see Fig. 2), $A$ can be shown to be proportional to $g_0^2$ at the lowest order. Thus, once the value of the coefficient $k$ is calculated, relation (2) leads, after taking the continuum limit, to a prescription to obtain the renormalized coupling:
In the above expression we have made explicit that, in the continuum limit, $k$ is a regular function of $R/L$ only. This will be proved in the next section. The remaining factor on the right-hand side of Eq. (4) can be evaluated on the lattice as the Creutz ratio,
The value of the Creutz ratio is evaluated by a Monte Carlo simulation.
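As a concrete illustration of why the Creutz ratio is a convenient building block, the sketch below uses the standard textbook form of the ratio (built from a 2×2 block of Wilson loops; the paper's exact half-integer argument convention is not reproduced here) together with a toy area-law ansatz $W(R,T)=e^{-\sigma RT - p(R+T) - c}$. The function names and the ansatz parameters are illustrative assumptions: perimeter and constant terms cancel in the ratio, leaving exactly the string tension.

```python
import math

def creutz_ratio(W, R, T):
    """Creutz ratio chi(R,T): log of a 2x2 block of Wilson loops.
    Perimeter and constant (corner) contributions cancel in the ratio."""
    return -math.log(W(R, T) * W(R - 1, T - 1)
                     / (W(R, T - 1) * W(R - 1, T)))

# Toy area-law Wilson loop (illustrative parameters):
sigma, p, c = 0.2, 0.1, 0.3
W = lambda R, T: math.exp(-sigma * R * T - p * (R + T) - c)

chi = creutz_ratio(W, 3, 3)   # equals sigma exactly for this ansatz
```

In the scheme described here, the renormalized coupling would then be obtained by dividing such a non-perturbatively measured ratio by the perturbative coefficient $k$.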
The renormalized coupling constant in the Wilson loop scheme can be written as
The Creutz ratio depends on three different scales: $a$, $R$, and $L$. By taking ratios to $L$, we use $a/L$, $R/L$, and $L$ as the independent parameters. Fixing $R/L$ to a specific value means fixing the renormalization scheme. The ratio $a/L$ specifies the discretization of the box, and can be removed by taking the continuum limit, $a/L \to 0$. After fixing the two dimensionless parameters $R/L$ and $a/L$, the renormalized coupling becomes a function of the single scale $L$. In our scheme, following the step-scaling procedure, $1/L$ is identified as the scale at which the renormalized coupling is defined.
Ref.  offers an alternative definition of the QCD running coupling related to Wilson loops and discusses its finite-size effects. Differently from ours, their scheme defines the coupling constant in a limit in which the renormalized coupling is related to the quark force.
There are several advantages in using the Wilson loop scheme. An evident one is that our scheme does not introduce systematic effects as long as they are absent in the lattice action. This is because the Creutz ratio is free from discretization errors, thanks to the automatic improvement of the heavy quark propagator after a redefinition of the mass and the wave function. This is in contrast to the case of the Schrödinger functional scheme, in which the boundary counterterms give rise to additional systematic errors; such a particular kinematical setup also breaks chiral symmetry. Furthermore, this scheme only involves simple gluonic observables and can easily be applied to the case with any type of dynamical fermions, without restriction on the number of flavors.
3 Computation of $k$
One of the indispensable ingredients of the scheme presented in the previous section is the calculation of the coefficient $k$ in Eq. (1). It can be generically split into two terms:
where the first term represents the zero-mode contribution, while the second can be expressed as
where the summation is taken over integer values of $n_\mu$ ($\mu=0,\dots,3$), except for the case in which all $n_\mu$ vanish (indicated by the prime in the sum). The zero-mode contribution depends on the boundary conditions. In the following, we will concentrate on the case of periodic boundary conditions, for which it was initially calculated in Ref. . For the SU($N$) gauge group, it is given by:
The scope of this section is to present a method to compute this quantity. The method we develop will be illustrated for the case of periodic boundary conditions, but it can be applied, with minor changes, to the case of twisted or mixed boundary conditions. As we have seen, the contribution from the zero mode can be separated from the rest and is obviously regular. Thus, to compute the remaining contribution (and to prove its regularity), we only need to consider the non-zero-mode sum. Our starting point is the quantity:
The quantity we need can be obtained from it via
Although it is not possible to find a closed form for this sum in terms of elementary functions, the use of zeta-function resummation techniques and basic analytic continuation allows us to recast it into a practically computable form, and to prove the regularity of $k$ through an explicit calculation. The computation is carried out in a few steps. The first is the evaluation of one of the sums by using the Poisson summation formula. Then, the remaining summations are written in terms of Epstein zeta functions. After these steps, the expression becomes compact; however, without further rearrangements, it is of little practical use. To this aim, it is convenient to rewrite the Epstein zeta functions using the Chowla-Selberg formula, which renders the zeta functions into the form of elementary analytic functions plus some rapidly converging series. The subsequent step is to analytically perform the integrals introduced when using the Poisson summation formula, and finally to perform the remaining summations numerically. Although the above procedure may seem involved, the actual implementation is rather simple. The method also has the bonus of providing a proof of the regularity of the Creutz ratio, as we will explicitly show in the following.
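The Poisson resummation step can be illustrated numerically in the simplest case, the one-dimensional Gaussian sum, where it reduces to the Jacobi theta-function identity $\sum_n e^{-\pi t n^2} = t^{-1/2}\sum_m e^{-\pi m^2/t}$. This is only a toy check of the identity, not the actual four-dimensional sum used in the text:

```python
import math

def theta_direct(t, nmax=50):
    """Direct sum: sum_n exp(-pi t n^2) over n = -nmax..nmax."""
    return sum(math.exp(-math.pi * t * n * n) for n in range(-nmax, nmax + 1))

def theta_poisson(t, mmax=50):
    """Poisson-resummed form: t**(-1/2) * sum_m exp(-pi m^2 / t).
    For small t this side converges much faster than the direct sum."""
    return (sum(math.exp(-math.pi * m * m / t) for m in range(-mmax, mmax + 1))
            / math.sqrt(t))
```

The practical payoff is the same as in the text: resummation trades a slowly converging sum for an exponentially converging one.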
The first step of our procedure is to employ the Poisson summation formula:
A straightforward application of the above relation gives:
where we have used the standard definition of the generalized Epstein zeta function:
A regulator parameter is introduced to perform the necessary analytic continuations; the corresponding limit will be taken at the end of the calculation. It is interesting that the function can be written entirely in terms of the integral function
Although compact, the result in Eq. (12) requires further manipulation. A useful way to handle these functions is to make use of the Chowla-Selberg formula. Refs. [35, 36] develop the appropriate formalism, which allows us to express them as the sum of analytic functions plus a rapidly converging series:
The other tool is the following integral formula (see Ref. ):
It can easily be checked that, in the above expression, the limit can be taken safely, giving
where we have separated the contribution from the remaining part which is exponentially suppressed. This separation leads to the definition
Due to the exponential suppression, the last terms above are clearly regular. Therefore, to prove the regularity of $k$, we only have to show that the remaining piece is also regular. Showing that the terms (22), (23), and (24) also lead to a regular expression (and computing them) requires further manipulation.
The first term, (22), can be computed analytically:
where $\psi$ is the Euler psi function. The remaining two terms can be rearranged by first performing one of the summations and then using the Chowla-Selberg formula. We leave the details to Appendix A and present the results here:
Written as above, it is a trivial matter to see that, after taking the derivatives with respect to the relevant variables, the resulting expression, and thus $k$, is well behaved, thanks to the exponential fall-off of the Kelvin functions.
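The exponential fall-off invoked here can be made concrete for the modified Bessel function $K_0$, evaluated from its integral representation by simple quadrature and compared with its large-$x$ behavior $K_0(x)\simeq\sqrt{\pi/2x}\,e^{-x}$. This is a toy numerical check, independent of the paper's actual expressions:

```python
import math

def k0(x, tmax=8.0, n=20000):
    """Modified Bessel function K_0 via its integral representation
    K_0(x) = int_0^inf exp(-x cosh t) dt  (trapezoidal rule)."""
    h = tmax / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    for i in range(1, n):
        total += math.exp(-x * math.cosh(i * h))
    return total * h

def k0_asym(x):
    """Leading large-x behaviour: sqrt(pi/(2x)) * exp(-x)."""
    return math.sqrt(math.pi / (2.0 * x)) * math.exp(-x)
```

The fast decay is what makes the series obtained after the Chowla-Selberg rearrangement numerically almost trivial to evaluate.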
The last step of our procedure consists in evaluating the above expressions. The numerical computation of the sums in Eqs. (25), (26), (27), (29), and (30) does not present any problem, due to the exponential suppression. Differentiating, substituting the appropriate values, and combining the results according to Eqs. (8), (9), and (10) leads to the result for $k$. Figure 3 shows the dependence of $k$ on $R/L$, and Table 1 provides some indicative values.
4 Numerical Simulation
In this section, we will describe the details of our numerical simulations. For later use, we define the coupling squared,
where $\beta$ denotes the lattice bare coupling. Note that we express the coupling as a function of $\beta$, the lattice size, and the loop size, instead of $a/L$, $R/L$, and $L$. This redefinition is chosen for convenience, since the former are the actual input parameters for the simulations.
4.1 Step Scaling
We begin by briefly reviewing the step-scaling procedure (see Refs. [25, 38, 39] for details), which we use to evaluate the evolution of the running coupling over a wide range of energy scales on the lattice.
The first step is to fix a value of the renormalized coupling and find a set of parameters $(\beta, L/a)$ which reproduce that value for several different choices of $L/a$:
We achieve this by tuning the value of $\beta$ in such a way that the physical volume is fixed for different values of $L/a$. The physical volume fixed in this way serves as the starting point of the step-scaling procedure.
The next step is to vary the physical volume from $L$ to $sL$, which gives the evolution of the running coupling from the energy scale $1/L$ to $1/(sL)$, where $s$ is the scaling factor. This step can be performed by changing the lattice size from $L/a$ to $s \cdot L/a$, leaving each value of $\beta$ unchanged. The values of the coupling calculated with these new parameter sets should be considered as the coupling at the energy scale $1/(sL)$, up to discretization errors, and the extrapolation to the continuum limit can be taken
where the renormalization factor is as defined below Eq. (1). The resultant value of the coupling should be considered as the renormalized coupling at the energy scale $1/(sL)$. This is how one obtains a single discrete step of the evolution of the running coupling with scaling factor $s$.
Next, we find a new parameter set which reproduces the value of the coupling obtained in the previous step. Here, we choose the parameter set in such a way that the new lattice size is equal to the original one. From here, we can repeat exactly the same procedure described so far: we calculate the coupling on the scaled lattices with the new parameter set. By iterating this procedure $n$ times, we obtain the evolution of the running coupling from the starting energy scale down by a factor $s^n$.
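The iteration just described can be sketched schematically. In the sketch below, the "measured" coupling is replaced by its one-loop perturbative running with the quenched SU(3) coefficient $b_0 = 11/(16\pi^2)$, so that each step is analytic; in the real calculation each step instead comes from tuned lattice simulations and a continuum extrapolation. The step count, starting value, and scaling factor here are illustrative, not the paper's:

```python
import math

B0 = 11.0 / (16.0 * math.pi ** 2)   # one-loop beta coefficient, quenched SU(3)

def step(u, s):
    """One discrete step: u = gbar^2 at scale 1/L  ->  gbar^2 at 1/(s*L),
    using one-loop running as a stand-in for the lattice measurement."""
    return 1.0 / (1.0 / u - 2.0 * B0 * math.log(s))

u = 1.0                      # illustrative starting coupling at the smallest volume
history = [u]
for _ in range(7):           # iterate the step-scaling procedure
    u = step(u, 1.5)         # illustrative scaling factor s
    history.append(u)
# the coupling grows monotonically toward the infrared (larger L)
```

Note the composition property that makes the recursion consistent: two steps of factor $s$ equal one step of factor $s^2$.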
4.2 Simulation Parameters
We use the standard Wilson plaquette gauge action defined on a four-dimensional Euclidean lattice of finite volume. In this work, we adopt untwisted periodic boundary conditions; however, it is straightforward to use other boundary conditions (e.g., twisted boundary conditions) when necessary. Gauge configurations are generated using the pseudo-heatbath algorithm with over-relaxation, mixed in the ratio of 1:5. In the remainder of this paper, we use the word “sweep” to refer to the combination of one pseudo-heatbath update sweep followed by five over-relaxation sweeps. In order to eliminate the influence of autocorrelation, we either take a large enough number of sweeps between measurements, or adopt the method of binning with a large enough bin size, so as to estimate the statistical error reliably. We perform the numerical simulations based on the step-scaling procedure explained in the previous section for a fixed value of $R/L$. (The reason for this choice will be given in the next subsection.) We use five different starting lattice sizes; the lattice size after the scaling at each step is obtained by multiplying the starting size by the scaling factor $s$. A fixed value of the renormalized coupling is taken as the starting value of the first step of the step-scaling procedure. The tuning of the values of $\beta$ (namely, finding the values of $\beta$ which reproduce the target coupling for each of the starting lattice sizes in the first step) is carried out by interpolating the data obtained from simulations at different values of $\beta$, shown in Fig. 4.
Each data point in the figure is calculated from 200 gauge configurations with 1000-sweep separation. Once we obtain the values of $\beta$ which reproduce the target coupling on each starting lattice size, we carry out the simulations for the step scaling, namely simulations on the scaled lattice sizes with the tuned values of $\beta$. These results are used to take the continuum limit, and the resultant value of the coupling becomes the starting value for the next step. We iterate this procedure seven times. The combinations of $\beta$ and lattice size used for the simulations are shown in Table 2.
4.3 Simulation Details
There are several practical steps in calculating the renormalized coupling from numerical simulations. Here we explain various technical details of our computations.
We use the APE smearing of link variables, defined by the following equation:
where the two parameters denote the smearing level and the smearing parameter, respectively. The smearing is done for links in all four directions. The result does not depend significantly on the value of the smearing parameter, and we fix it in the present study. We then need to find the optimal values of $R/L$ and the smearing level, by considering the following requirements. For better control of the discretization error, it is preferable to choose a larger value of $R/L$. Meanwhile, for the purpose of reducing the statistical error, it is better to take a smaller value of $R/L$ and a higher smearing level. Fig. 5 shows the smearing-level dependence of the Creutz ratio in one example case. From this figure, we find that the statistical error is notably reduced already at smearing level one. In order to avoid over-smearing, the extent of the smearing should remain smaller than the loop size; this condition leads to a lower bound on $R/L$. We summarize the bound from this requirement in Table 3. We also observe (see Fig. 5 for an example) that data at small loop sizes and high smearing levels are not reliable because of over-smearing. Considering all the above requirements, we determine the optimal choice of $R/L$ and smearing level.
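As a toy illustration of the smearing update (not the SU(3) implementation used in the simulations), the sketch below applies one APE step to a two-dimensional U(1) "gauge field", where each link is a complex phase and the projection back onto the group is a simple normalization. The lattice size, $\alpha$ value, and the 2D reduction are all illustrative assumptions:

```python
import cmath
import random

L, alpha = 8, 0.5        # lattice size and smearing parameter (illustrative)
random.seed(7)
# 2D U(1) toy gauge field: each link is a phase U[mu][x][y]
U = [[[cmath.exp(1j * random.uniform(-0.7, 0.7)) for _y in range(L)]
      for _x in range(L)] for mu in range(2)]

def mean_plaq(U):
    """Average real part of the 1x1 plaquette."""
    tot = 0.0
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            tot += (U[0][x][y] * U[1][xp][y]
                    * U[0][x][yp].conjugate() * U[1][x][y].conjugate()).real
    return tot / L ** 2

def ape_smear(U, alpha):
    """One APE step: U' = Proj[(1-alpha) U + (alpha/2) * (sum of staples)],
    with Proj the projection back onto U(1) (phase normalization)."""
    V = [[[0j] * L for _x in range(L)] for mu in range(2)]
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            xm, ym = (x - 1) % L, (y - 1) % L
            # the two staples attached to the x-link at (x, y)
            s0 = (U[1][x][y] * U[0][x][yp] * U[1][xp][y].conjugate()
                  + U[1][x][ym].conjugate() * U[0][x][ym] * U[1][xp][ym])
            # the two staples attached to the y-link at (x, y)
            s1 = (U[0][x][y] * U[1][xp][y] * U[0][x][yp].conjugate()
                  + U[0][xm][y].conjugate() * U[1][xm][y] * U[0][xm][yp])
            V[0][x][y] = (1 - alpha) * U[0][x][y] + 0.5 * alpha * s0
            V[1][x][y] = (1 - alpha) * U[1][x][y] + 0.5 * alpha * s1
    return [[[v / abs(v) for v in row] for row in V[mu]] for mu in range(2)]

m0 = mean_plaq(U)
U_smeared = ape_smear(U, alpha)
m1 = mean_plaq(U_smeared)    # smearing raises the average plaquette
```

One smearing step visibly smooths the field: the average plaquette of a fluctuating configuration increases, which is the mechanism behind the noise reduction discussed above.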
Once we fix the value of $R/L$, we need to estimate the value of the Creutz ratio at non-integer $R$. We interpolate it using a quadratic function in $R$:
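A minimal sketch of such an interpolation, written in Lagrange form through three measured points (the sample numbers below are synthetic, chosen to lie on an exact quadratic so the result can be checked):

```python
def quad_interp(r_vals, chi_vals, r):
    """Quadratic (Lagrange) interpolation of chi through three points,
    evaluated at a non-integer loop size r."""
    (r0, r1, r2), (y0, y1, y2) = r_vals, chi_vals
    return (y0 * (r - r1) * (r - r2) / ((r0 - r1) * (r0 - r2))
            + y1 * (r - r0) * (r - r2) / ((r1 - r0) * (r1 - r2))
            + y2 * (r - r0) * (r - r1) / ((r2 - r0) * (r2 - r1)))

# synthetic check: values of the exact quadratic 2*R^2 + 1 at R = 2, 3, 4
chi_half = quad_interp((2.0, 3.0, 4.0), (9.0, 19.0, 33.0), 2.5)
```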
The last step of the calculation is to take the continuum limit of the coupling from the data obtained for the different combinations of $\beta$ and lattice size listed in each column of Table 2. We show two example plots in Fig. 6, namely the continuum extrapolations for Step 1 and Step 7. Since our Wilson loop scheme does not contain $O(a)$ systematic errors, we extrapolate to the continuum limit using a fit function linear in $(a/L)^2$. Four data points are used for this extrapolation (shown as red lines in Fig. 6), and the resultant value is adopted as the central value of the coupling in the continuum limit. We also take the continuum limit using a fit function quadratic in $(a/L)^2$ with five data points (indicated by pink curves in Fig. 6), and the difference between the central values of the two fits is adopted as the systematic error coming from possible higher-order discretization effects. In Fig. 6, we have also plotted the extrapolation by a linear function with five data points for comparison. In this figure, the resultant values of the continuum limit obtained from the different fit functions are plotted at $(a/L)^2 = 0$. (For better visibility, we slightly displaced the data obtained from the 5-point quadratic and 5-point linear extrapolations.) All the error bars shown in Fig. 6 are statistical only.
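The linear continuum extrapolation amounts to an ordinary least-squares fit in $x=(a/L)^2$ and reading off the intercept. A self-contained sketch (the lattice sizes and data values below are synthetic, not the paper's):

```python
def linear_fit(xs, ys):
    """Least-squares fit y = c0 + c1*x; returns (c0, c1).
    With x = (a/L)^2, c0 is the continuum-limit value."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c0, c1

# synthetic data: couplings at four lattice sizes with a mild O((a/L)^2) slope
xs = [(1.0 / La) ** 2 for La in (12, 15, 18, 21)]
ys = [2.5 + 0.8 * x for x in xs]
c0, c1 = linear_fit(xs, ys)   # c0 recovers the continuum value
```

Repeating the fit with an extra quadratic term and comparing intercepts gives the systematic-error estimate described above.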
4.4 Numerical Results
We now show the results of our simulations, which were performed using the parameters in Table 2 and the procedures explained in the previous section. Details of the parameter choices and numerical results are summarized in Appendix B.
In Fig. 7, we plot the resulting values of the coupling and their statistical errors for each lattice size in each step. The continuum limit was taken in the way explained in the previous section, and both the statistical and systematic errors are estimated. In Fig. 7, the values of the coupling in the continuum limit are shown with statistical and systematic errors added in quadrature.
The running coupling constant is extracted by dividing the Creutz ratio by the perturbative coefficient $k$. The evolution of the running coupling constant is obtained by connecting the resultant values of the individual steps and assigning appropriate scales to them. We plot the results in Fig. 8.
We define the starting energy scale of the first step as the reference scale, and the evolution of the running coupling constant is plotted as a function of energy in units of this scale. In this figure, errors accumulate with the evolution of the running coupling.
From the results of the simulation, we can also extract the non-perturbative $\beta$ function by using the method explained in Ref. . To this end, it is useful to define the step scaling function in the continuum limit as
We list the simulation results for the step scaling function in Table 5.
In the weak-coupling region, the step scaling function can be perturbatively expanded as
with the coefficients
where the two constants are the one-loop and two-loop coefficients of the $\beta$ function, respectively. In Fig. 9, we plot the data listed in Table 5 together with the two-loop perturbative curve (solid), as well as two curves which are the results of the following two kinds of polynomial fits.
For the upper (dashed) curve, we fitted a polynomial to the data in the weak-coupling range; for the middle (dotted) curve, we fitted to all the data. As expected, in the weak-coupling region the data are well described by the two-loop perturbative result, and the polynomial functions also fit the data well. Meanwhile, the figure clearly shows that neither the two-loop perturbative curve nor the simple polynomial fits can explain the behavior of the data at larger values of the coupling. This is nothing but an indication of the emergence of non-perturbative effects.
The formula to obtain the non-perturbative $\beta$ function from the step scaling function was given in Ref.  as
By applying this formula recursively, we obtain the discrete $\beta$ function shown in Fig. 10.
Here, while the values of the