IPPP/09/33

DCPT/09/66

Cavendish-HEP-09/06

25th August 2009

Uncertainties on α_S(M_Z²) in global PDF analyses and implications for predicted hadronic cross sections

A.D. Martin, W.J. Stirling, R.S. Thorne and G. Watt

Institute for Particle Physics Phenomenology, University of Durham, DH1 3LE, UK

Cavendish Laboratory, University of Cambridge, CB3 0HE, UK

Department of Physics and Astronomy, University College London, WC1E 6BT, UK

We determine the uncertainty on the strong coupling α_S(M_Z²) due to the experimental errors on the data fitted in the global analysis of hard-scattering data, within the standard framework of leading-twist fixed-order collinear factorisation in the MS-bar scheme, finding best-fit values of α_S(M_Z²) = 0.1202 at next-to-leading order (NLO) and α_S(M_Z²) = 0.1171 at next-to-next-to-leading order (NNLO). We do not address in detail the issue of the additional theory uncertainty on α_S(M_Z²), but an estimate is Δα_S(M_Z²) ≈ 0.003 at NLO and at most this much at NNLO. We investigate the interplay between uncertainties on α_S(M_Z²) and uncertainties on parton distribution functions (PDFs). We show, for the first time, how both these sources of uncertainty can be accounted for simultaneously in calculations of cross sections, and we provide eigenvector PDF sets with different fixed α_S values to allow further studies by the general user. We illustrate the application of these PDF sets by calculating cross sections for W, Z, Higgs boson and inclusive jet production at the Tevatron and LHC.

## 1 Introduction

There has been a steady improvement in the precision and the variety of the data for deep-inelastic and related hard-scattering processes. Global parton analyses of these data allow the partonic structure of the proton to be quantified with ever-increasing accuracy. Analyses are now possible at next-to-next-to-leading order (NNLO) in the running strong coupling α_S. Indeed, the accuracy is such that the theoretical formalism needs to account for increasingly subtle effects. One area which needs careful treatment at this level of accuracy is the strong coupling itself. This applies both to the definition of the coupling and to the uncertainties associated with it. The definition of α_S is not unique, and different definitions can give noticeable differences in the results. In the recent “MSTW 2008” global fit [1] we changed the definition of the strong coupling compared to that used in previous “MRST” analyses. We discuss the technical details of this change in the Appendix. However, the main emphasis of this article is the issue of the uncertainty on parton distribution functions (PDFs), and derived quantities such as cross sections, arising from the uncertainty in the strong coupling α_S. This is a non-trivial, and sometimes significant, source of uncertainty which is often not considered when extracting PDFs from global fits or when calculating uncertainties on hadronic cross sections.^{1}

## 2 Parton distributions and the strong coupling

The recent MSTW analysis [1] (using an improved definition of α_S) was based on the obtained “best-fit” values of the strong coupling, i.e. the values obtained by minimisation of a global goodness-of-fit quantity, χ²_global, simultaneously with the PDF parameters at an input scale Q_0² = 1 GeV². In the NLO and NNLO global PDF fits, the values of α_S(M_Z²) were found to be 0.1202 and 0.1171, respectively. In addition to the best-fit PDFs and corresponding best-fit α_S values, we also determined a set of 40 eigenvector (“error”) PDF sets, designed to span the variation in the parton distributions allowed by the uncertainties on the data included in the global fit. We introduced and used a new “dynamic tolerance” method for determining 68% or 90% confidence-level (C.L.) error PDFs, which is an extension of an earlier method introduced in the CTEQ6 analysis [4, 5, 6]. Then the prediction and corresponding uncertainty for a physical quantity F that depends on the PDFs, such as a cross section at a hadron–hadron collider, is given by F = F(S_0) ± ΔF^±_max, where^{2}

$$\Delta F^{+}_{\max} = \sqrt{\sum_{k=1}^{n} \left\{ \max\!\left[\, F(S_k^{+}) - F(S_0),\; F(S_k^{-}) - F(S_0),\; 0 \,\right] \right\}^{2}}\,, \qquad \Delta F^{-}_{\max} = \sqrt{\sum_{k=1}^{n} \left\{ \max\!\left[\, F(S_0) - F(S_k^{+}),\; F(S_0) - F(S_k^{-}),\; 0 \,\right] \right\}^{2}}\,. \qquad (1)$$

Here, S_0 is the central PDF set, and S_k^± (k = 1, …, n) are the eigenvector PDF sets, with n = 20 in the MSTW 2008 analysis [1]. To illustrate our method and its results, we used as examples in Ref. [1] the total cross sections for W and Z production at the Tevatron and LHC, i.e. we computed the theoretical predictions for the total cross sections and their corresponding PDF uncertainties, σ ± Δσ_PDF.
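As a concrete illustration, the asymmetric "master formula" of Eq. (1) is straightforward to code. The sketch below is our own illustration, not code from Ref. [1], and the numbers are invented toy values rather than real cross sections.

```python
import math

def pdf_uncertainty(F0, F_pairs):
    """Asymmetric Hessian uncertainty on an observable F, following Eq. (1).

    F0      -- prediction F(S_0) from the central PDF set
    F_pairs -- [(F(S_k^+), F(S_k^-)), ...], one pair per eigenvector k
    Returns (dF_plus, dF_minus).
    """
    # Upward deviation: largest positive excursion of each eigenvector pair
    up = sum(max(Fp - F0, Fm - F0, 0.0) ** 2 for Fp, Fm in F_pairs)
    # Downward deviation: largest negative excursion of each pair
    dn = sum(max(F0 - Fp, F0 - Fm, 0.0) ** 2 for Fp, Fm in F_pairs)
    return math.sqrt(up), math.sqrt(dn)

# Toy example with three eigenvectors (invented numbers):
dF_plus, dF_minus = pdf_uncertainty(
    100.0, [(102.0, 99.0), (101.0, 98.5), (100.2, 100.4)]
)
```

Note that an eigenvector pair contributes only to the upward (downward) uncertainty through whichever of its two members moves the observable up (down); a pair whose two members move the observable in the same direction contributes to only one of the two errors.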

However, the PDF sets obtained in this previous study are only defined for the particular values of α_S found by the best fit. If the user has a preferred (different) value of α_S(M_Z²), then PDFs determined with that particular value in the global analysis should strictly be used. Hence, in this paper we provide best-fit PDFs for a range of values of α_S(M_Z²). This is a straightforward matter of repetition. However, when considering *uncertainties* on PDFs and on physical quantities the process is more involved. The true theoretical uncertainty on a predicted cross section should also include a contribution from the allowed variation of α_S about its best-fit value. This is particularly important for cross sections that at leading order are proportional to a power of the coupling, for example, σ ∝ α_S² for Higgs boson production via gluon–gluon fusion at the Tevatron and LHC. A naïve way of doing this would be to define the overall theory error as the sum in quadrature of Δσ_PDF and Δσ_αS, where Δσ_αS is the cross-section variation when α_S is allowed to vary within some range, determined, for example, by its world-average error, for a fixed set of PDFs. However, it is not consistent to use different values of α_S in the partonic cross section describing the hard subprocess and in the PDF evolution. Moreover, in a global PDF analysis, there are non-negligible correlations between the PDFs and the value of the strong coupling. For example, the evolution of the structure function F_2 at small x is driven by the combination α_S g, and so a larger value of the coupling will correspond, in principle, to a smaller gluon distribution at low x. This is indeed observed in practice.

This PDF–α_S correlation has been known for some time, and indeed early global analyses derived a series of best-fit PDFs for a set of discrete α_S values spanning a given range [7]. Other relevant studies have been made by the CTEQ group [8, 2]. However, given that the precision of the data used in global analyses has increased significantly in recent years, with a corresponding reduction in the error on both the PDFs and on α_S, a more sophisticated approach is now required, that is, one in which (i) the allowed variation of α_S is determined by the data, and (ii) the correlation between the variation in α_S and in the PDFs is fully taken into account. The latter would be very difficult to achieve if α_S and its uncertainty were taken as externally determined quantities rather than obtained directly from the fit.

In this paper we investigate the interplay between uncertainties on α_S and uncertainties on parton distributions in the framework of the recent MSTW 2008 analysis [1]. We show how both these sources of uncertainty can be properly accounted for simultaneously in calculations of cross sections. The remainder of this paper is organised as follows. In Section 3 we discuss how the description of each data set included in the global fit depends on the value of α_S(M_Z²). In Section 4 we obtain the 1-σ (that is, the 68% C.L.) and 90% C.L. uncertainties on α_S(M_Z²) from the NLO and NNLO global analyses.^{3}

## 3 Description of data sets as a function of α_S(M_Z²)

The MSTW 2008 global fit [1] used a wide variety of data from both fixed-target experiments and the HERA and Tevatron colliders. Neutral-current structure functions (F_2^p and F_2^d) were included from fixed-target lepton–nucleon scattering experiments (BCDMS [9, 10], NMC [11, 12], E665 [13] and SLAC [14, 15, 16]), low-mass Drell–Yan cross sections from the E866/NuSea experiment [17, 18], and charged-current structure functions (F_2 and xF_3) and dimuon cross sections from neutrino–nucleon scattering experiments (CCFR/NuTeV [19, 20] and CHORUS [21]). From the HERA experiments, H1 and ZEUS, data were included on neutral- and charged-current reduced cross sections (σ_NC and σ_CC) [22, 23, 24, 25, 26, 27, 28, 29, 30], the charm structure function (F_2^cc̄) [31, 32, 33, 34, 35, 36, 37], and inclusive jet production in deep-inelastic scattering [38, 39, 40]. From the Tevatron experiments, CDF and DØ, Run II data were included on inclusive jet production [41, 42], the lepton charge asymmetry from W decays [43, 44] and the Z rapidity distribution [45, 46]. A more detailed description of the treatment of each of these data sets can be found in Ref. [1].

The definition of the goodness-of-fit quantity, χ², for each data set is given in Section 5.2 of Ref. [1]. Note that the normalisation of each data set in the global analysis is taken as a fitted free parameter to allow for uncertainties in the measured luminosities, with a *quartic* penalty term given by Eq. (37) of Ref. [1] rather than the more usual quadratic penalty term, in order to bind the fitted data-set normalisations more closely to their nominal values of 1, and to recognise that the normalisation uncertainties may well not be Gaussian in nature. For most data sets, particularly the older ones where correlations are unavailable or statistical uncertainties dominate, the statistical and systematic (other than normalisation) uncertainties are simply added in quadrature in the definition of χ². However, for the data on inclusive jet production, where at the Tevatron the systematic errors are often many times larger than the statistical errors, and for the CDF Z rapidity distribution, the full information on correlated systematic uncertainties is included by defining χ² as in Eq. (38) of Ref. [1], i.e. the data points are allowed to shift by the correlated systematic uncertainties in order to give the best fit, with a quadratic penalty term to discourage shifts much larger than the corresponding experimental errors.
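Schematically, in our own notation (which need not match Eq. (38) of Ref. [1] exactly), such a correlated-systematics χ² has the form

$$\chi^2 \;=\; \sum_{i=1}^{N} \frac{\left( d_i + \sum_{k} r_k\, \sigma^{\rm corr}_{k,i} - t_i \right)^2}{\left( \sigma^{\rm uncorr}_{i} \right)^2} \;+\; \sum_{k} r_k^2 \,,$$

where d_i and t_i are the measured and predicted values for point i, σ_i^uncorr is its uncorrelated error, σ_{k,i}^corr is the shift of point i induced by one standard deviation of the k-th correlated systematic, and r_k are the nuisance parameters; the final term is the quadratic penalty mentioned above.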

We plot the χ² profiles for each data set as the difference from the value at the *global* minimum, Δχ² = χ² − χ²_min, when varying α_S(M_Z²); see Figs. 1–4 for the NNLO profiles for each of the 29 different types of data. The points in Figs. 1–4 are generated for fixed values of α_S(M_Z²) between 0.107 and 0.127 in steps of 0.001. These points are then fitted to a quadratic function of α_S(M_Z²), shown by the solid curves. The horizontal dashed lines in the plots indicate the 68% and 90% C.L. limits for each data set, determined according to a “hypothesis-testing” criterion which we will describe below in Section 4.
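The scan-and-fit procedure just described can be sketched as follows. This is our own minimal reconstruction (a least-squares parabola through the scanned points, then the roots of the parabola at a horizontal χ² limit), with an invented χ² scan; it is not the actual fitting code.

```python
import math

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    S = lambda p: sum(x ** p for x in xs)
    Sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(4), S(3), S(2)], [S(3), S(2), S(1)], [S(2), S(1), float(n)]]
    v = [Sy(2), Sy(1), Sy(0)]
    for i in range(3):                      # Gaussian elimination, partial pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
            v[r] -= f * v[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef                             # (a, b, c)

def cl_interval(a, b, c, chi2_limit):
    """alpha_S values where the fitted parabola crosses a horizontal chi^2 limit."""
    disc = math.sqrt(b * b - 4.0 * a * (c - chi2_limit))
    return (-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)

# Invented chi^2 scan: a parabola with its minimum at alpha_S = 0.117
alphas = [0.107 + 0.001 * i for i in range(21)]
chi2s = [5000.0 * (a - 0.117) ** 2 + 100.0 for a in alphas]
a, b, c = fit_quadratic(alphas, chi2s)
lo, hi = cl_interval(a, b, c, 101.0)        # the Delta(chi^2) = 1 interval
```

The intercepts `lo` and `hi` play the role of the points where the solid quadratic curves cross the horizontal dashed C.L. lines in Figs. 1–4.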

We see from Figs. 1–4 that for most data sets the variation of χ² with respect to α_S is indeed approximately quadratic, with minima in the vicinity of the value of α_S(M_Z²) determined by the global fit. However, a few data sets have minima which are significantly displaced from the global best-fit values of 0.1171 at NNLO (or 0.1202 at NLO). For example, the BCDMS F_2 data prefer values around 0.110. With the global best-fit α_S, the predicted F_2 falls too quickly with increasing Q² for the BCDMS data, particularly at low Q² (and hence low inelasticity y), and so a lower α_S is preferred since it gives a flatter Q² slope.^{4}

We find that the preference of the inclusive HERA data for high α_S values holds also when we perform a NLO fit to these data alone, which at first sight seems to contradict previous findings by H1 [23] and ZEUS [48] indicating a preference for low α_S(M_Z²) values of 0.115 and 0.110, respectively, although each with a large uncertainty, when fitting only to inclusive data. Relatively low values of α_S(M_Z²) are also preferred by the inclusive HERA 96/97 data sets in the CTEQ global analysis (see Fig. 4 of Ref. [2]). However, the low-x gluon distribution is strongly anticorrelated with α_S. The input gluon parameterisation in the H1 [23], ZEUS [48] and CTEQ [2] analyses was assumed to behave as a single power of x at low x, whereas the more flexible MSTW [1] (and MRST [47]) input gluon parameterisation is effectively a sum of two powers at low x, where the second term is required by the fit to be negative at the input scale Q_0² = 1 GeV². We find that a NLO fit to only inclusive HERA data using a restricted input gluon parameterisation, with the second term omitted, gives a significantly lower α_S(M_Z²) (with a large uncertainty). Repeating the *global* fits with this restricted parameterisation also gives lower values of α_S(M_Z²) at both NLO and NNLO. However, the global χ² is then worse by 63 at NLO and by 80 at NNLO, so the more flexible low-x gluon parameterisation is clearly required (particularly for the PDF uncertainties, see Section 6.5 of Ref. [1]). Since the CTEQ PDFs are input at a slightly higher scale, Q_0 = 1.3 GeV, and other sets have even higher input scales, the effect of only having a single power for the gluon parameterisation will be reduced compared to our above example. However, the general feature of a reduction in α_S(M_Z²) due to a restricted parameterisation is very likely to persist to a greater or lesser extent.

The data set that exhibits particularly anomalous behaviour, where χ² is significantly reduced for larger α_S, is the DØ Run II lepton charge asymmetry; see Fig. 4. We have already pointed out the difficulties of fitting these data in the standard NLO and NNLO global analyses (see Section 11.1 and Fig. 44 in Ref. [1]), with both fits yielding a χ² of 25 for 10 data points. We also noted that the asymmetry is sensitive to the separation into valence and sea quarks, particularly at lower lepton p_T. Indeed, this explains the behaviour of χ² with α_S: as α_S increases, the valence quarks evolve more rapidly at high x than the sea quarks, and the asymmetry is reduced. At the upper end of the scanned α_S(M_Z²) range, at both NNLO and NLO, the decrease in χ² is approximately 15, and a good fit is obtained. However, such large α_S values are completely inconsistent with most of the other data sets in the global analysis (and with other determinations of α_S(M_Z²)).

[Fig. 5 panels: MSTW 2008 NLO (left) and MSTW 2008 NNLO (right).]

The NLO χ² profiles are similar to those at NNLO for most data sets. In Fig. 5 we compare the NLO and NNLO profiles for those data sets where there are notable differences between the two fits. Specifically, the profiles for the H1 and ZEUS data, the NMC/BCDMS/SLAC data and the E866/NuSea Drell–Yan cross sections are clearly more quadratic at NNLO than at NLO, with minima closer to the best-fit α_S values. This indicates a strong preference for the NNLO description, which is not so apparent if only the global best-fit χ² values are known. Let us give the reasons for this, taking each of these data sets in turn. The minimum for the HERA data at NNLO is consistent with the global average value of α_S(M_Z²), whereas a minimum is not even visible in the NLO plot. It was shown in Ref. [49] that at NNLO the preferred value of α_S(M_Z²) is lowered compared to the NLO value. The preference of the fixed-target data for this is borne out by comparing both the values of χ² at each order and the shape of the profile, i.e. the NLO fit tries to flatten the Q² slope with a lower coupling. The NNLO coefficient functions [50, 51] are positive and significant, and similarly the NLO fit tries to mimic these with a higher value of α_S.

The explanation of the different NLO and NNLO profiles for the E866/NuSea Drell–Yan cross sections, seen in Fig. 5, is more complicated. The positive NNLO correction [52] requires an increased E866/NuSea normalisation (1.09 at NNLO compared to 1.01 at NLO, cf. the 1-σ normalisation uncertainty of 6.5%), with a correspondingly larger χ² penalty (3.2 at NNLO compared to practically zero at NLO). When varying α_S(M_Z²) in the range shown in Fig. 5, the penalty term due to the fitted normalisation of the Drell–Yan data set is negligible at NLO (below 1 unit), while it increases dramatically at NNLO when increasing α_S from the best-fit value, reaching 22 units at the upper end of the range, i.e. a sizeable proportion of the total increase in χ² relative to the best-fit α_S. The Drell–Yan data slightly prefer a higher value of α_S at NLO than the global best-fit value, while at NNLO a lower value of α_S is preferred, mainly in order to reduce the penalty term due to the fitted normalisation.

The complete NNLO corrections are not yet known for inclusive jet production, either in deep-inelastic scattering or in hadron–hadron collisions. For the Tevatron data, we use the approximation to the NNLO corrections obtained from threshold resummation [53], which is included in the fastNLO package [54]. No such approximation is available for the HERA data, and so the HERA data on inclusive jet production are simply omitted from the NNLO global fit.^{5}

For completeness, in Fig. 6(a) we show the χ² profiles for the HERA inclusive jet production data in the NLO fit, since these H1 and ZEUS data sets are not included in the NNLO fit. Although the impact of these data on the high-x gluon distribution is diluted^{6} in a *global* parton analysis, it is seen from Fig. 6 that they are quite sensitive to the value of α_S(M_Z²), with the H1 and ZEUS data each showing a clear preference. We also show in Fig. 6(b) the corresponding NLO plots for the Tevatron inclusive jet production data: the preferred values of α_S(M_Z²) are 0.120 for the CDF data (k_T jet algorithm) and 0.119 for the DØ data (cone jet algorithm), which are very consistent with the best-fit values.

We provide public PDF grids (without uncertainties) for fits with α_S(M_Z²) in the range 0.110 to 0.130 (at NLO) and 0.107 to 0.127 (at NNLO) in steps of 0.001 [56]. These grids will allow users to make α_S determinations from other data sets. Note that we have implicitly done this for each of the individual data sets in Figs. 1–6, where the values of α_S(M_Z²) at the minima can be read off from the plots, together with an experimental uncertainty for an appropriate choice of Δχ². (Generally, the 1-σ experimental uncertainty is taken to be the change in α_S(M_Z²) which gives an increase of one unit in χ² with respect to the minimum value.) However, more detailed studies may be desirable, for example, including a theoretical uncertainty in the α_S determination from the observed renormalisation-scale dependence. Moreover, extractions from other quantities may be of interest, for example, from the ratio of three-jet to two-jet rates at the Tevatron.

## 4 Experimental uncertainty on α_S(M_Z²)

Ideally, we would expect the errors on α_S(M_Z²) to be given by Δχ²_global = 1 or 2.71 for a 68% or 90% C.L. limit, respectively. However, in practice, there are some inconsistencies between the independent fitted data sets, so these “parameter-fitting” criteria are not appropriate for global PDF analyses. Instead, we follow the procedure of Section 6.2 of Ref. [1], where we described how to choose an appropriate value of the tolerance for each eigenvector of the covariance matrix according to “hypothesis-testing” criteria. Here, we will use the same method to determine the appropriate uncertainty on α_S(M_Z²). To summarise, we perform the following steps.

We define the 90% C.L. region for each data set (comprising N data points) by the condition that [1]

$$\chi_n^2 \;\le\; \left( \frac{\xi_{90}}{\xi_{50}} \right) \chi_{n,\min}^2 \,, \qquad (2)$$

where ξ_90 is the 90th percentile of the χ²-distribution with N degrees of freedom, and ξ_50 (close to N) is the most probable value. (These quantities are defined in detail in Section 6.2 of Ref. [1]; see also the example discussed around Eq. (6) below.) The 68% C.L. region is defined similarly. The 90% and 68% C.L. regions determined in this way are shown as the horizontal dashed lines in Figs. 1–6. We then record the values of α_S(M_Z²) for which the χ² of each data set is minimised, together with the 90% and 68% C.L. limits defined by the intercepts of the quadratic curves with the horizontal dashed lines in Figs. 1–6.
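The percentile ratio in Eq. (2) is easy to evaluate without a statistics library. The sketch below uses the Wilson–Hilferty approximation to the χ² percentile; this approximation is our choice for illustration, not necessarily the method used in Ref. [1], and the example numbers are invented.

```python
import math

def chi2_percentile(n, z):
    """Wilson-Hilferty approximation to a chi^2 percentile for n d.o.f.

    z is the corresponding standard-normal quantile: z = 0 gives roughly the
    median, z = 0.4677 the 68th percentile, z = 1.2816 the 90th percentile.
    """
    return n * (1.0 - 2.0 / (9.0 * n) + z * math.sqrt(2.0 / (9.0 * n))) ** 3

def cl_limit(n, chi2_min, z):
    """Rescaled confidence-level limit in the spirit of Eq. (2):
    (xi / xi_50) * chi2_min, with xi the percentile selected by z."""
    return chi2_percentile(n, z) / chi2_percentile(n, 0.0) * chi2_min

# Hypothetical data set: 150 points with chi2_min = 170 (invented numbers)
limit90 = cl_limit(150, 170.0, 1.2816)
```

For a data set whose χ²_min already exceeds its most probable value, the ratio in `cl_limit` stretches the allowed region accordingly, which is exactly the rescaling role played by ξ_50 in Eq. (2).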

In Fig. 7 the points indicate these values of α_S(M_Z²) for which the χ² of each data set is minimised, while the inner error bars extend across the 68% C.L. region and the outer error bars extend across the 90% C.L. region defined by Eq. (2). Note that inclusive jet production at the Tevatron is the only process included in the global fit which is proportional to α_S² at leading order, so these data sets provide a strong constraint on α_S. However, it might initially be considered surprising that the error bars in Fig. 7 are much smaller for CDF than for DØ. Both factors on the right-hand side of Eq. (2) are smaller for CDF than for DØ, meaning that the 90% C.L. region for χ², indicated by the horizontal dashed lines in Fig. 6(b) at NLO (and the corresponding plots in Figs. 3 and 4 at NNLO), is smaller by almost a factor of 2 for CDF. More importantly, the χ² profiles are significantly steeper for CDF than for DØ, i.e. the CDF data are more sensitive to the α_S value. This can largely be explained by the fact that the normalisation of the CDF jet data is tied to the more constraining CDF Z rapidity distribution, due to the common luminosity uncertainty. This is an interesting example of the interplay between data sets in a global fit. The normalisation of the DØ jet data is independent of the DØ Z data, since the latter are presented as a rapidity shape distribution, i.e. divided by the measured total cross section. The DØ jet data therefore have more freedom to compensate for the variation in α_S by changing the normalisation.

We choose the uncertainty on α_S(M_Z²), indicated by the horizontal dashed lines in Fig. 7, so that all data sets are described within their 90% or 68% C.L. limits. It is seen from Fig. 7 that the upper limit on α_S(M_Z²) is fixed by the deterioration of the quality of the fit to the BCDMS data (within the context of the full global fit), while the lower limit is provided by the deterioration of the quality of the fit to the SLAC data. In each case a number of additional data sets are very close to providing the constraint, i.e. as with the determination of the tolerance values in Ref. [1], there is no particular reliance on any one data set in fixing the limits on α_S(M_Z²). The final results are:

$$\alpha_S(M_Z^2) = 0.1202 \ \text{at NLO}, \qquad (3)$$

$$\alpha_S(M_Z^2) = 0.1171 \ \text{at NNLO}. \qquad (4)$$

In Fig. 8 we show the change in χ²_global as α_S(M_Z²) is varied. The vertical lines indicate the 68% and 90% C.L. uncertainties given by Eqs. (3) and (4). The uncertainties on α_S(M_Z²) at NNLO, determined using the procedure just described, correspond to values of Δχ²_global much larger than the canonical “parameter-fitting” values of Δχ²_global = 1 (68% C.L.) or 2.71 (90% C.L.). Conversely, if these standard “parameter-fitting” values of Δχ²_global were taken seriously, then the quoted uncertainties on α_S(M_Z²) would be considerably smaller. The experimental errors on α_S(M_Z²) given in Eqs. (3) and (4) are a much refined revision of the previous estimate from the MRST 2001 analysis [47], which was obtained using a fixed Δχ²_global.
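For orientation, when the χ²_global profile is quadratic in α_S(M_Z²), a tolerance Δχ²_global simply rescales the parameter-fitting error (this is elementary error analysis under the quadratic approximation, not a formula taken from Ref. [1]):

$$\Delta\alpha_S\big(\Delta\chi^2_{\rm global}\big) \;\simeq\; \Delta\alpha_S\big(\Delta\chi^2_{\rm global} = 1\big)\,\sqrt{\Delta\chi^2_{\rm global}}\,,$$

so, for example, a tolerance of Δχ²_global = 9 would triple the quoted uncertainty relative to the parameter-fitting criterion.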

In Table 1 we compare our determination of α_S(M_Z²) at NLO and NNLO with the results obtained by other PDF fitting groups, showing only the experimental uncertainties.

| NLO | α_S(M_Z²) (expt. unc. only) |
|---|---|
| MSTW (this work) | |
| CTEQ [2] | |
| H1 [23] | |
| ZEUS [48] | |
| Alekhin [57] | |
| BBG [58] | |
| GJR [59] | |

| NNLO | α_S(M_Z²) (expt. unc. only) |
|---|---|
| MSTW (this work) | |
| AMP [60] | |
| BBG [58] | |
| ABKM [61] | |
| JR [62] | |

The values in Table 1 can be compared with the world average value of α_S(M_Z²) quoted by the Particle Data Group (PDG) [63], where the PDG world average is dominated by NNLO results, with the NLO results included carrying little weight. The 2009 world average obtained by Bethke [64] is α_S(M_Z²) = 0.1184 ± 0.0007. The MSTW values are completely consistent with the world average values, but are a little higher than all the others in Table 1, which are mostly from fits mainly, or exclusively, to structure functions in deep-inelastic scattering. The NNLO values in Table 1 are generally smaller than the corresponding NLO values from the same fitting group, as is to be expected, since NNLO corrections to splitting functions and coefficient functions automatically lead to quicker evolution. The CTEQ analysis appears only at NLO in Table 1.^{7}

Indeed, there is a distinct sensitivity to the data sets used. The fits containing only a limited number of data sets tend to obtain lower values of α_S(M_Z²). This is often because the BCDMS data carry a strong influence in the fit, as seen in Fig. 7. This tendency of the BCDMS data to pull the value of α_S(M_Z²) down was well illustrated in the HERA-LHC “benchmark” fits [65, 66], where both the data sets chosen and the cuts applied led to the BCDMS data carrying a high weight. It is often thought that the inclusive jet data favour high values of α_S, and are thus responsible for raising the α_S values significantly in global fits. The picture is not quite so simple, as the values preferred by these data sets in Fig. 7 illustrate, with none lying far above the global best-fit values. Indeed, in Section 12 of Ref. [1] it was shown that removing the Tevatron jet data results in the best-fit value of α_S(M_Z²) at NLO falling only from 0.1202 to 0.1197. It is more the case that the inclusion of the jet data can alter the shape of the gluon distribution obtained in a fit to only deep-inelastic scattering data, which indirectly affects the value of α_S, or can turn a low value of α_S with large uncertainty into a much better constrained higher value, as in Ref. [48].

In addition to the data fitted, the differences seen in Table 1 between the experimental uncertainties on α_S(M_Z²) obtained by different groups can be traced partly to the different methods used for error propagation, and in particular to the choice of the tolerance, Δχ²_global. For example, the large CTEQ uncertainty corresponds to Δχ²_global = 100 (at 90% C.L.), compared to the smaller dynamic MSTW tolerances; GJR/JR use their own intermediate choices at NLO/NNLO, respectively, while the other groups fit a smaller range of data and use Δχ²_global = 1.

We shall not address in detail the issue of the theory uncertainty on our α_S determination. The widely used method of varying the renormalisation and factorisation scales up and down by a factor of around two is not really sufficient in a global fit, as it misses logarithmically enhanced contributions at small and large x at higher orders, and such issues as ambiguities in heavy-quark flavour-scheme definitions.^{8}

A simple estimate of the theory uncertainty can be obtained by taking the difference between the NLO and NNLO determinations of α_S(M_Z²). This gives Δα_S(M_Z²) ≈ 0.003, which could be considered to be a minimum theory uncertainty at NLO and a maximum theory uncertainty at NNLO. It coincides with the estimate at NLO quoted in the MRST 2001 analysis [47] and the estimate at NLO obtained from scale variation by Alekhin [67]. It is also of the same order as changes invoked by introducing models of higher-order corrections or changing the cuts on data at NLO [68], which to us seems to be one of the most reliable estimates. It is probably an overestimate at NNLO, and more quantitative studies are needed. However, at NNLO the estimates from changing data cuts and introducing models for theoretical corrections [68] suggested that variations in α_S(M_Z²) are distinctly lower than at NLO, so a smaller range would be more appropriate. Hence, to be conservative, we estimate a NNLO theoretical uncertainty somewhat below the NLO–NNLO difference, though slightly larger than the value obtained from scale variation at NNLO by Alekhin [67], to account for the more complicated global fit.

## 5 Eigenvector PDF sets with varying α_S

Following on from our method of determining PDF uncertainties using the Hessian method, we might consider simply letting α_S(M_Z²) be another free parameter when diagonalising the covariance matrix of the fit, resulting in 42 rather than 40 eigenvector PDF sets, each with a slightly different value of α_S(M_Z²). We do not follow this approach, for various reasons. The first is that the coupling does not quite sit on an equal footing with the input PDF parameters. Consider a superposition of different eigenvector PDF sets. The linear nature of the DGLAP evolution equations means that any linear combination of PDFs, all with a common value of α_S, is also a well-defined PDF set evolving precisely according to the DGLAP evolution equations. Consequently, the precise linear combination is the same whatever the factorisation scale at which we sample the PDFs. But if the eigenvector PDF sets in the superposition have differing α_S values, then this picture for linear combinations is not preserved. A linear combination of PDFs evolving with different α_S values does not evolve in precisely the same manner as one single PDF set with a fixed α_S, i.e. it does not follow precisely a real trajectory in PDF space. This is not a problem in the particular case of using the eigenvector PDF sets to map out the uncertainty band, since each is used independently. However, it is an issue when rediagonalising the Hessian matrix, such as in Ref. [69], where, from a well-defined starting point, the new eigenvector PDF sets would not correspond to a particular PDF set which evolves in *precisely* the manner prescribed by the evolution equations. Since the whole Hessian method is based on manipulating linear combinations of perturbations in PDF parameters, which to first order in the Taylor expansion is equivalent to manipulating linear combinations of PDF sets, we find this feature troubling, even though it would certainly be a small effect in practice.
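The linearity argument can be made explicit. The DGLAP equation,

$$\frac{\partial f_i(x,\mu^2)}{\partial \ln \mu^2} \;=\; \frac{\alpha_S(\mu^2)}{2\pi} \sum_j \left( P_{ij} \otimes f_j \right)(x,\mu^2)\,,$$

is linear in the PDFs, so if each set f^{(k)} evolves with the *same* α_S(μ²), any combination Σ_k c_k f^{(k)} satisfies the identical equation and is itself a valid PDF set. If instead each f^{(k)} evolves with its own coupling α_S^{(k)}(μ²), the coupling cannot be factored out of the sum, and the combination no longer solves a single DGLAP equation.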

There are also more practical reasons why we reject the option of simply including α_S as an extra parameter in the Hessian matrix. We limit ourselves to 20 eigenvectors because a larger number of free input PDF parameters leads to overly large correlations, and to a breakdown of the quadratic behaviour of the global χ² distribution in some eigenvector directions [1]. A free α_S would introduce more correlation between parameters and reduce the stability of the eigenvectors, a feature we prefer to avoid. Finally, this approach would limit the uncertainty analysis to an expansion about the single best-fit PDF set, with a unique value of α_S(M_Z²). We acknowledge that the user may prefer more flexibility than this, and might wish to use PDF sets including uncertainties for a variety of different values of α_S(M_Z²). Hence, we provide eigenvector PDF sets with different, but fixed, α_S values to allow studies of this sort. In the rest of this section we give the details of our extraction of PDFs with uncertainties at different α_S values; examples of our recommended use of them will be given in Section 6. Here we just note that as we move away from the best-fit value of α_S, the fit quality automatically deteriorates, so that the PDFs cannot vary as much before the fit quality becomes unacceptable. It is an automatic result of our procedure that the PDF uncertainty shrinks as α_S(M_Z²) deviates from the preferred value determined by the global fit.

We provide eigenvector PDF sets with α_S(M_Z²) fixed at the limits of the 68% and 90% C.L. uncertainty regions given by Eqs. (3) and (4). We also provide eigenvector PDF sets where the offset of α_S(M_Z²) from its best-fit value is half these limits. (Further intermediate values of α_S are unnecessary, as we will explain in Section 6.) The eigenvector PDF sets are generated, for each fixed value of α_S, from input PDF parameters [1]

$$a_i(S_k^{\pm}) \;=\; a_i^{0} \,\pm\, t\, e_{ik}\,, \qquad (5)$$

where a_i^0 are the best-fit input PDF parameters for that value of α_S, and the rescaled eigenvector components are e_{ik} = √λ_k v_{ik}. The covariance (inverse Hessian) matrix has eigenvalues λ_k and orthonormal eigenvectors v_k (with components v_{ik}). The distance t along each rescaled eigenvector direction is adjusted to give the desired tolerance. In determining the tolerance for each of the eigenvector PDF sets using Eq. (2), and the corresponding equation for the 68% C.L. uncertainties, we take χ²_{n,min} to be the values at the overall *global* minimum, that is, the values obtained using the best-fit value of α_S(M_Z²).
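In code, the construction of Eq. (5) amounts to an eigendecomposition of the covariance matrix followed by symmetric displacements of the best-fit parameters. The sketch below is our own toy illustration with a hypothetical two-parameter fit (so that the 2×2 eigendecomposition can be written in closed form); the actual analysis has 20 input parameters, and the matrix and numbers here are invented.

```python
import math

def eigen_sym2(C):
    """Eigenvalues and orthonormal eigenvectors of a symmetric 2x2 matrix C."""
    a, b, d = C[0][0], C[0][1], C[1][1]
    half_tr = 0.5 * (a + d)
    disc = math.sqrt(half_tr ** 2 - (a * d - b * b))
    lams = [half_tr + disc, half_tr - disc]
    vecs = []
    for lam in lams:
        if abs(b) > 1e-15:
            v = (lam - d, b)            # solves (C - lam*I) v = 0 when b != 0
        else:                           # diagonal matrix: pick the matching axis
            v = (1.0, 0.0) if abs(lam - a) <= abs(lam - d) else (0.0, 1.0)
        norm = math.hypot(v[0], v[1])
        vecs.append((v[0] / norm, v[1] / norm))
    return lams, vecs

def eigenvector_sets(a0, C, t):
    """Input parameters a_i(S_k^+-) = a_i^0 +- t*sqrt(lambda_k)*v_ik, as in Eq. (5)."""
    lams, vecs = eigen_sym2(C)
    sets = []
    for lam, v in zip(lams, vecs):
        e = [math.sqrt(lam) * v[0], math.sqrt(lam) * v[1]]   # rescaled eigenvector
        plus = [a0[0] + t * e[0], a0[1] + t * e[1]]
        minus = [a0[0] - t * e[0], a0[1] - t * e[1]]
        sets.append((plus, minus))
    return sets

# Hypothetical covariance matrix and best-fit parameters (invented numbers):
sets = eigenvector_sets([1.0, -0.5], [[2.0, 1.0], [1.0, 2.0]], t=1.5)
```

The rescaling by √λ_k means that a unit distance t corresponds to the same increase in χ² in every eigenvector direction, which is what allows a single tolerance to be quoted per direction.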

Consider the situation when α_S(M_Z²) is fixed at its *upper* 1-σ limit in the NLO fit. Recall, from Fig. 7, that this limit is fixed by the BCDMS data set, for which χ²_{n,min} lies well above the most probable value ξ_50 for the number of degrees of freedom (the relevant values of ξ_68, ξ_50 and χ²_{n,min} can be read off from Fig. 7 of Ref. [1]). For this data set we define the 68% C.L. region by the condition that

$$\chi_n^2 \;\le\; \left( \frac{\xi_{68}}{\xi_{50}} \right) \chi_{n,\min}^2 \,. \qquad (6)$$

The rescaling factor (χ²_{n,min}/ξ_50) is necessary to take account of the fact that the value of χ²_{n,min} at the global minimum is quite far from the most probable value ξ_50 for this data set. Indeed, we see that in this case the best-fit value lies outside the strict 68% C.L. region χ²_n ≤ ξ_68. After applying the rescaling factor, the 68% C.L. region is given by χ²_n ≤ (ξ_68/ξ_50) χ²_{n,min}. That is, the central fit for the BCDMS data, with α_S fixed at its upper 1-σ limit, will have a χ² that is 9.8 units worse than in the overall best fit.

This can be clearly seen in Fig. 9(a), where we show the χ² for the BCDMS data when moving along eigenvector number 6. At zero distance the χ² already sits at its 68% C.L. limit, indicated by the horizontal dashed line in Fig. 9(a). Moving in the negative direction along this eigenvector, the χ² increases further, meaning that the tolerance for this eigenvector is *zero* in the negative direction. In the positive direction, the constraint which fixes the tolerance of 4.0 is provided by another data set, namely the NuTeV dimuon data, as seen in Fig. 9(b).

In Fig. 9(c) we show the corresponding plot when $\alpha_S$ is fixed at its *lower* 1-$\sigma$ limit in the NLO fit. Recall from Fig. 7 that this limit is fixed by the SLAC data, whose $\chi^2$ is correspondingly worse than at the overall best fit. Here, the tolerance for eigenvector number 6 is zero in the positive direction, while the constraint in the negative direction is again provided by the NuTeV dimuon data, this time giving a tolerance of 2.6, as seen in Fig. 9(d).

The tolerance values for each of the eigenvectors, when $\alpha_S$ is fixed at each of its 1-$\sigma$ limits in the NLO fit, are shown in Fig. 10. The examples given for eigenvector number 6 are typical. However, two anomalous situations arise. Firstly, when $\alpha_S$ is fixed at its upper 1-$\sigma$ limit, the $\chi^2$ for the BCDMS data when moving along eigenvector numbers 1, 2 and 10 has a minimum (or is almost flat) at zero distance, so the tolerance is zero in both the positive and negative directions. Secondly, when $\alpha_S$ is fixed at its lower 1-$\sigma$ limit, the $\chi^2$ for the SLAC data when moving along eigenvector numbers 14, 18 and 20 has a maximum at zero distance. The tolerance is then non-zero in both the positive and negative directions, since the constraint is provided by data sets other than the SLAC data. Of course, in these two anomalous situations the minima/maxima in $\chi^2$ will not occur at *exactly* zero distance; this is an artifact of working in discrete distance units, which is sufficiently accurate for our purposes.
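The dynamic-tolerance scan described above can be sketched as follows: walk along one eigenvector in discrete distance steps and stop as soon as any data set's $\chi^2$ crosses its C.L. bound. The quadratic $\chi^2$ profiles, bounds and step size are toy stand-ins for this illustration.

```python
# Sketch of the dynamic-tolerance scan along a single eigenvector direction.
# Toy chi^2 profiles and bounds; the real scan evaluates the global fit.

def tolerance_along_direction(chi2_profiles, bounds, step=0.1, n_max=100):
    """chi2_profiles: list of callables chi2_n(t); bounds: matching upper
    chi^2 bounds.  Returns the largest scanned distance t at which every
    data set still satisfies its bound; 0.0 if a bound is violated at the
    very first step."""
    t = 0.0
    for i in range(1, n_max + 1):
        t_next = i * step
        if any(chi2(t_next) > b for chi2, b in zip(chi2_profiles, bounds)):
            break
        t = t_next
    return t

# Two toy data sets: the second (a NuTeV-like constraint) bites first.
profiles = [lambda t: 100.0 + 0.5 * t * t,   # slowly rising chi^2
            lambda t: 80.0 + 5.0 * t * t]    # tightly constraining chi^2
print(tolerance_along_direction(profiles, bounds=[130.0, 160.0]))
```

A zero tolerance in one direction, as for eigenvector 6 above, corresponds to the bound being violated already at the first step of the scan.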

Comparing the tolerance plots in Fig. 10 with those in Fig. 10 of Ref. [1], we see that the PDF uncertainties will be much smaller (and more asymmetric) when $\alpha_S$ is fixed at each of its 1-$\sigma$ limits than when it is fixed at the best-fit value. This is confirmed in Fig. 11, where we show the PDF uncertainties for the cases where $\alpha_S$ is fixed at the best-fit value or shifted to each of its 1-$\sigma$ limits. Note that in all cases the uncertainty bands of the PDFs when $\alpha_S$ is at its 1-$\sigma$ limits are at most only slightly outside those for the PDFs when $\alpha_S$ is at its best-fit value.

There are a number of interesting features to note from Fig. 11 about the manner in which the central values of the PDFs change as a function of $\alpha_S$. At high $x$ the valence quarks (and the total up and down quark distributions) are anticorrelated with $\alpha_S$. This occurs for two reasons. Firstly, the higher-order coefficient functions for structure functions in deep-inelastic scattering are positive at high $x$. Increasing $\alpha_S$ therefore increases this contribution and hence requires fewer quarks to fit the fixed-target structure function data at relatively low $Q^2$. Secondly, the increased speed of evolution with larger $\alpha_S$ results in more migration from high $x$ to lower $x$ values. In absolute terms the effect is similar for up and down quarks, but the greater precision on up quarks means that the proportional effect is greater, and at high $x$ the central value of the default up quark distribution is outside the error bands of the distributions generated with $\alpha_S$ fixed at its 1-$\sigma$ limits.

Another interesting feature, seen in Fig. 11(f), is the confirmation of the anticorrelation between the small-$x$ gluon and $\alpha_S$. This is seen at small $x$ at low $Q^2$, and is a consequence of maintaining the fit quality to the small-$x$ HERA data, i.e. the measured values of $\partial F_2/\partial \ln Q^2$. From the momentum sum rule this results in a positive correlation between the high-$x$ gluon and $\alpha_S$. Note that there is some asymmetry in the deviation. We return to this point in the next section. Since the quark distributions at small $x$ are driven by the gluon, this change in the gluon affects the quarks. However, we note that there is in fact a slight *correlation* between the small-$x$ quark distributions and $\alpha_S$, showing that the increase in evolution from the increased coupling slightly outweighs the effect of the decreased small-$x$ gluon distribution.

In Fig. 12 we show similar plots for the gluon distribution at a different scale in both the NLO and NNLO fits. One can see that, in relative terms, the PDFs at the 1-$\sigma$ limits for $\alpha_S$ are further from the best-fit values at the lower $Q^2$ than at the higher $Q^2$, although the error bands always overlap. This illustrates that DGLAP evolution drives PDFs together at asymptotic values of $Q^2$.

## 6 Implications for cross section calculations

Each fixed value of $\alpha_S$ is associated with a central PDF set $S_0$ and eigenvector PDF sets $S_k^{\pm}$ defined by Eq. (5), where $k = 1, \ldots, n$ and $n = 20$ (corresponding to 20 input PDF parameters). An observable PDF-dependent quantity $F$, such as a hadronic cross section, calculated using a particular value of $\alpha_S$, has a central value $F(S_0)$ and asymmetric PDF uncertainties given using the Hessian method by [5, 70, 71, 72, 1]

$$ (\Delta F)^+ = \sqrt{\sum_{k=1}^{n} \left\{\max\left[\,F(S_k^+)-F(S_0),\; F(S_k^-)-F(S_0),\; 0\,\right]\right\}^2}, \qquad (7) $$

$$ (\Delta F)^- = \sqrt{\sum_{k=1}^{n} \left\{\max\left[\,F(S_0)-F(S_k^+),\; F(S_0)-F(S_k^-),\; 0\,\right]\right\}^2}, \qquad (8) $$

for a fixed value of $\alpha_S$.
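The asymmetric master formulas of Eqs. (7) and (8) translate directly into code. The central value and eigenvector-set values below are toy numbers chosen only to exercise both branches of the max.

```python
# Sketch of Eqs. (7) and (8): asymmetric Hessian uncertainties on an
# observable F from its values on the eigenvector sets S_k^{+/-}.
import math

def hessian_uncertainties(f0, f_pairs):
    """f0: F(S_0); f_pairs: list of (F(S_k^+), F(S_k^-)) over eigenvectors.
    Returns (dF_plus, dF_minus)."""
    up = sum(max(fp - f0, fm - f0, 0.0) ** 2 for fp, fm in f_pairs)
    dn = sum(max(f0 - fp, f0 - fm, 0.0) ** 2 for fp, fm in f_pairs)
    return math.sqrt(up), math.sqrt(dn)

f0 = 10.0
# Second toy eigenvector shifts F upwards in both directions, so it
# contributes only to the positive uncertainty.
pairs = [(10.3, 9.8), (10.4, 10.1)]
plus, minus = hessian_uncertainties(f0, pairs)
print(round(plus, 3), round(minus, 3))  # -> 0.5 0.2
```

Note that each eigenvector contributes to the positive (negative) uncertainty only through its largest upward (downward) deviation, so the two uncertainties are in general asymmetric.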

How should this prescription be generalised to calculate an overall “PDF+$\alpha_S$” uncertainty on an observable $F$, i.e. accounting for the additional uncertainty due to the uncertainty on $\alpha_S$? Ideally, we would vary $\alpha_S$ continuously within its experimental uncertainty determined by the global fit. Each value of $\alpha_S$ would give a central value $F(\alpha_S, S_0)$ with PDF uncertainties given by Eqs. (7) and (8). The overall best-fit prediction is then $F(\alpha_S^0, S_0)$, where $\alpha_S^0$ is the best-fit value, and the overall “PDF+$\alpha_S$” uncertainties are given by the spread in the predictions, including the PDF uncertainty, for each $\alpha_S$ value. More formally, the “PDF+$\alpha_S$” uncertainties are given by

$$ (\Delta F)^+_{\mathrm{PDF}+\alpha_S} = \max_{\alpha_S} \left\{ F(\alpha_S, S_0) + (\Delta F(\alpha_S))^+ \right\} - F(\alpha_S^0, S_0), \qquad (9) $$

$$ (\Delta F)^-_{\mathrm{PDF}+\alpha_S} = F(\alpha_S^0, S_0) - \min_{\alpha_S} \left\{ F(\alpha_S, S_0) - (\Delta F(\alpha_S))^- \right\}, \qquad (10) $$

where the maximum and minimum are calculated when $\alpha_S$ is varied continuously within, for example, the range $\alpha_S^0 - \Delta\alpha_S \le \alpha_S \le \alpha_S^0 + \Delta\alpha_S$ to determine the 1-$\sigma$ “PDF+$\alpha_S$” uncertainties, and the PDF uncertainties $(\Delta F(\alpha_S))^{\pm}$ are given by Eqs. (7) and (8) for each value of $\alpha_S$ in this range.
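In practice the continuous variation is replaced by an envelope over the provided fixed-$\alpha_S$ sets. A sketch of Eqs. (9) and (10) in this discrete form follows; for brevity only three of the five sets appear, and all numbers are illustrative.

```python
# Sketch of Eqs. (9) and (10): the overall "PDF+alpha_S" uncertainty as an
# envelope over the discrete fixed-alpha_S PDF sets.  Toy numbers only.

def pdf_alphas_envelope(predictions):
    """predictions: ordered dict, label -> (F_central, dF_plus, dF_minus),
    with the best-fit alpha_S set listed first.
    Returns (F_best, dF_up, dF_down) per Eqs. (9) and (10)."""
    f_best, _, _ = next(iter(predictions.values()))
    hi = max(f + dp for f, dp, _ in predictions.values())   # Eq. (9) max
    lo = min(f - dm for f, _, dm in predictions.values())   # Eq. (10) min
    return f_best, hi - f_best, f_best - lo

preds = {
    "best fit":           (20.0, 0.6, 0.5),
    "lower alpha_S limit": (19.5, 0.5, 0.4),
    "upper alpha_S limit": (20.6, 0.4, 0.4),
}
print(tuple(round(x, 3) for x in pdf_alphas_envelope(preds)))  # -> (20.0, 1.0, 0.9)
```

As the text notes, the envelope extremes need not come from the outermost $\alpha_S$ values, since the PDF uncertainty itself shrinks as $\alpha_S$ moves away from its best-fit value.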

First suppose that $\alpha_S$ is varied within a given range for the *same* central PDFs and their uncertainties, and that $F$ is a monotonic function of $\alpha_S$ (which is usually the case). Then the extreme values of the observable would obviously occur when $\alpha_S$ is at either of its limits. However, in practice, the PDF uncertainty decreases as $\alpha_S$ gets further from its best-fit value. Moreover, correlations between the relevant PDFs and $\alpha_S$ can enhance the $\alpha_S$ dependence of $F$, while anticorrelations will reduce it. Therefore, the extreme values of the observable could, in principle, occur at any intermediate value within a given range of $\alpha_S$. For reasons of economy, we provide PDF sets with uncertainties only for five different fixed values of $\alpha_S$ (i.e. the best-fit $\alpha_S$, $\alpha_S$ fixed at the two limits, and at half these two limits). In fact, we find that for the majority of the processes we consider, the extreme values of the observable come from $\alpha_S$ fixed at either the best-fit value or one of the two limits. Even in the rare cases where the extreme values of $F$ come from $\alpha_S$ fixed at half the limit, omitting these intermediate values would not significantly reduce the overall “PDF+$\alpha_S$” uncertainty. Hence we do not consider it necessary to provide PDF uncertainty sets at further intermediate values.

Since this prescription might seem quite complicated at first sight, we will give a few concrete examples of its application and consequences in the following subsections.^{9}

### 6.1 $W$ and $Z$ total cross sections

In Fig. 13(a) we show the PDF uncertainties on the $W$ total cross section at the Tevatron and LHC for each of the five sets with different fixed $\alpha_S$ values. (The situation is similar for the $Z$ cross sections.) The cross sections are calculated as described in Section 15 of Ref. [1], e.g. using the PDG 2008 [63] electroweak parameters. The increase of the cross section with increasing $\alpha_S$ is due to a combination of two effects. Firstly, there is the effect of the $\alpha_S$ dependence in the (positive) higher-order corrections to the partonic cross section. Secondly, there is the $\alpha_S$ dependence of the (predominantly) quark distributions. In Fig. 13(b) we show the up quark distribution at the relevant scale. The momentum fractions $x = (M_W/\sqrt{s})\,e^{\pm y}$ (assuming $\hat{s} = M_W^2$, i.e. LO kinematics) probed at the Tevatron and LHC at central rapidity ($y = 0$) are indicated. As previously noted, the small-$x$ up quark distribution is slightly correlated with the value of $\alpha_S$ (see also Fig. 11 for other parton flavours). However, when integrating over rapidity to obtain the total cross section, the PDFs will also be sampled at larger (and smaller) values of $x$. From Fig. 13(b), the up quark distribution is *anti*correlated with $\alpha_S$ in the large-$x$ region, which will be sampled more at the Tevatron than at the LHC, effectively cancelling some of the correlation with $\alpha_S$ arising from the sampling of smaller $x$ values. This explains why there is less dependence on $\alpha_S$ at the Tevatron than at the LHC in Fig. 13(a).
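The LO kinematics quoted above are easy to evaluate. The collider energies below are illustrative choices for this sketch; at central rapidity both momentum fractions reduce to $M_W/\sqrt{s}$.

```python
# Illustration of LO kinematics for W production: a W at rapidity y probes
# momentum fractions x_{1,2} = (M_W/sqrt(s)) * exp(+/-y), assuming
# s_hat = M_W^2.  Collider energies are illustrative.
import math

M_W = 80.4  # GeV

def momentum_fractions(sqrt_s, y):
    """Return (x1, x2) probed in W production at rapidity y."""
    tau = M_W / sqrt_s
    return tau * math.exp(y), tau * math.exp(-y)

# At central rapidity (y = 0) both fractions equal M_W/sqrt(s):
for name, sqrt_s in [("Tevatron 1.96 TeV", 1960.0), ("LHC 14 TeV", 14000.0)]:
    x1, _ = momentum_fractions(sqrt_s, 0.0)
    print(name, round(x1, 4))
```

The roughly order-of-magnitude smaller central-rapidity $x$ at the LHC is why the small-$x$ behaviour of the quark distributions matters more there than at the Tevatron.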

We also indicate in Fig. 13(a) how the spread of the five individual predictions can be used to give an overall “PDF+$\alpha_S$” uncertainty, which is larger than that obtained when $\alpha_S$ is fixed at the global best-fit value (as is usually done). Note that the extreme values for the $W$ cross section at the Tevatron arise when $\alpha_S$ is shifted to half its 1-$\sigma$ limit. In general, the extreme value of a given observable (or a PDF itself) occurs when $\alpha_S$ is shifted by some fraction of the 1-$\sigma$ uncertainty from the best-fit value. Often, perhaps in most cases, this fraction is unity, but in other cases it is smaller, as we have here for $W$ production at the Tevatron. However, we see from Fig. 13(a) that the extreme values of the half-shift results are not far from the extreme values of the full-shift results.

Note that at the LHC there is an asymmetry in the extra uncertainty generated by allowing $\alpha_S$ to vary, with more increase in the upwards direction than downwards. This is a consequence of a similar asymmetry in the PDF uncertainty of the up quark distribution when $\alpha_S$ is fixed at its upper limit, shown in Fig. 13(b): the PDF uncertainty in the relevant $x$ region gives more freedom for upwards movement relative to the best fit than downwards. This is because the HERA structure function data at small $x$ would prefer a little more evolution than in the best global fit at NNLO. The HERA data therefore allow more freedom for extra evolution when $\alpha_S$ increases than for reduced evolution when $\alpha_S$ decreases.^{10}

Tevatron, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |
NNLO (PDF+$\alpha_S$ unc.) | |

LHC, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |
NNLO (PDF+$\alpha_S$ unc.) | |

LHC, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |
NNLO (PDF+$\alpha_S$ unc.) | |

LHC, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |
NNLO (PDF+$\alpha_S$ unc.) | |

LHC, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |
NNLO (PDF+$\alpha_S$ unc.) | |

LHC, TeV | (nb) | (nb)
---|---|---
NNLO (PDF unc. only) | |