EPS09 - Global NLO analysis of nuclear PDFs

In this talk, we present our recent work on next-to-leading order (NLO) nuclear parton distribution functions (nPDFs), which we call EPS09. As an extension to earlier NLO analyses, we complement the deep inelastic scattering and Drell-Yan dilepton data by inclusive midrapidity pion production measurements from RHIC to reduce the otherwise large freedom of the nuclear gluon densities. In addition, our Hessian-type error analysis, leading to a collection of nPDF error sets, is the first of its kind among the nPDF analyses.

1 Introduction

The global analyses of the free nucleon parton distribution functions (PDFs) are based on the asymptotic freedom of QCD, parton evolution and factorization. These features provide the justification for computing hard-process cross-sections, schematically as

$$\sigma^{AB \to k + X} \simeq \sum_{i,j} f_i^A(Q^2) \otimes f_j^B(Q^2) \otimes \hat\sigma^{ij \to k + X},$$

where the $f_i$ denote the scale-dependent PDFs, and the $\hat\sigma^{ij \to k + X}$ are perturbatively computable coefficients. The factorization theorem has turned out to work extremely well with more and more different types of data included in the free proton analyses. In the case of bound nucleons factorization is not as well-established, but it has nevertheless proven to provide a very good description [1, 2, 3, 4, 5, 6] of the observed nuclear modifications in deep inelastic scattering (DIS) and Drell-Yan (DY) dilepton production involving nuclear targets. Here, we summarize our recent NLO analysis of the nuclear PDFs and, in particular, their uncertainties [7].
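As a rough numerical sketch of such a factorized convolution, the following toy evaluates $(f \otimes C)(x) = \int_x^1 (dz/z)\, f(z)\, C(x/z)$ with a trapezoidal rule. The PDF shape `f_toy`, the flat "coefficient function" `C_toy` and the grid size are pure illustration, not the real NLO kernels:

```python
import numpy as np

def convolve(f, coeff, x, n=2000):
    """Numerically evaluate the collinear convolution
    (f (x) C)(x) = int_x^1 dz/z f(z) C(x/z)
    with a simple trapezoidal rule (toy inputs, NOT real QCD kernels)."""
    z = np.linspace(x, 1.0, n)
    integrand = f(z) * coeff(x / z) / z
    # trapezoidal rule
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

# hypothetical valence-like "PDF" shape and a flat toy coefficient
f_toy = lambda z: z**-0.5 * (1.0 - z)**3
C_toy = lambda y: np.ones_like(y)

print(convolve(f_toy, C_toy, 0.01))
```

The structure (sum over parton species $i,j$ aside) is the same for any collinear-factorized observable; only the coefficient functions change with the process and the perturbative order.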

2 Analysis method and framework

Our analysis follows a pattern very similar to that of most free proton analyses:

A. The PDFs are parametrized at an initial scale $Q_0^2$ imposing the sum rules. In this work we do not parametrize the absolute PDFs, but rather the $x$ and $A$ dependences of the nuclear modification factors on top of a fixed set of free proton PDFs:

$$f_i^A(x,Q_0^2) = R_i^A(x,Q_0^2)\, f_i^{\rm CTEQ}(x,Q_0^2).$$

Above, $f_i^{\rm CTEQ}$ refers to a CTEQ set of the free proton PDFs [8] in the zero-mass variable flavour number scheme, and we consider three different modification factors: $R_V^A$ for both $u$ and $d$ valence quarks, $R_S^A$ for all sea quarks, and $R_G^A$ for gluons.

B. The nuclear PDFs are evolved to other perturbative scales by the DGLAP equations. An efficient numerical solver for the parton evolution is an indispensable ingredient of any parton analysis, but in the case of nuclear PDFs this is even more critical, as we need to repeatedly do the evolution separately for 13 different nuclei. On the other hand, the relatively low $Q^2$-values spanned by the data utilized, and the fact that the data come as ratios, which are more stable against small evolution inaccuracies, make the task somewhat easier. Our DGLAP code is based on a semi-analytical method described e.g. in [9, 10].
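A minimal sketch of why (semi-)analytical moment-space methods are fast: at leading order the non-singlet DGLAP evolution is analytic in Mellin space, so a moment at any scale follows from a closed formula instead of a differential-equation solve. The values of $\Lambda^2$, $n_f$ and the input moments below are illustrative assumptions, and this toy is not the actual solver of [9, 10]:

```python
import math

CF, NF = 4.0/3.0, 4            # color factor and (assumed) number of active flavours
BETA0 = 11.0 - 2.0*NF/3.0
LAMBDA2 = 0.04                 # illustrative Lambda_QCD^2 in GeV^2

def alpha_s(Q2):
    """LO running coupling."""
    return 4.0*math.pi / (BETA0 * math.log(Q2/LAMBDA2))

def gamma_ns(N):
    """LO non-singlet anomalous dimension: N-th Mellin moment of P_qq."""
    S1 = sum(1.0/k for k in range(1, N+1))        # harmonic number
    return CF * (1.5 + 1.0/(N*(N+1)) - 2.0*S1)

def evolve_moment(qN0, N, Q02, Q2):
    """Evolve the N-th non-singlet Mellin moment from Q0^2 to Q^2 (LO, analytic)."""
    r = alpha_s(Q02) / alpha_s(Q2)
    return qN0 * r**(2.0*gamma_ns(N)/BETA0)

# the first moment (valence number) is conserved; higher moments decrease
print(evolve_moment(2.0, 1, 1.69, 100.0))   # -> 2.0
print(evolve_moment(0.3, 2, 1.69, 100.0))
```

The first moment staying fixed is the baryon-number sum rule at work: $\gamma_{\rm NS}(1)=0$ exactly, which is a convenient built-in sanity check for any evolution code.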

C. The cross-sections are computed using the factorization theorem.

D. The computed cross-sections are compared with the experimental data, and the initial parametrization is varied to establish an optimal fit to the data. We define the best agreement as the minimum of a generalized $\chi^2$ function

$$\chi^2(\{a\}) \equiv \sum_N w_N\, \chi^2_N(\{a\}), \qquad \chi^2_N(\{a\}) \equiv \left(\frac{1-f_N}{\sigma_N^{\rm norm}}\right)^2 + \sum_{i \in N} \left(\frac{f_N D_i - T_i(\{a\})}{\sigma_i}\right)^2 .$$

Within each data set $N$, $D_i$ denotes the experimental data value with point-to-point uncertainty $\sigma_i$, and $T_i$ is the theory prediction corresponding to a parameter set $\{a\}$. For the pion data, the PHENIX experiment estimates an overall normalization uncertainty $\sigma_N^{\rm norm}$, and the normalization factor $f_N$ is non-trivial, i.e. $f_N \neq 1$. Its value is determined by minimizing $\chi^2_N$, and the final $f_N$ is thus an output of the analysis. The weight factors $w_N$ are used to amplify the importance of those data sets whose content is physically relevant, but whose contribution to $\chi^2$ would otherwise be too small to be noticed by an automated minimization.
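Because the per-set $\chi^2$ is quadratic in the normalization factor, its minimizing value has a closed form. The following sketch (hypothetical numbers, not the actual fitting code) determines the normalization analytically and returns the resulting $\chi^2$:

```python
def chi2_with_norm(data, errs, theory, sigma_norm):
    """Per-data-set chi^2 with an overall normalization factor f applied to the
    data, where f is fixed by analytically minimizing the quadratic chi^2.
    Returns (chi2, f)."""
    num = 1.0/sigma_norm**2 + sum(d*t/e**2 for d, t, e in zip(data, theory, errs))
    den = 1.0/sigma_norm**2 + sum(d*d/e**2 for d, e in zip(data, errs))
    f = num / den                       # d(chi^2)/df = 0
    chi2 = ((1.0 - f)/sigma_norm)**2 \
         + sum(((f*d - t)/e)**2 for d, t, e in zip(data, theory, errs))
    return chi2, f

# toy usage: theory 10% below the data, 5% normalization uncertainty
print(chi2_with_norm([1.0, 2.0], [0.1, 0.2], [0.9, 1.8], 0.05))
```

Setting the derivative with respect to $f$ to zero gives $f = (1/\sigma_{\rm norm}^2 + \sum_i D_i T_i/\sigma_i^2)/(1/\sigma_{\rm norm}^2 + \sum_i D_i^2/\sigma_i^2)$, which is what the function implements; when theory and data agree exactly, $f=1$ and $\chi^2=0$.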

E. The uncertainties are estimated. Besides finding the central set of parameters $S_0$ that optimally fits the data, quantifying the uncertainties stemming from the experimental errors has become an integral part of the modern PDF fits. In this work, we employ the Hessian method [11], which rests on a quadratic approximation

$$\chi^2 \approx \chi^2_0 + \sum_{ij} \delta a_i H_{ij} \delta a_j$$

in the vicinity of the minimum $\chi^2_0$. Non-zero off-diagonal elements in the Hessian matrix $H_{ij}$ defined above are a signal of correlations between the original fit parameters, and it is useful to make a change of variables that diagonalizes the Hessian matrix. Constructing the so-called PDF error sets is what ultimately makes the Hessian method so useful. Each set $S_k^\pm$ is obtained by displacing the fit parameters in the positive/negative direction along the $k$-th eigenvector of the Hessian matrix such that $\chi^2$ grows by a certain amount $\Delta\chi^2$. Using these sets, the upper and lower uncertainty of a quantity $X$ can be written e.g. as

$$\left(\Delta X^\pm\right)^2 = \sum_k \left[ \max\left\{ \pm\left[X(S_k^+)-X(S_0)\right],\; \pm\left[X(S_k^-)-X(S_0)\right],\; 0 \right\} \right]^2, \quad (1)$$

where $X(S_k^\pm)$ denotes the value of the quantity computed with the set $S_k^\pm$ and $X(S_0)$ is the best-fit value. Requiring each data set to remain close to its 90%-confidence range, we end up with our choice of $\Delta\chi^2$ (see footnote 1).
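The asymmetric error-set prescription described above can be sketched directly; the best-fit value and the error-set values in the usage line are hypothetical:

```python
import math

def hessian_errors(X0, X_plus, X_minus):
    """Asymmetric uncertainties of a quantity X from paired Hessian error sets:
    DX+ collects the largest upward excursion per eigendirection k,
    DX- the largest downward one, each added in quadrature."""
    up = sum(max(xp - X0, xm - X0, 0.0)**2 for xp, xm in zip(X_plus, X_minus))
    dn = sum(max(X0 - xp, X0 - xm, 0.0)**2 for xp, xm in zip(X_plus, X_minus))
    return math.sqrt(up), math.sqrt(dn)

# toy usage with two eigendirections
print(hessian_errors(1.0, [1.3, 1.0], [0.6, 0.8]))
```

Note that for a given eigendirection both displaced sets can move $X$ to the same side of the best fit, which is why each direction contributes its maximal excursion (or zero) rather than simply $S_k^+$ to the upper and $S_k^-$ to the lower error.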

3 Results and Conclusions

Figure 1: The obtained nuclear modifications for lead at the initial scale $Q_0^2$ and at a higher scale. The thick black lines indicate the best fit, whereas the dotted green curves denote the individual error sets $S_k^\pm$ which combine to the shaded bands as in Eq. (1).

We briefly go through the main results from the present analysis, starting with Fig. 1 where we plot the obtained modifications for lead at two scales, thereby making their scale dependence clearly visible. It should be noticed that even the rather large uncertainty band of the small-$x$ gluons shrinks in the scale evolution, a clear prediction of the DGLAP approach that might be testable at future colliders.

As the DIS and DY data constitute the major part of the available experimental data, we display in Fig. 2 some representative examples of the measured nuclear modifications with respect to deuterium,

$$R_{F_2}^A(x,Q^2) \equiv \frac{F_2^A(x,Q^2)}{F_2^{\rm d}(x,Q^2)}, \qquad R_{\rm DY}^A \equiv \frac{d\sigma_{\rm DY}^{pA}/dx\,dM^2}{d\sigma_{\rm DY}^{p{\rm d}}/dx\,dM^2},$$

for different nuclei, compared with EPS09.

Figure 2: The calculated $R_{F_2}^A$ and $R_{\rm DY}^A$ compared with the NMC [15, 16] and E772 [17] data.

The shaded blue bands denote the uncertainty propagated from the 30 EPS09 error sets and, as should be emphasized, their size is comparable to the experimental errors, backing up our choice for $\Delta\chi^2$.

Figure 3: Left: Comparison of the nuclear gluon modifications from HKN07 [5], nDS [6] and this work, EPS09 [7]. Right: The computed $R^\pi_{\rm dAu}$ for inclusive pion production compared with the PHENIX [18] and STAR [19] data, offset by constant factors for clarity.

The nuclear modification for the inclusive pion yield is defined as

$$R^\pi_{\rm dAu} \equiv \frac{1}{\langle N_{\rm coll} \rangle} \frac{d^2 N^\pi_{\rm dAu}/dp_T\,dy}{d^2 N^\pi_{pp}/dp_T\,dy},$$

where $\langle N_{\rm coll} \rangle$ denotes the number of binary nucleon-nucleon collisions, and $p_T$ and $y$ the pion's transverse momentum and rapidity. A comparison with the PHENIX and STAR data is shown in Fig. 3. Evidently, the shape of the spectrum, which in our calculation is a reflection of the similar shape in the gluon modification, gets well reproduced by EPS09. Let us mention that the shape is practically independent of the fragmentation functions used in the calculation: modern sets like [12, 13, 14] all give equal results.
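The ratio itself is a one-liner; the per-event yields and $\langle N_{\rm coll} \rangle$ below are hypothetical numbers, only meant to show the normalization convention:

```python
def R_dAu(yield_dAu, yield_pp, n_coll):
    """Nuclear modification factor: per-event dAu yield divided by
    <N_coll> times the per-event pp yield. R = 1 means no nuclear effects."""
    return yield_dAu / (n_coll * yield_pp)

# toy usage: dAu yield exactly <N_coll> times the pp yield -> R = 1
print(R_dAu(9.0, 1.0, 9.0))   # -> 1.0
```

Values below unity thus signal suppression relative to an incoherent superposition of nucleon-nucleon collisions, and values above unity an enhancement.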

Figure 3 also presents a comparison of the EPS09 gluon modifications with the earlier NLO analyses. The significant scatter of the curves highlights the difficulty of pinning down the nuclear modifications from the DIS and DY data alone; especially the behaviour of HKN07 looks different. Consequently, also the predictions for pion production differ significantly, as is easily seen in Fig. 3. This is actually good news, as this type of data, especially with better statistics, may eventually discriminate between the different proposed gluon modifications.

Attention should be paid to the experimentally observed scaling violations and to the fact that the DGLAP dynamics reproduces them well. Such effects are most cleanly visible e.g. in the small-$x$ structure-function ratios as functions of $Q^2$, of which Fig. 4 shows an example.

Figure 4: The calculated scale evolution of the structure-function ratio $F_2^{\rm Sn}/F_2^{\rm C}$ compared with the NMC data [20].

In summary, we argue that the very good agreement found with the experimental data, especially the correct description of the scale-breaking effects, is compelling evidence for the applicability of collinear factorization in a nuclear environment. In addition to the best fit, we release [21] 30 nPDF error sets for practical use, encoding the neighborhood of the $\chi^2$ minimum. Although not discussed here, we have also performed a leading-order counterpart of the NLO analysis, as we want to provide the uncertainty tools also for this widely-used framework. While the best-fit quality is very similar in LO and NLO, the uncertainty bands become smaller when going to higher order.


  1. See Appendix A of Ref. [7] for the detailed explanation.


  1. K. J. Eskola, V. J. Kolhinen and P. V. Ruuskanen, Nucl. Phys. B 535 (1998) 351 [arXiv:hep-ph/9802350].
  2. K. J. Eskola, V. J. Kolhinen and C. A. Salgado, Eur. Phys. J. C 9 (1999) 61 [arXiv:hep-ph/9807297].
  3. K. J. Eskola, V. J. Kolhinen, H. Paukkunen and C. A. Salgado, JHEP 0705 (2007) 002 [arXiv:hep-ph/0703104].
  4. K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 0807 (2008) 102 [arXiv:0802.0139 [hep-ph]].
  5. M. Hirai, S. Kumano and T. H. Nagai, arXiv:0709.3038 [hep-ph].
  6. D. de Florian and R. Sassot, Phys. Rev. D 69 (2004) 074028 [arXiv:hep-ph/0311227].
  7. K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 0904 (2009) 065 [arXiv:0902.4154 [hep-ph]].
  8. D. Stump, J. Huston, J. Pumplin, W. K. Tung, H. L. Lai, S. Kuhlmann and J. F. Owens, JHEP 0310 (2003) 046 [arXiv:hep-ph/0303013].
  9. P. Santorelli and E. Scrimieri, Phys. Lett. B 459 (1999) 599 [arXiv:hep-ph/9807572].
  10. H. Paukkunen, PhD Thesis, arXiv:0906.2529 [hep-ph].
  11. J. Pumplin et al., Phys. Rev. D 65 (2001) 014013 [arXiv:hep-ph/0101032].
  12. B. A. Kniehl, G. Kramer and B. Potter, Nucl. Phys. B 582 (2000) 514 [arXiv:hep-ph/0010289].
  13. S. Albino, B. A. Kniehl and G. Kramer, arXiv:0803.2768 [hep-ph].
  14. D. de Florian, R. Sassot and M. Stratmann, Phys. Rev. D 75 (2007) 114010 [arXiv:hep-ph/0703242].
  15. M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 12 [arXiv:hep-ex/9504002].
  16. P. Amaudruz et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 3 [arXiv:hep-ph/9503291].
  17. D. M. Alde et al., Phys. Rev. Lett. 64 (1990) 2479.
  18. S. S. Adler et al. [PHENIX Collaboration], Phys. Rev. Lett. 98 (2007) 172302 [arXiv:nucl-ex/0610036].
  19. J. Adams et al. [STAR Collaboration], Phys. Lett. B 637 (2006) 161 [arXiv:nucl-ex/0601033].
  20. M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 481 (1996) 23.
  21. https://www.jyu.fi/fysiikka/en/research/highenergy/urhic/nPDFs