Global reanalysis of nPDFs


K. J. Eskola, V. J. Kolhinen, H. Paukkunen
Department of Physics, University of Jyväskylä and Helsinki Institute of Physics, Finland
E-mail: kari.eskola,vesa.kolhinen,hannu.paukkunen@phys.jyu.fi
Speaker. This work was financially supported by the Academy of Finland, Projects 73101, 80385, 206024 and 115262.
   C.A. Salgado
Dipartimento di Fisica, Università di Roma “La Sapienza” and INFN, Rome, Italy
E-mail: carlos.salgado@cern.ch
Financially supported by the 6th Framework Programme of the European Community under the Marie Curie contract MEIF-CT-2005-024624.
Abstract

In this talk, we present the results from our recent global reanalysis of nuclear parton distribution functions (nPDFs), where the DGLAP-evolving nPDFs are constrained by nuclear hard process data from deep inelastic lepton-nucleus scattering (DIS) and the Drell-Yan (DY) process in proton-nucleus collisions, and by sum rules. The main improvements over our earlier work EKS98 are the automated χ^2 minimization, better-controllable fit functions, and the possibility of error estimates. The obtained 16-parameter fit to N = 514 datapoints is good, χ^2/N = 0.80. The fit-quality comparison and the error estimates obtained show that the old EKS98 parametrization is fully consistent with the present automated reanalysis. A comparison with other global nPDF analyses is presented as well. Within the DGLAP framework we also discuss the possibility of incorporating a clearly stronger gluon shadowing, which is suggested by the RHIC BRAHMS data from d+Au collisions.


High-p_T Physics at LHC, March 23-27, 2007, University of Jyväskylä, Jyväskylä, Finland

1 Introduction

Inclusive cross sections of hard processes in high-energy hadronic and nuclear collisions are computable using the collinear factorization theorem of QCD. In these computations the short-distance pieces, the squared perturbative QCD (pQCD) matrix elements, are contained in the perturbatively calculable partonic cross sections. The long-distance nonperturbative input is contained in the universal, process-independent parton distribution functions (PDFs), f_i(x,Q^2), which depend on the momentum fraction x carried by the colliding parton i, the factorization scale Q^2, and the type of the colliding hadron or nucleus. Once the PDFs are known at an initial scale Q_0^2, the DGLAP equations [1] predict their behaviour at other (perturbative) scales.
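Schematically, the factorized inclusive cross section referred to above can be written as follows; the notation here is generic and is not quoted from the original text:

\[
\sigma^{AB\to k+X} \;=\; \sum_{i,j} f_i^{A}(x_1,Q^2)\,\otimes\, f_j^{B}(x_2,Q^2)\,\otimes\, \hat{\sigma}^{\,ij\to k+X'} ,
\]

where the symbol ⊗ denotes a convolution over the parton momentum fractions and \hat{\sigma} contains the pQCD matrix elements.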

In the global analysis of PDFs, the goal is to determine the DGLAP-evolving PDFs in a model-independent way on the basis of the constraints offered by the sum rules for momentum, baryon number and charge conservation, and in particular by the multitude of hard QCD-process data from hadronic and nuclear collisions. The global analysis becomes, however, cumbersome already at the hadronic level, since the data from which the PDFs are to be determined do not lie along constant scales in the (x, Q^2) plane. Furthermore, the kinematical ranges of the data from different measurements often do not overlap, and the precision of the data may vary. For these reasons, the global analysis of the PDFs usually proceeds through the following steps: (i) Choose a suitably flexible functional form for the PDFs, expressed in terms of enough, but not too many, parameters at an initial scale Q_0 of the order of 1 GeV. Use the sum rules to reduce the number of parameters. (ii) Evolve the PDFs to higher scales according to the DGLAP equations. (iii) Compare with the available hard process data and compute an overall χ^2 to quantify the quality of the obtained fit. (iv) Iterate the initial parameter values until a best (local) minimum of χ^2 in the multi-dimensional parameter space, a best fit, is found.
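The four steps above can be cast, very schematically, in code. The sketch below is purely illustrative: the helper functions (initial_nPDFs, dglap_evolve, predict) and the toy data points are hypothetical placeholders, not the actual machinery of [9], which uses a full DGLAP evolution code and MINUIT.

import numpy as np
from scipy.optimize import minimize

# Toy data: (observable id, x, Q^2, measured value, total error) -- dummy numbers only.
DATA = [
    ("F2_ratio", 0.0125, 5.0, 0.95, 0.02),
    ("DY_ratio", 0.05, 20.0, 0.97, 0.03),
]

def initial_nPDFs(params, x, Q0_sq):
    """Step (i): flexible fit form for the nPDFs at the initial scale Q0^2 (placeholder)."""
    ...

def dglap_evolve(pdfs_at_Q0, Q2):
    """Step (ii): evolve the initial nPDFs up to the scale Q2 (placeholder)."""
    ...

def predict(obs_id, x, Q2, params):
    """LO prediction for one data point, built from the evolved nPDFs."""
    pdfs_at_Q2 = dglap_evolve(initial_nPDFs(params, x, Q0_sq=1.69), Q2)
    return 1.0  # placeholder standing in for the real LO observable built from pdfs_at_Q2

def chi2(params):
    """Step (iii): overall chi^2 over all data points (uncorrelated errors for simplicity)."""
    return sum(((predict(o, x, Q2, params) - val) / err) ** 2
               for (o, x, Q2, val, err) in DATA)

# Step (iv): iterate the initial parameters until a (local) chi^2 minimum is found.
# (Nelder-Mead is only a stand-in for the MINUIT minimization used in the analysis.)
best = minimize(chi2, x0=np.zeros(16), method="Nelder-Mead")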

The global analyses carried out by the MRS group [2] and by the CTEQ collaboration [3] have been quite successful in pinning down the PDFs of the free proton at the next-to-leading order (NLO) level of pQCD, and those analyses are now moving to the NNLO level. For the nuclear PDFs (nPDFs), three groups have so far presented results from a global analysis:

  • EKS98 [4, 5] was the first global analysis performed for the nPDFs. This leading-order (LO) analysis demonstrated that the measured cross sections for deep inelastic lepton-nucleus scattering (DIS) and for Drell-Yan (DY) dilepton production in proton-nucleus collisions, and in particular the Q^2-slopes of the ratio F_2^Sn/F_2^C, can all be reproduced, with the momentum and baryon number sum rules fulfilled, simultaneously within the DGLAP framework. The original data fitting in EKS98 was, however, done by eye only.

  • HKM [6] and HKN [7] were the first nPDF global analyses with the χ^2 minimization automated and the uncertainties also estimated. The nuclear DY data were not included in HKM but were added in HKN. These analyses were still at the LO level.

  • nDS [8] was the first NLO global analysis for the nPDFs.

The main goals of the global reanalysis of nPDFs which we have recently performed in [9], and which we discuss in this talk, can be summarized as follows: As the main improvement over EKS98, we now automate the χ^2 minimization. We check whether the already good fits obtained in EKS98 can still be improved. We now also report uncertainty bands for the EKS98-type nuclear effects of the PDFs. Finally, we want to check whether the DIS and DY data could allow a stronger gluon shadowing than obtained in EKS98, HKN and nDS. The motivation for this is the BRAHMS data [10] for inclusive hadron production in d+Au collisions at RHIC, which show a systematic suppression relative to p+p at forward rapidities.

2 The framework

We define the nPDFs, f_i^A(x,Q^2), as the PDFs of bound protons. By nuclear modifications, R_i^A(x,Q^2), we refer to the modifications relative to the free proton PDFs,

    R_i^A(x,Q^2) ≡ f_i^A(x,Q^2) / f_i^p(x,Q^2),          (2.1)

which in turn are here supposed to be fully known and which are taken from the CTEQ6L1 set [11]. Above, i denotes the parton type and A the mass number of the nucleus. The PDFs of the bound neutrons are obtained through isospin symmetry (u^{n/A} = d^{p/A}, d^{n/A} = u^{p/A}, etc.), which is exact for isoscalar nuclei and assumed to hold also for the non-isoscalar ones. The initial scale Q_0^2 is the lowest scale of the CTEQ6L1 distributions, Q_0^2 = 1.69 GeV^2. The small nuclear effects for deuterium are neglected.
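As a concrete illustration of the isospin construction mentioned above, the following sketch builds per-nucleon PDFs for a nucleus with Z protons and A - Z neutrons from the bound-proton PDFs. The toy functional forms and function names are hypothetical; only the u <-> d swap and the (Z, A) bookkeeping reflect the text.

def f_proton_A(flavour, x, Q2):
    """Bound-proton PDF f_i^{p/A}(x,Q2): toy placeholder shapes, not real PDFs."""
    toy = {"u": 2.0 * (1 - x) ** 3, "d": 1.0 * (1 - x) ** 4,
           "sea": 0.2 * (1 - x) ** 7 / x, "g": 1.5 * (1 - x) ** 5 / x}
    return toy[flavour]

def f_neutron_A(flavour, x, Q2):
    """Bound-neutron PDF from isospin symmetry: u <-> d swapped, sea and gluon unchanged."""
    swap = {"u": "d", "d": "u"}
    return f_proton_A(swap.get(flavour, flavour), x, Q2)

def f_per_nucleon(flavour, x, Q2, Z, A):
    """Average PDF per nucleon in a nucleus with Z protons and A - Z neutrons."""
    return (Z * f_proton_A(flavour, x, Q2)
            + (A - Z) * f_neutron_A(flavour, x, Q2)) / A

# Example: average u-quark distribution per nucleon in Pb (Z = 82, A = 208)
print(f_per_nucleon("u", x=0.1, Q2=1.69, Z=82, A=208))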

As in EKS98, we consider only three different modifications at Q_0^2: R_V^A for the valence quarks, R_S^A for all sea quarks, and R_G^A for the gluons. Further details cannot, unfortunately, be specified, simply due to the lack of data. Each of these ratios consists of three pieces, which are matched together at the antishadowing maximum at x = x_a and at the EMC minimum at x = x_e (cf. Fig. 1); the explicit expressions for the three pieces are given in [9].

As explained in [9], we convert the parameters of these pieces into the following, more transparent set of seven parameters:

  y_0, the value of R_i^A(x) as x → 0, defining where shadowing levels off,
  x_s, a slope factor in the exponential,
  x_a, y_a, the position and height of the antishadowing maximum,
  x_e, y_e, the position and height of the EMC minimum,
  β, the slope of the divergence of R_i^A(x) caused by Fermi motion at x → 1.

The A-dependence of the nPDFs is contained in the A-dependence of each of these parameters, taken to be of the following simple 2-parameter form,

    z_i(A) = z_i(A_ref) (A/A_ref)^{p_{z_i}},          (2.2)

where z_i stands for any of the parameters above, p_{z_i} is the corresponding power, and where Carbon (A_ref = 12) is chosen as the reference nucleus.

To reduce the number of parameters from the full set (14 for each of the three modifications, cf. Table 1) down to our final set of 16 free parameters to be determined by χ^2 minimization with the MINUIT routine [12], we impose baryon number and momentum conservation and fix the initial large-x gluon and sea quark modifications (which in practice remain unconstrained by the data) to follow the valence quark modification. A lot of manual labour was still required for finding converging fits and suitable starting values and ranges for the 16 free fit parameters.
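To illustrate how the sum rules remove free parameters, the sketch below imposes baryon-number and momentum conservation numerically on a toy set of modifications, fixing one normalization from each constraint. All functional forms, parameter names and numbers are hypothetical placeholders; only the two constraint integrals follow the text.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# --- toy free-proton PDFs at Q0 (placeholders, not CTEQ6L1) ----------------
def xuv(x):  return 2.0 * x ** 0.5 * (1 - x) ** 3      # x u_V(x)
def xdv(x):  return 1.0 * x ** 0.5 * (1 - x) ** 4      # x d_V(x)
def xsea(x): return 0.4 * (1 - x) ** 7                 # x Sigma_sea(x)
def xg(x):   return 1.8 * (1 - x) ** 5                 # x g(x)

# --- toy nuclear modifications with one free normalization each ------------
def R_V(x, nV):  return nV * (1 - 0.2 * np.exp(-x / 0.05))   # valence
def R_S(x):      return 1 - 0.3 * np.exp(-x / 0.05)          # sea (kept fixed here)
def R_G(x, nG):  return nG * (1 - 0.4 * np.exp(-x / 0.05))   # gluon

# Baryon number: the integral of R_V (u_V + d_V) must equal the free-proton
# value (= 3 for properly normalized PDFs; here computed from the toy set).
def baryon_sum(nV):
    return quad(lambda x: R_V(x, nV) * (xuv(x) + xdv(x)) / x, 1e-4, 1)[0]

target_baryon = quad(lambda x: (xuv(x) + xdv(x)) / x, 1e-4, 1)[0]
nV = brentq(lambda n: baryon_sum(n) - target_baryon, 0.5, 2.0)

# Momentum: the sum over flavours of the integral of x R_i f_i must equal the
# free-proton momentum sum; this fixes the gluon normalization nG.
def momentum_sum(nG):
    return quad(lambda x: R_V(x, nV) * (xuv(x) + xdv(x))
                          + R_S(x) * xsea(x)
                          + R_G(x, nG) * xg(x), 1e-4, 1)[0]

target_mom = quad(lambda x: xuv(x) + xdv(x) + xsea(x) + xg(x), 1e-4, 1)[0]
nG = brentq(lambda n: momentum_sum(n) - target_mom, 0.2, 3.0)
print(f"fixed normalizations: nV = {nV:.3f}, nG = {nG:.3f}")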

3 Results

The data sets against which the best fit was determined are the DIS data from NMC [13, 14, 15, 16], FNAL E665 [17] and SLAC E-139 [18], and the DY data from FNAL E772 [19] and FNAL E866 [20]. For the available A-systematics and other details, consult Table 1 in [9]. (Correction: the NMC 96 data for Sn/C, used in our reanalysis, should appear in Table 1 of [9] as well.)

The obtained parameters corresponding to the best fit found are shown in Table 1. The goodness of the fit was χ^2 = 410.15 for N = 514 data points and 16 free parameters, which corresponds to χ^2/N = 0.798 and χ^2/d.o.f. = 0.824.
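With the standard (uncorrelated-errors) definition of χ^2, these numbers follow directly from the entries of Table 2; the detailed error treatment is described in [9]:

\[
\chi^2=\sum_{\text{data}}\left(\frac{T_i-E_i}{\sigma_i}\right)^2,\qquad
\frac{\chi^2}{N}=\frac{410.15}{514}\approx 0.798,\qquad
\frac{\chi^2}{N-16}=\frac{410.15}{498}\approx 0.824,
\]

where T_i denotes the computed value, E_i the measured value and σ_i the error of data point i.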

 Param.   Valence          Sea              Gluon
   1      baryon sum       0.88909          momentum sum
   2      baryon sum       -8.03454E-02     momentum sum
   3      0.025 (*)        0.100 (*)        0.100 (*)
   4      0, fixed         0, fixed         0, fixed
   5      0.12190          0.14011          as valence
   6      0, fixed         0, fixed         0, fixed
   7      0.68716          as valence       as valence
   8      0, fixed         0, fixed         0, fixed
   9      1.03887          0.97970          1.071 (*)
  10      1.28120E-2       -1.28486E-2      3.150E-2 (*)
  11      0.91050          as valence       as valence
  12      -2.82553E-2      as valence       as valence
  13      0.3              as valence       as valence
  14      0, fixed         as valence       as valence

(*) parameter drifted to a limit (upper or lower) of its allowed range; see [9] for details.

Table 1: The obtained final values of the free and fixed parameters defining the initial modifications R_V^A, R_S^A and R_G^A at Q_0^2 = 1.69 GeV^2. The powers p_{z_i} define the A-dependence in the form of Eq. (2.2); the other parameters are given for the reference nucleus A_ref = 12. Parameters which drifted to the upper or lower limit of their allowed range are marked with (*); see [9] for details.

The obtained initial nuclear modifications at Q_0^2 = 1.69 GeV^2 are shown in Fig. 1 for selected nuclei. Figs. 2-4 show the obtained good agreement with the DIS and DY data. The computed results are shown with filled symbols, the data with open ones. For further details, consult the figure captions.

Figure 1: Initial nuclear modifications R_V^A (solid lines), R_S^A (dotted lines), R_G^A (dashed lines) and the resulting R_{F_2}^A (dotted-dashed lines) for A = 12, 40, 117 and 208 as a function of x at Q_0^2 = 1.69 GeV^2.

Figure 2: Left: The calculated ratios F_2^A/F_2^D (filled symbols) against the data from SLAC E-139 (triangles) [18], E665 (diamonds) [17] and NMC (squares and circles) [13, 14]. The asterisks denote our results calculated at Q_0^2 when the Q^2 of the data lies below Q_0^2. Right: Comparison with the SLAC E-139 data [18] at different fixed scales.

Figure 3: Left: The computed ratios F_2^A/F_2^C (filled squares) and the NMC data [15] (open squares). Right: The calculated scale evolution (solid lines) of the ratio F_2^{Sn}/F_2^{C} against the NMC data [16] for various fixed values of x. The inner error bars are the statistical ones, the outer ones represent the statistical and systematic errors added in quadrature.
Figure 4: Left: The computed LO DY ratio σ_DY^{pA}/σ_DY^{pD} (filled squares) against the E772 data [19] (open squares). Right: The computed LO DY ratios (filled squares) compared with the E866 data [20] (open squares) as a function of x_2 at four different invariant-mass (M) bins.

We obtain uncertainty estimates for the initial nuclear modifications using the Hessian error-matrix output provided by MINUIT (for details and references, see again [9]). These bands are denoted as "Fit errors" in Fig. 5. To obtain physically more relevant large-x errors for R_S^A and R_G^A, which in the analysis were fixed to the valence modification at large x, we keep their small-x parameters fixed and release the large-x parameters for each of them in turn. This results in the "Large-x errors" shown in Fig. 5. The estimated total errors are then the yellow bands.
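Schematically, such "Fit errors" follow from standard linear error propagation with the parameter covariance matrix, the inverse Hessian of χ^2 at the minimum, returned by MINUIT; the notation below is generic and is not quoted from [9]:

\[
\left[\Delta R_i^A(x)\right]^2 \;\simeq\; \Delta\chi^2 \sum_{m,n}
\frac{\partial R_i^A(x)}{\partial a_m}\,\left(H^{-1}\right)_{mn}\,
\frac{\partial R_i^A(x)}{\partial a_n},
\]

where the a_m are the fit parameters and Δχ^2 is the adopted tolerance (Δχ^2 = 1 for the standard MINUIT parabolic errors).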

Figure 5: Error estimates for R_V^A, R_S^A and R_G^A for Lead. Fit errors are shown by the dashed lines. For the large-x sea quark and gluon modifications the errors shown by the dotted lines were calculated separately, and the yellow bands are the total error estimates obtained (see the text). The EKS98 results, correspondingly evolved downwards from Q^2 = 2.25 GeV^2, are shown by the dot-dashed red lines. An example of a stronger gluon shadowing is shown by the dense-dashed green line.

4 Conclusions from global reanalysis

The total error bands in Fig. 5 demonstrate where, and to what extent, the available DIS and DY data constrain the nuclear modifications: the average modifications of the valence quarks are rather well under control, independently of the functional form chosen, over the whole x-range, and so are the sea quark modifications at small and intermediate x. At larger x, the sea quarks and gluons are badly constrained. The gluons are constrained mainly in the region covered by the NMC scale-evolution data: if the gluon shadowing (see Fig. 1) at small x were clearly stronger than that of the sea quarks (which in turn is constrained by the DIS and DY data through the DGLAP evolution), then the Q^2-slopes generated by the DGLAP evolution at small x would become negative [21], in clear contradiction with the NMC data for the Q^2 dependence of F_2^{Sn}/F_2^{C} in Fig. 3. Thus, the three smallest-x panels in Fig. 3 serve as the best constraint one currently obtains from DIS for the nuclear gluons. At smaller x, where no high-Q^2 DIS data exist, both the sea quark and gluon modifications are again badly constrained and remain specific to the parametric form chosen. Therefore, the uncertainty bands given in Fig. 5 are to be taken as lower limits for the true uncertainties.
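The origin of this gluon constraint can be illustrated with the well-known leading-order (Prytz) approximation for the small-x scale evolution of F_2; the relation is quoted here only to make the argument explicit and is not part of the analysis in [9]:

\[
\frac{\partial F_2(x,Q^2)}{\partial \ln Q^2} \;\approx\;
\frac{10\,\alpha_s(Q^2)}{27\pi}\; x g(2x,Q^2),
\]

so a gluon shadowing much stronger than the data-constrained sea quark shadowing would indeed drive the Q^2-slopes of F_2^{Sn}/F_2^{C} negative at small x.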

Regarding the gluon shadowing in Fig. 1, we should also emphasize that, as in EKS98, the sea quark and gluon shadowings become equal by construction rather than as a result of an unbiased χ^2 minimization: since the DIS data in practice constrain the gluons only at intermediate x, momentum conservation alone is not able to fix the height or location of the antishadowing peak in R_G^A in such a way that a clear enough minimum in χ^2 would be obtained. Therefore, and also to test the EKS98 framework, we set the limits of the small-x gluon parameters such that R_G^A = R_S^A at x → 0. We nevertheless observed that the minimization tended to decrease the amount of gluon (anti)shadowing rather than support a stronger (anti)shadowing. We have also tested that if we keep the negligible gluon modifications at large x but double the gluon shadowing at small x (Fig. 5, the green line), the overall quality of the fit is not much deteriorated, even though the quark sector is not changed at all and no further minimization is made. This demonstrates that the indirect constraints provided by the DIS and DY data and the momentum sum rule are not very stringent for the gluons, and that further constraints are certainly necessary for pinning down the nuclear gluon distributions.

Table 2 summarizes the fit qualities obtained in the previous global analyses of the nPDFs. A more detailed comparison is presented in [9]. We conclude here that the old EKS98 analysis resulted in a fit whose quality is as good as in the automated analysis of the present work [9], and also that the χ^2/N we obtain is close to that of nDS and somewhat smaller than those of HKM and HKN. Note, however, that the data sets included in each analysis are not identical. Interestingly, the NLO analysis of nDS seems to give the best χ^2/N so far.

Based on Fig. 5 and on the equally good overall quality of the fits obtained, we also conclude that the old EKS98 results agree quite nicely with the results from our automated χ^2 minimization [9]: see the red lines for EKS98 in Fig. 5. Thus, there is no need to release a new LO parametrization of the nPDFs; EKS98 still works very well. To improve our analysis in the future, however, we plan to include the RHIC d+Au data (see the discussion below) as further constraints and eventually to extend the analysis to NLO pQCD.

 Set          Ref.   Q_0^2 [GeV^2]   N data   # params   χ^2       χ^2/N    χ^2/d.o.f.
 This work    [9]    1.69            514      16         410.15    0.798    0.824
 EKS98        [4]    2.25            479      -          387.39    0.809    -
 HKM          [6]    1.0             309      9          546.6     1.769    1.822
 HKN          [7]    1.0             951      9          1489.8    1.567    1.582
 nDS, LO      [8]    0.4             420      27         316.35    0.753    0.806
 nDS, NLO     [8]    0.4             420      27         300.15    0.715    0.764

Table 2: The overall qualities of the fits obtained in different global analyses of nPDFs. Here N is the number of data points, # params the number of free parameters, and d.o.f. = N - # params; for EKS98, fitted by eye, no parameter count is quoted.

5 Stronger gluon shadowing?

Figure 6: Minimum bias inclusive hadron production cross section in d+Au collisions divided by that in p+p collisions at √s_NN = 200 GeV at RHIC, as a function of the hadronic transverse momentum at four different pseudorapidities. The BRAHMS data [10] are shown with statistical error bars and shaded systematic-error bands. A pQCD calculation for inclusive hadron production with the EKS98 nuclear modifications and the KKP fragmentation functions is shown by the red solid lines, and that with the strong gluon shadowing of Fig. 5 by the dashed green lines.

Further data sets to be included in the global analysis of nPDFs in the future are provided by the d+Au experiments at RHIC. Figure 6 shows the BRAHMS data [10] for the ratio of the inclusive transverse-momentum distributions of hadrons at different pseudorapidities in d+Au collisions at √s_NN = 200 GeV over those in p+p collisions. The corresponding QCD-factorized LO cross sections are of the form

    dσ^{dAu→h+X} = Σ_{i,j,k} f_i^{d}(x_1,Q^2) ⊗ f_j^{Au}(x_2,Q^2) ⊗ dσ̂^{ij→k+X'} ⊗ D_{k→h}(z,Q_F^2),          (5.1)

with h (k) labeling the hadron (parton) type. The fragmentation functions D_{k→h} we take from the KKP LO set [22]. We set the factorization scales Q and Q_F equal to the partonic and hadronic transverse momentum, respectively, and define z = E_h/E_k as the fractional energy of the hadron relative to the fragmenting parton.
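For orientation, in the LO 2→2 kinematics behind Eq. (5.1) the probed momentum fractions are tied to the hadron kinematics through the standard relations (not spelled out in the original text):

\[
x_{1,2} \;=\; \frac{p_T/z}{\sqrt{s_{NN}}}\,\bigl(e^{\pm y_3}+e^{\pm y_4}\bigr),
\]

where y_3 and y_4 are the rapidities of the two outgoing partons and p_T/z is the transverse momentum of the fragmenting parton. Forward (deuteron-going) rapidities thus probe small x_2 in the Au nucleus, which is why the forward BRAHMS data are sensitive to the nuclear gluons at small x.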

To test the sensitivity of the computed inclusive cross sections to gluon shadowing, we compute the cross sections by taking the nuclear modifications of the PDFs from EKS98 and from the present analysis supplemented with the stronger gluon shadowing of Fig. 5. Note, however, that the systematic errors of the BRAHMS data are large, that at the largest rapidities the data stand for negative hadrons only while the KKP fragmentation functions give an average of positive and negative hadrons, and that we have not tried to correct for this difference in the computation. In any case, the large-rapidity BRAHMS data seem to suggest a stronger gluon shadowing than the relatively weakly constrained, modest gluon shadowing obtained on the basis of the DIS and DY data in the global nPDF analyses. To see whether such a strong gluon shadowing can be accommodated in the DGLAP framework without deteriorating the good fits obtained, a careful global reanalysis must, however, be performed. In particular, it will be interesting to see whether changes in the gluon shadowing induce changes in the quark sector in such a way that the good agreement with the measured Q^2-slopes in Fig. 3 can be maintained.

References

  • [1] Y. L. Dokshitzer, Sov. Phys. JETP 46 (1977) 641 [Zh. Eksp. Teor. Fiz. 73 (1977) 1216]; V. N. Gribov and L. N. Lipatov, Yad. Fiz. 15 (1972) 781 [Sov. J. Nucl. Phys. 15 (1972) 438]; V. N. Gribov and L. N. Lipatov, Yad. Fiz. 15 (1972) 1218 [Sov. J. Nucl. Phys. 15 (1972) 675]; G. Altarelli and G. Parisi, Nucl. Phys. B 126 (1977) 298.
  • [2] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Eur. Phys. J. C 35 (2004) 325 [arXiv:hep-ph/0308087].
  • [3] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP 0207 (2002) 012 [arXiv:hep-ph/0201195].
  • [4] K. J. Eskola, V. J. Kolhinen and P. V. Ruuskanen, Nucl. Phys. B 535 (1998) 351 [arXiv:hep-ph/9802350].
  • [5] K. J. Eskola, V. J. Kolhinen and C. A. Salgado, Eur. Phys. J. C 9 (1999) 61 [arXiv:hep-ph/9807297].
  • [6] M. Hirai, S. Kumano and M. Miyama, Phys. Rev. D 64 (2001) 034003 [arXiv:hep-ph/0103208].
  • [7] M. Hirai, S. Kumano and T. H. Nagai, Phys. Rev. C 70 (2004) 044905 [arXiv:hep-ph/0404093].
  • [8] D. de Florian and R. Sassot, Phys. Rev. D 69 (2004) 074028 [arXiv:hep-ph/0311227].
  • [9] K. J. Eskola, V. J. Kolhinen, H. Paukkunen and C. A. Salgado, JHEP 05 (2007) 002 [arXiv:hep-ph/0703104].
  • [10] I. Arsene et al. [BRAHMS Collaboration], Phys. Rev. Lett. 93 (2004) 242303 [arXiv:nucl-ex/0403005].
  • [11] D. Stump et al., JHEP 0310, 046 (2003) [arXiv:hep-ph/0303013].
  • [12] F. James, MINUIT Function Minimization and Error Analysis, Reference Manual Version 94.1. CERN Program Library Long Writeup D506 (Aug 1998).
  • [13] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 12 [arXiv:hep-ex/9504002].
  • [14] P. Amaudruz et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 3 [arXiv:hep-ph/9503291].
  • [15] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 481 (1996) 3.
  • [16] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 481 (1996) 23.
  • [17] M. R. Adams et al. [E665 Collaboration], Z. Phys. C 67 (1995) 403 [arXiv:hep-ex/9505006].
  • [18] J. Gomez et al., Phys. Rev. D 49 (1994) 4348.
  • [19] D. M. Alde et al., Phys. Rev. Lett. 64 (1990) 2479.
  • [20] M. A. Vasilev et al. [FNAL E866 Collaboration], Phys. Rev. Lett. 83 (1999) 2304 [arXiv:hep-ex/9906010].
  • [21] K. J. Eskola, H. Honkanen, V. J. Kolhinen and C. A. Salgado, Phys. Lett. B 532 (2002) 222 [arXiv:hep-ph/0201256].
  • [22] B. A. Kniehl, G. Kramer and B. Potter, Nucl. Phys. B 582 (2000) 514 [arXiv:hep-ph/0010289].