Global reanalysis of nPDFs
In this talk we present results from our recent global reanalysis of nuclear parton distribution functions (nPDFs), in which the DGLAP-evolving nPDFs are constrained by nuclear hard-process data from deep inelastic scattering (DIS) and the Drell-Yan (DY) process in proton-nucleus collisions, and by sum rules. The main improvements over our earlier work EKS98 are the automated $\chi^2$ minimization, better-controlled fit functions, and the possibility of error estimates. The obtained 16-parameter fit to the data is good. The fit-quality comparison and the error estimates obtained show that the old EKS98 parametrization is fully consistent with the present automated reanalysis. A comparison with other global nPDF analyses is presented as well. Within the DGLAP framework we also discuss the possibility of incorporating a clearly stronger gluon shadowing, as suggested by the BRAHMS data from d+Au collisions at RHIC.
C.A. Salgado (financially supported by the 6th Framework Programme of the European Community under the Marie Curie contract MEIF-CT-2005-024624)
Dipartimento di Fisica, Università di Roma “La Sapienza” and INFN, Rome, Italy
1 Introduction

Inclusive cross sections of hard processes in high-energy hadronic and nuclear collisions are computable using the collinear factorization theorem of QCD. In these computations the short-distance pieces, the squared perturbative QCD (pQCD) matrix elements, are contained in the perturbatively calculable partonic cross sections. The long-distance nonperturbative input is contained in the universal, process-independent parton distribution functions (PDFs), $f_i(x,Q^2)$, which depend on the momentum fraction $x$ carried by the colliding parton, the factorization scale $Q^2$, and the type of the colliding hadron or nucleus. Once the PDFs are known at an initial scale $Q_0^2$, the DGLAP equations [1] predict their behaviour at other (perturbative) scales.
In the global analysis of PDFs, the goal is to determine the DGLAP-evolving PDFs in a model-independent way on the basis of the constraints offered by the sum rules for momentum, baryon number and charge conservation, and in particular by a multitude of hard QCD-process data from hadronic and nuclear collisions. The global analysis becomes cumbersome already at the hadronic level, since the data from which the PDFs are to be determined do not lie along lines of constant scale in the $(x,Q^2)$ plane. Furthermore, the kinematical ranges of the data from different measurements often do not overlap, and the precision of the data may vary. For these reasons, the global analysis of PDFs usually proceeds in the following steps: (i) Choose a suitably flexible functional form for the PDFs, expressed in terms of enough, but not too many, parameters at an initial scale $Q_0^2$ of the order of 1 GeV$^2$. Use the sum rules to reduce the number of parameters. (ii) Evolve the PDFs to higher scales according to the DGLAP equations. (iii) Compare with the available hard-process data and compute an overall $\chi^2$ to quantify the quality of the obtained fit. (iv) Iterate the initial parameter values until a best (local) minimum of $\chi^2$ in the multi-dimensional parameter space, a best fit, is found.
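As a schematic illustration of steps (i)-(iv), the following toy fit parametrizes a function, computes the overall $\chi^2$ against pseudo-data, and finds the minimum. The linear model, pseudo-data and uncertainties are invented for illustration only; the real analysis fits DGLAP-evolved nPDFs to DIS and DY data.

```python
import numpy as np

# Toy illustration of steps (i)-(iv). The linear model, pseudo-data and
# uncertainties below are invented; they stand in for the nPDF fit.
rng = np.random.default_rng(0)

def model(x, a, b):
    # step (i): a flexible parametrization (here trivially linear)
    return a * x + b

# pseudo-data with known point-by-point uncertainties
xdat = np.linspace(0.01, 0.9, 20)
sigma = np.full_like(xdat, 0.02)
ydat = model(xdat, 1.5, 0.8) + rng.normal(0.0, sigma)

def chi2(a, b):
    # step (iii): overall chi^2 quantifying the fit quality
    return float(np.sum(((ydat - model(xdat, a, b)) / sigma) ** 2))

# step (iv): find the chi^2 minimum; for a linear model the weighted
# least-squares solution is exact, so no iteration is needed here
A = np.vstack([xdat / sigma, 1.0 / sigma]).T
(best_a, best_b), *_ = np.linalg.lstsq(A, ydat / sigma, rcond=None)
chi2_per_point = chi2(best_a, best_b) / len(xdat)
print(best_a, best_b, chi2_per_point)
```

In a nonlinear setting such as the actual nPDF analysis, step (iv) is instead carried out by an iterative minimizer such as MINUIT.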
The global analyses carried out by the MRS group [2] and by the CTEQ collaboration [3] have been quite successful in pinning down the PDFs of the free proton at the next-to-leading-order (NLO) level of pQCD, and the analyses there are now already moving to the NNLO level. For the nuclear PDFs (nPDFs), three groups have so far presented results from a global analysis:
EKS98 [4, 5] was the first global analysis performed for the nPDFs. This leading-order (LO) analysis demonstrated that the measured cross sections for deep inelastic lepton-nucleus scattering (DIS) and for Drell-Yan (DY) dilepton production in proton-nucleus collisions, and in particular the $Q^2$-slopes of the DIS structure-function ratios, can all be reproduced, and the momentum and baryon number sum rules fulfilled, simultaneously within the DGLAP framework. The original data fitting in EKS98 was, however, done by eye only.
HKM [6] and its successor HKN [7] performed LO analyses based on automated $\chi^2$ minimization, including uncertainty estimates. nDS [8] was the first NLO global analysis for the nPDFs.
The main goals of the global reanalysis of nPDFs which we have recently performed in [9], and which we discuss in this talk, can be summarized as follows: As the main improvement over EKS98, we now automate the $\chi^2$ minimization. We check whether the already good fits obtained in EKS98 could still be improved. We now also report uncertainty bands for the EKS98-type nuclear effects of the PDFs. Finally, we check whether the DIS and DY data could allow a stronger gluon shadowing than obtained in EKS98, HKN and nDS. The motivation for this is the BRAHMS data [10] for inclusive hadron production in d+Au collisions at RHIC, which show a systematic suppression relative to p+p at forward rapidities.
2 The framework
We define the nPDFs, $f_i^A(x,Q^2)$, as the PDFs of bound protons. By nuclear modifications, $R_i^A(x,Q^2)$, we refer to modifications relative to the free proton PDFs,

$$R_i^A(x,Q^2) \equiv \frac{f_i^A(x,Q^2)}{f_i(x,Q^2)},$$

which in turn are here assumed to be fully known and are taken from the CTEQ6L1 set [3]. Above, $i$ is the parton type and $A$ is the mass number of the nucleus. The PDFs of the bound neutrons are obtained through isospin symmetry ($u^{n/A} = d^{p/A}$ etc.), which is exact for isoscalar nuclei and assumed to hold also for the non-isoscalar ones. The initial scale $Q_0$ is the lowest scale of the CTEQ6L1 distributions, $Q_0 = 1.3$ GeV. The small nuclear effects for deuterium are neglected.
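The isospin relation amounts to a simple flavor relabeling, sketched below; the dict-of-distributions layout is an illustrative assumption, not the actual code of the analysis.

```python
# Sketch of the isospin relation for bound-neutron PDFs: u <-> d and
# ubar <-> dbar are swapped relative to the bound proton; other flavors
# are unchanged. The dict layout and sample numbers are illustrative.
def neutron_pdfs(proton_pdfs):
    swap = {"u": "d", "d": "u", "ubar": "dbar", "dbar": "ubar"}
    return {swap.get(flavor, flavor): dist
            for flavor, dist in proton_pdfs.items()}

p = {"u": 0.5, "d": 0.25, "ubar": 0.05, "dbar": 0.06, "g": 0.4}
n = neutron_pdfs(p)
print(n["u"], n["d"])  # neutron u = proton d, and vice versa
```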
As in EKS98, we consider only three different modifications at $Q_0^2$: $R_V^A$ for valence quarks, $R_S^A$ for all sea quarks, and $R_G^A$ for gluons. Further details cannot, unfortunately, be specified, simply due to the lack of data. Each of these ratios consists of three pieces, which are matched together at the antishadowing maximum at $x_a$ and at the EMC minimum at $x_e$ (cf. Fig. 1); the explicit functional forms are given in [9].
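A minimal numerical sketch of such a three-piece modification, matched continuously at the antishadowing maximum and the EMC minimum, is given below. The specific functional forms and parameter values are illustrative assumptions, not the fitted ones of the analysis.

```python
import numpy as np

# Sketch of an EKS98-style three-piece modification R(x), matched
# continuously at the antishadowing maximum (x_a, y_a) and the EMC
# minimum (x_e, y_e). Forms and numbers are illustrative only.
x_a, y_a = 0.1, 1.06   # antishadowing maximum
x_e, y_e = 0.7, 0.92   # EMC minimum
y_0 = 0.8              # shadowing level as x -> 0

def R(x):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = x < x_a
    mid = (x >= x_a) & (x < x_e)
    large = x >= x_e
    # shadowing -> antishadowing: rises from y_0 to y_a at x_a
    out[small] = y_a - (y_a - y_0) * (1.0 - x[small] / x_a) ** 2
    # antishadowing -> EMC effect: smooth fall from y_a to y_e
    t = (x[mid] - x_a) / (x_e - x_a)
    out[mid] = y_a + (y_e - y_a) * t ** 2 * (3.0 - 2.0 * t)  # smoothstep
    # Fermi-motion piece: rises and diverges as x -> 1
    out[large] = y_e * ((1.0 - x_e) / np.maximum(1.0 - x[large], 1e-9)) ** 0.5
    return out

eps = 1e-6
print(R([x_a - eps, x_a + eps]), R([x_e - eps, x_e + eps]))
```

The point of the construction is that the pieces agree in value at the two matching points, so the ratio is continuous across the whole $x$ range.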
As explained in [9], we convert the parameters of these pieces into the following more transparent set of seven parameters:
- the value of $R_i^A$ at $x \to 0$, defining where shadowing levels off;
- a slope factor in the exponential;
- the position and height of the antishadowing maximum;
- the position and height of the EMC minimum;
- the slope of the divergence of $R_i^A$, caused by Fermi motion, at $x \to 1$.
The $A$-dependence of the nPDFs is contained in the $A$-dependence of each of these parameters, taken to be of the following simple 2-parameter power-law form,

$$z_i(A) = z_i(A_{\rm ref}) \left(\frac{A}{A_{\rm ref}}\right)^{p_{z_i}},$$

where $z_i$ denotes any of the seven parameters above, and where Carbon ($A_{\rm ref} = 12$) is chosen as the reference nucleus.
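The assumed power-law form in the mass number can be sketched in a few lines; the sample reference value and exponent below are illustrative, not fitted.

```python
# Sketch of the assumed 2-parameter A-dependence of each fit parameter:
# a power law in the mass number A, normalized at the Carbon reference
# (A_ref = 12). The sample value and exponent are illustrative only.
A_REF = 12

def z_of_A(z_ref, p, A):
    # z_i(A) = z_i(A_ref) * (A / A_ref)**p_i
    return z_ref * (A / A_REF) ** p

carbon = z_of_A(0.9, -0.05, 12)
lead = z_of_A(0.9, -0.05, 208)
print(carbon, lead)  # the effect deepens slowly from C to Pb
```

One reference value plus one exponent per parameter thus covers all nuclei, which is what keeps the total parameter count manageable.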
To reduce the number of parameters down to our final set of 16 free parameters, to be determined by $\chi^2$ minimization with the MINUIT routine [12], we impose baryon number and momentum conservation, and fix the initial large-$x$ gluon and sea quark modifications (which in practice remain unconstrained) to follow those of the valence quarks. A good deal of manual labour was still required for finding converging fits, as well as suitable starting values and ranges for the 16 free fit parameters.
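The two conservation constraints can be checked numerically for any candidate parametrization. Below is a sketch for the baryon-number sum rule, $\int_0^1 dx\,[u_V(x)+d_V(x)] = 3$ per proton; the toy valence shapes $N x^{a-1}(1-x)^b$ are illustrative assumptions, normalized via the Euler beta function.

```python
import numpy as np
from math import gamma

# Numerical sketch of a sum-rule constraint: baryon number requires
# ∫ dx [u_V(x) + d_V(x)] = 3 per proton (momentum conservation,
# ∫ dx x Σ_i f_i(x) = 1, works the same way). Toy shapes only.

def beta_fn(p, q):
    # Euler beta function B(p, q) = Γ(p)Γ(q)/Γ(p+q)
    return gamma(p) * gamma(q) / gamma(p + q)

a_v, b_v = 0.5, 3.0
norm = beta_fn(a_v, b_v + 1.0)        # ∫ x^(a-1) (1-x)^b dx
N_u, N_d = 2.0 / norm, 1.0 / norm     # 2 u and 1 d valence quark

x = np.linspace(1e-6, 1.0 - 1e-6, 200_000)
u_V = N_u * x ** (a_v - 1.0) * (1.0 - x) ** b_v
d_V = N_d * x ** (a_v - 1.0) * (1.0 - x) ** b_v

# trapezoidal integral of the baryon-number sum
f = u_V + d_V
baryon = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
print(baryon)  # close to 3 (small deficit from the x -> 0 cutoff)
```

In the fit, such relations are used the other way around: they fix the normalization parameters so that they need not be varied by MINUIT.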
The data sets against which the best fit was found are the DIS data from NMC [13, 14, 15, 16], FNAL E665 [17] and SLAC E-139 [18], and the DY data from FNAL E772 [19] and FNAL E866 [20]. For the available $A$-systematics and other details, consult Table 1 in [9]. (Correction: the NMC 96 data for Sn/C, used in our reanalysis, should appear in Table 1 of [9] as well.)
3 Results

The parameters corresponding to the best fit found are shown in Table 1. The overall $\chi^2$ per data point obtained with the 16 free parameters is of order unity, i.e. the fit is good [9].
Table 1. Best-fit parameter values for the valence, sea and gluon modifications. Parameters marked "0, fixed" were frozen in the fit; "as valence" means the sea and gluon values were tied to the valence ones; entries marked (limit) ended at the edge of their allowed range.

|parameter||valence||sea||gluon|
|1||baryon sum||0.88909||momentum sum|
|2||baryon sum||-8.03454E-02||momentum sum|
|3||0.025 (limit)||0.100 (limit)||0.100 (limit)|
|4||0, fixed||0, fixed||0, fixed|
|6||0, fixed||0, fixed||0, fixed|
|7||0.68716||as valence||as valence|
|8||0, fixed||0, fixed||0, fixed|
|11||0.91050||as valence||as valence|
|12||-2.82553E-2||as valence||as valence|
|13||0.3||as valence||as valence|
|14||0, fixed||as valence||as valence|
The obtained initial nuclear modifications at $Q_0^2$ are shown in Fig. 1 for selected nuclei. Figs. 2-4 show the obtained good agreement with the DIS and DY data; the computed results are shown with filled symbols, the data with open ones. For further details, consult the figure captions.
We obtain uncertainty estimates for the initial nuclear modifications using the Hessian error-matrix output provided by MINUIT (for details and references, see again [9]). These bands are denoted as "Fit errors" in Fig. 5. To obtain physically more relevant large-$x$ errors for $R_S^A$ and $R_G^A$, which in the analysis were tied to the valence modification at large $x$, we keep their small-$x$ parameters fixed and release the large-$x$ parameters for each of them in turn. This results in the "Large-$x$ errors" shown in Fig. 5. The estimated total errors are then the yellow bands.
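The propagation of the Hessian errors to a derived quantity can be sketched as follows: near the minimum, $\chi^2(a) \approx \chi^2_{\min} + (a-a_0)^T H (a-a_0)$, and the 1-sigma uncertainty of a quantity $f(a)$ is $\Delta f^2 = \Delta\chi^2\, (\nabla f)^T H^{-1} (\nabla f)$ with $\Delta\chi^2 = 1$. The toy Hessian, best-fit point and $f$ below are illustrative assumptions.

```python
import numpy as np

# Sketch of Hessian error propagation of the kind MINUIT provides.
# The Hessian, best-fit point and derived quantity f are toy inputs.

H = np.array([[4.0, 1.0],
              [1.0, 2.0]])       # toy Hessian at the chi^2 minimum
a0 = np.array([1.0, 0.5])        # toy best-fit parameters

def f(a):
    # some derived quantity, e.g. a nuclear modification at one (x, Q^2)
    return a[0] * a[1]

# numerical gradient of f at the best fit
eps = 1e-6
grad = np.array([(f(a0 + eps * e) - f(a0 - eps * e)) / (2.0 * eps)
                 for e in np.eye(2)])

# 1-sigma uncertainty of f for delta(chi^2) = 1
delta_f = float(np.sqrt(grad @ np.linalg.inv(H) @ grad))
print(delta_f)
```

Evaluating $\Delta f$ at many $x$ points for each modification is what produces an error band of the "Fit errors" type.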
4 Conclusions from global reanalysis
The total error bands in Fig. 5 demonstrate where, and to what extent, the available DIS and DY data constrain the nuclear modifications: the average valence-quark modifications are rather well under control over the whole $x$-range, independently of the functional form chosen, and so are the sea-quark modifications at small $x$. At larger $x$, the sea quarks and gluons are badly constrained. The gluons are constrained only in a limited small-$x$ region: if the gluon shadowing (see Fig. 1) at small $x$ were clearly stronger than that of the sea quarks (which in turn is constrained by the DIS and DY data through the DGLAP evolution), then the $Q^2$-slopes caused by the DGLAP evolution at small $x$ would become negative [21], and this would be in clear contradiction with the NMC data for the $Q^2$ dependence in Fig. 3. Thus, the three smallest-$x$ panels in Fig. 3 serve as the best constraint one currently obtains from DIS for the nuclear gluons. At still smaller $x$, where no high-$Q^2$ DIS data exist, both the sea quark and gluon modifications are again badly constrained and remain specific to the parametric form chosen. Therefore, the uncertainty bands given in Fig. 5 are to be taken as lower limits for the true uncertainties.
Regarding the gluon shadowing in Fig. 1, we should also emphasize that, as in EKS98, the sea quark and gluon shadowings become the same by construction rather than as a result of unbiased $\chi^2$ minimization: as the DIS data practically constrain the gluons only at small $x$, momentum conservation alone is not able to fix the height or location of the antishadowing peak in $R_G^A$ in such a way that a clear enough minimum in $\chi^2$ would be obtained. Therefore, and also to test the EKS98 framework, we set the limits of the gluon parameters such that $R_G^A \approx R_S^A$ at small $x$. We nevertheless observed that the minimization tended to decrease the amount of gluon (anti)shadowing rather than support a stronger (anti)shadowing. We have also tested that if we keep the negligible gluon modifications at large $x$ but double the gluon shadowing at small $x$ (Fig. 5, the green line), the overall quality of the fits is not much deteriorated, even if the quark sector is not changed at all and no further minimization is made. This demonstrates that the indirect constraints given by the DIS and DY data and the momentum sum rule for $R_G^A$ are not very stringent, and that further constraints are certainly necessary for pinning down the nuclear gluon distributions.
Table 2 summarizes the $\chi^2/N$ values obtained in the previous global analyses of the nPDFs. A more detailed comparison is presented in [9]. We conclude here that the old EKS98 analysis resulted in a fit whose quality is as good as in the automated analysis of the present work [9], and also that the $\chi^2/N$ we obtain is close to that of nDS and somewhat smaller than those of HKM and HKN. Note, however, that the data sets included in each analysis are not identical. Interestingly, the NLO analysis of nDS seems to give the best $\chi^2/N$ so far.
Based on Fig. 5 and on the equally good overall quality of the fits obtained, we also conclude that the old EKS98 results agree quite nicely with our results from the automated $\chi^2$ minimization [9]: see the red lines for EKS98 in Fig. 5. Thus, there is no need to release a new LO parametrization of the nPDFs; EKS98 still works very well. To improve our analysis in the future, however, we plan to include the RHIC d+Au data (see the discussion below) as further constraints, and eventually to extend the analysis to NLO pQCD.
5 Stronger gluon shadowing?
Further data sets to be included in the global analysis of nPDFs in the future are provided by the d+Au experiments at RHIC. Figure 6 shows the BRAHMS data [10] for the ratio of inclusive transverse-momentum distributions of hadrons at different pseudorapidities in d+Au collisions at $\sqrt{s_{NN}} = 200$ GeV over those in p+p collisions. The corresponding QCD-factorized LO cross sections are of the form

$$d\sigma^{dA\to h+X} = \sum_{i,j,f} f_i^d \otimes f_j^A \otimes d\hat{\sigma}^{ij\to f+x} \otimes D_{f\to h},$$

with $h$ ($f$) labeling the hadron (parton) type. The fragmentation functions $D_{f\to h}$ we take from the KKP LO set [22]. We set the factorization and fragmentation scales to the partonic and hadronic transverse momentum, respectively, and define the fractional energy $z$ of the hadron accordingly.
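A minimal numerical sketch of the last step of such a factorized spectrum, the convolution of the partonic transverse-momentum spectrum with a fragmentation function, is given below. The power-law parton spectrum and the toy $D(z)$ are illustrative stand-ins for the full pQCD convolution and the KKP set.

```python
import numpy as np

# Hadron spectrum as the parton p_T spectrum convoluted with a
# fragmentation function D(z): a hadron at p_T comes from a parton at
# q_T = p_T / z, so
#     dsigma_h(p_T) = ∫ dz D(z) (1/z) dsigma_parton(p_T / z).
# The parton spectrum and D(z) below are toy assumptions.

n = 6.0                             # assumed parton-spectrum power

def dsigma_parton(q_t):
    return q_t ** (-n)

def D(z):
    return 6.0 * z * (1.0 - z)      # toy fragmentation function, ∫ D dz = 1

def dsigma_hadron(p_t, z_min=0.05, n_z=4001):
    z = np.linspace(z_min, 1.0, n_z)
    integrand = D(z) * dsigma_parton(p_t / z) / z
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

# for a pure power-law parton spectrum, the hadron spectrum inherits
# the same power, so halving p_T raises the yield by a factor 2^n = 64
ratio = dsigma_hadron(2.0) / dsigma_hadron(4.0)
print(ratio)
```

Nuclear modifications enter this convolution through the nPDFs in the partonic spectrum, which is how a stronger gluon shadowing feeds into the predicted d+Au/p+p ratio.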
To test the sensitivity of the computed inclusive cross sections to gluon shadowing, we compute the cross sections taking the nuclear modifications of the PDFs from EKS98 and from the present analysis supplemented with the stronger gluon shadowing of Fig. 5. Note, however, that the systematic error bars in the BRAHMS data are large, that at the largest rapidities the data stand for negative hadrons only while KKP gives an average over charged hadrons, and that we have not tried to correct for this difference in the computation. In any case, the large-rapidity BRAHMS data seem to suggest a stronger gluon shadowing than the relatively weakly constrained, modest gluon shadowing obtained on the basis of the DIS and DY data in the global nPDF analyses. To see whether such a strong gluon shadowing can be accommodated in the DGLAP framework without deteriorating the good fits obtained, a careful global reanalysis must, however, be performed. In particular, it will be interesting to see whether changes in the gluon shadowing induce changes in the quark sector in such a way that the good agreement with the measured $Q^2$ slopes in Fig. 3 could be maintained.
References

[1] Y. L. Dokshitzer, Sov. Phys. JETP 46 (1977) 641 [Zh. Eksp. Teor. Fiz. 73 (1977) 1216]; V. N. Gribov and L. N. Lipatov, Yad. Fiz. 15 (1972) 781 [Sov. J. Nucl. Phys. 15 (1972) 438]; V. N. Gribov and L. N. Lipatov, Yad. Fiz. 15 (1972) 1218 [Sov. J. Nucl. Phys. 15 (1972) 675]; G. Altarelli and G. Parisi, Nucl. Phys. B 126 (1977) 298.
[2] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Eur. Phys. J. C 35 (2004) 325 [arXiv:hep-ph/0308087].
[3] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP 0207 (2002) 012 [arXiv:hep-ph/0201195].
[4] K. J. Eskola, V. J. Kolhinen and P. V. Ruuskanen, Nucl. Phys. B 535 (1998) 351 [arXiv:hep-ph/9802350].
[5] K. J. Eskola, V. J. Kolhinen and C. A. Salgado, Eur. Phys. J. C 9 (1999) 61 [arXiv:hep-ph/9807297].
[6] M. Hirai, S. Kumano and M. Miyama, Phys. Rev. D 64 (2001) 034003 [arXiv:hep-ph/0103208].
[7] M. Hirai, S. Kumano and T. H. Nagai, Phys. Rev. C 70 (2004) 044905 [arXiv:hep-ph/0404093].
[8] D. de Florian and R. Sassot, Phys. Rev. D 69 (2004) 074028 [arXiv:hep-ph/0311227].
[9] K. J. Eskola, V. J. Kolhinen, H. Paukkunen and C. A. Salgado, JHEP 05 (2007) 002 [arXiv:hep-ph/0703104].
[10] I. Arsene et al. [BRAHMS Collaboration], Phys. Rev. Lett. 93 (2004) 242303 [arXiv:nucl-ex/0403005].
[11] D. Stump et al., JHEP 0310 (2003) 046 [arXiv:hep-ph/0303013].
[12] F. James, MINUIT Function Minimization and Error Analysis, Reference Manual Version 94.1, CERN Program Library Long Writeup D506 (Aug 1998).
[13] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 12 [arXiv:hep-ex/9504002].
[14] P. Amaudruz et al. [New Muon Collaboration], Nucl. Phys. B 441 (1995) 3 [arXiv:hep-ph/9503291].
[15] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 481 (1996) 3.
[16] M. Arneodo et al. [New Muon Collaboration], Nucl. Phys. B 481 (1996) 23.
[17] M. R. Adams et al. [E665 Collaboration], Z. Phys. C 67 (1995) 403 [arXiv:hep-ex/9505006].
[18] J. Gomez et al., Phys. Rev. D 49 (1994) 4348.
[19] D. M. Alde et al., Phys. Rev. Lett. 64 (1990) 2479.
[20] M. A. Vasilev et al. [FNAL E866 Collaboration], Phys. Rev. Lett. 83 (1999) 2304 [arXiv:hep-ex/9906010].
[21] K. J. Eskola, H. Honkanen, V. J. Kolhinen and C. A. Salgado, Phys. Lett. B 532 (2002) 222 [arXiv:hep-ph/0201256].
[22] B. A. Kniehl, G. Kramer and B. Potter, Nucl. Phys. B 582 (2000) 514 [arXiv:hep-ph/0010289].