Vector boson production in pPb and PbPb collisions at the LHC and its impact on nCTEQ15 PDFs
We provide a comprehensive comparison of vector boson production data in pPb and PbPb collisions at the LHC with predictions obtained using the nCTEQ15 PDFs. We identify the measurements with the largest potential impact on the PDFs, and estimate the effect of including these data using a Bayesian reweighting method. We find that this data set can provide information on both the nuclear corrections and the heavy flavor (strange quark) PDF components. Because the parton flavor separation in the proton depends on nuclear corrections (from heavy-target DIS, for example), this information can also help improve the proton PDFs.
- 1 Introduction
- 2 Production at the LHC
- 3 Reweighting
- 4 Conclusions
Table 1: The data sets used in this analysis and the figures in which they are shown.
|Aad:2015gta ()|Fig. 3|
|AtlasWpPb ()|Fig. 7a|
|AtlasWpPb ()|Fig. 7b|
|Khachatryan:2015pzs ()|Fig. 4|
|Khachatryan:2015hha ()|Fig. 6a|
|Khachatryan:2015hha ()|Fig. 6b|
|Aaij:2014pvu ()|Fig. 5|
|Senosi:2015omk ()|Fig. 8a|
|Senosi:2015omk ()|Fig. 8b|
|Aad:2012ew ()|Fig. 9a|
|Aad:2014bha ()|Fig. 10a|
|Chatrchyan:2014csa ()|Fig. 9b|
|Chatrchyan:2012nt ()|Fig. 10b|
|Beam Energy [TeV]|3.5|4|6.5|7|
|√s_NN (pPb) [TeV]|4.40|5.02|8.16|8.79|
|√s_NN (PbPb) [TeV]|2.76|3.15|5.13|5.52|
Vector boson production in hadron collisions is a well understood process and serves as one of the "standard candle" measurements at the LHC. W and Z bosons are copiously produced in heavy ion pPb and PbPb collisions at the LHC and can be used to gain insight into the structure of nuclear parton distribution functions (nPDFs). As the W and Z bosons couple weakly, their interaction with the nuclear medium is negligible, which makes these processes one of the cleanest probes of nuclear structure available at the LHC. The possibility of using vector boson production data to constrain nPDFs was considered previously Paukkunen:2010qg (), demonstrating the strong potential of the proton-lead data (especially the asymmetries) to constrain the nuclear PDFs. The current LHC measurements for W and Z production include rapidity and transverse momentum distributions for both proton-lead (pPb) and lead-lead (PbPb) collisions Aad:2015gta (); Khachatryan:2015pzs (); Aaij:2014pvu (); Khachatryan:2015hha (); AtlasWpPb (); Senosi:2015omk (); Aad:2012ew (); Chatrchyan:2014csa (); Aad:2014bha (); Chatrchyan:2012nt (). Some of these data were already used (along with jet and charged particle production data) in recent analyses Armesto:2015lrg (); Ru:2016wfx () employing a reweighting method to estimate the impact of these data on the EPS09 Eskola:2009uj () and DSSZ deFlorian:2011fp () nPDFs.[1]
[1] During the publication process of this study a new global analysis including pPb LHC data was presented Eskola:2016oht ().
The LHC heavy ion data set is especially interesting as it can help resolve the long-standing dilemma regarding the heavy flavor components of the proton PDFs. Historically, this has been an important issue as nuclear target data (especially neutrino DIS) have been essential in identifying the individual parton flavors Ball:2014uwa (); Harland-Lang:2014zoa (); Dulat:2015mca (); Khanpour:2016pph (); however, this means that the uncertainties of the heavy flavors are intimately tied to the (large) nuclear uncertainties. The LHC heavy ion data have the potential to improve this situation due to two key features. First, these data are in a kinematic regime where the heavier quark flavors (such as strange and charm) contribute substantially. Second, by comparing the proton data with the heavy ion results we have an ideal environment to precisely characterize the nuclear corrections. The combination of the above can improve not only the nuclear PDFs, but also the proton PDFs, which are essential for any LHC study.
In this work we present predictions for vector boson production in pPb and PbPb collisions at the LHC obtained using nCTEQ15 nuclear parton distributions, and perform a comprehensive comparison to the available LHC data. We also identify the measurements which have the largest potential to constrain the nPDFs, and perform a reweighting study which allows us to estimate the effects of including these data in an nPDF fit.
The rest of the paper is organized as follows. Sec. 2 is devoted to predictions of vector boson production at the LHC in nuclear collisions. In particular, we provide an overview of the kinematic range probed by the data and discuss the tools we will use for the calculation. Then we present our predictions for pPb and PbPb collisions at the LHC and compare them with the experimental data and other theoretical predictions. In Sec. 3 we perform a reweighting using nCTEQ15 distributions to assess the impact of the nuclear data on the nPDFs. Finally, Sec. 4 summarizes our results and observations.
2 Production at the LHC
We begin by presenting our predictions for W and Z boson production in nuclear collisions at the LHC using the recently published nCTEQ15 PDFs Kovarik:2015cma ().
2.1 Experimental data and theoretical setup
For the theoretical calculations in our study we use version 2.1 of the FEWZ (Fully Exclusive W, Z production) program Gavin:2010az (); Gavin:2012sy (). Even though FEWZ can compute W and Z production with decays up to next-to-next-to-leading order, we work at next-to-leading order (NLO) to be consistent with the order of the evolution of the nPDFs.[2]
[2] The CT10 proton PDFs used in the theoretical calculations are also at NLO.
As FEWZ is designed to handle pp or pp̄ collisions, we have extended it so that two different PDF sets can be used for the two incoming beams, as required for pPb collisions.
For the lead PDFs we use the nCTEQ15 nPDFs Kovarik:2015cma (), while we use the CT10 distributions Lai:2010vv () for the free protons; the only exception is the use of the MSTW2008 PDFs Martin:2009iq () for the LHCb Z boson measurement Aaij:2014pvu (), in order to match the original LHCb publication. Additionally, we compare these results with predictions calculated using nuclei made out of free proton PDFs, and in some cases free proton PDFs supplemented with EPS09 nuclear corrections Eskola:2009uj ().
We consider LHC data on W and Z boson production from the ALICE, ATLAS, CMS, and LHCb experiments. The exhaustive list of data sets that we use is provided in Table 1, along with the experimental kinematical cuts implemented in the analysis. While there are measurements of both rapidity and transverse momentum distributions, in this study we focus only on the rapidity measurements.
Using the transverse momentum (p_T) distributions to study the PDFs is more intricate, as it requires resummation in the low-p_T region where the cross section is maximal; we reserve this for a future study.
In Fig. 1 we display the kinematic space probed by the W/Z production process Kusina:2012vh (). We translate between the parton momentum fraction x and the rapidity y for three values of the collider center of mass (CM) energy, √s. Table 2 lists the CM energy per nucleon pair, √s_NN, as a function of the nominal proton beam energy E, which is determined from the relation:

√s_NN = 2E √( Z₁Z₂ / (A₁A₂) ),   (1)
where in the case of lead we have Z = 82 and A = 208. Additionally, for asymmetric collisions there is a rapidity shift, y_shift, between the CM and the laboratory (LAB) frames:

y_shift = ½ ln( Z₁A₂ / (Z₂A₁) ),   (2)
and in particular for the case of pPb collisions y_shift = ½ ln(208/82) ≈ 0.465, i.e. y_CM = y_lab − 0.465.
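The two relations above are easy to check numerically; the following is a minimal sketch (the function names are ours, for illustration only):

```python
import math

def sqrt_s_nn(e_beam_tev, z1, a1, z2, a2):
    """CM energy per nucleon pair: sqrt(s_NN) = 2E sqrt(Z1*Z2/(A1*A2))."""
    return 2.0 * e_beam_tev * math.sqrt(z1 * z2 / (a1 * a2))

def rapidity_shift(z1, a1, z2, a2):
    """CM-to-LAB rapidity shift: y_shift = 0.5*ln(Z1*A2/(Z2*A1))."""
    return 0.5 * math.log(z1 * a2 / (z2 * a1))

# Lead has Z = 82, A = 208; the proton has Z = A = 1.
print(sqrt_s_nn(4.0, 1, 1, 82, 208))      # pPb at 4 TeV beams   -> ~5.02 TeV
print(sqrt_s_nn(3.5, 82, 208, 82, 208))   # PbPb at 3.5 TeV beams -> ~2.76 TeV
print(rapidity_shift(1, 1, 82, 208))      # pPb -> ~0.465
```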
For the asymmetric case of pPb, we use the convention where x₁ is the proton momentum fraction and x₂ is the lead momentum fraction. Thus, for pPb at large rapidity y we have a large proton x₁ and a small lead x₂; conversely, at small (negative) rapidity we have a small proton x₁ and a large lead x₂.
In Fig. 1, the pair of lines with √s_NN = 2.76 TeV corresponds to PbPb collisions with a beam energy of 3.5 TeV per proton, and √s_NN = 5.02 TeV corresponds to pPb collisions with a beam energy of 4 TeV per proton.
2.2 Comparison to Proton-Lead (pPb) data
We first consider the LHC pPb collisions at √s_NN = 5.02 TeV. The distributions are shown in the CM frame and include the appropriate rapidity shift according to Eq. (2). In Fig. 2, we display the kinematic range of the pPb data bins (central values) in the (y, x₂) plane, where y is the CM-frame rapidity of the relevant vector boson or lepton and x₂ is the lead parton momentum fraction. As expected, there is little data at the smallest x₂ values, and most of the constraints from these LHC data are in the low- to mid-x₂ region.
Figs. 3, 4 and 5 show our predictions for the ATLAS Aad:2015gta (), CMS Khachatryan:2015pzs () and LHCb Aaij:2014pvu () Z boson production measurements, respectively. In all three cases, results obtained with the nCTEQ15 nPDFs are shown along with those obtained with a lead nucleus composed of protons and neutrons, assuming isospin symmetry and using CT10 PDFs; the ratio of predictions over the data is shown in the lower panel. Note that the errors shown for the nCTEQ15 predictions are nuclear uncertainties only (and only for the beam with momentum fraction x₂), which means that the PDF error of the proton beam is not accounted for.[3] Furthermore, the errors shown for the pPb predictions using lead nuclei constructed from CT10 and MSTW2008 proton PDFs are also only for the beam with momentum fraction x₂. By comparing the proton uncertainties (CT10 and MSTW2008) to the nuclear uncertainties, we see that the nuclear uncertainties are much larger.
[3] For the symmetric case of PbPb collisions the errors on both beams are taken into account.
The data and theory are generally compatible (without significant tension) both with and without nuclear corrections; this situation may change as the experimental errors and nuclear uncertainties are reduced.
Focusing on the LHCb data of Fig. 5, we find good agreement at negative rapidity, but large differences at positive rapidity. Despite these differences, the large experimental uncertainties will limit the impact of this data set in our subsequent reweighting procedure.
We now turn our attention to W⁺ and W⁻ production at the LHC. In Figs. 6, 7 and 8 we compare the data obtained by CMS Khachatryan:2015hha (), ATLAS AtlasWpPb () and ALICE Senosi:2015omk () for W± production with theoretical predictions obtained with nCTEQ15 and CT10 PDFs.
We find the CMS and ATLAS data are adequately described in the negative rapidity range, but the tensions grow as we move to larger rapidity. This effect is magnified for the case of W⁺, where we see substantive deviations at large rapidity. Referring to Fig. 1, these deviations are in the smaller-x₂ region, where we might expect nuclear shadowing of the quark and antiquark luminosities.[4] However, this low-x₂ range is unconstrained by the data currently used in nPDF fits, so these results come from an extrapolation of the larger-x region. It is interesting to observe that a delayed shadowing (which shifts the onset of shadowing down to smaller x values) would improve the comparison of the data with the theory in the larger-rapidity region; this type of behavior was observed in the nuclear corrections extracted from the neutrino-DIS charged current data Kovarik:2010uv (); Schienbein:2009kk (); Nakamura:2016cnn (). Taking into account the errors from both the experimental data and the theoretical predictions, no definitive conclusions can be drawn at the present time. Notwithstanding, these data have the potential to strongly influence the nPDF fits, especially in the small-x₂ region, if the uncertainties can be reduced.
[4] The nuclear correction factors are typically defined as the ratio of the nuclear quantity to the proton or isoscalar quantity. At large x, in the EMC region, the nuclear quantities are suppressed relative to the proton. In the intermediate-x region we find "anti-shadowing", where the nuclear results are enhanced. Finally, at smaller x (below a few times 10⁻²) we have the "shadowing" region, where the nuclear results are suppressed.
Finally, the ALICE data (Fig. 8) currently have large uncertainties, and we expect they will have a minimal impact on the reweighting.
2.3 Comparison to Lead-Lead data
We now consider the LHC PbPb collisions at √s_NN = 2.76 TeV. As these beams are symmetric, there is no rapidity shift and y_CM = y_lab. Again, we use the nCTEQ15 Kovarik:2015cma () and CT10 Lai:2010vv () PDFs for the theoretical predictions. Results from the ATLAS and CMS collaborations are available in the form of either event yields (Z boson production) or charge asymmetries (W± production).
In Figs. 9a and 9b we present the comparison of the ATLAS Aad:2012ew () and CMS Chatrchyan:2014csa () data with theoretical predictions using nCTEQ15 and CT10 PDFs. Note that the differential cross sections have been normalized to the total cross section. The PbPb data generally exhibit no tension, as the distributions are well described across the kinematical range; however, this is in part due to the large uncertainties arising from having two nuclei in the initial state.
The measurement of charge asymmetries can provide strong constraints on the PDF fits, as many of the systematic uncertainties cancel in such ratios. In Fig. 10 we compute the lepton (ℓ) charge asymmetry A_ℓ:

A_ℓ(y_ℓ) = ( dN_{ℓ⁺}/dy_ℓ − dN_{ℓ⁻}/dy_ℓ ) / ( dN_{ℓ⁺}/dy_ℓ + dN_{ℓ⁻}/dy_ℓ ),   (3)
for leptons from W⁺ and W⁻ boson decays as measured by the ATLAS Aad:2014bha () and CMS Chatrchyan:2012nt () experiments. Unfortunately, it appears that the dependence on the nuclear corrections largely cancels in the ratio, as the nuclear nCTEQ15 result is indistinguishable from the CT10 proton result. Hence, these charge asymmetry ratios cannot constrain the nuclear corrections at the present time.
2.4 Cross Section Correlations
In order to analyze our results more quantitatively, it is very useful to look at PDF correlations. In particular, we are interested in assessing the importance of the strange quark in our results. We first review some standard definitions before presenting our analysis.
The definition of the correlation cosine of two PDF-dependent observables X and Y is Nadolsky:2008zw ()

cos φ = (1 / (4 ΔX ΔY)) Σ_{i=1}^{N} ( X_i⁽⁺⁾ − X_i⁽⁻⁾ )( Y_i⁽⁺⁾ − Y_i⁽⁻⁾ ),   (4)

where ΔX is the PDF error of the corresponding observable. For the nCTEQ15 PDFs this corresponds to the symmetric error, given by

ΔX = ½ √( Σ_{i=1}^{N} ( X_i⁽⁺⁾ − X_i⁽⁻⁾ )² ).   (5)

X_i⁽±⁾ is the observable evaluated along the plus or minus direction of the error PDF eigenvector i, and the summation runs over all N eigenvector directions.
In our case we are interested in the observables X, Y ∈ {σ_{W⁺}, σ_{W⁻}, σ_Z}. Here, we focus on the planes formed by the (σ_{W⁺}, σ_{W⁻}) and the (σ_Z, σ_{W±}) boson production cross sections to visualize the correlations.
Fig. 11 shows the correlations of the W⁺ and W⁻ production cross sections for pPb collisions at the LHC in comparison with the CMS and ATLAS measurements. Similarly, in Fig. 12 we display the results for the Z and W bosons. The results are shown for three different rapidity regions and for several PDF sets. For the proton side we always use the CT10 PDFs, and for the lead side we examine three results: i) nCTEQ15, ii) CT10, and iii) CT10 PDFs supplemented by the nuclear corrections from EPS09 (CT10+EPS09). Finally, the central predictions are supplemented with uncertainty ellipses illustrating the correlations between the cross sections. The ellipses are calculated in the following way Nadolsky:2008zw (),

X(θ) = X₀ + ΔX cos θ,  Y(θ) = Y₀ + ΔY cos(θ + φ),   (6)
where X, Y represent the PDF-dependent observables, X₀ (Y₀) is the observable calculated with the central PDF, ΔX (ΔY) is defined in Eq. (5), φ is the correlation angle defined in Eq. (4), and θ is a parameter ranging between 0 and 2π.
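The error, correlation, and ellipse formulas above translate directly into code; a minimal sketch for observables evaluated on the plus/minus Hessian eigenvector sets (function names are ours, for illustration):

```python
import math

def hessian_error(x_plus, x_minus):
    """Symmetric Hessian error: DX = 0.5*sqrt(sum_i (X_i^+ - X_i^-)^2)."""
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(x_plus, x_minus)))

def correlation_cosine(x_plus, x_minus, y_plus, y_minus):
    """cos(phi) = sum_i (X_i^+ - X_i^-)(Y_i^+ - Y_i^-) / (4*DX*DY)."""
    dx = hessian_error(x_plus, x_minus)
    dy = hessian_error(y_plus, y_minus)
    num = sum((xp - xm) * (yp - ym)
              for xp, xm, yp, ym in zip(x_plus, x_minus, y_plus, y_minus))
    return num / (4.0 * dx * dy)

def ellipse_point(x0, y0, dx, dy, phi, theta):
    """Point on the correlation ellipse:
    X = X0 + DX*cos(theta), Y = Y0 + DY*cos(theta + phi)."""
    return x0 + dx * math.cos(theta), y0 + dy * math.cos(theta + phi)
```

For a single eigenvector direction the two observables are fully correlated, so correlation_cosine returns 1 by construction.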
From Figs. 11 and 12 one can generally observe that the ellipses for the different PDF sets overlap. Furthermore, the central predictions for all three PDF sets lie in the overlapping area of the three ellipses. However, a trend can be observed as a function of the rapidity:
For negative rapidities, the central predictions from the nuclear PDFs (nCTEQ15, EPS09) are closer to the experimental data, as they yield larger cross sections than the uncorrected (proton) CT10 PDFs. This can be understood because the lead x₂ values probed in this rapidity bin lie in the region where the nPDFs are enhanced due to anti-shadowing (cf., Fig. 9 in Ref. Kovarik:2015cma ()). Due to the larger uncertainties associated with the nCTEQ15 predictions, the ATLAS and CMS cross sections lie within the 1σ ellipse. Conversely, the measured data lie outside the uncorrected (proton) CT10 error ellipse.
For the central rapidity bin, the predictions from all three PDF sets generally lie very close together. In this case, the probed x₂ values lie in the transition zone between the anti-shadowing and shadowing regions. We find the LHC W⁺ and W⁻ cross sections in Fig. 11 tend to lie above the theory predictions. Examining the Z cross section of Fig. 12, we find the CMS data agree closely with the theory predictions, while the ATLAS data are larger by approximately 1σ.
For the positive rapidity bin, we find the central predictions from CT10 match the data closely, only slightly overshooting it. The nuclear PDFs (nCTEQ15, EPS09) undershoot the data by a bit more than 1σ, but agree with the measured cross sections within the combined uncertainties. Here, the probed x₂ values are small; in this region the lead PDFs are poorly constrained, and the corresponding cross sections depend on extrapolations of the PDF parameterization.
Interpreting the above set of results appears complicated, so we will try to break the problem down into smaller components. We now compute the same results as above, but using only 2 flavors (one family) of quarks: {u, d}; specifically, these plots are produced by zeroing the heavy flavor components {s, c, b}, but keeping {u, d} and the gluon. For W⁺ production this eliminates the cs̄ and the (smaller) CKM-suppressed contributions, while for Z production it is the ss̄ contribution which drives the change. While the charm PDF does play a role in the above (the bottom contribution is minimal), the charm distribution is generated radiatively by the g → cc̄ process (we assume no intrinsic component); thus, it is essentially determined by the charm mass value and the gluon PDF. In contrast, the "intrinsic" nature of the strange PDF leads to its comparably large uncertainties. For example, if we compare the free-proton PDF baselines (CTEQ6.1, CT10), the strange quark exhibits substantial differences while the charm (and bottom) distributions are quite similar; this pattern then feeds into the nPDFs. Therefore, the strange quark PDF will be the primary focus of the following discussion.
Examining Fig. 13, the shift of the 2 flavor results compared to the 5 flavor results can be as large as 30%, and reflects the contributions of the strange and charm quarks.
For the 5 flavor case, the calculations are scattered on the low side of the data for both W⁺ and W⁻. The CT10 result is the closest to the data, but due to the larger uncertainties of nCTEQ15, the data point is within range of both of their ellipses. We also observe that the CT10+EPS09 and CTEQ6.1+EPS09 results bracket the nCTEQ15 value; again, this is due to the very different strange PDFs associated with CT10 and CTEQ6.1.
For the 2 flavor case, all the nuclear results (nCTEQ15, CT10+EPS09, CTEQ6.1+EPS09) coalesce, and they are distinct from the non-nuclear result (CT10). This pattern suggests that the nuclear corrections of nCTEQ15 and EPS09 for the {u, d} flavors are quite similar, and the spread observed in the 5 flavor case comes from differences of the strange quark in the underlying base PDFs. Thus we infer that the difference between the nuclear results and the proton result accurately represents the nuclear corrections for the 2 flavor case ({u, d}), but for the 5 flavor case it is a mix of nuclear corrections and variations of the underlying sea quarks.
Fig. 14 displays the same information as Fig. 13, except it is divided into rapidity bins. As we move from negative to positive rapidity, we move from high x₂, where the nPDFs are well constrained, to small x₂, where the nPDFs have large uncertainties (cf., Fig. 2). Thus, it is encouraging that at negative rapidity we uniformly find that the nuclear predictions yield larger cross sections than the proton results (without nuclear corrections) and thus lie closer to the LHC data.
Conversely, at positive rapidity we find the nuclear predictions yield smaller cross sections than the proton results. The comparison with the LHC data varies across the figures, but this situation suggests a number of possibilities.
First, the large nPDF uncertainties in this small-x₂ region could be reduced using the LHC data.
Second, the lower nPDF cross sections are partly due to the nuclear shadowing in the small-x₂ region; if, for example, this shadowing region were shifted to even lower x values, this would increase the nuclear results. Such a shift was observed in Refs. Kovarik:2010uv (); Schienbein:2009kk (); Nakamura:2016cnn () using charged current neutrino-DIS data, and it would move the nuclear predictions of Fig. 11 at positive rapidity toward the LHC data.
Finally, we note that measurements of the strange quark asymmetry Mason:2007zz () indicate that s(x) ≠ s̄(x), unlike what is assumed in the current nPDFs; this would influence the cross sections separately, as (at leading order) Kusina:2012vh () σ_{W⁺} ~ ud̄ + cs̄, σ_{W⁻} ~ dū + sc̄, and σ_Z ~ Σ_q qq̄. As the strange PDF has a large impact on these measurements, this observation could provide incisive information on the individual s and s̄ distributions.
These points are further exemplified in Fig. 15, which displays W production for both 2 and 5 flavors as a function of the lepton rapidity y_ℓ. For large y_ℓ (small lead x₂), the CT10 proton result separates from the collective nuclear results; presumably, this is due to nuclear shadowing at small x₂. Again, we note that in this small-x₂ region there are minimal experimental constraints, and the nPDFs come largely from extrapolation at higher x values. Additionally, by comparing the 2 and 5 flavor results, we clearly see the impact of the heavier flavors, predominantly the strange quark PDF.
Furthermore, the different strange quark PDFs in the baseline sets compared in Figs. 11 and 12 make it challenging to distinguish nuclear effects from differences in the strange quark distributions. Thus, we find that the extraction of the nuclear corrections is intimately intertwined with the extraction of the proton strange PDF, and we must be careful to distinguish each of these effects separately. Fortunately, the above observations can help us disentangle these two effects.
3 Reweighting
In this section we perform a reweighting study to estimate the possible impact of the data on the nCTEQ15 lead PDFs. For this purpose we will use only the pPb data sets.
We refrain from using the PbPb data because the agreement of these data with the current nPDFs is typically much better (in part due to the large uncertainties), so their impact in the reweighting analysis would be minimal. Secondly, factorization in lead-lead collisions is not firmly established theoretically Qiu:2003cg (), so the interpretation may be complicated.
3.1 Basics of PDF reweighting
In this section we summarize the PDF reweighting technique and provide formulas for our specific implementation of this method. Additional details can be found in the literature Giele:1998gw (); Ball:2010gb (); Ball:2011gg (); Sato:2013ika (); Paukkunen:2014zia ().
In preparation for the reweighting, we need to convert the nCTEQ15 set of Hessian error PDFs into a set of PDF replicas Watt:2012tq (); Armesto:2015lrg (), which serve as a representation of the underlying probability distribution. The PDF replicas can be defined by a simple formula,[5]

f_k = f_0 + ½ Σ_{i=1}^{N} ( f_i⁽⁺⁾ − f_i⁽⁻⁾ ) R_{ki},   (7)

[5] A detailed discussion of the construction of replicas from Hessian PDF sets in the case of asymmetric errors can be found in Ref. Hou:2016sho ().
where f_0 represents the best fit (central) PDF, f_i⁽±⁾ are the plus and minus error PDFs corresponding to the eigenvector direction i, and N is the number of eigenvectors defining the Hessian error PDFs. Finally, R_{ki} is a random number from a Gaussian distribution centered at 0 with standard deviation 1, which is different for each replica (k) and each eigenvector direction (i).
After producing the replicas, we can calculate the average and variance of any PDF-dependent observable O as moments of the probability distribution:

⟨O⟩ = (1/N_rep) Σ_{k=1}^{N_rep} O(f_k),
δ⟨O⟩ = √( (1/N_rep) Σ_{k=1}^{N_rep} ( O(f_k) − ⟨O⟩ )² ).   (8)
In particular, this can be done for the PDFs themselves; we should be able to reproduce our central PDF by the average ⟨f⟩, and the (68% c.l.) Hessian error bands by the corresponding variance δ⟨f⟩. Of course, the precision with which we are able to reproduce the Hessian central PDFs and the corresponding uncertainties depends on how well we reproduce the underlying probability distribution, and this will depend on the number of replicas, N_rep, we use. In the following we use a sufficiently large N_rep, which allows for a very good reproduction of both central and error PDFs (within 1% or better).
We note here that, since the nCTEQ15 error PDFs correspond to the 90% confidence level (c.l.), we need to convert the obtained uncertainties so that they correspond to the 68% c.l.[6] The conversion is done using the following approximate relation between the 68% c.l. and 90% c.l. Hessian uncertainties: Δ⁽⁶⁸⁾ ≈ Δ⁽⁹⁰⁾/1.645.
[6] The 68% c.l. is necessary to correspond with the variance of the PDF set defined below.
In Fig. 16 we perform the above exercise to check that our procedure is self-consistent. Specifically, in Fig. 16a we display the central value and uncertainty bands for the original gluon PDF and those generated from the replicas; they are indistinguishable. Additionally, in Fig. 16b we demonstrate the convergence of the average of the replicas to the central Hessian PDF as N_rep is increased. For the largest replica sets, the central gluon is reproduced to better than 1%, except at the highest x values. This is certainly sufficient accuracy considering the size of the PDF errors. Even smaller replica sets yield good results, except at larger x, where the PDFs are vanishing and the uncertainties are large. Since our computational cost will be mostly dictated by the number of Hessian error PDFs, we use a large N_rep to get a better representation of the underlying probability distribution.
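The replica construction of Eq. (7) and the moments of Eq. (8) can be sketched for a single "observable" value per error set; this is a toy illustration under the stated Gaussian assumption (function names are ours):

```python
import random

def make_replicas(f0, f_plus, f_minus, n_rep, seed=0):
    """Replicas f_k = f_0 + 0.5 * sum_i (f_i^+ - f_i^-) * R_ki (Eq. (7)),
    with R_ki drawn from a unit Gaussian.  f_plus/f_minus are lists over
    the N eigenvector directions (here scalar values for illustration)."""
    rng = random.Random(seed)
    deltas = [0.5 * (p - m) for p, m in zip(f_plus, f_minus)]
    return [f0 + sum(d * rng.gauss(0.0, 1.0) for d in deltas)
            for _ in range(n_rep)]

def mean_and_std(values):
    """Average and standard deviation over replicas (Eq. (8))."""
    n = len(values)
    avg = sum(values) / n
    var = sum((v - avg) ** 2 for v in values) / n
    return avg, var ** 0.5

# toy: one eigenvector direction, central value 1.0, plus/minus values 1.2/0.8
reps = make_replicas(1.0, [1.2], [0.8], n_rep=50000)
avg, std = mean_and_std(reps)   # avg ~ 1.0, std ~ 0.2 = 0.5*(1.2 - 0.8)
```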
Having defined the replicas, we can apply the reweighting technique to estimate the impact of a given data set on our current PDFs. The idea is based on Bayes theorem, which states that the posterior distribution representing the probability of a hypothesis (the new probability distribution representing the PDFs if we were to perform a fit including the new data set used in the reweighting) is a product of the prior probability (the PDFs without the new data set) and an appropriate likelihood function. This allows us to assign a weight to each of the replicas generated earlier according to Eq. (7).
In the context of Hessian PDFs using a global tolerance criterion, the appropriate weight definition is given by a modified Giele-Keller expression Giele:1998gw (); Sato:2013ika (); Paukkunen:2014zia (); Armesto:2015lrg ()[7]

w_k = exp( −½ χ²_k / T ) / [ (1/N_rep) Σ_{j=1}^{N_rep} exp( −½ χ²_j / T ) ],   (9)

[7] In the context of Monte Carlo PDF sets the NNPDF weight definition should be used Ball:2011gg ().
where T is the tolerance criterion used when defining the Hessian error PDFs,[8] and χ²_k represents the χ² of the data sets considered in the reweighting procedure for a given replica k. The pPb W and Z data do not provide correlated errors (the published errors are a sum of statistical and systematic errors in quadrature),[9] so it is sufficient for our analysis to use the basic definition of the χ² function, given by

χ²_k = Σ_{j=1}^{N_data} ( D_j − T_j^k )² / σ_j²,   (10)

[8] In the case of the nCTEQ15 PDFs, the tolerance criterion is T = 35, which corresponds to a 90% c.l.; a detailed explanation of how it was defined can be found in Appendix A of Kovarik:2015cma (). The tolerance used in this analysis corresponds to the 68% c.l., which we obtain by rescaling the above: T⁽⁶⁸⁾ = T⁽⁹⁰⁾/1.645².
[9] In our analysis we also add the normalization errors in quadrature to the statistical and systematic ones.
where the index j runs over all data points in the data set(s), N_data is the total number of data points, D_j is the experimental measurement at point j, σ_j is the corresponding experimental uncertainty, and T_j^k is the corresponding theoretical prediction calculated with the PDFs given by replica k.
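Equations (9) and (10) translate directly into code; a minimal sketch for uncorrelated errors (function names are ours):

```python
import math

def chi2(data, sigma, theory):
    """chi^2 = sum_j (D_j - T_j)^2 / sigma_j^2 (Eq. (10), uncorrelated errors)."""
    return sum((d - t) ** 2 / s ** 2 for d, s, t in zip(data, sigma, theory))

def gk_weights(chi2_per_replica, tolerance):
    """Modified Giele-Keller weights w_k ~ exp(-chi2_k / (2T)) (Eq. (9)),
    normalized so that the average weight equals 1."""
    n = len(chi2_per_replica)
    c0 = min(chi2_per_replica)  # subtract the minimum for numerical stability
    raw = [math.exp(-(c - c0) / (2.0 * tolerance)) for c in chi2_per_replica]
    norm = sum(raw) / n
    return [w / norm for w in raw]
```

Subtracting the minimum χ² inside the exponential leaves the weights unchanged (it cancels in the normalization) but avoids underflow for large χ² values.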
With the above prescription we can now calculate the weights needed for the reweighting procedure. The expectation value and variance of any PDF-dependent observable O can now be computed in terms of weighted sums:

⟨O⟩_new = (1/N_rep) Σ_{k=1}^{N_rep} w_k O(f_k),   (11)
δ⟨O⟩_new = √( (1/N_rep) Σ_{k=1}^{N_rep} w_k ( O(f_k) − ⟨O⟩_new )² ).   (12)
For our reweighting analysis we will only use the pPb data sets. Because the uncertainty of the nuclear PDFs dominates over that of the proton PDFs, it is sufficient to vary only the lead PDFs. Consequently, the pPb cross sections are linear in the lead error PDFs, and we can compute the reweighting by evaluating cross sections only for the Hessian error PDFs (32+1 in the case of nCTEQ15) instead of the individual replicas:

σ(f_k) = σ(f_0) + ½ Σ_{i=1}^{N} [ σ(f_i⁽⁺⁾) − σ(f_i⁽⁻⁾) ] R_{ki}.
A similar decomposition can be used for pp or PbPb data to reduce the number of necessary evaluations. However, because of the quadratic dependence on the PDFs, the reduction is smaller and does not necessarily lead to lower computational costs.
We will compare the χ² for each experiment calculated with the initial PDFs (before reweighting) and with the PDFs after the reweighting procedure; this allows us to estimate the impact of each individual data set. We do this using the following formula:

χ² = Σ_{j=1}^{N_data} ( D_j − ⟨T_j⟩ )² / σ_j²,   (13)
where ⟨T_j⟩ is the theory prediction calculated as an average over the (reweighted or non-reweighted) replicas according to Eq. (11) (with or without weights).
Finally, the effectiveness of the reweighting procedure can be (qualitatively) estimated by computing the effective number of replicas, defined as Ball:2011gg ():

N_eff = exp( (1/N_rep) Σ_{k=1}^{N_rep} w_k ln( N_rep / w_k ) ).   (14)
N_eff provides a measure of how many of the replica sets are effectively contributing to the reweighting procedure. By definition, N_eff is restricted to be smaller than N_rep. When N_eff ≪ N_rep, it indicates that there are many replicas whose new weight (after the reweighting procedure) is sufficiently small that they provide a negligible contribution to the updated probability density. This typically happens when the new data are not compatible with the data used in the original fit, or when the new data introduce substantial new information; in both cases, the procedure becomes ineffective and a new global fit is recommended.
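The weighted moments of Eqs. (11)-(12) and the effective number of replicas of Eq. (14) can be sketched as follows (function names are ours; weights are assumed normalized so their average is 1):

```python
import math

def weighted_mean(weights, values):
    """<O>_new = (1/N_rep) sum_k w_k O(f_k) (Eq. (11))."""
    n = len(weights)
    return sum(w * v for w, v in zip(weights, values)) / n

def n_eff(weights):
    """N_eff = exp((1/N_rep) sum_k w_k ln(N_rep / w_k)) (Eq. (14)).
    Replicas with w_k = 0 contribute nothing (w ln(N/w) -> 0)."""
    n = len(weights)
    return math.exp(sum(w * math.log(n / w) for w in weights if w > 0) / n)
```

For uniform weights n_eff returns N_rep, while a single dominant replica drives it toward 1, matching the qualitative behavior described above.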
3.2 Reweighting using CMS rapidity distributions
As an example, we consider the reweighting using the CMS W± production data from pPb collisions Khachatryan:2015hha (). In this example we use the rapidity distributions of the charged leptons originating from the decays of both W⁺ and W⁻ bosons.
In Fig. 17 we display the distribution of the weights obtained from the reweighting procedure. The magnitudes of the weights are reasonable; the distribution peaks in the lowest bin and falls off toward larger weights. It will be useful to compare this distribution with later results using different observables and data sets.
In Fig. 18 we show the comparison of the data to theory before and after the reweighting procedure.[10] As expected, after the reweighting procedure the description of the data is improved. This is true for both the W⁺ (left figure) and W⁻ (right figure) cases. We can quantify the improvement of the fit by examining the χ² for the individual distributions: the χ² is reduced by the reweighting for both the W⁺ and the W⁻ case. The amount of change due to the reweighting procedure should be proportional to the experimental uncertainties of the incorporated data; this is the same as we would expect from a global fit. For the W± production investigated here, the uncertainties are quite substantial, and the effects are compounded by the lack of correlated errors.
[10] We note here the difference of the PDF uncertainties compared to the plots presented in Sec. 2; this is because we now use the 68% c.l. errors, whereas in Sec. 2 we used the 90% c.l. errors that are provided with the nCTEQ15 PDFs. This holds for all plots in Sec. 3.
Finally, we show the effect of the reweighting on the PDFs themselves. In Fig. 19 we display the PDFs for the up quark and the gluon at a low scale Q. We can see that the reweighting has the largest effects in the low-x region, and this holds for the other flavors as well. Generally, the effects at intermediate and large x values are limited, with the exception of the gluon, which is poorly constrained and exhibits a substantial change at large x.
In Figs. 18 and 19, in addition to the reweighting results, we also show results calculated using the Hessian profiling method Paukkunen:2014zia (). The Hessian profiling should agree precisely with our reweighting calculations, so it serves as an independent cross-check of our results. Indeed, in the figures we observe that the profiling exactly matches the reweighted results. In the following figures we display only the reweighting results, but in all presented cases we have checked that the two methods agree.
3.3 Using asymmetries instead of differential cross sections
In this section we re-investigate the reweighting analysis of the previous section employing the CMS $W^\pm$ production data. Instead of using the rapidity distributions (as in the previous section), we use two types of asymmetries constructed from the charged leptons. The lepton charge asymmetry is
$$A(\eta_l) = \frac{N_{l^+} - N_{l^-}}{N_{l^+} + N_{l^-}},$$
defined per bin in the rapidity of the charged lepton, where $N_{l^\pm}$ represents the corresponding number of observed events in a given bin. For the purpose of the theory calculation, $N_{l^\pm}$ is replaced by the corresponding cross section in a given bin.
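For illustration, the per-bin asymmetry and its statistical (Poisson) uncertainty can be computed from event counts as in the following schematic sketch; the counts are hypothetical and the function name is our own, not the experimental analysis code:

```python
import numpy as np

def charge_asymmetry(n_plus, n_minus):
    """Per-bin lepton charge asymmetry A = (N+ - N-)/(N+ + N-),
    with the statistical uncertainty from Gaussian error propagation
    assuming Poisson-distributed counts."""
    n_plus = np.asarray(n_plus, dtype=float)
    n_minus = np.asarray(n_minus, dtype=float)
    total = n_plus + n_minus
    asym = (n_plus - n_minus) / total
    # dA/dN+ = 2 N- / T^2, dA/dN- = -2 N+ / T^2  =>  sigma_A = 2 sqrt(N+ N- T) / T^2
    err = 2.0 * np.sqrt(n_plus * n_minus * total) / total**2
    return asym, err

# hypothetical event counts in three bins of lepton rapidity
a, da = charge_asymmetry([1200, 1500, 1800], [1400, 1500, 1600])
```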
It is useful to consider the expression for the charge asymmetry at leading order in the parton model assuming a diagonal CKM matrix:
$$A = \frac{u(x_1)\bar d(x_2) + \bar d(x_1)u(x_2) + c(x_1)\bar s(x_2) + \bar s(x_1)c(x_2) - \big[d(x_1)\bar u(x_2) + \bar u(x_1)d(x_2) + s(x_1)\bar c(x_2) + \bar c(x_1)s(x_2)\big]}{u(x_1)\bar d(x_2) + \bar d(x_1)u(x_2) + c(x_1)\bar s(x_2) + \bar s(x_1)c(x_2) + d(x_1)\bar u(x_2) + \bar u(x_1)d(x_2) + s(x_1)\bar c(x_2) + \bar c(x_1)s(x_2)}.$$
Here, the partons with momentum fraction $x_1$ are in the proton, and those with momentum fraction $x_2$ are inside the lead. At large negative rapidities (small $x_1$, large $x_2$), we have $q(x_1) \simeq \bar q(x_1)$ for all parton flavors ($q = u, d, s, c$) and the expression for the asymmetry simplifies to the following form
$$A \simeq \frac{u(x_1)\big[\bar d(x_2) - d(x_2)\big] + d(x_1)\big[u(x_2) - \bar u(x_2)\big] + c(x_1)\big[\bar s(x_2) - s(x_2)\big] + s(x_1)\big[c(x_2) - \bar c(x_2)\big]}{u(x_1)\big[\bar d(x_2) + d(x_2)\big] + d(x_1)\big[u(x_2) + \bar u(x_2)\big] + c(x_1)\big[\bar s(x_2) + s(x_2)\big] + s(x_1)\big[c(x_2) + \bar c(x_2)\big]}.$$
Assuming $s(x_2) = \bar s(x_2)$ and $c(x_2) = \bar c(x_2)$, as is the case in all the existing nPDF sets, the expression further simplifies to
$$A \simeq \frac{u_v(x_2) - d_v(x_2)}{u(x_2) + \bar u(x_2) + d(x_2) + \bar d(x_2)}.$$
In the last equation, we have used the fact that the small-$x$ up and down PDFs are very similar, $u(x_1) \simeq d(x_1)$ Arleo:2015dba (). Since lead has an excess of neutrons, $d_v(x_2) > u_v(x_2)$, and we expect the asymmetry to be negative at large negative rapidities. One can also observe that in this limit the asymmetry is insensitive to the proton PDFs, which cancel between the numerator and the denominator. A non-zero strange asymmetry ($s(x_2) \neq \bar s(x_2)$) would lead to a decrease of the asymmetry, thereby improving the description of the CMS data.
Conversely, at large positive rapidities (small $x_2$, large $x_1$), we have $q(x_2) \simeq \bar q(x_2)$ for all parton flavors ($q = u, d, s, c$) and the expression for the asymmetry becomes
$$A \simeq \frac{d(x_2)\big[u(x_1) - \bar u(x_1)\big] + u(x_2)\big[\bar d(x_1) - d(x_1)\big] + s(x_2)\big[c(x_1) - \bar c(x_1)\big] + c(x_2)\big[\bar s(x_1) - s(x_1)\big]}{d(x_2)\big[u(x_1) + \bar u(x_1)\big] + u(x_2)\big[\bar d(x_1) + d(x_1)\big] + s(x_2)\big[c(x_1) + \bar c(x_1)\big] + c(x_2)\big[\bar s(x_1) + s(x_1)\big]}.$$
Again, assuming $s(x_1) = \bar s(x_1)$ and $c(x_1) = \bar c(x_1)$, this expression further simplifies to
$$A \simeq \frac{u_v(x_1) - d_v(x_1)}{u(x_1) + \bar u(x_1) + d(x_1) + \bar d(x_1)},$$
where we have again used $u(x_2) \simeq d(x_2)$ at small $x$. Since $u_v > d_v$ in the proton, we expect a positive asymmetry in the kinematic region of large positive rapidities. Furthermore, the reweighting of the nuclear PDFs will have very little impact on the charge asymmetry in this limit, even if the precision of the data increases in the future.
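The sign expectations in the two limits can be illustrated with a toy leading-order estimate. The numbers below are purely illustrative (not an actual PDF set) and the function is our own construction; they merely mimic $u_v > d_v$ in the proton and $d_v > u_v$ in neutron-rich lead:

```python
def asym(uv, dv, sea):
    """Toy asymmetry A = (u_v - d_v) / (u + ubar + d + dbar),
    with q = q_v + sea and qbar = sea for each light flavor."""
    u, ubar = uv + sea, sea
    d, dbar = dv + sea, sea
    return (uv - dv) / (u + ubar + d + dbar)

# large positive rapidity: proton valence dominates (u_v > d_v)  -> A > 0
a_forward = asym(uv=0.50, dv=0.25, sea=0.05)
# large negative rapidity: lead valence dominates (d_v > u_v)   -> A < 0
a_backward = asym(uv=0.30, dv=0.45, sea=0.05)
```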
Another asymmetry used by CMS is the forward-backward asymmetry. This is defined as a ratio of the number of events in the forward and backward regions for a given rapidity bin:
$$A^{\pm}_{\rm FB}(\eta_l) = \frac{N_{l^\pm}(+\eta_l)}{N_{l^\pm}(-\eta_l)}.$$
This asymmetry is defined separately for the $W^+$ and $W^-$ cases. It can also be combined into a single quantity, the forward-backward asymmetry of charge-summed $W$ bosons:
$$A_{\rm FB}(\eta_l) = \frac{N_{l^+}(+\eta_l) + N_{l^-}(+\eta_l)}{N_{l^+}(-\eta_l) + N_{l^-}(-\eta_l)}.$$
This is the quantity we will use for our analysis in this section.
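Both the per-charge and the charge-summed forward-backward ratios can be formed directly from binned counts (or cross sections). The sketch below uses hypothetical counts and our own function name:

```python
import numpy as np

def afb(n_fwd_plus, n_bwd_plus, n_fwd_minus, n_bwd_minus):
    """Forward-backward ratios per |eta_l| bin:
    per-charge N_{l+-}(+eta)/N_{l+-}(-eta) and the charge-summed combination."""
    fp, bp = np.asarray(n_fwd_plus, float), np.asarray(n_bwd_plus, float)
    fm, bm = np.asarray(n_fwd_minus, float), np.asarray(n_bwd_minus, float)
    afb_plus = fp / bp                 # W+ forward-backward ratio
    afb_minus = fm / bm                # W- forward-backward ratio
    afb_sum = (fp + fm) / (bp + bm)    # charge-summed ratio
    return afb_plus, afb_minus, afb_sum
```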
We now use the two asymmetries defined above to perform a reweighting of the nCTEQ15 lead PDFs. These asymmetries are just combinations of the rapidity distributions used in Sec. 3.2, and if both are employed at the same time they should encode information similar to the rapidity distributions themselves. In the literature it is sometimes argued that the asymmetries are more sensitive to the PDFs and are therefore better suited to PDF fits Paukkunen:2010qg (); Armesto:2015lrg (); Khachatryan:2015hha (). We empirically check this statement by comparing reweighting predictions using the rapidity distributions and the above-mentioned asymmetries.
In the following, we present the results of the reweighting using the lepton charge asymmetry and the forward-backward asymmetry of charge-summed $W$ bosons. In this case, the effective number of replicas $N_{\rm eff}$ is larger than for the rapidity distributions, in line with the more evenly distributed weights.
The distribution of weights is displayed in Fig. 20. Compared to the reweighting using the rapidity distributions directly (Fig. 17), the weights are smaller and more evenly distributed.
In Fig. 21 we show a comparison of data and theory before and after the reweighting procedure. In the case of the charge asymmetry we do not see a large improvement, but this is not surprising as there is already good agreement between data and theory before the reweighting; accordingly, the $\chi^2$/dof changes only slightly.
In the case of the forward-backward asymmetry, the initial agreement between data and theory is not as good, and the corresponding improvement in the $\chi^2$/dof after the reweighting is much larger.
We now show the effect of the reweighting procedure on the PDFs. In Fig. 22 we display the PDFs for the up quark and the gluon at a fixed scale $Q$. In both cases the effect is limited to the low-$x$ region and does not exceed a few percent. The results for other flavors are similar; overall, the asymmetries with the current experimental uncertainties have a rather small effect on the nPDFs.
In particular, using asymmetry ratios yields a reduced impact, at least compared to the rapidity distributions of Sec. 3.2. This is likely because much of the information on the nuclear corrections is lost when constructing the ratios. However, asymmetries can still be useful to explore the very forward and backward regions of the rapidity distributions (corresponding to lower and higher $x$ values) where experimental uncertainties are typically large but can cancel in the ratios.
3.4 Including all the data sets
Due to large experimental uncertainties, the effect of the individual data sets presented in Sec. 2 on the lead PDFs is rather limited. The largest constraints come from the CMS $W^\pm$ data Khachatryan:2015hha () (Sec. 3.2) and from the (preliminary) ATLAS $W^\pm$ data AtlasWpPb (). In order to maximize the effects on the PDFs, we now employ all proton-lead data sets from Tab. 1 to perform the reweighting of the nCTEQ15 lead PDFs. Note that we use both the rapidity distributions and the asymmetries; although this can be regarded as "double counting", it is common practice in proton PDF analyses, e.g. Dulat:2015mca ().
As the impact of the reweighting on the theory predictions for the ALICE $W^\pm$ production data Senosi:2015omk (), the LHCb $Z$ data Aaij:2014pvu (), and both the ATLAS Aad:2015gta () and CMS Khachatryan:2015pzs () $Z$ production data is very small, we do not show the corresponding comparisons of theory predictions before and after the reweighting. We do note that in the majority of these cases the $\chi^2$ improves, indicating that the data sets are compatible, cf. Fig. 24. However, the initial $\chi^2$ for these data sets was already very small, which reflects their large experimental uncertainties and limited constraining power on the nPDFs.
We start by examining the distribution of weights of the new replicas, displayed in Fig. 23. The distribution is steeply falling in a manner similar to the one in Fig. 17 obtained using only the CMS rapidity distributions, but it extends to higher values of the weights. This is not very surprising, as the CMS data set is the one introducing the most constraints. We also note that the reweighting procedure results in an effective number of replicas, $N_{\rm eff}$, which is around 40% of the initial number of replicas. This suggests that the reweighting procedure should still yield reliable results.
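The weights and the effective number of replicas quoted above can be computed as in the following sketch, which uses Giele-Keller-type weights and an entropy-based definition of $N_{\rm eff}$; the exact conventions of the analysis may differ, so this is illustrative only:

```python
import numpy as np

def gk_weights(chi2):
    """Giele-Keller-type weights w_k proportional to exp(-chi2_k/2),
    normalized so that the weights sum to the number of replicas."""
    chi2 = np.asarray(chi2, float)
    logw = -0.5 * (chi2 - chi2.min())   # subtract the minimum for numerical stability
    w = np.exp(logw)
    return len(w) * w / w.sum()

def n_eff(w):
    """Effective number of replicas, N_eff = exp[(1/N) sum_k w_k ln(N/w_k)].
    Equals N for uniform weights; approaches 1 if one replica dominates."""
    w = np.asarray(w, float)
    n = len(w)
    return np.exp(np.sum(w * np.log(n / np.maximum(w, 1e-300))) / n)
```

For uniform weights `n_eff` returns the full replica count; a steeply falling weight distribution, as in Fig. 23, reduces it accordingly.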
Now we turn to the comparison of data with the theory predictions before and after the reweighting procedure. In Fig. 25 we show the predictions for the CMS $W^\pm$ data Khachatryan:2015hha (), and in Fig. 26 the corresponding predictions for the ATLAS $W^\pm$ data AtlasWpPb (). In both cases we observe an improvement in the data description, confirmed by the corresponding $\chi^2$ values (see figures). The $\chi^2$ values also tell us that the largest effect comes from the CMS data, which have smaller errors and for which the initial description (before the reweighting) was worse than in the ATLAS case.
Furthermore, comparing the values of $\chi^2$/dof for the CMS data after the reweighting using all data sets with those using only the CMS data (Sec. 3.2), we see a further improvement when more data are included. This shows that the different data sets are compatible with each other and pull the results in the same direction.
In addition, we show in Fig. 24 the $\chi^2$/dof before and after the reweighting for each of the experiments, as well as the total combining all 102 data points from the different experiments. This highlights the fact that the CMS $W^\pm$ measurement has the largest impact on the PDFs of all the considered data sets.
Finally, in Figs. 27-29 we present the effects of the reweighting on the lead PDFs at a fixed scale $Q$; the effects are similar at other scales. From the figures we can see that the changes in the PDFs mostly affect the low-$x$ distributions, and to a lesser extent the moderate- to high-$x$ distributions.
When considering ratios of PDFs, the effects of the reweighting appear to be quite substantial at large $x$, especially for the gluon; however, as is evident from the plots of the PDFs themselves, the distributions approach zero at large $x$, so the impact on physical observables is minimal.
Furthermore, when interpreting the results of the reweighting analysis it is important to remember that this method can only estimate the effects a given data set might have on the PDFs; it is not equivalent to a full fit. For example, a reweighting analysis cannot be used to explore new parameters or other dimensions that are not already spanned by the original PDF uncertainty basis. In particular, this study has shown that the strange quark PDF can play an important role in $W^\pm/Z$ production in pPb collisions at the LHC. As our current strange distribution is parameterized proportional to the light-quark sea, $\bar u + \bar d$, this restricts our ability to vary the strange PDF independently (this point was explored in more detail in ref. Kusina:2017bsq ()); hence, an independent fit (in progress) is needed to better assess the impact of these data on the nPDFs.
3.5 Comparison with EPPS16
During the course of our analysis, a new global fit including LHC data (EPPS16 Eskola:2016oht ()) has been released. This gives us an opportunity to compare the results of our reweighting study with these new PDFs. We note here that this is a qualitative comparison as the data sets used in these two studies are different. Another important difference is that the EPPS16 fit has more parameters to describe the sea-quark PDFs as compared to the nCTEQ15 analysis; this provides EPPS16 additional flexibility to accommodate all the considered data. As mentioned earlier, our reweighting of nCTEQ15 cannot compensate for our more restrictive parametrization, so this must be considered when evaluating these comparisons.
In Figs. 30 and 31 we present a comparison of a representative set of parton flavors for the nCTEQ15 PDFs, before and after the reweighting, with the EPPS16 distributions at the scale of 80 GeV. There are a number of trends which emerge.
In the low-$x$ region, the reweighted nCTEQ15 PDFs approach the EPPS16 distributions; for some of the flavors the central values are very close. The effect of the reweighting appears mostly in this region, where (prior to the LHC data) there were minimal constraints on the PDFs. Therefore, adding the LHC data can significantly adjust the PDFs in this region.
In the intermediate-$x$ range, the central values of the EPPS16 and of both the initial and reweighted nCTEQ15 PDFs coincide, and their uncertainty bands are also similar (except for the strange quark). This region was already constrained by pre-LHC data, and we observe minimal changes here.
On the contrary, at large $x$ the differences are more pronounced, with no consistent pattern. This is a challenging region, as the absolute value of the PDFs is small, and the nCTEQ15 parameterization may not be sufficiently flexible to accommodate the new data. Additionally, the inclusion of certain data sets in the EPPS16 analysis (such as the CHORUS $\nu$-Pb data Onengut:2005kv ()) can have a significant impact.
Finally, we also see that the EPPS16 PDFs have consistently larger uncertainty bands (especially at low $x$). As the nCTEQ15 uncertainty bands in this region are essentially extrapolated from larger-$x$ results, the EPPS16 uncertainties are probably a more realistic assessment. The issue of PDF parameterization is a perennial challenge for the nuclear PDFs, as there are fewer data and more degrees of freedom than for the proton PDFs. The common solution is to impose assumptions on the nPDF parameters, or to limit the flexibility of the parameterization, and thereby underestimate the uncertainty. These issues highlight the importance of including the new LHC data in the nPDF analyses, as it will not only help determine the central fits but also provide more reliable error estimates.
4 Conclusions
We have presented a comprehensive study of vector boson ($W^\pm$, $Z$) production in pPb and PbPb collisions at the LHC. This LHC lead data is of particular interest for a number of reasons.
Comparisons with LHC proton data can determine nuclear corrections at large $Q^2$ values; this is a kinematic range very different from that of the nuclear corrections provided by fixed-target measurements.
The lead data are sensitive to the heavier quark flavors (especially the strange PDF), so this provides important information on the nuclear flavor decomposition.
Improved information on the nuclear corrections from the LHC lead data can also help reduce proton PDF uncertainties as fixed-target nuclear data is essential for distinguishing the individual flavors.
Predictions from the recent nCTEQ15 nPDFs are generally compatible with the LHC experimental data; however, this is partially due to the large uncertainties of both the nuclear corrections and the data. We do see suggestive trends (for example, $W^\pm$ production in pPb at large rapidity) which may impose influential constraints on the nPDF fits as the experimental uncertainties are reduced. Intriguingly, the large-rapidity data seem to prefer nuclear PDFs with no or delayed shadowing at small $x$, similar to what has been observed in $\nu$-Fe DIS. This observation was validated by our reweighting study, which demonstrated the impact of the pPb data on the nPDFs.
The uncertainties of the currently available data are relatively large, and correlated errors are not yet available. Fortunately, we can look forward to more data (with improved statistics) in the near future as additional heavy ion runs are scheduled.
While the above reweighting technique provides a powerful method to quickly assess the impact of new data, there are limitations. For example, the reweighting method cannot introduce or explore new degrees of freedom. Thus, if the original fit imposes artificial constraints (such as linking the strange PDF to the up and down sea distributions), this limitation persists for the reweighted PDF Kusina:2017bsq ().
Most importantly, our correlation study (Sec. 2.4) demonstrated the importance of the strange distribution for vector boson ($W^\pm$, $Z$) production at the LHC, possibly even pointing to a nuclear strangeness asymmetry ($s \neq \bar s$). The comparison of the 2-flavor and 5-flavor results illustrates how the flavor decomposition and the nuclear corrections can become entangled. Therefore, it is imperative to control the strange PDF and the nuclear correction factor separately if we are to obtain unambiguous results. The investigations performed in this paper provide a foundation for improving our determination of the PDFs in lead, especially the strange quark component. Combining this information in a new nCTEQ fit across the full $x$ range can produce improved nPDFs, and thus improved nuclear correction factors. These improved nuclear correction factors, together with the LHC $W^\pm/Z$ production data, can refine our knowledge of the strange PDF in the proton.
The authors would like to thank J.F. Owens, F. Petriello, R. Plačakytė, and V. Radescu for valuable discussions. We acknowledge the hospitality of CERN, DESY, and Fermilab where a portion of this work was performed. This work was also partially supported by the U.S. Department of Energy under Grant No. DE-SC0010129 and by the National Science Foundation under Grant No. NSF PHY11-25915.
- (1) G. Aad, et al., Phys. Rev. C92(4), 044915 (2015). DOI 10.1103/PhysRevC.92.044915
- (2) The ATLAS collaboration, ATLAS-CONF-2015-056 (2015)
- (3) V. Khachatryan, et al., Phys. Lett. B759, 36 (2016). DOI 10.1016/j.physletb.2016.05.044
- (4) V. Khachatryan, et al., Phys. Lett. B750, 565 (2015). DOI 10.1016/j.physletb.2015.09.057
- (5) R. Aaij, et al., JHEP 09, 030 (2014). DOI 10.1007/JHEP09(2014)030
- (6) K.J. Senosi, PoS Bormio2015, 042 (2015)
- (7) G. Aad, et al., Phys. Rev. Lett. 110(2), 022301 (2013). DOI 10.1103/PhysRevLett.110.022301
- (8) G. Aad, et al., Eur. Phys. J. C75(1), 23 (2015). DOI 10.1140/epjc/s10052-014-3231-6
- (9) S. Chatrchyan, et al., JHEP 03, 022 (2015). DOI 10.1007/JHEP03(2015)022
- (10) S. Chatrchyan, et al., Phys. Lett. B715, 66 (2012). DOI 10.1016/j.physletb.2012.07.025
- (11) H. Paukkunen, C.A. Salgado, JHEP 03, 071 (2011). DOI 10.1007/JHEP03(2011)071
- (12) N. Armesto, H. Paukkunen, J.M. Penín, C.A. Salgado, P. Zurita, Eur. Phys. J. C76(4), 218 (2016). DOI 10.1140/epjc/s10052-016-4078-9
- (13) P. Ru, S.A. Kulagin, R. Petti, B.W. Zhang, Phys. Rev. D94(11), 113013 (2016). DOI 10.1103/PhysRevD.94.113013
- (14) K.J. Eskola, H. Paukkunen, C.A. Salgado, JHEP 04, 065 (2009). DOI 10.1088/1126-6708/2009/04/065
- (15) D. de Florian, R. Sassot, P. Zurita, M. Stratmann, Phys. Rev. D85, 074028 (2012). DOI 10.1103/PhysRevD.85.074028
- (16) K.J. Eskola, P. Paakkinen, H. Paukkunen, C.A. Salgado, Eur. Phys. J. C77(3), 163 (2017). DOI 10.1140/epjc/s10052-017-4725-9
- (17) R.D. Ball, et al., JHEP 04, 040 (2015). DOI 10.1007/JHEP04(2015)040
- (18) L.A. Harland-Lang, A.D. Martin, P. Motylinski, R.S. Thorne, Eur. Phys. J. C75(5), 204 (2015). DOI 10.1140/epjc/s10052-015-3397-6
- (19) S. Dulat, T.J. Hou, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, J. Pumplin, C. Schmidt, D. Stump, C.P. Yuan, Phys. Rev. D93(3), 033006 (2016). DOI 10.1103/PhysRevD.93.033006
- (20) H. Khanpour, S. Atashbar Tehrani, Phys. Rev. D93(1), 014026 (2016). DOI 10.1103/PhysRevD.93.014026
- (21) K. Kovarik, et al., Phys. Rev. D93(8), 085037 (2016). DOI 10.1103/PhysRevD.93.085037
- (22) R. Gavin, Y. Li, F. Petriello, S. Quackenbush, Comput. Phys. Commun. 182, 2388 (2011). DOI 10.1016/j.cpc.2011.06.008
- (23) R. Gavin, Y. Li, F. Petriello, S. Quackenbush, Comput. Phys. Commun. 184, 208 (2013). DOI 10.1016/j.cpc.2012.09.005
- (24) H.L. Lai, M. Guzzi, J. Huston, Z. Li, P.M. Nadolsky, J. Pumplin, C.P. Yuan, Phys. Rev. D82, 074024 (2010). DOI 10.1103/PhysRevD.82.074024
- (25) A.D. Martin, W.J. Stirling, R.S. Thorne, G. Watt, Eur. Phys. J. C63, 189 (2009). DOI 10.1140/epjc/s10052-009-1072-5
- (26) A. Kusina, T. Stavreva, S. Berge, F.I. Olness, I. Schienbein, K. Kovarik, T. Jezo, J.Y. Yu, K. Park, Phys. Rev. D85, 094028 (2012). DOI 10.1103/PhysRevD.85.094028
- (27) K. Kovarik, I. Schienbein, F.I. Olness, J.Y. Yu, C. Keppel, J.G. Morfin, J.F. Owens, T. Stavreva, Phys. Rev. Lett. 106, 122301 (2011). DOI 10.1103/PhysRevLett.106.122301
- (28) I. Schienbein, J.Y. Yu, K. Kovarik, C. Keppel, J.G. Morfin, F. Olness, J.F. Owens, Phys. Rev. D80, 094004 (2009). DOI 10.1103/PhysRevD.80.094004
- (29) S.X. Nakamura, et al., Rept. Prog. Phys. 80(5), 056301 (2017). DOI 10.1088/1361-6633/aa5e6c
- (30) P.M. Nadolsky, H.L. Lai, Q.H. Cao, J. Huston, J. Pumplin, D. Stump, W.K. Tung, C.P. Yuan, Phys. Rev. D78, 013004 (2008). DOI 10.1103/PhysRevD.78.013004
- (31) D. Mason, et al., Phys. Rev. Lett. 99, 192001 (2007). DOI 10.1103/PhysRevLett.99.192001
- (32) J.w. Qiu, hep-ph/0305161 (2003)
- (33) W.T. Giele, S. Keller, Phys. Rev. D58, 094023 (1998). DOI 10.1103/PhysRevD.58.094023
- (34) R.D. Ball, V. Bertone, F. Cerutti, L. Del Debbio, S. Forte, A. Guffanti, J.I. Latorre, J. Rojo, M. Ubiali, Nucl. Phys. B849, 112 (2011). DOI 10.1016/j.nuclphysb.2011.03.017,10.1016/j.nuclphysb.2011.10.024,10.1016/j.nuclphysb.2011.09.011. [Erratum: Nucl. Phys.B855,927(2012)]
- (35) R.D. Ball, V. Bertone, F. Cerutti, L. Del Debbio, S. Forte, A. Guffanti, N.P. Hartland, J.I. Latorre, J. Rojo, M. Ubiali, Nucl. Phys. B855, 608 (2012). DOI 10.1016/j.nuclphysb.2011.10.018
- (36) N. Sato, J.F. Owens, H. Prosper, Phys. Rev. D89(11), 114020 (2014). DOI 10.1103/PhysRevD.89.114020
- (37) H. Paukkunen, P. Zurita, JHEP 12, 100 (2014). DOI 10.1007/JHEP12(2014)100
- (38) G. Watt, R.S. Thorne, JHEP 08, 052 (2012). DOI 10.1007/JHEP08(2012)052
- (39) T.J. Hou, et al., JHEP 03, 099 (2017). DOI 10.1007/JHEP03(2017)099
- (40) F. Arleo, E. Chapon, H. Paukkunen, Eur. Phys. J. C76(4), 214 (2016). DOI 10.1140/epjc/s10052-016-4049-1
- (41) A. Kusina, F. Lyonnet, D.B. Clark, E. Godat, T. Jezo, K. Kovarik, F.I. Olness, I. Schienbein, J.Y. Yu, Acta Phys. Polon. B48, 1035 (2017). DOI 10.5506/APhysPolB.48.1035
- (42) G. Onengut, et al., Phys. Lett. B632, 65 (2006). DOI 10.1016/j.physletb.2005.10.062