An analysis of the impact of LHC Run I proton-lead data on nuclear parton densities
We report on an analysis of the impact of available experimental data on hard processes in proton-lead collisions during Run I at the Large Hadron Collider on nuclear modifications of parton distribution functions. Our analysis is restricted to the EPS09 and DSSZ global fits. The measurements that we consider comprise the production of massive gauge bosons, jets, charged hadrons and pions. This is the first time that such a variety of observables has been included in a study of nuclear PDFs. The goal of the paper is twofold: i) checking, in a quantitative manner, the description of the data by nPDFs, as well as the relevance of these nuclear effects; ii) testing the constraining power of these data in eventual global fits, for which we use the Bayesian reweighting technique. We find an overall good, even too good, description of the data, indicating that more constraining power would require better control over the systematic uncertainties and/or the proper proton-proton reference from LHC Run II. Some of the observables, however, show sizeable tension with specific choices of proton and nuclear PDFs. We also comment on the corresponding improvements in the theoretical treatment.
1 Introduction
The main physics motivations Salgado:2011wc () for the proton-lead (p-Pb) collisions at the Large Hadron Collider (LHC) were to obtain a reliable baseline for the heavy-ion measurements and to shed light on the partonic behaviour of the nucleus, particularly at small values of the momentum fraction $x$. As such, this programme constitutes a logical continuation of the deuteron-gold (d-Au) experiments at the Relativistic Heavy-Ion Collider (RHIC), but at significantly higher energy. The p-Pb data have, however, proved richer than initially anticipated and have also entailed genuine surprises (see the review Armesto:2015ioy ()).
One of the key ingredients in interpreting the p-Pb data is the set of nuclear parton distribution functions (nPDFs) Eskola:2012rg (); Paukkunen:2014nqa (). It is now more than three decades since large nuclear effects in deeply inelastic scattering were first, and unexpectedly, found (for a review, see Ref. Arneodo:1992wf ()); these were later shown to be factorisable into the PDFs Eskola:1998iy (). However, the amount and variety of experimental data that enter the global determinations of nPDFs has been very limited, and the universality of the nPDFs has remained largely a conjecture, although no clear violation has been found to date. The new experimental data from the LHC p-Pb run give a novel opportunity to further test these ideas and also to provide new constraints. The aim of this paper is, on the one hand, to chart the importance of nPDFs in describing the data (both globally and separately for individual data sets) and, on the other hand, to estimate the quantitative constraints that these data provide. The latter question would traditionally have required a complete reanalysis, adding the new data on top of the old ones. Luckily, faster methods, collectively known as reweighting techniques, have been developed Giele:1998gw (); Ball:2010gb (); Ball:2011gg (); Watt:2012tq (); Watt:2013oha (); Sato:2013ika (); Paukkunen:2014zia ().
In a preceding work Armesto:2013kqa (), a specific version Watt:2012tq () of the Bayesian reweighting technique was employed to survey the potential impact of the p-Pb programme on nPDFs by using pseudodata. However, at that point the reweighting method was not yet completely understood, and certain caution regarding the results had to be exercised. With the developments of Ref. Paukkunen:2014zia (), we can now apply the Bayesian reweighting more reliably. Also, instead of pseudodata, we can now use the available p-Pb measurements. We will perform the analysis with two different sets of nPDFs (EPS09 Eskola:2009uj () and DSSZ deFlorian:2011fp ()) and, in order to control the bias coming from choosing a specific free-proton reference, we will consider two sets of proton PDFs (MSTW2008 Martin:2009iq () and CT10 Lai:2010vv ()).
The paper is organised as follows: in Section 2 we briefly explain the Bayesian reweighting, devoting Section 3 to the observables included in the present analysis. In Section 4 we show the impact of the data on the nPDFs, and discuss similarities and differences between the four possible PDF-nPDF combinations. Finally, in Section 5 we summarise our findings.
2 The reweighting procedure
2.1 The Bayesian reweighting method
The Bayesian reweighting technique Giele:1998gw (); Ball:2010gb (); Ball:2011gg (); Watt:2012tq (); Watt:2013oha (); Sato:2013ika (); Paukkunen:2014zia () is a tool to quantitatively determine the implications of new data within a set of PDFs. In this approach, the probability distribution of an existing PDF set is represented by an ensemble of PDF replicas $f_k$, $k = 1, \ldots, N_{\rm rep}$, and the expectation value and variance for an observable $\mathcal{O}$ can be computed as
$$\langle \mathcal{O} \rangle = \frac{1}{N_{\rm rep}} \sum_{k=1}^{N_{\rm rep}} \mathcal{O}[f_k], \qquad \delta \mathcal{O} = \left[ \frac{1}{N_{\rm rep}} \sum_{k=1}^{N_{\rm rep}} \left( \mathcal{O}[f_k] - \langle \mathcal{O} \rangle \right)^2 \right]^{1/2}.$$
Additional information from a new set of data can now be incorporated, by Bayes' theorem, as
$$P_{\rm new}(f) \propto P({\rm data}|f) \, P_{\rm old}(f),$$
where $P({\rm data}|f)$ stands for the conditional probability of the new data for a given set of PDFs $f$. It follows that the average value of any observable depending on the PDFs becomes a weighted average,
$$\langle \mathcal{O} \rangle_{\rm new} = \frac{1}{N_{\rm rep}} \sum_{k=1}^{N_{\rm rep}} w_k \, \mathcal{O}[f_k],$$
where the weights $w_k$ are proportional to the likelihood function $P({\rm data}|f_k)$. For PDF sets with uncertainties based on the Hessian method (with $N_{\rm eig}$ eigenvalues resulting in $2 N_{\rm eig} + 1$ members) and a fixed tolerance $\Delta\chi^2$ (which is the case in the present study), the functional form of the likelihood function that corresponds to a refit Paukkunen:2014zia () is
$$w_k = \frac{\exp\left[ -\chi^2_k / (2 \Delta\chi^2) \right]}{\frac{1}{N_{\rm rep}} \sum_{j=1}^{N_{\rm rep}} \exp\left[ -\chi^2_j / (2 \Delta\chi^2) \right]}, \qquad \chi^2_k = \sum_{i,j} \left( y_i[f_k] - d_i \right) C^{-1}_{ij} \left( y_j[f_k] - d_j \right),$$
where $y_i[f_k]$ denote the theory values computed with the replica $f_k$, $d_i$ the experimental data points,
and $C$ is the covariance matrix. The ensemble of PDFs required by this approach is defined by
$$f_k = f_{S_0} + \frac{1}{2} \sum_{i=1}^{N_{\rm eig}} \left[ f_{S_i^+} - f_{S_i^-} \right] R_{ik},$$
where $f_{S_0}$ is the central fit, and $f_{S_i^\pm}$ are the $i$th error sets. The coefficients $R_{ik}$ are random numbers drawn from a Gaussian distribution centred at zero and with unit variance. After the reweighting, the uncertainties $\delta\mathcal{O}_{\rm new}$ are evaluated as
$$\delta \mathcal{O}_{\rm new} = \left[ \frac{1}{N_{\rm rep}} \sum_{k=1}^{N_{\rm rep}} w_k \left( \mathcal{O}[f_k] - \langle \mathcal{O} \rangle_{\rm new} \right)^2 \right]^{1/2},$$
where the weights $w_k$ are computed as in Eq. (4). An additional quantity in the Bayesian method is the effective number of replicas $N_{\rm eff}$, a useful indicator defined as
$$N_{\rm eff} = \exp\left\{ \frac{1}{N_{\rm rep}} \sum_{k=1}^{N_{\rm rep}} w_k \ln\left( N_{\rm rep}/w_k \right) \right\}.$$
Having $N_{\rm eff} \ll N_{\rm rep}$ indicates that some of the replicas describe the data significantly better than others, and that the method becomes inefficient. In this case a very large number of replicas may be needed to obtain a convergent result. In this work we have taken $N_{\rm rep}$ large enough for this not to be an issue.
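The steps above can be sketched in a few lines of code. The following is a minimal illustration (not the code used in the analysis): `reweight` implements the refit-equivalent weights with a tolerance parameter, `expectation` the (weighted) averages and uncertainties, and `n_effective` the effective number of replicas; the function names and the toy chi-squared values are ours.

```python
import numpy as np

def reweight(chi2, tolerance=1.0):
    """Bayesian weights for Hessian-set replicas (refit-equivalent
    likelihood; `tolerance` plays the role of the fixed Delta chi^2)."""
    chi2 = np.asarray(chi2, dtype=float)
    n_rep = chi2.size
    # Subtract the minimum for numerical stability (cancels in the ratio).
    w = np.exp(-(chi2 - chi2.min()) / (2.0 * tolerance))
    w *= n_rep / w.sum()          # normalise so that sum(w) = N_rep
    return w

def expectation(obs, w=None):
    """Weighted average <O> and uncertainty over replicas;
    w=None reproduces the prior (unweighted) average."""
    obs = np.asarray(obs, dtype=float)
    if w is None:
        w = np.ones_like(obs)
    mean = np.mean(w * obs)
    err = np.sqrt(np.mean(w * (obs - mean) ** 2))
    return mean, err

def n_effective(w):
    """Effective number of replicas, N_eff = exp[<w ln(N_rep/w)>]."""
    n_rep = w.size
    wp = np.where(w > 0, w, 1.0)  # w ln(N_rep/w) -> 0 as w -> 0
    return np.exp(np.mean(np.where(w > 0, w * np.log(n_rep / wp), 0.0)))
```

As a sanity check, replicas with identical $\chi^2_k$ receive unit weights and give $N_{\rm eff} = N_{\rm rep}$, while a replica with a smaller $\chi^2_k$ is weighted up relative to the others.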
2.2 Bayesian reweighting in the linear case
The reweighting procedure begins by generating the replicas according to Eq. (8), which are then used to compute the observables required to evaluate the values of $\chi^2_k$ that determine the weights. In general, this involves looping the computational codes over the replicas, which can render the work quite CPU-time consuming. There is, however, a way to reduce the required time if the PDFs that we are interested in enter the computation linearly. Let us exemplify this with a hard-process cross section in p-Pb collisions. The cross section corresponding to the $k$th replica can be schematically written as
$$\sigma^k = f^{\rm p} \otimes \hat{\sigma} \otimes f^{\rm Pb}_k,$$
where $\otimes$ denotes in aggregate the kinematic integrations and the summations over the partonic species. If we now replace $f^{\rm Pb}_k$ by Eq. (8), we have
$$\sigma^k = f^{\rm p} \otimes \hat{\sigma} \otimes \left( f^{\rm Pb}_{S_0} + \frac{1}{2} \sum_{i=1}^{N_{\rm eig}} \left[ f^{\rm Pb}_{S_i^+} - f^{\rm Pb}_{S_i^-} \right] R_{ik} \right),$$
which can be written as
$$\sigma^k = \sigma_0 + \frac{1}{2} \sum_{i=1}^{N_{\rm eig}} \left[ \sigma_i^+ - \sigma_i^- \right] R_{ik},$$
where $\sigma_0$ is the cross section obtained with the central set, and $\sigma_i^\pm$ are the cross sections evaluated with the error sets. In this way, only $2 N_{\rm eig} + 1$ (31 for EPS09, 51 for DSSZ) cross-section evaluations are required, instead of $N_{\rm rep}$.
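As a concrete illustration of this shortcut, the snippet below builds all the replica cross sections from a handful of precomputed numbers via a single matrix product; the numerical values are toy stand-ins, not the result of an actual NLO computation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for precomputed cross sections: the central-set value
# sigma_0 and the values obtained with the error sets S_i^+/-
# (for EPS09, N_eig = 15, i.e. 31 evaluations in total).
n_eig = 15
sigma_0 = 1.0
delta = 0.02 * rng.standard_normal(n_eig)   # hypothetical error-set spread
sigma_plus = sigma_0 + delta
sigma_minus = sigma_0 - delta

# Gaussian coefficients R_ik defining the replicas, as in Eq. (8).
n_rep = 10_000
R = rng.standard_normal((n_eig, n_rep))

# sigma_k = sigma_0 + (1/2) sum_i (sigma_i^+ - sigma_i^-) R_ik:
# one matrix product replaces N_rep full cross-section evaluations.
sigma_k = sigma_0 + 0.5 * (sigma_plus - sigma_minus) @ R
```

By construction, the replica average reproduces the central-set value up to statistical fluctuations that vanish as $N_{\rm rep}$ grows.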
3 Comparison with the experimental data
All the data used in this work were obtained at the LHC during Run I, in p-Pb collisions at a centre-of-mass energy per nucleon of $\sqrt{s_{\rm NN}} = 5.02$ TeV: $W^\pm$ production from ALICE and CMS, $Z$ production from ATLAS and CMS, jets from ATLAS, dijets from CMS, charged hadrons from ALICE and CMS, and pions from ALICE. Some of them are published as absolute distributions and some as ratios. We refrain from directly using the absolute distributions, as they are typically more sensitive to the free-proton PDFs and not so much to the nuclear modifications. In ratios of cross sections, the dependence on the free-proton PDFs usually becomes suppressed. The ideal observable would be the nuclear modification factor $R_{\rm pPb}$, the ratio of the p-Pb cross section to the appropriately scaled p-p one. However, no direct p-p measurement exists yet at the same centre-of-mass energy, and such a reference is sometimes constructed by the experimental collaborations from their results at $\sqrt{s} = 2.76$ and $7$ TeV. This brings forth a non-trivial normalisation issue and, with the intention of avoiding it, we decided to use (whenever possible) ratios between different rapidity windows instead; this situation is expected to improve greatly in the near future thanks to the reference p-p run at $\sqrt{s} = 5.02$ TeV from LHC Run II. We note that, apart from the luminosity, no information on the correlated systematic uncertainties is given by the experimental collaborations. Thus, when constructing ratios of cross sections, we had no other option than to add all the uncertainties in quadrature. In the (frequent) cases where the systematic uncertainties dominate, this amounts to overestimating the uncertainties, which is sometimes reflected in absurdly small values of $\chi^2$. The fact that the information on the correlations is not available undermines the usefulness of the data for constraining theory calculations. This is a clear deficiency of the measurements, and we call for publishing the information on the correlations, as is usually done in the case of p-p and p-$\bar{\rm p}$ collisions.
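For illustration, the quadrature treatment described above can be encoded as follows; the helper name and the numerical yields are hypothetical, and the propagation assumes fully uncorrelated uncertainties, which is exactly the (over)simplification forced upon us by the missing correlation information.

```python
import math

def forward_to_backward(fwd, fwd_errs, bwd, bwd_errs):
    """Forward-to-backward ratio R_FB = Y(forward)/Y(backward) for two
    measured yields, with all quoted uncertainty components (statistical
    and systematic) added in quadrature and propagated as if
    uncorrelated -- the only option when the experimental correlations
    are not published."""
    r = fwd / bwd
    # Total relative error of each yield: quadrature sum of components.
    rel_f = math.sqrt(sum((e / fwd) ** 2 for e in fwd_errs))
    rel_b = math.sqrt(sum((e / bwd) ** 2 for e in bwd_errs))
    # Uncorrelated propagation for a ratio: relative errors in quadrature.
    return r, r * math.hypot(rel_f, rel_b)

# Hypothetical yields with (stat, syst) uncertainty components:
r, dr = forward_to_backward(95.0, (3.0, 8.0), 110.0, (3.5, 9.0))
```

When the systematic component dominates and is in fact correlated between the two rapidity windows, this recipe inflates the error on the ratio, which is how the absurdly small $\chi^2$ values discussed above arise.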
It is also worth noting that we (almost) only use minimum bias p-Pb data. While centrality dependent data are also available, it is known that any attempt to classify event centrality results in imposing a non-trivial bias on the hard-process observable in question, see e.g. Ref. Armesto:2015kwa ().
Note that not all PDF+nPDF combinations will be shown in the figures to limit the number of plots. Moreover, the post-reweighting results are not shown when they become visually indistinguishable from the original ones.
3.1 Charged electroweak bosons
Charged electroweak bosons ($W^+$ and $W^-$) decaying into leptons have been measured by the ALICE Zhu:2015kpa () and CMS Khachatryan:2015hha () collaborations. (Also preliminary ATLAS data have been shown ATLASWprelim (), and they appear consistent with the CMS results.) The theoretical values were computed at next-to-leading-order (NLO) accuracy using the Monte Carlo generator MCFM Campbell:2011bn (), fixing all the QCD scales to the mass of the $W$ boson.
The preliminary ALICE data include events with charged leptons having $p_T > 10$ GeV/$c$ at forward ($2.03 < y_{\rm cms} < 3.53$) and backward ($-4.46 < y_{\rm cms} < -2.96$) rapidities in the nucleon-nucleon centre-of-mass (c.m.) frame. From these, we constructed "forward-to-backward" ratios as
$$R_{\rm FB} \equiv \frac{\sigma({\rm forward})}{\sigma({\rm backward})}.$$
A data-versus-theory comparison is presented in Figure 1. While the theoretical predictions do agree with the experimental values, the experimental error bars are quite large. Table 1 (the left-hand columns) lists the corresponding values of $\chi^2/N_{\rm data}$ before the reweighting, together with those obtained assuming no nuclear modifications in the PDFs. It is clear that these data have no resolution with respect to the nuclear effects in the PDFs.
The CMS collaboration has measured laboratory-frame pseudorapidity ($\eta_{\rm lab}$) dependent differential cross sections in the range $|\eta_{\rm lab}| < 2.4$, with the transverse momentum of the measured leptons required to satisfy $p_T > 25$ GeV. The measured forward-to-backward ratios are compared to the theory computations in Figure 2 and the $\chi^2/N_{\rm data}$ values are given in Table 1 (the right-hand columns). While the data are roughly compatible with all the PDF combinations, they show a clear preference for nuclear corrections as implemented in EPS09 and DSSZ. These measurements probe the nuclear PDFs over a wide range in $x$ (from the most forward to the most backward bin), and the nuclear effects in the forward-to-backward ratio result from the sea-quark shadowing (small $x$) being divided by the antishadowing in the valence quarks. While the impact of these data looks somewhat limited here, they may be helpful for constraining the flavour separation of the nuclear modifications. However, as both EPS09 and DSSZ assume flavour-independent sea and valence-quark modifications at the parametrisation scale (i.e. the initial scale for the DGLAP evolution), the present analysis cannot address to which extent this may happen. (During our analysis, an extraction of nPDFs with flavour separation was released Kovarik:2015cma ().)
3.2 Z boson production
The $Z$ boson production in its dilepton decay channel has been measured by three collaborations: CMS CMS:2015vqa (), ATLAS Aad:2015gta () and LHCb Aaij:2014pvu (). (The statistical uncertainties of the two LHCb data points are huge, so we do not consider them here as they provide no constraining power.) As in the case of the $W^\pm$, the theoretical values were computed using MCFM, with all scales fixed to the invariant mass of the lepton pair.
In the case of CMS, the kinematic cuts are similar to the ones applied for $W$ bosons: the leptons are measured within $|\eta_{\rm lab}| < 2.4$ with a slightly lower minimum transverse momentum for both leptons ($p_T > 20$ GeV), and the invariant mass of the pair is restricted to $60 < M_{\ell\ell} < 120$ GeV. The data are binned as a function of the rapidity of the lepton pair, $y_{\rm cms}$. Figure 3 presents a comparison between the data and theory values before the reweighting (NNE stands for no nuclear modification of parton densities, but includes isospin effects) and Table 2 (the right-hand column) lists the $\chi^2/N_{\rm data}$ values. The data appear to slightly prefer the calculations which include nuclear modifications. Similarly to the case of $W$ production, the use of nuclear PDFs leads to a suppression at forward rapidities. The rapid fall-off of the cross section towards large $y_{\rm cms}$ comes from the fact that the lepton pseudorapidity acceptance is not symmetric in the nucleon-nucleon c.m. frame. Indeed, the range $|\eta_{\rm lab}| < 2.4$ translates to $-2.87 < \eta_{\rm cms} < 1.94$ and, since there is less open phase space in the forward direction, the cross sections at a given $y_{\rm cms} > 0$ tend to be lower than those at $-y_{\rm cms}$. This is clearly an unwanted feature, since it gives rise to higher theoretical uncertainties (which we ignore in the present study) than if a symmetric acceptance (e.g. $|\eta_{\rm cms}| < 1.94$) had been used.
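The asymmetry of the acceptance follows from the boost between the laboratory and nucleon-nucleon c.m. frames. A quick numerical check of the rapidity shift, using the Run I beam configuration (4 TeV protons on lead with $Z/A = 82/208$) in the massless-beam approximation:

```python
import math

# Run I p-Pb beam energies: the proton beam at 4 TeV and the lead beam
# at 4 TeV x Z/A = 4 x 82/208 ~ 1.58 TeV per nucleon, giving
# sqrt(s_NN) ~ 5.02 TeV and a rapidity shift of the nucleon-nucleon
# c.m. frame with respect to the laboratory frame.
E_p = 4000.0                   # proton beam energy [GeV]
E_Pb = 4000.0 * 82 / 208       # lead beam energy per nucleon [GeV]

sqrt_s_nn = 2.0 * math.sqrt(E_p * E_Pb) / 1000.0   # [TeV], ~5.02
y_shift = 0.5 * math.log(E_p / E_Pb)               # ~0.465, towards the proton

# A laboratory acceptance |eta_lab| < 2.4 expressed in the c.m. frame:
eta_min, eta_max = -2.4 - y_shift, 2.4 - y_shift
```

The shifted window reaches only to about 1.93 units in the forward (proton-going) direction but down to about -2.87 backwards, which is the origin of the reduced forward phase space noted above.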
The ATLAS data correspond to the full phase space of the daughter leptons, with the invariant mass of the pair within $66 < M_{\ell\ell} < 116$ GeV. The data are only available as absolute cross sections, from which we have constructed the forward-to-backward ratio $R_{\rm FB}$. A comparison between the theoretical predictions (with and without nuclear modifications) and the experimental values before the reweighting can be seen in Figure 4 and the $\chi^2/N_{\rm data}$ values are given in Table 2 (the left-hand column). The calculations including the nuclear modifications are now clearly preferred. Due to the larger phase space, the ratio is now significantly closer to unity than in Figure 3.
3.3 Jets & di-jets
Jet and di-jet distributions were computed at NLO Frixione:1995ms (); Frixione:1997np (); Frixione:1997ks () and compared with the results from the ATLAS ATLAS:2014cpa () and CMS Chatrchyan:2014hqa () collaborations, respectively. The factorisation and renormalisation scales were fixed to half the sum of the transverse energies of all (2 or 3) jets in the event. For the ATLAS jets we used the anti-$k_T$ algorithm Cacciari:2008gp () with $R = 0.4$. For the CMS di-jets we used the anti-$k_T$ algorithm with $R = 0.3$; only jets within the acceptance $|\eta_{\rm lab}| < 3$ were accepted, and the hardest (1) and next-to-hardest (2) jets had to fulfil the conditions $p_{T,1} > 120$ GeV/c and $p_{T,2} > 30$ GeV/c, with their azimuthal distance satisfying $\Delta\phi_{1,2} > 2\pi/3$.
The ATLAS collaboration measured jets up to very large transverse momenta in eight rapidity bins. Strictly speaking, these data are not minimum bias, as they comprise only the events within a given centrality class. It is therefore somewhat hazardous to include them in the present analysis but, out of curiosity, we do so anyway. The ATLAS data are available as absolute yields, from which we have constructed forward-to-backward asymmetries, adding all the uncertainties in quadrature. Let us remark that, by proceeding this way, we lose the most forward and the central bins. The results before the reweighting are presented in Figure 5 and Table 3 (left-hand column). For EPS09 the forward-to-backward ratio tends to stay below unity, since at positive rapidities the spectrum gets suppressed (gluon shadowing) and at negative rapidities enhanced (gluon antishadowing). For DSSZ, the effects are milder. The data do not appear to show any systematic tendency from one rapidity bin to another, which could be due to the centrality trigger imposed. Indeed, the best $\chi^2/N_{\rm data}$ is achieved with no nuclear effects at all, but all values of $\chi^2/N_{\rm data}$ are very low. This is probably due to overestimating the systematic uncertainties by adding all errors in quadrature. It is worth mentioning that, in contrast to the ATLAS data, the preliminary CMS inclusive jet data CMS:2014qca () (involving no centrality selection) do show behaviour consistent with EPS09.
Di-jet production measured by the CMS collaboration Chatrchyan:2014hqa () was the subject of study in Eskola:2013aya (), where sizeable mutual deviations between different nuclear PDFs were found. The experimental observable in this case is normalised to the total number of di-jets, and the proton-reference uncertainties tend to cancel to some extent, especially around midrapidity. A better cancellation would presumably be attained by considering forward-to-backward ratios, but this would again involve the issue of correlated systematic uncertainties mentioned earlier. Comparisons between the data and the theoretical predictions are shown in Figure 6 and the $\chi^2/N_{\rm data}$ values are tabulated in Table 3 (right-hand column). The data clearly favour the use of EPS09 nPDFs; in all other cases $\chi^2/N_{\rm data} \gg 1$, which is a clear signal of incompatibility. The better agreement follows from the gluon antishadowing and the EMC effect at large $x$ present in EPS09 but not in DSSZ. However, the significant dependence on the employed free-proton PDFs is a bit alarming: indeed, one observes around a 50% difference in $\chi^2$ when switching from CT10 to MSTW2008. This indicates that the cancellation of the proton-PDF uncertainties is far from complete and that they must be accounted for (unlike what we do here) if this observable is to be used as an nPDF constraint. The proton-proton reference data taken in Run II may improve the situation.
3.4 Charged-particle production
Now let us move to the analysis of charged-particle production. Here we consider both charged-hadron (ALICE Abelev:2014dsa () and CMS CMS:2013cka ()) and pion (ALICE Richer:2015vqa ()) production. Apart from the PDFs, particle production depends on the fragmentation functions (FFs), which are not well constrained. Indeed, it has been shown that none of the current FFs can give a proper description of the experimental results d'Enterria:2013vba () on charged-hadron production. In the same reference, a kinematic cut in $p_T$ was advocated to avoid contamination from mechanisms other than the independent parton-to-hadron fragmentation described by the FFs. The same cut is applied here. Regarding the final-state pions, we relaxed this requirement, since similar cuts have been used in the EPS09 and DSSZ analyses. The theoretical values were determined with the same code as in Sassot:2010bh (), using the fragmentation functions from DSS deFlorian:2007hc () for the charged hadrons. In the case of the DSSZ nPDFs, medium-modified fragmentation functions were used Sassot:2009sh (), in accordance with the way in which the RHIC pion data Adler:2006wg () were treated in the original DSSZ extraction. This is, however, not possible in the case of unidentified charged hadrons, as medium-modified fragmentation functions are available only for pions and kaons.
The use of the CMS data CMS:2013cka () poses another problem, since it is known that, at high $p_T$, the data show a 40% enhancement that cannot currently be described by any theoretical model. However, it has been noticed that the forward-to-backward ratios are nevertheless more or less consistent with the expectations. While it is somewhat hazardous to use the data in this way, we do so anyway, hoping that whatever causes the high-$p_T$ anomaly cancels in the ratios. A comparison between these data and the EPS09/DSSZ calculations is shown in Figure 7 and the values of $\chi^2/N_{\rm data}$ are listed in Table 4 (left-hand column). These data have a tendency to favour the calculations with DSSZ, but with $\chi^2/N_{\rm data}$ being absurdly low.
The ALICE collaboration Abelev:2014dsa () took data relatively close to the central region, and the data are available as backward-to-central ratios, with the backward yields comprising two pseudorapidity intervals. A theory-to-data comparison is shown in Figure 8 and the corresponding $\chi^2/N_{\rm data}$ values are in Table 4 (middle column). The data appear to slightly favour the use of EPS09/DSSZ, but the values of $\chi^2/N_{\rm data}$ remain, again, always very low.
Finally, we consider the preliminary charged-pion data shown by ALICE Richer:2015vqa (). In this case the measurement was performed in a single rapidity region, so no forward-to-backward ratio or any similar quantity could be constructed. For this reason we had to resort to the ratio $R_{\rm pPb}$, which involves a normalisation uncertainty. (Here we have deliberately ignored the normalisation uncertainty; even by doing so, the obtained values of $\chi^2/N_{\rm data}$ are unrealistically small.) A comparison between data and theory before the reweighting can be seen in Figure 9 and the values of $\chi^2/N_{\rm data}$ are in Table 4 (right-hand column).
The very low values of $\chi^2/N_{\rm data}$ attained for these three measurements indicate that the uncertainties have been overestimated, and these data are doomed to have negligible constraining power. Notice that the uncertainties are dominated by the systematic errors, which, in the absence of better experimental information, we add in quadrature with the statistical ones.
4 Implications for nPDFs
The comparisons presented in the previous section demonstrate that many of the considered data (CMS W, CMS Z, ATLAS Z, CMS dijets) show sensitivity to the nuclear PDFs, while others (ALICE W, ATLAS jets, CMS hadrons, ALICE hadrons, ALICE pions) remain inconclusive. Some of the considered observables (ATLAS jets, CMS hadrons) are also known to pose issues that are not fully understood, so the comparisons presented here should be taken as indicative. The most stringent constraints are provided by the CMS dijet measurement, which alone would rule out all but EPS09. However, upon summing all the $\chi^2$'s from the different measurements, this signal easily gets buried under the other data. This is evident from the total values of $\chi^2/N_{\rm data}$ shown in Table 5 (upper part): considering all the data, it would look as if all the PDF combinations were in agreement with the data. However, excluding one of the dubious data sets (ATLAS jets), for which the number of data points is large but $\chi^2/N_{\rm data}$ very small, the differences between the PDFs grow, see the lower part of Table 5. The effective number of replicas $N_{\rm eff}$ always remains quite high. The reason is that the variation of the total $\chi^2$ within a given set of nPDFs (that is, the variation among the error sets) is small even if some of the data sets are not properly described at all (in particular, CMS dijets with DSSZ). Thus, $N_{\rm eff}$ alone should not be blindly used to judge whether a reanalysis is required.
Given the tiny improvements in the reweighted $\chi^2$ values, one expects no strong modifications to be induced in the nPDFs either. Indeed, the only noticeable effect, as can be seen in Figure 10, is on the EPS09 gluons, for which the CMS dijet data place new constraints Paukkunen:2014pha (). (These are results using all the data, including those whose consistency is in doubt.) It should be recalled that, for technical reasons, in the EPS09 analysis the RHIC pion data were given a rather large additional weight, and in the reweighting they still outweigh the contribution coming from the dijets. In a fit with no extra weights, the dijet data would, on the contrary, give a larger contribution than the RHIC data. Therefore these data will have a different effect than what Figure 10 would indicate. In the case of DSSZ, the assumed functional form is not flexible enough to accommodate the dijet data, and in practice nothing happens upon performing the reweighting. However, it is evident that these data will have a large impact on the DSSZ gluons if agreement is required (see Figure 6), so a refit appears mandatory.
The impact of the LHC p-Pb data is potentially higher than what is found here also because, within our study, it is impossible to say anything concerning the constraints that these data may provide on the flavour separation of the nuclear PDFs, which again calls for a refit. Another issue is the form of the fit functions, whose rigidity, especially at small $x$, significantly underestimates the true uncertainty. In this sense, our study should be seen merely as preparatory work towards nPDF analyses including LHC data. More p-Pb data will also still appear (at least CMS inclusive jets and W production from ATLAS), and many of the data sets used here are only preliminary.
5 Summary
In the present work we have examined the importance of PDF nuclear modifications in describing some p-Pb results from Run I at the LHC, and the impact that the considered data have on the EPS09 and DSSZ global fits of nPDFs. We have found that while some data clearly favour the considered sets of nuclear PDFs, some sets are also statistically consistent with just proton PDFs. In the latter case, abnormally small values of $\chi^2/N_{\rm data}$ are obtained, however. The global picture therefore depends on which data sets are being considered. We have chosen to use in our analysis most of the available data from the p-Pb run. It should, however, be stressed that some of the considered data sets are suspicious in the sense that unrealistically small values of $\chi^2/N_{\rm data}$ are obtained, and these sets, as we have shown, can easily twist the overall picture. Incidentally, these sets are the ones that have the smallest $\chi^2$ when no nuclear effects in the PDFs are included. The small values of $\chi^2$ are partly related to the unknown correlations between the systematic uncertainties of the data but also, particularly in the case of the ALICE pions, presumably to the additional uncertainty added to the interpolated p-p baseline. The p-p reference data at $\sqrt{s} = 5.02$ TeV, recently recorded at the LHC, may eventually improve this situation.
The considered data are found to have only a mild impact on the EPS09 and DSSZ nPDFs. This does not, however, necessarily mean that these data are useless. Indeed, they may make it possible to relax some rather restrictive assumptions made in the fits. An obvious example is the functional form of the DSSZ gluon modification, which does not allow for a gluon antishadowing similar to that of the EPS09 fit functions. This leads to a poor description of the CMS dijet data by DSSZ that the reweighting (being restricted to all the assumptions made in the original analysis) cannot cure. Thus, in reality, these data are likely to have a large impact. In general, the new LHC data may allow more flexibility to be implemented in the fit functions, and restrictions related to the flavour dependence of the quark nuclear effects to be released. Also, the EPS09 analysis used an additional weight to emphasise the importance of the data set (neutral pions at RHIC) sensitive to the gluon nPDF. Now, with the use of the new LHC data, such artificial means are likely to be unnecessary. Therefore, for understanding the true significance of these data, new global fits including these and upcoming data are required.
Hence, both theoretical and experimental efforts, as explained above, are required to fully exploit the potential of the past and future p-Pb runs at the LHC for constraining the nuclear modifications of parton densities.
We thank E. Chapon and A. Zsigmond, Z. Citron, and M. Ploskon, for their help with the understanding of the CMS, ATLAS and ALICE data respectively. This research was supported by the European Research Council grant HotLHC ERC-2011-StG-279579; the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013/ under REA grant agreement #318921 (NA); Ministerio de Ciencia e Innovación of Spain under project FPA2014-58293-C2-1-P; Xunta de Galicia (Consellería de Educación) — the group is part of the Strategic Unit AGRUP2015/11.
- (1) C. A. Salgado et al., J. Phys. G 39 (2012) 015010 doi:10.1088/0954-3899/39/1/015010 [arXiv:1105.3919 [hep-ph]].
- (2) N. Armesto and E. Scomparin, arXiv:1511.02151 [nucl-ex].
- (3) K. J. Eskola, Nucl. Phys. A 910-911 (2013) 163 doi:10.1016/j.nuclphysa.2012.12.029 [arXiv:1209.1546 [hep-ph]].
- (4) H. Paukkunen, Nucl. Phys. A 926 (2014) 24 doi:10.1016/j.nuclphysa.2014.04.001 [arXiv:1401.2345 [hep-ph]].
- (5) M. Arneodo, Phys. Rept. 240 (1994) 301. doi:10.1016/0370-1573(94)90048-5
- (6) K. J. Eskola, V. J. Kolhinen and P. V. Ruuskanen, Nucl. Phys. B 535 (1998) 351 doi:10.1016/S0550-3213(98)00589-6 [hep-ph/9802350].
- (7) W. T. Giele and S. Keller, Phys. Rev. D 58 (1998) 094023 [hep-ph/9803393].
- (8) R. D. Ball et al. [NNPDF Collaboration], Nucl. Phys. B 849 (2011) 112 [Erratum-ibid. B 854 (2012) 926] [Erratum-ibid. B 855 (2012) 927] [arXiv:1012.0836 [hep-ph]].
- (9) R. D. Ball, V. Bertone, F. Cerutti, L. Del Debbio, S. Forte, A. Guffanti, N. P. Hartland and J. I. Latorre et al., Nucl. Phys. B 855 (2012) 608 [arXiv:1108.1758 [hep-ph]].
- (10) G. Watt, R. S. Thorne, JHEP 1208 (2012) 052 [arXiv:1205.4024 [hep-ph]].
- (11) B. J. A. Watt, P. Motylinski and R. S. Thorne, arXiv:1311.5703 [hep-ph].
- (12) N. Sato, J. F. Owens and H. Prosper, arXiv:1310.1089 [hep-ph].
- (13) H. Paukkunen and P. Zurita, JHEP 1412 (2014) 100 [arXiv:1402.6623 [hep-ph]].
- (14) N. Armesto, J. Rojo, C. A. Salgado and P. Zurita, JHEP 1311 (2013) 015 [arXiv:1309.5371 [hep-ph]].
- (15) K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 0904 (2009) 065 [arXiv:0902.4154 [hep-ph]].
- (16) D. de Florian, R. Sassot, P. Zurita and M. Stratmann, Phys. Rev. D 85 (2012) 074028 [arXiv:1112.6324 [hep-ph]].
- (17) A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Eur. Phys. J. C 63 (2009) 189 [arXiv:0901.0002 [hep-ph]].
- (18) H.-L. Lai, M. Guzzi, J. Huston, Z. Li, P. M. Nadolsky, J. Pumplin and C.-P. Yuan, Phys. Rev. D 82 (2010) 074024 [arXiv:1007.2241 [hep-ph]].
- (19) N. Armesto, D. C. Gulhan and J. G. Milhano, Phys. Lett. B 747 (2015) 441 [arXiv:1502.02986 [hep-ph]].
- (20) J. Zhu [ALICE Collaboration], J. Phys. Conf. Ser. 612 (2015) 1, 012009.
- (21) V. Khachatryan et al. [CMS Collaboration], arXiv:1503.05825 [nucl-ex].
- (22) ATLAS Collaboration, ATLAS-CONF-2015-056.
- (23) J. M. Campbell, R. K. Ellis and C. Williams, JHEP 1107 (2011) 018 [arXiv:1105.0020 [hep-ph]].
- (24) K. Kovarik et al., arXiv:1509.00792 [hep-ph].
- (25) CMS Collaboration, CMS-PAS-HIN-15-002.
- (26) G. Aad et al. [ATLAS Collaboration], arXiv:1507.06232 [hep-ex].
- (27) R. Aaij et al. [LHCb Collaboration], JHEP 1409 (2014) 030 [arXiv:1406.2885 [hep-ex]].
- (28) S. Frixione, Z. Kunszt and A. Signer, Nucl. Phys. B 467 (1996) 399 [arXiv:hep-ph/9512328].
- (29) S. Frixione, Nucl. Phys. B 507 (1997) 295 [arXiv:hep-ph/9706545].
- (30) S. Frixione and G. Ridolfi, Nucl. Phys. B 507 (1997) 315 [hep-ph/9707345].
- (31) G. Aad et al. [ATLAS Collaboration], Phys. Lett. B 748 (2015) 392 [arXiv:1412.4092 [hep-ex]].
- (32) S. Chatrchyan et al. [CMS Collaboration], Eur. Phys. J. C 74 (2014) 7, 2951 [arXiv:1401.4433 [nucl-ex]].
- (33) M. Cacciari, G. P. Salam and G. Soyez, JHEP 0804 (2008) 063 [arXiv:0802.1189 [hep-ph]].
- (34) CMS Collaboration, CMS-PAS-HIN-14-001.
- (35) K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 1310 (2013) 213 [arXiv:1308.6733 [hep-ph]].
- (36) B. B. Abelev et al. [ALICE Collaboration], Eur. Phys. J. C 74 (2014) 9, 3054 [arXiv:1405.2737 [nucl-ex]].
- (37) CMS Collaboration, CMS-PAS-HIN-12-017.
- (38) T. Richert [ALICE Collaboration], J. Phys. Conf. Ser. 636 (2015) 1, 012009 [arXiv:1505.04717 [hep-ex]].
- (39) D. d’Enterria, K. J. Eskola, I. Helenius and H. Paukkunen, Nucl. Phys. B 883 (2014) 615 [arXiv:1311.1415 [hep-ph]].
- (40) R. Sassot, P. Zurita and M. Stratmann, Phys. Rev. D 82 (2010) 074011 [arXiv:1008.0540 [hep-ph]].
- (41) D. de Florian, R. Sassot and M. Stratmann, Phys. Rev. D 76 (2007) 074033 [arXiv:0707.1506 [hep-ph]].
- (42) R. Sassot, M. Stratmann and P. Zurita, Phys. Rev. D 81 (2010) 054001 [arXiv:0912.1311 [hep-ph]].
- (43) S. S. Adler et al. [PHENIX Collaboration], Phys. Rev. Lett. 98 (2007) 172302 [nucl-ex/0610036].
- (44) H. Paukkunen, K. J. Eskola and C. Salgado, Nucl. Phys. A 931 (2014) 331 doi:10.1016/j.nuclphysa.2014.07.012 [arXiv:1408.4563 [hep-ph]].