Highlights from BNL and RHIC 2014

M. J. Tannenbaum
Physics Department, 510c,
Brookhaven National Laboratory,
Upton, NY 11973-5000, USA
mjt@bnl.gov
Research supported by U. S. Department of Energy, DE-AC02-98CH10886.

1 Introduction

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) is one of the two remaining operating hadron colliders, the other being the LHC at CERN. Unlike the LHC, which is buried in a deep underground tunnel, RHIC is built in an enclosure on the surface, covered by an earth berm for shielding that can be seen from outer space (Fig. 1).

Figure 1: NASA infra-red photo of Long Island and the New York Metro Region from space. RHIC is the white circle to the left of the word BNL. Manhattan Island in New York City, 100 km west of BNL, is also clearly visible on the left side of the photo.

BNL is a multipurpose laboratory, quite different in scope from Fermilab and CERN, with many “cutting edge” major research facilities in addition to RHIC. Figure 2 shows the two newest facilities: the National Synchrotron Light Source II (NSLS II), to come on-line October 1, 2014; and the Long Island Solar Farm, which has an experimental section as well as supplying 32 MW of peak power to nearby homes in partnership with the local electric company.

Figure 2: Aerial view of BNL with NSLS II, RHIC and the Solar Farm indicated.

2 News from BNL since ISSP2013

Although Fiscal Year 2014 started on October 1, 2013 with the U. S. Government shut down for the first 16 days due to the lack of an approved budget, the rest of FY2014 turned out very well for BNL and RHIC. At the administrative level, Professor Robert Tribble of Texas A&M University, well known for both his physics research and his leadership of two important U. S. Department of Energy (DoE) panels in 2005 and 2013, was appointed Deputy Director for Science & Technology, effective February 2014. Not long after that, in March, the DoE issued a Request For Proposals for a “management and operating (M&O) contractor” for BNL, which is owned by the U.S. Government but run by an M&O contractor. The present contractor, BSA, is a partnership of Battelle Memorial Institute, a private non-profit science and technology development company headquartered in Columbus, Ohio, and Stony Brook University. BSA has been the M&O contractor at BNL for the past 15 years (out of BNL’s 67-year existence), with the “engagement” of six of the world’s leading research universities (Columbia, Cornell, Harvard, MIT, Princeton and Yale), which were among the universities that formed the founding M&O contractor, Associated Universities Incorporated, along with Johns Hopkins and the Universities of Pennsylvania and Rochester. The new contract starts on January 1, 2015, preceded by a transition phase-in period of at most 2 months, so it should be awarded near or soon after November 1, 2014 [1].

The U. S. High Energy Physics bureaucracy was not idle during this period, with the release of the “Particle Physics Project Prioritization Panel (P5)” Report to the High Energy Physics Advisory Panel (HEPAP) on May 21, 2014. The charge of the panel was “to develop an updated strategic plan for U.S. high energy physics that can be executed over a 10 year timescale, in the context of a 20 year global vision for the field.” Their reasonable top priority for the constrained budget scenarios was to “Use the Higgs boson as a new tool for discovery,” which is good news for the U. S. HEP groups working at the LHC at CERN; but lots of internal U. S. activities were “redirected” [2]. For the unconstrained budget scenario, all they could come up with was:

  • Develop a greatly expanded accelerator R&D program that would emphasize the ability to build very high-energy accelerators beyond the High-Luminosity LHC (HL-LHC) and ILC at dramatically lower cost.

  • Play a world-leading role in the ILC experimental program and provide critical expertise and components to the accelerator, should this exciting scientific opportunity be realized in Japan.

which IMHO lacks the imagination and drive of previous generations of U.S. High Energy Physicists, who had proposed and were constructing a 40 TeV collider for completion in 1995, if not for … [3].

Not to be outdone, the new Long Range Planning exercise for U. S.  Nuclear Physics was initiated in April 2014.

3 RHIC Operations and accelerator future plans

Since beginning operation in the year 2000, RHIC, which can collide any species with any other species including polarized protons, has provided collisions at 14 different values of nucleon-nucleon c.m. energy, $\sqrt{s_{NN}}$, and ten different species combinations including Au+Au, d+Au, Cu+Cu, Cu+Au, U+U, and in 2014 $^3$He+Au, if differently polarized protons are counted as different species. The performance history of RHIC with A+A

Figure 3: a) (left) Year, species, proton polarization (Longitudinal or Transverse), $\sqrt{s_{NN}}$, and integrated luminosity of RHIC runs. b) (right) Future run schedule and new equipment.

and polarized proton collisions is shown in Fig. 3a; and in Fig. 3b, the plans for future runs are shown.

For this year’s run (2014), full 3-dimensional stochastic cooling, electron lenses for partial compensation of the beam-beam tune shift, and a 56 MHz storage r.f. cavity for stronger longitudinal focusing were implemented, which led to a higher initial luminosity and a much longer lifetime of the beam, with a more level luminosity load due to the 3-d stochastic cooling (Fig. 4). The luminosity performance of RHIC with A+A and polarized proton collisions is shown in Fig. 5. Notably, the Au+Au integrated luminosity in 2014 exceeds that of all previous Au+Au runs combined, as did the p+p luminosity in 2013.

Figure 4: Run-14 luminosity vs. storage time compared to Run-7, courtesy Wolfram Fischer.
Figure 5: a) (left) Au+Au performance, where the nucleon-pair luminosity is defined as $L_{NN} = A_1 A_2 L$, where $L$ is the luminosity and $A_1$, $A_2$ are the numbers of nucleons in the colliding species. b) (right) Polarized proton performance. Courtesy Wolfram Fischer.

The major future plan for accelerators in Nuclear Physics concerns an electron-ion collider, which if located at BNL will be called eRHIC. A new, highly innovative and cost-effective design of eRHIC was proposed this year based on a Fixed-Field Alternating Gradient (FFAG) electron accelerator and an Energy Recovery Linac (Fig. 6).

Figure 6: New BNL design for eRHIC with annotations.

4 Detector issues in AA compared to pp collisions

A main concern of experimental design in RHI collisions is the huge multiplicity in A+A central collisions compared to p+p collisions. A schematic drawing of a collision of two relativistic Au nuclei is shown in Fig. 7a.

Figure 7: a) (left) Schematic of a collision, in the N-N c.m. system, of two Lorentz-contracted nuclei with radius $R$ and impact parameter $b$. The curve with the ordinate labeled $d\sigma/dn_{ch}$ represents the relative probability of charged particle multiplicity, which is directly proportional to the number of participating nucleons, $N_{part}$. b) (right) $E_T$ distribution in Au+Au at $\sqrt{s_{NN}}$=200 GeV from PHENIX [4].

In the center of mass system of the nucleus-nucleus collision, the two Lorentz-contracted nuclei of radius $R$ approach each other with impact parameter $b$. In the region of overlap, the “participating” nucleons interact with each other, while in the non-overlap region, the “spectator” nucleons simply continue on their original trajectories and can be measured in Zero Degree Calorimeters (ZDC) in fixed target experiments, so that the number of spectators can be measured, from which the number of participants ($N_{part}$) can be determined for symmetric A+A collisions. The degree of overlap is called the centrality of the collision, with $b\approx 0$ being the most central and $b\approx 2R$ the most peripheral. The maximum time of overlap is $2R/(\gamma c)$, where $\gamma$ is the Lorentz factor and $c$ is the speed of light in vacuum. The energy of the inelastic collision is predominantly dissipated by multiple particle production, where $N_{ch}$, the number of charged particles produced, or $E_T$, the energy emitted transverse to the beam direction, is directly proportional [5] to $N_{part}$, as sketched in Fig. 7a. Thus, $N_{ch}$ and $E_T$ in central Au+Au collisions are roughly $N_{part}/2$ times larger than in a p+p collision, as shown in actual events from the STAR and PHENIX detectors at RHIC (Fig. 8).
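As a rough orientation on these scales (my own back-of-the-envelope numbers, assuming $R \approx 1.2\,A^{1/3} \approx 7$ fm for Au; not taken from the text): each nucleon carries energy $\sqrt{s_{NN}}/2$ in the c.m. system, so at $\sqrt{s_{NN}} = 200$ GeV

$$\gamma = \frac{\sqrt{s_{NN}}}{2 m_N} \approx \frac{200~\mathrm{GeV}}{2\times 0.94~\mathrm{GeV}} \approx 107, \qquad \tau_{\rm overlap} = \frac{2R}{\gamma c} \approx \frac{14~\mathrm{fm}}{107\,c} \approx 0.13~\mathrm{fm}/c ,$$

i.e. the two nuclei pass through each other in a small fraction of the $\sim$1 fm/$c$ hadron formation time.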

Figure 8: a) (left) A p+p collision in the STAR detector viewed along the collision axis; b) (center) a central Au+Au collision in STAR; c) (right) a central Au+Au collision in PHENIX.

At colliders, the impact parameter can not be measured directly because charged spectators are swept away from zero degrees by the collider magnets. Instead, the centrality of a collision is defined in terms of the upper percentile, e.g. top 10%-ile, upper 10–20%-ile, of the $N_{ch}$ or $E_T$ distributions, as in Fig. 7b. Unfortunately the “upper” and “-ile” are usually not mentioned, which sometimes confuses the uninitiated. Also, a model is required to derive $N_{part}$ from the measurement, so that the derived value of $N_{part}$ at a collider, or the number of binary nucleon-nucleon collisions ($N_{coll}$), is model dependent and may have biases.


Figure 9: Actual STAR (a) and PHENIX (b) detectors; compare with Figs. 8b,c. The direction of the beam is along the axis of the STAR solenoid, and in (b) between the two spectrometer arms (perpendicular to the page).

Since it is a huge task to reconstruct the momenta and identity of all the particles produced in these events, the initial detectors at RHIC [6] concentrated on the measurement of single-particle or multi-particle inclusive variables to analyze RHI collisions, with inspiration from the CERN ISR, which emphasized those techniques before the era of jet reconstruction (see, for example, Refs. [7] and [8]). There are at present two major detectors in operation at RHIC, STAR and PHENIX, and there were also two smaller detectors, BRAHMS and PHOBOS, which have completed their program. As may be surmised from Figs. 8a,b and 9a, STAR, which emphasizes hadron physics, is most like a conventional general purpose collider detector, with a Time Projection Chamber to detect all charged particles over the full azimuth and a broad range of pseudo-rapidity; while PHENIX (Figs. 8c and 9b) is a very high granularity, high resolution, special purpose detector: a two-arm spectrometer at mid-rapidity, with each arm covering a limited solid angle, together with two full-azimuth muon detectors at forward and backward rapidity. (The detector is so non-conventional that it made the cover of Physics Today, October 2003.) For the present runs, both STAR and PHENIX have excellent particle identification (PID) capability, with electromagnetic calorimeters (EMCal) for photon and electron detection and Time of Flight for charged hadrons. PHENIX has a Ring Imaging CHerenkov counter for enhanced electron detection and triggering and small but full-azimuth EM calorimeters (MPC) just before each muon arm, while STAR obtains enhanced hadron identification using dE/dx in the TPC. For the 2014 run, both PHENIX (VTX, FVTX) and STAR (HFT) are equipped with micro-vertex detectors for tagging heavy-flavor $c$ and $b$ quarks via displaced vertices.

The main objectives of building RHIC were i) to discover the Quark Gluon Plasma (QGP), which was achieved, as I have discussed in detail in review articles based on previous ISSP proceedings [7, 9]; and ii) to measure its properties, which turned out to be much different than expected, namely a “perfect fluid” of quarks and gluons with their color charges exposed rather than a gas. The latest measurements from RHIC continue to be very interesting.

5 $E_T$ and $N_{ch}$ distributions and constituent-quarks as the fundamental elements of particle production

The first experiment specifically designed to measure the dependence of the charged particle multiplicity in high energy p+A collisions as a function of the nuclear size was performed by Wit Busza and collaborators at Fermilab using beams of 50 and 100 GeV/c hadrons colliding with various fixed nuclear targets. They found the extraordinary result [10] that the average charged particle multiplicity in hadron+nucleus (h+A) interactions was not simply proportional to the number of collisions (absorption-mean-free-paths), $\bar{\nu}$, but increased much more slowly, proportional to the number of participants, $N_{part} = 1 + \bar{\nu}$. Thus, relative to h+p collisions (Fig. 10a) [11]:

$$R_A \equiv \frac{\langle N_{ch}\rangle_{hA}}{\langle N_{ch}\rangle_{hp}} = \frac{\langle N_{part}\rangle_{hA}}{\langle N_{part}\rangle_{hp}} = \frac{1+\bar{\nu}}{2} \qquad\qquad (1)$$

Since the different projectiles in Fig. 10a have different mean free paths, the fit of all the data to the same straight line in terms of $\bar{\nu}$ is convincing.

Figure 10: a) $R_A$ as a function of the average thickness of each nucleus given in terms of the mean free path, $\bar{\nu}$ [11], for 50 and 100 GeV/c h+A collisions; b) Charged particle multiplicity density, $dN/dy$, as a function of rapidity (with the target nucleus represented by $\bar{\nu}$) for 200 GeV/c p+A collisions [12].

The other striking observation (Fig. 10b) [12] was that a relativistic incident proton could pass through several absorption-mean-free-paths of a target nucleus and emerge from the other side; and furthermore there was no intra-nuclear cascade of produced particles (a stark difference from what would happen to the same proton in a macroscopic 4 mean-free-path hadron calorimeter). In the forward fragmentation region of 200 GeV/c p+A collisions, within 1 unit of rapidity from the beam, there was essentially no change in $dN/dy$ as a function of $\bar{\nu}$, while at mid-rapidity $dN/dy$ increased with $\bar{\nu}$, together with a small backward shift of the peak of the distribution, resulting in a huge relative increase of multiplicity in the target fragmentation region in the laboratory system. These striking features of the fixed target hadron-nucleus data ($\sqrt{s_{NN}} \approx 10$–20 GeV) showed the importance of taking into account the time and distance scales of the soft multi-particle production process, including quantum mechanical effects.

5.1 The Wounded Nucleon Model

The observations in Fig. 10 had clearly shown that the target nucleus was rather transparent, so that a relativistic incident nucleon could make many successive collisions while passing through the nucleus and emerge intact. Immediately after a relativistic nucleon interacts inside a nucleus, the only thing that can happen consistent with relativity and quantum mechanics is for it to become an excited nucleon with roughly the same energy and reduced longitudinal momentum and rapidity. It remains in that state inside the nucleus because the uncertainty principle and time dilation prevent it from fragmenting into particles until it is well outside the nucleus. This feature immediately eliminates the possibility of a cascade in the nucleus from the rescattering of the secondary products. If one makes the further assumptions that an excited nucleon interacts with the same cross section as an unexcited nucleon and that the successive collisions of the excited nucleon do not affect the excited state or its eventual fragmentation products [13], this leads to the conclusion (c. 1977) that the elementary process for particle production in nuclear collisions is the excited nucleon, and to the prediction that the multiplicity in nuclear interactions should be proportional to the total number of projectile and target participants, rather than to the total number of collisions, $\bar{\nu}$, as observed. This is called the Wounded Nucleon Model (WNM) [14] and, in the common usage, Wounded Nucleons (WN) are called participants. In a later model from the early 1980’s, the Additive Quark Model (AQM) [15], constituent-quark participants were introduced; but the AQM is actually a model of particle production by color-strings in which only one color-string can be attached to a constituent-quark participant, effectively a projectile quark participant model.

5.2 Extreme Independent Models

The models mentioned above are examples of Extreme Independent Models, in which the effect of the nuclear geometry of the interaction can be calculated independently of the dynamics of particle production, which can be taken directly from experimental measurements. The nuclear geometry is represented by the relative probability, per A+B interaction, for a given number of fundamental elements: in the present case, the number of collisions ($N_{coll}$), the number of nucleon participants ($N_{part}$, wounded nucleon model—WNM [14]), the number of constituent-quark participants ($N_{qp}$), or the number of color strings (AQM). The dynamics of particle production, i.e. the $E_T$ or $N_{ch}$ distribution of the fundamental element, is taken from the measured data in the same detector: e.g. the measured p+p $E_T$ distribution represents 1 collision, 2 participants (WNM), or a predictable convolution of constituent-quark participants (NQP) or projectile-quark participants (AQM). Glauber calculations of the nuclear geometry, together with the p+p measurement, provide a prediction for the A+B measurement in the same detector as the result of particle production by multiple independent fundamental elements.
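To make the prediction machinery concrete, here is a minimal numerical sketch (my own toy illustration, not the experiments' code): given Glauber probabilities P(n) for the number of fundamental elements and a measured, binned $E_T$ spectrum for a single element, the predicted A+B spectrum is the P(n)-weighted sum of n-fold convolutions.

```python
import numpy as np

def predict_spectrum(f1, p_n):
    """Extreme-independent-model prediction: sum_n P(n) * (n-fold convolution of f1).

    f1  : 1-D array, E_T spectrum of one fundamental element (e.g. one wounded
          nucleon or one constituent-quark participant), in fixed-width E_T bins.
    p_n : dict {n: probability}, relative probability of n elements per A+B
          interaction from a Glauber calculation.
    """
    f1 = np.asarray(f1, dtype=float)
    f1 = f1 / f1.sum()                      # normalize the single-element spectrum
    total = np.zeros(len(f1) * max(p_n))    # room for the largest n-fold convolution
    fn = np.array([1.0])                    # 0-fold convolution = delta at E_T = 0
    for n in range(1, max(p_n) + 1):
        fn = np.convolve(fn, f1)            # build the n-fold convolution iteratively
        if n in p_n:
            total[:len(fn)] += p_n[n] * fn  # weight by the Glauber probability P(n)
    return total / sum(p_n.values())

# toy usage: an exponential "element" spectrum and a toy Glauber distribution
et_bins = np.arange(0.0, 10.0, 0.1)
f1 = np.exp(-et_bins / 0.5)
prediction = predict_spectrum(f1, {2: 0.5, 3: 0.3, 4: 0.2})
```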

I became acquainted with these models in my first talk at a Quark Matter conference (QM1984) where


Figure 11: (a) $E_T$ distributions in p+p and $\alpha+\alpha$ collisions [18] at $\sqrt{s_{NN}}$=31 GeV, with AQM and WNM calculations [21]. (b),(c) $E_T$ distributions with breaks indicating jets: (b) $\sqrt{s}$=62.3 GeV [22]; (c) $d\sigma/dE_T$ (nb/GeV) vs. $E_T$ for $\sqrt{s}$=540 GeV [23].

I presented measurements of transverse energy distributions from p+p and $\alpha+\alpha$ interactions at $\sqrt{s_{NN}}$=31 GeV at the CERN-ISR (Fig. 11a) [18, 19]. The transverse energy, $E_T$, is a multiparticle variable defined as the sum

$$E_T = \sum_i E_i \sin\theta_i \qquad\qquad (2)$$

where $\theta_i$ is the polar angle of particle $i$, $\eta = -\ln\tan(\theta/2)$ is the pseudorapidity, $E_i$ is by convention taken as the kinetic energy for baryons, the kinetic energy + 2$m_N$ for antibaryons, and the total energy for all other particles, and the sum is taken over all particles emitted into a fixed solid angle for each event.
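A literal transcription of Eq. (2) and the stated energy convention into code (a minimal sketch; the particle list and the use of the nucleon mass for the baryon kinetic energy are illustrative simplifications):

```python
import math

M_N = 0.939  # nucleon mass in GeV, used here to approximate baryon masses

def transverse_energy(particles):
    """E_T = sum_i E_i sin(theta_i) with the convention of Eq. (2):
    kinetic energy for baryons, kinetic energy + 2 m_N for antibaryons,
    total energy for everything else."""
    et = 0.0
    for energy, theta, kind in particles:   # kind in {"baryon", "antibaryon", "other"}
        if kind == "baryon":
            e_i = energy - M_N              # kinetic energy
        elif kind == "antibaryon":
            e_i = energy - M_N + 2.0 * M_N  # kinetic energy + 2 m_N
        else:
            e_i = energy                    # total energy
        et += e_i * math.sin(theta)
    return et

# toy event: (total energy [GeV], polar angle [rad], type)
print(transverse_energy([(2.0, 1.2, "baryon"), (1.5, 0.9, "other"),
                         (1.3, 2.0, "antibaryon")]))
```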

The transverse energy was introduced by high energy physicists [16, 17] as an improved method to detect and study the jets from hard-scattering, compared to the high-$p_T$ single particle spectra by which hard-scattering was discovered in p+p collisions and which are used as a hard-probe in Au+Au collisions at RHIC. However, it didn’t work as expected: $E_T$ distributions, like $N_{ch}$ distributions, are dominated by soft particles (e.g. see Ref. [8] for details). Nevertheless, it was claimed at the conference [20], in comments to my talk, that the deviation from the WNM in Fig. 11a was due to jets, but in both proceedings [19, 20] it was demonstrated that [20] “there is no … sign of jets. This indicates that soft processes are still dominant, and that we are still legitimately testing the WNM at these high values of $E_T$.” As shown in Fig. 11a, the AQM [21], rather than the WNM, followed the data. Jets do appear in $E_T$ distributions, as a break far down in cross section (Figs. 11b,c).

5.3 $N_{ch}$ and $E_T$ distributions cut on centrality

At RHIC, following the style of the CERN SpS rather than the BNL-AGS fixed target heavy ion program, $N_{ch}$ and $E_T$ distributions were not generally shown. The measurements were instead presented cut on centrality, in the form $(dN_{ch}/d\eta)/(\langle N_{part}\rangle/2)$ vs. $\langle N_{part}\rangle$ (Fig. 12),


Figure 12: (a) PHENIX, Au+Au, $\sqrt{s_{NN}}$=130 GeV [24]; (b) ALICE, Pb+Pb, $\sqrt{s_{NN}}$=2.76 TeV [26]; (c) PHENIX preliminary, Au+Au, $\sqrt{s_{NN}}$=7.7 GeV, compared to the data at larger $\sqrt{s_{NN}}$ [27].

which would be a constant equal to the p+p value if the WNM worked. The measurements clearly deviate from the WNM (Fig. 12a) [24]; so the PHENIX collaboration, inspired by the preceding article in the journal [25], fit their data to the two-component model:

$$\frac{dN_{ch}}{d\eta} = n_{pp}\left[(1-x)\,\frac{\langle N_{part}\rangle}{2} + x\,\langle N_{coll}\rangle\right] \qquad\qquad (3)$$

where the $\langle N_{coll}\rangle$ term implied a hard-scattering component in $E_T$ and $N_{ch}$ distributions, known to be absent in p+p collisions (recall Fig. 11). (It was noted in Ref. [24] that hard-scattering was not a unique interpretation. The shape of the centrality dependence, parameterized by $x$, was the same for Pb+Pb at $\sqrt{s_{NN}}$=17.6 GeV at the CERN SpS and Au+Au at $\sqrt{s_{NN}}$=130 GeV; and the LHC data 10 years later [26] gave again the same shape for Pb+Pb at $\sqrt{s_{NN}}$=2760 GeV.) A decade later, the first measurement from Pb+Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV at the LHC (Fig. 12b) [26] showed exactly the same shape vs. $N_{part}$ as the RHIC Au+Au data at $\sqrt{s_{NN}}$=200 GeV, although larger by a factor of 1.6, and the hard-scattering cross section is more than a factor of 20 larger. This strongly argued against a hard-scattering component and for a nuclear geometrical effect, which was reinforced by a PHENIX preliminary measurement in Au+Au at $\sqrt{s_{NN}}$=7.7 GeV (Fig. 12c) [27], which also showed the same shape for the evolution of $dN_{ch}/d\eta$ with $N_{part}$ as the $\sqrt{s_{NN}}$=200 and 2760 GeV measurements. It had previously been proposed that the number of constituent-quark participants provided the nuclear geometry that could explain the RHIC Au+Au data without the need to introduce a hard-scattering component [28]. However, an asymmetric system is necessary in order to distinguish the NQP model from the AQM, so the two models were applied to the RHIC d+Au data.

5.4 The number of constituent-quark participants model (NQP)

The massive constituent-quarks [29], which form mesons and nucleons (e.g. a proton = $uud$), are relevant for static properties and for soft physics at low $p_T$. They are complex objects, or quasiparticles [30], made of the massless partons (valence quarks, gluons and sea quarks) of DIS [31], such that the valence quarks acquire masses of roughly one-third of the nucleon mass, with radii of a fraction of a fm, when bound in the nucleon. With smaller resolution one can see inside the bag to resolve the massless partons, which can scatter at large angles according to QCD. At RHIC, hard-scattering starts to be visible as a power law above the soft (exponential) particle production only for $p_T \gtrsim 1.4$ GeV/c at mid-rapidity (Fig. 13a), which corresponds to a distance scale (resolution) of roughly $\hbar c/p_T \approx 0.14$ fm. (Shuryak and collaborators recently made similar arguments about resolution in separating hard from soft processes, although their mechanism for soft particle and QGP production was 2 color strings per wounded nucleon from Pomeron exchange [32].)


Figure 13: (a) Invariant cross section of $\pi^0$ production vs. $p_T$ at mid-rapidity in p+p collisions at $\sqrt{s}$=200 GeV [34]. The inset shows the transition from an exponential to a power-law shape in the region of $p_T \approx 1$–2 GeV/c. (b) PHENIX deconvolution of the p+p $E_T$ distribution at $\sqrt{s}$=200 GeV [4].

A standard Monte Carlo Glauber calculation is used to assemble the initial positions of all the nucleons. Then three constituent quarks are distributed around the center of each nucleon according to the proton charge distribution, an exponential characterized by the rms charge radius of the proton [33]. The inelastic quark-quark scattering cross section is adjusted to 9.36 mb at $\sqrt{s}$=200 GeV to reproduce the correct p+p inelastic cross section (42 mb) and is then used in the A+B calculations.
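The procedure just described can be sketched schematically as follows (my own simplified illustration, not the PHENIX Glauber code; the Woods-Saxon parameters, the exponential quark sampling, and the black-disk participant criterion are assumptions made for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def woods_saxon_nucleons(A=197, R=6.38, a=0.54):
    """Sample nucleon positions (fm) from a Woods-Saxon density by rejection."""
    pos = []
    while len(pos) < A:
        r = 12.0 * rng.random()
        if rng.random() < r**2 / (1.0 + np.exp((r - R) / a)) / 100.0:  # un-normalized weight < 1
            cos_t = rng.uniform(-1, 1); phi = rng.uniform(0, 2*np.pi)
            sin_t = np.sqrt(1 - cos_t**2)
            pos.append(r * np.array([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t]))
    return np.array(pos)

def add_quarks(nucleons, a_q=0.23):
    """Place 3 constituent quarks per nucleon with an exponential radial profile
    (a_q = 0.23 fm gives an rms radius of sqrt(12)*a_q ~ 0.8 fm, illustrative)."""
    quarks = []
    for c in nucleons:
        r = rng.gamma(shape=3.0, scale=a_q, size=3)     # p(r) ~ r^2 exp(-r/a_q)
        cos_t = rng.uniform(-1, 1, 3); phi = rng.uniform(0, 2*np.pi, 3)
        sin_t = np.sqrt(1 - cos_t**2)
        quarks.append(c + np.column_stack([r*sin_t*np.cos(phi), r*sin_t*np.sin(phi), r*cos_t]))
    return np.vstack(quarks)

def n_quark_participants(qA, qB, b, sigma_qq_mb=9.36):
    """Count quark participants: a quark participates if any quark of the other
    nucleus passes within the black-disk radius given by sigma_qq."""
    d_max2 = (sigma_qq_mb * 0.1) / np.pi          # mb -> fm^2, then d^2 = sigma/pi
    qB = qB + np.array([b, 0.0, 0.0])             # shift nucleus B by the impact parameter
    dx = qA[:, None, 0] - qB[None, :, 0]
    dy = qA[:, None, 1] - qB[None, :, 1]
    hit = (dx**2 + dy**2) < d_max2                # transverse-distance criterion
    return hit.any(axis=1).sum() + hit.any(axis=0).sum()

A = add_quarks(woods_saxon_nucleons()); B = add_quarks(woods_saxon_nucleons())
print("N_qp at b = 5 fm:", n_quark_participants(A, B, b=5.0))
```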

  

Figure 14: PHENIX NQP calculations [4] based on the $E_T$ distribution of a constituent-quark participant from Fig. 13b for: (a) d+Au (also AQM), (b) Au+Au $E_T$ distributions at $\sqrt{s_{NN}}$=200 GeV.

Fig. 13b shows the deconvolution of the p+p $E_T$ distribution into the sum of contributions from 2–6 constituent-quark participants, from which the $E_T$ distribution of a single constituent-quark participant is determined and then applied to d+Au (Fig. 14a) and Au+Au (Fig. 14b) reactions in the same detector.

The NQP calculations closely follow the measured d+Au and Au+Au $E_T$ distributions in shape and magnitude over a range of more than 1000 in cross section. A complete calculation was also done for the AQM, which fails to describe the d+Au data (Fig. 14a). The conclusion is that the number of constituent-quark participants determines the $E_T$ and $N_{ch}$ distributions, and that the AQM calculation which describes the $\alpha+\alpha$ data at $\sqrt{s_{NN}}$=31 GeV (Fig. 11a) is equivalent to the NQP in a symmetric system.

The NQP model was also applied to the centrality-cut PHENIX data by making a plot of $dE_T/d\eta$ for a given centrality bin as a function of the number of constituent-quark participants, $N_{qp}$, in the bin for Au+Au collisions at $\sqrt{s_{NN}}$=62.4, 130 and 200 GeV (Fig. 15a).


Figure 15: PHENIX [4]: (a) $dE_T/d\eta$ vs. $N_{qp}$; (b) $(dE_T/d\eta)/N_{qp}$ vs. centrality.

The data for each $\sqrt{s_{NN}}$ are well described by a straight line, and all are consistent with a zero intercept. This means that $dE_T/d\eta$ is strictly proportional to $N_{qp}$ (Fig. 15a), so that $(dE_T/d\eta)/N_{qp}$ vs. centrality is a constant at each $\sqrt{s_{NN}}$ (Fig. 15b), even up to the LHC at $\sqrt{s_{NN}}$=2.76 TeV. This brought up a very interesting question, with a very important and fundamental answer.

Most experiments at RHIC, starting with PHENIX (Fig. 12a) [24], had successfully fit their measurements of $E_T$ or $N_{ch}$ as a function of centrality (represented by $N_{part}$) to the two-component model (Eq. 3). Also, both the ATLAS [35] and ALICE [36] experiments at the LHC computed the ansatz, $(1-x)\,N_{part}/2 + x\,N_{coll}$, in event-by-event Glauber Monte Carlo calculations, which fit their forward measurements used to define centrality in Pb+Pb collisions. ALICE realized that the combination of the two components $N_{part}$ and $N_{coll}$ in the ansatz represented the number of emitting sources of particles, which they named “ancestors”. Since the number of constituent-quarks also represents the number of emitting sources in a simple linear relationship, Bill Zajc of PHENIX suggested that “the success of the two component model is not because there are some contributions proportional to $N_{part}$ and some going as $N_{coll}$, but because a particular linear combination of $N_{part}$ and $N_{coll}$ turns out to be an empirical proxy for the number of constituent-quarks.” We had a nice table of $N_{part}$, $N_{coll}$, $N_{qp}$ as a function of centrality in Au+Au collisions at $\sqrt{s_{NN}}$=200 GeV, so it did not take very long for me to verify the striking result that indeed it was true: the ratio of $N_{qp}$ to the ansatz is constant on the average and varies by less than 1% over the entire centrality range in 5% bins, except for the most peripheral bin where it is 5% low (Table 1). This result clearly demonstrates that the ansatz works because the particular linear combination of $N_{part}$ and $N_{coll}$ turns out to be an empirical proxy for $N_{qp}$, and not because the $N_{coll}$ term implies hard-scattering.

Table 1: Verification that the ansatz, $(1-x)\,N_{part}/2 + x\,N_{coll}$, from Eq. 3 is a proxy for $N_{qp}$. The errors quoted on $N_{part}$, $N_{coll}$, $N_{qp}$ are correlated and largely cancel in the $N_{qp}$/ansatz ratio. For the average $N_{qp}$/ansatz = 3.81, the maximum variation is less than 1.6%, but 4% low in the most peripheral bin.

The fact that the $N_{qp}$/ansatz ratio drops from an average of 3.88 for Au+Au collisions to 2.99 for p+p collisions is also interesting. This is consistent with the PHOBOS [37] result that a fit of Eq. 3 to the p+p value, leaving $N_{part}$ as a free parameter, gives a result which is above the measured p+p inelastic value of 2.29. The lower value of $N_{qp}$/ansatz for p+p would then give a value of 2.12 for $N_{part}$ in p+p, much closer to the measured value. In that same paper, PHOBOS also noted that their data were consistent with a constant value of $x$ from $\sqrt{s_{NN}}$=19.6 to 200 GeV (more recently extended to $\sqrt{s_{NN}}$=2.76 TeV [38]), which indicated that the fraction of hard-processes contributing to the multiplicity did not increase over a huge range of $\sqrt{s_{NN}}$, even though the hard-scattering cross section greatly increased over this same range.

5.5 Constituent-quark participants resolve several outstanding puzzles

PHOBOS also made some very nice measurements of the charged particle multiplicity over the full rapidity range, not just mid-rapidity. The total charged multiplicity was measured and plotted as a function of centrality for $\sqrt{s_{NN}}$=19.6, 62.4, 130, and 200 GeV (Fig. 16a) [39]. At first glance the data appear to follow the WNM because the multiplicity per nucleon-participant pair appears to be constant in Au+Au collisions. However, the true believers, e.g. Ref. [40], claim that the WNM does not work because the value in Au+Au collisions is much larger than the p+p value shown; but “still the proportionality of these multiplicities to the number of participants holds” [40].


Figure 16: a) Total charged multiplicity per nucleon-participant pair vs. centrality, $N_{part}$, for the $\sqrt{s_{NN}}$ indicated. Open points are the measured $N_{ch}^{tot}$; solid points are extrapolated to the full rapidity range. b) Total charged multiplicity per nucleon-participant pair in p+p and A+A collisions as a function of c.m. energy compared to $e^+e^-$ collisions [41].

In fact, the difference between the p+p and Au+Au values may be related to another interesting observation by PHOBOS [41]: that the “leading particle effect” in p+p collisions, as discovered by Zichichi and collaborators [42]—in which the total multiplicity in p+p collisions at c.m. energy $\sqrt{s}$ is equal to that in $e^+e^-$ collisions at $\sqrt{s}/2$ (the “effective energy”) (Fig. 16b), because the leading protons carry away half the c.m. energy—is absent in A+A collisions, where the leading protons can reinteract. This observation seems to contradict the WNM, in which the key assumption is that what counts is whether or not a nucleon was struck, not how many times it was struck.

Both these effects can be reconciled by constituent-quark participants.

In the NQP model (Table 1), the number of constituent-quark participants per nucleon participant, $N_{qp}/N_{part}$, is 1.5 for a p+p collision but rises to 2.27–2.73 (a factor of 1.51–1.82) for the more central (0–50%) Au+Au collisions plotted in Fig. 16a. This would correspond to an increase in the multiplicity per nucleon-participant pair by the same factor, as observed. It also might explain the slight rise of the open points with increasing $N_{part}$. Similarly, the increase in “effective energy” for particle production shown by the increase in multiplicity from p+p to Au+Au collisions is due to an increase in the number of (constituent-quark) participants, not because of additional collisions of a given nucleon-participant. Furthermore, the factor 1.5 decrease in $N_{qp}/N_{part}$ from Au+Au to p+p corresponds to a reduction in the effective energy for the observed multiplicity from 200 GeV to 100 GeV in p+p collisions on Fig. 16b, the same factor of 2 discussed in the original measurement [41]. Thus, the NQP model rather than the WNM preserves the assumption in these “extreme-independent” participant models that successive collisions of a participant do not increase its particle emission, while explaining these two interesting observations.

Another argument against the $(1-x)\,N_{part}/2 + x\,N_{coll}$ ansatz representing actual hard-collisions, rather than simply being a proxy for constituent-quark participants, concerns the measurement of elliptic flow in central U+U collisions [43].

6 Collective Flow

For many years, since the days of the Bevalac [44], collective flow [45] has been observed in A+A collisions over the full range of energies studied, from incident kinetic energies of a few hundred MeV per nucleon to c.m. energies of a few TeV, and has been thought to be a distinguishing feature of A+A collisions compared to either p+p or p+A collisions. Collective flow, or simply flow, is a collective effect which can not be obtained from a superposition of independent N+N collisions. I first present a short review (details can be found in previous ISSP proceedings [7, 9]) and then move on to the newer results.

Immediately after an A+A collision, the overlap region defined by the nuclear geometry is almond shaped (see Fig. 17a), with the shortest axis along the impact parameter vector. The different pressure gradients along the short and long axes of the ellipse break the symmetry of the problem and create an azimuthal angular dependence of the semi-inclusive single particle spectrum with respect to the reaction plane, $\Phi_R$, which is represented by an expansion in harmonics [48], where the angle of the reaction plane is defined to be along the impact parameter vector, the $x$ axis in Fig. 17a:

$$\frac{dN}{d\phi} \propto 1 + \sum_{n} 2\,v_n \cos\left[n\,(\phi - \Phi_R)\right] \qquad\qquad (4)$$
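A minimal sketch of how the $v_2$ coefficient of Eq. (4) can be estimated with an event-plane (Q-vector) method on a toy event; the toy generator, multiplicity, and input $v_2$ are my own illustrative choices, and the resolution correction used in real analyses is omitted, so the estimate below is biased low.

```python
import numpy as np

rng = np.random.default_rng(1)

def event_plane_v2(phis):
    """Estimate v2 from one event's azimuthal angles via the second-harmonic
    Q-vector and event plane (no resolution correction, illustration only)."""
    phis = np.asarray(phis)
    q_x, q_y = np.cos(2 * phis).sum(), np.sin(2 * phis).sum()
    psi_2 = 0.5 * np.arctan2(q_y, q_x)            # second-harmonic event plane
    return np.mean(np.cos(2 * (phis - psi_2)))    # v2 observed w.r.t. that plane

def toy_event(n=500, v2=0.06, psi_r=0.7):
    """Particles drawn from dN/dphi ~ 1 + 2 v2 cos[2(phi - Psi_R)] by accept-reject."""
    phis = []
    while len(phis) < n:
        phi = rng.uniform(0, 2 * np.pi)
        if rng.random() < (1 + 2 * v2 * np.cos(2 * (phi - psi_r))) / (1 + 2 * v2):
            phis.append(phi)
    return np.array(phis)

# average over many toy events; real analyses also remove the particle of
# interest from the Q-vector and apply an event-plane resolution correction
print("v2 estimate:", np.mean([event_plane_v2(toy_event()) for _ in range(200)]))
```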


Figure 17: (a) Almond shaped overlap zone generated just after an A+A collision where the incident nuclei are moving along the $z$ axis. The reaction plane by definition contains the impact parameter vector (along the $x$ axis) [46]. (b) $v_2$ for charged particles integrated over $p_T$ at $\sqrt{s_{NN}}$=2.76 TeV for 20–30% centrality compared to the measurements at lower $\sqrt{s_{NN}}$ at the same centrality [47].

The Fourier coefficient $v_2$, called elliptic flow, is predominant at mid-rapidity. The evolution of $v_2$ with $\sqrt{s_{NN}}$ (Fig. 17b) [47] is the result of competing processes. At the lowest energies [49], the main effect among many others is from nuclei bouncing off each other and breaking into fragments, which is sensitive to the equation of state of the nuclei—soft, like sponges, or hard, like billiard balls? The negative $v_2$ at larger $\sqrt{s_{NN}}$ is produced by the effective “squeeze-out” (out of the reaction plane) of the produced particles by the slow-moving, minimally Lorentz-contracted spectators (as in Fig. 17a), which block the particles emitted in the reaction plane. With increasing $\sqrt{s_{NN}}$, the spectators move faster and become more contracted, so the blocking stops. The increase of $v_2$ with $\sqrt{s_{NN}}$ is generally described by hydrodynamics in the QGP region, but is also described by hadron transport theories at the lower $\sqrt{s_{NN}}$ of a few GeV [50].

Flow measurements contributed two of the most important results about the properties of the QGP: i) the scaling of $v_2$ of identified particles at mid-rapidity with the number of constituent-quarks, $n_q$, in the particle—$v_2/n_q$ scales with the transverse kinetic energy per constituent-quark, $KE_T/n_q$—because hadrons have not yet formed at the time the flow develops; ii) the persistence of flow out to large $p_T$, which implied that the viscosity is small [51], perhaps as small as the conjectured quantum bound from string theory, $\eta/s = 1/(4\pi)$ [52], where $\eta$ is the shear viscosity and $s$ the entropy density per unit volume. This led to the description of the “sQGP” produced at RHIC as “the perfect fluid”.

New insight came in 2013, when measurements in p+Pb at the LHC and d+Au at RHIC observed what looked very much like collective flow in systems that were believed to be too small to support collective effects. This was the reason for the $^3$He+Au run at RHIC in 2014: to see whether triangular flow, $v_3$, would be more prominent with a 3-nucleon projectile. The improvement of the d+Au and p+Pb measurements this year to identified pions and protons strengthened the case that the $v_2$ observed in these small systems is really hydrodynamic collective flow.

Figure 18a [53] shows the two-particle correlation function in d+Au (Eq. 5) fit with harmonic terms, where the solid line is the full fit and only two of the harmonics (dashes and dots) make a significant contribution.

    

Figure 18: (a) PHENIX [53] two-particle azimuthal correlation function (Eq. 5) in central (0–5%) d+Au collisions at RHIC. $v_2(p_T)$ of identified $\pi$ and $p$ obtained by the standard reaction-plane method (Eq. 4) for (b) central d+Au collisions at RHIC ($\sqrt{s_{NN}}$=200 GeV) and (c) central p+Pb collisions at the LHC ($\sqrt{s_{NN}}$=5.02 TeV).
$$C(\Delta\phi) = \frac{dN_{\rm pairs}/d\Delta\phi}{\langle dN_{\rm pairs}/d\Delta\phi\rangle} = 1 + \sum_{n} 2\,c_n \cos\left(n\,\Delta\phi\right) \qquad\qquad (5)$$

The large rapidity separation comes into play because the trigger particle is a charged track at mid-rapidity while the associated particle is a count in an MPC tower at forward rapidity, from $\pi^0$ or $\eta$ meson decay photons. Also, there is no evidence of a di-jet contribution because the large pseudorapidity gap between the trigger and associated particles is beyond that of a di-jet. Thus, the long-range correlation in Fig. 18a, which is not seen in p+p comparison data but has the same properties as collective flow in Au+Au collisions, is consistent with hydrodynamic collective flow in d+Au. Perhaps more convincing evidence for hydrodynamic flow is given in Figs. 18b,c, where both at RHIC in d+Au (b) and at the LHC in p+Pb (c), the characteristic $\pi$, $p$ mass splitting of $v_2(p_T)$ seen in Au+Au is observed [53]. The splitting occurs because, for a given transverse collective expansion velocity, protons have a larger $p_T$ than pions.

6.1 $v_2$ in U+U collisions and constituent-quark participants

Because Uranium nuclei are prolate spheroids, there is the interesting possibility of a large $v_2$ in body-to-body central collisions, which have a significant eccentricity and almond shape (Fig. 19a).


Figure 19: (a) Body-to-body and tip-to-tip configurations in U+U collisions with zero impact parameter. The different relation of the multiplicity to $v_2$ is sketched next to each configuration. (Modified drawing from Ref. [43]). (b) STAR measurements of $v_2$ in Au+Au and U+U at 200 GeV as a function of $dN_{ch}/d\eta$, with upper percentiles of centrality for U+U indicated by vertical dashed lines [43].

Based on the assumption that the $(1-x)\,N_{part}/2 + x\,N_{coll}$ ansatz (Eq. 3) would describe the multiplicity distribution in U+U collisions, it was predicted that at the highest multiplicities (the most central collisions) the tip-to-tip configuration, with much larger $N_{coll}$ and small eccentricity (small $v_2$), would overtake the body-to-body configuration, whose large eccentricity corresponds to large $v_2$.

This led to two predictions: i) the tip-to-tip configuration would be selected by the most central collisions [54]; ii) these most central collisions would see a sharp decrease in $v_2$ with increasing multiplicity [55, 56], called a cusp. This sharp decrease—represented by the bent line on the topmost U+U data (filled circles) in Fig. 19b (called a knee in Ref. [43])—is not observed. As discussed previously, this is because the $N_{coll}$ term is not relevant for multiplicity distributions, which also argues against the method proposed in Ref. [54] to select the tip-to-tip configuration.

7 RHIC Beam Energy Scan (BES)—in search of the critical point

In addition to discovering the QGP and measuring its properties, another objective of the RHIC physics program is to measure the phase diagram of nuclear matter and to determine the equation of state in the various phases and the characteristics of the phase transitions. Two of the many proposed phase diagrams of nuclear matter (e.g. see Ref. [57]) are shown in Fig. 20 together with the idealized trajectories of the


Figure 20: Proposed phase diagrams for nuclear matter: Temperature, $T$, vs. Baryon Chemical Potential, $\mu_B$. a) STAR’s idea in 2013 [58]; b) STAR’s more cautious idea in 2014 [59].

evolution of the medium for Au+Au collisions at the $\sqrt{s_{NN}}$ values proposed for the Beam Energy Scan at RHIC to search for a QCD critical point. The bursts represent the hottest and densest stage of the medium, when thermal equilibrium is reached shortly after the collision. The axes are the temperature, $T$, vs. the baryon chemical potential, $\mu_B$. The temperature for the transition from the Quark Gluon Plasma (QGP) to a hadron gas is taken as 170 MeV at $\mu_B=0$, and the phase boundary is predicted to be a smooth crossover down to a critical point, below which the phase boundary becomes a first order phase transition.

In an equilibrated thermal medium, particles should follow a Boltzmann distribution in the local rest frame [60]

$$\frac{dN}{d^3p} \propto e^{-(E-\mu)/T} \qquad\qquad (6)$$

where $E$ is the particle energy, $T$ is the temperature, and $\mu$ is a chemical potential. In fact, the ratios of particle abundances (which are dominated by low-$p_T$ particles) for central Au+Au collisions at RHIC, even for strange and multi-strange particles, are well described [61] by fits to a thermal distribution,

$$\frac{\bar p}{p} = e^{-2\mu_B/T} \qquad\qquad (7)$$

with similar expressions for strange particles. $\mu_B$ (and $\mu_S$) are the chemical potentials associated with each conserved quantity: baryon number, $B$ (and strangeness, $S$). Thus it is simple and instructive to estimate the $\bar p/p$ ratio from Fig. 20a, near the arrow, by reading off $T$ and $\mu_B$ from the sketch. Since the $\bar p/p$ ratio vs. $\sqrt{s_{NN}}$ will be an important issue later, it is not a good idea to get this important information from a sketch in a proposal but from measurements and the best analysis (Fig. 21).
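For orientation, with purely illustrative numbers of my own (not the values read from the figure), Eq. (7) gives for example

$$\frac{\bar p}{p} = e^{-2\mu_B/T} \approx e^{-2\,(200~{\rm MeV})/(160~{\rm MeV})} \approx 0.08 ,$$

so even a modest misreading of $T$ or $\mu_B$ from a sketch changes the predicted ratio by a large factor.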


Figure 21: (a) STAR measurements of $\bar p/p$ vs. $\sqrt{s_{NN}}$ [62]; b) Best accepted analysis of $T_{ch}$ and $\mu_B$ vs. $\sqrt{s_{NN}}$ [63].

The results are: i) the correct $\bar p/p$ ratio at a given $\sqrt{s_{NN}}$ is read from Fig. 21a; ii) this ratio also corresponds to the correct $\mu_B$ at that $\sqrt{s_{NN}}$ from Fig. 21b. The lesson is: if it looks more like art than like science, be skeptical and look in refereed journals for the correct numbers.

7.1 A press release during ISSP 2011

On June 23, 2011, shortly before I was to present my 2011 lectures, a press release from LBL arrived claiming that “By comparing theory with data from STAR, Berkeley Lab scientists and their colleagues map phase changes in the QGP” [64]. Since I was going to criticize in my lectures what I considered to be a particularly egregious case of “physics by press release” in the year 2000 by CERN (see Ref. [9]), I felt that I was obliged to quickly absorb and present in my talk the physics behind this latest press release, hopefully a “Highlight from RHIC”.

The subject is “Fluctuations of conserved quantities”, in this case the net-baryon distribution, taken in practice as the net-proton distribution. Since there can be no fluctuations of conserved quantities such as net charge or net baryon number in the full phase space, one has to go to “locally conserved quantities” [65] in small rapidity intervals, to detect a small fraction of the protons and anti-protons which then fluctuates, i.e. varies from event to event. The argument is that, e.g., the fluctuation of one charged particle into or out of the considered interval produces a larger mean square fluctuation of the net electric charge if the system is in the hadron gas phase with integral charges than in the QGP phase with fractional charges.

However, while there are excellent statistical mechanical arguments about the utility of fluctuations of conserved quantities such as net baryon number as a probe of a critical point [66], there were, in 2011, no adequate treatments of the mathematical statistics of the experimental measurements. There are also additional problems such as short-range rapidity correlations in AA collisions between like-particles induced by Fermi or Bose quantum statistics that must be reckoned with (e.g. see Refs. [67, 68]).

Theoretical analyses tend to be made by a Taylor expansion of the free energy around the critical temperature, where the free energy is given in terms of the partition function, $Z$, or sum over states, which is of the form

$$Z = \sum_{\rm states} \exp\left[-\left(E - \mu_B B - \mu_S S - \mu_Q Q\right)/T\right] \qquad\qquad (8)$$

where $\mu_B$, $\mu_S$, $\mu_Q$ are chemical potentials associated with the conserved charges $B$, $S$, $Q$ [66]. The terms of the Taylor expansion, which are obtained by differentiation, are called susceptibilities, denoted $\chi$. The only connection of this method to mathematical statistics is that the Cumulant generating function in mathematical statistics for a random variable $x$ is also a Taylor expansion of the logarithm of the expectation value of an exponential:

$$K_x(t) = \ln \mathrm{E}\!\left[e^{t x}\right] = \sum_{n=1}^{\infty} \kappa_n \frac{t^n}{n!} \qquad\qquad (9)$$

Thus, the susceptibilities are Cumulants in mathematical statistics terms, where, in general, the $n$-th Cumulant, $\kappa_n$, is the $n$-th central moment, $\mu_n = \langle (x-\langle x\rangle)^n\rangle$, with combinations of the lower order moments subtracted. For instance, $\kappa_1 = \langle x\rangle$, $\kappa_2 = \mu_2 = \sigma^2$, $\kappa_3 = \mu_3$, $\kappa_4 = \mu_4 - 3\mu_2^2$. Two so-called normalized or standardized Cumulants are common in this field: the skewness, $S = \kappa_3/\kappa_2^{3/2}$, and the kurtosis, $\kappa = \kappa_4/\kappa_2^2$.
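These definitions translate directly into a few lines of code (a minimal sketch of my own; the Poisson sample at the end is just a sanity check, for which all four Cumulants should be approximately equal to the mean and both normalized combinations approximately 1):

```python
import numpy as np

def cumulants(sample):
    """First four Cumulants of a 1-D sample, plus S*sigma and kappa*sigma^2,
    using the central-moment relations quoted above."""
    x = np.asarray(sample, dtype=float)
    mu = x.mean()
    m2 = np.mean((x - mu)**2)
    m3 = np.mean((x - mu)**3)
    m4 = np.mean((x - mu)**4)
    c1, c2, c3, c4 = mu, m2, m3, m4 - 3.0 * m2**2
    s_sigma = c3 / c2            # S*sigma  = C3/C2
    k_sigma2 = c4 / c2           # kappa*sigma^2 = C4/C2
    return c1, c2, c3, c4, s_sigma, k_sigma2

rng = np.random.default_rng(2)
print(cumulants(rng.poisson(5.0, size=200_000)))
```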

A sample [69] of STAR measurements of the distribution of net-protons in Au+Au collisions, in a small $p_T$ interval, for different $\sqrt{s_{NN}}$, is shown in Fig. 22a.

Figure 22: a) (top-left) STAR [69] distribution of event-by-event net-protons at 3 values of $\sqrt{s_{NN}}$; b) (top-right) STAR published [70] measurements of $\kappa\sigma^2$; c) (bottom-left) Measurements from (b) as shown in Ref. [71] compared to the predicted ratio of susceptibilities (open crosses); d) (bottom-right) compilation [69] of STAR measurements of $\kappa\sigma^2$ for net-protons.

The moments, in the form $\kappa\sigma^2$, are shown from a previous STAR publication [70] in Fig. 22b, while a plot, alleged to be of this same data, presented in the Lattice QCD theory publication that generated the press release, is shown in Fig. 22c [71]; and a plot of the $\kappa\sigma^2$ from the data of Fig. 22c, combined with the results from Fig. 22b, is shown in Fig. 22d [69]. There are many interesting issues to be gleaned from Fig. 22.

The data point at 20 GeV in Fig. 22c is not the published one from (b), as stated in the caption [71], but the one from (d), which is different and has a much larger error. This, in my opinion, makes the data point look better compared to the predicted discontinuous value of $\kappa\sigma^2$ for the critical point at 20 GeV (open crosses), in contrast to the predictions of 1.0 for both 62.4 and 200 GeV. The published measurements in (b), together with the newer measurements in (d), are all consistent with $\kappa\sigma^2 \approx 1$, but clearly indicate the need for a better measurement near 20 GeV. Apart from these issues, the main problem of comparing Lattice QCD “data” to experimental measurements is that it is like comparing peaches to a fish, since the prediction is the result of derivatives of the log of the calculated partition function of an idealized system, which may have little bearing on what is measured using finite sized nuclei in an experiment with severe kinematic cuts. Maybe this is too harsh a judgement; but since this is the first such comparison (hence the press release), perhaps the situation will improve in the future. If a future measurement were to show a huge discontinuity of $\kappa\sigma^2$ similar to the theoretical prediction near 20 GeV, then even I would admit that such a discovery would deserve a press release, maybe more!

7.1.1 If you know the distribution, you know all the moments and cumulants

When I first saw the measured distributions in Fig. 22a in 2011, my immediate reaction was that STAR should fit them to Negative Binomial Distributions (NBD) so that they would know all the Cumulants. However, I subsequently realized that my favorite 3 distributions for integer random variables, namely the Poisson, Binomial, and Negative Binomial, are all defined only for non-negative integers (e.g. see Ref. [8] for details), while the number of net-protons in an event can be negative as well as positive, especially at the higher c.m. energies. Thanks are due to Gary Westfall of STAR who, in a paper presented at the Erice School of Nuclear Physics in 2012 [72], found out that these three distributions fall into the class of “integer valued Lévy processes” [73], for which the Cumulants of the distribution of the difference of samples from two such distributions, $x$ and $y$, with Cumulants $\kappa_n^x$ and $\kappa_n^y$ respectively, are [74, 73]:

$$\kappa_n(x-y) = \kappa_n^x + (-1)^n\, \kappa_n^y \qquad\qquad (10)$$

so long as the distributions are not 100% correlated. This result is the same as if the distributions of $x$ and $y$ were statistically independent. The first four Cumulants of the Poisson, Binomial and Negative Binomial distributions are given in Table 2.

Cumulant | Poisson (mean $\mu$) | Binomial ($N$, $p$) | Negative Binomial (mean $\mu$, parameter $k$)
$\kappa_1$ | $\mu$ | $Np$ | $\mu$
$\kappa_2$ | $\mu$ | $Np(1-p)$ | $\mu(1+\mu/k)$
$\kappa_3$ | $\mu$ | $Np(1-p)(1-2p)$ | $\mu(1+\mu/k)(1+2\mu/k)$
$\kappa_4$ | $\mu$ | $Np(1-p)\,[1-6p(1-p)]$ | $\mu(1+\mu/k)(1+6\mu/k+6\mu^2/k^2)$
Table 2: Cumulants for Poisson, Binomial and Negative Binomial Distributions
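A quick numerical sanity check of Eq. (10) (my own toy check, not a STAR procedure): draw independent Negative Binomial samples for the two counts and compare the Cumulants of their difference with $\kappa_n^x + (-1)^n \kappa_n^y$; the NBD parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def kappa(sample):
    """First four Cumulants from central moments."""
    x = np.asarray(sample, dtype=float)
    mu = x.mean(); d = x - mu
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return np.array([mu, m2, m3, m4 - 3 * m2**2])

# numpy's negative_binomial(n, p) counts failures before n successes
kx, px = 4.0, 0.4     # illustrative parameters for the "x" distribution
ky, py = 3.0, 0.5     # illustrative parameters for the "y" distribution
N = 1_000_000
x = rng.negative_binomial(kx, px, N)
y = rng.negative_binomial(ky, py, N)

lhs = kappa(x - y)                                     # Cumulants of the difference
rhs = kappa(x) + np.array([-1, 1, -1, 1]) * kappa(y)   # kappa_n^x + (-1)^n kappa_n^y
print(np.round(lhs, 3), np.round(rhs, 3))
```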

7.2 The latest measurements have appeared without a press release.

In the intervening period since 2011, the STAR collaboration has improved the preliminary measurements to publications and has improved the analysis by comparing to both Poisson and Negative Binomial baselines. Figure 23a [75] shows the STAR measurements of Cumulants of the net-charge distributions, built from the number of positive ($N^+$) and negative ($N^-$) charged particles within the $\eta$ and $p_T$ acceptance in each event (after removing low-$p_T$ protons and antiprotons) [75]. The corresponding Poisson and NBD Cumulants were calculated from the measured means and variances of the $N^+$ and $N^-$ distributions, respectively, and the net-charge baselines were then calculated using Eq. 10. In contrast to Fig. 22, no non-monotonic behavior with $\sqrt{s_{NN}}$ is observed (or claimed), and the measurements of $S\sigma$ and $\kappa\sigma^2$ are all above the Poisson baseline. The measurements clearly favor the NBD.

    

Figure 23: $\sqrt{s_{NN}}$ dependence of combinations of Cumulants in Au+Au (and p+p) from STAR: a) (left) net-charge Cumulants [75], where $M$ is used to represent the mean, $\mu$. b) (right) Cumulants of the net-proton distributions [76], where the error bars are statistical and the caps systematic errors.

The situation is quite different for the net-proton ($N_p - N_{\bar p}$) Cumulants (Fig. 23b) [76], measured at mid-rapidity over a $p_T$ range which covers roughly half the spectrum. Here the measurements of $S\sigma$ and $\kappa\sigma^2$ are all below the Poisson baseline, denoted Skellam, which is the distribution of the difference between two Poissons and reflects “a system of totally uncorrelated, statistically random particle production” [76]. From Eq. 10 and Table 2 one can see that, for a Skellam, $S\sigma = (\mu_p - \mu_{\bar p})/(\mu_p + \mu_{\bar p})$, which increases with decreasing $\sqrt{s_{NN}}$ because the $\bar p$ vanish ($\mu_{\bar p}\rightarrow 0$), so that the shape of the net-proton distribution becomes dominated by the protons. This is easier to see in a plot of $\kappa\sigma^2$ vs. $S\sigma$ with $\sqrt{s_{NN}}$ indicated (Fig. 24a) [59], which shows clearly that $\kappa\sigma^2$ starts dropping at the lower $\sqrt{s_{NN}}$, where the $\bar p/p$ ratio drops well below unity (recall Fig. 21).
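To make the Skellam baseline explicit (a short consequence of Eq. (10) and the Poisson column of Table 2, with $\mu_p$ and $\mu_{\bar p}$ the mean proton and antiproton numbers):

$$\kappa_{1} = \kappa_{3} = \mu_p - \mu_{\bar p}, \qquad \kappa_{2} = \kappa_{4} = \mu_p + \mu_{\bar p} \;\Longrightarrow\; S\sigma = \frac{\kappa_3}{\kappa_2} = \frac{\mu_p-\mu_{\bar p}}{\mu_p+\mu_{\bar p}}, \qquad \kappa\sigma^2 = \frac{\kappa_4}{\kappa_2} = 1 ,$$

so as the antiprotons disappear at low $\sqrt{s_{NN}}$, $S\sigma \rightarrow 1$ while the Skellam baseline for $\kappa\sigma^2$ stays at 1.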


Figure 24: (a) (left) $\kappa\sigma^2$ vs. $S\sigma$ and $\sqrt{s_{NN}}$ [59]; b) (right) proton rapidity distributions for the top 5% centrality at the AGS (Au+Au, $\sqrt{s_{NN}}$=4.9 GeV), SPS (Pb+Pb, $\sqrt{s_{NN}}$=17.2 GeV) and RHIC (Au+Au, $\sqrt{s_{NN}}$=200 GeV), plotted relative to the beam rapidity in the c.m. system [77].

The errors are still too large to determine whether or not $\kappa\sigma^2$ stays constant below $\sqrt{s_{NN}}$=19.6 GeV, but are sufficient to clearly rule out the value of $\kappa\sigma^2$ near 20 GeV predicted in Fig. 22c [71], which created the fuss in 2011. It is also important to point out that, in addition to the vanishing of the anti-protons, the physics of the protons at mid-rapidity changes dramatically—the protons are no longer produced particles (which would come in baryon–antibaryon pairs and conserve the net-baryon number), but are the participants and fragments from the colliding nuclei, which move to mid-rapidity and eventually stop as $\sqrt{s_{NN}}$ is reduced from 200, to 17.2, to 4.9 GeV (Fig. 24b) [77]. All these results indicate that the search for a QCD critical point at RHIC in the Beam Energy Scan (BES) in 2018–19 may not be as straightforward as originally assumed.

8 Jet quenching, RHIC’s main claim to fame

The gold-plated signature for the QGP since 1986 [78] has been the suppression of $J/\Psi$ production, because the color potential between the quarks would be screened (Debye screening) by all the free color charges in the medium, so that the $c\bar{c}$ would not be able to bind to form the $J/\Psi$. In fact, the PHENIX experiment at RHIC was specifically designed to detect the $J/\Psi$ at mid-rapidity, at rest or with very low $p_T$ (where the screening effect would be the largest), via the decay $J/\Psi\rightarrow e^+e^-$. $J/\Psi$ suppression was reportedly observed several times at the CERN SpS fixed target heavy ion program, starting with NA38 in O+U collisions in 1989 [79], but was plagued with many problems. The principal physics problem is that the $J/\Psi$ does not follow the standard hard-scattering pointlike scaling in A+B collisions, proportional to $A\,B$, but is suppressed in cold nuclear matter (CNM) in p+A and A+B collisions, scaling as $(A\,B)^{\alpha}$ with $\alpha<1$. Thus, the ultimate discovery by NA50 in Pb+Pb collisions at $\sqrt{s_{NN}}$=17.2 GeV [80] in 1996–1998 was called “anomalous suppression” because it was below the CNM cross section dependence, which was itself well below the hard-scattering pointlike scaling.

In 1998, at the QCD workshop in Paris [81], I found what I thought was a cleaner signal of the QGP, when Rolf Baier asked me whether jets could be measured in Au+Au collisions, because he had made studies in pQCD [82] of the energy loss of partons, produced by hard-scattering “with their color charge fully exposed”, in traversing a medium “with a large density of similarly exposed color charges”. The conclusion was that “Numerical estimates of the loss suggest that it may be significantly greater in hot matter than in cold. This makes the magnitude of the radiative energy loss a remarkable signal for QGP formation” [82]. In addition to being a probe of the QGP, the fully exposed color charges allow the study of parton-scattering at large momentum transfer in the medium, where new collective QCD effects may possibly be observed.

Because the energy from the underlying event in a typical jet cone in central Au+Au collisions at $\sqrt{s_{NN}}$=200 GeV would be huge—where the kinematic limit for a jet is 100 GeV—I said (and wrote [81]) that jets can not be reconstructed in Au+Au central collisions at RHIC—still correct after 16 years. On the other hand, hard-scattering was discovered in p+p collisions at the CERN-ISR in 1972 with single particle and two-particle correlations, while jets had a long learning curve from 1977–1982, with a notorious false claim (e.g. see Refs. [8, 9]); so I said (and wrote [81]) that we should use single- and two-particle measurements—which we did and it WORKED! The present solution for jets in A+A collisions (LHC 2010 and RHIC c. 2014) is to take smaller cones, with reconstructed jets down to roughly 56 GeV and 32 GeV at the LHC and 14 GeV at RHIC.

8.1 Jet quenching at RHIC — Suppression of high-$p_T$ particles

The discovery at RHIC [83] that $\pi^0$’s produced at large transverse momenta are suppressed in central Au+Au collisions by a large factor compared to pointlike scaling from p+p collisions is arguably the major discovery in Relativistic Heavy Ion Physics. For $\pi^0$ (Fig. 25a) [84], the hard-scattering in p+p collisions is indicated by the power-law behavior of the invariant cross section, $E\,d^3\sigma/dp^3 \propto p_T^{-n}$, at large $p_T$. The Au+Au data at a given $p_T$ can be characterized either as shifted lower in $p_T$ from the pointlike-scaled p+p data, or shifted down in magnitude, i.e. suppressed. In Fig. 25b, the suppression of the many identified particles measured by PHENIX at RHIC is presented as the Nuclear Modification Factor,

Figure 25: a) (left) Log-log plot of the invariant yield of $\pi^0$ at $\sqrt{s_{NN}}$=200 GeV as a function of transverse momentum, for p+p collisions multiplied by $\langle T_{AA}\rangle$ for Au+Au central (0–10%) collisions, compared to the Au+Au measurement [84]. The vertical arrow indicates the shift in magnitude, the horizontal arrow the shift in $p_T$. b) (right) $R_{AA}$ for all identified particles so far measured by PHENIX in Au+Au central collisions at $\sqrt{s_{NN}}$=200 GeV.

$R_{AA}(p_T)$, the ratio of the yield of e.g. $\pi^0$ per central Au+Au collision (upper 10%-ile of observed multiplicity) to the pointlike-scaled p+p cross section at the same $p_T$, where $\langle T_{AA}\rangle$ is the average overlap integral of the nuclear thickness functions:

$$R_{AA}(p_T) = \frac{d^2N^{AA}_{\pi}/dp_T\,dy}{\langle T_{AA}\rangle \; d^2\sigma^{pp}_{\pi}/dp_T\,dy} \qquad\qquad (11)$$
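A minimal sketch of Eq. (11) in code (the binning, yields and $\langle T_{AA}\rangle$ below are illustrative placeholders of my own, not PHENIX values):

```python
import numpy as np

def r_aa(yield_aa, cross_section_pp, t_aa):
    """Nuclear modification factor per p_T bin, Eq. (11):
    R_AA = (per-event A+A yield) / (<T_AA> * p+p cross section).

    yield_aa         : d^2N/dp_T dy per central A+A event, array over p_T bins
    cross_section_pp : d^2sigma/dp_T dy in p+p (mb/GeV), same binning
    t_aa             : average nuclear overlap integral <T_AA> in mb^-1
    """
    return np.asarray(yield_aa) / (t_aa * np.asarray(cross_section_pp))

# illustrative numbers only: a flat factor-of-5 suppression in three p_T bins
pp_xsec  = np.array([1.0e-3, 2.5e-4, 7.0e-5])   # mb/GeV, placeholder p+p spectrum
t_aa     = 23.0                                 # mb^-1, placeholder <T_AA> for central A+A
aa_yield = 0.2 * t_aa * pp_xsec                 # built so that R_AA = 0.2 everywhere
print(r_aa(aa_yield, pp_xsec, t_aa))            # -> [0.2 0.2 0.2]
```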

The striking differences of $R_{AA}$ in central Au+Au collisions for the many particles measured by PHENIX (Fig. 25b) illustrate the importance of particle identification for understanding the physics of the medium produced at RHIC. Most notable are: the equal suppression of $\pi^0$ and $\eta$ mesons by a constant factor of 5 ($R_{AA}=0.2$) over a broad $p_T$ range, with a suggestion of an increase in $R_{AA}$ at the highest $p_T$; the equality of the suppression of direct single $e^{\pm}$ (from heavy quark ($c$, $b$) decay) and $\pi^0$ at high $p_T$; the non-suppression of direct-$\gamma$ at high $p_T$; and the exponential rise of $R_{AA}$ of direct-$\gamma$ at low $p_T$ [85], which is totally and dramatically different from all other particles and is attributed to thermal photon production by many authors (e.g. see citations in Ref. [85]). In the hard-scattering region of $p_T$, the fact that all hadrons are suppressed, but direct-$\gamma$ are not suppressed, indicates that the suppression is a medium effect on outgoing color-charged partons, likely due to energy loss by coherent Landau-Pomeranchuk-Migdal radiation of gluons, predicted in pQCD [82], which is sensitive to properties of the medium.

One nice advantage that hard-scattering and high-$p_T$ suppression have as a QGP probe, compared to $J/\Psi$ suppression, is that although there is a CNM effect, it is an enhancement rather than a suppression; and, as far as is known, the enhancement, historically called the Cronin effect [86], only occurs for baryons and not mesons at RHIC energies.

Figure 26: Measurements of $R_{AA}$ ($R_{dA}$) of identified particles as a function of $p_T$ and centrality at $\sqrt{s_{NN}}$=200 GeV [87]: a) (left) Au+Au; b) (right) d+Au.

Figure 26a shows $R_{AA}$ in Au+Au for protons and mesons, where, in central collisions (0–10%), all the mesons are suppressed at high $p_T$ while the protons are enhanced at intermediate $p_T$ and then become suppressed at larger $p_T$. The d+Au results in Fig. 26b show no CNM effect for the mesons out to the highest $p_T$ measured, while the protons show a huge enhancement (Cronin effect) in all centralities except for the most peripheral (60–88%). At present, there is no explanation of the proton enhancement in either Au+Au or d+Au collisions, so mesons such as the $\pi^0$ and direct-$\gamma$ are the favored hard probes.

8.2