Event-plane flow analysis without non-flow effects
Abstract
The event-plane method, which is widely used to analyze anisotropic flow in nucleus-nucleus collisions, is known to be biased by non-flow effects, especially at high transverse momentum $p_T$. Various methods (cumulants, Lee-Yang zeroes) have been proposed to eliminate non-flow effects, but their implementation is tedious, which has limited their application so far. In this paper, we show that the Lee-Yang-zeroes method can be recast in a form similar to the standard event-plane analysis. Non-flow correlations are strongly suppressed by using the information contained in the length of the flow vector, in addition to the event-plane angle. This opens the way to improved analyses of elliptic flow and azimuthally-sensitive observables at RHIC and LHC.
pacs:
25.75.Ld, 25.75.Gz, 05.70.Fh

I Introduction
Studies of particle production at the BNL Relativistic Heavy Ion Collider (RHIC) have revealed strong collective effects: in particular, the azimuthal distribution transverse to the direction of the colliding nuclei has sizable anisotropies, a phenomenon called anisotropic flow. The main component of this anisotropy, elliptic flow, has been extensively measured for several beam energies and collision systems [1, 2, 3].
Anisotropic flow is most often analyzed using the event-plane method [4]. This analysis technique is plagued by systematic errors due to non-flow effects [5]. There are other sources of systematic errors, such as fluctuations [6, 7], but non-flow effects are expected to be the dominant source of error at high $p_T$ [8], where they are likely to originate from jet-like (hard) correlations; they are expected to be even larger at the LHC. The purpose of this paper is to show that non-flow effects can be suppressed at the expense of a slight modification of the event-plane method.
Anisotropic flow of selected produced particles, in a given part of phase space, is defined as their azimuthal correlation with the reaction plane [9]
$$v_n = \left\langle \cos n(\phi - \Phi_R)\right\rangle , \qquad (1)$$
where $n$ is an integer ($n=1$ is directed flow, $n=2$ is elliptic flow), $\phi$ and $\Phi_R$ denote respectively the azimuth of the particle under study and the azimuth of the reaction plane, and angular brackets denote an average over particles and events. Since $\Phi_R$ is not known experimentally, $v_n$ cannot be measured directly.
The most commonly used method to estimate $v_n$ is the event-plane method [4]. In each event, one constructs an estimate of the reaction plane $\Phi_R$, the “event plane” $\Psi_R$ [10]. The anisotropic flow coefficients are then estimated as
$$v_n\{\mathrm{EP}\} = \frac{\left\langle \cos n(\phi - \Psi_R)\right\rangle}{R} , \qquad (2)$$
where $R$ is the event-plane resolution, which corrects for the difference between $\Psi_R$ and $\Phi_R$. This resolution is determined for each class of events through a standard procedure [11].
The analogy between Eq. (2) and Eq. (1) makes the method rather intuitive, but its practical implementation has a few subtleties:

One must remove autocorrelations: the particle under study should not be used in defining the event plane, otherwise there is a trivial correlation between $\phi$ and $\Psi_R$ [10]. This means in practice that one must keep track of which particles have been used in defining the event plane, so as to remove their contribution if necessary.

More generally, there are sources of correlation other than flow through which the particle under study can be correlated with a particle used in defining the event plane. Such correlations, called “non-flow effects,” result in a biased estimate and must be suppressed. This cannot be done in a fully systematic way, but rapidity gaps are believed to reduce non-flow effects [3, 12].

Event-plane flattening procedures must be implemented to correct for azimuthal asymmetries of the detector acceptance [4].
A systematic way of suppressing non-flow effects is to use improved methods such as cumulants [13] or Lee-Yang zeroes [14]. Cumulants have been used at SPS [15] and RHIC [16, 8]. Lee-Yang zeroes have been implemented at SIS [17] and at RHIC [18]. They are used comparatively much less than the event-plane method, one reason being that the event-plane method is deemed more intuitive and handy.
In this paper, we show that the method of flow analysis based on Lee-Yang zeroes can be rewritten in a way which is mathematically equivalent to the original formulation [14], but formally analogous to the event-plane method, which makes it more intuitive. The corresponding estimate of $v_n$ is defined as
$$v_n\{\mathrm{LYZ}\} = \frac{\left\langle W \cos n(\phi - \Psi_R)\right\rangle}{R} , \qquad (3)$$
where $R$ is the same resolution as in Eq. (2), and $W$ is an event weight defined in this paper. The formal analogy with the event-plane method, Eq. (2), is obvious. The advantage of the improved event-plane method defined by Eq. (3) over the standard event-plane method is that both autocorrelations and non-flow effects are automatically suppressed.
The paper is organized as follows. In Sec. II, we describe the method for a detector with perfect azimuthal symmetry, and we explain why it automatically removes autocorrelations and non-flow correlations, in contrast to the standard event-plane method. Readers interested in applying the method should read Appendix A, which describes the recommended practical implementation, taking into account anisotropies in the detector acceptance. In Sec. III, we present results of Monte-Carlo simulations, in which results obtained with the Lee-Yang-zeroes method are compared to those obtained with 2- and 4-particle cumulants. Sec. IV concludes with a discussion of where the method should be applicable, and of its limitations.
II Description of the method
II.1 The flow vector
The first step of the flow analysis is to evaluate, for each event, the flow vector $\vec Q = (Q_x, Q_y)$ of the event. It is a two-dimensional vector defined as
$$Q_x = \sum_{j=1}^{M} w_j \cos (n\phi_j), \qquad (4)$$
$$Q_y = \sum_{j=1}^{M} w_j \sin (n\phi_j), \qquad (5)$$
where the sum runs over all detected particles [19]. $M$ is the observed multiplicity of the event, and $\phi_j$ are the azimuthal angles of the particles measured with respect to a fixed direction in the laboratory. The coefficients $w_j$ in Eq. (4) are weights depending on transverse momentum, particle mass and rapidity. The best weight, which minimizes the statistical error (or, equivalently, maximizes the resolution), is $v_n$ itself [20]. A reasonable choice for elliptic flow measurements at RHIC (and probably LHC) is a weight proportional to $p_T$.
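In code, Eqs. (4)–(5) amount to a few lines. The sketch below is our illustration (names and defaults are not from the paper); it uses unit weights unless $p_T$-dependent weights are supplied.

```python
import math

def flow_vector(phis, n=2, weights=None):
    """Flow vector (Q_x, Q_y) of Eqs. (4)-(5):
    Q_x = sum_j w_j cos(n phi_j),  Q_y = sum_j w_j sin(n phi_j)."""
    if weights is None:
        weights = [1.0] * len(phis)  # unit weights; p_T is a common choice
    qx = sum(w * math.cos(n * phi) for w, phi in zip(weights, phis))
    qy = sum(w * math.sin(n * phi) for w, phi in zip(weights, phis))
    return qx, qy
```

The standard analysis would take $\Psi_R = \mathrm{atan2}(Q_y, Q_x)/n$ as the event plane; the method of this paper instead works with fixed-direction projections of $(Q_x, Q_y)$, Eq. (8).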
If collective flow is present, the azimuthal angles $\phi_j$ and the event plane are correlated with the true reaction plane $\Phi_R$, and the goal of the flow analysis is to measure this correlation. This is usually done within a set of events belonging to the same centrality class. Integrated flow is defined as the average value of the projection of $\vec Q$ onto the true reaction plane:
$$V_n \equiv \left\langle Q_x \cos (n\Phi_R) + Q_y \sin (n\Phi_R) \right\rangle , \qquad (6)$$
where angular brackets denote an average over events in the same centrality class. We use a capital letter for $V_n$ because it is in general a dimensionful quantity: it is the weighted sum of the $v_n$'s of individual particles, according to Eqs. (1) and (4). The flow vector fluctuates around this average value because the multiplicity is finite. These fluctuations can be modeled using the central limit theorem. The resulting distribution of $\vec Q$ is [5]:
$$\frac{dN}{dQ_x\,dQ_y} = \frac{1}{\pi\sigma^2}\,\exp\!\left(-\frac{(\vec Q - \vec V_n)^2}{\sigma^2}\right), \qquad (7)$$
where $\vec V_n \equiv V_n(\cos n\Phi_R, \sin n\Phi_R)$, and $\chi \equiv V_n/\sigma$ is a dimensionless quantity called the resolution parameter, which characterizes the relative magnitude of collective flow and statistical fluctuations. The resolution $R$ in Eq. (2) increases from 0 to 1 as $\chi$ goes from 0 to $\infty$. Fig. 1 illustrates the distribution of the length $Q = |\vec Q|$ for two values of $\chi$. For $\chi \gg 1$, this distribution is a narrow peak centered at $Q = V_n$.
Lee-Yang zeroes use the projection of the flow vector onto a fixed, arbitrary direction making an angle $n\theta$ with respect to the $x$ axis. We denote this projection by $Q_\theta$:
$$Q_\theta \equiv Q_x \cos (n\theta) + Q_y \sin (n\theta) . \qquad (8)$$
II.2 Integrated flow
We now explain how the integrated flow $V_n$, defined by Eq. (6), is obtained. We define the complex-valued function
$$g^\theta(ir) \equiv \left\langle e^{i r Q_\theta} \right\rangle . \qquad (9)$$
If there is no collective flow, the probability distribution of $Q_\theta$ is a Gaussian due to the central limit theorem (if the multiplicity $M$ is large). Its Fourier transform $g^\theta(ir)$ is then also a Gaussian, with no zeroes at finite $r$. Collective flow results in oscillations of $g^\theta(ir)$ around zero: in the ideal case where the multiplicity is so large that fluctuations can be neglected, $\vec Q = \vec V_n$ and $Q_\theta = V_n \cos n(\Phi_R - \theta)$. Inserting Eq. (8) into (9) and averaging over $\Phi_R$, one obtains
$$g^\theta(ir) = J_0(r V_n), \qquad (10)$$
where $J_0$ denotes the Bessel function of the first kind of order 0, which oscillates around 0. Finite-multiplicity fluctuations result in a Gaussian smearing of $g^\theta(ir)$, but quite remarkably, the location of the zeroes is unchanged, up to statistical fluctuations due to the finite number of events [21].
As a consequence, the modulus $|g^\theta(ir)|$ has sharp minima for positive $r$, which can be located numerically. The position of the first minimum, $r_0^\theta$, is used to estimate $V_n$, using Eq. (10):
$$V_n\{\infty\} = \frac{j_{01}}{r_0^\theta} , \qquad (11)$$
where $j_{01} \simeq 2.405$ is the first zero of $J_0$. One may also check, as a consistency test, that the minimum of $|g^\theta(ir)|$ is compatible with zero within statistical errors [21].
The above procedure only makes use of the projection of the flow vector onto an arbitrary direction $\theta$. For a perfect detector, azimuthal symmetry ensures that $r_0^\theta$ is independent of $\theta$, up to statistical errors. In practice, however, it is recommended to repeat the analysis for several values of $\theta$ (see Appendix A).
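The zero can be located by a straightforward grid scan of $|g^\theta(ir)|$. The sketch below (ours; the grid parameters are arbitrary) checks Eqs. (10)–(11) in the fluctuation-free limit, where $Q_\theta = V_n \cos n(\Phi_R - \theta)$ with $\Phi_R$ uniform, so the first minimum should sit at $r_0^\theta = j_{01}/V_n$.

```python
import cmath, math

J01 = 2.404826  # first zero of the Bessel function J0

def first_zero(q_thetas, r_max, steps=400):
    """First local minimum of |g(ir)| = |<exp(i r Q_theta)>|, Eq. (9), on an r grid."""
    grid = [r_max * k / steps for k in range(1, steps + 1)]
    mods = [abs(sum(cmath.exp(1j * r * q) for q in q_thetas)) / len(q_thetas)
            for r in grid]
    for k in range(1, steps - 1):
        if mods[k] < mods[k - 1] and mods[k] < mods[k + 1]:
            return grid[k]  # this is r_0^theta
    raise ValueError("no minimum found; increase r_max")

# Fluctuation-free limit: Q_theta = V cos(alpha), alpha uniform, so |g| = |J0(r V)|
V = 1.5
qs = [V * math.cos(2 * math.pi * k / 1000) for k in range(1000)]
r0 = first_zero(qs, r_max=4.0)
V_estimate = J01 / r0  # Eq. (11)
```

On this ideal sample the estimate reproduces the input $V_n = 1.5$ to the accuracy of the grid spacing.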
II.3 Differential flow and event weight
We now derive the expression of the event weight $W$ in Eq. (3), which is the crucial improvement of our paper over the standard event-plane method. The goal is to measure the differential flow $v_n$ of selected produced particles. It can be obtained by shifting the weights of the selected particles in Eq. (4) by an infinitesimal quantity $\epsilon$, $w_j \to w_j + \epsilon$, and computing the integrated flow with the new weights. The differential flow is then simply given by $v_n = dV_n/d\epsilon$, evaluated at $\epsilon = 0$. Differentiating Eq. (11),
$$\delta V_n = -\frac{j_{01}}{(r_0^\theta)^2}\,\delta r_0^\theta , \qquad (12)$$
where $\delta r_0^\theta$ denotes the shift of the zero. Differentiating the condition $g^\theta(i r_0^\theta) = 0$, one obtains
$$\delta r_0^\theta = - r_0^\theta\, \frac{\left\langle \delta Q_\theta\, e^{i r_0^\theta Q_\theta}\right\rangle}{\left\langle Q_\theta\, e^{i r_0^\theta Q_\theta}\right\rangle} . \qquad (13)$$
For an event containing one selected particle, Eqs. (4) and (8) give $\delta Q_\theta = \epsilon \cos n(\psi - \theta)$, where $\psi$ is the azimuth of the selected particle. Eq. (12) then gives
$$v_n = \frac{j_{01}}{r_0^\theta}\, \frac{\left\langle \cos n(\psi - \theta)\, e^{i r_0^\theta Q_\theta}\right\rangle}{\left\langle Q_\theta\, e^{i r_0^\theta Q_\theta}\right\rangle} , \qquad (14)$$
where the average in the numerator is over selected particles, and the average in the denominator is over events. In this expression, $\theta$ is an arbitrary reference angle. Both the numerator and the denominator are expected to be independent of $\theta$, up to asymmetries in the detector acceptance and statistical fluctuations. In practice, we recommend first taking the ratio and then averaging over $\theta$, as explained in Sec. A.2. Here, we derive simple approximate expressions by assuming that $r_0^\theta$ is independent of $\theta$, and by averaging the numerator and the denominator over $\theta$ before taking the ratio. We thus obtain:
$$v_n = \frac{j_{01}}{r_0}\, \frac{\left\langle J_1(r_0 Q)\, \cos n(\psi - \Psi_R)\right\rangle}{\left\langle Q\, J_1(r_0 Q)\right\rangle} , \qquad (15)$$
where $J_1 = -J_0'$ is the derivative of $-J_0$, and $Q$ and $\Psi_R$ denote the modulus and azimuth of the flow vector, $\vec Q = Q(\cos n\Psi_R, \sin n\Psi_R)$. Identifying Eq. (15) with Eq. (3), we obtain the event weight
$$W = \frac{J_1(r_0 Q)}{N} , \qquad (16)$$
where $N$ is a normalization constant which can be computed using the distribution (7):
$$N = \frac{J_1(j_{01})}{R}\, e^{-j_{01}^2/(4\chi^2)} . \qquad (17)$$
The difference with the standard event-plane analysis is that each event is given a weight (16) which depends on the length $Q$ of the flow vector, a quantity which is not used in the standard analysis. Eq. (16) involves the integrated flow through $r_0$, which must be determined in a first pass through the data.
Fig. 1 displays the variation of $W$ with $Q$, for two values of the resolution parameter $\chi$. For $\chi \gg 1$, the distribution of $Q$ is a narrow peak centered at $Q = V_n$. Therefore, the weight defined by Eqs. (16) and (17) is close to 1 for all events. If $\chi$ is smaller, the distribution of $Q$ is wider, and $W$ is negative for some events. These negative weights are required in order to subtract non-flow effects. On the other hand, they also subtract part of the flow. In order to compensate for this effect, the global normalization of the weight increases when $\chi$ decreases (as illustrated in Fig. 1 by the fact that the amplitude of the curve showing the weight changes for different values of $\chi$). This qualitatively explains the $\chi$ dependence in Eq. (17).
The weight (16) vanishes linearly at $Q = 0$. This is physically intuitive. Given that the flow vector is obtained by summing over all particles, one increases the relative weight of collective flow over individual, random motion of the particles. If the flow vector is small in an event, it means that the random motion hides the collective motion in this particular event, which is therefore of little use for the flow analysis.
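The $Q$ dependence of the weight can be evaluated without a special-function library, using the integral representation $J_1(z) = \frac{1}{\pi}\int_0^\pi \cos(\tau - z\sin\tau)\,d\tau$. The sketch below is our illustration (the normalization $N$ of Eq. (17) is left out, since it rescales all events equally); it reproduces the features just discussed: $J_1(r_0 Q)$ vanishes linearly at $Q = 0$ and turns negative for large $Q$.

```python
import math

def bessel_j1(z, steps=2000):
    """J_1(z) from the integral representation, midpoint rule:
    J_1(z) = (1/pi) * integral_0^pi cos(tau - z sin tau) dtau."""
    h = math.pi / steps
    return sum(math.cos((k + 0.5) * h - z * math.sin((k + 0.5) * h))
               for k in range(steps)) * h / math.pi

J01 = 2.404826       # first zero of J0
V = 1.0              # integrated flow, arbitrary units
r0 = J01 / V         # Eq. (11)

# Unnormalized weight J_1(r0 Q), Eq. (16), for a few flow-vector lengths Q:
weight_shape = {q: bessel_j1(r0 * q) for q in (0.0, 0.1, 1.0, 2.0)}
```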
II.4 Non-flow effects and autocorrelations
We now explain why the method suppresses non-flow effects and autocorrelations on the basis of two simple examples.
As a first example, we assume that each particle splits into two particles with identical momenta, roughly imitating the effect of resonance decays or track splitting in a detector. This splitting does not change the anisotropic flow $v_n$, defined by Eq. (1), but it introduces non-flow correlations, which bias standard analyses, as will be shown in Sec. III. The splitting leaves our estimate unchanged: it multiplies both the flow vector, Eq. (4), and the integrated flow $V_n$, Eq. (6), by 2. Therefore $r_0^\theta$ in Eq. (11) is divided by 2, and $v_n$ defined by Eq. (14) is unchanged.
As a second example, we consider the situation where there is collective flow in the system, but the selected particles have $v_n = 0$. We further assume that the selected particles are uncorrelated with the other particles. In the standard event-plane method, one needs to subtract the selected particles from the flow vector (4), otherwise autocorrelations yield a spurious nonzero result. We now show that Eq. (14) gives $v_n = 0$, even if selected particles are included in the flow vector.
We separate the flow vector, Eq. (4), into the contribution of selected particles, $Q'_\theta$, and that of the other particles, $Q''_\theta$:
$$Q_\theta = Q'_\theta + Q''_\theta . \qquad (18)$$
Our estimate of $v_n$ is defined by Eq. (14). Since the flow vector appears in an exponential, the contributions of selected particles and other particles can be written as a product of two independent factors:
$$\left\langle \cos n(\psi-\theta)\, e^{i r Q_\theta}\right\rangle = \left\langle \cos n(\psi-\theta)\, e^{i r Q'_\theta}\right\rangle \left\langle e^{i r Q''_\theta}\right\rangle . \qquad (19)$$
Let us define $g''^\theta$ by replacing $Q_\theta$ with $Q''_\theta$ in Eq. (9). Following the same reasoning as in Sec. II.2, the first zero of $g''^\theta$ depends on the integrated flow of the other particles. We have assumed that $v_n = 0$ for the selected particles, therefore the factor $\langle e^{irQ'_\theta}\rangle$ is a Gaussian with no zeroes, the zeroes of $g^\theta$ coincide with those of $g''^\theta$, and
$$g''^\theta(i r_0^\theta) = 0 . \qquad (20)$$
Inserting Eq. (20) into Eq. (19), we find
$$v_n\{\infty\} = 0 , \qquad (21)$$
up to statistical fluctuations. This proof can easily be generalized to the situation where each selected particle is correlated with a few additional particles (e.g. within a jet) which are not correlated with the bulk of particles producing collective flow.
We have constructed two simple examples where Lee-Yang zeroes are able to eliminate non-flow effects and autocorrelations. In actual experiments, however, flow and non-flow effects are likely to be mingled, and detailed simulations must be carried out to determine to what extent the suppression is effective.
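The first example can be checked end to end with a toy Monte Carlo (entirely ours, much cruder than the GeVSim study of Sec. III): events with input $v_2 = 0.1$ and multiplicity 100 (so $\chi \approx 1$) are generated, $v_2$ is estimated from Eqs. (9), (11) and (14) with a single projection angle $\theta = 0$ and unit weights, every track is then duplicated, and the estimate is unchanged.

```python
import cmath, math, random

J01 = 2.404826  # first zero of the Bessel function J0

def sample_event(m, v2, rng):
    """m azimuths with dN/dphi proportional to 1 + 2 v2 cos 2(phi - Phi_R)."""
    phi_r = rng.uniform(0.0, 2.0 * math.pi)
    phis = []
    while len(phis) < m:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.random() * (1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - phi_r)):
            phis.append(phi)
    return phis

def v2_lyz(events, theta=0.0, r_max=0.4, steps=400):
    """v2 from Lee-Yang zeroes: Eqs. (9), (11), (14); single theta, unit weights."""
    n = 2
    qs = [sum(math.cos(n * (phi - theta)) for phi in ev) for ev in events]  # Eq. (8)
    grid = [r_max * k / steps for k in range(1, steps + 1)]
    mods = [abs(sum(cmath.exp(1j * r * q) for q in qs)) for r in grid]
    k0 = next(k for k in range(1, steps - 1)
              if mods[k] < mods[k - 1] and mods[k] < mods[k + 1])
    r0 = grid[k0]                          # first zero of g, Eq. (9)
    num = sum(cmath.exp(1j * r0 * q) * sum(math.cos(n * (phi - theta)) for phi in ev)
              for ev, q in zip(events, qs)) / sum(len(ev) for ev in events)
    den = sum(q * cmath.exp(1j * r0 * q) for q in qs) / len(qs)
    return (J01 / r0) * (num / den).real   # Eq. (14)
```

Duplicating every track doubles each $Q_\theta$ and halves $r_0$, so the estimate is invariant up to the discretization of the $r$ grid, while a plain two-particle correlation would be inflated by the splitting.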
III Simulations
To check the validity of the procedure described in this paper and to compare it with other analysis methods, 28000 events were simulated with a Monte-Carlo program dubbed GeVSim [22]. In GeVSim, $v_2$ and the particle yield as a function of transverse momentum and pseudorapidity are generated according to a user-defined parameterization. For these simulations, events were generated using a linear $p_T$ dependence of $v_2$ in the range 0–2 GeV/$c$; above 2 GeV/$c$, $v_2$ was set constant. This parameterization fixes the average elliptic flow $\bar v_2$. We then reconstructed $v_2$ from the simulated events using several methods: the Lee-Yang-zeroes method described in Appendix A, as well as 2- and 4-particle cumulants [13]. The corresponding estimates of $v_2$ are denoted by $v_2\{\mathrm{LYZ}\}$, $v_2\{2\}$ and $v_2\{4\}$, respectively. $v_2\{2\}$ is generally close to the estimate from the traditional event-plane method; both are biased by non-flow effects. On the other hand, $v_2\{4\}$ is expected to be close to $v_2\{\mathrm{LYZ}\}$, with the bias from non-flow effects suppressed. The weight in Eq. (4) was chosen identically for all particles, $w_j = 1/M$, with $M$ the event multiplicity, so that the integrated flow defined by Eq. (6) coincides with the average elliptic flow, i.e., $V_2 = \bar v_2$. The analysis was repeated twice by varying the multiplicity used in the flow analysis: the values 256 and 576 were used, so as to achieve a resolution parameter $\chi$ of 1 and 1.5, respectively [23].
Fig. 2 shows the generated (input) $v_2(p_T)$ together with the values reconstructed using cumulants and Lee-Yang zeroes. The upper panel shows the results in the case where all correlations are due to flow. In this case, all three methods yield the correct $v_2(p_T)$ and $\bar v_2$ within statistical uncertainties (see Table 1), which are twice larger for $v_2\{4\}$ and $v_2\{\mathrm{LYZ}\}$ than for $v_2\{2\}$ (see Sec. A.3).
In the lower panel, simulations are shown which include non-flow effects. Because GeVSim generates no non-flow correlations, they are introduced by using each input track twice, as in Sec. II.4. Experiments at RHIC have shown [8] that non-flow effects are larger at high $p_T$ (probably due to jet-like correlations), and a realistic simulation of non-flow effects should take this $p_T$ dependence into account. Our simplified implementation, which does not, is not realistic; it is merely an illustration of the impact of non-flow effects on the flow analysis. Fig. 2 shows that due to non-flow effects, the method based on two-particle cumulants ($v_2\{2\}$) overestimates the average elliptic flow $\bar v_2$. The error on the average elliptic flow is larger than 20% (see Table 1, right column). The transverse-momentum dependence of $v_2\{2\}$ is also incorrect, with an excess at low $p_T$. By contrast, the results from 4-particle cumulants ($v_2\{4\}$) and Lee-Yang zeroes ($v_2\{\mathrm{LYZ}\}$) are, within statistical uncertainties, in agreement with the true generated flow. This shows that the method presented in this paper is able to remove non-flow effects.
Table 1: estimates of the average elliptic flow from each method (columns: Method, Flow only, Flow + non-flow).

IV Discussion
Two effects limit the accuracy of flow analyses at high energy: non-flow effects and eccentricity fluctuations [6, 7]. The method presented in this paper is an improved event-plane method, which strongly suppresses the first source of uncertainty, non-flow effects. It has been argued [24] that cumulants (and therefore Lee-Yang zeroes, which correspond to the limit of large-order cumulants) also eliminate eccentricity fluctuations [6, 7]. However, a detailed study [25] shows that even with cumulants, there may remain large effects of fluctuations in central collisions and/or small systems. This issue deserves more detailed investigation.
Leaving aside the question of fluctuations, we now discuss which method of flow analysis should be used, depending on the situation. There are three main classes of methods: the standard event-plane method [4], four-particle cumulants [13], and the Lee-Yang-zeroes method presented in this paper. When the standard event-plane method is used, non-flow effects and eccentricity fluctuations are generally the main sources of uncertainty on $v_2$, and they dominate over statistical errors. The magnitude of this uncertainty is at least 10% at RHIC in semi-central collisions; it is larger for more central or more peripheral collisions, and also larger at high $p_T$. Unless statistical errors are of comparable magnitude to errors from non-flow effects, cumulants or Lee-Yang zeroes should be preferred over the standard method.
The main advantage of Lee-Yang zeroes, compared to cumulants, is that the method involves an event-plane angle. This is useful in particular for studying azimuthally dependent correlations [26, 27]. Such studies cannot be done with cumulants, but they are straightforward with Lee-Yang zeroes. The only complication is that the azimuthal distribution of particle pairs generally involves sine terms [28], in addition to the cosine terms of Eq. (1). These terms are simply obtained by replacing cos with sin in Eq. (3).
When studying anisotropic flow of individual particles, both cumulants and Lee-Yang zeroes can be applied. The cumulant method has recently been improved by calculating the cumulants directly [29]. With these improvements, both methods are expected to be essentially equivalent. The slight advantages of Lee-Yang zeroes are: 1) they are easier to implement; 2) they further reduce the error from non-flow effects; 3) the statistical error is slightly smaller for large values of the resolution parameter $\chi$. For smaller $\chi$, the error is only 35% larger with Lee-Yang zeroes than with 4-particle cumulants (and 4 times larger than with the event-plane method).
Our recommendation is that Lee-Yang zeroes should be used as soon as the resolution parameter $\chi$ is large enough. For small values of $\chi$, statistical errors on Lee-Yang zeroes blow up exponentially, which rules out the method; the statistical error on 4-particle cumulants also increases, but more mildly, and their validity extends down to lower values of the resolution if very large event statistics are available.
A limitation of the present method is that it does not apply to mixed harmonics: this means that it cannot be used to measure $v_1$ and $v_4$ at RHIC and LHC using the event plane from elliptic flow [30]. Note that $v_1$ can in principle be measured using Lee-Yang zeroes [31] with the “product” generating function, but this method cannot be recast in the form of an improved event-plane method. Higher harmonics such as $v_4$ also have a sensitivity to autocorrelations and non-flow effects, which is significantly reduced by using the product generating function [13].
In conclusion, we have presented an improved event-plane method for the flow analysis, which automatically corrects for autocorrelations and non-flow effects. As in the standard method, each event has an event plane $\Psi_R$, an estimate of the reaction plane, which is the same as for the standard method, except for technical details in the practical implementation. The trick which removes autocorrelations and non-flow effects is that there is, in addition, an event weight $W$. Anisotropic flow is then estimated as a weighted average of $\cos n(\phi - \Psi_R)$. A straightforward application of this method would be to measure jet production with respect to the reaction plane at the LHC. With the traditional event-plane method, such a measurement would require subtracting particles belonging to the jet from the event plane; in addition, strong non-flow correlations are expected within a jet, which would bias the analysis.
Acknowledgments
JYO thanks Yiota Foka for a discussion which motivated this work. We thank S. Voloshin for comments on the manuscript. The work of AB, NvdK and RS was supported in part by the Dutch funding agencies FOM and NWO.
Appendix A Practical implementation
Before we describe the implementation of the method, let us mention that there are in fact two Lee-Yang-zeroes methods, depending on how the generating function is defined: the “sum generating function” makes explicit use of the flow vector [21], while the “product generating function” [32] is constructed using the azimuthal angles of individual particles, and cannot be expressed simply in terms of the flow vector. Cumulants also exist in both versions, the “sum” [20] and the “product” [13]. For Lee-Yang zeroes, both the sum and the product give essentially the same result for the lowest harmonic [17]: the difference between results from the two methods is significantly smaller than the statistical error. On the other hand, the product generating function is significantly better than the sum generating function if one analyzes $v_1$ or $v_4$ [31] using mixed harmonics. The method described below is strictly equivalent to the sum generating function, although expressed in different terms. The product generating function, on the other hand, cannot be recast in a form similar to the event-plane method, and will not be used here.
The method a priori requires two passes through the data, which are described in Secs. A.1 and A.2.
A.1 First pass: locating the zeroes
As with other flow analyses, one must first select events in some centrality class. The whole procedure described below must be carried out independently for each centrality class.
The flow vector is defined by Eq. (4). In contrast to the standard event-plane method, no flattening procedure is required to make the distribution of $\vec Q$ isotropic. Corrections for azimuthal anisotropies in the acceptance, provided they do not vary significantly in the event sample used, are handled by the procedure described below. We do not define the event plane as the azimuthal angle of the flow vector, Eq. (4). The procedure below defines both the event weight and the event plane.
The analysis uses the projection of the flow vector onto an arbitrary direction, see Eq. (8). In practice, the first pass should be repeated for several equally-spaced values of $\theta$ between 0 and $\pi/n$. This reduces the statistical error, as shown by Eq. (29). For more than 5 values of $\theta$, the reduction is no longer significant, so this number is recommended. For elliptic flow, for instance, $\theta$ takes the values $\theta_k = k\pi/10$, with $k = 0, \ldots, 4$.
One first computes the modulus $|g^\theta(ir)|$, with $g^\theta$ defined by Eq. (9), as a function of $r$ for positive $r$. One then determines numerically the first minimum of this function. This is the Lee-Yang zero. We denote its value by $r_0^\theta$. It must be stored for each $\theta$.
A.2 Second pass: determining the event weight $W$ and the event plane $\Psi_R$
In the second pass, one computes and stores, for each $\theta$, the following complex number:
$$D^\theta \equiv \left\langle Q_\theta\, e^{i r_0^\theta Q_\theta} \right\rangle , \qquad (22)$$
where the average runs over events, and $r_0^\theta$ is the zero located in the first pass. Except for statistical fluctuations and asymmetries in the detector acceptance, $D^\theta$ should be purely imaginary.
For each event, the event weight $W$ and the event plane $\Psi_R$ are defined by
$$W \cos (n\Psi_R) = R \left\langle \frac{j_{01}}{r_0^\theta}\, \mathrm{Re}\!\left[\frac{e^{i r_0^\theta Q_\theta}}{D^\theta}\right] \cos (n\theta) \right\rangle_\theta , \qquad (23)$$
$$W \sin (n\Psi_R) = R \left\langle \frac{j_{01}}{r_0^\theta}\, \mathrm{Re}\!\left[\frac{e^{i r_0^\theta Q_\theta}}{D^\theta}\right] \sin (n\theta) \right\rangle_\theta , \qquad (24)$$
where Re denotes the real part, and angular brackets denote averages over the values of $\theta$ defined in Sec. A.1. Our estimate of $v_n$, denoted by $v_n\{\mathrm{LYZ}\}$, is then defined by Eq. (3).
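The two passes can be sketched compactly at the level of the flow vector itself, sampled from the Gaussian model of Eq. (7). The sketch below is our illustration ($V_n = 1$, $\chi = 2$, $n = 2$, $p = 5$ angles; all names, and the overall normalization of the weight, follow our reading of Eqs. (22)–(24), with the factor $R$ omitted since it rescales all events equally).

```python
import cmath, math, random

J01 = 2.404826  # first zero of J0
N_HARM, P, V, CHI = 2, 5, 1.0, 2.0
SIGMA = V / CHI
THETAS = [k * math.pi / (N_HARM * P) for k in range(P)]

# Events: flow vector smeared around V (cos n Phi_R, sin n Phi_R), Eq. (7)
rng = random.Random(7)
events = []
for _ in range(2000):
    a = rng.uniform(0.0, 2.0 * math.pi)  # n Phi_R
    events.append((V * math.cos(a) + rng.gauss(0.0, SIGMA / math.sqrt(2.0)),
                   V * math.sin(a) + rng.gauss(0.0, SIGMA / math.sqrt(2.0))))

def q_theta(ev, theta):
    """Projection of the flow vector, Eq. (8)."""
    return ev[0] * math.cos(N_HARM * theta) + ev[1] * math.sin(N_HARM * theta)

# First pass (Sec. A.1): locate r_0^theta by a grid scan of |g^theta(ir)|
r0 = {}
for th in THETAS:
    qs = [q_theta(ev, th) for ev in events]
    grid = [4.0 * k / 200 for k in range(1, 201)]
    mods = [abs(sum(cmath.exp(1j * r * q) for q in qs)) for r in grid]
    k0 = next(k for k in range(1, 199)
              if mods[k] < mods[k - 1] and mods[k] < mods[k + 1])
    r0[th] = grid[k0]

# Second pass (Sec. A.2): D^theta of Eq. (22), then per-event weight and plane
D = {th: sum(q_theta(ev, th) * cmath.exp(1j * r0[th] * q_theta(ev, th))
             for ev in events) / len(events) for th in THETAS}

def weight_and_plane(ev):
    """Event weight W and event plane Psi_R, Eqs. (23)-(24), convention W > 0."""
    f = [(J01 / r0[th]) * (cmath.exp(1j * r0[th] * q_theta(ev, th)) / D[th]).real
         for th in THETAS]
    sc = sum(fi * math.cos(N_HARM * th) for fi, th in zip(f, THETAS)) / P
    ss = sum(fi * math.sin(N_HARM * th) for fi, th in zip(f, THETAS)) / P
    return math.hypot(sc, ss), math.atan2(ss, sc) / N_HARM
```

With these events, $D^\theta$ comes out almost purely imaginary, $r_0^\theta \approx j_{01}/V_n$ for every $\theta$, and the angle $\Psi_R$ tracks the standard event plane $\mathrm{atan2}(Q_y, Q_x)/n$ up to a possible shift of $\pi/n$ from the sign convention.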
We now discuss how the angle $\Psi_R$ defined by Eqs. (23) and (24) compares with the event plane from the standard analysis. First, we note that Eqs. (23) and (24) uniquely determine the angle $\Psi_R$ (modulo $2\pi/n$) only if the sign of $W$ is known. The simplest convention is $W > 0$. In the simplified implementation described in Sec. II, however, where $\Psi_R$ coincides with the standard event plane, the weight $W$ defined by Eq. (16) can be negative, because the Bessel function changes sign (see Fig. 1). The convention $W > 0$ then leads to a value of $\Psi_R$ which differs from the standard event plane by $\pi/n$, since changing the sign of $W$ amounts to shifting $n\Psi_R$ by $\pi$ in Eqs. (23) and (24). This is illustrated in Fig. 3, which shows the distribution of the relative angle between $\Psi_R$ and the standard event plane in the simulation described in Sec. III. The distribution has two sharp peaks, at 0 and $\pi/n$. The sign ambiguity produces the peak at $\pi/n$. The width of the peaks results from statistical fluctuations. The final result $v_n\{\mathrm{LYZ}\}$, given by Eq. (3), does not depend on the sign chosen for $W$.
If one wishes to have an event plane as close as possible to the standard event plane, one may choose the following convention. Denoting by $\Psi_{\mathrm{EP}}$ the standard event plane, one computes the following quantity:
$$X \equiv W\cos (n\Psi_R)\cos (n\Psi_{\mathrm{EP}}) + W\sin (n\Psi_R)\sin (n\Psi_{\mathrm{EP}}) = W\cos n(\Psi_R - \Psi_{\mathrm{EP}}) , \qquad (25)$$
where $W\cos (n\Psi_R)$ and $W\sin (n\Psi_R)$ are defined by Eqs. (23) and (24). The sign of $W$ is then chosen as the sign of $X$, which ensures that $\Psi_R$ lies between $\Psi_{\mathrm{EP}} - \pi/2n$ and $\Psi_{\mathrm{EP}} + \pi/2n$.
The procedure described in this Appendix differs from the procedure described in Sec. II only in the case of nonuniform acceptance. This agreement can be seen in Fig. 1, which displays a comparison between the two. The solid line corresponds to the weight defined in Sec. II (Eqs. (16) and (17)), while the stars correspond to the weight defined by Eqs. (23) and (24), as implemented in the Monte-Carlo simulation presented in Sec. III. The agreement is very good. It can also be seen directly from the equations: if the detector has perfect azimuthal symmetry, $r_0^\theta$ and $D^\theta$ in Eqs. (23) and (24) are independent of $\theta$, up to statistical fluctuations. Neglecting these fluctuations, replacing $Q_\theta$ with Eq. (8) and integrating over $\theta$, one easily recovers Eq. (16). If there are azimuthal asymmetries in the detector acceptance, on the other hand, they are automatically taken care of by Eqs. (23) and (24). The fact that one first projects the flow vector onto a fixed direction is essential (for a related discussion, see [33]).
A.3 Statistical errors
The statistical error strongly depends on the resolution parameter $\chi$ [11], which is closely related to the reaction-plane resolution in the event-plane analysis. It is given by
$$\chi^2 = \frac{V_n^2}{\langle Q_x^2 + Q_y^2\rangle - \langle Q_x\rangle^2 - \langle Q_y\rangle^2 - V_n^2} . \qquad (26)$$
In this equation, $V_n$ is given by Eq. (11), averaged over $\theta$ to minimize the statistical dispersion. The average values $\langle Q_x\rangle$, $\langle Q_y\rangle$, and $\langle Q_x^2 + Q_y^2\rangle$ must be computed in the first pass through the data. Note that $\langle Q_x\rangle$ and $\langle Q_y\rangle$ vanish for a symmetric detector: they are acceptance corrections.
The price to pay for the elimination of non-flow effects is an increased statistical error. This increase is very modest if $\chi$ is larger than 1: there, the error is only slightly larger than with the standard event-plane method. For intermediate values of $\chi$, it is larger by a factor 2, and for smaller values it is 20 times larger. This prevents the application of Lee-Yang zeroes in practice for $\chi$ smaller than 0.6.
We now recall the formulas [14] which determine the statistical error on $v_n\{\mathrm{LYZ}\}$:
(29)  
where $N'$ denotes the number of objects one correlates with the event plane, whatever they are (jets, individual particles), and $p$ is the number of equally-spaced values of $\theta$ used in the analysis (see above). The larger $p$, the smaller the error. The recommended value is $p = 5$, because larger values do not significantly reduce the error. This equation shows that the statistical error diverges exponentially when $\chi$ is small.
References
[1] K. H. Ackermann et al. [STAR Collaboration], Phys. Rev. Lett. 86, 402 (2001).
[2] B. B. Back et al. [PHOBOS Collaboration], Phys. Rev. Lett. 89, 222301 (2002).
[3] S. S. Adler et al. [PHENIX Collaboration], Phys. Rev. Lett. 91, 182301 (2003).
[4] A. M. Poskanzer and S. A. Voloshin, Phys. Rev. C 58, 1671 (1998).
[5] J. Y. Ollitrault, Nucl. Phys. A 590, 561C (1995).
[6] M. Miller and R. Snellings, arXiv:nucl-ex/0312008.
[7] B. Alver et al. [PHOBOS Collaboration], Phys. Rev. Lett. 98, 242302 (2007).
[8] J. Adams et al. [STAR Collaboration], Phys. Rev. C 72, 014904 (2005).
[9] S. Voloshin and Y. Zhang, Z. Phys. C 70, 665 (1996).
[10] P. Danielewicz and G. Odyniec, Phys. Lett. B 157, 146 (1985).
[11] J. Y. Ollitrault, arXiv:nucl-ex/9711003.
[12] S. A. Voloshin [STAR Collaboration], AIP Conf. Proc. 870, 691 (2006).
[13] N. Borghini, P. M. Dinh and J. Y. Ollitrault, Phys. Rev. C 64, 054901 (2001); arXiv:nucl-ex/0110016.
[14] R. S. Bhalerao, N. Borghini and J. Y. Ollitrault, Nucl. Phys. A 727, 373 (2003).
[15] C. Alt et al. [NA49 Collaboration], Phys. Rev. C 68, 034903 (2003).
[16] C. Adler et al. [STAR Collaboration], Phys. Rev. C 66, 034904 (2002).
[17] N. Bastid et al. [FOPI Collaboration], Phys. Rev. C 72, 011901 (2005).
[18] B. I. Abelev et al. [STAR Collaboration], Phys. Rev. C 77, 054901 (2008).
[19] In the event-plane method, the sum runs over a selected subset of detected particles, not over all particles, in order to suppress non-flow correlations. With the present method, non-flow correlations are not an issue. On the other hand, the method can only be applied if the resolution is large enough, as will be discussed in Sec. IV. In order to maximize the resolution, one must use all detected particles.
[20] N. Borghini, P. M. Dinh and J. Y. Ollitrault, Phys. Rev. C 63, 054906 (2001).
[21] R. S. Bhalerao, N. Borghini and J. Y. Ollitrault, Phys. Lett. B 580, 157 (2004).
[22] S. Radomski and Y. Foka, ALICE Note 2002-31; http://www.gsi.de/forschung/kp/kp1/gevsim.html
[23] $\chi$ depends on the average elliptic flow $\bar v_2$ and on the event multiplicity $M$. If there are no non-flow effects, and if the weights in Eq. (4) are identical for all particles, $\chi = \bar v_2\sqrt{M}$.
[24] S. A. Voloshin, A. M. Poskanzer, A. Tang and G. Wang, Phys. Lett. B 659, 537 (2008).
[25] B. Alver et al., Phys. Rev. C 77, 014906 (2008).
[26] J. Adams et al. [STAR Collaboration], Phys. Rev. Lett. 93, 252301 (2004).
[27] J. Bielcikova, S. Esumi, K. Filimonov, S. Voloshin and J. P. Wurm, Phys. Rev. C 69, 021901 (2004).
[28] N. Borghini and J. Y. Ollitrault, Phys. Rev. C 70, 064905 (2004).
[29] A. Bilandzic, R. Snellings and S. Voloshin, arXiv:1010.0233 [nucl-ex].
[30] J. Adams et al. [STAR Collaboration], Phys. Rev. Lett. 92, 062301 (2004).
[31] N. Borghini and J. Y. Ollitrault, Nucl. Phys. A 742, 130 (2004).
[32] N. Borghini, R. S. Bhalerao and J. Y. Ollitrault, J. Phys. G 30, S1213 (2004).
[33] I. Selyuzhenkov and S. Voloshin, Phys. Rev. C 77, 034904 (2008).