SoftKiller, a particle-level pileup removal method
Existing widely-used pileup removal approaches correct the momenta of individual jets. In this article we introduce an event-level, particle-based pileup correction procedure, SoftKiller. It removes the softest particles in an event, up to a transverse momentum threshold that is determined dynamically on an event-by-event basis. In simulations, this simple procedure appears to be reasonably robust and brings superior jet resolution performance compared to existing jet-based approaches. It is also nearly two orders of magnitude faster than methods based on jet areas.
At high-luminosity hadron colliders such as CERN's Large Hadron Collider (LHC), an issue that has an impact on many analyses is pileup: the superposition of multiple proton-proton collisions at each bunch crossing. Pileup affects a range of observables, such as jet momenta and shapes, missing transverse energy, and lepton and photon isolation. In the specific case of jets, it can add tens of GeV to a jet's transverse momentum and significantly worsens the resolution for reconstructing the jet momentum. In the coming years the LHC will move towards higher-luminosity running, ultimately increasing pileup by up to a factor of ten for the high-luminosity LHC. The experiments' ability to mitigate pileup's adverse effects will therefore become increasingly crucial to fully exploiting the LHC data, especially at low and moderate momentum scales, for example in studies of the Higgs sector.
Some approaches to reducing the impact of pileup are deeply rooted in experimental reconstruction procedures. For example, charged hadron subtraction (CHS), in the context of particle flow, exploits detectors' ability to identify whether a given charged track is from a pileup vertex or not. Other aspects of pileup mitigation are largely independent of the experimental details: for example, both ATLAS and CMS [3, 4] rely on the area–median approach [5, 6], which makes a global estimate of the transverse momentum-flow density, $\rho$, and then applies a correction to each jet in proportion to its area.
In this article, we introduce and study a new generic pileup-removal method. Instead of correcting individual jets, it corrects for pileup at the level of particles. Such a method should make a guess, for each particle in an event, as to whether it comes from pileup or from the hard collision of interest. Particles deemed to be from pileup are simply discarded, while the much smaller set of residual "hard-collision" particles is passed to the jet clustering. Event-wide particle-level subtraction, if effective, would greatly simplify pileup mitigation in advanced jet studies such as those that rely on jet substructure. Even more importantly, as we shall see, it has the potential to bring significant improvements in jet resolution and computational speed. This latter characteristic makes our approach particularly appealing also for trigger-level applications.
The basis of our pileup suppression method, which we dub "SoftKiller" (SK), is that the simplest characteristic of a particle that affects whether it is likely to be from pileup or not is its transverse momentum. In other words, we will discard particles that fall below a certain transverse momentum threshold. The key feature of the method is its event-by-event determination of that threshold, chosen as the lowest value that causes $\rho$, in the area–median method, to be evaluated as zero. In a sense, this can be seen as the extreme limit of ATLAS's approach of increasing the topoclustering noise threshold as pileup increases.
This approach might at first sight seem excessively naïve in its simplicity. We have also examined a range of other methods. For example, one approach involved an all-orders matrix-element analysis of events, similar in spirit to shower deconstruction; others involved event-wide extensions of a recent intrajet particle-level subtraction method and subjet-level [11, 12] approaches; we have also been inspired by calorimeter- and particle-level methods developed for heavy-ion collisions. Such methods and their extensions have significant potential. However, we repeatedly encountered additional complexity, for example in the form of multiple free parameters that needed fixing, without a corresponding gain in performance. Perhaps with further work those drawbacks can be alleviated, or performance can be improved. For now, we believe that it is useful to document one method that we have found to be both simple and effective.
2 The SoftKiller method
The SoftKiller method involves eliminating particles below some transverse-momentum cutoff, $p_t^{\rm cut}$, chosen to be the minimal value that ensures that $\rho$ is zero. Here, $\rho$ is the event-wide estimate of transverse-momentum flow density in the area–median approach [5, 6]: the event is broken into patches and $\rho$ is taken as the median, across all patches, of the transverse-momentum flow density per unit area in rapidity-azimuth,

$$\rho = \mathop{\rm median}_{i\,\in\,\rm patches} \left\{ \frac{p_{t,i}}{A_i} \right\}, \qquad (1)$$

where $p_{t,i}$ and $A_i$ are respectively the transverse momentum and area of patch $i$. In the original formulation of the area–median method, the patches were those obtained by running inclusive $k_t$ clustering, but subsequently it was realised that it is much faster, and equally effective, to use (almost) square patches of fixed size in the rapidity-azimuth plane. That will be our choice here. The use of the median ensures that hard jets do not overly bias the estimate (as quantified in Ref. ). (One practically important aspect of the area–median method is the significant rapidity dependence of the pileup, most easily accounted for through a manually determined rapidity-dependent rescaling; this is discussed in detail in Appendix B.)
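As a concrete illustration of this patch-based estimate, the following is a minimal Python sketch of the grid-based area–median $\rho$. The function name, the `(pt, y, phi)` tuple convention and the grid defaults are our own illustrative choices, not the FastJet implementation or the paper's tuned values.

```python
import math

def rho_estimate(particles, grid_size=0.4, y_max=5.0):
    """Grid-based area-median estimate of the pileup pt density rho (Eq. (1)).

    `particles` is a list of (pt, y, phi) tuples; grid parameters are
    illustrative defaults, not the paper's tuned values.
    """
    ny = max(1, round(2 * y_max / grid_size))
    nphi = max(1, round(2 * math.pi / grid_size))
    cell_area = (2 * y_max / ny) * (2 * math.pi / nphi)
    patch_pt = [0.0] * (ny * nphi)
    for pt, y, phi in particles:
        if abs(y) >= y_max:
            continue
        iy = int((y + y_max) / (2 * y_max) * ny)
        iphi = int((phi % (2 * math.pi)) / (2 * math.pi) * nphi)
        patch_pt[iy * nphi + iphi] += pt
    # median of pt/area over all patches, empty ones included
    dens = sorted(p / cell_area for p in patch_pt)
    n = len(dens)
    return 0.5 * (dens[(n - 1) // 2] + dens[n // 2])
```

Note that empty patches enter the median as zeros; this is what makes $\rho$ evaluate to zero once at least half of the patches are empty, the property the SoftKiller exploits.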
Choosing the minimal transverse momentum threshold, $p_t^{\rm cut}$, that results in $\rho = 0$ is equivalent to gradually raising the threshold until exactly half of the patches contain no particles, which ensures that the median is zero. This is illustrated in Fig. 1. Computationally, $p_t^{\rm cut}$ is straightforward to evaluate: one determines, for each patch $i$, the transverse momentum $p_{t,i}^{\rm max}$ of the hardest particle in that patch, and then $p_t^{\rm cut}$ is given by the median of the $p_{t,i}^{\rm max}$ values:

$$p_t^{\rm cut} = \mathop{\rm median}_{i\,\in\,\rm patches} \left\{ p_{t,i}^{\rm max} \right\}. \qquad (2)$$
With this choice, half the patches will contain only particles that have $p_t < p_t^{\rm cut}$. These patches will be empty after application of the threshold, leading to a zero result for $\rho$ as defined in Eq. (1). (Applying a $p_t$ threshold to individual particles is not collinear safe; in the specific context of pileup removal, we believe that this is not a significant issue, as we discuss in more detail in Appendix A.) The computational time to evaluate $p_t^{\rm cut}$ as in Eq. (2) scales linearly in the number of particles, and the method should be amenable to parallel implementation.
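The full procedure is short enough to sketch in code. The following minimal Python illustration (names and grid defaults are our own; this is not the FastJet implementation) determines $p_t^{\rm cut}$ as in Eq. (2) and discards the particles below it:

```python
import math

def soft_killer(particles, grid_size=0.4, y_max=5.0):
    """Drop every particle with pt below the SoftKiller threshold of Eq. (2):
    the median, over grid patches, of each patch's hardest-particle pt.

    `particles` is a list of (pt, y, phi) tuples; returns (survivors, pt_cut).
    Grid parameters are illustrative defaults.
    """
    ny = max(1, round(2 * y_max / grid_size))
    nphi = max(1, round(2 * math.pi / grid_size))
    # hardest-particle pt in each patch (0 for empty patches)
    max_pt = [0.0] * (ny * nphi)
    for pt, y, phi in particles:
        if abs(y) >= y_max:
            continue
        iy = int((y + y_max) / (2 * y_max) * ny)
        iphi = int((phi % (2 * math.pi)) / (2 * math.pi) * nphi)
        idx = iy * nphi + iphi
        if pt > max_pt[idx]:
            max_pt[idx] = pt
    # median of the per-patch maxima: half the patches end up empty,
    # so the area-median rho of the thinned event evaluates to zero
    s = sorted(max_pt)
    n = len(s)
    pt_cut = 0.5 * (s[(n - 1) // 2] + s[n // 2])
    return [p for p in particles if p[0] >= pt_cut], pt_cut
```

In a sparse event most patches are empty, so the median of the per-patch maxima, and hence $p_t^{\rm cut}$, is zero and no particle is removed; the cut only turns on once more than half the patches are populated.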
Imposing a cut on particles' transverse momenta eliminates most of the pileup particles, and so might reduce the fluctuations in residual pileup contamination from one point to the next within the event. However, as with other event-wide noise-reducing pileup and underlying-event mitigation approaches, notably the CMS heavy-ion method (cf. the analysis in Appendix A.4 of Ref. ), the price that one pays for noise reduction is the introduction of biases. Specifically, some particles from pileup will be above $p_t^{\rm cut}$ and so remain to contaminate the jets, inducing a net positive bias in the jet momenta. Furthermore, some particles in genuine hard jets will be lost, because they are below the $p_t^{\rm cut}$, inducing a negative bias in the jet momenta. The jet energy scale will only be correctly reproduced if these two kinds of bias are of similar size, so that they largely cancel. (For patch areas that are similar to the typical jet area, this can be expected to happen: half the patches will contain residual pileup of order $p_t^{\rm cut}$, and since jets tend to have only a few low-$p_t$ particles from the hard scatter, the loss will also be of order $p_t^{\rm cut}$.) There will be an improvement in the jet resolution if the fluctuations in these biases are modest.
Figure 2 shows, on the left, the average $p_t^{\rm cut}$ value, together with its standard deviation (dashed lines), as a function of the number of pileup interactions, $n_{\rm PU}$. The event sample consists of a superposition of $n_{\rm PU}$ zero-bias events on one hard dijet event, in 14 TeV proton–proton collisions, all simulated with Pythia 8 (tune 4C). The 4C tune gives reasonable agreement with a wide range of minimum-bias data, as can be seen by consulting MCPlots. (In Appendix C we also briefly examine the Pythia 6 Z2 tune, and find very similar results.) The underlying event in the hard event has been switched off, and all particles have been made massless, maintaining their $p_t$, rapidity and azimuth. (If one keeps the underlying event in the hard event, much of it is subtracted together with the pileup correction, for both the area–median approach and the SoftKiller, affecting slightly the observed shifts. Keeping massive particles does not affect the SK performance but requires an extra correction for the area–median subtraction. We therefore use massless particles for simplicity.) These are our default choices throughout this paper. The grid used to determine $p_t^{\rm cut}$ has a fixed spacing and extends up to a maximum rapidity. One sees that $p_t^{\rm cut}$ remains moderate even for pileup at the level foreseen for the high-luminosity upgrade of the LHC (HL-LHC), which is expected to reach an average (Poisson-distributed) number of pileup interactions of about 140. The right-hand plot shows the two sources of bias: the lower (solid) curves illustrate the bias on the hard jets induced by the loss of genuine hard-event particles below $p_t^{\rm cut}$. Jet clustering is performed with the anti-$k_t$ jet algorithm with $R = 0.4$, as implemented in a development version of FastJet 3.1 [22, 23]. (For our purposes here, the version that we used is equivalent to the most recent public release, FastJet 3.0.6.) The three line colours correspond to different jet $p_t$ ranges.
The loss has some dependence on the jet itself, notably for higher values of $p_t^{\rm cut}$. (In a local parton-hadron duality type approach to calculating hadron spectra, the spectrum of very low-$p_t$ particles in a jet of a given flavour is actually independent of the jet's $p_t$.) In particular it grows in absolute terms for larger jet $p_t$'s, though it decreases relative to the jet $p_t$. The positive bias from residual pileup particles (in circular patches of radius $R$ at central rapidity) is shown as dashed curves, for three different pileup levels. To estimate the net bias, one should choose a value for $n_{\rm PU}$, read the average $p_t^{\rm cut}$ from the left-hand plot, and for that $p_t^{\rm cut}$ compare the solid curve with the dashed curve that corresponds to the given $n_{\rm PU}$. Performing this exercise reveals that there is indeed a reasonable degree of cancellation between the positive and negative biases. Based on this observation, we can move forward with a more detailed study of the performance of the method. (A study of fixed cutoffs, rather than dynamically determined ones, is performed in Appendix D.)
3 SoftKiller performance
For a detailed study of the SoftKiller method, the first step is to choose the grid spacing $a$ used to break the event into patches. This spacing is the one main free parameter of the method. A patch-size parameter (or jet radius) is present also for area–median pileup subtraction. There the exact choice of this parameter is not too critical: the median is quite stable when pileup levels are high, since all grid cells are filled and nearly all are dominated by pileup. The SoftKiller method, however, chooses the $p_t^{\rm cut}$ so as to obtain a nearly empty event. In this limit, the median operation becomes somewhat more sensitive to the grid spacing $a$.
Fig. 3 considers a range of hard event samples (different line styles) and pileup levels (different colours). For each, as a function of the grid spacing $a$, the left-hand plot shows the average, $\langle \Delta p_t \rangle$, of the net shift in the jet transverse momentum,

$$\Delta p_t = p_t^{\rm jet,sub} - p_t^{\rm jet,hard}, \qquad (3)$$

while the right-hand plot shows the dispersion, $\sigma_{\Delta p_t}$, of that shift from one jet to the next, here shown normalised so as to factor out its expected growth with the level of pileup (right).
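For concreteness, the shift and its dispersion can be computed by matching each hard-event jet to the nearest jet in the pileup-subtracted event. The sketch below uses a simple nearest-jet matching of our own choosing (the paper's exact matching criterion may differ):

```python
import math

def shift_and_dispersion(hard_jets, subtracted_jets, max_dr=0.3):
    """Average shift <Delta pt> and its jet-to-jet dispersion sigma.

    Jets are (pt, y, phi) tuples; each hard jet is matched to the closest
    subtracted jet within max_dr in the rapidity-azimuth plane (a simple
    illustrative matching criterion).
    """
    shifts = []
    for pt_h, y_h, phi_h in hard_jets:
        best_pt, best_dr2 = None, max_dr * max_dr
        for pt_s, y_s, phi_s in subtracted_jets:
            dphi = abs(phi_s - phi_h) % (2 * math.pi)
            if dphi > math.pi:
                dphi = 2 * math.pi - dphi
            dr2 = (y_s - y_h) ** 2 + dphi ** 2
            if dr2 < best_dr2:
                best_pt, best_dr2 = pt_s, dr2
        if best_pt is not None:
            # Delta pt = pt(subtracted jet) - pt(hard jet)
            shifts.append(best_pt - pt_h)
    if not shifts:
        return 0.0, 0.0
    mean = sum(shifts) / len(shifts)
    var = sum((s - mean) ** 2 for s in shifts) / len(shifts)
    return mean, math.sqrt(var)
```

A perfect subtraction would give a mean shift of zero and a small dispersion; the plots in Fig. 3 scan exactly these two quantities against the grid spacing.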
One sees that the average jet $p_t$ shift has significant dependence on the grid spacing $a$. However, there exists a grid spacing for which the shift is not too far from zero and not too dependent either on the hard sample choice or on the level of pileup. In most cases the absolute value of the shift then remains small, the only exception being the hardest dijet sample, for which the bias can be somewhat larger at the highest pileup levels. This shift is, however, still less than the typical best experimental systematic error on the jet energy scale, today of the order of a percent or slightly better [25, 26].
It is not trivial that there should be a single grid spacing that is effective across all samples and pileup levels: the fact that there is can be considered phenomenologically fortuitous. The value of the grid spacing that minimises the typical shifts is also close to the value that minimises the dispersion in the shifts. (In a context where the net shift is the sum of two opposite-sign sources of bias, this is perhaps not too surprising: the two contributions to the dispersion are each likely to be of the same order of magnitude as the individual biases, and their sum is probably minimised when neither bias is too large.) That optimal value of $a$ isn't identical across event samples, and can also depend on the level of pileup. However, the dispersion at our chosen spacing is always close to the actual minimal attainable dispersion for a given sample. Accordingly, for most of the rest of this article, we will work with that grid spacing. (A single value of $a$ is adequate as long as jet finding is carried out mostly with jets of comparable radius. Later in this section we will supplement our studies with a discussion of larger jet radii.)
Next, let us compare the performance of the SoftKiller to that of area–median subtraction. Figure 4 shows the distribution of the shift in jet $p_t$, for hard jets above a given $p_t$ threshold in a dijet sample. The number of pileup events is Poisson-distributed about a fixed average. One sees that in the SoftKiller approach the peak is noticeably higher than what is obtained with the area–median approach, and the distribution correspondingly narrower. The peak, in this specific case, is well centred on $\Delta p_t = 0$.
Figure 5 shows the shift (left) and dispersion (right) as a function of $n_{\rm PU}$ for two different samples: the dijet sample (in blue), as used in Fig. 4, and a hadronic $t\bar t$ sample, with a $p_t$ cut on the jets (in green). Again, the figure compares the area–median (dashed) and SoftKiller (solid) results. One immediately sees that the area–median approach gives a bias that is more stable as a function of $n_{\rm PU}$. Nevertheless, the bias in the SoftKiller approach remains modest, which is still reasonable when one considers that, experimentally, some degree of recalibration is anyway needed after area–median subtraction. As concerns the sample dependence of the shift, comparing $t\bar t$ v. dijet, the area–median and SoftKiller methods appear to have similar systematic differences. In the case of SoftKiller, there are two main causes for the sample dependence: firstly, the higher multiplicity of jets in $t\bar t$ events has a small effect on the choice of $p_t^{\rm cut}$; secondly, the dijet sample is mostly composed of gluon-induced jets, whereas the $t\bar t$ sample is mostly composed of quark-induced jets (which have fewer soft particles and so lose less momentum when imposing a particle $p_t$ threshold). Turning to the right-hand plot, with the dispersions, one sees that the SoftKiller brings a significant improvement compared to area–median subtraction. The relative improvement is greatest at high pileup levels, where the reduction in dispersion beats the $\sqrt{n_{\rm PU}}$ scaling that is characteristic of the area–median method. While the actual values of the dispersion depend a little on the sample, the benefit of the SoftKiller approach is clearly visible for both.
Figure 6 shows the shift (left) and dispersion (right) for jet $p_t$'s and jet masses, now as a function of the hard jet minimum $p_t$. Again, dashed curves correspond to area–median subtraction, while solid ones correspond to the SoftKiller results. All curves correspond to a fixed average number of pileup interactions. For the jet $p_t$ (blue curves) one sees that the area–median shift varies only mildly as the minimum $p_t$ increases, while for SK the dependence is stronger, but still reasonable. For the jet mass (green curves), the area–median method (here using a "safe" subtraction procedure that replaces negative-mass jets with zero-mass jets) is again more stable than SK, but overall the biases are under control. Considering the dispersions (right), one sees that SK gives a systematic improvement, across the whole range of jet $p_t$'s. In relative terms, the improvement is somewhat larger for the jet mass than for the jet $p_t$.
Fig. 7 shows the actual mass spectra of the jets, for two samples: a QCD dijet sample and a sample of boosted jets. For both samples, we only considered jets above a given $p_t$ threshold in the hard event. One sees that SK gives slightly improved mass peaks relative to the area–median method and also avoids area–median's spurious peak at $m = 0$, which is due to events in which the squared jet mass came out negative after four-vector area-subtraction and so was reset to zero. The plot also shows results from the recently proposed Constituent Subtractor method, using v. 1.0.0 of the corresponding code from FastJet Contrib. It too performs better than area–median subtraction for the jet mass, though the improvement is not quite as large as for SK. (A further option is to use an "intrajet killer" that removes soft particles inside a given jet until a total of $\rho A$ has been subtracted. This shows performance similar to that of the Constituent Subtractor.)
One might ask why we have concentrated on small-$R$ jets here, given that jet-mass studies often use large-$R$ jets. The reason is that large-$R$ jets are nearly always used in conjunction with some form of grooming, for example trimming, pruning or filtering [28, 29, 30]. Grooming reduces the large-radius jet to a collection of small-radius jets, and so the large-radius groomed-jet mass is effectively a combination of the $p_t$'s and masses of one or more small-radius jets.
For the sake of completeness, let us briefly also study the SoftKiller performance for large-$R$ jets. Figure 8 shows jet-mass results for the same sample as in Fig. 7 (right), now clustered with the anti-$k_t$ algorithm with a large radius. The left-hand plot is without grooming: one sees that SK with our default grid spacing gives a jet mass that has better resolution than area–median subtraction (or the ConstituentSubtractor), but a noticeable shift, albeit one that is small compared to the effect of uncorrected pileup. That shift is associated with some residual contamination from pileup particles: in a small-$R$ jet, there are typically a handful of particles left from pileup, which compensate low-$p_t$ particles lost from near the core of the jet. If one substantially increases the jet radius without applying grooming, then that balance is upset, with substantially more pileup entering the jet, while there is only a slight further loss of genuine jet $p_t$. To some extent this can be addressed by using the SoftKiller with a larger grid spacing (cf. the corresponding result in the figure), which effectively increases the particle $p_t^{\rm cut}$. This comes at the expense of performance on small-$R$ jets (cf. Fig. 3). An interesting, open problem is to find a simple way to remove pileup from an event such that, for a single configuration of the pileup removal procedure, one simultaneously obtains good performance on small-$R$ and large-$R$ jets. (As an example, the $p_t^{\rm cut}$ threshold could be made to depend on a particle's distance from the nearest jet core; however, this then requires additional parameters to define what is meant by a nearby jet core and to parametrise the distance-dependence of the cut.)
As we said above, however, large-$R$ jet masses are nearly always used in conjunction with some form of grooming. Fig. 8 (right) shows that when used together with trimming, SoftKiller with our default choice of grid spacing performs well both in terms of resolution and shift.
Returning to small-$R$ jets, the final figure of this section, Fig. 9, shows average shifts (left) and dispersions (right) as a function of $n_{\rm PU}$ for several different jet "shapes": jet masses, $k_t$ clustering scales, the jet width (or broadening or girth [31, 32, 33]), an energy-energy correlation moment, and the $\tau_{21}$ and $\tau_{32}$ N-subjettiness ratios, using the exclusive $k_t$ axes with one pass of minimisation. Except in the case of the jet mass (which uses "safe" area subtraction, as mentioned above), the area–median results have been obtained using the shape subtraction technique, as implemented in v. 1.2.0 of the GenericSubtractor in FastJet Contrib.
As regards the shifts, the SK approach is sometimes the best, other times second best. Which method fares worst depends on the precise observable. In all cases, when considering the dispersions, it is the SK that performs best, though the extent of the improvement relative to other methods depends strongly on the particular observable. Overall this figure gives us confidence that one can use the SoftKiller approach for a range of properties of small-radius jets.
4 Adaptation to CHS events and calorimetric events
It is important to verify that a new pileup mitigation method works not just at particle level, but also at detector level. There are numerous subtleties in carrying out detector-level simulation, from the difficulty of correctly treating the detector response to low-$p_t$ particles, to the reproduction of actual detector reconstruction methods and calibrations, and even the determination of which observables to use as performance indicators. Here we will consider two cases: idealised charged-hadron subtraction, which simply examines the effect of discarding charged pileup particles; and simple calorimeter towers.
For events with particle flow and charged-hadron subtraction (CHS), we imagine a situation in which all charged particles can be unambiguously associated either with the leading vertex or with a pileup vertex. We then apply the SK exclusively to the neutral particles, which we assume to have been measured exactly. This is almost certainly a crude approximation; however, it helps to illustrate some general features.
One important change that arises from applying SK just to the neutral particles is that there is a reduced contribution of low-$p_t$ hard-event particles. This means that for a given actual amount of pileup contamination (in terms of visible transverse momentum per unit area), one can afford to cut more aggressively, i.e. raise the $p_t^{\rm cut}$ as compared to the full particle-level case, because for a given $p_t^{\rm cut}$ there will be a reduced loss of hard-event particles. This can be achieved through a moderate increase in the grid spacing. Figure 10 shows the results, with the shift (left) and dispersion (right) for the jet $p_t$ in dijet and $t\bar t$ samples. The SK method continues to bring an improvement, though that improvement is slightly more limited than in the particle-level case. We attribute this to the fact that SK's greatest impact is at very high pileup, and that for a given $n_{\rm PU}$, SK with CHS is effectively operating at lower pileup levels than without CHS. A further study with our toy CHS simulation concerns lepton isolation and is given in Appendix E.
Next let us turn to events where the particles enter calorimeter towers. Here we encounter the issue, discussed also in Appendix A, that SK is not collinear safe. While we argue there that this is not a fundamental drawback from the point of view of particle-level studies, there are issues at calorimeter level: on one hand, a single particle may be divided between two calorimeter towers (we won't attempt to simulate this, as it is very sensitive to detector details); on the other, at high pileup a given tower is quite likely to receive contributions from multiple particles. In particular, if a tower receives contributions from a hard particle with a substantial $p_t$ and additionally from pileup particles, the tower will always be above threshold, and the pileup contribution will never be removed. There are also related effects due to the fact that two pileup particles may enter the same tower. To account for the fact that towers have finite area, we therefore adapt the SK as follows. In a first step we subtract each tower,

$$p_t^{\rm tower} \to \max\!\left(0,\; p_t^{\rm tower} - \rho\, A^{\rm tower}\right), \qquad (4)$$
where $\rho$ is as determined on the event prior to any correction. (We use our standard choices for determining $\rho$, namely the grid version of the area–median method, with the rapidity rescaling discussed in Appendix B. One could equally well use the same grid spacing for the $\rho$ determination as for the SoftKiller.) This in itself eliminates a significant fraction of the pileup, but there remains a residual contribution from the towers whose $p_t$ was larger than $\rho A^{\rm tower}$. We then apply the SoftKiller to the subtracted towers,

$$p_t^{\rm cut} = \mathop{\rm median}_{i\,\in\,\rm patches} \left\{ p_{t,i}^{\rm max,sub} \right\}, \qquad (5)$$
where $p_{t,i}^{\rm max,sub}$ is the $p_t$, after subtraction, of the hardest tower in patch $i$, in analogy with Eq. (2). In the limit of infinite granularity, a limit similar to particle level, $\rho A^{\rm tower} \to 0$. The step in Eq. (4) then has no effect and one recovers the standard SoftKiller procedure applied to particle level.
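The two-step tower-level procedure can be sketched as follows. This is a minimal illustration under our own naming and grid conventions; for simplicity the towers are assumed to share a common area, and $\rho$ is taken as an input rather than re-estimated internally:

```python
import math

def soft_killer_towers(towers, rho, tower_area, grid_size=0.6, y_max=5.0):
    """Tower-level SoftKiller sketch: first subtract rho * A from each
    tower, clamping at zero (Eq. (4)), then apply the SoftKiller threshold
    of Eq. (2) to the subtracted towers.

    `towers` is a list of (pt, y, phi) tuples; rho, tower_area and the grid
    parameters are inputs of this illustration, assuming equal-area towers.
    """
    # step 1: per-tower area-median subtraction
    subtracted = [(max(0.0, pt - rho * tower_area), y, phi)
                  for pt, y, phi in towers]
    # step 2: SoftKiller cut computed from the subtracted towers
    ny = max(1, round(2 * y_max / grid_size))
    nphi = max(1, round(2 * math.pi / grid_size))
    max_pt = [0.0] * (ny * nphi)
    for pt, y, phi in subtracted:
        if abs(y) >= y_max:
            continue
        iy = int((y + y_max) / (2 * y_max) * ny)
        iphi = int((phi % (2 * math.pi)) / (2 * math.pi) * nphi)
        idx = iy * nphi + iphi
        if pt > max_pt[idx]:
            max_pt[idx] = pt
    s = sorted(max_pt)
    n = len(s)
    pt_cut = 0.5 * (s[(n - 1) // 2] + s[n // 2])
    return [t for t in subtracted if t[0] > 0.0 and t[0] >= pt_cut]
```

Towers driven to zero by the subtraction in step 1 are discarded; the remaining towers keep their subtracted momenta, which is what makes the procedure reduce to the particle-level SoftKiller as the tower area goes to zero.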
Results are shown in Fig. 11. The energy in each tower is taken to have Gaussian fluctuations with a fixed relative standard deviation, and a low-energy threshold is applied to each tower after fluctuations. The SK grid spacing is enlarged relative to the particle-level default. Interestingly, with a calorimeter, the area–median method starts to have significant biases, of a couple of GeV, which can be attributed to the calorimeter's non-linear response to soft energy. The SK biases are similar in magnitude to those in Fig. 5 at particle level (note, however, the need for a different choice of grid spacing $a$). The presence of a calorimeter worsens the resolution both for area–median subtraction and for SK; however, SK continues to perform better, even if the improvement relative to area–median subtraction is slightly smaller than for the particle-level results.
We have also investigated a direct application of the particle-level SoftKiller approach to calorimeter towers, i.e. without the subtraction in Eq. (4). We find that the biases are larger but still under some degree of control with an appropriate tuning of the grid spacing, while the performance on dispersion tends to be intermediate between that of area–median subtraction and that of the version of SoftKiller with tower subtraction.
The above results are not intended to provide an exhaustive study of detector effects. For example, particle flow and CHS are affected by detector fluctuations, which we have ignored; purely calorimetric jet measurements are affected by the fact that calorimeter towers are of different sizes in different regions of the detector and furthermore may be combined non-trivially through topoclustering. Nevertheless, our results help illustrate that it is at least plausible that the SoftKiller approach could be adapted to a full detector environment while retaining much of its performance advantage relative to the area–median method.
5 Computing time
The computation time for the SoftKiller procedure has two components: the assignment of particles to patches, which is linear in the total number $N$ of particles; and the determination of the median, whose cost depends only on the number of patches. The subsequent clustering is performed with a reduced number of particles which, at high pileup, is almost independent of the number of pileup particles in the original event. In this limit, the procedure is therefore expected to be dominated by the time to assign particles to patches, which is linear in $N$. This assignment is almost certainly amenable to being parallelised.
In studying the timing, we restrict our attention to particle-level events for simplicity. We believe that calorimeter-type extensions as described in section 4 can be coded in such a way as to obtain similar (or perhaps even better) performance.
Timings are shown in Fig. 12 versus initial multiplicity (left) and versus the number of pileup vertices (right). (These timings have been obtained on an Intel Xeon E5-2470 processor (2.20 GHz), using a development version of FastJet 3.1, with the "Best" clustering strategy. This has a speed that is similar to the public 3.0.6 version of FastJet. Significant speed improvements at high multiplicity are planned for inclusion in the public release of FastJet 3.1; however, they were not used here.) Each plot shows the time needed to cluster the full event and the time to cluster the full event together with ghosts (as needed for area-based subtraction). It also shows the time to run the SoftKiller procedure, the time to cluster the resulting event, and the total time for SK plus clustering.
Overall, one sees nearly two orders of magnitude improvement in speed from the SK procedure as compared to clustering with area information. At low multiplicities, the time to run SK is small compared to that needed for the subsequent clustering. As the event multiplicity increases, SK has the effect of limiting the event multiplicity to a roughly constant number of particles, nearly independently of the level of pileup, and so the clustering time saturates. However, the time to run SK grows and comes to dominate over the clustering time. Asymptotically, the total event processing time then grows linearly with the level of pileup. A significant part of that time (about 75% of the run-time at high multiplicity) is taken by the determination of the particles' rapidity and azimuth in order to assign them to a grid cell. If the particles' rapidity and azimuth are known before applying the SoftKiller to an event (as would be the case, e.g., for calorimeter towers), the computing time to apply the SoftKiller would be yet smaller, as indicated by the dotted orange line in Fig. 12.
Because of its large speed improvement, the SoftKiller method has significant potential for pileup removal at the trigger level. Since SoftKiller returns an event with fewer particles, it will have a speed performance edge also in situations where little or no time is spent in jet-area calculations (either because Voronoi areas or fast approximate implementations are used). This can be seen in Fig. 12 by comparing the green and the solid blue curves.
The SoftKiller method appears to bring significant improvements in pileup mitigation performance, in particular as concerns the jet energy resolution, whose degradation due to pileup is substantially reduced relative to area–median based methods. As an example, the resolution obtained with area–median subtraction for 70 pileup events can be maintained up to 140 pileup events when using SoftKiller. This sometimes comes at the price of an increase in the biases on the jet $p_t$; however, these biases still remain under control.
Since the method acts directly on an event’s particles, it automatically provides a correction for jet masses and jet shapes, and in all cases that we have studied brings a non-negligible improvement in resolution relative to the shape subtraction method, and also (albeit to a lesser extent) relative to the recently proposed Constituent Subtractor approach.
The method is also extremely fast, bringing nearly two orders of magnitude speed improvement over the area–median method for jet $p_t$'s. This can be advantageous both in time-critical applications, for example at trigger level, and in the context of fast detector simulations.
There remain a number of open questions. It would be of interest to understand, more quantitatively, why such a simple method works so well and what dictates the optimal choice of the underlying grid spacing. This might also bring insight into how to further improve the method. In particular, the method is known to have deficiencies when applied to large-$R$ ungroomed jets, which would benefit from additional study. Finally, we have illustrated that in simple detector simulations it is possible to reproduce much of the performance improvement seen at particle level, albeit at the price of a slight adaptation of the method to take into account the finite angular resolution of calorimeters. These simple studies should merely be taken as indicative, and we look forward to proper validation (and possible further adaptation) taking into account full detector effects.
We are grateful to Phil Harris, Peter Loch, David Miller, Filip Moortgat, Ariel Schwartzman, and many others, for stimulating conversations on pileup removal. We are especially grateful to Filip Moortgat for his comments on the manuscript. This work was supported by ERC advanced grant Higgs@LHC, by the French Agence Nationale de la Recherche, under grant ANR-10-CEXC-009-01 and by the EU ITN grant LHCPhenoNet, PITN-GA-2010-264564 and by the ILP LABEX (ANR-10-LABX-63) supported by French state funds managed by the ANR within the Investissements d’Avenir programme under reference ANR-11-IDEX-0004-02. GPS and GS wish to thank Princeton University and CERN, respectively, for hospitality while this work was being carried out.
Appendix A Collinear safety issues
Collinear safety is normally essential in order to get reliable results from perturbation theory. One reaction to the SoftKiller proposal is that it is not collinear safe, because it relies only on information about individual particles’ transverse momenta. There are at least two perspectives on why this is not a severe issue.
The first relates to the intrinsically low-$p_t$ nature of the $p_t^{\rm cut}$, which is typically of the order of a GeV. At these scales, non-perturbative dynamics effectively regulates the collinear divergence. Consider one element of the hadronisation process, namely resonance decay, specifically $\rho \to \pi^+\pi^-$: if the $\rho$ has a $p_t$ of the order of the cut, the rapidity–azimuth separation of the two pions is of order one (see e.g. Ref. ). Alternatively, consider the emission from a high-energy parton of a gluon with a $p_t$ of the order of the cut: this gluon can only be considered perturbative if its transverse momentum relative to the emitter is at least of order a GeV, i.e. if it has an angle relative to the emitter of order one. Both these examples illustrate that the collinear divergence that is of concern at parton level is smeared by non-perturbative effects when considering low-$p_t$ particles. Furthermore, the impact of these effects on the jet $p_t$ will remain of the order of $p_t^{\rm cut}$, i.e. power-suppressed with respect to the scale of the hard physics.
The second perspective is from the strict point of view of perturbative calculations. One would not normally apply a pileup reduction mechanism in such a context. But it is conceivable that one might wish to define the final state such that it always includes a pileup and underlying event (UE) removal procedure (for example, so as to reduce prediction and reconstruction uncertainties related to the modelling of the UE; we are grateful to Leif Lönnblad for discussions on this subject). This might, just, be feasible with area–median subtraction, with its small biases, but for the larger biases of SK it does not seem phenomenologically compelling. Still, it is interesting to explore the principle of the question. Then one should understand the consequences of applying the method at parton level. With the grid used here there are a total of 120 patches, and since $p_t^{\rm cut}$ is the median over patches of each patch's hardest particle's $p_t$, it vanishes unless more than half the patches are occupied. Only when the perturbative calculation has at least 61 particles, i.e. attains order $\alpha_s^{61}$, can $p_t^{\rm cut}$ be non-zero; so the collinear safety issue would enter at an inconceivably high order, and all practical fixed-order parton-level calculations would give results that are unaffected by the procedure.
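As a concrete reminder of the procedure under discussion, the dynamical cut (the median, over grid patches in rapidity–azimuth, of each patch's hardest-particle $p_t$) can be sketched as follows. This is an illustrative reimplementation, not the actual FastJet contrib code; the grid geometry defaults and the `(pt, y, phi)` particle representation are our own simplifications.

```python
import math

def softkiller_ptcut(particles, ymax=2.5, grid_size=0.4):
    """Illustrative SoftKiller threshold: the median, over grid patches
    in (y, phi), of each patch's maximum particle pt.
    `particles` is a list of (pt, y, phi) tuples."""
    ny = max(1, int(round(2 * ymax / grid_size)))
    nphi = max(1, int(round(2 * math.pi / grid_size)))
    max_pt = [[0.0] * nphi for _ in range(ny)]
    for pt, y, phi in particles:
        if abs(y) >= ymax:
            continue  # outside the acceptance covered by the grid
        iy = min(ny - 1, int((y + ymax) / (2 * ymax) * ny))
        iphi = int((phi % (2 * math.pi)) / (2 * math.pi) * nphi) % nphi
        max_pt[iy][iphi] = max(max_pt[iy][iphi], pt)
    flat = sorted(v for row in max_pt for v in row)
    return flat[len(flat) // 2]  # median over patches; empty patches count as 0

def softkiller_filter(particles, **kw):
    """Discard all particles at or below the event's dynamical threshold."""
    cut = softkiller_ptcut(particles, **kw)
    return [p for p in particles if p[0] > cut], cut
```

Events in which more than half the patches are empty yield a cut of exactly zero, which is the origin of the parton-level counting argument above.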
Collinear safety, as well as being important from a theoretical point of view, also has experimental relevance: for example, depending on its exact position, a particle may shower predominantly into one calorimeter tower or into two. Collinear safety helps ensure that results are independent of these details. While we carried out basic detector simulations in section 4, a complete study of the impact of this type of effect would require full simulation and actual experimental reconstruction methods (e.g. particle flow or topoclustering).
Appendix B Rapidity dependence
One issue with the area–median method is that a global $\rho$ determination fails to account for the substantial rapidity dependence of the pileup contamination. Accordingly, the method is often extended by introducing an a-priori determined function $f(y)$ that encodes the shape of the pileup's dependence on rapidity $y$, the subtraction then using $\rho(y) = f(y)\,\rho_{\rm central}$ (Eq. (6)) in place of the global estimate.
This is the approach that we have used throughout this paper. (For the Pythia 8 (4C) simulations, we use an $f(y)$ fitted to the simulated pileup.) The rapidity dependence of $\rho$, shown as the dashed lines in Fig. 13 (left), is substantial, and we therefore account for it through rescaling. The figure shows two different tunes (4C and Monash 2013), illustrating the fact that they have somewhat different rapidity dependence.
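The rescaled area–median subtraction can be sketched as follows; the profile `f` stands for an assumed, externally fitted rapidity shape, and the interface is purely illustrative.

```python
def subtract_area_median(jet_pt, jet_area, jet_y, rho_central, f):
    """Area-median subtraction with rapidity rescaling: the central rho
    estimate is multiplied by the profile f(y) evaluated at the jet's
    rapidity before subtracting rho * area from the jet pt."""
    return jet_pt - rho_central * f(jet_y) * jet_area
```

With $f(y) \equiv 1$ this reduces to the plain, rapidity-independent area–median correction.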
The SoftKiller method acts not on the average energy flow, but instead on the particle $p_t$'s. The solid lines in Fig. 13 (left) show that the average particle $p_t$ is nearly independent of rapidity. This suggests that there may not be a need to explicitly account for rapidity in the SK method, at least at particle level (detector effects introduce further non-trivial rapidity dependence).
This is confirmed in the right-hand plot of Fig. 13, which shows the rapidity dependence of the shift in the jet $p_t$ with the area–median and SK methods. Our default area–median curve, which includes rapidity rescaling, leads to a nearly rapidity-independent shift. Without the rapidity rescaling, there are instead sizeable rapidity-dependent shifts at high pileup. In contrast, the SK method, which in our implementation does not involve any parametrisation of rapidity dependence, automatically gives a jet $p_t$ shift that is fairly independent of rapidity. We interpret this as a consequence of the fact (cf. the left-hand plot of Fig. 13) that the average particle $p_t$ is far less dependent on rapidity than the average energy flow. (In a similar spirit to Eq. (6), one could also imagine introducing a rapidity-dependent rescaling of the particle $p_t$'s before applying SoftKiller, and then inverting the rescaling afterwards. Our initial tests of this approach suggest that it does largely correct for the residual SK rapidity dependence.)
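The rescaling variant just mentioned (divide each particle's $p_t$ by $f(y)$, apply the SK selection, then undo the rescaling on the survivors) can be sketched as below; the function names and interfaces are hypothetical, not part of the actual implementation.

```python
def rescaled_softkiller(particles, f, sk_filter):
    """Hypothetical rapidity-rescaled SoftKiller: divide each particle pt
    by f(y), run any SK-style filter on the rescaled event, then restore
    the original pt's of the surviving particles.
    `particles`: list of (pt, y, phi); `f`: rapidity profile;
    `sk_filter`: function mapping a particle list to the kept sublist."""
    rescaled = [(pt / f(y), y, phi) for pt, y, phi in particles]
    kept = sk_filter(rescaled)
    return [(pt * f(y), y, phi) for pt, y, phi in kept]
```

The effect is that the selection threshold becomes effectively $p_t^{\rm cut} f(y)$, tracking the rapidity shape of the pileup.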
Appendix C Monte Carlo tune dependence
While a full study of the dependence of the SK method on different Monte Carlo tunes is beyond the scope of this article, we have briefly verified that our conclusions are not affected by switching to another widely used LHC tune, the Pythia 6 Z2 tune. Fig. 14 compares the Pythia 6 Z2 results for the jet $p_t$ offset and dispersion in a dijet sample with those from the Pythia 8 4C tune that we used throughout the article. While there are some differences between the two tunes, our main conclusions appear unchanged. In particular, the average shift remains under control, and there continues to be a significant improvement in the resolution. (One may wonder about the stronger residual shift for area–median subtraction with the Z2 tune as compared to 4C; however, one should keep in mind that this corresponds to only a small contribution per pileup vertex.)
Appendix D Impact of a fixed cutoff
One key aspect of the SoftKiller approach is not simply that it applies a cutoff, but rather that there is a straightforward dynamical way of determining a cutoff, on an event-by-event basis, that removes the bulk of the effects of pileup with modest biases and improved dispersion.
For completeness, it is interesting to compare its performance to that of a fixed cut. Figure 15 shows the shifts (left) and dispersions (right), as a function of the number of pileup interactions, as obtained for the area–median method, the SoftKiller and three fixed particle-level $p_t$ cuts. For each of these fixed cuts, there is a pileup level for which the shift in the jet $p_t$ is zero. However, as soon as one moves away from that particular pileup level, large biases appear. Around the pileup level that has zero bias for a given fixed cut, the dispersion of the shift is quite close to that obtained in the SoftKiller approach; away from that level, however, the dispersion becomes somewhat worse. Overall, therefore, the SoftKiller approach works noticeably better than any fixed cut.
One further study that we have carried out is to parametrise the average $p_t^{\rm cut}$ shown in Fig. 2 (left) as a function of $\rho$, and to apply a $p_t^{\rm cut}$ that is chosen event-by-event according to that event's actual value of $\rho$. We have found that this has performance similar to that of the SoftKiller, i.e. SoftKiller's slight event-by-event adaptation of the $p_t^{\rm cut}$ for a fixed $\rho$ (represented by the 1-$\sigma$ dashed lines in Fig. 2 (left)) does not appear to be critical to its success. This suggests that any approach that chooses a $\rho$-dependent $p_t^{\rm cut}$ so as to give a near-zero average shift may yield performance on dispersions similar to that of SoftKiller. From this point of view, SoftKiller provides an effective heuristic for the dynamic determination of the $p_t^{\rm cut}$ value.
Appendix E Lepton isolation and (not) MET
Two non-jet-based quantities that suffer significantly from pileup effects are lepton isolation and missing transverse energy (MET) reconstruction.
Both potentially involve significant detector effects. For lepton isolation, we believe we may nevertheless be able to gain some insight by considering a simplified scenario. We consider isolation of hard leptons from vector-boson decay and also of hard leptons (with the same $p_t$ cut) from $b$-hadron decays in hard dijet events. The first sample provides genuinely isolated leptons, while the second provides a sample of non-primary leptons, i.e. one important source of lepton-production background that isolation is intended to eliminate. In both cases we use toy CHS events, as described in section 4.
Fig. 16 (left) shows the $p_t$ contained in a cone of radius $R$ around the lepton, with solid curves for leptons from boson decays and dashed curves for leptons from $b$-hadron decays. All curves except the black one (hard event only, i.e. no pileup) correspond to events with a fixed mean level of pileup. The orange curves illustrate how pileup severely shifts and smears the distribution of the $p_t$ around the lepton. Area–median subtraction eliminates the shift, but gives only a marginal improvement for the smearing. SK gives somewhat more improvement as concerns the smearing, but has the “feature” that there is a residual shift for the boson events, though not for the $b$-hadron decays. This difference arises because $b$-hadron jets have some number of soft particles that are removed by SK, compensating for the small residual pileup left in by SK. In contrast, leptons from boson decays tend to have few genuine soft particles around them, so there is simply a net positive bias from the small leftover PU. The peaks in the SK boson-sample curve correspond to having 0, 1, 2, etc. residual pileup particles.
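The isolation variable discussed here, a scalar $p_t$ sum in a rapidity–azimuth cone around the lepton, can be sketched as follows; the `(pt, y, phi)` representation and the default cone radius are illustrative assumptions.

```python
import math

def iso_pt(lepton, particles, R=0.4):
    """Scalar pt sum of all particles within a rapidity-azimuth distance R
    of the lepton. `lepton` and `particles` entries are (pt, y, phi)."""
    _, yl, phil = lepton
    total = 0.0
    for pt, y, phi in particles:
        dphi = abs(phi - phil)
        dphi = min(dphi, 2 * math.pi - dphi)  # wrap azimuth to [0, pi]
        if (y - yl) ** 2 + dphi ** 2 < R ** 2:
            total += pt
    return total
```

An isolation cut then keeps the lepton only if this sum (or its pileup-corrected version) lies below a chosen threshold.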
To establish how these characteristics translate into final performance, one should examine the ROC curve of “background” efficiency (i.e. for leptons from $b$-hadron decays) versus “signal” efficiency (i.e. for leptons from boson decays). These are shown in the right-hand plot of Fig. 16, with the symbols providing information about the isolation cut being used at a given point on the curve. Lower curves imply better performance. One sees that uncorrected pileup (orange curve) significantly degrades performance relative to the “hard” (i.e. no pileup) case. Area–median subtraction brings a small benefit and SK brings a further moderate improvement. For a given isolation cut, the area–median approach gives a relatively stable signal efficiency, while SK gives a more stable background efficiency.
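Such ROC curves are obtained by scanning the isolation cut over the signal and background samples; a minimal sketch, where the input lists of per-lepton isolation values are assumed to come from a simulation like the one described above:

```python
def roc_points(signal_iso, background_iso, cuts):
    """For each isolation cut, the fraction of signal and of background
    leptons that pass (a lepton passes if its cone pt is below the cut).
    Returns a list of (signal_efficiency, background_efficiency) pairs."""
    points = []
    for c in cuts:
        s_eff = sum(1 for v in signal_iso if v < c) / len(signal_iso)
        b_eff = sum(1 for v in background_iso if v < c) / len(background_iso)
        points.append((s_eff, b_eff))
    return points
```

Plotting background efficiency against signal efficiency as the cut varies traces out one curve per pileup-mitigation method, with lower curves indicating better discrimination.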
A final comment about Fig. 16 (right) concerns the red curve, in which isolation is carried out using just the charged particles from the primary vertex. Over much of the signal-efficiency range this performs better than any pileup-correction method (the exact crossover depends on the parameter choices for SK). This highlights the point that it may be better to discard pileup-contaminated information than to try to correct for the large impact of pileup. (The very good performance of pure charged-particle isolation may be overoptimistic: a shortcut in our simulation is that we assume that tracks from $b$-decays can be correctly associated with the primary vertex, which may not be the case in a realistic environment.) As well as discarding neutrals, one may also consider going to smaller isolation radii, keeping in mind also recent theoretical progress in understanding small-$R$ isolation and jets [42, 43]. The full optimisation over these various options should probably be left to detailed experimental work.
Let us finally briefly comment on MET. With a perfect, infinite-acceptance detector, pileup would have almost no impact on MET, other than through the small fraction of neutrinos present in pileup. The large pileup-induced degradation in MET resolution that occurs in practice is almost entirely a result of the interplay between the detector (its acceptance and response) and pileup. Without a detailed full detector simulation, we believe that it is difficult to carry out a robust study of potential improvements in MET reconstruction with SK-inspired methods. Nevertheless, the fact that jet-area subtraction is used successfully in ATLAS MET reconstruction suggests that the improvements from SK may be of benefit for MET as well.
-  High Luminosity LHC Upgrade, http://cern.ch/HL-LHC ; see also D. Abbaneo et al., proceedings of “ECFA High Luminosity LHC Experiments Workshop: Physics and Technology Challenges,” ECFA-13-284.
-  CMS Collaboration, “Particle-Flow Event Reconstruction in CMS and Performance for Jets, Taus, and MET,” CMS-PAS-PFT-09-001.
-  The ATLAS collaboration, “Pile-up subtraction and suppression for jets in ATLAS,” ATLAS-CONF-2013-083.
-  CMS Collaboration, “Jet Energy Scale performance in 2011,” CMS-DP-2012-006.
-  M. Cacciari and G. P. Salam, Phys. Lett. B 659 (2008) 119 [arXiv:0707.1378 [hep-ph]].
-  M. Cacciari, G. P. Salam and G. Soyez, JHEP 0804 (2008) 005 [arXiv:0802.1188 [hep-ph]].
-  A. Altheimer, A. Arce, L. Asquith, J. Backus Mayes, E. Bergeaas Kuutmann, J. Berger, D. Bjergaard and L. Bryngemark et al., arXiv:1311.2708 [hep-ex].
-  “Topoclustering pileup suppression,” D. E. Soper and M. Spannowsky, Phys. Rev. D 84 (2011) 074002; D. E. Soper and M. Spannowsky, arXiv:1402.1189 [hep-ph].
-  P. Berta, M. Spousta, D. W. Miller and R. Leitner, JHEP 1406 (2014) 092 [arXiv:1403.3108 [hep-ex]].
-  M. Cacciari, J. Rojo, G. P. Salam and G. Soyez, JHEP 0812 (2008) 032 [arXiv:0810.1304 [hep-ph]].
-  D. Krohn, M. Low, M. D. Schwartz and L.-T. Wang, arXiv:1309.4777 [hep-ph].
-  M. Cacciari, G. P. Salam and G. Soyez, arXiv:1404.7353 [hep-ph].
-  O. L. Kodolova et al., “Study of γ+Jet Channel in Heavy Ion Collisions with CMS;” V. Gavrilov, A. Oulianov, O. Kodolova and I. Vardanian, “Jet Reconstruction with Pileup Subtraction,” CMS-RN-2003-004; O. Kodolova, I. Vardanian, A. Nikitenko and A. Oulianov, Eur. Phys. J. C 50 (2007) 117.
-  CMS Collaboration, “Underlying Event Subtraction for Particle Flow,” https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsDPSUESubtractionPF
-  S. Catani, Y. L. Dokshitzer, M. H. Seymour and B. R. Webber, Nucl. Phys. B 406 (1993) 187 and refs. therein; S. D. Ellis and D. E. Soper, Phys. Rev. D 48 (1993) 3160 [hep-ph/9305266].
-  M. Cacciari, G. P. Salam and S. Sapeta, JHEP 1004 (2010) 065 [arXiv:0912.4926 [hep-ph]].
-  M. Cacciari, G. P. Salam and G. Soyez, Eur. Phys. J. C 71 (2011) 1692 [arXiv:1101.2878 [hep-ph]].
-  T. Sjostrand, S. Mrenna and P. Z. Skands, Comput. Phys. Commun. 178 (2008) 852 [arXiv:0710.3820 [hep-ph]].
-  G. Soyez, G. P. Salam, J. Kim, S. Dutta and M. Cacciari, Phys. Rev. Lett. 110 (2013) 16, 162001 [arXiv:1211.2811 [hep-ph]].
-  M. Cacciari, G. P. Salam and G. Soyez, JHEP 0804 (2008) 063 [arXiv:0802.1189 [hep-ph]].
-  M. Cacciari and G. P. Salam, Phys. Lett. B 641 (2006) 57 [hep-ph/0512210].
-  M. Cacciari, G. P. Salam and G. Soyez, Eur. Phys. J. C 72 (2012) 1896 [arXiv:1111.6097 [hep-ph]].
-  V. A. Khoze, S. Lupia and W. Ochs, Eur. Phys. J. C 5 (1998) 77 [hep-ph/9711392].
-  H. Kirschenmann [CMS Collaboration], J. Phys. Conf. Ser. 404 (2012) 012013.
-  G. Aad et al. [ATLAS Collaboration], arXiv:1406.0076 [hep-ex].
-  FastJet Contrib, http://fastjet.hepforge.org/contrib .
-  D. Krohn, J. Thaler and L. -T. Wang, JHEP 1002 (2010) 084 [arXiv:0912.1342 [hep-ph]].
-  S. D. Ellis, C. K. Vermilion and J. R. Walsh, Phys. Rev. D 80 (2009) 051501 [arXiv:0903.5081 [hep-ph]].
-  J. M. Butterworth, A. R. Davison, M. Rubin and G. P. Salam, Phys. Rev. Lett. 100 (2008) 242001 [arXiv:0802.2470 [hep-ph]].
-  S. Catani, G. Turnock and B. R. Webber, Phys. Lett. B 295 (1992) 269.
-  C. F. Berger, T. Kucs and G. F. Sterman, Phys. Rev. D 68 (2003) 014012 [hep-ph/0303051].
-  L. G. Almeida, S. J. Lee, G. Perez, G. F. Sterman, I. Sung and J. Virzi, Phys. Rev. D 79 (2009) 074017 [arXiv:0807.0234 [hep-ph]].
-  A. J. Larkoski, G. P. Salam and J. Thaler, JHEP 1306 (2013) 108 [arXiv:1305.0007 [hep-ph]].
-  J. Thaler and K. Van Tilburg, JHEP 1103 (2011) 015 [arXiv:1011.2268 [hep-ph]]; JHEP 1202 (2012) 093 [arXiv:1108.2701 [hep-ph]].
-  D. Bertolini, P. Harris, M. Low and N. Tran, JHEP 1410 (2014) 59 [arXiv:1407.6013 [hep-ph]].
-  Mitigation of pileup effects at the LHC, CERN, Switzerland, 16–18 May 2014, https://indico.cern.ch/event/306155/ .
-  P. Skands, S. Carrazza and J. Rojo, arXiv:1404.5630 [hep-ph].
-  A. Karneyeu, L. Mijovic, S. Prestel and P. Z. Skands, Eur. Phys. J. C 74 (2014) 2714 [arXiv:1306.3436 [hep-ph]]; see also http://mcplots.cern.ch/, as well as an alternative comparison, http://rivet.hepforge.org/tunecmp/index.html.
-  R. Field, Acta Phys. Polon. B 42 (2011) 2631 [arXiv:1110.5530 [hep-ph]].
-  T. Sjostrand, S. Mrenna and P. Z. Skands, JHEP 0605 (2006) 026 [hep-ph/0603175].
-  S. Catani, M. Fontannaz, J. P. Guillet and E. Pilon, JHEP 1309 (2013) 007 [arXiv:1306.6498 [hep-ph]].
-  M. Dasgupta, F. Dreyer, G. P. Salam and G. Soyez, arXiv:1411.5182 [hep-ph].
-  P. Loch (for the ATLAS Collaboration), “Missing ET at ATLAS,” https://indico.cern.ch/event/306155/session/4/contribution/11/material/slides/0.pdf