Accelerating Science with Generative Adversarial Networks:
An Application to 3D Particle Showers in Multi-Layer Calorimeters
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theory modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speed-up factors of up to 100,000. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
High-precision modeling of the interactions of particles with media is important across many physical sciences, enabling and accelerating new findings. Similar to complex weather or cosmological modeling, the detailed simulation of subatomic particle collisions and interactions, as captured by detectors at the LHC, is a computationally demanding task, which annually requires billions of CPU hours, constituting more than half of the LHC experiments’ computing resources Flynn (2015); Karavakis et al. (2014); Bozzi (2015).
The Nobel-prize-winning Higgs boson discovery Aad et al. (2012); Chatrchyan et al. (2012) would not have been possible without extensive simulation. Before its experimental observation, its fundamental properties, such as its mass, were unknown, but synthetic particle collisions could be generated to simulate the outcome of various measurements under different model assumptions.
Today, as several questions remain unanswered about the nature of known particles (such as neutrinos) and hypothetical ones (such as the supersymmetric partners of the Standard Model particles), modern nuclear and particle physics research continues to strongly depend on detailed simulations for developing analysis techniques, interpreting results, and designing new experiments.
Cutting-edge software libraries such as Geant4 GEANT4 Collaboration (2003) provide the backbone to construct complex detector geometries and accurately model physical processes and interactions down to microscopic distance scales.
The shortcoming of this method is its computational footprint. The high-precision description of the electromagnetic and nuclear processes that govern the evolution of particle showers in calorimeters can require minutes per event on modern computing platforms Aad et al. (2010); Rahmat et al. (2012), making this the most computationally expensive step in the simulation pipeline. Due to the expensive simulation cost, significant resources are also invested in storing generated data sets, which can occupy petabytes of disk space.
This bottleneck becomes apparent at the scale at which events need to be simulated to enable physics analyses at the high-luminosity phase of the LHC (HL-LHC). The ATLAS and CMS experiments are expected to observe hundreds of millions of Higgs boson events de Florian et al. (2016), buried in orders of magnitude more background events Aaboud et al. (2016); CMS (2016). Hundreds of billions of simulated collisions will be required to reduce the Monte Carlo uncertainty and measure some of the Higgs boson’s as yet unprobed properties.
Approximate calorimeter simulation techniques exist Grindhammer and Peters (1993); Beckingham et al. (2010); Grindhammer et al. (1990); Barberio et al. (2009), but they provide compromises that lie on different, yet similarly sub-optimal, parts of the accuracy-speedup trade-off curve.
Full detector simulations are too slow to meet the growing analysis demands; current fast simulations are not precise enough to serve the entire physics program. We therefore introduce a Deep Learning model, named CaloGAN, for high-fidelity fast simulation of particle showers in electromagnetic calorimeters. Its goal is to be both quick and precise, by significantly reducing the accuracy cost incurred with increased speed-up. A fast simulation technique of this kind also addresses the issue of data storage and transfer, as the gained generation simplicity and speedup make real-time, on-demand simulation a possibility.
Similar techniques have been tested in Cosmology Ravanbakhsh et al. (2016); Schawinski et al. (2017), Condensed Matter Physics Mosser et al. (2017), and Oncology Kadurin et al. (2016). However, the sparsity, high dynamic range, and highly location-dependent features present in this application make it uniquely challenging. In addition to enabling physics analysis at the LHC, an approach similar to the CaloGAN may be useful for other applications in particle and nuclear physics, nuclear medicine, and space science that require detailed modeling of particle interactions with matter.
To alleviate the computational burden of simulating electromagnetic showers, we introduce a method based on Generative Adversarial Networks (GANs) Goodfellow et al. (2014) in order to directly simulate component read-outs in electromagnetic calorimeters. GANs are an increasingly popular approach to learning a generative model using deep neural networks, and have shown great promise in generating clear samples from natural images Radford et al. (2015).
Though the GAN formulation, by design, does not admit an explicit probability density or explicit likelihood, we gain the ability to sample from the learned generative model in an efficient manner. The GAN training uses a minimax game-theoretic framework, and yields as an artifact a generator function G that maps a d-dimensional latent vector z to a point in the space of realistic samples. We would like the implicit density learned by G to be close to the distribution that governs the simulated data. Since G is a neural network, a forward pass to generate new samples is highly efficient on modern computing platforms Chetlur et al. (2014).
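To illustrate why sampling from a trained generator is cheap, the sketch below implements a generator forward pass as two batched matrix multiplications in NumPy. The layer sizes (latent dimension 64, hidden width 128) are hypothetical stand-ins, not the actual CaloGAN architecture; the point is only that generation reduces to a single forward pass over a whole batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: latent dim d=64, hidden width 128, 504 output voxels.
d, hidden, n_voxels = 64, 128, 504
W1 = rng.normal(scale=0.1, size=(d, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_voxels))

def generate(batch_size):
    """Map latent vectors z ~ N(0, I) to shower images in one forward pass."""
    z = rng.normal(size=(batch_size, d))
    h = np.maximum(z @ W1, 0.0)   # ReLU hidden layer
    x = np.maximum(h @ W2, 0.0)   # ReLU output keeps energies non-negative
    return x

showers = generate(1024)          # an entire batch in two matrix products
assert showers.shape == (1024, 504)
```

On hardware optimized for dense linear algebra, the cost per sample of such a pass is small and shrinks further with batching, which is the source of the efficiency discussed above.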
Previous work de Oliveira et al. (2017) investigated GAN-based methods for jet images Cogan et al. (2015), which are similar to one-layer calorimeters with square pixels (except that jet generators such as Pythia Sjostrand et al. (2006) are much faster than Geant4). This work addresses the complexity introduced by modeling a realistic sampling detector with heterogeneous longitudinal and transverse segmentation. We exploit the location specificity of the calorimeter, and utilize weight locality at the model level. We also follow the guidelines outlined in de Oliveira et al. (2017) in order to deal with both high dynamic range and sparsity levels. Our neural network architecture per calorimeter layer is a function of the read-out grid dimensionality, and is augmented with an attentional component Xu et al. (2015) that provides a mechanism to carry information from layer to layer Zhang et al. (2016). This allows the CaloGAN to model the physical sequential dependence among the calorimeter layers.
To ensure the realism of the CaloGAN setup, we impose an additional constraint to encourage the generator to produce a shower at a requested energy. That is, the learned, implicit density p_G(x | E) needs to converge to the hypothetical data-generating density p_data(x | E) for any nominal energy E, i.e., p_G(x | E) ≈ p_data(x | E) for all E.
To encourage this to be well modeled, a physics-specific loss component is introduced to penalize the absolute deviation between the nominal energy E and the reconstructed energy Ê, i.e., the sum of the energies deposited in all calorimeter cells. A noteworthy subtlety is that this penalization scheme, coupled with minibatch discrimination Salimans et al. (2016), invites the network to learn the distribution of reconstructed energies, a desirable characteristic for a readily applicable practical system to augment fast simulation. Such a formulation also encourages conservation of energy through the generation process. The simulation only includes models of energy deposition, not digitization (a non-linear effect that can violate reconstructed-energy conservation). The energy per layer includes the contribution from inactive material (see below). Therefore, aside from leakage beyond the calorimeter (relevant mostly for charged pions), energy must be conserved, and this provides a useful constraint on the generation.
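A minimal sketch of such a physics-specific loss term, assuming the reconstructed energy is simply the sum of all voxel depositions (the exact loss weighting and coupling to the adversarial objective in the CaloGAN are not reproduced here):

```python
import numpy as np

def energy_loss(e_nominal, showers):
    """Mean absolute deviation between the requested energy E and the
    reconstructed energy Ê, defined as the sum over all voxels."""
    e_reco = showers.reshape(len(showers), -1).sum(axis=1)
    return float(np.mean(np.abs(e_nominal - e_reco)))

# Toy check: showers whose voxels sum to E incur (numerically) zero penalty,
# while an empty shower is penalized by the full requested energy.
e = np.array([10.0, 50.0])
perfect = np.stack([np.full(504, ei / 504) for ei in e])
assert energy_loss(e, perfect) < 1e-9
assert energy_loss(np.array([10.0]), np.zeros((1, 504))) == 10.0
```

In training, a term of this form would be added (with some weight) to the generator objective, steering samples toward energy conservation.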
III Experimental Results
From a series of simulated showers, the CaloGAN is tasked with learning the simulated data distributions of positron, photon, and charged pion showers generated by Geant4 with a uniform energy spectrum between 1 GeV and 100 GeV, incident perpendicular to the center of a three-layer, heterogeneously segmented, liquid argon (LAr) calorimeter cube. The training dataset Nachman et al. (2017a) is represented in image format by three images of dimensions 3×96, 12×12, and 12×6, each representing the shower energy depositions per pixel in one calorimeter layer. The energy per layer includes the active and inactive contributions. For some applications, e.g. calorimeter calibrations Aad et al. (2017), it is important to have the inactive component; in the future one could add separate layers for the inactive component or add a second step for dividing the energy per layer into the two components. The flexible CaloGAN architecture allows for a straightforward extension to related detector geometries that have more sampling layers or different cell sizes per layer Nachman et al. (2017b).
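The per-layer image representation can be sketched as follows; the 3×96, 12×12, and 12×6 grids concatenate into the 504-dimensional pixel space used later for classification (the helper name `flatten_shower` is illustrative, not from the original software):

```python
import numpy as np

# Per-layer read-out granularities: 3x96, 12x12, and 12x6 cells.
LAYER_SHAPES = [(3, 96), (12, 12), (12, 6)]

def flatten_shower(layers):
    """Concatenate the three per-layer energy images into one 504-vector."""
    assert [l.shape for l in layers] == LAYER_SHAPES
    return np.concatenate([l.ravel() for l in layers])

layers = [np.zeros(shape) for shape in LAYER_SHAPES]
flat = flatten_shower(layers)
assert flat.size == 504  # 3*96 + 12*12 + 12*6 = 288 + 144 + 72
```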
Our analysis establishes that it is possible to generate three-dimensional electromagnetic showers in a multi-layer sampling LAr calorimeter with uneven spatial segmentation, while attempting to preserve spatio-temporal relationships among layers.
For performance evaluation, we choose application-driven methods focused on sample quality. A first qualitative assessment is accompanied by a quantitative evaluation based on physics-driven similarity metrics. This choice reflects the domain-specific procedure for Monte Carlo-data comparisons. However, it is also important to examine high-dimensional behavior, because the CaloGAN is not anchored by parameterized models the way traditional fast simulators are. While the adversarial classifier provides some high-dimensional validation, we also use particle classification performance. Visualization and validation remain a key challenge for multi-dimensional generators parameterized by a neural network.
III.1 Qualitative Evaluation
The average calorimeter deposition per voxel (Fig. 1) suggests that the learned generative models of positron, photon, and charged pion showers capture aspects of the underlying physical processes. For photon showers, for instance, the mean per-layer cell energies show only percent-level discrepancies in the first two layers, where most of the energy is deposited for this process. This level of agreement is promising, but it is important to analyze more than the mean energy pattern to fully study the strengths and weaknesses of the proposed approach.
The CaloGAN-generated samples are checked for adequate diversity and lack of direct memorization of the Geant4 samples used for training. The nearest (by Euclidean distance) Geant4 image is found for each of a random selection of CaloGAN images in order to verify the desired characteristics (Fig. 2). The samples show strong inter- and intra-class diversity and no evidence of memorization since the closest images do not look exactly the same.
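The memorization check described above amounts to a brute-force nearest-neighbor search in pixel space. A minimal sketch, with random arrays standing in for the actual shower images:

```python
import numpy as np

def nearest_training_image(generated, training):
    """For each generated shower, return the index and Euclidean distance
    of its closest training shower (brute-force pairwise search)."""
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b.
    d2 = (np.sum(generated**2, axis=1)[:, None]
          + np.sum(training**2, axis=1)[None, :]
          - 2.0 * generated @ training.T)
    idx = np.argmin(d2, axis=1)
    return idx, np.sqrt(np.maximum(d2[np.arange(len(generated)), idx], 0.0))

rng = np.random.default_rng(0)
train = rng.random((100, 504))   # stand-in for Geant4 training showers
gen = rng.random((5, 504))       # stand-in for CaloGAN samples
idx, dist = nearest_training_image(gen, train)
assert idx.shape == (5,) and np.all(dist > 0)  # no exact copies here
```

A generated sample with a near-zero distance to some training image would be evidence of memorization; strictly positive, non-trivial distances support sample diversity.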
III.2 Shower Shape Description
Geometrically and physically motivated shower shape variables Olive et al. (2014) are used as further validation and introspection into the capabilities of the CaloGAN to adequately model and capture non-linear functional representations of the simulated data distribution (Fig. 3). In fact, it is desirable for the CaloGAN to recover the target distribution of these 1D statistics.
The network is not shown any shower shape variable (only pixel values) at training time; it is therefore encouraging that the CaloGAN recovers the simulated data distribution for a variety of shower shapes across the three particle types. However, certain features of some distributions are not well described. This is a challenge for the future and will likely require improvements to the architecture and training procedure. Longer trainings of higher-capacity architectures have shown promise in rectifying some of these issues.
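As an illustration of the kind of derived observables used for this validation, the sketch below computes two generic shower shape quantities (per-layer energy fractions and an energy-weighted depth); these are representative examples, not necessarily the exact set of variables used in Fig. 3:

```python
import numpy as np

def layer_energy_fractions(layers):
    """Fraction of the total shower energy deposited in each layer."""
    e = np.array([l.sum() for l in layers])
    return e / e.sum()

def shower_depth(layers):
    """Energy-weighted mean layer index: a simple longitudinal summary."""
    f = layer_energy_fractions(layers)
    return float(np.dot(np.arange(len(layers)), f))

# Toy shower concentrated in the first layer -> depth well below 1.
layers = [np.full((3, 96), 1.0), np.full((12, 12), 0.1), np.full((12, 6), 0.01)]
assert abs(layer_energy_fractions(layers).sum() - 1.0) < 1e-9
assert shower_depth(layers) < 1.0
```

Comparing the 1D distributions of such statistics between Geant4 and CaloGAN samples is the quantitative counterpart of the qualitative checks above.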
Examining 1D statistics does not probe correlations between shower shapes or higher dimensional aspects of the probability distribution. One way to examine the full shower phase space is to study classification performance, as described in the next section.
III.3 Classification as a Performance Proxy
When training a six-layer, fully-connected classification model on the 504-dimensional pixel space of the concatenated representation of shower energy depositions across all calorimeter layers, no major classification degradation is observed for out-of-domain learning when trained on the full simulation, i.e. when the network is trained on Geant4 samples but evaluated on CaloGAN samples. Specifically, although the classification accuracy reaches 99% when training and evaluating on CaloGAN showers – which points to an over-differentiation among particle types in the CaloGAN dataset – in both discrimination tasks, the evaluation of the network trained on Geant4 images results in no accuracy decrease in the former task, and only a 2% decrease in the latter, when compared to the classifier tested on CaloGAN samples. The stability of the accuracy metric implies that the CaloGAN succeeds at representing at least as much variation among showers initiated by different particles as is necessary to classify them using the same features in Geant4. Training on CaloGAN and testing on Geant4 does show significant degradation, indicating that the GAN is inventing new class-dependent features or underrepresenting class-independent features. While percent-level variations may be important for some applications, using classification as a generator diagnostic is an important tool for exposing the modeling of interclass shower variations.
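The cross-domain evaluation protocol described above (train on one simulator, test on the other) can be sketched with a much simpler classifier and synthetic stand-in data; the nearest-centroid model and the class-dependent offsets below are illustrative assumptions, not the six-layer network or the actual shower samples:

```python
import numpy as np

def nearest_centroid_fit(x, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    return {c: x[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(x - centroids[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(d, axis=0)]

rng = np.random.default_rng(0)
# 'Geant4-like' training set and 'GAN-like' test set sharing the same
# class-dependent offset, standing in for the two simulators.
x_g4 = rng.normal(size=(200, 504)); y_g4 = rng.integers(0, 2, 200)
x_gan = rng.normal(size=(200, 504)); y_gan = rng.integers(0, 2, 200)
x_g4[y_g4 == 1] += 0.5
x_gan[y_gan == 1] += 0.5

model = nearest_centroid_fit(x_g4, y_g4)  # train on one source...
acc = float(np.mean(nearest_centroid_predict(model, x_gan) == y_gan))  # ...test on the other
assert acc > 0.9  # class-dependent features transfer across domains
```

If the generator failed to reproduce the class-dependent features of the training simulator, this transferred accuracy would collapse, which is exactly the diagnostic signal exploited in the text.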
III.4 Computational Performance
Directly generating deposited energy per calorimeter cell, rather than particle dynamics, renders the model’s time complexity invariant to nominal energy, whereas Geant4 shower simulation runtime increases significantly with higher energy. Therefore, the CaloGAN affords sizable simulation-time speed-ups compared to Geant4. All benchmarks are performed on Intel Xeon® 2.6 GHz processors for CPU time and a single NVIDIA® K80 for GPU time. When simulating a single positron shower with energy drawn uniformly between 1 GeV and 100 GeV, the CaloGAN is already substantially faster than Geant4 on both CPU and GPU. However, when batching is utilized, the CaloGAN throughput improves significantly – with a batch size of 1024 (not unrealistic given the embarrassingly parallel nature of electromagnetic showering), per-shower generation is orders of magnitude faster on CPU and up to five orders of magnitude faster on GPU.
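The source of the batched speed-up is generic to neural-network inference: the fixed per-call overhead is amortized over the batch. A toy timing sketch (a single random weight matrix stands in for the generator; the absolute times are machine-dependent and not those reported above):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 504))  # stand-in for a one-layer generator

def per_sample_time(batch_size, reps=20):
    """Average wall-clock time per generated sample at a given batch size."""
    z = rng.normal(size=(batch_size, 64))
    t0 = time.perf_counter()
    for _ in range(reps):
        _ = np.maximum(z @ W, 0.0)
    return (time.perf_counter() - t0) / (reps * batch_size)

t1 = per_sample_time(1)
t1024 = per_sample_time(1024)
# Amortizing fixed per-call overhead over 1024 samples lowers the
# per-shower cost; the same effect drives the CaloGAN batched throughput.
print(f"per-sample time: batch=1 {t1:.2e}s, batch=1024 {t1024:.2e}s")
```

On accelerators the effect is even more pronounced, since large batches keep the hardware saturated.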
IV Outlook and Future Work
This letter demonstrates that Generative Adversarial Networks represent a powerful new tool for efficient simulation. Our ability to infuse physics domain knowledge into the neural network demonstrates the flexibility and extensibility of the method for field-specific applications and explicit mismodeling mitigation.
Prior to this work, the prospect of a GAN-based calorimeter simulation had generated considerable excitement within the high energy physics community. The availability and performance of the CaloGAN has attracted further interest as a concrete and publicly available demonstration of the power and drawbacks of a GAN-based calorimeter simulation. In addition to the applicability within individual experiments, variations of the CaloGAN are also being studied as a generic tool for future Geant software versions. While the CaloGAN is currently structured as a fast simulation tool, in the future it could also be trained on test-beam data to replace or augment a full simulation tool.
Future work will focus on incorporating the most recent cutting-edge innovations from the GAN literature to stabilize the training procedure and improve convergence to optimal solutions Gulrajani et al. (2017); Nowozin et al. (2016); Heusel et al. (2017); Berthelot et al. (). While our primary effort will be to improve and maintain this technique for event simulation at the LHC, this neural-network approach retains generalization power to other fields in which computationally expensive simulation inhibits result productivity.
This work was supported in part by the Office of High Energy Physics of the U.S. Department of Energy under contracts DE-AC02-05CH11231 and DE-FG02-92ER40704. The authors would like to thank Wahid Bhimji, Zach Marshall, Mustafa Mustafa, and Prabhat, for helpful conversations.
- Flynn (2015) J. Flynn, Computing Resources Scrutiny Group Report, Tech. Rep. CERN-RRB-2015-014 (CERN, Geneva, 2015).
- Karavakis et al. (2014) E. Karavakis et al., Journal of Physics: Conference Series 513, 062024 (2014).
- Bozzi (2015) C. Bozzi, LHCb Computing Resource usage in 2014 (II), Tech. Rep. LHCb-PUB-2015-004. CERN-LHCb-PUB-2015-004 (CERN, Geneva, 2015).
- Aad et al. (2012) G. Aad et al. (ATLAS), Phys. Lett. B716, 1 (2012), arXiv:1207.7214 [hep-ex] .
- Chatrchyan et al. (2012) S. Chatrchyan et al. (CMS), Phys. Lett. B716, 30 (2012), arXiv:1207.7235 [hep-ex] .
- GEANT4 Collaboration (2003) GEANT4 Collaboration, Nuclear Instruments and Methods in Physics Research A 506, 250 (2003).
- Aad et al. (2010) G. Aad et al. (ATLAS), Eur. Phys. J. C70, 823 (2010), arXiv:1005.4568 [physics.ins-det] .
- Rahmat et al. (2012) R. Rahmat, R. Kroeger, and A. Giammanco, Journal of Physics: Conference Series 396, 062016 (2012).
- de Florian et al. (2016) D. de Florian et al. (LHC Higgs Cross Section Working Group), (2016), 10.23731/CYRM-2017-002, arXiv:1610.07922 [hep-ph] .
- Aaboud et al. (2016) M. Aaboud et al. (ATLAS), Phys. Rev. Lett. 117, 182002 (2016), arXiv:1606.02625 [hep-ex] .
- CMS (2016) Measurement of the inelastic proton-proton cross section at , Tech. Rep. CMS-PAS-FSQ-15-005 (CERN, Geneva, 2016).
- Grindhammer and Peters (1993) G. Grindhammer and S. Peters, in International Conference on Monte Carlo Simulation in High-Energy and Nuclear Physics - MC 93 Tallahassee, Florida, February 22-26, 1993 (1993) arXiv:hep-ex/0001020 [hep-ex] .
- Beckingham et al. (2010) M. Beckingham et al. (ATLAS), The simulation principle and performance of the ATLAS fast calorimeter simulation FastCaloSim, Tech. Rep. ATL-PHYS-PUB-2010-013 (CERN, Geneva, 2010).
- Grindhammer et al. (1990) G. Grindhammer, M. Rudowicz, and S. Peters, SSC Workshop on Calorimetry for the Superconducting Super Collider Tuscaloosa, Alabama, March 13-17, 1989, Nucl. Instrum. Meth. A290, 469 (1990).
- Barberio et al. (2009) E. Barberio, J. Boudreau, B. Butler, S. L. Cheung, A. Dell’Acqua, A. D. Simone, E. Ehrenfeld, M. V. Gallas, A. Glazov, Z. Marshall, J. Mueller, R. Placakyte, A. Rimoldi, P. Savard, V. Tsulaia, A. Waugh, and C. C. Young, Journal of Physics: Conference Series 160, 012082 (2009).
- Ravanbakhsh et al. (2016) S. Ravanbakhsh, F. Lanusse, R. Mandelbaum, J. Schneider, and B. Poczos, (2016), arXiv:1609.05796 [astro-ph.IM] .
- Schawinski et al. (2017) K. Schawinski, C. Zhang, H. Zhang, L. Fowler, and G. K. Santhanam, 467 (2017), arXiv:1702.00403 [astro-ph.IM] .
- Mosser et al. (2017) L. Mosser, O. Dubrule, and M. J. Blunt, ArXiv e-prints (2017), arXiv:1704.03225 [cs.CV] .
- Kadurin et al. (2016) A. Kadurin, A. Aliper, A. Kazennov, P. Mamoshina, Q. Vanhaelen, K. Khrabrov, and A. Zhavoronkov, Oncotarget 8 (2016).
- Goodfellow et al. (2014) I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, ArXiv e-prints (2014), arXiv:1406.2661 .
- Radford et al. (2015) A. Radford, L. Metz, and S. Chintala, ArXiv e-prints (2015), arXiv:1511.06434 .
- Chetlur et al. (2014) S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, ArXiv e-prints (2014), arXiv:1410.0759 [cs.NE] .
- de Oliveira et al. (2017) L. de Oliveira, M. Paganini, and B. Nachman, (2017), arXiv:1701.05927 [stat.ML] .
- Cogan et al. (2015) J. Cogan, M. Kagan, E. Strauss, and A. Schwarztman, JHEP 02, 118 (2015), arXiv:1407.5675 [hep-ph] .
- Sjostrand et al. (2006) T. Sjostrand, S. Mrenna, and P. Z. Skands, JHEP 05, 026 (2006), arXiv:hep-ph/0603175 [hep-ph] .
- Xu et al. (2015) K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, in Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 37 (PMLR, Lille, France, 2015) pp. 2048–2057.
- Zhang et al. (2016) H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas, ArXiv e-prints (2016), arXiv:1612.03242 [cs.CV] .
- Salimans et al. (2016) T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, ArXiv e-prints (2016), arXiv:1606.03498 .
- Nachman et al. (2017a) B. Nachman, L. de Oliveira, and M. Paganini, (2017a), 10.17632/pvn3xc3wy5.1.
- Aad et al. (2017) G. Aad et al. (ATLAS), Eur. Phys. J. C77, 490 (2017), arXiv:1603.02934 [hep-ex] .
- Nachman et al. (2017b) B. Nachman, L. de Oliveira, and M. Paganini, (2017b), 10.5281/zenodo.584155.
- Olive et al. (2014) K. A. Olive et al. (Particle Data Group), Chin. Phys. C38, 090001 (2014).
- Gulrajani et al. (2017) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, ArXiv e-prints (2017), arXiv:1704.00028 [cs.LG] .
- Nowozin et al. (2016) S. Nowozin, B. Cseke, and R. Tomioka, ArXiv e-prints (2016), arXiv:1606.00709 [stat.ML] .
- Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, G. Klambauer, and S. Hochreiter, ArXiv e-prints (2017), arXiv:1706.08500 [cs.LG] .
- Berthelot et al. D. Berthelot, T. Schumm, and L. Metz, ArXiv e-prints, arXiv:1703.10717 [stat.LG] .