
Learning Particle Physics by Example:
Location-Aware Generative Adversarial Networks for Physics Synthesis

Luke de Oliveira, Michela Paganini, and Benjamin Nachman
lukedeoliveira@lbl.gov, michela.paganini@yale.edu, bnachman@cern.ch

We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in High Energy Particle Physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images – 2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in High Energy Particle Physics.

institutetext: Lawrence Berkeley National Laboratory, 1 Cyclotron Rd, Berkeley, CA 94720, USA
institutetext: Department of Physics, Yale University, New Haven, CT 06520, USA

1 Introduction

The task of learning a generative model of a data distribution has long been a difficult yet rewarding research direction in the statistics and machine learning communities. Often positioned as a complement to discriminative models, generative models face a more difficult challenge than their discriminative counterparts: reproducing rich, structured distributions such as natural language, audio, and images. Deep learning-based generative models, with the promise of hierarchical, end-to-end learned features, are seen as one of the most promising avenues towards building generative models capable of handling the richest and most non-linear of data spaces. Generative Adversarial Networks (GANs) goodfellow2014generative (), a relatively new framework for learning a deep generative model, cast the task as a two-player non-cooperative game between a generator network, $G$, and a discriminator network, $D$. The generator tries to produce samples that the discriminator cannot distinguish as fake when compared to real samples drawn from the data distribution, while the discriminator tries to correctly identify whether a sample it is shown originates from the generator (fake) or was sampled from the data distribution (real). When $G$ and $D$ are relaxed to be arbitrary functions, there exists a unique equilibrium in which $G$ reproduces the true data distribution and $D$ is 1/2 everywhere goodfellow2014generative (). Most recent work on GANs has focused on recovering the data distributions of natural images; many recent advances in image generation using GANs have shown great promise in producing photo-realistic natural images at high resolution goodfellow2014generative (); odena_acgan (); info_gan (); improved_gan (); conditional_gan (); semisup_gan (); dcgan (); whatwheredraw (); text2im (); stackgan (), while attempting to address known problems with stability improved_gan (); dcgan () and the lack of convergence guarantees distinguishability ().

With the growing complexity of theoretical models and the power of computers, many scientific and engineering projects increasingly rely on detailed simulations and generative modeling. This is particularly true for High Energy Particle Physics, where precise Monte Carlo (MC) simulations are used to model physical processes spanning distance scales from far below the size of a proton all the way to the macroscopic distances of detectors. For example, both the ATLAS Aad:2010ah () and CMS Bayatian:922757 () collaborations model the detailed interactions of particles with matter to accurately describe the detector response. These full simulations are based on the Geant4 package Agostinelli:2002hh () and are time and CPU intensive (a single fully simulated event can take considerable time), a serious challenge given the large number of events that must be simulated each year. In fact, simulating this expansive dynamic range with the required precision is very expensive. Various approximations can be used to build faster simulations Edmonds:2008zz (); ATLAS:1300517 (); Abdullin:2011zz (), but the time required is still non-negligible and these approximations are not applicable in all physics applications. Another potential bottleneck is the modeling of quark and gluon interactions at the smallest distance scales (matrix element calculations). Calculations with a large number of final-state objects (e.g., higher orders in the perturbative series) are very time consuming and can compete with the detector simulation for the total event generation time. Recent work focusing on algorithmic improvements that leverage High Performance Computing capabilities has helped combat long generation times Childers:2015pwh (). However, not all processes are massively parallelizable, and time at supercomputers is a scarce resource.

Our goal is to develop a new paradigm for fast event generation in high energy physics through the use of GANs. We start by tackling a constrained version of the larger problem, where we use the concept of a jet image Cogan:2014oua () to show that the idealized 2D radiation pattern from high energy quarks and gluons can be efficiently and effectively reproduced by a GAN. This work builds upon recent developments in jet image classification with deep neural networks deOliveira:2015xxd (); Almeida:2015jua (); Komiske:2016rsd (); Barnard:2016qma (); Baldi:2016fql (). By showing that generated jet images resemble the true simulated images in physically meaningful ways, we demonstrate the aptitude of GANs for future applications.

This paper is organized as follows. Sections 2 and 3 provide brief introductions to the physics of jet images and the structure of Generative Adversarial Networks, respectively. New GAN architectures developed specifically for jet images are described in Sec. 4. The results from our model are shown in Sec. 5, with an extensive discussion of what the neural network is learning. The paper concludes with Sec. 6.

2 Dataset

Jets are the observable result of quarks and gluons scattering at high energy. A collimated stream of protons and other hadrons forms in the direction of the initiating quark or gluon; clusters of such particles are called jets. A jet image is a two-dimensional representation of the radiation pattern within a jet: the distribution of the locations and energies of the jet’s constituent particles. Jet formation is finished well before it can be detected, so it is sufficient to consider the radiation pattern on a two-dimensional surface spanned by the pseudorapidity η and the azimuthal angle φ (while φ is a true angle, η is only approximately equal to the polar angle; however, the radiation pattern is nearly symmetric in η and φ, and so these standard coordinates are used to describe the jet constituent locations). The jet image consists of a regular grid of pixels in (η, φ). This is analogous to a calorimeter without longitudinal segmentation (e.g., the CMS detector Chatrchyan:2008aa ()). Adding layers, either through longitudinal segmentation or by combining information from multiple detectors, has recently been studied for classification Komiske:2016rsd () and generation Paganini:2017hrr (), but is beyond the scope of this paper.

Jet images are constructed and pre-processed using the setup described in Ref. deOliveira:2015xxd () and briefly summarized in this section. The finite granularity of a calorimeter is simulated with a regular grid in η and φ. The energy of each calorimeter cell is given by the sum of the energies of all particles incident on the cell. Cells with positive energy are assigned to jets using the anti-kt clustering algorithm antiktpaper () with a fixed radius parameter via the software package FastJet 3.2.1 fastjet (). To mitigate the contribution from the underlying event, jets are trimmed trimming () by re-clustering the constituents into subjets and dropping those subjets that carry less than a fixed fraction of the transverse momentum of the parent jet. Trimming also reduces the impact of pileup: multiple proton-proton collisions occurring in the same event as the hard-scatter process. Jet images are formed by translating the η and φ of all constituents of a given jet so that its highest-$p_\mathrm{T}$ subjet is centered at the origin. A rectangular grid of pixels centered at the origin forms the basis of the jet image. The intensity of each pixel is the transverse momentum corresponding to the energy and pseudorapidity of the constituent calorimeter cell, $p_\mathrm{T} = E/\cosh(\eta)$. The radiation pattern is symmetric about the origin of the jet image, and so the images are rotated (for more details about this rotation, which slightly differs from Ref. deOliveira:2015xxd (), see Appendix B): the subjet with the second highest $p_\mathrm{T}$ (or, in its absence, the direction of the first principal component) is placed at a fixed angle with respect to the axes. Finally, a parity transform about the vertical axis is applied if the left side of the image has more energy than the right side.
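The translation and parity steps above can be sketched in a few lines of NumPy. This is a simplified illustration only: it uses an integer-pixel shift and a hypothetical 25 × 25 grid, whereas the actual pipeline translates in (η, φ) space and performs the rotation step with spline interpolation (see Appendix B).

```python
import numpy as np

def center_on_peak(image):
    """Translate the image so its most intense pixel sits at the center
    (integer-pixel shift; the real pipeline shifts in (eta, phi) space)."""
    n = image.shape[0]
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    return np.roll(image, (n // 2 - iy, n // 2 - ix), axis=(0, 1))

def parity_flip(image):
    """Flip about the vertical axis if the left half carries more energy."""
    half = image.shape[1] // 2
    left, right = image[:, :half].sum(), image[:, -half:].sum()
    return image[:, ::-1] if left > right else image

img = np.zeros((25, 25))
img[3, 4] = 10.0   # leading-subjet pixel, off-center
img[3, 2] = 1.0    # softer radiation to its left
img = parity_flip(center_on_peak(img))
print(img[12, 12])  # leading pixel now at the center -> 10.0
```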

Jet images vary significantly depending on the process that produced them. One high-profile classification task is the separation of jets originating from high energy W bosons (signal) from generic quark and gluon jets (background). Both signal and background are simulated using Pythia 8.219 Pythia8 (); Pythia () at a fixed center-of-mass energy. In order to largely factor out the impact of the jet transverse momentum ($p_\mathrm{T}$), we focus on a particular $p_\mathrm{T}$ range. A typical jet image from the simulated dataset dataset () is shown in Fig. 1. The image has already been processed so that the leading subjet is centered at the origin and the second highest-$p_\mathrm{T}$ subjet lies at a fixed angle in the translated space. Unlike natural images from commonly studied datasets, jet images do not have smooth features and are highly sparse, with only a small fraction of pixels active. This necessitates a dedicated GAN setup, described in Sec. 4.

Figure 1: A typical jet image.

3 Generative Adversarial Networks

Let $\mathcal{X}$ be the sample space – in this application, the space of grey-scale square images of given dimensions. Naturally occurring samples $x \in \mathcal{X}$ follow a data distribution $p_{\text{data}}(x)$ whose true form we do not know (i.e., $p_{\text{data}}$ is the distribution we wish to recover).

A generator is a (potentially stochastic) function $G : \mathcal{Z} \to \mathcal{X}$, where $z \in \mathcal{Z}$ is a latent vector governing the generation process, and the space of synthesized examples is described as

$$ \mathcal{X}_{\text{synth}} = \{\, G(z) : z \in \mathcal{Z} \,\}. $$
A discriminator is a map $D : \mathcal{X} \to [0, 1]$, which assigns to any sample $x$ a probability of being fake, $D(x) = 1$, or real, $D(x) = 0$. The loss of the system can be expressed as

$$ V(D, G) = \mathbb{E}_{z \sim p_z}\big[\log D(G(z))\big] + \mathbb{E}_{x \sim p_{\text{data}}}\big[\log\big(1 - D(x)\big)\big]. \qquad (1) $$
The first term of the loss function is associated with the discriminator classifying a GAN-generated sample as fake, while the second term is associated with it classifying a sample drawn from the data distribution as real. Equation 1 is minimized by the generator and maximized by the discriminator. Note that the generator only directly affects the first term during backpropagation. At training time, gradient descent steps are taken in an alternating fashion between the generator and the discriminator. Due to this, and to the fact that both players are usually parametrized by non-convex neural networks, GANs are considered quite unstable, with no guarantees of convergence distinguishability (). To combat this, a variety of ad-hoc procedures for controlling convergence have emerged in the field, in particular relating to the generation of natural images. Architectural constraints and optimizer configurations introduced in dcgan () have provided a well-studied set of defaults and starting points for training GANs.
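The alternating optimization described above can be illustrated with a deliberately tiny toy example: a one-dimensional "GAN" with a logistic discriminator and an affine generator, trained with hand-derived gradient steps on the two-term loss (using the convention above that the discriminator outputs the probability of being fake). Everything here is illustrative and unrelated to the architecture used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_step(d_params, real_batch, fake_batch, lr=0.1):
    """One ascent step on E[log D(fake)] + E[log(1 - D(real))],
    where D(x) = sigmoid(w*x + b) is the probability that x is fake."""
    w, b = d_params
    def grad(batch, toward_fake):
        p = 1.0 / (1.0 + np.exp(-(w * batch + b)))       # D(x)
        # gradient of log p (fake batch) or log(1 - p) (real batch)
        g = (1.0 - p) if toward_fake else -p
        return np.mean(g * batch), np.mean(g)
    gw_f, gb_f = grad(fake_batch, True)
    gw_r, gb_r = grad(real_batch, False)
    return (w + lr * (gw_f + gw_r), b + lr * (gb_f + gb_r))

def generator_step(g_params, d_params, z_batch, lr=0.1):
    """One descent step for G on E[log D(G(z))], the only term G affects."""
    a, c = g_params
    w, b = d_params
    x = a * z_batch + c                                  # G(z)
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))               # D(G(z))
    ga = np.mean((1.0 - p) * w * z_batch)                # chain rule through G
    gc = np.mean((1.0 - p) * w)
    return (a - lr * ga, c - lr * gc)

d_params, g_params = (0.5, 0.0), (1.0, 0.0)
for _ in range(200):                                     # alternating updates
    z = rng.normal(size=64)
    real = rng.normal(loc=2.0, size=64)                  # toy "data" distribution
    fake = g_params[0] * z + g_params[1]
    d_params = discriminator_step(d_params, real, fake)
    g_params = generator_step(g_params, d_params, z)
print(np.isfinite([*g_params, *d_params]).all())  # True
```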

Many newer improvements also help avoid mode collapse, a common failure mode for GANs in which the generator learns to produce a single element of the sample space that is maximally confusing for the discriminator. For instance, minibatch discrimination and feature matching improved_gan () allow the discriminator to use batch-level features and statistics, effectively rendering mode collapse suboptimal for the generator. It has also been empirically shown whatwheredraw (); odena_acgan (); info_gan (); improved_gan () that adding auxiliary tasks reduces this tendency and improves convergence stability; side information is therefore often used as additional input to the discriminator / generator improved_gan (); conditional_gan (); whatwheredraw (), as a quantity to be reconstructed by the discriminator semisup_gan (); info_gan (), or both odena_acgan ().

4 Location-Aware Generative Adversarial Networks (LAGANs)

We introduce a series of architectural modifications to the DCGAN dcgan () framework in order to take advantage of the pixel-level symmetry properties of jet images while explicitly inducing location-based feature detection.

We build an auxiliary task into our system following the ACGAN odena_acgan () formulation. In addition to the primary task, in which the discriminator network must learn to distinguish fake jet images from real ones, the discriminator is also tasked with jointly learning to classify boosted W bosons (signal) and QCD jets (background), with the aim of learning the conditional data distribution.

We design both the generator and the discriminator using a single 2D convolutional layer followed by 2D locally connected layers without weight sharing. Though locally connected layers are rarely seen as useful components of computer vision systems applied to natural images, due to their lack of translation invariance and parameter efficiency, we show that the location specificity of locally connected layers allows these transformations to outperform their convolutional counterparts on tasks relating to jet images, as previously shown in deOliveira:2015xxd (); Baldi:2016fql (). Diagrams to better illustrate the nature of convolutional and locally connected layers are available in Figures 2 and 3.

Figure 2: In the simplest (i.e., all-square) case, a convolutional layer consists of $f$ filters of size $k \times k$ sliding across an $n \times n$ image with stride $s$. For a valid convolution, the dimensions of the output volume will be $n' \times n' \times f$, where $n' = \lfloor (n - k)/s \rfloor + 1$.
Figure 3: A locally connected layer consists of unique filters applied to each individual patch of the image. Each group of filters is specifically learned for one patch, and no filter is slid across the entire image. The diagram shows the edge case in which the stride $s$ is equal to the filter size $k$, but in general patches would partially overlap. A convolution, as described above, is simply a locally connected layer with a weight sharing constraint.
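The parameter cost of trading weight sharing for location specificity is easy to quantify with standard layer-arithmetic formulas. The helper functions and layer sizes below are illustrative, not the exact LAGAN configuration:

```python
def valid_out(n, k, s=1):
    """Output side length of a 'valid' convolution on an n x n input."""
    return (n - k) // s + 1

def conv_params(k, c_in, f):
    """A convolutional layer shares one k x k x c_in kernel (+ bias) per filter."""
    return f * (k * k * c_in + 1)

def local_params(n, k, c_in, f, s=1):
    """A locally connected layer learns a separate kernel for every patch."""
    n_out = valid_out(n, k, s)
    return n_out * n_out * f * (k * k * c_in + 1)

# e.g. a 25 x 25 single-channel image, 5 x 5 receptive fields, 8 feature maps
print(valid_out(25, 5))             # 21
print(conv_params(5, 1, 8))         # 208
print(local_params(25, 5, 1, 8))    # 21 * 21 * 208 = 91728
```

The locally connected variant pays a factor of $n' \times n'$ more parameters than its convolutional counterpart, which is why it only becomes attractive when, as for jet images, absolute location within the image carries physical meaning.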

In preliminary investigations, we found experimental evidence showing the efficacy of fully connected networks in producing the central constituents of jet images (which is reminiscent of the findings of deOliveira:2015xxd () on the discriminative power of fully connected networks applied to jet images). We also observed that fully convolutional systems excel at capturing the less location-specific low energy radiation dispersion pattern. This informed the choice of 2D locally connected layers to obtain a parameter-efficient model while retaining location-specific information.

As an additional experiment, we also trained a multi-headed model, essentially employing a localization network and a dispersion network, where each stream learns the portion of the data distribution that reflects its comparative advantage. Though we do not present results from this model due to its ad-hoc construction, we provide code and model weights for further analysis code ().

Since jet images represent a fundamental challenge for most GAN-inspired architectures due to the extreme levels of sparsity and unbounded nature of pixel levels Cogan:2014oua (), we apply additional modifications to the standard DCGAN framework. To achieve sparsity, we utilize Rectified Linear Units maas2013rectifier () in the last layer of the generator, even though these activations are not commonplace in most GAN architectures due to issues with sparse gradients how_to_train (). However, we remain consistent with dcgan (); how_to_train () and use Leaky Rectified Linear Units leaky () throughout both the generator and discriminator. We also apply minibatch discrimination improved_gan () on the last layer of features learned by the discriminator, which we found was crucial to obtain both stability and sample diversity in the face of an extremely sparse data distribution. Both batch normalization batchnorm () and label flipping improved_gan (); how_to_train () were also essential in obtaining stability in light of the large dynamic range.

In summary, a Location-Aware Generative Adversarial Network (LAGAN) is a set of guidelines for building GANs designed specifically for applications in a sparse regime, where location within the image is critical, and where the system needs to be end-to-end differentiable (as opposed to requiring hard attention). Examples of such applications, in addition to High Energy Physics, include medical imaging, geological data, and electron microscopy. The characteristics of a LAGAN can be summarized as follows:

  • Locally connected layers (or any attentional component that can attend to location-specific features), to be used in both the generator and the discriminator

  • Rectified Linear Units in the last layer to induce sparsity

  • Batch normalization, as also recommended in dcgan (), to help with weight initialization and gradient stability

  • Minibatch discrimination improved_gan (), which experimentally was found to be crucial in modeling both the high dynamic range and the high levels of sparsity
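Minibatch discrimination, the last item above, can be sketched in NumPy following the formulation of improved_gan (): each sample's features are projected through a tensor, exponentiated L1 distances to every other sample in the batch are summed per kernel, and the resulting batch statistics are appended to the features. In this sketch the tensor is random, standing in for a learned parameter:

```python
import numpy as np

def minibatch_features(feats, T):
    """Minibatch discrimination: compare a projection of each sample's
    features against every other sample in the batch.
    feats: (batch, n_feat); T: (n_feat, n_kernels, kernel_dim)."""
    M = np.einsum('bf,fkd->bkd', feats, T)               # (batch, kernels, dim)
    # pairwise L1 distances between projections, per kernel
    dists = np.abs(M[:, None, :, :] - M[None, :, :, :]).sum(axis=-1)
    o = np.exp(-dists).sum(axis=1) - 1.0                 # drop the self-distance term
    return np.concatenate([feats, o], axis=1)            # append batch statistics

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 6))       # batch of 4 samples, 6 features each
T = rng.normal(size=(6, 20, 10))      # twenty 10-dimensional kernels, as in Sec. 4.1
out = minibatch_features(feats, T)
print(out.shape)  # (4, 26): original features plus one statistic per kernel
```

Because the appended statistics depend on the whole batch, a generator that collapses to a single mode produces anomalously large similarity values, which the discriminator can exploit.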

4.1 Architecture Details, Implementation, and Training

Figure 4: LAGAN architecture

A diagram of the architecture is available in Fig. 4.

We utilize low-dimensional vectors $z$ as our latent space, with final generated outputs occupying the $25 \times 25$ image space.

Before passing $z$ into the generator, we perform a Hadamard product between $z$ and a trainable lookup-table embedding of the desired class (boosted W, or QCD), effectively conditioning the generation procedure odena_acgan ().

The generator consists of a same-bordered 2D convolution, followed by two valid-bordered 2D locally connected layers, with 64, 6, and 6 feature maps respectively. Receptive field sizes are chosen separately for the convolutional layer and the two locally connected layers. Sandwiched between the layers are 2x-upsampling operations and channel-wise batch normalization batchnorm () layers. On top of the last layer, we place a final ReLU-activated locally connected layer with a single feature map and no bias term.

The discriminator consists of a same-bordered 2D convolutional layer with 32 unique filters, followed by three valid-bordered 2D locally connected layers, all with 8 feature maps and with receptive field sizes chosen per layer. After each locally connected layer, we apply channel-wise batch normalization. We use the last feature layer as input to a minibatch discrimination operation with twenty 10-dimensional kernels. These batch-level features are then concatenated with the feature layer before being mapped through sigmoids for both the primary and auxiliary tasks.

At training time, label flipping alleviates the tendency of the GAN to produce very signal-like and very background-like samples, and attempts to prevent saturation at the label extremes. We flip labels according to the following scheme: when training the discriminator, we flip 5% of the target labels for the primary classification output, as well as 5% of the target labels for the auxiliary classification output on batches that were solely fake images, essentially tricking it into misclassifying fake versus real images in the first case, and signal versus background GAN-generated images in the second case; in addition, while training the generator, 9% of the time we ask it to produce signal images that the discriminator would classify as background, and vice versa.
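The label-flipping step can be sketched as follows. This is a minimal illustration of flipping a fixed fraction of binary targets; the batch structure and which outputs get flipped are simplified relative to the full scheme above:

```python
import numpy as np

def flip_labels(labels, frac, rng):
    """Flip a random fraction of binary labels (1 -> 0, 0 -> 1)."""
    labels = labels.copy()
    n_flip = int(round(frac * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    labels[idx] = 1 - labels[idx]
    return labels

rng = np.random.default_rng(0)
real_fake = np.ones(100, dtype=int)          # e.g. a batch of targets labeled "fake" = 1
noisy = flip_labels(real_fake, 0.05, rng)    # 5% of the targets flipped
print(int((noisy != real_fake).sum()))       # 5
```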

We train the system end-to-end by taking alternating steps in the gradient direction for both the generator and the discriminator. We employ the Adam adam () optimizer, utilizing the sensible parameters outlined in dcgan () with a batch size of 100 for 40 epochs. We construct all models using Keras keras () and TensorFlow tensorflow2015-whitepaper (), and utilize two NVIDIA® Titan X (Pascal) GPUs for training.

5 Generating Jet Images with Adversarial Networks

The proposed LAGAN architecture is validated through quantitative and qualitative means on the task of generating realistic-looking jet images. In this section, we generate 200k jet images and compare them to 200k Pythia images in order to evaluate, both quantitatively and qualitatively, their content (5.1), to explore the powerful information provided by some of the most representative images (5.2), to dig deeper into the inner workings of the generator (5.3) and discriminator (5.4), to monitor the development of the training procedure (5.5), to compare with other architectures (5.6), and to briefly evaluate the computational efficiency of the proposed method (5.7).

5.1 Image Content Quality Assessment

Quantifying the efficacy of the generator is challenging because of the dimensionality of the images. However, we can assess the performance of the network by considering low-dimensional, physically inspired features of the 625 dimensional image space. Furthermore, by directly comparing images, we can visualize what aspects of the radiation pattern the network is associating with signal and background processes, and what regions of the image are harder to generate via adversarial training.

The first one-dimensional quantity to reproduce is the distribution of the pixel intensities aggregated over all pixels in the image. Intensities span a wide range of values, from the energy scale of the jet down to the machine epsilon for double-precision arithmetic. Because of inherent numerical degradation in the pre-processing steps, images acquire unphysical low-intensity pixel values (bicubic spline interpolation in the rotation process causes a large number of pixels to be interpolated between their original value and zero, the most likely intensity value of neighboring cells; though a zero-order interpolation would solve the sparsity problems, we empirically determine that the loss in jet-observable resolution is not worth the sparsity preservation, and a more in-depth discussion can be found in Appendix B). We truncate the distribution at a low-intensity threshold and discard all unphysical contributions. Figure 5 shows the distribution of pixel intensities for both Pythia and GAN-generated jet images. The full physical dynamic range is explored by the GAN, and the high-intensity region is faithfully reproduced.

Figure 5: The distribution of the pixel intensities for both fake (GAN) and real (Pythia) jet images, aggregated over all pixels in the jet image.

One of the unique properties of high energy particle physics is that we have a library of useful jet observables: physically motivated functions whose features are qualitatively (and in some cases, quantitatively) well understood. We can use the distributions of these observables to assess the ability of the GAN to mimic Pythia. Three such features of a jet image are the mass $m$, the transverse momentum $p_\mathrm{T}$, and the $n$-subjettiness ratio $\tau_{21}$ nsub ():

$$ p_\mathrm{T}^2 = \Big(\sum_i I_i \cos\phi_i\Big)^2 + \Big(\sum_i I_i \sin\phi_i\Big)^2, $$
$$ m^2 = \Big(\sum_i I_i \cosh\eta_i\Big)^2 - p_\mathrm{T}^2 - \Big(\sum_i I_i \sinh\eta_i\Big)^2, $$
$$ \tau_{21} = \frac{\tau_2}{\tau_1}, \qquad \tau_n \propto \sum_i I_i \, \min_{a \in \{1,\ldots,n\}} \sqrt{(\eta_i - \eta_a)^2 + (\phi_i - \phi_a)^2}, $$

where $I_i$, $\eta_i$, and $\phi_i$ are the pixel intensity, pseudorapidity, and azimuthal angle, respectively. The sums run over the entire image. The quantities $\eta_a$ and $\phi_a$ are axis values determined with the one-pass axis selection using the winner-take-all combination scheme Larkoski:2014uqa ().
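Under these definitions, each pixel acts as a massless particle carrying transverse momentum $I_i$ at $(\eta_i, \phi_i)$. A minimal NumPy implementation of the image mass and transverse momentum might look as follows; the grid edges and the single-pixel example are illustrative:

```python
import numpy as np

def image_observables(image, etas, phis):
    """Discretized jet observables from a pixel grid: each pixel is treated
    as a massless particle with transverse momentum I_i at (eta_i, phi_i)."""
    I = image.ravel()
    eta, phi = np.meshgrid(etas, phis, indexing='ij')
    eta, phi = eta.ravel(), phi.ravel()
    px = (I * np.cos(phi)).sum()
    py = (I * np.sin(phi)).sum()
    pz = (I * np.sinh(eta)).sum()
    E  = (I * np.cosh(eta)).sum()
    pt = np.hypot(px, py)
    m2 = max(E**2 - px**2 - py**2 - pz**2, 0.0)   # guard against round-off
    return pt, np.sqrt(m2)

axis = np.linspace(-1.25, 1.25, 25)               # illustrative 25 x 25 grid
img = np.zeros((25, 25))
img[12, 12] = 100.0                               # one pixel at (eta, phi) = (0, 0)
pt, m = image_observables(img, axis, axis)
print(pt, m)  # a lone massless pixel: pt = 100.0, m = 0.0
```

Spreading the same intensity over several pixels at different $(\eta, \phi)$ positions yields a non-zero mass, which is why the radiation pattern alone determines observables like the W mass peak discussed below.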

The distributions of $m$, $p_\mathrm{T}$, and $\tau_{21}$ are shown in Fig. 6 for both GAN and Pythia images. These quantities are highly non-linear, low-dimensional manifolds of the 625-dimensional space in which jet images live, so there is no guarantee that these non-trivial mappings will be preserved under generation. However, this property is desirable and easily verifiable. The GAN images reproduce many of the jet-observable features of the Pythia images. The shapes are nearly matched and, for example, the signal mass exhibits a peak near 80 GeV, which corresponds to the mass of the W boson that generates the hadronic shower. This is an emergent property: nothing in the training or architecture encourages it. Importantly, the generated GAN images are as diverse as the true Pythia images used for training; the fake images do not simply occupy a small subspace of credible images.

Figure 6: The distributions of image mass $m$, transverse momentum $p_\mathrm{T}$, and $n$-subjettiness $\tau_{21}$. See the text for definitions.

We claim that the network is not only learning to produce samples with a diverse range of $m$, $p_\mathrm{T}$, and $\tau_{21}$, but is also internally learning these projections of the true data distribution and making use of them in the discriminator. To provide evidence for this claim, we explore the relationships between the discriminator’s primary and auxiliary outputs and the physical quantities that the generated images possess, such as the mass $m$ and transverse momentum $p_\mathrm{T}$.

The auxiliary classifier is trained to achieve optimal performance in discriminating signal from background images. Fig. 7 confirms its ability to correctly identify the class most generated images belong to. Here, we can examine the dependence of the response on the kinematic variables. Notice how the network is making use of its internal representation of mass to identify signal-like images: the peak of the mass distribution for signal events is located around 80 GeV, and indeed images with mass around that point have a higher probability of being labeled as signal than the ones at very low or very high mass. Similarly, low-$p_\mathrm{T}$ images are more likely to be classified as background, while high-$p_\mathrm{T}$ ones have a higher probability of being categorized as signal images. This behavior is well understood from a physical standpoint and can easily be cross-checked against the $m$ and $p_\mathrm{T}$ distributions for boosted W and QCD jets displayed in Fig. 6. Although mass and transverse momentum influence the label assignment, the auxiliary classifier is only partially relying on these quantities; there is more knowledge learned by the network that allows it, for example, to still correctly classify the majority of signal and background images regardless of their $m$ and $p_\mathrm{T}$ values.

Figure 7: Auxiliary discriminator output as a function of mass (left) and as a function of transverse momentum (right). The top plots refer to signal (boosted W boson) images, while the bottom plots refer to background (QCD) images. All plots have been normalized to unity in each mass or $p_\mathrm{T}$ bin, such that each bin represents the percentage of images generated in a specific mass or $p_\mathrm{T}$ bin that have been assigned the indicated value of the auxiliary output.
Figure 8: Discriminator output as a function of mass (left) and as a function of transverse momentum (right). The top plots refer to signal (boosted W boson) images, while the bottom plots refer to background (QCD) images. All plots have been normalized to unity in each mass or $p_\mathrm{T}$ bin, such that each bin represents the percentage of images generated in a specific mass or $p_\mathrm{T}$ bin that have been assigned the indicated value of the discriminator output.

On the other hand, Fig. 8 shows that the training has converged to a stable point such that the discriminator outputs a value near 1/2 for almost all generated images. A high level of confusion from the discriminator is not just expected; it is one of the goals of the adversarial training procedure. In addition, we notice no strong mass or $p_\mathrm{T}$ dependence of the output, except for background images produced with $m$ and $p_\mathrm{T}$ values outside the ranges of the training set.

Studying one-dimensional projections onto powerful, physically inspired variables is useful, but we can also try to understand the network behavior by looking directly at images. This is similar to the visualization techniques used in Ref. deOliveira:2015xxd (), except that now there are not only signal and background classes, but also fake and real categories.

As is commonly done in the GAN literature, we examine randomly selected images from the data distribution and compare them with their nearest generated neighbor to show that we have not memorized the training set (Figure 9). The dispersion patterns appear realistic and, although our generative model seems inadequate in the very low intensity region, this turns out to be inconsequential because most of the important jet information is carried by the higher-$p_\mathrm{T}$ particles.
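The nearest-generated-neighbor check can be sketched as a Euclidean search in pixel space. This is a simplified stand-in for the procedure behind Figure 9; the batch sizes and the distance metric are illustrative:

```python
import numpy as np

def nearest_neighbors(real_batch, generated_batch):
    """Index of the closest generated image (Euclidean distance in pixel
    space) for each real image -- a standard check against memorization."""
    r = real_batch.reshape(len(real_batch), -1)
    g = generated_batch.reshape(len(generated_batch), -1)
    d2 = ((r[:, None, :] - g[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
gen = rng.random((50, 25, 25))                        # stand-in generated images
real = gen[[7, 3]] + 0.01 * rng.random((2, 25, 25))   # near copies of two samples
print(nearest_neighbors(real, gen))  # [7 3]
```

If the generator had merely memorized the training set, the nearest neighbors would be near-exact copies; visually distinct neighbors, as in Figure 9, indicate genuine sample diversity.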

Figure 9: Randomly selected Pythia images (top row) and their nearest generated neighbor (bottom row).

The topology of jet images becomes more visible when examining averages over jet images with specific characteristics. Figure 10 shows the average Pythia image, the average generated image, and their difference (similar plots for the average signal and background images are shown in Figs. 23 and 24). On average, the generative model produces large, spread-out, roughly circular dispersion patterns that approximate those of Pythia jet images. The generated images span multiple orders of magnitude, and differ from Pythia images in the pixel intensity of the central regions by only a few percent.

Figure 10: Average image produced by Pythia (left) and by the GAN (right), displayed on log scale, with the difference between the two (center), displayed on linear scale.
Figure 11: The difference between signal and background images generated by Pythia (left) and by the GAN (right). Similar images are available in Appendix A (Figs. 29 and 30) after conditioning on the discriminator’s output, which classifies images into real and fake.

For classification, the most important property of the GAN images is that the difference between signal and background should be faithfully reproduced. This is shown qualitatively, on average, in Fig. 11. The polarity of the difference between signal and background for both Pythia and GAN-generated images is consistent at the individual pixel level. However, the magnitude of the difference is stronger in GAN-generated images, leading us to believe that the GAN overestimates the importance of individual pixel contributions to the categorization of signal and background.

Another shortcoming of GANs, when applied to the generation of images that, like ours, are divided into inherently overlapping and indistinguishable classes, is their inability to carefully explore the grey area represented by the subspace of images that are hard to classify with the correct label. There exists an entire literature Cogan:2014oua (); deOliveira:2015xxd (); Baldi:2016fql () on classifying W boson versus QCD jet images using machine learning, and the topic itself is an active field of research in high energy physics. In particular, inserting this auxiliary classification task into the GAN’s loss function may induce the generator to produce very distinguishably W-like or QCD-like jet images: the production of ambiguous images is unfavorable under the ACGAN loss formulation. Evidence of this phenomenon can be found by analyzing the normalized confusion matrices (Fig. 12) generated from the auxiliary classifier’s output evaluated on Pythia and GAN-generated images. The plots show that classification appears to be easier for GAN-generated images, which the classifier correctly labels with a higher success rate than Pythia images.

A consequence of this inability to produce equivocal images is that GAN-generated images currently do not represent a viable, exclusive substitute for Pythia images as a training set for a W versus QCD classifier that is to be applied to the discrimination of Pythia images. We check this hypothesis by training a fully connected ‘MaxOut’ network in the same vein as the one in deOliveira:2015xxd (). Two trainings are performed: one on a subset of Pythia images, and one on GAN images. Both are evaluated on a test set composed exclusively of Pythia images. Fig. 13 clearly shows how, unlike Pythia images, GAN images cause the network to easily create two distinct representations for signal and background, which in turn leads to a higher misclassification rate when the model is applied to real images. We expect that with more research in this direction, coupled with better theoretical understanding, this problem will be ameliorated. Nonetheless, generated images can still be useful for data augmentation.

Figure 12: We plot the normalized confusion matrices showing the percentage of signal and background images that the auxiliary classifier successfully labels. The matrices are plotted for Pythia images (left) and GAN images (right).
Figure 13: Output of the two fully-connected nets, one trained on Pythia and one trained on GAN images, evaluated on Pythia images to discriminate boosted W bosons (signal) from QCD jets (bkg).

Finally, we provide further visual analysis of Pythia and GAN images, this time aimed at identifying the features the discriminator network uses to differentiate real and fake images. Fig. 14 shows the average radiation pattern in truly real and falsely fake images (Pythia images labeled real and fake, respectively), and the difference between the two. Conversely, Fig. 15 shows the average pattern in falsely real and truly fake images, and their difference. In both cases, the most striking feature is that a higher intensity in the central pixel is associated with a higher probability of the image being classified as fake, while real-looking images tend to have more diffuse radiation around the leading subjet, especially concentrated in the top right area adjacent to it.
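Average-image comparisons of this kind reduce to masking on the discriminator output and averaging; a minimal numpy sketch (the 0.5 threshold is our assumption for where "labeled real" begins):

```python
import numpy as np

def average_difference(images, d_out, thresh=0.5):
    """Difference between the average image labeled real and the average
    image labeled fake, given discriminator outputs d_out in [0, 1]."""
    real_like = images[d_out > thresh].mean(axis=0)
    fake_like = images[d_out <= thresh].mean(axis=0)
    return real_like - fake_like
```

A strongly negative central pixel in the returned map would correspond to the observation above that central-pixel intensity drives fake classifications.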

Figure 14: The average Pythia image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale. All plots refer to aggregated signal and background jets; similar plots for individual signal and background only jets are shown in Fig. 25, 26.
Figure 15: The average GAN-generated image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale. All plots refer to aggregated signal and background jets; similar plots for individual signal and background only jets are shown in Fig. 27, 28.

5.2 Most Representative Images

All plots so far were produced using 200k Pythia images and 200k GAN-generated images, averaged over all examples with specific characteristics. Now, to better understand what is being learned by the generative model, we consider only the 500 most representative jet images of a particular kind and average over them. This helps identify striking and unique features and avoids washing them out.
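Selecting the most representative images amounts to ranking by the relevant network output and averaging the top k; a minimal numpy sketch:

```python
import numpy as np

def most_representative(images, scores, k=500):
    """Average the k images with the highest score, e.g. the discriminator
    or auxiliary-classifier output for the class of interest."""
    top = np.argsort(scores)[-k:]  # indices of the k highest scores
    return images[top].mean(axis=0)
```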

The plots shown in Fig. LABEL:fig:500_real_fake provide unique insight into which features the discriminator learns and uses in its adversarial task of distinguishing generated images from real images. Looking at this figure from top to bottom, we see that, although Pythia images qualitatively differ from generated images, the metric D applies to discriminate between real and fake images is consistent across the two. The images deemed most real are largely asymmetric, with strong energy deposits in the bottom corners and a subleading subjet located much closer to the leading one. The net learns that more symmetric images, with a more uniform activation just to the right of the leading subjet, stronger intensity in the central pixel, and a more distant subleading subjet, appear to be easier for G to produce and are therefore more easily identified as fake.

Figure 16: Comparison between the 500 most signal-looking and 500 most background-looking images, generated by Pythia (left) and by the GAN (right).

In addition, we can isolate the primary information that D pays attention to when classifying signal and background images. The learned metric is consistently applied to both Pythia and GAN images, as shown in Fig. 16. When identifying signal images, D learns to look for more concentrated images with a well-defined two-prong structure. On the other hand, the network has learned that background images have a wider radiation pattern and a fuzzier structure around the location of the second subjet.

5.3 Generator

To further understand the generation process of jet images using GANs, we explore the inner workings of the generator network. As outlined in Sec. 4.1, G consists of a 2D convolution followed by 3 consecutive locally-connected layers, the last of which yields the final generated images. By peeking through the layers of the generator and following the path of a jet image in the making, we can visually explore the steps that lead to the production of jet images with their desired physical characteristics.

Figure 17: Pixel activations of images representing the various channels in the outputs of the two locally-connected hidden layers that form the generator, highlighting the difference in activation between the production of the average signal and average background samples. The first row represents the six channels of the first LC layer’s output; the second row represents the six channels of the second LC layer’s output.

We probe the network after each locally-connected hidden layer; we investigate how the average signal and average background images develop into physically distinct classes as a function of the depth of the generator network, by plotting each channel separately (Fig. 17). The output of each of the two locally-connected hidden layers consists of six channels. The red pixels are more strongly activated for signal images, while blue pixels activate more strongly in the presence of background images, which shows that the generator learns, from its earliest stages, that spread-out energy depositions are needed to produce more typical background images.
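This layer-probing can be sketched with the Keras functional API by building a second model that shares the generator's input and exposes an intermediate layer's output. The tiny network below is a stand-in with invented layer types and sizes (ordinary convolutions instead of locally-connected layers), not the actual LAGAN generator.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for the generator: layer types and sizes are invented.
inp = keras.Input(shape=(8,))
x = keras.layers.Dense(6 * 5 * 5)(inp)
x = keras.layers.Reshape((5, 5, 6))(x)
h = keras.layers.Conv2D(6, 3, padding="same", name="hidden")(x)
out = keras.layers.Conv2D(1, 3, padding="same")(h)
gen = keras.Model(inp, out)

# Sub-model exposing the six channels of the intermediate hidden layer,
# letting us inspect an image in the making.
probe = keras.Model(inp, gen.get_layer("hidden").output)
channels = probe.predict(np.zeros((1, 8)), verbose=0)
```

Feeding latent codes conditioned on signal versus background through `probe` and differencing the averages would yield per-channel maps of the kind shown in Fig. 17.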

5.4 Discriminator

The discriminator network undertakes the task of hierarchical feature extraction and learning for classification. Its role as an adversary to the generator is augmented by an auxiliary classification task. With interpretability being paramount in high energy physics, we explore techniques to visualize and understand the representations learned by the discriminator and their correlation with the known physical processes driving the production of jet images.

The convolutional layer at the beginning of the network provides the most significant visual aid to guide the interpretation process. In this layer, we convolve 32 learned convolutional filters with equally sized patches of pixels by sliding them across the image. The weights of the matrices that compose the convolutional kernels of the first hidden layer are displayed pictorially in the top panel of Fig. 18.

Each filter is then convolved with two physically relevant images: in the middle panel, the convolution is performed with the difference between the average GAN-generated signal image and the average GAN-generated background image; in the lower panel, the convolution is performed with the difference between the average Pythia image drawn from the data distribution and the average GAN-generated image. By highlighting pixel regions of interest, these plots shed light on the highest level representation learned by the discriminator, and how this representation translates into the auxiliary and adversarial classification outputs.
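This filter-convolution step can be mimicked with scipy; a minimal sketch, where the learned filters and average images are left abstract as function arguments:

```python
import numpy as np
from scipy.signal import convolve2d

def filter_response_maps(filters, avg_image_a, avg_image_b):
    """Convolve each learned first-layer filter with the difference of two
    average images, as in the middle and bottom panels of Fig. 18."""
    diff = avg_image_a - avg_image_b
    return [convolve2d(diff, f, mode="same") for f in filters]
```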

Finally, we compute linear correlations between the average image’s pixel intensities and each of the discriminator’s outputs, and visualize the learned location-dependent discriminating information in Fig. 19. These plots accentuate the distribution of distinctive local features in jet images. Additional material in Appendix A is available to further explore the effects of conditioning on the primary and auxiliary outputs on the visualization techniques used in this paper, as well as the correlation between D’s output and the physical process that originated the jet (Fig. 31).
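A per-pixel correlation map of this kind can be computed in a few lines of numpy; a minimal sketch, assuming no pixel is exactly constant across the sample:

```python
import numpy as np

def pixel_correlations(images, outputs):
    """Pearson correlation of each pixel's intensity with a scalar network
    output, returned as a map with the image's spatial shape."""
    n, h, w = images.shape
    flat = images.reshape(n, -1)
    flat_c = flat - flat.mean(axis=0)          # center each pixel series
    out_c = outputs - outputs.mean()           # center the network output
    num = flat_c.T @ out_c
    den = np.sqrt((flat_c ** 2).sum(axis=0) * (out_c ** 2).sum())
    return (num / den).reshape(h, w)
```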

Figure 18: Convolutional filters in the first layer of the discriminator (top), their convolution with the difference between the average signal and background generated images (center), and their convolution with the difference between the average Pythia and average generated images (bottom).
Figure 19: Per-pixel linear correlation with the discriminator auxiliary (top) and adversarial (bottom) classification output for combined signal and background images (left), signal only images (center), and background only images (right).

5.5 Training Procedure Observations

Figure 20: Histograms of the discriminator’s adversarial output for generated and real images at the epoch chosen to generate the images used in this paper (34th epoch).

During training, a handle on the performance of G is always directly available through the evaluation of D’s ability to correctly classify real versus GAN-generated images. We visualize the discriminator’s performance in Fig. 20 by constructing histograms of the output of the discriminator evaluated on GAN- and Pythia-generated images. Perfect confusion would be achieved if the discriminator output were equal to 1/2 for every image. However, this visualization fails to provide quantifiable information on the performance achieved by G. In fact, a high degree of confusion can also be symptomatic of undertraining in both D and G, or of mode collapse in G, and is observed in the first few epochs if D is not pre-trained. Similarly, a slow divergence of D towards winning the game against G does not imply that the performance of G is not improving; G might simply be improving at a slower pace than D, and the type of representation shown in Fig. 20 does not help us identify when the peak performance of G is obtained.

Instead, we use convergence of the generated mass and n-subjettiness distributions to the corresponding real distributions to select a stopping point for the training (Fig. 6). We select the 34th epoch of training to generate the images shown in this paper, but qualitatively similar results can be observed after only a handful of epochs. At this point, the training appears to have nearly reached the equilibrium in which D is unsure of how to label most samples, regardless of their true origin, without any strong dependence on the physical process that generated the image (Fig. 31).

5.6 Architecture Comparisons

With a toolkit of methods for validating the effectiveness of a GAN for jet images, we can now compare the LAGAN performance against other architectures. To this end, we focus on the jet image observables introduced in Sec. 5.1. To quantify the preservation of these physically meaningful manifolds under the data distribution and learned distribution, and to standardize generative model performance comparisons, we design a simple, universal scoring function.

Consider the true data distribution of jet images, P_data, and the distribution P_G implied by a particular generative model. Define rho_data and rho_G to be the distributions of any number of physically meaningful manifolds (i.e., jet observables) under the data and generated distributions respectively. We require that both class-conditional distributions be well modeled, or more formally that, for a sensibly chosen distance metric d, the quantity d(rho_data(· | c), rho_G(· | c)) be small for every class c in C, where C is the set of classes. To ensure good convergence for all classes, the performance scoring function is designed to minimize the maximum distance between the real and generated class-conditional distributions. Formally, such a minimax scoring function is defined as

s(G) = max_{c in C} d( rho_data(· | c), rho_G(· | c) ).     (6)
Note that Eq. 6 is minimized when the generative model exactly recovers the data distribution. For this application, the distance metric d has been chosen to be the Earth Mover’s Distance earthmover ().

Previous work on the classification of jet images from boosted W bosons and QCD jets found the combination of jet mass and the n-subjettiness ratio to be the most discriminating high-level variable combination deOliveira:2015xxd (). Therefore, these two features are chosen to form the reduced manifold space used for performance evaluation. The empirical 2D Probability Mass Function over these two observables is discretized into an equispaced grid containing the full support, and the distance between any two points in the PMF is defined to be the Euclidean distance of the respective coordinate indices. This distance is used in the internal flow optimization procedure when calculating the Earth Mover’s Distance. The two classes are jets produced from boosted W boson decays and generic quark and gluon jets.
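A simplified version of this score can be sketched with scipy. Note the assumption: `scipy.stats.wasserstein_distance` is one-dimensional, whereas the paper computes the EMD over a discretized 2D grid of the two observables, so the sketch applies the same minimax logic to 1D marginals of a single observable.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def minimax_score(real_by_class, gen_by_class):
    """Worst-case (over classes) Earth Mover's Distance between real and
    generated distributions of an observable, in the spirit of Eq. 6."""
    return max(
        wasserstein_distance(real_by_class[c], gen_by_class[c])
        for c in real_by_class
    )

# Toy usage with invented observable samples per class:
rng = np.random.default_rng(0)
real = {"W": rng.normal(80.0, 10.0, 5000), "QCD": rng.normal(30.0, 15.0, 5000)}
shifted = {"W": real["W"] + 5.0, "QCD": real["QCD"].copy()}
score = minimax_score(real, shifted)  # a pure translation by 5 gives EMD 5 for W
```

A score of zero is reached only when every class-conditional distribution is matched, which is the property that makes the metric a minimax one.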

The LAGAN architecture performance is compared with other GAN architectures using Eq. 6 in Fig. 21. In addition to the DCGAN, we compare to two other networks introduced in Sec. 4: an architecture that uses fully connected instead of locally connected layers (FCGAN) and an architecture that combines a fully connected stream and a convolutional stream (HYBRIDGAN). Figure 21 shows that the LAGAN architecture achieves the lowest median distance score, as well as the lowest spread between the first and third quartiles, confirming it as both a powerful and robust architecture for generating jet images.

Figure 21: Boxplots quantifying the performance of the four benchmarked network architectures in terms of the scoring function defined in Eq. 6. The boxes represent the interquartile range (IQR), and the line inside each box represents the median. The whiskers extend to the most extreme points not considered outliers with respect to the upper and lower quartiles.

5.7 Inference Time Observations

To evaluate the relative benefit of a GAN-powered fast simulation regime, we benchmark the speed of event generation using Pythia on a CPU versus our finalized GAN model on both CPU and GPU, as many applications of such technology in High Energy Physics will not have access to inference-time graphics processors. We benchmark on a p2.xlarge Elastic Compute Cloud (EC2) instance on Amazon Web Services (AWS) for control and reproducibility, using an Intel® Xeon® E5-2686 v4 @ 2.30GHz for all CPU tests, and an NVIDIA® Tesla K80 for GPU tests. We use TensorFlow v0.12, CUDA 8.0, and cuDNN v5.1 for all GPU-driven generation. GAN-based approaches can offer up to two orders of magnitude improvement over traditional event generation (see Table 1 and Fig. 22), validating this approach as an avenue worth pursuing.
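A throughput benchmark of this kind reduces to timing a generation callable; a minimal sketch, with the generator left abstract as a function argument:

```python
import time

def events_per_second(generate, n_events=1000):
    """Measure raw generation throughput for any callable that produces a
    batch of n_events events (a Pythia wrapper or a GAN forward pass)."""
    start = time.perf_counter()
    generate(n_events)
    elapsed = time.perf_counter() - start
    return n_events / elapsed
```

Relative speed-ups like those in Table 1 follow by dividing the GAN rate by the Pythia rate measured on the same hardware.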

Method Hardware # events / sec Relative Speed-up
Pythia CPU 34 1
LAGAN CPU 470 14
LAGAN GPU 7200 210
Table 1: Performance Comparison for LAGAN and Pythia-driven event generation
Figure 22: Time required to produce a jet image in Pythia-based generation, LAGAN-based generation on the CPU, and LAGAN-based generation on the GPU.

6 Conclusions and Outlook

We have introduced a new neural network architecture, the Location-Aware Generative Adversarial Network (LAGAN), to learn a rich generative model of jet images. Both qualitative and quantitative evaluations show that this model produces realistic looking images. To the best of our knowledge, this represents one of the first successful applications of Generative Adversarial Networks to the physical sciences. Jet images differ significantly from the natural images studied in the extensive GAN literature, and the LAGAN represents a significant step forward in extending the GAN idea to other domains.

The LAGAN marks the beginning of a new era of fast and powerful generative models for high energy physics. Jet images have provided a useful testing ground to deploy state-of-the-art generation methods. Preliminary work that incorporates the LAGAN in an architecture for higher dimensional images demonstrates the importance of this work for future GAN-based simulation  Paganini:2017hrr (). Neural network generation is a promising avenue for many aspects of the computationally heavy needs of high energy particle and nuclear physics.


Acknowledgments

This work was supported in part by the Office of High Energy Physics of the U.S. Department of Energy under contracts DE-AC02-05CH11231 and DE-FG02-92ER40704. The authors would like to thank Ian Goodfellow for insightful deep-learning-related discussion, and would like to acknowledge Wahid Bhimji, Zach Marshall, Mustafa Mustafa, Chase Shimmin, and Paul Tipton, who helped refine our narrative.

Appendix A Additional Material

Figure 23: Average signal image produced by Pythia (left) and by the GAN (right), displayed on log scale, with the difference between the two (center), displayed on linear scale.
Figure 24: Average background image produced by Pythia (left) and by the GAN (right), displayed on log scale, with the difference between the two (center), displayed on linear scale.
Figure 25: The average signal Pythia image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale.
Figure 26: Average background Pythia image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale.
Figure 27: Average signal GAN-generated image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale.
Figure 28: Average background GAN-generated image labeled as real (left), as fake (right), and the difference between these two (middle) plotted on linear scale.
Figure 29: Difference between the average signal and the average background images labeled as real, produced by Pythia (left) and by the GAN (right), both displayed on linear scale.
Figure 30: Difference between the average signal and the average background images labeled as fake, produced by Pythia (left) and by the GAN (right), both displayed on linear scale.
Figure 31: We plot the normalized confusion matrices showing the correlation of the predicted output with the true physical process used to produce the images. The matrices are plotted for all images (left), Pythia images only (center) and GAN images only (right).

Appendix B Image Pre-processing

Reference deOliveira:2015xxd () contains a detailed discussion of the impact of image pre-processing on the information content of the image. For example, it is shown that normalizing each image removes a significant amount of information about the jet mass. One important step that was not fully discussed is the rotational symmetry about the jet axis. It was shown in Ref. deOliveira:2015xxd () that a rotation of the image in the (eta, phi) plane does not preserve the jet mass, because the mass computed from the rotated constituents differs from the original one. One can instead perform a proper rotation about the x-axis (keeping the leading subjet fixed at the image origin), rotating the momentum components of each constituent by an angle alpha via

E -> E,   p_x -> p_x,
p_y -> p_y cos(alpha) - p_z sin(alpha),
p_z -> p_y sin(alpha) + p_z cos(alpha).
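As a cross-check of this argument, the following sketch contrasts the two rotations on a toy jet of two hypothetical massless constituents (not taken from the paper's dataset): the naive eta-phi rotation changes the invariant mass, while the proper momentum-space rotation about the x-axis preserves it.

```python
import math

def jet_mass(constituents):
    """Invariant mass of a set of massless (pT, eta, phi) constituents."""
    E = px = py = pz = 0.0
    for pt, eta, phi in constituents:
        E += pt * math.cosh(eta)
        px += pt * math.cos(phi)
        py += pt * math.sin(phi)
        pz += pt * math.sinh(eta)
    return math.sqrt(max(E ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

def rotate_proper(constituents, alpha):
    """Rotate each constituent's momentum about the x-axis (the jet axis for
    an image centered at eta = phi = 0); a true spatial rotation."""
    out = []
    for pt, eta, phi in constituents:
        py = pt * math.sin(phi) * math.cos(alpha) - pt * math.sinh(eta) * math.sin(alpha)
        pz = pt * math.sin(phi) * math.sin(alpha) + pt * math.sinh(eta) * math.cos(alpha)
        px = pt * math.cos(phi)
        new_pt = math.hypot(px, py)
        out.append((new_pt, math.asinh(pz / new_pt), math.atan2(py, px)))
    return out

def rotate_naive(constituents, alpha):
    """Rotate the (eta, phi) coordinates of each constituent directly."""
    return [(pt,
             eta * math.cos(alpha) - phi * math.sin(alpha),
             eta * math.sin(alpha) + phi * math.cos(alpha))
            for pt, eta, phi in constituents]
```

The proper rotation leaves both the energy and the momentum magnitudes unchanged, so the invariant mass is exactly preserved; the eta-phi rotation is not a rigid rotation of the momenta, because eta is not a true angle.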
Figure 32 quantifies the information lost by various preprocessing steps, highlighting in particular the rotation step. A ROC curve is constructed to try to distinguish the preprocessed variable from the unprocessed variable. If they cannot be distinguished, then no information is lost. Similar plots showing the degradation in signal versus background classification performance are shown in Fig. 33. The best fully preprocessed option for all metrics is the Pix+Trans+Rotation(Cubic)+Renorm. This option uses the cubic spline interpolation from Ref. deOliveira:2015xxd (), but adds a small additional step that ensures that the sum of the pixel intensities is the same before and after rotation. This is the procedure used throughout the body of the manuscript.
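The Rotation(Cubic)+Renorm step can be sketched with scipy's spline-based image rotation followed by restoring the total pixel intensity; clipping the spline overshoot to zero is our assumption here, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def rotate_and_renorm(image, angle_deg):
    """Cubic-spline rotation of a jet image, then rescale so that the sum
    of pixel intensities matches the pre-rotation total."""
    rotated = ndimage.rotate(image, angle_deg, reshape=False, order=3)
    rotated = np.clip(rotated, 0, None)  # cubic splines can overshoot below 0
    total = rotated.sum()
    if total > 0:
        rotated *= image.sum() / total
    return rotated
```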

Figure 32: These plots show ROC curves quantifying the information lost about the jet mass (left) or n-subjettiness (right) after pre-processing. A preprocessing step that does not lose any information will lie exactly on the random-classifier line. Both plots use signal boosted W boson jets for illustration.
Figure 33: ROC curves for classifying signal versus background based only on the mass (left) or n-subjettiness (right). Note that in some cases, the preprocessing can actually improve discrimination (but always degrades the information content - see Fig. 32).


  • (1) I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial networks, ArXiv e-prints (2014) [1406.2661].
  • (2) A. Odena, C. Olah and J. Shlens, Conditional Image Synthesis With Auxiliary Classifier GANs, ArXiv e-prints (Oct., 2016) [1610.09585].
  • (3) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever and P. Abbeel, Infogan: Interpretable representation learning by information maximizing generative adversarial nets, ArXiv e-prints (2016) [1606.03657].
  • (4) T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford and X. Chen, Improved techniques for training gans, ArXiv e-prints (2016) [1606.03498].
  • (5) M. Mirza and S. Osindero, Conditional generative adversarial nets, ArXiv e-prints (2014) [1411.1784].
  • (6) A. Odena, Semi-Supervised Learning with Generative Adversarial Networks, ArXiv e-prints (June, 2016) [1606.01583].
  • (7) A. Radford, L. Metz and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, ArXiv e-prints (2015) [1511.06434].
  • (8) S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele and H. Lee, Learning what and where to draw, ArXiv e-prints (2016) [1610.02454].
  • (9) S. E. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele and H. Lee, Generative adversarial text to image synthesis, ArXiv e-prints (2016) [1605.05396].
  • (10) H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang and D. Metaxas, StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, ArXiv e-prints (Dec., 2016) [1612.03242].
  • (11) I. J. Goodfellow, On distinguishability criteria for estimating generative models, ArXiv e-prints (Dec., 2014) [1412.6515].
  • (12) ATLAS Collaboration, G. Aad et. al., The ATLAS Simulation Infrastructure, Eur. Phys. J. C70 (2010) 823–874 [1005.4568].
  • (13) CMS Collaboration, CMS Physics: Technical Design Report Volume 1: Detector Performance and Software. Technical Design Report CMS. CERN, Geneva, 2006.
  • (14) GEANT4 Collaboration, S. Agostinelli et. al., GEANT4: A Simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250–303.
  • (15) K. Edmonds, S. Fleischmann, T. Lenz, C. Magass, J. Mechnich and A. Salzburger, The fast ATLAS track simulation (FATRAS), .
  • (16) ATLAS Collaboration, M. Beckingham, M. Duehrssen, E. Schmidt, M. Shapiro, M. Venturi, J. Virzi, I. Vivarelli, M. Werner, S. Yamamoto and T. Yamanaka, The simulation principle and performance of the ATLAS fast calorimeter simulation FastCaloSim, Tech. Rep. ATL-PHYS-PUB-2010-013, CERN, Geneva, Oct, 2010.
  • (17) CMS Collaboration, S. Abdullin, P. Azzi, F. Beaudette, P. Janot and A. Perrotta, The fast simulation of the CMS detector at LHC, J. Phys. Conf. Ser. 331 (2011) 032049.
  • (18) J. T. Childers, T. D. Uram, T. J. LeCompte, M. E. Papka and D. P. Benjamin, Simulation of LHC events on a millions threads, J. Phys. Conf. Ser. 664 (2015), no. 9 092006.
  • (19) J. Cogan, M. Kagan, E. Strauss and A. Schwarztman, Jet-Images: Computer Vision Inspired Techniques for Jet Tagging, JHEP 02 (2015) 118 [1407.5675].
  • (20) L. de Oliveira, M. Kagan, L. Mackey, B. Nachman and A. Schwartzman, Jet-images — deep learning edition, JHEP 07 (2016) 069 [1511.05190].
  • (21) L. G. Almeida, M. Backović, M. Cliche, S. J. Lee and M. Perelstein, Playing Tag with ANN: Boosted Top Identification with Pattern Recognition, JHEP 07 (2015) 086 [1501.05968].
  • (22) P. T. Komiske, E. M. Metodiev and M. D. Schwartz, Deep learning in color: towards automated quark/gluon jet discrimination, 1612.01551.
  • (23) J. Barnard, E. N. Dawe, M. J. Dolan and N. Rajcic, Parton Shower Uncertainties in Jet Substructure Analyses with Deep Neural Networks, 1609.00607.
  • (24) P. Baldi, K. Bauer, C. Eng, P. Sadowski and D. Whiteson, Jet Substructure Classification in High-Energy Physics with Deep Neural Networks, Phys. Rev. D93 (2016), no. 9 094034 [1603.09349].
  • (25) CMS Collaboration, S. Chatrchyan et. al., The CMS Experiment at the CERN LHC, JINST 3 (2008) S08004.
  • (26) M. Paganini, L. de Oliveira and B. Nachman, CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks, 1705.02355.
  • (27) M. Cacciari, G. P. Salam and G. Soyez, The anti-k_t jet clustering algorithm, JHEP 04 (2008) 063 [0802.1189].
  • (28) M. Cacciari, G. P. Salam and G. Soyez, FastJet User Manual, Eur. Phys. J. C72 (2012) 1896 [1111.6097].
  • (29) D. Krohn, J. Thaler and L.-T. Wang, Jet Trimming, JHEP 1002 (2010) 084 [0912.1342].
  • (30) T. Sjostrand, S. Mrenna and P. Z. Skands, A Brief Introduction to PYTHIA 8.1, Comput. Phys. Commun. 178 (2008) 852–867 [0710.3820].
  • (31) T. Sjostrand, S. Mrenna and P. Z. Skands, PYTHIA 6.4 Physics and Manual, JHEP 0605 (2006) 026 [hep-ph/0603175].
  • (32) B. Nachman, L. de Oliveira and M. Paganini, Dataset release - Pythia generated jet images for location aware generative adversarial network training, Feb., 2017. 10.17632/4r4v785rgx.1.
  • (33) L. de Oliveira and M. Paganini, lukedeo/adversarial-jets: Initial code release, Mar., 2017. 10.5281/zenodo.400708.
  • (34) A. L. Maas, A. Y. Hannun and A. Y. Ng, Rectifier nonlinearities improve neural network acoustic models, in Proc. ICML, vol. 30, 2013.
  • (35) S. Chintala, How to train a GAN?, NIPS, Workshop on Generative Adversarial Networks (2016).
  • (36) A. L. Maas, A. Y. Hannun and A. Y. Ng, Rectifier Nonlinearities Improve Neural Network Acoustic Models, .
  • (37) S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, 1502.03167.
  • (38) D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, CoRR abs/1412.6980 (2014).
  • (39) F. Chollet, “Keras.” https://github.com/fchollet/keras, 2017.
  • (40) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu and X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from https://www.tensorflow.org/.
  • (41) J. Thaler and K. Van Tilburg, Identifying Boosted Objects with N-subjettiness, JHEP 1103 (2011) 015 [1011.2268].
  • (42) A. J. Larkoski, D. Neill and J. Thaler, Jet Shapes with the Broadening Axis, JHEP 04 (2014) 017 [1401.2158].
  • (43) Y. Rubner, C. Tomasi and L. J. Guibas, The earth mover’s distance as a metric for image retrieval, Int. J. Comput. Vision 40 (Nov., 2000) 99–121.