Learning to Classify from Impure Samples

Patrick T. Komiske pkomiske@mit.edu Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA    Eric M. Metodiev metodiev@mit.edu Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA    Benjamin Nachman bpnachman@lbl.gov Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA    Matthew D. Schwartz schwartz@physics.harvard.edu Department of Physics, Harvard University, Cambridge, MA 02138, USA

A persistent challenge in practical classification tasks is that labelled training sets are not always available. In particle physics, this challenge is surmounted by the use of simulations. These simulations accurately reproduce most features of data, but cannot be trusted to capture all of the complex correlations exploitable by modern machine learning methods. Recent work in weakly supervised learning has shown that simple, low-dimensional classifiers can be trained using only the impure mixtures present in data. Here, we demonstrate that complex, high-dimensional classifiers can also be trained on impure mixtures using weak supervision techniques, with performance comparable to what could be achieved with pure samples. Using weak supervision will therefore allow us to avoid relying exclusively on simulations for high-dimensional classification. This work opens the door to a new regime whereby complex models are trained directly on data, providing direct access to the underlying physics.

preprint: MIT–CTP 4968

Data analysis methods at the Large Hadron Collider (LHC) rely heavily on simulations. These simulations are generally excellent and allow us to explore the mapping between truth information (particles from collisions) and observables (tracks or calorimeter deposits). In particular, simulations let us train complex algorithms to extract the truth information from the observables. Machine learning methods trained on low-level inputs have been developed for collider physics Larkoski et al. (2017) to identify boosted bosons Cogan et al. (2015); de Oliveira et al. (2016); Baldi et al. (2016); Barnard et al. (2017); Louppe et al. (2017); Datta and Larkoski (2017); Komiske et al. (2017a), top quarks Almeida et al. (2015); Kasieczka et al. (2017); Pearkes et al. (2017); Butter et al. (2017); Egan et al. (2017), b-quarks CMS Collaboration (2017a); ATLAS Collaboration (2017a); Sirunyan et al. (2017), and light quarks Komiske et al. (2017b); CMS (2017); ATLAS Collaboration (2017b); Bhimji et al. (2017), for removing pileup Komiske et al. (2017c), and for generating fragmentation and calorimeter showers de Oliveira et al. (2017); Paganini et al. (2018, 2017). These new methods achieve excellent performance by exploiting subtle features of the simulations, which are presumed to be similar to the features in the data. Unfortunately, the simulations are known to be imperfect. For instance, the particle multiplicity and radiation distributions within quark- and gluon-initiated jets are known to be quite variable among different simulations and between simulations and data Gallicchio and Schwartz (2013); Aad et al. (2014, 2016a). In addition, non-negligible corrections (“scale factors”) are required experimentally (see e.g. Refs. Chatrchyan et al. (2013); CMS Collaboration (2013a); Aad et al. (2014); Khachatryan et al. (2014); Collaboration (2014); Aad et al. (2016b, c, d)).
Thus it is natural to question the performance of machine learning algorithms trained on simulations: how do we know they are not just learning unphysical artifacts of the simulation? This objection certainly has merit, as the power of these methods for physics applications stems precisely from their ability to find features we do not fully understand and cannot easily interpret.

Data-driven approaches avoid the pitfalls of relying on simulations in experimental analyses. For simple observables, such as the invariant mass of a photon pair, a traditional experimental approach has been to perform sideband fits directly to the data. This avoids relying on the simulation altogether. Unfortunately, most of the sophisticated discrimination techniques developed in recent years use full supervision, where truth information is needed in order to train the classifier. However, real data generally consist only of mixed samples without truth information, arising from underlying statistical or quantum mixtures of signal and background. Occasionally one can find a small region of phase space where the signal or background is pure, but these regions are generally sparsely populated and may not produce representative distributions. Recent work on weak supervision Hernández-González et al. (2016) allows classifiers to be trained using only the information available from mixed samples. Two weakly supervised paradigms tailored to physics applications are Learning from Label Proportions (LLP) Dery et al. (2017) and Classification Without Labels (CWoLa) Metodiev et al. (2017). Ref. Dery et al. (2017) considered the problem of quark/gluon (q/g) jet discrimination using three standard jet observables and showed how to achieve fully supervised discrimination power by using LLP with two samples of different but known quark jet fractions. In Ref. Metodiev et al. (2017), it was shown that the proportions are not necessary for training since the likelihood ratio of the mixed samples is monotonically related to the signal/background likelihood ratio, the optimal binary classifier for signal vs. background.

One potential objection to the weak-learning demonstrations in Refs. Dery et al. (2017); Metodiev et al. (2017); Cohen et al. (2017) is that the dimensionality of the inputs used is small. Indeed, for a one-dimensional discriminant, such as the jet mass, one can extract the exact pure distributions from mixed samples using the fractions. It is not obvious that weak supervision will succeed when trained on high-dimensional inputs where the feature space may be sparsely populated. Indeed, the most powerful modern methods are trained on high-dimensional, low-level inputs, where numerically approximating and weighting the probability distribution is completely intractable.

In this paper, we demonstrate that weak supervision can approach the effectiveness of full supervision on complex models with high-dimensional inputs. As a concrete illustration, we use jet images Cogan et al. (2015) with convolutional neural networks (CNNs) applied to quark versus gluon jet tagging, where the dimensionality of the inputs is large and simulation mis-modeling issues are a challenge Aad et al. (2014); ATLAS Collaboration (2016); CMS Collaboration (2013b); CMS Collaboration (2016); CMS Collaboration (2017b); ATLAS Collaboration (2017c). We find that CWoLa more robustly generalizes to learning with high-dimensional inputs than LLP, with the latter requiring careful engineering choices to achieve comparable performance. Though we use a particle physics problem as an example, the lessons about learning from data using mixtures of signal and background are applicable more broadly.

We begin by establishing some notation and formulating the problem. Let x represent a vector of observables (features) useful for discriminating two classes we call signal (S) and background (B). For example, x might be the momenta of observed particles, calorimeter energy deposits, or a complete set of observables Datta and Larkoski (2017); Komiske et al. (2017a). In fully supervised learning, each training sample is assigned a truth label, such as 1 for signal and 0 for background. The fully supervised model is then trained to predict the correct label for each training example by minimizing a loss function. For a sufficiently large training set, an appropriate model parameterization, and a suitable minimization procedure, the learned model should approach the optimal classifier defined by thresholding the likelihood ratio.

Data collected from a real detector do not come with signal/background labels. Instead, one typically has two or more mixtures of signal and background with different signal fractions f_i, such that the distribution of the features, p_i(x), is given by:

    p_i(x) = f_i p_S(x) + (1 − f_i) p_B(x),    (1)

where p_S(x) and p_B(x) are the signal and background distributions, respectively. Weak supervision assumes sample independence: that Eq. (1) holds with the same distributions p_S and p_B for all mixtures. Although in most situations sample independence does not hold perfectly (see e.g. Ref. Gras et al. (2017)), it is often a very good approximation (cf. Table 2 below).
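As a toy numerical illustration of Eq. (1), a mixed sample can be generated by drawing each event from the signal or background distribution with probability f_i. The one-dimensional Gaussian stand-ins for p_S and p_B below are purely illustrative assumptions, not the jet observables used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_mixture(n, f, rng):
    """Draw n events from the mixture f*p_S + (1 - f)*p_B of Eq. (1).

    p_S and p_B are stand-in 1D Gaussians here; in this work the
    features x would be full jet images."""
    is_signal = rng.random(n) < f
    x = np.where(is_signal,
                 rng.normal(+1.0, 1.0, n),   # toy signal distribution p_S
                 rng.normal(-1.0, 1.0, n))   # toy background distribution p_B
    return x, is_signal

# Two mixed samples with different signal fractions
x1, lab1 = draw_mixture(100_000, 0.8, rng)
x2, lab2 = draw_mixture(100_000, 0.3, rng)
```

The truth labels lab1 and lab2 exist only because this is a simulation; weak supervision uses just the mixed samples (and, for LLP, the fractions).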

LLP uses any fully supervised classification method and modifies the loss function to globally match the signal fraction predicted by the model on a batch of training samples to the known truth fractions f_i. Breaking the training set into batches, normally done to parallelize training, takes on a new significance with LLP since the loss function is evaluated globally on each batch. The batch size, which for LLP we define as the number of samples drawn from each mixture during one update of the model, is a critical hyperparameter of LLP.

The loss functions we use for LLP are slightly different from those in Ref. Dery et al. (2017). Analogous to the mean squared error (MSE) loss function for fully supervised (or CWoLa) training, we introduce the weak MSE (WMSE) loss for the LLP framework:

    ℓ_WMSE = Σ_i ( f_i − (1/M) Σ_{j=1}^{M} h(x_j^{(i)}) )²,    (2)

where M is the batch size, i indexes the mixed samples, and h is the model. Analogous to the cross entropy, we also introduce the weak cross entropy (WCE) loss:

    ℓ_WCE = Σ_i [ −f_i ln h̄_i − (1 − f_i) ln(1 − h̄_i) ],    (3)

where h̄_i = (1/M) Σ_{j=1}^{M} h(x_j^{(i)}) is the mean model output over the batch drawn from mixture i. One caveat we discovered while exploring LLP is that the range of h̄_i must be restricted to the open interval (0, 1), otherwise the model falls into trivial minima of the loss function. We also observe that model outputs tend to become effectively binary at 0 and 1, necessitating additional care to avoid numerical precision issues.
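A minimal NumPy sketch of these batch-level losses is given below; the clipping parameter eps is our own illustrative choice for implementing the (0, 1) restriction, not a value prescribed in this work:

```python
import numpy as np

def wmse_loss(fracs, batch_preds):
    """Weak MSE (Eq. (2)): match each batch's mean model output to the
    known signal fraction f_i of the mixture it was drawn from."""
    means = np.array([p.mean() for p in batch_preds])
    return np.sum((np.asarray(fracs) - means) ** 2)

def wce_loss(fracs, batch_preds, eps=1e-7):
    """Weak cross entropy (Eq. (3)); the batch means are clipped away
    from 0 and 1 to avoid the trivial minima and numerical precision
    issues noted in the text (eps is an illustrative choice)."""
    f = np.asarray(fracs)
    hbar = np.clip([p.mean() for p in batch_preds], eps, 1 - eps)
    return np.sum(-f * np.log(hbar) - (1 - f) * np.log(1 - hbar))
```

Note that both losses depend on the model outputs only through their batch means, which is why the batch size plays such an important role for LLP.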

CWoLa works without requiring the fractions to be known for training (the fractions on smaller test sets can be used to calibrate the classifier operating points). It acts on two mixtures, treating one as signal and the other as background. CWoLa uses any fully supervised classification method to distinguish the “signal” mixture from the “background” mixture. Amazingly, a classifier trained in this way asymptotically (as the number of training samples goes to infinity) results in the same classifier as if the samples were pure Metodiev et al. (2017); Blanchard et al. (2016); Cranmer et al. (2015). The CWoLa framework has the nice property that as the samples approach complete purity (f_1 → 1, f_2 → 0) it smoothly approaches the fully supervised paradigm. CWoLa presently only works with two mixtures; if more than two are available, they can be pooled in some way at the cost of diluting their purity. Table 1 summarizes some differences between CWoLa and LLP.
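The monotonicity underlying CWoLa can be checked directly in a toy model where the likelihood ratios are computable in closed form. The Gaussian signal and background below are illustrative stand-ins (the same hypothetical distributions as in the mixture sketch above, not the jet features of this work):

```python
import numpy as np

def gauss(x, mu):
    """Unit-width Gaussian pdf, standing in for p_S (mu=+1) or p_B (mu=-1)."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def mixed_likelihood_ratio(x, f1, f2):
    """Likelihood ratio of mixture 1 vs mixture 2: the quantity an
    optimal CWoLa classifier approaches (here f1 > f2)."""
    pS, pB = gauss(x, +1.0), gauss(x, -1.0)
    return (f1 * pS + (1 - f1) * pB) / (f2 * pS + (1 - f2) * pB)

x = np.linspace(-4.0, 4.0, 201)
mixed = mixed_likelihood_ratio(x, 0.8, 0.3)      # optimal mixed-sample classifier
pure = gauss(x, +1.0) / gauss(x, -1.0)           # optimal signal/background classifier
```

Since both ratios increase monotonically in x here, thresholding the mixed-sample classifier is equivalent to thresholding the pure signal/background likelihood ratio, which is the content of the CWoLa result.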

Property                                 LLP    CWoLa
No need for fully-labeled samples         ✓      ✓
Compatible with any trainable model       ✓      ✓
No training modifications needed          ✗      ✓
Training does not need fractions          ✗      ✓
Smooth limit to full supervision          ✗      ✓
Works for more than two mixed samples     ✓      ?
Table 1: The essential pros (✓), cons (✗), and open questions (?) of the CWoLa and LLP weak supervision paradigms.

To explore weak supervision methods with high-dimensional inputs, we simulate quark- and gluon-initiated jet events using Pythia 8.226 Sjöstrand et al. (2008) and create artificially mixed samples with various quark (signal) fractions. Jets satisfying transverse momentum and rapidity requirements are obtained from final-state, non-neutrino particles clustered using the anti-kt algorithm Cacciari et al. (2008) as implemented in FastJet 3.3.0 Cacciari et al. (2012). Single-channel jet images Cogan et al. (2015); de Oliveira et al. (2016); Komiske et al. (2017b) are constructed from a patch of the pseudorapidity-azimuth plane centered on the jet, treating the particle transverse momenta as pixel intensities. The images are normalized so that the sum of the pixels is 1, and standardized by subtracting the mean and dividing by the standard deviation of each pixel, as calculated from the training set.
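The pixelation and standardization steps can be sketched as follows; the image size and patch half-width (npix, width) are illustrative placeholders rather than the exact values used in this work:

```python
import numpy as np

def jet_image(etas, phis, pts, jet_eta, jet_phi, npix=33, width=0.8):
    """Pixelate particles into a single-channel jet image centered on the
    jet axis, with particle pT as pixel intensity, normalized so the
    pixels sum to 1. npix and width are illustrative choices."""
    img, _, _ = np.histogram2d(etas - jet_eta, phis - jet_phi,
                               bins=npix,
                               range=[[-width, width], [-width, width]],
                               weights=pts)
    return img / img.sum()

def standardize(images, pixel_mean, pixel_std, eps=1e-8):
    """Per-pixel standardization using statistics from the training set;
    eps guards against zero-variance pixels."""
    return (images - pixel_mean) / (pixel_std + eps)
```

In practice the mean and standard deviation images would be computed once over the training set and then applied unchanged to the validation and test sets.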

All instantiations and trainings of neural networks were performed with the Python deep learning library Keras Chollet (2015) with the TensorFlow Abadi et al. (2016) backend. A CNN architecture similar to that employed in Ref. Komiske et al. (2017b) was used: three 32-filter convolutional layers followed by a 128-unit dense layer, with maxpooling performed after each convolutional layer with a stride length of 2. The dropout rate was taken to be 0.1 for all layers. The Keras VarianceScaling initialization was used to initialize the weights of the convolutional layers. Due to numerical precision issues caused by the tendency of LLP to push outputs to 0 or 1, the softmax activation function was included as part of the loss function rather than the model output layer. Validation and test sets were used, each consisting of 50k equally mixed quark and gluon jet images. Training was performed with the Adam algorithm Kingma and Ba (2014) with a learning rate of 0.001 and a validation performance patience of 10 epochs. Each network was trained 10 times, and the variation in performance was used as a measure of the uncertainty. Unless otherwise specified, the following defaults are used: Exponential Linear Unit (ELU) Clevert et al. (2015) activation functions for all non-output layers, the CE loss function for CWoLa, and the WCE loss function for LLP.

The performance of a binary classifier can be captured by its receiver operating characteristic (ROC) curve. To condense the classifier performance into a single number, we use the area under the ROC curve (AUC). The AUC is also the probability that the classifier correctly orders a randomly drawn signal event and a randomly drawn background event. Random classifiers have AUC = 0.5 and perfect classifiers have AUC = 1. We also confirmed that our conclusions are unchanged when using the background mistag rate at 50% signal efficiency as a performance metric instead.
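The pairwise-probability interpretation of the AUC gives a direct, if O(n²), way to compute it, sketched here:

```python
import numpy as np

def auc(sig_scores, bkg_scores):
    """AUC as the probability that a randomly drawn signal event scores
    higher than a randomly drawn background event (ties count half)."""
    s = np.asarray(sig_scores)[:, None]   # shape (n_sig, 1)
    b = np.asarray(bkg_scores)[None, :]   # shape (1, n_bkg)
    return (s > b).mean() + 0.5 * (s == b).mean()
```

For large samples one would instead use a rank-based or threshold-sweep computation, but the pairwise form makes the probabilistic interpretation explicit.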

Figure 1: The AUC and training time of CWoLa (solid) and LLP (dashed) as the batch size is varied. Training times are measured on an NVIDIA Tesla K80 GPU using CUDA 8.0, TensorFlow 1.4.1, and Keras 2.1.2.

As previously noted, the LLP paradigm works by matching the predicted fraction of signal events to the known fraction for multiple mixed samples. In Ref. Dery et al. (2017), the averaging took place over the entire mixed sample. Averaging over the entire training set at once is effectively impossible for high-dimensional inputs such as jet images because the graphics processing units (GPUs) that are needed to train the CNNs in a reasonable amount of time typically do not have enough memory to hold the entire training set at one time. Hence, the ability to train with batches is highly desirable for using LLP with high-dimensional inputs.

There are many tradeoffs inherent in choosing the LLP batch size. Smaller batch sizes are susceptible to shot noise, in the sense that the actual signal fraction of a batch may differ significantly from the fraction of the entire mixed sample, an effect which decreases as the batch size increases. Smaller batch sizes result in longer training times per epoch (because the full parallelization capabilities of the GPU cannot be used) but often require fewer epochs to train. Larger batch sizes have shorter training times per epoch but typically require more epochs to train. For CWoLa, the batch size plays the same role as in full supervision, with the performance being largely insensitive to it but the total training time varying slightly. These tradeoffs are captured in Fig. 1, which shows both the performance and training time for CWoLa and LLP models as the batch size is swept in powers of two from 64 to 16384, trained on two mixtures with different signal fractions. The expected insensitivity of CWoLa performance and the degradation of LLP performance at low batch sizes can clearly be seen. The training time curves are concave, with optimal batch sizes toward the middle of the swept region. Based on this figure, we choose default batch sizes of 400 for CWoLa and a larger value for LLP.

In order to explore a slightly more realistic scenario than artificially mixing samples from the same distribution of quarks and gluons, we generate Z+jet and dijet events with the same generation parameters and cuts as described previously. These “naturally” mixed samples have different quark fractions. The signal and background fractions have been systematically explored for these and many other processes in Ref. Gallicchio and Schwartz (2011). As indicated by Table 2, there is no significant difference in performance on the naturally mixed or artificially mixed samples. Hence, artificially mixed samples are used in the rest of this study in order to evaluate weak supervision performance at different quark purities.

Learning   Sample              AUC
CWoLa      Z+jet vs. dijets    0.8626 ± 0.0020
CWoLa      Artificial q/g      0.8621 ± 0.0019
LLP        Z+jet vs. dijets    0.8544 ± 0.0019
LLP        Artificial q/g      0.8549 ± 0.0018
Table 2: AUCs for training with CWoLa and LLP on Z+jet and dijet samples as well as on artificial mixtures of quark and gluon jets with the same signal fractions. The error given is the interquartile range. There is no significant difference in classifier performance between the naturally mixed (Z+jet vs. dijets) samples and the artificially mixed (q/g) samples.

Fig. 2 compares CWoLa and LLP performance for various quark/gluon purities as a function of the number of training samples. Each network is trained using two samples, one with quark fraction f and the other with quark fraction 1 − f. Each point in the figure is the median of 10 independent network trainings, and the error bars show the 25th and 75th percentiles. Full supervision performance corresponds to CWoLa with f = 1. The most important takeaway from Fig. 2 is that we have achieved good performance with both weak supervision methods over a large variety of sample purities and training sample sizes. We also see that CWoLa consistently outperforms LLP and continues to improve as additional training samples are used, likely a result of the increasingly populated feature space, whereas LLP performance tends to level off. It should be noted that, given the binary output nature of LLP models, classifiers trained in this way effectively come with a built-in working point, and sweeping the threshold to produce a ROC curve may not be ideal. The purity/data tradeoff analysis of Fig. 2 can provide valuable information for practical applications of weak supervision methods in physics, particularly in cases where more data can be acquired at the expense of worsening sample purity.

The sensitivity of LLP to different choices of loss function and activation function was examined. We studied the symmetric squared loss of Eq. (2) and the weak cross entropy loss of Eq. (3) with Rectified Linear Unit (ReLU) Nair and Hinton (2010) and ELU activation functions. We found a significant improvement in LLP classification performance when using ELU activations instead of ReLU activations, particularly at high signal efficiencies. The choice of loss function was found to be less important than the choice of activation function, but minor improvements in AUC were observed with the WCE loss function over WMSE. We also studied the dependence of CWoLa on the choice of activation function and found consistent performance between ELU and ReLU activations. These results justify our default choices of the ELU activation and WCE loss functions. With the ELU activation, LLP achieves almost the same performance as our CWoLa-trained network near the operating point with equal signal and background efficiencies. We suspect the remaining difference is a result of the tendency of LLP to output binary predictions (near 0 or 1) rather than a continuous output that can be easily thresholded.

Lastly, LLP has the potential advantage over the present implementation of CWoLa that it can naturally encompass multiple mixed samples with different purities. While in principle adding more samples should help, it is not obvious whether the network will effectively take advantage of them. Indeed, we did not find significant improvement to LLP when adding additional samples with intermediate purities, even after significant, dedicated architecture engineering.

Figure 2: Classifier performance (AUC) shown for both CWoLa (solid) and LLP (dashed) trained on two mixed samples with various signal fractions, as the number of training samples is varied between 100k and 1M. Each training is repeated 10 times, and the 25th, 50th, and 75th percentiles are shown. The f = 1 CWoLa curve corresponds to full supervision. CWoLa outperforms LLP by this metric, though both methods work quite well.

In conclusion, we have shown that machine learning approaches using very high-dimensional inputs can be trained directly on mixtures of signal and background, and therefore on data. This addresses one of the main objections to the use of modern machine learning in jet tagging: sensitivity to untrustworthy simulations. We have implemented and tested weakly supervised learning with both LLP and CWoLa, finding that for the quark/gluon discrimination problem considered here CWoLa outperforms LLP and is less sensitive to particular hyperparameter choices. We have developed a method for training LLP with high-dimensional inputs in batches and demonstrated that the batch size is a critical hyperparameter for both performance and training time. Given any fully supervised classifier, CWoLa works “out-of-the-box” whereas LLP requires additional engineering to achieve good performance and is generally harder to train. Nonetheless, the success in using both of these weak supervision approaches on high-dimensional data is encouraging for the future of modern machine learning techniques in particle physics and beyond.

The authors would like to thank Lucio Dery and Francesco Rubbo for collaboration in the initial stages of this work. We are grateful to Jesse Thaler for helpful discussions. PTK and EMM would like to thank the MIT Physics Department for its support. Computations for this paper were performed on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. This work was supported by the Office of Science of the U.S. Department of Energy (DOE) under contracts DE-AC02-05CH11231 and DE-SC0013607, the DOE Office of Nuclear Physics under contract DE-SC0011090, and the DOE Office of High Energy Physics under contract DE-SC0012567. Cloud computing resources were provided through a Microsoft Azure for Research award. Additional support was provided by the Harvard Data Science Initiative.


  • Larkoski et al. (2017) Andrew J. Larkoski, Ian Moult,  and Benjamin Nachman, “Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning,”  (2017), arXiv:1709.04464 [hep-ph] .
  • Cogan et al. (2015) Josh Cogan, Michael Kagan, Emanuel Strauss,  and Ariel Schwarztman, “Jet-Images: Computer Vision Inspired Techniques for Jet Tagging,” JHEP 02, 118 (2015)arXiv:1407.5675 [hep-ph] .
  • de Oliveira et al. (2016) Luke de Oliveira, Michael Kagan, Lester Mackey, Benjamin Nachman,  and Ariel Schwartzman, “Jet-images – deep learning edition,” JHEP 07, 069 (2016)arXiv:1511.05190 [hep-ph] .
  • Baldi et al. (2016) Pierre Baldi, Kevin Bauer, Clara Eng, Peter Sadowski,  and Daniel Whiteson, “Jet Substructure Classification in High-Energy Physics with Deep Neural Networks,” Phys. Rev. D93, 094034 (2016)arXiv:1603.09349 [hep-ex] .
  • Barnard et al. (2017) James Barnard, Edmund Noel Dawe, Matthew J. Dolan,  and Nina Rajcic, “Parton Shower Uncertainties in Jet Substructure Analyses with Deep Neural Networks,” Phys. Rev. D95, 014018 (2017)arXiv:1609.00607 [hep-ph] .
  • Louppe et al. (2017) Gilles Louppe, Kyunghyun Cho, Cyril Becot,  and Kyle Cranmer, “QCD-Aware Recursive Neural Networks for Jet Physics,”  (2017), arXiv:1702.00748 [hep-ph] .
  • Datta and Larkoski (2017) Kaustuv Datta and Andrew Larkoski, “How Much Information is in a Jet?” JHEP 06, 073 (2017)arXiv:1704.08249 [hep-ph] .
  • Komiske et al. (2017a) Patrick T. Komiske, Eric M. Metodiev,  and Jesse Thaler, “Energy flow polynomials: A complete linear basis for jet substructure,”  (2017a), arXiv:1712.07124 [hep-ph] .
  • Almeida et al. (2015) Leandro G. Almeida, Mihailo Backović, Mathieu Cliche, Seung J. Lee,  and Maxim Perelstein, “Playing Tag with ANN: Boosted Top Identification with Pattern Recognition,” JHEP 07, 086 (2015)arXiv:1501.05968 [hep-ph] .
  • Kasieczka et al. (2017) Gregor Kasieczka, Tilman Plehn, Michael Russell,  and Torben Schell, “Deep-learning Top Taggers or The End of QCD?” JHEP 05, 006 (2017)arXiv:1701.08784 [hep-ph] .
  • Pearkes et al. (2017) Jannicke Pearkes, Wojciech Fedorko, Alison Lister,  and Colin Gay, “Jet Constituents for Deep Neural Network Based Top Quark Tagging,”  (2017), arXiv:1704.02124 [hep-ex] .
  • Butter et al. (2017) Anja Butter, Gregor Kasieczka, Tilman Plehn,  and Michael Russell, “Deep-learned Top Tagging using Lorentz Invariance and Nothing Else,”  (2017), arXiv:1707.08966 [hep-ph] .
  • Egan et al. (2017) Shannon Egan, Wojciech Fedorko, Alison Lister, Jannicke Pearkes,  and Colin Gay, “Long Short-Term Memory (LSTM) networks with jet constituents for boosted top tagging at the LHC,”  (2017), arXiv:1711.09059 [hep-ex] .
  • CMS Collaboration (2017a) CMS Collaboration, “Heavy flavor identification at CMS with deep neural networks,” CMS-DP-2017-005  (2017a).
  • ATLAS Collaboration (2017a) ATLAS Collaboration, “Identification of Jets Containing -Hadrons with Recurrent Neural Networks at the ATLAS Experiment,” ATL-PHYS-PUB-2017-003  (2017a).
  • Sirunyan et al. (2017) Albert M Sirunyan et al. (CMS), “Identification of heavy-flavour jets with the CMS detector in pp collisions at 13 TeV,”  (2017), arXiv:1712.07158 [physics.ins-det] .
  • Komiske et al. (2017b) Patrick T. Komiske, Eric M. Metodiev,  and Matthew D. Schwartz, “Deep learning in color: towards automated quark/gluon jet discrimination,” JHEP 01, 110 (2017b)arXiv:1612.01551 [hep-ph] .
  • CMS (2017) “New Developments for Jet Substructure Reconstruction in CMS,” CMS-DP-2017-027  (2017).
  • ATLAS Collaboration (2017b) ATLAS Collaboration, “Quark versus Gluon Jet Tagging Using Jet Images with the ATLAS Detector,” ATL-PHYS-PUB-2017-017  (2017b).
  • Bhimji et al. (2017) Wahid Bhimji, Steven Andrew Farrell, Thorsten Kurth, Michela Paganini, Prabhat,  and Evan Racah, “Deep Neural Networks for Physics Analysis on low-level whole-detector data at the LHC,” in 18th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2017) Seattle, WA, USA, August 21-25, 2017 (2017) arXiv:1711.03573 [hep-ex] .
  • Komiske et al. (2017c) Patrick T. Komiske, Eric M. Metodiev, Benjamin Nachman,  and Matthew D. Schwartz, “Pileup Mitigation with Machine Learning (PUMML),”  (2017c), arXiv:1707.08600 [hep-ph] .
  • de Oliveira et al. (2017) Luke de Oliveira, Michela Paganini,  and Benjamin Nachman, “Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis,” Comput. Softw. Big Sci. 1, 4 (2017)arXiv:1701.05927 [stat.ML] .
  • Paganini et al. (2018) Michela Paganini, Luke de Oliveira,  and Benjamin Nachman, “Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multi-Layer Calorimeters,” Phys. Rev. Lett. 120, 042003 (2018)arXiv:1705.02355 [hep-ex] .
  • Paganini et al. (2017) Michela Paganini, Luke de Oliveira,  and Benjamin Nachman, “CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks,”  (2017), arXiv:1712.10321 [hep-ex] .
  • Gallicchio and Schwartz (2013) Jason Gallicchio and Matthew D. Schwartz, “Quark and Gluon Jet Substructure,” JHEP 04, 090 (2013)arXiv:1211.7038 [hep-ph] .
  • Aad et al. (2014) Georges Aad et al. (ATLAS), “Light-quark and gluon jet discrimination in collisions at with the ATLAS detector,” Eur. Phys. J. C74, 3023 (2014)arXiv:1405.6583 [hep-ex] .
  • Aad et al. (2016a) Georges Aad et al. (ATLAS), “Measurement of the charged-particle multiplicity inside jets from TeV collisions with the ATLAS detector,” Eur. Phys. J. C76, 322 (2016a)arXiv:1602.00988 [hep-ex] .
  • Chatrchyan et al. (2013) Serguei Chatrchyan et al. (CMS), “Identification of b-quark jets with the CMS experiment,” JINST 8, P04013 (2013)arXiv:1211.4462 [hep-ex] .
  • CMS Collaboration (2013a) CMS Collaboration, “Performance of quark/gluon discrimination in 8 TeV pp data,” CMS-PAS-JME-13-002  (2013a).
  • Khachatryan et al. (2014) Vardan Khachatryan et al. (CMS), “Identification techniques for highly boosted W bosons that decay into hadrons,” JHEP 12, 017 (2014)arXiv:1410.4227 [hep-ex] .
  • Collaboration (2014) CMS Collaboration, “Boosted Top Jet Tagging at CMS,” CMS-PAS-JME-13-007  (2014).
  • Aad et al. (2016b) Georges Aad et al. (ATLAS), “Performance of -Jet Identification in the ATLAS Experiment,” JINST 11, P04008 (2016b)arXiv:1512.01094 [hep-ex] .
  • Aad et al. (2016c) Georges Aad et al. (ATLAS), “Identification of boosted, hadronically decaying W bosons and comparisons with ATLAS data taken at TeV,” Eur. Phys. J. C76, 154 (2016c)arXiv:1510.05821 [hep-ex] .
  • Aad et al. (2016d) Georges Aad et al. (ATLAS), “Identification of high transverse momentum top quarks in collisions at = 8 TeV with the ATLAS detector,” JHEP 06, 093 (2016d)arXiv:1603.03127 [hep-ex] .
  • Hernández-González et al. (2016) Jerónimo Hernández-González, Iñaki Inza,  and Jose A Lozano, “Weak supervision and other non-standard classification problems: a taxonomy,” Pattern Recognition Letters 69, 49–55 (2016).
  • Dery et al. (2017) Lucio Mwinmaarong Dery, Benjamin Nachman, Francesco Rubbo,  and Ariel Schwartzman, “Weakly Supervised Classification in High Energy Physics,” JHEP 05, 145 (2017)arXiv:1702.00414 [hep-ph] .
  • Metodiev et al. (2017) Eric M. Metodiev, Benjamin Nachman,  and Jesse Thaler, “Classification without labels: Learning from mixed samples in high energy physics,” JHEP 10, 174 (2017)arXiv:1708.02949 [hep-ph] .
  • Cohen et al. (2017) Timothy Cohen, Marat Freytsis,  and Bryan Ostdiek, “(Machine) Learning to Do More with Less,”  (2017), arXiv:1706.09451 [hep-ph] .
  • ATLAS Collaboration (2016) ATLAS Collaboration, “Discrimination of Light Quark and Gluon Jets in collisions at TeV with the ATLAS Detector,” ATLAS-CONF-2016-034  (2016).
  • CMS Collaboration (2013b) CMS Collaboration, “Performance of quark/gluon discrimination in 8 TeV pp data,” CMS-PAS-JME-13-002  (2013b).
  • CMS Collaboration (2016) CMS Collaboration, “Performance of quark/gluon discrimination in 13 TeV data,” CMS-DP-2016-070  (2016).
  • CMS Collaboration (2017b) CMS Collaboration, “Jet algorithms performance in 13 TeV data,” CMS-PAS-JME-16-003  (2017b).
  • ATLAS Collaboration (2017c) ATLAS Collaboration, “Quark versus Gluon Jet Tagging Using Charged Particle Multiplicity with the ATLAS Detector,” ATL-PHYS-PUB-2017-009  (2017c).
  • Gras et al. (2017) Philippe Gras, Stefan Höche, Deepak Kar, Andrew Larkoski, Leif Lönnblad, Simon Plätzer, Andrzej Siódmok, Peter Skands, Gregory Soyez,  and Jesse Thaler, “Systematics of quark/gluon tagging,” JHEP 07, 091 (2017)arXiv:1704.03878 [hep-ph] .
  • Blanchard et al. (2016) Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi,  and Clayton Scott, “Classification with asymmetric label noise: Consistency and maximal denoising,” Electron. J. Statist. 10, 2780–2824 (2016).
  • Cranmer et al. (2015) Kyle Cranmer, Juan Pavez,  and Gilles Louppe, “Approximating Likelihood Ratios with Calibrated Discriminative Classifiers,”  (2015), arXiv:1506.02169 [stat.AP] .
  • Sjöstrand et al. (2008) Torbjorn Sjöstrand, Stephen Mrenna,  and Peter Z. Skands, “A Brief Introduction to PYTHIA 8.1,” Comput. Phys. Commun. 178, 852–867 (2008)arXiv:0710.3820 [hep-ph] .
  • Cacciari et al. (2008) Matteo Cacciari, Gavin P. Salam,  and Gregory Soyez, “The Anti-k(t) jet clustering algorithm,” JHEP 04, 063 (2008)arXiv:0802.1189 [hep-ph] .
  • Cacciari et al. (2012) Matteo Cacciari, Gavin P. Salam,  and Gregory Soyez, “FastJet User Manual,” Eur. Phys. J. C72, 1896 (2012)arXiv:1111.6097 [hep-ph] .
  • Chollet (2015) F. Chollet, “Keras,”  (2015).
  • Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al., “Tensorflow: A system for large-scale machine learning.” in OSDI, Vol. 16 (2016) pp. 265–283.
  • Kingma and Ba (2014) Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980  (2014).
  • Clevert et al. (2015) Djork-Arné Clevert, Thomas Unterthiner,  and Sepp Hochreiter, “Fast and accurate deep network learning by exponential linear units (elus),” arXiv preprint arXiv:1511.07289  (2015).
  • Gallicchio and Schwartz (2011) Jason Gallicchio and Matthew D. Schwartz, “Pure Samples of Quark and Gluon Jets at the LHC,” JHEP 10, 103 (2011)arXiv:1104.1175 [hep-ph] .
  • Nair and Hinton (2010) Vinod Nair and Geoffrey E Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th international conference on machine learning (ICML-10) (2010) pp. 807–814.