A multi-instance deep neural network classifier: application to Higgs boson CP measurement.
We investigate properties of a classifier applied to the measurement of the CP state of the Higgs boson in H -> tau tau decays. The problem is first framed as a binary classifier applied to individual instances. Then the prior knowledge that all instances belong to the same class is used to define a multi-instance classifier, whose final score is calculated from the product of the single-instance scores for a given series of instances. In the paper we discuss properties of such a classifier, notably its dependence on the number of instances in the series. The classifier exhibits a strong random dependence on the number of epochs used for training and requires careful tuning of the classification threshold. We derive a formula for this optimal threshold.
Keywords: deep learning, classifiers, neural networks.
Deep Neural Networks (DNNs) have been shown to work very well across many different domains, including image classification, machine translation and speech recognition [1]. Recently, they have also been finding their place in applications to very demanding classification problems in High Energy Physics (HEP) [2, 3].
In this paper we present a further development of the DNN application reported in [4], where the possibility of measuring the CP state of the Higgs boson produced in pp collisions at the LHC, using a DNN trained on Monte Carlo data, was investigated. The problem was defined as a binary classification problem with the goal of distinguishing between two different CP states of the Higgs boson. The solution presented in [4] concentrated on quantifying the performance of a single-instance classifier, i.e. on predicting the probability that a given instance is a CP scalar object, with the alternative hypothesis of being a pseudo-scalar or a mixed state. As a measure of the performance, the area under the receiver operating characteristic curve (AUC) [5] was used.
What was not explicitly explored in [4] is the prior knowledge that, according to the considered model, only one of the different CP states can be realized in nature, so a sample of multiple instances will all belong to one class only. This discussion is the subject of the work presented here.
Classification is probably one of the most common machine learning (ML) tasks. One possible approach is to use a Bayesian classifier. This amounts to calculating or estimating, for each category, the conditional probability that, given the input variables (features) x, the analyzed instance belongs to that category.
In this paper we will consider only the case of binary classifiers, with the two categories denoted as A and B. Classification then consists of comparing the probability p(A|x) with some threshold t. We will call a classifier using a single instance a single-instance classifier. In practice, one often cannot find a set of features which cleanly separates the two categories.
There are, however, cases when one can assume that a sample of N instances, denoted by x_1, ..., x_N, consists of instances all belonging to the same category. We can then use this information to greatly increase the accuracy of the classification. This can be achieved by calculating the probability p(A|x_1, ..., x_N) that, given the features, all instances in the sample belong to category A. We will call the classifier estimating this probability a multi-instance classifier.
In this paper we will discuss how to calculate the properties of the multi-instance classifier, notably its dependence on the sample size N, from the properties of the single-instance classifier. We will also propose how to choose the optimal threshold to ensure that the predictions of the multi-instance classifier are regularised, i.e. not too sensitive to the number of epochs used for training the DNN.
Without going into details of the nature of the problem and its practical importance, let us briefly recall that we are discussing a measurement of the properties of the Higgs boson, recently discovered by the experiments at the CERN LHC proton-proton collider. This resonance, searched for over decades by HEP experiments, is evidence of the mechanism explaining, within the context of the so-called Standard Model, how elementary particles acquire their masses. The presently available statistics of the Higgs boson samples allow ML techniques to be explored to measure quite precisely its internal properties, like the spin and CP state, crucial to support that the observed resonance is indeed the Higgs boson of the Standard Model. The case studied here is quite challenging: the Higgs boson decays into two tau leptons, H -> tau tau, each of them decaying further into objects which carry, in the correlations between their directions, information about the CP state of the initial resonance we are interested in. So the goal of the DNN algorithm is to identify those correlations in the multi-dimensional phase-space and use them for classifying the instances as belonging to one or the other category allowed by the model.
For the case studied here we use the same Monte Carlo data as in [4], but we consider only one classification case, a single decay channel of the tau-lepton pair. The problem is defined as classification based on 28 input variables in total, namely the 7 outgoing tau-pair decay products, each represented by a 4-vector in energy-momentum phase-space. We do not build any functional features out of those variables but use the 4-vector representation directly, however in a frame which, after a boost and rotation from the laboratory frame, removes trivial symmetries from the system, so that they do not have to be rediscovered by the DNN. This choice of frame was discussed in detail in [4] and is motivated by the nature of the problem. One should note that those variables are not independent because of the kinematic constraints. Some of them are also not detected experimentally, so the set of 28 variables for each instance represents an idealistic scenario. We will call this set the complete data set. We also consider a more realistic scenario, including only the momenta of the particles which can be detected experimentally, i.e. removing the neutrinos from the list. This gives 20 variables in total for each instance. This more realistic scenario was also used in [4]. We will call this set the incomplete data set.
Because in this paper we are interested in the methodology and properties of the classifiers rather than in the physics problem itself, as a case study we consider the discrimination between two possible scenarios: a CP scalar state, which we will denote as A, vs. a mixed scalar-pseudoscalar state with a fixed mixing angle, which we will denote as B.
The available statistics of the Monte Carlo (MC) data is approximately four million instances. Each instance, defined by the momenta of all the decay products, has two weights associated with it, denoted by w_A and w_B. Those weights are proportional to the probability that a given instance is of class A or B, respectively. They are calculated by the Monte Carlo program used for simulating the physics model of interest and depend on the input variables used in the complete data set. In the case of the incomplete data set the weights w_A and w_B are still calculated using all variables, but as the input to the DNN is missing some of those, the weights no longer represent a unique function of the inputs.
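The per-instance weights translate into target class probabilities by simple normalization; a minimal sketch (function and argument names are ours, not from the paper's code):

```python
def target_probabilities(w_a, w_b):
    """Turn the two MC weights of an instance into class probabilities:
    p(A|x) = w_A / (w_A + w_B), and analogously for category B."""
    total = w_a + w_b
    return w_a / total, w_b / total

# e.g. weights (1.2, 0.8) give targets (0.6, 0.4)
p_a, p_b = target_probabilities(1.2, 0.8)
```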
3 Binary classification
When the conditional probability p(A|x) is available, the classification depends on a single threshold parameter t. We classify an instance as belonging to category A when

p(A|x) > t,

and to category B otherwise.
In practice one usually cannot find a set of features which cleanly separates the two categories, and as a consequence one is faced with misclassification errors. Those errors are usually quantified by two metrics: the true positive rate (TPR) and the false positive rate (FPR).
The TPR and FPR values depend on the threshold parameter t. If we consider (FPR, TPR) as a point on the plane, varying t will trace out a curve known as the Receiver Operating Characteristic (ROC) curve [5].
If the probability distributions are known, the TPR and FPR can be calculated as follows:

TPR(t) = int theta( p(A|x) - t ) rho_A(x) dx,
FPR(t) = int theta( p(A|x) - t ) rho_B(x) dx,

where theta denotes the Heaviside function, equal to zero if its argument is negative and one otherwise. Here rho_A(x) is the probability density of the variables x in category A, and similarly rho_B(x) for category B.
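On a finite sample the Heaviside-function integrals reduce to counting instances above the threshold; a sketch (function name is ours) operating on lists of classifier scores:

```python
def tpr_fpr(scores_a, scores_b, threshold):
    """Approximate the Heaviside-function integrals by sample averages.
    scores_a / scores_b hold the classifier outputs p(A|x) for instances
    drawn from category A / B respectively."""
    tpr = sum(s > threshold for s in scores_a) / len(scores_a)
    fpr = sum(s > threshold for s in scores_b) / len(scores_b)
    return tpr, fpr
```

Sweeping `threshold` from 0 to 1 and collecting the (FPR, TPR) pairs traces out the ROC curve.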
The area under the ROC curve (the AUC score) is another measure of the quality of the classifier. It is equal to the probability that a positive (A) instance will be rated higher than a negative (B) instance [5]. An AUC score equal to one half corresponds to random classification, and a value of one indicates a perfect classifier.
4 Single instance classifier
We started by training a DNN with inputs x and target outputs defined as

y = ( p(A|x), p(B|x) ).

The probability that an instance belongs to category A is then just the first component of the DNN output, p~(A|x).
We use cross-entropy for the loss function,

LOSS = - (1/N_train) sum_i [ p(A|x_i) log p~(A|x_i) + p(B|x_i) log p~(B|x_i) ],

where N_train is the number of instances used for training. The tilde over a variable denotes the DNN output.
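The loss above can be sketched in a few lines of plain Python (names are ours); targets and outputs are pairs of probabilities per instance:

```python
import math

def cross_entropy(targets, outputs):
    """Average cross-entropy between the target probabilities
    y = (p(A|x), p(B|x)) and the DNN outputs, summed over the two
    output nodes and averaged over the training instances."""
    loss = 0.0
    for y, y_tilde in zip(targets, outputs):
        loss -= sum(t * math.log(o) for t, o in zip(y, y_tilde))
    return loss / len(targets)
```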
For training we use the Keras framework [6] with the TensorFlow [7] backend. We use a seven-layer DNN with a softmax activation on the last layer. The remaining layers use the PReLU rectifier [8] as well as batch normalization [9]. This model is similar to the one proposed in [4]. The code defining the model and its implementation is shown in Listing 1.
Let us first discuss the complete data set. In this case the outputs are a function (in the mathematical sense) of the inputs, i.e. the outputs are uniquely determined by the inputs. The best possible LOSS and AUC score can be calculated directly from the MC data sample using the approximation

p(A|x_i) ~= w_A(x_i) / ( w_A(x_i) + w_B(x_i) ),    (8)

and similarly for p(B|x_i), yielding 0.615 for the AUC score. This is consistent with the value reported in Table 2 (top-right column) of [4] for the same data set and an equivalent case. The evolution of the LOSS and AUC score as a function of the number of epochs used for the neural-network training, with dropout applied, is shown in Figure 1. As can be seen from those plots, the network gets close to the best possible AUC score (0.615) and does not overfit.
Let us now move to the incomplete data set. In this case the mapping from inputs to outputs is not unique, i.e. not a function in the mathematical sense. We tried the same network architecture and dropout, but the network significantly overfitted. This is likely due to the fact that the data in this case look much more noisy, as the same set of input features potentially describes several different instances and so corresponds to different values of the weights. To fix this issue we increased the dropout rate and, after some trials, settled on a larger value. The achieved performance scores are much lower than for the complete data set, with a plateau AUC score of about 0.535 (instead of 0.615), but this is to be expected. The nature of the problem is that in the incomplete data set we removed some features (4-vectors) which are essential for picking up the correlations allowing the discrimination between models A and B. The DNN cannot learn the original mapping, as the outputs are no longer a unique function of the input variables, and the network performs some form of averaging. In this case we cannot calculate the maximal possible AUC score directly from the data, as we have no practical means to estimate the conditional probabilities for the reduced variable set.
So far we have considered only the AUC score as a quantitative metric of the classifier performance. In practice we might be more interested in the actual true and false positive rates. In Figure 3 we show the corresponding ROC curves for four arbitrarily chosen numbers of epochs used for training the DNN.
As we can see, all curves fall on top of each other, which is consistent with the fact that the AUC score does not change much between epochs after the plateau is reached, as shown in Figure 2 (bottom). We have also marked in the same plot the true and false positive rates corresponding to the same fixed threshold for each of those networks, and they do not coincide. This indicates that the classification threshold should be adjusted further, individually for each network (labeled by the number of epochs used for training), to make the predictions less sensitive to small variations due to the number of epochs used in the training. In the next section we will discuss how to calculate the optimal threshold.
5 Multiple instance classification
As discussed in the previous section, a single-instance classifier cannot reliably distinguish between the two categories, the best AUC score being only about 0.535 for a single instance and the incomplete data set. However, as we are discussing a problem where all instances must belong to the same category, we can increase the accuracy of the classification by simultaneously interpreting the classification results of sequential instances. We will discuss this below for the incomplete data set case.
Multi-instance classification requires calculating the probability p(A|x_1, ..., x_N). Because the instances are independent, the straightforward formula reads

p(A|x_1, ..., x_N) = prod_i p~(A|x_i) / [ prod_i p~(A|x_i) + prod_i p~(B|x_i) ],    (9)

i.e. the product of the single-instance scores, normalized by the sum of the products for the two categories.
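The product of single-instance scores in formula (9) is best accumulated in log space, since for long series the raw products underflow; a sketch (function name is ours) assuming p~(B|x_i) = 1 - p~(A|x_i):

```python
import math

def multi_instance_score(scores_a):
    """Multi-instance score: the product of single-instance scores
    p(A|x_i), normalized against the product of p(B|x_i) = 1 - p(A|x_i).
    Accumulating logs avoids numerical underflow for long series."""
    log_a = sum(math.log(p) for p in scores_a)
    log_b = sum(math.log(1.0 - p) for p in scores_a)
    # p(A|x_1..x_N) = prod_a / (prod_a + prod_b), computed stably
    m = max(log_a, log_b)
    num = math.exp(log_a - m)
    return num / (num + math.exp(log_b - m))
```

Note how a per-instance score only slightly above 0.5 is amplified towards 1 as the series grows.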
Similarly to the single-instance case, multi-instance classification consists of comparing p(A|x_1, ..., x_N) to some threshold value t_N. In Figure 4 we show results for the true and false positive rates corresponding to a fixed threshold, for four different trained DNNs and as a function of the sample size N used in formula (9).
As already observed for the single-instance classifier at a fixed threshold value, both the TPR and FPR vary even between DNNs trained with numbers of epochs differing by one. This is no surprise: the differences shown in Fig. 3 get magnified when multiplying the probabilities in formula (9). The spread of the predicted TPR for a series of 200 instances is as large as between 70% and 90% for numbers of epochs between 82 and 94. We can regularize this effect by choosing the optimal classification threshold, as indicated in the previous section.
5.1 True and false positive rates
The condition of the multi-instance classifier applied to a series of N instances,

p~(A|x_1, ..., x_N) > t_N,

is equivalent to imposing a condition on the single-instance classifier outputs,

sum_i log( p~(A|x_i) / p~(B|x_i) ) > log( t_N / (1 - t_N) ),

which can be rewritten as

sum_i [ log( p~(A|x_i) / p~(B|x_i) ) - c ] > 0,   with   c = (1/N) log( t_N / (1 - t_N) ).

From this we can write the expressions for the true and false positive rates.
Those formulas can be interpreted as

TPR_N = P( sum_i (X_i - c) > 0 ),   X_i ~ rho_A,

where the variable X is defined as

X = log( p~(A|x) / p~(B|x) )

and X ~ rho_A denotes that X is evaluated for instances x distributed with the density rho_A(x). Similarly,

FPR_N = P( sum_i (X_i - c) > 0 ),   X_i ~ rho_B.
Using the central limit theorem we obtain that asymptotically

sum_i X_i ~ N( N mu, sqrt(N) sigma ),

where N(mu, sigma) denotes the normal (Gaussian) distribution with mean mu and standard deviation sigma. The means and standard deviations of X for the two categories can be calculated with the formulas

mu_{A,B} = int X rho_{A,B}(x) dx,
sigma_{A,B}^2 = int X^2 rho_{A,B}(x) dx - mu_{A,B}^2.

Those integrals can be approximated from the data using (8).
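In practice the moments can be estimated directly from the classifier scores of each category; a minimal sketch (function name is ours), taking X = log(p/(1-p)) for scores p = p~(A|x):

```python
import math

def log_ratio_moments(scores):
    """Estimate the mean and standard deviation of the variable
    X = log(p / (1 - p)) from a sample of single-instance scores
    p = p~(A|x) belonging to one category."""
    xs = [math.log(p / (1.0 - p)) for p in scores]
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, math.sqrt(var)
```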
The probability that a normally distributed variable X ~ N(mu, sigma) is greater than zero is

P(X > 0) = Phi( mu / sigma ),

where Phi denotes the cumulative distribution function of the standard normal distribution. Adding a constant to a normally distributed variable only shifts its mean, so finally we can estimate

TPR_N ~= Phi( sqrt(N) (mu_A - c) / sigma_A ),
FPR_N ~= Phi( sqrt(N) (mu_B - c) / sigma_B ).
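These central-limit predictions are straightforward to evaluate numerically; a sketch (names are ours), with mu/sigma the per-category moments of log(p~(A|x)/p~(B|x)) and c derived from the threshold t_N:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def predicted_rates(mu_a, sigma_a, mu_b, sigma_b, n, threshold):
    """Gaussian (central-limit) prediction of the multi-instance TPR and
    FPR for a series of n instances compared against threshold t_N."""
    c = math.log(threshold / (1.0 - threshold)) / n
    tpr = phi(math.sqrt(n) * (mu_a - c) / sigma_a)
    fpr = phi(math.sqrt(n) * (mu_b - c) / sigma_b)
    return tpr, fpr
```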
We have used those formulas to plot the predictions for the TPR and FPR as functions of N, at a fixed threshold, in Figure 4. The agreement between the TPR and FPR estimated from the formulas and those measured for networks trained with different numbers of epochs is very good, even for very low values of N where the central limit theorem is not expected to hold.
From the above it follows that we can consider the quantities

mu_A / sigma_A   and   mu_B / sigma_B

as a measure of the quality of the multi-instance classifier. In particular, if mu_A > 0 and mu_B < 0, the classifier will converge to a perfect classifier for large N and any fixed threshold t_N, since then c -> 0 as N grows.
5.2 Optimal threshold
To find the optimal threshold for each network we need some criterion. We will use the minimization of the total number of misclassifications, given by

(1 - TPR_N) + FPR_N,

but of course any other combination is possible. Inserting the formulas derived in the previous section we obtain

1 - Phi( sqrt(N) (mu_A - c) / sigma_A ) + Phi( sqrt(N) (mu_B - c) / sigma_B ),

giving, after differentiating with respect to c, the equation for the optimal c:

(1/sigma_A) phi( sqrt(N) (mu_A - c) / sigma_A ) = (1/sigma_B) phi( sqrt(N) (mu_B - c) / sigma_B ),

where phi denotes the standard normal density.
We have found that in all cases sigma_A ~= sigma_B, so we set sigma_A = sigma_B, which simplifies the equation for the optimal per-instance cut to c* = (mu_A + mu_B) / 2.
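Under this equal-sigma simplification the optimal cut is just the midpoint of the two means, and it translates back into a threshold on the multi-instance score; a sketch (function name is ours):

```python
import math

def optimal_threshold(mu_a, mu_b, n):
    """Optimal multi-instance threshold under the equal-sigma
    simplification: the per-instance cut is the midpoint
    c* = (mu_a + mu_b) / 2 of the two means of log(p~(A|x)/p~(B|x)),
    mapped back to t_N via log(t_N / (1 - t_N)) = n * c*."""
    c_star = 0.5 * (mu_a + mu_b)
    return 1.0 / (1.0 + math.exp(-n * c_star))
```

For symmetric means (mu_b = -mu_a) this reduces to t_N = 0.5, as one would expect.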
In Figure 5 we show the TPR and FPR estimated using the optimal thresholds and, as expected, the results for the different networks now coincide.
5.3 AUC score
In the same way we can calculate the AUC score for the multi-instance classifier. The defining formula in this case takes the form

AUC_N = P( sum_i X_i^A > sum_j X_j^B ),

which after some manipulations can be rewritten as

AUC_N = P( sum_i X_i^A - sum_j X_j^B > 0 ).

As described in the previous section, those sums can be approximated by Gaussian random variables. Their difference is then also a Gaussian variable, so finally

AUC_N ~= Phi( sqrt(N) (mu_A - mu_B) / sqrt( sigma_A^2 + sigma_B^2 ) ).
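The resulting closed-form AUC prediction is a one-liner; a sketch (function name is ours), with the moments as defined above:

```python
import math

def predicted_auc(mu_a, sigma_a, mu_b, sigma_b, n):
    """Gaussian prediction for the multi-instance AUC score: the
    probability that the summed log score-ratio of an A series exceeds
    that of a B series. The difference of the two Gaussian sums has
    mean n * (mu_a - mu_b) and variance n * (sigma_a**2 + sigma_b**2)."""
    z = math.sqrt(n) * (mu_a - mu_b) / math.sqrt(sigma_a**2 + sigma_b**2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

The formula makes the N-dependence explicit: the AUC grows towards one with sqrt(N) whenever mu_A > mu_B.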
As can be seen in Figure 6, the resulting formula is accurate down to small values of N.
6 Summary and discussion
In this paper we investigated the properties of a multi-instance classifier applied to the measurement of the CP state of the Higgs boson in H -> tau tau decays. The problem was framed as a binary classifier applied to individual instances. Then the prior knowledge that all instances belong to the same class was used to define the multi-instance classifier, whose final score was calculated from the product of single-instance scores for a given series of instances. We discussed the properties of such a classifier and derived a formula for the optimal threshold which, when applied to the single-instance classifier, regularises (stabilises) the FPR, TPR and AUC curves of the multi-instance classifier.
Taking as an example the problem of measuring the CP state of the Higgs boson in the H -> tau tau channel, we have shown that for the realistic scenario considered in [4], starting from AUC = 0.535 for the single-instance classifier, we can reach AUC = 0.95 for the multi-instance classifier after analysing a series of N = 200 instances, or close to 0.85 for N = 100. This result is quite stable against variations of the single-instance classifier due to e.g. slight differences in the number of epochs used for the training. This stability is achieved thanks to introducing the optimal classification threshold for the single-instance classifier.
D. Nemeth and E. Richter-Was were supported in part from funds of the Polish National Science Centre under decision UMO-2014/15/ST2/00049 and by the PLGrid Infrastructure of the Academic Computer Centre CYFRONET AGH in Krakow, Poland, where the majority of the Monte Carlo calculations were performed.
[1] Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature 521 (2015) no. 7553, 436-444.
[2] A. J. Larkoski, I. Moult and B. Nachman, Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning, arXiv:1709.04464.
[3] E. M. Metodiev, B. Nachman and J. Thaler, Classification without labels: Learning from mixed samples in high energy physics, JHEP 10 (2017) 174.
[4] R. Józefowicz, E. Richter-Was and Z. Was, Potential for optimizing the Higgs boson CP measurement in H -> tau tau decays at the LHC including machine learning techniques, Phys. Rev. D 94 (2016) no. 9, 093001 [arXiv:1608.02609 [hep-ph]].
[5] T. Fawcett, An introduction to ROC analysis, Pattern Recognition Letters 27 (2006) no. 8, 861-874.
[6] F. Chollet et al., Keras, https://github.com/fchollet/keras (2015).
[7] M. Abadi et al., TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, http://tensorflow.org/ (2015).
[8] K. He, X. Zhang, S. Ren and J. Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, 1026-1034.
[9] S. Ioffe and C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, Proceedings of the 32nd International Conference on Machine Learning (ICML'15), Vol. 37, JMLR.org, 448-456.