Hierarchical clustering with deep Q-learning
Abstract
The reconstruction and analysis of high energy particle physics data is just as important as the analysis of the structure of real world networks. In a previous study it was explored how hierarchical clustering algorithms can be combined with jet cluster algorithms to provide a more generic clustering method. Building on that, this paper explores the possibilities of involving deep learning in the process of cluster computation, by applying reinforcement learning techniques. The result is a model that, by learning on a modest dataset of nodes over a number of epochs, can reach good precision in predicting the appropriate clusters.
1 Introduction
Different datasets should be clustered with specific approaches. For real world networks, hierarchical algorithms like the Louvain method provide an efficient way to produce the clusters. By fusing some aspects of these processes with jet clustering, a more generic process can be conceived, as was studied in [26]. This solution might prove very useful for heavy ion physics, where jet physics plays an important role. The contribution of this paper is a deep learning method that uses reinforcement learning to teach an artificial neural network how to cluster the input graphs without any external user interaction. The evaluation is provided on real world network data that conforms to the original Louvain method's properties, so a more thorough examination is possible. Looking at the results, the neural network is capable of achieving good average precision on the test dataset, even when trained for only a limited number of epochs.
2 Hierarchical clustering
This section contains a brief introduction of the used hierarchical clustering algorithm and of the jet algorithms from physics.
2.1 The Louvain algorithm
The Louvain method [5] is a multi-phase, iterative, greedy hierarchical clustering algorithm working on undirected, weighted graphs. The algorithm proceeds through multiple phases, with multiple iterations within each phase, until a convergence criterion is met. Its parallelization was explored in [13], which was further evolved into a GPU based implementation as detailed in [9]. The modularity is a monotonically increasing function, spreading across multiple iterations, giving a numerical representation of the quality of the clusters. Because the modularity is monotonically increasing, the process is guaranteed to terminate. Running on a real world dataset, termination is achieved in no more than a dozen iterations.
2.1.1 Modularity
On a set $\mathcal{P}$ containing every community in a given partitioning of $V$, where $G = (V, E)$ and $V$ is the set of nodes, modularity is given by the following [15]:

$$Q = \frac{1}{2m}\sum_{i \in V} e_{i \rightarrow C(i)} - \sum_{C \in \mathcal{P}} \left(\frac{a_C}{2m}\right)^2 \tag{1}$$

where $e_{i \rightarrow C(i)}$ is the sum of the weights of the edges connecting node $i$ to its own community $C(i)$, $a_C$ is the sum of the degrees of all the nodes in community $C$ and $m$ is the sum of the weights of all the edges.
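A minimal sketch of this computation in plain Python (the function and the graph representation are illustrative, not taken from the paper's implementation):

```python
def modularity(edges, community):
    """edges: list of (u, v, w) for an undirected graph; community: node -> id."""
    m = float(sum(w for _, _, w in edges))        # total edge weight
    internal = {}                                 # edge weight inside a community
    degree = {}                                   # summed weighted degree a_C
    for u, v, w in edges:
        degree[community[u]] = degree.get(community[u], 0.0) + w
        degree[community[v]] = degree.get(community[v], 0.0) + w
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0.0) + w
    # Q = sum_C [ in_C / m - (a_C / 2m)^2 ]
    return sum(internal.get(c, 0.0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)
```

For two disconnected triangles, each placed in its own community, this yields the well-known value 0.5, while putting every node into one community yields 0.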
2.2 Jet algorithm
During the last 40 years several jet reconstruction algorithms have been developed for hadronic colliders [24][2]. The first ever jet algorithm was published by Sterman and Weinberg in the 1970's [18]. The cone algorithm plays an important role when a jet consists of a large amount of hadronic energy in a small angular region. It is based on combining particles with their neighbours in space within a cone of radius $R$. The sequential recombination cluster algorithms, in contrast, combine the pairs of objects that are closest according to a distance measure: the particles merge into a new cluster through successive pair recombination. The starting point for clustering is the lowest momentum particles in the $k_t$ algorithm, but in the anti-$k_t$ recombination algorithm it is the highest momentum particles.
Jet clustering involves the reconstructed momenta of the particles that leave the calorimeter, together with values refined by the tracker system.
2.2.1 Cone algorithm
The Cone algorithm is one of the regularly used methods at the hadron colliders. The main steps of the iteration are the following [18]: a seed particle $i$ sets the initial direction, and it is necessary to sum up the momenta of all particles $j$ situated in a circle of radius $R$ around $i$ in the $(y, \phi)$ plane, where $y$ and $\phi$ are the rapidity and azimuth of particle $j$.
The direction of the sum is applied as a new seed direction. The iteration procedure is repeated as long as the direction of the determined cone is stable.
It is worth noting what happens when two seed cones overlap during the iteration. Two different groups of cone algorithms are discussed:
One possible solution is to select as the first seed the particle that has the greatest transverse momentum, find the corresponding stable cone, i.e. jet, and delete from the event the particles that were included in the jet. Then a new seed is chosen as the hardest of the remaining particles, and the search is applied to find the next jet. The procedure is repeated until no unassigned particles remain. This method avoids overlapping cones.
The other possibility is the so called "overlapping" cones approach with the split-merge procedure. First, all the stable cones determined by iterating from all particles are found, so the same particle may appear in multiple cones. The split-merge procedure can then be used to consider combining pairs of overlapping cones: if more than a given fraction of the transverse momentum of the softer cone derives from particles shared with the harder one, the two cones are merged; otherwise the common particles are assigned to the cone that is closer to them. The split-merge procedure starts from an initial list of protojets, which contains the full list of stable cones:
1. Take the protojet with the largest transverse momentum (i.e. the hardest protojet), label it $a$.
2. Search for the next hardest protojet that shares particles with $a$ (i.e. overlaps), label it $b$. If no such protojet exists, delete $a$ from the list of protojets and add it to the list of final jets.
3. Determine the total transverse momentum of the particles shared between the two protojets, $p_T^{shared}$.
4. If $p_T^{shared} > f \cdot p_T^{b}$, where $f$ is a free parameter called the overlap threshold, replace protojets $a$ and $b$ with a single merged protojet.
5. Otherwise the protojets are split, for example by assigning the shared particles to the protojet whose axis is closer.
6. Repeat from step 1 as long as there are protojets left.
A similar procedure to the split-merge method is the so called split-drop procedure, where the non-shared particles that fall into the softer of two overlapping cones are dropped, i.e. are deleted from the jets altogether.
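The split-merge steps above can be sketched as follows (a hypothetical illustration: protojets are sets of particle indices, `pt` maps a particle to its transverse momentum, and the `closer_to_first` callback stands in for the axis-distance comparison of step 5):

```python
def split_merge(protojets, pt, closer_to_first, f=0.5):
    jets = []
    protojets = [set(p) for p in protojets]
    while protojets:
        # step 1: take the hardest protojet
        protojets.sort(key=lambda p: sum(pt[i] for i in p), reverse=True)
        a = protojets.pop(0)
        # step 2: the next hardest protojet sharing particles with a
        b = next((p for p in protojets if p & a), None)
        if b is None:
            jets.append(a)                       # no overlap: a is a final jet
            continue
        # step 3: total transverse momentum shared between the two protojets
        pt_shared = sum(pt[i] for i in (a & b))
        if pt_shared > f * sum(pt[i] for i in b):
            # step 4: merge a and b into a single protojet
            protojets.remove(b)
            protojets.append(a | b)
        else:
            # step 5: split, assigning each shared particle to the closer axis
            protojets.remove(b)
            for i in a & b:
                (b if closer_to_first(i, a, b) else a).discard(i)
            protojets.extend([a, b])
    return jets
```

With a low threshold two overlapping protojets merge into one jet; with a high threshold they are split and survive as separate jets.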
2.2.2 Sequential recombination jet algorithm
Sequential recombination algorithms go beyond just finding jets and implicitly assign a clustering sequence to an event, which is often closely connected with the approximate probabilistic pictures one may have for parton branching. The current work focuses on the $k_t$ algorithm, whose parallelisation was studied in [11] and [25].
The $k_t$ algorithm for hadrons
In the case of proton-proton collisions, variables that are invariant under longitudinal boosts are applied. These quantities were introduced by [27], and the longitudinally invariant distance measures are the following:

$$d_{ij} = \min\left(p_{Ti}^{2p}, p_{Tj}^{2p}\right)\frac{\Delta R_{ij}^2}{R^2}, \qquad \Delta R_{ij}^2 = (y_i - y_j)^2 + (\phi_i - \phi_j)^2 \tag{2}$$

$$d_{iB} = p_{Ti}^{2p} \tag{3}$$

where $p_{Ti}$, $y_i$ and $\phi_i$ are the transverse momentum, rapidity and azimuth of particle $i$, $R$ is the radius parameter and $d_{iB}$ is the particle-beam distance.
In this definition the two beam jets are not distinguished.
If $p = -1$, this gives the "anti-$k_t$" algorithm. In this case the clustering starts from hard particles instead of soft particles, therefore the jets grow outwards around hard seeds. Because the algorithm depends on the energy and angle through the distance measure, collinear branchings will be clustered at the beginning of the sequence.
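As an illustration, the distance measures of equations 2 and 3 can be written down directly (a hedged sketch; the parameter names and defaults are assumptions, with $p = 1$ giving the $k_t$ and $p = -1$ the anti-$k_t$ behaviour):

```python
import math

def d_ij(pt_i, y_i, phi_i, pt_j, y_j, phi_j, p=-1, R=0.4):
    """Pairwise distance of equation 2; y is rapidity, phi is azimuth."""
    dphi = abs(phi_i - phi_j)
    dphi = min(dphi, 2 * math.pi - dphi)         # azimuth wraps around
    dR2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** (2 * p), pt_j ** (2 * p)) * dR2 / R ** 2

def d_iB(pt_i, p=-1):
    """Particle-beam distance of equation 3."""
    return pt_i ** (2 * p)
```

With $p = -1$ the minimum is dominated by the harder particle, which is exactly what makes hard particles act as clustering seeds.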
Hierarchical clustering
In [26] it was studied how to do hierarchical clustering following the rules of the $k_t$ algorithm. First the list of particles has to be transformed into a graph, with the particles themselves appointed as nodes. The distance between the elements is a suitable selection for the weight of the edges between adjacent particles. But as connecting every pair eventually leads to $O(N^2)$ links, where $N$ is the number of nodes, a better solution is to make connections only between nearest neighbours and second nearest neighbours. If a particle's nearest "neighbour" is the beam, it will be represented by an isolated node. While the Louvain algorithm relies on modularity gain to drive the computation, the jet clustering variant does not need the modularity calculation, as it is known that the process ends when all particles are assigned to a jet.
The result of this clustering will still be a dendrogram, where the leaves represent the jets.
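A hypothetical sketch of the graph construction described above, linking each particle only to its nearest and second nearest neighbour (the `distance` callback would in practice be the measure of equation 2; here it is an arbitrary parameter):

```python
def build_graph(particles, distance):
    """Return {(i, j): weight} edges of an undirected graph over the particles."""
    edges = {}
    for i in range(len(particles)):
        neighbours = sorted((j for j in range(len(particles)) if j != i),
                            key=lambda j: distance(particles[i], particles[j]))
        for j in neighbours[:2]:                 # nearest and second nearest
            edges[(min(i, j), max(i, j))] = distance(particles[i], particles[j])
    return edges
```

The number of edges stays linear in the number of particles instead of quadratic.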
3 Basic artificial neural networks
Since the beginning of the 1990s artificial neural network (ANN) methods have been employed widely in high energy physics for jet reconstruction and track identification [30][34]. These methods are well known in both offline and online data analysis.
Artificial neural networks are layered networks of artificial neurons (AN), in which biological neurons are modelled. The underlying principle of operation is as follows: each AN receives signals from other ANs or from the environment, gathers these, and creates an output signal which is forwarded to other ANs or to the environment. An ANN contains one input layer, one or more hidden layers and one output layer of ANs. Each AN in a layer is connected to the ANs in the next layer. There are also ANN configurations in which feedback connections to previous layers are introduced.
3.1 Architecture
An artificial neuron is driven by a set of input signals coming from the environment or from other ANs. A weight is assigned to each input signal. If the value of the weight is larger than zero then the signal is excitatory, otherwise the signal is inhibitory. The AN assembles all input signals, determines a net signal and propagates an output signal.
3.1.1 Types of artificial networks
Some features of neural systems that make them most distinct from the properties of conventional computing:

- the associative recognition of complex structures;
- data may be non-complete, inconsistent or noisy;
- the systems can train, i.e. they are able to learn and organize themselves;
- the algorithm and hardware are parallel.
There are many types of artificial neural networks. In high energy particle physics the so-called multilayer perceptron (MLP) is the most widespread. Here a functional mapping from input values $x_i$ to output values is realised with a function of the form

$$y = f\left(\sum_j w_j \, g\left(\sum_i w_{ij} x_i\right)\right),$$

where $w_{ij}$ are the weights between the input layer and the hidden layer, $w_j$ are the weights between the hidden layer and the output layer, and $g$ and $f$ are the activation functions of the hidden and output layers. This type of ANN is called a feed-forward multilayer ANN.
It can be extended with a layer of functional units. In this case an activation function is implemented for the input layer. This ANN type is called a functional link ANN. Its output is similar to that of the previous ANN, except that it has an additional layer containing functions $\phi_j$. The weights between the input layer and the functional layer are fixed to one if $\phi_j$ depends on the given input, and to zero otherwise. The output of this ANN is:

$$y = f\left(\sum_j w_j \, \phi_j(x)\right).$$
The functional link ANNs provide better computational time and accuracy than the simple feed-forward multilayer ANN.
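The feed-forward mapping above can be sketched with NumPy (a hypothetical illustration using tanh hidden units and a logistic output; the concrete activations and weight shapes are assumptions, not from the paper):

```python
import numpy as np

def mlp_forward(x, W_hidden, W_out):
    """One feed-forward pass: hidden activation g, output activation f."""
    g = np.tanh                                   # hidden activation
    f = lambda z: 1.0 / (1.0 + np.exp(-z))        # logistic output activation
    hidden = g(W_hidden @ x)                      # weights input -> hidden
    return f(W_out @ hidden)                      # weights hidden -> output
```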
Application in High-Energy Physics
The first application, published in 1988, discussed a recurrent ANN for track reconstruction [28]. A recurrent ANN was also used for track reconstruction in a LEP experiment [29].
An article was published about a neural network method applied to find an efficient mapping between certain observed hadronic kinematical variables and the quark-gluon identity. With this method it is possible to separate gluon jets from quark jets originating from Monte Carlo generated events [31]. A possible discrimination method is presented by the combination of a neural network and QCD to separate the quark and gluon jets of $e^+e^-$ annihilation [35].
A neural network clusterisation algorithm was applied to the ATLAS pixel detector to identify and split merged measurements created by multiple charged particles [32]. A neural network based cluster reconstruction algorithm can identify overlapping clusters and improve overall particle position reconstruction [33].
Artificial intelligence offers the potential to automate challenging dataprocessing tasks in collider physics. To establish its prospects, it was explored to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets [36].
4 Q-learning
Q-learning is a model-free reinforcement learning technique [40]. The reinforcement learning problem is meant to be learning from interactions to achieve a goal. The learner and decision-maker is called the agent. The thing it interacts with is called the environment, which contains everything in the world surrounding the agent. There is a continuous interaction between the two, in which the agent selects actions and the environment responds by presenting new situations (states) to the agent. The environment also returns rewards, special numerical values that the agent tries to maximize over time. A full specification of an environment defines a task, which is an instance of the reinforcement learning problem. Specifically, the agent and environment interact at each of a sequence of discrete time steps $t = 0, 1, 2, \ldots$. At each time step $t$, the agent receives the environment's state, $S_t \in \mathcal{S}$, where $\mathcal{S}$ is the set of possible states, and based on that it selects an action, $A_t \in \mathcal{A}(S_t)$, where $\mathcal{A}(S_t)$ is the set of all available actions in state $S_t$. At the next time step, as a response to the action, the agent receives a numerical reward, $R_{t+1} \in \mathbb{R}$, and finds itself in a new state, $S_{t+1}$ (Figure 1).
At every time step, the agent implements a mapping from states to probabilities of selecting the available actions. This mapping is called the agent's policy and is denoted by $\pi_t$, where $\pi_t(a \mid s)$ is the probability that $A_t = a$ if $S_t = s$. Reinforcement learning methods specify how the agent changes its policy using its experience. The agent's goal is to maximize the total amount of reward it receives over the long run.
4.1 Goals and rewards
The purpose or goal of the agent is formalized in terms of a special reward passed from the environment. At each time step, the reward is a simple number, $R_t \in \mathbb{R}$. The agent's goal is to maximize the total reward it receives.
4.2 Returns
If the sequence of rewards received after time step $t$ is denoted by $R_{t+1}, R_{t+2}, R_{t+3}, \ldots$, what will be maximized by the agent is the expected return $G_t$, which is defined as some function of the received rewards. The simplest case is the sum of the rewards: $G_t = R_{t+1} + R_{t+2} + \cdots + R_T$, where $T$ is the final time step. This approach comes naturally when the agent-environment interaction breaks into subsequences, or episodes. Each episode ends in a special terminal state, after which the environment is reset to a standard starting state. The set of all nonterminal states is denoted by $\mathcal{S}$, while the set including the terminal state is denoted by $\mathcal{S}^+$.
Introducing discounting, the agent tries to maximize the sum of the discounted rewards by selecting the right actions. At time step $t$, choosing action $A_t$, the discounted return is defined by equation 4.

$$G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \tag{4}$$

where $\gamma$ is a parameter, $0 \le \gamma \le 1$, called the discount rate. It determines the present value of future rewards: a reward received $k$ time steps in the future is worth only $\gamma^{k-1}$ times what it would be worth if received immediately. If $\gamma < 1$, the infinite sum is still a finite value as long as the reward sequence is bounded. If $\gamma = 0$, the agent is concerned only with maximizing immediate rewards. If all actions influenced only the immediate reward, the agent could maximize equation 4 by separately maximizing each reward. In general, acting to maximize the immediate reward can reduce access to future rewards, so the return may be reduced. As $\gamma$ approaches 1, future rewards are taken into account more strongly.
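For a finite reward sequence, the discounted return of equation 4 is straightforward to compute (an illustrative sketch):

```python
def discounted_return(rewards, gamma):
    """G_t = sum_k gamma^k * R_{t+k+1} for a finite list of rewards."""
    G = 0.0
    for k, r in enumerate(rewards):
        G += (gamma ** k) * r
    return G
```

With gamma = 0 only the immediate reward counts; with gamma = 1 the return is the plain sum of the rewards.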
4.3 The Markov property
Assuming a finite set of states and reward values, and considering how a general environment may respond at time $t+1$ to the action taken at time $t$, this response may depend on everything that has happened earlier. In this case only the complete probability distribution can define the dynamics:

$$\Pr\{S_{t+1} = s', R_{t+1} = r \mid S_t, A_t, R_t, \ldots, R_1, S_0, A_0\} \tag{5}$$

for all $s'$, $r$, and all possible values of the past events: $S_t, A_t, R_t, \ldots, R_1, S_0, A_0$. If the state has the Markov property, the environment's response at $t+1$ depends only on the state and action at $t$, and the dynamics can be defined by applying only equation 6.

$$p(s', r \mid s, a) = \Pr\{S_{t+1} = s', R_{t+1} = r \mid S_t = s, A_t = a\} \tag{6}$$
4.4 Markov decision process
A reinforcement learning task satisfying the Markov property is a Markov decision process, or MDP. If the state and action spaces are finite, then it is a finite MDP. This is defined by its state and action sets and by the environment's one-step dynamics. Given any state and action pair, $s$ and $a$, the probability of each possible next state, $s'$, is

$$p(s' \mid s, a) = \Pr\{S_{t+1} = s' \mid S_t = s, A_t = a\}.$$

Having the current state and action, $s$ and $a$, together with any next state, $s'$, the expected value of the next reward can be computed with

$$r(s, a, s') = \mathbb{E}\left[R_{t+1} \mid S_t = s, A_t = a, S_{t+1} = s'\right].$$

These quantities, $p(s' \mid s, a)$ and $r(s, a, s')$, completely specify the most important aspects of the dynamics of a finite MDP.
4.5 Value functions
Reinforcement learning algorithms are generally based on estimating value functions, which are either functions of states or of state-action pairs. They estimate how good a given state is, or how good a given action in the present state is. How good it is depends on the future rewards that can be expected, more precisely, on the expected return. As the rewards received depend on the actions taken, the value functions are defined with respect to particular policies. A policy, $\pi$, is a mapping from each state, $s \in \mathcal{S}$, and action, $a \in \mathcal{A}(s)$, to the probability $\pi(a \mid s)$ of taking action $a$ while in state $s$. The value of a state $s$ under a policy $\pi$, denoted by $v_\pi(s)$, is the expected return when starting in $s$ and following $\pi$ thereafter. For MDPs $v_\pi$ is defined as

$$v_\pi(s) = \mathbb{E}_\pi\left[G_t \mid S_t = s\right],$$

where $\mathbb{E}_\pi[\cdot]$ is the expected value given that the agent follows policy $\pi$. The value of the terminal state is always zero. $v_\pi$ is the state-value function for policy $\pi$. Similarly, the value of taking action $a$ in state $s$ under a policy $\pi$, denoted by $q_\pi(s, a)$, is defined as the expected return starting from $s$, taking the action $a$, and thereafter following policy $\pi$:

$$q_\pi(s, a) = \mathbb{E}_\pi\left[G_t \mid S_t = s, A_t = a\right].$$

$q_\pi$ is the action-value function for policy $\pi$.
$v_\pi$ and $q_\pi$ can be estimated from experience. If an agent follows policy $\pi$ and maintains an average of the actual returns observed in each encountered state, the average will converge to the state's value, $v_\pi(s)$, as the number of times that state is encountered approaches infinity. If separate averages are kept for each action taken in a given state, these will similarly converge to the action values, $q_\pi(s, a)$.
4.6 Optimal value functions
To solve a reinforcement learning task, a specific policy needs to be found that achieves a lot of reward over the long run. For finite MDPs, an optimal policy can be defined. Value functions define a partial ordering over policies. A policy $\pi$ is defined to be better than or equal to a policy $\pi'$ if its expected return is greater than or equal to that of $\pi'$ for all states. Formally, $\pi \ge \pi'$ if and only if $v_\pi(s) \ge v_{\pi'}(s)$ for all $s \in \mathcal{S}$. At least one policy exists that is better than or equal to all other policies, and this is the optimal policy. If more than one exists, the optimal policies are denoted by $\pi_*$. The state-value function among them is the same, called the optimal state-value function, denoted by $v_*$, and defined as

$$v_*(s) = \max_\pi v_\pi(s)$$

for all $s \in \mathcal{S}$. The optimal action-value functions are also shared, denoted by $q_*$, and defined as

$$q_*(s, a) = \max_\pi q_\pi(s, a)$$

for all $s \in \mathcal{S}$ and $a \in \mathcal{A}(s)$. For the state-action pair $(s, a)$, this gives the expected return for taking action $a$ in state $s$ and thereafter following an optimal policy. Thus, $q_*$ can be defined in terms of $v_*$ as follows:

$$q_*(s, a) = \mathbb{E}\left[R_{t+1} + \gamma v_*(S_{t+1}) \mid S_t = s, A_t = a\right].$$
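Before moving to the deep variant, the standard tabular Q-learning update, $Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\right]$, can be illustrated on a toy 5-state chain (a hypothetical example environment, not from the paper; hyperparameter values are arbitrary):

```python
import random

def train_chain(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 5-state chain: moving right from the last
    state ends the episode with reward 1, every other step gives 0."""
    rng = random.Random(seed)
    n, actions = 5, (0, 1)                        # 0: left, 1: right
    Q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):                      # step limit per episode
            if rng.random() < eps:
                a = rng.choice(actions)           # explore
            else:                                 # exploit, random tie-break
                best = max(Q[s])
                a = rng.choice([x for x in actions if Q[s][x] == best])
            if s == n - 1 and a == 1:
                r, s2, done = 1.0, s, True
            else:
                r, done = 0.0, False
                s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            # the Q-learning update rule
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s = s2
    return Q
```

After training, the greedy action in every state is "right", i.e. the learned action values recover the optimal policy toward the rewarding terminal transition.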
5 Clustering with deep Q-learning
Deep Q-learning (DQL) [41][42] applies deep learning techniques to standard Q-learning (section 4).
Calculating the Q state-action values using deep learning can be achieved by applying the following extensions to standard reinforcement learning problems:

1. Calculate Q for all possible actions in state $s_t$.
2. Make a prediction for Q on the new state $s_{t+1}$ and find the action $a' = \arg\max_a Q(s_{t+1}, a)$ that will yield the biggest return.
3. Set the Q return for the selected action to $r + \gamma \max_a Q(s_{t+1}, a)$. For all other actions the return should remain unchanged.
4. Update the network using backpropagation and minibatch stochastic gradient descent.
This approach in itself leads to some additional problems. The exploration-exploitation issue is related to which action is taken in a given state. By always selecting the action that seems to maximize the discounted future reward, the agent is acting greedily and might miss other actions that yield a higher overall reward in the long run. To be able to find the optimal policy, the agent needs to take some exploratory steps at specific time steps. This is solved by applying the $\epsilon$-greedy algorithm [40], where with a small probability $\epsilon$ a completely random action is chosen.
The other issue is the problem of local minima [43]. During training multiple states can be explored that are highly correlated, and this may lead the network to learn to replay the same episode. This can be solved by first storing past observations in a replay memory and taking random samples from there for the minibatch that is used to replay the experience.
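A minimal sketch of these two ingredients, with a generic `predict` callable standing in for the network (the names and structure are illustrative assumptions, not the paper's implementation):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity store of past (s, a, r, s', done) observations."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def remember(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

def epsilon_greedy(q_values, eps):
    """With probability eps take a random action, otherwise the greedy one."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def q_targets(batch, predict, gamma=0.9):
    """Training targets: only the taken action's return is updated."""
    targets = []
    for state, action, reward, next_state, done in batch:
        t = list(predict(state))
        t[action] = reward if done else reward + gamma * max(predict(next_state))
        targets.append(t)
    return targets
```

The targets produced this way are what the network is fitted against on each sampled minibatch.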
5.1 Environment
The environment provides the state that the agent will react to. In the case of clustering, the environment will be the full input graph. The actual state packages the information necessary to compute the Louvain method into a Numpy stack. This includes the weights, degrees, number of loops, the actual community and the total weight of the graph. Each state represents one node of the graph with all of its neighbors. The rewards returned for each state are based on the result of the actual Louvain clusterization, which means that during training the environment computes the real clusters. If the action selected by the agent leads to the best community, it receives a positive reward, and in any other case a lower value is returned. After stepping, the next state will contain the modified community information.
The agent's action space is finite and predefined, and the environment also has to reflect this. Let the cardinality of the action space, which is the same for all states, be denoted by $N$. For this reason, the state of the environment contains information about only $N$ neighbors. This can describe more nodes than are really connected to a given element; in this case the additional dummy nodes' values are filled with extremals, in the current implementation with negative numbers. One limitation of the actual solution is that if the number of neighbors is higher than $N$, then only the first $N$ neighbors will be considered, in the order in which they appear in the dataset. The first "neighbor" will be the currently evaluated node itself, so in case the clusterization does not yield any better community, the model can see that the node stays in place.
To help avoid potential overflow during the computation, the weights of the input graph are normalized into a fixed interval.
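A sketch of the kind of normalization meant here, using min-max scaling (the exact target interval used by the paper is not stated, so the unit interval below is an assumption):

```python
def normalize(weights):
    """Min-max scale the edge weights into the unit interval [0, 1]."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [0.0 for _ in weights]            # degenerate constant case
    return [(w - lo) / (hi - lo) for w in weights]
```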
5.2 Agent
The agent acts as the decision maker, selecting the next community for a given node. It takes the state of the environment as an input and gives back the index of the neighbor that is considered to be providing the best community.
5.2.1 Implementation in Keras
Keras [44] is a Python based high-level neural networks API, compatible with the TensorFlow, CNTK, and Theano machine learning frameworks. This API encourages experimentation, as it supports rapid development of neural networks. It allows easy and fast prototyping, with a user friendly, modular, and extensible structure. Both convolutional networks and recurrent networks can be developed, and their combinations are also possible in the same agent. Like all modern neural network APIs, it runs on both CPU and GPU for higher performance.
The core data structure is a model, which is a collection of layers. The simplest type is the Sequential model, a linear stack of layers. More complex architectures can also be achieved using the Keras functional API.
The clustering agent utilizes a Sequential model:
from keras.models import Sequential

model = Sequential()
Stacking layers into a model is done through the add function:
from keras.layers import Dense, Dropout

model.add(Dense(128, input_shape=(self.state_size, self.action_size), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
The first layer handles the input and has a mandatory parameter defining its size. In this case input_shape is provided as a 2-dimensional matrix, where state_size is the number of parameters stored in the state and action_size is the number of possible actions. The first parameter tells how big the output dimension will be, so in this case the input will be propagated into a 128-dimensional output.
The following two layers are hidden layers (section 3) with 128 internal nodes each and rectified linear unit (ReLU) activation. The rectifier is an activation function given by the positive part of its argument: $f(x) = \max(0, x)$, where $x$ is the input to a neuron. The rectifier was first introduced to a dynamical network in [20]. It was demonstrated in [21] to enable better training of deeper networks, compared to the logistic sigmoid [45], the widely used activation function prior to 2011.
Overfitting happens during training when the ANN starts to memorize the training patterns. In this case the network is weak at generalizing to new datasets. This appears, for example, when an ANN is very large, namely when it has too many hidden nodes and hence too many weights that need to be optimized.
Dropout is applied to the hidden layers to prevent overfitting on the learning dataset. Dropout is a technique in which some randomly selected neurons are ignored during training. Their contribution to the activation of neurons in deeper layers is removed temporarily, and weight updates are not applied to them. If neurons are randomly dropped during training, then others have to handle the part of the representation required to make predictions that would normally be handled by the dropped elements. This results in multiple independent internal representations of the given features [39]. This way the network becomes capable of better generalization and avoids potential overfitting on the training data.
The output so far will still be a matrix with the same shape as the input. This is flattened into a 1-dimensional array by adding the following layer:
model.add(Flatten())
Finally to have the output provide the returns on each available actions, the last layer changes the output dimension to action_size:
model.add(Dense(self.action_size, activation='linear'))
Once the model is set up, the learning process can be configured with the compile function:
from keras.optimizers import Adam

model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate))
where learning_rate has been set to a small constant. For the loss function mean squared error is used, and the optimizer is an instance of Adam [22] with the mentioned learning rate. The discount rate for future rewards has been set to a low value, so the model will try to select actions that yield the maximum rewards in the short term. While maximizing the reward in the long term could eventually lead to a policy that computes the communities correctly, choosing a small discount rate makes the model learn to select the correct neighbors faster.
To make a prediction on the current state, the predict function is used:
model.predict(state.reshape(1, self.state_size, self.action_size))
For Keras to work on the input state, it always has to be reshaped into the dimensions (1, state_size, action_size), while the change always has to keep the same number of state elements.
6 Results
Evaluation of the proposed solution is done by processing network clustering on undirected, weighted graphs. These graphs contain real network information; the evaluation is not done on physics related datasets (section 2.2), as real network data is more suitable for the original Louvain method. Because of this, the modularity can be used as a metric to measure the quality of the results (subsection 2.1). Additionally, the numbers of correct predictions and misses are used to describe the efficiency of the deep Q-learning based method (section 5).
Numerical evaluations are done by generating one iteration on the first level of the dendrogram, as the top level takes the most time to generate, being based on all the original input nodes. The GPU implementation of the Louvain method being used was first described in [9].
6.1 Dataset
The proposed model, as well as the Louvain clustering, works on undirected, weighted graphs. Such graphs can be generated from the U.S. Census 2010 and Tiger/Line 2010 shapefiles that are freely available from [46]. They contain the following:
- the vertices are the Census Blocks;
- there is an edge between two vertices if the corresponding Census Blocks share a line segment on their border;
- each vertex has two weights:
  - Census2010 POP100, the number of people living in that Census Block;
  - the Land Area of the Census Block in square meters;
- the edge weights are the pseudo-lengths of the shared borderlines;
- each Census Block is identified by a point, given by longitudinal and latitudinal coordinates.
A census block is the smallest geographical unit used by the United States Census Bureau for tabulation of 100-percent data. The pseudo-length of a shared borderline is given by $\sum \sqrt{\Delta x^2 + \Delta y^2}$, where $\Delta x$ and $\Delta y$ are the differences in longitudes and latitudes of each line segment on the borderline. The final result is multiplied by a constant to make the edge weights integers. For clusterization the node weights are not used.
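The pseudo-length computation can be sketched as follows (an illustrative helper; the text does not state the exact integer scaling constant, so it is kept as a parameter here):

```python
import math

def pseudo_length(segments, scale=1):
    """Summed Euclidean length of borderline segments in (lon, lat) space,
    scaled and rounded to an integer edge weight."""
    total = sum(math.hypot(lon1 - lon2, lat1 - lat2)
                for (lon1, lat1), (lon2, lat2) in segments)
    return int(round(total * scale))
```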
The matrices used for evaluation contain the information related to New York, Oregon and Texas (table 1), arbitrarily selected from the SuiteSparse Matrix Collection [38]. The graph details can be found in [37].
New York  Oregon  Texas  

Nodes  
Edges 
Due to the limitations of the proposed solution described in subsection 5.1, a fixed number of neighbors is kept for each node during the computation.
6.2 Precision of the neural network
The model described in section 5 has been trained on the Oregon graph, taking the first nodes in the order in which they first appear in the original dataset, and running for a limited number of epochs. The ratio of good and bad predictions is shown in table 2.
New York  Oregon  Texas  

Positive  
Negative 
The deep learning solution reaches good average precision across the datasets. Precision can be further increased by running the training for more epochs or by further tuning the hyperparameters.
6.3 Modularity comparison
The Louvain method assumes nothing about the input graph: the clusterization can be done without any prior information about the groups present in the network. The modularity is presented (table 3) for all test matrices for both the Louvain algorithm and the deep Q-learning based solution.
New York  Oregon  Texas  

DQL  
Louvain 
The modularities show results similar to the precision of the network: the modularity of each of the New York, Oregon and Texas graphs is only slightly lower than that of the Louvain computation. This shows that the loss in precision does not degrade the quality of the clusters by more than what is lost in precision.
7 Summary
In this paper a new hierarchical clustering method was proposed, based on the Louvain method and using deep learning. The detailed model was capable of achieving good precision, while only being trained for a limited number of epochs. Even with the error, the resulting modularity on average was only slightly lower than the Louvain method's result.
8 Future work
The current solution cannot process the whole graph; just a subset of the neighbors is considered when processing the communities. This needs to be extended further to have a fully realized deep learning clusterizer. The model still needs to be evaluated on jet related datasets, and it needs to be explored whether any changes are required for the agent to work efficiently on those types of graphs. Most of the errors come from choosing a dummy node as the best community, which implies that the way these nodes are represented should be studied further.
References
 [2] A. Ali, G. Kramer, Jets and QCD: A Historical Review of the Discovery of the Quark and Gluon Jets and its Impact on QCD Eur. Phys. J. H36 (2011) 245326. [arXiv:1012.2288 [hepph]].
 [3] D. Bader, J. McCloskey, Modularity and graph algorithms, SIAM AN10 Minisymposium on Analyzing Massive Real-World Graphs (2009) 12-16.
 [4] J.W. Berry, B. Hendrickson, R.A. LaViolette, C.A. Phillips, Tolerating the community detection resolution limit with edge weighting, Phys. Rev. E 83 (5) (2011) 056119.
 [5] Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte and Etienne Lefebvre, Fast unfolding of communities in large networks, Journal of Statistical Mechanics: Theory and Experiment (2008) P10008.
 [6] M.G. Bowler, Femptophysics, Pergamon Press 1990.
 [7] S.D. Ellis, D. E. Soper, Successive combination jet algorithm for hadron collisions Phys. Rev. D 48 7 (1993)3160.
 [8] S. D. Ellis, J. Huston, K. Hatakeyama, P. Loch and M. Tonnesmann, Jets in HadronHadron Collisions Prog. Part. Nucl. Phys. 60 (2008) 484 [arXiv:0712.2447 [hepph]].
 [9] R. Forster. Louvain Community Detection with Parallel Heuristics On GPUs, 20th Jubilee IEEE International Conference on Intelligent Engineering Systems 20 (2016), ISBN:9781509012169, doi: 10.1109/INES.2016.7555126
 [10] R. Forster, A. Fülöp, Parallel jet clustering algorithm, Acta Univ. Sapientiae Informatica 9 1 (2017) 49â64.
 [11] R. Forster, A. Fülöp, Jet browser model accelerated by GPUs, Acta Univ. Sapientiae Informatica 8 2(2016)171–185.
 [12] R. Forster, A. Fülöp, YangMills lattice on CUDA, Acta Univ. Sapientiae, Inf., 5, 2 (2013) 184–211.
 [13] Hao Lu, Mahantesh Halappanavar, Ananth Kalyanaraman, Parallel heuristics for scalable community detection, Parallel Computing 47 (2015) 1937
 [14] T. Muta, Foundation of Quantum Chrodinamics, World Scientific Press 1986.
 [15] M.E.J. Newman, M. Girvan, Finding and evaluating community structure in networks, Phys. Rev. E 69 (2) (2004) 026113.
 [16] M.E. Peskin, D.V. Schroeder, An Introduction to Quantum Field Theory, Westview Press, 1995.
 [17] D. Rohr, S. Gorbunov, A. Szostak, M. Kretz, T. Kollegger, T. Breitner, T. Alt, ALICE HLT TPC Tracking of Pb-Pb Events on GPUs, Journal of Physics: Conference Series 396 (2012), doi:10.1088/1742-6596/396/1/012044
 [18] G. Sterman, S. Weinberg, Jets from Quantum Chromodynamics, Phys. Rev. Lett. 39 (1977) 1436.
 [19] V.A. Traag, P. Van Dooren, Y. Nesterov, Narrow scope for resolution-limit-free community detection, Phys. Rev. E 84 (1) (2011) 016114.
 [20] R. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, H.S. Seung, Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit, Nature 405 (2000) 947-951.
 [21] Xavier Glorot, Antoine Bordes, Yoshua Bengio, Deep sparse rectifier neural networks, AISTATS (2011).
 [22] Diederik P. Kingma, Jimmy Ba, Adam: A Method for Stochastic Optimization, 2014, arXiv:1412.6980
 [23] G.P. Salam, Towards Jetography, Eur. Phys. J. C67 (2010) 637-686 [arXiv:0906.1833 [hep-ph]].
 [24] S. Salur, Full Jet Reconstruction in Heavy Ion Collisions, Nuclear Physics A 830 (1-4) (2009) 139c-146c.
 [25] R. Forster, A. Fülöp, Parallel jet clustering algorithm, Acta Univ. Sapientiae Informatica 9 1 (2017) 49-64.
 [26] R. Forster, A. Fülöp, Hierarchical jet clustering for parallel architectures, Acta Univ. Sapientiae Informatica 9 2 (2017) 195-213.
 [27] S. Catani, Yu.L. Dokshitzer, M.H. Seymour, B.R. Webber, Longitudinally-invariant k_t-clustering algorithms for hadron-hadron collisions, Nuclear Physics B 406 (1993) 187-224.
 [28] B. Denby, Neural networks and cellular automata in experimental high energy physics, Computer Physics Communications 49 (1988) 429-448.
 [29] C. Peterson, Track finding with neural networks, Nuclear Instruments and Methods A279 (1988) 537.
 [30] B. Denby, Neural networks in high energy physics: a ten year perspective, Computer Physics Communications 119 (1999) 219.
 [31] L. Lönnblad, C. Peterson, T. Rögnvaldsson, Using neural networks to identify jets, Nuclear Physics B349 (1991) 675-702.
 [32] K.J.C. Leney, A neural-network clusterisation algorithm for the ATLAS silicon pixel detector, Journal of Physics: Conference Series 523 (2014) 012023.
 [33] K.E. Selbach, Neural network based cluster reconstruction in the ATLAS pixel detector, Nuclear Instruments and Methods in Physics Research A 718 (2013) 363-365.
 [34] H. Kolanoski, Application of artificial neural networks in particle physics, Nuclear Instruments and Methods in Physics Research A 367 (1995) 14-20.
 [35] I. Csabai, F. Czakó, Z. Fodor, Quark- and gluon-jet separations using neural networks, Phys. Rev. D 44 7 (1991) R1905-R1908.
 [36] P.T. Komiske, E.M. Metodiev, M.D. Schwartz, Deep learning in color: towards automated quark/gluon jet discrimination, J. High Energy Physics 2017 (2017) 110.
 [37] T. Davis, Y. Hu, The University of Florida Sparse Matrix Collection, ACM Transactions on Mathematical Software 38 1 (2011) 1:1-1:25.
 [38] SuiteSparse Matrix Collection
 [39] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Dropout: A Simple Way to Prevent Neural Networks from Overfitting, JMLR 15 (2014) 1929-1958.
 [40] Richard S. Sutton, Andrew G. Barto, Reinforcement Learning: An Introduction, A Bradford Book, 1998, ISBN: 978-0-262-19398-6
 [41] Volodymyr Mnih et al., Playing Atari with Deep Reinforcement Learning, 2013, arXiv:1312.5602
 [42] Volodymyr Mnih et al., Human-level control through deep reinforcement learning, Nature, 2015, doi:10.1038/nature14236
 [43] Grzegorz Swirszcz, Wojciech Marian Czarnecki, Razvan Pascanu, Local minima in training of neural networks, 2016, arXiv:1611.06310
 [44] Keras: The Python Deep Learning library
 [45] Jun Han, Claudio Moraga, The influence of the sigmoid function parameters on the speed of backpropagation learning, 1995, pp. 195-201, ISBN: 978-3-540-59497-0
 [46] United States Census Bureau
 [47]