The Role of Isomorphism Classes in Multi-Relational Datasets

Multi-interaction systems abound in nature, from colloidal suspensions to gene regulatory circuits. These systems can produce complex dynamics, and graph neural networks have been proposed as a method to extract underlying interactions and predict how systems will evolve. However, the current procedures for training and evaluating these models on synthetic multi-relational datasets are agnostic to interaction network isomorphism classes, which produce identical dynamics up to initial conditions. We extensively analyse how isomorphism class awareness affects these models, focusing on neural relational inference (NRI) models, which are unique in explicitly inferring interactions to predict dynamics in the unsupervised setting. Specifically, we demonstrate that isomorphism leakage overestimates performance in multi-relational inference and that sampling biases present in the multi-interaction network generation process can impair generalisation. To remedy this, we propose isomorphism-aware synthetic benchmarks for model evaluation. We use these benchmarks to test generalisation abilities and demonstrate the existence of a threshold sampling frequency of isomorphism classes for successful learning. In addition, we demonstrate that isomorphism classes can be utilised through a simple prioritisation scheme to improve model performance and training stability, and to reduce training time.


1Department of Physics, University of Cambridge
2Computer Laboratory, University of Cambridge
3Department of Mathematics, University of Cambridge

1 Introduction

We focus on the task of predicting the dynamics of simple many-body multi-interaction systems, a first step towards scaling to more complex dynamical systems. A variety of approaches have been developed to tackle variants of this problem, including predicting the trajectories of particles given the underlying interaction network Battaglia et al. (2016), learning to simulate complex physics with graph networks Sanchez-Gonzalez et al. (2020), and applying constraints from Lagrangian dynamics to learn a physics model Lutter et al. (2019). We will focus on approaches that both predict trajectories and infer relations in the system: neural relational inference (NRI) Kipf et al. (2018) and factorised neural relational inference (fNRI) Webb et al. (2019). These are unsupervised models which explicitly infer the underlying interactions of a system in order to predict the resulting dynamics. This structure is akin to an interpretable theory together with the predictions it makes, with the aim of being a more valuable research tool than an inscrutable black box. The investigations conducted will also be relevant to synthetic multi-relational datasets from other settings Sinha et al. (2020).

Figure 1: Model overview. The encoder embeds trajectories and, using vertex-to-edge and edge-to-vertex message-passing operations, produces the latent interaction network. The sampled edges modulate pairwise functions in the decoder that can be associated with forces in classical physics. A function of the net resultant ‘force’ is used to update the mean position using a skip-connection.

Despite designing the model architecture around the potential value of explicitly inferring interactions, little attention is paid to the structure of multiplex interaction networks or their sampling distribution in training and evaluation routines. There are many non-obvious results in the field of random graph theory, perhaps the most well-known being the percolation transition where, above a threshold connectivity, it is expected that a single component will come to encompass the entire graph Erdős and Rényi (1959, 1960). Effects such as these can bias the generation of the synthetic data used to train these models, hampering generalisation and causing performance to be overestimated.

A second missing component is the scientific process by which experiments are formulated to test the edge of current theories: areas well within the understood domain provide little new information, whereas regions far beyond our understanding are often too poorly explained to allow insight to be gained from results. Scientific progress is driven by this almost antagonistic relationship between theorists and experimentalists. This is absent from the training procedure of these models, where examples are treated without consideration as to the model’s current performance.

We incorporate these concerns into better synthetic multi-relation dataset generation and a new training procedure in this work. To do this, we first analyse the structure of interaction networks (Section 3), exposing non-intuitive results in the distribution of multiplex isomorphism classes and exploring how generation methods can incur a bias and leak across generated datasets. We demonstrate how these biases impact training and how leakage causes model performance to be overestimated. We also (tentatively) present a novel fast algorithm for the construction of the set of non-isomorphic interaction networks, with a proof of correctness1. We then present isomorphism-aware benchmarks which were used to evaluate the model’s performance and identify the presence of a threshold sampling frequency of isomorphism classes for successful learning (Section 4). Finally, we show that incorporating isomorphism awareness via priority sampling improves model performance, stability during training, and significantly reduces training time (Section 5).

2 Model background

We present a brief overview of the task formulation and state-of-the-art approaches, the Neural Relational Inference (NRI) model Kipf et al. (2018) and its derivative, the Factorised Neural Relational Inference (fNRI) model Webb et al. (2019). We do not make any architectural modifications to the original models.

Problem statement

The primary task is the reconstruction (or evolution) of trajectories of particles in an interacting system, represented as a sequence of feature vectors over particles and time, with neither access to nor supervision from the ground-truth interaction network.

Model formulation

Both the NRI and fNRI are formulated as variational autoencoders (VAEs), with observed trajectories being encoded as a latent interaction network that determines the output trajectory evolution for some initial conditions. Architecturally, the models are graph neural networks that use message passing in the encoder and decoder. The encoder embeds each particle’s trajectory then, through a series of vertex-to-edge and edge-to-vertex message passing operations, produces an edge-embedding between each pair of particles. The models differ in the dimensionality and meaning of the edge-embedding: the NRI uses a one-hot vector with a separate edge type for each interaction and combination of interactions; the fNRI uses a multi-categorical vector in which different edge types exist only for different interactions. The decoder samples the latent interaction network to modulate the message-passing between particles, corresponding to the transmission of force-carrying particles. Figure 1 presents the model diagrammatically.
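To make the difference in edge-embedding dimensionality concrete, a minimal sketch (the variable names are ours, not the original implementations’): for K binary interaction types, the NRI’s one-hot latent space must cover every combination of interactions, whereas the fNRI factorises it per interaction layer.

```python
# Hypothetical illustration of latent edge-type counts for K binary
# interaction layers (e.g. ideal-spring, finite-spring, charge: K = 3).
K = 3
nri_edge_types = 2 ** K    # one class per combination of interactions
fnri_edge_types = 2 * K    # two classes (present/absent) per interaction layer
print(nri_edge_types, fnri_edge_types)
```

The exponential versus linear growth in edge types is one motivation for the factorised formulation.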

Though the fNRI outperforms the original NRI, the two models differ in their handling of the latent interaction network, making both relevant to our analysis of the impact that interaction network sampling has on performance estimation.

3 Isomorphism analysis

Figure 2: Edge-coloured multi-graph (left) and multiplex network (centre) representations of an interacting system; and the combinations of a pair of basis graphs (right). Several of the combinations form equivalence classes based on rotations, and two further groups collectively form an equivalence class (analogous to reflections). If the basis graphs were combined at random, some classes would be selected twice as frequently as others.

In this section we analyse interaction networks through their isomorphism classes, investigate the sampling biases that arise from standard multi-interaction network generation processes, and show how information can leak between datasets through isomorphisms. The influence of the bias and leakage on performance evaluation is presented. Here we focus our analysis on five particles interacting via ideal-springs, finite-springs, and a charge force in two dimensions, as in the original work Webb et al. (2019).

3.1 Isomorphism classes

The set of possible interaction networks for some combination of interactions can be partitioned into isomorphism classes which, up to initial conditions, result in identical particle dynamics. In this sense the isomorphism classes form the set of ‘unique networks’ that can be generated.

Basis networks

We can simplify our analysis by first considering the isomorphism classes of the base interactions separately, as the multiplex network2 is itself formed of the combinations and node-permutations of these (see Figure 2). In our experiments the interactions can either be pairwise (ideal-springs, finite-springs) or collective (charges) with different restrictions on the resulting basis networks, as shown in Figure 3. We also make use of the equivalence of symmetries for complementary graphs—the automorphisms of a graph and its complement are identical—meaning we need only consider the combinations of sparse graphs, from which the full set can be constructed by taking the complements of the basis networks (being careful with self-complementary graphs).
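The complement symmetry relied on here, that Aut(G) equals Aut of its complement, can be checked directly by brute force; a small sketch (the helper names are our own, not from the original work):

```python
from itertools import permutations, combinations

def automorphisms(edges, n):
    """Brute-force automorphism group of a small graph on nodes 0..n-1."""
    es = {frozenset(e) for e in edges}
    return {p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in es} == es}

def complement(edges, n):
    """Edges absent from the graph, i.e. the complement's edge set."""
    es = {frozenset(e) for e in edges}
    return [e for e in combinations(range(n), 2) if frozenset(e) not in es]

path = [(0, 1), (1, 2), (2, 3)]  # a sparse 4-vertex basis network
# A graph and its complement share the same automorphism group.
assert automorphisms(path, 4) == automorphisms(complement(path, 4), 4)
```

The identity holds for any graph: a permutation preserves the edge set exactly when it preserves the non-edge set.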

Figure 3: Sparse complement basis networks for pairwise (left, purple) and collective (right, green) interactions for 4 particles. There are 11 graphs with 4 vertices, but for pairwise we can use the 6 shown and for collective just 3 will suffice, without loss of generality. This significantly reduces the computational cost of finding the set of unique multiplex networks. Note that collective interactions always consist of a single fully-connected component with the other particles being isolated (particles that are charged interact with all other charged particles).

Multiplex isomorphism classes

To generate multi-interaction networks we join basis networks together to form a multiplex network. The set of multiplex networks resulting from all the combinations of basis networks, and all the ways of joining them up, can be partitioned into multiplex isomorphism classes. Just as for the basis networks, these can be considered as the meaningfully ’unique networks’. An example of multiplex isomorphism classes partitioning the interaction networks, generated by joining basis networks together, is shown in Figure 2. For multiplex networks to be isomorphic, it is necessary that the layers are separately isomorphic, and as such we are guaranteed to include all non-isomorphic multiplex networks when considering only the combinations of basis networks.

Input: Graphs G1, G2 on n vertices with automorphism groups Aut(G1), Aut(G2)
Output: S, the set of non-isomorphic multiplex networks with basis graphs G1, G2
List all ways of connecting the graphs as permutations of labels, P;
Instantiate empty lists for the checked, C, and unchecked, U, members of an equivalence class and an empty set for the output, S;
while P is not empty do
      move a permutation from P to U;
      while U is not empty do
            move a permutation from U to C;
            foreach automorphism of G1 do
                  apply the automorphism directly to the labels of the latest element of C;
                  if the result is not in C and not in U then
                        move the result from P and add it to U
            foreach automorphism of G2 do
                  apply the automorphism to the label positions of the latest element of C;
                  if the result is not in C and not in U then
                        move the result from P and add it to U
      add an element from C to S as the representative of the class and empty C
Algorithm 1 Generating multiplex isomorphism classes with automorphisms

Fast multiplex isomorphism generation

To understand the sampling distribution over multiplex isomorphism classes we need to generate them. Naively, this can be achieved by generating all possible networks (binary strings over the number of edges), checking that they are multiplex and satisfy force relations, and then performing pairwise isomorphism tests to build groups. We present a new method that exploits the symmetries of the basis networks and the process of combining them.

The key concept is to write the ways of combining basis graphs as permutations of node labels and then make associations between these using automorphisms. We can write all the ways of combining a pair of graphs by keeping the second graph fixed and permuting the nodes in the first. By definition, performing an automorphic transformation on node labels in one layer of the multiplex is undetectable in the other layers, and so the resulting permutation of node labels is isomorphic to the original network. This allows us to construct an equivalence class by applying all basis graph automorphisms, grouping the resulting permutations and further applying the automorphisms to the results to form a closed group. Notably, any automorphism of the overall network must also be an automorphism of every basis graph, and so we do not overlook any transformations. We provide a visualisation of the method in Figure 4, pseudocode in Algorithm 1 and a proof in Appendix A.
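The grouping can be sketched as a double-coset computation: two vertex pairings give isomorphic multiplexes precisely when they are related by an automorphism of each basis graph. A minimal Python sketch for two layers (the function names and the brute-force automorphism search are ours; the paper’s Algorithm 1 achieves the same grouping incrementally):

```python
from itertools import permutations

def automorphisms(edges, n):
    """Brute-force automorphism group of a small graph on nodes 0..n-1."""
    es = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in es} == es]

def multiplex_classes(edges1, edges2, n):
    """Representatives of the non-isomorphic ways of stacking two basis
    graphs: pairings p and q are equivalent iff q = a . p . b for some
    automorphism a of G1 (acting on labels) and b of G2 (on positions)."""
    aut1, aut2 = automorphisms(edges1, n), automorphisms(edges2, n)
    todo = set(permutations(range(n)))
    reps = []
    while todo:
        p = todo.pop()
        # Because the automorphism groups are closed under composition,
        # the orbit {a . p . b} is already the full equivalence class.
        orbit = {tuple(a[p[b[i]]] for i in range(n))
                 for a in aut1 for b in aut2}
        todo -= orbit
        reps.append(p)
    return reps

# Two single-edge layers on 4 vertices: the edges can coincide, share a
# vertex, or be disjoint, giving 3 multiplex isomorphism classes.
assert len(multiplex_classes([(0, 1)], [(0, 1)], 4)) == 3
```

Since the groups are closed under composition, each class is produced in a single pass with no iteration to closure.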

Our method is applied to pre-generated automorphisms (a task handled by multiple existing libraries, Darga et al. (2008)). To combine a third graph, we can flatten the representative multiplex, where the automorphisms of the flattened graph are given by the automorphisms that exist in both basis networks for the given node pairing (permutation). The flattened graph can then be passed as input itself.

Figure 4: Two basis networks are combined to form multiplex networks using our automorphism method. Permutations of the first basis network are implicitly connected to the statically ordered second, which accounts for all unique vertex-aligned ways of connecting the graphs (top). Applying automorphic transformations groups permutations into isomorphism equivalence classes (left, right) and leaves non-isomorphic networks separate. Because we keep the second basis network fixed, its automorphism manifests as exchanging the first and third elements of the permutation string.

3.2 Sampling biases and data leaks

Given we are now able to efficiently generate the set of non-isomorphic multiplex networks for a group of interaction types, we turn our attention to the sampling distribution induced by different generation methods and their effects on model performance and evaluation.


Model   Dataset         MSE20             Accuracy
fNRI    Train-ER        19.61 ± 0.56      0.575 ± 0.059
fNRI    Train-Uniform   -                 0.553 ± 0.019
NRI     Train-ER        428.55 ± 20.18    0.565 ± 0.071
NRI     Train-Uniform   -                 0.583 ± 0.053

Table 1: fNRI and NRI performance on the Train-ER and Train-Uniform datasets with ideal-spring, charge, finite-spring interactions. The ER sampling bias affects the predictive performance of the models.

Not all networks are created equally

Kipf et al. (2018) generate interaction networks with Bernoulli sampling over edges for pairwise interactions and Bernoulli sampling over nodes for collective interactions, a process that is inherited by Webb et al. (2019). Sampling graphs with a Bernoulli distribution on the presence of edges is commonly known as Erdős–Rényi (ER) sampling, and we will refer to this generation procedure as Original-ER. The total number of edges or interacting nodes follows a binomial distribution, and a second bias arises for pairwise interactions from their arrangement, as shown in Figure 5. We also consider a second generation method, Uniform-Basis, in which basis network isomorphism classes are sampled uniformly, removing the arrangement bias. By propagating the distribution these methods induce over basis network sampling frequencies, we can produce the relative frequency of the full multiplex network isomorphism classes, shown in Figure 6. We find strong sampling biases exist for both methods, with the most-to-least-likely ratio exceeding 100:1 in each case.
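The arrangement bias of Figure 5 can be reproduced numerically. A sketch (our own simulation, not the authors’ generation code) that places two edges uniformly among the six slots of a four-node graph and tallies the two equivalence classes:

```python
import random
from itertools import combinations

slots = list(combinations(range(4), 2))   # the 6 possible edge positions
counts = {"shared vertex": 0, "disjoint": 0}
random.seed(0)
for _ in range(15000):
    e1, e2 = random.sample(slots, 2)      # 2 edges, uniform over the 15 pairs
    counts["shared vertex" if set(e1) & set(e2) else "disjoint"] += 1

# 12 of the 15 arrangements share a vertex and 3 do not, so the observed
# ratio approaches 4:1.
print(counts)
```

The simulated ratio matches the combinatorial argument in the Figure 5 caption.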

We compare the performance of models trained on training sets generated by ER sampling and by uniform sampling of the multiplex isomorphism classes; the latter removes all sampling biases over the multiplex isomorphism classes. These datasets will be referred to as Train-ER and Train-Uniform respectively. The validation and test sets are generated by uniform sampling of the multiplex isomorphism classes and are identical between both datasets. The results in Table 1 show that the ER bias reduces the performance of both models.

Figure 5: Ways of arranging two edges between four nodes. There are 6 edge positions and 15 (equally-likely) ways of arranging 2 edges on them (6-choose-2). Of these, 12 are in one equivalence class (unshaded, purple) and 3 are in the other (shaded, green). Other factors being equal, the first equivalence class is four times more likely to be sampled than the second in the Original-ER generation procedure.

Model   Dataset        MSE20   Accuracy
fNRI    Original-ER    -       -
fNRI    Rejection-ER   -       -
NRI     Original-ER    -       -
NRI     Rejection-ER   -       -

Table 2: fNRI and NRI performance on the Original-ER and Rejection-ER datasets with ideal-spring, charge, finite-spring interactions. Isomorphism leakage in the Original-ER sampling leads to performance overestimation.

Dataset       Initial Conditions         Interaction Networks       Total Trajectories
              Train    Val     Test      Train    Val     Test      Train     Val      Test
Original-ER   Random   Random  Random    Random   Random  Random    50000     10000    10000
Con-N         N        22      22        454      454     454       454 × N   9988     9988
Con-111       111      22      22        454      454     454       50394     9988     9988
Iso-155       155      155     155       324      65      65        50220     10075    10075
Con-Iso       155      155     155       324      65      65        50220     10075    10075

Table 3: Dataset summary for ideal-spring, charge interactions. The number of initial conditions, interaction networks and total number of trajectories for the training, validation and test sets are given.

Dataset       MSE20           Accuracy
Original-ER   10.03 ± 0.47    0.928 ± 0.008
Con-111       14.31 ± 0.71    0.943 ± 0.005
Iso-155       -               -
Con-Iso       9.65 ± 0.33     -

Table 4: fNRI performance on the ideal-spring, charge datasets. The fNRI demonstrates generalisation to different initial conditions, multiplex isomorphism classes and both.

Isomorphism leakage

Conventionally, training, validation and test sets are disjoint. Naive generation of the interaction networks, however, will almost certainly result in some multiplex isomorphism classes being present in the different sets, i.e. leaking into the test set. For two datasets A and B with data generated from latent graphs, we say that there is isomorphism leakage between A and B if there exist a ∈ A and b ∈ B whose latent graphs G_a and G_b are isomorphic. Neither Kipf et al. (2018) nor Webb et al. (2019) claim to control for this possibility, and it is not the case that identical examples are present across their splits, as initial conditions vary; however, by controlling for this facet of variability in isolation we find significant changes in judged model performance. We adopt the exact training scheme used by Webb et al. (2019)3 to compare models trained on datasets produced with the Original-ER method and an adaptation that controls for isomorphism leakage by rejecting test samples from multiplex isomorphism classes present in the training set (Rejection-ER). The results presented in Table 2 show that the leaky test set judges models to produce better trajectories and more accurately infer interaction relations, thus overestimating performance.
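One way to implement the rejection step is to compare canonical forms. A brute-force sketch for a single layer, adequate at five particles (the helper names are ours, and a true multiplex check must canonicalise all layers jointly):

```python
from itertools import permutations

def canonical(edges, n):
    """Lexicographically smallest relabelling of the edge set: two graphs
    are isomorphic iff their canonical forms are equal."""
    es = [tuple(sorted(e)) for e in edges]
    return min(
        tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in es))
        for p in permutations(range(n))
    )

# Hypothetical training graphs: a 2-path and a single edge on 5 nodes.
train_classes = {canonical(g, 5) for g in [[(0, 1), (1, 2)], [(3, 4)]]}

def leaks(test_graph):
    """Reject a test example whose isomorphism class was seen in training."""
    return canonical(test_graph, 5) in train_classes

assert leaks([(2, 3), (3, 4)])       # isomorphic to the 2-path: leaked
assert not leaks([(0, 1), (2, 3)])   # two disjoint edges: unseen class
```

For larger graphs, a proper canonical-labelling library would replace the factorial-time search.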

4 Model testing

In light of the previous results, there is a need for a standardised and reproducible isomorphism-aware benchmarking framework to evaluate model performances Dwivedi et al. (2020). In this section we propose multiple benchmarks and utilise them to analyse the fNRI. We test for generalisability, focusing on the evaluation of flexibility and robustness over ‘skill’ Chollet (2019). We also investigate how the training set distribution affects performance, including varying the sampling frequency of isomorphism classes in the training set.

Figure 6: Rank-frequency plot of multiplex isomorphism class relative sampling frequencies under the Original-ER generation method (left) and Uniform-Basis method (right). Isomorphism classes with joint-rank are grouped with group colour added to aid visualisation. This shows the strong sampling biases that arise as a consequence of the methods used to generate interaction networks: some equivalence classes are highly prioritised while others are effectively never sampled. The ratio of most-to-least likely is 581:1 for Original-ER and 120:1 for Uniform-Basis.

Measuring generalisation

Considering the uniqueness of interaction networks, we can associate testing on isomorphism classes seen during training with the transductive setup Yang et al. (2016), where the same graph is used in both contexts, and testing on unseen classes with the inductive setup. We can further associate two kinds of generalisation with these cases: to different initial conditions (Con) in the transductive case, and to different interaction networks (Iso) in the inductive case.

Here we focus on the ideal-spring and charge interactions for five particles as the number of multiplex isomorphism classes is small (454). The original work Kipf et al. (2018) used what will be referred to as the Original-ER dataset for ideal-spring, charge interactions. This has the same structure as in Section 3.2, which also included finite-springs. To test Con and Iso generalisation, and also compare our results with the Original-ER dataset, we propose the Con-111 and Iso-155 datasets respectively. We also investigate both types of generalisation together using the Con-Iso dataset. The Con-111 dataset contains [454, 454, 454] multiplex isomorphism classes (all of them), with [111, 22, 22] initial conditions (the same set for each multiplex isomorphism class). The Iso-155 dataset partitions the multiplex isomorphism classes between the training, validation and test sets such that they do not overlap, each with the same set of 155 initial conditions. The Con-Iso dataset has the same structure, but all the initial conditions are different. In these datasets the number of initial conditions is chosen such that the total number of trajectories closely matches that of the Original-ER dataset, e.g. 454 × 111 = 50394 ≈ 50000. A summary of these datasets and the results are shown in Tables 3 and 4 respectively.
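The trajectory totals in Table 3 follow directly from classes × initial conditions; a quick check:

```python
# Trajectory totals are (number of isomorphism classes) x (initial
# conditions per class), as reported in Table 3.
assert 454 * 111 == 50394    # Con-111 training set
assert 454 * 22 == 9988      # Con-111 validation/test sets
assert 324 * 155 == 50220    # Iso-155 and Con-Iso training sets
assert 65 * 155 == 10075     # Iso-155 and Con-Iso validation/test sets
```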

The fNRI performs well on the Con-111 and Iso-155 datasets, demonstrating generalisation to different initial conditions and to different multiplex isomorphism classes separately. Perhaps counter-intuitively, the model trained on the Iso-155 dataset actually outperforms the one trained on the Con-111 dataset. A plausible explanation is that repeatedly observing the same interaction networks applied to different initial conditions provides a stronger learning signal, which in turn enables superior performance and generalisation. The Con-Iso dataset demonstrates good performance for the mean-squared error, but has a much lower encoding accuracy. This may be due to the model learning an implicit encoding of the trajectories that is not interaction-network based, as opposed to the explicit one which the encoding accuracy measures.

Figure 7: Performance of the fNRI vs. the number of initial conditions for each isomorphism class in the training set for ideal-spring, charge interactions. The encoding accuracy (left), higher is better. This shows a sharp rise at around 20 initial conditions. The mean-squared error (MSE) for 20 time-steps (right), lower is better. The dashed grey line shows the MSE for stationary particles. The MSE shows a significant drop at around 20 initial conditions.

Figure 8: Mean validation performance for unprioritised and prioritised sampling with and without grouping by multiplex isomorphism classes of ideal-spring, charge interactions. Prioritised sampling with grouping reliably increases the learning rate and the performance of both the fNRI and NRI.

Few-shot learning

Though state-of-the-art performance in many machine learning tasks is achieved with large amounts of labelled data, there are many domains in which it is impractical or overly costly to gather sufficient data. In such settings, few-shot learning algorithms can be employed to make the best use of what is available Wang et al. (2019). The effect of the frequency of occurrence of each multiplex isomorphism class in the training set is explored in this section to investigate the fNRI’s capacity for few-shot learning. The Con-N datasets are used, which have the same structure as the Con-111 dataset, where each multiplex isomorphism class is seen N times per epoch. Figure 7 shows how model performance improves as the number of initial conditions, N, is increased.

As expected, the performance of the fNRI increases and then plateaus as the number of initial conditions increases. There is a rise in performance at around 20 initial conditions, associated with the increase in encoding accuracy. This shows that there is a threshold sampling frequency of isomorphism classes for successful learning. Curiously, there is a large drop in the mean-squared error below the threshold frequency. Again, we believe the model may be learning to encode an implicit representation of the interaction network, which allows it to predict the few trajectories present in the smaller datasets with lower numbers of initial conditions. Once we pass the threshold frequency, it may be that the explicit representation is required to capture the entire dataset, hence the increase in encoding accuracy and the consequent decrease in the mean-squared error.

5 Prioritised sampling

In this section, we use a simple prioritisation scheme to analyse the benefits isomorphism-awareness can have on training speed and final performance. The likelihood of selecting an example for training is proportional to the exponentially weighted average of the historic model error on that sample. The historic error can also be grouped by multiplex isomorphism class. The performance of unprioritised and prioritised sampling with and without grouping is shown in Figure 8.
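A minimal sketch of such a scheme (the class and parameter names are ours; the authors’ exact weighting is not reproduced here): each group, e.g. a multiplex isomorphism class, keeps an exponentially weighted average of its historic error, and training examples are drawn with probability proportional to it.

```python
import random

class PrioritySampler:
    """Sample group indices with probability proportional to an
    exponentially weighted moving average of past model error."""
    def __init__(self, n_groups, decay=0.9, init_error=1.0):
        # Optimistic initial errors so every group gets sampled early on.
        self.errors = [init_error] * n_groups
        self.decay = decay

    def update(self, group, error):
        self.errors[group] = (self.decay * self.errors[group]
                              + (1 - self.decay) * error)

    def sample(self, k=1):
        return random.choices(range(len(self.errors)),
                              weights=self.errors, k=k)

sampler = PrioritySampler(n_groups=3)
for _ in range(50):
    sampler.update(0, 0.01)   # group 0: consistently easy
    sampler.update(2, 5.0)    # group 2: consistently hard
random.seed(0)
draws = sampler.sample(k=1000)
# Group 2, with the largest running error, should dominate the draws.
```

Grouping by multiplex isomorphism class corresponds to one `group` index per class; without grouping, each individual example keeps its own running error.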

Prioritised sampling in the fNRI reliably shifts the learning curve to lower epochs, increasing the learning rate. Without grouping by multiplex isomorphism classes, the fNRI converges to lower performances. Prioritised sampling also improves performance in the NRI, again to a lesser extent without grouping. This demonstrates that isomorphism-awareness can be used to improve model performances.

6 Conclusions

We have analysed multiplex isomorphism classes in the context of learning to model multi-interaction systems. On the basis of our analysis, we have shown that the performance of models on this task has been overestimated, particularly with regards to generalisation. To remedy this problem we have proposed and evaluated new benchmarking datasets. Through experiments with these new benchmarks we show under what conditions neural relational inference models can be expected to learn and generalise well. We also present results on prioritised sampling in a training procedure that parallels the scientific process. Finally, we have presented, and proven, an efficient new method for generating multiplex isomorphism classes for this context that makes further work in this area practical and accessible.

Ethics Statement

Our work is concerned primarily with foundational results in graph theory and their implications for training and evaluation for similarly foundational problems in systems of interacting particles. For this reason we consider there to be few foreseeable broader impacts, though we address the potential application of these models to human networks.

The NRI Kipf et al. (2018) presents results applying the model to the motion of basketball players and it seems reasonable to consider whether this could be extended to generic motion of people. Firstly, the application is more narrow than it first appears, predicting motion during an artificially constrained phase of play (a pick-and-roll), and the model is only weakly able to reconstruct player trajectories even in this scenario. Secondly, there is an unresolved scaling issue with the current system, which requires every pairwise relation to be considered, limiting application to larger groups. Thirdly, it is unclear how the data collection to enable this application could be performed without also having the infrastructure to make it redundant—if you have high quality segmented overhead video footage of citizens, why do you need a model to tell you how they will move?

A second consideration may be that the model could be adopted to track how individuals interact and ‘move’ online. Whilst it would be interesting to investigate whether these models can be used in a discrete, non-Euclidean space, current work is limited to particles moving in a continuous, low-dimensional, Euclidean space only, and it is far from obvious how to solve key challenges to adapting to this new task.

Appendix A Proof for the multiplex graph isomorphism algorithm

To show that the algorithm works, we need to show that, given one representative of an isomorphism class, it generates all isomorphic layered graphs.

Suppose we have graphs G_1, …, G_k as our basis graphs, together with embeddings into a common vertex set V. We can identify this vertex set with the vertex set {1, …, n}, and choose a bijection θ : V → {1, …, n} to do so. We can therefore assume we have bijections f_i : V(G_i) → {1, …, n} by post-composing with θ.

Suppose that bijections g_i : V(G_i) → {1, …, n} give an isomorphic embedding to the f_i. Let ψ be an isomorphism which witnesses that the layered graphs are isomorphic, i.e. for each layer i there is an edge between f_i(u) and f_i(v) iff there is an edge between ψ(f_i(u)) and ψ(f_i(v)) in the second layered graph. Since all of these maps are bijections, we deduce that g_i⁻¹ ∘ ψ ∘ f_i exists and is an automorphism of G_i for each i. Call this map α_i.

Note that ψ ∘ f_i = g_i ∘ α_i. Returning to the maps, we want to transform the f_i into the g_i by post-composition with a common isomorphism of {1, …, n} and pre-composition with automorphisms of the G_i. Using our previous map, we observe g_i = ψ ∘ f_i ∘ α_i⁻¹. This has the same form as in our algorithm, and hence we must obtain every isomorphic embedding.

Appendix B Other investigations

In this section we present other investigations that were conducted on the fNRI using isomorphism-aware benchmarks. This includes identifying which interaction types are most important for training, the effect of not including all isomorphism classes in the training set, and measuring generalisation for three interactions (as opposed to two interactions in Section 4).

b.1 Training essentials

The importance of each interaction type in the training set for the fNRI on the ideal-spring, charge dataset is investigated here. The datasets used are:

  • Extrapolate Charges to High (XCH)

  • Extrapolate Charges to Low (XCL)

  • Interpolate Charges (IC)

  • Extrapolate Springs to High (XSH)

  • Extrapolate Springs to Low (XSL)

  • Interpolate Springs (IS)

Using the XCH dataset as an example, the interaction networks are split into high-charge and low-charge groups. The training set is comprised of the low charges; the validation and test sets are comprised of the high charges. Each has [50, 22, 22] initial conditions. The same logic and number of initial conditions apply to the other datasets.

Dataset   MSE20            Accuracy        Accuracy (I-Spring)   Accuracy (Charge)
XSH       96.48 ± 64.26    0.567 ± 0.288   0.688 ± 0.181         0.779 ± 0.250
XSL       105.40 ± 38.21   0.568 ± 0.278   0.724 ± 0.187         0.749 ± 0.191
IS        99.33 ± 9.29     0.380 ± 0.118   0.636 ± 0.082         0.563 ± 0.068
XCH       -                -               -                     -
XCL       -                -               -                     -
IC        -                -               -                     -
Table 5: fNRI performance on extrapolation/interpolation datasets with ideal-spring (top) and charge (bottom) interactions. The fNRI has comparable performance on all the ideal-spring datasets and performs the best on the XCL dataset for the charges.

The results in Table 5 show that the fNRI has comparable performance for the spring datasets, and performs the best on the XCL dataset for the charges. Training on high charges seems to allow for better generalisation to lower charges whereas there seems to be no preference for the springs. To gain insight into these results, we identify the ‘difficulty’ of each interaction type. To do this we trained the fNRI on the Con-111 dataset and partitioned the test set by interaction type. The results are shown in Figure 9.

Figure 9: Encoding accuracy (left) and mean-squared error (right) for 20 time-steps of the predicted trajectories vs. different combinations of interaction types in the interaction network. The colour-scale is chosen such that light colours are associated with worse performance. The high charge interactions are associated with lower performance for both the accuracy and the mean-squared error. The performance is roughly constant for the mean-squared error, with respect to variations in the number of springs, whereas it decreases with decreasing number of springs for the accuracy.

According to the reconstruction error (MSE20), the most difficult interaction networks are those with high charges, which seem to become easier with no springs and with high springs. The no-spring case is expected, while the high-spring (purely attractive) case may be explained by the clumping of particles, which may allow the fNRI to ‘cheat’ and predict the centre-of-mass motion of the particles. Otherwise, the spring difficulty seems roughly constant for each number of charge edges. According to the encoding accuracy, the difficulty is highest for high charges and low springs. This may explain the results on the extrapolation/interpolation datasets: by the reconstruction error, the training-set ‘difficulty’ should be about the same for the spring datasets, whereas the XCL dataset should have the most difficult training set. This suggests that training on the interactions the model finds most difficult may generalise better to easier interactions, but not the other way around. Note that the fNRI only has access to the reconstruction error (in the loss function), not the encoding accuracy.
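Partitioning the test set by interaction type, as done for Figure 9, amounts to binning each test example by how many edges of each type its ground-truth multiplex network contains. A sketch of this bookkeeping (our own illustration; the `layers` dictionary layout is an assumption, not the data format of the released codebases):

```python
from collections import defaultdict


def partition_by_edge_counts(examples, layer_names=("spring", "charge")):
    """Bin test examples by their per-layer edge counts.

    Each example is assumed to be a dict with a 'layers' entry mapping an
    interaction type to its symmetric binary adjacency matrix.  The bin key
    is the tuple of undirected edge counts, e.g. (#springs, #charge-edges).
    """
    bins = defaultdict(list)
    for ex in examples:
        counts = []
        for name in layer_names:
            adj = ex["layers"][name]
            n = len(adj)
            # count each undirected edge once (upper triangle only)
            counts.append(sum(adj[i][j]
                              for i in range(n) for j in range(i + 1, n)))
        bins[tuple(counts)].append(ex)
    return bins
```

The per-bin encoding accuracy and MSE20 can then be averaged to produce the grids shown in Figure 9.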

B.2 The effect of sub-sampling

In this section we consider the effect of not including all the possible multiplex isomorphism classes in the training set, for ideal-spring, charge interactions. We compare the performance on the Con- datasets, which contain all the multiplex isomorphism classes, with the performance on the Sub-Con- datasets, which remove some multiplex isomorphism classes from the Con- training set so that the train, validation and test sets contain [324, 454, 454] multiplex isomorphism classes respectively. The validation and test sets are identical between the two datasets.

Figure 10: A comparison between the fNRI performance on the Con- and Sub-Con- datasets. (left) The encoding accuracy (higher is better). (right) The predicted-trajectory mean squared error for 20 time-steps (lower is better). The Sub-Con- curve shows approximately the same behaviour as the Con- curve, but shifted towards a higher number of initial conditions.

The Sub-Con- datasets generally show the same behaviour as the Con- datasets: the model initially performs worse, but eventually catches up to its performance on the Con- datasets. The threshold frequency of learning is shifted from around 20 to around 40 initial conditions, which suggests that sub-sampling increases the threshold frequency of learning.
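Dropping whole isomorphism classes, rather than individual samples, can be sketched as below. This is our own illustration: `class_of` stands in for whatever canonical-labelling routine assigns each network its multiplex isomorphism class, and the names are hypothetical.

```python
import random


def subsample_classes(train_networks, class_of, n_keep, seed=0):
    """Remove whole multiplex isomorphism classes from a training set.

    train_networks: list of interaction networks
    class_of:       function mapping a network to its isomorphism-class id
    n_keep:         number of classes to retain (e.g. 324 of the 454)

    Every network whose class is not kept is dropped, so the training set
    loses entire isomorphism classes rather than individual samples.
    """
    classes = sorted({class_of(g) for g in train_networks})
    kept = set(random.Random(seed).sample(classes, n_keep))
    return [g for g in train_networks if class_of(g) in kept]
```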

B.3 Measuring generalisation for three interactions

Here we focus on the ideal-spring, charge and finite-spring interactions for five particles, as opposed to the ideal-spring and charge interactions of Section 4. We use the Con-, Iso- and Con-Iso datasets as before, but for ideal-spring, charge, finite-spring interactions. In this case the number of multiplex isomorphism classes is large (over 250,000), so we are no longer constrained to using just 454 of them, as in the ideal-spring, charge case. We keep the same structure for the Con- and Iso- datasets, but for the Con-Iso dataset we use [454, 454, 454] interaction networks, all from different multiplex isomorphism classes. A summary of the results is shown in Table 6.
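Deciding whether two multiplex networks belong to the same isomorphism class is tractable by brute force for five particles (5! = 120 node permutations). A minimal sketch of a canonical form under a shared node relabelling across layers (our own illustration, not the symmetry-discovery routine used in the paper):

```python
from itertools import permutations


def canonical_form(layers):
    """Canonical form of a multiplex network under shared node relabelling.

    layers: tuple of symmetric binary adjacency matrices (one per
    interaction type), all over the same n nodes.  Two multiplex networks
    are isomorphic iff their canonical forms are equal: every node
    permutation is applied simultaneously to all layers, and the
    lexicographically smallest encoding is kept.
    """
    n = len(layers[0])
    best = None
    for perm in permutations(range(n)):
        # encode each permuted layer by its upper-triangle entries
        encoding = tuple(
            tuple(layer[perm[i]][perm[j]]
                  for i in range(n) for j in range(i + 1, n))
            for layer in layers
        )
        if best is None or encoding < best:
            best = encoding
    return best
```

Deduplicating sampled networks by this key is one way to ensure, as in the Con-Iso dataset, that every interaction network comes from a different multiplex isomorphism class.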

The results are qualitatively similar to the results for the ideal-spring, charge interactions in Section 4, with the fNRI performing the best on the Iso-155 dataset and the worst on the Con-111 dataset. This shows that the fNRI generalises better to different graphs, compared to different initial conditions. Again, a plausible explanation is that repeatedly observing the same interaction networks applied to different initial conditions provides a strong learning signal.


Table 6: fNRI performance on the ideal-spring, charge, finite-spring datasets (rows: Original-ER, Con-111, Iso-155, Con-Iso; columns: MSE20 and the overall, I-Spring, Charge and F-Spring encoding accuracies; numerical values not recovered). The fNRI has the best performance on the Iso-155 dataset, and the worst on the Con-111 dataset. Recall that the Original-ER dataset overestimates the performance due to isomorphism leakage.

Model   Dataset        MSE20             Accuracy        I-Spring        Charge          F-Spring
fNRI    Train-ER       19.61 ± 0.56      0.575 ± 0.059   0.852 ± 0.031   0.978 ± 0.004   0.637 ± 0.052
fNRI    Train-Uniform  (not recovered)   0.553 ± 0.019   0.860 ± 0.012   0.981 ± 0.004   0.611 ± 0.021
NRI     Train-ER       428.55 ± 20.18    0.565 ± 0.071   0.865 ± 0.018   0.937 ± 0.072   0.644 ± 0.038
NRI     Train-Uniform  (not recovered)   0.583 ± 0.053   0.890 ± 0.006   0.929 ± 0.063   0.665 ± 0.018

Table 7: fNRI and NRI performance on the Train-ER and Train-Uniform datasets with ideal-spring, charge, finite-spring interactions. The ER sampling bias affects the predictive performance of the models.

Table 8: fNRI and NRI performance on the Original-ER and Rejection-ER datasets with ideal-spring, charge, finite-spring interactions (columns: MSE20 and the overall, I-Spring, Charge and F-Spring encoding accuracies; numerical values not recovered). Isomorphism leakage in the Original-ER sampling leads to performance overestimation.

Dataset      MSE20            Accuracy        Spring          Charge
Original-ER  10.03 ± 0.47     0.928 ± 0.008   0.959 ± 0.019   (not recovered)
Con-111      14.31 ± 0.71     0.943 ± 0.005   0.971 ± 0.002   0.970 ± 0.004
Iso-155      (values not recovered)
Con-Iso      9.65 ± 0.33      (remaining values not recovered)

Table 9: fNRI performance on the ideal-spring, charge datasets. The fNRI demonstrates generalisation to different initial conditions, to different multiplex isomorphism classes, and to both.


  1. To the best of our knowledge, following a thorough literature review and consultation with domain experts, the algorithm and proof are original work, though we welcome any pointers to prior art.
  2. A multiplex network is a vertex-aligned multilayer graph. A vertex exists in every layer and is only connected to itself across layers.
  3. Both the original NRI and fNRI have made their codebases publicly available, greatly enabling this work.


  1. P. Battaglia, R. Pascanu, M. Lai, D. Rezende and K. Kavukcuoglu (2016). Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pp. 4502–4510.
  2. F. Chollet (2019). On the measure of intelligence. arXiv:1911.01547.
  3. P. T. Darga, K. A. Sakallah and I. L. Markov (2008). Faster symmetry discovery using sparsity of symmetries. In 2008 45th ACM/IEEE Design Automation Conference, pp. 149–154.
  4. V. P. Dwivedi, C. K. Joshi, T. Laurent, Y. Bengio and X. Bresson (2020). Benchmarking graph neural networks. arXiv:2003.00982.
  5. P. Erdős and A. Rényi (1959). On random graphs. Publicationes Mathematicae, 6(26), pp. 290–297.
  6. P. Erdős and A. Rényi (1960). On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 5(1), pp. 17–60.
  7. T. Kipf, E. Fetaya, K.-C. Wang, M. Welling and R. Zemel (2018). Neural relational inference for interacting systems. In Proceedings of the 35th International Conference on Machine Learning.
  8. M. Lutter, C. Ritter and J. Peters (2019). Deep Lagrangian networks: using physics as model prior for deep learning. In International Conference on Learning Representations.
  9. A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec and P. Battaglia (2020). Learning to simulate complex physics with graph networks. In Proceedings of the 37th International Conference on Machine Learning.
  10. K. Sinha, S. Sodhani, J. Pineau and W. L. Hamilton (2020). Evaluating logical generalization in graph neural networks. arXiv preprint.
  11. Y. Wang, Q. Yao, J. T. Kwok and L. M. Ni (2020). Generalizing from a few examples: a survey on few-shot learning. arXiv:1904.05046.
  12. E. Webb, B. Day, H. Andres-Terre and P. Lió (2019). Factorised neural relational inference for multi-interaction systems. arXiv preprint.
  13. Z. Yang, W. W. Cohen and R. Salakhutdinov (2016). Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on Machine Learning, pp. 86–94.