DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation
Abstract
Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown promise for de novo molecular design. However, currently available graph generative models either have a number of parameters that depends on the graph size, limiting their use to very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and thereby preventing their use in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks.
Rim Assouel (Mila, Université de Montréal, QC; Benevolent.AI, London, UK) rim.assouel@umontreal.ca · Mohamed Ahmed (Benevolent.AI, London, UK) mohamed.ahmed@benevolent.ai · Marwin H. Segler (Benevolent.AI, London, UK) marwin.segler@benevolent.ai · Amir Saffari (Benevolent.AI, London, UK) amir.saffari@benevolent.ai · Yoshua Bengio (CIFAR Senior Fellow; Mila, Université de Montréal, QC) yoshua.bengio@mila.quebec
Preprint. Work in progress.
1 Introduction
We address the problem of learning probabilistic generative graph models for tasks such as the conditional generation of molecules with optimal properties. More precisely we focus on generating realistic molecular graphs, similar to a target molecule (the prototype).
The main challenge here stems from the discrete nature of graph representations for molecules, which prevents us from using global discriminators that assess generated samples and backpropagate their gradients to guide the optimisation of a generator. This becomes a bigger hindrance if we want to either optimise a property of a molecule (graph) or explore the vicinity of an input molecule (prototype) for conditional optimal generation, an approach that has proven successful in controlled image generation (Odena et al., 2016; Chen et al., 2016).
Several recent approaches aim to address this limitation by performing indirect optimisation (Jin et al., 2018; You et al., 2018a; Li et al., 2018a). You et al. (2018a) formulate the molecular graph optimisation task in a reinforcement learning setting, and optimise the loss with policy gradient (Yu et al., 2016). However, policy gradient tends to suffer from high variance during training. Kang and Cho (2018) suggest a reconstruction-based formulation which is directly applicable to discrete structures and does not require gradient estimation. However, it is limited by the number of samples available. Moreover, there is always a risk that the generator simply ignores the part of the latent code containing the property that we want to optimise. Finally, Jin et al. (2018) apply Bayesian optimisation to optimise a proxy (the latent code) of the molecular graph, rather than the graph itself.
In contrast, Simonovsky and Komodakis (2018) and De Cao and Kipf (2018) have proposed decoding schemes that output graphs (adjacencies and node/edge feature tensors) in a single step, and so are able to perform direct optimisation on the probabilistic continuous approximation of a graph. However, both decoding schemes make use of fixed-size MLP layers, which restricts their use to very small graphs of a predefined maximum size.
Our approach (DEFactor), depicted in Figure 1, aims to directly address these issues with a probabilistic graph decoding scheme that is end-to-end differentiable, computationally efficient w.r.t. the number of parameters in the model, and capable of generating arbitrary-sized graphs (Section 3). We evaluate DEFactor on the task of constrained molecule property optimisation (Jin et al., 2018; You et al., 2018a) and demonstrate that our results are competitive with recent results.
2 Related work
Lead-based Molecule Optimisation:
The aim here is to obtain molecules that satisfy a target set of objectives, for example activity against a biological target while not being toxic, or maintaining certain properties such as solubility. A currently popular strategy is to fine-tune a pre-trained generative model to produce/select molecules that satisfy a desired set of properties (Segler et al., 2017).
Bayesian optimisation is proposed to explore the learnt latent spaces for molecules in Gómez-Bombarelli et al. (2016), and is shown to be effective at exploiting feature-rich latent representations (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018). In Li et al. (2018a,b), sequential graph decoding schemes whereby conditioning properties can be added to the input are proposed. However, these approaches are unable to perform direct optimisation for objectives. Finally, You et al. (2018a) reformulate the problem in a reinforcement learning setting, and objective optimisation is performed while keeping an efficient sequential-like generative scheme (You et al., 2018b).
Graph Generation Models:
Sequential methods for graph generation (You et al., 2018a,b; Li et al., 2018a,b) aim to construct a graph by predicting a sequence of node/edge addition and edit actions. Starting from a subgraph (normally empty), at each time step a discrete transition is predicted and the subgraph is updated. Although sequential approaches enable us to decouple the number of parameters in the model from the maximum size of the graph processed, due to the discretisation of the final outputs the graph is still non-differentiable w.r.t. the decoder's parameters. This again prevents us from directly optimising for the objectives we are interested in.
In contrast to the sequential process, De Cao and Kipf (2018) and Simonovsky and Komodakis (2018) reconstruct probabilistic graphs. These methods, however, make use of fixed-size MLP layers when decoding to predict the graph adjacency and node tensors, which limits their use to very small graphs of a pre-chosen maximum size. They therefore restrict study and application to small molecular graphs: a maximum of 9 heavy atoms, compared to approximately 40 in sequential models.
We propose to tackle these drawbacks by designing a graph decoding scheme that is:

- Efficient: the number of parameters of the decoder does not depend on a fixed maximum graph size.

- Differentiable: in particular, the final graph is differentiable w.r.t. the decoder's parameters, so that we are able to directly optimise the graph for target objectives.
3 DEFactor
Molecules can be represented as graphs G = (V, E) where atoms and bonds correspond to the nodes and edges respectively. Each node in V is labelled with its atom type, which can be considered part of its features. The adjacency tensor is given by A ∈ {0, 1}^(N×N×E), where N is the number of nodes (atoms) in the graph and E is the number of possible edge (bond) types. The node types are represented by a node feature tensor X ∈ {0, 1}^(N×K), which is composed of several one-hot-encoded properties.
3.1 Graph Construction Process
Given a molecular graph defined as G = (A, X), we propose to leverage the edge-specific information propagation framework described in Simonovsky and Komodakis (2017) to learn a set of informative node embeddings from which we can directly infer a graph. Our graph construction process is composed of two parts:

- An Encoder that in step 1 performs several spatial graph convolutions on the input graph, and in step 2 aggregates those embeddings into a single graph latent representation.

- A Decoder that in step 3 autoregressively generates a set of continuous node embeddings conditioned on the learnt latent representation, and in step 4 reconstructs the whole graph using edge-factorization.

Figure 1 (a) and (b) provide a summary of these four steps.
Steps 1 and 2: Graph Representation Learning.
We use the Graph Convolutional Network (GCN) update rule (Kipf and Welling, 2016a) to encode the graph. Each node embedding can be written as a weighted sum of the edge-conditioned information of its neighbours in the graph. Namely, for the l-th layer of the encoder, the representation is given by:

H^(l+1) = σ( Σ_{e=1..E} D_e^{-1} A_e H^(l) W_e^(l) + H^(l) W_s^(l) )    (1)

where A_e is the e-th frontal slice of the adjacency tensor A, D_e the corresponding degree tensor, and W_e^(l) and W_s^(l) are learned parameters of the layer.
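As an illustration, the update above can be sketched in NumPy. The tanh non-linearity, the row-wise degree normalisation, and all parameter names here are assumptions for the sketch, not the paper's exact choices:

```python
import numpy as np

def gcn_layer(H, A, W_edge, W_self):
    """One edge-conditioned GCN update (sketch of Eq. 1).

    H: (N, d) node embeddings; A: (E, N, N) adjacency tensor with one
    frontal slice per edge type; W_edge: list of E (d, d') edge-specific
    weights; W_self: (d, d') self-connection weights.
    """
    out = H @ W_self                              # self-connection term
    for e, We in enumerate(W_edge):
        Ae = A[e]
        deg = Ae.sum(axis=1, keepdims=True)       # per-node degree for type e
        deg = np.where(deg == 0, 1.0, deg)        # avoid division by zero
        out = out + (Ae / deg) @ H @ We           # degree-normalised propagation
    return np.tanh(out)
```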
Once we have the node embeddings, we aggregate them to obtain a fixed-length latent representation of the graph. We propose to parametrize this aggregation step by an LSTM, and we compute the graph latent representation z by a simple linear transformation of the last hidden state of this aggregator:

z = W_z s_N    (2)

where s_N is the last hidden state of the aggregator LSTM run over the node embeddings. Because the use of an LSTM makes the aggregation permutation-dependent, like Hamilton et al. (2017) we adapt the aggregator by training it on randomly permuted sets of embeddings, and we empirically validated that this does not significantly affect the performance of the model.
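A minimal sketch of this aggregation, assuming a plain tanh RNN in place of the LSTM and averaging the code over a few random node orderings (the paper only states that permuted embedding sets are used during training; the averaging and the parameter names are illustrative):

```python
import numpy as np

def aggregate(H, Wr, Wz, rng, n_perms=4):
    """Permutation-averaged RNN aggregation (sketch of Eq. 2).

    H: (N, d) node embeddings; Wr: (s_dim + d, s_dim) recurrence weights;
    Wz: (s_dim, z_dim) output map. z is the linear map of the final state,
    averaged over random node orderings.
    """
    zs = []
    for _ in range(n_perms):
        order = rng.permutation(len(H))           # random node ordering
        s = np.zeros(Wr.shape[1])
        for i in order:
            s = np.tanh(np.concatenate([s, H[i]]) @ Wr)
        zs.append(s @ Wz)                         # z from last hidden state
    return np.mean(zs, axis=0)
```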
In the subsequent steps we are interested in designing a graph decoding scheme from the latent code z that is both scalable and powerful enough to model the interdependencies between the nodes and edges of the graph.
Step 3: Autoregressive Generation.
We are interested in building a graph decoding scheme that models the nodes and their connectivity (represented by continuous embeddings h_1, …, h_N) in an autoregressive fashion. This is in contrast to (Simonovsky and Komodakis, 2018; De Cao and Kipf, 2018), where each node and edge is conditionally independent given the latent code z. In practice this means that every detail of the interdependencies within the graph has to be encoded in the latent variable. We propose to tackle this drawback by autoregressive generation of the continuous node embeddings. More precisely, we model the generation of node embeddings such that:

p(h_1, …, h_N | z) = Π_{i=1..N} p(h_i | h_1, …, h_{i-1}, z)    (3)
In our model, the autoregressive generation of embeddings is parametrized by a simple Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997) and is completely deterministic, such that at each time step the LSTM decoder takes as input the previously generated embedding and the latent code z, which captures node-invariant features of the graph. Each embedding is computed as a function of the current hidden state concatenated with the latent code, such that:

s_i = LSTM(s_{i-1}, g([h_{i-1}; z]))    (4)
h_i = f([s_i; z])    (5)

where LSTM corresponds to the LSTM recurrence operation, and f and g are parametrized as simple MLP layers to perform non-linear feature extraction.
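The decoding loop can be sketched as follows; a plain tanh recurrence stands in for the LSTM, and Wr, Wf are hypothetical parameter names introduced only for this sketch:

```python
import numpy as np

def generate_embeddings(z, Wr, Wf, max_nodes):
    """Deterministic autoregressive embedding generation (sketch of step 3).

    z: (z_dim,) graph code; Wr: (s_dim + d + z_dim, s_dim) recurrence
    weights; Wf: (s_dim + z_dim, d) embedding head. Each step consumes the
    previously generated embedding plus z, then maps [state; z] to the
    next embedding.
    """
    s = np.zeros(Wr.shape[1])
    h_prev = np.zeros(Wf.shape[1])
    out = []
    for _ in range(max_nodes):
        s = np.tanh(np.concatenate([s, h_prev, z]) @ Wr)  # recurrence
        h_prev = np.tanh(np.concatenate([s, z]) @ Wf)     # embedding head
        out.append(h_prev)
    return np.stack(out)
```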
Step 4: Graph Decoding from Node Embeddings.
As stated previously, we want to drive the generation of the continuous embeddings towards latent factors that contain enough information about the node they represent (i.e. we can easily retrieve the one-hot atom type by performing a linear transformation of the continuous embedding) and its neighbourhood (i.e. the adjacency tensor can easily be retrieved by comparing those embeddings in a pairwise manner). For those reasons, we suggest factorizing each bond type in a relational inference fashion (Kipf and Welling, 2016b; Kipf et al., 2018).
Let H = [h_1, …, h_N] be the concatenated continuous node embeddings generated in the previous step. We reconstruct the adjacency tensor by learning an edge-specific similarity measure s_e for the e-th edge type as follows:

Ã_ije = s_e(h_i, h_j)    (6)

This is modeled by a set of edge-specific factors u_e such that we can reconstruct the adjacency tensor as:

Ã_ije = σ( h_i^T D(u_e) h_j )    (7)

where σ is the logistic sigmoid function, D(u_e) the corresponding diagonal matrix of the vector u_e, and the factors u_e are learned parameters.
We reconstruct the node features (i.e. the atom type) with a simple affine transformation of each embedding, normalised over atom types:

X̃_i = softmax(W_x h_i)    (8)

where W_x is a learned parameter.
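The factorized decoding of Eqs. (6)-(8) admits a compact vectorised sketch; the parameter names and the softmax normalisation are assumptions of this sketch:

```python
import numpy as np

def decode_graph(H, U, Wx):
    """Edge-factorized graph decoding (sketch of step 4).

    H: (N, d) node embeddings; U: (E, d) one learned factor vector per
    edge type; Wx: (d, K) node-feature map.
    """
    # A[e, i, j] = sigmoid(h_i^T diag(u_e) h_j); (H * u) @ H.T computes
    # the bilinear form for every (i, j) pair at once.
    A = 1.0 / (1.0 + np.exp(-np.stack([(H * u) @ H.T for u in U])))
    # Affine map of each embedding, softmax over the K atom types.
    logits = H @ Wx
    X = np.exp(logits - logits.max(axis=1, keepdims=True))
    X = X / X.sum(axis=1, keepdims=True)
    return A, X
```

Note that the bilinear form makes each reconstructed adjacency slice symmetric by construction, which matches undirected molecular bonds.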
Generating Graphs of Arbitrary Sizes.
In order to generate graphs of different sizes, we add what we call here an Existence module, which for each embedding generated in step 3 outputs the probability of the corresponding node belonging to the final graph. This module is parametrized as a simple MLP layer followed by a sigmoid activation, and stops the unrolling of the embedding LSTM generator whenever a non-informative embedding is encountered. This module can thus be interpreted as an end-of-sequence indicator.
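A minimal sketch of such an existence module, assuming a single linear layer and a 0.5 stopping threshold (both are assumptions of this sketch, not stated choices of the paper):

```python
import numpy as np

def existence_mask(H, w, b=0.0):
    """Existence-module sketch: per-node probability of belonging to the
    final graph. Generation stops at the first embedding judged
    non-informative (probability below 0.5); w and b are hypothetical
    learned parameters of a single linear layer."""
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # sigmoid over linear scores
    stop = np.flatnonzero(p < 0.5)           # indices of "stop" embeddings
    n_nodes = int(stop[0]) if stop.size else len(p)
    return p, n_nodes
```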
3.2 Training
Teacher forcing.
To make the model converge in reasonable time, we adapt teacher forcing, as used for language models (Williams and Zipser, 1989), as follows. Training is done in three phases:

1. We first pre-train the GCN encoder along with the embedding decoder (factorization, node, and existence modules) to reconstruct the graphs. This corresponds to the training of a simple graph autoencoder as in Kipf and Welling (2016b), except that we also want to reconstruct the nodes' one-hot features (and not just the relations).

2. We then append those two units to the embedding aggregator and generator while keeping them fixed. In this second phase, the embedding generator is trained using teacher forcing: at each time step the LSTM decoder does not take as input the previously generated embedding but the true one, i.e. the direct output of the pre-trained GCN embedding encoder.

3. Finally, in order to transition from teacher forcing to a fully autoregressive state, we increasingly feed the LSTM generator its own predictions, following scheduled sampling (Bengio et al., 2015). When a fully autoregressive state is reached, the pre-trained units are unfrozen and the whole model continues training end-to-end.
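The annealing in the third phase can be sketched as a per-step coin flip between ground-truth and self-generated embeddings; the per-step mixing granularity and all names here are assumptions of this sketch:

```python
import numpy as np

def mix_inputs(true_emb, model_emb, p_teacher, rng):
    """Scheduled-sampling sketch: at each step, feed the ground-truth GCN
    embedding with probability p_teacher, otherwise the model's own
    previous output. p_teacher is annealed from 1 toward 0 over training
    to reach a fully autoregressive state."""
    use_true = rng.random(len(true_emb)) < p_teacher
    return np.where(use_true[:, None], true_emb, model_emb)
```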
Log-Likelihood Estimates.
We train the autoencoder on the reconstruction error using MLE, with the estimated negative log-likelihood given by:

L_rec = L_{E+} + L_{E-} + L_X    (9)

where N is the number of nodes in the graph, E+ and E- correspond to the existing and non-existing edges in the adjacency tensor A, and X is the node feature tensor, such that:

L_{E+} = -(1/|E+|) Σ_{(i,j,e) ∈ E+} log Ã_ije    (10)
L_{E-} = -(1/|E-|) Σ_{(i,j,e) ∈ E-} log(1 - Ã_ije)    (11)
L_X = -(1/N) Σ_{i=1..N} Σ_k X_ik log X̃_ik    (12)

Since molecular graphs are sparse, we found that such separate normalisations of the edge and non-edge terms were helpful for training.
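The separately normalised estimate can be written down directly; the small eps clamp is an implementation detail added here for numerical safety:

```python
import numpy as np

def recon_nll(A_true, A_pred, X_true, X_pred, eps=1e-8):
    """Reconstruction NLL with separate pos/neg edge normalisation
    (sketch of Eqs. 9-12). A_*: (E, N, N); X_*: (N, K)."""
    pos = A_true > 0.5
    l_pos = -np.log(A_pred[pos] + eps).mean()          # existing edges
    l_neg = -np.log(1.0 - A_pred[~pos] + eps).mean()   # non-existing edges
    l_node = -(X_true * np.log(X_pred + eps)).sum(axis=1).mean()
    return l_pos + l_neg + l_node
```

Averaging the edge and non-edge terms separately keeps the few true bonds from being swamped by the many absent ones in a sparse molecular graph.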
3.3 Conditional Generation and Optimisation
Model overview.
Given the entangled latent code z for a given input molecular graph, we create a conditioned input by augmenting z with a set of structured attributes y, the target properties of interest, such as a physico-chemical property. The conditional generator is then trained on the combined reconstruction and property loss. At the end of a successful training we expect the decoder to generate samples that have the properties specified in y and that are similar (in terms of the information contained in z) to the original query molecular graph (encoded as z). To do so, we choose a mutual information maximization approach (detailed in Appendix B.2) that involves discriminators which assess the properties of the generated samples; their feedback is used to guide the learning of the generator.
Discriminator Pre-Training.
In this phase we pre-train a discriminator to assess the property of a generated sample so that we can backpropagate its feedback to the generator (the discriminator can be trained on another dataset, and we can have several discriminators for several attributes of interest). In order to have informative gradients in the early stages of training, we trained the discriminator on continuous approximations of the discrete training graphs (details in Appendix B.1), so that our objective becomes:

max_φ  E_{G̃ ~ p̃_data} [ log q_φ(y | G̃) ]    (13)

where the graphs G̃ sampled from p̃_data are the probabilistic approximations of the discrete graphs from the training distribution p_data.
The next step is to incorporate the feedback signal of the trained discriminator in order to formulate the property-attribute constraint. The training is decomposed into two phases, in which we learn to reconstruct graphs of the dataset (MLE phase) and to modify chemical attributes (variational MI maximization phase).
Encoder Learning.
The encoder is updated only during the reconstruction phase, where we sample attributes from the true posterior. The encoder loss is a linear combination of the molecular graph reconstruction loss (L_rec) and the property reconstruction loss (L_prop). The total encoder loss is:

L_enc = L_rec + λ L_prop    (14)

where L_rec uses the log-likelihood estimates in (9) and L_prop = -log q_φ(y | G̃), with λ a hyperparameter of the model.
Generator Learning.
The generator is updated in both the reconstruction and conditional phases. In the MLE phase the generator is trained with the same loss as the encoder, so that it is pushed towards generating realistic molecular graphs. In the MI maximization phase we sample the attributes from a prior p(y), s.t. we minimize the following objective:

L_MI = -Σ_k λ_k E_{y_k ~ p(y_k)} [ log q_{φ_k}(y_k | G̃) ]    (15)

where the λ_k are hyperparameters of the model weighting the attribute discriminators.
In this phase the only optimisation signal comes from the trained discriminator. Since there is no realism constraint specified in our model (see Jin et al. (2018); You et al. (2018a)), there is a risk of "falling off the manifold". A possible way to mitigate this is to add a similarity discriminator trained to distinguish between real probabilistic graphs and generated ones, so that when trying to satisfy the attribute constraint the generator is forced to produce valid molecular graphs. We leave this for future work.
Table 1: Molecular graph reconstruction accuracy.

Method | Reconstruction Accuracy
JT-VAE (Jin et al., 2018) | 76.7
JT-AE (without stereochemistry) | 69.9
DEFactor-56 | 89.2
DEFactor-64 | 89.4
DEFactor-100 | 89.8
4 Experiments
To compare with recent results in constrained molecular graph optimization (Jin et al., 2018; You et al., 2018a), we present the following experiments:

- Molecular Graph Reconstruction: We test the autoencoder framework on the task of reconstructing input molecules from their latent representations.

- Conditional Generation: We test our conditional generation framework on the task of generating novel molecules that satisfy a given input property. Here we are interested in the octanol-water partition coefficient (LogP) optimization used as a benchmark in Kusner et al. (2017); Jin et al. (2018); You et al. (2018a).

- Constrained Property Optimization: Finally, we test our conditional autoencoder on the task of modifying a given molecule to improve a specified property while constraining the degree of deviation from the original molecule. Again we use the LogP benchmark for this experiment.

In the following experiments we use the 250K subset of the ZINC dataset (Irwin et al., 2012), released by Kusner et al. (2017), along with the given train and test splits.
Molecular graph reconstruction:
In this task we evaluate the exact reconstruction error from encoding and decoding a given molecular graph from the test set. We report in Table 1 the ratio of exactly reconstructed graphs, where we see that our autoencoder outperforms the JT-VAE (Jin et al., 2018), which has the current state-of-the-art performance on this task. Appendix C.1 reports the reconstruction ratio as a function of the molecule size (number of heavy atoms).
Conditional Generation:
In this task we evaluate the conditional generation formulation described in Section 3.3. For a given molecule with an observed property value y, the goal is to modify the molecule to generate a new molecule with a given target property value y' (y' ≠ y). New molecules are generated by conditioning the decoder on (z, y'), where z is the latent code of the input molecule. The decoded new molecule is ideally best suited to satisfy the target property. This is evaluated by comparing the property value of the new molecule with the target property value: a generator that performs well at this task will produce molecules with property values close to the target. In these experiments LogP was chosen as the desired property, and we use RDKit to calculate the LogP values of generated molecules.
The scatter plots in Figure 2 show, for a selected set of test molecules, the correlation of target property values against the evaluated property values of the correctly decoded molecules. Here, for each molecule, the target set is defined by uniformly sampling around the observed property value of the molecule.
Constrained Property Optimization:
In this section we follow the evaluation methodology outlined in Jin et al. (2018); You et al. (2018a), and evaluate our model on constrained molecule property optimization. In contrast to Jin et al. (2018), our model, due to its conditional formulation, does not need retraining for the optimisation task.
Given the 800 molecules with the lowest penalized LogP property scores from the test set (the penalized LogP is the octanol-water partition coefficient penalized by the synthetic accessibility (SA) score and the number of long cycles; see Jin et al. (2018)), we evaluate the decoder by providing pairs (z, y) with increasing property scores, and among the valid decoded graphs we compute:

- their similarity scores (Sim.) to the encoded target molecule (called the prototype);

- their penalized LogP scores. Note that in this setting the conditioning property values y are the unpenalized LogP scores; however, to evaluate the model we compute the penalized LogP scores to assess the model's ability to decode synthetically accessible molecules.

While varying the similarity threshold δ, we compute the success rate (Suc.) over all 800 molecules. This measures how often we get a novel molecule with an improved penalized LogP score.
The final results are reported in Table 2. As can be seen, although slightly behind GCPN (You et al., 2018a) w.r.t. success rates (Suc.), DEFactor significantly outperforms the other models in terms of the improvements (Imp.) achieved at every similarity threshold, with respect to the next best model, GCPN.
Table 2: Constrained penalized LogP optimization (mean ± std).

δ | JT-VAE Imp. | Sim. | Suc. | GCPN Imp. | Sim. | Suc. | DEFactor Imp. | Sim. | Suc.
0.0 | 1.91 ± 2.04 | 0.28 ± 0.15 | 97.5 | 4.20 ± 1.28 | 0.32 ± 0.12 | 100 | 6.62 ± 2.50 | 0.20 ± 0.16 | 91.5
0.2 | 1.68 ± 1.85 | 0.33 ± 0.13 | 97.1 | 4.12 ± 1.19 | 0.34 ± 0.11 | 100 | 5.55 ± 2.31 | 0.31 ± 0.12 | 90.8
0.4 | 0.84 ± 1.45 | 0.51 ± 0.10 | 83.6 | 2.49 ± 1.30 | 0.47 ± 0.08 | 100 | 3.41 ± 1.8 | 0.49 ± 0.09 | 85.9
0.6 | 0.21 ± 0.71 | 0.69 ± 0.06 | 46.4 | 0.79 ± 0.63 | 0.68 ± 0.08 | 100 | 1.55 ± 1.19 | 0.69 ± 0.06 | 72.6
5 Future work
In this paper, we have presented a new way of modelling and generating graphs in a conditional optimisation setting, such that the final graph is fully differentiable w.r.t. the model parameters. We believe that our DEFactor model will contribute to understanding and building ML-driven applications for de novo drug design and the generation of molecules with optimal properties, without resorting to methods that do not directly optimise the desired properties.
Note that a drawback of our model is that it uses an MLE training process, which forces us to either fix the ordering of nodes or perform a computationally expensive graph matching operation to compute the loss. Moreover, in our fully deterministic conditional formulation we assume that chemical property optimisation is a one-to-one mapping, but in reality there may exist many suitable ways of optimizing a molecule to satisfy a property condition while staying similar to the query molecule. To that extent, it could be interesting to augment our model to allow a one-to-many mapping. Another way of improving the model could be to include a validity constraint, formulated by training a discriminator that distinguishes between valid and generated graphs.
Acknowledgements
The authors thank NSERC and CIFAR for funding at U. Montreal and Mila as well as Marco Fiscato, Petar Veličković, Jian Tang and Benedek Fabian for useful discussions.
References
Odena et al. (2016) Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs, 2016.
Chen et al. (2016) Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, 2016.
Jin et al. (2018) Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation, 2018.
You et al. (2018a) Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation, 2018.
Bengio et al. (2015) Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks, 2015.
Irwin et al. (2012) John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. ZINC: A free tool to discover chemistry for biology. Journal of Chemical Information and Modeling, 52(7):1757–1768, 2012. PMID: 22587354.
RDKit: Open-source cheminformatics. http://www.rdkit.org. [Online; accessed 11-April-2013].
Li et al. (2018a) Yibo Li, Liangren Zhang, and Zhenming Liu. Multi-objective de novo drug design with conditional graph generative model, 2018.
Yu et al. (2016) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473, 2016.
Kang and Cho (2018) Seokho Kang and Kyunghyun Cho. Conditional molecular design with deep generative models, 2018.
Simonovsky and Komodakis (2018) Martin Simonovsky and Nikos Komodakis. GraphVAE: Towards generation of small graphs using variational autoencoders, 2018.
Segler et al. (2017) Marwin H. S. Segler, Thierry Kogej, Christian Tyrchan, and Mark P. Waller. Generating focussed molecule libraries for drug discovery with recurrent neural networks, 2017.
Gómez-Bombarelli et al. (2016) Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules, 2016.
Kusner et al. (2017) Matt J. Kusner, Brooks Paige, and José Miguel Hernández-Lobato. Grammar variational autoencoder. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Dai et al. (2018) Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. Syntax-directed variational autoencoder for structured data. CoRR, abs/1802.08786, 2018.
Li et al. (2018b) Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs, 2018.
You et al. (2018b) Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models, 2018.
Simonovsky and Komodakis (2017) Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs, 2017.
Kipf and Welling (2016a) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2016.
Kipf et al. (2018) Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems, 2018.
Williams and Zipser (1989) Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks, 1989.
Kipf and Welling (2016b) Thomas N. Kipf and Max Welling. Variational graph auto-encoders, 2016.
Hamilton et al. (2017) William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2017.
Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Zitnik et al. (2018) Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):457–466, 2018.
De Cao and Kipf (2018) Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs, 2018.
Appendix A
A.1 Models Comparison
Model | Inference | Parameter-efficient | Constrained | Probabilistic | No Retraining
MolGAN (De Cao and Kipf, 2018) | ✗ | ✗ | ✗ | ✓ | NA
JT-VAE (Jin et al., 2018) | ✓ | ✓ | ✓ | ✗ | ✗
GCPN (You et al., 2018a) | ✗ | ✓ | ✓ | ✗ | ✓
DEFactor (ours) | ✓ | ✓ | ✓ | ✓ | ✓
We are interested in the following features of the models:

- Inference: whether the model is equipped with an inference network, to encode a target molecule as we do in the conditional setting.

- Parameter-efficient: whether the number of parameters of the model is independent of the graph size.

- Constrained: whether the model is studied in a constrained optimization scenario, namely the case where we want to optimize a property while constraining the degree of deviation from the original molecule.

- Probabilistic: whether the output of the model is a probabilistic graph, s.t. it is differentiable w.r.t. the decoder's parameters.

- No Retraining: whether we avoid retraining/fine-tuning/performing gradient ascent each time we want to optimize a novel molecule.
Appendix B
B.1 Continuous Approximation of Graphs
For the pre-training of the discriminators, we suggested training them on continuous approximations of the discrete graphs that resemble the output of the decoder. To that end, we used the trained partial graph autoencoder (the one used for teacher forcing at the beginning of the training of the full autoencoder).
B.2 Mutual Information Maximization
For the conditional setting we choose a simple mutual information maximization formulation. The objective is to maximize the MI between the target property y and the decoder's output G̃ under the joint distribution defined by the decoder. In the conditional setting the decoder is also conditioned on the encoded molecule z, but for simplicity we treat z as a parameter of the decoder (and thus reason with one target molecule whose attributes we want to modify). Following the standard variational lower bound, we define the MI as:

I(y; G̃) = H(y) - H(y | G̃) ≥ H(y) + E_{G̃} [ E_{y' ~ p(y' | G̃)} [ log q_φ(y' | G̃) ] ]

In our conditional setting we pre-trained the discriminators (parametrized by φ in the lower-bound derivation) to approximate p(y | G̃), which makes the bound tight only when q_φ is close to p(y | G̃); this corresponds to a stage where the decoder has maximized the log-likelihood of the data well enough (i.e. when it is able to reconstruct input graphs properly when z and y are paired). Thus, in the conditional setting we maximize the following objective:

E_{y ~ p(y), G̃ ~ p_θ(G̃ | z, y)} [ log q_φ(y | G̃) ]
Appendix C
C.1 Reconstruction as a Function of the Number of Atoms
Notice that, as we make use of a simple LSTM to encode a graph representation, there is a risk that for the largest molecules the long-term dependencies of the embeddings are not captured well, resulting in poor reconstruction. We capture this observation in Figure 4. One possible improvement could be to add at each step another non-sequential aggregation of the embeddings (average pooling of the embeddings, for example), or to make the encoder more powerful by adding attention mechanisms. We leave these options for future work.