DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation


Rim Assouel
Mila, Université de Montréal, QC
Benevolent.AI, London, UK
Mohamed Ahmed
Benevolent.AI, London, UK
Marwin H Segler
Benevolent.AI, London, UK
Amir Saffari
Benevolent.AI, London, UK
Yoshua Bengio
Mila, Université de Montréal, QC
CIFAR Senior Fellow

Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown a promising way of performing de-novo molecular design. Although graph generative models are currently available, they either have a number of parameters dependent on the graph size, limiting their use to only very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and therefore preventing their use in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks.



1 Introduction

We address the problem of learning probabilistic generative graph models for tasks such as the conditional generation of molecules with optimal properties. More precisely we focus on generating realistic molecular graphs, similar to a target molecule (the prototype).

The main challenge here stems from the discrete nature of graph representations of molecules, which prevents us from using global discriminators that assess generated samples and back-propagate their gradients to guide the optimisation of a generator. This becomes a bigger hindrance if we want to either optimise a property of a molecule (graph) or explore the vicinity of an input molecule (prototype) for conditional optimal generation, an approach that has proven successful in controlled image generation (Odena et al., 2016; Chen et al., 2016).

Several recent approaches aim to address this limitation by performing indirect optimisation (Jin et al., 2018; You et al., 2018a; Li et al., 2018a). You et al. (2018a) formulate the molecular graph optimisation task in a reinforcement learning setting and optimise the loss with policy gradient (Yu et al., 2016). However, policy gradient tends to suffer from high variance during training. Kang and Cho (2018) suggest a reconstruction-based formulation which is directly applicable to discrete structures and does not require gradient estimation. However, it is limited by the number of samples available. Moreover, there is always a risk that the generator simply ignores the part of the latent code containing the property that we want to optimise. Finally, Jin et al. (2018) apply Bayesian optimisation to optimise a proxy (the latent code) of the molecular graph, rather than the graph itself.

In contrast, Simonovsky and Komodakis (2018) and De Cao and Kipf (2018) have proposed decoding schemes that output graphs (adjacency and node/edge feature tensors) in a single step, and so are able to perform direct optimisation on the probabilistic continuous approximation of a graph. However, both decoding schemes make use of fixed-size MLP layers, which restricts their use to very small graphs of a predefined maximum size.

Our approach (DEFactor), depicted in Figure 1, aims to directly address these issues with a probabilistic graph decoding scheme that is end-to-end differentiable, computationally efficient w.r.t. the number of parameters in the model, and capable of generating arbitrarily sized graphs (Section 3). We evaluate DEFactor on the task of constrained molecule property optimisation (Jin et al., 2018; You et al., 2018a) and demonstrate that our results are competitive with these recent approaches.

(a) The full autoencoder (step 1 to 4)
(b) Expanding the steps (3 for the LSTM and 4 for the factorization and node decoding) of the generator of the autoencoder
(c) The conditional setting: a discriminator assesses the outputs and gives its feedback to the generator; the reconstruction and conditional losses are described in Section 3.3.
Figure 1: Overview of our molecule autoencoding ((a) and (b)) and conditional generation (c) process.

2 Related work

Lead-based Molecule Optimisation:

The aim here is to obtain molecules that satisfy a target set of objectives, for example activity against a biological target while not being toxic, or maintaining certain properties such as solubility. Currently a popular strategy is to fine-tune a pretrained generative model to produce/select molecules that satisfy a desired set of properties (Segler et al., 2017).

Bayesian optimisation is proposed to explore the learnt latent spaces of molecules in Gómez-Bombarelli et al. (2016), and is shown to be effective at exploiting feature-rich latent representations (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018). In Li et al. (2018a,b), sequential graph decoding schemes whereby conditioning properties can be added to the input are proposed; however, these approaches are unable to perform direct optimisation for objectives. Finally, You et al. (2018a) reformulate the problem in a reinforcement learning setting, and objective optimisation is performed while keeping an efficient sequential-like generative scheme (You et al., 2018b).

Graph Generation Models:

Sequential methods for graph generation (You et al., 2018b; Li et al., 2018a; You et al., 2018a; Li et al., 2018b) aim to construct a graph by predicting a sequence of node/edge addition and editing actions. Starting from a sub-graph (normally empty), at each time step a discrete transition is predicted and the sub-graph is updated. Although sequential approaches enable us to decouple the number of parameters in models from the maximum size of the graph processed, due to the discretisation of the final outputs the graph is still non-differentiable w.r.t. the decoder's parameters. This again prevents us from directly optimising for the objectives we are interested in.

In contrast to the sequential process, De Cao and Kipf (2018) and Simonovsky and Komodakis (2018) reconstruct probabilistic graphs. These methods, however, make use of fixed-size MLP layers when decoding to predict the graph adjacency and node tensors, which limits their use to very small graphs of a pre-chosen maximum size. They are therefore restricted to small molecular graphs: a maximum of 9 heavy atoms, compared to approximately 40 in sequential models.

We propose to tackle these drawbacks by designing a graph decoding scheme that is:

  • Efficient: so that the number of parameters of the decoder does not depend on a fixed maximum graph size.

  • Differentiable: in particular we would like the final graph to be differentiable w.r.t the decoder’s parameters, so that we are able to directly optimise the graph for target objectives.

3 DEFactor

Molecules can be represented as graphs where atoms and bonds correspond to the nodes and edges respectively. Each node in V is labeled with its atom type, which can be considered part of its features. The adjacency tensor is given by A, of shape E x N x N, where N is the number of nodes (atoms) in the graph and E is the number of possible edge (bond) types. The node types are represented by a node feature tensor X which is composed of several one-hot-encoded properties.
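As a concrete illustration, a small fragment can be encoded as an (E, N, N) adjacency tensor and an (N, T) one-hot node tensor. The sketch below is a minimal NumPy example; the atom and bond vocabularies and the `encode_graph` helper are hypothetical placeholders, not the exact encoding used in the paper:

```python
import numpy as np

ATOM_TYPES = ["C", "O", "N"]                      # hypothetical, trimmed type set
BOND_TYPES = ["single", "double", "triple", "aromatic"]

def encode_graph(atoms, bonds, atom_types=ATOM_TYPES, n_bond_types=len(BOND_TYPES)):
    """Build an (E, N, N) adjacency tensor and an (N, T) one-hot node tensor
    from a list of atom symbols and (i, j, bond_type) triples."""
    n = len(atoms)
    A = np.zeros((n_bond_types, n, n))
    for i, j, e in bonds:
        A[e, i, j] = A[e, j, i] = 1.0             # undirected molecular graph
    X = np.zeros((n, len(atom_types)))
    for i, a in enumerate(atoms):
        X[i, atom_types.index(a)] = 1.0
    return A, X

# A C-C-O fragment connected by single bonds (bond type 0)
A, X = encode_graph(["C", "C", "O"], [(0, 1, 0), (1, 2, 0)])
```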

3.1 Graph Construction Process

Given a molecular graph defined by its adjacency and node feature tensors, we propose to leverage the edge-specific information propagation framework described in Simonovsky and Komodakis (2017) to learn a set of informative node embeddings from which we can directly infer a graph. Our graph construction process is composed of two parts:

  • An Encoder that in

    • step 1 performs several spatial graph convolutions on the input graph, and in

    • step 2 aggregates those embeddings into a single graph latent representation.

  • A Decoder that in

    • step 3 autoregressively generates a set of continuous node embeddings conditioned on the learnt latent representation, and in

    • step 4 reconstructs the whole graph using edge-factorization.

Figure 1 (a) and (b) provide a summary of these four steps.

Steps 1 and 2: Graph Representation Learning.

We use the Graph Convolutional Network (GCN) update rule (Kipf and Welling, 2016a) to encode the graph. Each node embedding can be written as a weighted sum of the edge-conditioned information of its neighbors in the graph. Namely, for the l-th layer of the encoder, the representation is given by:

H^{(l+1)} = \sigma\Big( \sum_{e=1}^{E} D_e^{-1} A_e H^{(l)} W_e^{(l)} + H^{(l)} W^{(l)} \Big)

where A_e is the e-th frontal slice of the adjacency tensor, D_e the corresponding degree tensor, and W_e^{(l)} and W^{(l)} are learned parameters of the layer.
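A minimal NumPy sketch of one such edge-conditioned convolution layer may help fix ideas; the tanh nonlinearity and the degree clamp are our assumptions where the text leaves details open, and `gcn_layer` is a hypothetical helper name:

```python
import numpy as np

def gcn_layer(A, H, W_edge, W_self):
    """One edge-conditioned GCN layer: for each edge type e, propagate
    degree-normalised neighbour messages through an edge-specific weight
    W_edge[e]; W_self carries the node's own representation forward.
    A: (E, N, N), H: (N, d_in), W_edge: (E, d_in, d_out), W_self: (d_in, d_out)."""
    out = H @ W_self
    for e in range(A.shape[0]):
        deg = np.maximum(A[e].sum(axis=1, keepdims=True), 1.0)  # avoid division by zero
        out += (A[e] @ H) / deg @ W_edge[e]
    return np.tanh(out)

rng = np.random.default_rng(0)
H_out = gcn_layer(rng.integers(0, 2, size=(2, 5, 5)).astype(float),
                  rng.normal(size=(5, 3)),
                  rng.normal(size=(2, 3, 4)),
                  rng.normal(size=(3, 4)))
```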

Once we have the node embeddings, we aggregate them to obtain a fixed-length latent representation of the graph. We propose to parametrize this aggregation step by an LSTM and compute the graph latent representation z by a simple linear transformation of the last hidden state of this aggregator:

z = W_z \, \mathrm{LSTM}_{agg}(h_1, \dots, h_N)

Because the use of an LSTM makes the aggregation permutation-dependent, like Hamilton et al. (2017) we adapt the aggregator by training it on randomly permuted sets of embeddings, and empirically validated that this did not affect the performance of the model significantly.

In the subsequent steps we are interested in designing a graph decoding scheme from the latent code that is both scalable and powerful enough to model the interdependencies between the nodes and edges in the graph.

Step 3: Autoregressive Generation.

We are interested in building a graph decoding scheme that models the nodes and their connectivity (represented by continuous embeddings s_1, ..., s_N) in an autoregressive fashion. This is in contrast to Simonovsky and Komodakis (2018) and De Cao and Kipf (2018), where each node and edge is conditionally independent given the latent code z. In practice this means that every detail of the interdependencies within the graph has to be encoded in the latent variable. We propose to tackle this drawback by autoregressive generation of the continuous node embeddings. More precisely, we model the generation of node embeddings such that:

p(s_1, \dots, s_N \mid z) = \prod_{t=1}^{N} p(s_t \mid s_{<t}, z)
In our model, the autoregressive generation of embeddings is parametrized by a simple Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997) and is completely deterministic: at each time step the LSTM decoder takes as input the previously generated embedding and the latent code z, which captures node-invariant features of the graph. Each embedding is computed as a function of the concatenation of the current hidden state and the latent code, such that:

h_t = \mathrm{LSTM}\big(h_{t-1}, f([s_{t-1}; z])\big), \qquad s_t = g([h_t; z])

where LSTM denotes the LSTM recurrence operation, and f and g are parametrized as simple MLP layers performing nonlinear feature extraction.
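The unrolling can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bare-bones LSTM cell, the readout as a single tanh layer, and the zero initial state are all assumptions, and `decode_embeddings` is a hypothetical helper name:

```python
import numpy as np

def lstm_step(x, h, c, Wx, Wh, b):
    """Minimal LSTM cell (gate order: input, forget, output, candidate)."""
    a = x @ Wx + h @ Wh + b
    n = h.shape[-1]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(a[:n]), sig(a[n:2*n]), sig(a[2*n:3*n])
    g = np.tanh(a[3*n:])
    c = f * c + i * g
    return o * np.tanh(c), c

def decode_embeddings(z, n_steps, params):
    """At each step the LSTM consumes [s_{t-1}; z]; the embedding s_t is then
    read out from [h_t; z] by a one-layer tanh readout."""
    Wx, Wh, b, Wg = params
    n_hidden, d_emb = Wh.shape[0], Wg.shape[1]
    h, c, s = np.zeros(n_hidden), np.zeros(n_hidden), np.zeros(d_emb)
    out = []
    for _ in range(n_steps):
        h, c = lstm_step(np.concatenate([s, z]), h, c, Wx, Wh, b)
        s = np.tanh(np.concatenate([h, z]) @ Wg)
        out.append(s)
    return np.stack(out)

rng = np.random.default_rng(1)
# d_emb=4, d_z=3, n_hidden=5 (illustrative sizes)
params = (rng.normal(size=(7, 20)), rng.normal(size=(5, 20)),
          np.zeros(20), rng.normal(size=(8, 4)))
S_dec = decode_embeddings(rng.normal(size=3), 6, params)
```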

Step 4: Graph Decoding from Node Embeddings.

As stated previously, we want to drive the generation of the continuous embeddings towards latent factors that contain enough information about the node they represent (i.e. we can easily retrieve the one-hot atom type by a linear transformation of the continuous embedding) and its neighbourhood (i.e. the adjacency tensor can easily be retrieved by comparing those embeddings in a pair-wise manner). For these reasons, we propose to factorize each bond type in a relational inference fashion (Kipf and Welling, 2016b; Kipf et al., 2018).

Let S = [s_1, \dots, s_N] be the concatenated continuous node embeddings generated in the previous step. We reconstruct the adjacency tensor by learning an edge-specific similarity measure for the e-th edge type as follows:

\tilde{A}_e[i,j] = f_e(s_i, s_j)

This is modeled by a set of edge-specific factors u_e such that we can reconstruct the adjacency tensor as:

\tilde{A}_e[i,j] = \sigma\big( s_i^{T} D(u_e) s_j \big)

where \sigma is the logistic sigmoid function, D(u_e) the corresponding diagonal matrix of the vector u_e, and the factors u_e are learned parameters.
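The factorization itself is a few lines of NumPy; `factorized_adjacency` is a hypothetical helper name for the operation above:

```python
import numpy as np

def factorized_adjacency(S, U):
    """A_e[i, j] = sigmoid(s_i^T diag(u_e) s_j): one learned factor vector
    u_e per edge type; S holds the generated node embeddings row-wise.
    S: (N, d), U: (E, d) -> (E, N, N)."""
    logits = np.einsum('id,ed,jd->eij', S, U, S)
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(2)
A_tilde = factorized_adjacency(rng.normal(size=(6, 8)), rng.normal(size=(4, 8)))
```

Note that the symmetric form guarantees an undirected reconstruction: swapping i and j leaves the logit unchanged.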

We reconstruct the node features (i.e. the atom types) with a simple affine transformation followed by a softmax, such that:

\tilde{X} = \mathrm{softmax}(S W_x)

where W_x is a learned parameter.

Generating Graphs of arbitrary sizes.

In order to generate graphs of different sizes, we add what we call here an Existence module, which, for each embedding generated in step 3, outputs the probability that the corresponding node belongs to the final graph. This module is parametrized as a simple MLP layer followed by a sigmoid activation, and stops the unrolling of the embedding LSTM generator whenever a non-informative embedding is encountered. It can thus be interpreted as playing the role of an end-of-sequence token.
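A sketch of this stopping rule, with a hypothetical `truncate_embeddings` helper; the linear parametrization of the module and the 0.5 threshold are assumptions:

```python
import numpy as np

def truncate_embeddings(S, w, b, threshold=0.5):
    """Existence module: a linear layer + sigmoid gives each generated
    embedding a probability of belonging to the final graph; unrolling
    stops at the first embedding whose probability falls below the threshold."""
    p_exist = 1.0 / (1.0 + np.exp(-(S @ w + b)))
    below = p_exist <= threshold
    n = int(np.argmax(below)) if below.any() else len(S)
    return S[:n], p_exist

# Five generated embeddings; the fourth looks "non-existent", so only three are kept.
S_gen = np.array([[2.0], [2.0], [2.0], [-2.0], [2.0]])
kept, p_exist = truncate_embeddings(S_gen, np.array([3.0]), 0.0)
```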

3.2 Training

Teacher forcing.

To make the model converge in reasonable time, we adapt teacher forcing as used in language models (Williams and Zipser, 1989). Training is thus done in three phases:

  • We first pre-train the GCN part along with the embedding decoder (factorization, node and existence modules) to reconstruct the graphs. This corresponds to the training of a simple graph autoencoder as in Kipf and Welling (2016b), except that we also want to reconstruct the nodes' one-hot features (and not just the relations).

  • We then append those two units to the embedding aggregator and generator while keeping them fixed. In this second phase, the embedding generator is trained using teacher forcing where at each time step the LSTM decoder does not take as input the previously generated embedding but the true one that is the direct output of the pretrained GCN embedding encoder.

  • Finally, in order to transition from teacher forcing to a fully autoregressive state, we increasingly feed the LSTM generator its own predictions, following scheduled sampling (Bengio et al., 2015). When a fully autoregressive state is reached, the pre-trained units are unfrozen and the whole model continues training end-to-end.
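One common schedule for this transition is the inverse-sigmoid decay from scheduled sampling; the decay constant k below is an illustrative choice, not a value from the paper:

```python
import math

def teacher_forcing_prob(step, k=1000.0):
    """Inverse-sigmoid decay (Bengio et al., 2015): the probability of feeding
    the ground-truth embedding starts near 1 and decays towards 0, so the
    generator gradually sees more of its own predictions."""
    return k / (k + math.exp(step / k))
```

At each unrolling step one would feed the ground-truth embedding with this probability and the model's own previous prediction otherwise.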

Log-Likelihood Estimates.

We train the autoencoder on the reconstruction error using MLE, with the estimated negative log-likelihood given by:

-\log p(G \mid z) = \mathcal{L}_{E^+} + \mathcal{L}_{E^-} + \mathcal{L}_{X}

where N is the number of nodes in the graph, E^+ and E^- correspond to the existing and non-existing edges in the adjacency tensor A, and X is the node feature tensor, such that:

\mathcal{L}_{E^+} = -\frac{1}{|E^+|} \sum_{(i,j,e) \in E^+} \log \tilde{A}_e[i,j], \qquad \mathcal{L}_{E^-} = -\frac{1}{|E^-|} \sum_{(i,j,e) \in E^-} \log\big(1 - \tilde{A}_e[i,j]\big), \qquad \mathcal{L}_{X} = -\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid s_i)

Since molecular graphs are sparse, we found that such separate normalisations were helpful for the training.
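The separately normalised loss can be sketched as follows; the equal weighting of the three terms is our reading of the text rather than a confirmed detail, and `graph_nll` is a hypothetical helper name:

```python
import numpy as np

def graph_nll(A_true, A_pred, X_true, X_pred, eps=1e-9):
    """Reconstruction NLL with the edge terms normalised separately over
    existing and non-existing entries (useful because molecular graphs are
    sparse), plus a node-feature cross-entropy averaged over the nodes.
    A_*: (E, N, N); X_true one-hot (N, T); X_pred probabilities (N, T)."""
    pos = A_true > 0.5
    p = np.clip(A_pred, eps, 1.0 - eps)
    nll_pos = -np.log(p[pos]).mean() if pos.any() else 0.0
    nll_neg = -np.log(1.0 - p[~pos]).mean() if (~pos).any() else 0.0
    nll_node = -(X_true * np.log(np.clip(X_pred, eps, 1.0))).sum(axis=1).mean()
    return nll_pos + nll_neg + nll_node

# Sanity check: confident correct predictions should score lower than inverted ones.
A_true = np.zeros((2, 3, 3)); A_true[0, 0, 1] = A_true[0, 1, 0] = 1.0
X_true = np.eye(3)
nll_good = graph_nll(A_true, np.clip(A_true, 0.05, 0.95), X_true, np.clip(X_true, 0.05, 0.95))
nll_bad = graph_nll(A_true, np.clip(1.0 - A_true, 0.05, 0.95), X_true, np.clip(1.0 - X_true, 0.05, 0.95))
```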

3.3 Conditional Generation and Optimisation

Model overview.

Given the entangled latent code z for a given input molecular graph, we create a conditioned input by augmenting z with a set of structured attributes y - the target properties of interest, such as a physico-chemical property. The conditional generator is then trained on the combined reconstruction and property loss. At the end of a successful training we expect the decoder to generate samples that have the properties specified in y and are similar (in terms of the information contained in z) to the original query molecular graph. To do so we choose a mutual information maximization approach (detailed in Appendix B.2) that involves the use of discriminators that assess the properties of the generated samples; their feedback is used to guide the learning of the generator.

Discriminator Pre-Training

In this phase we pre-train a discriminator to assess the property of a generated sample so that we can backpropagate its feedback to the generator (the discriminator can be trained on another dataset, and we can have several discriminators for several attributes of interest). In order to have informative gradients in the early stages of training, we train the discriminator on continuous approximations of the discrete training graphs (details in Appendix B.1), so that our objective becomes:

\max_{D} \; \mathbb{E}_{\tilde{G} \sim \tilde{p}_{data}} \big[ \log q_D(y_{\tilde{G}} \mid \tilde{G}) \big]

where the graphs sampled from \tilde{p}_{data} are the probabilistic approximations of the discrete graphs from the training distribution p_{data}.

The next step is to incorporate the feedback signal of the trained discriminator in order to formulate the property attribute constraint. The training is decomposed in two phases in which we learn to reconstruct graphs of the dataset (MLE phase) and to modify chemical attributes (Variational MI maximization phase).

Encoder Learning.

The encoder is updated only during the reconstruction phase, where we sample attributes from the true posterior. The encoder loss is a linear combination of the molecular graph reconstruction loss (L_rec) and the property reconstruction loss (L_prop). The total encoder loss is:

\mathcal{L}_{enc} = \mathcal{L}_{rec} + \lambda \, \mathcal{L}_{prop}

where \mathcal{L}_{rec} uses the log-likelihood estimates defined above and \mathcal{L}_{prop} = -\mathbb{E}[\log q_D(y \mid \tilde{G})], with \lambda a hyperparameter of the model.

Generator Learning.

The generator is updated in both the reconstruction and conditional phases. In the MLE phase the generator is trained with the same loss as the encoder, so that it is pushed towards generating realistic molecular graphs. In the MI maximization phase we sample the attributes y from a prior p(y) and minimize the following objective:

\mathcal{L}_{cond} = -\sum_{k} \lambda_k \, \mathbb{E}_{y_k \sim p(y_k),\; \tilde{G} \sim p_{dec}(\cdot \mid z, y)} \big[ \log q_{D_k}(y_k \mid \tilde{G}) \big]

where the \lambda_k are hyperparameters of the model, with one term per attribute discriminator.

In this phase the only optimisation signal comes from the trained discriminator. Since there is no realism constraint specified in our model (see Jin et al. (2018); You et al. (2018a)), there is a risk of "falling off the manifold". A possible way to mitigate this is to add a similarity discriminator trained to distinguish between real probabilistic graphs and generated ones, so that when trying to satisfy the attribute constraint the generator is forced to produce valid molecular graphs. We leave this for future work.

Method Reconstruction Accuracy (%)
JT-VAE (Jin et al., 2018) 76.7
JT-AE (without stereochemistry) 69.9
DEFactor - 56 89.2
DEFactor - 64 89.4
DEFactor - 100 89.8
Table 1: Molecular graph reconstruction task. We compare the performance of our decoder with the JT-VAE on the molecular reconstruction task. The JT-VAE result is taken from Jin et al. (2018) and uses a latent code of size 56. JT-AE refers to an adapted version of the original model using the same parameters; it is however deterministic and, like DEFactor, does not evaluate stereochemistry.

4 Experiments

To compare with recent results in constrained molecular graph optimization (Jin et al., 2018; You et al., 2018a), we present the following experiments:

  • Molecular Graph Reconstruction: We test the autoencoder framework on the task of reconstructing input molecules from their latent representations.

  • Conditional Generation: We test our conditional generation framework on the task of generating novel molecules that satisfy a given input property. Here, we are interested in the octanol-water partition coefficient (LogP) optimization used as a benchmark in Kusner et al. (2017); Jin et al. (2018); You et al. (2018a).

  • Constrained Property Optimization: Finally, we test our conditional autoencoder on the task of modifying a given molecule to improve a specified property, while constraining the degree of deviation from the original molecule. Again we use the LogP benchmark for the experiment.

In the following experiments we use the 250K subset of the ZINC dataset (Irwin et al., 2012), released by Kusner et al. (2017), along with the given train and test splits.

Molecular graph reconstruction:

In this task we evaluate the exact reconstruction error from encoding and decoding a given molecular graph from the test set. We report in Table 1 the ratio of exactly reconstructed graphs, where we see that our autoencoder outperforms the JT-VAE (Jin et al., 2018), which has the current state-of-the-art performance on this task. Appendix C.1 reports the reconstruction ratio as a function of the molecule size (number of heavy atoms).

Figure 2: Conditional generation. Each panel reports the initial LogP value of the query molecule (IL) and the Pearson correlation coefficient. We report on the y-axis the conditional value given as input and on the x-axis the true LogP of the generated graph when translated back into a molecule. For each molecule we sample conditioning values uniformly around the observed LogP value and report the LogP values of the decoded graphs corresponding to valid molecules.

Conditional Generation:

In this task we evaluate the conditional generation formulation described in Section 3.3. For a given molecule with an observed property value, the goal is to modify the molecule to generate a new molecule with a given target property value. New molecules are generated by conditioning the decoder on the target value together with the latent code of the query molecule. The decoded new molecule is ideally best suited to satisfy the target property. This is evaluated by comparing the property value of the new molecule with the target property value: a generator that performs well at this task will produce molecules with property values close to the target. In these experiments, LogP was chosen as the desired property, and we use RDKit to calculate the LogP values of generated molecules.

The scatter plots in Figure 2 give, for a selected set of test molecules, the correlation of target property values against the evaluated property values of the correctly decoded molecules. Here, for each molecule the target set is defined by uniformly sampling around the observed property value of the molecule.

Constrained Property Optimization:

In this section we follow the evaluation methodology outlined in Jin et al. (2018); You et al. (2018a), and evaluate our model on constrained molecule property optimization. In contrast to Jin et al. (2018), due to the conditional formulation, DEFactor does not need retraining for the optimisation task.

Given the 800 molecules with the lowest penalized LogP property scores from the test set (the penalized LogP is the octanol-water partition coefficient penalized by the synthetic accessibility (SA) score and the number of long cycles; see Jin et al. (2018)), we evaluate the decoder by providing pairs of latent code and target score with increasing property scores, and among the valid decoded graphs we compute:

  • Their similarity scores (Sim.) to the encoded target molecule (called the prototype);

  • Their penalized LogP scores. Note that in this setting the conditioning property values are the unpenalized LogP scores. However, to evaluate the model we compute the penalized LogP scores to assess the model's ability to decode synthetically accessible molecules.

  • While varying the similarity threshold values, we compute the success rate (Suc.) over all 800 molecules. This measures how often we get a novel molecule with an improved penalized LogP score.

  • Finally, for different similarity thresholds and for the successfully decoded molecules, we report the average improvement (Imp.) and the similarity (Sim.) of the molecule that is most improved. We compare our results with Jin et al. (2018) and You et al. (2018a).

The final results are reported in Table 2. As can be seen, although slightly behind GCPN (You et al., 2018a) w.r.t. success rates (Suc.), DEFactor significantly outperforms the other models in terms of the improvements (Imp.) achieved at every similarity threshold, with respect to the next best model GCPN.

     JT-VAE                      GCPN                        DEFactor
     Imp.      Sim.      Suc.    Imp.      Sim.      Suc.    Imp.      Sim.      Suc.
0.0  1.91±2.04 0.28±0.15 97.5    4.20±1.28 0.32±0.12 100     6.62±2.50 0.20±0.16 91.5
0.2  1.68±1.85 0.33±0.13 97.1    4.12±1.19 0.34±0.11 100     5.55±2.31 0.31±0.12 90.8
0.4  0.84±1.45 0.51±0.10 83.6    2.49±1.30 0.47±0.08 100     3.41±1.8  0.49±0.09 85.9
0.6  0.21±0.71 0.69±0.06 46.4    0.79±0.63 0.68±0.08 100     1.55±1.19 0.69±0.06 72.6
Table 2: Constrained penalized LogP maximisation task: each row corresponds to a different similarity threshold constraint, and the columns report the improvement (Imp.), the similarity to the original query (Sim.), and the success rate (Suc.). Values for the other models are taken from You et al. (2018a).

5 Future work

In this paper, we have presented a new way of modelling and generating graphs in a conditional optimisation setting, such that the final graph is fully differentiable w.r.t. the model parameters. We believe that our DEFactor model will contribute to understanding and building ML-driven applications for de-novo drug design or the generation of molecules with optimal properties, without resorting to methods that do not directly optimise the desired properties.

Note that a drawback of our model is that it uses an MLE training process, which forces us to either fix the ordering of nodes or perform a computationally expensive graph matching operation to compute the loss. Moreover, in our fully deterministic conditional formulation we assume that chemical property optimisation is a one-to-one mapping, but in reality there may exist many suitable ways of optimizing a molecule to satisfy one property condition while staying similar to the query molecule. To that extent, it could be interesting to augment our model to include the possibility of a one-to-many mapping. Another way of improving the model could be to include a validity constraint, formulated as training a discriminator that discriminates between valid and generated graphs.


The authors thank NSERC and CIFAR for funding at U. Montreal and Mila as well as Marco Fiscato, Petar Veličković, Jian Tang and Benedek Fabian for useful discussions.


  • Odena et al. (2016) Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs, 2016.
  • Chen et al. (2016) Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, 2016.
  • Jin et al. (2018) Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation, 2018.
  • You et al. (2018a) Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation, 2018.
  • Bengio et al. (2015) Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks, 2015.
  • Irwin et al. (2012) John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. ZINC: A free tool to discover chemistry for biology. Journal of Chemical Information and Modeling, 52(7):1757–1768, 2012. PMID: 22587354.
  • RDKit. RDKit: Open-source cheminformatics. [Online; accessed 11-April-2013].
  • Li et al. (2018a) Yibo Li, Liangren Zhang, and Zhenming Liu. Multi-objective de novo drug design with conditional graph generative model, 2018.
  • Yu et al. (2016) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473, 2016.
  • Kang and Cho (2018) Seokho Kang and Kyunghyun Cho. Conditional molecular design with deep generative models, 2018.
  • Simonovsky and Komodakis (2018) Martin Simonovsky and Nikos Komodakis. GraphVAE: Towards generation of small graphs using variational autoencoders, 2018.
  • Segler et al. (2017) Marwin H. S. Segler, Thierry Kogej, Christian Tyrchan, and Mark P. Waller. Generating focussed molecule libraries for drug discovery with recurrent neural networks, 2017.
  • Gómez-Bombarelli et al. (2016) Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules, 2016.
  • Kusner et al. (2017) Matt J. Kusner, Brooks Paige, and José Miguel Hernández-Lobato. Grammar variational autoencoder. In Proceedings of the 34th International Conference on Machine Learning, 2017.
  • Dai et al. (2018) Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. Syntax-directed variational autoencoder for structured data. CoRR, abs/1802.08786, 2018.
  • Li et al. (2018b) Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs, 2018.
  • You et al. (2018b) Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models, 2018.
  • Simonovsky and Komodakis (2017) Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs, 2017.
  • Kipf and Welling (2016a) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2016.
  • Kipf et al. (2018) Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems, 2018.
  • Williams and Zipser (1989) Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks, 1989.
  • Kipf and Welling (2016b) Thomas N. Kipf and Max Welling. Variational graph auto-encoders, 2016.
  • Hamilton et al. (2017) William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2017.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • Zitnik et al. (2018) Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):457–466, 2018.
  • De Cao and Kipf (2018) Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs, 2018.

Appendix A

A.1 Models Comparison

Model Inference Parameter-efficient Constrained Probabilistic No Retraining
MolGAN (De Cao and Kipf, 2018) NA
JT-VAE (Jin et al., 2018)
Figure 3: We report here a comparison of the abilities of recent models for molecular graph generation and optimization.

We are interested in the following features of the models:

  • Inference: whether the model is equipped with an inference network, i.e. can encode a target molecule as we do in the conditional setting.

  • Parameter-efficient: whether the number of parameters of the model is independent of the maximum graph size.

  • Constrained: whether the model has been studied in a constrained optimization scenario, namely the case where we want to optimize a property while constraining the degree of deviation from the original molecule.

  • Probabilistic: whether the output of the model is a probabilistic graph, such that it is differentiable w.r.t. the decoder's parameters.

  • No Retraining: whether we need to retrain, fine-tune, or perform gradient ascent each time we want to optimize a novel molecule.

Appendix B

B.1 Graphs continuous approximation

For the pre-training of the discriminators we suggested training them on continuous approximations of the discrete graphs that resemble the output of the decoder. To that end we used the trained partial graph autoencoder (the one used for teacher forcing at the beginning of the training of the full autoencoder).

Figure 4: Partial graph Autoencoder used for the pre-training.

B.2 Mutual information maximization

For the conditional setting we choose a simple mutual information maximization formulation. The objective is to maximize the MI between the target property y and the decoder's output under the joint distribution defined by the decoder. In the conditional setting the decoder is also conditioned on the encoded molecule, but for simplicity we treat it as a parameter of the decoder (and thus reason about a single target molecule whose attributes we want to modify). We lower-bound the MI in the standard variational form:

I(y; \tilde{G}) \;\geq\; H(y) + \mathbb{E}_{y \sim p(y),\; \tilde{G} \sim p_{dec}(\cdot \mid z, y)} \big[ \log q_D(y \mid \tilde{G}) \big]

In our conditional setting we pre-trained the discriminators (parametrized by q_D in the lower bound derivation) to approximate p(y | \tilde{G}), which makes the bound tight only when q_D is close to the true posterior; this corresponds to a stage where the decoder has maximized the log-likelihood of the data well enough (i.e. when it is able to reconstruct input graphs properly when y and z are paired). Thus, in the conditional setting we are maximizing the following objective:

\mathcal{L}_{MI} = \mathbb{E}_{y \sim p(y),\; \tilde{G} \sim p_{dec}(\cdot \mid z, y)} \big[ \log q_D(y \mid \tilde{G}) \big]
Appendix C

C.1 Reconstruction as a function of number of atoms

Figure 5: Accuracy score as a function of the number of heavy atoms in the molecule (x-axis), for different sizes of the latent code.

Notice that, as we use a simple LSTM to encode a graph representation, there is a risk that for the largest molecules the long-term dependencies of the embeddings are not captured well, resulting in poor reconstruction. We observe this in Figure 5. One possible improvement could be to add at each step another non-sequential aggregation of the embeddings (average pooling of the embeddings, for example), or to make the encoder more powerful by adding attention mechanisms. We leave these options for future work.

C.2 Visual similarity samples

Figure 6: LogP increasing task visual example. The original molecule is outlined in red.