The Many-Body Expansion Combined with Neural Networks.

Kun Yao    John E. Herr 251 Nieuwland Science Hall, Notre Dame, IN 46556    John Parkhill 251 Nieuwland Science Hall, Notre Dame, IN 46556 john.parkhill@nd.edu
July 13, 2019
Abstract

Fragmentation methods such as the many-body expansion (MBE) are a common strategy to model large systems by partitioning energies into a hierarchy of decreasingly significant contributions. The number of fragments required for chemical accuracy is still prohibitively large for the ab-initio MBE to compete with force-field approximations for applications beyond single-point energies. Alongside the MBE, empirical models of ab-initio potential energy surfaces have improved, especially non-linear models based on neural networks (NNs), which can reproduce ab-initio potential energy surfaces rapidly and accurately. Although they are fast, NNs suffer from their own curse of dimensionality: they must be trained on a representative sample of chemical space. In this paper we examine the synergy of the MBE and NNs and explore their complementarity. The MBE offers a systematic way to treat systems of arbitrary size and to sample chemical space intelligently. NNs reduce the computational overhead of the MBE by orders of magnitude and reproduce the accuracy of ab-initio calculations without specialized force fields. We show they are remarkably general, providing comparable accuracy with drastically different chemical embeddings. To assess this we test a new chemical embedding which can be inverted to predict molecules with desired properties.

many-body expansion, neural networks, methanol

I Introduction

The many-body expansion (MBE) lies at the heart of a multitude of computational methods being developed in the realm of ab-initio theory and force fields. In insulators, the high-order terms of the MBE decay rapidly with distance, which makes this type of approximation useful for low-scaling, high-accuracy models of liquids, solids and biological moleculesRaghavachari and Saha (2015); Mayhall and Raghavachari (2012); Richard and Herbert (2012); Liu and Herbert (2016); Wen et al. (2012); Beran (2009); Saha and Raghavachari (2013); Gordon et al. (2012); Dahlke and Truhlar (2007a); Medders, Babin, and Paesani (2013); von Lilienfeld and Tkatchenko (2010). However, an ab-initio MBE is orders of magnitude more costly than a classical force field; the main limitation is the combinatorial growth of effort at each order.
In chemistry, neural networks are growing in popularity for predicting molecular propertiesJose, Beckett, and Raghavachari (2015); Behler and Parrinello (2007); Behler (2011a); Jiang and Guo (2013); Handley and Popelier (2010); Cuny et al. (2016); Zhang and Zhang (2014); Koch and Zhang (2014); Chen, Xu, and Zhang (2013). However, NNs have their own limitations: their input must have a constant shape, they must be trained on a representative number of samples, and chemical space grows exponentially with molecular size. This curse of dimensionality in the training set is the main barrier to a universal NN force field with very high accuracy. The purpose of this paper is to show that the MBE provides a natural and accurate way to alleviate this curse of dimensionality while retaining the generality, accuracy and efficiency of a NN.
Force fields based on the many-body expansion are growing in popularity.Richard and Herbert (2012); Dahlke and Truhlar (2007a); Beran (2009); Pinski and Csányi (2013) Under the MBE scheme, the total energy of a system is expanded as a sum of many-body terms. Higher-order terms are more costly to calculate, and the error of the MBE is often balanced with the error of the underlying model chemistry at third orderXantheas (1994, 2000); Kulkarni, Ganesh, and Gadre (2004), so long as care is taken to correct for basis set superposition error (BSSE).Ouyang, Cvitkovic, and Bettens (2014); Boys and Bernardi (1970) An electrostatically embedded MBE (EE-MBE) has also been proposed as a means to improve the accuracy.Dahlke and Truhlar (2007a, b) Others have suggested a many-body expansion over overlapping fragments as a way to improve the accuracy of the energies.Richard and Herbert (2012, 2013); Lao et al. (2016)
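To make the expansion concrete, a minimal Python sketch of the MBE truncated at third order is given below. Here `energy_fn` is a stand-in for whatever fragment-energy model is used (RI-MP2 or a neural network); the function and its interface are illustrative, not the authors' production code.

```python
from itertools import combinations

def mbe_energy(fragments, energy_fn, order=3):
    """Truncated many-body expansion.

    fragments: list of monomer geometries (any objects energy_fn accepts).
    energy_fn: callable mapping a tuple of monomers to a fragment energy;
               a stand-in for the ab-initio or neural-network model.
    """
    n = len(fragments)
    # One-body terms.
    e1 = [energy_fn((fragments[i],)) for i in range(n)]
    total = sum(e1)
    if order >= 2:
        # Two-body corrections: dimer energy minus its monomers.
        d2 = {}
        for i, j in combinations(range(n), 2):
            d2[i, j] = energy_fn((fragments[i], fragments[j])) - e1[i] - e1[j]
        total += sum(d2.values())
    if order >= 3:
        # Three-body corrections: trimer energy minus all lower-order parts.
        for i, j, k in combinations(range(n), 3):
            e3 = energy_fn((fragments[i], fragments[j], fragments[k]))
            total += (e3 - d2[i, j] - d2[i, k] - d2[j, k]
                      - e1[i] - e1[j] - e1[k])
    return total
```

For a strictly pairwise-additive toy energy, the expansion is already exact at second order and the three-body corrections vanish, which is a convenient sanity check.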
Statistical models from machine learning are becoming popular in chemistry. Examples include potential energy surfaces fittedBehler (2011a); Handley and Popelier (2010) with atom-centered symmetry functionsBehler and Parrinello (2007); Behler (2011b); Khaliullin et al. (2011) and with permutation-invariant polynomials.Jiang and Guo (2013); Xin, Chen, and Zhang (2014); Zhang and Zhang (2014); Medders, Babin, and Paesani (2013); Shao et al. (2016); Li and Guo (2015); Li et al. (2015); Medders et al. (2015) Permutationally invariant polynomials have been used to express the many-body energies of water clustersMedders, Babin, and Paesani (2013); Medders et al. (2015) and water-methane clustersConte, Qu, and Bowman (2015) with great success. Machine learning has also been used to predict properties such as atomization energies, HOMO and LUMO eigenvalues, ionization potentials, force constants, and dielectric constantsRupp (2015); Hansen et al. (2015); Montavon et al. (2013); Pilania et al. (2013); Ghasemi et al. (2015); Schütt et al. (2014); Olivares-Amaya et al. (2011); Ma et al. (2015); Ediz et al. (2009), as well as quantum transport coefficientsLopez-Bezanilla and von Lilienfeld (2014) and nuclear magnetic resonance parametersCuny et al. (2016). It has also been used to construct kinetic energy functionalsSnyder et al. (2013, 2012); Yao and Parkhill (2016) and to design new materialsOlivares-Amaya et al. (2011); Hachmann et al. (2011, 2014); Hautier et al. (2010); Ediz et al. (2008).
To our knowledge, few works combine neural networks with the MBE, and those have focused on elemental solids. The closest work predicted the many-body energies of elemental clusters.Malshe et al. (2009) Bartók used machine learning techniques based on Bayesian inference to correct DFT one-body and two-body energies for water.Bartók et al. (2013) In this paper we learn the many-body energies of condensed-phase liquid methanol to within millihartree (mEh) accuracy. We show that one can use the MBE for methanol clusters of a thousand molecules without significant computational expense on typical GPU workstations. We also present a novel chemical embedding which has the advantage that it is invertible to ball-and-stick geometries, assess it as a descriptor for learning the MBE, and propose it as a useful tool for inverse design.

II Methods

Studies have shown that the MBE converges rapidly for van der Waals and water clusters.Cui, Liu, and Jordan (2006); Góra et al. (2011); Ouyang, Cvitkovic, and Bettens (2014); Hermann et al. (2007); Medders and Paesani (2013) Convergence is relatively slow for metallic or covalent interactionsHermann et al. (2007); Paulus et al. (2004), although schemes have been proposed to improve the accuracy of the MBE on covalent systemsMayhall and Raghavachari (2012); Richard and Herbert (2012). We chose methanol for its strong hydrogen bonding, but nothing about this work is specialized or limited to systems of this size. RI-MP2 with the cc-pVTZ basis is used to calculate all of the training and testing data for the many-body energies. The integral precision and SCF convergence criteria were as tight as possibleRichard, Lao, and Herbert (2014a, b), and BSSE corrections using the K-mer-centered basis set approachGóra et al. (2011) were applied. Training and test geometries are drawn from an AMBER molecular dynamics trajectoryFennell and Gezelter (2006); Lamichhane, Gezelter, and Newman (2014); Lamichhane, Newman, and Gezelter (2014); D.A. Case and Kollman () of 108 methanol molecules at 330 K and from an ab-initio trajectory of three methanols at 500 K. The total data set includes 844,800 samples for one-body energies, 74,240 samples for two-body energies and 36,864 samples for three-body energies; a portion of the total data set is held out for testing. All of the ab-initio calculations are done with the Q-Chem packageShao et al. (2015). Previous studies have shown that cumulative two-body energies and cumulative three-body energies converge at a cutoff of 10 Å.Cui, Liu, and Jordan (2006); Dahlke and Truhlar (2007b) We also found that both the two-body and three-body energies were negligibly different from their limiting values at a cutoff of 10 Å, as shown in Figure SI-1, so our dimers and trimers were generated within this 10 Å cutoff.
The code of Krizhevsky, Sutskever, and Hinton (2012) was used to train and evaluate the neural network.
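The cutoff-based generation of dimers and trimers described above can be sketched as follows. `centers` (one representative point per monomer, e.g. the oxygen position) and the 10 Å value are the only inputs; this is an illustrative sketch, not the authors' production code.

```python
import numpy as np
from itertools import combinations

def fragments_within_cutoff(centers, cutoff=10.0):
    """Enumerate dimers and trimers whose monomers all lie within
    `cutoff` (in Å) of one another, measured between fragment centers.

    centers: (N, 3) array of monomer centers.
    Returns (dimer_index_pairs, trimer_index_triples).
    """
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    # Full pairwise distance matrix via broadcasting.
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    close = dist <= cutoff
    dimers = [(i, j) for i, j in combinations(range(n), 2) if close[i, j]]
    # A trimer qualifies only if all three of its pairs are within the cutoff.
    trimers = [(i, j, k) for i, j, k in combinations(range(n), 3)
               if close[i, j] and close[i, k] and close[j, k]]
    return dimers, trimers
```

Requiring every pair within a trimer to satisfy the cutoff is one reasonable convention; a center-of-mass criterion would be an equally plausible alternative.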

Figure 1: Top panel: The fragment energy of each N-Body is calculated by embedding the geometry into either the Coulomb Matrix or Depth map, and evaluating the output of a neural net with several hidden layers and one output. Bottom panel: Generative adversarial network scheme. A z-vector is transformed and passed through convolutional hidden layers to generate a hallucinated depth map.

The choice of chemical embedding used as the input to the NN has a great effect on performance. Many different chemical descriptors have been proposed, including the Coulomb matrixHansen et al. (2013); Montavon et al. (2012); Faber et al. (2015), symmetry functionsBehler and Parrinello (2007); Behler (2011a), the bispectrumBartók and Payne (2010); Bartók, Kondor, and Csányi (2013), permutation-invariant polynomialsZhang and Zhang (2014); Jiang and Guo (2013), metric fingerprintsSadeghi et al. (2013); Zhu et al. (2016); Schaefer and Goedecker (2016) and the radial distribution Fourier series,von Lilienfeld et al. (2015) which is based on the electronic density and is similar to a descriptor our group has used in the past for learning kinetic functionalsYao and Parkhill (2016). Systematic comparison of different descriptors is beyond the scope of this paper; we choose the Coulomb matrix (CM) as the input to our neural networks for its simplicity, and we show that it is capable of the task. The CM, however, is not permutationally invariant; therefore, in this study we augmented our training data with all permutations of the hydrogen atoms on carbon and all permutations of the methanol molecules in the dimer and trimer, so that the network learns the permutation invariance. Similar data augmentation techniques have been widely used in image recognition to achieve translation and rotation invarianceKrizhevsky, Sutskever, and Hinton (2012). As shown in Figure SI-2 and Figure SI-3, the permutation invariance is learned with satisfactory accuracy. The need to learn permutation invariance can also be avoided by averaging the predictions over all possible permutations.
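A minimal sketch of the Coulomb matrix, and of one permutation-augmented copy, is shown below. It assumes the standard Rupp et al. definition (0.5 Z_i^2.4 on the diagonal, Z_i Z_j / r_ij off it, in atomic units); the helper names are illustrative.

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix: 0.5 * Z_i**2.4 on the diagonal,
    Z_i * Z_j / |R_i - R_j| off the diagonal (atomic units).

    Z: (N,) nuclear charges; R: (N, 3) Cartesian coordinates.
    """
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):       # diagonal r_ii = 0 is overwritten
        M = np.outer(Z, Z) / d
    np.fill_diagonal(M, 0.5 * Z ** 2.4)
    return M

def permute_atoms(Z, R, perm):
    """One augmented training copy: reorder equivalent atoms
    (e.g. the methyl hydrogens) and rebuild the Coulomb matrix."""
    idx = np.asarray(perm)
    return coulomb_matrix(np.asarray(Z)[idx], np.asarray(R)[idx])
```

Permuting the atom order permutes the rows and columns of the CM in lockstep, which is exactly why the raw CM is not permutation-invariant and why augmented copies are informative training samples.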
We experimented with a novel chemical embedding, which we call the depth map (D-map). The purpose of this descriptor is not to improve on the accuracy of the CM, but rather to provide an input which yields reasonable energies and inverts directly to molecular geometries. If networks could accurately learn from an invertible input, they could also become useful tools for molecular inverse-design. Similar types of NN inputs have been used in the areas of 3D detection and object recognition.Song and Xiao (2014); Shrivastava and Gupta (2013); Lai et al. (2012); Kim, Xu, and Savarese (2013) An example D-map can be seen in Figure 1; it is simply a depth-of-field image of a ball-and-stick structure, and a simple routine to calculate one is given in the supplement. Given the usefulness of this input in molecular design, we were curious how well it could predict energies, and we compare it to the CM in the results. Generative Adversarial Networks (GANs) have recently been studied extensively for their ability to hallucinate authentic-looking images.Goodfellow et al. (2014); Denton et al. (2015); Radford, Metz, and Chintala (2015); Im et al. (2016); Yoo et al. (2016); Salimans et al. (2016) We trained a GAN on the D-map to produce hallucinated images of methanols, and discuss the utility this provides.
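For illustration, a depth map in this spirit can be rendered with a simple per-sphere z-buffer; this is a hypothetical stand-in for the routine given in the supplement, not the authors' exact code.

```python
import numpy as np

def depth_map(centers, radii, npix=64, extent=6.0):
    """Minimal depth image of a ball model: each pixel records the depth
    of the nearest sphere surface along the viewing (+z) axis; background
    pixels stay at 0. Illustrative stand-in for the supplement's routine.

    centers: iterable of (x, y, z) sphere centers; radii: matching radii.
    """
    xs = np.linspace(-extent, extent, npix)
    zbuf = np.full((npix, npix), -np.inf)
    for (cx, cy, cz), r in zip(centers, radii):
        dx = xs[None, :] - cx                     # pixel columns -> x
        dy = xs[:, None] - cy                     # pixel rows -> y
        d2 = r * r - dx ** 2 - dy ** 2
        hit = d2 > 0.0                            # pixels covered by the sphere
        z = cz + np.sqrt(np.where(hit, d2, 0.0))  # front surface of the sphere
        closer = hit & (z > zbuf)                 # nearer to the viewer wins
        zbuf = np.where(closer, z, zbuf)
    return np.where(np.isfinite(zbuf), zbuf, 0.0)
```

Because the map stores actual depths, a ball-and-stick geometry can be read back off the image, which is the property that makes this embedding attractive for inverse design.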

III Results

Figure 2 compares the one-body, two-body and three-body energies calculated with MP2 and with our neural network for all test samples. The neural networks agree with MP2 closely enough that errors in the underlying model chemistry would be the limiting factor of the NN-MBE. The Mean Absolute Error (MAE) and Mean Signed Error (MSE) of each many-body term, over a test set independent of the training data, are shown in Table 1. The MSE is balanced in the sense that each order of the expansion adds comparable microhartree errors to the total energy, as we will discuss in the results which follow.
The higher MAE of the higher-body terms is a predictable consequence of our design principle that the ratio of total error to wall-hours should be kept roughly constant at each order of the expansion. The three-body energy is a surface of much higher dimension than the two-body energy, and this causes difficulty in three ways: the need for more training data, a larger number of invariances, and greater required network capacity. However, we use much less three-body data because it is more expensive to generate and less significant. Histograms of the errors of all the many-body terms are also shown in Figure 2; the errors appear uncorrelated, which is supported by our later observation that the error per methanol is essentially insensitive to system size.

Error 1-body 2-body 3-body (CM) 3-body (DM)
MSE 0.24 0.90 -1.16 -3.96
MAE 5.99 15.6 20.0 39.0
Rate 0.005 0.008 0.005 0.009
Table 1: The MAE and MSE (microhartree) of the one-body energy, two-body energy, three-body energy with Coulomb matrix input, and three-body energy with depth map input, as predicted by the neural network. We calculate a rate of error as MAE (microhartree) per wall-hour of RI-MP2 in Q-Chem required to generate the training data.

The three-body network trained on the D-map also provides reasonable energies. Comparing the three-body energy plots for the CM and the D-map, in most cases the D-map does nearly as well as the CM; for a handful of cases, however, it makes significantly poorer predictions. We note this tends to happen when all or part of an oxygen atom is eclipsed by the methyl group, and that the D-map also tends to make poorer predictions for energies near zero. The errors nevertheless remain normally distributed. The D-map does provide a few advantages over the CM: it is a low-dimensional encoding of the space of methanol geometries, it has constant shape regardless of chemical input, and it suffers from fewer problems with invariances.
As expected, the CM makes more accurate predictions than the D-map; however, our aim with the D-map was to provide a chemical embedding which maps easily back to the original geometry of the system. For example, it can be used to predict molecules which maximize a desired property directly, without searching chemical space. To this end we trained a GAN, based on Radford et al.,Radford, Metz, and Chintala (2015) on the D-map, separating element types into separate color channels, to produce hallucinated images of methanol trimers. An example hallucinated D-map can be seen in the bottom panel of Figure 1. The network maps a random z-vector back to a D-map. By varying one element of the z-vector, we were able to control image generation to tune properties. The examples in Figure 1 are generated from the same z-vector with one element varied, which rotates one of the methanols end over end. It is easy to imagine extending this to other properties and using a GAN for inverse-design. For the remainder of this paper we employ the Coulomb embedding for the MBE.
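The latent sweep described above can be illustrated with a toy stand-in for the trained generator. Here `toy_generator`, the latent dimension, and the swept index are all hypothetical choices; a trained DCGAN generator (Radford et al.) would take the place of the random linear map, but the sweep logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, W):
    """Stand-in generator: a fixed random linear map from the latent
    z-vector to a flattened 'image', squashed by tanh. A trained GAN
    generator would replace this; only the sweep below matters."""
    return np.tanh(W @ z)

zdim, npix = 16, 64
W = rng.normal(size=(npix, zdim))   # frozen 'generator weights'
z = rng.normal(size=zdim)           # one random latent vector

# Sweep a single latent coordinate while holding the rest fixed; in the
# paper an analogous sweep rotates one methanol end over end in the
# generated D-map.
frames = []
for v in np.linspace(-2.0, 2.0, 9):
    z_swept = z.copy()
    z_swept[3] = v                  # index 3 is an arbitrary choice
    frames.append(toy_generator(z_swept, W))
```

Each frame is one generated "image"; walking a single latent coordinate produces a smooth family of outputs, which is the mechanism behind tuning a property with one element of the z-vector.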

Figure 2: The top left, top right and bottom left panels show plots of the one-body, two-body and three-body energies calculated from MP2 (x-axis) and our neural network (y-axis), respectively. For three-body energy, the result of using the Coulomb matrix (CM) as the input is shown in orange and the result of using the depth map (DM) as the input is shown in green. The bottom right panel shows the histogram of the errors of the many-body energy terms predicted by the neural network. The one-body and two-body errors are shown in red and blue respectively.
Geometry MP2a MBE-NN HFa B3LYPa
chair 0 0 0 0
bowl 1.50 1.40 1.75 2.02
chain 4.54 4.10 2.56 5.29

a MP2, HF and B3LYP energies are extrapolated to a complete basis.

Table 2: Relative energies (mEh) of three minimum-energy geometries of the methanol trimer.

Table 2 gives the relative energies of the three minimum-energy geometries, chair, bowl and chainKazachenko, Bulusu, and Thakkar (2013), of a methanol trimer. All methods shown in Table 2 order these three geometries correctly. Compared with MP2, our many-body expansion neural network (NN-MBE) has errors below 0.5 mEh, which is small compared to Hartree-Fock and B3LYP, both of which are significantly more costly. Another important feature is the smoothness of the predicted surfaces. Figure 3 plots the change of the three-body energy, two-body energy and total energy as one methanol in a methanol trimer is pulled away from the other two. The agreement is best near the bonding minimum and at long distance; relatively few training cases in the intermediate region are sampled by the MD trajectories.

Figure 3: Dashed line, dotted line and solid line show the change of three-body energy, two-body energy and total energy when one methanol is pulled away from the other two. Energies calculated by MP2-MBE and NN-MBE are shown in blue curve and orange curve, respectively.

The relative energies of five random clusters are shown in Figure 4 to assess the errors due to the NN and the MBE separately. Compared with the full MP2 energies, the MAE of the MBE using MP2 (MP2-MBE) is 0.12 mEh per molecule, and that of the NN-MBE is 0.10 mEh. Remarkably, there is no degradation of accuracy in using the NN-MBE despite the massive speedup; instead, the method is limited by the quality of the model chemistry it is built on and by the accuracy of the MBE itself.

Figure 4: Bottom panel: relative energies of five clusters. Energies calculated by MP2, MP2-MBE and NN-MBE are shown in blue lines, green lines and orange lines, respectively. Top panel: green dots show the difference between MP2-MBE energies and MP2 energies and orange dots show the difference between NN-MBE energies and MP2 energies.

Proper treatment of solvent effects is crucial for describing most chemical processes. The top panel of Figure 5 shows the energy change of breaking the hydrogen bond between two methanols when the solvation shell is not included. MP2-MBE predicts the energy change to be 5.6 kcal/mol and NN-MBE gives 5.8 kcal/mol. When the solvation shell (with a radius of 10 Å) is included, as shown in the bottom panel, the energy change increases dramatically to 13.3 kcal/mol, showing the large influence of solvent effects. The NN-MBE predicts the energy change with the solvation shell to be 14.6 kcal/mol, 1.3 kcal/mol larger than the MP2-MBE result. Considering the speedup of the NN-MBE, discussed below, and its accuracy, the scheme shows promise for condensed-phase phenomena.

Figure 5: The top panel (without solvation shell) and bottom panel (with solvation shell) show the energy change of breaking a hydrogen bond between two methanols by rotating one methanol 180 degrees around the C-O bond. The units of the energies are kcal/mol. The solvation shell influences the energy change significantly, and the neural network predicts the energy change to within roughly 1 kcal/mol.

We also investigated the error of the NN-MBE as a function of system size. Figure 6 shows the error per molecule and the error per cluster of the NN-MBE (with respect to the MP2-MBE) as the number of molecules in the cluster increases. The error per cluster stays within 3 mEh, and the error per molecule reaches a maximum at around 70 molecules and shows signs of sub-extensive behavior. Figure SI-5 compares the total wall time of the NN-MBE and the MP2-MBE, showing that the NN-MBE offers a speedup of more than a factor of two million relative to the MP2-MBE, without any optimization.

Figure 6: Error per MeOH (left, orange axis) and total error per cluster (right, blue axis) for methanol clusters of different sizes. Error is defined as the difference between the NN-MBE energy and the MP2-MBE energy. The total error per cluster, for clusters of up to 200 molecules, stays within 3 mEh, and the error per molecule decreases with system size.

IV Discussion and Conclusions

We have shown that an NN-MBE can be used to calculate the energy of methanol clusters with a speedup in the millions relative to the MP2-MBE. The error of the NN-MBE is within a few mEh, similar to the error of the MP2-MBE itself when truncated at three-body terms. The histogram of NN-MBE errors displays a Gaussian shape, so the error per molecule decreases as the system size increases. This accuracy and large speedup enable the NN-MBE to treat large systems, such as solvation-shell effects, with ab-initio accuracy that would otherwise be unattainable. The Coulomb matrix is not invariant to permutations, and even though we have shown that permutation invariance can be learned by augmenting the training samples with all possible permutations, it is still not perfectly invariant. Our current study focused on methanol, but this scheme can easily be generalized to other systems.
We introduced a new descriptor, the D-map, which is invertible to the geometry of a system. The D-map was able to predict the three-body energies reasonably well and provides several advantages of its own. We then showed that a generative adversarial network can be trained on the D-map to provide hallucinated images, which are tunable and should be useful for inverse molecular design.

V Acknowledgement

We thank The University of Notre Dame’s College of Science and Department of Chemistry and Biochemistry for generous start-up funding, and Nvidia corporation for some processors used in this work.

References

  • Raghavachari and Saha (2015) K. Raghavachari and A. Saha, “Accurate composite and fragment-based quantum chemical models for large molecules,” Chem. Rev. 115, 5643–5677 (2015).
  • Mayhall and Raghavachari (2012) N. J. Mayhall and K. Raghavachari, “Many-overlapping-body (mob) expansion: A generalized many body expansion for nondisjoint monomers in molecular fragmentation calculations of covalent molecules,” J. Chem. Theory Comput. 8, 2669–2675 (2012).
  • Richard and Herbert (2012) R. M. Richard and J. M. Herbert, “A generalized many-body expansion and a unified view of fragment-based methods in electronic structure theory,” J. Chem. Phys. 137, 064113 (2012).
  • Liu and Herbert (2016) J. Liu and J. M. Herbert, “Pair-pair approximation to the generalized many-body expansion: An alternative to the four-body expansion for ab initio prediction of protein energetics via molecular fragmentation,” J. Chem. Theory Comput.  (2016).
  • Wen et al. (2012) S. Wen, K. Nanda, Y. Huang,  and G. J. Beran, “Practical quantum mechanics-based fragment methods for predicting molecular crystal properties,” Phys. Chem. Chem. Phys. 14, 7578–7590 (2012).
  • Beran (2009) G. J. Beran, “Approximating quantum many-body intermolecular interactions in molecular clusters using classical polarizable force fields,” J. Chem. Phys. 130, 164115 (2009).
  • Saha and Raghavachari (2013) A. Saha and K. Raghavachari, “Dimers of dimers (dod): A new fragment-based method applied to large water clusters,” J. Chem. Theory Comput. 10, 58–67 (2013).
  • Gordon et al. (2012) M. S. Gordon, D. G. Fedorov, S. R. Pruitt,  and L. V. Slipchenko, “Fragmentation methods: a route to accurate calculations on large systems,” Chem. Rev 112, 632–672 (2012).
  • Dahlke and Truhlar (2007a) E. E. Dahlke and D. G. Truhlar, “Electrostatically embedded many-body expansion for large systems, with applications to water clusters,” J. Chem. Theory Comput. 3, 46–53 (2007a).
  • Medders, Babin, and Paesani (2013) G. R. Medders, V. Babin,  and F. Paesani, “A critical assessment of two-body and three-body interactions in water,” J. Chem. Theory Comput. 9, 1103–1114 (2013).
  • von Lilienfeld and Tkatchenko (2010) O. A. von Lilienfeld and A. Tkatchenko, “Two- and three-body interatomic dispersion energy contributions to binding in molecules and solids,” J. Chem. Phys. 132, 234109 (2010).
  • Jose, Beckett, and Raghavachari (2015) K. J. Jose, D. Beckett,  and K. Raghavachari, “Vibrational circular dichroism spectra for large molecules through molecules-in-molecules fragment-based approach,” J. Chem. Theory Comput. 11, 4238–4247 (2015).
  • Behler and Parrinello (2007) J. Behler and M. Parrinello, “Generalized neural-network representation of high-dimensional potential-energy surfaces,” Phys. Rev. Lett. 98, 146401 (2007).
  • Behler (2011a) J. Behler, “Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations,” Phys. Chem. Chem. Phys. 13, 17930–17955 (2011a).
  • Jiang and Guo (2013) B. Jiang and H. Guo, “Permutation invariant polynomial neural network approach to fitting potential energy surfaces,” J. Chem. Phys. 139, 054112 (2013).
  • Handley and Popelier (2010) C. M. Handley and P. L. Popelier, “Potential energy surfaces fitted by artificial neural networks,” J. Phys. Chem. A 114, 3371–3383 (2010).
  • Cuny et al. (2016) J. Cuny, Y. Xie, C. J. Pickard,  and A. A. Hassanali, “Ab initio quality nmr parameters in solid-state materials using a high-dimensional neural-network representation,” J. Chem. Theory Comput. 12, 765–773 (2016).
  • Zhang and Zhang (2014) Z. Zhang and D. H. Zhang, “Effects of reagent rotational excitation on the h+ chd3→ h2+ cd3 reaction: A seven dimensional time-dependent wave packet study,” J. Chem. Phys. 141, 144309 (2014).
  • Koch and Zhang (2014) W. Koch and D. H. Zhang, “Communication: Separable potential energy surfaces from multiplicative artificial neural networks,” J. Chem. Phys. 141, 021101 (2014).
  • Chen, Xu, and Zhang (2013) J. Chen, X. Xu,  and D. H. Zhang, “Communication: An accurate global potential energy surface for the oh+ co→ h+ co2 reaction using neural networks,” J. Chem. Phys. 138, 221104 (2013).
  • Pinski and Csányi (2013) P. Pinski and G. Csányi, “Reactive many-body expansion for a protonated water cluster,” J. Chem. Theory Comput. 10, 68–75 (2013).
  • Xantheas (1994) S. S. Xantheas, “Ab initio studies of cyclic water clusters (h2o) n, n = 1-6. ii. analysis of many-body interactions,” J. Chem. Phys. 100, 7523 (1994).
  • Xantheas (2000) S. S. Xantheas, “Cooperativity and hydrogen bonding network in water clusters,” Chem. Phys. 258, 225–231 (2000).
  • Kulkarni, Ganesh, and Gadre (2004) A. D. Kulkarni, V. Ganesh,  and S. R. Gadre, “Many-body interaction analysis: Algorithm development and application to large molecular clusters,” J. Chem. Phys. 121, 5043 (2004).
  • Ouyang, Cvitkovic, and Bettens (2014) J. F. Ouyang, M. W. Cvitkovic,  and R. P. Bettens, “Trouble with the many-body expansion,” J. Chem. Theory Comput. 10, 3699–3707 (2014).
  • Boys and Bernardi (1970) S. Boys and F. Bernardi, “The calculation of small molecular interactions by the differences of separate total energies. some procedures with reduced errors,” Mol. Phys. 19, 553–566 (1970).
  • Dahlke and Truhlar (2007b) E. E. Dahlke and D. G. Truhlar, “Electrostatically embedded many-body correlation energy, with applications to the calculation of accurate second-order møller-plesset perturbation theory energies for large water clusters,” J. Chem. Theory Comput. 3, 1342–1348 (2007b).
  • Richard and Herbert (2013) R. M. Richard and J. M. Herbert, “Many-body expansion with overlapping fragments: Analysis of two approaches,” J. Chem. Theory Comput. 9, 1408–1416 (2013).
  • Lao et al. (2016) K. U. Lao, K.-Y. Liu, R. M. Richard,  and J. M. Herbert, “Understanding the many-body expansion for large systems. ii. accuracy considerations,” J. Chem. Phys. 144, 164105 (2016).
  • Behler (2011b) J. Behler, “Atom-centered symmetry functions for constructing high-dimensional neural network potentials,” J. Chem. Phys. 134, 074106 (2011b).
  • Khaliullin et al. (2011) R. Z. Khaliullin, H. Eshet, T. D. Kühne, J. Behler,  and M. Parrinello, “Nucleation mechanism for the direct graphite-to-diamond phase transition,” Nat. Mater. 10, 693–697 (2011).
  • Xin, Chen, and Zhang (2014) X. Xin, J. Chen,  and D. H. Zhang, “Global potential energy surface for the h+ch4↔h2+ch3 reaction using neural networks,” Chin. J. Chem. Phys. 27, 373 (2014).
  • Shao et al. (2016) K. Shao, J. Chen, Z. Zhao,  and D. H. Zhang, “Communication: Fitting potential energy surfaces with fundamental invariant neural network,” J. Chem. Phys. 145, 071101 (2016).
  • Li and Guo (2015) J. Li and H. Guo, “Permutationally invariant fitting of intermolecular potential energy surfaces: A case study of the ne-c2h2 system,” J. Chem. Phys. 143, 214304 (2015).
  • Li et al. (2015) J. Li, J. Chen, Z. Zhao, D. Xie, D. H. Zhang,  and H. Guo, “A permutationally invariant full-dimensional ab initio potential energy surface for the abstraction and exchange channels of the h+ ch4 system,” J. Chem. Phys. 142, 204302 (2015).
  • Medders et al. (2015) G. R. Medders, A. W. Götz, M. A. Morales, P. Bajaj,  and F. Paesani, “On the representation of many-body interactions in water,” J. Chem. Phys. 143, 104102 (2015).
  • Conte, Qu, and Bowman (2015) R. Conte, C. Qu,  and J. M. Bowman, “Permutationally invariant fitting of many-body, non-covalent interactions with application to three-body methane–water–water,” J. Chem. Theory Comput. 11, 1631–1638 (2015).
  • Rupp (2015) M. Rupp, “Machine learning for quantum mechanics in a nutshell,” Int. J. Quantum Chem. 115, 1058–1073 (2015).
  • Hansen et al. (2015) K. Hansen, F. Biegler, R. Ramakrishnan, W. Pronobis, O. A. von Lilienfeld, K.-R. Müller,  and A. Tkatchenko, “Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space,” J. Phys. Chem. Lett. 6, 2326–2331 (2015).
  • Montavon et al. (2013) G. Montavon, M. Rupp, V. Gobre, A. Vazquez-Mayagoitia, K. Hansen, A. Tkatchenko, K.-R. Müller,  and O. A. von Lilienfeld, “Machine learning of molecular electronic properties in chemical compound space,” New J. Phys. 15, 095003 (2013).
  • Pilania et al. (2013) G. Pilania, C. Wang, X. Jiang, S. Rajasekaran,  and R. Ramprasad, “Accelerating materials property predictions using machine learning,” Sci. Rep. 3 (2013).
  • Ghasemi et al. (2015) S. A. Ghasemi, A. Hofstetter, S. Saha,  and S. Goedecker, “Interatomic potentials for ionic systems with density functional accuracy based on charge densities obtained by a neural network,” Phys. Rev. B 92, 045131 (2015).
  • Schütt et al. (2014) K. Schütt, H. Glawe, F. Brockherde, A. Sanna, K. Müller,  and E. Gross, “How to represent crystal structures for machine learning: Towards fast prediction of electronic properties,” Phys. Rev. B 89, 205118 (2014).
  • Olivares-Amaya et al. (2011) R. Olivares-Amaya, C. Amador-Bedolla, J. Hachmann, S. Atahan-Evrenk, R. S. Sánchez-Carrera, L. Vogt,  and A. Aspuru-Guzik, “Accelerated computational discovery of high-performance materials for organic photovoltaics by means of cheminformatics,” Energ. Environ. Sci. 4, 4849–4861 (2011).
  • Ma et al. (2015) X. Ma, Z. Li, L. E. Achenie,  and H. Xin, “Machine-learning-augmented chemisorption model for co2 electroreduction catalyst screening,” J. Phys. Chem. Lett. 6, 3528–3533 (2015).
  • Ediz et al. (2009) V. Ediz, A. C. Monda, R. P. Brown,  and D. J. Yaron, “Using molecular similarity to develop reliable models of chemical reactions in complex environments,” J. Chem. Theory Comput. 5, 3175–3184 (2009).
  • Lopez-Bezanilla and von Lilienfeld (2014) A. Lopez-Bezanilla and O. A. von Lilienfeld, “Modeling electronic quantum transport with machine learning,” Phys. Rev. B 89, 235411 (2014).
  • Snyder et al. (2013) J. C. Snyder, M. Rupp, K. Hansen, L. Blooston, K.-R. Müller,  and K. Burke, “Orbital-free bond breaking via machine learning,” J. Chem. Phys. 139, 224104 (2013).
  • Snyder et al. (2012) J. C. Snyder, M. Rupp, K. Hansen, K.-R. Müller,  and K. Burke, “Finding density functionals with machine learning,” Phys. Rev. Lett. 108, 253002 (2012).
  • Yao and Parkhill (2016) K. Yao and J. Parkhill, “Kinetic energy of hydrocarbons as a function of electron density and convolutional neural networks,” J. Chem. Theory Comput. 12, 1139–1147 (2016).
  • Hachmann et al. (2011) J. Hachmann, R. Olivares-Amaya, S. Atahan-Evrenk, C. Amador-Bedolla, R. S. Sánchez-Carrera, A. Gold-Parker, L. Vogt, A. M. Brockway, and A. Aspuru-Guzik, “The Harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community grid,” J. Phys. Chem. Lett. 2, 2241–2251 (2011).
  • Hachmann et al. (2014) J. Hachmann, R. Olivares-Amaya, A. Jinich, A. L. Appleton, M. A. Blood-Forsythe, L. R. Seress, C. Roman-Salgado, K. Trepte, S. Atahan-Evrenk, and S. Er, “Lead candidates for high-performance organic photovoltaics from high-throughput quantum chemistry – the Harvard clean energy project,” Energ. Environ. Sci. 7, 698–704 (2014).
  • Hautier et al. (2010) G. Hautier, C. C. Fischer, A. Jain, T. Mueller,  and G. Ceder, “Finding nature’s missing ternary oxide compounds using machine learning and density functional theory,” Chem. Mater. 22, 3762–3767 (2010).
  • Ediz et al. (2008) V. Ediz, J. L. Lee, B. A. Armitage, and D. Yaron, “Molecular engineering of torsional potentials in fluorogenic dyes via electronic substituent effects,” J. Phys. Chem. A 112, 9692–9701 (2008).
  • Malshe et al. (2009) M. Malshe, R. Narulkar, L. Raff, M. Hagan, S. Bukkapatnam, P. Agrawal,  and R. Komanduri, “Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations,” J. Chem. Phys. 130, 184102 (2009).
  • Bartók et al. (2013) A. P. Bartók, M. J. Gillan, F. R. Manby,  and G. Csányi, “Machine-learning approach for one-and two-body corrections to density functional theory: Applications to molecular and condensed water,” Phys. Rev. B 88, 054104 (2013).
  • Cui, Liu, and Jordan (2006) J. Cui, H. Liu, and K. D. Jordan, “Theoretical characterization of the (H2O)21 cluster: Application of an n-body decomposition procedure,” J. Phys. Chem. B 110, 18872–18878 (2006).
  • Góra et al. (2011) U. Góra, R. Podeszwa, W. Cencek,  and K. Szalewicz, “Interaction energies of large clusters from many-body expansion,” J. Chem. Phys. 135, 224102 (2011).
  • Hermann et al. (2007) A. Hermann, R. P. Krawczyk, M. Lein, P. Schwerdtfeger, I. P. Hamilton, and J. J. Stewart, “Convergence of the many-body expansion of interaction potentials: From van der Waals to covalent and metallic systems,” Phys. Rev. A 76, 013202 (2007).
  • Medders and Paesani (2013) G. R. Medders and F. Paesani, “Many-body convergence of the electrostatic properties of water,” J. Chem. Theory Comput. 9, 4844–4852 (2013).
  • Paulus et al. (2004) B. Paulus, K. Rosciszewski, N. Gaston, P. Schwerdtfeger,  and H. Stoll, “Convergence of the ab initio many-body expansion for the cohesive energy of solid mercury,” Phys. Rev. B 70, 165106 (2004).
  • Richard, Lao, and Herbert (2014a) R. M. Richard, K. U. Lao,  and J. M. Herbert, “Aiming for benchmark accuracy with the many-body expansion,” Accounts Chem. Res. 47, 2828–2836 (2014a).
  • Richard, Lao, and Herbert (2014b) R. M. Richard, K. U. Lao,  and J. M. Herbert, “Understanding the many-body expansion for large systems. i. precision considerations,” J. Chem. Phys. 141, 014108 (2014b).
  • Fennell and Gezelter (2006) C. J. Fennell and J. D. Gezelter, “Is the Ewald summation still necessary? Pairwise alternatives to the accepted standard for long-range electrostatics,” J. Chem. Phys. 124, 234104 (2006).
  • Lamichhane, Gezelter, and Newman (2014) M. Lamichhane, J. D. Gezelter,  and K. E. Newman, “Real space electrostatics for multipoles. i. development of methods,” J. Chem. Phys. 141, 134109 (2014).
  • Lamichhane, Newman, and Gezelter (2014) M. Lamichhane, K. E. Newman, and J. D. Gezelter, “Real space electrostatics for multipoles. II. Comparisons with the Ewald sum,” J. Chem. Phys. 141, 134110 (2014).
  • (67) D. A. Case, R. M. Betz, et al., “AMBER 2016,” University of California, San Francisco (2016).
  • Shao et al. (2015) Y. Shao, Z. Gan, E. Epifanovsky, A. T. Gilbert, M. Wormit, J. Kussmann, A. W. Lange, A. Behn, J. Deng, X. Feng, D. Ghosh, M. Goldey, P. R. Horn, L. D. Jacobson, I. Kaliman, R. Z. Khaliullin, T. Kuś, A. Landau, J. Liu, E. I. Proynov, Y. M. Rhee, R. M. Richard, M. A. Rohrdanz, R. P. Steele, E. J. Sundstrom, H. L. Woodcock, P. M. Zimmerman, D. Zuev, B. Albrecht, E. Alguire, B. Austin, G. J. O. Beran, Y. A. Bernard, E. Berquist, K. Brandhorst, K. B. Bravaya, S. T. Brown, D. Casanova, C.-M. Chang, Y. Chen, S. H. Chien, K. D. Closser, D. L. Crittenden, M. Diedenhofen, R. A. DiStasio, H. Do, A. D. Dutoi, R. G. Edgar, S. Fatehi, L. Fusti-Molnar, A. Ghysels, A. Golubeva-Zadorozhnaya, J. Gomes, M. W. Hanson-Heine, P. H. Harbach, A. W. Hauser, E. G. Hohenstein, Z. C. Holden, T.-C. Jagau, H. Ji, B. Kaduk, K. Khistyaev, J. Kim, J. Kim, R. A. King, P. Klunzinger, D. Kosenkov, T. Kowalczyk, C. M. Krauter, K. U. Lao, A. Laurent, K. V. Lawler, S. V. Levchenko, C. Y. Lin, F. Liu, E. Livshits, R. C. Lochan, A. Luenser, P. Manohar, S. F. Manzer, S.-P. Mao, N. Mardirossian, A. V. Marenich, S. A. Maurer, N. J. Mayhall, E. Neuscamman, C. M. Oana, R. Olivares-Amaya, D. P. O’Neill, J. A. Parkhill, T. M. Perrine, R. Peverati, A. Prociuk, D. R. Rehn, E. Rosta, N. J. Russ, S. M. Sharada, S. Sharma, D. W. Small, A. Sodt, T. Stein, D. Stück, Y.-C. Su, A. J. Thom, T. Tsuchimochi, V. Vanovschi, L. Vogt, O. Vydrov, T. Wang, M. A. Watson, J. Wenzel, A. White, C. F. Williams, J. Yang, S. Yeganeh, S. R. Yost, Z.-Q. You, I. Y. Zhang, X. Zhang, Y. Zhao, B. R. Brooks, G. K. Chan, D. M. Chipman, C. J. Cramer, W. A. Goddard, M. S. Gordon, W. J. Hehre, A. Klamt, H. F. Schaefer, M. W. Schmidt, C. D. Sherrill, D. G. Truhlar, A. Warshel, X. Xu, A. Aspuru-Guzik, R. Baer, A. T. Bell, N. A. Besley, J.-D. Chai, A. Dreuw, B. D. Dunietz, T. R. Furlani, S. R. Gwaltney, C.-P. Hsu, Y. Jung, J. Kong, D. S. Lambrecht, W. Liang, C. Ochsenfeld, V. A. Rassolov, L. V. Slipchenko, J. E. Subotnik, T. 
Van Voorhis, J. M. Herbert, A. I. Krylov, P. M. Gill, and M. Head-Gordon, “Advances in molecular quantum chemistry contained in the Q-Chem 4 program package,” Mol. Phys. 113, 184–215 (2015).
  • Krizhevsky, Sutskever, and Hinton (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Adv. Neural Inf. Process. Syst. 25, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Curran Associates, Inc., 2012) pp. 1097–1105.
  • Hansen et al. (2013) K. Hansen, G. Montavon, F. Biegler, S. Fazli, M. Rupp, M. Scheffler, O. A. Von Lilienfeld, A. Tkatchenko,  and K.-R. Müller, “Assessment and validation of machine learning methods for predicting molecular atomization energies,” J. Chem. Theory Comput. 9, 3404–3419 (2013).
  • Montavon et al. (2012) G. Montavon, K. Hansen, S. Fazli, M. Rupp, F. Biegler, A. Ziehe, A. Tkatchenko, A. V. Lilienfeld,  and K.-R. Müller, “Learning invariant representations of molecules for atomization energy prediction,” in Adv. Neural Inf. Process. Syst. 25, edited by F. Pereira, C. J. C. Burges, L. Bottou,  and K. Q. Weinberger (Curran Associates, Inc., 2012) pp. 440–448.
  • Faber et al. (2015) F. Faber, A. Lindmaa, O. A. von Lilienfeld,  and R. Armiento, “Crystal structure representations for machine learning models of formation energies,” Int. J. Quantum Chem. 115, 1094–1101 (2015).
  • Bartók and Payne (2010) A. P. Bartók and M. C. Payne, “Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons,” Phys. Rev. Lett. 104, 136403 (2010).
  • Bartók, Kondor, and Csányi (2013) A. P. Bartók, R. Kondor,  and G. Csányi, “On representing chemical environments,” Phys. Rev. B 87, 184115 (2013).
  • Sadeghi et al. (2013) A. Sadeghi, S. A. Ghasemi, B. Schaefer, S. Mohr, M. A. Lill,  and S. Goedecker, “Metrics for measuring distances in configuration spaces,” J. Chem. Phys. 139, 184118 (2013).
  • Zhu et al. (2016) L. Zhu, M. Amsler, T. Fuhrer, B. Schaefer, S. Faraji, S. Rostami, S. A. Ghasemi, A. Sadeghi, M. Grauzinyte,  and Wolverton, “A fingerprint based metric for measuring similarities of crystalline structures,” J. Chem. Phys. 144, 034203 (2016).
  • Schaefer and Goedecker (2016) B. Schaefer and S. Goedecker, “Computationally efficient characterization of potential energy surfaces based on fingerprint distances,” J. Chem. Phys. 145, 034101 (2016).
  • von Lilienfeld et al. (2015) O. A. von Lilienfeld, R. Ramakrishnan, M. Rupp,  and A. Knoll, “Fourier series of atomic radial distribution functions: A molecular fingerprint for machine learning models of quantum chemical properties,” Int. J. Quantum Chem. 115, 1084–1093 (2015).
  • Song and Xiao (2014) S. Song and J. Xiao, “Sliding shapes for 3D object detection in depth images,” in Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI, edited by D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Springer International Publishing, Cham, 2014) pp. 634–651.
  • Shrivastava and Gupta (2013) A. Shrivastava and A. Gupta, “Building part-based object detectors via 3D geometry,” in The IEEE International Conference on Computer Vision (ICCV) (2013).
  • Lai et al. (2012) K. Lai, L. Bo, X. Ren, and D. Fox, “Detection-based object labeling in 3D scenes,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on (2012) pp. 1330–1337.
  • Kim, Xu, and Savarese (2013) B.-S. Kim, S. Xu, and S. Savarese, “Accurate localization of 3D objects from RGB-D data using segmentation hypotheses,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013).
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,  and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence,  and K. Q. Weinberger (Curran Associates, Inc., 2014) pp. 2672–2680.
  • Denton et al. (2015) E. L. Denton, S. Chintala, A. Szlam, and R. Fergus, “Deep generative image models using a Laplacian pyramid of adversarial networks,” in Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Curran Associates, Inc., 2015) pp. 1486–1494.
  • Radford, Metz, and Chintala (2015) A. Radford, L. Metz,  and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” CoRR abs/1511.06434 (2015).
  • Im et al. (2016) D. J. Im, C. D. Kim, H. Jiang,  and R. Memisevic, “Generating images with recurrent adversarial networks,” CoRR abs/1602.05110 (2016).
  • Yoo et al. (2016) D. Yoo, N. Kim, S. Park, A. S. Paek,  and I. Kweon, “Pixel-level domain transfer,” CoRR abs/1603.07442 (2016).
  • Salimans et al. (2016) T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” CoRR abs/1606.03498 (2016).
  • Kazachenko, Bulusu, and Thakkar (2013) S. Kazachenko, S. Bulusu, and A. J. Thakkar, “Methanol clusters (CH3OH)n: Putative global minimum-energy structures from model potentials and dispersion-corrected density functional theory,” J. Chem. Phys. 138, 224303 (2013).