# Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons

###### Abstract

We introduce a class of interatomic potential models that can be automatically generated from data consisting of the energies and forces experienced by atoms, as derived from quantum mechanical calculations. The models do not have a fixed functional form and hence are capable of modeling complex potential energy landscapes. They are systematically improvable with more data. We apply the method to bulk crystals, and test it by calculating properties at high temperatures. Using the interatomic potential to generate the long molecular dynamics trajectories required for such calculations saves orders of magnitude in computational cost.

###### pacs:

65.40.De, 71.15.Nc, 31.50.-x, 34.20.Cf

Atomic scale modeling of materials is now routinely and widely applied, and encompasses a range of techniques from exact quantum chemical methods Szabo and Ostlund (1996) through density functional theory (DFT) Payne et al. (1992) and semi-empirical quantum mechanics Finnis (2003) to analytic interatomic potentials Brenner (2000). The associated trade-offs in accuracy and computational cost are well known. Arguably, there is a gap between models that treat electrons explicitly, and those that do not. Models in the former class are in practice limited to handling a few thousand atoms, while the simple analytic interatomic potentials are limited in accuracy, regardless of how they are parametrized. The panels in the top row of Fig. 1 illustrate the typical performance of analytic potentials in bulk semiconductors. Perhaps surprisingly, potentials that are generally regarded as adequate for describing these bulk phases show significant deviation from the quantum mechanical potential energy surface. This in turn gives rise to significant errors in predicting properties such as elastic constants and phonon spectra.

In this letter we are concerned with the problem of modeling the Born-Oppenheimer potential energy surface (PES) of a set of atoms, but without recourse to simulating the electrons explicitly. We mostly restrict our attention to modeling the bulk phases of carbon, silicon, germanium, iron and gallium nitride, using a unified framework. Even such single-phase potentials could be useful for calculating physical properties, e.g. the thermal expansion coefficient, the phonon contribution to the thermal conductivity, the temperature dependence of the phonon modes, or as part of QM/MM hybrid schemes Bernstein et al. (2009).

The first key insight is that this is actually practicable: the reason that interatomic potentials are at all useful is that the PES is a relatively smooth function of the nuclear coordinates. Improving potential modeling is difficult not because the PES is rough, but because it does not easily decompose into simple closed functional forms. The second is that, away from isolated quantum critical points, the behavior of atoms is localized, in the sense that if the total energy of a system is written as a sum of atomic energies,

$$E = \sum_i \varepsilon\big(\{\mathbf{r}_{ij}\}\big), \tag{1}$$

where $\mathbf{r}_{ij}$ is the relative position between atoms $i$ and $j$, then good approximations of $E$ can be obtained by restricting the set of atoms over which the index $j$ runs to some fixed neighborhood of atom $i$, i.e., $r_{ij} \leq r_{\mathrm{cut}}$. In fact, we take Eq. (1) with this restriction as the defining feature of an interatomic potential. Note that in general it is desirable to separate out Coulomb and dispersion terms from the atomic energy function, $\varepsilon$, because the covalent part that remains can then be localized much better for the same overall accuracy. The strict localization of $\varepsilon$ enables the independent computation of atomic energies. However, it also puts a limit on the accuracy with which the PES can be approximated. Consider an atom whose environment inside $r_{\mathrm{cut}}$ is fixed. The true quantum mechanical force on this atom will show a variation, depending on its environment outside the cutoff. An estimate of this variance is shown in Fig. 1 by the horizontal lines: no interatomic potential with the given cutoff can have a lower typical force error.
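As a minimal sketch of this locality assumption (with a hypothetical inverse-square pair sum standing in for the true atomic energy function, which in practice is far more complex), the total energy decomposes into per-atom terms that see only the neighbors within the cutoff:

```python
import numpy as np

def atomic_energies(positions, r_cut, eps):
    """Evaluate Eq. (1) with the locality restriction: each atomic energy
    depends only on the relative positions of neighbors within r_cut."""
    n = len(positions)
    energies = np.zeros(n)
    for i in range(n):
        r_ij = positions - positions[i]          # relative positions r_ij
        d = np.linalg.norm(r_ij, axis=1)
        neighbors = (d > 0) & (d < r_cut)        # neighborhood of atom i
        energies[i] = eps(r_ij[neighbors])
    return energies

# hypothetical atomic energy function: an inverse-square pair sum
def pair_eps(r_ij):
    return float(np.sum(np.linalg.norm(r_ij, axis=1) ** -2))
```

An atom with no neighbors inside the cutoff contributes zero, and the total energy is simply the sum of the per-atom terms.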

To date, two works have attempted to model the PES in its full generality. In the first Brown et al. (2003), small molecules were modeled by expanding the total energy in polynomials of all the atomic coordinates, without restricting the range of the atomic energy function. While this gave extremely accurate results, it cannot scale to more than a few atoms. More recently, a neural network was used to model the atomic energy Behler and Parrinello (2007). Our philosophy and aims are similar to the latter work: we compute $\varepsilon$ by interpolating a set of stored reference quantum mechanical results using a carefully constructed measure of similarity between atomic neighborhoods. We strive for computational efficiency in our use of expensive ab initio data by using both the total energy and the atomic forces to obtain the best possible estimate for $\varepsilon$ given our assumptions about its smoothness. Furthermore, our scheme makes the generation of potential models automatic, with almost no need for human intervention in going from quantum mechanical data to the final interatomic potential model. In the following, we present an overview of our formalism. Detailed derivations are given in the Supplementary Information (SI).

The atomic energy function is invariant under translation, rotation and the permutation of atoms. One of the key ideas in the present work is to represent atomic neighborhoods in a transformed system of coordinates that accounts for these symmetries. Ideally, this mapping should be one-to-one: mapping different neighborhood configurations to the same coordinates would introduce systematic errors into the model that cannot be improved by adding more quantum mechanical data. We begin by forming a local atomic density from the neighbors of atom $i$, as

$$\rho_i(\mathbf{r}) = \delta(\mathbf{r}) + \sum_j f_{\mathrm{cut}}(r_{ij})\,\delta(\mathbf{r} - \mathbf{r}_{ij}), \tag{2}$$

where $f_{\mathrm{cut}}$ is a cutoff function, in which the cutoff radius $r_{\mathrm{cut}}$ reflects the spatial scale of the interactions. The choice of cutoff function is somewhat ad hoc: any smooth function with compact support could be used.
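For concreteness, one conventional choice satisfying these requirements is a shifted cosine (a sketch of one possible form; the paper only requires smoothness and compact support, not this exact function):

```python
import numpy as np

def f_cut(r, r_cut):
    """A smooth cutoff with compact support: goes from 1 at r = 0
    to 0 at r = r_cut, and vanishes identically beyond the cutoff."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)
```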

The local atomic density is invariant to permuting the atoms in the neighborhood. One way to achieve rotational invariance as well would be to expand it in spherical harmonics and a set of radial basis functions and appropriately combine the resulting coefficients, similarly to how the structure factor is computed from Fourier components. However, just as the structure factor (a two-point correlation) is missing all “phase” information (the relative phases of the different plane waves), such a set of spherical invariants would lose a lot of information about the configuration of the neighborhood. In contrast, the bispectrum Dianat and Rao (1990), which is a three-point correlation function, is a much richer system of invariants, and can provide an almost one-to-one representation of the atomic neighborhood.
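The loss of phase information by a two-point invariant, and its retention by the three-point bispectrum, can be illustrated with a 1D Fourier analogue, where translations play the role that rotations play in 3D (an illustration of the principle only, not the 4D construction used below):

```python
import numpy as np

def power_and_bispectrum(f):
    """Two- and three-point Fourier invariants of a periodic 1D signal.
    The power spectrum |F_k|^2 (the analogue of the structure factor)
    discards all relative phases; the bispectrum F_k1 F_k2 conj(F_{k1+k2})
    is equally translation-invariant but retains them."""
    F = np.fft.fft(f)
    n = len(f)
    power = np.abs(F) ** 2
    bispec = np.array([[F[k1] * F[k2] * np.conj(F[(k1 + k2) % n])
                        for k2 in range(n)] for k1 in range(n)])
    return power, bispec
```

Shifting the signal leaves both invariants unchanged, but reversing it, which preserves all Fourier magnitudes while changing their relative phases, changes the bispectrum and not the power spectrum.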

In our method we first project the atomic density onto the surface of the four-dimensional unit sphere, similarly to how the Riemann sphere is constructed, with the transformation

$$\phi = \arctan(y/x), \qquad \theta = \arccos\!\big(z/|\mathbf{r}|\big), \qquad \theta_0 = |\mathbf{r}|/r_0, \tag{3}$$

where $\mathbf{r} = (x, y, z)$ and $r_0 \geq r_{\mathrm{cut}}/\pi$, so that $\theta_0$ stays within its $[0, \pi]$ range. The advantage of this is that the 4D surface contains all the information from the 3D spherical region inside the cutoff, including the radial dimension, and thus 4D spherical harmonics $U^{j}_{m'm}$ (also called Wigner matrices, Varshalovich et al. (1987)) constitute a natural complete basis for the interior of the 3D sphere, without the need for radial basis functions. The projection of the atomic density on the surface of the 4D sphere can therefore be expanded in 4D spherical harmonics using coefficients (dropping the atomic index $i$ for clarity)
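The coordinate mapping of Eq. (3) can be sketched in a few lines (using `arctan2` in place of $\arctan(y/x)$ to resolve the quadrant of the azimuthal angle, an implementation choice):

```python
import numpy as np

def project_to_4d_sphere(x, y, z, r0):
    """Map a point of the 3D cutoff sphere to polar angles on the 4D unit
    sphere (Eq. 3): the radius |r| becomes a third polar angle theta0,
    so the radial dimension needs no separate basis."""
    r = np.sqrt(x * x + y * y + z * z)
    phi = np.arctan2(y, x)       # azimuthal angle
    theta = np.arccos(z / r)     # polar angle
    theta0 = r / r0              # radial coordinate mapped to an angle
    return theta0, theta, phi
```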

$$c^{j}_{m'm} = \left\langle U^{j}_{m'm} \,\middle|\, \rho \right\rangle. \tag{4}$$

The bispectrum built from these coefficients is given by

$$B_{j_1 j_2 j} = \sum_{m',\,m} \left(c^{j}_{m'm}\right)^{*} \sum_{\substack{m_1',\,m_1 \\ m_2',\,m_2}} C^{jm}_{j_1 m_1 j_2 m_2}\, C^{jm'}_{j_1 m_1' j_2 m_2'}\, c^{j_1}_{m_1' m_1}\, c^{j_2}_{m_2' m_2}, \tag{5}$$

where $C^{jm}_{j_1 m_1 j_2 m_2}$ are the ordinary Clebsch-Gordan coefficients. The elements of this three-index array, which we will denote by $\mathbf{b}_i$ for atom $i$, are invariant with respect to permutation of atoms and rotations of 4D space, and hence also 3D space. In practice, we use only a truncated version, with $j_1, j_2, j \leq J_{\max}$, corresponding to a limit in the spatial resolution with which we describe the atomic neighborhood.

Determining the PES is now reduced to interpolating the atomic energy $\varepsilon$ in the truncated bispectrum space, and for this we use a non-parametric method called Gaussian Process (GP) regression Rasmussen and Williams (2006); MacKay (2003). In the GP framework, assuming Gaussian basis functions, the best estimate for the atomic energy function is given by

$$\varepsilon(\mathbf{b}) = \sum_n \alpha_n \exp\!\left[ -\frac{1}{2} \sum_l \frac{\left( b_l - b_{n,l} \right)^2}{\theta_l^2} \right], \tag{6}$$

where $n$ and $l$ range over the reference configurations and bispectrum components, respectively, and the $\theta_l$ are (hyper)parameters. The GP is called a non-parametric method because the kernels are not fixed but centered on the data, and hence, loosely, any continuous function can in principle be obtained from Eq. (6) Steinwart (2001).
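The Gaussian kernel appearing in Eq. (6) can be written down directly (a sketch, with `theta` holding the per-component length scales $\theta_l$):

```python
import numpy as np

def gauss_kernel(b, b_n, theta):
    """Gaussian kernel of Eq. (6) between a query bispectrum vector b and
    a reference vector b_n, with one length scale theta_l per component."""
    return np.exp(-0.5 * np.sum(((b - b_n) / theta) ** 2))
```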

The GP differs from a simple radial basis function least-squares fit in the way the coefficients $\alpha_n$ are computed. The covariance, i.e. the measure of similarity, of the reference configurations is defined as

$$C_{nn'} = \delta^2 \exp\!\left[ -\frac{1}{2} \sum_l \frac{\left( b_{n,l} - b_{n',l} \right)^2}{\theta_l^2} \right] + \sigma^2 I_{nn'}, \tag{7}$$

where $\delta$ and $\sigma$ are two further hyperparameters and $\mathbf{I}$ is the identity matrix. The interpolation coefficients are then given by

$$\boldsymbol{\alpha} = \mathbf{C}^{-1} \mathbf{y}, \tag{8}$$

where $\mathbf{y}$ is the set of reference values (quantum mechanical energies). This simple expression for the coefficients is derived in detail in MacKay (2003). Thus Eq. (6) gives the atomic energy function in closed form as a function of the quantum mechanical data.
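Equations (6)-(8) together amount to a few lines of linear algebra. The sketch below (with illustrative hyperparameter values, not those used for the actual models) builds the covariance matrix, solves for the coefficients, and evaluates the prediction at query points:

```python
import numpy as np

def gp_fit_predict(B, y, B_star, theta, delta=1.0, sigma=0.01):
    """Gaussian process regression in the form of Eqs. (6)-(8): build the
    covariance matrix C of the reference configurations (Eq. 7), solve
    alpha = C^{-1} y (Eq. 8), and evaluate the prediction (Eq. 6).
    delta and sigma are the signal and noise hyperparameters."""
    def k(a, b):
        # Gaussian covariance with per-component length scales theta_l
        return delta**2 * np.exp(-0.5 * np.sum(((a - b) / theta) ** 2))
    n = len(B)
    C = np.array([[k(B[i], B[j]) for j in range(n)] for i in range(n)])
    C += sigma**2 * np.eye(n)                                 # Eq. (7)
    alpha = np.linalg.solve(C, y)                             # Eq. (8)
    return np.array([sum(alpha[j] * k(b_s, B[j]) for j in range(n))
                     for b_s in B_star])                      # Eq. (6)
```

With a small noise hyperparameter, the predictor interpolates the reference data nearly exactly while remaining smooth between them.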

In addition to preserving exact symmetries, another hurdle is that although we wish to infer the atomic energy function, the data we can collect directly are not values of atomic energies, but total energies of sets of atoms, and forces on atoms, the latter being sums of partial derivatives of neighboring atomic energies Ahnert (2005). Furthermore, our data will be heavily correlated: e.g. the neighborhoods of atoms in a slightly perturbed ideal crystal are very similar to each other. Both of these problems are solved by applying a sparsification procedure Snelson and Ghahramani (2006), in which a predetermined number (much smaller than the total data size) of “sparse” configurations are chosen randomly from the set of all configurations and the data values in Eq. (8) are replaced by linear combinations of all data values. The models in this work used 300 such sparse configurations. The final expression for the model, which we call GAP, is derived in the SI.
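The flavor of such a scheme can be illustrated with the simpler subset-of-regressors approximation, in which the full data vector $\mathbf{y}$ enters the coefficients only through the linear combinations $K_{mn}\mathbf{y}$. This sketch is a stand-in for, not a reproduction of, the Snelson-Ghahramani method used in the paper:

```python
import numpy as np

def sqexp_kernel_matrix(X, Z, theta, delta=1.0):
    """Matrix of Gaussian covariances between two sets of descriptor vectors."""
    diff = X[:, None, :] - Z[None, :, :]
    return delta**2 * np.exp(-0.5 * np.sum((diff / theta) ** 2, axis=-1))

def sparse_gp(B, y, m, theta, delta=1.0, sigma=0.01, seed=0):
    """Pick m 'sparse' configurations at random; the data y enter the
    coefficients only through the linear combinations K_mn @ y
    (subset-of-regressors form of a sparse GP)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(B), size=m, replace=False)
    Bm = B[idx]
    K_mn = sqexp_kernel_matrix(Bm, B, theta, delta)
    K_mm = sqexp_kernel_matrix(Bm, Bm, theta, delta)
    alpha = np.linalg.solve(sigma**2 * K_mm + K_mn @ K_mn.T, K_mn @ y)
    def predict(B_star):
        # prediction now involves only the m sparse points
        return sqexp_kernel_matrix(B_star, Bm, theta, delta) @ alpha
    return predict
```

The cost of a prediction then scales with the number of sparse points rather than with the full (heavily redundant) data set.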

All the DFT data in this work was generated with the Castep package Clark et al. (2005). The reference configurations were obtained by randomly displacing the atoms and the lattice vectors from their equilibrium values in 2-, 8-, 16- and 64-atom cubic unit cells by small random amounts.

| | DFT | GAP | Brenner | Tersoff (T) |
|---|---|---|---|---|
| 1×1 unreconstructed | 6.41 | 6.36 | 4.46 | 2.85 |
| 2×1 Pandey | 4.23 | 4.40 | 3.42 | 4.77 |

| | C | | | Si | | | Ge | | |
|---|---|---|---|---|---|---|---|---|---|
| (GPa) | DFT | GAP | T | DFT | GAP | T | DFT | GAP | T |
| $C_{11}$ | 1118 | 1081 | 1072 | 154 | 152 | 143 | 108 | 114 | 138 |
| $C_{12}$ | 151 | 157 | 108 | 56 | 59 | 75 | 38 | 35 | 44 |
| $C_{44}^{0}$ | 610 | 608 | 673 | 100 | 101 | 119 | 75 | 75 | 93 |
| $C_{44}$ | 603 | 601 | 641 | 75 | 69 | 69 | 58 | 54 | 66 |

The lower panels of Fig. 1 show the performance of the GAP model for semiconductors in terms of the accuracy of forces for near-bulk configurations. For diamond, the GAP model is shown to improve significantly as the cutoff is increased (there is also systematic improvement as $J_{\max}$ is increased; this is shown in the SI). For all three materials the RMS errors in the energy are less than 1 meV/atom. Table 1 shows the elastic constants. It is remarkable that the existing potentials are not able to reproduce all elastic constants to better than 25% for any setting of their parameters. Fig. 2 shows the phonon spectra of diamond and iron. For diamond, the GAP model shows excellent accuracy at zero temperature over most of the Brillouin zone, with a slight deviation for the optical modes along one high-symmetry direction. The agreement with experiment is also good for the frequency of the Raman mode as a function of temperature. For iron, the agreement with DFT is even better. We also computed the linear thermal expansion coefficient of diamond, shown in Fig. 3, using two different methods, applicable at low and high temperatures. Our low-temperature curve is derived from the phonon spectrum via the quasi-harmonic approximation and agrees well with the DFT and experimental results. At higher temperatures, higher-order anharmonic terms come into play, so we use molecular dynamics (MD) and obtain good agreement with experiment, showing that the GAP model is accurate significantly beyond the small displacements that control phonons.

Finally, we extended the GAP model by including reference configurations generated by random displacements around a vacancy in diamond and in graphite. Fig. 4 shows the energetics of the transition path for a migrating vacancy in diamond and the transition from rhombohedral graphite to diamond. The agreement with DFT is excellent, demonstrating that we can construct a truly reactive model that describes the graphite-diamond transition correctly, in contrast to currently used interatomic potentials.

Even for the small systems considered above, the GAP model is orders of magnitude faster than standard plane-wave DFT codes, but significantly more expensive than simple analytical potentials. The computational cost is roughly comparable to that of numerical bond order potential models Mrovec et al. (2004). The current implementation of the GAP model takes 0.01 s/atom/timestep on a single CPU core. For comparison, a timestep of the 216-atom unit cell of Fig. 3 takes 191 s/atom using Castep, about 20,000 times longer, while the same for iron would take more than a million times longer.

In summary, we have outlined a framework for automatically generating finite-range interatomic potential models from quantum-mechanically calculated atomic forces and energies. The models were tested on bulk semiconductors and iron and were found to have remarkable accuracy in matching the ab initio potential energy surface at a fraction of the cost, thus demonstrating the fundamental capabilities of the method. Preliminary data for GaN, presented in the SI, shows that the extension to multi-component and charged systems is straightforward by augmenting the local energy with a simple Coulomb term using fixed charges. Our long-term goal is to expand the range of interpolated configurations and thus create “general” interatomic potentials for one- and two-component materials whose accuracy approaches that of quantum mechanics.

###### Acknowledgements.

The authors thank Sebastian Ahnert, Noam Bernstein, Zoubin Ghahramani, Edward Snelson and Carl Rasmussen for discussions. APB is supported by the EPSRC. GC acknowledges support from the EPSRC under grant number EP/C52392X/1. Part of the computational work was carried out on the Darwin Supercomputer of the University of Cambridge High Performance Computing Service.

## References

- Szabo and Ostlund (1996) A. Szabo and N. S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover Publications, 1996).
- Payne et al. (1992) M. C. Payne et al., Rev. Mod. Phys. 64, 1045 (1992).
- Finnis (2003) M. Finnis, Interatomic Forces in Condensed Matter (Oxford University Press, Oxford, 2003).
- Brenner (2000) D. W. Brenner, phys. stat. sol. (b) 217, 23 (2000).
- Brenner et al. (2002) D. W. Brenner et al., J. Phys. Cond. Mat. 14, 783 (2002).
- Tersoff (1989) J. Tersoff, Phys. Rev. B 39, 5566 (1989).
- Bernstein et al. (2009) N. Bernstein, J. R. Kermode, and G. Csányi, Rep. Prog. Phys. 72, 026501 (2009).
- Brown et al. (2003) A. Brown et al., J. Chem. Phys. 119, 8790 (2003).
- Behler and Parrinello (2007) J. Behler and M. Parrinello, Phys. Rev. Lett. 98, 146401 (2007).
- Dianat and Rao (1990) S. A. Dianat and R. M. Rao, Opt. Eng. 29, 504 (1990).
- Varshalovich et al. (1987) D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii, Quantum theory of angular momentum (World Scientific Pub. Co., 1987).
- Rasmussen and Williams (2006) C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (MIT Press, 2006).
- MacKay (2003) D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms (Cambridge University Press, 2003).
- Steinwart (2001) I. Steinwart, J. Mach. Learn. Res. 2, 67 (2001).
- Ahnert (2005) S. Ahnert, Ph.D. thesis, University of Cambridge (2005).
- Snelson and Ghahramani (2006) E. Snelson and Z. Ghahramani, in Advances in Neural Information Processing Systems 18, edited by Y. Weiss, B. Schölkopf, and J. Platt (MIT Press, 2006), pp. 1257–1264.
- Clark et al. (2005) S. J. Clark et al., Zeit. Krist. 220, 567 (2005).
- Warren et al. (1967) J. L. Warren, J. L. Yarnell, G. Dolling, and R. A. Cowley, Phys. Rev. 158, 805 (1967).
- Liu et al. (2000) M. S. Liu, L. A. Bursill, S. Prawer, and R. Beserman, Phys. Rev. B 61, 3391 (2000).
- Lang et al. (1999) G. Lang et al., Phys. Rev. B 59, 6182 (1999).
- Wang et al. (1990) C. Z. Wang, C. T. Chan, and K. M. Ho, Phys. Rev. B 42, 11276 (1990).
- Finnis and Sinclair (1984) M. W. Finnis and J. E. Sinclair, Phil. Mag. A 50, 45 (1984).
- Mrovec et al. (2004) M. Mrovec, D. Nguyen-Manh, D. G. Pettifor, and V. Vitek, Phys. Rev. B 69, 094115 (2004).
- Pavone et al. (1993) P. Pavone et al., Phys. Rev. B 48, 3156 (1993).
- Slack and Bartram (1975) G. A. Slack and S. F. Bartram, J. App. Phys. 46, 89 (1975).