De novo exploration and self-guided learning of potential-energy surfaces


Noam Bernstein Center for Materials Physics and Technology, U.S. Naval Research Laboratory, Washington, DC 20375, United States    Gábor Csányi Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom    Volker L. Deringer vld24@cam.ac.uk Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom Department of Chemistry, University of Cambridge, Cambridge CB2 1EW, United Kingdom
July 2, 2019

Interatomic potential models based on machine learning (ML) are rapidly developing as tools for materials simulations. However, because of their flexibility, they require large fitting databases that are normally created with substantial manual selection and tuning of reference configurations. Here, we show that ML potentials can be built in a largely automated fashion, exploring and fitting potential-energy surfaces from the beginning (de novo) within one and the same protocol. The key enabling step is the use of a configuration-averaged kernel metric that allows one to select the few most relevant structures at each step. The resulting potentials are accurate and robust for the wide range of configurations that occur during structure searching, despite only requiring a relatively small number of single-point DFT calculations on small unit cells. We apply the method to materials with diverse chemical nature and coordination environments, marking a milestone toward the more routine application of ML potentials in physics, chemistry, and materials science.

Introduction

Atomic-scale modeling has become a cornerstone of scientific research. Quantum-mechanical methods, most prominently based on density-functional theory (DFT), describe the atomistic structures and physical properties of materials with high confidence Lejaeghere et al. (2016); increasingly, they also make it possible to discover previously unknown crystal structures and synthesis targets Oganov et al. (2019). Still, quantum-mechanical materials simulations are severely limited by their high computational cost.

Machine learning (ML) has emerged as a promising approach to tackle this long-standing problem Behler and Parrinello (2007); Bartók et al. (2010); Li et al. (2015); Artrith and Urban (2016); Smith et al. (2017); Chmiela et al. (2017); Behler (2017); Huan et al. (2017); Chmiela et al. (2018); Zhang et al. (2018). ML-based interatomic potentials approximate the high-dimensional potential-energy surface (PES) by fitting to a reference database, which is usually computed at the DFT level. Once generated, ML potentials enable accurate simulations that are orders of magnitude faster than the reference method. They can solve challenging structural problems, as has been demonstrated for the atomic-scale deposition and growth of amorphous carbon films Caro et al. (2018a), for proton-transfer mechanisms Hellström et al. (2019) or dislocations in materials Fellinger et al. (2018); Maresca et al. (2018), involving thousands of atoms in the simulation. More recently, it was shown that ML potentials can be suitable tools for global structure searches targeting crystalline phases Deringer et al. (2017, 2018a, 2018b); Podryabinkin et al. (2019), clusters Ouyang et al. (2015); Tong et al. (2018); Kolsbjerg et al. (2018); Hajinazar et al. (2019), and nanostructures Eivari et al. (2017).

Assembling the reference databases to which ML potentials are fitted is currently mostly a manual and laborious process, guided by the physical problem under study. For example, hierarchical databases for transition metals have been built that start with simple unit cells and gradually add relevant defect models Szlachta et al. (2014); Dragoni et al. (2018); liquid and amorphous materials can be described by iteratively grown databases that contain relatively small-sized MD snapshots Eshet et al. (2010); Sosso et al. (2012); Deringer and Csányi (2017); Mocanu et al. (2018). A “general-purpose” Gaussian Approximation Potential (GAP) ML model for elemental silicon was recently developed Bartók et al. (2018) which can describe crystalline phases with meV-per-atom accuracy, treat defects, cracks, and surfaces Bartók et al. (2017), and generate amorphous silicon structures in excellent agreement with experiment Deringer et al. (2018c). Despite their success in achieving their stated goals, none of these potentials are expected to be even reasonable for crystal structures not included in their databases, say, phases that are stable at very high pressures.

In contrast, structure searching (that is, a global exploration of the PES) can be a suitable approach for finding structures to be included in the training databases in the first place Hajinazar et al. (2017); Deringer et al. (2018a, b); Podryabinkin et al. (2019). The principal idea of exploring configuration space with preliminary ML potentials is well established: ever since the first high-dimensional ML potentials were introduced, it has been shown how they can be refined by exploring unknown structures Behler and Parrinello (2007); Behler et al. (2008); Sosso et al. (2012), and “on the fly” schemes have been proposed to add required data while an MD simulation is running Li et al. (2015); Jinnouchi et al. (2019a); Vandermause et al. (2019); Jinnouchi et al. (2019b). We have previously shown that the PES of boron can be iteratively sampled without prior knowledge of any crystal structures involved; we called the method “GAP-driven random structure searching” (GAP-RSS) Deringer et al. (2018a), reminiscent of the successful Ab Initio Random Structure Searching (AIRSS) approach Pickard and Needs (2006, 2011). Subsequently we demonstrated, by way of an example, that the crystal structure of black phosphorus can be discovered by GAP-RSS within a few iterations, and we identified several previously unknown hypothetical allotropes of phosphorus Deringer et al. (2018b).

Figure 1: An automated protocol that iteratively explores structural space and fits machine learning (ML) based interatomic potentials. (a) General overview of the approach. From an ensemble of randomized unit cells (blue), we select the most relevant ones using the leverage-score CUR algorithm. Selected cells are evaluated with single-point DFT computations and used to fit an initial Gaussian Approximation Potential (GAP) (orange). Then, this potential is used to relax a new ensemble of randomized cells (green), selecting again the most relevant snapshots, and repeating the cycle. (b) Illustration of the multi-step selection procedure. We first consider all trajectories in a given generation, sketched by connected points, and select the most relevant local minima (using an energy criterion, the flattened histogram, and then a structural criterion, the CUR clustering). From the trajectories leading to these minima, we then select the most representative cells; these can be intermediates (green) or end points (purple) of relaxations. The structures finally selected (magenta) are DFT-evaluated and added to the database.

In the context of ML potential fitting, so-called “active learning” schemes which detect extrapolation (indicating when the potential moves away from “known” configurations) are currently receiving much attention. A query-by-committee active-learning approach was suggested in 2012 by Artrith and Behler Artrith and Behler (2012). More recently, Shapeev and co-workers employed Moment Tensor Potentials Shapeev (2016) with active learning Podryabinkin and Shapeev (2017) to explore the PES and to fit ML potentials Podryabinkin et al. (2019); Gubaev et al. (2019), and E and co-workers described a generalized active-learning scheme for deep neural network potentials Zhang et al. (2018). So far, these studies mainly focused on specific intermetallic systems, namely, Al–Mg Zhang et al. (2018) and Cu–Pd, Co–Nb–V, and Al–Ni–Ti Gubaev et al. (2019), respectively. Furthermore, Podryabinkin et al. showed that their approach can identify various existing and hypothetical boron allotropes Podryabinkin et al. (2019). Finally, Jinnouchi et al. demonstrated how ab initio molecular dynamics (AIMD) simulations of specific systems can be sped up by active learning of the computed forces (in a modified GAP framework), using the predicted error of the Gaussian process to select new datapoints and to improve the speed of AIMD Jinnouchi et al. (2019a, b).

In this work, we present an efficient and unified approach for fitting ML potentials by GAP-RSS, exploring structural space from the beginning (de novo) by ML-driven searching and similarity-based distance metrics, all without any prior input of what structures are or are not relevant. We demonstrate the ability to cover a broad range of structures and chemistries, from graphite sheets to a densely packed transition metal. Our work provides conceptual insight into how computers can discover structural chemistry based on data and distance metrics alone, and it paves the way for a more routine application of ML potentials in materials discovery.

Results

A unified framework for exploring and fitting structural space

The overarching aim is to construct an ML potential with minimal effort: both in terms of computational resources and in terms of input required from the user. In regard to the former, we use only single-point DFT computations to generate the fitting database Deringer et al. (2018a). In regard to the latter, we define general heuristics wherever possible, such that neither the protocol nor its parameters need to be manually tuned for a specific system. The ML architecture is based on a hierarchical combination of two-, three-, and many-body descriptors Deringer and Csányi (2017). The two parameters that need to be set by the user are a “characteristic” distance and whether the material is primarily covalent or metallic. For the distance, we choose tabulated covalent (for C, B, and Si) Cordero et al. (2008) or metallic (for Ti) radii, depending on the nature of the system. These define the volume of the initial structures and the cutoffs for the ML descriptors (Methods section).

Figure 2: “Learning” the crystal structure of β-rhombohedral boron. Top: Error of iteratively generated GAP-RSS models, for the energy of the optimised ground-state structure of β-B, referenced to DFT. Two independent runs are compared in which N = 100 (purple) or N = 250 (green) structures per iteration are fed back into the database. Bottom: Evolution of the B₁₂ icosahedron as the defining structural fragment. For three points of the cycles, having completed 400 (“A”), 500 (“B”), and 2,500 (“C”) DFT evaluations in total, the respective lowest-energy structure (at the DFT level) from this iteration is shown, as visualized using VESTA Momma and Izumi (2011). Bonds between atoms are drawn using a cut-off of 1.9 Å; note that there are further connections between the B₁₂ icosahedra with slightly larger B–B distances.

Our approach is based on an iterative cycle, as shown in the diagram in Fig. 1a. We generate ensembles of randomized structures as in the AIRSS framework Pickard and Needs (2006, 2011), a structure-searching approach that is widely used in physics, chemistry, and materials science Pickard and Needs (2008); Marqués et al. (2011); Stratford et al. (2017). In the first iteration, we generate 10,000 initial structures, from which we select the most diverse ones using the leverage-score CUR algorithm Mahoney and Drineas (2009). The distance between candidate structures, therein, is quantified by the Smooth Overlap of Atomic Positions (SOAP) descriptor Bartók et al. (2013), which has been widely used in GAP fitting Deringer and Csányi (2017); Bartók et al. (2018) and in structural analysis De et al. (2016); Mavračić et al. (2018); Caro et al. (2018b). While SOAP is normally used to discriminate between pairs of individual atoms and thus of local configurations, we here use a configuration-averaged SOAP descriptor that compares entire unit cells to one another (Methods section) Mavračić et al. (2018). We find this to be crucial for selecting the most representative structures, of which we can only evaluate a small number with DFT. We also generate dimer configurations in vacuum at a wide range of bond lengths.
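The leverage-score selection step can be sketched as follows. This is a minimal illustration of the general idea (leverage scores from the top singular vectors, then a weighted draw), not the actual pipeline code; the function and parameter names are ours, and the descriptor matrix is random toy data.

```python
import numpy as np

def cur_select(X, k, rank=None, rng=None):
    """Select k diverse rows of a descriptor matrix X via leverage scores.

    Leverage scores are computed from the top singular vectors of X;
    rows are then drawn with probability proportional to their score.
    (A sketch of the selection idea only.)
    """
    rng = np.random.default_rng(rng)
    rank = rank if rank is not None else min(k, min(X.shape))
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    lev = np.sum(U[:, :rank] ** 2, axis=1)   # leverage score of each row
    p = lev / lev.sum()                      # selection probabilities
    return rng.choice(len(X), size=k, replace=False, p=p)

# Toy usage: 100 random "configuration-averaged" descriptors, pick 5
X = np.random.default_rng(0).normal(size=(100, 8))
chosen = cur_select(X, k=5, rng=0)
```

In the actual protocol the decomposition is applied to the square SOAP kernel matrix rather than to the raw descriptor matrix (see Methods).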

With the starting configurations in hand, we perform single-point DFT computations and fit an initial (coarse) potential to the resulting data; in subsequent iterations, we extend the database and thereby refine the potential Deringer et al. (2018a). In each iteration, we start from the same number of initial structures, and minimize their enthalpy using the GAP from the previous iteration. We then select the most relevant and diverse configurations from the full set of configurations seen throughout the minimization trajectories, for which we employ a combination of Boltzmann-probability biased flat histogram sampling (to focus on low-energy structures) and leverage-score CUR (to select the most diverse structures among those), as illustrated in Fig. 1b. These selected configurations are evaluated using single-point DFT calculations and added to the fitting database.

The iterative procedure runs until the results are satisfactory. Here we terminate our searches after 2,500 DFT data points have been collected, and our results show this to be sufficient to discover and describe all structures discussed in the present work. Other quality criteria, such as ones based on the distribution of energies in the database Deringer et al. (2018a), could be defined as well; the generality of our approach is not affected by this choice.

Figure 3: “Learning” diverse crystal structures without prior knowledge, including textbook examples of an insulator (carbon), a semiconductor (silicon), and a metal (titanium). (a–c) Energy error, defined as the difference between DFT- and GAP-computed energies for structures optimized with the respective method. GAP-RSS models that deviate from the DFT result by less than 1 meV / atom are considered to be fully converged, and therefore their errors are drawn as a constant minimum value to ease visualization. (d–f) Energy–volume curves computed with the final GAP-RSS model (solid lines) and the DFT reference method (dashed lines). The open-framework oS24 structure, at high pressure, collapses into a more densely packed phase (see SI for details). All energies are referenced to the DFT result for the respective most stable crystal structure.

We demonstrate the method for boron, one of the most structurally complex elements Albert and Hillebrecht (2009). With the exception of a high-pressure α-Ga-type phase, all relevant boron allotropes contain B₁₂ icosahedra as the defining structural unit Albert and Hillebrecht (2009). Boron has been the topic of structure searches with DFT Oganov et al. (2009); Wu et al. (2012); Mannix et al. (2015); Ahnert et al. (2017) and, more recently, with ML potentials for bulk allotropes Deringer et al. (2018a); Podryabinkin et al. (2019) and gas-phase clusters Tong et al. (2018). Our previous work showed how the PES for boron can be fitted in an ML framework Deringer et al. (2018a), leading to the first interatomic potential able to describe the different allotropes. However, at that time, we generated and fed back 250 cells per iteration (without further selection), and added the structure of β-B manually at a later stage Deringer et al. (2018a).

Our new protocol “discovers” the structure of β-B in a self-guided way, as illustrated in Fig. 2. The increasingly accurate description of the B₁₂ icosahedron is reflected in a gradually lowered energy error, which falls below the 10 meV/atom threshold with fewer than 2,000 DFT evaluations, and below 4 meV/atom once the cycle is completed. This improvement is best understood by inspecting the respective lowest-energy structures that enter the database in a given iteration (Fig. 2). The lowest-energy structure at point A already contains several three-membered rings, but no B₁₂ icosahedra yet. With one more iteration, there is a sharp drop in the GAP error (from 175 to 51 meV/at.), concomitant with the first appearance of a rather distorted β-B structure (B). The final database has seen several instances of the correctly ordered structure (C).

Our experiment suggests that adding N = 100 structures at each step slightly outperforms a similar cycle with N = 250, although both settings lead to satisfactory results. In the remainder of this paper, we will perform all GAP-RSS searches with N = 100 and up to a total of 2,500 single-point DFT evaluations.

Learning diverse crystal structures

Our method is not restricted to a particular chemical system. To demonstrate this, we now apply it to three prototypical materials side by side: carbon, silicon, and titanium, which all exhibit multiple crystal structures.

In carbon (Fig. 3a), both the layered structure of graphite and the tetrahedral network of diamond are correctly “learned” during our iterations. For graphite, the energy error reaches a plateau after only a few hundred DFT evaluations; for diamond, the initial error is very large, and after a dozen or so iterations we observe a rapid drop, concomitant with a drop in the error for the structurally very similar lonsdaleite (“hexagonal diamond”). The final prediction error is well below 1 meV/atom for the sp³-bonded allotropes, and on the order of 4 meV/atom for graphite. We have previously shown that the forces in diamond exhibit higher locality than those in graphite, making their description by a finite-ranged ML potential easier Deringer and Csányi (2017), given that sufficient training data are available. We also note that our method captures the energy difference between diamond and lonsdaleite very well: its value is 27 meV/atom with the final GAP-RSS version, and 28 meV/atom with DFT.

In silicon (Fig. 3b), the ground-state (diamond-type) structure is very quickly learned, more quickly so than diamond carbon, which we ascribe to the absence of a competing threefold-coordinated phase in the case of Si. We further test our evolving potentials on the high-pressure form, the β-tin-type allotrope (space group I4₁/amd), which is easily discovered; the larger residual error for β-Sn-type than for diamond-type Si is consistent with previous studies using a manually tuned potential Bartók et al. (2018). We also test our method on a recently synthesized open-framework structure with 24 atoms in the unit cell (oS24) Kim et al. (2015), which consists of distorted tetrahedral building units that are linked in ways the potential has not “seen”. Still, a good description is achieved after a few iterations.

In titanium (Fig. 3c), a hexagonal close-packed (hcp) structure is observed at ambient conditions; however, the zero-Kelvin ground state has been under debate: depending on the DFT method, either hcp or the so-called ω phase is obtained as the minimum. Our method clearly reproduces the qualitative and quantitative difference between the two allotropes (22 meV/atom with the final GAP-RSS iteration versus 24 meV/atom with DFT) at the computational level we use.

Looking beyond the minimum structures, the DFT energy–volume curves are, by and large, well reproduced by GAP-RSS; see Fig. 3d–f. There is some deviation at large volumes for hcp- and ω-type Ti, but this is acceptable, as these regions of the PES are less relevant, corresponding to negative external pressure. If one were interested in very accurate elastic properties, one could choose to include less dense structures by modifying the pressure parameters (Methods section, Eq. 6). Indeed, it was recently shown that an ML potential for Ti, fitted to a database of 2,700 structures built from the phases on which we test here (ω, hcp, bcc) and other relevant structures, can accurately predict energetic and elastic properties Takahashi et al. (2017).

Figure 4: Scatter plots of predicted versus DFT energies for ensembles of structures added to the reference databases in the final iteration. Note that the energy scale is continuous, but it changes from linear to logarithmic scaling at 0.1 eV/at., allowing us to visualize both low- and higher-energy regions.

Entire potential-energy landscapes

While the most relevant crystal structures for materials are usually well known and available from databases, we show that our chemically “agnostic” approach is more general. In Fig. 4, we show an energy–energy scatter plot for the last set of GAP-RSS minimizations, evaluated with DFT and with the preceding GAP version, and again across three different chemical systems. We survey both the low- and higher-energy regions of the PES—up to 1 eV per atom, which is very roughly the upper stability limit at which crystalline carbon phases may be expected to exist Aykol et al. (2018).

Figure 5: Visualizing the highly diverse structures, both at low and relatively high energies above the global minimum, that have been explored by GAP-RSS and added to the reference database in the last iteration. A similarity map compares three systems side-by-side (carbon, triangles; silicon, squares; titanium, circles), as described in the text. The resulting plot (with arbitrary axis values) emphasizes relationships between the different databases. The structures, “learned” from scratch by our protocol, range all the way from threefold-coordinated graphite, via fourfold-coordinated (sp³-like) allotropes of C and Si, onward to high-pressure Si structures and finally densely packed variants of Ti. A higher-energy structure (lying well above diamond-type silicon in energy) from an earlier step in a minimization trajectory is included as an example, as enclosed by a dashed line.

To analyze and understand the outcome of these searches in structural and chemical terms, we compute the distance between any two structures A and B as

$d(\mathrm{A},\mathrm{B}) = \sqrt{2 - 2\,\bar{k}(\mathrm{A},\mathrm{B})}$    (1)

where $\bar{k}(\mathrm{A},\mathrm{B})$ is again the configuration-averaged SOAP kernel for A and B (Methods section), so that d is a distance between two entire unit cells. We then use a dimensionality reduction technique to draw a two-dimensional structural map reflecting these distances. Such SOAP-based maps have been used with success to analyze structural and chemical relationships in different materials datasets De et al. (2016); Engel et al. (2018); Caro et al. (2018b). Here, we use them to illustrate how different materials (including their allotropes as known from chemistry textbooks) are related in structural space.

To compare different materials with inherently different absolute bond lengths, we re-scale their unit cells such that the minimum bond length in each takes the same fixed value, inspired by approaches for topological analyses of different structures Delgado-Friedrichs and O’Keeffe (2003). We then use SOAP to determine the distance between all pairs of cells in this set, and use principal component analysis to represent this dataset in a 2D plane. Figure 5 shows the resulting plot, in which we have encoded the species by symbols and the average coordination number by color. (Coordination numbers are determined by counting nearest neighbors up to a fixed cutoff in the rescaled coordinates.)
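The map construction can be sketched as follows, starting from a normalized similarity kernel. This is a minimal illustration: we substitute classical multidimensional scaling, which is equivalent to PCA on the centered kernel, and the toy kernel values below are invented, not taken from the actual datasets.

```python
import numpy as np

def kernel_distance(K):
    """Convert a normalized similarity kernel (k(A,A) = 1) into the
    distance d(A,B) = sqrt(2 - 2 k(A,B)) used for the structure map."""
    return np.sqrt(np.clip(2.0 - 2.0 * K, 0.0, None))

def pca_2d(D):
    """Classical multidimensional scaling on a distance matrix:
    double-center the squared distances, then keep the top two axes."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # Gram matrix from distances
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:2]          # two largest components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy kernel: three mutually similar structures and one outlier
K = np.array([[1.0, 0.9, 0.8, 0.1],
              [0.9, 1.0, 0.9, 0.1],
              [0.8, 0.9, 1.0, 0.2],
              [0.1, 0.1, 0.2, 1.0]])
coords = pca_2d(kernel_distance(K))        # (4, 2) map coordinates
```

In such a map, the dissimilar fourth structure lands far from the cluster of the first three, mirroring how the disordered intermediate in Fig. 5 separates from the crystalline groups.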

The results fall within four groups, moving from left to right through Fig. 5. The first group is given by graphite-like structures; they are three-fold coordinated, and only carbon structures (triangles) are found there. Roman numerals in Fig. 5 indicate examples, and in this first group we observe flat (i) and buckled (ii) graphite sheets. In the second group, we have four-fold coordinated (“diamond-like”) networks, made up of both carbon and silicon (recall that we are using a normalized bond length, so diamond carbon and diamond-type silicon will fall on the same position in the plot). The structures that are shown as insets are characteristic examples; from left to right, there is a distorted lonsdaleite-type structure (iii), the well-known unj framework (also referred to as the “chiral framework structure” in group-14 elements Pickard and Needs (2010); iv), and a more complex sp³-bonded allotrope (v). While the axis values in our plot are arbitrary, they naturally reflect the structural evolution toward higher coordination numbers, and therefore we next observe a set of high-pressure silicon structures (squares), such as the simple-hexagonal one (vi), with an additional contribution from lower-coordinated titanium structures (circles). Finally, there is a set of densely packed structures, all clustered closely together; these are titanium structures including hcp (vii) and the ω type (viii). In the center of the plot, there is a structure that bears resemblance to none of the previously mentioned ones (ix): an energetically high-lying and strongly disordered intermediate from a relaxation trajectory that was added to the reference database, rather than a local minimum. This dissimilarity is reflected in relatively large distances from other entries in the SOAP-based similarity map.

Discussion

We have shown that automated protocols can be designed for generating structural databases and fitting potential-energy surfaces of materials in a self-guided way. This allows for the generation of ML-based interatomic potentials with minimal effort, both in terms of computational and user time, and it represents a step toward wide applicability of these techniques in computational materials science. Once a core (RSS-based) database has been constructed, it can be readily improved by adding defect, surface, and liquid/amorphous structural models, while at the same time being sufficiently robust to avoid unphysical behavior—even when taken to the more extreme regions of configuration space that are explored early on during RSS.

We targeted here the space of three-dimensional inorganic crystal structures, but conceptually similar approaches may be useful for nanoparticles Artrith and Kolpak (2014); Kolsbjerg et al. (2018). Finally, organic (molecular) materials are also beginning to be described very reliably with ML potentials Smith et al. (2017); Chmiela et al. (2018), and an interesting open question is how to use the structural diversity inherent in RSS in the context of organic solids Zilka et al. (2017).

Methods

Interatomic potential fitting

To fit interatomic potentials, we use the Gaussian Approximation Potential (GAP) ML framework Bartók et al. (2010) and the associated computer code, which is freely available for non-commercial research at http://www.libatoms.org. Compared to previous work, we have here developed suitable heuristics to automate and generalize the choice of fitting parameters where possible.

We use a linear combination of 2-, 3-, and many-body terms following Refs. 70 and 30, with defining parameters given in Table 1. The 2-body (“2b”) and 3b descriptors are scalar distances and symmetrized three-component vectors, respectively. For the many-body term, we use the Smooth Overlap of Atomic Positions (SOAP) kernel Bartók et al. (2013), which has been used to fit GAPs for diverse systems Szlachta et al. (2014); Deringer and Csányi (2017); Bartók et al. (2018); Mocanu et al. (2018). The overall energy scale of each descriptor’s contribution to the predicted energy (controlled by the parameter δ) Bartók and Csányi (2015) is set automatically in our protocol: the 2b value is set from the variance of energies in the fitting database, the 3b value from the energy error of a 2b-only fit, and the SOAP value from the energy error of a 2b+3b fit.
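The hierarchical setting of the energy scales can be sketched as follows; this is a toy illustration of the heuristic described above (each term's scale is set by what the simpler model fails to explain), with function and argument names of our own choosing, not those of the GAP code.

```python
import numpy as np

def auto_deltas(e_ref, e_2b_fit, e_2b3b_fit):
    """Heuristic per-descriptor energy scales (delta), in eV/atom.

    e_ref      : reference energies per atom in the fitting database
    e_2b_fit   : predictions of a 2-body-only fit on the same structures
    e_2b3b_fit : predictions of a 2b+3b fit
    """
    delta_2b = np.std(e_ref)                                  # spread of database energies
    delta_3b = np.sqrt(np.mean((e_ref - e_2b_fit) ** 2))      # residual of the 2b fit
    delta_soap = np.sqrt(np.mean((e_ref - e_2b3b_fit) ** 2))  # residual of the 2b+3b fit
    return delta_2b, delta_3b, delta_soap

# Toy usage with invented energies and systematically offset "fits"
e = np.array([-4.0, -3.8, -3.5, -2.9])
d2, d3, ds = auto_deltas(e, e + 0.2, e + 0.05)
```

The design rationale is that each successive, more flexible term only needs to model the residual left by the simpler terms, so its prior energy scale should match that residual.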

The cutoffs for the three types of descriptors are expressed in terms of the characteristic radius (Table 1): the 2b cutoff is the longest-ranged, the 3b cutoff the shortest (intended to capture only nearest neighbors), and the SOAP cutoff intermediate in range. The resulting cutoff settings are listed in Table 1; the characteristic radii for the systems studied here are 0.84, 0.76, 1.11, and 1.47 Å for B, C, Si, and Ti, respectively.

The weights on the energies, forces, and stresses that are fit are set by diagonal noise terms in the Gaussian process regression Bartók et al. (2010). We set these according to the reference energy of a given structure, to make the fit more accurate for relatively low-energy structures at each volume while providing flexibility for the higher-energy regions. The values are piecewise-linear functions of ΔE, the per-atom reference energy difference relative to the same volume on the convex hull bounding the set of points from below (in energy). For the energy, the expected error is 1 meV/atom below a lower ΔE threshold and 100 meV/atom above an upper threshold, linearly interpolated in between. For forces, the corresponding values are 31.6 and 316 meV/Å, and for virials, 63.2 and 632 meV/atom.
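The piecewise-linear noise schedule can be sketched as follows. The 1 and 100 meV/atom end values for energies are from the text; the ΔE thresholds are left as free parameters here (their specific values are not reproduced), and the example threshold values are ours.

```python
import numpy as np

def energy_sigma(delta_e, e_lo, e_hi, sigma_lo=0.001, sigma_hi=0.100):
    """Expected error ("noise") on energies, in eV/atom.

    delta_e      : per-atom energy above the convex hull at the same volume (eV)
    e_lo, e_hi   : thresholds between which sigma is linearly interpolated
    sigma_lo/hi  : 1 meV/atom and 100 meV/atom end values, as in the text
    """
    t = np.clip((np.asarray(delta_e, float) - e_lo) / (e_hi - e_lo), 0.0, 1.0)
    return sigma_lo + t * (sigma_hi - sigma_lo)

# Example: on-hull, intermediate, and high-energy structures
sig = energy_sigma([0.0, 0.5, 2.0], e_lo=0.1, e_hi=1.0)
```

Analogous functions with the 31.6/316 meV/Å and 63.2/632 meV/atom end values would apply to forces and virials.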

Comparing structures

We also use SOAP, although with different parameters, to compare the similarity of environments (as proposed in Ref. 54) when selecting which data to train from (in the CUR step). We obtain what we call a “configuration-averaged” SOAP by averaging over all atoms in the cell. In the SOAP framework Bartók et al. (2013), the neighbor density of a given atom $i$ is expanded using a local basis set of radial basis functions $g_n(r)$ and spherical harmonics $Y_{lm}(\hat{\mathbf{r}})$,

$\rho_i(\mathbf{r}) = \sum_{j} \exp\!\left(-\frac{|\mathbf{r} - \mathbf{r}_{ij}|^2}{2\sigma^2}\right) = \sum_{nlm} c^{(i)}_{nlm}\, g_n(r)\, Y_{lm}(\hat{\mathbf{r}})$    (2)

where $j$ runs over the neighbours of atom $i$ within the specified cutoff (including the atom itself). To obtain a similarity measure between unit cells, rather than individual atoms, we then average the expansion coefficients over all $N$ atoms in the unit cell,

$\bar{c}_{nlm} = \frac{1}{N} \sum_{i=1}^{N} c^{(i)}_{nlm}$    (3)

and construct the rotationally invariant power spectrum for the entire unit cell Mavračić et al. (2018),

$\bar{p}_{nn'l} = \sum_{m} \bar{c}^{\,*}_{nlm}\, \bar{c}_{n'lm}$    (4)

Note that this is not equal to the average of the usual atomic SOAP power spectra used to describe the atomic neighbor environments. The final kernel to compare two cells, A and B, is then

$\bar{k}(\mathrm{A},\mathrm{B}) = \left( \frac{\bar{\mathbf{p}}_{\mathrm{A}} \cdot \bar{\mathbf{p}}_{\mathrm{B}}}{|\bar{\mathbf{p}}_{\mathrm{A}}|\, |\bar{\mathbf{p}}_{\mathrm{B}}|} \right)^{\zeta}$    (5)

where $\zeta$ is a small integer exponent (here, $\zeta = 4$; Table 1).
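A compact sketch of the configuration-averaged kernel of Eqs. (3)–(5), assuming the per-atom expansion coefficients have already been computed by a SOAP implementation; the array layout and names are our own, and random complex numbers stand in for real coefficients.

```python
import numpy as np

def averaged_power_spectrum(c):
    """Configuration-averaged SOAP power spectrum (normalized).

    c : complex array of shape (n_atoms, n_max, l_max + 1, 2*l_max + 1)
        holding per-atom expansion coefficients c^(i)_{nlm} (m padded per l).
    The coefficients are averaged over atoms *before* forming the
    rotationally invariant power spectrum, as in Eqs. (3)-(4).
    """
    cbar = c.mean(axis=0)                                   # Eq. (3): average over atoms
    # p_{n n' l} = sum_m conj(cbar_{nlm}) * cbar_{n'lm}       Eq. (4)
    p = np.einsum("nlm,klm->nkl", cbar.conj(), cbar).real.ravel()
    return p / np.linalg.norm(p)

def cell_kernel(c_a, c_b, zeta=4):
    """Similarity of two unit cells, Eq. (5): normalized dot product of
    the averaged power spectra, raised to the power zeta."""
    return float(averaged_power_spectrum(c_a) @ averaged_power_spectrum(c_b)) ** zeta

# Toy cell with 5 atoms, n_max = 2, l_max = 2 (random stand-in coefficients)
rng = np.random.default_rng(0)
cA = rng.normal(size=(5, 2, 3, 5)) + 1j * rng.normal(size=(5, 2, 3, 5))
kAA = cell_kernel(cA, cA)   # self-similarity of a cell is 1 by construction
```

Note that, as stated in the text, averaging the coefficients first is not the same as averaging the per-atom power spectra.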

Descriptor   σ (Å)   n_sparse   n_max   l_max   ζ
2-body       0.5     30         –       –       –
3-body       1.0     100        –       –       –
SOAP         0.75    2000       8       8       4

Table 1: Hyperparameters for the descriptors used in GAP fitting. For all descriptors: Gaussian width σ (squared-exponential kernel for the 2- and 3-body terms; atomic density width for SOAP) and number of sparse points n_sparse. For SOAP only: number of radial functions n_max, maximum angular momentum l_max, and kernel exponent ζ. Cutoffs are expressed in terms of the characteristic radius, with separate covalent and metallic settings, listed for each material in the Interatomic potential fitting subsection.

Iterative generation of reference data

Randomized atomic positions are generated using the buildcell code of the AIRSS package version 0.9, available at https://www.mtg.msm.cam.ac.uk/Codes/AIRSS. The positions are repeated by 1–8 symmetry operations, and the cells contain 6–24 atoms. A minimum interatomic separation is also enforced, expressed in terms of the characteristic radius. The volumes per atom of the random cells are centered on heuristic values derived from the covalent and metallic radii, respectively. In the initial iteration, half of the structures are generated from the buildcell-default narrow range of volumes, and half from a wider range around the heuristic value. In all later iterations, only the default narrow range is used. The wide-volume-range configurations are meant simply to span a wide range of structures Deringer et al. (2018a), and use only even numbers of atoms. The narrow-volume-range configurations are meant to be good initial conditions for RSS, and so for 80% (20%) of the seed structures we choose even (odd) numbers of atoms. This is because for most known structures the number of atoms in the conventional unit cell is even (eight for diamond and rocksalt, for example), although for some it is odd, including the ω phase Sikka et al. (1982). Biasing initial seeds toward distributions that occur in nature is a central idea within the AIRSS formalism Pickard and Needs (2011).

With the initial potential available, we then run structural optimizations by relaxing the candidate configurations with a preconditioned LBFGS algorithm Packwood et al. (2016) to minimize the enthalpy until residual forces fall below 0.01 eV/Å. As in Ref. 19, we employ a random external pressure with probability density

(6)

here with  GPa, and . This protocol ensures that there is a small but finite external pressure, and also some smaller-volume structures are included in the fit Deringer et al. (2018a, b). We choose the same pressure range for all materials, for simplicity, although this value could be adjusted depending on the pressure region of interest.
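The parameters of Eq. (6) are not fully reproduced above. Purely as an illustrative sketch, one might draw the random external pressure from an exponential distribution, which yields small but finite pressures most of the time; both the distribution and the 1 GPa scale below are assumptions for illustration, not the values used in the paper.

```python
import random

P_SCALE_GPA = 1.0  # assumed scale parameter; the actual values follow Eq. (6)


def random_pressure(rng=random):
    """Draw a random external pressure in GPa (assumed exponential form)."""
    return rng.expovariate(1.0 / P_SCALE_GPA)
```
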

The selection of configurations for DFT evaluation and fitting at each iteration involves a Boltzmann-biased flat histogram and leverage-score CUR, as illustrated in Fig. 1. To compute the selection probabilities for the flat-histogram stage, the distribution of enthalpies (each computed using the pressure at which the corresponding RSS minimization was done) is approximated using the histogram function of the numpy library num (), with default parameters. The probability of selecting each configuration is inversely proportional to the density of the corresponding histogram bin, multiplied by a Boltzmann biasing factor. The biasing factor is exponential in the energy per atom relative to the lowest-energy configuration, divided by a temperature of 0.3 eV for the first iteration, 0.2 eV for the second, and 0.1 eV for all remaining iterations. The leverage-score CUR selection is based on the singular-value decomposition of the square kernel matrix using the SOAP descriptors (with the dot-product kernel and exponentiation by , Eq. 5). Applying the same algorithm to the rectangular matrix of SOAP descriptor vectors was significantly less effective.
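The two selection stages can be sketched in Python with numpy only. The handling of histogram edge cases and the number of singular vectors k below are illustrative choices, not taken from the text; the kernel matrix K is assumed to be precomputed from the SOAP descriptors.

```python
import numpy as np


def selection_probabilities(enthalpies, kT):
    """Boltzmann-biased flat-histogram selection weights (sketch).

    kT is the biasing temperature in eV (0.3 / 0.2 / 0.1 eV across iterations).
    """
    H = np.asarray(enthalpies, dtype=float)
    counts, edges = np.histogram(H)  # default parameters, as in the text
    # map each configuration to its bin; clip handles the rightmost edge
    bin_idx = np.clip(np.digitize(H, edges) - 1, 0, len(counts) - 1)
    flat = 1.0 / counts[bin_idx]              # inverse histogram density
    boltz = np.exp(-(H - H.min()) / kT)       # bias toward low energies
    p = flat * boltz
    return p / p.sum()


def cur_leverage_scores(K, k):
    """Leverage scores from the top-k singular vectors of the square kernel matrix."""
    U, s, Vt = np.linalg.svd(K)
    scores = (Vt[:k] ** 2).sum(axis=0) / k
    return scores / scores.sum()
```

Configurations would then be drawn according to `selection_probabilities`, and the CUR stage would sample (or rank) columns of the kernel matrix by their leverage scores.
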

Computational details

Reference energies and forces were obtained using DFT with the projector augmented-wave (PAW) method Blöchl (1994); Kresse and Joubert (1999) as implemented in the Vienna Ab Initio Simulation Package (VASP) Kresse and Furthmüller (1996). Valence electrons were described by plane-wave basis sets with cutoff energies of 500 eV (B), 800 eV (C), 400 eV (Si), and 285 eV (Ti). Exchange and correlation were treated using the PBEsol functional Perdew et al. (2008) for all materials except carbon, where the optB88-vdW functional Dion et al. (2004); Román-Pérez and Soler (2009); Klimeš et al. (2011) was chosen to properly account for the van der Waals interactions in graphitic structures. Benchmark data for energy–volume curves were obtained by scaling selected unit cells within given volume increments and optimizing while constraining the volume and symmetry of the cell.
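A minimal sketch of the volume scan: the cell is scaled isotropically, so fractional coordinates (and hence the symmetry of the structure) are unchanged, and each scaled cell would then be handed to a constrained DFT optimization (not shown). The lattice constant below is a placeholder.

```python
import numpy as np


def scaled_cells(lattice, fractions):
    """Isotropically scale a unit cell to given fractions of its volume.

    Fractional (direct) atomic coordinates are unchanged by the scaling,
    which preserves the symmetry of the structure.
    """
    lattice = np.asarray(lattice, dtype=float)
    # volume scales with the cube of the linear scaling factor
    return [np.cbrt(f) * lattice for f in fractions]


# Example: a cubic cell scanned from 50% to 110% of its equilibrium volume
a0 = 5.43  # placeholder lattice constant (A), roughly diamond-type silicon
cell = a0 * np.eye(3)
cells = scaled_cells(cell, np.linspace(0.5, 1.1, 13))
volumes = [abs(np.linalg.det(c)) for c in cells]
```
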

Data access statement

Data supporting this publication will be made openly available via https://www.repository.cam.ac.uk.

Author contributions

N.B., G.C., and V.L.D. jointly designed the research, developed the approach, and analyzed the data. N.B. developed the computational framework and performed the computations with input from all authors. V.L.D. wrote the paper with input from all authors.

Acknowledgements
We thank Profs. C. J. Pickard and D. M. Proserpio for ongoing valuable discussions. N.B. acknowledges support from the Office of Naval Research through the U. S. Naval Research Laboratory’s core basic research program. G.C. acknowledges EPSRC grant EP/P022596/1. V.L.D. acknowledges a Leverhulme Early Career Fellowship and support from the Isaac Newton Trust.

References

  • Lejaeghere et al. (2016) K. Lejaeghere et al., “Reproducibility in density functional theory calculations of solids,” Science 351, aad3000 (2016).
  • Oganov et al. (2019) A. R. Oganov, C. J. Pickard, Q. Zhu,  and R. J. Needs, “Structure prediction drives materials discovery,” Nat. Rev. Mater., in press, DOI: 10.1038/s41578-019-0101-8 (2019).
  • Behler and Parrinello (2007) J. Behler and M. Parrinello, “Generalized neural-network representation of high-dimensional potential-energy surfaces,” Phys. Rev. Lett. 98, 146401 (2007).
  • Bartók et al. (2010) A. P. Bartók, M. C. Payne, R. Kondor,  and G. Csányi, “Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons,” Phys. Rev. Lett. 104, 136403 (2010).
  • Li et al. (2015) Z. Li, J. R. Kermode,  and A. De Vita, “Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces,” Phys. Rev. Lett. 114, 096405 (2015).
  • Artrith and Urban (2016) N. Artrith and A. Urban, “An implementation of artificial neural-network potentials for atomistic materials simulations: Performance for TiO₂,” Comput. Mater. Sci. 114, 135–150 (2016).
  • Smith et al. (2017) J. S. Smith, O. Isayev,  and A. E. Roitberg, “ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost,” Chem. Sci. 8, 3192–3203 (2017).
  • Chmiela et al. (2017) S. Chmiela, A. Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt,  and K.-R. Müller, “Machine learning of accurate energy-conserving molecular force fields,” Sci. Adv. 3, e1603015 (2017).
  • Behler (2017) J. Behler, “First principles neural network potentials for reactive simulations of large molecular and condensed systems,” Angew. Chem. Int. Ed. 56, 12828–12840 (2017).
  • Huan et al. (2017) T. D. Huan, R. Batra, J. Chapman, S. Krishnan, L. Chen,  and R. Ramprasad, “A universal strategy for the creation of machine learning-based atomistic force fields,” npj Comput. Mater. 3, 37 (2017).
  • Chmiela et al. (2018) S. Chmiela, H. E. Sauceda, K.-R. Müller,  and A. Tkatchenko, “Towards exact molecular dynamics simulations with machine-learned force fields,” Nat. Commun. 9, 3887 (2018).
  • Zhang et al. (2018) L. Zhang, J. Han, H. Wang, R. Car,  and W. E, “Deep potential molecular dynamics: A scalable model with the accuracy of quantum mechanics,” Phys. Rev. Lett. 120, 143001 (2018).
  • Caro et al. (2018a) M. A. Caro, V. L. Deringer, J. Koskinen, T. Laurila,  and G. Csányi, “Growth mechanism and origin of high sp³ content in tetrahedral amorphous carbon,” Phys. Rev. Lett. 120, 166101 (2018a).
  • Hellström et al. (2019) M. Hellström, V. Quaranta,  and J. Behler, “One-dimensional vs. two-dimensional proton transport processes at solid–liquid zinc-oxide–water interfaces,” Chem. Sci. 10, 1232–1243 (2019).
  • Fellinger et al. (2018) M. R. Fellinger, A. M. Z. Tan, L. G. Hector,  and D. R. Trinkle, “Geometries of edge and mixed dislocations in bcc Fe from first-principles calculations,” Phys. Rev. Materials 2, 113605 (2018).
  • Maresca et al. (2018) F. Maresca, D. Dragoni, G. Csányi, N. Marzari,  and W. A. Curtin, “Screw dislocation structure and mobility in body centered cubic Fe predicted by a Gaussian approximation potential,” npj Comput. Mater. 4, 69 (2018).
  • Deringer et al. (2017) V. L. Deringer, G. Csányi,  and D. M. Proserpio, “Extracting crystal chemistry from amorphous carbon structures,” ChemPhysChem 18, 873–877 (2017).
  • Deringer et al. (2018a) V. L. Deringer, C. J. Pickard,  and G. Csányi, “Data-driven learning of total and local energies in elemental boron,” Phys. Rev. Lett. 120, 156001 (2018a).
  • Deringer et al. (2018b) V. L. Deringer, D. M. Proserpio, G. Csányi,  and C. J. Pickard, “Data-driven learning and prediction of inorganic crystal structures,” Faraday Discuss. 211, 45–59 (2018b).
  • Podryabinkin et al. (2019) E. V. Podryabinkin, E. V. Tikhonov, A. V. Shapeev,  and A. R. Oganov, “Accelerating crystal structure prediction by machine-learning interatomic potentials with active learning,” Phys. Rev. B 99, 064114 (2019).
  • Ouyang et al. (2015) R. Ouyang, Y. Xie,  and D.-e. Jiang, “Global minimization of gold clusters by combining neural network potentials and the basin-hopping method,” Nanoscale 7, 14817–14821 (2015).
  • Tong et al. (2018) Q. Tong, L. Xue, J. Lv, Y. Wang,  and Y. Ma, “Accelerating calypso structure prediction by data-driven learning of a potential energy surface,” Faraday Discuss. 211, 31–43 (2018).
  • Kolsbjerg et al. (2018) E. L. Kolsbjerg, A. A. Peterson,  and B. Hammer, “Neural-network-enhanced evolutionary algorithm applied to supported metal nanoparticles,” Phys. Rev. B 97, 195424 (2018).
  • Hajinazar et al. (2019) S. Hajinazar, E. D. Sandoval, A. J. Cullo,  and A. N. Kolmogorov, “Multitribe evolutionary search for stable Cu–Pd–Ag nanoparticles using neural network models,” Phys. Chem. Chem. Phys. 21, 8729–8742 (2019).
  • Eivari et al. (2017) H. A. Eivari, S. A. Ghasemi, H. Tahmasbi, S. Rostami, S. Faraji, R. Rasoulkhani, S. Goedecker,  and M. Amsler, “Two-dimensional hexagonal sheet of TiO₂,” Chem. Mater. 29, 8594–8603 (2017).
  • Szlachta et al. (2014) W. J. Szlachta, A. P. Bartók,  and G. Csányi, “Accuracy and transferability of Gaussian approximation potential models for tungsten,” Phys. Rev. B 90, 104108 (2014).
  • Dragoni et al. (2018) D. Dragoni, T. D. Daff, G. Csányi,  and N. Marzari, “Achieving DFT accuracy with a machine-learning interatomic potential: Thermomechanics and defects in bcc ferromagnetic iron,” Phys. Rev. Materials 2, 013808 (2018).
  • Eshet et al. (2010) H. Eshet, R. Z. Khaliullin, T. D. Kühne, J. Behler,  and M. Parrinello, “Ab initio quality neural-network potential for sodium,” Phys. Rev. B 81, 184107 (2010).
  • Sosso et al. (2012) G. C. Sosso, G. Miceli, S. Caravati, J. Behler,  and M. Bernasconi, “Neural network interatomic potential for the phase change material GeTe,” Phys. Rev. B 85, 174103 (2012).
  • Deringer and Csányi (2017) V. L. Deringer and G. Csányi, “Machine learning based interatomic potential for amorphous carbon,” Phys. Rev. B 95, 094203 (2017).
  • Mocanu et al. (2018) F. C. Mocanu, K. Konstantinou, T. H. Lee, N. Bernstein, V. L. Deringer, G. Csányi,  and S. R. Elliott, “Modeling the phase-change memory material, Ge₂Sb₂Te₅, with a machine-learned interatomic potential,” J. Phys. Chem. B 122, 8998–9006 (2018).
  • Bartók et al. (2018) A. P. Bartók, J. Kermode, N. Bernstein,  and G. Csányi, “Machine learning a general-purpose interatomic potential for silicon,” Phys. Rev. X 8, 041048 (2018).
  • Bartók et al. (2017) A. P. Bartók, S. De, C. Poelking, N. Bernstein, J. R. Kermode, G. Csányi,  and M. Ceriotti, “Machine learning unifies the modeling of materials and molecules,” Sci. Adv. 3 (2017).
  • Deringer et al. (2018c) V. L. Deringer, Noam Bernstein, A. P. Bartók, M. J. Cliffe, R. N. Kerber, L. E. Marbella, C. P. Grey, S. R. Elliott,  and G. Csányi, “Realistic atomistic structure of amorphous silicon from machine-learning-driven molecular dynamics,” J. Phys. Chem. Lett. 9, 2879–2885 (2018c).
  • Hajinazar et al. (2017) S. Hajinazar, J. Shao,  and A. N. Kolmogorov, “Stratified construction of neural network based interatomic models for multicomponent materials,” Phys. Rev. B 95, 014114 (2017).
  • Behler et al. (2008) J. Behler, R. Martoňák, D. Donadio,  and M. Parrinello, “Metadynamics simulations of the high-pressure phases of silicon employing a high-dimensional neural network potential,” Phys. Rev. Lett. 100, 185501 (2008).
  • Jinnouchi et al. (2019a) R. Jinnouchi, J. Lahnsteiner, F. Karsai, G. Kresse,  and M. Bokdam, “Phase transitions of hybrid perovskites simulated by machine-learning force fields trained on-the-fly with bayesian inference,”  (2019a), arXiv:1903.09613 [cond-mat.mtrl-sci] .
  • Vandermause et al. (2019) J. Vandermause, S. B. Torrisi, S. Batzner, A. M. Kolpak,  and B. Kozinsky, “On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events,”  (2019), arXiv:1904.02042 [physics.comp-ph] .
  • Jinnouchi et al. (2019b) R. Jinnouchi, F. Karsai,  and G. Kresse, “On-the-fly machine learning force field generation: Application to melting points,”  (2019b), arXiv:1904.12961 [cond-mat.mtrl-sci] .
  • Pickard and Needs (2006) C. J. Pickard and R. J. Needs, “High-pressure phases of silane.” Phys. Rev. Lett. 97, 045504 (2006).
  • Pickard and Needs (2011) C. J. Pickard and R. J. Needs, “Ab initio random structure searching.” J. Phys.: Condens. Matter 23, 053201 (2011).
  • Artrith and Behler (2012) N. Artrith and J. Behler, “High-dimensional neural network potentials for metal surfaces: A prototype study for copper,” Phys. Rev. B 85, 045439 (2012).
  • Shapeev (2016) A. Shapeev, “Moment tensor potentials: A class of systematically improvable interatomic potentials,” Multiscale Model. Simul. 14, 1153–1173 (2016).
  • Podryabinkin and Shapeev (2017) E. V. Podryabinkin and A. V. Shapeev, “Active learning of linearly parametrized interatomic potentials,” Comput. Mater. Sci. 140, 171–180 (2017).
  • Gubaev et al. (2019) K. Gubaev, E. V. Podryabinkin, G. L. W. Hart,  and A. V. Shapeev, “Accelerating high-throughput searches for new alloys with active learning of interatomic potentials,” Comput. Mater. Sci. 156, 148–156 (2019).
  • Zhang et al. (2019) L. Zhang, D.-Y. Lin, H. Wang, R. Car,  and W. E, “Active learning of uniformly accurate interatomic potentials for materials simulation,” Phys. Rev. Mater. 3, 023804 (2019).
  • Cordero et al. (2008) B. Cordero, V. Gómez, A. E. Platero-Prats, M. Revés, J. Echeverría, E. Cremades, F. Barragán,  and S. Alvarez, “Covalent radii revisited,” Dalton Trans. , 2832–2838 (2008).
  • Momma and Izumi (2011) K. Momma and F. Izumi, “VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data,” J. Appl. Crystallogr. 44, 1272–1276 (2011).
  • Pickard and Needs (2008) C. J. Pickard and R. J. Needs, “Highly compressed ammonia forms an ionic crystal,” Nat. Mater. 7, 775–779 (2008).
  • Marqués et al. (2011) M. Marqués, M. I. McMahon, E. Gregoryanz, M. Hanfland, C. L. Guillaume, C. J. Pickard, G. J. Ackland,  and R. J. Nelmes, “Crystal structures of dense lithium: A metal-semiconductor-metal transition,” Phys. Rev. Lett. 106, 095502 (2011).
  • Stratford et al. (2017) J. M. Stratford, M. Mayo, P. K. Allan, O. Pecher, O. J. Borkiewicz, K. M. Wiaderek, K. W. Chapman, C. J. Pickard, A. J. Morris,  and C. P. Grey, “Investigating sodium storage mechanisms in tin anodes: A combined pair distribution function analysis, density functional theory, and solid-state NMR approach,” J. Am. Chem. Soc. 139, 7273–7286 (2017).
  • Mahoney and Drineas (2009) M. W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proc. Natl. Acad. Sci., U. S. A. 106, 697–702 (2009).
  • Bartók et al. (2013) A. P. Bartók, R. Kondor,  and G. Csányi, “On representing chemical environments,” Phys. Rev. B 87, 184115 (2013).
  • De et al. (2016) S. De, A. P. Bartók, G. Csányi,  and M. Ceriotti, “Comparing molecules and solids across structural and alchemical space,” Phys. Chem. Chem. Phys. 18, 13754–13769 (2016).
  • Mavračić et al. (2018) J. Mavračić, F. C. Mocanu, V. L. Deringer, G. Csányi,  and S. R. Elliott, “Similarity between amorphous and crystalline phases: The case of TiO₂,” J. Phys. Chem. Lett. 9, 2985–2990 (2018).
  • Caro et al. (2018b) M. A. Caro, A. Aarva, V. L. Deringer, G. Csányi,  and T. Laurila, “Reactivity of amorphous carbon surfaces: Rationalizing the role of structural motifs in functionalization using machine learning,” Chem. Mater. 30, 7446–7455 (2018b).
  • Albert and Hillebrecht (2009) B. Albert and H. Hillebrecht, “Boron: Elementary challenge for experimenters and theoreticians,” Angew. Chem. Int. Ed. 48, 8640–8668 (2009).
  • Oganov et al. (2009) A. R. Oganov, J. Chen, C. Gatti, Y. Ma, Y. Ma, C. W. Glass, X. Liu, T. Yu, O. O. Kurakevych,  and V. L. Solozhenko, “Ionic high-pressure form of elemental boron,” Nature 457, 863–867 (2009).
  • Wu et al. (2012) X. Wu, J. Dai, Y. Zhao, Z. Zhuo, J. Yang,  and X. C. Zeng, “Two-dimensional boron monolayer sheets,” ACS Nano 6, 7443–7453 (2012).
  • Mannix et al. (2015) A. J. Mannix, X.-F. Zhou, B. Kiraly, J. D. Wood, D. Alducin, B. D. Myers, X. Liu, B. L. Fisher, U. Santiago, J. R. Guest, M. J. Yacaman, A. Ponce, A. R. Oganov, M. C. Hersam,  and N. P. Guisinger, “Synthesis of borophenes: Anisotropic, two-dimensional boron polymorphs,” Science 350, 1513–1516 (2015).
  • Ahnert et al. (2017) S. E. Ahnert, W. P. Grant,  and C. J. Pickard, “Revealing and exploiting hierarchical material structure through complex atomic networks,” npj Comput. Mater. 3, 35 (2017).
  • Kim et al. (2015) D. Y. Kim, S. Stefanoski, O. O. Kurakevych,  and T. A. Strobel, “Synthesis of an open-framework allotrope of silicon,” Nat. Mater. 14, 169–173 (2015).
  • Takahashi et al. (2017) A. Takahashi, A. Seko,  and I. Tanaka, “Conceptual and practical bases for the high accuracy of machine learning interatomic potentials: Application to elemental titanium,” Phys. Rev. Materials 1, 063801 (2017).
  • Aykol et al. (2018) M. Aykol, S. S. Dwaraknath, W. Sun,  and K. A. Persson, “Thermodynamic limit for synthesis of metastable inorganic materials,” Science Advances 4, eaaq0148 (2018).
  • Engel et al. (2018) E. A. Engel, A. Anelli, M. Ceriotti, C. J. Pickard,  and R. J. Needs, “Mapping uncharted territory in ice from zeolite networks to ice structures,” Nat. Commun. 9, 2173 (2018).
  • Delgado-Friedrichs and O’Keeffe (2003) O. Delgado-Friedrichs and M. O’Keeffe, “Identification of and symmetry computation for crystal nets,” Acta Crystallogr., Sect. A 59, 351–360 (2003).
  • Pickard and Needs (2010) C. J. Pickard and R. J. Needs, “Hypothetical low-energy chiral framework structure of group 14 elements,” Phys. Rev. B 81, 014106 (2010).
  • Artrith and Kolpak (2014) N. Artrith and A. M. Kolpak, “Understanding the composition and activity of electrocatalytic nanoalloys in aqueous solvents: A combination of DFT and accurate neural network potentials,” Nano Lett. 14, 2670–2676 (2014).
  • Zilka et al. (2017) M. Zilka, D. V. Dudenko, C. E. Hughes, P. A. Williams, S. Sturniolo, W. T. Franks, C. J. Pickard, J. R. Yates, K. D. M. Harris,  and S. P. Brown, “Ab initio random structure searching of organic molecular solids: assessment and validation against experimental data,” Phys. Chem. Chem. Phys. 19, 25949–25960 (2017).
  • Bartók and Csányi (2015) A. P. Bartók and G. Csányi, “Gaussian approximation potentials: A brief tutorial introduction,” Int. J. Quantum Chem. 115, 1051–1057 (2015).
  • Sikka et al. (1982) S. K. Sikka, Y. K. Vohra,  and R. Chidambaram, “Omega phase in materials,” Prog. Mater. Sci. 27, 245–310 (1982).
  • Packwood et al. (2016) D. Packwood, J. Kermode, L. Mones, N. Bernstein, J. Woolley, N. Gould, C. Ortner,  and G. Csányi, “A universal preconditioner for simulating condensed phase materials,” J. Chem. Phys. 144, 164109 (2016).
  • (73) The numpy python library version 1.15.2, http://www.numpy.org.
  • Blöchl (1994) P. E. Blöchl, “Projector augmented-wave method,” Phys. Rev. B 50, 17953–17979 (1994).
  • Kresse and Joubert (1999) G. Kresse and D. Joubert, “From ultrasoft pseudopotentials to the projector augmented-wave method,” Phys. Rev. B 59, 1758–1775 (1999).
  • Kresse and Furthmüller (1996) G. Kresse and J. Furthmüller, “Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set,” Phys. Rev. B 54, 11169–11186 (1996).
  • Perdew et al. (2008) J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou,  and K. Burke, “Restoring the density-gradient expansion for exchange in solids and surfaces,” Phys. Rev. Lett. 100, 136406 (2008).
  • Dion et al. (2004) M. Dion, H. Rydberg, E. Schröder, D. C. Langreth,  and B. I. Lundqvist, “Van der Waals density functional for general geometries,” Phys. Rev. Lett. 92, 246401 (2004).
  • Román-Pérez and Soler (2009) G. Román-Pérez and J. M. Soler, “Efficient implementation of a van der Waals density functional: Application to double-wall carbon nanotubes,” Phys. Rev. Lett. 103, 096102 (2009).
  • Klimeš et al. (2011) J. Klimeš, D. R. Bowler,  and A. Michaelides, “Van der Waals density functionals applied to solids,” Phys. Rev. B 83, 195131 (2011).

Supplementary Information for “De novo exploration and self-guided learning of potential-energy surfaces”


Figure S1: Silicon allotropes at high pressure (as a supplement to Fig. 3e). We here show energy–volume curves as in the main text, but with smaller increments and extending down to much smaller unit-cell volumes (that is, higher-pressure regions), down to 50% of the equilibrium volume. In the case of the open-framework silicon structure oS24 (Ref. 62; panel a), a collapse is observed at Å, both in the DFT reference computation and in the GAP-RSS prediction. We also performed a test for diamond-type silicon, compressing it to similarly small volumes (and much more strongly than shown in Fig. 3e). Over the full range of volumes studied, good agreement is observed between the DFT reference and the GAP-RSS prediction (panel b), which indicates a robust “learning” of repulsion at very small interatomic distances.
Structure | p_ext (GPa) | ΔE (eV / at.) | F_max (eV / Å)
(i) C (graphite) 0.111 +0.15 0.302
(ii) C (buckled) 0.088 +0.23 1.678
(iii) Si (dist. lon) 0.048 +0.32 0.742
(iv) Si (unj) 0.000 +0.06 0.022
(v) Si ( network) 0.124 +0.29 0.962
(vi) Si (simple hex.) 0.208 +0.25 0.653
(vii) Ti (ω phase) 0.185 +0.03 0.000
(viii) Ti (hcp) 0.324 +0.05 0.238
(ix) Si (RSS intermed.) 0.172 +0.59 0.857
Table S1: Additional information regarding the selected structures shown as insets in Fig. 5: external pressure applied for this relaxation trajectory, p_ext; energy relative to the respective ground state, ΔE; maximum DFT-computed force component in this structure, F_max.
Å
Si -1.68605982 -0.41624606 3.95802416
Si 0.15435366 4.30188720 0.38981490
Si 4.01815977 -0.40190360 1.54106451
Si -2.28421007 2.67006652 2.34218566
Si 3.26751299 2.67507753 -0.00486561
Si 1.46783885 -1.91088147 3.46795123
Si 1.26796258 1.77652573 3.00303591
Si 0.48155278 0.67469729 1.14050708
Table S2: Lattice parameters and atomic coordinates for silicon structure (v), as added to the GAP-RSS reference database in the final iteration (Fig. 5).
Å
Si -1.20744491 4.11837921 3.06869747
Si 1.15061078 2.70140772 0.21706451
Si -2.74075733 -1.74587757 4.91801644
Si -2.97936509 2.43953892 3.67216632
Si -1.80907975 3.75555505 5.55352463
Si -2.12808832 -0.09478641 3.03665790
Si 1.86693516 0.25171580 3.77895234
Si -0.51394068 1.68239975 3.54412519
Si 3.12840615 4.18317367 0.57109224
Si 2.03701947 0.79695955 1.44756468
Table S3: As Table S2 but for silicon structure (ix).