Reliable and Explainable Machine Learning Methods for Accelerated Material Discovery

Bhavya Kailkhura*, Brian Gallagher, Sookyung Kim, Anna Hiszpanski, T. Yong-Jin Han*
Corresponding Authors: B.K. (kailkhura1@llnl.gov); T.Y.H. (han5@llnl.gov)
Lawrence Livermore National Laboratory
Abstract

Material scientists are increasingly adopting the use of machine learning (ML) for making potentially important decisions, such as the discovery, development, optimization, synthesis, and characterization of materials. However, despite ML's impressive performance in commercial applications, several unique challenges exist when applying ML in materials science applications. In this context, the contributions of this work are twofold. First, we identify common pitfalls of existing ML techniques when learning from underrepresented/imbalanced material data. Specifically, we show that with imbalanced data, standard methods for assessing the quality of ML models break down and lead to misleading conclusions. Furthermore, we find that the model's own confidence score cannot be trusted, and that model introspection methods (using simpler models) do not help, as they result in a loss of predictive performance (a reliability-explainability trade-off). Second, to overcome these challenges, we propose a general-purpose explainable and reliable machine-learning framework. Specifically, we propose a novel pipeline that employs an ensemble of simpler models to reliably predict material properties. We also propose a transfer learning technique and show that the performance loss due to the models' simplicity can be overcome by exploiting correlations among different material properties. A new evaluation metric and a trust score to better quantify confidence in the predictions are also proposed. To improve interpretability, we add a rationale generator component to our framework, which provides both model-level and decision-level explanations. Finally, we demonstrate the versatility of our technique on two applications: predicting properties of crystalline compounds, and identifying novel potentially stable solar cell materials.

I Introduction

I-A Motivation

Driven by the success of machine learning (ML) in commercial applications (e.g., product recommendations and advertising), there are significant efforts to exploit these tools to analyze scientific data. One such effort is the emerging discipline of Materials Informatics, which applies ML methods to accelerate the selection, development, and discovery of materials by learning structure-property relationships. Materials Informatics researchers are increasingly adopting ML methods in their workflows to build complex models for a priori prediction of materials' physical, mechanical, optoelectronic, and thermal properties (e.g., crystal structure, melting temperature, formation enthalpy, band gap). While commercial use cases and material science applications may appear similar in their overall goals, we argue that fundamental differences exist in the corresponding data, tasks, and requirements. Applying ML techniques without careful consideration of their assumptions and limitations may lead to missed opportunities at best, and a waste of substantial resources and incorrect scientific inferences at worst. In the following, we describe unique challenges that the Materials Informatics community must overcome for universal acceptance of ML solutions in material science.

Fig. 1: Histograms (number of compounds vs. targeted property bin) of targeted properties of the OQMD database show heavily skewed distributions. We show that conventional machine learning approaches: (a) produce inaccurate inferences in sparse regions of the property-space and (b) are overconfident in the accuracy of such predictions. The proposed approach overcomes these shortcomings.

Learning From Underrepresented and Distributionally Skewed Data: One of the fundamental assumptions of current ML methods is the availability of densely and uniformly sampled (or balanced) training data. It is well known that when certain classes are under-represented in the data, standard ML algorithms fail to properly represent the distributive characteristics of the data and provide incorrect inferences across the classes. Unfortunately, in most material science applications, balanced data is exceedingly rare, and virtually all problems of interest involve various forms of extrapolation due to underrepresented data and severe class distribution skews. As an example, materials scientists are often interested in designing (or discovering) compounds with uncommon targeted properties, e.g., high superconductivity, a large figure of merit for improved thermoelectric power, shape memory alloys (SMAs) with very low thermal hysteresis, or band gap energy in the range desired for solar cells. In such applications, we encounter highly imbalanced data (with the targeted materials being in the minority class) due to these design choices or constraints. Consider the task of predicting material properties (e.g., bandgap energy, formation energy, stability, etc.) from a set of feature vectors (or descriptors) corresponding to crystalline compounds. One representative database for such a data set is the Open Quantum Materials Database (OQMD)1, which contains several properties of crystalline compounds as calculated using density functional theory (DFT). Note that the OQMD contains data sets with strongly imbalanced distributions of the target variables, i.e., material properties. In Figure 1, we plot the histograms of several commonly targeted properties. It can be seen that the data set exhibits severe distribution skews; for example, the majority of the compounds in the OQMD are possible conductors with a band gap value equal to zero.

Note that if the sole aim of the ML model applied to a classification problem is to maximize overall accuracy, the ML algorithm will perform quite well by ignoring or discarding the minority class. However, in practice, correctly classifying and learning from the minority class of interest is more important than possibly mis-classifying the majority classes.

Explainable ML Methods without Compromising Model Accuracy: A common misconception is that increasing model complexity can address the challenges of underrepresented and distributionally skewed data. However, this can only superficially palliate some of these issues. Increasing the complexity of ML models may increase the overall accuracy of the system, at the cost of making the model very hard to interpret. Ironically, scientists continue to push in the opposite direction: toward understanding, rather than merely crunching numbers from big data. Understanding why an ML model made a certain prediction or recommendation is crucial, since it is this understanding that provides the confidence to make a decision and that will lead to new hypotheses and ultimately new scientific insights. Most existing approaches define explainability as the inverse of complexity and achieve explainability at the cost of accuracy. This introduces a risk of producing explainable but misleading predictions. With the advent of highly predictive but opaque ML models, it has become more important than ever to understand and explain the predictions of such models and to devise explainable scientific machine learning techniques that do not sacrifice predictive power.

Better Evaluation and Uncertainty Quantification Techniques for Building Trust in ML: Material science problems are often under-constrained in nature, as they suffer from a limited number of representative train/test samples yet involve a large number of physical variables. For this reason, relying solely on the labeled instances available for testing or evaluating trained ML models can fail to represent the true nature of relationships in material science problems. Hence, standard methods for assessing and ensuring generalizability of ML models break down and lead to misleading conclusions. In particular, it is easy to learn spurious relationships that look deceptively good on training and test sets (even after using methods such as cross-validation) but do not generalize well outside the available labeled data. A natural solution is to use a model's own reported confidence (or uncertainty) score. However, a model's confidence score alone may not be very reliable. For example, in computer vision, well-crafted perturbations to images can cause classifiers to make mistakes (such as identifying a panda as a gibbon or confusing a cat with a computer) with very high confidence2. As we will show later, this problem also persists in the Materials Informatics pipeline (especially under distributional skewness). Nevertheless, knowing when a classifier's (or regressor's) prediction can be trusted is useful in many applications for building trust in ML. Therefore, we need to augment current error-based testing techniques with additional components to quantify the generalization performance of scientific ML algorithms, and devise reliable uncertainty quantification methods to establish trust in these predictive models.

I-B Literature Survey

In the recent past, the materials science community has used ML methods to build predictive models for several applications3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16. Seko et al.11 considered the problem of building ML models to predict the melting temperatures of binary inorganic compounds. The problem of predicting the formation enthalpy of crystalline compounds using ML models was considered recently4, 5, 17. Predictive models for crystal structure formation at a given composition are also being developed6, 18, 19, 20. The problems of band gap energy prediction for certain classes of crystals21, 22 and mechanical property prediction for metal alloys14, 15 have also been considered in the literature. Ward et al.23 proposed a general-purpose ML framework to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.

Thus far, research on applying ML methods to material science applications has predominantly focused on improving the overall accuracy of predictive models. However, imbalanced learning, explainability, and reliability of ML methods in material science have not received significant attention. As mentioned earlier, these aspects pose a real problem for deriving correct and reliable scientific inferences and for the universal acceptance of machine learning solutions in material science, and they deserve to be tackled head on.

I-C Our Contributions

In this paper, we take first steps toward addressing the challenge of building reliable and explainable ML solutions for Materials Informatics applications. The main contributions of the paper are twofold. First, we identify shortcomings in the training, testing, and uncertainty quantification steps of existing ML techniques when learning from underrepresented and distributionally skewed data. Our findings raise serious concerns regarding the reliability of existing Materials Informatics pipelines. Second, to overcome these challenges, we propose general-purpose explainable and reliable machine-learning methods that enable reliable learning from underrepresented and distributionally skewed data. We propose the following solutions: a novel learning architecture that biases the training process toward the goals of imbalanced domains; sampling approaches that manipulate the training data distribution so as to allow the use of standard ML models; and reliable evaluation metrics and uncertainty quantification methods that better capture the application bias. More specifically, we employ a novel partitioning scheme to enhance the accuracy of our predictions by first partitioning the data into similar groups of materials based on their property values and then training separate, simpler regression models for each group. As opposed to existing approaches, which train an independent regression model per property, we utilize transfer learning that exploits the correlation among different material properties to improve regression performance. The proposed transfer learning technique can overcome the performance loss due to the simplicity of the models. Next, to improve the explainability of the ML system, we add a rationale generator component to our framework. The goal of the rationale generator is twofold: to provide explanations corresponding to an individual prediction, and to provide explanations corresponding to the regression model. For individual predictions, the rationale generator provides explanations in terms of prototypes (similar but known compounds). This helps material scientists use their domain knowledge to verify whether similar known compounds or prototypes satisfy the requirements or constraints imposed. For regression models, on the other hand, the rationale generator provides global explanations over whole material sub-classes, achieved by providing feature importance for every material sub-class. Finally, we propose a new evaluation metric and a trust score to better quantify confidence and establish trust in the ML predictions.

We demonstrate the applicability of our technique on two applications: predicting five physically distinct properties of crystalline compounds, and identifying potentially stable solar cells. Our vision is that this framework could be used as a basis for creating explainable and reliable ML models from the balanced/imbalanced data available in materials databases and thereby initiate a major step forward in the application of machine learning in materials science.

II Results and Discussion

The results of this work are described in two major subsections. First, we discuss the development of our ML method, with a focus on reliability and explainability, using data from the Open Quantum Materials Database (OQMD). Next, we demonstrate the application of this method to two distinct material problems.

II-A General-Purpose Reliable and Explainable ML Framework

Fig. 2: An illustration of the proposed ML pipeline for material property prediction.

To solve the problem of reliable learning and inference from distributionally skewed data, we propose a general-purpose ML framework. Instead of developing yet another ML algorithm to improve accuracy for a specific application, our objective is to develop generic methods that improve reliability, explainability, and accuracy in the presence of imbalanced data. The proposed framework is agnostic to the type of training data, can utilize a variety of already-developed ML algorithms, and can be reused for a broad variety of material science problems. The framework is composed of three main components: a novel training procedure for learning from imbalanced data, a rationale generator for model-level and decision-level explainability, and reliable testing and uncertainty quantification techniques to evaluate the prediction performance of ML pipelines.

II-A1 Training Procedure

Building an ML model for materials property prediction can be posed as a regression problem where the goal is to predict continuous-valued properties from a set of material attributes/features. The challenge in our target task is that, due to the presence of distributional skewness, ML models do not generalize well, specifically in domains that are not well represented by the available labeled data (i.e., minority classes). To solve this problem, we propose a generic ML training process that is applicable to a broad range of materials science applications suffering from distributionally skewed data. We explain the proposed training process with the help of the following running example: a material scientist is interested in learning an ML model targeting a specific class of material properties, e.g., stable wide bandgap materials in a certain targeted range. In most cases, we have domain knowledge about the range of property values for specific classes of materials; e.g., conductors have bandgap energies equal to zero, typical semiconductors have bandgap energies up to roughly 3 eV, whereas wide bandgap materials have bandgap energies above that range. These requirements introduce a partition of the property space into multiple material classes. (This partition can also be introduced artificially by imposing constraints on the gradient of the property values so that compounds with similar property values are in the same class.) Given training data samples {(x_i, y_i) : i = 1, …, N}, where x_i is the feature/attribute vector and y_i is the property value corresponding to compound i, the steps in the proposed training procedure are as follows:

  1. Partition the property space into K regions/classes and obtain transformed training data samples {(x_i, y_i, c_i)}, where c_i ∈ {1, …, K} denotes the class of compound i.

  2. For each property, perform sub-sampling (other, more sophisticated sampling techniques24 or generative modeling approaches can also be used) on the sample compounds in the distinct classes, and obtain an evenly distributed training set.

  3. Train multi-class classifiers (one per property) on the balanced data sets to predict which class a compound belongs to.

  4. For every (property, class) pair, train a regressor on the training samples belonging to that class to predict the property values.

  5. Finally, utilize the correlation among properties to improve model accuracy by employing transfer learning (explained below).

At test time, to predict a property of a test compound, the ML algorithm first identifies the class the test compound belongs to using the trained multi-class classifier. Next, depending on the predicted class for that property, the corresponding regressor is used, along with the transfer learning step, to predict the property value of the test compound. Below, we provide details and justifications for each of these steps in our ML pipeline.

Steps 1–3 transform a regression problem into a multi-class classification problem on sub-sampled training data. This transformation is carried out with the goal of balancing the distribution of the least represented (but more important) material classes against the more frequent observations. Furthermore, instead of having a single model trained on the entire training set, having smaller and simpler models for different classes of materials helps in gaining a better understanding of the sub-domains using the rationale generator (explained later).
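To make the structure of steps 1–4 and the test-time routing concrete, below is a minimal sketch in Python on synthetic stand-in data; the thresholds, class count, and choice of scikit-learn estimators are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                      # attribute vectors x_i
y = np.abs(2.0 * X[:, 0] + rng.normal(size=2000))    # stand-in property values y_i

# Step 1: partition the property space into K = 3 classes via thresholds.
thresholds = [0.5, 3.0]                              # illustrative, not the paper's values
c = np.digitize(y, thresholds)                       # class labels c_i in {0, 1, 2}

# Step 2: under-sample every class down to the minority-class size.
n_min = np.bincount(c).min()
idx = np.concatenate([rng.choice(np.where(c == k)[0], n_min, replace=False)
                      for k in np.unique(c)])
Xb, cb = X[idx], c[idx]

# Step 3: multi-class classifier trained on the balanced data.
clf = GradientBoostingClassifier().fit(Xb, cb)

# Step 4: one simple regressor per (property, class) pair.
regs = {k: GradientBoostingRegressor().fit(X[c == k], y[c == k]) for k in np.unique(c)}

# Test time: route each compound to the regressor of its predicted class.
X_test = rng.normal(size=(5, 10))
c_hat = clf.predict(X_test)
y_hat = np.array([regs[k].predict(x.reshape(1, -1))[0] for k, x in zip(c_hat, X_test)])
```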

Next, we explain the proposed transfer learning technique, which exploits the correlations present among different material properties to improve regression performance. We devise a simple knowledge-transfer scheme that utilizes the marginal estimates/predictions from step 4, where regressors were trained independently for the different properties. Note that, for each compound, step 4 yields an independent estimate of every property. In step 5, we augment the original attribute vector with these independent estimates, use the result as a modified attribute vector, and train regressors for each (property, class) pair. We found that this simple knowledge-transfer scheme significantly improves regression performance.
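The knowledge-transfer step can be sketched as follows, again on synthetic stand-in data; note that, to avoid target leakage, the marginal predictions used for augmentation should in practice come from held-out folds (omitted here for brevity).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
# Two correlated stand-in properties (columns of Y), e.g., bandgap and stability.
Y = np.column_stack([X[:, 0] + rng.normal(scale=0.3, size=2000),
                     X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=2000)])

# Step 4: marginal regressors, one per property, trained on the raw attributes.
marginals = [GradientBoostingRegressor().fit(X, Y[:, p]) for p in range(Y.shape[1])]

def augment(X):
    """Append every property's marginal estimate to the original attribute vector."""
    return np.hstack([X] + [m.predict(X).reshape(-1, 1) for m in marginals])

# Step 5: joint regressors retrained on augmented attributes; correlations among
# properties now enter through the appended marginal estimates.
joints = [GradientBoostingRegressor().fit(augment(X), Y[:, p]) for p in range(Y.shape[1])]

y0_hat = joints[0].predict(augment(rng.normal(size=(5, 10))))  # joint prediction, property 0
```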

II-A2 Rationale Generator

The goal of the rationale generator is to provide decision-level explanations and model-level explanations. Decision-level explanations provide reasoning such as: what made the ML algorithm make a specific decision/prediction? Model-level explanations, on the other hand, focus on providing understanding at the class level, e.g., which chemical attributes help discriminate among insulators, semiconductors, and conductors?

Decision Level Explanations: The proposed ML pipeline explains its predictions for previously unseen compounds by providing similar known examples (or prototypes). Explanation by examples is motivated by studies of human reasoning showing that the use of examples (analogy) is fundamental to the development of effective strategies for better decision-making25. Example-based explanations are widely used to improve user explainability of complex ML models. In our context, for every unseen test example, in addition to the predicted property values, we provide similar experimentally known compounds along with their similarity to the test compound in the feature space. Our feature space is heterogeneous (both continuous and categorical features), so Euclidean distance is not reliable. We therefore propose to quantify similarity using Gower's metric26, which can measure similarity between data containing a combination of logical, numerical, categorical, or text entries. The distance is always a number between 0 (similar) and 1 (maximally dissimilar). Furthermore, as a consequence of breaking a large regression problem into a multi-class classification followed by a simpler regression problem, we can also provide the logical sequence of decisions taken to reach a prediction.
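For concreteness, a minimal implementation of the Gower distance over mixed numerical/categorical records is sketched below; the attribute names and value ranges are hypothetical.

```python
import numpy as np

def gower_distance(a, b, num_idx, cat_idx, ranges):
    """Gower distance between two mixed-type records: 0 = identical, 1 = maximally dissimilar.
    num_idx / cat_idx: indices of numerical / categorical features.
    ranges: value range of each numerical feature over the whole data set."""
    sims = []
    for j, r in zip(num_idx, ranges):            # numerical: range-normalized similarity
        sims.append(1.0 - abs(a[j] - b[j]) / r if r > 0 else 1.0)
    for j in cat_idx:                            # categorical: exact-match similarity
        sims.append(1.0 if a[j] == b[j] else 0.0)
    return 1.0 - float(np.mean(sims))

# Hypothetical records: [mean melting temperature (K), mean electronegativity, crystal system]
test_compound = [1700.0, 1.9, "cubic"]
prototype     = [1650.0, 2.1, "cubic"]
d = gower_distance(test_compound, prototype, num_idx=[0, 1], cat_idx=[2],
                   ranges=[3000.0, 3.2])
print(f"Gower distance to prototype: {d:.3f}")   # small distance -> close prototype
```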

Model Level Explanations: Knowing which chemical attributes are important in a model's prediction (feature importance), and how they are combined, can be very powerful in helping material scientists understand and trust automatic ML systems. Due to the structure of our pipeline (classification followed by regression), we can provide more fine-grained feature-importance explanations than a single regression model. Specifically, we break the feature importance of attributes for predicting a material property into: feature importance for discriminating among different material classes (inter-class), and feature importance for regression on a material sub-domain (intra-class). This provides a more in-depth explanation of the property prediction process. Furthermore, we can also provide simple classification rules for different material classes using a decision tree classifier. A decision tree combines simple questions about the materials data in an interpretable way and helps in understanding the decision-making and prediction process.
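The following short sketch shows how both views, together with simple decision-tree rules, can be read off fitted tree-based models; the data and attribute names are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = 3.0 * X[:, 0] + rng.normal(size=1000)     # stand-in property
c = np.digitize(y, [-2.0, 2.0])               # three material classes

# Inter-class importances: which attributes separate the material classes.
clf = GradientBoostingClassifier().fit(X, c)
print("inter-class:", np.round(clf.feature_importances_, 2))

# Intra-class importances: which attributes drive the property within one class.
reg = GradientBoostingRegressor().fit(X[c == 1], y[c == 1])
print("intra-class:", np.round(reg.feature_importances_, 2))

# Human-readable classification rules from a shallow decision tree.
rules = DecisionTreeClassifier(max_depth=2).fit(X, c)
print(export_text(rules, feature_names=[f"attr_{i}" for i in range(5)]))
```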

II-A3 Robust Model Performance Evaluation and Uncertainty Quantification

The distributionally skewed training data biases the learning system towards solutions that may not be in accordance with the user's end goal. Most existing learning systems work by searching the space of possible models with the goal of optimizing some criterion (or numerical score). These metrics are usually related to some form of average performance over the whole train/test data and can be misleading when the sampled train/test data are not representative of the true distribution. More specifically, commonly used evaluation metrics (such as mean squared error and R-squared) assume an unbiased (or uniform) sampling of the test data and break down in the presence of distributionally skewed test data (shown later). Therefore, we propose to perform class-specific evaluations (by partitioning the property space into multiple classes of interest), which better characterize the predictive performance of ML models in the presence of distributionally skewed data. We also recommend visualizing predicted and actual property values in combination with the numeric scores to build a better intuition about the predictive performance.
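For instance, a class-specific evaluation can be implemented as follows, reporting each metric per partition of the property space rather than one data-set-wide average; the thresholds are whatever partition the application dictates.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def class_specific_scores(y_true, y_pred, thresholds):
    """MAE / MSE / R^2 computed separately on each partition of the property space,
    so that minority-class errors are not washed out by the majority class."""
    classes = np.digitize(y_true, thresholds)
    return {int(k): {"MAE": mean_absolute_error(y_true[classes == k], y_pred[classes == k]),
                     "MSE": mean_squared_error(y_true[classes == k], y_pred[classes == k]),
                     "R2": r2_score(y_true[classes == k], y_pred[classes == k])}
            for k in np.unique(classes)}
```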

Note that having a robust evaluation metric only partially solves the problem, as ML models are susceptible to over-confident extrapolations. As we will show later, in imbalanced learning scenarios, ML models make overconfident extrapolations which have a higher probability of being wrong (e.g., predicting a conductor to be an insulator with high confidence). In other words, a model's own confidence score cannot be trusted. To overcome this problem, we use a set of labeled, experimentally known compounds as side information to help determine a model's trustworthiness for a particular unseen test example. The trust score is defined as follows:

trust(x) = d_other(x) / (d_same(x) + d_other(x)),     (1)

where d_same(x) denotes the average Gower distance from the test sample x to nearby labeled samples in its predicted class, and d_other(x) denotes the average Gower distance from x to nearby labeled samples in the other classes.

The trust score thus takes into account the average Gower distance from the test sample to nearby samples in the same (predicted) class versus the average Gower distance to nearby samples in other classes. It ranges from 0 to 1, where a higher value indicates a more trustworthy prediction.
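A sketch of one plausible reading of Eq. (1) is given below, computing the score from the k nearest labeled reference compounds under precomputed Gower distances; the neighborhood size k is an assumption.

```python
import numpy as np

def trust_score(dists, labels, predicted_class, k=10):
    """Trust score per Eq. (1): 'dists' holds precomputed Gower distances from the
    test sample to every labeled reference compound, 'labels' their class labels.
    Returns a value in (0, 1); higher means the sample sits far from other classes
    relative to its own predicted class, i.e., a more trustworthy prediction."""
    d_same = np.sort(dists[labels == predicted_class])[:k].mean()
    d_other = np.sort(dists[labels != predicted_class])[:k].mean()
    return d_other / (d_same + d_other)
```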

II-B Example Applications

In this section, we discuss two distinct applications of our reliable and explainable ML pipeline to demonstrate its versatility: predicting five physically distinct properties of crystalline compounds, and identifying potentially stable solar cells. In both cases, we use the same general framework, i.e., the same attributes and ML pipeline. Through these examples, we discuss all aspects of creating reliable and explainable ML models: building a reliable machine learning model from distributionally skewed training data, generating explanations to gain a better understanding of the data/model, evaluating model accuracy, and employing the model to predict new materials.

II-B1 Predicting Properties of Crystalline Compounds

Density functional theory (DFT) is a ubiquitous tool for predicting the properties of crystalline compounds. However, DFT is fundamentally limited by the amount of computational time required for complex calculations. ML methods, on the other hand, offer the promise of property predictions several orders of magnitude faster than DFT. Thus, we explore the use of data from the OQMD DFT calculation database as training data for ML models that can rapidly assess many more materials than would be feasible to evaluate using DFT.

Data Set: The OQMD contains several properties of hundreds of thousands of crystalline compounds as calculated using DFT. The diversity and scale of the data in the OQMD make it ideal for studying the performance of general-purpose ML models using a single, uniform data set. We select the subset of compounds from the OQMD that represents the lowest-energy compound at each unique composition and use it as our training set. Building on existing strategies23, we use a set of attributes/features to represent each compound. Using these features, we consider the problem of developing reliable and explainable ML models to predict five physically distinct properties currently available through the OQMD: bandgap energy (eV), volume/atom (Å³/atom), energy/atom (eV/atom), thermodynamic stability (eV/atom), and formation energy (eV/atom)27. Units for these properties are omitted in the rest of the paper for ease of notation. A detailed description of the attributes (used as inputs) and properties (used as outputs) is provided in the Supplementary Materials.

Method: We quantify the predictive performance of our approach using cross-validation. Following the procedure in Sec. II-A1, we partition the property space for each property into classes, setting decision-boundary thresholds for class separation (with the resulting class distributions) for bandgap energy, volume/atom, energy/atom, stability, and formation energy. (We also tried different combinations of thresholds, and the trends in the obtained results were consistent. In practice, these thresholds can be provided by domain experts depending on the specific application, as done in Sec. II-B2.) Sub-sampling ratios for the sample compounds (for obtaining an evenly distributed training set) were determined using cross-validation. We train Extreme Gradient Boosting (XGB) classifiers to perform multi-class classification using the softmax objective for each property. Next, we train Gradient Boosting Regressors (GBRs) for each property-class pair independently (we refer to these as marginal regressors). Using these marginal regressors, we create augmented feature vectors for correlation-based predictions. Finally, we train another set of GBR regressors for each property-class pair on the augmented data (we refer to these as joint regressors, as they exploit the correlation present among properties to improve prediction performance).

TABLE I: Overall prediction scores (MAE, MSE, and R²) of the conventional technique for energy/atom, volume/atom, bandgap energy, formation energy, and stability. Cross-validation gives the impression that conventional regressors have excellent regression performance (i.e., low MAE/MSE and high R² scores). However, we show later that these metrics provide misleading inferences due to the presence of distributionally skewed data.

Results: For the conventional scheme, we train independent GBR regressors to directly predict properties from the features corresponding to the compounds. In Table I, we report different error metrics to quantify the regression performance using cross-validation. Note that these metrics report an accumulated/average error score on the test set (which comprises compounds from all partitions of the property space). These results are comparable to the state of the art23 and suggest that conventional regressors have excellent regression performance (low MAE/MSE and high R² scores). Relying on the inferences made by this evaluation method, we may be tempted to use these regression models in practice for different applications (such as screening or discovery of novel solar cells). However, we show next that these metrics provide misleading inferences in the presence of distributionally skewed data. In Table II(a), we perform class-specific evaluations (i.e., we partition the property space for each property into classes and use the test data belonging to each class separately). Surprisingly, Table II(a) shows that conventional regressors perform well only on a specific class (or range of property values) – specifically, only on the majority classes (i.e., the property value ranges into which the majority of compounds fall). The conventional regression method performs particularly poorly on the minority classes for bandgap energy and stability prediction, where the data distribution is highly skewed (see Fig. 1). Unfortunately, the test data is also distributionally skewed and is not representative of the true data distribution. Thus, standard methods for assessing and ensuring generalizability of ML models break down and lead to misleading conclusions (as shown in Table I). Class-specific evaluations, on the other hand, better characterize the predictive performance of ML models in the presence of distributionally skewed data.

(a) Conventional technique. Class-specific cross-validation shows that the conventional technique performs poorly on minority classes. This important observation cannot be made from Table I.
(b) Proposed technique without transfer learning. The simplicity (or explainability) gained from smaller and simpler models results in a performance loss. This is not surprising, as there is a trade-off between simplicity/explainability and prediction performance.
(c) Proposed technique with transfer learning. The transfer learning step in our pipeline compensates for the performance loss due to the simplicity of the models and in fact outperforms the conventional technique (especially on minority classes). We suspect that this gain may also be due to the fact that simpler models perform better in the low-data regime (e.g., minority classes), as opposed to complex models, which may over-fit (and require a large amount of data to perform well).
TABLE II: Class-specific prediction score comparison (MAE, MSE, and R² per class for energy/atom, volume/atom, bandgap energy, formation energy, and stability, together with the class distributions for each property). Class-specific cross-validation provides reliable inferences and shows the superiority of the proposed scheme over the conventional scheme.

In Table II(b), we show the effect of transforming a single complex regression model into an ensemble of smaller and simpler models to gain a better understanding of sub-domains (steps 1–4 in Sec. II-A1). We notice that the performance of these transformed simpler models is worse than that of a single complex model (as given in Table II(a)). This suggests that there is a trade-off between simplicity/explainability and accuracy.

Finally, Table II(c) shows how this performance loss due to the simplicity of the models can be overcome using the transfer learning (or correlation-based fusion) step in our pipeline. We observe that the proposed transfer learning technique can exploit correlations in the property space very well, resulting in a significant performance gain compared to the conventional regression approach. (Surprisingly, we did not observe any gain when using transfer learning with the conventional technique; in fact, those models showed severe over-fitting to the predicted properties.) Note that this gain is achieved in spite of having simpler and smaller models in our ML pipeline, which suggests that a user can achieve high accuracy without sacrificing explainability. We also observed that the sub-sampling step in our pipeline had a positive impact on the regression performance for minority classes.

Furthermore, our pipeline also quantifies uncertainties in its predictions providing a confidence score to the user. We show an illustration of the uncertainty quantification of bandgap energy and stability predictions on test samples in Figure 3. It can be seen that regressors perform poorly in regions with high uncertainty.

Fig. 3: Uncertainty quantification of the regressor (ground truth in blue, predictions in red, gray shaded area representing uncertainty) for (a) bandgap energy and (b) stability. In several cases, regressors perform poorly in regions with high uncertainty.

We would also like to point out that in cases where the data from a specific class is heavily under-represented, none of the model design strategies will improve the performance and generating new data may be the only possible solution (e.g., bandgap energy prediction for minority classes). In such cases, relying solely on cross-validation score or confidence score may not provide reliable inference (shown later). To overcome this challenge, explainable machine learning can be a potentially viable solution.

Fig. 4: Feature importance for the three-class classification of bandgap energy. The rationale generator favors attributes related to melting temperature, electronegativity, and volume per atom for explaining bandgap-energy predictions. These attributes are all known to be highly correlated with the bandgap energy level of crystalline compounds.

Next, we show the output of the rationale generator in our pipeline. Specifically, we provide model-level explanations, as well as decision-level explanations, for each sub-class of materials. For model-level explanations, our pipeline provides feature importance for both the classification and regression steps. Feature importance provides a score that indicates how useful (or valuable) each feature was in the construction of the model: the more an attribute is used to make key decisions within the (classification/regression) model, the higher its relative importance. This importance is calculated explicitly for each attribute in the data set, allowing attributes to be ranked and compared to each other. In Fig. 4, we show the feature importance for our three-class classifier for bandgap energy, i.e., the attributes that help discriminate among the three classes of compounds (insulators, semiconductors, and conductors) based on their bandgap energy values. Note that the rationale generator picked attributes related to the melting temperature, electronegativity, and volume per atom of the constituent elements as the most important features in determining the bandgap energy level of a compound. This is reasonable, as all of these attributes are known to be highly correlated with the bandgap energy level of crystalline compounds. For example, the melting temperature of the constituent elements is positively correlated with inter-atomic forces (and in turn inter-atomic distances). Increased inter-atomic spacing decreases the potential seen by the electrons in the material, which in turn reduces the bandgap energy; therefore, the band structure changes as a function of inter-atomic forces, which are correlated with melting temperature. Similarly, in a multi-element material system, as the electronegativity difference between different atoms increases, so does the energy difference between bonding and anti-bonding orbitals; therefore, the bandgap energy increases as the electronegativities of the constituent elements increase. Thus, the bandgap energy has a strong correlation with the electronegativity of the constituent elements. Finally, the mean volume per atom of the constituent elements is also correlated with the inter-atomic distance in a material system. As explained above, inter-atomic distance is negatively correlated with the bandgap energy, and so is the mean volume per atom of the constituent elements. Similar feature importance results for class-specific predictors can also be obtained (see Supplementary Material).

Our rationale generator also provides decision-level explanations. Specifically, for every unseen test example, in addition to the predicted property value, we provide similar experimentally known compounds (or prototypes) with their corresponding distances to the test compound. These prototypes are extremely useful for identifying whether the ML model is making an over-confident extrapolation, which has a higher probability of being wrong.

TABLE III: Bandgap energy prediction and uncertainty quantification for representative test compounds, listing the ground truth (class, bandgap), the prediction (class, bandgap), the model's confidence score, and the proposed trust score. The model's own confidence score alone is not reliable, as the model makes wrong yet over-confident predictions on the minority classes. In contrast, a higher (or lower) trust score consistently implies a higher (or lower) probability that the classifier (or regressor) is correct.

In Table III, we show test compounds with ground truths (class, bandgap energy value), predictions (class, bandgap energy value), and corresponding confidence scores. It can be seen that both the classifier and regressor make wrong and over-confident predictions on the minority classes. In other words, a higher confidence score from the model for a minority class does not necessarily imply a higher probability that the classifier (or regressor) is correct. For compounds in minority classes, the ML model may simply not be the best judge of its own trustworthiness. On the other hand, the proposed trust score (as given in (1)) consistently outperforms the classifier's/regressor's own confidence score: a higher/lower trust score implies a higher/lower probability that the classifier (or regressor) is correct. Furthermore, as our trust score is computed using distances from experimentally known compounds in the Inorganic Crystal Structure Database (ICSD)28, it also provides some confidence in a compound's amenability to synthesis.

II-B2 Novel Stable Solar Cell Prediction

To show how our ML pipeline can be used for discovering new materials, we simulate a search for stable compounds with bandgap energy within a desired range. To evaluate the ability of our approach to locate compounds that are stable and have bandgap energies within the target range, we set up an experiment in which a model was fit on the training data set and then tasked with selecting which compounds in the test data were most likely to be stable and have a bandgap energy in the range desired for solar cells.

Data Set: As before, for the training data we selected the subset of compounds from the OQMD representing the lowest-energy compound at each unique composition, and we use the same attributes as before. Using these attributes/features, we consider the problem of developing reliable and explainable ML models to predict two physically distinct properties of stable solar cells: bandgap energy and stability. Note that this experiment is more challenging and practical than that of Ward et al.23, where the training data set was restricted to compounds reported in the ICSD as experimentally realizable, so that only bandgap energy, and not stability, needed to be considered. We choose as our test data set the compositions suggested by Meredig et al.5 to be as-yet-undiscovered ternary compounds, which are not yet in the OQMD.

TABLE IV: Compositions of materials predicted by the proposed ML pipeline to be stable candidates for solar cell applications, with predicted bandgap energy and stability, experimentally known prototypes and their Gower distances from the predicted candidates, and trust scores.

Method: Following the procedure in Sec. II-A1, we partition the property space for each property into classes; the decision-boundary thresholds for class separation for bandgap energy and stability follow the solar cell design requirements above. As in Sec. II-B1, we use Extreme Gradient Boosting (XGB) classifiers for multi-class classification and Gradient Boosting Regressors (GBRs) for marginal and joint regression. We use the models' own confidence together with the trust score to rank the potentially stable solar cell candidates.
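The screening itself then reduces to a filter-and-rank step over the model outputs, sketched below with made-up numbers; the bandgap window and the stability criterion are placeholders for the design requirements above.

```python
import numpy as np

# Hypothetical model outputs for five candidate compositions.
names         = np.array(["A", "B", "C", "D", "E"])
bandgap_hat   = np.array([1.2, 0.1, 1.5, 2.9, 1.1])        # predicted bandgap (eV)
stability_hat = np.array([-0.05, -0.2, 0.3, -0.1, -0.01])  # negative = predicted stable
trust         = np.array([0.81, 0.65, 0.90, 0.72, 0.88])

lo, hi = 0.9, 1.7   # assumed target bandgap window for solar absorbers (eV)
keep = (bandgap_hat >= lo) & (bandgap_hat <= hi) & (stability_hat < 0)
ranked = names[keep][np.argsort(-trust[keep])]
print(ranked)        # candidates satisfying both criteria, most trustworthy first
```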

Results: We used the proposed ML pipeline to search for new stable compounds (i.e., those not yet in the OQMD). Specifically, we use the trained models to predict the bandgap energy and stability of compositions that were suggested by Meredig et al.5 to be as-yet-undiscovered ternary compounds. Out of this list of compounds, we found several that are likely to be stable and have favorable bandgap energies. A subset with high trust scores is shown in Table IV. The similar experimentally known prototypes (shown in Table IV) can also serve as an initial guess of the 3-d crystal structure of the predicted compounds. These recommendations appear reasonable, as four of the six suggested compounds (CsCrSe, CsSbS, CsVSe, and NaAgO) can be classified as I-III-VI semiconductors, i.e., semiconductors that contain an alkali metal, a transition metal, and a chalcogen. I-III-VI semiconductors are a known promising class of photovoltaic materials, as many have direct bandgap energies well-matched to the solar spectrum. The best known I-III-VI photovoltaic is copper-indium-gallium-selenide (CIGS), which has solar cell power conversion efficiencies on par with silicon's. The other two identified compounds – ThCO and PmPtSe – are unique in that they contain actinide and lanthanide elements. However, from a practical perspective, the scarcity and radioactivity of these elements may make them challenging to explore experimentally.

A detailed list of potentially stable solar cell compounds (with corresponding property predictions and explanations) is provided in the Supplementary Material.

III Conclusions

In this paper, we considered the problem of learning reliable and explainable machine learning models from underrepresented and distributionally skewed materials science data. We identified common pitfalls of existing ML techniques when learning from imbalanced data and showed how applying ML techniques without careful consideration of their assumptions and limitations can lead to both quantitatively and qualitatively incorrect predictive models. To overcome the limitations of existing ML techniques, we proposed a general-purpose explainable and reliable ML framework for learning from imbalanced material data. We also proposed a new evaluation metric and a trust score to better quantify confidence in the predictions. The rationale generator component in our pipeline provides useful model-level and decision-level explanations to establish trust in the ML model and its predictions. Finally, we demonstrated the applicability of our technique by predicting five physically distinct properties of crystalline compounds and identifying potentially stable solar cells.

IV Materials and Methods

All machine learning models were created using the Scikit-learn29 and XGBoost30 machine learning libraries. The Materials Agnostic Platform for Informatics and Exploration (Magpie)23 was used to compute the attributes. Scikit-learn, XGBoost and Magpie are available under open-source licenses. The software, training data sets and input files used in this work are provided in the Supplementary Information associated with this manuscript.

V Acknowledgements

The authors would like to thank Dr. Joel Varley and Dr. Mike Surh for their valuable feedback, suggestions, and discussions during the preparation of this manuscript.

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 16-ERD-019 and 19-SI-001. (LLNL-JRNL-764864)

VI Contributions

B.K. and T.Y.H. conceived the project, B.K. performed the experiments, and B.K. and B.G. analyzed the results. All authors discussed the results and contributed to the writing of the manuscript.

VII Competing Interests

The authors declare no conflict of interest.

References

  • 1 S. Kirklin, J. E. Saal, B. Meredig, A. Thompson, J. W. Doak, M. Aykol, S. Rühl, and C. Wolverton, “The Open Quantum Materials Database (OQMD): assessing the accuracy of DFT formation energies,” npj Computational Materials, vol. 1, p. 15010, 2015.
  • 2 T. A. Hogan and B. Kailkhura, “Universal hard-label black-box perturbations: Breaking security-through-obscurity defenses,” arXiv preprint arXiv:1811.03733, 2018.
  • 3 S. Srinivasan and K. Rajan, ““property phase diagrams” for compound semiconductors through data mining,” Materials, vol. 6, no. 1, pp. 279–290, 2013.
  • 4 L. M. Ghiringhelli, J. Vybiral, S. V. Levchenko, C. Draxl, and M. Scheffler, “Big data of materials science: Critical role of the descriptor,” Physical review letters, vol. 114, no. 10, p. 105503, 2015.
  • 5 B. Meredig, A. Agrawal, S. Kirklin, J. E. Saal, J. Doak, A. Thompson, K. Zhang, A. Choudhary, and C. Wolverton, “Combinatorial screening for new materials in unconstrained composition space with machine learning,” Physical Review B, vol. 89, no. 9, p. 094104, 2014.
  • 6 C. S. Kong, W. Luo, S. Arapan, P. Villars, S. Iwata, R. Ahuja, and K. Rajan, “Information-theoretic approach for the discovery of design rules for crystal chemistry,” Journal of chemical information and modeling, vol. 52, no. 7, pp. 1812–1820, 2012.
  • 7 F. Faber, A. Lindmaa, O. A. von Lilienfeld, and R. Armiento, “Crystal structure representations for machine learning models of formation energies,” International Journal of Quantum Chemistry, vol. 115, no. 16, pp. 1094–1101, 2015.
  • 8 K. Schütt, H. Glawe, F. Brockherde, A. Sanna, K. Müller, and E. Gross, “How to represent crystal structures for machine learning: Towards fast prediction of electronic properties,” Physical Review B, vol. 89, no. 20, p. 205118, 2014.
  • 9 G. Pilania, C. Wang, X. Jiang, S. Rajasekaran, and R. Ramprasad, “Accelerating materials property predictions using machine learning,” Scientific reports, vol. 3, 2013.
  • 10 A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, “Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons,” Physical review letters, vol. 104, no. 13, p. 136403, 2010.
  • 11 A. Seko, T. Maekawa, K. Tsuda, and I. Tanaka, “Machine learning with systematic density-functional theory calculations: Application to melting temperatures of single-and binary-component solids,” Physical Review B, vol. 89, no. 5, p. 054303, 2014.
  • 12 Z.-Y. Hou, Q. Dai, X.-Q. Wu, and G.-T. Chen, “Artificial neural network aided design of catalyst for propane ammoxidation,” Applied Catalysis A: General, vol. 161, no. 1, pp. 183–190, 1997.
  • 13 B. G. Sumpter and D. W. Noid, “On the design, analysis, and characterization of materials using computational neural networks,” Annual Review of Materials Science, vol. 26, no. 1, pp. 223–277, 1996.
  • 14 H. Bhadeshia, R. Dimitriu, S. Forsik, J. Pak, and J. Ryu, “Performance of neural networks in materials science,” Materials Science and Technology, vol. 25, no. 4, pp. 504–510, 2009.
  • 15 S. Atahan-Evrenk and A. Aspuru-Guzik, “Prediction and calculation of crystal structures,” Topics in Current Chemistry, vol. 345, 2014.
  • 16 L. Yang and G. Ceder, “Data-mined similarity function between material compositions,” Physical Review B, vol. 88, no. 22, p. 224107, 2013.
  • 17 A. M. Deml, R. O’Hayre, C. Wolverton, and V. Stevanović, “Predicting density functional theory total energies and enthalpies of formation of metal-nonmetal compounds by linear regression,” Physical Review B, vol. 93, no. 8, p. 085142, 2016.
  • 18 S. Curtarolo, D. Morgan, K. Persson, J. Rodgers, and G. Ceder, “Predicting crystal structures with data mining of quantum calculations,” Physical review letters, vol. 91, no. 13, p. 135503, 2003.
  • 19 C. C. Fischer, K. J. Tibbetts, D. Morgan, and G. Ceder, “Predicting crystal structure by merging data mining with quantum mechanics,” Nature materials, vol. 5, no. 8, pp. 641–646, 2006.
  • 20 G. Hautier, C. Fischer, V. Ehrlacher, A. Jain, and G. Ceder, “Data mined ionic substitutions for the discovery of new compounds,” Inorganic chemistry, vol. 50, no. 2, pp. 656–663, 2010.
  • 21 P. Dey, J. Bible, S. Datta, S. Broderick, J. Jasinski, M. Sunkara, M. Menon, and K. Rajan, “Informatics-aided bandgap engineering for solar materials,” Computational Materials Science, vol. 83, pp. 185–195, 2014.
  • 22 G. Pilania, A. Mannodi-Kanakkithodi, B. Uberuaga, R. Ramprasad, J. Gubernatis, and T. Lookman, “Machine learning bandgaps of double perovskites,” Scientific reports, vol. 6, p. 19375, 2016.
  • 23 L. Ward, A. Agrawal, A. Choudhary, and C. Wolverton, “A general-purpose machine learning framework for predicting properties of inorganic materials,” npj Computational Materials, vol. 2, p. 16028, 2016.
  • 24 N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “Smote: synthetic minority over-sampling technique,” Journal of artificial intelligence research, vol. 16, pp. 321–357, 2002.
  • 25 A. Newell and H. A. Simon, Human Problem Solving.   Englewood Cliffs, NJ: Prentice-Hall, 1972.
  • 26 J. van den Hoven, “Clustering with optimised weights for gower’s metric,” Netherlands: University of Amsterdam, 2015.
  • 27 A. A. Emery and C. Wolverton, “High-throughput DFT calculations of formation energy, stability and oxygen vacancy formation energy of ABO3 perovskites,” Scientific Data, vol. 4, p. 170153, 2017.
  • 28 A. Belsky, M. Hellenbrandt, V. L. Karen, and P. Luksch, “New developments in the inorganic crystal structure database (icsd): accessibility in support of materials research and design,” Acta Crystallographica Section B: Structural Science, vol. 58, no. 3, pp. 364–369, 2002.
  • 29 F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, Nov. 2011. [Online]. Available: http://dl.acm.org/citation.cfm?id=1953048.2078195
  • 30 T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16.   New York, NY, USA: ACM, 2016, pp. 785–794. [Online]. Available: http://doi.acm.org/10.1145/2939672.2939785
  • 31 A. A. Emery and C. Wolverton, “High-throughput DFT calculations of formation energy, stability and oxygen vacancy formation energy of ABO3 perovskites,” Scientific Data, vol. 4, p. 170153, 2017.

Supplementary Material

-A Attributes and Properties

The first step of our pipeline is to compute attributes (or chemical descriptors) based on the composition of materials. These attributes should be descriptive enough to enable an ML algorithm to construct general rules that can possibly “learn” chemistry. Building on existing strategies23, we use a set of attributes/features to represent each compound. These attributes comprise stoichiometric properties, elemental property statistics, electronic structure attributes, and ionic compound attributes. A detailed procedure for computing these attributes can be found in the Materials Agnostic Platform for Informatics and Exploration (Magpie)23.

Using these features, we consider the problem of developing reliable and explainable ML models to predict five physically distinct properties currently available through the OQMD: bandgap energy (eV), volume/atom (Å³/atom), energy/atom (eV/atom), thermodynamic stability (eV/atom), and formation energy (eV/atom). Formation energy is simply the total energy/atom minus some correction factors (i.e., the material with the lowest formation energy at each composition also has the lowest energy per atom). Stability reflects whether a particular material is thermodynamically stable or not: compounds with a negative stability are stable and those with a positive stability are unstable. More information on the output properties is provided by Emery et al.31, 27.

-B Feature Importance for Class-specific Regression

Feature importance results for class-specific predictors can also be obtained.

Fig. 5: Feature importance for the class-specific formation energy prediction regressors: (a) class 0, (b) class 1, and (c) class 2.

In Fig. 5, we show the feature importance of the formation energy prediction regressors for all classes. For all three classes, thermodynamic stability is found to be the most important attribute in predicting formation energy. From a thermodynamic point of view this makes sense, as stability is negatively correlated with formation energy. More results are provided in the Supplementary Information associated with this manuscript.

-C Stable Solar Cell Compounds

A detailed list of potentially stable solar cell compounds (with corresponding property predictions and explanations) is provided in the Supplementary Information associated with this manuscript.

-D Other

The software, training data sets and input files used in this work are provided in the Supplementary Information associated with this manuscript.
