Clusters in Explanation Space: Inferring disease subtypes from model explanations

Abstract

Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection. Discovering these subtypes in noisy, high-dimensional biomedical data is often impossible for humans and challenging for machines.

We introduce a new approach to facilitate the discovery of disease subtypes: Instead of analyzing the original data, we train a diagnostic classifier (healthy vs. diseased) and extract instance-wise explanations for the classifier’s decisions. The distribution of instances in the explanation space of our diagnostic classifier amplifies the different reasons for belonging to the same class - resulting in a representation that is uniquely useful for discovering latent subtypes.

We compare our ability to recover subtypes via cluster analysis on model explanations to classical cluster analysis on the original data. In multiple datasets with known ground-truth subclasses, most compellingly on UK Biobank brain imaging data and transcriptome data from the Cancer Genome Atlas, we show that cluster analysis on model explanations substantially outperforms the classical approach.

While we believe clustering in explanation space to be particularly valuable for inferring disease subtypes, the method is more general and applicable to any kind of subtype identification.

1 Introduction

Many diseases manifest differently in different humans. This heterogeneity is especially pronounced in fields like psychiatry or oncology where visible symptoms are far removed from the underlying pathomechanism. What appears to be a coherent collection of symptoms is often the expression of a variety of distinct disease subtypes, each with different disease progression and different treatment responses. Dealing with these subtypes is the purview of precision medicine: identified subtypes and their corresponding biomarkers are used to refine diagnoses, predict treatment responses and disease progression, and to inform further scientific research [Bzdok and Meyer-Lindenberg, 2017].

The search for biologically grounded subtypes frequently uses cluster analysis, often hierarchical, in either the original feature space or in some hand-crafted embedding space [Carey et al., 2006, Erro et al., 2013, Drysdale et al., 2017]. This is particularly challenging when faced with high-dimensional feature spaces, such as MRI or genome data, where the high dimensionality and a generally low signal-to-noise ratio impede the straightforward application of modern clustering algorithms. In an intermediate step, data is often transformed from the original feature space into a more informative embedding space. In this paper, we propose a novel space that we believe to be particularly useful for identifying latent subtypes: the space of explanations corresponding to a diagnostic classifier.

Recent interest in explaining the output of complex machine learning models has been characterized by a wide range of approaches [Lipton, 2016, Montavon et al., 2018], most of them focused on providing an instance-wise explanation of a model’s output as either a subset of input features [Ribeiro et al., 2018, Chen et al., 2018], or a weighting of input features [Ribeiro et al., 2016, Lundberg and Lee, 2017]. The latter, where each input feature is weighted according to its contribution to the underlying model’s output for an instance, can be thought of as specifying a transformation from feature space to an explanation space. This explanation space is conditioned on the underlying model’s output. For example, in a classification task, assuming the underlying model is performing well, features that strongly affect classification will exhibit high variance in the new space, whereas others will exhibit correspondingly low variance.

We argue that the explanation space of a diagnostic classifier is an appropriate embedding space for subsequent cluster analyses aimed at the discovery of latent disease subtypes. Firstly, the explanations should collapse features that are irrelevant to the classification of a particular disease, thereby mitigating the Curse of Dimensionality. Secondly, we expect different disease subtypes to have structurally different explanations for belonging to a disease class. The explanation space of a diagnostic classifier is expected to amplify the different reasons for belonging to the same class - resulting in a representation that should be uniquely useful for recovering latent subtypes. The intuition here is that instance-wise explanations refer to a local part of the classifier’s decision boundary. This means that if the decision boundary is differently oriented for different parts of the space (due, for example, to multiple distinct underlying subclasses), the explanations will also differ meaningfully.

For a proof of principle, we take the approach of converting multi-class classification problems into binary ones for the purposes of training the underlying model. This means that each ‘class’ has several distinct subclasses, of which the classifier is unaware - while we retain ground-truth knowledge of the respective subclasses. In four revisited datasets, we demonstrate that the clusters in explanation space recover these known subclasses.

2 Related Work

Examples of clustering work in the original feature space include Carey et al. [2006], who use hierarchical clustering to study population-based distributions and clinical associations of breast-cancer subtypes, and Erro et al. [2013], who use a k-means based clustering approach to test the hypothesis that the variability in the clinical phenotype of Parkinson’s disease is caused by the existence of multiple distinct subtypes of the disease. Drysdale et al. [2017] show that patients with depression can be subdivided into four neurophysiological subtypes by hierarchical clustering on a learned embedding space.

Using instance-wise explanations for clustering has previously been discussed in Lundberg et al. [2018], under the name “supervised clustering”. These authors show that clustering on Shapley values explains more model variance than other tree-specific feature importance estimates [Lundberg et al., 2018, Saabas, 2014]. To the best of our knowledge, no previous work has shown whether, and to what extent, clusters in explanation space can be of practical relevance, nor has any work drawn the link to disease subtyping in precision medicine. The present work bridges the gap between recent advances in explainable AI and real-world medical applications by linking the former to a crucial biomedical problem and providing not only a proof of concept, but a full, clinically relevant example: the discovery of cancer subtypes.

3 Data

We chose four datasets of increasing complexity to evaluate the efficacy of subclass recovery in explanation space: simple synthetic data as a proof of concept, Fashion-MNIST as a machine learning benchmark, age prediction from brain imaging data as a simple biomedical example, and cancer subtype detection as a challenging real-world biomedical problem. Brain imaging data was provided by the UK Biobank and cancer transcriptome data by the Cancer Genome Atlas. The two are among the world’s largest biomedical data collections and represent the two most likely fields of application: precision psychiatry and oncology, with big data in both p (dimensionality) and n (sample size).

The original Madelon dataset from the NIPS 2003 feature selection challenge [Guyon et al., 2005] is a synthetic binary classification problem, with data points placed in clusters on the vertices of a hypercube in a subspace of the feature space. The scikit-learn implementation [Pedregosa et al., 2011] of the generative algorithm was used to create a variation with 16 classes, 50 features, and data points distributed in one cluster per class on the vertices of a four-dimensional hypercube.
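
The following is a minimal sketch of how such a variant can be generated with scikit-learn's make_classification; the sample size, class separation, and random seed are illustrative assumptions rather than the values used for our dataset.

```python
from sklearn.datasets import make_classification

# Madelon-style variant: 50 features, 16 classes, one cluster per class placed
# on the vertices of a 4-D hypercube (n_informative=4, so 2^4 = 16 vertices).
# n_samples, class_sep, and random_state are assumptions for illustration.
X, y = make_classification(
    n_samples=10_000,
    n_features=50,
    n_informative=4,
    n_redundant=0,
    n_classes=16,
    n_clusters_per_class=1,
    class_sep=2.0,
    random_state=0,
)
```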

Zalando provides Fashion-MNIST [Xiao et al., 2017] as a more challenging drop-in replacement for MNIST, with the same dimensionality (784) and sample size (70,000). We chose Fashion-MNIST because the original MNIST has been argued to be too easy a problem for modern methods. Indeed, most MNIST classes can already be separated in the first two principal components of the feature space, making it ill-suited for the present analysis.

The UK Biobank [Miller et al., 2016] is one of the world’s largest biomedical data collections. It provides, amongst a multitude of other phenotypes, structural (T1) brain MRI data for 10,000 participants. Data provided by UKBB is preprocessed into 164 biologically motivated imaging-derived phenotypes (IDPs). For a biomedical proof of concept, we chose age as a simple target variable, cut into four quartile “classes”.
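
A minimal sketch of the quartile split, assuming the phenotypes are available as a pandas DataFrame; the `ukbb` table and its columns below are hypothetical stand-ins, since the real data requires UK Biobank access.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the UKBB table: one row per participant with an
# 'age' column (the 164 IDP columns would sit alongside it).
rng = np.random.default_rng(0)
ukbb = pd.DataFrame({"age": rng.integers(45, 80, size=1000)})

# Cut age into four quartile "classes" to obtain the hidden subtypes.
ukbb["age_class"] = pd.qcut(ukbb["age"], q=4, labels=[0, 1, 2, 3])
```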

The Cancer Genome Atlas (TCGA) project [Weinstein et al., 2013] provides transcriptome¹ data for various forms and subtypes of cancer. The data consists of 60,498 gene expression (FPKM) values for 8,500 participants (after removing participants with missing values). We chose the cancer tissue’s immune-model-based subtype (six classes: Wound Healing, IFN-gamma Dominant, Inflammatory, Lymphocyte Depleted, Immunologically Quiet, and TGF-beta Dominant) as a complex, clinically relevant target variable.

4 Methods

For each dataset, classes were split into two supersets, one containing the first three² classes, the other containing the remainder³. This yielded a new binary classification problem. This setup allows for evaluating approaches to subtype discovery: the binary classification data is intended to represent observable characteristics (e.g. the healthy vs. diseased distinction in medicine), while the original classes represent the hidden subtypes.
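
A minimal sketch of this superset split, assuming integer class labels; the default three-vs-rest grouping and the variable names are illustrative.

```python
import numpy as np

def to_binary_superset(y, superset_a=(0, 1, 2)):
    """Map original (sub)class labels to a binary 'diagnosis' label.

    Classes in `superset_a` form one superset (e.g. 'diseased'); all remaining
    classes form the other. The original labels are kept as the hidden
    ground-truth subtypes, used only for evaluation.
    """
    y = np.asarray(y)
    y_binary = np.isin(y, superset_a).astype(int)
    return y_binary, y  # binary targets, ground-truth subclasses

# Example usage on the labels `y` of any of the datasets above:
# y_binary, y_sub = to_binary_superset(y)
```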

A Random Forest classifier [Breiman, 2001] was trained on the binary classification problem and SHapley Additive exPlanations (SHAP; Lundberg and Lee 2017) were used to generate instance-wise explanations for the predictions of the classifier. The Shapley values of a conditional expectation function of the original model provide a unique additive feature importance measure that satisfies several important properties [Lundberg and Lee, 2017], and determine the expected contribution of each feature to the model’s output, under all possible subsets of features. Our choice of SHAP was motivated by the fact that there exists an efficient algorithm for computing Shapley values in the case of Random Forests [Lundberg et al., 2018]. Investigating the applicability of other explanation methods and classifiers to specific domains would be a natural direction for future work.
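
A sketch of this step with scikit-learn and the shap package, assuming `X` and `y_binary` from the superset split above; the hyperparameters are illustrative, not necessarily the settings used for our experiments.

```python
import shap
from sklearn.ensemble import RandomForestClassifier

# Train the diagnostic (binary) classifier; hyperparameters are assumptions.
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X, y_binary)

# TreeExplainer computes Shapley values efficiently for tree ensembles
# [Lundberg et al., 2018]. Depending on the shap version, shap_values is
# either a single array or a list with one array per class; take the
# attributions for the positive class in the latter case.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
explanations = shap_values[1] if isinstance(shap_values, list) else shap_values

# `explanations` has the same shape as X: one signed contribution per feature
# and instance, i.e. the coordinates of each instance in explanation space.
```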

Each explanation is represented as a vector of the same dimensionality as the original feature space, indicating how strongly and in what direction each feature contributed to the prediction result of a given observation. The space spanned by the model explanations was then compared to the original feature space, with respect to the distinguishability of ground-truth subtypes (the original classes of the dataset).

5 Results

To compare the correspondence of clustering in the respective spaces to the underlying distribution of class labels, we used three qualitatively distinct approaches. Firstly, we visually inspected the projection onto the first two principal components. Distinct clusters in the data would constitute major directions of variance and should be easily visible in the PCA projection, allowing for a first sanity check (Fig. 1). Secondly, we evaluated standard clustering quality indices to quantify structural differences between representations. We chose the Davies-Bouldin index, defined as the average similarity between each cluster and its most similar other cluster [Davies and Bouldin, 1979], the Silhouette Coefficient, which balances the mean distance between a sample and all other points in its cluster with the mean distance between that sample and all the points in the next nearest cluster [Rousseeuw, 1987], and the Calinski-Harabasz Index, which is given as the ratio of the between-cluster dispersion and the within-cluster dispersion [Caliński and Harabasz, 1974]. Lastly, to ensure that the transformation from feature space into explanation space really does lead to improved subclass recovery, we applied Agglomerative Clustering and report the Adjusted Mutual Information [Vinh et al., 2010] between reconstructed subclasses and ground-truth labels.
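
A sketch of this evaluation with scikit-learn, under two assumptions of ours: the quality indices are computed with the ground-truth subclass labels as the partition, and the number of clusters for agglomerative clustering is set to the number of known subclasses. The function name is hypothetical.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import (
    adjusted_mutual_info_score,
    calinski_harabasz_score,
    davies_bouldin_score,
    silhouette_score,
)

def evaluate_space(Z, y_subclass, n_clusters):
    """Score a representation Z (feature space or explanation space).

    Quality indices are computed against the ground-truth subclass labels;
    AMI is computed after agglomerative clustering with as many clusters as
    known subclasses (both are assumptions of this sketch).
    """
    scores = {
        "davies_bouldin": davies_bouldin_score(Z, y_subclass),
        "calinski_harabasz": calinski_harabasz_score(Z, y_subclass),
        "silhouette": silhouette_score(Z, y_subclass),
    }
    recovered = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Z)
    scores["adjusted_mi"] = adjusted_mutual_info_score(y_subclass, recovered)
    return scores
```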

To compare improvements derived from transformation into explanation space with those which can be gained by dimensionality reduction alone, we apply PCA, Isomap, and T-SNE⁴ to both feature space and explanation space, reducing their dimensionality down to two. We report clustering quality indices and Mutual Information for both feature space and explanation space in the original dimensionality as well as in the reduced PCA, Isomap, and T-SNE spaces.
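
A sketch of this comparison, reusing `evaluate_space` from the sketch above; `X`, `explanations`, and the subclass labels `y_sub` are assumed to be in memory, and n_clusters=3 mirrors the three-class superset shown in Fig. 1.

```python
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE

# Reduce both spaces to two dimensions (otherwise scikit-learn defaults,
# cf. footnote 4) and re-score each resulting representation.
spaces = {"(f)": X, "(e)": explanations}
reducers = {"none": None, "PCA": PCA, "Isomap": Isomap, "T-SNE": TSNE}

results = {}
for red_name, Reducer in reducers.items():
    for space_name, Z in spaces.items():
        Z_red = Z if Reducer is None else Reducer(n_components=2).fit_transform(Z)
        results[(red_name, space_name)] = evaluate_space(Z_red, y_sub, n_clusters=3)
```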

Quantitative results are shown in Table 1. Clustering quality indices and post-clustering Mutual Information consistently improved (average AMI gain of 0.45) when moving from feature space to explanation space in all four datasets, both before and after dimensionality reduction. Dimensionality reduction improved subclass recovery in both feature space and explanation space, with PCA performing worst, T-SNE performing best, and Isomap somewhat inconsistently in-between. Notably, improvement derived from dimensionality reduction was larger when reducing explanation space than when reducing feature space, with an average AMI gain from T-SNE of 0.06 in feature space and 0.13 in explanation space.

6 Conclusion

We propose explanation space as a powerful embedding to facilitate detection of latent disease subtypes, particularly by cluster analysis. In four revisited datasets, we have shown both that the distribution of data points in explanation space is sharply clustered, and that these clusters more accurately correspond to ground-truth subclasses than clusters derived from the original feature space.

We also demonstrated the relevance of our approach to the real world problem of disease subtype discovery. In both the Cancer Genome Atlas and the UK Biobank brain imaging data, task-conditioned explanations proved to be highly informative - to the extent that subtypes can be easily visually identified (Fig. 1).

Existing approaches to subtype discovery [Carey et al., 2006, Erro et al., 2013, Drysdale et al., 2017] can easily be adapted to work on task-conditioned explanations. Disease status labels are naturally available and can be used to train a diagnostic classifier. The classifier’s instance-wise explanations have the same dimensionality as the original data and can serve as a drop-in replacement.

Transforming from feature space to explanation space should not be seen as competing with dimensionality reduction, but rather as a complementary processing step. Both help to recover latent subtypes, and the benefits of dimensionality reduction appear to be substantially amplified by first transforming into an appropriate explanation space.

We hope that this work will serve as a starting point for further exploration of explanation-based methods for inferring disease subtypes. More broadly, the fact that off-the-shelf explanatory tools can be used to generate task-specific embeddings is undoubtedly a promising avenue for a variety of applications.

Figure 1: Projection onto the first two principal components after PCA of feature space (top row) and explanation space (bottom row) of the Synthetic, Fashion-MNIST, UK Biobank Brain-Age, and Cancer Genome Atlas datasets. Shown are the (sub-)classes of the three-class superset used for binary classification. As can be seen, clusters are well defined in explanation space; for the biomedical datasets, these subclasses correspond to real subtypes. For example, in TCGA (rightmost plot), the visible clusters correspond to Wound Healing (black), IFN-gamma Dominant (yellow), and Inflammatory (pink) cancer tissue immune model types.
Data       Dim. Red.  Space  Davies-Bouldin  Calinski-Harabasz  Silhouette  Adjusted MI
Synthetic  none       (f)    7.00            6.52               0.02        0.08
Synthetic  none       (e)    1.11            192.62             0.46        0.59
Synthetic  PCA        (f)    3.30            38.08              0.01        0.14
Synthetic  PCA        (e)    0.78            258.24             0.66        0.59
Synthetic  Isomap     (f)    12.90           3.93               -0.02       0.02
Synthetic  Isomap     (e)    0.72            187.73             0.71        0.52
Synthetic  T-SNE      (f)    29.45           0.61               -0.02       0.00
Synthetic  T-SNE      (e)    0.29            2937.57            0.79        0.97
Fashion    none       (f)    11.10           2.86               -0.03       0.00
Fashion    none       (e)    1.32            205.45             0.30        0.61
Fashion    PCA        (f)    5.72            2.18               -0.12       0.00
Fashion    PCA        (e)    0.74            581.87             0.51        0.56
Fashion    Isomap     (f)    4.87            75.60              -0.07       0.00
Fashion    Isomap     (e)    0.92            398.96             0.41        0.56
Fashion    T-SNE      (f)    2.30            65.03              0.12        0.13
Fashion    T-SNE      (e)    0.69            768.13             0.52        0.71
UKBB       none       (f)    4.24            26.38              0.04        0.04
UKBB       none       (e)    2.33            84.91              0.12        0.15
UKBB       PCA        (f)    2.53            57.65              0.07        0.05
UKBB       PCA        (e)    1.54            152.25             0.21        0.14
UKBB       Isomap     (f)    2.60            60.75              0.08        0.04
UKBB       Isomap     (e)    1.74            122.90             0.21        0.14
UKBB       T-SNE      (f)    2.64            59.99              0.08        0.05
UKBB       T-SNE      (e)    1.44            180.59             0.24        0.12
TCGA       none       (f)    8.58            8.54               0.01        0.10
TCGA       none       (e)    2.64            57.37              0.13        0.68
TCGA       PCA        (f)    4.44            17.81              -0.01       0.05
TCGA       PCA        (e)    0.59            929.48             0.59        0.67
TCGA       Isomap     (f)    8.27            70.70              0.02        0.16
TCGA       Isomap     (e)    0.51            834.70             0.62        0.70
TCGA       T-SNE      (f)    5.03            106.36             0.11        0.28
TCGA       T-SNE      (e)    0.51            1561.08            0.60        0.73
Table 1: Cluster tightness and separation (Davies-Bouldin, Calinski-Harabasz, Silhouette), as well as subclass recovery performance (Adjusted Mutual Information after hierarchical clustering), in explanation space (e) and feature space (f), in the original dimensionality and after PCA, Isomap, and T-SNE. Higher values are better (except for Davies-Bouldin, where lower is better).

Footnotes

  1. The transcriptome consists of all “transcripts”, i.e. copies of DNA into RNA, that are necessary to implement DNA instructions.
  2. The first and fourth class for UKBB.
  3. The decision to split into three-vs-rest was made for ease of visualization and simplicity.
  4. All run with scikit-learn default parameters.

References

  1. Breiman (2001). Random forests. Machine Learning, 45(1), 5–32.
  2. Bzdok and Meyer-Lindenberg (2017). Machine learning for precision psychiatry: opportunities and challenges. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.
  3. Caliński and Harabasz (1974). A dendrite method for cluster analysis. Communications in Statistics - Theory and Methods, 3(1), 1–27.
  4. Carey et al. (2006). Race, breast cancer subtypes, and survival in the Carolina Breast Cancer Study. JAMA, 295(21), 2492–2502.
  5. Chen et al. (2018). Learning to explain: an information-theoretic perspective on model interpretation. arXiv preprint arXiv:1802.07814.
  6. Davies and Bouldin (1979). A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2), 224–227.
  7. Drysdale et al. (2017). Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nature Medicine, 23(1), 28.
  8. Erro et al. (2013). The heterogeneity of early Parkinson’s disease: a cluster analysis on newly diagnosed untreated patients. PLoS One, 8(8), e70244.
  9. Guyon et al. (2005). Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural Information Processing Systems, pp. 545–552.
  10. Lipton (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490.
  11. Lundberg et al. (2018). Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.
  12. Lundberg and Lee (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pp. 4765–4774.
  13. Miller et al. (2016). Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nature Neuroscience, 19(11), 1523.
  14. Montavon et al. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.
  15. Pedregosa et al. (2011). Scikit-learn: machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
  16. Ribeiro et al. (2016). “Why should I trust you?”: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
  17. Ribeiro et al. (2018). Anchors: high-precision model-agnostic explanations. In AAAI Conference on Artificial Intelligence.
  18. Rousseeuw (1987). Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53–65.
  19. Saabas (2014). Interpreting random forests. http://blog.datadive.net/interpreting-random-forests/, accessed 24/10/2018.
  20. Vinh et al. (2010). Information theoretic measures for clusterings comparison: variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct), 2837–2854.
  21. Weinstein et al. (2013). The Cancer Genome Atlas Pan-Cancer analysis project. Nature Genetics, 45(10), 1113.
  22. Xiao et al. (2017). Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.