Using permutations to detect, quantify and correct for confounding in machine learning predictions
Clinical machine learning applications are often plagued by confounders that are clinically irrelevant but can still artificially boost the predictive performance of the algorithms. Confounding is especially problematic in mobile health studies run "in the wild", where it is challenging to balance the demographic characteristics of participants who self-select into the study. An effective approach to remove the influence of confounders is to match samples in order to improve the balance in the data. The caveat is that we end up with a smaller number of participants to train and evaluate the machine learning algorithm. Alternative confounding adjustment methods that make more efficient use of the data (e.g., inverse probability weighting) usually rely on modeling assumptions, and it is unclear how robust these methods are to violations of those assumptions. Here, rather than proposing a new approach to prevent or reduce the learning of confounding signals by a machine learning algorithm, we develop novel statistical tools to detect, quantify and correct for the influence of observed confounders. Our tools are based on restricted and standard permutation approaches and can be used to evaluate how well a confounding adjustment method is actually working. We use restricted permutations to test whether an algorithm has learned disease signal in the presence of confounding signal, and to develop a novel statistical test to detect confounding learning per se. Furthermore, we prove that restricted permutations provide an alternative method to compute partial correlations, and use this result as a motivation to develop a novel approach to estimate the corrected predictive performance of a learner. We evaluate the statistical properties of our methods in simulation studies.
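The core idea of a restricted permutation test can be illustrated with a short simulation. The following is a minimal sketch, not the paper's exact procedure: outcome labels are shuffled only within strata of an observed binary confounder, so the confounder-outcome association is preserved under the null, and the test asks whether the prediction score carries signal beyond the confounding pathway. All variable names, the simulated data-generating process, and the choice of AUC as the performance metric are illustrative assumptions.

```python
# Hypothetical sketch of a restricted permutation test for confounding.
# Labels are permuted only WITHIN each confounder stratum, so any
# predictive performance attributable to the confounder survives under
# the null; a small p-value suggests signal beyond the confounder.
import random

random.seed(0)


def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic; assumes no score ties."""
    ranked = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(r for r, (_, lab) in enumerate(ranked, start=1) if lab == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


# Simulated data: outcome y is associated with confounder c, and the
# prediction score x carries both disease signal (y) and confounder signal (c).
n = 400
c = [random.randint(0, 1) for _ in range(n)]
y = [1 if random.random() < 0.3 + 0.4 * ci else 0 for ci in c]
x = [yi + ci + random.gauss(0, 1) for yi, ci in zip(y, c)]

observed = auc(x, y)

# Restricted null: shuffle y separately inside each confounder stratum,
# breaking the disease signal while keeping the confounding pathway intact.
null = []
for _ in range(1000):
    y_perm = list(y)
    for level in (0, 1):
        idx = [i for i, ci in enumerate(c) if ci == level]
        vals = [y_perm[i] for i in idx]
        random.shuffle(vals)
        for i, v in zip(idx, vals):
            y_perm[i] = v
    null.append(auc(x, y_perm))

p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"observed AUC = {observed:.3f}, restricted p-value = {p_value:.3f}")
```

Because the restricted null retains the y-c association, the null AUC distribution sits above 0.5 whenever confounding alone is predictive; contrasting it with a standard (unrestricted) permutation null, which centers near 0.5, is what lets one separate confounder learning from disease-signal learning.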