Scanner Invariant Representations for Diffusion MRI Harmonization
Authors: Daniel Moyer (1,2), Greg Ver Steeg (2), Chantal M. W. Tax (3), Paul M. Thompson* (1)
(1) Imaging Genetics Center, Mark and Mary Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
(2) Information Sciences Institute, Marina del Rey, CA, USA
(3) CUBRIC, School of Psychology, Cardiff University, Cardiff, United Kingdom
Keywords: Harmonization, Invariant Representation, Diffusion MRI
Word count: 5966
Figure and table count: 10
*Corresponding author email: firstname.lastname@example.org
Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation.
Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become increasingly important as imaging studies grow in size and multi-site studies become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structure. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner-invariant encodings of the imaging data.
Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context.
Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.
Observational conditions may vary strongly within a medical imaging study. Researchers are often aware of these conditions (e.g., scanner, site, technician, facility, etc.) but are unable to modify the experimental design to compensate, due to cost or geographic necessity. In magnetic resonance imaging (MRI), variations in scanner characteristics such as the magnetic field strength, scanner vendor, receiver coil hardware, applied gradient fields, or primary image reconstruction methods may have strong effects on collected data [11, 15, 28]; multi-site studies in particular are subject to these effects [21, 31, 35, 55]. Data harmonization is the process of removing or compensating for this unwanted variation through post-hoc corrections. In the present work we focus on harmonization for diffusion MRI (dMRI), a modality known to have scanner/site biases [12, 16, 44, 45, 53, 54, 56, 57, 58] as well as several extra possible degrees of freedom with respect to protocol (e.g., angular resolution, b-values, gradient waveform choice, etc.).
Several prior methods approach diffusion MRI harmonization as a regression problem. Supervised image-to-image transfer methods have been proposed [9, 49], while for the unsupervised case site effects are often modeled as covariate effects, either at a summary statistic level [15, 55] or on the image directly. All of these methods directly transform scans from one site/scanner context to another. Further, while all methods require paired scans to correctly validate their results (subjects or phantoms scanned on both target and reference scanners), supervised methods also require paired training data. Such data are expensive and difficult to collect at large scale.
In this paper we instead frame the harmonization problem as an unsupervised image-to-image transfer problem, where harmonizing transformations may be learned without explicitly paired scans. We propose that a subset of harmonization solutions may be found by learning scanner-invariant representations, i.e., representations of the images that are uninformative of which scanner the images were collected on. These representations and the mappings between them may then be manipulated to provide image reconstructions that are minimally informative of their original collection site. We thus provide an encoder/decoder method for learning mappings to and from invariant representations computationally. This method has several advantages over regression-based methods, including a practical implementation that does not require paired data (e.g., a traveling phantom) as training input, and an extension to the multi-site case.
We demonstrate our proposed method on the MICCAI Computational Diffusion MRI challenge dataset [41, 51, 59], showing substantial improvement compared to a recently published baseline method. We also introduce technical improvements to the training of neural architectures on diffusion-weighted data, and discuss the limitations and error modes of our proposed method.
1.1 Relevant Prior Work
Harmonization has been an acknowledged problem in MR imaging, and specifically in diffusion imaging, for some time. Numerous studies have noted significant differences in diffusion summary measures (e.g., fractional anisotropy; FA) between scanners and sites [44, 53, 54]. Further protocol differences arise between sites due to limitations of the available scanners, unavoidable changes or upgrades in scanners or protocols, or when combining data retrospectively from multiple studies; effects of variations in scanning protocols on derived measures include effects of voxel size, b-values (the diffusion weightings used) [12, 45], and angular resolution or q-space sampling [16, 56, 57, 58], among other parameters. These problems were also examined by the MICCAI Computational Diffusion MRI 2017 and 2018 challenges [41, 51], which held an open comparison of methods for a supervised (paired) task.
Most previously proposed harmonization methods have relied on forms of regression. Harmonization methods for summary statistics (voxel-wise or region-wise) include random/mixed-effects models as well as the scale-and-shift random effects regression of ComBat [15, 55]. This latter method was adapted from the genomics literature, and employs a variational Bayes scheme to learn model coefficients.
A more nuanced family of regression methods for diffusion imaging was recently introduced in a series of papers by Mirzaalian et al. [38, 39, 37]. This was later analyzed empirically in Karayumak et al., which compared it against ComBat for summary statistics. This family of methods computes a power spectrum from a spherical harmonic (SH) basis, then generates a template from these images using multi-channel diffeomorphic mappings. The resulting template is used to compute spatial maps of average SH power spectra by scanner/protocol, which are then used in a scale regression on individual subjects. While these papers take a very different approach from our own, the resulting method has a very similar usage pattern and output. We compare our approach directly to this method.
In a supervised (paired) task, direct image-to-image transfer has been explored both in the harmonization context [29, 34, 41] as well as the similar super-resolution context [9, 49]. This family of methods generally relies on high expressive-capacity function fitting (e.g., neural networks) to map directly between patches of pairs of images. This requires explicitly paired data, in that the same brains must be scanned at all sites. These methods perform well empirically, as tested by the CDMRI challenge, but require paired data in the training set. Our proposed method does not require paired data to train; however, in our opinion, best practice validation still requires paired data in the (holdout) test set.
Our goal is to map diffusion MRI scans from one scanner/site context to another, so that given an image from one site we could predict accurately what it would have looked like had it been collected at another site. In order to do this, we construct an encoding function that takes each image x to a corresponding vector z, and a conditional decoding function that takes each z and a specified site s back to an image (the “reconstruction” of the original image).
We further wish to remove trends and biases in x that are informative of s from the reconstruction, so that all data remapped to a given site have the same bias (this is the harmonization task). In order to do so, it would be sufficient to constrain z, the intermediate representation, to be independent of s, denoted z ⊥ s. This is a hard constraint, and direct optimization of the encoder and decoder subject to that constraint would be non-trivially difficult.
Instead, we choose to relax the constraint to the mutual information I(z, s). Mutual information, taken from information theory, quantifies the amount of information shared between two variables, e.g., z and s. I(z, s) = 0 if and only if z ⊥ s, and so its minimization is a relaxation of our desired constraint. For a comprehensive reference on information theory, we refer the reader to Ch. 2 and Ch. 8 of Cover and Thomas.
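For intuition, mutual information between discrete variables can be computed directly from a joint distribution. The following is a minimal sketch (not part of the authors' method) showing that I(Z; S) vanishes exactly when the joint factorizes, the property the relaxation above exploits:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(Z; S) in nats from a discrete joint table p(z, s)."""
    pz = joint.sum(axis=1, keepdims=True)           # marginal p(z)
    ps = joint.sum(axis=0, keepdims=True)           # marginal p(s)
    mask = joint > 0                                # 0 * log 0 := 0
    return float((joint[mask] * np.log(joint[mask] / (pz @ ps)[mask])).sum())

# Independent joint p(z, s) = p(z) p(s): I(Z; S) = 0.
indep = np.outer([0.5, 0.5], [0.3, 0.7])
# Perfectly dependent joint (z determines s): I(Z; S) = H(S) = log 2.
dep = np.array([[0.5, 0.0],
                [0.0, 0.5]])
```

Minimizing I(z, s) therefore drives the encoding toward the factorized (independent) case.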
After relaxing the independence constraint to mutual information, we would like to optimize the encoder and decoder so that z has minimal scanner-specific information, and so that the reconstruction has minimal differences from the original data. We demonstrate one solution for doing this using a variational bound on I(z, s), parameterizing the encoder and decoder using neural networks. The underlying theory is explored in Moyer et al., where it is used in the context of algorithmic fairness. We reproduce it here for clarity, and further reinterpret their theoretical results in the imaging harmonization context, adding our own data processing inequality interpretation of test-time remapping.
Learning this mapping does not require matching pairs of data from pairs of sites. Best practices in validation and testing do require such data, but during training we can minimize I(z, s) without having examples of the same subject collected on different scanners. This is due to our bound on I(z, s) described in Eq. 1, which is not reliant on inter-site correspondence.
At test time we can manipulate this mapping to reconstruct images at a different site than the one at which they were originally collected; we use this mapping as our harmonization tool. Again, by the data processing inequality, the amount of information these (new) reconstructed images contain about their original collection site is bounded by I(z, s), which we explicitly minimize.
2.1 Scanner Invariant Variational Auto-Encoders
We wish to learn a mapping from data x (associated with scanner s) to some latent space z such that I(z, s) = 0, yet also where z is maximally relevant to x. We start by relaxing z ⊥ s to the minimization of I(z, s), and then bounding I(z, s) (detailed demonstration in Appendix A):

I(z, s) ≤ E[ KL( q(z|x) || q(z) ) ] - E[ log p(x|z, s) ] - H(x|s),    (1)
where q(z) is the empirical marginal distribution of z under q(z|x), the specified encoding which we control, and p(x|z, s) is a variational approximation to the conditional likelihood of x given z and s, again under q(z|x). Here, KL denotes the Kullback-Leibler divergence and H denotes Shannon entropy.
The bound in Eq. 1 has three components: a conditional reconstruction term, a compressive divergence term, and a constant term denoting the conditional entropy of the scan given the scanner. We can interpret Eq. 1 as stating that the information in z about s is bounded above by the uncertainty of x given z and s, plus a penalty on the information in z, and a constant representing the information x has about s overall.
Intuitively, this breakdown makes sense: if we reconstruct x given s, and are otherwise compressing z, the optimal compressive z keeps no information about s for reconstruction; the encoder can always remove information about s without penalty, because the reconstruction term is handed that information directly. Further, if x is highly correlated with s, i.e., H(x|s) is very low, then our bound will be worse.
We can now construct a variational encoding/conditional-decoding pair q(z|x) and p(x|z, s) which fits our variational bound on I(z, s) nicely, and which also fits our overall goal of re-mapping x accurately through z. Following Kingma and Welling, we use a generative log-likelihood as an objective:
Here, however, we inject the conditional likelihood p(x|z, s) to match our bound on I(z, s). This also fits our desired test-time Markov chain x → z → x̂ (conditioned on s′), where x̂ is the harmonized reconstruction at the new site s′.
Following the original VAE derivation (again in Kingma and Welling), we can derive a similar VAE with s-invariant encodings by introducing the encoder q(z|x):

log p(x|s) ≥ E_q(z|x)[ log p(x|z, s) ] - KL( q(z|x) || p(z|s) ).
We assume that the prior p(z|s) = p(z), i.e., that the conditional prior is equal to the marginal prior over z. In the generative context, this would be a strong model mis-specification: if we believe that there truly are generating latent factors, it is unlikely that those factors would be independent of s. However, we are not in such a generative frame, and instead would like to find a code that is invariant to s, so it is reasonable to use a prior that also has this property. Taking this assumption, we have

log p(x|s) ≥ E_q(z|x)[ log p(x|z, s) ] - KL( q(z|x) || p(z) ).
This is a conditional extension of the VAE objective from Kingma and Welling. Putting this objective together with the penalty term in Eq. 1, we have the following variational bound on the combined objective (up to a constant):

E_q(z|x)[ log p(x|z, s) ] - λ I(z, s) ≥ -KL( q(z|x) || p(z) ) + (1 + λ) E_q(z|x)[ log p(x|z, s) ] - λ E[ KL( q(z|x) || q(z) ) ].    (7)
We use the negation of Eq. 7 as the main loss term for learning q(z|x) and p(x|z, s), i.e., we minimize the negative of the bound. As described in Higgins et al., an additional parameter β may be multiplied with the divergence from the prior (the first term of Eq. 7) for further control over the VAE prior.
As written in Eq. 7, the site variable s has ambiguous dimension. For applications with only two sites, s might be binary, while in the multi-site case, s might be a one-hot vector.
2.2 Diffusion-Space Error Propagation from SH Representations
A convenient representation for diffusion-weighted MRI is the spherical harmonic (SH) basis. These provide a countable set of basis functions from the sphere to the reals, to and from which projection is easy and often performed (e.g., in graphics). In this paper, our input data and the reconstruction error are computed with respect to the SH coefficients. However, for the eventual output, the data representation that we would like to use is not in this basis, but in the original image representation, which is conditional on a set of gradient vectors (b-vectors). These vectors are in general different for each subject due to spatial positioning and motion, and often change in number between sites/protocols. Rigid transformation and alignment of scan data, used in many pre-processing steps, also change vector orientation. While the function norm is preserved under projection to the SH basis (i.e., asymptotically, SH projection is an isometry), this is not the case for a norm on general finite sets of vectors.
To correct for this, we construct a projection matrix from the shared continuous SH basis to each subject's diffusion gradient directions. This projection can then be used in conjunction with the decoder output to map output SH coefficients to the original subject-specific b-vector representation. We allow the channels without orientation (b = 0) to “pass through” the projection (mapped as identity). While we use the SH representation for both input and reconstruction (to leverage our invariance results), we augment the loss function from Eq. 1 with a “real-space” loss function, the reconstruction loss in each subject's original domain. This encourages the overall loss function to be faithful to our use-case in the original image space.
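The SH-to-b-vector projection described above can be sketched as follows. This is an illustrative numpy/scipy reconstruction, not the authors' implementation: the real even-order SH convention and all names (real_sh_basis, etc.) are our own, and the direction set and order are arbitrary examples.

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(order, theta, phi):
    """Design matrix B (n_dirs x n_coeffs) of real, even-order spherical
    harmonics at azimuth theta, polar angle phi. dMRI signals are antipodally
    symmetric, so odd orders are omitted."""
    cols = []
    for l in range(0, order + 1, 2):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)   # scipy order: (m, l, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2) * (-1) ** m * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * (-1) ** m * Y.real)
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
n_dirs, order = 60, 4                         # 60 gradient directions, SH order 4
theta = rng.uniform(0, 2 * np.pi, n_dirs)     # azimuth
phi = np.arccos(rng.uniform(-1, 1, n_dirs))   # polar angle, uniform on the sphere

B = real_sh_basis(order, theta, phi)          # (60, 15): 15 coefficients at order 4
coeffs = rng.normal(size=B.shape[1])          # SH representation of a signal
signal = B @ coeffs                           # subject-specific b-vector samples

# Minimum-norm fit via the pseudo-inverse (the SVD solution); here the
# order-4 fit is over-determined, so the coefficients are recovered exactly.
coeffs_hat = np.linalg.pinv(B) @ signal
```

A second call to real_sh_basis with a different subject's b-vectors gives the matrix that maps decoder-output SH coefficients back to that subject's original sampling.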
3.1 Computational Implementation
We parameterize q(z|x) and p(x|z, s) using neural networks, fitting their parameters by mini-batch gradient-based optimization. The loss in Eq. 7 is defined generally, and invariant representations may be learned using many different function parameterizations; however, the flexibility of neural networks as function approximators makes them ideal for this application. We apply these networks to small image patches, concatenating patch-wise outputs to create harmonized images. The overall architecture is shown in Figure 1(a), and the training and testing configurations are diagrammed in Figure 1(b), with exact parameters given in Section 3.4. We discuss the use of patches and its relative advantages and drawbacks in Section 5. Notably, in Figure 1(b) each sample consists of a single unpaired patch, and batches of data consist of patches and protocol identifiers (one-hot vectors). As diagrammed on the right-hand side of Figure 1(b), protocol identifiers are manipulated at test time to produce harmonized reconstructions.
Our primary reconstruction loss is computed in the SH domain with respect to the entire patch. We then add a secondary loss function for the center voxel based on the SH-to-DWI projection, and an adversarial loss which attempts to predict which scanner/protocol each reconstructed patch is from (seen at the right of Figure 1(a)). We added this branch in order to provide additional information toward keeping remapped patches “reasonable” when remapping to new sites; this prediction can be performed without explicit pairing of patches. Our loss function is then, in the abstract,
where the first term is the SH reconstruction loss (using MSE), the second is the DWI-space loss, and the third is the adversarial loss on the SH reconstruction, with three hyperparameters controlling the trade-offs between objectives. This loss function trivially extends from the single-site case (one target site to/from one base site) to a multi-site case, where s is categorical.
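Since the displayed form of Eq. 8 is not reproduced here, the following sketch shows one plausible composition of the terms described in the text. All names (beta, lam_c, lam_dwi, lam_adv) and the sign convention on the adversarial term are our own illustration, not the paper's exact equation:

```python
def total_loss(recon_sh, recon_dwi, adv_xent, kl_prior, compressive_kl,
               beta=1.0, lam_c=1.0, lam_dwi=1.0, lam_adv=1.0):
    """One plausible weighted sum of the described loss components.
    recon_sh:       SH-domain MSE over the whole patch
    recon_dwi:      DWI-domain MSE for the center voxel (Section 2.2)
    adv_xent:       adversary's cross-entropy on the reconstructed patch
    kl_prior:       KL(q(z|x) || p(z)), the VAE prior term (weighted by beta)
    compressive_kl: bound on E[KL(q(z|x) || q(z))], the invariance term.
    The primary network is rewarded when the adversary's cross-entropy is
    high (it cannot identify the site), hence the minus sign."""
    return (recon_sh
            + lam_dwi * recon_dwi
            + beta * kl_prior
            + lam_c * compressive_kl
            - lam_adv * adv_xent)

loss = total_loss(0.10, 0.05, 0.7, 0.2, 0.3)
```

Setting lam_c = 0 (and optionally beta = 0) reproduces the ablations described in Section 3.3.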
We use a standard adversarial training scheme for defining and minimizing the adversarial loss (see, e.g., Chapter 7.13 of Goodfellow, Bengio, and Courville). The adversarial loss is the softmax cross-entropy loss of a secondary “adversary” network, shown in green in Figure 1(a). We alternate between optimizing the primary network (minimizing Eq. 8) and optimizing the adversary (minimizing its cross-entropy).
We optimize these networks by differentiating the loss functions (Eq. 8 and the adversarial loss) with respect to the network weights (i.e., backpropagation) and then using the Adam optimizer, a first-order optimization method. Optimization is undertaken using mini-batches. To compute gradients of the divergences in Eq. 7 efficiently, we use the re-parameterization trick of Kingma and Welling, using both a diagonal Gaussian conditional q(z|x) and a Gaussian prior p(z). We also use the closed-form bound for the marginal divergence E[KL(q(z|x) || q(z))] from Moyer et al.
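The two standard ingredients named above — the re-parameterization trick and the closed-form KL to a standard normal prior — can be sketched as follows (a generic illustration of the Kingma and Welling construction, not the authors' code):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I); the sample is a deterministic,
    differentiable function of (mu, log_var), so gradients flow through them."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over
    latent dimensions; this is the 'divergence from the prior' term."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 8))        # batch of 4 latent means, 8 dims
log_var = np.zeros((4, 8))   # unit variance
z = reparameterize(mu, log_var, rng)   # here: samples from the prior itself
```

When q(z|x) equals the prior, the KL term is exactly zero, which is the degenerate fully-compressed case discussed in Section 5.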
3.2 Data and Pre-processing
To evaluate our method, we use the 15 subjects from the 2018 CDMRI Challenge Harmonization dataset [50, 51]. These subjects were imaged on two different scanners: a 3 T Siemens Connectom and a 3 T Siemens Prisma. For each scanner, two separate protocols were collected: one that matches between the scanners at a low resolution, and one at a high resolution that does not match. This results in four different “site” combinations, for which all subjects were scanned, resulting in sixty different acquisitions (15 subjects, 2 scanners, 2 protocols each). We split this into 9 training subjects, 1 validation subject, and 5 held-out test subjects.
The low resolution matching protocol had an isotropic spatial resolution of 2.4 mm with 30 gradient directions at each of two b-value shells, as well as a minimum of 4 acquisitions, at least one of these with reverse phase encoding. These volumes were then corrected for EPI distortions, subject motion, and eddy current distortions using FSL's TOPUP/eddy [3, 4]. Subjects from the “Connectom” scanner were then registered to the “Prisma” scanner using an affine transformation, fit to a co-temporally acquired T1-weighted image volume (previously registered to each corresponding FA volume). The b-vectors were then appropriately rotated. In the case of the “Connectom” scanner, geometric distortions due to gradient non-linearities were corrected using in-house software [17, 46]. The high resolution protocols are identical in pre-processing to their low resolution counterparts, but have isotropic voxel sizes of 1.5 mm and 1.2 mm for the “Prisma” and “Connectom” scanners respectively, each with 60 gradient directions per shell and the same b-shell configuration. We downsample the spatial resolution of the high resolution scans to 2.4 mm isotropic to test the multi-task method, but keep the angular resolution differences. To simplify notation, we refer to the four scanner/protocol combinations by their scanner make and number of gradient directions: Prisma 30, Prisma 60, Connectom 30, and Connectom 60.
All scans were masked to white matter tissue. This was done in order to focus our analysis on the tissue most commonly assessed using diffusion MRI. We map each of these scans to a truncated SH representation for input into our method, but retain the original domain for training outputs. We use the minimum-norm solution in the case of under-determined projections, which corresponds with the SVD solution (using the pseudo-inverse). This is well-defined, unlike direct projection.
3.3 Experimental Protocol
The original CDMRI 2018 challenge specified three supervised tasks, mapping between one base “site” (Prisma 30) and three target “sites” (Prisma 60, Connectom 30, and Connectom 60). We modify this task, removing correspondence/pairing knowledge between sites (keeping this information for validation and testing), and including the inverse mapping task (target to base). This results in six tasks, two for each target site.
We train a “single-site” network for each of the six tasks, learning representations for Prisma 30 and a single target site, as well as a “multi-site” network trained jointly across all six tasks. During training the method is not provided corresponding patches between sites; a single sample corresponds to one unpaired patch. Paired patches are only used to calculate error measures.
We measure the performance of each method on the holdout set of subjects using the Root Mean Squared Error (RMSE) between each method's output and the ground truth target images in the original DWI basis (after pre-processing). For comparison we also include results from Mirzaalian et al., which is the only other unsupervised method we are aware of in the literature.
We further assess the performance of each method by estimating the fiber orientation distributions (FODs) for each reconstruction using Multi-Shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD), with response functions estimated using the method proposed in Dhollander et al. For both of these steps we use the implementations from MRtrix3. For each FOD we compute the maxima at each voxel and compare them to the closest maxima of the ground truth image to compute angular error.
In order to assess the fidelity of common local diffusion model summary measures before and after harmonization, we measure the Mean Average Percent Error (Mean APE) and the Coefficient of Variation (CV) between method-estimated and observed summary measures, reported in Table 1. We measured Mean APE and CV for Fractional Anisotropy, Mean Diffusivity, Mean Kurtosis, and Return-to-Origin Probability (RTOP). This mirrors the analysis in Ning et al. 2019.
In order to test the specific effects of our compressive regularizations, we conducted two ablation tests of our method, comparing it to the regular networks with parameters described in Section 3.4. We re-trained both single-site and multi-site methods with the invariance parameter λ set to 0, but otherwise the same settings. We further trained two more networks with both λ and the prior weight β set to 0. Effectively this ablates the added invariance-inducing compressive elements of the loss function. We then compared their performance by computing the voxelwise difference in RMSE in each held-out test subject.
For the proposed methods and the corresponding ablated networks, we assessed the degree to which we removed site information from the learned representation by attempting to predict s from z. If there is no information in z about s, then we would expect the optimal predictor to do no better than chance. To this end we trained feed-forward networks to predict s from z (“post-hoc adversaries”). As shown in Moyer et al., the cross-entropy error of these networks yields a lower bound on the mutual information I(z, s). The post-hoc adversaries had the same configuration as the patch adversaries (two 32-unit layers using tanh activations and the softmax cross-entropy loss).
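The lower bound used here follows from I(z, s) = H(s) - H(s|z) together with the fact that any predictor's mean cross-entropy upper-bounds H(s|z). A minimal numeric sketch (our own illustration of the bound, not the authors' evaluation code):

```python
import numpy as np

def mi_lower_bound(ps, cross_entropy):
    """I(z, s) >= H(s) - CE, where CE is any predictor's mean cross-entropy
    (in nats) for s given z. Since H(s|z) <= CE for every predictor q(s|z),
    a well-trained post-hoc adversary certifies residual site information."""
    h_s = -np.sum(ps * np.log(ps))
    return h_s - cross_entropy

# Two equiprobable sites, H(s) = log 2 ~= 0.693 nats. A post-hoc adversary
# achieving 0.5 nats of cross-entropy certifies at least ~0.193 nats of
# site information left in z; CE >= H(s) certifies nothing (bound <= 0).
bound = mi_lower_bound(np.array([0.5, 0.5]), 0.5)
```

This is why high post-hoc adversarial accuracy in the ablated models (Table 4) is evidence that the removed terms were indeed responsible for the invariance.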
3.4 Configuration and Parameters
We implemented our method for image patches composed of a center voxel and its six immediate neighbors. Each of these voxels has two shells of DWI signal, which we mapped to an even-order SH basis, plus one additional channel. Unravelling these patches and shells, the input is then a single vector.
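The patch construction above can be sketched as follows. The SH order (and hence the channel count) is illustrative only, since the exact order is not restated here; the function name and layout are our own:

```python
import numpy as np

# Illustrative sizes only: an order-4 even SH basis has 15 coefficients.
SH_COEFFS = 15
CHANNELS = 2 * SH_COEFFS + 1   # two b-shells of SH coefficients + one extra channel

def patch_vector(volume, x, y, z):
    """Flatten the center voxel and its six face neighbors into one input
    vector. volume: (X, Y, Z, CHANNELS) array of per-voxel coefficients."""
    offsets = [(0, 0, 0),
               (1, 0, 0), (-1, 0, 0),
               (0, 1, 0), (0, -1, 0),
               (0, 0, 1), (0, 0, -1)]
    return np.concatenate([volume[x + dx, y + dy, z + dz]
                           for dx, dy, dz in offsets])

vol = np.zeros((5, 5, 5, CHANNELS))
v = patch_vector(vol, 2, 2, 2)   # 7 voxels * CHANNELS elements
```

At test time, per-voxel outputs of such patches are re-assembled into the harmonized volume.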
We use three-layer fully connected neural networks for the encoder q(z|x) and conditional decoder p(x|z, s), with the decoder's hidden widths mirroring the encoder's in reverse order. The latent code is parameterized by a diagonal Gaussian layer. This layer is then concatenated with the scanner/protocol one-hot representation s and input into the decoder. We use tanh transformations at each hidden layer, with sigmoid output from the encoder for the variance of the Gaussian layer. The adversary is a fully connected two-layer network with 32 units at each layer, again with tanh units at each hidden node.
For each task we train our network for 1000 epochs, which took 19 hours in the pair-wise case on a standard desktop (32 GB of CPU RAM, 4 cores) equipped with an external Nvidia Titan Xp with 12 GB of RAM, using TensorFlow. We loosely tuned the hyperparameters so that the losses are approximately of the same order of magnitude, and use these same parameters for both the pair-wise tasks and the multi-task experiments. We use the Adam optimizer with a batch size of 128. For each batch provided for primary network training, we provide 10 epochs of training for the adversary.
[Table 2: Percent Error (PE) of FA, MD, MK, and RTOP by method and site. Negative PE implies the actual value is greater than the estimated value.]
Figure 2 plots the root mean squared error (RMSE) by voxel of the baseline, the single-site proposed method, and the multi-site proposed method, as evaluated on the holdout test subjects in the original signal representation. Our proposed methods show improvement over the baseline method in each case. In the pair-wise task between similar protocols (mapping between Prisma 30 and Connectom 30), these improvements have non-overlapping interquartile ranges. For dissimilar protocols, i.e., mapping between Prisma 30 and Prisma 60 or Connectom 60, our proposed method shows improvements, though the difference is less pronounced. Surprisingly, for higher resolution target images the multi-site method performs as well as or better than the pair-wise method and the baseline; this may be due to the multi-task method receiving many more volumes overall, allowing it to gather more information (albeit biased by other scanners) or preventing it from overfitting.
Figure 2 also plots the voxel-wise angular deflection of each method, as measured by MSMT-CSD. For Connectom 30 and Prisma 60, both to and from Prisma 30, all three methods are comparable, with low median errors and upper-percentile errors only slightly higher. For mappings to the Connectom 60 protocol, the Mirzaalian et al. method has generally higher error, though the interquartile ranges still overlap for all methods. We plot a subset of the FODs from the original image and two of the reconstructions in Figure 3.
Figures 4, 5, and 6 show the spatial distribution of the error for each tested method on a single test subject, for mappings between Prisma 30 and Connectom 30, Prisma 60, and Connectom 60 respectively. For the Prisma 30 to Connectom 30 mapping, the Mirzaalian baseline has overall higher error than the other methods, as shown by the overall coloring. The Mirzaalian baseline and the multi-site proposed method show significant white matter patterning (though to varying degrees); ideally we would like to see uncorrelated residuals, like those shown by the single-site method.
The Connectom 60 error plots (Fig. 6) show strong spatial patterns at both the occipital and frontal poles, present in all methods. This wide-scale effect is somewhat mitigated by the proposed methods, but remains present in all error distributions.
Table 1 reports the Absolute Percent Error (APE) and the estimated Coefficient of Variation (CV) for each method voxel-wise for four commonly used diffusion summary measures: Fractional Anisotropy (FA), Mean Diffusivity (MD), Mean Kurtosis (MK), and Return-to-Origin Probability (RTOP). It is well known that APE is biased towards methods reporting smaller values, and becomes inaccurate and inflated as actual observed values approach zero [5, 36]. In our context this means that for FA and MD, more spherical tensors are weighted strongly, while more anisotropic tensors are weighted less. Due to this bias, we also report the estimated Coefficient of Variation (CV), which is the RMSE divided by the observed sample mean.
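The two metrics, and the near-zero inflation of APE discussed above, can be made concrete with a small sketch (our own illustration with made-up numbers, not data from the paper):

```python
import numpy as np

def mean_ape(estimated, observed):
    """Mean absolute percent error; inflates as observed values approach zero."""
    return 100.0 * np.mean(np.abs(estimated - observed) / np.abs(observed))

def coeff_of_variation(estimated, observed):
    """RMSE divided by the observed sample mean, as reported for CV here."""
    rmse = np.sqrt(np.mean((estimated - observed) ** 2))
    return rmse / np.mean(observed)

obs = np.array([0.02, 0.5, 0.6])   # one near-zero observed value
est = np.array([0.06, 0.5, 0.6])   # that value is overestimated; rest exact
# APE is dominated by the single near-zero voxel, while CV stays small --
# the same pattern discussed for MK in the text.
```

Here the single mis-estimated near-zero voxel drives the mean APE above 60% while the CV remains under 0.07.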
For all reported summary measures except MK, the proposed methods' mappings to and from Connectom 30 perform well under both error measures. Mapping to both Connectom 60 and Prisma 60 from Prisma 30 has higher error than the converse (Connectom/Prisma 60 to Prisma 30); this fits our intuitions about upsampling, as both “60” protocols have higher angular resolution.
For Mean Kurtosis in remapped scans to/from Connectom 30, the APE is very high while the CV is surprisingly low. This pattern is also seen in FA, MD, and MK for scans mapped to Connectom 60, and for MK in scans mapped from Prisma 60. Because the APE error is above 100% (but CV is small), we believe that the methods are overestimating small actual values, since underestimation error is bounded at 100% for non-negative measures. In order to further verify this, we computed the Percent Error (without absolute values) shown in Table 2, indicating the average bias above or below the actual observed value. Since the MK PE for both proposed methods is very close to the APE, this indicates that on average small values are being overestimated. Discussion continues in Section 5.
4.1 Ablation Test Results and Post-hoc Adversarial Accuracies
[Table 3: Voxel-wise difference in RMSE between the full model and each ablated model (λ = 0; λ = β = 0), for Connectom 30, Prisma 60, and Connectom 60.]
|Post-hoc Adversarial Accuracy, Predicting s from z|
|Model||Best Possible||Full Model||λ = 0||λ = β = 0|
|Proposed Single-task, Connectom 30||0.5||0.61||0.63||0.63|
|Proposed Single-task, Prisma 60||0.5||0.5||0.51||0.54|
|Proposed Single-task, Connectom 60||0.5||0.63||0.68||0.85|
Table 3 shows the results of the ablation tests, where we set either λ to zero, or both λ and β to zero, effectively removing the invariance and prior terms respectively from the primary loss function. We computed the difference in RMSE between the regular model and the ablated models on the hold-out test dataset. For Connectom 30, both to and from Prisma 30, both the invariance term and the prior term hinder reconstruction performance. When only the invariance term is removed this effect is slight, but when both are removed the effect is much stronger in the multi-task setting.
For Prisma 60 mappings, differences without the invariance term for the single-task method are relatively small, while removing both the invariance and prior terms leads to large increases in RMSE. For mappings to Connectom 60, differences in RMSE follow a similar pattern to Prisma 60, with performance decreasing when both invariance and prior terms are ablated. For the mappings from Connectom 60, performance drops strongly without the invariance term and drops further without the prior term.
Table 4 shows the results of the post-hoc adversarial predictions. Setting λ = 0 uniformly increases post-hoc adversarial accuracy, and additionally setting β = 0 increases accuracy further in both the Prisma 60 and Connectom 60 cases. For the multi-site model the prediction task is considerably harder, yet setting both λ and β to zero still induces relatively high adversarial accuracy in the multi-task setting.
It is unsurprising that the invariance term does not aid reconstruction for more similar protocols/scanners. Including this term should lead to more compression, and thus less information in the encoding relevant to the image, which in turn should lead to worse reconstruction. This intuition also extends to the VAE prior term, which is a sufficient condition for the compressive portion of the invariance term (if the KL divergence between the encoder posterior and the prior is zero, the encoding is independent of the input, and therefore of the site). It is interesting, however, that these terms lead to increased performance for dissimilar protocols/scanners, i.e., Connectom 60 and Prisma 60. This indicates that these two loss terms aid generalization.
Our proposed harmonization method is unsupervised in that we do not require multiple images of the same subject or phantom acquired at separate sites (i.e., paired data) in order to train it. It is advisable to validate with such data, but given the expense of scanning the same subject at multiple sites, limiting reliance on these data is advantageous.
We believe it is important to understand the trade-off between reconstruction error and adversarial accuracy (e.g., between the performance shown in Figures 2 and 4). High reconstruction accuracy is obviously desirable, yet any attempt to induce invariance necessarily removes information (i.e., site information), which reduces this accuracy. At the other end of the spectrum, there is always a family of perfectly invariant solutions (constant images), but these carry no information about the subjects and consequently have very high reconstruction error. It is thus important to consider both criteria when selecting a remapping method.
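One hedged way to operationalize this trade-off (a toy selection rule of our own, not a procedure from this work) is to treat adversarial accuracy as a constraint and reconstruction error as the objective:

```python
def select_remapping(candidates, max_adv_acc=0.6):
    """Among candidate models whose post-hoc adversarial accuracy stays at or
    below a tolerance, return the one with the lowest reconstruction RMSE.
    Returns None when no candidate is sufficiently invariant.
    Each candidate is a dict with (hypothetical) keys "adv_acc" and "rmse"."""
    feasible = [c for c in candidates if c["adv_acc"] <= max_adv_acc]
    return min(feasible, key=lambda c: c["rmse"], default=None)

candidates = [
    {"name": "constant", "adv_acc": 0.50, "rmse": 9.0},   # invariant but useless
    {"name": "invariant", "adv_acc": 0.55, "rmse": 1.2},  # good trade-off
    {"name": "leaky", "adv_acc": 0.90, "rmse": 1.0},      # accurate but site-leaking
]
best = select_remapping(candidates)
```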
Because the VAE prior is sufficient for compressing the encoding, we can empirically obtain an acceptable method without the invariance term (i.e., with λ = 0). This agrees with our intuition about Eq. 1, where compression plus conditional reconstruction is a proxy for invariance; the exact form of compression appears to be less impactful. Best performance, however, is achieved by including the invariance term.
It is tempting to attempt to interpret the encodings, but such efforts should not be undertaken lightly. The encoding and decoding functions are intentionally non-linear, and individual components of the encoding may interact with other components. Further, the encodings are not images or patches, and lack a spatial domain. With careful construction such analysis may be possible, but it is almost certainly non-trivial in the encoding domain.
In the current method we reconstruct images for a specific target site. We might instead look for a site-agnostic image. This is philosophically challenging: images are by nature collected at sites, and there are no site-less images. While we can manipulate our method to produce an "average" site, the output image may not be representative of images from any actual site. It may be that all images must carry site information, and that the quotient representation is not an image at all. On the other hand, for outputs that are not images, e.g., prediction of pathology or prognosis, we can use the invariant encoding to make unbiased (scanner-agnostic) predictions. In cases where the actual goal lies outside the image domain (with harmonization as a pre-processing step), such a formulation may be beneficial, and could be built from our proposed method.
This method cannot remove long-range scanner biases, due to its patch-based architecture. In theory, larger patches would avoid this limitation; in practice, current hardware, in particular GPU memory and bus speeds, limits our computation to small patches for dMRI. Specific work in this domain has been done to reduce memory load, but the problem is by no means solved, especially for high angular resolution data such as the HCP dataset. We hypothesize that a similar architecture with larger patches or whole images could rectify this particular problem, using architectures that may become accessible with increased hardware capabilities, or with better model compression and computational reduction techniques.
In the present work, the proposed method was evaluated only on white matter, and not on grey matter (neither cortex nor subcortical structures). White matter analyses generally focus on models of restricted axonal compartments (fibers), with derived measures such as fiber orientation distribution functions (FODs), and on voxel-wise data with generally high anisotropy. Grey matter analyses, in contrast, may focus more on signal from isotropic compartments and/or dendritic arbors, and notably their models may be robust or vulnerable to site bias in different ways. We have not considered grey matter signal or model summary statistics in this analysis, and thus we advise caution when applying this method to identified grey matter voxels. Further, as Tables 1 and 2 show, for low values of Mean Kurtosis the proposed method is inaccurate and has a positive bias in reconstruction. We advise caution when using this method where the accuracy of these measures at low values is critical.
In the present work we have constructed a method for learning scanner-invariant representations. These representations can be used to reconstruct images under a variety of scanner conditions, and by the data processing inequality the reconstruction's mutual information with the original scanner will be low. We demonstrate this to be useful for unsupervised data harmonization in diffusion MRI; critically, we can harmonize data without explicit pairing between images, reducing the need for paired training data. Surprisingly, in some cases the multi-task method outperforms a pairwise method with similar architecture, which may hint at further benefits of learning shared representations.
This work was supported by NIH grants P41 EB015922, R01 MH116147, R56 AG058854, RF1 AG041915, and U54 EB020403, DARPA grant W911NF-16-1-0575, as well as the NSF Graduate Research Fellowship Program Grant Number DGE-1418060, and a GPU grant from NVidia.
The data were acquired at the UK National Facility for In Vivo MR Imaging of Human Tissue Microstructure located in CUBRIC funded by the EPSRC (grant EP/M029778/1), and The Wolfson Foundation. Prior consent was obtained from all patients before each scanning session, along with the approval of the Cardiff University School of Psychology ethics committee. Acquisition and processing of the data was supported by a Rubicon grant from the NWO (680-50-1527), a Wellcome Trust Investigator Award (096646/Z/11/Z), and a Wellcome Trust Strategic Award (104943/Z/14/Z). We acknowledge the 2017 and 2018 MICCAI Computational Diffusion MRI committees (Francesco Grussu, Enrico Kaden, Lipeng Ning, Jelle Veraart, Elisenda Bonet-Carne, and Farshid Sepehrband) and CUBRIC, Cardiff University (Derek Jones, Umesh Rudrapatna, John Evans, Greg Parker, Slawomir Kusmia, Cyril Charron, and David Linden).
Appendix A Derivation of the Bound in Eq. 1
This bound also appears in Moyer et al., where it is used in the context of fair representations. We reproduce it here for clarity, but the demonstration remains unchanged. All entropic quantities are with respect to the empirical encoding distribution unless otherwise stated.
From the tri-variate identities of mutual information, we have that I(z; s) = I(z; s | x) + I(z; x; s). However, the distribution of z is exactly given by q(z | x) by construction, and thus z depends solely on x; z and s are therefore conditionally independent given x, so I(z; s | x) = 0 and
I(z; s) = I(z; x) − I(z; x | s).
We can then write the following:
I(z; x | s) = H(x | s) − H(x | z, s) ≥ H(x | s) + E[log q(x | z, s)],
since the cross-entropy −E[log q(x | z, s)] under any variational approximation q(x | z, s) is at least the conditional entropy H(x | z, s). Combining the two expressions yields the bound of Eq. 1:
I(z; s) ≤ I(z; x) − H(x | s) − E[log q(x | z, s)].
This inequality is tight if and only if the variational approximation is correct; interpreted in an imaging context, if we cannot perform conditional reconstruction correctly, this bound will not be tight.
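The bound in Eq. 1 can be sanity-checked numerically on a small discrete example of our own construction (the variable sizes and distributions below are arbitrary): with an encoding that depends only on the image, the right-hand side equals I(z; s) when the posterior q(x | z, s) is exact, and exceeds it when the posterior is perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete toy joint: x takes 4 values, s (site) 2, z (encoding) 3.
p_xs = rng.random((4, 2)); p_xs /= p_xs.sum()
q_z_given_x = rng.random((4, 3))
q_z_given_x /= q_z_given_x.sum(axis=1, keepdims=True)

# p(x, s, z) = p(x, s) q(z | x): z depends on x only, so z is
# conditionally independent of s given x, as in the derivation.
p = p_xs[:, :, None] * q_z_given_x[:, None, :]

def H(dist):
    d = dist.ravel()
    return -np.sum(d * np.log(d))

p_x, p_s, p_z = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
p_xz, p_sz, p_xs_m = p.sum(1), p.sum(0), p.sum(2)

I_zs = H(p_z) + H(p_s) - H(p_sz)      # left-hand side, I(z; s)
I_zx = H(p_z) + H(p_x) - H(p_xz)      # I(z; x)
H_x_given_s = H(p_xs_m) - H(p_s)      # H(x | s)

# Exact posterior q(x | z, s): the bound becomes an equality.
posterior = p / p_sz[None, :, :]
rhs_tight = I_zx - H_x_given_s - np.sum(p * np.log(posterior))

# A perturbed (incorrect) posterior: the bound still holds, with slack.
bad = posterior + 0.05
bad /= bad.sum(axis=0, keepdims=True)
rhs_loose = I_zx - H_x_given_s - np.sum(p * np.log(bad))
```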
- For a categorical variable taking value k out of K possible values, its corresponding one-hot vector is a K-dimensional vector with zeros in every entry except the k-th entry, which is one.
- For unbiased estimates, the RMSE divided by the sample mean should further be multiplied by a factor of (1 + 1/(4N)), where N is the number of tested voxels. However, this factor is very close to 1, and the resulting change is negligible.
References

- (2010) Coefficient of variation. Encyclopedia of Research Design 1, pp. 169–171.
- (2018) Fixing a broken ELBO. In International Conference on Machine Learning, pp. 159–168.
- (2003) How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. NeuroImage 20 (2), pp. 870–888.
- (2016) An integrated approach to correction for off-resonance effects and subject movement in diffusion MR imaging. NeuroImage 125, pp. 1063–1078.
- (1992) Error measures for generalizing about forecasting methods: empirical comparisons. International Journal of Forecasting 8 (1), pp. 69–80.
- (2008) Diffusion tensor imaging (DTI)-based white matter mapping in brain research: a review. Journal of Molecular Neuroscience 34 (1), pp. 51–61.
- (2009) Advanced normalization tools (ANTS). Insight Journal 2, pp. 1–35.
- (2017) CVAE-GAN: fine-grained image generation through asymmetric training. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2745–2754.
- (2018) Deeper image quality transfer: training low-memory neural networks for 3D images. In MICCAI, pp. 118–125.
- (2003) Inter-sequence and inter-imaging unit variability of diffusion tensor MR imaging histogram-derived metrics of the brain in healthy volunteers. American Journal of Neuroradiology 24 (4), pp. 638–643.
- (2014) Exploration of scanning effects in multi-site structural MRI studies. Journal of Neuroscience Methods 230, pp. 37–50.
- (2009) Looking for the optimal DTI acquisition scheme given a maximum scan time: are more b-values a waste of time? Magnetic Resonance Imaging 27 (2), pp. 163–175.
- (2012) Elements of Information Theory. John Wiley & Sons.
- (2016) Unsupervised 3-tissue response function estimation from single-shell or multi-shell diffusion MR data without a co-registered T1 image. In ISMRM Workshop on Breaking the Barriers of Diffusion MRI, Vol. 5.
- (2017) Harmonization of multi-site diffusion tensor imaging data. NeuroImage 161, pp. 149–170.
- (2010) Dependence of brain DTI maps of fractional anisotropy and mean diffusivity on the number of diffusion weighting directions. Journal of Applied Clinical Medical Physics 11 (1), pp. 176–190.
- (2013) The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage 80, pp. 105–124.
- (2016) Deep Learning. MIT Press.
- (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
- (2009) Measurement of MRI scanner performance with the ADNI phantom. Medical Physics 36 (6Part1), pp. 2193–2205.
- (2018) A longitudinal human phantom reliability study of multi-center T1-weighted, DTI, and resting state fMRI data. Psychiatry Research: Neuroimaging.
- (2017) beta-VAE: learning basic visual concepts with a constrained variational framework. ICLR 2 (5), pp. 6.
- (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134.
- (2005) Diffusional kurtosis imaging: the quantification of non-Gaussian water diffusion by means of magnetic resonance imaging. Magnetic Resonance in Medicine 53 (6), pp. 1432–1440.
- (2007) Modeling dendrite density from magnetic resonance diffusion measurements. NeuroImage 34 (4), pp. 1473–1486.
- (2014) Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data. NeuroImage 103, pp. 411–426.
- (2007) Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics 8 (1), pp. 118–127.
- (2006) Reliability in multi-site structural MRI studies: effects of gradient non-linearity correction on phantom and human data. NeuroImage 30 (2), pp. 436–443.
- (2018) Harmonizing diffusion MRI data across magnetic field strengths. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 116–124.
- (2019) Retrospective harmonization of multi-site diffusion MRI data acquired with different acquisition parameters. NeuroImage 184, pp. 180–200.
- (2017) Widespread white matter microstructural differences in schizophrenia across 4322 individuals: results from the ENIGMA Schizophrenia DTI Working Group. Molecular Psychiatry.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
- (2018) Spherical harmonic residual network for diffusion signal harmonization. arXiv preprint arXiv:1808.01595.
- (2012) Multicenter reliability of diffusion tensor imaging. Brain Connectivity 2 (6), pp. 345–355.
- (1993) Accuracy measures: theoretical and practical concerns. International Journal of Forecasting 9 (4), pp. 527–529.
- (2015) Harmonizing diffusion MRI data across multiple sites and scanners. In MICCAI, pp. 12–19.
- (2016) Inter-site and inter-scanner diffusion MRI data harmonization. NeuroImage 135, pp. 311–323.
- (2018) Multi-site harmonization of diffusion MRI data in a registration framework. Brain Imaging and Behavior 12 (1), pp. 284–295.
- (2018) Invariant representations without adversarial training. In Advances in Neural Information Processing Systems 31, pp. 9102–9111.
- (2018) Multi-shell diffusion MRI harmonisation and enhancement challenge (MUSHAC): progress and results. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 217–224.
- (2019) Cross-scanner and cross-protocol harmonisation of multi-shell diffusion MRI data: open challenge and evaluation results. In ISMRM 27th Annual Meeting & Exhibition.
- (2013) Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure. NeuroImage 78, pp. 16–32.
- (2010) Intercenter differences in diffusion tensor MRI acquisition. Journal of Magnetic Resonance Imaging 31 (6), pp. 1458–1468.
- (2013) Reproducibility and biases in high field brain diffusion MRI: an evaluation of acquisition and analysis variables. Magnetic Resonance Imaging 31 (6), pp. 827–839.
- (2018) Can we correct for interactions between subject motion and gradient-nonlinearity in diffusion MRI? In Proc. Int. Soc. Mag. Reson. Med., Vol. 1206.
- (1985) Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science.
- (2013) Advances in diffusion MRI acquisition and processing in the Human Connectome Project. NeuroImage 80, pp. 125–143.
- (2017) Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. In MICCAI, pp. 611–619.
- (2019) Cross-scanner and cross-protocol diffusion MRI data harmonisation: a benchmark database and evaluation of algorithms. NeuroImage.
- (2018) Cross-vendor and cross-protocol harmonisation of diffusion MRI data: a comparative study. In International Symposium on Magnetic Resonance in Medicine (Paris), Vol. 471.
- (2019) MRtrix3: a fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, pp. 116137.
- (2010) Identical, but not the same: intra-site and inter-site reproducibility of fractional anisotropy measures on two 3.0 T scanners. NeuroImage 51 (4), pp. 1384–1394.
- (2009) Global white matter abnormalities in schizophrenia: a multisite diffusion tensor imaging study. Schizophrenia Bulletin 37 (1), pp. 222–232.
- (2018) Diffusion MRI indices and their relation to cognitive impairment in brain aging: the updated multi-protocol approach in ADNI3. bioRxiv, pp. 476721.
- (2010) How does angular resolution affect diffusion imaging measures? NeuroImage 49 (2), pp. 1357–1371.
- (2013) Magnetic resonance field strength effects on diffusion measures and brain connectivity networks. Brain Connectivity 3 (1), pp. 72–86.
- (2012) How do spatial and angular resolution affect brain connectivity maps from diffusion MRI? In Biomedical Imaging (ISBI), 2012 9th IEEE International Symposium on, pp. 1–4.
- (2018) Challenges and opportunities in dMRI data harmonization. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 157–172.