Title: Scanner Invariant Representations for Diffusion MRI Harmonization
Authors: Daniel Moyer (1,2), Greg Ver Steeg (2), Chantal M. W. Tax (3), Paul M. Thompson* (1)
Institutions:
(1) Imaging Genetics Center, Mark and Mary Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
(2) Information Sciences Institute, Marina del Rey, CA, USA
(3) CUBRIC, School of Psychology, Cardiff University, Cardiff, United Kingdom
Keywords: Harmonization, Invariant Representation, Diffusion MRI
Word count: 5966
Figure and table count: 10

*Corresponding author email: pthomp@usc.edu

Abstract

Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation.

Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner invariant encodings of the imaging data.

Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context.

Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.

1 Introduction

Observational conditions may vary strongly within a medical imaging study. Researchers are often aware of these conditions (e.g., scanner, site, technician, facility, etc.) but are unable to modify the experimental design to compensate, due to cost or geographic necessity. In magnetic resonance imaging (MRI), variations in scanner characteristics such as the magnetic field strength, scanner vendor, receiver coil hardware, applied gradient fields, or primary image reconstruction methods may have strong effects on collected data [11, 15, 28]; multi-site studies in particular are subject to these effects [21, 31, 35, 55]. Data harmonization is the process of removing or compensating for this unwanted variation through post-hoc corrections. In the present work we focus on harmonization for diffusion MRI (dMRI), a modality known to have scanner/site biases [12, 16, 44, 45, 53, 54, 56, 57, 58] as well as several extra possible degrees of freedom with respect to protocol (e.g., angular resolution, b-values, gradient waveform choice, etc.).

Several prior methods approach diffusion MRI harmonization as a regression problem. Supervised image-to-image transfer methods have been proposed [9, 49], while for the unsupervised case site effects are often modeled as covariate effects, either at a summary statistic level [15, 55] or on the image directly [39]. All of these methods directly transform scans from one site/scanner context to another. Further, while all methods require paired scans to correctly validate their results (subjects or phantoms scanned on both target and reference scanners), supervised methods also require paired training data. Such data are expensive and difficult to collect at a large scale.

In this paper we instead frame the harmonization problem as an unsupervised image-to-image transfer problem, where harmonizing transformations may be learned without explicitly paired scans. We propose that a subset of harmonization solutions may be found by learning scanner-invariant representations, i.e., representations of the images that are uninformative of which scanner the images were collected on. These representations and the mappings between them may then be manipulated to provide image reconstructions that are minimally informative of their original collection site. We thus provide an encoder/decoder method for learning mappings to and from invariant representations computationally. This method has several advantages over regression-based methods, including a practical implementation that does not require paired data, i.e., a traveling phantom as training input, and an extension to a multi-site case.

We demonstrate our proposed method on the MICCAI Computational Diffusion MRI challenge dataset [41, 51, 59], showing substantial improvement compared to a recently published baseline method. We also introduce technical improvements to the training of neural architectures on diffusion-weighted data, and discuss the limitations and error modes of our proposed method.

1.1 Relevant Prior Work

Harmonization has been an acknowledged problem in MR imaging, and specifically in diffusion imaging, for some time [59]. Numerous studies have noted significant differences in diffusion summary measures (e.g., fractional anisotropy; FA) between scanners and sites [44, 53, 54]. Further protocol differences arise between sites due to limitations of the available scanners, unavoidable changes or upgrades in scanners or protocols, or when combining data retrospectively from multiple studies; effects of variations in scanning protocols on derived measures include effects of voxel size [45], b-values (the diffusion weightings used) [12, 45], and angular resolution or q-space sampling [16, 56, 57, 58] among other parameters. These problems were also examined by the MICCAI Computational Diffusion MRI 2017 and 2018 challenges [41, 51], which held an open comparison of methods for a supervised (paired) task.

Most previously proposed harmonization methods have relied on forms of regression. Harmonization of summary statistics (voxel-wise or region-wise) include random/mixed-effect models [55] as well as the scale-and-shift random effects regression of ComBat [15, 55]. This latter method was adapted from the genomics literature [27], and employs a variational Bayes scheme to learn model coefficients.

A more nuanced family of regression methods for diffusion imaging was recently introduced in a series of papers by Mirzaalian et al. [38, 39, 37]. This was later analyzed empirically in Karayumak et al. [30], which compared it against ComBat [27] for summary statistics. This family of methods computes a power spectrum from a spherical harmonic (SH) basis, then generates a template from these images using multi-channel diffeomorphic mappings. The resulting template is used to compute spatial maps of average SH power spectra by scanner/protocol, which are then used in a scale regression on individual subjects. While these papers take a very different approach from our own, the resulting method has a very similar usage pattern and output. We compare our approach directly to this method.

In a supervised (paired) task, direct image-to-image transfer has been explored both in the harmonization context [29, 34, 41] as well as the similar super-resolution context [9, 49]. This family of methods generally relies on high expressive-capacity function fitting (e.g., neural networks) to map directly between patches of pairs of images. This requires explicitly paired data, in that the same brains must be scanned at all sites. These methods perform well empirically, as tested by the CDMRI challenge [51], but require paired data in the training set. Our proposed method does not require paired data to train; however, in our opinion, best practice validation still requires paired data in the (holdout) test-set.

2 Theory

Our goal is to map diffusion MRI scans from one scanner/site context to another, so that given an image from one site we could predict accurately what it would have looked like were it collected at another site. In order to do this, we construct an encoding function that takes each image x to a corresponding vector z, and a conditional decoding function that takes each z and a specified site s back to an image x̂ (the “reconstruction” of the original image).

We further wish to remove trends and biases in x that are informative of s from the reconstruction x̂, so that all data remapped to a given site have the same bias (this is the harmonization task). In order to do so, it would be sufficient to constrain z, the intermediate representation, to be independent of s, denoted z ⊥ s. This is a hard constraint, and direct optimization of the encoder and decoder subject to that constraint would be non-trivially difficult.

Instead, we choose to relax the constraint to the mutual information I(z; s). Mutual information, taken from information theory, quantifies the amount of information shared between two variables, e.g., z and s. I(z; s) = 0 if and only if z ⊥ s, and so its minimization is a relaxation of our desired constraint. For a comprehensive reference on information theory, we refer the reader to Ch. 2 and Ch. 8 of Cover and Thomas [13].
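
To make this quantity concrete, the following is a small numerical sketch of mutual information for discrete variables (NumPy, with an illustrative two-by-two joint distribution of our own construction): independence yields exactly zero, and any dependence yields a positive value.

```python
import numpy as np

def mutual_information(joint):
    """I(Z; S) in nats for a discrete joint distribution p(z, s),
    given as a 2-D array of probabilities."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    pz = joint.sum(axis=1, keepdims=True)   # marginal p(z), column vector
    ps = joint.sum(axis=0, keepdims=True)   # marginal p(s), row vector
    mask = joint > 0
    # sum over p(z,s) log[ p(z,s) / (p(z) p(s)) ]
    return float((joint[mask] * np.log(joint[mask] / (pz @ ps)[mask])).sum())

# Independent: p(z, s) = p(z) p(s)  =>  I(Z; S) = 0
independent = np.outer([0.3, 0.7], [0.5, 0.5])
# Dependent: z is a noisy copy of s  =>  I(Z; S) > 0
dependent = np.array([[0.45, 0.05],
                      [0.05, 0.45]])
```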

After relaxing the independence constraint to mutual information, we would like to optimize the encoder and decoder so that z has minimal scanner-specific information, and so that x̂ has minimal differences from the original data. We demonstrate one solution for doing this using a variational bound on I(z; s), parameterizing the encoder and decoder using neural networks. The underlying theory is explored in Moyer et al. [40], where it is used in the context of algorithmic fairness. We reproduce it here for clarity, and further reinterpret their theoretical results in the imaging harmonization context, adding our own data processing inequality interpretation of test-time remapping.

Learning the mapping does not require matching pairs of data from pairs of sites. Best practices in validation and testing do require such data, but during training we can minimize I(z; s) without having examples of the same subject collected on different scanners. This is due to our bound of I(z; s) described in Eq. 1, which does not rely on inter-site correspondence.

At test time we can manipulate this mapping to reconstruct images at a different site than the one at which they were originally collected; we use this mapping as our harmonization tool. Again, by the data processing inequality, the amount of information these (new) reconstructed images contain about their original collection site is bounded by I(z; s), which we explicitly minimize.

2.1 Scanner Invariant Variational Auto-Encoders

We wish to learn a mapping from data x (associated with scanner s) to some latent space z such that z ⊥ s, yet also where z is maximally relevant to x. We start by relaxing z ⊥ s to I(z; s) = 0, and then bounding I(z; s) (detailed demonstration in Appendix A):

I(z; s) ≤ E_x[ KL[ q(z|x) ‖ q(z) ] ] − E_{x,s} E_{q(z|x)}[ log p(x|z, s) ] − H(x|s)    (1)

where q(z) is the empirical marginal distribution of z under q(z|x), the specified encoding which we control, and p(x|z, s) is a variational approximation to the conditional likelihood of x given z and s, again under q(z|x). Here, KL[·‖·] denotes the Kullback-Leibler divergence and H(·) denotes Shannon entropy.

The bound in Eq. 1 has three components: a conditional reconstruction term, a compressive divergence term, and a constant term denoting the conditional entropy of the scan given the scanner. We can interpret Eq. 1 as stating that the information in z about s is bounded above by the uncertainty of x given z and s, plus a penalty on the information in z and a constant representing the information s has about x overall.

Intuitively, this breakdown makes sense: if we reconstruct x conditioned on s, and are otherwise compressing z, the optimal compressive z has no information about s for reconstruction; the encoding can always remove information about s without penalty, because the reconstruction term is handed that information immediately. Further, if x is highly correlated with s, i.e., H(x|s) is very low, then our bound will be worse (larger).

We can now construct a variational encoding/conditional-decoding pair q(z|x) and p(x|z, s) that fits our variational bound of I(z; s) nicely, and that also fits our overall goal of re-mapping x accurately through z. Following Kingma and Welling [33], we use a generative log-likelihood as an objective:

E_{x,s}[ log p(x|s) ]    (2)

Here, however, we inject the conditional likelihood p(x|z, s) to match our bound for I(z; s). This also fits our desired test-time Markov chain x → z → x̂ (with decoding condition ŝ), where x̂ is the harmonized reconstruction at the new site ŝ.

Following the original VAE derivation (again in Kingma and Welling [33]), we can derive a similar VAE with s-invariant encodings by introducing the encoder q(z|x):

log p(x|s) = log ∫ p(x|z, s) p(z|s) dz    (3)
           = log E_{q(z|x)}[ p(x|z, s) p(z|s) / q(z|x) ]    (4)
           ≥ E_{q(z|x)}[ log p(x|z, s) ] − KL[ q(z|x) ‖ p(z|s) ]    (5)

We assume that the prior p(z|s) = p(z), i.e., that the conditional prior is equal to the marginal prior over z. In the generative context, this would be a strong model mis-specification: if we believe that there truly are generating latent factors, it is unlikely that those factors would be independent of s. However, we are not in such a generative frame, and instead would like to find a code that is invariant to s, so it is reasonable to use a prior that also has this property. Taking this assumption, we have

log p(x|s) ≥ E_{q(z|x)}[ log p(x|z, s) ] − KL[ q(z|x) ‖ p(z) ]    (6)

This is a conditional extension of the VAE objective from Kingma and Welling [33]. Putting this objective together with the penalty term in Eq.1, we have the following variational bound on the combined objective (up to a constant):

−KL[ q(z|x) ‖ p(z) ] + (1 + λ) E_{q(z|x)}[ log p(x|z, s) ] − λ E_x[ KL[ q(z|x) ‖ q(z) ] ],    (7)

where λ ≥ 0 weights the invariance penalty from Eq. 1.

We use the negation of Eq. 7 as the main loss term for learning q(z|x) and p(x|z, s), minimizing the negative of the bound. As described in Higgins et al. [22], an additional parameter β may be multiplied with the divergence from the prior (the first term of Eq. 7) for further control over the VAE prior.

As written in Eq. 7, the site variable s has ambiguous dimension. For applications with only two sites, s might be binary, while in the multi-site case, s might be a one-hot vector. We conduct experiments for both in Sections 3 and 4. More complex values of s are also possible, but we do not explore them in this paper.
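
A minimal sketch of the one-hot encoding in the multi-site case (the site labels are the four scanner/protocol combinations introduced later; the helper itself is our own illustration, not the paper's code):

```python
import numpy as np

# Hypothetical site labels; in the two-site case s could be a single bit,
# in the multi-site case a one-hot vector per sample.
sites = ["Prisma30", "Prisma60", "Connectom30", "Connectom60"]

def one_hot(site, sites=sites):
    """Encode a site label as a one-hot vector s."""
    v = np.zeros(len(sites))
    v[sites.index(site)] = 1.0
    return v
```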

2.2 Diffusion-space Error Propagation from SH representations

A convenient representation for diffusion-weighted MRI is the spherical harmonics (SH) basis [37]. These provide a countable set of basis functions from the sphere to ℝ, onto which projection is easy and often performed (e.g., in graphics). In this paper, our input data and the reconstruction error are computed with respect to the SH coefficients. However, for the eventual output, the data representation that we would like to use is not in this basis, but in the original image representation, which is conditional on a set of gradient vectors (b-vectors). These vectors are in general different for each subject due to spatial positioning and motion, and often change in number between sites/protocols. Rigid transformation and alignment of scan data, used in many pre-processing steps, also change vector orientation. While the L2 function norm is preserved under projection to the SH basis (i.e., asymptotically, SH projection is an isometry for L2 functions on the sphere), this is not the case for a norm on general finite sets of vectors.

To correct for this, we construct a projection matrix from the shared continuous SH basis to the diffusion gradient directions. This projection can then be used in conjunction with decoder output to map output SH coefficients to the original subject-specific b-vector representation. We allow each b = 0 channel to “pass through” the projection (mapped as identity), as these channels are without orientation. While we use the SH representation for both input and reconstruction (to leverage our invariance results), we augment the loss function from Eq. 1 with a “real-space” loss function, the reconstruction loss in each subject’s original domain. This encourages the overall loss function to be faithful to our use-case in the original image space.
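
The projection matrix can be sketched as an SH design matrix evaluated at a subject's gradient directions. The snippet below is a low-order (order-2, even orders only) illustration with randomly drawn stand-in b-vectors; it is not the paper's implementation or SH order.

```python
import numpy as np

def sh_basis_order2(bvecs):
    """Real spherical harmonics (orders 0 and 2) evaluated at unit b-vectors.
    Returns an (n_directions, 6) design matrix; even orders only, matching
    the antipodal symmetry of the dMRI signal."""
    v = bvecs / np.linalg.norm(bvecs, axis=1, keepdims=True)
    x, y, z = v.T
    return np.stack([
        0.5 * np.sqrt(1.0 / np.pi) * np.ones_like(x),    # Y_0^0
        0.5 * np.sqrt(15.0 / np.pi) * x * y,             # Y_2^-2
        0.5 * np.sqrt(15.0 / np.pi) * y * z,             # Y_2^-1
        0.25 * np.sqrt(5.0 / np.pi) * (3 * z**2 - 1),    # Y_2^0
        0.5 * np.sqrt(15.0 / np.pi) * x * z,             # Y_2^1
        0.25 * np.sqrt(15.0 / np.pi) * (x**2 - y**2),    # Y_2^2
    ], axis=1)

rng = np.random.default_rng(0)
bvecs = rng.normal(size=(30, 3))   # stand-in for one subject's gradient table
B = sh_basis_order2(bvecs)         # SH coefficients -> subject DWI directions
coeffs = rng.normal(size=6)        # e.g., decoder output in SH space
signal = B @ coeffs                # per-direction "real-space" prediction
```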

3 Methods

3.1 Computational Implementation

(a) Diagram of network configuration and losses.

(b) Diagram of training and testing schema for the proposed method.
Figure 1: The top diagram describes the architecture of our method. The network is composed of an encoder branch (at top left), a conditional decoder, and two augmenting losses. The DWI-space reconstruction loss in blue is computed using the injected subject-specific projection matrix (from SH to b-vector representation). The patch adversary in green attempts to predict whether a reconstructed patch is originally from a given site (“remapped” vs. “real” patches). At test time the site id is manipulated to map data onto one specific site. The bottom diagram describes the training and testing schemas for the proposed method, in the left and right boxes respectively. Site bias is represented by the differing colors. In both configurations, inputs are mapped to a site-invariant space (the colorless center column of each box). The remaining (site-independent) information can then be reconstructed into an image, given a site. In the training configuration, data are remapped to their original site and the loss is calculated from Eq. 8 (secondary loss terms omitted from the figure). Weights are then trained using the derivative of the loss (backpropagation [47]). In the testing configuration, the data are mapped to a selected target site. The outputs in the right-hand column of the right box have bounded mutual information with their original site, vanishing with respect to the loss function.

We parameterize the encoder q(z|x) and decoder p(x|z, s) using neural networks, fitting their parameters by mini-batch gradient-based optimization. The loss in Eq. 7 is defined generally, and invariant representations may be learned using many different function parameterizations; however, the flexibility of neural networks as function approximators makes them ideal for this application. We apply these networks to small image patches, concatenating patch-wise outputs to create harmonized images. The overall architecture is shown in Figure 1a, and the training and testing configurations are diagrammed in Figure 1b, with exact parameters given in Section 3.4. We discuss the use of patches and its relative advantages and drawbacks in Section 5. Notably, in Figure 1b each sample consists of a single unpaired patch, and batches of data consist of patches and protocol identifiers (one-hot vectors). As diagrammed on the right-hand side of Figure 1b, protocol identifiers are manipulated at test time to produce harmonized reconstructions.

Our primary reconstruction loss is computed in the SH domain with respect to the entire patch. We then add a secondary loss function for the center voxel based on the SH-to-DWI projection, and an adversarial loss that attempts to predict which scanner/protocol each reconstructed patch is from (seen at the right of Figure 1a). We added this branch in order to provide additional information toward keeping remapped patches “reasonable” when remapping to new sites; this prediction can be performed without explicit pairing of patches. Our loss function is then, in abstract,

L = α L_SH + γ L_DWI + δ L_adv    (8)

where the first term is the SH reconstruction loss (using MSE), the second is the DWI-space loss, and the third is the adversarial loss on the SH reconstruction, with three hyper-parameters controlling the trade-offs between objectives. This loss function trivially extends from the single-site case (one target site to/from one base site) to a multi-site case, where s is categorical.
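
In code, this abstract combination might look like the sketch below; the weights are illustrative placeholders (not the paper's tuned values), and the adversarial term is passed in as a precomputed scalar whose sign convention depends on the alternating scheme described next.

```python
import numpy as np

def combined_loss(sh_pred, sh_true, dwi_pred, dwi_true, adv_loss,
                  gamma=1.0, delta=1.0):
    """Weighted sum of the three abstract terms: SH-domain MSE,
    DWI-space MSE, and an adversarial term. Weights are placeholders."""
    l_sh = np.mean((np.asarray(sh_pred) - np.asarray(sh_true)) ** 2)
    l_dwi = np.mean((np.asarray(dwi_pred) - np.asarray(dwi_true)) ** 2)
    return l_sh + gamma * l_dwi + delta * adv_loss

# Toy values: SH MSE = 0.5, DWI MSE = 0.25, adversarial term = 0.1
loss = combined_loss([1.0, 2.0], [1.0, 1.0], [0.5], [0.0], adv_loss=0.1)
```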

We use a standard adversarial training scheme for defining and minimizing the adversarial loss (see, e.g., Chapter 7.13 of Goodfellow, Bengio, and Courville [18]). The adversarial loss is the softmax cross-entropy loss of a secondary “adversary” network, shown in green in Figure 1a. We alternate between optimizing the primary network (minimizing Eq. 8) and the adversary (minimizing its cross-entropy loss).
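
The alternation schedule can be sketched as follows, with stand-in labels in place of actual gradient updates; the ten adversary updates per primary batch follow the ratio reported in Section 3.4.

```python
# Schematic alternation between the primary network and the adversary.
# Each list entry stands in for one optimization step (Adam in the paper).
def train(n_batches, adversary_steps_per_batch=10):
    schedule = []
    for batch in range(n_batches):
        schedule.append("primary")                  # minimize Eq. 8
        for _ in range(adversary_steps_per_batch):  # then refit the adversary
            schedule.append("adversary")            # minimize its cross-entropy
    return schedule

sched = train(2)
```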

We optimize these networks by differentiating the loss functions with respect to the network weights (i.e., backpropagation [47]) and then using the Adam optimizer [32], a first-order optimization method. Optimization is undertaken using mini-batches. To compute gradients of the divergences in Eq. 7 efficiently, we use the re-parameterization trick of Kingma and Welling [33], with a diagonal Gaussian conditional q(z|x) and a Gaussian prior p(z). We also use the closed-form bound for E_x[ KL[ q(z|x) ‖ q(z) ] ] from Moyer et al. [40].
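
A minimal NumPy sketch of the re-parameterization trick and the closed-form divergence from a diagonal Gaussian to a standard normal prior, as assumed above (our own helper functions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    through mu and sigma rather than through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) per sample:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

mu = np.zeros((4, 8))
log_var = np.zeros((4, 8))
z = reparameterize(mu, log_var)   # samples from the (here standard) posterior
```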

3.2 Data and Pre-processing

To evaluate our method, we use the 15 subjects from the 2018 CDMRI Challenge Harmonization dataset [50, 51]. These subjects were imaged on two different scanners: a 3 T Siemens “Connectom” and a 3 T Siemens Prisma. For each scanner, two separate protocols were collected: one that matches between the scanners at a low resolution, and one at a higher resolution that does not match. This results in four different “site” combinations, for which all subjects were scanned, resulting in sixty different acquisitions (15 subjects, 2 scanners, 2 protocols each). We split this into 9 training subjects, 1 validation subject, and 5 held-out test subjects.

The low-resolution matched protocol had an isotropic spatial resolution of 2.4 mm with 30 gradient directions at two shells (b = 1200 and 3000 s/mm²), as well as a minimum of 4 b = 0 acquisitions, at least one of these with reverse phase encoding. These volumes were then corrected for EPI distortions, subject motion, and eddy current distortions using FSL’s TOPUP/eddy [3, 4]. Subjects from the “Connectom” scanner were then registered to the “Prisma” scanner using an affine transformation, fit to a co-temporally acquired T1-weighted image volume (previously registered to each corresponding FA volume). The b-vectors were then appropriately rotated. In the case of the “Connectom” scanner, geometric distortions due to gradient non-linearities were corrected using in-house software [17, 46]. The high-resolution protocols are identical in pre-processing to their low-resolution counterparts, but have isotropic voxel sizes of 1.5 mm and 1.2 mm for the “Prisma” and “Connectom” scanners respectively, each with 60 gradient directions per shell and the same b-shell configuration (b = 1200 and 3000 s/mm²). We downsample the spatial resolution of the high-resolution scans to 2.4 mm isotropic to test the multi-task method, but keep the angular resolution differences. To simplify notation, we refer to the four scanner/protocol combinations by their scanner make and number of gradient directions: Prisma 30, Prisma 60, Connectom 30, and Connectom 60.

All scans were masked for white-matter tissue. This was done in order to focus our analysis on the tissue most commonly assessed using diffusion MRI (see, e.g., [6] for a review). We map each of these scans to a fixed-order SH representation for input into our method, but retain the original domain for training outputs. In the case of under-determined projections, we use the minimum-norm least-squares solution, which corresponds to the SVD solution (using the pseudo-inverse). This is well defined, unlike direct projection.
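
The minimum-norm behavior of the pseudo-inverse can be illustrated directly; this is a generic linear-algebra sketch, not the paper's projection code.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6))     # under-determined: 4 measurements, 6 coefficients
y = rng.normal(size=4)

c_min = np.linalg.pinv(A) @ y   # SVD / pseudo-inverse: the minimum-norm solution

# Any other exact solution differs by a null-space vector and has larger norm.
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]               # lies in the null space of A (A @ null_vec ~ 0)
c_other = c_min + null_vec
```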

3.3 Experimental Protocol

The original CDMRI 2018 challenge [51] specified three supervised tasks, mapping between one base “site” (Prisma 30) and the three target “sites” (Prisma 60, Connectom 30, and Connectom 60). We modify this task, removing correspondence/pairing knowledge between sites (keeping this information for validation and testing), and including the inverse mapping task (target to base). This results in six tasks, two for each target site.

We train a “single-site” network for each of the six tasks, learning representations for Prisma 30 and a single target site, as well as one “multi-site” variant across all six tasks. During training the method is not provided corresponding patches; it is only given individual patches. A single sample corresponds to one patch, not a pair of patches. Paired patches are only used to calculate error measures.

We measure the performance of each method on the holdout set of subjects using the Root Mean Squared Error (RMSE) between each method’s output and the ground truth target images in the original DWI basis (after pre-processing). For comparison we also include results from Mirzaalian et al. [39], which is the only other unsupervised method we are aware of in the literature.
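
A sketch of the RMSE metric as used here (our own helper, computed over flattened signal values):

```python
import numpy as np

def rmse(pred, target):
    """Root mean squared error between predicted and ground-truth signals."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return np.sqrt(np.mean((pred - target) ** 2))
```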

We further assess the performance of each method by estimating the fiber orientation distributions (FODs) for each reconstruction using Multi-Shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD) [26], with response functions estimated using the method proposed in Dhollander et al. [14]. For both of these steps we use the implementations from MRtrix3 [52]. For each FOD we compute the maxima at each voxel and compare each to the closest maximum of the ground-truth image to compute the angular error.

In order to assess the fidelity of common local diffusion model summary measures before and after harmonization, we measure the Mean Average Percent Error (Mean APE) and the Coefficient of Variation (CV) between method-estimated and observed summary measures, reported in Table 1. We measured Mean APE and CV for Fractional Anisotropy, Mean Diffusivity, Mean Kurtosis [24], and Return-to-Origin Probability (RTOP) [43]. This mirrors the analysis in Ning et al. 2019 [42].
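
Hypothetical helpers for the two summary-measure metrics, following the definitions above (APE reported as a decimal, CV as RMSE divided by the observed sample mean):

```python
import numpy as np

def mean_ape(estimated, observed):
    """Mean absolute percent error, reported as a decimal (1.0 == 100%)."""
    estimated = np.asarray(estimated, float)
    observed = np.asarray(observed, float)
    return np.mean(np.abs(estimated - observed) / np.abs(observed))

def coefficient_of_variation(estimated, observed):
    """RMSE divided by the observed sample mean ("Relative RMSE")."""
    estimated = np.asarray(estimated, float)
    observed = np.asarray(observed, float)
    rmse = np.sqrt(np.mean((estimated - observed) ** 2))
    return rmse / np.mean(observed)

obs = np.array([0.5, 1.0, 2.0])   # toy "observed" summary values
est = np.array([0.55, 0.9, 2.2])  # toy "estimated" summary values
```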

In order to test the specific effects of our compressive regularizations, we conducted two ablation tests of our method, comparing it to “regular” networks with the parameters described in Section 3.4. We re-trained both the single-site and multi-site methods with the invariance parameter set to 0, but otherwise the same settings. We further trained two more networks with both compressive weights (the invariance penalty and the divergence from the prior) set to 0. Effectively this ablates the added invariance-inducing compressive elements of the loss function. We then compared their performance by computing the voxel-wise difference in RMSE for each held-out test subject.

For the proposed methods and the corresponding ablated networks, we assessed the degree to which we removed site information from the learned representation z by attempting to predict s from z. If there is no information in z about s, then we would expect the optimal predictor to do no better than chance. To this end we trained feed-forward networks to predict s from z (“post-hoc adversaries”). As shown in Moyer et al. [40], the cross-entropy error of these networks bounds the mutual information I(z; s). The post-hoc adversaries had the same configuration as the patch adversaries (two 32-unit layers with the same activations and the softmax cross-entropy loss).

3.4 Configuration and Parameters

We implemented our method for image patches composed of a center voxel and each of its six immediate neighbors. Each of these voxels has two shells of DWI signal, which we mapped to a fixed-order SH basis, plus one b = 0 channel. Unravelling these patches and shells, the input is then a single vector.

We use three-layer fully connected neural networks for the encoder and the conditional decoder, with decreasing hidden-layer widths in the encoder and the reverse ordering in the decoder. The latent code is parameterized by a diagonal Gaussian layer. This layer is then concatenated with the scanner/protocol one-hot representation s and input into the decoder. We use nonlinear transformations at each hidden layer, with a sigmoid output from the encoder for the variance of the Gaussian layer. The adversary is a fully connected two-layer network with 32 units at each layer.
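
The encoder/decoder data flow can be sketched as below; every layer width, the input size, and the latent size are illustrative assumptions, not the paper's exact configuration, and tanh stands in for whichever nonlinearity was actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Fully connected layers with a tanh nonlinearity (illustrative only)."""
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    return x @ weights[-1]

d_in, d_z, n_sites = 217, 32, 4   # hypothetical sizes
enc_W = [rng.normal(size=s) * 0.1 for s in [(d_in, 128), (128, 64), (64, 2 * d_z)]]
dec_W = [rng.normal(size=s) * 0.1 for s in [(d_z + n_sites, 64), (64, 128), (128, d_in)]]

x = rng.normal(size=(5, d_in))                    # batch of unravelled SH patches
stats = mlp(x, enc_W)
mu, log_var = stats[:, :d_z], stats[:, d_z:]      # diagonal Gaussian code parameters
s = np.eye(n_sites)[rng.integers(0, n_sites, 5)]  # one-hot site, manipulable at test time
x_hat = mlp(np.concatenate([mu, s], axis=1), dec_W)  # conditional reconstruction
```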

For each task we train our network for 1000 epochs, which took 19 hours in the pair-wise case on a standard desktop (32 GB of CPU RAM, 4 cores) equipped with an external Nvidia Titan-Xp with 12 GB of RAM, using TensorFlow. We loosely tune the hyper-parameters so that the loss terms are approximately of the same order of magnitude, and use the same values for both the pair-wise tasks and the multi-task experiments. We use the Adam optimizer with a batch size of 128. For each batch provided for primary-network training we provide 10 epochs of training for the adversary.

4 Results

Figure 2: Here we plot the voxel-wise performance as measured by RMSE (top) and angular deflection (bottom), i.e., the angular difference (in degrees) between the global maxima of the fiber orientation distributions recovered by MSMT-CSD [26] from the ground-truth and reconstructed images. This is shown for the Mirzaalian et al. [39] method as well as our two proposed methods on each of the six harmonization tasks (Prisma 30 to and from each of the other scanner/site combinations). All RMSE values are calculated in the original signal representation. In both plots, lower is better. For the angular deflection error, upper-percentile data points are plotted in red.
                               FA            MD            MK            RTOP
to/from P30    Method          APE    CV     APE    CV     APE    CV     APE    CV

Connectom 30
  to           Mirzaalian      0.46   0.48   0.42   0.80   0.99   1.02   0.19   1.22
               Single-task     0.25   0.26   0.12   0.17   3.37   0.30   0.11   0.28
               Multi-task      0.28   0.30   0.12   0.16   3.15   0.28   0.11   0.16
  from         Mirzaalian      0.50   0.39   0.52   0.59   0.96   1.05   0.22   0.26
               Single-task     0.29   0.24   0.22   0.18   3.72   0.30   0.13   0.18
               Multi-task      0.30   0.27   0.21   0.17   3.90   0.31   0.12   0.17

Prisma 60
  to           Mirzaalian      0.52   0.55   0.60   0.67   0.99   1.05   0.29   0.41
               Single-task     0.34   0.37   0.12   0.23   3.44   0.31   0.14   1.47
               Multi-task      0.34   0.37   0.12   0.19   3.17   0.29   0.13   0.45
  from         Mirzaalian      0.64   0.45   0.48   0.44   0.96   0.98   0.22   0.28
               Single-task     0.41   0.30   0.38   0.15   0.48   0.14   0.09   0.16
               Multi-task      0.42   0.32   0.38   0.15   0.45   0.13   0.08   0.11

Connectom 60
  to           Mirzaalian      0.86   0.79   0.88   0.92   1.01   1.05   1.11   26.18
               Single-task     0.35   0.36   0.26   0.81   3.22   0.35   0.16   0.50
               Multi-task      0.35   0.36   0.14   0.23   3.31   0.36   0.14   0.37
  from         Mirzaalian      3.19   1.26   4.64   5.97   0.88   0.90   0.63   0.69
               Single-task     1.86   0.40   2.93   0.31   4.44   0.27   0.10   0.17
               Multi-task      1.77   0.38   2.84   0.29   4.56   0.27   0.10   0.32
Table 1: Here we report the mean Absolute Percent Error (APE) and mean Coefficient of Variation (CV) per voxel for each of the methods for four common diffusion summary measures: Fractional Anisotropy (FA), Mean Diffusivity (MD), Mean Kurtosis (MK) [24], and Return-to-Origin Probability (RTOP) [43]. The APE measure is the same as the error metric reported in Ning et al. [42]; as in that work, we report values as decimals (where 1.0 corresponds to 100%), and not actual percentages. It is well known that the APE measure is biased towards methods reporting smaller values [5, 36]. We therefore also report the Coefficient of Variation, computed by dividing the RMSE by the observed sample mean. This measure is also sometimes referred to as the Relative RMSE.
to/from P30    Method          FA PE    MD PE    MK PE    RTOP PE

Connectom 30
  to           Mirzaalian      -0.27    -0.26    -0.87    -0.09
               Single-task     -0.05     0.02     3.31    -0.01
               Multi-task      -0.11     0.03     3.04    -0.03
  from         Mirzaalian       0.22    -0.45    -0.95     0.20
               Single-task      0.10     0.18     3.57    -0.07
               Multi-task       0.04     0.15     3.79    -0.04

Prisma 60
  to           Mirzaalian      -0.15    -0.58    -0.93    -0.19
               Single-task     -0.15    -0.00     3.38     0.00
               Multi-task      -0.14     0.05     3.03    -0.05
  from         Mirzaalian       0.48    -0.22    -0.94     0.20
               Single-task      0.18     0.29     0.44     0.03
               Multi-task       0.15     0.32     0.33    -0.01

Connectom 60
  to           Mirzaalian       0.23    -0.88    -0.91     0.36
               Single-task      0.02     0.16     3.06    -0.04
               Multi-task      -0.02     0.01     3.16    -0.00
  from         Mirzaalian       2.32     3.65    -0.73    -0.62
               Single-task      1.70     2.88     4.33    -0.02
               Multi-task       1.56     2.77     4.48     0.01

Negative PE implies Actual Value > Estimated Value.
Table 2: Here we report the mean Percent Error (PE) per voxel for each of the methods for four common diffusion summary measures. Negative PE implies that the value from the real data was greater than the value from the harmonization method.
(a) Target image (Prisma 30).
(b) Reconstruction using the Mirzaalian et al. method.
(c) Reconstruction using the single-site method.
(d) Whole brain, with the location of the subset shown in blue.
Figure 3: Here we plot exemplar FOD glyphs (estimated via MSMT-CSD) from the actual data, a reconstruction using the Mirzaalian et al. method, and a reconstruction using the proposed single-site method. Inputs to each reconstruction were the data from the Prisma 60 protocol/site. The background colors represent the direction of the FOD maxima.

Figure 2 plots the root mean squared error (RMSE) by voxel of the baseline, the single-site proposed method, and the multi-site proposed method, as evaluated on the held-out test subjects in the original signal representation. Our proposed methods show improvement over the baseline method in each case. In the pair-wise task between similar protocols (mapping between Prisma 30 and Connectom 30), these improvements have non-overlapping interquartile ranges. For dissimilar protocols, i.e., mapping between Prisma 30 and Prisma 60 or Connectom 60, our proposed method shows improvements, though the difference is less pronounced. Surprisingly, for higher resolution target images the multi-site method performs as well as or better than the pair-wise method and the baseline; this may be because the multi-task method receives many more volumes overall, allowing it to gather more information (albeit biased by other scanners) or preventing it from overfitting.

Figure 2 plots the voxel-wise angular deflection of each method, as measured by MSMT-CSD. For Connectom 30 and Prisma 60, both to and from Prisma 30, all three methods are comparable, with median errors well below , and percentile errors all slightly above . For mappings to the Connectom 60 protocol, the Mirzaalian et al. method has generally higher error, though the interquartile ranges still overlap for all methods. We plot a subset of the FODs from the original image and two of the reconstructions in Figure 3.
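The angular deflection between FOD peak orientations can be made concrete with a short sketch. The following is our own NumPy illustration (not the evaluation code used in this work); since FOD maxima are antipodally symmetric, u and -u describe the same orientation, so the absolute dot product is taken before the arc-cosine:

```python
import numpy as np

def angular_deflection_deg(u, v):
    """Angle (degrees) between two fiber orientations.

    FOD maxima are antipodally symmetric, so the absolute dot
    product is used, bounding the deflection to [0, 90] degrees.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    cos_theta = np.clip(np.abs(np.dot(u, v)), 0.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

Under this convention, perpendicular peaks give 90 degrees, while parallel or anti-parallel peaks both give 0 degrees.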

Figures 4, 5, and 6 show the spatial distribution of the error for each tested method on a single test subject, for mappings between Prisma 30 and Connectom 30, Prisma 60, and Connectom 60 respectively. For the Prisma 30 to Connectom 30 mapping, the Mirzaalian baseline [39] has higher overall error than the other methods, as shown by the overall coloring. The Mirzaalian baseline [39] and the multi-site proposed method show significant white matter patterning (though to varying degrees); ideally we would like to see uncorrelated residuals, like those shown by the single-site method.

The Connectom 60 error plots (Fig. 6) show strong spatial patterns at both the occipital and frontal poles, present for all methods. This large-scale effect is somewhat mitigated by the proposed methods, but it remains visible in all error distributions.

Table 1 reports the Absolute Percent Error (APE) and the estimated Coefficient of Variation (CV) for each method voxel-wise for four commonly used diffusion summary measures: Fractional Anisotropy (FA), Mean Diffusivity (MD), Mean Kurtosis (MK) [24], and Return-to-Origin-Probability (RTOP) [43]. It is well known that APE is biased towards methods reporting smaller values, and becomes inaccurate and inflated as actual observed values approach zero [5, 36]. In our context this means that for FA and MD, more spherical tensors are weighted strongly, while more anisotropic tensors are weighted less. Due to this bias, we also report the estimated Coefficient of Variation (CV) [1], which is the RMSE divided by the observed sample mean (see Footnote 2). CV has also been used to assess summary statistic variation between scanners [10, 53], and is sometimes referred to as Relative RMSE.
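For concreteness, the three error measures can be sketched in a few lines of NumPy. This is an illustration under our own naming, not the challenge evaluation code: APE and the signed PE of Table 2 are relative to the actual value, while CV divides the RMSE by the observed sample mean.

```python
import numpy as np

def error_metrics(actual, estimated):
    """Mean APE, mean signed PE, and CV over voxels for one summary
    measure (e.g. FA).  Values are decimals: 1.0 corresponds to 100%.
    """
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    diff = estimated - actual
    ape = float(np.mean(np.abs(diff) / np.abs(actual)))  # inflated as actual -> 0
    pe = float(np.mean(diff / actual))                   # negative: actual > estimate
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    cv = rmse / float(np.mean(actual))                   # "Relative RMSE"
    return ape, pe, cv
```

Note the asymmetry discussed above: underestimation of a non-negative measure contributes at most 1.0 (100%) to APE, while overestimation is unbounded.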

For all reported summary measures except MK, the proposed methods mapping to and from Connectom 30 perform well under both error measures. Mapping to both Connectom 60 and Prisma 60 from Prisma 30 has higher error than the converse (mapping from Connectom/Prisma 60 to Prisma 30); this fits our intuitions about upsampling, as both “60” protocols have higher angular resolution.

For Mean Kurtosis in remapped scans to/from Connectom 30, the APE is very high while the CV is surprisingly low. This pattern is also seen in FA, MD, and MK for scans mapped to Connectom 60, and for MK in scans mapped from Prisma 60. Because the APE error is above 100% (but CV is small), we believe that the methods are overestimating small actual values, since underestimation error is bounded at 100% for non-negative measures. In order to further verify this, we computed the Percent Error (without absolute values) shown in Table 2, indicating the average bias above or below the actual observed value. Since the MK PE for both proposed methods is very close to the APE, this indicates that on average small values are being overestimated. Discussion continues in Section 5.

Figure 4: We plot the spatial distribution of RMSE per voxel, displayed in slices centered at (x,y,z) = (10,22,35) for mappings from the Prisma 30 protocol to the Connectom 30 protocol, for (top row) the Mirzaalian [39] baseline, (center row) the single-site proposed method, and (bottom row) the multi-site proposed method. The color scale is the same between the rows, as well as between this figure, Fig. 5, and Fig. 6. All RMSE values are calculated in the original signal representation.
Figure 5: We plot the spatial distribution of RMSE per voxel, displayed in slices centered at (x,y,z) = (10,22,35) for mappings from the Prisma 30 protocol to the Prisma 60 protocol, for (top row) the Mirzaalian [39] baseline, (center row) the single-site proposed method, and (bottom row) the multi-site proposed method. The color scale is the same between the rows, as well as between this figure, Fig. 4, and Fig. 6. All RMSE values are calculated in the original signal representation.
Figure 6: We plot the spatial distribution of RMSE per voxel, displayed in slices centered at (x,y,z) = (10,22,35) for mappings from the Prisma 30 protocol to the Connectom 60 protocol, for (top row) the Mirzaalian [39] baseline, (center row) the single-site proposed method, and (bottom row) the multi-site proposed method. The color scale is the same between the rows, as well as between this figure, Fig. 4, and Fig. 5. All RMSE values are calculated in the original signal representation.

4.1 Ablation Test Results and Post-hoc Adversarial Accuracies

Invariance term ablated:
                        Connectom 30      Prisma 60        Connectom 60
                        To      From      To      From     To      From
Proposed Single-task    1.2     2.6       -1.4    7.6      -4.0    -46.8
Proposed Multi-task     6.3     2.6       16.7    12.25    -4.1    -65.7

Invariance and prior terms ablated:
                        Connectom 30      Prisma 60        Connectom 60
                        To      From      To      From     To      From
Proposed Single-task    6.1     1.9       -6.4    -1.5     -17.0   -55.7
Proposed Multi-task     94.3    89.4      -324.8  -354.8   -217.0  -90.0

Table 3: Here we report the mean per-voxel test set RMSE change between the regular model and two ablated models, where (top) the invariance term was set to zero, and (bottom) both the invariance term and the VAE prior term were set to zero. Negative values indicate that the regular model has better performance than the ablated models.
Post-hoc Adversarial Accuracy, Predicting Site from the Latent Code
                                      Best Possible   Full Model   Invariance Ablated   Invariance + Prior Ablated
Proposed Single-task, Connectom 30    0.5             0.61         0.63                 0.63
Proposed Single-task, Prisma 60       0.5             0.5          0.51                 0.54
Proposed Single-task, Connectom 60    0.5             0.63         0.68                 0.85
Proposed Multi-task                   0.25            0.41         0.41                 0.62
Table 4: Here we report the mean per-patch test set classification accuracy for an adversary trained post-hoc to predict the site variable from the latent representation, with the representation taken from the full model (center left column), the invariance-ablated model (center right column), and the invariance- and prior-ablated model (right column). The far left column shows the best possible performance for a perfectly invariant representation (chance accuracy). The architecture and training protocol for the adversaries is described in Section 3.3. Here lower is better (“closer to invariant”).

Table 3 shows the results of the ablation tests, in which we set either the invariance weight to zero, or both the invariance and prior weights to zero, effectively removing the invariance term alone, or together with the VAE prior term, from the primary loss function. We computed the difference in RMSE between the regular model and the ablated models on the hold-out test dataset. For Connectom 30, both to and from Prisma 30, both the invariance term and the prior term hinder reconstruction performance. When only the invariance term is removed this effect is slight, but when both are removed the effect is much stronger in the multi-task setting.
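Schematically, these ablations amount to zeroing scalar weights on the corresponding terms of the training objective. The weight and term names below are our own placeholders (the paper's symbols were not reproduced in this rendering), and the inputs stand in for the actual per-batch losses:

```python
def harmonization_loss(recon_nll, kl_prior, invariance_bound,
                       prior_weight=1.0, invariance_weight=1.0):
    """Composite objective, schematically: conditional reconstruction
    error plus a weighted VAE prior (KL) term plus a weighted bound
    on I(z, s).  The ablations of Table 3 correspond to
    invariance_weight=0 (top) and invariance_weight=prior_weight=0
    (bottom)."""
    return (recon_nll
            + prior_weight * kl_prior
            + invariance_weight * invariance_bound)
```

Zeroing a weight removes that term's gradient entirely, so the ablated models are retrained rather than merely re-evaluated.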

For Prisma 60 mappings, differences without the invariance term for the single-task method are relatively small, while removing both the invariance and prior terms leads to large increases in RMSE. For mappings to Connectom 60, differences in RMSE follow a similar pattern to Prisma 60, with performance decreasing when both the invariance and prior terms are ablated. For mappings from Connectom 60, performance drops strongly without the invariance term and drops further without the prior term.

Table 4 shows the results of the post-hoc adversarial predictions. Ablating the invariance term uniformly increases post-hoc adversarial accuracy, and additionally ablating the prior term increases accuracy further in both the Prisma 60 and Connectom 60 cases. For the multi-site model the prediction task is considerably harder, yet ablating both terms induces relatively high adversarial accuracy in the multi-task setting (62%).
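The post-hoc evaluation can be sketched as follows: fit any classifier on latent codes from a training split, then report its accuracy on held-out codes. Here a simple nearest-class-mean classifier stands in for the adversary architecture of Section 3.3 (which we do not reproduce); accuracy near chance suggests the code is close to site-invariant:

```python
import numpy as np

def posthoc_adversary_accuracy(z_train, s_train, z_test, s_test):
    """Accuracy of a simple post-hoc adversary predicting site s
    from latent code z, using nearest class-mean classification."""
    sites = np.unique(s_train)
    centroids = np.stack([z_train[s_train == s].mean(axis=0) for s in sites])
    # Distance of each test code to each site centroid.
    d = np.linalg.norm(z_test[:, None, :] - centroids[None, :, :], axis=2)
    pred = sites[np.argmin(d, axis=1)]
    return float(np.mean(pred == s_test))
```

If the codes were perfectly invariant, any such adversary would be held to chance accuracy (1/k for k balanced sites), matching the "best possible" column of Table 4.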

It is unsurprising that the invariance term does not aid in reconstruction for more similar protocols/scanners. Inclusion of this term should lead to more compression, and thus less reconstruction-relevant information in the representation, which in turn should lead to worse reconstruction. Further, this intuition also extends to the VAE prior term, which is a sufficient condition for the compressive portion of the invariance term (an encoding distribution that exactly matches the prior carries no information about the input). It is interesting, however, that these terms lead to increased performance for dissimilar protocols/scanners, i.e., Connectom 60 and Prisma 60. This indicates that these two loss terms are helpful for generalization.

5 Discussion

Our proposed harmonization method is unsupervised in that we do not require multiple images from the same subject or phantom from separate sites (i.e. paired data) in order to train our method. It is advisable to validate using such data, but due to the expense of collecting images from the same subject at varying sites it is advantageous to limit reliance on these data.

We believe it is important to understand the trade-off between reconstructive error and adversarial accuracy (e.g., between the reconstruction performance in Figure 2 and the adversarial accuracies in Table 4). It is obviously desirable to have high reconstructive accuracy, yet any attempt to induce invariance necessarily removes information (namely, site information), which reduces this accuracy. At the other end of the spectrum, there is always a family of perfectly invariant solutions (constant images), but these carry no information about the subjects, and subsequently have very high reconstructive error. It is thus important to consider both in selecting a remapping method.

Because the VAE prior is sufficient for compressing the representation, empirically we can create an acceptable method without the invariance term (i.e., with the invariance weight set to zero). This agrees with our intuition about Eq. 1, where compression plus conditional reconstruction is a proxy for invariance. It appears that the exact form of compression is less impactful. However, best performance is achieved by including an invariance term.

It is tempting to attempt to interpret the encodings, but such efforts should not be undertaken lightly. The encoding and decoding functions are designed to be non-linear, and individual components of the code may have interaction effects with other components. Further, the encodings are not images or patches, and lack a spatial domain. With careful construction analysis may be possible, but it is almost certainly non-trivial to do in the encoding domain.

In the current method we reconstruct images for a specific target site. We might instead look for a site-agnostic image. This is philosophically challenging: images are by nature collected at sites, and there are no site-less images. While we can manipulate our method to produce an average site, the output image may not be representative of any of the input images. It may be that all images must have site information, and that the quotient representation is not an image at all. On the other hand, for targets that are not images, e.g., prediction of pathology or prognosis, we can use the invariant representation to make unbiased (scanner-agnostic) predictions. In cases where the actual goal is not in the image domain (with the harmonization task as a pre-processing step), such a formulation may be beneficial, and could be built from our proposed method.

5.1 Limitations

This method cannot remove long-range scanner biases, due to its patch-based architecture. In theory, larger patches could avoid this limitation, but current hardware, in particular GPU memory and bus speeds, limits our computation to small patches for dMRI. Specific work in this domain has been done to reduce memory load [9], but the problem is by no means solved, especially for high angular resolution data such as the HCP dataset [48]. We hypothesize that a similar architecture with larger patches or whole images could rectify this particular problem; such architectures may become accessible with increased hardware capabilities or with better model compression and computational reduction techniques.

In the present work, the proposed method was only evaluated on white matter, and not in grey matter (neither cortex nor subcortical structures). White matter analyses generally focus on models of restricted axonal compartments (fibers), with derived measures such as fiber orientation distribution functions (FODs) and voxel-wise data with generally high anisotropy. Grey matter analyses in contrast may focus more on signal from isotropic compartments and/or dendritic arbors [25], and notably their models may be robust or vulnerable to site-bias in different ways. We have not considered grey matter signal or model summary statistics in this analysis, and thus we advise caution when applying this method to identified grey matter voxels. Further, as Table 1 and Table 2 show, for low values of Mean Kurtosis the proposed method is inaccurate and has a positive bias in reconstruction. We advise caution when using this method where the accuracy of these measures for low relative values is critical.

6 Conclusion

In the present work we have constructed a method for learning scanner-invariant representations. These representations can then be used to reconstruct images under a variety of different scanner conditions, and due to the data processing inequality the reconstruction’s mutual information with the original scanner will be low. We demonstrate that this is useful for the unsupervised case of data harmonization in diffusion MRI; critically, we can harmonize data without explicit pairing between images, reducing the need for paired data. Surprisingly, in some cases the multi-task method outperforms a pairwise method with similar architecture. This may hint at further benefits of learning shared representations.

Acknowledgements

This work was supported by NIH grants P41 EB015922, R01 MH116147, R56 AG058854, RF1 AG041915, and U54 EB020403, DARPA grant W911NF-16-1-0575, as well as the NSF Graduate Research Fellowship Program Grant Number DGE-1418060, and a GPU grant from NVidia.

The data were acquired at the UK National Facility for In Vivo MR Imaging of Human Tissue Microstructure located in CUBRIC funded by the EPSRC (grant EP/M029778/1), and The Wolfson Foundation. Prior consent was obtained from all patients before each scanning session, along with the approval of the Cardiff University School of Psychology ethics committee. Acquisition and processing of the data was supported by a Rubicon grant from the NWO (680-50-1527), a Wellcome Trust Investigator Award (096646/Z/11/Z), and a Wellcome Trust Strategic Award (104943/Z/14/Z). We acknowledge the 2017 and 2018 MICCAI Computational Diffusion MRI committees (Francesco Grussu, Enrico Kaden, Lipeng Ning, Jelle Veraart, Elisenda Bonet-Carne, and Farshid Sepehrband) and CUBRIC, Cardiff University (Derek Jones, Umesh Rudrapatna, John Evans, Greg Parker, Slawomir Kusmia, Cyril Charron, and David Linden).

Appendix A Derivation of the Bound in Eq. 1

This bound is also found in Moyer et al. [40], where it is used in the context of fair representations. We reproduce it here for clarity; the demonstration remains unchanged. All entropic quantities are with respect to the empirical encoding distribution unless otherwise stated.

From the tri-variate identities of mutual information, we have that I(z, x) - I(z, s) = I(z, x | s) - I(z, s | x). However, the distribution of z is exactly given by q(z | x) by construction, and thus the distribution of z solely depends on x, so I(z, s | x) = 0. Thus,

I(z, x | s) = I(z, x) - I(z, s).   (9)

We can then write the following:

I(z, x | s) = E_{x,s,z}[ log p(x, z | s) - log p(x | s) - log p(z | s) ]   (10)
            = E_{x,s,z}[ log p(x | z, s) - log p(x | s) ]   (11)
            = E_{x,s,z}[ log p(x | z, s) ] + H(x | s)   (12)
            = E_{x,s,z}[ log q(x | z, s) ] + E_{s,z}[ KL( p(x | z, s) || q(x | z, s) ) ] + H(x | s)   (13)
            >= E_{x,s,z}[ log q(x | z, s) ] + H(x | s).   (14)
This inequality is tight if and only if the variational approximation is correct; interpreted in an imaging context, if we cannot perform conditional reconstruction correctly this bound will not be tight.
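The tri-variate identity used above, I(z, x | s) = I(z, x) - I(z, s) when z is a deterministic function of x (so that I(z, s | x) = 0), can be checked numerically on a small discrete example. The following sketch is our own construction, not from the paper, and computes the mutual informations directly from an explicit joint distribution:

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in nats from a joint pmf given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A small joint p(x, s) over 4 x-values and 2 sites, with z = f(x).
p_xs = np.array([[0.10, 0.15],
                 [0.20, 0.05],
                 [0.05, 0.20],
                 [0.15, 0.10]])
f = np.array([0, 0, 1, 1])          # deterministic encoder z = f(x)

# Joint p(z, x): the mass of each x is routed to z = f(x).
p_zx = np.zeros((2, 4))
for x in range(4):
    p_zx[f[x], x] = p_xs[x].sum()

# Joint p(z, s): marginalize x out of p(z, x, s).
p_zs = np.zeros((2, 2))
for x in range(4):
    p_zs[f[x]] += p_xs[x]

I_zx = mutual_info(p_zx)
I_zs = mutual_info(p_zs)

# I(z, x | s), computed directly as sum_s p(s) * I(z, x | S = s).
p_s = p_xs.sum(axis=0)
I_zx_given_s = 0.0
for s in range(2):
    cond = np.zeros((2, 4))
    for x in range(4):
        cond[f[x], x] = p_xs[x, s] / p_s[s]
    I_zx_given_s += p_s[s] * mutual_info(cond)

assert abs(I_zx_given_s - (I_zx - I_zs)) < 1e-9
```

Because z here is a deterministic function of x, I(z, x) equals H(z); any residual dependence of z on s (here I_zs > 0) is exactly the amount the conditional information I(z, x | s) falls below I(z, x).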

Footnotes

  1. For a categorical variable taking value i out of k possible values, its corresponding one-hot vector is a k-dimensional vector with zeros in every entry except for the i-th entry, which is one.
  2. For unbiased estimates, the RMSE divided by the sample mean should further be multiplied by a correction factor that depends on the number of tested voxels. However, this factor is very close to 1, and the resulting change is negligible.
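Footnote 1 describes the standard one-hot encoding; a minimal sketch (using 0-based indexing, our convention):

```python
import numpy as np

def one_hot(i, k):
    """k-dimensional one-hot vector for category i (0-indexed):
    zeros everywhere except entry i, which is one."""
    v = np.zeros(k)
    v[i] = 1.0
    return v
```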

References

  1. H. Abdi (2010) Coefficient of variation. Encyclopedia of research design 1, pp. 169–171.
  2. A. Alemi, B. Poole, I. Fischer, J. Dillon, R. A. Saurous and K. Murphy (2018) Fixing a broken ELBO. In International Conference on Machine Learning, pp. 159–168.
  3. J. L. Andersson, S. Skare and J. Ashburner (2003) How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage 20 (2), pp. 870–888.
  4. J. L. Andersson and S. N. Sotiropoulos (2016) An integrated approach to correction for off-resonance effects and subject movement in diffusion MR imaging. Neuroimage 125, pp. 1063–1078.
  5. J. S. Armstrong and F. Collopy (1992) Error measures for generalizing about forecasting methods: empirical comparisons. International Journal of Forecasting 8 (1), pp. 69–80.
  6. Y. Assaf and O. Pasternak (2008) Diffusion tensor imaging (DTI)-based white matter mapping in brain research: a review. Journal of Molecular Neuroscience 34 (1), pp. 51–61.
  7. B. B. Avants, N. Tustison and G. Song (2009) Advanced normalization tools (ANTS). Insight Journal 2, pp. 1–35.
  8. J. Bao, D. Chen, F. Wen, H. Li and G. Hua (2017) CVAE-GAN: fine-grained image generation through asymmetric training. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2745–2754.
  9. S. B. Blumberg, R. Tanno, I. Kokkinos and D. C. Alexander (2018) Deeper image quality transfer: training low-memory neural networks for 3D images. In MICCAI, pp. 118–125.
  10. M. Cercignani, R. Bammer, M. P. Sormani, F. Fazekas and M. Filippi (2003) Inter-sequence and inter-imaging unit variability of diffusion tensor MR imaging histogram-derived metrics of the brain in healthy volunteers. American Journal of Neuroradiology 24 (4), pp. 638–643.
  11. J. Chen, J. Liu, V. D. Calhoun, A. Arias-Vasquez, M. P. Zwiers, C. N. Gupta, B. Franke and J. A. Turner (2014) Exploration of scanning effects in multi-site structural MRI studies. Journal of Neuroscience Methods 230, pp. 37–50.
  12. M. M. Correia, T. A. Carpenter and G. B. Williams (2009) Looking for the optimal DTI acquisition scheme given a maximum scan time: are more b-values a waste of time? Magnetic Resonance Imaging 27 (2), pp. 163–175.
  13. T. M. Cover and J. A. Thomas (2012) Elements of Information Theory. John Wiley & Sons.
  14. T. Dhollander, D. Raffelt and A. Connelly (2016) Unsupervised 3-tissue response function estimation from single-shell or multi-shell diffusion MR data without a co-registered T1 image. In ISMRM Workshop on Breaking the Barriers of Diffusion MRI, Vol. 5.
  15. J. Fortin, D. Parker, B. Tunc, T. Watanabe, M. A. Elliott, K. Ruparel, D. R. Roalf, T. D. Satterthwaite, R. C. Gur and R. E. Gur (2017) Harmonization of multi-site diffusion tensor imaging data. Neuroimage 161, pp. 149–170.
  16. M. Giannelli, M. Cosottini, M. C. Michelassi, G. Lazzarotti, G. Belmonte, C. Bartolozzi and M. Lazzeri (2010) Dependence of brain DTI maps of fractional anisotropy and mean diffusivity on the number of diffusion weighting directions. Journal of Applied Clinical Medical Physics 11 (1), pp. 176–190.
  17. M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster and J. R. Polimeni (2013) The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage 80, pp. 105–124.
  18. I. Goodfellow, Y. Bengio and A. Courville (2016) Deep Learning. MIT Press.
  19. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  20. J. L. Gunter, M. A. Bernstein, B. J. Borowski, C. P. Ward, P. J. Britson, J. P. Felmlee, N. Schuff, M. Weiner and C. R. Jack (2009) Measurement of MRI scanner performance with the ADNI phantom. Medical Physics 36 (6Part1), pp. 2193–2205.
  21. C. Hawco, J. D. Viviano, S. Chavez, E. W. Dickie, N. Calarco, P. Kochunov, M. Argyelan, J. Turner, A. K. Malhotra and R. W. Buchanan (2018) A longitudinal human phantom reliability study of multi-center T1-weighted, DTI, and resting state fMRI data. Psychiatry Research: Neuroimaging.
  22. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed and A. Lerchner (2017) beta-VAE: learning basic visual concepts with a constrained variational framework. ICLR 2 (5), pp. 6.
  23. P. Isola, J. Zhu, T. Zhou and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134.
  24. J. H. Jensen, J. A. Helpern, A. Ramani, H. Lu and K. Kaczynski (2005) Diffusional kurtosis imaging: the quantification of non-Gaussian water diffusion by means of magnetic resonance imaging. Magnetic Resonance in Medicine 53 (6), pp. 1432–1440.
  25. S. N. Jespersen, C. D. Kroenke, L. Østergaard, J. J. Ackerman and D. A. Yablonskiy (2007) Modeling dendrite density from magnetic resonance diffusion measurements. Neuroimage 34 (4), pp. 1473–1486.
  26. B. Jeurissen, J. Tournier, T. Dhollander, A. Connelly and J. Sijbers (2014) Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data. NeuroImage 103, pp. 411–426.
  27. W. E. Johnson, C. Li and A. Rabinovic (2007) Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics 8 (1), pp. 118–127.
  28. J. Jovicich, S. Czanner, D. Greve, E. Haley, A. van Der Kouwe, R. Gollub, D. Kennedy, F. Schmitt, G. Brown and J. MacFall (2006) Reliability in multi-site structural MRI studies: effects of gradient non-linearity correction on phantom and human data. Neuroimage 30 (2), pp. 436–443.
  29. S. C. Karayumak, M. Kubicki and Y. Rathi (2018) Harmonizing diffusion MRI data across magnetic field strengths. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 116–124.
  30. S. C. Karayumak (2019) Retrospective harmonization of multi-site diffusion MRI data acquired with different acquisition parameters. Neuroimage 184, pp. 180–200.
  31. S. Kelly, N. Jahanshad, A. Zalesky, P. Kochunov, I. Agartz, C. Alloza, O. Andreassen, C. Arango, N. Banaj and S. Bouix (2017) Widespread white matter microstructural differences in schizophrenia across 4322 individuals: results from the ENIGMA schizophrenia DTI working group. Molecular Psychiatry.
  32. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  33. D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  34. S. Koppers (2018) Spherical harmonic residual network for diffusion signal harmonization. arXiv preprint arXiv:1808.01595.
  35. V. A. Magnotta, J. T. Matsui, D. Liu, H. J. Johnson, J. D. Long, B. D. Bolster Jr, B. A. Mueller, K. Lim, S. Mori and K. G. Helmer (2012) Multicenter reliability of diffusion tensor imaging. Brain Connectivity 2 (6), pp. 345–355.
  36. S. Makridakis (1993) Accuracy measures: theoretical and practical concerns. International Journal of Forecasting 9 (4), pp. 527–529.
  37. H. Mirzaalian (2015) Harmonizing diffusion MRI data across multiple sites and scanners. In MICCAI, pp. 12–19.
  38. H. Mirzaalian (2016) Inter-site and inter-scanner diffusion MRI data harmonization. Neuroimage 135, pp. 311–323.
  39. H. Mirzaalian (2018) Multi-site harmonization of diffusion MRI data in a registration framework. Brain Imaging and Behavior 12 (1), pp. 284–295.
  40. D. Moyer, S. Gao, R. Brekelmans, A. Galstyan and G. Ver Steeg (2018) Invariant representations without adversarial training. In Advances in Neural Information Processing Systems 31, pp. 9102–9111.
  41. L. Ning, E. Bonet-Carne, F. Grussu, F. Sepehrband, E. Kaden, J. Veraart, S. B. Blumberg, C. S. Khoo, M. Palombo and J. Coll-Font (2018) Multi-shell diffusion MRI harmonisation and enhancement challenge (MUSHAC): progress and results. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 217–224.
  42. L. Ning, E. Bonet-Carne, F. Grussu, F. Sepehrband, E. Kaden, J. Veraart, S. B. Blumberg, C. S. Khoo, M. Palombo and J. Coll-Font (2019) Cross-scanner and cross-protocol harmonisation of multi-shell diffusion MRI data: open challenge and evaluation results. In ISMRM 27th Annual Meeting & Exhibition.
  43. E. Özarslan, C. G. Koay, T. M. Shepherd, M. E. Komlosh, M. O. İrfanoğlu, C. Pierpaoli and P. J. Basser (2013) Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure. NeuroImage 78, pp. 16–32.
  44. E. Pagani, J. G. Hirsch, P. J. Pouwels, M. A. Horsfield, E. Perego, A. Gass, S. D. Roosendaal, F. Barkhof, F. Agosta and M. Rovaris (2010) Intercenter differences in diffusion tensor MRI acquisition. Journal of Magnetic Resonance Imaging 31 (6), pp. 1458–1468.
  45. N. D. Papinutto, F. Maule and J. Jovicich (2013) Reproducibility and biases in high field brain diffusion MRI: an evaluation of acquisition and analysis variables. Magnetic Resonance Imaging 31 (6), pp. 827–839.
  46. S. Rudrapatna, G. Parker, J. Roberts and D. Jones (2018) Can we correct for interactions between subject motion and gradient-nonlinearity in diffusion MRI? In Proc. Int. Soc. Mag. Reson. Med., Vol. 1206.
  47. D. E. Rumelhart, G. E. Hinton and R. J. Williams (1985) Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science.
  48. S. N. Sotiropoulos, S. Jbabdi, J. Xu, J. L. Andersson, S. Moeller, E. J. Auerbach, M. F. Glasser, M. Hernandez, G. Sapiro and M. Jenkinson (2013) Advances in diffusion MRI acquisition and processing in the human connectome project. Neuroimage 80, pp. 125–143.
  49. R. Tanno, D. E. Worrall, A. Ghosh, E. Kaden, S. N. Sotiropoulos, A. Criminisi and D. C. Alexander (2017) Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. In MICCAI, pp. 611–619.
  50. C. M. Tax, F. Grussu, E. Kaden, L. Ning, U. Rudrapatna, J. Evans, S. St-Jean, A. Leemans, S. Koppers and D. Merhof (2019) Cross-scanner and cross-protocol diffusion MRI data harmonisation: a benchmark database and evaluation of algorithms. Neuroimage.
  51. C. M. Tax, F. Grussu, E. Kaden, L. Ning, U. Rudrapatna, J. Evans, S. St-Jean, A. Leemans, S. Puch and M. Rowe (2018) Cross-vendor and cross-protocol harmonisation of diffusion MRI data: a comparative study. In International Symposium on Magnetic Resonance in Medicine (Paris), Vol. 471.
  52. J. Tournier, R. Smith, D. Raffelt, R. Tabbara, T. Dhollander, M. Pietsch, D. Christiaens, B. Jeurissen, C. Yeh and A. Connelly (2019) MRtrix3: a fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, pp. 116137.
  53. C. Vollmar (2010) Identical, but not the same: intra-site and inter-site reproducibility of fractional anisotropy measures on two 3.0 T scanners. Neuroimage 51 (4), pp. 1384–1394.
  54. T. White (2009) Global white matter abnormalities in schizophrenia: a multisite diffusion tensor imaging study. Schizophrenia Bulletin 37 (1), pp. 222–232.
  55. A. Zavaliangos-Petropulu, T. M. Nir and S. I. Thomopoulos (2018) Diffusion MRI indices and their relation to cognitive impairment in brain aging: the updated multi-protocol approach in ADNI3. bioRxiv, pp. 476721.
  56. L. Zhan, A. D. Leow, N. Jahanshad, M. Chiang, M. Barysheva, A. D. Lee, A. W. Toga, K. L. McMahon, G. I. de Zubicaray and M. J. Wright (2010) How does angular resolution affect diffusion imaging measures? Neuroimage 49 (2), pp. 1357–1371.
  57. L. Zhan, B. A. Mueller, N. Jahanshad, Y. Jin, C. Lenglet, E. Yacoub, G. Sapiro, K. Ugurbil, N. Harel and A. W. Toga (2013) Magnetic resonance field strength effects on diffusion measures and brain connectivity networks. Brain Connectivity 3 (1), pp. 72–86.
  58. L. Zhan (2012) How do spatial and angular resolution affect brain connectivity maps from diffusion MRI? In Biomedical Imaging (ISBI), 2012 9th IEEE International Symposium on, pp. 1–4.
  59. A. H. Zhu, D. C. Moyer, T. M. Nir, P. M. Thompson and N. Jahanshad (2018) Challenges and opportunities in dMRI data harmonization. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 157–172.