Adaptive Compressed Sensing MRI with Unsupervised Learning


Cagla D. Bahadir, Adrian V. Dalca, and Mert R. Sabuncu. Cagla D. Bahadir and Mert R. Sabuncu are with the Meinig School of Biomedical Engineering, Cornell University. Adrian V. Dalca is also with the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology and the A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital. Mert R. Sabuncu is also with the School of Electrical and Computer Engineering, Cornell University. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Abstract

In compressed sensing MRI, k-space measurements are under-sampled to achieve accelerated scan times. There are two fundamental problems in compressed sensing MRI: (1) where to sample and (2) how to reconstruct. In this paper, we tackle both problems simultaneously, using a novel unsupervised, end-to-end learning framework, called LOUPE. Our method trains a neural network model on a set of full-resolution MRI scans, which are retrospectively under-sampled and forwarded to an anti-aliasing model that computes a reconstruction, which is in turn compared with the input. In our experiments, we demonstrate that LOUPE-optimized under-sampling masks are data-dependent, varying significantly with the imaged anatomy, and perform well with different reconstruction methods. We present empirical results obtained with a large-scale, publicly available knee MRI dataset, where LOUPE offered the best reconstruction quality across different conditions. Even with an aggressive 8-fold acceleration rate, LOUPE's reconstructions contained much of the anatomical detail that was missed by alternative masks and reconstruction methods. Our experiments also show how LOUPE yielded optimal under-sampling patterns that were significantly different for brain vs knee MRI scans. Our code is made freely available at https://github.com/cagladbahadir/LOUPE/.

Index Terms: Compressed Sensing, Magnetic Resonance Imaging, Deep Learning

I Introduction

Magnetic Resonance Imaging (MRI) is a ubiquitous, non-invasive, and versatile biomedical imaging technology. A central challenge in MRI is long scan times, which constrain accessibility and increase costs. One remedy is to accelerate MRI via compressed sensing [1, 2]. In compressed sensing MRI, k-space data (i.e., the Fourier transform of the image) is sampled below the Nyquist-Shannon rate [1], which is often referred to as "under-sampling." Given an under-sampled set of measurements, the objective is to "reconstruct" the full-resolution MRI. This is the main problem that most of the compressed sensing literature focuses on, and it is classically formulated as an optimization problem that trades off two objectives: one that quantifies the fit between the measurements and the reconstruction (sometimes referred to as data fidelity), and another that captures prior knowledge on the distribution of MRI data. This latter objective is often achieved via the incorporation of regularization terms, such as the total variation penalty and/or a sparsity-inducing norm on transformation coefficients, like wavelets or a dictionary decomposition [3]. Such regularization functions aim to capture different properties of real-world MR images, which in turn help the ill-posed reconstruction problem by guiding it toward more realistic solutions. One approach to develop a data-driven regularization function is to construct a sparsifying dictionary, for example, based on image patches [4, 5, 6].

Once the optimization problem is set up, the reconstruction algorithm often iteratively minimizes the regularization term(s) and enforces data fidelity in k-space. This is classically solved for each acquired dataset, independently, and from scratch - a process that can be computationally demanding. Many of the widely used reconstruction methods follow the classical optimization framework described above. SENSE [7], for example, is a popular image-domain based reconstruction method that uses the sensitivity encoding of multiple receiver coils and recovers the underlying pixel values by solving linear equations. This method is used in k-t SPARSE-SENSE [8], for accelerated dynamic MRI such as cardiac imaging and angiography. SPIRIT [9] is another popular iterative reconstruction technique that uses projections over convex sets and a conjugate gradient based optimization strategy. SPIRIT allows the use of flexible priors such as off-resonance correction and does not impose any constraints on the under-sampling pattern. In L1-SPIRIT [10], SPIRIT is implemented with pseudo-random sampling in k-space and employs a spatial wavelet transform to impose sparsity in the images. As we describe in the following section, there has been a recent surge in machine learning based methods that take a different, and often computationally more efficient, approach to solving the reconstruction problem. These techniques are gradually becoming more widespread and are expected to complement existing regularized optimization based approaches.

Another critical component of compressed sensing MRI is the under-sampling pattern. For a given acceleration rate, there is an exponentially large number of possible patterns one can implement for under-sampling. Each of these under-sampling patterns will in general lead to different reconstruction performance, which will depend on the statistics of the data and the utilized reconstruction method. One way to view this is to regard the reconstruction model as imputing the parts of the Fourier spectrum that were not sampled. For example, a frequency component that is constant for all possible datasets we might observe does not need to be sampled when we use a reconstruction model that can leverage this information. On the other hand, highly variable parts of the spectrum will likely need to be measured to achieve accurate reconstructions. This simple viewpoint ignores the potentially multi-variate nature of the distribution of k-space measurements. For instance, missing parts of the Fourier spectrum might be reliably imputed from other acquired parts, due to strong statistical dependencies. Widely used under-sampling strategies in compressed sensing MRI include Random Uniform [2], Variable Density [11] and equi-spaced Cartesian [12] with skipped lines. These techniques, however, are often implemented heuristically, not in a data-driven adaptive fashion, and their popularity is largely due to their ease of implementation and good performance when coupled with popular reconstruction methods. As we discuss below, some recent efforts compute optimized under-sampling patterns - a literature that is closely related to our primary objective.

In this paper, we propose to merge the two core problems of compressed sensing MRI: (1) optimizing the under-sampling pattern and (2) reconstruction. We consider these two problems simultaneously because they are closely inter-related. Specifically, the optimal under-sampling pattern should, in general, depend on the reconstruction method and vice versa. Inspired by recent developments in machine learning, we propose a novel end-to-end deep unsupervised learning strategy to solve the combined problem. We call our method LOUPE, which stands for Learning-based Optimization of the Under-sampling PattErn. A preliminary version of LOUPE was published as a conference paper [13]. In this paper, we present an extended treatment of the literature, more details on the methods, and new experimental results.

The rest of the paper is organized as follows. In the following two sections, we review two closely related bodies of work: a rapidly growing list of recent papers that use machine learning for efficient and accurate reconstruction; and several proposed approaches to optimize the under-sampling pattern. Section IV then presents the mathematical and implementation details of the proposed method, LOUPE. Section V presents our experiments, where we compare the performance obtained with several benchmark reconstruction methods and under-sampling patterns. Section VI concludes with a discussion.

II Machine Learning for Under-sampled Image Reconstruction

Over the last few years, machine learning methods have been increasingly used for medical image reconstruction [14, 15, 16], including compressed sensing MRI. An earlier example is the Bayesian non-parametric dictionary learning approach [17]. In this framework, dictionary learning is used in the reconstruction problem to obtain a regularization objective that is customized to the image at hand. The reconstruction problem is solved via a traditional optimization approach.

More recently, fueled by the success of deep learning, several machine learning techniques have been proposed to implement efficient and accurate reconstruction models. For instance, in a high profile paper, Zhu et al. demonstrated a versatile supervised learning approach, called AUTOMAP [16]. This approach uses a neural network model to map sensor measurements directly to reconstruction images, via a manifold representation. The experimental results show that AUTOMAP can produce high quality reconstructions that are robust to noise and other artifacts.

Another machine learning based reconstruction approach involves using a neural network to efficiently execute the iterations of the Alternating Direction Method of Multipliers (ADMM) method that solves a conventional optimization problem [18, 19]. This technique, called ADMM-Net, uses an "unrolled" neural network architecture to implement the iterative optimization procedure. In a similar approach, an unrolled neural network is used to implement the iterations of the Landweber method [20].

Another class of methods relies on the U-Net architecture [21] or its variants. For example, U-Net-like models have been trained in a supervised fashion to remove aliasing artifacts in the imaging domain [22, 23]. In these methods, the neural network takes as input a poor quality reconstruction (e.g., obtained via a simple inverse Fourier transform applied to zero-filled k-space data) and computes a high quality output, where the objective is to minimize the loss function (e.g., squared difference) between the output and the ground truth provided during training. Inspired by the success of Generative Adversarial Networks (GANs) [24], several groups have proposed the use of an adversarial loss, in addition to the more conventional loss functions, to obtain better quality reconstructions [25, 26, 27].

The method we propose in this paper builds on the neural net-based reconstruction framework, which offers a computationally efficient and differentiable anti-aliasing model. We append the anti-aliasing model with a retrospective under-sampling module, and learn both blocks simultaneously.

III Adaptive Under-sampling in Compressed Sensing MRI

The under-sampling pattern in k-space is closely related to reconstruction performance. Several under-sampling patterns are widely used, including Random Uniform [2], Variable Density [28, 11] and equi-spaced Cartesian [12] with skipped lines. Since the early work by Lustig et al. [28], random or stochastic under-sampling strategies are commonly used, as they induce noise-like artifacts that are often easier to remove during reconstruction. However, as several papers have pointed out, the quality of reconstruction can also be improved by adapting the under-sampling pattern to the data and application. We underscore that there is a related, but more general problem of optimizing projections in compressed sensing, which has received considerable attention [29, 30, 31, 32]. Below, we review adaptive under-sampling strategies in compressed sensing MRI, which is the focus of our paper.

Knoll et al. proposed an under-sampling strategy that follows the power spectrum of a provided example (template) image [33]. This method collects denser samples in parts of k-space that have most of the power concentrated, such as the low-frequency components of real-world MRIs. The authors argue that most of the anatomical variability is reflected in the phase and not the magnitude of the Fourier spectrum, justifying the decision to rely on the magnitude to construct the under-sampling pattern. In experiments, they apply their method to both knee and brain MRI scans, demonstrating better performance than parametric variable density under-sampling patterns [28].

Kumar Anand et al. presented a second-order cone optimization technique for finding the optimal under-sampling trajectory in volumetric MRI scans [34]. Their heuristic strategy accounts for coverage, hardware limitations and signal generation in a single convex optimization problem. Building on this approach, Curtis et al. employed a genetic algorithm to optimize sampling trajectories in k-space, while accounting for multi-coil configurations [35].

Seeger et al. employed a Bayesian inference approach to optimize Cartesian and spiral trajectories [36]. Their iterative, greedy technique seeks the under-sampling pattern in k-space that minimizes the uncertainty in the computed reconstruction, conditioned on the acquired measurements. In a recent paper that explores a similar direction, Haldar et al. proposed a new framework called Oracle-based Experiment Design for Imaging Parsimoniously Under Sparsity constraints (OEDIPUS) [37]. OEDIPUS solves an integer programming problem to minimize a lower (Cramer-Rao) bound on the variance of the reconstruction.

In their paper [38], Roman et al. offered a novel theoretical perspective on compressed sensing, which underscores the importance of adapting the under-sampling strategy to the structure in the data. The authors also presented an innovative multilevel sampling approach that depends on the resolution and structure of the data, and empirically outperforms competing under-sampling schemes.

An alternative class of data-driven approaches aims to find the optimal under-sampling pattern that yields the best reconstruction or imputation quality, using some full-resolution training data that is retrospectively under-sampled. The central challenge in this framework is to efficiently solve two computationally challenging nested optimization problems. The first, outer problem is to identify the optimal under-sampling mask or sampling trajectory that achieves best reconstruction quality, which is in turn the result of an inner optimization step. Several authors have proposed to solve this nested pair of problems via heuristic, greedy algorithms [39, 40, 41, 42]. In a recent empirical study, Zijlstra et al. demonstrated that a data-driven optimization approach [43, 40] can yield better reconstructions than those obtained with more conventional methods [28, 33].

In this paper, we propose a data-driven approach to optimize the under-sampling pattern. Instead of formulating the problem as two nested optimization problems, we leverage modern machine learning techniques and take an end-to-end learning perspective. Similar to recent deep learning based reconstruction techniques [22, 23], we build on a U-Net [21] architecture, which we append with a probabilistic under-sampling step that is also learned. We assume we are provided with full-resolution MRI data, which we retrospectively under-sample. Thus, our approach, LOUPE, is unsupervised.

IV Method

IV-A Learning-based Optimization of Under-sampling Pattern

LOUPE solves the two problems of compressed sensing simultaneously: (1) identifying the optimal under-sampling pattern; and (2) reconstructing from the under-sampled measurements. Building on stochastic strategies of compressed sensing, we consider a probabilistic mask $\boldsymbol{p}$, which describes an independent Bernoulli random variable at each point in the measured k-space. The probabilistic mask is defined on the full-resolution k-space grid, and at each point takes on a non-negative continuous probability value, i.e., $\boldsymbol{p} \in [0,1]^d$, where $d$ is the total number of points on the full-resolution grid. For example, for a $320 \times 320$ image, $d = 102{,}400$. Binary realizations drawn from $\boldsymbol{p}$ represent an under-sampling mask $\boldsymbol{m}$: i.e., $m_i \sim \mathcal{B}(p_i)$, where the sub-script $i$ indexes the grid points and $\mathcal{B}(p_i)$ denotes a Bernoulli random variable with parameter $p_i$. The binary mask has value 1 for grid points that are acquired and 0 for points that were not acquired.
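The Bernoulli mask construction above can be illustrated with a small numerical sketch. The grid size and Gaussian-shaped probabilities below are toy assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy probabilistic mask p on a small 8x8 k-space grid (hypothetical values):
# higher sampling probability near the center, i.e., at low frequencies.
d_side = 8
yy, xx = np.meshgrid(np.arange(d_side) - d_side // 2,
                     np.arange(d_side) - d_side // 2, indexing="ij")
p = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))          # entries in (0, 1]

# One binary under-sampling mask m is an independent Bernoulli draw per point.
m = (rng.random(p.shape) < p).astype(np.uint8)

# The expected number of acquired k-space points is sum_i p_i.
print(m.shape, float(p.sum()))
```

Averaging many such realizations of $\boldsymbol{m}$ recovers $\boldsymbol{p}$, which is what makes the expected sampling rate controllable through the probabilistic mask.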

Suppose we are provided with a collection of complex-valued full-resolution images, denoted as $\boldsymbol{x}_t$, where $t$ corresponds to the scan number and $T$ is the total number of scans. In this development, we assume that these data are in the image domain and not k-space, since in most real-world scenarios we have access to the images, and not the raw MRI data. However, modifying our approach to accept raw MRI measurements is straightforward and can produce improved results, since, for example, most saved image-domain data discard the imaginary parts in the estimates, which are often attributed to measurement error. In the LOUPE framework, we aim to solve the following problem:

$$\min_{\boldsymbol{p},\,\theta}\; \mathbb{E}_{\boldsymbol{m} \sim \mathcal{P}(\boldsymbol{p})} \left[ \lambda \|\boldsymbol{m}\|_1 + \sum_{t=1}^{T} \left\| A\big(\boldsymbol{F}^{H} \operatorname{diag}(\boldsymbol{m})\, \boldsymbol{F} \boldsymbol{x}_t;\, \theta\big) - \boldsymbol{x}_t \right\|_1 \right] \qquad (1)$$

where $\boldsymbol{F}$ stands for the (forward) Fourier transform matrix, $\boldsymbol{F}^{H}$ is the inverse Fourier transform matrix, $A(\cdot;\theta)$ denotes an anti-aliasing reconstruction function parameterized with $\theta$; and $\|\cdot\|_1$ denotes the L1 norm. There are two terms in Equation (1): the first is a sparsity inducing penalty, while the second quantifies the quality of the reconstruction. The hyper-parameter $\lambda$ controls the emphasis on the sparsity term, and higher values yield more aggressive acceleration rates.

The loss function involves an expectation over the random binary mask $\boldsymbol{m}$. We approximate this expectation using a Monte Carlo-based sample averaging strategy:

$$\min_{\boldsymbol{p},\,\theta}\; \lambda \|\boldsymbol{p}\|_1 + \frac{1}{K} \sum_{k=1}^{K} \sum_{t=1}^{T} \left\| A\big(\boldsymbol{F}^{H} \operatorname{diag}(\boldsymbol{m}^{(k)})\, \boldsymbol{F} \boldsymbol{x}_t;\, \theta\big) - \boldsymbol{x}_t \right\|_1 \qquad (2)$$

where $\boldsymbol{m}^{(k)}$ are independent realizations drawn from $\mathcal{P}(\boldsymbol{p})$, and $\operatorname{diag}(\cdot)$ denotes a diagonal matrix obtained by setting the diagonal to the argument vector. We have also expressed the first term in terms of the probabilities $\boldsymbol{p}$, since $\mathbb{E}[\|\boldsymbol{m}\|_1] = \|\boldsymbol{p}\|_1$. Similar to the re-parameterization trick used in variational techniques, such as the VAE [44], we can re-write Equation (2):

$$\min_{\boldsymbol{p},\,\theta}\; \lambda \|\boldsymbol{p}\|_1 + \frac{1}{K} \sum_{k=1}^{K} \sum_{t=1}^{T} \left\| A\big(\boldsymbol{F}^{H} \operatorname{diag}\big(\mathbb{1}[\boldsymbol{u}^{(k)} \leq \boldsymbol{p}]\big)\, \boldsymbol{F} \boldsymbol{x}_t;\, \theta\big) - \boldsymbol{x}_t \right\|_1 \qquad (3)$$

where $\boldsymbol{u}^{(k)}$ are independent realizations drawn from $\mathcal{U}[0,1]^d$, a spatially independent set of uniform random variables on $[0,1]$. The result of the element-wise inequality operation $\mathbb{1}[\boldsymbol{u}^{(k)} \leq \boldsymbol{p}]$ is 1 where the condition is satisfied and 0 otherwise. In Equation (3) the random draws are thus from a constant distribution (independent uniform) and the probabilistic mask $\boldsymbol{p}$ effects the thresholding operation.
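The re-parameterization in Equation (3) is easy to verify numerically: thresholding uniform noise at $\boldsymbol{p}$ reproduces Bernoulli($\boldsymbol{p}$) draws. The value of $\boldsymbol{p}$ below is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

d = 16
p = np.full(d, 0.25)        # probabilistic mask (uniform here, for illustration)
u = rng.random(d)           # u_i ~ Uniform[0, 1]: the fixed source of randomness

# Reparameterized Bernoulli draw: m_i = 1 if u_i <= p_i, else 0.
m = (u <= p).astype(np.float32)

# Averaging many such realizations recovers p, confirming the construction.
K = 20000
M = (rng.random((K, d)) <= p).astype(np.float32)
empirical_p = M.mean(axis=0)
print(float(np.abs(empirical_p - p).max()))   # close to zero
```

The key point is that the randomness now enters through $\boldsymbol{u}$, which does not depend on $\boldsymbol{p}$, so gradients with respect to $\boldsymbol{p}$ only need to flow through the (relaxed) thresholding.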

IV-B Implementation

Fig. 1: LOUPE architecture with two building blocks: the Under-sampling Pattern Optimization Network and the Anti-aliasing Network. The model creates an end-to-end, unsupervised learning framework for optimizing the under-sampling pattern while learning the parameters for faithful reconstructions. Red arrows denote 2D convolutional layers with a fixed kernel size, followed by Leaky ReLU activation and Batch Normalization. Green vertical arrows represent average pooling operations and yellow vertical arrows refer to up-sampling. The number of channels in the U-Net doubles after each average pooling layer. The U-Net also uses skip connections, depicted with gray horizontal arrows, that concatenate the corresponding layers to facilitate better information flow through the network.

In implementing LOUPE, we used a 2D U-Net architecture [21] for the anti-aliasing function, as depicted in Figure 1 and similar to other prior work [22, 23]. To enable a differentiable loss function, we relaxed the non-differentiable threshold operation of Equation (3) with a sigmoid. This relaxation is similar to the one used in recent Gumbel-softmax [45] and concrete distributions [46] in training neural networks with discrete representations. Our final loss function is:

$$\min_{\boldsymbol{p},\,\theta}\; \lambda \|\boldsymbol{p}\|_1 + \frac{1}{K} \sum_{k=1}^{K} \sum_{t=1}^{T} \left\| A\big(\boldsymbol{F}^{H} \operatorname{diag}\big(\sigma_s(\boldsymbol{p} - \boldsymbol{u}^{(k)})\big)\, \boldsymbol{F} \boldsymbol{x}_t;\, \theta\big) - \boldsymbol{x}_t \right\|_1 \qquad (4)$$

where $\sigma_s(a) = 1/(1 + e^{-sa})$ is a sigmoid with slope $s$. We implement $A(\cdot;\theta)$ as a neural network, described below.
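The effect of the sigmoid relaxation can be seen numerically: as the slope grows, the soft mask values approach the hard indicator of Equation (3). The values of $\boldsymbol{p}$, $\boldsymbol{u}$, and the slopes below are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(a, s):
    """Sigmoid with slope s; approaches the hard 0/1 threshold as s grows."""
    return 1.0 / (1.0 + np.exp(-s * a))

p = np.array([0.1, 0.5, 0.9])   # example mask probabilities (hypothetical)
u = np.array([0.3, 0.3, 0.3])   # example uniform draws

hard = (u <= p).astype(float)   # non-differentiable indicator, as in Eq. (3)
gaps = [float(np.abs(sigmoid(p - u, s) - hard).max()) for s in (1.0, 10.0, 100.0)]
print(gaps)                      # gap to the hard threshold shrinks as s grows
```

This is the same idea underlying the Gumbel-softmax and concrete-distribution relaxations cited in the text: a differentiable surrogate whose temperature (here, slope) trades off gradient quality against fidelity to the discrete variable.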

Figure 1 illustrates the LOUPE model. The network accepts complex 2D images represented as two channels, corresponding to the real and imaginary components. In our experiments we use images of size $320 \times 320$. The network applies a forward discrete Fourier transform, $\boldsymbol{F}$, converting the data into k-space. The k-space measurements are next under-sampled by a binary mask $\boldsymbol{m}$, which is a Monte Carlo realization of the probabilistic mask $\boldsymbol{p}$. The values of the probabilistic mask are treated as unknown parameters that are learned during the training process. The binary mask is multiplied element-wise with the k-space data (equivalent to retrospective under-sampling by inserting zeros at missing points), which is then passed to an inverse discrete Fourier transform $\boldsymbol{F}^{H}$. This image, which will typically contain aliasing artifacts, is then provided as input to the anti-aliasing network $A(\cdot;\theta)$. This network takes a complex-valued under-sampled image, represented as two channels, and aims to minimize the reconstruction error - the squared difference between the magnitude images of the ground truth and the reconstruction. We can view the entire pipeline as being made up of two building blocks: the first optimizing the under-sampling pattern, and the second solving the reconstruction problem.
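The forward pipeline just described (FFT, element-wise masking, inverse FFT, anti-aliasing, magnitude loss) can be sketched in a few lines. The image, mask, and pass-through anti-aliasing function below are toy stand-ins, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a complex-valued "image" and a binary k-space mask.
x = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
mask = (rng.random((32, 32)) < 0.25).astype(float)

def antialias(z):
    # Placeholder for the U-Net A(.; theta); a pass-through keeps this runnable.
    return z

k = np.fft.fft2(x)            # forward DFT (F)
k_us = mask * k               # element-wise masking = zero-filled under-sampling
x_alias = np.fft.ifft2(k_us)  # inverse DFT (F^H); typically contains aliasing
x_hat = antialias(x_alias)

# Reconstruction error on magnitude images, as in the training objective.
err = float(np.mean((np.abs(x_hat) - np.abs(x)) ** 2))
print(x_alias.shape, err)
```

Because every step here (FFT, element-wise product, network forward pass, loss) is differentiable once the mask is relaxed, the whole chain can be trained end-to-end with standard back-propagation.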

The model was implemented in Keras [47], with Tensorflow [48] as the back-end. Custom functions were adapted from the Neuron library [49]. ADAM [50] was used for optimization, and learning was terminated when the validation loss plateaued. Consistent with prior work [44, 45, 46], a single Monte Carlo realization was drawn for each training datapoint. We used a batch size of 12. We also empirically identified the value of $\lambda$ that yielded the desired sparsity level via an exhaustive search strategy. Our code is freely available at: https://github.com/cagladbahadir/LOUPE/

V Empirical Analysis

V-A Data

We conducted our experiments using the NYU fastMRI dataset [51] (fastmri.med.nyu.edu). This is a freely available, large-scale, public data set of knee MRI scans. The dataset comprises 2D coronal knee scans acquired with two pulse sequences that yield Proton Density (PD) and Proton Density Fat Suppressed (PDFS) weighted images. The multi-coil knee scans were collected on four different MRI scanners: Skyra (3T), Prisma (3T), Biograph mMR (3T) and Aera (1.5T). In our experiments, we used the provided emulated single-coil (ESC) k-space data, which were derived from raw 15-channel multi-coil data and linearly combined to fit the ground truth in a least-squares sense. We further constrained our analyses to the scans acquired with the Biograph mMR (3T), mainly to be able to conduct all presented analyses within a reasonable amount of time with the hardware resources we had access to. We used 100 volumes from the provided official training dataset as our training dataset, and split the provided validation data into two halves, using 10 volumes for validation and 10 volumes for test. We could not rely on the provided test data as they were not fully sampled. For the scans we analyzed, the sequence parameters were: echo train length of 4, matrix size of 320x320, fixed in-plane resolution and slice thickness, and no gap between slices. The time of repetition (TR) and echo time (TE) varied across scans. The number of slices per volume differed across the training, validation and test volumes.

Each set (training, validation and test) had differing slice sizes across volumes. After taking the inverse Fourier transform of the ESC k-space data, and rotating and flipping the images to match the orientation in the fastMRI paper, we cropped the central 320x320 region and normalized the magnitudes within each volume.
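The preprocessing steps above might be sketched as follows. The orientation fixes are omitted and the normalization convention (simple max scaling) is an assumption for illustration, not necessarily the one used for fastMRI:

```python
import numpy as np

def preprocess_slice(kspace, out=320):
    """Sketch of the described preprocessing: inverse FFT of ESC k-space,
    then a central out x out crop. Orientation fixes are data-set specific
    and omitted here."""
    img = np.fft.ifft2(kspace)
    h, w = img.shape
    top, left = (h - out) // 2, (w - out) // 2
    return img[top:top + out, left:left + out]

def normalize_volume_magnitude(vol):
    """Per-volume normalization of magnitudes (one simple choice: max scaling)."""
    mags = np.abs(vol)
    return mags / mags.max()

# Toy example: a 384x384 k-space slice cropped to 320x320.
k = np.fft.fft2(np.random.default_rng(1).standard_normal((384, 384)))
crop = preprocess_slice(k)
vol = normalize_volume_magnitude(np.stack([crop, crop]))
print(crop.shape, float(vol.max()))
```

Cropping in the image domain (rather than k-space) keeps the field of view consistent across volumes whose acquired matrix sizes differ.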

The U-Net architecture we employed was also used in the original NYU fastMRI study [51] for reconstruction, with the only difference that their implementation accepted a single-channel, real-valued input image. The slope of the sigmoid in Equation (4) was increased linearly after each epoch, to facilitate optimization and to better approximate the binary mask.

V-B Evaluation

We used three metrics in the quantitative evaluations of the reconstructions with respect to the ground truth images created from fully sampled measurements: (1) peak signal to noise ratio (PSNR), (2) structural similarity index (SSIM), and (3) normalized mean squared error (NMSE).

PSNR is a widely used metric for evaluating the quality of reconstructions in compressed sensing applications [52, 18], and is defined as:

$$\mathrm{PSNR}(\hat{\boldsymbol{x}}, \boldsymbol{x}) = 10 \log_{10} \frac{\max(|\boldsymbol{x}|)^2}{\frac{1}{d} \|\hat{\boldsymbol{x}} - \boldsymbol{x}\|_2^2} \qquad (5)$$

where $d$ is, as above, the total number of pixels in the full-resolution grid, and $\hat{\boldsymbol{x}}$ denotes the reconstruction of the ground truth full-resolution image $\boldsymbol{x}$.
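A minimal PSNR implementation is shown below. It uses the ground-truth peak as the MAX value, a common convention; the paper's exact normalization may differ:

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR in dB between ground-truth magnitude image x and reconstruction x_hat.
    Uses the peak of the ground truth as MAX (an assumed convention)."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(x.max() ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.random((64, 64))
light_noise = x + 0.01 * rng.standard_normal(x.shape)
heavy_noise = x + 0.10 * rng.standard_normal(x.shape)
print(psnr(x, light_noise), psnr(x, heavy_noise))  # less noise -> higher PSNR
```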

SSIM aims to quantify the perceived quality of an image [53]:

$$\mathrm{SSIM}(\hat{\boldsymbol{x}}, \boldsymbol{x}) = \frac{(2\mu_{\hat{x}}\mu_{x} + c_1)(2\sigma_{\hat{x}x} + c_2)}{(\mu_{\hat{x}}^2 + \mu_{x}^2 + c_1)(\sigma_{\hat{x}}^2 + \sigma_{x}^2 + c_2)} \qquad (6)$$

where $\mu_{x}$ and $\mu_{\hat{x}}$ are defined as the "local" average values for the original and reconstruction images, respectively. The local averages are computed within a small square neighborhood. Similarly, $\sigma_{x}^2$ and $\sigma_{\hat{x}}^2$ are the local variances; and $\sigma_{\hat{x}x}$ is the local covariance between the reconstruction and ground truth images. $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ are two constants that numerically stabilize the division, where $L$ is the dynamic range of the pixel values and $k_1$ and $k_2$ are user defined. We use the parameter settings provided in [51] for computing SSIM values, including the window size. The dynamic range of pixel values was computed over the entire volume.
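For intuition, the SSIM formula can be evaluated over a single global window. This is only a sketch of the formula's ingredients, not the windowed-and-averaged computation used for the reported numbers; the constants k1, k2 below are the standard defaults from Wang et al., not necessarily the values used in [51]:

```python
import numpy as np

def ssim_single_window(x, y, L=1.0, k1=0.01, k2=0.03):
    """SSIM over one window (the whole image); k1, k2 are assumed defaults."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()          # local covariance
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (x.var() + y.var() + c2))

rng = np.random.default_rng(0)
x = rng.random((32, 32))
print(ssim_single_window(x, x))                     # identical images -> 1.0
print(ssim_single_window(x, rng.random((32, 32))))  # unrelated images -> near 0
```

In practice, SSIM is computed per window across the image and the resulting map is averaged, which is what makes it sensitive to local structure rather than only global statistics.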

Finally, we also report NMSE values, a straightforward quantification of the deviation between the ground truth and reconstruction images:

$$\mathrm{NMSE}(\hat{\boldsymbol{x}}, \boldsymbol{x}) = \frac{\|\hat{\boldsymbol{x}} - \boldsymbol{x}\|_2^2}{\|\boldsymbol{x}\|_2^2} \qquad (7)$$
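NMSE normalizes the error energy by the signal energy, so a value of 1.0 corresponds to an all-zero reconstruction. A minimal sketch:

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized MSE: error energy relative to ground-truth signal energy."""
    return float(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))

rng = np.random.default_rng(0)
x = rng.random((64, 64)) + 0.5
print(nmse(x, x))                    # perfect reconstruction -> 0.0
print(nmse(x, np.zeros_like(x)))     # all-zero reconstruction -> 1.0
```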

V-C Benchmark Reconstruction Methods

We compared the LOUPE-optimized mask and other mask configurations, together with one machine learning (ML) based and three non-ML based reconstruction techniques.

The first benchmark reconstruction method is BM3D [54], an iterative algorithm that alternates between de-noising and reconstruction steps. BM3D has been shown to yield faithful reconstructions in under-sampled settings. We employed the open source code made available at: http://web.itu.edu.tr/eksioglue/pubs/BM3D_MRI.htm. The second benchmark reconstruction method, LORAKS, is based on low-rank modeling of local k-space neighborhoods [55, 56]. We used the open source code provided by the authors at: https://mr.usc.edu/download/loraks2/. The third benchmark method is Total Generalized Variation (TGV) based reconstruction with the Shearlet Transform [57]. This method regularizes image regions while keeping the edge information intact. We used the code provided by the authors: http://www.math.ucla.edu/~wotaoyin/papers/tgv_shearlet.html. The final benchmark reconstruction method is the residual U-Net [22], which is also a building block in our model. In this framework, the U-Net serves as an anti-aliasing neural network, an architecture widely used in biomedical imaging applications. Unlike in LOUPE, the benchmark U-Net implementation is trained for a fixed under-sampling mask that is provided by the user. In our experiments, the U-Net models were individually trained and tested for each mask configuration.

Fig. 2: LOUPE-optimized and benchmark masks at two acceleration rates, 4-fold and 8-fold, for the NYU fastMRI data set. Black dots represent k-space points that are sampled and white areas correspond to measurements that are not acquired.

V-D Alternative Under-sampling Masks

We implemented three widely used benchmark under-sampling masks, which are shown in Figure 2. Cartesian [12] with skipped lines is a popular mask that is straightforward to implement in a pulse sequence. Random Uniform [2] is used to show the benefits of stochastic under-sampling in creating incoherent, noise-like artifacts [1], which facilitate discriminating artifacts from the signal present in the image. Finally, we employed a Variable Density [11] mask that samples lower frequencies more densely, while still enjoying the benefits of creating noise-like artifacts. We created this mask using a 2D Gaussian distribution with a mean at the center of k-space. As is common in compressed sensing MRI [58], each mask configuration had a fully-sampled calibration region in the center of k-space.¹

¹ We also experimented with excluding the calibration region, which caused a significant deterioration in some of the benchmark reconstruction results. Thus, we only present our results for masks that included the calibration region.
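A Variable Density mask of the kind described (Gaussian sampling density plus a fully sampled central calibration region) might be generated as follows. The target fraction, Gaussian width, and calibration size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def variable_density_mask(shape, target_frac=0.25, sigma_frac=0.18, calib=16):
    """Sketch of a Variable Density mask (assumed construction): Bernoulli
    sampling with a 2D Gaussian probability centered in k-space, plus a
    fully sampled calib x calib calibration region at the center."""
    h, w = shape
    yy, xx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2,
                         indexing="ij")
    g = np.exp(-(xx**2 + yy**2) / (2 * (sigma_frac * min(h, w)) ** 2))
    # Scale the Gaussian toward the target sampling rate, clipped to [0, 1].
    p = np.clip(g / g.sum() * (target_frac * h * w), 0.0, 1.0)
    mask = rng.random(shape) < p
    cy, cx = h // 2, w // 2
    mask[cy - calib // 2:cy + calib // 2, cx - calib // 2:cx + calib // 2] = True
    return mask

m = variable_density_mask((128, 128))
print(float(m.mean()))   # roughly the target sampling fraction
```

The clipping step means the realized sampling fraction can fall slightly below the target when the Gaussian peak saturates at probability 1, which is one reason such masks are usually tuned empirically.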

V-E Results

Figure 2 shows the LOUPE-optimized under-sampling masks that were computed on PD and PDFS weighted images of knee MRI scans, for two different acceleration rates, 4-fold and 8-fold. Both masks exhibit a drop in sampling density from lower to higher frequencies, but are notably asymmetric. Although the LOUPE-optimized mask is similar to the Variable Density mask in terms of emphasizing lower frequencies, we emphasize that, importantly, lateral frequencies are favored significantly more than ventral/dorsal frequencies. In an earlier conference paper [13], we had reported a LOUPE-optimized mask for T1-weighted brain MRI scans, obtained from a public dataset [59]. Figure 3 shows a side-by-side comparison of the two LOUPE-optimized masks for the knee and brain anatomies, respectively, emphasizing the asymmetry present in the knee dataset. The knee mask favors lateral frequencies more than ventral/dorsal frequencies due to the unique features of knee anatomy, where there is significantly more tissue contrast in the lateral direction. Brain scans, on the other hand, exhibit a more radially symmetric behavior. This comparison highlights the importance of a data-driven approach in identifying the under-sampling mask.

Fig. 3: LOUPE-optimized under-sampling masks compared side by side for the knee and brain anatomies. While these masks correspond to slightly different acceleration rates, there is a striking difference in their symmetry, which can be appreciated visually. Note that black dots represent k-space points that are sampled and white areas correspond to measurements that are not acquired.

Figure 4 shows subject-level PSNR values computed for the different masks and reconstruction methods, under two different acceleration conditions. The PSNR results within each reconstruction method and mask configuration show two clusters, which correspond to the 6 test subjects with PD weighted images and the 4 test subjects with PDFS weighted images. The fat suppression operation inherently lowers the signal level, as fat produces the strongest signal in an MRI scan, thus yielding a noisier image in which small details are less apparent. In contrast, regular PD weighted scans that contain fat tissue have inherently higher SNR. Therefore, the PD scans yield higher PSNR values in reconstruction compared to the PDFS scans.

Figure 4 illustrates that the LOUPE-optimized mask yields significantly better results than competing masks when used with any of the reconstruction methods we tried, under both acceleration rates. Furthermore, the U-Net reconstruction model coupled with the LOUPE-optimized mask yields the best PSNR values, for both acceleration rates and for all test subjects. These results suggest that the LOUPE-optimized mask can offer a performance gain even when not used with the U-Net based reconstruction method it was computed for. We further observe that the Variable Density mask outperforms the other two benchmark masks, yet yields significantly lower quality reconstructions than the LOUPE-optimized mask. The Random Uniform and Cartesian masks yield similar levels of PSNR values.

The PSNR results are in strong agreement with the other two metrics, NMSE and SSIM. Table I lists the summary quantitative values for each reconstruction method and mask configuration, where averages within the PD and PDFS subjects are presented separately. Supplementary Table 1 includes standard deviations for these results. The lowest NMSE and highest PSNR and SSIM values are bolded for each reconstruction method, metric and subject group. We observe that the LOUPE-optimized mask yields the highest reconstruction quality under all considered conditions. For reference, we also added the best values reported in [51], which were achieved with a U-Net reconstruction model and a Variable Density Cartesian mask. We underscore that their model was trained on the full training set and evaluated on the full validation set. Nevertheless, we observe that their results are consistent with the ones we obtained with a Cartesian mask and a similar U-Net model.

Fig. 4: Quantitative evaluation of reconstruction quality for the NYU fastMRI data set. Each plot shows four reconstruction methods applied with four acquisition masks, including the Optimized Mask obtained using LOUPE (green). Slice-averaged PSNR values for each test subject are connected with lines. For each box, the solid blue line marks the median for subjects with Proton Density images, while the dashed blue line marks the median for subjects with Proton Density Fat Suppressed images. The whiskers indicate the most extreme (non-outlier) data points.
Fig. 5: U-Net based reconstructions for a representative PD slice from NYU fastMRI experiments with 8-fold acceleration rate. Each column corresponds to an under-sampling mask.

Figure 5 shows a representative PD weighted slice and its U-Net based reconstructions, obtained after under-sampling with the different masks. Supplementary Figures 1, 2, 3, and 4 provide further examples for both PD and PDFS weighted slices, different reconstruction methods, and both acceleration rates. As can be appreciated from Figure 5, much of the structural detail corresponding to the femur, tibia, tendons, ligaments, muscle tissue, and meniscus is captured more faithfully in the reconstructions obtained with the LOUPE-optimized mask, and blurred out with the benchmark masks. For example, the edges of the posterior cruciate ligament and lateral meniscus suffer from blurring artifacts under the benchmark mask configurations. Overall, in agreement with the quantitative results presented above, the U-Net reconstruction method coupled with the LOUPE-optimized mask yields reconstructions that appear less prone to artifacts and blurring, while preserving more of the anatomical detail.

| Acceleration | Method | Mask | NMSE (PD) | NMSE (PDFS) | PSNR (PD) | PSNR (PDFS) | SSIM (PD) | SSIM (PDFS) |
| 4-fold | BM3D | Cartesian | 0.0231 | 0.0668 | 33.04 | 28.04 | 0.79 | 0.54 |
| 4-fold | BM3D | Uniform Random | 0.0241 | 0.0713 | 32.9 | 27.78 | 0.78 | 0.52 |
| 4-fold | BM3D | Variable Density | 0.0162 | 0.0598 | 34.63 | 28.55 | 0.82 | 0.57 |
| 4-fold | BM3D | LOUPE-Optimized | 0.0097 | 0.0516 | 36.99 | 29.19 | 0.86 | 0.6 |
| 4-fold | LORAKS | Cartesian | 0.0201 | 0.0439 | 33.63 | 29.87 | 0.83 | 0.63 |
| 4-fold | LORAKS | Uniform Random | 0.0208 | 0.0444 | 33.48 | 29.82 | 0.82 | 0.63 |
| 4-fold | LORAKS | Variable Density | 0.0115 | 0.0358 | 36.08 | 30.76 | 0.87 | 0.66 |
| 4-fold | LORAKS | LOUPE-Optimized | 0.0068 | 0.0314 | 38.49 | 31.33 | 0.9 | 0.69 |
| 4-fold | TGV | Cartesian | 0.0182 | 0.0452 | 34.06 | 29.74 | 0.83 | 0.63 |
| 4-fold | TGV | Uniform Random | 0.0193 | 0.0463 | 33.81 | 29.63 | 0.82 | 0.62 |
| 4-fold | TGV | Variable Density | 0.0106 | 0.037 | 36.44 | 30.61 | 0.87 | 0.66 |
| 4-fold | TGV | LOUPE-Optimized | 0.0066 | 0.0325 | 38.59 | 31.18 | 0.9 | 0.68 |
| 4-fold | U-Net | Cartesian | 0.0108 | 0.036 | 36.3 | 30.77 | 0.9 | 0.7 |
| 4-fold | U-Net | Uniform Random | 0.0114 | 0.034 | 36.22 | 30.98 | 0.9 | 0.69 |
| 4-fold | U-Net | Variable Density | 0.0066 | 0.0272 | 38.56 | 31.94 | 0.92 | 0.71 |
| 4-fold | U-Net | LOUPE-Optimized | 0.0049 | 0.0249 | 39.93 | 32.33 | 0.93 | 0.73 |
| 4-fold | U-Net NYU | Variable Density Cartesian | 0.0154 | 0.0525 | 34.01 | 29.95 | 0.82 | 0.64 |
| 8-fold | BM3D | Cartesian | 0.0267 | 0.0716 | 32.42 | 27.74 | 0.75 | 0.47 |
| 8-fold | BM3D | Uniform Random | 0.0262 | 0.075 | 32.52 | 27.55 | 0.74 | 0.45 |
| 8-fold | BM3D | Variable Density | 0.0232 | 0.0683 | 33.07 | 27.96 | 0.76 | 0.48 |
| 8-fold | BM3D | LOUPE-Optimized | 0.0111 | 0.0519 | 36.35 | 29.15 | 0.83 | 0.55 |
| 8-fold | LORAKS | Cartesian | 0.0263 | 0.0542 | 32.46 | 28.95 | 0.78 | 0.54 |
| 8-fold | LORAKS | Uniform Random | 0.0253 | 0.0538 | 32.63 | 28.99 | 0.78 | 0.54 |
| 8-fold | LORAKS | Variable Density | 0.0202 | 0.0484 | 33.61 | 29.44 | 0.81 | 0.57 |
| 8-fold | LORAKS | LOUPE-Optimized | 0.0095 | 0.0399 | 36.99 | 30.29 | 0.85 | 0.6 |
| 8-fold | TGV | Cartesian | 0.0248 | 0.0549 | 32.72 | 28.9 | 0.78 | 0.54 |
| 8-fold | TGV | Uniform Random | 0.0241 | 0.0553 | 32.84 | 28.87 | 0.78 | 0.53 |
| 8-fold | TGV | Variable Density | 0.0188 | 0.0495 | 33.92 | 29.34 | 0.81 | 0.56 |
| 8-fold | TGV | LOUPE-Optimized | 0.0094 | 0.0404 | 37.05 | 30.23 | 0.85 | 0.6 |
| 8-fold | U-Net | Cartesian | 0.0181 | 0.044 | 34.66 | 29.9 | 0.87 | 0.64 |
| 8-fold | U-Net | Uniform Random | 0.0127 | 0.0368 | 35.62 | 30.63 | 0.87 | 0.64 |
| 8-fold | U-Net | Variable Density | 0.01 | 0.0336 | 36.71 | 31.02 | 0.89 | 0.65 |
| 8-fold | U-Net | LOUPE-Optimized | 0.0071 | 0.0294 | 38.26 | 31.6 | 0.91 | 0.68 |
| 8-fold | U-Net NYU | Variable Density Cartesian | 0.0261 | 0.0682 | 31.5 | 28.71 | 0.76 | 0.56 |
TABLE I: Quantitative Results of Reconstruction Quality

Finally, Table II lists the run times (seconds per slice) for the different reconstruction methods and under-sampling masks. The U-Net values correspond to test (inference) time on an NVidia Titan Xp GPU. All other values are for CPU implementations.

| Method | Acceleration | LOUPE | Gaussian | Uniform | Cartesian |
| BM3D | 4-fold | 23.689 ± 1.046 | 23.854 ± 1.062 | 23.881 ± 1.343 | 23.022 ± 0.874 |
| BM3D | 8-fold | 21.236 ± 0.529 | 21.567 ± 0.854 | 21.332 ± 1.505 | 21.034 ± 0.907 |
| LORAKS | 4-fold | 5.189 ± 0.390 | 5.523 ± 0.578 | 4.946 ± 0.399 | 5.673 ± 0.061 |
| LORAKS | 8-fold | 4.964 ± 0.113 | 4.901 ± 0.095 | 4.835 ± 0.057 | 4.856 ± 0.051 |
| TGV | 4-fold | 0.820 ± 0.163 | 0.896 ± 0.181 | 1.079 ± 0.611 | 1.019 ± 0.628 |
| TGV | 8-fold | 1.135 ± 0.757 | 1.661 ± 1.212 | 1.631 ± 1.320 | 1.769 ± 1.454 |
| U-Net | 4-fold | 0.048 ± 0.002 | 0.043 ± 0.002 | 0.048 ± 0.001 | 0.048 ± 0.002 |
| U-Net | 8-fold | 0.042 ± 0.002 | 0.041 ± 0.003 | 0.039 ± 0.002 | 0.041 ± 0.002 |
TABLE II: Computation times (in seconds per slice, mean ± std) for different reconstruction methods, acceleration rates, and under-sampling masks. For all but the U-Net results, the execution times are for a modern CPU (a dual Intel Xeon E5-2640, 2.4 GHz). U-Net run-times are for a GPU (NVidia Titan Xp).

VI. Discussion

Compressed sensing is a promising approach to accelerate MRI and make the technology more accessible and affordable. Since its introduction over a decade ago, it has gained in popularity and been used in a range of application domains. Compressed sensing MRI has two fundamental problems. The first problem is identifying where in k-space to sample (acquire) from. While there has been some effort in optimizing k-space trajectories based on different perspectives, the most promising direction, both from a theoretical and empirical point of view, is a data-driven approach where the optimal under-sampling pattern is identified using a collection of full-resolution scans. Existing techniques have proposed ad hoc, greedy algorithms to achieve this objective. The second core problem of compressed sensing MRI is solving the ill-posed inverse problem of reconstructing a high quality image from under-sampled measurements. This is conventionally solved via a regularized optimization approach, and more recently deep learning techniques have been proposed to achieve superior performance.

In this paper, we tackled both of these problems simultaneously. We leveraged a recently proposed deep learning based reconstruction method to implement an end-to-end unsupervised learning technique that enabled us to optimize the under-sampling pattern on training data. Our approach relies on full-resolution data that are retrospectively under-sampled and then fed to a reconstruction network. The final output is compared to the original input to assess overall quality. We formulate an objective that trades off reconstruction quality against the number of collected measurements: the more emphasis we place on the latter, the more aggressive the acceleration rates we can achieve.
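The retrospective pipeline described above can be sketched as follows. Note that this is an illustration only: the learned reconstruction network is replaced here by a simple zero-filled inverse FFT, the mask is a fixed binary array rather than a learned probabilistic one, and the sparsity weight `lam` is a hypothetical value, not a parameter from the paper.

```python
import numpy as np

def retrospective_undersample(image, mask):
    """Simulate under-sampled acquisition: transform a fully sampled
    image to k-space, apply a binary sampling mask, and return the
    (complex) zero-filled inverse transform."""
    kspace = np.fft.fft2(image)
    return np.fft.ifft2(kspace * mask)

def objective(image, mask, lam=0.01):
    """Reconstruction error plus a penalty on the number of measurements.
    Here the 'reconstruction' is just the zero-filled inverse FFT; the
    actual framework uses a trained anti-aliasing network instead."""
    recon = np.abs(retrospective_undersample(image, mask))
    return np.mean(np.abs(recon - image)) + lam * np.mean(mask)

rng = np.random.default_rng(0)
image = rng.uniform(0.1, 1.0, size=(64, 64))
full_mask = np.ones((64, 64))
# With a fully sampled mask the reconstruction term vanishes,
# leaving only the measurement penalty lam * 1.0.
print(round(objective(image, full_mask), 6))  # 0.01
```

Making the mask entries differentiable (e.g. via a sigmoid relaxation of Bernoulli sampling) is what allows this trade-off to be optimized end-to-end by gradient descent.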

Our results indicate that LOUPE, the proposed method, can compute an optimized under-sampling pattern that is data-dependent and performs well with different reconstruction methods. In this paper, we present empirical results obtained with a large-scale, publicly available knee MRI dataset. Across all conditions we experimented with, the LOUPE-optimized mask coupled with the U-Net based reconstruction model offered the best reconstruction quality. Even with an aggressive 8-fold acceleration rate, LOUPE’s reconstructions contained much of the anatomical detail that was missed by alternative masks and reconstruction methods, as seen in Figure 5.

The LOUPE-optimized under-sampling mask for the analyzed knee MRI scans exhibited an interesting asymmetric pattern, which emphasized high frequencies along the lateral direction much more than those along the vertical direction. This is likely due to the nature of the knee anatomy, which exhibits significantly more tissue contrast in the lateral direction. This is in stark contrast to the LOUPE-optimized mask that we recently presented at a conference for T1-weighted brain MRI scans [13]. The brain mask was radially symmetric, very similar to a variable density mask, which is widely used in compressed sensing MRI. This comparison highlights the importance of adapting the under-sampling pattern to the data and application at hand.

An important caveat of the presented LOUPE framework is that we largely ignored the costs of physically implementing the under-sampling pattern in an MR pulse sequence. In our present framework, the cost of an under-sampling pattern is merely captured by the number of collected measurements. We underscore that in 3D, the LOUPE-optimized mask can be implemented as a Cartesian trajectory, since the isolated measurement points in the coronal slices line up along the z-direction. Integrating the actual physical cost of a specific under-sampling pattern in an MR pulse sequence is nonetheless a direction that we leave for future research. One way to achieve this would be to constrain the sub-sampling mask to a class of viable trajectories, for example, by directly parameterizing those trajectories.
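The Cartesian-implementability observation above can be made concrete with a toy sketch: each sampled point in the 2D phase-encode plane extends to a full readout line along the z-direction (an illustration, not the paper's implementation; array sizes are arbitrary).

```python
import numpy as np

# A toy 2D mask of sampled phase-encode locations.
mask2d = np.random.default_rng(1).integers(0, 2, size=(8, 8))

# In 3D, each sampled point becomes a full readout line along z,
# so the seemingly scattered 2D pattern is realizable as a
# Cartesian trajectory.
nz = 16
mask3d = np.broadcast_to(mask2d[:, :, None], (8, 8, nz))

# Every z-line is constant and matches its 2D phase-encode point.
assert np.all(mask3d == mask2d[:, :, None])
```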

Another interesting direction for further exploration is how we quantify reconstruction quality. In this paper, we used an L1 difference between the ground truth and reconstruction images. While this loss function yields sharp reconstructions, it might miss clinically important anatomical details. An alternative approach could be to devise loss functions that emphasize structural features that are relevant to the clinical application.

In the current version of LOUPE, we did not consider parallel imaging via multiple coils. Such hardware-based techniques, in combination with compressed sensing, promise to yield even higher degrees of acceleration. Investigating how to combine LOUPE with multi-coil imaging will be an important area of research.

Finally, we would like to emphasize that LOUPE, as with any other data-driven optimization strategy, will perform sub-optimally if test-time data differ significantly from training data. Thus, carefully monitoring reconstruction quality and ensuring the model is adapted to the new data will be of utmost importance in deploying this approach in the real world. For example, this could be achieved by fine-tuning the pre-trained LOUPE model on new data, as recently described in [60].

Acknowledgement

This work was, in part, supported by NIH R01 grants (R01LM012719 and R01AG053949, to MRS), the NSF NeuroNex grant (1707312, to MRS), and an NSF CAREER grant (1748377, to MRS). This work was also in part supported by a Fulbright Scholarship (to CDB).

Data used in the preparation of this article were obtained from the NYU fastMRI Initiative database [51]. As such, NYU fastMRI investigators provided data but did not participate in analysis or writing of this report. A listing of NYU fastMRI investigators, subject to updates, can be found at fastmri.med.nyu.edu. The primary goal of fastMRI is to test whether machine learning can aid in the reconstruction of medical images.

References

  • [1] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE signal processing magazine, vol. 25, no. 2, p. 72, 2008.
  • [2] U. Gamper, P. Boesiger, and S. Kozerke, “Compressed sensing in dynamic MRI,” Magnetic Resonance in Medicine, vol. 59, no. 2, pp. 365–373, 2008.
  • [3] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed MR imaging using total variation and wavelets,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition.   IEEE, 2008, pp. 1–8.
  • [4] X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen, “Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator,” Medical image analysis, vol. 18, no. 6, pp. 843–856, 2014.
  • [5] S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028–1041, 2011.
  • [6] Z. Zhan, J.-F. Cai, D. Guo, Y. Liu, Z. Chen, and X. Qu, “Fast multiclass dictionaries learning with geometrical directions in MRI reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 9, pp. 1850–1861, 2016.
  • [7] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, “Sense: sensitivity encoding for fast MRI,” Magnetic resonance in medicine, vol. 42, no. 5, pp. 952–962, 1999.
  • [8] R. Otazo, D. Kim, L. Axel, and D. K. Sodickson, “Combination of compressed sensing and parallel imaging for highly accelerated first-pass cardiac perfusion MRI,” Magnetic resonance in medicine, vol. 64, no. 3, pp. 767–776, 2010.
  • [9] M. Lustig and J. M. Pauly, “SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space,” Magnetic resonance in medicine, vol. 64, no. 2, pp. 457–471, 2010.
  • [10] M. Lustig, M. Alley, S. Vasanawala, D. Donoho, and J. Pauly, “L1 SPIR-iT: Autocalibrating parallel imaging compressed sensing,” in Proc Intl Soc Mag Reson Med, vol. 17, 2009, p. 379.
  • [11] Z. Wang and G. R. Arce, “Variable density compressed image sampling,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 264–270, 2010.
  • [12] J. P. Haldar, D. Hernando, and Z.-P. Liang, “Compressed-sensing MRI with random encoding,” IEEE Transactions on Medical Imaging, vol. 30, no. 4, pp. 893–903, 2011.
  • [13] C. D. Bahadir, A. V. Dalca, and M. R. Sabuncu, “Learning-based optimization of the under-sampling pattern in MRI,” Proc of Information Processing in Medical Imaging, 2019.
  • [14] K. C. Tezcan, C. F. Baumgartner, R. Luechinger, K. P. Pruessmann, and E. Konukoglu, “MR image reconstruction using deep density priors,” IEEE Transactions on Medical Imaging, 2018.
  • [15] G. Wang, J. C. Ye, K. Mueller, and J. A. Fessler, “Image reconstruction is a new frontier of machine learning,” IEEE transactions on medical imaging, vol. 37, no. 6, pp. 1289–1296, 2018.
  • [16] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, no. 7697, p. 487, 2018.
  • [17] Y. Huang, J. Paisley, Q. Lin, X. Ding, X. Fu, and X.-P. Zhang, “Bayesian nonparametric dictionary learning for compressed sensing MRI,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5007–5019, 2014.
  • [18] J. Sun, H. Li, Z. Xu et al., “Deep ADMM-Net for compressive sensing MRI,” in Advances in neural information processing systems, 2016, pp. 10–18.
  • [19] Y. Yang, J. Sun, H. Li, and Z. Xu, “ADMM-Net: A deep learning approach for compressive sensing MRI,” arXiv preprint arXiv:1705.06869, 2017.
  • [20] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll, “Learning a variational network for reconstruction of accelerated MRI data,” Magnetic resonance in medicine, vol. 79, no. 6, pp. 3055–3071, 2018.
  • [21] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention.   Springer, 2015, pp. 234–241.
  • [22] D. Lee, J. Yoo, and J. C. Ye, “Deep residual learning for compressed sensing MRI,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017).   IEEE, 2017, pp. 15–18.
  • [23] C. M. Hyun, H. P. Kim, S. M. Lee, S. Lee, and J. K. Seo, “Deep learning for undersampled MRI reconstruction,” Physics in Medicine & Biology, vol. 63, no. 13, p. 135007, 2018.
  • [24] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [25] G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, F. Liu, S. Arridge, J. Keegan, Y. Guo et al., “DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1310–1321, 2018.
  • [26] T. M. Quan, T. Nguyen-Duc, and W.-K. Jeong, “Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1488–1497, 2018.
  • [27] M. Mardani, E. Gong, J. Y. Cheng, S. Vasanawala, G. Zaharchuk, M. Alley, N. Thakur, S. Han, W. Dally, J. M. Pauly et al., “Deep generative adversarial networks for compressed sensing automates MRI,” arXiv preprint arXiv:1706.00051, 2017.
  • [28] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid mr imaging,” Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  • [29] M. Elad, “Optimized projections for compressed sensing,” IEEE Transactions on Signal Processing, vol. 55, no. 12, pp. 5695–5702, 2007.
  • [30] J. Xu, Y. Pi, and Z. Cao, “Optimized projection matrix for compressive sensing,” EURASIP Journal on Advances in Signal Processing, vol. 2010, no. 1, p. 560349, 2010.
  • [31] G. Puy, P. Vandergheynst, and Y. Wiaux, “On variable density compressive sampling,” IEEE Signal Processing Letters, vol. 18, no. 10, pp. 595–598, 2011.
  • [32] G. Li, Z. Zhu, D. Yang, L. Chang, and H. Bai, “On projection matrix optimization for compressive sensing systems,” IEEE Transactions on Signal Processing, vol. 61, no. 11, pp. 2887–2898, 2013.
  • [33] F. Knoll, C. Clason, C. Diwoky, and R. Stollberger, “Adapted random sampling patterns for accelerated MRI,” Magnetic resonance materials in physics, biology and medicine, vol. 24, no. 1, pp. 43–50, 2011.
  • [34] C. Kumar Anand, A. Thomas Curtis, and R. Kumar, “Durga: A heuristically-optimized data collection strategy for volumetric magnetic resonance imaging,” Engineering Optimization, vol. 40, no. 2, pp. 117–136, 2008.
  • [35] A. T. Curtis and C. K. Anand, “Random volumetric MRI trajectories via genetic algorithms,” Journal of Biomedical Imaging, vol. 2008, p. 6, 2008.
  • [36] M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf, “Optimization of k-space trajectories for compressed sensing by Bayesian experimental design,” Magnetic Resonance in Medicine, vol. 63, no. 1, pp. 116–126, 2010.
  • [37] J. P. Haldar and D. Kim, “OEDIPUS: an experiment design framework for sparsity-constrained MRI,” IEEE Transactions on Medical Imaging, 2019.
  • [38] B. Roman, A. Hansen, and B. Adcock, “On asymptotic structure in compressed sensing,” arXiv preprint arXiv:1406.4178, 2014.
  • [39] S. Ravishankar and Y. Bresler, “Adaptive sampling design for compressed sensing MRI,” in 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society.   IEEE, 2011, pp. 3751–3755.
  • [40] D.-d. Liu, D. Liang, X. Liu, and Y.-t. Zhang, “Under-sampling trajectory design for compressed sensing MRI,” in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society.   IEEE, 2012, pp. 73–76.
  • [41] L. Baldassarre, Y.-H. Li, J. Scarlett, B. Gözcü, I. Bogunovic, and V. Cevher, “Learning-based compressive subsampling,” IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 4, pp. 809–822, 2016.
  • [42] B. Gözcü, R. K. Mahabadi, Y.-H. Li, E. Ilıcak, T. Cukur, J. Scarlett, and V. Cevher, “Learning-based compressive MRI,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1394–1406, 2018.
  • [43] F. Zijlstra, M. A. Viergever, and P. R. Seevinck, “Evaluation of variable density and data-driven k-space undersampling for compressed sensing magnetic resonance imaging,” Investigative radiology, vol. 51, no. 6, pp. 410–419, 2016.
  • [44] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv preprint arXiv:1312.6114, 2013.
  • [45] E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with Gumbel-softmax,” arXiv preprint arXiv:1611.01144, 2016.
  • [46] C. J. Maddison, A. Mnih, and Y. W. Teh, “The concrete distribution: A continuous relaxation of discrete random variables,” arXiv preprint arXiv:1611.00712, 2016.
  • [47] F. Chollet et al., “Keras,” 2015.
  • [48] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., “Tensorflow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.
  • [49] A. V. Dalca, J. Guttag, and M. R. Sabuncu, “Anatomical priors in convolutional networks for unsupervised biomedical segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9290–9299.
  • [50] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [51] J. Zbontar, F. Knoll, A. Sriram, M. J. Muckley, M. Bruno, A. Defazio, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, Z. Zhang, M. Drozdzal, A. Romero, M. Rabbat, P. Vincent, J. Pinkerton, D. Wang, N. Yakubova, E. Owens, C. L. Zitnick, M. P. Recht, D. K. Sodickson, and Y. W. Lui, “FastMRI: An open dataset and benchmarks for accelerated MRI,” arXiv preprint arXiv:1811.08839, 2018.
  • [52] S. T. Welstead, Fractal and wavelet image compression techniques.   SPIE Optical Engineering Press Bellingham, Washington, 1999.
  • [53] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [54] E. M. Eksioglu, “Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI,” Journal of Mathematical Imaging and Vision, vol. 56, no. 3, pp. 430–440, 2016.
  • [55] J. P. Haldar, “Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI,” IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 668–681, 2014.
  • [56] J. P. Haldar and J. Zhuo, “P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data,” Magnetic Resonance in Medicine, vol. 75, no. 4, pp. 1499–1514, 2016.
  • [57] W. Guo, J. Qin, and W. Yin, “A new detail-preserving regularization scheme,” SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1309–1334, 2014.
  • [58] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala, and M. Lustig, “ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA,” Magnetic Resonance in Medicine, vol. 71, no. 3, pp. 990–1001, 2014.
  • [59] A. Di Martino, C.-G. Yan, Q. Li, E. Denio, F. X. Castellanos, K. Alaerts, J. S. Anderson, M. Assaf, S. Y. Bookheimer, M. Dapretto et al., “The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism,” Molecular psychiatry, vol. 19, no. 6, p. 659, 2014.
  • [60] J. Zhang, Z. Liu, S. Zhang, H. Zhang, P. Spincemaille, T. D. Nguyen, M. R. Sabuncu, and Y. Wang, “Fidelity imposed network edit (FINE) for solving ill-posed image reconstruction,” arXiv preprint arXiv:1905.07284, 2019.