GrappaNet: Combining Parallel Imaging with Deep Learning for Multi-Coil MRI Reconstruction
Abstract
Magnetic Resonance Imaging (MRI) acquisition is an inherently slow process, which has spurred the development of two different acceleration methods: acquiring multiple correlated samples simultaneously (parallel imaging) and acquiring fewer samples than necessary for traditional signal processing methods (compressed sensing). The two methods provide complementary approaches to accelerating MRI acquisition.
In this paper, we present a novel method to integrate traditional parallel imaging methods into deep neural networks that is able to generate high-quality reconstructions even at high acceleration factors. The proposed method, called GrappaNet, performs progressive reconstruction: a neural network first maps the reconstruction problem to a simpler one that can be solved by a traditional parallel imaging method, then a parallel imaging method is applied, and finally another neural network fine-tunes the output. The entire network can be trained end-to-end. We present experimental results on the recently released fastMRI dataset Zbontar et al. (2018) and show that GrappaNet can generate higher quality reconstructions than competing methods at both 4-fold and 8-fold acceleration.
1 Introduction
Magnetic Resonance Imaging (MRI) is the leading diagnostic modality for a wide range of disorders, including musculoskeletal, neurological, and oncological diseases. However, the physics of the MRI data acquisition process makes it inherently slower than alternative modalities like CT or X-ray. As a consequence, increasing the speed of MRI acquisition has been a major ongoing research goal for decades.
Parallel Imaging (PI) is one of the most important and successful developments in reducing MRI scan time Pruessmann et al. (1999); Griswold et al. (2002). The technique requires the use of multiple physical receiver coils to simultaneously record different views of the object being imaged. Parallel imaging is the default option for many scan protocols and it is supported by almost all modern clinical MRI scanners.
Another approach to accelerating MR imaging is Compressed Sensing (CS), which speeds up MRI acquisition by acquiring fewer samples than required by traditional signal processing methods. To overcome the aliasing artifacts introduced by violating the Shannon-Nyquist sampling theorem, CS methods incorporate additional a priori knowledge about the images. Recently, the use of learned image priors through deep learning has rapidly gained in popularity (Hammernik et al., 2016; Wang et al., 2016; Hammernik et al., 2018; Schlemper et al., 2017; Zhu et al., 2018). These approaches have shown a significant improvement in image reconstruction quality, particularly for non-parallel MRI.
In this paper, we show that a novel combination of classical parallel imaging techniques with deep neural networks can achieve higher acceleration factors than using either approach alone. Utilizing parallel imaging in deep learning approaches to reconstruction is challenging. The relation between the captured views changes for each scan and is dependent on the configuration of the detectors with respect to the object being imaged.
To address this challenge we introduce GrappaNet, a new neural network architecture that incorporates parallel imaging. GrappaNet contains a GRAPPA layer that learns a scan-specific reconstruction function to combine the views captured during parallel imaging. To allow the network to fully utilize all the information captured during parallel imaging, the reconstruction is performed jointly across all the complex-valued views captured during the parallel imaging process. Unlike many previous approaches Hammernik et al. (2018), the views are not combined until the final layer, which produces the output reconstruction. The model uses a progressive refinement approach in both k-space (frequency domain) and image space, both to aid the optimization and to take advantage of the complementary properties of the two spaces. Most previous approaches focus on reconstructing either in image space Hammernik et al. (2018) or in k-space Han et al. (2018). We evaluate GrappaNet's performance on the recently released fastMRI Zbontar et al. (2018) dataset.
We first give a short introduction to parallel MRI and review some deep learning methods for parallel MRI reconstruction in section 2. Next, we provide a description of the GrappaNet model in section 3 and then describe our experiments in section 4. Finally, we conclude with a discussion of future work in section 5.
2 Background and Related Work
2.1 Parallel MRI
MR scanners image a patient's anatomy by acquiring measurements in the frequency domain using a measuring instrument called a receiver coil. In the MRI literature, these frequency-domain measurements are called k-space samples, where k refers to the spatial wave number. The image can then be obtained by applying an inverse multidimensional Fourier transform to the measured k-space samples. The underlying image $\mathbf{x}$ is related to the measured k-space samples $\mathbf{y}$ as

$\mathbf{y} = \mathcal{F}(\mathbf{x}) + \epsilon$    (1)

where $\mathcal{F}$ is the Fourier transform and $\epsilon$ is the measurement noise.
Most modern scanners support parallel imaging: they employ an array of multiple receiver coils that simultaneously obtain k-space samples from the anatomy being imaged. The k-space samples measured by each coil are modulated by their sensitivity to the MR signal arising from different regions. In particular, the k-space samples measured by the $i$-th coil are

$\mathbf{y}_i = \mathcal{F}(S_i \mathbf{x}) + \epsilon_i, \quad i = 1, \ldots, N$    (2)

where $S_i$ is a complex-valued diagonal matrix encoding the position-dependent sensitivity map of the $i$-th coil and $N$ is the number of coils.
Different coils are typically sensitive to different but overlapping regions. It is important to note that the coil sensitivities vary per scan since they depend not only on the configuration of the coils but also on their interaction with the anatomy being imaged.
2.2 Accelerated MRI
The speed of MRI acquisition is limited by the number of k-space samples obtained. This process can be accelerated by obtaining only a subset of the k-space data:

$\tilde{\mathbf{y}}_i = M \mathbf{y}_i$    (3)

where $M$ is a binary mask operator that selects a subset of the k-space points. The same mask is used for all coils. Naively applying an inverse Fourier transform to this undersampled k-space data results in aliasing artifacts.
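The masking in equation 3 and the naive zero-filled reconstruction can be sketched in a few lines of NumPy. This is our own minimal illustration, not a reference implementation; the array shapes and helper names are assumptions:

```python
import numpy as np

def undersample(kspace, mask):
    # kspace: complex array of shape (num_coils, H, W)
    # mask:   binary array of shape (W,) selecting k-space columns;
    # the same mask is broadcast to every coil (equation 3).
    return kspace * mask[None, None, :]

def zero_filled_recon(kspace):
    # Naive reconstruction: per-coil inverse 2D FFT. When the input is
    # undersampled, the result contains aliasing artifacts.
    return np.fft.ifft2(kspace, axes=(-2, -1))
```

Applying `undersample` with a mask that keeps every other column simulates 2-fold acceleration; the zero-filled reconstruction of such data exhibits the characteristic fold-over aliasing.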
Parallel MRI can be used to accelerate imaging by exploiting the redundancies in k-space samples measured by different coils to estimate the missing k-space points from the observed points. Various parallel imaging methods have been proposed, but they can be divided into two broad classes: a) SENSE-type methods Pruessmann et al. (1999) that operate in image space, and b) GRAPPA-type methods Griswold et al. (2002) that operate locally in k-space. The latter is relevant to this work.
The GRAPPA algorithm estimates the unobserved k-space points as a linear combination of the neighboring observed k-space points from all coils. The same set of weights is used at all spatial locations, which can be seen as a complex-valued convolution in k-space from $N$ channels to $N$ channels, where $N$ is the number of coils. Formally, the unobserved k-space points $\hat{\mathbf{y}}$ are computed from the observed k-space points $\tilde{\mathbf{y}}$ by convolving with the GRAPPA weights $\mathbf{w}$:

$\hat{\mathbf{y}} = \tilde{\mathbf{y}} * \mathbf{w}$    (4)
During acquisition, the central region of k-space (which corresponds to low spatial frequencies) is fully sampled. This region, called the Auto-Calibration Signal or ACS, is used to estimate the GRAPPA weights $\mathbf{w}$. We can simulate undersampling in the ACS by masking out certain k-space points. Let the simulated observed and unobserved k-space points in the ACS be $\tilde{\mathbf{y}}_{acs}$ and $\hat{\mathbf{y}}_{acs}$, respectively. From equation 4, the convolution of $\tilde{\mathbf{y}}_{acs}$ and $\mathbf{w}$ should equal $\hat{\mathbf{y}}_{acs}$. Thus, we can estimate $\mathbf{w}$ by solving the following optimization problem:

$\mathbf{w}^{*} = \arg\min_{\mathbf{w}} \|\tilde{\mathbf{y}}_{acs} * \mathbf{w} - \hat{\mathbf{y}}_{acs}\|_2^2$    (5)
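As a rough illustration of how equation 5 reduces to an ordinary least-squares problem, the toy sketch below predicts each simulated "missing" row sample in the ACS from the vertically adjacent observed rows of all coils. Real GRAPPA implementations use richer 2D kernels and handle the sampling pattern more carefully; the neighborhood layout, function name, and shapes here are our own simplifications:

```python
import numpy as np

def estimate_grappa_weights(acs, accel=2, kx=3):
    # acs: fully sampled complex ACS block of shape (coils, H, W).
    # Simulate accel-fold undersampling inside the ACS: treat every
    # accel-th row (starting at 1) as an unobserved target, and predict
    # it from the rows directly above and below, across all coils,
    # using kx horizontally neighboring columns.
    nc, h, w = acs.shape
    pad = kx // 2
    rows_A, rows_b = [], []
    for y in range(1, h - 1, accel):
        for x in range(pad, w - pad):
            # Observed neighborhood: shape (coils, 2, kx).
            src = acs[:, [y - 1, y + 1], x - pad:x + pad + 1]
            rows_A.append(src.ravel())
            rows_b.append(acs[:, y, x])  # target "missing" samples
    A = np.array(rows_A)
    b = np.array(rows_b)
    # Least-squares fit of equation 5 (one weight set shared by all
    # spatial locations); complex data is supported directly.
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights
```

The returned matrix has one column per coil, so applying it to the flattened neighborhood of any missing point yields the interpolated sample for every coil at once.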
The knee images in the fastMRI dataset Zbontar et al. (2018) were acquired using machines with 15 receiver coils, which can generally support 2-fold acceleration for imaging of the knee using this approach. Higher acceleration factors lead to aliasing artifacts that cannot be removed by standard parallel imaging methods.
2.3 Compressed Sensing for Parallel MRI Reconstruction
Compressed Sensing Donoho (2006) enables reconstruction of images by using fewer kspace measurements than is possible with classical signal processing methods by enforcing suitable priors. Compressed sensing has been combined with parallel imaging to achieve higher acceleration factors than those allowed by parallel imaging alone.
Classical compressed sensing methods use sparsity in some transform domain as a prior. Many classical compressed sensing methods operate in the image domain and solve the following optimization problem:

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \sum_{i=1}^{N} \|M\mathcal{F}(S_i \mathbf{x}) - \tilde{\mathbf{y}}_i\|_2^2 + \lambda R(\mathbf{x})$    (6)

where $R$ is a regularization function that enforces a sparsity constraint in some transform domain, such as gradients in the image domain. This problem can be solved by iterative, gradient-descent-style methods.
In the last few years, there has been rapid development of deep learning based approaches to MRI reconstruction. One approach in this direction is the Variational Network (VN) Hammernik et al. (2018). The VN model is a deep neural network, each of whose layers implements one gradient update step for the optimization problem in equation 6. The VN uses precomputed sensitivity maps and achieves excellent reconstructions at low acceleration factors. Computing sensitivity maps becomes more challenging at higher accelerations, which may limit the maximum acceleration this method can achieve.
An alternate line of work operating in k-space is the RAKI model Akçakaya et al. (2019), which replaces the single convolution operation in GRAPPA with a deep convolutional network that is trained independently for each scan. The RAKI method emphasizes the importance of using a scan-specific model for multi-coil reconstruction. This method is complementary to our work and can be integrated into GrappaNet by replacing the GRAPPA layer with the RAKI network.
A comprehensive survey of recent developments in using deep learning for parallel MRI reconstruction can be found in Knoll et al. (2019).
3 GrappaNet
GrappaNet is a neural network that takes undersampled, multi-coil k-space data as input and outputs the reconstructed image. Figure 1 shows a diagram of the network architecture, which has three important properties. First, the differentiable GRAPPA layer enables the network to take advantage of the known physical properties of parallel imaging. Second, each convolutional network operates on all complex-valued views jointly, and the views are combined only in the final stage. This enables the network to exploit all the information captured during parallel imaging; several previous approaches Hammernik et al. (2018); Han et al. (2018) performed reconstruction after collapsing to a single view. Finally, image-to-image mappings using U-Nets are performed in both k-space and image space. Convolutions, pooling, and upsampling correspond to very different operations in image space and k-space. We demonstrate in section 4 that using both of these complementary spaces improves accuracy.
The network consists of two convolutional neural networks with an application of the GRAPPA operator in between them. Denoting the input undersampled k-space data by $\tilde{\mathbf{y}}$, the network computes the following function:

$\hat{\mathbf{x}} = C\left(f_2\left(G\left(f_1(\tilde{\mathbf{y}})\right)\right)\right)$    (7)

where $f_1$ and $f_2$ are convolutional networks that map multi-coil k-space to multi-coil k-space, $G$ is the GRAPPA operator, and $C$ combines the multi-coil k-space data into a single image.
The first network, $f_1$, takes the multi-coil k-space data with $n$-fold undersampling and maps it to an $m$-fold undersampled k-space dataset with the same number of coils, where $m < n$. The GRAPPA operator $G$, which is separately estimated from the ACS, is then applied to this $m$-fold undersampled dataset to fill in the missing k-space data. This allows the network to take advantage of the known physical properties of the parallel imaging process. $m$ is chosen to be small enough that traditional parallel imaging methods like GRAPPA can reconstruct the image accurately. We use $m = 2$ for our experiments.
3.1 U-Net
Both $f_1$ and $f_2$ are composed of multiple U-Nets Ronneberger et al. (2015b), which are convolutional networks that operate at multiple scales. U-Net models and their variants have been used successfully for many image-to-image mapping tasks, including MRI reconstruction Hyun et al. (2018); Han and Ye (2018) and image segmentation Ronneberger et al. (2015a). The U-Nets used in this work are based on the U-Net baseline models from Zbontar et al. (2018).
The U-Net model (figure 2) consists of a convolutional encoder followed by a convolutional decoder. The encoder consists of blocks of two convolutions, each followed by instance normalization Ulyanov et al. (2016) and Rectified Linear Unit (ReLU) activation functions. The blocks are interleaved with downsampling operations consisting of max-pooling layers with stride 2, which halve each spatial dimension. The upsampling path consists of blocks with a similar structure to the downsampling path, interleaved with bilinear upsampling layers that double the resolution between blocks. Each block consists of two convolutions with instance normalization and ReLU activation layers. In contrast to the downsampling path, the upsampling path concatenates two inputs to the first convolution in each block: the upsampled activations from the previous block, together with the activations that follow the skip connection from the block in the downsampling path with the same resolution (horizontal arrows in figure 2). At the end of the upsampling path, we include a series of convolutions that reduce the number of channels to 30 without changing the spatial resolution.
A U-Net is useful for image-to-image mapping tasks like semantic segmentation because its pooling and upsampling layers allow it to learn useful feature maps at multiple scales and abstraction levels. This multi-resolution feature representation helps the U-Net predict the high-level structure of the output at the lowest level of the decoder and gradually add finer, higher-frequency details as the upsampling layers are applied.
The baseline model described in Zbontar et al. (2018) used such a U-Net model for MRI image reconstruction. However, that model can only perform denoising, since it is applied after combining the different views using a root-sum-of-squares (RSS) transform (equation 8). This prevents the baseline model from learning how to combine all of the coils and from using the phase information. As a result, the reconstructions from this baseline model are too smooth and lose much of the medically relevant high-frequency information (see figure 4). We show in section 4 that simply applying a U-Net to the real and imaginary data from all coils significantly improves upon this model. Such a U-Net can potentially learn to combine information from the different coils, which improves performance.
Han et al. (2018) show that a U-Net can also be applied directly to undersampled k-space data. Their work was motivated by connections between encoder-decoder models and a classical CS algorithm called the annihilating filter-based low-rank Hankel matrix approach (ALOHA). The input to the ALOHA U-Net is zero-filled k-space data, and the model fills in the missing information. In an approach similar to the fastMRI baseline model Zbontar et al. (2018), Han et al. (2018) also apply their U-Net after combining all of the coils into a single coil. Taking insight from algorithms like GRAPPA, we posit that it is beneficial to apply convolutions directly to the multi-coil k-space data. We show in section 4 that such a model outperforms the baseline models.
The functions $f_1$ and $f_2$ apply the following series of operations to the input k-space data (see figure 1): a U-Net in k-space, followed by hard data consistency, an inverse 2D Fourier transform to convert to image space, a U-Net in image space, and finally a 2D Fourier transform and another data consistency step. Each of the U-Nets maps 15 complex-valued channels to 15 complex-valued channels. Here, the hard data consistency operations simply copy all of the observed k-space samples to the correct locations in k-space. This ensures that the model only fills in the missing k-space points.
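The hard data consistency step admits a very short implementation. A minimal sketch, with assumed array shapes and a function name of our own choosing:

```python
import numpy as np

def hard_data_consistency(pred_kspace, observed_kspace, mask):
    # Keep the model's prediction only at unobserved k-space locations;
    # every acquired sample (mask == 1) is copied through unchanged, so
    # the output is exactly consistent with the measured data.
    return mask * observed_kspace + (1 - mask) * pred_kspace
```

Because the operation is a fixed linear combination of its inputs, it is trivially differentiable and does not interfere with end-to-end training.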
The function $C$ combines the reconstructed multi-coil k-space data into a single real-valued image by first applying an inverse 2D Fourier transform to each coil, followed by a root-sum-of-squares (RSS) operation. The RSS operation combines all the coils into a single real-valued image:

$\hat{\mathbf{x}}_{rss} = \sqrt{\sum_{i=1}^{N} |\hat{\mathbf{x}}_i|^2}$    (8)

where $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_N$ are the images from the $N$ coils and the operations are applied element-wise.
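The RSS combination of equation 8 can be written in one line of NumPy; a minimal sketch with an assumed coil-first array layout:

```python
import numpy as np

def root_sum_of_squares(coil_images):
    # coil_images: complex array of shape (num_coils, H, W).
    # Returns a real (H, W) image: the element-wise square root of the
    # sum over coils of the squared magnitudes (equation 8).
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```

Note that RSS discards the phase of each coil image, which is precisely why applying a network only after RSS (as in the baseline) forfeits the phase information.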
3.2 GRAPPA Layer
As explained in the previous section, convolutional networks applied to all coils in k-space or image space can, to a limited extent, learn to combine the coils. However, as described in section 2.1, the coil sensitivities can vary from one imaging examination to another. Traditional parallel imaging methods take this into consideration by estimating distinct sensitivity maps or GRAPPA kernels for each scan. This motivates the need for a scan-specific component within the neural network that can adapt to differences in the sensitivity profile and thereby improve the generalization of the reconstruction model.
We achieve this adaptation by introducing a new neural network layer that we call the GRAPPA layer. The GRAPPA layer estimates the GRAPPA kernel from the ACS region and then applies a two-dimensional convolution with the estimated kernel. Because the application of GRAPPA is differentiable, the entire network can be trained in an end-to-end fashion using backpropagation.
4 Experimental Results
We ran all our experiments on the multi-coil images from the fastMRI dataset Zbontar et al. (2018), which consists of raw k-space data from 1594 knee MRI exams acquired on four different MRI machines. The dataset contains the two types of MRI sequences commonly used for knee exams in clinical practice: a Proton Density (PD) weighted sequence and a Proton Density weighted sequence with Fat Saturation (PDFS). We used the same train, validation, and test splits as in the original dataset. The training data consisted of 973 volumes containing k-space data of different sizes. During training, we omitted k-space data with a width greater than 372, which is about 7% of the training data. We evaluated all models on all test images.
For training our models, we used random masks with 4-fold and 8-fold acceleration, based on code released with the fastMRI dataset. We compared the following models:

1. Classical CS baseline based on Total Variation minimization Zbontar et al. (2018)
2. U-Net baseline model applied to RSS inputs Zbontar et al. (2018)
3. Variational Network model introduced in Hammernik et al. (2018)
4. U-Net applied in k-space to 15-coil input
5. U-Net applied in image space to 15-coil input
6. GrappaNet model

We used the original implementation of the Variational Network.
For the k-space U-Net, the image-space U-Net, and the GrappaNet models, we followed the training procedure for the baseline models in Zbontar et al. (2018). To deal with complex-valued inputs, we simply treated the real and imaginary parts as two distinct channels; hence, 15-coil complex-valued k-space or image data were treated as 30-channel data. These models were trained using the RMSProp Tieleman and Hinton (2012) algorithm to minimize a linear combination of a Structural Similarity (SSIM) Wang et al. (2004) loss and an L1 loss:

$\mathcal{L}(\hat{\mathbf{x}}, \mathbf{x}) = \left(1 - \mathrm{SSIM}(\hat{\mathbf{x}}, \mathbf{x})\right) + \lambda \|\hat{\mathbf{x}} - \mathbf{x}\|_1$    (9)

where $\hat{\mathbf{x}}$ is the reconstruction and $\mathbf{x}$ is the ground truth image, after cropping to the central region. $\lambda$ was set to a fixed value. The models were trained for 20 epochs with a fixed learning rate. All models were trained on a machine with 8 NVIDIA Volta V100 GPUs using data-parallel training for about 3 days.
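For illustration, a simplified version of this training objective can be written as follows. This sketch uses a uniform-window SSIM rather than the Gaussian-windowed SSIM of Wang et al. (2004), and the function names and default parameters are our own assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x, y, data_range=1.0, win=7, k1=0.01, k2=0.03):
    # Mean structural similarity with a uniform local window: local
    # means, variances, and covariance are estimated by box filtering.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def reconstruction_loss(pred, target, lam):
    # Linear combination of (1 - SSIM) and the mean L1 error, in the
    # spirit of equation 9.
    return (1.0 - ssim(pred, target)) + lam * np.mean(np.abs(pred - target))
```

A perfect reconstruction drives both terms to zero, while λ balances the perceptual (SSIM) term against the pixel-wise L1 term.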
Table 1: Quantitative results on the fastMRI multi-coil test data for 4-fold and 8-fold acceleration, on PD and PDFS sequences.

| Acceleration | Model                 | NMSE (PD) | NMSE (PDFS) | PSNR (PD) | PSNR (PDFS) | SSIM (PD) | SSIM (PDFS) |
|--------------|-----------------------|-----------|-------------|-----------|-------------|-----------|-------------|
| 4-fold       | Classical CS baseline | 0.0198    | 0.0951      | 32.6      | 27.5        | 0.693     | 0.588       |
| 4-fold       | U-Net baseline        | 0.0154    | 0.0525      | 34.00     | 29.95       | 0.815     | 0.636       |
| 4-fold       | Variational Net       | 0.0138    | 0.0262      | 35.82     | 33.196      | 0.919     | 0.855       |
| 4-fold       | K-space U-Net         | 0.0055    | 0.0114      | 37.27     | 36.45       | 0.927     | 0.870       |
| 4-fold       | Image U-Net           | 0.0034    | 0.0103      | 39.58     | 36.97       | 0.949     | 0.886       |
| 4-fold       | GrappaNet             | 0.0026    | 0.0085      | 40.74     | 37.77       | 0.957     | 0.891       |
| 8-fold       | Classical CS baseline | 0.0352    | 0.109       | 29.6      | 26.8        | 0.642     | 0.551       |
| 8-fold       | U-Net baseline        | 0.0261    | 0.0682      | 31.5      | 28.71       | 0.762     | 0.559       |
| 8-fold       | Variational Net       | 0.0211    | 0.0816      | 32.12     | 27.72       | 0.788     | 0.675       |
| 8-fold       | K-space U-Net         | 0.0189    | 0.0206      | 36.45     | 32.54       | 0.870     | 0.807       |
| 8-fold       | Image U-Net           | 0.0079    | 0.0160      | 36.26     | 34.36       | 0.886     | 0.831       |
| 8-fold       | GrappaNet             | 0.0071    | 0.0146      | 36.76     | 35.04       | 0.922     | 0.842       |
The U-Net models applied either to 15-coil k-space input or 15-coil image input start with 384 channels, which are doubled after each pooling layer. The GrappaNet model contains a total of 4 U-Nets, each of which starts with 192 channels. All three models have roughly 480M parameters.
Experimental results are shown in table 1, which lists three metrics computed in the same manner as Zbontar et al. (2018): normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) Wang et al. (2004). All of the proposed models perform significantly better than the baselines. The large difference in performance between a U-Net applied to all 15 coils and the U-Net baseline underscores the importance of letting the neural network learn how to combine the coil images.
The GrappaNet performs best according to all metrics. The improved performance of the GrappaNet can be attributed to the inclusion of the GRAPPA layer to implement parallel imaging within the network.
Some example reconstructions are shown in figures 3 and 4 for 4-fold and 8-fold acceleration, respectively. Figure 5 shows zoomed views of some of the medically relevant regions. The baseline U-Net model is able to remove aliasing artifacts, but at the cost of severe over-smoothing: the reconstruction lacks some of the clinically relevant high-frequency detail. The reconstructions from the image-space U-Net model are significantly better than the baseline, but they are not as sharp as the reconstructions from the GrappaNet model.
The Variational Network model makes heavy use of estimated sensitivity maps throughout the network, including in the data consistency terms. It is able to generate good reconstructions at 4-fold acceleration, which retains a sufficient number of low-frequency lines to estimate sensitivity maps. When the acquisition is accelerated 8-fold, however, its performance degrades significantly, since sensitivity maps cannot be estimated accurately in this case.
5 Conclusion and Future Work
In this paper, we introduced the GrappaNet architecture for multi-coil MRI reconstruction. Multi-coil MRI reconstruction is an important and challenging problem due to the prevalence of parallel imaging and the need to make scan-specific adaptations to the neural networks. GrappaNet addresses this challenge by integrating traditional parallel imaging methods with neural networks and training the model end-to-end. This allows the model to generate high-fidelity reconstructions even at high acceleration factors.
The GRAPPA kernel used in the GrappaNet model is estimated from the low-frequency lines of k-space and is used as a fixed input to the model. A possible extension of this work could explore methods to optimize the kernel estimation jointly with the rest of the network during training.
Quantitative measures such as NMSE, PSNR, and SSIM only provide an estimate for the quality of the reconstructions. Clinically important details are often subtle and contained in small portions of an MRI. Before techniques such as those presented in this paper can be used in practice, proper clinical validation studies need to be performed to ensure that the use of accelerated MRIs does not degrade the quality of diagnosis.
Appendix A Dithering as postprocessing
The GrappaNet model was trained to optimize a linear combination of Structural Similarity Wang et al. [2004] and L1 loss between the reconstruction and the ground truth image. SSIM and L1 loss are imperfect proxies for radiologists' visual perception; optimizing SSIM, L1 loss, or a linear combination of them can produce unnaturally smooth reconstructions even when preserving diagnostic content. We can enhance the perceived sharpness of the images by adding low levels of noise, that is, by dithering. As established by Perlin [1985], filtered noise ("Perlin noise") is a good model for the synthesis of natural textures: natural-looking textures include some noise. Quoting Pham [2016], "the preservation of film grain noise can also help enhance the subjective perception of sharpness in images, known as acutance in photography, although it degrades the signal-to-noise ratio. The intentional inclusion of noise in processing digital audio, image, and video data is called dither."
To avoid obscuring dark areas of the reconstruction by adding too much noise, we adapt the level of noise to the brightness of the image around each pixel. Specifically, we blur the image we wish to dither with a median filter, taking medians over patches 11 pixels high by 11 pixels wide; we then take the square root of the value at each pixel of the blurred image; finally, we add to the image being dithered centered Gaussian noise whose standard deviation is $\sigma$ times the associated blurred pixel value (having normalized the pixel values to range from 0 to 1). We set $\sigma$ separately for non-fat-suppressed and fat-suppressed images (the latter have a worse native SNR).
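A sketch of this brightness-adaptive dithering, assuming pixel values normalized to [0, 1]; the function name, `seed` parameter, and default window size are illustrative rather than taken from any released code:

```python
import numpy as np
from scipy.ndimage import median_filter

def dither(image, sigma, window=11, seed=None):
    # image: 2D array normalized to [0, 1]; sigma: global noise scale.
    # Local brightness is estimated with an 11x11 median filter; the
    # per-pixel noise standard deviation is sigma * sqrt(local median),
    # so dark regions receive almost no added noise.
    rng = np.random.default_rng(seed)
    local = np.sqrt(median_filter(image, size=window))
    return image + rng.normal(size=image.shape) * sigma * local
```

On an all-black image the local medians are zero, so the output is unchanged; bright regions receive noise with standard deviation up to sigma.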
Appendix B Training with random masks to counter adversarial examples
Compressed sensing is the reconstruction of images to a resolution beyond what reconstruction via classical signal processing would permit for the number of measurements actually made. In MRI, the measurements are taken in k-space, and the classical signal processing involves an inverse Fourier transform. Compressed sensing reconstructs to the same resolution as if an inverse Fourier transform were applied to more measurements than were actually taken; compressed sensing must be nonlinear to succeed. When taking measurements in k-space at fixed locations, it is relatively straightforward to construct objects whose measurements at these fixed locations result in reconstructions from compressed sensing that are horribly wrong: simply alter arbitrarily the objects in the parts of k-space in between the locations that are actually measured. Whether such so-called "adversarial" examples of objects being measured are worrisome depends on where the actual measurements are made and (especially) on the algorithm used for reconstruction.
If the algorithm used for reconstruction is trained on a set of examples with measurements always taken at the same locations in k-space, then the reconstruction is likely to be blind to properties of objects that depend on the parts of k-space in between those actually measured. Adversarial examples can then hide horrible problems in between the parts of k-space that are actually measured; an algorithm for reconstruction trained on only fixed locations in k-space has no hope of learning how the unmeasured parts of k-space contribute to the correct reconstruction. On the contrary, if the algorithm used for reconstruction is trained on examples with measurements taken at random locations in k-space (which is particularly advantageous if each example gets measured at several different random realizations of the sampling pattern), and the random locations cover all of k-space (over enough random realizations), then the algorithm is likely to learn about all parts of k-space during training (here, "all" of k-space refers to the sampling pattern used for conventional reconstruction via the inverse Fourier transform at full resolution). When taking measurements at random locations in k-space, the algorithm for reconstruction will probably detect at least a piece of any adversarial attempt to hide horrible artifacts in parts of k-space, and will learn how all relevant parts of k-space affect the correct reconstruction.
Therefore, a machine-learned algorithm for reconstruction should train on examples measured at randomized locations in k-space in order to avoid some adversarial examples, such as those constructed by Antun et al. [2019]. Moreover, the measurements for the validation and testing sets must also be randomized, in the following subtle sense: the locations of the measurements in k-space must be stochastically independent of the object being imaged. Ideally, the object will be deterministic and the locations of the measurements in k-space will be drawn randomly, independently of the object. Thus, the object being imaged should not be constructed conditional on knowing the locations in k-space of the measurements being taken; an object in physical reality has no way of knowing where the measurements are being taken. The adversarial examples of Antun et al. [2019] construct objects that depend on where the measurements are being taken, and so are inapplicable to the setting of randomized locations for the measurements. In practice, the same random locations in k-space can be used for multiple objects, provided that the objects being imaged cannot alter themselves based on knowing where the measurements are being taken, and provided that the training of any machine-learned reconstruction considers many different random locations in k-space (preferably covering all of k-space over enough random realizations).
To summarize:

- Measurements should be at randomized locations in k-space during training of machine-learned algorithms for reconstruction, such that the random locations cover all of k-space (over enough random realizations), where "all" refers to the sampling pattern used for conventional reconstruction at full resolution via the inverse Fourier transform.
- Measuring each object in the training set at multiple different random samplings of k-space is ideal, constituting a kind of data augmentation that regularizes the reconstruction and improves generalization and robustness to adversarial examples.
- The object being imaged in reality during validation and testing should be deterministic, with the random locations in k-space where measurements are taken being stochastically independent of the object.
- When taking measurements at randomized locations in k-space, the object should not alter itself based on where measurements are made; adversarial examples are irrelevant when they are conditional on knowing the locations of the randomized measurements.
- The same random locations in k-space can be used across the objects in the validation and testing sets (yet these locations must vary during training!).
Fortuitously, algorithms for reconstruction that obey the above conditions are also ideal for use in estimating errors via the bootstrap, as described by Defazio et al. [2019].
Regarding technologically reasonable sampling patterns, MRI works well taking measurements along the following lines:

- radial lines in k-space, with the lines at random angles
- parallel lines in k-space, with the lines at random offsets
- equispaced parallel lines in k-space, with the overall offset chosen at random

In all cases, "random" means the same angle or offset for different objects being imaged in the validation and testing sets, but with the angles or offsets varying at random during training and in the bootstrap or jackknife error estimation. So-called parallel imaging usually supplements the above measurements with some additional measurements of mainly low frequencies for autocalibration of sensitivity maps or of convolutional kernels for fusing contributions from multiple receiver coils, as discussed by Antun et al. [2019] and others. The extra set of autocalibration measurements is merely a bonus, not requiring the same randomization as the other measurements.
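As an illustration, the third pattern above (equispaced parallel lines with a random overall offset) can be generated as follows; the helper and its signature are our own sketch, not fastMRI's mask functions:

```python
import numpy as np

def equispaced_mask(width, accel, rng=None):
    # Binary mask over k-space columns: every accel-th line is sampled,
    # with the overall offset drawn uniformly at random, so different
    # random realizations cover all of k-space.
    rng = np.random.default_rng() if rng is None else rng
    offset = int(rng.integers(accel))
    mask = np.zeros(width, dtype=bool)
    mask[offset::accel] = True
    return mask
```

Drawing a fresh offset per training example realizes the randomization described above, while a single fixed draw can be reused across the validation and testing sets.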
Appendix C Example Reconstructions
Footnotes
1. https://github.com/facebookresearch/fastMRI
2. https://github.com/VLOGroup/mri-variational-network/
References
- Akçakaya et al. (2019). Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magnetic Resonance in Medicine 81 (1), pp. 439-453.
- Antun et al. (2019). On instabilities of deep learning in image reconstruction: does AI come at a cost? Technical Report 1902.05300, arXiv. Available at http://arxiv.org/abs/1902.05300.
- Defazio et al. (2019). Compressed sensing with a jackknife and a bootstrap. Technical report, GitHub. Available at https://github.com/facebookresearch/fbooja.
- Donoho (2006). Compressed sensing. IEEE Transactions on Information Theory 52 (4), pp. 1289-1306.
- Griswold et al. (2002). Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine 47 (6), pp. 1202-1210.
- Hammernik et al. (2018). Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine.
- Hammernik et al. (2016). Learning a variational model for compressed sensing MRI reconstruction. In Proceedings of the ISMRM.
- Han and Ye (2018). Framing U-Net via deep convolutional framelets: application to sparse-view CT. IEEE Transactions on Medical Imaging 37 (6).
- Han et al. (2018). Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magnetic Resonance in Medicine 80 (3).
- Hyun et al. (2018). Deep learning for undersampled MRI reconstruction. Physics in Medicine and Biology 63 (13).
- Knoll et al. (2019). Deep learning methods for parallel magnetic resonance image reconstruction. CoRR abs/1904.01112.
- Perlin (1985). An image synthesizer. In Proceedings of the Twelfth Annual Conference on Computer Graphics and Interactive Techniques, pp. 287-296.
- Pham (2016). Noise-added texture analysis. In CIARP 2016: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, Vol. 10125, pp. 93-100.
- Pruessmann et al. (1999). SENSE: sensitivity encoding for fast MRI. Magnetic Resonance in Medicine 42 (5).
- Ronneberger et al. (2015a). U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention.
- Ronneberger et al. (2015b). U-Net: convolutional networks for biomedical image segmentation. In MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science, Vol. 9351, pp. 234-241.
- Schlemper et al. (2017). A deep cascade of convolutional neural networks for MR image reconstruction. Information Processing in Medical Imaging.
- Tieleman and Hinton (2012). Lecture 6.5 - rmsprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, Vol. 4.
- Uecker et al. (2014). ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine 71 (3).
- Ulyanov et al. (2016). Instance normalization: the missing ingredient for fast stylization. arXiv preprint.
- Wang et al. (2016). Accelerating magnetic resonance imaging via deep learning. In IEEE International Symposium on Biomedical Imaging (ISBI).
- Wang et al. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600-612.
- Zbontar et al. (2018). fastMRI: an open dataset and benchmarks for accelerated MRI. CoRR abs/1811.08839. Available at http://arxiv.org/abs/1811.08839.
- Zhu et al. (2018). Image reconstruction by domain-transform manifold learning. Nature 555 (7697).