Σ-net: Ensembled Iterative Deep Neural Networks for Accelerated Parallel MR Image Reconstruction

We explore an ensembled Σ-net for fast parallel MR imaging, comprising parallel coil networks, which perform implicit coil weighting, and sensitivity networks, which involve explicit sensitivity maps. The networks in Σ-net are trained in a supervised way with content and GAN losses, and with various forms of data consistency, i.e., proximal mappings, gradient descent and variable splitting. A semi-supervised finetuning scheme allows us to adapt to the k-space data at test time; this decreases the quantitative metrics, although it generates the most textured and sharpest images. For this challenge, we focused on robust and high SSIM scores, which we achieved by ensembling all models into a Σ-net.

1 Introduction

The fastMRI Zbontar et al. (2018) multicoil challenge provides a great opportunity to show how we can push the limits of acquisition speed by combining parallel magnetic resonance imaging (pMRI) and deep learning. Recent works on pMRI show the success of learning a fixed iterative reconstruction scheme, involving the MR forward model in various ways Hammernik et al. (2018); Aggarwal et al. (2017); Duan et al. (2019); Schlemper et al. (2019); Qin et al. (2018). In this work, we explore different types of reconstruction networks for pMRI: (1) parallel coil networks (PCNs), which learn implicit weighting of the single coils, and (2) sensitivity networks (SNs), which require explicit coil sensitivity maps Pruessmann et al. (1999); Uecker et al. (2014). We investigate different ways of incorporating data consistency (DC) and train the networks in both a supervised and semi-supervised manner. Instead of choosing a single model for pMRI reconstruction, we increase robustness by ensembling the individual model reconstructions, termed Σ-net. To meet the quantitative evaluation criteria, we introduce, exclusively for this challenge, a style transfer layer (STL), which maps the contrast of SNs to root-sum-of-squares (RSS) reconstructions.

2 Methods

We explore a variety of network architectures, loss functions and learning strategies. In the following, we give a short overview of the different architectures and show how we achieve an ensembled Σ-net.

2.1 Learning unrolled optimization

All of our models are based on learning a fixed iterative scheme Hammernik et al. (2018); Aggarwal et al. (2017); Duan et al. (2019); Schlemper et al. (2019). In general, a reconstruction x is obtained from the k-space data y, involving a linear forward model A, by alternating

x^{t+1/2} = f_{θ^t}(x^t),   x^{t+1} = DC(x^{t+1/2}, y),   t = 0, …, T − 1.

Here, f_{θ^t} represents the neural network-based reconstruction block, DC denotes a data consistency layer and T is the number of steps. Each reconstruction block has the form of an encoding-decoding structure, such as U-net Ronneberger et al. (2015) and Down-Up CNNs Yu et al. (2019). The DC is considered in various ways. We investigate DC as gradient descent (GD) Hammernik et al. (2018), proximal mappings (PM) Schlemper et al. (2017); Aggarwal et al. (2017) and variable splitting (VS) Duan et al. (2019). For pMRI involving explicit sensitivity maps, the PMs are solved numerically using conjugate gradient Aggarwal et al. (2017).
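As an illustration of such an unrolled scheme, the following sketch alternates a denoising block with a gradient-descent DC step on a simple single-coil masked-FFT forward model. The `denoise` callable stands in for the learned reconstruction block, and the fixed step size `lam` and single-coil model are simplifying assumptions, not the exact architecture used here:

```python
import numpy as np

def A(x, mask):
    # Forward model: 2D FFT followed by k-space undersampling.
    return mask * np.fft.fft2(x, norm="ortho")

def A_H(y, mask):
    # Adjoint: zero-filling in k-space followed by inverse 2D FFT.
    return np.fft.ifft2(mask * y, norm="ortho")

def unrolled_gd(y, mask, denoise, T=5, lam=0.5):
    # Alternate the reconstruction block with a gradient-descent DC step:
    # x <- x_cnn - lam * A^H(A x_cnn - y), repeated T times.
    x = A_H(y, mask)  # zero-filled initialization
    for _ in range(T):
        x_cnn = denoise(x)
        x = x_cnn - lam * A_H(A(x_cnn, mask) - y, mask)
    return x
```

With a fully sampled mask and an identity "network", the DC step leaves the exact solution untouched, which is a useful sanity check for any DC implementation.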

2.2 Network architectures

We investigate two types of architectures for pMRI reconstruction. Parallel coil networks (PCNs) reconstruct the individual coil images of all coils jointly. The network is realized by a U-net Ronneberger et al. (2015) with one complex-valued input and output channel per coil and learns implicit coil weightings. For sensitivity networks (SNs), the coil combination is defined in the forward operator using explicit coil sensitivity maps as in Hammernik et al. (2018). To overcome field-of-view issues in the SNs, we use an extended set of two coil sensitivity maps according to Uecker et al. (2014), hence reconstructing two images. In this case, the network has two complex-valued input and output channels and is modelled by a Down-Up network Yu et al. (2019). The final reconstruction is obtained by RSS combination of the two output channels.
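The two coil-combination conventions can be sketched as follows, assuming the sensitivity maps `smaps` are given and normalized (their estimation is described in Sec. 2.5):

```python
import numpy as np

def expand(x, smaps):
    # Image -> individual coil images via explicit sensitivity maps.
    return smaps * x[None, ...]

def combine(coil_imgs, smaps):
    # Adjoint coil combination: sum over coils of conj(smap) * coil image.
    return np.sum(np.conj(smaps) * coil_imgs, axis=0)

def rss(coil_imgs):
    # Root-sum-of-squares combination (needs no sensitivity maps).
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))
```

For maps normalized to unit power per pixel, `combine(expand(x))` recovers `x` exactly, while `rss` recovers only the magnitude, which is one source of the contrast gap discussed in Sec. 2.4.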

2.3 Supervised and semi-supervised learning

We trained individual networks for both acceleration factors, 4 and 8, as well as for the contrasts PD and PDFS of the fastMRI Zbontar et al. (2018) training set. The networks were trained using a combined ℓ1 and SSIM loss Zhao et al. (2016); Hammernik et al. (2017) between the reference x_ref and the reconstruction x, involving a binary foreground mask m,

L(x, x_ref) = ‖m ∘ (x_ref − RSS(x))‖_1 + α (1 − SSIM(m ∘ x_ref, m ∘ RSS(x))),

where ∘ is the pixel-wise product and RSS denotes the root-sum-of-squares reconstruction used to combine the individual output channels. The parameter α was chosen empirically to match the scale of the two losses. We trained for 50 epochs using RMSProp, with the learning rate reduced by a factor of 0.5 every 15 epochs.
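A minimal sketch of such a masked content loss is given below. It assumes a simplified single-window SSIM instead of the usual locally windowed variant, and `alpha` stands for the empirically chosen weighting:

```python
import numpy as np

def masked_l1(ref, rec, mask):
    # l1 distance restricted to the foreground, normalized by its area.
    return np.sum(mask * np.abs(ref - rec)) / np.sum(mask)

def global_ssim(ref, rec, data_range=1.0):
    # SSIM computed over a single global window (a simplification of the
    # standard locally windowed SSIM).
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), rec.mean()
    var_x, var_y = ref.var(), rec.var()
    cov = ((ref - mu_x) * (rec - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def combined_loss(ref, rec, mask, alpha=0.5):
    # Weighted sum of masked l1 and the SSIM discrepancy (1 - SSIM).
    return masked_l1(ref, rec, mask) + alpha * (1.0 - global_ssim(mask * ref, mask * rec))
```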

Least Squares GAN (LSGAN) The trained model was further finetuned for 10 epochs using an LSGAN loss with the same discriminator architecture as in Ledig et al. (2017). The weight of the adversarial loss was chosen empirically by searching on the validation set, with separate values for PD and PDFS data.

Semi-supervised finetuning To adapt to the k-space data efficiently and overcome overly smooth reconstructions, we minimize a data-fidelity term in k-space together with a penalty on the deviation from the initial network output, which serves as a prior. The trade-off parameters were chosen empirically, and we finetune for 30 epochs on 4 slices of a patient volume simultaneously using ADAM. The trained parameters are then used to reconstruct the whole patient volume.
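The flavor of this finetuning objective can be illustrated by optimizing the image directly (the scheme above instead finetunes the network parameters); `alpha` and the plain gradient-descent loop are illustrative assumptions:

```python
import numpy as np

def finetune_objective(x, y, mask, x_prior, alpha=0.1):
    # k-space data fidelity plus proximity to the initial network output.
    resid = mask * np.fft.fft2(x, norm="ortho") - y
    return np.sum(np.abs(resid) ** 2) + alpha * np.sum(np.abs(x - x_prior) ** 2)

def finetune_gd(y, mask, x_prior, alpha=0.1, steps=50, lr=0.4):
    # Plain gradient descent on the objective above, started at the prior.
    x = x_prior.astype(complex)
    for _ in range(steps):
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        grad = np.fft.ifft2(mask * resid, norm="ortho") + alpha * (x - x_prior)
        x = x - lr * grad
    return x
```

The prior term keeps the result close to the network output while the data term pulls texture and noise back from the measured k-space lines.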

2.4 Experimental setup

Using the described tools, we trained one PM-PCN, with the individual fully sampled coil images as reference, and four different SNs with different data consistency layers and losses, i.e., PM-SN, GD-SN, VS-SN and GD-SN-LSGAN. Additionally, we finetuned the GD-SN, which we denote as GD-SN-FT. The reference for the SNs was defined by the sensitivity-combined fully sampled data.

Style transfer layer (STL) Although RSS is a suitable surrogate for coil combination Roemer et al. (1990), we observed that the gap between RSS and sensitivity-weighted images is relatively large for PDFS cases due to the Rician noise bias (see Fig. 1). To bridge this gap, we trained an STL, based on an SN, that maps sensitivity-combined images to RSS contrast. The STL was trained with the SSIM loss for 10 epochs using RMSProp.

Ensembling To obtain robust quantitative scores, we form the Σ-net by combining the average over the SN reconstructions, excluding GD-SN-FT, with the PM-PCN reconstruction.
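A sketch of this ensembling is given below, assuming equal weighting between the averaged SN output and the PCN output (the exact weights are not stated above):

```python
import numpy as np

def sigma_net_ensemble(sn_recons, pcn_recon):
    # Average the SN reconstructions (GD-SN-FT excluded by the caller),
    # then average the result with the PM-PCN reconstruction.
    x_sn = np.mean(sn_recons, axis=0)
    return 0.5 * (x_sn + pcn_recon)
```

Averaging independent reconstructions tends to cancel uncorrelated errors of the individual networks, which is the motivation given in Sec. 4.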

2.5 Data Processing

We estimated two sets of sensitivity maps according to soft SENSE Uecker et al. (2014) from 30 auto-calibration lines (ACLs) for acceleration factor 4 and from 15 ACLs for acceleration factor 8 for the training and validation set. For the test and challenge set, the sensitivity maps were computed from the provided ACLs.

To reduce the large memory consumption of the proposed networks, we use a patch learning strategy Schlemper et al. (2017), extracting patches of size 96 along the frequency encoding (FE) direction; since the FE direction is fully sampled, this introduces no new artifacts. At test time, the network is applied to the full data.
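The patch extraction can be sketched as follows: transforming only the fully sampled FE axis back to image space and cropping a band is exact (fftshift bookkeeping is omitted here for brevity):

```python
import numpy as np

def extract_fe_patch(kspace, start, patch_size=96, fe_axis=-2):
    # The FE axis is fully sampled, so an inverse FFT along that axis
    # followed by cropping a band introduces no new aliasing artifacts.
    hybrid = np.fft.ifft(kspace, axis=fe_axis, norm="ortho")
    sl = [slice(None)] * kspace.ndim
    sl[fe_axis] = slice(start, start + patch_size)
    return hybrid[tuple(sl)]
```

The phase-encoding axis, where the undersampling lives, is left untouched, so the aliasing pattern inside each patch matches that of the full image.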

To stabilize training, we generated foreground masks semi-automatically using a graph cut algorithm for 10 cases, followed by a self-supervised refinement step using a U-net. The background was replaced by a constant mean value, estimated from noise-only patches of the undersampled RSS image and scaled by the true acceleration factor, to match the background level of the fully sampled RSS target.
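A sketch of this background replacement is given below; the multiplicative scaling convention for the noise level is an assumption about the normalization used:

```python
import numpy as np

def fill_background(rec, fg_mask, noise_patch, acc):
    # Replace everything outside the foreground mask by a constant level
    # estimated from a noise-only patch of the undersampled RSS image and
    # scaled by the acceleration factor (scaling convention assumed).
    bg_level = acc * noise_patch.mean()
    out = rec.copy()
    out[fg_mask == 0] = bg_level
    return out
```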

3 Results

We present quantitative scores on the fastMRI validation set in Tab. 1 and qualitative results on a PDFS case in Fig. 1. The ensembled Σ-net achieves the best SSIM scores. While the quantitative scores of GD-SN-FT are low, its reconstruction appears most textured and sharp compared to the ensembled Σ-net result.

Model        | R=4: NMSE       | R=4: PSNR    | R=4: SSIM       | R=8: NMSE       | R=8: PSNR    | R=8: SSIM
GD-SN        | 0.0069 ± 0.0243 | 38.91 ± 6.46 | 0.9136 ± 0.1320 | 0.0130 ± 0.0503 | 35.39 ± 4.68 | 0.8809 ± 0.1341
PM-SN        | 0.0071 ± 0.0250 | 38.81 ± 6.64 | 0.9135 ± 0.1340 | 0.0137 ± 0.0508 | 35.12 ± 4.59 | 0.8790 ± 0.1377
VS-SN        | 0.0069 ± 0.0265 | 38.98 ± 6.57 | 0.9138 ± 0.1326 | 0.0118 ± 0.0511 | 36.15 ± 5.19 | 0.8842 ± 0.1374
GD-SN-LSGAN  | 0.0069 ± 0.0267 | 38.99 ± 6.55 | 0.9137 ± 0.1322 | 0.0118 ± 0.0538 | 36.18 ± 5.18 | 0.8841 ± 0.1367
GD-SN-FT     | 0.0069 ± 0.0125 | 38.46 ± 6.10 | 0.9085 ± 0.1327 | 0.0107 ± 0.0124 | 36.01 ± 4.59 | 0.8808 ± 0.1352
PM-PCN       | 0.0064 ± 0.0117 | 38.61 ± 5.67 | 0.9127 ± 0.1199 | 0.0115 ± 0.0150 | 35.56 ± 4.42 | 0.8785 ± 0.1277
Σ-net        | 0.0055 ± 0.0118 | 39.57 ± 6.42 | 0.9205 ± 0.1234 | 0.0091 ± 0.0150 | 36.83 ± 4.97 | 0.8917 ± 0.1317
Table 1: Quantitative results (mean ± std) averaged over the whole fastMRI validation set for acceleration factors 4 and 8
Figure 1: PDFS@1.5T. (a) Fully sampled sensitivity-weighted reference used for SN training; (b) RSS target for quantitative challenge evaluation; (c) ensembled Σ-net; (d) semi-supervised finetuning (GD-SN-FT).

4 Conclusion

This work presents results for various PCNs and SNs, which are combined into an ensembled Σ-net. We observe that the SNs perform similarly and that the final ensembling reduces random errors made by the individual networks. Semi-supervised finetuning is a promising way to recover texture and noise from the original k-space data. Although the GD-SN-FT results would suit the human eye best, they demonstrate once again that quantitative metrics do not coincide with visual perception. A particular difficulty of this multicoil challenge was the RSS reference, which required an STL for the SNs to match the contrast; this has no practical relevance and visually degrades the quality of our initial results. Hence, future work will focus on evaluating the SNs on sensitivity-combined reference images.


The work was partially funded by EPSRC Programme Grant (EP/P001009/1).


References

  1. Aggarwal, H. K., Mani, M. P., and Jacob, M. MoDL: model-based deep learning architecture for inverse problems. IEEE Transactions on Medical Imaging 38, pp. 394–405.
  2. Duan, J., et al. VS-net: variable splitting network for accelerated parallel MRI reconstruction. arXiv preprint arXiv:1907.10033.
  3. Hammernik, K., et al. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine 79, pp. 3055–3071.
  4. Hammernik, K., et al. L2 or not L2: impact of loss function design for deep learning MRI reconstruction. In ISMRM 25th Annual Meeting, pp. 0687.
  5. Ledig, C., et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690.
  6. Pruessmann, K. P., Weiger, M., Scheidegger, M. B., and Boesiger, P. SENSE: sensitivity encoding for fast MRI. Magnetic Resonance in Medicine 42 (5), pp. 952–962.
  7. Qin, C., et al. Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 38 (1), pp. 280–290.
  8. Roemer, P. B., et al. The NMR phased array. Magnetic Resonance in Medicine 16 (2), pp. 192–225.
  9. Ronneberger, O., Fischer, P., and Brox, T. U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
  10. Schlemper, J., et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 37 (2), pp. 491–503.
  11. Schlemper, J., et al. Data consistency networks for (calibration-less) accelerated parallel MR image reconstruction. In ISMRM 27th Annual Meeting, pp. 4664.
  12. Uecker, M., et al. ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine 71 (3), pp. 990–1001.
  13. Yu, S., et al. Deep iterative down-up CNN for image denoising. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
  14. Zbontar, J., et al. fastMRI: an open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839.
  15. Zhao, H., Gallo, O., Frosio, I., and Kautz, J. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging 3 (1), pp. 47–57.