# Data consistency networks for (calibration-less) accelerated parallel MR image reconstruction

###### Abstract

We present simple reconstruction networks for multi-coil data by extending the deep cascade of CNNs and exploiting the data consistency layer. In particular, we propose two variants: one inspired by POCSENSE and one that is calibration-less. We show that the proposed approaches are competitive with the state of the art both quantitatively and qualitatively.

## 1 Introduction

Recently, several deep learning approaches have been proposed for accelerated parallel MR image reconstruction [hammernik2018learning, mardani2018deep, han2019k, cheng2018deepspirit, akccakaya2019scan, zhang2018multi]. In this work, we present simple reconstruction networks for multi-coil data by extending the deep cascade of CNNs [schlemper2017deep]. In particular, we propose two approaches: one inspired by POCSENSE [samsonov2004pocsense] and one that is calibration-less. The methods are evaluated using a public knee dataset containing 100 subjects [hammernik2018learning]. We show that the proposed approaches are competitive with the state of the art both quantitatively and qualitatively.
† Presented at ISMRM 27th Annual Meeting & Exhibition (Abstract #4663)

## 2 Methods

The proposed networks are direct extensions of the deep cascade of CNNs (DC-CNN), in which denoising sub-networks and data consistency layers are interleaved. For parallel imaging, however, the data consistency layer can be extended in two ways, yielding two network variants. The first approach requires sensitivity estimates, which can be computed using algorithms such as ESPIRiT [uecker2014espirit]. The input to the CNN is a single, sensitivity-weighted recombined image, and at each iteration the CNN updates an estimate of this combined image. For the data consistency layer, the forward operation is performed, then acquired samples are filled coil-wise as:

$$\hat{s}_i(k) = \begin{cases} s_{\mathrm{cnn},i}(k) & \text{if } k \notin \Omega \\ \dfrac{s_{\mathrm{cnn},i}(k) + \lambda\, s_{0,i}(k)}{1 + \lambda} & \text{if } k \in \Omega \end{cases} \qquad (1)$$

where $s_{\mathrm{cnn},i}$ and $s_{0,i}$ are the $i$-th coil-weighted image for the intermediate CNN reconstruction in $k$-space and the original $k$-space data respectively, and $\Omega$ is the set of acquired $k$-space locations. The result is mapped back to the image domain via the adjoint of the encoding matrix. As the operation in the data consistency layer is analogous to the projection step of POCSENSE, the proposed network is termed D(eep)-POCSENSE. The balancing term $\lambda$ depends on the input noise level; however, it is made trainable as a network parameter. The network is trained using the $\ell_2$ loss:

$$\mathcal{L}(\theta) = \sum_{(x_0,\, x_{\mathrm{gt}})} \big\| x_{\mathrm{gt}} - f_{\theta}(x_0) \big\|_2^2 \qquad (2)$$

where $x_0$ and $x_{\mathrm{gt}}$ are the initial recombined image and the ground truth respectively, and $f_{\theta}$ denotes the network.
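The coil-wise data consistency operation of Eq. 1 can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the function and variable names, and the use of a binary sampling mask to encode $\Omega$, are assumptions.

```python
import numpy as np

def data_consistency(k_cnn, k0, mask, lam):
    """Coil-wise data consistency (Eq. 1), sketched in NumPy.

    k_cnn: intermediate CNN reconstruction in k-space (complex array)
    k0:    originally acquired k-space data (complex array)
    mask:  1 at acquired k-space locations, 0 elsewhere
    lam:   balancing term (made trainable in the proposed networks)
    """
    # At acquired locations, blend the CNN estimate with the measured
    # data; at unacquired locations, keep the CNN estimate unchanged.
    blended = (k_cnn + lam * k0) / (1.0 + lam)
    return np.where(mask.astype(bool), blended, k_cnn)
```

Note that as `lam` grows large the blend approaches a hard replacement of the acquired samples, the appropriate limit for noiseless data.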

The second approach reconstructs the multi-coil data directly without performing the recombination: the coil images are stacked along the channel axis and fed into each sub-network. For the data consistency layer, each coil image is Fourier transformed and Eq. 1 is applied to each coil individually. As it does not require a sensitivity estimate, this approach is calibration-less. The proposed network, termed DC-CNN, is trained with the following weighted-$\ell_2$ loss:

$$\mathcal{L}(\theta) = \sum_{(x_0,\, x_{\mathrm{gt}})} \sum_{i} \big\| S_i \odot \big( x_{\mathrm{gt},i} - f_{\theta}(x_0)_i \big) \big\|_2^2 \qquad (3)$$

where the subscript $i$ indexes the $i$-th coil data and $S_i$ is the sensitivity map. The proposed architectures are shown in Fig. 3.
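A minimal NumPy sketch of the weighted-$\ell_2$ loss of Eq. 3, assuming complex coil images and sensitivity-map magnitudes as per-coil weights; names and array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def weighted_l2_loss(pred_coils, gt_coils, sens):
    """Sensitivity-weighted l2 loss over coils (Eq. 3), sketched in NumPy.

    pred_coils, gt_coils: (n_coils, H, W) complex coil images
    sens:                 (n_coils, H, W) coil sensitivity maps
    """
    # Weight each coil residual by the sensitivity magnitude, so that
    # low-sensitivity (noise-dominated) regions contribute less.
    residual = pred_coils - gt_coils
    return float(np.sum(np.abs(sens * residual) ** 2))
```

A perfect reconstruction gives zero loss; with unit sensitivities the expression reduces to the plain $\ell_2$ loss summed over coils.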

## 3 Evaluation

We used the public knee dataset provided by Hammernik et al. [hammernik2018learning] (available at mridata.org). The dataset contains 100 patients, with 20 subjects per acquisition protocol. For each approach, one network was trained to reconstruct all acquisition protocols simultaneously; per protocol, we used 15 subjects for training and 5 for testing. The proposed approaches were compared with $\ell_1$-SPIRiT [murphy2012fast] and the Variational Network (VN) [hammernik2018learning]. We used Cartesian undersampling with acceleration factors (AF) 4 and 6, fully sampling a central region of 24 lines, which was also used as the calibration region for estimating the sensitivity maps. D-POCSENSE and DC-CNN were trained with the hyperparameter settings from [schlemper2017deep] and convolution kernels with dilation factor 2. The networks were trained using Adam for 200 epochs with batch size 4. Default parameters were used for both $\ell_1$-SPIRiT and VN. We used PSNR and SSIM as evaluation metrics.
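For reference, PSNR can be computed as below. This follows the standard definition with the peak taken from the ground-truth magnitude; the exact convention used in the evaluation (e.g. the choice of peak value) is an assumption.

```python
import numpy as np

def psnr(gt, recon):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum magnitude of the ground-truth image."""
    mse = np.mean(np.abs(gt - recon) ** 2)
    peak = np.abs(gt).max()
    return 10.0 * np.log10(peak ** 2 / mse)
```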

## 4 Results

Quantitative results are summarised in Table 1 for each acquisition protocol. On average, both proposed methods outperformed the compressed sensing approach. D-POCSENSE achieved performance close to VN for AF=4, whereas DC-CNN was slightly worse; all methods provided similar SSIM. For AF=6, VN achieved the highest PSNR. Sample reconstructions are shown in Fig. 6 for AF=4 and AF=6 respectively. For the axial images, D-POCSENSE gave the most homogeneous image, whereas DC-CNN and VN often failed to remove aliasing. For AF=4, all methods generated sharp images. For AF=6, DC-CNN performed worse than D-POCSENSE and VN, and residual aliasing is prominent.

## 5 Discussion and Conclusion

In this work, we proposed simple extensions to DC-CNN for parallel imaging. Comparing the two approaches explored so far, D-POCSENSE outperformed DC-CNN overall, which suggests that incorporating the sensitivity estimate is advantageous. We speculate that this is because it allows intermediate sub-networks to operate directly in the output space, and because the loss is optimised directly with respect to the final output. Nevertheless, DC-CNN achieved the highest SSIM in some regimes, which suggests that a novel way of combining the raw data could lead to improved algorithms. The proposed methods achieved performance comparable to state-of-the-art algorithms; however, we note that the Variational Network produced the best result overall.

## 6 Note

We observed that training the D-POCSENSE and DC-CNN networks for longer can further remove the residual aliasing present in the reconstructions, eventually reaching similar performance. The presented work has since been extended to the *variable-splitting network* [duan2019vs].

## 7 Acknowledgements

Jo Schlemper is partially funded by EPSRC Grant (EP/P001009/1).