dAUTOMAP: decomposing AUTOMAP to achieve scalability and enhance performance

Jo Schlemper(✉)[1], Ilkay Oksuz[2], James Clough[2], Jinming Duan[1], Andrew P. King[2], Julia A. Schnabel[2], Jo V. Hajnal[2,3], Daniel Rueckert[1]

[1] Biomedical Image Analysis Group, Imperial College London, UK. {jo.schlemper11,d.rueckert}@imperial.ac.uk
[2] School of Biomedical Engineering and Imaging Sciences, King's College London, UK. {o.oksuz,j.clough,andrew.king,julia.schnabel}@kcl.ac.uk
[3] Imaging and Biomedical Engineering Clinical Academic Group, King's College London, UK. {jo.hajnal}@kcl.ac.uk
Abstract

AUTOMAP [zhu2018image] is a promising generalized reconstruction approach; however, it is not scalable, which limits its practicality. We present dAUTOMAP, a novel way of decomposing the domain transformation of AUTOMAP that makes the model scale linearly with the input size. We show that dAUTOMAP outperforms AUTOMAP with significantly fewer parameters.

1 Introduction

Recently, automated transform by manifold approximation (AUTOMAP) [zhu2018image] has been proposed as an innovative approach to directly learn the transformation from the source signal domain to the target image domain. While the applicability of AUTOMAP to a range of tasks has been demonstrated, its practicality remains limited because the required number of parameters scales quadratically with the input size. We present a novel way of decomposing the domain transformation which makes the model scale linearly with the input size. We term the resulting network dAUTOMAP (decomposed AUTOMAP). We show that, remarkably, the proposed approach outperforms AUTOMAP on the provided dataset with significantly fewer parameters.

(Presented at ISMRM 27th Annual Meeting & Exhibition, Abstract #658)

2 Methods

Let $x \in \mathbb{C}^{M \times N}$ be a complex-valued image. The two-dimensional Discrete Fourier Transform (DFT) is given by:

$$X[k,l] = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x[m,n]\, e^{-2\pi i \left(\frac{km}{M} + \frac{ln}{N}\right)} \qquad (1)$$

This is commonly written as a matrix product: $\mathrm{vec}(X) = F\,\mathrm{vec}(x)$, where we take row-major vectorization and $F \in \mathbb{C}^{MN \times MN}$, with $X, x \in \mathbb{C}^{M \times N}$. As the matrix $F$ is the Kronecker product of two one-dimensional DFTs, we have:

$$F\,\mathrm{vec}(x) = (F_M \otimes F_N)\,\mathrm{vec}(x) = \mathrm{vec}(F_M\, x\, F_N^\top) \qquad (2)$$

where $(F_M)_{km} = e^{-2\pi i km/M}$ and $(F_N)_{ln} = e^{-2\pi i ln/N}$. Observe that $x F_N^\top$ can be computed using a convolution layer with $N$ kernels of size $1 \times N$ with no padding, where the output tensor has size $M \times N$. Motivated by this, we propose a decomposed transform layer (DT layer): a convolution layer with the above kernel size, which is learnable. In the simplest case, the layer can be reduced to the (inverse) Fourier transform or identity. A 2D DFT can be performed by applying the DT layer twice, where the intermediate tensor is first reshaped into $N \times M$ and then conjugate-transposed. Note that the complex nature of the operation is preserved by representing real and imaginary components as separate channels, which doubles the number of output channels (i.e. $2N$). Therefore, the convolution kernel of the DT layer has the shape $(2N, 2, 1, N)$. For the second DT layer, $M$ and $N$ are swapped.
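To make the decomposition in Eq. (2) concrete, the following NumPy snippet (an illustrative check, not the authors' code) verifies that the full 2D DFT matrix, formed as the Kronecker product of two one-dimensional DFT matrices and applied to the row-major vectorised image, agrees with applying the two small one-dimensional transforms separately:

```python
import numpy as np

M, N = 4, 6
rng = np.random.default_rng(0)
x = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# One-dimensional DFT matrices: (F_M)_{km} = exp(-2*pi*i*k*m/M)
F_M = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
F_N = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

# Full 2D DFT as one big matrix acting on the row-major vectorised image
F = np.kron(F_M, F_N)
X_full = (F @ x.reshape(-1)).reshape(M, N)

# Decomposed form: two small 1D transforms, one per axis (Eq. (2))
X_dec = F_M @ x @ F_N.T

assert np.allclose(X_full, X_dec)           # separability holds
assert np.allclose(X_full, np.fft.fft2(x))  # matches the standard 2D FFT
```

The memory saving follows directly: the full matrix has $(MN)^2$ entries, whereas the two factors have only $M^2 + N^2$ between them.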

The proposed dAUTOMAP, shown in Fig. 1, replaces the fully-connected layers in AUTOMAP by DT layers. We used ReLU as the choice of non-linearity.
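As an illustration of how a DT layer can operate on a real-valued two-channel (real/imaginary) representation, here is a minimal NumPy sketch; this is our own reconstruction for exposition, not the released implementation. Each layer is fully connected along the last axis (the matrix form of the $1 \times N$ convolution described above), and applying it twice with the axis swap in between recovers the 2D DFT when the weights are initialised to the Fourier kernels:

```python
import numpy as np

def dt_layer(x_ri, W):
    """One decomposed transform (DT) layer in matrix form.

    x_ri: input of shape (2, M, N), real/imaginary as channels.
    W: weights of shape (2K, 2, K==N), acting like 2K conv kernels of
    size 1xN applied to the 2-channel input with no padding."""
    out = np.einsum('ocn,cmn->om', W, x_ri)   # fully connected along the last axis
    K = W.shape[0] // 2
    return out.reshape(2, K, x_ri.shape[1])   # split channels back to real/imag; axes swapped

def dft_weights(N):
    """Initialise DT-layer weights to the 1D DFT (they remain learnable in training)."""
    F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
    W = np.empty((2 * N, 2, N))
    W[:N, 0], W[:N, 1] = F.real, -F.imag   # real output channels
    W[N:, 0], W[N:, 1] = F.imag, F.real    # imaginary output channels
    return W

M, N = 4, 6
rng = np.random.default_rng(0)
x = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x_ri = np.stack([x.real, x.imag])          # (2, M, N)

h = dt_layer(x_ri, dft_weights(N))         # transform along N; intermediate is (2, N, M)
y = dt_layer(h, dft_weights(M))            # transform along M; output is (2, M, N)

assert np.allclose(y[0] + 1j * y[1], np.fft.fft2(x))
```

The sketch computes the forward DFT; initialising the weights to the conjugated kernels instead would give the inverse transform used for reconstruction.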

Figure 1: The proposed network architecture of dAUTOMAP. The network takes k-space data on a Cartesian grid and directly recovers the underlying image. The fully-connected layers in AUTOMAP are replaced by decomposed transform (DT) layers, which are fully-connected along one axis. For two-dimensional input, we apply the DT layer twice, once for each axis. Each DT block is activated by a ReLU nonlinearity. The DT blocks are followed by a sparse convolutional autoencoder, as proposed in [zhu2018image].

3 Evaluation

We evaluated the proposed method in a simulation-based study using short-axis (SA) cardiac cine magnitude images from the UK Biobank Study [petersen2015uk] (1M SA slices). To compare with AUTOMAP, the data were subsampled to a central k-space region. Both methods were evaluated on reconstruction tasks for three undersampling patterns: (1) Cartesian with acceleration factor (AF) 2, (2) Poisson with AF 4, and (3) variable-density Poisson (VDP) with AF 7 [uecker2015berkeley]. For dAUTOMAP, we also experimented with images on a larger k-space grid, with Cartesian undersampling.

Both networks were initialised randomly and trained for 1000 epochs, using the RMSProp and Adam optimisers for AUTOMAP and dAUTOMAP respectively. The reconstructions were evaluated using mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and high-frequency error norm (HFEN). We also compared reconstruction speed and the number of parameters required.

4 Results

| AF | Undersampling | Model | MSE | PSNR (dB) | SSIM | HFEN |
|----|---------------|-------|-----|-----------|------|------|
| 7 | VDP | AUTOMAP | 3.27 ± 1.17 | 22.07 ± 1.35 | 0.76 ± 0.03 | 0.54 ± 0.07 |
| 7 | VDP | dAUTOMAP | 1.76 ± 0.47 | 24.67 ± 1.14 | 0.82 ± 0.02 | 0.39 ± 0.03 |
| 4 | Poisson | AUTOMAP | 3.63 ± 1.28 | 21.61 ± 1.34 | 0.74 ± 0.03 | 0.62 ± 0.09 |
| 4 | Poisson | dAUTOMAP | 1.54 ± 0.43 | 25.28 ± 1.20 | 0.84 ± 0.02 | 0.40 ± 0.03 |
| 2 | Cartesian | AUTOMAP | 2.81 ± 1.35 | 22.84 ± 1.64 | 0.80 ± 0.03 | 0.42 ± 0.07 |
| 2 | Cartesian | dAUTOMAP | 1.01 ± 0.39 | 27.2 ± 1.51 | 0.89 ± 0.02 | 0.27 ± 0.05 |
| 2 | Cartesian (larger grid) | AUTOMAP | n/a | n/a | n/a | n/a |
| 2 | Cartesian (larger grid) | dAUTOMAP | 0.51 ± 0.25 | 30.31 ± 1.81 | 0.91 ± 0.02 | 0.29 ± 0.04 |

Table 1: Quantitative comparison between AUTOMAP and dAUTOMAP. In general, dAUTOMAP outperformed AUTOMAP for Mean Squared Error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and high frequency error norm (HFEN).
Figure 2: Reconstructions by AUTOMAP and dAUTOMAP for different undersampling patterns, with the resulting error maps. One can see that AUTOMAP tends to over-smooth the edges, which dAUTOMAP preserves better.
Figure 3: Sample reconstructions by dAUTOMAP from Cartesian k-space data, with the corresponding error maps. The majority of the artefacts were removed; however, some high-frequency aliasing can still be observed.

As shown in Table 1, the proposed approach outperformed AUTOMAP (Wilcoxon signed-rank test). Fig. 2 shows sample reconstructions. We notice that AUTOMAP tends to over-smooth the image, whereas dAUTOMAP preserves fine structure better, even though the residual artefact is more prominent. The result of dAUTOMAP for the larger Cartesian k-space grid is shown in Fig. 3, demonstrating that the method successfully learnt a transform which simultaneously dealiases the image. The execution speeds were comparable (Table 2). The parameters of the proposed approach required only 1.5MB of memory for the smaller k-space grid, compared to 3.1GB for AUTOMAP (these numbers increase to 3.1MB vs 56GB for the larger grid).

| Model | #Parameters (×10⁶), smaller grid | Speed (ms) | #Parameters (×10⁶), larger grid | Speed (ms) |
|-------|----------------------------------|------------|---------------------------------|------------|
| AUTOMAP | 806 | 0.36 ± 0.36 | 13000 | n/a |
| dAUTOMAP | 0.37 | 0.48 ± 0.10 | 1.16 | 0.50 ± 0.12 |

Table 2: Comparison of the number of parameters and execution speed.

5 Discussion and Conclusion

In this work, we proposed a simple architecture which makes AUTOMAP scalable, based on the observation that the original Fourier kernels are linearly separable. We found experimentally that this approach yields superior performance in practice, which we attribute to its significantly smaller parameter count, making it easier to train and less prone to overfitting. In future work, we plan to investigate performance with non-Cartesian sampling strategies, which would require regridding, as well as extensions to 3D data. The code is available at http://github.com/js3611/dAUTOMAP.

6 Acknowledgements

Jo Schlemper is partially funded by EPSRC Grant (EP/P001009/1).

References
