Low Dose CT Image Reconstruction With Learned Sparsifying Transform


Abstract

A major challenge in computed tomography (CT) is to reduce X-ray dose to a low or even ultra-low level while maintaining the high quality of reconstructed images. We propose a new method for CT reconstruction that combines penalized weighted-least squares reconstruction (PWLS) with regularization based on a sparsifying transform (PWLS-ST) learned from a dataset of numerous CT images. We adopt an alternating algorithm to optimize the PWLS-ST cost function that alternates between a CT image update step and a sparse coding step. We adopt a relaxed linearized augmented Lagrangian method with ordered-subsets (relaxed OS-LALM) to accelerate the CT image update step by reducing the number of forward and backward projections. Numerical experiments on the XCAT phantom show that for low dose levels, the proposed PWLS-ST method dramatically improves the quality of reconstructed images compared to PWLS reconstruction with a nonadaptive edge-preserving regularizer (PWLS-EP).

Xuehang Zheng, Zening Lu, Saiprasad Ravishankar, Yong Long*, Jeffrey A. Fessler

University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, China
Department of Electrical Engineering and Computer Science, University of Michigan, MI, USA

Keywords: Low dose CT, sparsifying transform learning, statistical image reconstruction, sparse representation, dictionary learning

1 Introduction

A major challenge in computed tomography (CT) is to reduce X-ray dose to a low or even ultra-low level while maintaining the high quality of reconstructed images. Low dose CT (LDCT) scans that still provide good image quality could significantly improve the benefits of CT scans and open up numerous entirely new clinical applications.

Currently, most commercial CT scanners use a technique called filtered back-projection (FBP) for image reconstruction. FBP requires undesirably high doses of radiation to produce high-quality diagnostic images. Model-based image reconstruction (MBIR) methods, also known as statistical image reconstruction methods, can produce high-quality and accurate images while reducing patient radiation exposure. Weighted-least squares (WLS) estimation is commonly used for CT [1]. WLS estimation with proper statistical weighting, which gives less weight to noisier measurements and more weight to more reliable data, reduces noise in the reconstructed image. Penalized weighted-least squares (PWLS) reconstruction, which adds regularization based on prior knowledge of the underlying unknown object, further improves image quality in LDCT reconstruction [2]. Thus, MBIR with better image priors is a promising way to develop improved reconstruction methods for high quality LDCT imaging.

Prior information extracted from big datasets of CT images could potentially enable dramatic improvements in image reconstruction from LDCT measurements. It is well known that natural signals are approximately sparse in certain transform domains, such as the wavelet and discrete gradient domains. A sparsifying transform (ST) maps signals into such domains, where they can be represented using a few non-zero coefficients. Ravishankar and Bresler [3, 4] proposed a generalized analysis dictionary learning method, called transform learning, to efficiently find sparse representations of data. Transform learning avoids the highly non-convex or NP-hard optimization problems involved in both synthesis [5, 6] and previous analysis [7] dictionary learning methods, and shows promising performance and speed-ups over the popular synthesis K-SVD [6] algorithm in applications such as image denoising.

Xu et al. [8] first applied dictionary learning to CT image reconstruction by proposing a PWLS approach with regularization based on a redundant synthesis dictionary. Their method uses either a global dictionary trained on image patches extracted from one normal-dose FBP image, or an adaptive dictionary jointly estimated with the low-dose image. Pfister and Bresler [2, 9] proposed a model-based iterative reconstruction method with adaptive sparsifying transforms that jointly estimates the ST and the image, showing the promise of PWLS reconstruction with ST regularization. Existing CT image reconstruction methods based on dictionary/transform learning have two common downsides. First, the dictionary is often learned from a very small number of prior images or from the current measurements themselves, which does not take advantage of existing big databases of CT images acquired from thousands of patients. Second, the prior images are often from the same patient scanned at regular dose; when such a prior image is not available, these methods cannot be used.

We propose a new method for low dose CT reconstruction that combines conventional PWLS reconstruction with regularization based on a sparsifying transform (PWLS-ST) learned from a dataset of numerous CT images. Numerical experiments on the XCAT phantom show that for low dose levels, our method dramatically improves the quality of reconstructed images compared to PWLS reconstruction with an edge-preserving hyperbola regularizer (PWLS-EP). We adopt a relaxed linearized augmented Lagrangian method with ordered-subsets (relaxed OS-LALM) [10] to accelerate the image reconstruction process.

2 Problem Formulation

We solve the following optimization problem to reconstruct an image $x \in \mathbb{R}^{N_p}$ from noisy sinogram data $y \in \mathbb{R}^{N_d}$ using a pre-learned [4] ST matrix $\Omega \in \mathbb{R}^{l \times l}$:

$$\min_{x \succeq 0} \; \frac{1}{2}\,\|y - A x\|_W^2 + \beta\, \mathsf{R}(x) \qquad \text{(P0)}$$

where the regularizer based on the sparsifying transform is defined as:

$$\mathsf{R}(x) \triangleq \min_{\{z_j\}} \sum_{j=1}^{N} \Big\{ \|\Omega P_j x - z_j\|_2^2 + \gamma^2 \|z_j\|_0 \Big\} \qquad (1)$$

$W = \mathrm{diag}\{w_i\}$ is the diagonal weighting matrix with elements being the reciprocal of the variance, i.e., $w_i = 1/\sigma_i^2$, $A$ is the system matrix of a CT scan, and $y$ is the measured sinogram data. $N$ is the number of image patches, the operator $P_j \in \mathbb{R}^{l \times N_p}$ extracts the $j$th patch of $l$ voxels of $x$ as a vector $P_j x$, $z_j \in \mathbb{R}^{l}$ denotes the sparse representation of $P_j x$, $\beta > 0$ is a positive parameter to control the noise and resolution trade-off, $\gamma$ is a weight to control sparsity in the model, and $\|\cdot\|_0$ is the "$\ell_0$ norm" that counts the number of nonzero elements in a vector.

In (P0), the patches of the underlying image are assumed to be approximately sparse in the learned transform domain. We estimate both the image and the sparse coefficients from LDCT data.
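For concreteness, the following minimal NumPy sketch evaluates the cost in (P0)-(1) for a candidate image. The small dense matrix A standing in for the CT system matrix, the simple patch extractor, and the function names are illustrative assumptions, not the implementation used for the experiments below.

```python
import numpy as np

def extract_patches(x_img, psize=8, stride=8):
    """Collect vectorized psize-by-psize patches of a 2D image as columns of a matrix."""
    H, W = x_img.shape
    cols = []
    for r in range(0, H - psize + 1, stride):
        for c in range(0, W - psize + 1, stride):
            cols.append(x_img[r:r + psize, c:c + psize].reshape(-1))
    return np.stack(cols, axis=1)                 # l x N (patches P_j x as columns)

def pwls_st_cost(x_img, y, A, w, Omega, beta, gamma, psize=8, stride=8):
    """(1/2)||y - A x||_W^2 + beta * sum_j min_z { ||Omega P_j x - z_j||_2^2 + gamma^2 ||z_j||_0 }."""
    x = x_img.reshape(-1)
    r = y - A @ x
    data_term = 0.5 * np.sum(w * r ** 2)          # W = diag(w), w_i = 1 / variance_i
    P = extract_patches(x_img, psize, stride)
    C = Omega @ P                                 # transform-domain coefficients Omega P_j x
    Z = C * (np.abs(C) >= gamma)                  # inner minimizer: hard-threshold at gamma
    reg = np.sum((C - Z) ** 2) + gamma ** 2 * np.count_nonzero(Z)
    return data_term + beta * reg
```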

3 Algorithm

3.1 Sparsifying Transform (ST) Learning

We learn a sparsifying transform (ST) matrix $\Omega \in \mathbb{R}^{l \times l}$ from the patches extracted from a dataset of regular dose CT images, i.e., we solve the following problem:

$$\min_{\Omega, Z} \; \|\Omega X - Z\|_F^2 + \lambda\, Q(\Omega) + \sum_{i=1}^{N'} \eta^2 \|Z_i\|_0 \qquad \text{(P1)}$$

where $N'$ is the number of training patches, $X \in \mathbb{R}^{l \times N'}$ is the matrix whose columns are the vectorized training patches, $Z \in \mathbb{R}^{l \times N'}$ is a matrix whose columns $Z_i$ are the sparse codes of the corresponding training signals (vectorized patches) in $X$, $\lambda$ and $\eta$ are positive scalar parameters, and the regularizer $Q(\Omega) \triangleq \|\Omega\|_F^2 - \log\,|\det \Omega|$ prevents trivial solutions and enables control over the condition number of $\Omega$ [4].
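As a rough illustration of how (P1) can be optimized, the sketch below performs one pass of the alternating scheme of [4]: sparse coding by hard-thresholding, followed by a closed-form transform update. The penalty form, the update expression, the function name, and the parameter values in the usage lines follow the formulation above and [4], but they are assumptions of this sketch rather than the exact training code.

```python
import numpy as np

def transform_learning_step(X, Omega, lam, eta):
    """One alternation for (P1): update the sparse codes Z, then the transform Omega."""
    # Sparse coding: hard-threshold the transform coefficients of each training patch at eta.
    C = Omega @ X
    Z = C * (np.abs(C) >= eta)

    # Transform update (closed form, following [4]): factor X X' + lam I = L L'.
    l = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(l))
    Linv = np.linalg.inv(L)
    U, S, Vt = np.linalg.svd(Linv @ X @ Z.T)          # full SVD of an l x l matrix
    Sig = np.diag(0.5 * (S + np.sqrt(S ** 2 + 2.0 * lam)))
    Omega_new = Vt.T @ Sig @ U.T @ Linv
    return Omega_new, Z

# Illustrative usage on random "training patches" (columns of X).
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 1000))                   # stand-in for vectorized 8x8 patches
Omega = np.eye(64)                                    # e.g., identity (or 2D DCT) initialization
lam = 1e-2 * np.linalg.norm(X, 'fro') ** 2            # illustrative choice of lambda
for _ in range(50):
    Omega, Z = transform_learning_step(X, Omega, lam=lam, eta=1.0)
```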

3.2 Optimization algorithm

We propose an alternating algorithm to solve (P0) that alternates between updating $x$ (image update step) and $\{z_j\}$ (sparse coding step), with the other variables kept fixed.

Image Update Step

With $\{z_j\}$ fixed, (P0) reduces to the following weighted least squares problem:

$$\min_{x \succeq 0} \; \frac{1}{2}\,\|y - A x\|_W^2 + \beta \sum_{j=1}^{N} \|\Omega P_j x - z_j\|_2^2 \qquad (2)$$

We solve this problem using the relaxed OS-LALM [10] by iterating over the following update steps:

$$
\begin{aligned}
s^{(k+1)} &= \rho\,\big(D_A x^{(k)} - h^{(k)}\big) + (1-\rho)\,g^{(k)} \\
x^{(k+1)} &= \Big[\, x^{(k)} - \big(\rho D_A + D_R\big)^{-1} \big( s^{(k+1)} + \nabla R_2(x^{(k)}) \big) \Big]_{\mathcal{C}} \\
\zeta^{(k+1)} &\triangleq M\, \nabla L_m(x^{(k+1)}) = M\, A_m' W_m \big(A_m x^{(k+1)} - y_m\big) \\
g^{(k+1)} &= \frac{\rho}{\rho+1}\big(\alpha\, \zeta^{(k+1)} + (1-\alpha)\, g^{(k)}\big) + \frac{1}{\rho+1}\, g^{(k)} \\
h^{(k+1)} &= \alpha \big(D_A x^{(k+1)} - \zeta^{(k+1)}\big) + (1-\alpha)\, h^{(k)}
\end{aligned}
\qquad (3)
$$

where $L(x) \triangleq \frac{1}{2}\|y - Ax\|_W^2$ is the data-fidelity term and $R_2(x) \triangleq \beta \sum_{j=1}^{N} \|\Omega P_j x - z_j\|_2^2$ is the regularization term in (2), $D_A$ is a diagonal majorizing matrix of $A'WA$, e.g., $D_A = \mathrm{diag}\{A'WA\mathbf{1}\}$ [11], $[\cdot]_{\mathcal{C}}$ is an operator that projects the input vector onto the convex set $\mathcal{C}$ encoding the constraint in (P0) (nonnegativity), $M$ is the number of ordered subsets, and $A_m$, $W_m$, and the vector $y_m$ are sub-matrices of $A$, $W$, and sub-vector of $y$, respectively, for the $m$th subset. $\alpha$ is the (over-)relaxation parameter, and $\rho_k$ is the AL penalty parameter decreasing gradually as iterations progress [10], i.e.,

$$
\rho_k(\alpha) = \begin{cases} 1, & k = 0 \\ \dfrac{\pi}{\alpha(k+1)} \sqrt{1 - \left(\dfrac{\pi}{2\alpha(k+1)}\right)^2}, & k > 0 \end{cases}
\qquad (4)
$$

The learned sparsifying transform $\Omega$ is a well-conditioned square matrix [4]. $D_R$ is a diagonal majorizing matrix of the Hessian of the regularizer $R_2(x)$, e.g.,

$$D_R \triangleq 2\beta\, \lambda_{\max}(\Omega'\Omega) \sum_{j=1}^{N} P_j' P_j \;\succeq\; \nabla^2 R_2(x) = 2\beta \sum_{j=1}^{N} P_j' \Omega'\Omega P_j \qquad (5)$$

The term $\sum_{j=1}^{N} P_j' P_j$ is a diagonal matrix with the diagonal entries corresponding to image pixel locations and their values being the number of patches overlapping each pixel [12]. If we assume periodically positioned overlapping image patches that wrap around at image boundaries, then the diagonal entries are equal, i.e., $\sum_{j=1}^{N} P_j' P_j = \nu I$, where $\nu$ is a scalar. In particular, when the overlap stride is 1, $\nu$ is equal to the image patch size $l$. Therefore, $D_R$ simplifies to:

$$D_R = 2\beta\, l\, \lambda_{\max}(\Omega'\Omega)\, I \qquad (6)$$

Since $D_R$ is independent of $x$ and $\{z_j\}$, we precompute it prior to iterating.
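Under the update forms (3)-(6) above, a minimal NumPy sketch of the image update step might look as follows. It assumes a small dense nonnegative matrix A in place of the CT forward projector, non-overlapping one-dimensional "patches" rather than the stride-1 overlapping image patches used in the paper, and illustrative function names and subset partitions; it is a sketch of the iteration, not the implementation used for the reported results.

```python
import numpy as np

def rho_schedule(k, alpha):
    """Decreasing AL penalty parameter of (4), following [10]; rho_0 = 1."""
    if k == 0:
        return 1.0
    t = np.pi / (alpha * (k + 1))
    return t * np.sqrt(max(0.0, 1.0 - (t / 2.0) ** 2))

def image_update(x, z, y, A, w, Omega, beta, n_iters=5, n_subsets=4, alpha=1.999):
    """Approximately solve (2) with a relaxed OS-LALM-style loop implementing (3)."""
    Np, l = x.size, Omega.shape[1]
    N = Np // l                                   # non-overlapping "patches" for simplicity
    assert Np % l == 0 and z.shape == (l, N)

    # Diagonal majorizers: D_A = diag(A' W A 1) and D_R as in (6).
    D_A = np.maximum(A.T @ (w * (A @ np.ones(Np))), 1e-12)
    D_R = 2.0 * beta * np.linalg.norm(Omega, 2) ** 2 * np.ones(Np)

    M = n_subsets
    subsets = [np.arange(m, y.size, M) for m in range(M)]   # simple ray partition

    def grad_L_sub(xv, idx):                      # M * A_m' W_m (A_m x - y_m)
        Am, wm, ym = A[idx], w[idx], y[idx]
        return M * (Am.T @ (wm * (Am @ xv - ym)))

    def grad_R(xv):                               # 2*beta * sum_j P_j' Omega'(Omega P_j x - z_j)
        Pm = xv.reshape(N, l).T
        return 2.0 * beta * (Omega.T @ (Omega @ Pm - z)).T.reshape(-1)

    # Initialization as in Algorithm 1.
    zeta = grad_L_sub(x, subsets[-1])
    g = zeta.copy()
    h = D_A * x - zeta

    it = 0
    for _ in range(n_iters):
        for m in range(M):
            rho = rho_schedule(it, alpha)
            it += 1
            s = rho * (D_A * x - h) + (1.0 - rho) * g
            x = np.maximum(x - (s + grad_R(x)) / (rho * D_A + D_R), 0.0)  # projection onto x >= 0
            zeta = grad_L_sub(x, subsets[m])
            g = rho / (rho + 1.0) * (alpha * zeta + (1.0 - alpha) * g) + g / (rho + 1.0)
            h = alpha * (D_A * x - zeta) + (1.0 - alpha) * h
    return x
```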

Sparse Coding Step

With $x$ fixed, we update $\{z_j\}$ by solving

$$\min_{\{z_j\}} \sum_{j=1}^{N} \Big\{ \|\Omega P_j x - z_j\|_2^2 + \gamma^2 \|z_j\|_0 \Big\} \qquad (7)$$

The optimal sparse codes are given in closed form as $\hat{z}_j = H_\gamma(\Omega P_j x)$, i.e., setting the entries with magnitude less than $\gamma$ to zero. The hard-thresholding operator $H_\gamma(\cdot)$ is applied to each entry of a vector $b$ as

$$\big(H_\gamma(b)\big)_i = \begin{cases} 0, & |b_i| < \gamma \\ b_i, & |b_i| \ge \gamma \end{cases} \qquad (8)$$
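The sparse coding step is inexpensive: it reduces to elementwise hard-thresholding of the transform coefficients of each patch, as in (7)-(8). A minimal sketch, again treating the patches as non-overlapping chunks of a vector for simplicity:

```python
import numpy as np

def sparse_coding(x, Omega, gamma):
    """Closed-form solution of (7): z_j = H_gamma(Omega P_j x), per (8)."""
    l = Omega.shape[1]
    P = x.reshape(-1, l).T               # columns are the (non-overlapping) patches P_j x
    C = Omega @ P                        # transform-domain coefficients
    return C * (np.abs(C) >= gamma)      # zero out entries with magnitude below gamma
```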
Input: initial image $\tilde{x}^{(0)}$, initial sparse codes $\{\tilde{z}_j^{(0)}\}$, parameters $\alpha$, $D_A$, $D_R$.
Output: $\tilde{x}^{(I)}$ - reconstructed image.
for $i = 0, 1, \ldots, I-1$ do
     (1) Image Reconstruction: with $\{\tilde{z}_j^{(i)}\}$ fixed,
         Initialization: $x^{(0)} = \tilde{x}^{(i)}$, $\zeta^{(0)} = g^{(0)} = M A_M' W_M (A_M x^{(0)} - y_M)$, $h^{(0)} = D_A x^{(0)} - \zeta^{(0)}$, $\rho = 1$.
     for $k = 0, 1, \ldots, K-1$ do
          for $m = 0, 1, \ldots, M-1$ do
              update $s$, $x$, $\zeta$, $g$, and $h$ using (3);
              decrease $\rho$ using (4).
          end for
     end for
     $\tilde{x}^{(i+1)} =$ the final inner iterate $x$.
     (2) Sparse Coding: with $\tilde{x}^{(i+1)}$ fixed, the optimal sparse codes are $\tilde{z}_j^{(i+1)} = H_\gamma(\Omega P_j \tilde{x}^{(i+1)})$.
end for
Algorithm 1: PWLS-ST Algorithm

Algorithm 1 describes the proposed optimization algorithm for solving the PWLS-ST reconstruction problem (P0).
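To show how the two steps of Algorithm 1 fit together, the following self-contained toy example alternates a sparse coding step with a simplified image update, where a few nonnegatively constrained, diagonally majorized gradient steps stand in for the relaxed OS-LALM update of (3). All sizes, parameter values, and the random nonnegative "system matrix" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_pix, l = 200, 64, 8                        # toy problem sizes
A = rng.uniform(size=(n_rays, n_pix)) / n_rays       # nonnegative stand-in for the system matrix
w = rng.uniform(0.5, 1.5, n_rays)                    # statistical weights (1 / variance)
x_true = rng.uniform(size=n_pix)
y = A @ x_true + 1e-3 * rng.standard_normal(n_rays)  # noisy "sinogram"
Omega = np.linalg.qr(rng.standard_normal((l, l)))[0] # stand-in for the learned ST
beta, gamma = 0.05, 0.02
N = n_pix // l                                       # non-overlapping patches for simplicity

# Diagonal majorizer of the cost Hessian: diag(A'WA 1) + 2*beta*lmax(Omega'Omega) I.
D = A.T @ (w * (A @ np.ones(n_pix))) + 2.0 * beta * np.linalg.norm(Omega, 2) ** 2

x = np.zeros(n_pix)
Z = np.zeros((l, N))                                 # initial sparse codes
for outer in range(50):
    # (1) Image update for (2) with Z fixed: a few majorized projected-gradient steps.
    for inner in range(5):
        gL = A.T @ (w * (A @ x - y))
        gR = 2.0 * beta * (Omega.T @ (Omega @ x.reshape(N, l).T - Z)).T.reshape(-1)
        x = np.maximum(x - (gL + gR) / D, 0.0)
    # (2) Sparse coding with x fixed: hard-threshold the patch transform coefficients.
    Z = Omega @ x.reshape(N, l).T
    Z = Z * (np.abs(Z) >= gamma)

print("toy reconstruction error:", np.sqrt(np.mean((x - x_true) ** 2)))
```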

4 Experimental Results

We evaluate the proposed PWLS-ST method and compare its image reconstruction quality with that of conventional FBP with a Hanning window, PWLS reconstruction with regularization based on the 2D DCT in (1) (PWLS-DCT), and PWLS reconstruction with edge-preserving hyperbola regularization (PWLS-EP). The PWLS-EP reconstruction is optimized using the relaxed OS-LALM algorithm [10].

Figure 1: Rows of the sparsifying transforms shown as patches for (a) the 2D DCT, and (b) the learned ST.

We pre-learned a ST matrix $\Omega$ from five different slices of the XCAT phantom [13] using (P1). We extracted image patches with an overlap stride of 1 from the five XCAT slices. We ran enough iterations of the alternating minimization algorithm proposed in [4] to ensure that the learned ST had converged. Figure 1 shows the 2D DCT and the well-conditioned pre-learned ST. Each row of these transforms is displayed as a square patch.

We simulated a 2D fan-beam CT scan using an XCAT phantom slice that is different from the slices used for learning, with a pixel size of mm. Noisy (Poisson noise) sinograms were numerically generated with the GE LightSpeed fan-beam geometry corresponding to a monoenergetic source with , , and incident photons per ray, respectively, and no scatter. We reconstructed the image on a coarser grid with a pixel size of mm.

Figure 2: From top to bottom, the incident photon intensities are , , and . (a) Reconstructions (display window in HU). (b) Magnitudes of the difference images between the PWLS-EP and PWLS-ST reconstructions and the ground truth.

We computed the root mean square error (RMSE) in modified Hounsfield units (HU), where air is 0 HU. For a reconstructed image $\hat{x}$, RMSE $= \sqrt{\sum_{j=1}^{N_p} (\hat{x}_j - x_j^*)^2 / N_p}$, where $x^*$ is the ground truth image and $N_p$ is the number of pixels.
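For reference, the RMSE above corresponds to the following short helper, assuming the reconstruction and the ground truth are NumPy arrays already expressed in HU:

```python
import numpy as np

def rmse_hu(x_hat, x_star):
    """Root mean square error (in HU) between a reconstruction and the ground truth."""
    return np.sqrt(np.mean((np.ravel(x_hat) - np.ravel(x_star)) ** 2))
```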

Initialized with the FBP reconstructed images, the PWLS-EP method converges quickly using relaxed OS-LALM with ordered subsets. For the PWLS-DCT and the proposed PWLS-ST methods, we used the image obtained with the PWLS-EP method as the initialization. The parameters $\beta$ and $\gamma$ were determined empirically by sweeping over a large range of values. For PWLS-ST, $\beta$ was set as , , and for incident photon intensities of , , and , and the corresponding values for PWLS-DCT were , , and , respectively. In each outer iteration, we ran several inner iterations of the image update step with ordered subsets, corresponding to the values of $K$ and $M$ in Algorithm 1. For PWLS-EP, $\beta$ was set as , , and , respectively, for the four incident photon intensities, and the edge-preserving hyperbola regularizer used a shape parameter of HU.

Intensity FBP PWLS-EP PWLS-DCT PWLS-ST
24.8
30.6
44.2
Table 1: RMSE (HU) of reconstructions with FBP, PWLS-EP, PWLS-DCT and PWLS-ST for four levels of incident photon intensities.

Figure 2(a) shows the reconstructed images by FBP, PWLS-EP, PWLS-DCT and PWLS-ST. When the incident photon intensity is , although the PWLS-EP image has lower RMSE than the PWLS-ST image, the latter has no visible noise. When the incident photon intensities are and , compared with FBP and PWLS-EP, PWLS-ST greatly improves image quality in terms of decreasing noise and retaining small structures. Figure 2(b) shows the difference images (magnitudes) between the PWLS-EP and PWLS-ST reconstructions and the ground truth image, for the three incident photon intensities.

Table 1 summarizes the RMSE of reconstructions with FBP, PWLS-EP, PWLS-DCT and PWLS-ST for four different incident photon intensities. For low dose cases, PWLS-ST further decreases the RMSE achieved by PWLS-EP.

5 Conclusion

We present a PWLS-ST method that combines conventional PWLS reconstruction with regularization based on a sparsifying transform that is pre-learned from a dataset of numerous CT images, to improve the quality of reconstructed images in low dose CT imaging. Numerical experiments show the proposed PWLS-ST method may help reduce X-ray dose to a low level while still providing high quality image reconstructions. For future work, we will investigate PWLS with a union of sparsifying transforms [14] that could be pre-learned in an online manner [15] from large datasets. We also plan to compare our methods to the recent transform blind reconstruction framework [2]. We will apply the proposed PWLS-ST method to clinical CT data.

Footnotes

  1. This work was supported in part by the SJTU-UM Collaborative Research Program, NSFC (61501292), Shanghai Pujiang Talent Program (15PJ1403900), NIH grant U01 EB018753, ONR grant N00014-15-1-2141, DARPA Young Faculty Award D14AP00086, and ARO MURI grants W911NF-11-1-0391 and 2015-05174-05. *Yong Long (email: yong.long@sjtu.edu.cn).

References

  1. J-B. Thibault, K. Sauer, C. Bouman, and J. Hsieh, “A three-dimensional statistical approach to improved image quality for multi-slice helical CT,” Med. Phys., vol. 34, no. 11, pp. 4526–44, Nov. 2007.
  2. L. Pfister and Y. Bresler, “Adaptive sparsifying transforms for iterative tomographic reconstruction,” in Proc. 3rd Intl. Mtg. on image formation in X-ray CT, 2014, pp. 107–10.
  3. S. Ravishankar and Y. Bresler, “Learning sparsifying transforms,” IEEE Trans. Sig. Proc., vol. 61, no. 5, pp. 1072–86, Mar. 2013.
  4. S. Ravishankar and Y. Bresler, “$\ell_0$ sparsifying transform learning with efficient optimal updates and convergence guarantees,” IEEE Trans. Sig. Proc., vol. 63, no. 9, pp. 2389–2404, May 2015.
  5. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Im. Proc., vol. 15, no. 12, pp. 3736–45, Dec. 2006.
  6. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Sig. Proc., vol. 54, no. 11, pp. 4311–22, Nov. 2006.
  7. R. Rubinstein, T. Peleg, and M. Elad, “Analysis K-SVD: A dictionary-learning algorithm for the analysis sparse model,” IEEE Trans. Sig. Proc., vol. 61, no. 3, pp. 661–677, Feb. 2013.
  8. Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose X-ray CT reconstruction via dictionary learning,” IEEE Trans. Med. Imag., vol. 31, no. 9, pp. 1682–97, Sept. 2012.
  9. L. Pfister and Y. Bresler, “Model-based iterative tomographic reconstruction with adaptive sparsifying transforms,” in Proc. SPIE 9020 Computational Imaging XII, 2014, pp. 90200H–1–90200H–11.
  10. H. Nien and J. A. Fessler, “Relaxed linearized algorithms for faster X-ray CT image reconstruction,” IEEE Trans. Med. Imag., vol. 35, no. 4, pp. 1090–1098, Apr. 2016.
  11. H. Erdoğan and J. A. Fessler, “Ordered subsets algorithms for transmission tomography,” Phys. Med. Biol., vol. 44, no. 11, pp. 2835–51, Nov. 1999.
  12. S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Trans. Med. Imag., vol. 30, no. 5, pp. 1028–41, May 2011.
  13. W. P. Segars, M. Mahesh, T. J. Beck, E. C. Frey, and B. M. W. Tsui, “Realistic CT simulation using the 4D XCAT phantom,” Med. Phys., vol. 35, no. 8, pp. 3800–8, Aug. 2008.
  14. B. Wen, S. Ravishankar, and Y. Bresler, “Structured overcomplete sparsifying transform learning with convergence guarantees and applications,” Intl. J. Comp. Vision, vol. 114, no. 2-3, pp. 137–167, 2015.
  15. S. Ravishankar, B. Wen, and Y. Bresler, “Online sparsifying transform learning – Part I: Algorithms,” IEEE J. Sel. Top. Sig. Proc., vol. 9, no. 4, pp. 625–636, 2015.