Deconvolution-Based Backproject-Filter (BPF) Computed Tomography Image Reconstruction Method Using Deep Learning Technique

Abstract

For conventional computed tomography (CT) image reconstruction tasks, the most popular method is the so-called filtered-back-projection (FBP) algorithm, in which the acquired Radon projections are usually filtered by a ramp kernel before being back-projected to generate CT images. In this work, by contrast, we realize the idea of image-domain backproject-filter (BPF) CT image reconstruction using deep learning techniques for the first time. With a properly designed convolutional neural network (CNN), preliminary results demonstrate that it is feasible to reconstruct CT images with high spatial resolution and accurate pixel values from the highly blurred back-projection image, i.e., the laminogram. In addition, experimental results show that this deconvolution-based CT image reconstruction network has the potential to reduce CT image noise (up to ), indicating that the patient radiation dose may be reduced. Owing to these advantages, the proposed CNN-based image-domain BPF-type CT image reconstruction scheme offers promising prospects for generating high-spatial-resolution, low-noise CT images in future clinical applications.

Keywords: CT imaging, backproject-filter (BPF) reconstruction algorithm, convolutional neural network (CNN).

I Introduction

Until now, the FBP algorithm (Kak and Slaney, 1988; Hsieh et al., 2009) and its variant, the FDK cone-beam CT reconstruction algorithm (Feldkamp, Davis, and Kress, 1984), may still be the most popular analytical CT image reconstruction methods in clinical applications. In the FBP reconstruction algorithm, a 1D ramp kernel (or another similar filter) is first used to filter the Radon projections, i.e., the sinogram. Afterwards, the filtered sinogram is back-projected to generate the desired CT image. Since the 1D data filtration operation is simple and fast, the FBP CT image reconstruction framework has been widely used in practical implementations.

However, other analytical CT image reconstruction methods are also feasible in theory, for example, the Fourier-slice-theorem-based reconstruction strategy and the backproject-filter (BPF) reconstruction strategy. Herein, we mainly focus on the BPF CT image reconstruction method. As the name indicates, this strategy first backprojects the sinogram to generate the laminogram (Smith, Peters, and Bates, 1973; Martin et al., 1994; Danovich and Segal, 1994). Afterwards, an image deconvolution procedure is utilized to recover the corresponding sharp CT image. To extract the true CT image, historically, one option is to perform the deconvolution in the 2D Fourier domain. However, a practical difficulty of this method is rooted in the fact that the ideal deconvolution kernel needs an unbounded support (Smith, Peters, and Bates, 1973). This requires reconstructing the laminogram onto a support considerably larger than that of the true object, which makes the computation less efficient. Furthermore, any digital apodization might also degrade the spatial resolution of the final CT image. Other possible solutions are the conventional image-domain deconvolution or deblurring techniques (Almeida and Almeida, 2010; Campisi and Egiazarian, 2017). Although these approaches do not require a large laminogram support, they may still encounter some critical challenges: the deblurring operations may struggle to recover fine image details, or may require prolonged reconstruction times. Therefore, this BPF-type image reconstruction strategy has seldom been implemented in practice.

Figure 1: Comparison of two CT image reconstruction schemes: the top one represents the conventional FBP reconstruction strategy; the bottom one represents the proposed CNN-based BPF reconstruction strategy.

Nowadays, the fast-developing deep learning technique has shown great success in image processing fields, including classification, identification, and segmentation (LeCun, Bengio, and Hinton, 2015). This emerging technique will provide many opportunities for medical CT imaging applications. Some of the latest CT image processing studies (Jin et al., 2017; Chen et al., 2017; Wang et al., 2018; Han and Ye, 2018; Kang et al., 2018; Yang et al., 2018; Zhang et al., 2018) have demonstrated the power of deep learning, especially the deep convolutional neural network (CNN), in reducing image noise and removing image artifacts. One recent study has also demonstrated that a CNN can learn the CT image reconstruction procedure via domain-transform manifold learning (Zhu et al., 2018). These exciting and pioneering studies strongly indicate that deep learning can help solve problems that were historically difficult, and can provide more efficient and better solutions than conventional methods. Motivated by these advances, in this paper we propose to reconstruct CT images by deconvolving the highly blurred laminogram using the latest CNN techniques. In particular, we investigate the feasibility of reconstructing CT images from the laminogram via image-domain deconvolution with a CNN. The workflows of the conventional FBP reconstruction method and the new CNN-based BPF reconstruction method are compared in Fig. 1.

To the best of our knowledge, this is the first time the BPF-type CT image reconstruction idea has been revisited using deep learning techniques. This preliminary work mainly focuses on demonstrating the feasibility of the newly suggested strategy. Other potential benefits of this CNN-based CT image reconstruction framework, for example, the CT image denoising capability (Chen et al., 2017; Kang et al., 2018; Yang et al., 2018) and the CT image artifact removal capability (Han and Ye, 2018; Zhang et al., 2018), will be studied and discussed in the future. The main contributions of this work are as follows. First, we demonstrate that a properly designed CNN architecture is able to learn a close approximation of the theoretical deconvolution kernel, and can therefore reconstruct the object from the laminogram with high accuracy. Second, the reconstructed CT image has spatial resolution similar to that of the FBP reconstructed CT image. Third, CT images generated by this CNN-based BPF reconstruction method also maintain a noise texture similar to that of the FBP reconstruction method. Fourth, the CNN-based BPF reconstruction method has the potential to reduce CT image noise (up to ) compared with the FBP results. To support these observations, both numerical and experimental studies were conducted; results and quantitative measurements are presented in the following sections.

II Methods

II.1 Theoretical foundation

Without loss of generality, in this theoretical discussion section we assume that the CT system has a parallel-beam imaging geometry. Similar conclusions may also be obtained for the fan-beam geometry (Gullberg and Zeng, 1995). For the parallel-beam CT imaging geometry, the projection of an arbitrary 2D object f(x, y) on the detector plane at a certain view angle can be written as follows:

p(t, \theta) = \iint f(x, y)\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy \qquad (1)

In Eq. (1), p(t, \theta) denotes the measured projection, \delta(x\cos\theta + y\sin\theta - t) specifies an x-ray beam passing through the object, t is the distance from the origin to the beam, and \theta is the angle between the y-axis and the ray penetrating the object. For the parallel-beam imaging geometry, the projections are usually acquired over a 180-degree tube-detector rotation interval, namely, \theta \in [0, \pi). Eq. (1) is also known as the Radon transform (Kak and Slaney, 1988; Hsieh et al., 2009).

To continue the discussion, we now simply take each projection value and put it back into the object space along the Radon integration direction, one view after another. Thus the so-called laminogram, denoted as b(x, y), is generated. Explicitly, this angularly-weighted back-projection procedure can be expressed as:

b(x, y) = \int_0^{\pi} p(x\cos\theta + y\sin\theta, \theta)\, d\theta \qquad (2)

By substituting Eq. (1) into Eq. (2), one obtains the following relationship between the laminogram image b(x, y) and the reference object f(x, y):

b(x, y) = f(x, y) ** \frac{1}{\sqrt{x^2 + y^2}} \qquad (3)

where ** denotes the 2D convolution.

Evidently, the direct back-projection of the Radon projections generates a blurred version of the reference object f(x, y). The blur kernel is 1/\sqrt{x^2 + y^2}. Taking the 2D Fourier transform of Eq. (3) yields

B(u, v) = F(u, v) \cdot \frac{1}{\sqrt{u^2 + v^2}} \qquad (4)

where (u, v) are the Fourier-domain counterparts of the image-domain spatial variables (x, y), and B and F denote the 2D Fourier transforms of b and f, respectively. Note that the convolution operation in the image domain, as shown in Eq. (3), has been replaced by a multiplication in the frequency domain, as shown in Eq. (4). Immediately, one gets

F(u, v) = \sqrt{u^2 + v^2}\, B(u, v) \qquad (5)

Based on Eq. (5), the reference object f(x, y) can now be recovered by taking the inverse Fourier transform on both sides:

f(x, y) = \mathcal{F}^{-1}\{F(u, v)\} \qquad (6)
        = \mathcal{F}^{-1}\{\sqrt{u^2 + v^2}\, B(u, v)\} \qquad (7)

This is one possible solution, and the potential difficulties of implementing it have been briefly discussed above. According to the convolution theorem, a product of two functions in the frequency domain corresponds to the convolution of their counterparts in the spatial domain. Therefore, Eq. (7) can be rewritten as

f(x, y) = b(x, y) ** \mathcal{F}^{-1}\{\sqrt{u^2 + v^2}\} \qquad (8)

Eq. (8) provides another option for reconstructing the reference object f(x, y): the deconvolution is performed directly in image space. As mentioned above, conventional image-domain deconvolution algorithms may encounter difficulties here. Therefore, a deep learning technique is introduced and investigated to learn an approximation of the ideal deconvolution kernel in Eq. (8).
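To make the pipeline of Eqs. (1)-(8) concrete, the following minimal sketch builds a laminogram by unfiltered backprojection and then deconvolves it with the 2D cone filter sqrt(u^2 + v^2) in the Fourier domain. This illustrates only the analytical route, not the CNN approach proposed below; the use of scikit-image, the Shepp-Logan phantom, and the 256 x 256 grid are our assumptions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# object and parallel-beam sinogram over [0, 180) degrees, Eq. (1)
f = resize(shepp_logan_phantom(), (256, 256))
thetas = np.arange(180.0)
sino = radon(f, theta=thetas)

# Eq. (2): plain (unfiltered) backprojection gives the laminogram;
# iradon scales its output by pi/(2*N_views), so the factor 2 restores
# the integral over [0, pi) as written in Eq. (2)
lam = 2.0 * iradon(sino, theta=thetas, filter_name=None)

# Eqs. (4)-(7): deconvolve the 1/r blur with the cone filter sqrt(u^2 + v^2)
u = np.fft.fftfreq(lam.shape[0])
rho = np.hypot(u[None, :], u[:, None])     # |(u, v)| on the DFT grid
f_rec = np.real(np.fft.ifft2(np.fft.fft2(lam) * rho))
```

Because the laminogram is truncated to the same finite support as the object, the recovered image retains a residual low-frequency bias; this is precisely the unbounded-support difficulty noted in the Introduction, and part of the motivation for learning the deconvolution instead.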

Figure 2: The architecture of the CNN. In total there are 17 individual layers. All convolutional layers share the same weight-kernel size except for the last convolutional layer. The middle of the CNN is a fully-connected layer. The three numbers in each box denote the image column number, the image row number, and the channel number, respectively. All activation functions are tanh. Four shortcuts are added symmetrically with respect to the fully-connected layer.

II.2 Convolutional neural network architecture

In this study, a feed-forward convolutional neural network architecture was used. The CNN extracts high-level features from the blurred laminogram and forces the entire network to approximate the theoretical deconvolution kernel. The input of the network is the laminogram and the output is the deconvolved CT image, as shown in Fig. 2. The designed architecture contains 17 layers in total: eight convolutional layers, a fully-connected layer in the middle, and another eight convolutional layers. The first eight convolutional layers encode useful image information from the laminogram; during this procedure, three stride-(2, 2) operations progressively reduce the image size by a factor of eight in each dimension. The eight convolutional layers after the fully-connected layer form the information decoder and generate the expected sharp CT images. In contrast to the encoder, the image is upsampled step by step back to the original image size. All convolutional layers share the same weight-kernel size except for the last convolutional layer. By design, four shortcuts are added to mitigate the vanishing-gradient phenomenon during back-propagation (He et al., 2016). A sketch of one possible realization is given below.
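The following Keras sketch is one plausible realization of this layout. Only the layer counts (8 + fully-connected + 8), the three stride-(2, 2) downsamplings, the tanh activations, and the four symmetric additive shortcuts follow the description above; the kernel sizes, channel widths, and the small default image size (chosen to keep the fully-connected middle tractable in a sketch) are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv(x, ch, strides=1):
    # tanh-activated convolution; the 3x3 kernel size is an assumption
    return layers.Conv2D(ch, 3, strides=strides, padding="same",
                         activation="tanh")(x)

def build_bpf_cnn(size=64, base=4):
    inp = layers.Input((size, size, 1))          # input: the laminogram
    # encoder: 8 conv layers, three of them stride-(2, 2) -> size/8
    c1 = conv(inp, base)
    c2 = conv(c1, base)
    c3 = conv(c2, 2 * base, 2)
    c4 = conv(c3, 2 * base)
    c5 = conv(c4, 4 * base, 2)
    c6 = conv(c5, 4 * base)
    c7 = conv(c6, 8 * base, 2)
    c8 = conv(c7, 8 * base)
    # fully-connected middle layer on the flattened bottleneck
    s = size // 8
    x = layers.Dense(s * s * 8 * base, activation="tanh")(layers.Flatten()(c8))
    x = layers.Reshape((s, s, 8 * base))(x)
    # decoder: 8 conv layers mirroring the encoder,
    # with 4 additive shortcuts placed symmetrically about the middle
    x = conv(layers.Add()([x, c8]), 8 * base)    # shortcut 1
    x = layers.UpSampling2D()(conv(x, 4 * base))
    x = conv(layers.Add()([x, c6]), 4 * base)    # shortcut 2
    x = layers.UpSampling2D()(conv(x, 2 * base))
    x = conv(layers.Add()([x, c4]), 2 * base)    # shortcut 3
    x = layers.UpSampling2D()(conv(x, base))
    x = conv(layers.Add()([x, c2]), base)        # shortcut 4
    out = layers.Conv2D(1, 3, padding="same")(x) # last layer: linear output
    return Model(inp, out)
```

A call to build_bpf_cnn() returns a model mapping a laminogram batch of shape (batch, size, size, 1) to CT images of the same shape.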

The network objective loss function is defined as

\mathcal{L} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[\hat{f}(i, j) - f(i, j)\right]^2 \qquad (9)

where M and N represent the total numbers of CT image pixels along the horizontal and vertical directions, respectively. In addition, the reference CT image is denoted as f, the CNN-learned CT image is denoted as \hat{f}, and a pair (i, j) indexes a certain image pixel. During training, the network automatically updates the weights and biases of all channels and gradually learns the optimal parameters that minimize the loss function \mathcal{L}. As training continues, the learned CT image \hat{f} converges to the reference image f. Specifically, the Adam algorithm (Kingma and Ba, 2014) was used with a starting learning rate of 0.0003. The learning rate was exponentially decayed by a factor of 0.96 after every 1000 steps. The mini-batch size was 16, and batch shuffling was turned on to increase the randomness of the training data. The network was trained for 100 epochs on the TensorFlow deep learning framework using a single graphics processing unit (GPU, NVIDIA GeForce GTX 1080Ti) with 11 GB of memory.
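In Keras terms, the stated optimization settings map directly onto built-in components; the following is a sketch of that configuration, with the dataset array names being placeholders.

```python
import tensorflow as tf

# Adam with the stated starting rate, decayed by 0.96 every 1000 steps
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=3e-4, decay_steps=1000,
    decay_rate=0.96, staircase=True)

model = build_bpf_cnn()                       # sketch from Section II.2
# Eq. (9) is the standard pixel-wise mean-squared error
model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")

# train_lams / train_refs are placeholder arrays of laminograms and labels
# model.fit(train_lams, train_refs, batch_size=16, shuffle=True, epochs=100)
```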

II.3 Numerical study: Materials and methods

To verify the feasibility of this novel CNN-based CT image reconstruction algorithm, numerical studies were performed with 50,000 diagnostic CT images downloaded from The Cancer Imaging Archive (TCIA). Anatomically, these diagnostic CT images contain typical human body structures: head and neck, chest, abdomen, and pelvis. All CT images had been reconstructed by the vendors onto the same image matrix size.

To build the training dataset, a linear transformation was applied to convert the original CT image values in Hounsfield Units (HU) into linear attenuation coefficients \mu as follows:

\mu = 0.02 \times \left(\frac{\text{HU}}{1000} + 1\right) \qquad (10)

where the factor 0.02 (with unit of 1/mm) corresponds to the reference X-ray linear attenuation coefficient of water at around 60 keV. This is a close approximation to the mean X-ray beam energy of most routine clinical CT scans. Essentially, this conversion also helps to normalize the TCIA CT images. To augment the dataset, CT images were randomly rotated by 90 degrees. In addition, each CT image was further multiplied by a uniform random number between 0.50 and 2.00 to broaden the data range.
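The conversion and augmentation steps amount to a few lines of NumPy; the rotation scheme (a random multiple of 90 degrees) and the illustrative matrix size are our assumptions.

```python
import numpy as np

MU_WATER = 0.02  # 1/mm, water attenuation at ~60 keV

def hu_to_mu(ct_hu):
    # Eq. (10): linear map from Hounsfield Units to attenuation coefficients
    return MU_WATER * (ct_hu / 1000.0 + 1.0)

def augment(mu_img, rng):
    mu_img = np.rot90(mu_img, k=int(rng.integers(4)))  # random 90-degree turns
    return mu_img * rng.uniform(0.50, 2.00)            # broaden the value range

rng = np.random.default_rng(seed=0)                    # seed is arbitrary
mu = augment(hu_to_mu(np.zeros((512, 512))), rng)      # water-like test image
```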

A fan-beam CT imaging geometry was simulated. The source-to-detector distance was 1500.00 mm, and the source-to-rotation-center distance was 1000.00 mm. There were 1400 detector elements, each with a width of 0.30 mm. To make the clinical CT images fit this simulated imaging geometry, we further assumed that all CT images share the same pixel dimension. Forward projections, i.e., Radon projections, were collected from 360 views with a 1.00 degree angular interval. Note that this simulated fan-beam geometry differs from the imaging geometry of most real multi-slice diagnostic CT (MDCT) scanners. Also note that no noise was added in these simulations. A few lines of arithmetic, shown below, make the implied magnification and field of view explicit.
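The first-order quantities follow directly from the stated geometry; the fan-angle correction to the field of view is neglected in this sketch.

```python
SDD = 1500.0                       # source-to-detector distance, mm
SAD = 1000.0                       # source-to-rotation-center distance, mm
N_DET, DET_PITCH = 1400, 0.30      # detector element count and width, mm

magnification = SDD / SAD                            # 1.5
detector_extent = N_DET * DET_PITCH                  # 420.0 mm
fov_at_isocenter = detector_extent / magnification   # about 280.0 mm
print(magnification, detector_extent, fov_at_isocenter)
```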

Figure 3: Flowchart to calculate MTF curves: (a) the cropped image with the Teflon rod insert positioned in the center, (b) the radially averaged edge spread function, (c) the corresponding line spread function, (d) the estimated MTF curve.
Figure 4: Validation results using clinical CT images obtained from TCIA. The display window is [0.00, 0.04] for the CT images, and is [-0.003, 0.003] for the difference images. The scale bar denotes 5.00 cm.

Afterwards, laminograms of the same image matrix size were reconstructed by directly back-projecting the sinograms without any filtration. Thus, one laminogram (the network input) and one reference CT image (the network label) were paired to form the training and testing datasets. Overall, 50,000 such data pairs were collected: 49,700 pairs formed the training dataset, and the remaining 300 pairs were reserved as the testing dataset. All of the above numerical operations were performed in Matlab (The MathWorks Inc., Natick, MA, USA), together with Matlab-executable CUDA (Compute Unified Device Architecture) routines responsible for the forward Radon projections and the back-projections.
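For reference, the pairing and splitting step might look as follows in Python; the array names and the fixed seed are placeholders (the actual pipeline was implemented in Matlab/CUDA).

```python
import numpy as np

def split_pairs(laminograms, references, n_train=49_700, seed=0):
    # laminograms/references: matching ndarrays of inputs and labels
    order = np.random.default_rng(seed).permutation(len(laminograms))
    train, test = order[:n_train], order[n_train:]
    return ((laminograms[train], references[train]),
            (laminograms[test], references[test]))
```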

II.4 Experimental materials and methods

Experimental validations were further conducted to demonstrate the performance and robustness of the proposed BPF CT image reconstruction framework. All experiments were performed on an in-house CT imaging bench in our lab. The system had a rotating-anode tungsten-target diagnostic-level X-ray tube (Varex G-242, Varex Imaging Corporation, UT, USA), operated in 120.00 kV continuous fluoroscopy mode with a 0.40 mm nominal focal spot. The X-ray tube current was set to 11.00 mA. The collimated beam had a vertical height of 25.00 mm at the rotation center. A bowtie filter designed for a PMMA phantom of 320.00 mm diameter was used, together with an additional 0.50 mm copper filtration. The X-ray detector was an energy-resolving photon-counting detector (XC-Hydra FX50, XCounter AB, Sweden) made of CdTe (cadmium telluride). It had an imaging area of 512.00 mm × 6.00 mm with a native element dimension of 0.10 mm × 0.10 mm. The detector signal accumulation period was 0.10 second. Projection datasets were acquired in detector binning mode and were rebinned during post-processing. Since there was no need to discriminate photon energies, the detector was operated and calibrated (Ge et al., 2017) with a single energy threshold (10.00 keV). To enable more accurate photon counting, its anti-charge-sharing function was turned on. The source-to-detector distance was 1500.00 mm, and the source-to-rotation-center distance was 1000.00 mm. A CatPhan-700 (The Phantom Laboratory Inc., Salem, NY, USA) and a dental and diagnostic head phantom (Atom Max 711-HN, CIRS Inc., VA, USA) were imaged in this work. Phantoms were rotated by 360.00 degrees with a 1.00 degree angular interval. Under these system settings, the measured radiation dose from a PMMA phantom of 160.00 mm diameter was 2.00 mGy. To investigate the radiation dose dependence of the new BPF-type reconstruction algorithm, each phantom was scanned repeatedly 10 times; thus, the CT images generated with minimum noise correspond to a total dose of 20.00 mGy. No beam-hardening or scatter corrections were performed in these experiments.

Figure 5: Experimental results of the CTP682 module in CatPhan-700. Images in (a)-(c) are reconstructed with the FBP algorithm, and images in (d)-(f) are obtained with the BPF algorithm. Images in (a) and (d) are acquired at the 100% radiation dose level; images in (b) and (e) at the 50% level; images in (c) and (f) at the 10% level. The scale bar denotes 5.00 cm. The display window is [0.01, 0.03].

Experimental CT images were reconstructed via two different approaches: the conventional FBP method, and the CNN deconvolution-based method developed in this paper. For the FBP approach, the filtration kernel was a standard ramp kernel. For the BPF approach, the sinogram was first backprojected to obtain the laminogram, which was then fed directly into the CNN to reconstruct the desired CT image. The CNN parameters were the same as those used for the numerical validations in the previous section. The laminogram and the reconstructed CT image have the same matrix size. CT images were reconstructed with one voxel dimension for visualization and with a smaller voxel dimension for quantitative image evaluations. Finally, the signal accuracy, the image spatial resolution, i.e., the modulation transfer function (MTF), and the noise power spectrum (NPS) were studied side by side for the two methods. The workflow for the MTF calculation is illustrated in Fig. 3, and a numerical sketch of that workflow is given below.
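Reading Fig. 3 as a pipeline (edge image → radially averaged ESF → LSF → MTF), a minimal implementation could look like the following; the bin count, the absence of apodization, and the normalization at DC are our assumptions.

```python
import numpy as np

def radial_esf(img, center, n_bins=200):
    # radially average pixel values around the rod center -> ESF (Fig. 3(b));
    # assumes every radial bin contains at least one pixel
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - center[0], yy - center[1]).ravel()
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r, bins)
    vals = img.ravel()
    esf = np.array([vals[idx == k].mean() for k in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), esf

def mtf_from_esf(radii, esf):
    lsf = np.gradient(esf, radii)               # LSF = d(ESF)/dr (Fig. 3(c))
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=radii[1] - radii[0])
    return freqs, spectrum / spectrum[0]        # MTF, unity at DC (Fig. 3(d))
```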

III Results

III.1 Results of numerical simulations

In this section, the numerical validation results are presented; they are illustrated in Fig. 4. The first column shows the reference CT images; the second column shows the standard FBP-type CT images reconstructed from the acquired 360 projection views; the third column shows the CNN-learned BPF-type CT images; the fourth and fifth columns show the differences between the reference and the corresponding reconstructed images. Clearly, the proposed CNN is able to reconstruct sharp CT images from the highly blurred laminograms with very high accuracy.

III.2 Results of experimental studies

Experimental results are presented in this section. Figures 5-7 compare the CT images reconstructed with the FBP method and with the CNN-based BPF method at three different radiation dose levels: 100%, 50%, and 10%. Since no reference images are available, the standard FBP reconstructed CT images are considered the ground truth, and the CT images reconstructed by the CNN are compared against them.

Figure 6: Experimental results of the CTP714 module in CatPhan-700. Images in (a)-(c) are reconstructed with the FBP algorithm, and images in (d)-(f) are obtained with the BPF algorithm. Images in (a) and (d) are acquired at the 100% radiation dose level; images in (b) and (e) at the 50% level; images in (c) and (f) at the 10% level. The scale bar denotes 5.00 cm. The display window is [0.01, 0.03].
Figure 7: Experimental results of the head and neck phantom. Images in (a)-(c) are reconstructed with the FBP algorithm, and images in (d)-(f) are obtained with the BPF algorithm. Images in (a) and (d) are acquired at the 100% radiation dose level; images in (b) and (e) at the 50% level; images in (c) and (f) at the 10% level. The scale bar denotes 5.00 cm. The display window is [0.01, 0.03].

The imaging results include two different internal modules of the CatPhan-700 phantom and one central slice of the head and neck phantom. To enable quantitative comparison of the signal values, ten ROIs (regions of interest) are selected and their mean signal values and standard deviations are measured; the results are listed in Table 1. The first eight ROIs are taken from the rod inserts inside the CatPhan-700 phantom, as marked in Fig. 5(a); the last two ROIs are drawn on the head and neck phantom, as highlighted in Fig. 7(a). These measurements demonstrate that the CNN-based BPF-type reconstruction method generates CT images with accurate signal values and has the potential to reduce image noise. Taking the LDPE rod as an example, the mean signal values obtained from the BPF reconstruction method agree closely with those from the FBP method, while the signal standard deviation obtained from the BPF method is consistently lower than that from the FBP method at every dose level.

Dose                 100%           50%            10%
Algorithm          FBP    BPF     FBP    BPF     FBP    BPF
Bone 50%   mean    3.14   3.13    3.14   3.14    3.14   3.13
           SD      9.40   8.30    1.23   1.06    2.56   2.10
LDPE       mean    1.73   1.73    1.73   1.72    1.73   1.73
           SD      9.20   7.70    1.22   1.02    2.52   2.10
Bone 20%   mean    2.33   2.32    2.32   2.32    2.33   2.31
           SD      1.14   9.80    1.43   1.21    2.75   2.31
Teflon     mean    3.58   3.57    3.58   3.57    3.58   3.58
           SD      1.01   9.10    1.36   1.17    2.89   2.37
PS         mean    1.84   1.85    1.84   1.85    1.84   1.85
           SD      9.20   7.60    1.29   1.09    2.57   2.21
Delrin     mean    2.53   2.51    2.53   2.51    2.53   2.51
           SD      8.50   7.10    1.24   1.04    2.72   2.26
PMP        mean    1.58   1.55    1.57   1.55    1.58   1.56
           SD      7.70   6.40    1.06   9.00    2.38   2.07
Acrylic    mean    2.12   2.10    2.11   2.09    2.11   2.09
           SD      9.90   8.20    1.32   1.11    2.45   2.08
ROI-1      mean    2.50   2.48    2.50   2.48    2.51   2.50
           SD      1.22   1.03    1.59   1.33    3.29   2.64
ROI-2      mean    2.00   1.98    2.00   1.98    2.00   1.97
           SD      1.20   1.00    1.64   1.34    3.64   2.92
Table 1: Measured mean values and standard deviations from ten different ROIs (Unit: ).

Moreover, the MTFs are measured from the Teflon rod (see the boxed region in Fig. 5(a)) of the CatPhan-700 phantom at the three radiation dose levels for both the FBP and BPF methods. The detailed signal processing procedure is shown in Fig. 3, and the MTF results are shown in Fig. 8. Overall, the CNN-based BPF method generates MTFs similar to those of the conventional FBP method. In particular, the BPF method yields a slightly better MTF response in the low spatial frequency range, whereas the FBP method slightly outperforms the BPF method in the high-frequency range. Additionally, the resolution-bar images in Fig. 6 allow the similar spatial resolution performance of the FBP and BPF algorithms to be appreciated visually. At all radiation dose levels, the bar structures of the seventh group can be clearly distinguished for both reconstruction methods, which agrees well with the MTF quantification results.

Figure 8: The measured MTF curves from the Teflon rod at three different dose levels: (a) 100% radiation dose, (b) 50% radiation dose, (c) 10% radiation dose.

Finally, the CT image noise power spectra (NPS) are analyzed side by side for the two reconstruction algorithms; the results are shown in Fig. 9. To do so, the central image region of 256 pixels × 256 pixels on noise-only images is used to calculate the 2D NPS maps; in total, 50 noise-only samples are used. Overall, the proposed BPF reconstruction method generates an NPS map similar to that of the FBP method. However, the NPS map obtained from the BPF method contains some interesting features, for example, the four bright legs on the left and right sides of Fig. 9(b). The radially averaged NPS curves also demonstrate the high similarity between the two reconstruction strategies. Specifically, the NPS profile of the BPF method is slightly shifted toward lower frequencies with respect to that of the FBP method. In addition, the NPS curve of the BPF method does not drop to zero in the very low frequency region. A sketch of this NPS estimate is given below.
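The described estimate (an ensemble-averaged 2D periodogram of the noise-only ROIs, followed by a radial average normalized by its area) can be written compactly as follows; the per-ROI mean subtraction and the integer-radius binning are our assumptions.

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    # rois: (n, 256, 256) stack of noise-only ROIs
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove DC per ROI
    n, ny, nx = rois.shape
    nps = (np.abs(np.fft.fft2(rois)) ** 2).mean(axis=0)   # ensemble average
    return np.fft.fftshift(nps) * pixel_mm ** 2 / (nx * ny)

def radial_nps(nps_map):
    ny, nx = nps_map.shape
    yy, xx = np.indices(nps_map.shape)
    r = np.hypot(xx - nx // 2, yy - ny // 2).astype(int).ravel()
    profile = np.bincount(r, weights=nps_map.ravel()) / np.bincount(r)
    return profile / profile.sum()   # normalize by the area under the curve
```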

Figure 9: The measured NPS results from axial CT images: (a) the 2D NPS map from conventional FBP method, (b) the 2D NPS map from the proposed BPF method, (c) the radially averaged line profile (normalized by the area under the curve).

IV Discussion

In this study, we have demonstrated the feasibility of an image-domain, laminogram-deconvolution, BPF-type CT image reconstruction framework based on deep learning. Both numerical and experimental results show that the introduced image-domain CNN can reconstruct CT images accurately from highly blurred laminograms. The proposed CNN not only generates CT images with spatial resolution similar to that of the FBP algorithm, but also maintains similar noise textures.

IV.1 About the network

Compared with the domain-transform manifold learning CT reconstruction CNN (Zhu et al., 2018), one major advantage of the CNN-based BPF-type image reconstruction framework is that it can significantly save computational resources. As pointed out in the literature (Ye, Han, and Cha, 2018), a fully-connected layer between the full-size sinogram and the full-size CT image may consume a huge amount of GPU memory. This might strongly limit its wide application in CT image reconstruction, especially for clinical CT images with large matrix dimensions. By contrast, our CT image reconstruction network does not require such extensive GPU resources, because the new framework shifts the computationally intensive domain transformation (from the sinogram domain to the CT image domain) to the laminogram generation procedure, which can be performed analytically and quickly. Once this procedure is finished, the trained CNN is cascaded afterwards to reconstruct the sharp CT image.

The proposed CNN architecture is inspired by the well-known U-Net architecture (Ronneberger, Fischer, and Brox, 2015), image deblurring networks (Xu et al., 2014), and image super-resolution networks (Dong et al., 2016; Dong, Loy, and Tang, 2016). Our network keeps the downsampling and upsampling structures of U-Net, but its middle fully-convolutional connection is replaced by a fully-connected layer. In our experience, the fully-connected layer performs better in maintaining image sharpness and signal accuracy, as well as in network convergence speed. We also note that the added shortcut connections are important for maintaining high image spatial resolution, which agrees with conclusions in the previous literature (Chen et al., 2017). We admit that this particular CNN architecture is purely empirical, and more fundamental work is needed to explain why and how it works. For instance, it would be interesting to compare the shape of the theoretical deconvolution kernel with the approximation learned by the CNN, from which the effectiveness of the network might be demonstrated to some degree.

IV.2 About the network applications

In practical applications, the voxel dimension of the laminogram does not have to be the same as that used for the training dataset; the laminogram voxel can be reconstructed into any dimension. However, in order for the trained CNN to output sharp CT images with correct attenuation coefficient values, the ratio of the training voxel dimension to the actual voxel dimension needs to be taken into account, as sketched below. In addition, the actual CT imaging geometry does not have to match the geometry of the numerical simulations used to generate the training dataset. In fact, the trained CNN works with any CT imaging geometry, because the CNN part, which performs the deconvolution, is independent of the imaging geometry; only the procedure of generating laminograms relies on knowledge of the imaging geometry.
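One plausible reading of this ratio correction, under the assumption that the laminogram amplitude in Eq. (3) scales linearly with the reconstruction pixel size (the 1/r blur integrates over area), is a single scale factor applied before the CNN; this is our interpretation, not a procedure given in the paper.

```python
def match_training_scale(laminogram, voxel_trained_mm, voxel_real_mm):
    # rescale so the laminogram amplitude matches what the CNN saw in training
    return laminogram * (voxel_trained_mm / voxel_real_mm)
```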

The current network training dataset is generated from clinical CT images by converting their Hounsfield Units into the corresponding linear attenuation coefficients. To do so, an empirical conversion factor of 0.02/mm, which matches the attenuation of water for an X-ray beam of about 60 keV, is selected. This is a rough approximation; for CT images scanned at other mean tube potentials, the conversion factor may need to be adjusted accordingly. To compensate, we augmented the image values by multiplying each CT image by a random factor between 0.50 and 2.00. In the current validations, we did not observe any failure. However, it is still important to broaden the data range of the training dataset as much as possible by adding various clinical CT images, so as to avoid potential failures in the future.

The discussed CNN only accepts input images with a fixed matrix size, and its output has the same size as the input image. Clearly, the trained CNN cannot be used directly to reconstruct CT images of different sizes. To make it applicable to varied CT image dimensions, one may need to retrain the CNN for a given imaging task. Since the training procedure does not take too long, this should not become a bottleneck for practical applications.

IV.3 About the future work

Results show that this novel deconvolution-based BPF-type CT image reconstruction network has the potential to generate NPS distributions similar to those of the FBP reconstruction method. However, we do not yet have a clear explanation for the slight differences between the 2D NPS maps; we will study these interesting phenomena in the future.

Moreover, the developed CNN shows potential to reduce image noise compared with the conventional FBP reconstruction results, even though the training dataset was generated from noise-free images. If this CNN is trained on datasets containing noise in the future, it is very promising that this CNN-based BPF-type CT image reconstruction framework could further reduce CT image noise (and remove CT image artifacts), as has been demonstrated by other recent studies. Therefore, to maintain image quality while keeping patient dose as low as reasonably achievable (ALARA), we will focus on optimizing the network's dose-reduction and artifact-removal potential, and eventually generalize the current BPF CT image reconstruction network into a dose-reduced, artifact-free CT image reconstruction network. Additionally, results will be compared with other methods, including iterative CT image reconstruction algorithms (Stayman and Fessler, 2000; Elbakri and Fessler, 2002; Sidky, Kao, and Pan, 2006; Chen, Tang, and Leng, 2008; Yu and Wang, 2009; Hara et al., 2009; Yu et al., 2011) and the pure CNN reconstruction algorithm (Zhu et al., 2018). Finally, the cone-beam CT (CBCT) image reconstruction capability of this CNN-based BPF-type algorithm will also be investigated as part of future work.

V Conclusions

In conclusion, we have demonstrated the feasibility of a new BPF CT image reconstruction framework based on a CNN. Within this framework, the CT image is reconstructed from the blurred laminogram directly in the image domain via a learned CNN deconvolution. Results show that the CNN not only maintains the image spatial resolution and noise texture, but also has the potential to reduce image noise. Therefore, the proposed CNN-based image-domain BPF-type CT image reconstruction scheme may become a promising alternative for future clinical CT image reconstruction applications.

Acknowledgment

We would like to thank Dr. Yaoqin Xie for lending us the CT phantoms used to acquire the experimental validation datasets. We also acknowledge The Cancer Imaging Archive (TCIA) for providing open access to real clinical CT images. This project is partly supported by the Shenzhen Strategic Emerging Industry Development Fund (JSGG20170412100952456).

Footnotes

  1. Y. Ge and Q. Zhang contributed equally and are considered co-first authors.
  2. Scientific correspondence should be addressed to Dong Liang (dong.liang@siat.ac.cn).

References

  1. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, 1988).
  2. J. Hsieh et al., Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, Bellingham, WA, 2009).
  3. L. Feldkamp, L. Davis, and J. Kress, "Practical cone-beam algorithm," JOSA A 1, 612–619 (1984).
  4. P. Smith, T. Peters, and R. Bates, "Image reconstruction from finite numbers of projections," Journal of Physics A: Mathematical, Nuclear and General 6, 361 (1973).
  5. J. Martin, N. Dogan, J. Gormley, G. Knoll, M. O'Donnell, and D. Wehe, "Imaging multi-energy gamma-ray fields with a Compton scatter camera," IEEE Transactions on Nuclear Science 41, 1019–1025 (1994).
  6. Z. Danovich and Y. Segal, "Laminogram reconstruction through regularizing Fourier filtration," NDT & E International 27, 123–130 (1994).
  7. M. S. Almeida and L. B. Almeida, "Blind and semi-blind deblurring of natural images," IEEE Transactions on Image Processing 19, 36–52 (2010).
  8. P. Campisi and K. Egiazarian, Blind Image Deconvolution: Theory and Applications (CRC Press, 2017).
  9. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521, 436 (2015).
  10. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, "Deep convolutional neural network for inverse problems in imaging," IEEE Transactions on Image Processing 26, 4509–4522 (2017).
  11. H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, "Low-dose CT with a residual encoder-decoder convolutional neural network," IEEE Transactions on Medical Imaging 36, 2524–2535 (2017).
  12. G. Wang, J. C. Ye, K. Mueller, and J. A. Fessler, "Image reconstruction is a new frontier of machine learning," IEEE Transactions on Medical Imaging 37, 1289–1296 (2018).
  13. Y. Han and J. C. Ye, "Framing U-Net via deep convolutional framelets: Application to sparse-view CT," IEEE Transactions on Medical Imaging 37, 1418–1429 (2018).
  14. E. Kang, W. Chang, J. Yoo, and J. C. Ye, "Deep convolutional framelet denosing for low-dose CT via wavelet residual network," IEEE Transactions on Medical Imaging 37, 1358–1369 (2018).
  15. Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, Y. Zhang, L. Sun, and G. Wang, "Low dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss," IEEE Transactions on Medical Imaging (2018).
  16. Z. Zhang, X. Liang, X. Dong, Y. Xie, and G. Cao, "A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution," IEEE Transactions on Medical Imaging 37, 1407–1417 (2018).
  17. B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, "Image reconstruction by domain-transform manifold learning," Nature 555, 487 (2018).
  18. G. Gullberg and G. Zeng, "Backprojection filtering for variable orbit fan-beam tomography," IEEE Transactions on Nuclear Science 42, 1257–1266 (1995).
  19. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.
  20. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
  21. Y. Ge, X. Ji, R. Zhang, K. Li, and G.-H. Chen, "K-edge energy-based calibration method for photon counting detectors," Physics in Medicine & Biology 63, 015022 (2017).
  22. J. C. Ye, Y. Han, and E. Cha, "Deep convolutional framelets: A general deep learning framework for inverse problems," SIAM Journal on Imaging Sciences 11, 991–1048 (2018).
  23. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
  24. L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Advances in Neural Information Processing Systems (2014), pp. 1790–1798.
  25. C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence 38, 295–307 (2016).
  26. C. Dong, C. C. Loy, and X. Tang, "Accelerating the super-resolution convolutional neural network," in European Conference on Computer Vision (Springer, 2016), pp. 391–407.
  27. J. W. Stayman and J. A. Fessler, "Regularization for uniform spatial resolution properties in penalized-likelihood image reconstruction," IEEE Transactions on Medical Imaging 19, 601–615 (2000).
  28. I. Elbakri and J. Fessler, "Statistical image reconstruction for polyenergetic X-ray computed tomography," IEEE Transactions on Medical Imaging 21, 89–99 (2002).
  29. E. Y. Sidky, C.-M. Kao, and X. Pan, "Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT," Journal of X-ray Science and Technology 14, 119–139 (2006).
  30. G.-H. Chen, J. Tang, and S. Leng, "Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets," Medical Physics 35, 660–663 (2008).
  31. H. Yu and G. Wang, "Compressed sensing based interior tomography," Physics in Medicine & Biology 54, 2791 (2009).
  32. A. K. Hara, R. G. Paden, A. C. Silva, J. L. Kujak, H. J. Lawder, and W. Pavlicek, "Iterative reconstruction technique for reducing body radiation dose at CT: Feasibility study," American Journal of Roentgenology 193, 764–771 (2009).
  33. Z. Yu, J.-B. Thibault, C. A. Bouman, K. D. Sauer, and J. Hsieh, "Fast model-based X-ray CT reconstruction using spatially nonhomogeneous ICD optimization," IEEE Transactions on Image Processing 20, 161–175 (2011).