Deep Learning Accelerated Light Source Experiments

Zhengchun Liu*, Tekin Bicer*, Rajkumar Kettimuthu, Ian Foster
Data Science and Learning Division, Argonne National Laboratory, Lemont, IL 60439, USA
{zhengchun.liu, bicer, kettimut, foster}@anl.gov

*Both authors contributed equally to this research.
Abstract

Experimental protocols at synchrotron light sources typically process and validate data only after an experiment has completed, which can lead to undetected errors and precludes online steering. Real-time data analysis can enable both detection of and recovery from errors, as well as optimization of data acquisition. However, modern scientific instruments, such as detectors at synchrotron light sources, can generate data at rates exceeding 1 GB/s. Data processing methods, such as the widely used computed tomography, usually require considerable computational resources and yield poor-quality reconstructions in the early stages of data acquisition, when available views are sparse. We describe here how a deep convolutional neural network can be integrated into the real-time streaming tomography pipeline to enable better-quality images in the early stages of data acquisition. Compared with conventional streaming tomography processing, our method can significantly improve tomography image quality, deliver comparable images using only 32% of the data needed for conventional streaming processing, and thus save 68% of the experiment time needed for data acquisition.

Deep Learning, denoising, image reconstruction, stream processing, synchrotron light sources

I Introduction

Synchrotron light sources can provide extremely bright high-energy x-rays that can penetrate thick materials and be focused on small regions. These x-rays can then be used for advanced experiments, including studies of the internal morphology of materials and samples with high spatial (atomic and molecular scale) and temporal resolutions (100 ps). The Advanced Photon Source (APS) at Argonne National Laboratory (ANL) is an advanced synchrotron radiation facility that hosts thousands of scientists annually from a wide variety of communities, such as energy, materials, health, and life sciences [6, 2].

APS experiments can generate massive amounts of data in a short time. For example, tomographic imaging beamlines can collect 1500 projections (each 2048×2448 pixels) in nine seconds using an Oryx detector [19]: a rate of more than 1 GB/s. These experiments may be performed to observe time-dependent phenomena that spread over long time periods (weeks in some cases), resulting in large datasets. Another example is ptychographic imaging, where current high-performance detectors collect 3000 frames, each typically 1 MB, per second. Imaging a large sample, such as an integrated circuit with a 1 cm² scanning area, can take months and generate petabytes [24].
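As a sanity check on the quoted rate, a back-of-the-envelope calculation (assuming 16-bit detector pixels, a common bit depth that is not stated above) reproduces the more-than-1 GB/s figure:

```python
# Estimate tomography data rate: 1500 projections of 2048x2448 pixels in 9 s.
# Assumption: 16-bit (2-byte) pixels, a typical detector bit depth.
projections = 1500
height, width = 2048, 2448
bytes_per_pixel = 2
seconds = 9

total_bytes = projections * height * width * bytes_per_pixel
rate_gb_per_s = total_bytes / seconds / 1e9

print(f"{rate_gb_per_s:.2f} GB/s")  # ~1.67 GB/s, i.e., more than 1 GB/s
```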

These data generation rates, coupled with long experimentation times, make it easy to generate petabytes of measurement data. The management of experimental data at this scale, in terms of both time and size, is challenging and requires advanced analysis techniques for timely feedback. Methods are urgently needed that can reduce the amount of data collected, or permit real-time determination of whether specific data are useful.

We propose here a data analysis pipeline that uses a deep learning (DL) model to enhance the quality of reconstructed images obtained via processing of streaming experimental data from synchrotron beamlines. We focus on tomography data, a common imaging modality at synchrotrons. We show that conventional streaming tomographic reconstruction plus deep learning image enhancement can deliver performance that significantly surpasses that of conventional reconstruction alone, in terms of both image quality and throughput.

For example, Figure 1 shows how the integration of deep learning enhancement into the tomographic pipeline can generate images with comparable quality to those produced by conventional methods, but using only 32% of experiment time (320 versus 1000 seconds) and after acquiring, streaming, and processing only 32% as much data (480 versus 1504 X-ray projections). Thus, our method can provide both three times faster turnaround time for domain scientists and three times increased throughput for the light source and computing facility. These improvements are also important as enablers of experiment steering, where quick turnaround is required.

(a) Conventional at 462s.
(b) Proposed at 462s.
(c) Conventional at 1433s.
Fig. 1: Streaming tomography image quality, with and without enhancement: (a) with data up to 462s (480 projections), before enhancement; (b) with the same data, after enhancement; (c) with data up to 1433s (1504 projections), before enhancement.

Specifically, our paper makes the following contributions:

  • We propose and implement a pipelined workflow for real-time reconstruction of streaming tomography data (as shown in Figure 2);

  • We repurpose and retrain TomoGAN (a GAN [20]-based deep learning model originally designed for low dose X-ray tomography [33]) and integrate it into our workflow for image quality enhancement (as shown in Figure 3);

  • We evaluate our system with two real-world data sets collected at APS and provide insightful analysis on performance improvements.

II Background

We introduce the computed tomography (CT) image analysis pipeline used at synchrotron light sources, and the use of deep learning methods for enhancing reconstructed images. Figure 2 illustrates the tomographic data acquisition, management, and analysis phases at synchrotron light sources. During the data acquisition phase, a sample is placed on a rotation stage and illuminated by x-rays. As x-rays pass through the sample, the photons, attenuated to a degree determined by the thickness and density of the object, are measured by the detector. The corresponding measurement is called a projection. A tomography experiment collects projections at different rotation angles θ, typically with a fixed exposure time for each. An ideal experiment collects a set of projections whose angles fully cover the sample.

Fig. 2: Tomographic data acquisition and reconstruction pipeline. The steps are described in the text.

Beer’s law gives the underlying mathematical model for the measurement process [9]:

    d_θ = I_0 · e^(−∫ μ(s) ds)        (1)

where I_0 is the incident x-ray illumination on the sample, μ is the attenuation coefficient along the beam path, and d_θ are the measurements collected at a set of rotation angles θ as a result of a tomographic scan. The measurements for a fixed detector row across all angles represent a cross section of projections (shown in blue in the central section of Figure 2), known as a sinogram. For parallel beam geometry, measurements in a sinogram correspond to a cross section of the target sample. The tomographic reconstruction process aims to recover 2D cross-section images of a sample from their corresponding sinograms.
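In practice, raw detector counts are converted into the line integrals implied by Eq. (1) by flat- and dark-field normalization followed by a negative logarithm. A minimal sketch (variable and function names are illustrative, not the actual pipeline API):

```python
import numpy as np

def normalize_projection(raw, flat, dark):
    """Convert raw detector counts to Beer's-law line integrals.

    raw:  measured intensity with the sample in the beam
    flat: incident illumination I_0 (sample out of the beam)
    dark: detector counts with the beam off
    """
    transmission = (raw - dark) / np.maximum(flat - dark, 1e-12)
    return -np.log(np.clip(transmission, 1e-12, None))

# Toy example: a uniform absorber that transmits half the beam
flat = np.full((4, 4), 1000.0)
dark = np.zeros((4, 4))
raw = 0.5 * flat
p = normalize_projection(raw, flat, dark)
# Each line integral equals -log(0.5) = log(2)
```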

Iterative reconstruction approaches aim to solve:

    x̂ = argmin_{x∈C} ‖A(x) − d‖² + R(x)        (2)

where x̂ is the reconstructed tomogram, A is the forward model, d is the sinogram, R is a regularizer functional, x is the search variable, and C is a constraint set on x.

Iterative approaches use statistical models to converge to a solution that is consistent with measurements. They consist of the three steps shown in the reconstruction phase in Figure 2. First, a forward model is applied to an intermediate image estimate to compute an estimated measurement. Then, the estimated and real measurements are compared. Finally, the estimated image is updated according to the difference between the real and estimated measurements. These steps are repeated until a user-defined stopping criterion is met, such as a total number of iterations or an error threshold.
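The three steps above can be sketched as a toy SIRT-style update, with a small matrix A standing in for the forward projection operator (illustrative only; the actual system operates on full sinograms):

```python
import numpy as np

def sirt(A, d, iterations=100):
    """Toy SIRT-style iteration: forward-project, compare, back-project update."""
    x = np.zeros(A.shape[1])
    # Row/column sums normalize the update (guard against division by zero)
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(iterations):
        estimate = A @ x                       # 1) apply forward model
        residual = (d - estimate) / row_sum    # 2) compare with measurements
        x += (A.T @ residual) / col_sum        # 3) update the image estimate
    return x

# 2-pixel "image" observed through three rays
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = sirt(A, A @ x_true, iterations=200)   # converges to x_true
```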

Fast reconstruction of tomographic datasets is important to permit real-time feedback, such as when reconstruction of a limited number of projections suffices to determine that no more data need be collected. Although iterative reconstruction algorithms require more computation than analytical approaches, they provide superior reconstruction quality with incomplete or limited measurement data. Furthermore, many parallelization techniques have been developed to improve their performance.

A naive parallelization technique is to distribute one sinogram to each process and have the processes perform independent reconstructions in parallel. However, while this method can reconstruct many small- to medium-scale datasets successfully, the reconstruction time for large datasets can be long, especially for those that require many iterations.

Advanced parallelization techniques, such as in-slice parallelization [14, 11] and memory-centric [22] approaches, address the limitations of the naive approach. In-slice parallelization replicates the sinogram and image among the processes and performs a global reduction at the end of each iteration; therefore, portions of the same sinogram can be reconstructed by multiple processes. Memory-centric reconstruction, in contrast, uses memoization and domain partitioning to split a single sinogram reconstruction across multiple processes. Both of these advanced techniques are suitable for quasi-real-time reconstruction of large datasets and are in use at synchrotron light source facilities [18, 21].

III DL-Enhanced X-Ray Image Reconstruction

We next introduce our runtime system, which is optimized for reconstruction of streaming tomography datasets, and describe its integration with TomoGAN, our advanced GAN-based image restoration approach: see Figure 3.

Fig. 3: Tomographic reconstruction on streaming experimental data with DL denoising. Reconstruction workers run as separate threads.

III-A Reconstruction of Streaming Experimental Data

Data Acquisition: As shown in Figure 3, our system first acquires data from the tomographic experiment. Recall that tomographic data acquisition is performed while a sample is rotated on a rotation axis. Data acquisition may be fixed-angle or interleaved. Fixed-angle acquisition starts at a specified angle and advances by a fixed offset until a specified final angle is reached. For instance, if the angle offset is 1° and the experiment is set to collect 180 projections starting from 0°, then the data acquisition results in a set of projections with angles {0°, 1°, …, 179°}.

Interleaved data acquisition also starts at a specified angle and advances by a fixed offset. However, acquisition proceeds in several rotations, with each rotation starting at a different angle, resulting in projections that interleave among rotations. For example, an interleaved data acquisition configuration may consist of 10 rotations, each with 18 projections: the first rotation covers angles {0°, 10°, …, 170°}, the second {1°, 11°, …, 171°}, and so on, resulting in a set of 180 projections whose angles interleave across rotations.
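The two acquisition strategies can be illustrated by generating their angle sequences, matching the 180-projection fixed-angle example and the 10-rotation interleaved example above (a sketch; the actual beamline control code is not shown here):

```python
def fixed_angles(start=0.0, offset=1.0, count=180):
    """Fixed-angle scan: one pass, constant increment."""
    return [start + k * offset for k in range(count)]

def interleaved_angles(rotations=10, per_rotation=18, span=180.0):
    """Interleaved scan: several passes, each shifted so angles interleave."""
    coarse = span / per_rotation            # offset within one rotation (10 deg)
    angles = []
    for r in range(rotations):
        shift = r * coarse / rotations      # each rotation starts 1 deg later
        angles.extend(shift + k * coarse for k in range(per_rotation))
    return angles

fa = fixed_angles()        # [0, 1, ..., 179]
ia = interleaved_angles()  # rotation 0: 0, 10, ..., 170; rotation 1: 1, 11, ...
# Both cover the same 180 angles, but interleaving spans the full range early.
```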

Interleaved acquisition has three advantages for real-time reconstruction. First, the generated projections provide full coverage with fewer projections, and thus full volume reconstruction can start sooner. Second, interleaved acquisition improves the convergence rate of the reconstruction [8]. Third, artifacts due to a small number of projections, e.g. dose artifacts, can be addressed with advanced iterative reconstruction techniques and DL-enhanced denoising approaches.


Distributor: This component receives projections from the data acquisition component and partitions them across the reconstruction processes. Partitioning is performed according to the sinograms (illustrated with colored rows in Fig. 3). For instance, if a projection consists of 1024 rows and there are two reconstruction processes, then each process receives 512 rows. This distributed-memory parallelization allows for scaling up to the number of sinograms in the projection.
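The distributor's row-based partitioning can be sketched as follows (a minimal illustration, assuming contiguous row ranges as in the 1024-row example; the helper name is hypothetical):

```python
def partition_rows(num_rows, num_procs):
    """Assign contiguous detector-row (sinogram) ranges to processes."""
    base, extra = divmod(num_rows, num_procs)
    ranges, start = [], 0
    for p in range(num_procs):
        count = base + (1 if p < extra else 0)  # spread any remainder evenly
        ranges.append((start, start + count))
        start += count
    return ranges

# 1024 rows across two reconstruction processes -> 512 rows each
parts = partition_rows(1024, 2)
# parts == [(0, 512), (512, 1024)]
```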


Reconstruction: This component receives the partitioned sinograms and performs analysis according to user-defined configuration parameters and the chosen algorithm. The runtime system uses a sliding window to handle streaming sinogram data. Specifically, the data for each partitioned projection (a set of rows/sinograms) are pushed to an (MPI) process buffer, which is then iterated over with a window.

The window size parameter, w, set by the user, determines the number of projections considered at any reconstruction event. Our system triggers reconstruction after receiving w new projections, and thus the projection consumption rate can be adjusted according to available computational resources and quality needs.
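The sliding-window trigger logic can be sketched as follows (an illustrative class, not the actual runtime API):

```python
from collections import deque

class StreamingWindow:
    """Buffer incoming projections; trigger reconstruction every w arrivals."""

    def __init__(self, w):
        self.w = w
        self.buffer = deque(maxlen=w)   # sliding window keeps the newest w items
        self.pending = 0
        self.triggers = 0

    def push(self, projection):
        self.buffer.append(projection)
        self.pending += 1
        if self.pending == self.w:      # w new projections -> reconstruct
            self.pending = 0
            self.triggers += 1
            return list(self.buffer)    # data handed to reconstruction threads
        return None                     # keep buffering

win = StreamingWindow(w=16)
events = [win.push(p) for p in range(48)]   # 48 projections streamed in
# Reconstruction is triggered 3 times (at the 16th, 32nd, and 48th projection)
```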

The projection data in the window are pulled by the threads associated with a process, for shared-memory parallelization and reconstruction. Threads use in-slice parallelization, in which the intermediate tomograms (or reconstructed image slices) are replicated among threads and independently updated according to their corresponding sinograms. For example, assume that the distributor assigns 512 rows/sinograms to each process, and that each sinogram is used to reconstruct a tomogram with dimensions 1024×1024. If the process has 256 threads, then the runtime system allocates 256 tomogram replicas for each sinogram, which results in a total buffer size of 256×512×1024×1024. This replication-based parallelization eliminates race conditions during reconstruction, since each thread can operate on its own tomogram replica.

Our use of iterative reconstruction algorithms means that the number of iterations, i, is another parameter that can be set by the user. For each triggered reconstruction event, the threads iterate on the window data i times. Thus i provides another way to adjust computational throughput and image quality.

After each iteration, our system synchronizes (reduces) the replicated tomograms so that the correct tomogram can be recovered and used for the next iteration.
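The per-iteration synchronization amounts to reducing the thread-local replicas into a single tomogram. A minimal sketch, using an element-wise average as one simple reduction choice (the actual system may combine updates differently):

```python
import numpy as np

def reduce_replicas(replicas):
    """Combine thread-local tomogram replicas into a single tomogram."""
    return np.mean(replicas, axis=0)  # element-wise average across replicas

# Toy example: 4 threads each updated their own 8x8 replica of the same slice
rng = np.random.default_rng(0)
replicas = rng.normal(loc=1.0, size=(4, 8, 8))
tomogram = reduce_replicas(replicas)  # the reduced slice used next iteration
```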

III-B Reconstructed Image Enhancement

We append TomoGAN, an image quality enhancement model based on generative adversarial networks [20] originally developed for low-dose X-ray imaging in [33], to the streaming tomographic processing pipeline to enable online enhancement of image quality. We have shown in previous work [33] that, once trained on one sample, TomoGAN can be applied effectively to other similar samples, even if X-ray projections of those samples are collected at a different facility and show different noise characteristics.

IV Experimental Results

We evaluate our system with respect to both the quality of the reconstructed images and the time required to process an image. We work with two real-world experimental datasets collected at APS, each with different runtime configuration parameters, specifically window size and iterations. For ease of reference, we use the notation w=X:i=Y:r=Z to denote streaming tomography images after Z rotations with a window size of X and with Y iterations performed for each update of the output image.

Our experimental datasets include Shale and Glass samples that are imaged at APS at Argonne. Shale is an X-ray microtomography dataset of a shale sample from the Upper Barnett Formation in Texas [30]; it contains tiny features (pores) with irregular shapes and sizes that are challenging to reconstruct. The dataset consists of 1501 projections, each of 1792×2048 pixels. The Glass dataset is of a set of borosilicate glass spheres of different sizes [42]; it consists of 1500 projections, each of 2160×2560 pixels. Both datasets are publicly available and can be downloaded from TomoBank [17].

We used the simultaneous iterative reconstruction technique (SIRT) for tomographic reconstruction [4]. We varied the window size over {16, 32, 64, 128, 256} and the number of iterations over {1, 5, 10}. We simulated interleaved data acquisition, where the number of projections is set to the window size for each full rotation; thus, the window buffer is guaranteed to contain projections from angles that capture the full view of the sample for every window configuration.

The combination of different window sizes and iteration counts resulted in different reconstruction qualities and computational characteristics, as shown in Table I. We see that image updates can happen as frequently as once per second. The average sustained data consumption rate quantifies the number of X-ray projections per second that our workflow can process; thus, the workflow can achieve real-time data analysis if the data acquisition rate is less than the sustained rate.

 

SIRT iterations, i               1                            5                             10
Window size, w         16    32    64   128   256 |  16   32   64   128   256 |  16    32    64   128   256
Glass refresh time (s) 1.5   1.6   1.8   2.4   4.0|  7.5  7.9  9.7  12.9  20.4|  15.4  16.4  20.1  26.4  40.8
Glass sust. rate (p/s) 10.7  20.8  36.9  56.0  75.1| 2.1  4.1  6.7  10.6  14.7|   1.0   2.0   3.2   5.2   7.3
Shale refresh time (s) 1.1   1.1   1.2   1.6   2.7|  5.3  5.4  6.7   8.8  13.5|  10.6  10.5  13.6  17.8  27.3
Shale sust. rate (p/s) 15.2  30.2  52.9  83.0 112.8| 3.1  6.0  9.8  15.5  22.2|   1.5   3.1   4.8   7.7  11.0

TABLE I: Data processing time for different configurations, for the Glass and Shale datasets. Refresh time is the time it takes to generate an update; the sustained data consumption rate is the number of projections processed per second.
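As a consistency check on Table I, for a fixed window size the refresh time scales roughly linearly with the SIRT iteration count, since each additional iteration repeats the same forward/backward work. A quick check on the Glass w=16 column:

```python
# Refresh times (s) for the Glass dataset at window size w=16, from Table I
refresh = {1: 1.5, 5: 7.5, 10: 15.4}

# For a fixed window size, refresh time grows roughly linearly with the
# SIRT iteration count i: refresh(i) ~ i * refresh(1).
relative_error = {i: abs(t - i * refresh[1]) / t for i, t in refresh.items()}
# relative_error stays within a few percent for this configuration
```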

We used two Argonne Leadership Computing Facility computer systems in this work: Theta for reconstructions and Cooley to train TomoGAN and to run TomoGAN on the reconstructed images. Theta consists of 4392 Intel Xeon Phi (KNL) nodes and has a peak speed of 11.60 petaflops. Cooley has 126 compute nodes, each with a Tesla K80 dual GPU card with 24GB memory.

IV-A Reconstruction Quality Improvement

Due to dataset limitations, we split our samples, each with 1024 images of 2560×2560 pixels, into 128 images (12.5% of the total) for training and the rest for testing. We trained the TomoGAN model with 128 tomography images obtained with the configuration w=32:i=1:r=5 and their corresponding ground truth. The ground truth we used to train TomoGAN is an offline reconstruction using SIRT with 100 iterations and all projections, i.e., the best reconstruction we can obtain with the state-of-the-art method. We appended the trained model to the streaming pipeline to enhance tomography images (as shown in Figure 3) for all other experiment configurations, i.e., every other combination of w and i for each update/rotation r.

The structural similarity index metric (SSIM) [52] is a commonly used method for measuring the similarity between two images. SSIM is a full-reference metric; in other words, it measures image quality against an initial uncompressed or distortion-free reference image. SSIM is designed to improve on traditional metrics such as peak signal-to-noise ratio and mean squared error. It ranges from 0 to 1, where 0 means that two images are completely different and 1 means that they are identical. For example, Baker et al. [7] compared ten different image similarity metrics and found that SSIM performed best. In this paper, we use the SSIM between the ground truth (i.e., the best possible image) and the target image to quantify image quality; thus, a larger SSIM value means greater similarity to the best possible image, and hence higher image quality.
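To make the metric concrete, the global (single-window) form of SSIM can be computed directly from image means, variances, and covariance. This is a simplified sketch, not the windowed implementation typically used in practice (e.g., scikit-image's):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (simplified; no local windows)."""
    c1 = (0.01 * data_range) ** 2   # standard SSIM stability constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(scale=0.2, size=img.shape), 0, 1)

identical = global_ssim(img, img)   # 1.0: identical images
degraded = global_ssim(img, noisy)  # < 1.0: noise lowers the similarity
```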

For each update (i.e., one full rotation of data acquisition) of the Glass sample, Figure 4 shows the image quality improvement as measured by SSIM when performing 10 SIRT iterations per update, for different window sizes. Each dot in Figure 4 represents the average SSIM over all tomography slices for that number of updates. There is an update after every w X-ray projections are acquired, streamed, and processed. The timestamp of each update is proportional to the corresponding refresh time shown in Table I.

Fig. 4: Streaming tomography image quality improvements for Glass, as measured by SSIM, averaged across all slices. The labels in the legend are coded as follows: W is window size; C denotes conventional reconstruction and T denotes conventional plus TomoGAN enhancement. Streaming tomography processing uses 10 iterations in each case. The red dashed horizontal line shows the best result obtained with the conventional method and a window size of 16.

More specifically, for the case w=16:i=1 (the configuration with the most frequent updates), Figure 5 shows the SSIM comparison with conventional streaming, as well as a regional preview for every 10 updates. We see that the image generated with TomoGAN in the processing pipeline becomes, after 20 rotations (i.e., 320 projections acquired), visually comparable with the best image that conventional streaming tomography can achieve at the end.

Fig. 5: Comparison of SSIM values, for a representative region with the conventional and proposed methods at w=16:i=1, shows the improvements obtained with the latter. The red dashed line shows the best result obtained with the conventional method.

IV-B End-to-end Performance Evaluation

If we measure image quality purely with SSIM, as shown in Figure 4, then for w=16:i=10, the best SSIM that conventional streaming tomography can achieve is 0.638. When using TomoGAN, in contrast, the SSIM exceeds 0.638 (the horizontal red dashed line in Figure 4) after just four rotations. However, as shown in Figure 6, the (visual) image quality even after 11 rotations (i.e., at 169s, because each update takes 15.4s, as shown in Table I) is poor. We thus conclude that we cannot rely on SSIM alone to estimate the end-to-end performance improvement.

Fig. 6: Reconstructed images obtained as an experiment proceeds, for a representative region of the Glass dataset. Image quality with the addition of deep learning (below) is significantly improved relative to conventional reconstruction alone (above).

As an alternative, we evaluate image quality subjectively, by eye, to estimate the end-to-end throughput speedup. We evaluate image quality based on two factors: (1) similarity to the best possible image, i.e., we compare the TomoGAN-denoised image with the best image from conventional streaming tomography, namely the one obtained after processing all projections (for example, the point on the blue curve at 1500 projections in Figure 5); and (2) clarity of features in the image, i.e., we compare the TomoGAN-denoised image with the first image from conventional streaming that is just clear enough to show all features. We observe: (1) as shown in Figure 6, the TomoGAN-denoised image at 477s (i.e., the 31st update) is visually comparable with the best image (at 1433s) from conventional streaming, a speedup of about 3×; and (2) features are observable with conventional streaming tomography only at 477s, whereas features in the TomoGAN-denoised image are observable at 169s (i.e., the 11th update), again a speedup of nearly 3×.

We also evaluated our method using the Shale sample. The image quality improvement is shown in Figure 7. The Shale sample has much more high-frequency content than Glass. Although the image quality improvement is still clear, the end-to-end speedup is not as large as for the Glass sample: approximately 2× (as opposed to 3× for Glass) by visual evaluation.

Fig. 7: Conventional reconstruction (above) vs. conventional plus deep learning (below), as in Fig. 6, but for the Shale dataset.

IV-C Overhead Analysis

TomoGAN takes about 290ms to process one 2560×2560 pixel image in our experiments on one NVIDIA Tesla V100 GPU card. TomoGAN and the tomographic reconstruction algorithm can run in parallel, and TomoGAN takes significantly less time than the reconstruction algorithm. Thus, the two can be effectively pipelined such that the total overhead of TomoGAN is only 290ms, irrespective of the number of reconstruction steps (or the number of updates made to the output image, which equals the number of rotations performed on the sample in our experimental setup). In other words, the first update of the output image is delayed by 290ms with TomoGAN in the processing pipeline, but the frequency of subsequent updates remains the same as that of the processing pipeline. Given that TomoGAN cuts the number of updates (and the number of projections that need to be collected) by a factor of 3 or 2, a 290ms delay in getting the first update is negligible. Therefore, TomoGAN also does not affect the sustained data consumption rate, i.e., projections per second. One limitation is that there must be one GPU card per node to run TomoGAN to achieve such a low delay; the delay will increase if TomoGAN is run on another server, because of data movement latency. The sustained data consumption rate in Table I, measured as the number of projections processed per second, can be compared with the data acquisition rate to quantify the real-time processing capability of each configuration.
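The pipelining argument above can be captured in a simple timing model (a sketch using the 15.4s refresh time and 290ms TomoGAN inference time quoted in the text; the function is illustrative, not part of the actual runtime):

```python
def pipeline_finish_time(n_updates, recon_s=15.4, gan_s=0.29):
    """Time until the n-th *enhanced* image is available.

    Reconstruction and TomoGAN run concurrently: while TomoGAN enhances
    update k, the reconstruction threads already work on update k+1. Since
    gan_s << recon_s, the GAN stage never becomes the bottleneck and only
    delays each output by the pipeline-fill time gan_s.
    """
    return n_updates * recon_s + gan_s

baseline = 31 * 15.4                 # conventional pipeline, 31 updates
with_gan = pipeline_finish_time(31)  # adds only the 0.29 s pipeline fill
overhead = with_gan - baseline       # 0.29 s regardless of the update count
```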

V Related Work

Tomographic reconstruction techniques can broadly be categorized into two groups: analytical and iterative approaches. Analytical reconstruction techniques perform a single pass over the dataset and are known to be computationally efficient compared to iterative methods; however, they are prone to measurement errors and require a sufficient amount of data for good reconstruction [10]. In contrast, iterative approaches are resilient to noisy measurements and can provide reasonable reconstructions even with limited data. Although iterative reconstruction techniques require significant compute resources, parallel implementations enable their use on challenging datasets [3, 44, 27, 28]. Further, since they can provide reasonable reconstructions with limited data, they are suitable for fast-feedback workflows with rapid reconstructions.

Iterative reconstruction techniques have been used successfully to provide high-quality images, especially in the medical imaging area [16, 39, 32], where limiting dose exposure is important [41, 26]. The computational requirements of these algorithms have typically been met with many-core architectures, such as GPUs and KNLs [46, 40, 43, 54]. Most of these works assume the availability of all data and are not optimized for real-time reconstruction.

Deep learning (DL) approaches have been used successfully in many scientific imaging problems, such as denoising, feature segmentation, image restoration, and super resolution [5, 37, 45, 31, 25]. Among these, denoising reconstructed images has been an active area [50, 36, 35], and many DL approaches have been developed and applied to denoise reconstructed images [29, 34, 53, 49, 38, 23]. Pelt et al. used a mixed-scale convolutional neural network to reduce noise in CT images, with impressive results [38, 23]. Yang et al. [56] use a convolutional neural network (CNN) to denoise reconstructed images and show a 10-fold improvement in signal-to-noise ratio. In our work, we apply our denoising method, TomoGAN [33], to streaming reconstructions and evaluate its impact on image quality and end-to-end performance.

Real-time experimental data analysis [48, 55, 47] and steering have been active research areas [15, 51]. ASTRA is a popular GPU-based toolkit for processing and reconstruction of x-ray data [47]. UFO is another image processing framework for synchrotron datasets that uses GPUs for fast feedback and visualization [48]. MemXCT is a highly optimized reconstruction engine for large-scale tomography datasets [22]. In this work, we extended our efficient streaming reconstruction data analysis pipeline [14, 12, 13] with denoising capabilities [33, 1].

VI Conclusions and Future Work

We presented a new method for real-time computed tomography at synchrotron light sources. In this new method, a deep learning model is used to improve the quality of tomographic reconstructions as data is collected, thus producing high-quality output more quickly or, alternatively, reducing the amount of data that must be collected.

Our experimental evaluations, using real-world datasets, show significant improvement in tomography image quality and system throughput. In particular, the proposed method needs only a fraction (as little as 1/3) of the data required for conventional reconstruction methods, thus saving not only precious beamline time but also the network and computing resources that would otherwise be required to process the data. Thus, end-to-end experimental throughput is as much as three times greater than that of state-of-the-art conventional methods.

Much of our work can be reused for other synchrotron light source analysis tasks. For example, the data acquisition component can be used for any pixelated detector, and many modalities can be implemented by using our parallel processing framework, including correlation analysis for x-ray photon spectroscopy, ptychographic reconstruction, and fitting of fluorescence data.

In future work, we plan to explore how these methods can be integrated into an experiment steering framework, to help domain scientists correct or terminate unwanted data collection. We also intend to explore architecture, methods, and algorithms needed to support autonomous experiments.

Acknowledgment

This work was supported in part by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. We acknowledge computing resources provided and operated by Argonne’s Joint Laboratory for System Evaluation and Argonne Leadership Computing Facility.

References

  • [1] V. Abeykoon, Z. Liu, T. Bicer, R. Kettimuthu, G. Fox, and I. Foster (2019) Scientific image restoration anywhere. In XLOOP 2019 IEEE Computer Society Technical Consortium on High Performance Computing, Cited by: §V.
  • [2] Advanced Photon Source, Argonne National Laboratory Research and Engineering Highlights, APS Science 2018. Note: https://www.aps.anl.gov/Science/APS-Science[Accessed: May 2019] Cited by: §I.
  • [3] J. Agulleiro and J. Fernandez (2011) Fast tomographic reconstruction on multicore computers. Bioinformatics 27 (4), pp. 582–583. Cited by: §V.
  • [4] A. H. Andersen and A. C. Kak (1984) Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm. Ultrasonic Imaging 6 (1), pp. 81–94. Cited by: §IV.
  • [5] F. H.D. Araujo, R. R.V. Silva, D. M. Ushizima, M. T. Rezende, C. M. Carneiro, A. G. C. Bianchi, and F. N.S. Medeiros (2019) Deep learning for cell image segmentation and ranking. Computerized Medical Imaging and Graphics 72, pp. 13 – 21. External Links: ISSN 0895-6111, Document, Link Cited by: §V.
  • [6] Argonne National Laboratory Advanced Photon Source, An Office of Science National User Facility. Note: https://www.aps.anl.gov[Accessed: May 2019] Cited by: §I.
  • [7] A. H. Baker, D. M. Hammerling, and T. L. Turton (2019) Evaluating image quality measures to assess the impact of lossy data compression applied to climate simulation data. Computer Graphics Forum 38 (3), pp. 517–528. External Links: Document, Link, https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.13707 Cited by: §IV-A.
  • [8] F. J. Beekman and C. Kamphuis (2001-06) Ordered subset reconstruction for x-ray CT. Physics in Medicine and Biology 46 (7), pp. 1835–1844. External Links: Document, Link Cited by: §III-A.
  • [9] A. Beer (1852) Bestimmung der absorption des rothen lichts in farbigen flussigkeiten. Ann. Physik 162, pp. 78–88. Cited by: §II.
  • [10] M. Beister, D. Kolditz, and W. A. Kalender (2012) Iterative reconstruction methods in x-ray CT. Physica Medica 28 (2), pp. 94–108. Note: External Links: ISSN 1120-1797, Document, Link Cited by: §V.
  • [11] T. Bicer, D. Gürsoy, R. Kettimuthu, F. De Carlo, and I. T. Foster (2016) Optimization of tomographic reconstruction workflows on geographically distributed resources. Journal of Synchrotron Radiation 23 (4), pp. 997–1005.
  • [12] T. Bicer, D. Gürsoy, V. De Andrade, R. Kettimuthu, W. Scullin, F. De Carlo, and I. T. Foster (2017) Trace: A high-throughput tomographic reconstruction engine for large-scale datasets. Advanced Structural and Chemical Imaging 3 (1), pp. 6.
  • [13] T. Bicer, D. Gürsoy, R. Kettimuthu, F. De Carlo, G. Agrawal, and I. T. Foster (2015) Rapid tomographic image reconstruction via large-scale parallelization. In European Conference on Parallel Processing, pp. 289–302.
  • [14] T. Bicer, D. Gürsoy, R. Kettimuthu, I. T. Foster, B. Ren, V. De Andrade, and F. De Carlo (2017) Real-time data analysis and autonomous steering of synchrotron light source experiments. In 2017 IEEE 13th International Conference on e-Science (e-Science), pp. 59–68.
  • [15] B. Blaiszik, K. Chard, R. Chard, I. Foster, and L. Ward (2019) Data automation at light sources. In AIP Conference Proceedings, Vol. 2054, pp. 020003.
  • [16] C. Chou, Y. Chuo, Y. Hung, and W. Wang (2011) A fast forward projection using multithreads for multirays on GPUs in medical image reconstruction. Medical Physics 38 (7), pp. 4052–4065.
  • [17] F. De Carlo, D. Gürsoy, D. J. Ching, K. J. Batenburg, W. Ludwig, L. Mancini, F. Marone, R. Mokso, D. M. Pelt, J. Sijbers, et al. (2018) TomoBank: A tomographic data repository for computational x-ray science. Measurement Science and Technology 29 (3), pp. 034004.
  • [18] D. J. Duke, A. B. Swantek, N. M. Sovis, F. Z. Tilocco, C. F. Powell, A. L. Kastengren, D. Gürsoy, and T. Biçer (2016) Time-resolved x-ray tomography of gasoline direct injection sprays. SAE International Journal of Engines 9 (1), pp. 143–153.
  • [19] FLIR Systems. Oryx 10GigE Detector. https://www.flir.com/products/oryx-10gige [Accessed: May 2019].
  • [20] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial networks. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [21] D. Gürsoy, T. Biçer, J. D. Almer, R. Kettimuthu, S. R. Stock, and F. De Carlo (2015) Maximum a posteriori estimation of crystallographic phases in x-ray diffraction tomography. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 373 (2043), pp. 20140392.
  • [22] M. Hidayetoglu, T. Bicer, S. Garcia de Gonzalo, B. Ren, R. Kettimuthu, I. T. Foster, and W. Hwu (in press) MemXCT: Memory-centric x-ray CT reconstruction with massive parallelization. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis.
  • [23] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2016) Densely connected convolutional networks. arXiv e-prints.
  • [25] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5967–5976. External Links: Document Cited by: §V.
  • [26] B. Jang, D. Kaeli, S. Do, and H. Pien (2009) Multi GPU implementation of iterative tomographic reconstruction algorithms. In Biomedical Imaging: From Nano to Macro, 2009. ISBI’09. IEEE International Symposium on, pp. 185–188. Cited by: §V.
  • [27] C.A. Johnson and A. Sofer (1999-02) A data-parallel algorithm for iterative tomographic image reconstruction. In 7th Symposium on the Frontiers of Massively Parallel Computation, pp. 126–137. External Links: Document Cited by: §V.
  • [28] M.D. Jones, R. Yao, and C.P. Bhole (2006-Oct.) Hybrid MPI-OpenMP programming for parallel OSEM PET reconstruction. Nuclear Science, IEEE Transactions on 53 (5), pp. 2752–2758. External Links: Document, ISSN 0018-9499 Cited by: §V.
  • [29] E. Kang, W. Chang, J. Yoo, and J. C. Ye (2018-06) Deep convolutional framelet denoising for low-dose CT via wavelet residual network. IEEE Transactions on Medical Imaging 37 (6), pp. 1358–1369. External Links: Document, ISSN 0278-0062 Cited by: §V.
  • [30] W. Kanitpanyacharoen, D. Y. Parkinson, F. De Carlo, F. Marone, M. Stampanoni, R. Mokso, A. MacDowell, and H. Wenk (2013) A comparative study of x-ray tomographic microscopy on shales at different synchrotron facilities: als, aps and sls. Journal of synchrotron radiation 20 (1), pp. 172–180. Cited by: §IV.
  • [31] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi (2017-07) Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition, Vol. , pp. 105–114. External Links: Document, ISSN 1063-6919 Cited by: §V.
  • [32] D. Lee, I. Dinov, B. Dong, B. Gutman, I. Yanovsky, and A. W. Toga (2012) CUDA optimization strategies for compute-and memory-bound neuroimaging algorithms. Computer Methods and Programs in Biomedicine 106 (3), pp. 175–187. Cited by: §V.
  • [33] Z. Liu, T. Bicer, R. Kettimuthu, D. Gursoy, F. De Carlo, and I. Foster (2019) TomoGAN: low-dose x-ray tomography with generative adversarial networks. arXiv preprint arXiv:1902.07582. Cited by: 2nd item, §III-B, §V, §V.
  • [34] G. Lovric, S. F. Barré, J. C. Schittny, M. Roth-Kleiner, M. Stampanoni, and R. Mokso (2013-08) Dose optimization approach to fast X-ray microtomography of the lung alveoli. Journal of Applied Crystallography 46 (4), pp. 856–860. External Links: Document, Link Cited by: §V.
  • [35] J. Ma, J. Huang, Q. Feng, H. Zhang, H. Lu, Z. Liang, and W. Chen (2011) Low-dose computed tomography image restoration using previous normal-dose scan. Medical physics 38 (10), pp. 5713–5731. Cited by: §V.
  • [36] K. A. Mohan, S. V. Venkatakrishnan, L. F. Drummy, J. Simmons, D. Y. Parkinson, and C. A. Bouman (2014-05) Model-based iterative reconstruction for synchrotron x-ray tomography. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. , pp. 6909–6913. External Links: Document, ISSN 1520-6149 Cited by: §V.
  • [37] D. Y. Parkinson, D. M. Pelt, T. Perciano, D. Ushizima, H. Krishnan, H. S. Barnard, A. A. MacDowell, and J. Sethian (2017) Machine learning for micro-tomography. Proceedings of SPIE 10391.
  • [38] D. M. Pelt, K. J. Batenburg, and J. A. Sethian (2018) Improving tomographic reconstruction from limited data using mixed-scale dense convolutional neural networks. Journal of Imaging 4 (11).
  • [39] G. Pratx, G. Chinn, P. D. Olcott, and C. S. Levin (2009) Fast, accurate and shift-varying line projections for iterative reconstruction using the GPU. IEEE Transactions on Medical Imaging 28 (3), pp. 435–445.
  • [40] A. Sabne, X. Wang, S. J. Kisner, C. A. Bouman, A. Raghunathan, and S. P. Midkiff (2017) Model-based iterative CT image reconstruction on GPUs. ACM SIGPLAN Notices 52 (8), pp. 207–220.
  • [41] E. Y. Sidky, C. Kao, and X. Pan (2006) Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. Journal of X-ray Science and Technology 14 (2), pp. 119–139.
  • [42] S. Singh, T. J. Stannard, S. S. Singh, A. S. Singaravelu, X. Xiao, and N. Chawla (2017) Varied volume fractions of borosilicate glass spheres with diameter Gaussian distributed from 38–45 microns encased in a polypropylene matrix. Technical report, Argonne National Laboratory (ANL), Argonne, IL (United States).
  • [43] S. S. Stone, J. P. Haldar, S. C. Tsao, W. Hwu, B. P. Sutton, Z. Liang, et al. (2008) Accelerating advanced MRI reconstructions on GPUs. Journal of Parallel and Distributed Computing 68 (10), pp. 1307–1318.
  • [44] J. Treibig, G. Hager, H. G. Hofmann, J. Hornegger, and G. Wellein (2012) Pushing the limits for medical image reconstruction on recent standard multicore processors. International Journal of High Performance Computing Applications.
  • [45] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2017) Deep image prior. arXiv e-prints.
  • [46] W. van Aarle, W. J. Palenstijn, J. De Beenhouwer, T. Altantzis, S. Bals, K. J. Batenburg, and J. Sijbers (2015) The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography. Ultramicroscopy 157, pp. 35–47.
  • [47] W. van Aarle, W. J. Palenstijn, J. De Beenhouwer, T. Altantzis, S. Bals, K. J. Batenburg, and J. Sijbers (2015) The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography. Ultramicroscopy 157, pp. 35–47.
  • [48] M. Vogelgesang, S. Chilingaryan, T. dos Santos Rolo, and A. Kopmann (2012) UFO: A scalable GPU-based image processing framework for on-line monitoring. In 2012 IEEE 14th International Conference on High Performance Computing and Communication & 2012 IEEE 9th International Conference on Embedded Software and Systems, pp. 824–829.
  • [49] G. Wang (2016) A perspective on deep imaging. IEEE Access 4, pp. 8914–8924.
  • [50] J. Wang, H. Lu, T. Li, and Z. Liang (2005) Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters. In Medical Imaging 2005: Image Processing, Vol. 5747, pp. 2058–2067.
  • [51] Y. Wang, F. De Carlo, D. C. Mancini, I. McNulty, B. Tieman, J. Bresnahan, I. Foster, J. Insley, P. Lane, G. von Laszewski, et al. (2001) A high-throughput x-ray microtomography system at the Advanced Photon Source. Review of Scientific Instruments 72 (4), pp. 2062–2068.
  • [52] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612.
  • [53] J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum (2017) Generative adversarial networks for noise reduction in low-dose CT. IEEE Transactions on Medical Imaging 36 (12), pp. 2536–2545.
  • [54] F. Xu and K. Mueller (2005) Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware. IEEE Transactions on Nuclear Science 52 (3), pp. 654–663.
  • [55] F. Xu and K. Mueller (2007) Real-time 3D computed tomographic reconstruction using commodity graphics hardware. Physics in Medicine & Biology 52 (12), pp. 3405.
  • [56] X. Yang, V. De Andrade, W. Scullin, E. L. Dyer, N. Kasthuri, F. De Carlo, and D. Gürsoy (2018) Low-dose x-ray tomography through a deep convolutional neural network. Scientific Reports 8 (1), pp. 2575.

License

The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. http://energy.gov/downloads/doe-public-access-plan.
