A Variational Reconstruction Method for Undersampled Dynamic X-ray Tomography based on Physical Motion Models



In this paper we study the reconstruction of moving object densities from undersampled dynamic X-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume access to a full Radon transform in each time step, but only projections in a few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation.

We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimization between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions from a similar amount of measurements, and of similar quality, as a single static reconstruction.

Martin Burger, Hendrik Dirks, Lena Frerking, Andreas Hauptmann

Tapio Helin, and Samuli Siltanen

Institut für Numerische und Angewandte Mathematik, Westfälische Wilhelms-Universität (WWU) Münster, Münster, Germany.

Department of Computer Science, University College London, London, United Kingdom

Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland


Tomographic image reconstruction is probably the most popular inverse problem, with a mathematical history of now 100 years since the seminal work of Radon [30] and an enormous impact on applications for more than 50 years [11]. Nonetheless, it still offers a wealth of mathematical problems, most of them driven by concrete practical issues, e.g. limited angles or limited field of view. A question that has received particular attention in recent years is dynamic tomography, i.e. time-dependent projection measurements of a non-stationary object. Of particular interest are moving objects, e.g. organs in medical computerized tomography (CT), which can lead to significant artifacts in a stationary reconstruction even in the case of slow motion. There have been several studies tackling this problem, for instance by gating, where projections are either acquired only at specific time instances of a periodic movement or selected retrospectively after the measurement; this is typically used for imaging the beating heart [1]. More recent studies estimate the motion to perform a motion-compensated reconstruction [4]. The problem of motion compensation is analyzed in a rigorous mathematical framework in [17]. However, all of the techniques above aim to reconstruct a static CT image from dynamic data. In this work we are instead interested in the reconstruction of the object and the dynamics in space and time, and hence are not limited to specific movements. For this purpose we discuss a joint variational approach that incorporates physical motion models and regularization of motion vectors to achieve improved reconstruction results. We use the well-known optical flow constraint in spatial dimension two, i.e. we assume that the intensity of the images we aim to reconstruct is constant over time.
The approach is however extendable to three-dimensional density reconstruction, where an even more physical modelling with a continuity equation constraining the dynamics is possible, reminiscent of optimal transport type approaches (cf. [5]).

A particular goal of our study is to use realistic measurement protocols for practical applications, i.e. we do not assume access to a full Radon transform in each time step, but only projections in a few angular directions. Obviously, real-life tomographs usually acquire one angular direction after the other, so one would need to work with single angles in an ideal model. However, for a suitable mathematical model we can compare the multiple time scales appearing during the process: the scale needed to take a projection at a fixed angle, the typical time scale to perform a rotation, and finally the time scale of the object motion. The latter can be determined as the ratio of the spatial size of the objects one is interested in and their speed. If one or two of those time scales are smaller than the others by an order of magnitude, they can usually be ignored. For slowly moving objects with and , it is indeed realistic to assume that a full (or limited but not small) set of angles can be acquired in each time step. Even in this case one might, however, be limited by other factors, such as dose considerations in medical X-ray tomography, which prohibit acquiring multiple full X-ray scans consecutively. Thus, we will focus on the important case of few angles per time step, which does not allow separate static reconstructions at single time steps, but indeed enforces a space-time reconstruction. Without additional prior information on the dynamics, the latter is highly underdetermined, and hence we shall incorporate physical motion models into variational regularization methods, based on recently proposed motion-corrected image reconstruction techniques, cf. [12]. A preliminary application similar to ours is discussed in [22]. In order to deal with the different possibilities to measure few angles, we use a time-dependent forward operator, which is also the main change in the method compared to [12].
Hence, we will keep the analysis in this paper rather short and mainly highlight the needed modifications in a time-continuous setting. Moreover, we discuss different time discretizations induced by measurement times and the corresponding time-discrete motions.

The main focus of the paper is the computational side and a detailed comparison of possible results in different measurement (sampling) setups, restricting ourselves to a two-dimensional setup (one projection being a single line integral), which allows to gain good insight into the problem. We will consider the following realistic cases:

  • Small angular increments: In some cases, the object of interest can be very dynamic in relation to the rotation time scale, i.e. . Then, the only way to obtain information regarding the dynamics is to apply small rotations, e.g. small increments between the angles in consecutive time steps. If , a sufficiently small angular increment usually achieves . Since the object of interest can evolve quite a lot before degrees of rotation are reached, it is expected that even with good motion models the reconstructions suffer from similar artifacts as in static limited angle tomography. Cardiac imaging with a single-source CT scanner provides an example of this case [31]. Problems with motion artifacts are typically overcome using gating, or imaging over several heartbeats and synchronizing the data with the help of an electrocardiogram. However, the proposed motion model approach paves the way towards dynamic tomography of one-shot events, such as the entry of contrast agent into the bloodstream in angiography.

  • Small angular increments with multiple angles: A modification of the previous setup is obtained if we consider small angular increments but with different projection angles at each time step with relative angles of degrees. This is a model for multi-source imaging systems, such as the classical “dynamic spatial reconstructor” with [32], or more recent dual-source or triple-source CT scanners [15]. Another possibility is to consider the case of even for larger angles, but a dose constraint allowing only angles per time step.

  • Tracking: Additionally, we consider a case with a possibly different number of angles measured per time step, taking the extreme case of tracking by starting with a full data set and then acquiring a single angle over several time steps until the next full data set is obtained. One motivation for this approach is again dose limitation. Another one may be processes with inherently different time scales in the dynamics, a fast part that allows taking only single angles with small increments, interchanging with a slow part such that . For example, consider studying fluid flow in porous media using synchrotron radiation [8]. Before introducing fluid, the sample is static () and can be accurately imaged. During fluid flow we have a fast period with . When the voids in the sample are fully occupied by fluid, we again have and can take a final accurate scan.

  • Randomized angles: If and one has dose limitations (or if ) one can instead consider a setup with arbitrarily different angles in each time step, which is expected to improve the reconstruction quality if we can obtain a better sampling of the full degrees in a shorter time. Taking into account recent results on randomized measurements in compressed sensing (cf. [24]) we consider a setup of randomized angles. For simplicity we restrict ourselves to choosing a single angle in each time step from a uniform distribution (independent of the other time steps), which already yields strongly improved results. This measurement setup could be implemented using arrays of individually flashable small X-ray emitters [43].
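The randomized protocol from the last item can be sketched in a few lines (a minimal illustration in NumPy; the function name and the choice of one angle per time step are ours, not part of the measurement hardware):

```python
import numpy as np

def sample_angles(n_steps, n_angles_per_step=1, seed=0):
    """Draw projection angles (in degrees) i.i.d. uniformly on [0, 360)
    for each time step, independently across time steps."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 360.0, size=(n_steps, n_angles_per_step))

# one random angle for each of 30 time steps, as in the experiments below
angles = sample_angles(30)
```

Fixing the seed makes the sampling pattern reproducible across reconstruction runs, which is convenient when comparing regularization parameters.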

In order to perform computational experiments, we use well-designed software phantoms as well as a hardware phantom measured with a custom-built CT system at the University of Helsinki; see [7] for technical specifications. As physical target we choose three small (25 mm) square ceramic stones that are centered close to the detector. The geometry is approximately parallel beam with a focus-to-detector distance of 630 mm. The collected data consists of 30 stop-and-go measurements with different angles acquired in each time step, which can be considered a full CT scan and hence provides reference reconstructions. All measurement setups discussed above can be obtained by choosing a subset of the collected angles.

The remainder of the paper is organized as follows: in Section 2 we introduce a time-dependent Radon transform and formulate the reconstruction procedure in a time-continuous setting. For estimating the motion we discuss the optical flow constraint and combine both models into a joint problem for image reconstruction and motion estimation. Subsequently, we present a possibility to analyze the numerical error by writing the dynamic system as a state-space model and applying Bayesian inference. In Section 3 we discuss the discretization of our model as well as practical issues in solving the optimization problem. Results of the proposed method are then presented in Section 4 for a simulated and a physical phantom. We conclude this study with an outlook on future work in Section 5.

2 Variational models for dynamic tomography with motion

In order to introduce variational models for the reconstruction of dynamic X-ray tomography data, we start by briefly recalling some basic properties that are needed. First, we define a time-dependent version of the Radon transform, which is applicable to dynamic data sets, and show its well-definedness. Subsequently, we concentrate on one of the most popular approaches for estimating motion, the optical flow methodology. In principle it is also possible to use different motion models, e.g. nonlinear models for large-scale movements, or continuity equations in a 3D space. However, in this paper we focus on the 2D optical flow model. We combine the optical flow approach with the time-dependent Radon transform and hence obtain a model for motion-corrected variational reconstruction. At the end of the section we furthermore perform an uncertainty quantification for a state-space formulation of the introduced model.

2.1 X-ray tomography and Radon transform

In dynamic X-ray tomography one seeks to determine a time-dependent function , where models the non-negative absorption of photons on a bounded domain at a time instance , hence . We consider the measurement model

where the operator denotes a time-dependent two-dimensional Radon transform

Here, indicates the set of given measurements at time . The full Radon transform maps to a parametrization of the infinite unit cylinder denoted by , where is the unit circle in . For a fixed time the obtained measurement is called the sinogram, which consists of line integrals over with respect to the set . The attenuation at each time instance can be uniquely determined if one has knowledge of the full sinogram for all possible lines, as shown by Radon [30], see also [29]. However, we are interested in undersampling situations, in which rather the full space-time collection corresponds to the usual sinogram. This is apparent if there is a single angle, i.e. a unique for each . Then the measurement is actually

In order to give a sound definition of the time-dependent Radon transform we introduce some additional notation and assumptions. In the following we assume that the time interval is fixed and bounded by the end point . Furthermore, we assume that the measured angles may change between time steps and denote by the set of active measurement parameters at each time instance. We equip the set of measurement parameters measured at time with a nonnegative Radon measure , noticing that in the undersampling situations we are interested in, will be a partially discrete measure with respect to for each . We denote by the set of all appearing in . Then, with denoting the bounded support of , we consider for the operator

In order to deal appropriately with the undersampling we define

and assume that there exists a nonnegative Radon measure on and a bounded (uniformly in time) measurable function supported on such that

for all integrable functions . It is straightforward to construct from in the measurement scenarios outlined above.

First of all it is apparent that is well-defined on the dense subspace . For continuous we have

Thus, is a bounded linear operator defined on a dense subspace and can be extended uniquely to a bounded linear operator on . Now consider the duality product

then an application of Fubini’s theorem yields the above form of the dual operator.

Then it follows directly that the operator is linear and bounded:

Considering the inverse problem, since we cannot measure the full sinogram in real-life applications, uniqueness of the solution in is not guaranteed. Furthermore, the measurement is typically contaminated with noise, and we need additional regularization to obtain a stable reconstruction. A well-established approach is to search for a minimizer of a regularization functional, such as

where is a regularization parameter balancing the two parts. The first term in is the data fidelity term, which enforces that the sought-for attenuation function is close to the obtained measurement. The second term is the regularization term enforcing certain features of the reconstruction. In particular, we are interested in a sparse reconstruction with constant areas divided by sharp edges. For this purpose the so-called total variation is a common choice. In this study we consider the two choices . For we have the classical and well-studied - model used for tomographic imaging by [21] and many others; in contrast, the - model is typically not used for X-ray tomography but tends to reduce streaking artifacts for undersampled data [35]. With , reminiscent of the well-known ROF model [33], we would obtain separate image reconstructions at different time steps, which is attractive from a computational point of view, but seems applicable only for a reasonable amount of measurements. In this study we will supplement the model with an appropriate time correlation of the image sequence and evaluate the performance of both models for extremely undersampled measurement data.

2.2 Motion models

Optical flow is one of the most common methods to estimate motion between consecutive images. It is based on the assumption of brightness constancy, i.e. every pixel keeps its intensity over time even if it moves to another position within the image. Assuming a constant image intensity along a trajectory with , we obtain

The last equation is generally known as the optical flow constraint, and is the desired vector field. In spatial dimension two, states only one equation per point for the two unknown components of and, consequently, the problem is underdetermined. To overcome this, the optical flow formulation can be used as a data fidelity in a variational model together with an isotropic total variation term on each of the two flow components to ensure spatial regularity, i.e.

In this model, the parameter balances both parts. In the optical flow setting the norm has proven to be more robust with respect to outliers [3], which is an important characteristic, especially in combination with noisy data from real applications. This model is nowadays one of the most popular models for optical flow, since it has shown success in tracking moving objects over time, see e.g. [42]. The total variation regularization usually yields piecewise constant vector fields, which allow one to distinguish a moving object from the background.
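To make the constraint concrete, the following sketch evaluates the optical flow residual u_t + v·∇u for two frames (our own illustration, assuming a unit time step, a forward difference in time, and NumPy's central differences in space):

```python
import numpy as np

def optical_flow_residual(u0, u1, vx, vy):
    """Pointwise residual of the optical flow constraint u_t + v . grad(u) = 0,
    with a forward difference in time (dt = 1) and np.gradient in space."""
    ut = u1 - u0                      # temporal derivative
    ux = np.gradient(u0, axis=1)      # horizontal spatial derivative
    uy = np.gradient(u0, axis=0)      # vertical spatial derivative
    return ut + vx * ux + vy * uy

# a static constant image satisfies the constraint exactly (zero residual)
u = np.ones((8, 8))
r = optical_flow_residual(u, u, np.zeros((8, 8)), np.zeros((8, 8)))
```

In a variational model this residual (or its absolute value, for the robust L1 version) is what enters the data fidelity coupling images and flow.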

We mention that various modifications can be incorporated into our approach in a straightforward way. For out-of-plane motion it may be necessary to include additional source and sink terms to obtain

with becoming another optimization variable, ideally regularized by a sparsity prior in the variational model. For three-dimensional tomographic reconstruction it is completely natural to replace the optical flow constraint by the continuity equation

2.3 Motion corrected variational reconstruction

From our point of view the problem of motion estimation is directly connected to the tomographic reconstruction problem, because it requires accurate input images. In many motion estimation applications one first reconstructs the image sequence using and afterwards estimates the underlying vector fields using . In [12] it has been shown that a joint model that simultaneously recovers an image sequence and estimates the motion offers a significant advantage over applying both methods subsequently. In [16] the model proposed in [12] was applied to dynamic X-ray tomography, resulting in the following joint model:

for and . For both image sequence and vector field, the respective total variation is used as a regularizer and the classical optical flow formulation from connects image sequence and vector field. From the perspective of image reconstruction the optical flow constraint acts as an additional temporal regularizer along the calculated motion field .

Appropriate weak-star compactness of sublevel sets and lower semicontinuity can be deduced from the arguments in [9], where the minimization was carried out over the set

where is a Banach space continuously embedded into , with and . Noticing that is continuously embedded into for in two spatial dimensions, the arguments of [9] can be directly applied for and the ones in [16] provide a similar proof for the case . Thus, we obtain the following existence result for :

We mention that the choice has to be made in the analysis to avoid dealing with measures in time; in the computational scenarios below it is, however, more efficient to set .

2.4 Uncertainty quantification for a probabilistic state-space model

Estimating numerical errors is challenging due to the inherent nonlinearity of the problem. Below we provide a probabilistic perspective on error modeling by writing our dynamic system as a state-space model and applying Bayesian inference [27]. Here, we assume a time-space discretization which is specified later. Our state variable is at time step , and the flow field corresponds to a latent variable, although it is naturally of equal interest. For convenience, let us write and for the concatenation of the time series of state vectors up to time . Similarly, we write . Notice that the flow field is not estimated at the last time step and therefore and represent the full time series. Below, stands for the probability density function related to the random variable .

Prior. Suppose our prior information regarding the initial state is given by the formal probability density

where is the energy functional, e.g. the BV norm in Section 3. The equations governing optical flow yield the evolution model, which is given by

where corresponds to a suitable discretization of specified later and the noise is distributed according to the density . Clearly, the conditional probability distribution of given both and can be expressed by

Here, we make the crucial assumption that and is a priori independent of . Notice carefully that this is not the case a posteriori. Moreover, we assume is a priori independent of and therefore

Now by assuming for some energy functional , it follows that

and the full prior model can be expressed recursively as

Likelihood. Our observation of the system state is obtained via

where , are i.i.d. and is a random noise vector. Moreover, we assume virtual zero-observations at time steps , i.e., we observe

where are i.i.d., and assume for all . Notice that the virtual observations are not necessary for the probabilistic system to be well-defined. However, the observations in state that for likely values of the quantity is small, and therefore they impose some additional regularity on the system.

Under these observations it follows that the likelihood density is of the form

Uncertainty quantification. In this work we consider reconstruction methods based on smoothing [27], i.e., we estimate all states simultaneously based on the full time-series data. In this case, the posterior distribution obtained from the state-space model is proportional to the product of the prior and likelihood densities

where is the functional given by

Our numerical work below on estimating the maximum point of the posterior distribution, i.e. the minimizer of , corresponds to the weak-constraint 4DVAR method [2]. Sampling the full posterior distribution requires high computational effort and is not explored in this paper.

Notice that the state-space model also enables filtering methods, where one is concerned with the online estimation of as the observational data is accumulated. The update from to is obtained via a prediction step

and an analysis step

These aspects of the stochastic dynamics in optical flow models are studied further in subsequent work.
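The prediction-analysis update above can be illustrated in the linear-Gaussian case, where it reduces to one step of the Kalman filter (a sketch under strong assumptions: here M and A are generic matrices standing in for a linearized evolution model and the per-step projection operator, and all densities are Gaussian):

```python
import numpy as np

def kalman_step(m, C, M, Q, A, R, y):
    """One filtering update (linear-Gaussian case): prediction, then analysis."""
    # prediction step: push mean and covariance through the evolution model
    m_pred = M @ m
    C_pred = M @ C @ M.T + Q
    # analysis step: condition on the new (possibly undersampled) data y
    S = A @ C_pred @ A.T + R                  # innovation covariance
    K = C_pred @ A.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (y - A @ m_pred)
    C_new = (np.eye(len(m)) - K @ A) @ C_pred
    return m_new, C_new
```

In the actual optical flow setting the evolution model is state-dependent, so a nonlinear or ensemble variant would be required; the sketch only shows the structure of the two steps.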

3 Numerical implementation

To handle the numerical implementation of the joint model for motion-corrected reconstruction, we first need to formulate a discrete version. Here we especially focus on the structure of the time-dependent Radon operator, or rather the matrix representing its discretization, in order to obtain efficient schemes. Afterwards, we split the variational model into two subproblems and introduce an alternating approach to solve it. In order to minimize the individual subproblems, we employ iteration schemes based on established primal-dual methods.


In practical applications we cannot measure infinitely many line integrals continuously in time and hence we need to assume a discretization in space and time to model the measurement process properly. We aim at representing the inverse problem of recovering the attenuation as a simple matrix-vector equation

with the matrix representing the discretized Radon transform dependent on time, being the attenuation coefficient in each pixel. The measurement is taken during a fixed time period as discussed in Section 2.1, and we only measure at certain time instances. We denote the number of measured time instances by , such that each measurement point in time is given by .

Let us consider one fixed point in time . For the discretization in space we divide the domain into pixels such that the attenuation is modeled by a vector . Furthermore, we have a finite set of lines for which we can measure the attenuation. The total number of lines depends on the projection angles in each step and is typically a multiple of the sensor resolution. If we have only one projection per time step, then coincides with the sensor resolution. The discrete measurement is then given by

where is the length of the line in the th pixel. The measurement matrix at time denoted by is then composed of the coefficients in . The matrix for the projection of all time steps is simply given as the block diagonal matrix
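The block-diagonal structure can be sketched as follows (an illustration only, not the actual implementation: a toy parallel-beam projector on an n×n pixel grid where the 0° projection sums columns and the 90° projection sums rows, with one angle per time step stacked block-diagonally):

```python
import numpy as np

def projector_matrix(n, angle):
    """Toy single-angle projection matrix for an n-by-n image (angle 0 or 90 deg)."""
    A = np.zeros((n, n * n))
    for i in range(n):
        for j in range(n):
            pixel = i * n + j
            ray = j if angle == 0 else i   # 0 deg: column sums, 90 deg: row sums
            A[ray, pixel] = 1.0
    return A

def block_diag(blocks):
    """Stack the per-time-step matrices A_t into one block-diagonal system matrix."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    A = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        A[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return A

angles = [0, 90, 0]   # one measured angle per time step
A = block_diag([projector_matrix(4, a) for a in angles])
```

In practice each block would contain the intersection-length weights of the actual scan geometry and would be stored as a sparse matrix, but the block-diagonal layout over time is exactly the same.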

Concerning the entire problem , the discretization of the time interval into steps leads to images and vector fields between subsequent frames, respectively. Using the previously derived discrete Radon matrix, the time-discrete counterpart of is hence given by

where denotes the isotropic total variation, which takes the pointwise Euclidean norm of and then sums the resulting vector in .
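For reference, the discrete isotropic total variation just described can be computed as follows (our own sketch, assuming forward differences with a zero-padded boundary):

```python
import numpy as np

def tv_iso(u):
    """Discrete isotropic TV: pointwise Euclidean norm of the forward-difference
    gradient, summed over all pixels (boundary differences set to zero)."""
    gx = np.diff(u, axis=0, append=u[-1:, :])   # vertical forward differences
    gy = np.diff(u, axis=1, append=u[:, -1:])   # horizontal forward differences
    return np.sqrt(gx**2 + gy**2).sum()
```

A constant image has zero total variation, while a unit jump across a single edge contributes its length, which is the behavior that favors piecewise constant reconstructions.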


Although we have shown the existence of a minimizer, its numerical computation is challenging. Problems arise from the non-convexity and non-linearity of the optical flow term, the non-differentiability of the –norm, and finally several linear operators acting on and . To address these issues, the joint model can be transformed into a two-step method

that alternatingly solves a problem for the image sequence using information from , and a problem for the flow sequence using information from . Using block diagonal operators, see , equations and can be simplified into problems of the form

where the operator depends on and depends on . The terms from can be combined into the vector in . Due to this transformation, each of these subproblems is convex but still non-differentiable. Moreover, the alternating scheme tends to end up in local minima rather than converging to a solution of .
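The alternating strategy has a very simple skeleton (an illustrative sketch; solve_reconstruction and solve_motion are placeholders for the two primal-dual subproblem solvers discussed below):

```python
def alternating_minimization(u0, v0, solve_reconstruction, solve_motion, n_outer=10):
    """Alternate between the image subproblem (flow fixed) and the
    flow subproblem (images fixed) for a fixed number of outer iterations."""
    u, v = u0, v0
    for _ in range(n_outer):
        u = solve_reconstruction(v)   # minimize over the image sequence, v fixed
        v = solve_motion(u)           # minimize over the vector fields, u fixed
    return u, v
```

Since each sweep only decreases the joint energy, a fixed outer iteration count (or a stagnation check on the energy) is the usual stopping rule.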

In order to minimize the individual subproblems for reconstruction and for motion estimation, we use the primal-dual approach introduced in [10]. To this end, we rewrite the given problems as saddle-point formulations of the form

and subsequently apply the iteration scheme proposed in [10]. In what follows we address the problems for and in detail. The pseudocode in Algorithm ? then gives a sketch of the alternating minimization strategy.

Reconstruction. In this part we restrict ourselves to the case of an data fidelity; the extension to an term is straightforward. Since all terms of contain operators, we dualize each term and obtain the saddle-point problem


Here denotes an indicator function on the set defined as

An application of the primal-dual method to the above problem yields the following iteration scheme:

where denotes the projection of the input argument onto the set , and are valid step sizes as explained in [10].
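As a self-contained illustration of the primal-dual iteration of [10], the following sketch applies it to a small TV-L2 denoising problem, i.e. the special case where the forward operator is the identity (a stand-in for the full reconstruction subproblem, in which the operator additionally contains the Radon blocks and the optical flow terms; parameter values are illustrative):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero boundary differences."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def tv_l2_denoise(f, lam=8.0, n_iter=200):
    """Primal-dual iteration for  min_u ||grad u||_{2,1} + lam/2 ||u - f||^2."""
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau * sigma * L^2 <= 1 with L^2 = 8
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual update: ascent step, then pointwise projection onto unit balls
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / norm, py / norm
        # primal update: proximal step of the quadratic data term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old        # over-relaxation
    return u
```

The dual projection realizes the indicator-function constraint mentioned above, and the primal step is the closed-form proximal map of the quadratic fidelity; replacing the identity by the block operator of the reconstruction subproblem changes only these two proximal computations.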

Motion Estimation. The saddle point formulation for the motion estimation problem can be derived in analogy to the reconstruction problem as follows


Similar to the reconstruction part, we apply the iteration scheme proposed in [10] to the given problem and end up with

To handle large displacements, the optical flow calculation is incorporated into a coarse-to-fine pyramid with intermediate warping steps. We refer to [13] for details.

4 Results

For the evaluation of the proposed motion estimation and reconstruction we consider two experiments in this section. For a qualitative evaluation we consider a simulated data set of a moving ball. Since the ground truth is known, we can explicitly evaluate the performance of the and models as reconstruction functionals by computing the reconstruction errors. Based on the insights gained in the simulated experiments, we then apply the reconstruction algorithm to real measurement data from the CT lab at the University of Helsinki.

A further aim is to compare the reconstruction quality of the and fidelity terms for extremely undersampled dynamic data by means of their reconstruction error. Two of the most common error measures are the distance and the distance between the reconstruction and the phantom, and we use both to compare the results. However, a disadvantage of these measures is that they are not neutral with respect to the norms chosen in the model. Naturally, the error in is smaller for an data fidelity term, and the error in is smaller for the model. Thus, it is necessary to use an unbiased technique as well. For this purpose, the method of our choice is the Structural Similarity (SSIM) index (cf. [6]). The SSIM index compares the structures of two images with a perception-based model, taking into account luminance as well as contrast. For two images and the SSIM index is given by

where and are the averages of and , and are the variances of and , is the covariance of and , and and are constants preventing division by zero. To compare the structures of two entire image sequences, we compute the SSIM index for every time step and take the average, i.e.

In contrast to the errors in and in , the SSIM index is close to one for images that are similar to the reference image.
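The formula above can be sketched directly (our own implementation of the global, single-window variant; the constants c1 and c2 are the common defaults for intensities in [0, 1] and are an assumption, not values from this paper):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM index between two images of equal size."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_sequence(xs, ys):
    """Average the per-frame SSIM index over two image sequences."""
    return float(np.mean([ssim_global(x, y) for x, y in zip(xs, ys)]))
```

For two identical frames the index equals one, and it decreases as structure, luminance, or contrast deviate from the reference; windowed variants that average local SSIM maps behave analogously.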

4.1 Software experiment: Pinball

The synthetic Pinball data set consists of a two-dimensional image of pixels, recorded at 30 consecutive time steps. Throughout this period, a uniform and rigid ball moves from the left side of the image frame to the right side. During the whole time, the ball resides within a stationary ellipse of medium intensity. shows the ground truth images at a selection of time steps. To avoid an inverse crime, we first compute the sinogram from a high-resolution image, add Gaussian noise, and finally downscale it to the size of the corresponding phantom. The noise level has been chosen to be reasonably low for accurate measurements, such that the reconstructed features depend mainly on the sampled data and the chosen angles. The regularization parameters were chosen such that the error is minimized for the - and the - model, respectively. That is for ; and for . A full space-time reconstruction takes a few minutes on a modern CPU.

The results of our computations can be seen in for the data fidelity term and in for . Each figure illustrates the four measurement settings mentioned in Section 1. The results in the top rows are computed with small angular increments, i.e. single consecutive angles. As previously discussed, the information from a single angle is not sufficient to obtain a reasonable reconstruction, and hence different measurement protocols need to be considered. The second rows show results computed assuming small angular increments with multiple angles, in this case two angles with a 90 degree offset. The results for the tracking approach are presented in the third rows, i.e. we have a full CT scan of 60 angles available for the first and last time step. The bottom rows present the results for a single randomized angle in each time step.

In both examples we can clearly see that reconstructions from one incremental angle per time step cannot produce satisfactory results. The ellipses are rather well reconstructed, sharp in case of the fidelity term and blurred for the fidelity term, but the position and the shape of the ball are incorrect. The ball seems to follow a wave-like trajectory, which depends on the measurement angles. By increasing the number of angles per time step to two in the second measurement setup, the shape and the position of the ball are already reconstructed significantly better than in the results with only a single angle. However, for both models there are two tail-like artifacts in the direction of the projections. Between those tails, i.e. in the upper part of the ellipse, edges are not properly reconstructed. For the tracking approach, the initial and final reconstructions are of course accurate. In the intermediate time steps the position of the ball cannot be reconstructed correctly, but the results are considerably more accurate than in the approach without any a priori information. Finally, for a randomized measurement protocol with a single angle, the ball is correctly located. For the fidelity term its shape is close to a square, whereas the fidelity term nicely reproduces a round appearance. The ellipse is well reconstructed for both fidelity terms. In general, we note that the fidelity term produces sharper results, especially for the stationary ellipse, whereas the fidelity term has a tendency to blur the background while keeping the shape of the moving object closer to the original.

Table 1: Calculated relative errors and averaged SSIM for Pinball reconstructions.

                          error    error    SSIM     error    error    SSIM
small incr. ( angle)      0.4744   0.6485   0.7498   0.8897   0.6962   0.4310
small incr. ( angles)     0.3828   0.4166   0.7275   0.3329   0.2954   0.7208
tracking                  0.3131   0.5177   0.8240   0.4789   0.5042   0.6321
randomized angle          0.1978   0.3310   0.8502   0.2223   0.2586   0.8006

Additionally, to accompany the visual inspection, we computed the relative errors in and in , as well as the SSIM index (see ), displayed in . Comparing the performance of the and the data fidelity for each experimental setting reveals that, as expected, the error in is always smaller for the results calculated with the data fidelity, whereas the error is smaller for the results of the data fidelity. However, the SSIM index, which we expect to be more neutral with respect to the chosen norm, indicates that the norm outperforms the norm for every single approach. This is a consequence of the distinctly better reconstruction of the ellipses. Concerning the different experimental settings, the approach with randomized angles achieves by far the best results. For both data fidelities all error measures indicate that this approach yields the best outcome. Consistent with the visual inspection, the approach with small angular increments and single angles can be considered the worst performing approach.
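The error measures in Table 1 can be computed as sketched below. The global (non-windowed) SSIM formula follows Brunet et al. [6], with the conventional constants for images scaled to [0, 1]; this is a simplified sketch, not necessarily the exact evaluation code used for the table (the original likely uses a windowed SSIM variant).

```python
import numpy as np

def relative_error(recon, truth, p):
    """Relative L^p error: ||recon - truth||_p / ||truth||_p."""
    num = (np.abs(recon - truth) ** p).sum() ** (1.0 / p)
    den = (np.abs(truth) ** p).sum() ** (1.0 / p)
    return num / den

def global_ssim(x, y, data_range=1.0):
    """Non-windowed SSIM index; images assumed scaled to [0, data_range]."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For a dynamic reconstruction, the "averaged SSIM" of the table would then be the mean of the per-time-step indices.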

To conclude this section, we present in the flow fields corresponding to the approach with randomized angles for both fidelity terms. The flow fields generally estimate the motion in the correct direction. Nevertheless, there are obvious differences between the flow fields estimated with the and the data fidelity. For the data fidelity the entire movement of the ball is visible in the flow field. In contrast, for the data fidelity the model only recognizes motion at the edges of the ball. For both data fidelity terms the flow fields for the first time steps are not correctly estimated. The reason is that no a priori information from previous time steps is available to initialize the estimation.

4.2 Hardware experiment: Rolling Stones

The Rolling Stones data set consists of images of size pixels, measured at 30 consecutive time steps. Even though we aim at recovering continuous movement measured with an extremely limited number of angles, the measurements were actually recorded from 60 equally distributed angles in a stop-and-go approach. This has the advantage that we can use the exact same data set for different arrangements of angles, and that we have a reference reconstruction from 60 angles as ground truth. shows the reconstruction from 60 angles for a representative selection of time steps, computed by a simple smoothed and therefore differentiable - variant especially suitable for large-scale data; a detailed description of the procedure can be found in [20].

The Rolling Stones data depicts three ceramic stones of approximately 25 mm, which are initially located next to each other in the center of the domain. In each time step the stones move further apart from each other towards the boundary of the imaging domain. Since the stones were moved manually during the measurements, the speed and direction of movement differ for every stone and every time step. We have seen in the software experiments that the data fidelity works best for reconstructing this kind of data, and hence we restrict ourselves to the - model in the following. In we present the sinograms that are used for the joint image reconstruction and motion estimation.

We omit the reconstructions for small angular increments with one angle here, since they were simply not satisfactory. For small angular increments with two angles, on the other hand, we obtain quite informative reconstructions, as displayed in . The reconstruction results for the tracking setting can be seen in , and the corresponding reconstructions for the randomized angles are presented in .

For all three measurement setups presented we can reconstruct the positions of the stones quite clearly. Especially in the case of small angular increments with two angles and of randomized angles, we can separate the stones from the beginning and clearly track the movement. For the tracking approach the initial and end states are clearly reconstructed due to the full angle data, but the position of the stones during the movement can only be identified after they have separated sufficiently.

The randomized angles provide superior reconstruction quality relative to the amount of data used. Since the motivation of this study is to reduce the number of necessary measurements for dynamic data as far as possible, the randomized angles are most successful in this respect. We point out that we only have one projection image per time step and hence cannot reduce the data any further in this direction. However, we can lower the number of used projections even more by reducing the number of time steps and hence increasing the spatial offset between frames. In , reconstructions for a total of 15 and 8 time steps are presented. For 15 projections, the separation as well as the position of the stones are still well reconstructed; only a slightly stronger blurring occurs due to the larger movements between frames. For the results calculated from only 8 projections, the blurring has strongly increased and the shape of the stones is no longer clear. This can be considered the limit at which our approach produces reasonable results.

5 Conclusions and outlook

We introduced a framework that combines motion estimation and reconstruction in X-ray tomography in a joint model. The aim of this study is to illustrate that one can estimate the motion from the measured data in a variational framework in order to reconstruct the dynamics of the measured object in space-time. For estimating the motion we utilized the optical flow framework, which is well-established in image registration but has only recently been introduced to inverse problems. The forward problem in our framework is modeled by a time-dependent Radon transform that is well-defined in a finite time setting. We then combined the reconstruction task and the motion estimation into a joint model, which can be solved in an alternating fashion with modern optimization techniques. Additionally, we proposed a probabilistic perspective on error modeling that will be studied further in future research.
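In compact form, the joint model described above can be sketched as follows, with $R_t$ the time-dependent Radon transform at time $t$, $f_t$ the corresponding projection data, $u$ the image sequence, and $v$ the motion field; the weight symbols $\alpha, \beta, \gamma$ are placeholders, and the display is only schematic (the precise functional and norms are those of the model section):

```latex
\min_{u,v}\; \sum_{t} \frac{1}{p}\,\| R_t u_t - f_t \|_p^p
\;+\; \alpha\, \mathrm{TV}(u)
\;+\; \beta\, \| \partial_t u + v \cdot \nabla u \|_1
\;+\; \gamma\, \mathrm{TV}(v),
\qquad p \in \{1, 2\},
```

where $p$ selects the data fidelity compared in the experiments, and the alternating minimization fixes $v$ to update $u$ and vice versa.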

The proposed model has been applied to simulated and real phantoms with extremely undersampled data, i.e. we went as low as one projection per time step and were still able to produce informative results capturing the dynamics of the system correctly. The experiments showed that an - model for the reconstruction is most powerful for this kind of data. Furthermore, we obtained the best results with randomly chosen projection angles at each time instance. This will be of special interest for further studies in the context of compressed sensing in X-ray tomography, in particular for dynamic systems.

Optical flow, and likewise the continuity equation, is a rather restrictive motion model in the context of some inverse problems in medical imaging, since one cannot introduce new mass into the system, e.g. in terms of tracers or signals. Hence it is important to investigate different motion models. A first extension that comes to one's mind is to allow input into the system by considering non-zero Neumann boundary conditions. Relevant applications include cardiac scans, where a tracer is injected into the patient's bloodstream to monitor blood flow through the heart. Many imaging modalities also involve diffusion processes and are therefore not suitable for the optical flow model. We leave this and further extensions to future research.
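For reference, the two motion models referred to here can be contrasted as follows; the optical flow constraint keeps image intensities constant along trajectories, while the continuity equation conserves the total mass, and in both cases no new material can appear in the interior of the domain, which is precisely the restriction noted above:

```latex
\partial_t u + v \cdot \nabla u = 0
\qquad \text{(optical flow, brightness constancy)},
\\
\partial_t u + \nabla \cdot (u\, v) = 0
\qquad \text{(continuity equation, mass conservation)}.
```

The two models coincide for divergence-free motion fields, $\nabla \cdot v = 0$.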


This work has been supported by the German Academic Exchange Service (DAAD) via Project 57162894, Bayesian Inverse Problems in Banach Space, as well as by the Academy of Finland through the Finnish Centre of Excellence in Inverse Problems Research 2012–2017, decision number 250215. MB, HD and LF acknowledge further support by the ERC via EU FP7 ERC Consolidator Grant 615216 LifeInverse. AH was partially supported by the FiDiPro project of the Academy of Finland, decision number 263235. TH was supported by the Academy of Finland via project 275177.


  1. S. Achenbach, T. Giesler, D. Ropers, S. Ulzheimer, H. Derlien, C. Schulte, E. Wenkel, W. Moshage, W. Bautz, W. G. Daniel, et al., Detection of coronary artery stenoses by contrast-enhanced, retrospectively electrocardiographically-gated, multislice spiral computed tomography, Circulation, 103 (2001), pp. 2535–2538.
  2. A. Apte, C. K. Jones, A. Stuart, and J. Voss, Data assimilation: Mathematical and statistical perspectives, International journal for numerical methods in fluids, 56 (2008), pp. 1033–1046.
  3. G. Aubert, R. Deriche, and P. Kornprobst, Computing optical flow via variational techniques, SIAM Journal on Applied Mathematics, 60 (1999), pp. 156–182.
  4. C. Blondel, R. Vaillant, G. Malandain, and N. Ayache, 3D tomographic reconstruction of coronary arteries using a precomputed 4D motion field, Physics in Medicine and Biology, 49 (2004), p. 2197.
  5. C. Brune, 4D Imaging in Tomography and Optical Nanoscopy, PhD thesis, University of Muenster, Germany, July 2010.
  6. D. Brunet, E. R. Vrscay, and Z. Wang, On the mathematical properties of the structural similarity index, IEEE Transactions on Image Processing, 21 (2012), pp. 1488–1499.
  7. T. A. Bubba, A. Hauptmann, S. Huotari, J. Rimpeläinen, and S. Siltanen, Tomographic x-ray data of a lotus root filled with attenuating objects, arXiv preprint arXiv:1609.07299, (2016).
  8. F. N. Büchi, R. Flückiger, D. Tehlar, F. Marone, and M. Stampanoni, Determination of liquid water distribution in porous transport layers, ECS Transactions, 16 (2008), pp. 587–592.
  9. M. Burger, H. Dirks, and C.-B. Schönlieb, A variational model for joint motion estimation and image reconstruction, arXiv preprint arXiv:1607.03255, (2016).
  10. A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision, 40 (2011), pp. 120–145.
  11. A. M. Cormack, Representation of a function by its line integrals, with some radiological applications I, Journal of Applied Physics, 34 (1963), pp. 2722–2727.
  12. H. Dirks, Variational Methods for Joint Motion Estimation and Image Reconstruction, PhD thesis, Westfälische Wilhelms-Universität Münster, 2015.
  13. H. Dirks, Joint large-scale motion estimation and image reconstruction, preprint arXiv:1610.09908, (2016).
  14. H. Dirks, J. Geiping, D. Cremers, and M. Moeller, Multiframe motion coupling via infimal convolution regularization for video super resolution, arXiv preprint arXiv:1611.07767, (2016).
  15. T. G. Flohr, C. H. McCollough, H. Bruder, M. Petersilka, K. Gruber, C. Sü, M. Grasruck, K. Stierstorfer, B. Krauss, R. Raupach, et al., First performance evaluation of a dual-source CT (DSCT) system, European Radiology, 16 (2006), pp. 256–268.
  16. L. Frerking, Variational Methods for Direct and Indirect Tracking in Dynamic Imaging, PhD thesis, Westfälische Wilhelms-Universität Münster, 2016.
  17. B. Hahn, Reconstruction of dynamic objects with affine deformations in computerized tomography, Journal of Inverse and Ill-posed Problems, 22 (2014), pp. 323–339.
  18. B. N. Hahn, Null space and resolution in dynamic computerized tomography, Inverse Problems, 32 (2016), p. 025006.
  19. B. N. Hahn and E. T. Quinto, Detectable singularities from dynamic Radon data, SIAM Journal on Imaging Sciences, 9 (2016), pp. 1195–1225.
  20. K. Hämäläinen, L. Harhanen, A. Hauptmann, A. Kallonen, E. Niemi, and S. Siltanen, Total variation regularization for large-scale x-ray tomography, International Journal of Tomography & Simulation, 25 (2014), pp. 1–25.
  21. G. T. Herman and R. Davidi, Image reconstruction from a small number of projections, Inverse Problems, 24 (2008), p. 045011.
  22. N. Huynh, F. Lucka, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, Sub-sampled fabry-perot photoacoustic scanner for fast 3d imaging, in SPIE BiOS, International Society for Optics and Photonics, 2017, pp. 100641Y–100641Y.
  23. T. L. Jensen, J. H. Jørgensen, P. C. Hansen, and S. H. Jensen, Implementation of an optimal first-order method for strongly convex total variation regularization, BIT Numerical Mathematics, DOI: 10.1007/s10543-011-0359-8 (2011).
  24. J. S. Jørgensen, E. Y. Sidky, P. C. Hansen, and X. Pan, Empirical average-case relation between undersampling and sparsity in X-ray CT, Inverse Problems and Imaging, 9 (2015), p. 431.
  25. T. Koesters, F. Knoll, A. Sodickson, D. K. Sodickson, and R. Otazo, Sparsect: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction, 2017.
  26. V. Kolehmainen, A. Vanne, S. Siltanen, S. Järvenpää, J. Kaipio, M. Lassas, and M. Kalke, Parallelized Bayesian inversion for three-dimensional dental X-ray imaging, IEEE Transactions on Medical Imaging, 25 (2006), pp. 218–228.
  27. K. Law, A. Stuart, and K. Zygalakis, Data Assimilation, Springer, 2015.
  28. T. Li, E. Schreibmann, Y. Yang, and L. Xing, Motion correction for improved target localization with on-board cone-beam computed tomography, Physics in medicine and biology, 51 (2005), p. 253.
  29. F. Natterer, The mathematics of computerized tomography, vol. 32, John Wiley & Sons, Chichester, USA, and B. G. Teubner, Stuttgart, Germany, 1986.
  30. J. Radon, Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten, Berichte über die Verhandlungen der Sächsischen Akademien der Wissenschaften, Leipzig, Mathematisch-Physische Klasse, 69 (1917), pp. 262–267.
  31. E. L. Ritman, Cardiac computed tomography imaging: a history and some future possibilities, Cardiology clinics, 21 (2003), pp. 491–513.
  32. R. A. Robb, E. A. Hoffman, L. J. Sinak, L. D. Harris, and E. L. Ritman, High-speed three-dimensional x-ray computed tomography: The dynamic spatial reconstructor, Proceedings of the IEEE, 71 (1983), pp. 308–319.
  33. L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), pp. 259–268.
  34. L. A. Shepp and B. F. Logan, The fourier reconstruction of a head section, IEEE Transactions on Nuclear Science, 21 (1974), pp. 21–43.
  35. E. Y. Sidky, J. H. Jørgensen, and X. Pan, Convex optimization problem prototyping for image reconstruction in computed tomography with the chambolle-pock algorithm, Physics in Medicine and Biology, 57 (2012), p. 3065.
  36. E. Y. Sidky and X. Pan, Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization, Physics in Medicine and Biology, 53 (2008), p. 4777.
  37. K. T. Smith, D. C. Solmon, and S. L. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bulletin of the AMS, 83 (1977), pp. 1227–1270.
  38. S. Suhr, Variational Methods for Combined Image and Motion Estimation, PhD thesis, Westfälische Wilhelms-Universität Münster, 2015.
  39. J. Tang, B. E. Nett, and G.-H. Chen, Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms, Physics in Medicine and Biology, 54 (2009), p. 5781.
  40. Z. Tian, X. Jia, K. Yuan, T. Pan, and S. B. Jiang, Low-dose CT reconstruction via edge-preserving total variation regularization, Physics in Medicine and Biology, 56 (2011), p. 5949.
  41. T. Trabold, M. Buchgeister, A. Küttner, M. Heuschmid, A. Kopp, S. Schröder, and C. Claussen, Estimation of radiation exposure in 16-detector row computed tomography of the heart with retrospective ecg-gating, in RöFo-Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren, vol. 175, © Georg Thieme Verlag Stuttgart New York, 2003, pp. 1051–1055.
  42. C. Zach, T. Pock, and H. Bischof, A duality based approach for realtime TV-L1 optical flow, in Joint Pattern Recognition Symposium, Springer, 2007, pp. 214–223.
  43. Y. Zhang, H.-P. Chan, B. Sahiner, J. Wei, M. M. Goodsitt, L. M. Hadjiiski, J. Ge, and C. Zhou, A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis, Medical Physics, 33 (2006), pp. 3781–3795.
  44. J. Zhao, Y. Lu, T. Zhuang, and G. Wang, Overview of multisource ct systems and methods, in SPIE Optical Engineering+ Applications, International Society for Optics and Photonics, 2010, pp. 78040H–78040H.