Joint image edge reconstruction and its application in multi-contrast MRI

Yunmei Chen (Department of Mathematics, University of Florida, Gainesville, FL 32611, USA. Email: yun@math.ufl.edu), Ruogu Fang (Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA. Email: ruogu.fang@bme.ufl.edu), Xiaojing Ye (Corresponding author. Department of Mathematics & Statistics, Georgia State University, Atlanta, GA 30303, USA. Email: xye@gsu.edu)
Abstract

We propose a new joint image reconstruction method that recovers image edges directly from observed data. More specifically, we reformulate joint image reconstruction with vectorial total-variation regularization as an $\ell_1$-type minimization problem for the Jacobian of the underlying multi-modality or multi-contrast images. The derivation of the data fidelity for the Jacobian and the transformation of the noise distribution are also detailed. The new minimization problem admits an optimal convergence rate of $O(1/k^2)$, where $k$ is the iteration number, and the per-iteration cost is low thanks to closed-form matrix-valued shrinkages. We conducted numerical tests on a number of multi-contrast magnetic resonance imaging (MRI) datasets, which show that the proposed method significantly improves reconstruction efficiency and accuracy compared to the state of the art.

Keywords. Joint image reconstruction, matrix norm, optimal gradient method, multi-contrast.

1 Introduction

Advances in medical imaging technology have allowed simultaneous data acquisition for multi-modality images, such as PET-CT and PET-MRI, and for multi-contrast images, such as T1/T2/PD MRI. Multi-modality/contrast imaging integrates two or more imaging modalities (or contrasts) into one system to produce more comprehensive observations of the subject. Such technology combines the strengths of different imaging modalities/contrasts in clinical diagnostic imaging, and hence can be much more precise and effective than conventional imaging. The images from different modalities or contrasts complement each other and yield high spatial resolution and better tissue contrast. However, effectively utilizing the sharable information from different modalities when reconstructing multi-modality images remains a challenging task, especially when only limited data are available. Therefore, the main approach of joint image reconstruction is to incorporate similarities between all modalities/contrasts, such as anatomical structure, into the reconstruction process to improve accuracy [2, 9, 17].

In this paper, we consider a new approach that jointly reconstructs the edges of images, rather than the images themselves, directly from multi-modality/contrast imaging data. The final images can then be obtained very easily from the reconstructed edges. We show that this new approach results in an $\ell_1$-type minimization that can be solved effectively by accelerated proximal gradient methods with an optimal convergence rate of $O(1/k^2)$, where $k$ is the iteration number. Moreover, the subproblems reduce the complex vectorial/joint total variation regularizations to simple matrix-valued shrinkages, which have cheap closed-form solutions. This is in sharp contrast to primal-dual based image reconstruction algorithms, such as the primal-dual hybrid gradient method (PDHG) and the alternating direction method of multipliers (ADMM), which only yield an $O(1/k)$ convergence rate. Therefore, the proposed method can produce high quality reconstructions with much improved efficiency over state-of-the-art methods for multi-modality/contrast image reconstruction problems.

The contributions of this paper can be summarized as follows. (a) We develop a novel two-step joint image reconstruction method that transforms the vectorial TV regularized minimization over the image into an $\ell_1$-type minimization over the Jacobian. This enables a numerical scheme with the optimal convergence rate $O(1/k^2)$ by employing an accelerated (proximal) gradient descent method (Section 3.2). (b) In the resulting minimization problem, the main subproblem involves a matrix norm (instead of vectorial TV) and can be solved easily using a generalized shrinkage for matrices. We provide closed-form solutions for several cases originating from commonly used vectorial TVs (Section 3.4 and Appendix A). (c) We analyze the noise distribution after the transformation, and incorporate it into the data fidelity of the Jacobian in the algorithm, which significantly improves reconstruction accuracy and efficiency (Section 3.3). (d) We conduct a series of numerical tests on several multi-contrast MRI datasets and show the very promising performance of the proposed methods (Section 4). Although we focus on multi-contrast MRI reconstruction, where all data are acquired in Fourier space, it is worth noting that the proposed method can be readily extended to other cases, such as those with Radon data, and hence is applicable to reconstructions involving other types of imaging modalities.

The remainder of the paper is organized as follows. We first provide an overview of the recent literature on joint image reconstruction in Section 2. Then we present the proposed method and address a number of details in Section 3. In Section 4, we conduct a number of numerical tests on a variety of multi-contrast MR image reconstruction datasets. Section 5 concludes our findings.

2 Related Work

A significant amount of effort has been devoted to developing models and algorithms that can effectively take the anatomical structure similarities across modalities/contrasts into account during joint reconstruction. As widely accepted, these structure similarities can be exploited using the locations and directions of the edges [7, 10, 11], characterized by the magnitude and direction of the gradient of an image. Active research on multi-modal/contrast image reconstruction has focused mainly on how to effectively utilize the complementary information on these structure similarities to improve the accuracy and robustness of the reconstructions.

Inspired by the success of total variation (TV) based reconstruction for scalar-valued images, many algorithms for joint reconstruction of multi-modal images extend the classical TV to a vectorial TV. This extension aims at capturing the sharable edge information and performing smoothing along the common edges across the modalities. There have been several different ways to define TV regularization for multi-modal images (represented by vector-valued functions). For instance, $\big(\sum_{i=1}^{M}\mathrm{TV}(u_i)^2\big)^{1/2}$, where $u_i$ is the (scalar-valued) image of modality/channel/contrast $i$ in an $M$-modality imaging problem, is proposed in [3]. In [6], a number of variations along this direction, called collaborative TV (CTV), are studied and summarized comprehensively. A specific CTV takes a particular combination of norms to integrate the partial derivatives of the image across the modalities. For example, $\sum_{i=1}^{M}\int_{\Omega}|\nabla u_i(x)|\,dx$ takes the $\ell_2$ norm of the image gradient at every point (pixel) for each modality, then the $\ell_1$ norm over all pixels, and finally the $\ell_1$ norm (direct sum) across all modalities. Another commonly used CTV variation, called joint TV (JTV), is formulated as $\mathrm{JTV}(u)=\int_{\Omega}\big(\sum_{i=1}^{M}|\nabla u_i(x)|^2\big)^{1/2}dx$ and has been successfully applied to color image reconstruction [4, 23], multi-contrast MR image reconstruction [17, 19], joint PET-MRI reconstruction [11], and geophysics [16]. As one can see, JTV takes the $\ell_2$ norm in terms of the gradients, the $\ell_2$ norm across the modalities, and finally the $\ell_1$ norm over the pixels. In general, the gradient of a vector-valued image consisting of multiple modalities/contrasts is a tensor at each pixel, and a vectorial TV takes a specific combination of norms with respect to the partial derivatives (gradients), the modalities (channels, or contrasts), and the pixels, respectively.
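To make the composition of norms concrete, the following minimal numpy sketch (our own illustration, not code from any of the cited works) evaluates the discrete JTV and the channel-by-channel CTV of a list of 2D channels, assuming forward differences with replicated boundaries.

```python
import numpy as np

def forward_diff(u):
    """Forward differences of a 2D image with replicated (Neumann) boundary."""
    d1 = np.diff(u, axis=0, append=u[-1:, :])   # vertical partial derivative
    d2 = np.diff(u, axis=1, append=u[:, -1:])   # horizontal partial derivative
    return d1, d2

def jtv(channels):
    """Joint TV: l2 over gradient entries and channels, then l1 over pixels."""
    sq = np.zeros_like(channels[0], dtype=float)
    for u in channels:
        d1, d2 = forward_diff(u)
        sq += d1**2 + d2**2
    return np.sqrt(sq).sum()

def channelwise_tv(channels):
    """Channel-by-channel CTV: l2 per pixel, l1 over pixels, l1 (sum) over channels."""
    return sum(np.hypot(*forward_diff(u)).sum() for u in channels)
```

When edges are perfectly aligned across all $M$ channels, the channel-by-channel sum is $\sqrt{M}$ times larger than JTV, whereas the two coincide when only one channel has an edge at a pixel; this coupling across channels is precisely what JTV exploits.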

Another joint regularization approach is to extend anisotropic TV regularization from the scalar-valued to the vector-valued case. In this approach, the isotropic TV is replaced by an anisotropic term that incorporates directional structures exhibited by either the original data or the underlying image. Anisotropic TV has been applied to standard single-modal image reconstruction with success in [13, 15, 18], where a structure tensor is used to provide information on both the size and the orientation of the image gradients. In particular, the model proposed in [15] employs an anisotropic TV regularization in which the gradient $\nabla u$ of the single-modal, scalar-valued 2D image $u$ is weighted by a diffusion tensor determined by the eigenvalues and corresponding eigenvectors of the structure tensor of $u$. The model developed in [13] alternately solves two minimizations: one estimates the structure tensor using the image from the previous iteration, and the other improves the image with an adaptive regularizer defined from this tensor. The idea of anisotropic TV has been extended to vector-valued images with the anisotropy adopted from the gradients of the multi-modal/contrast images, which incorporates similarities of directional structures into the joint reconstruction. The method developed in [9] projects the gradient in the total variation functional onto a predefined vector field given by the other contrast for joint reconstruction of multi-contrast MR images. Specifically, a directional TV (DTV) of the form $\int_{\Omega}|P_{\xi}\nabla u(x)|\,dx$ is proposed in [9], where $P_{\xi}\nabla u$ is the residual of the projection of $\nabla u$ onto the direction field $\xi$ derived from a reference contrast $v$, $u$ and $v$ are scalar-valued functions each representing the image of one modality, and $|\xi(x)|\le 1$ for all $x$. In other words, this is anisotropic diffusion using the structure tensor of the given reference image $v$.

With the geometric interpretation of the gradient tensor above, the authors of [5] suggest considering a vector-valued image $u:\Omega\to\mathbb{R}^M$ as a parameterized 2-dimensional Riemannian manifold with metric $G=(\nabla u)^{\top}\nabla u$, where $\nabla u$ is the Jacobian of $u$. Then the eigenvector corresponding to the larger eigenvalue of $G$ gives the direction of the vectorial edge. Based on this framework, several forms of vectorial TV (VTV) have been developed. In [22], a family of VTV formulations is suggested as the integral of $f(\lambda_+,\lambda_-)$ over the manifold, where $\lambda_+$ and $\lambda_-$ denote the larger and smaller eigenvalues of the metric $G$, respectively, and $f$ is a suitable scalar-valued function. One special choice, $f=\sqrt{\lambda_++\lambda_-}$, i.e., the Frobenius norm of the Jacobian $\nabla u$, reduces to the JTV mentioned above. For another special choice, $f=\sqrt{\lambda_+}$, one obtains $\int_{\Omega}\sigma_1(\nabla u(x))\,dx$, where $\sigma_1(\nabla u(x))$ is the largest singular value of the Jacobian of $u$ at $x$; this quantity can be computed through a dual (saddle point) formulation over rank-one matrices $pq^{\top}$ with unit vectors $p$ and $q$.

Besides joint TV or tensor-based regularization, the parallelism of level sets across multi-modality/contrast images has also been proposed as a joint image regularizer in [8, 11, 12, 16]. The main idea is to exploit the structural similarity of two images $u$ and $v$, measured by the parallelism of their gradients $\nabla u(x)$ and $\nabla v(x)$ at each point $x$ through a quantity of the form $\varphi\big(\psi(|\nabla u||\nabla v|)-\psi(|\langle\nabla u,\nabla v\rangle|)\big)$ for some scalar functions $\varphi$ and $\psi$. The regularization in the joint reconstruction is then the integral of this quantity over the image domain. Several different choices of $\varphi$ and $\psi$, ranging from identity maps to smoothed and quadratic variants, have been studied in these works. In [11], side information on the level sets, namely the locations and directions of the gradients of a reference image, is assumed to be available to assist the reconstruction.

Another joint reconstruction approach, different from the aforementioned methods, is to recover the gradient information of each of the underlying multi-modal images from the measurements in the Fourier domain, and then use the recovered gradients to reconstruct the images. This approach is motivated by the idea of Bayesian compressed sensing and applied to joint multi-contrast MR image reconstruction in [2]. In [2], the gradients of the multi-contrast images are reconstructed jointly from their measurements in Fourier space under a hierarchical Bayesian framework, where the joint sparsity of the gradients across the multi-contrast MR images is exploited by sharing the hyper-parameter in their maximum a posteriori (MAP) estimations. Their experiments show the advantage of using joint sparsity of gradients over conventional sparsity. However, their method incurs extensive computational cost. A two-step gradient reconstruction of MR images is also proposed in [20]; however, only the single-modality/contrast case is considered there. In [20], the authors showed that this two-step gradient reconstruction approach allows one to reconstruct the image from fewer measurements than required by the standard TV minimization method.

3 Proposed Method

In this section, we propose a new joint image reconstruction method that first restores the image edges (gradients/Jacobian) from the observed data, and then assembles the final image using these edges. Without loss of generality, we assume all images are 2-dimensional, i.e., the image domain $\Omega\subset\mathbb{R}^2$ (all derivations below can be easily extended to higher dimensional images). For simplicity, we further assume $\Omega$ is rectangular. For the single channel/modality/contrast case, we use a function $u:\Omega\to\mathbb{R}$ to represent the image, such that $u(x)$ stands for the intensity of the image at $x\in\Omega$. In the multi-modality case, $u=(u_1,\dots,u_M):\Omega\to\mathbb{R}^M$, where $M$ is the number of modalities.

It is also convenient to treat an image as a matrix in the discretized setting, which we mainly work with in practice, and further stack the columns of this matrix to form a single column vector $u\in\mathbb{R}^N$, where $N$ is the total number of pixels in the image. For the multi-modality case, we have $u_i\in\mathbb{R}^N$ representing the (discretized) image of modality $i$ for $i=1,\dots,M$.

3.1 Vectorial total-variation regularization

Standard total-variation (TV) regularized image reconstruction can be formulated as the following minimization problem:

$\min_{u}\ \|u\|_{\mathrm{TV}} + \mu\, h(u)$   (1)

where $h(u)$ represents the data fidelity function, e.g., $h(u)=\frac{1}{2}\|Au-f\|_2^2$ for some given data sensing matrix $A$ and observed partial/noisy/blurry data $f$, and $\mu>0$ is a weighting parameter. By solving (1), we obtain a solution that minimizes the sum of the TV regularization term and the data fidelity term, with $\mu$ balancing the two terms. It has been shown that TV regularization can effectively recover high quality images with well preserved object boundaries from limited and/or noisy data.

In joint multi-modality image reconstruction, the edges of the images from different modalities are highly correlated. To take this into account, the standard TV regularized reconstruction (1) can simply be replaced by its vectorial TV (VTV) regularized counterpart:

$\min_{u}\ \|u\|_{\mathrm{VTV}} + \mu\, h(u)$   (2)

where $\|u\|_{\mathrm{VTV}}$ is the vectorial TV of $u=(u_1,\dots,u_M)$. In the case that $u$ is continuously differentiable, VTV is a direct extension of standard TV as an “$L^1$ norm of the gradient”:

$\|u\|_{\mathrm{VTV}} = \int_{\Omega}\|\nabla u(x)\|_{\sharp}\,dx$   (3)

where $\nabla u(x)\in\mathbb{R}^{M\times 2}$ is the Jacobian matrix of $u$ at point $x$, and $\|\cdot\|_{\sharp}$ indicates some specific matrix norm. For example, let $A$ be an $m$-by-$n$ matrix; then we may use one of the following matrix norms as $\|\cdot\|_{\sharp}$:

  • Frobenius norm: $\|A\|_F=\big(\sum_{i,j}|a_{ij}|^2\big)^{1/2}$. This essentially treats $A$ as an $mn$-vector.

  • Induced 2-norm: $\|A\|_2=\sigma_1(A)$, where $\sigma_1(A)$ is the largest singular value of $A$. This norm is advocated in [14] with a geometric interpretation when used in VTV.

  • Nuclear norm: $\|A\|_*=\sum_{i}\sigma_i(A)$, where $\sigma_i(A)$ are the singular values of $A$. This is a convex relaxation of the matrix rank.

Obviously, there are a number of other choices for VTV due to the many variations of matrix norms. However, in this paper we focus only on these three norms, as they are the most commonly used ones in VTV regularized image reconstruction.
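For a single pixel, all three norms can be read off from the singular values of the $M$-by-$2$ Jacobian block, as the following minimal numpy illustration shows (the matrix below is an arbitrary example, not from the paper):

```python
import numpy as np

# example Jacobian block at one pixel: M = 3 modalities (rows), d = 2 directions (columns)
J = np.array([[1.0,  0.5],
              [0.2, -1.0],
              [0.0,  0.3]])

sv = np.linalg.svd(J, compute_uv=False)   # singular values; at most min(M, d) = 2 of them

frobenius = np.linalg.norm(J, 'fro')      # sqrt of the sum of squared entries = sqrt(sum(sv**2))
induced_2 = sv[0]                         # largest singular value (spectral norm)
nuclear   = sv.sum()                      # sum of singular values
```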

It is also worth noting that, in general, the VTV with matrix norm $\|\cdot\|_{\sharp}$ is defined for any integrable function $u$ (not necessarily differentiable) as

$\|u\|_{\mathrm{VTV}} = \sup\Big\{\int_{\Omega}\langle u(x),\operatorname{div}\Phi(x)\rangle\,dx \;:\; \Phi\in C_c^1(\Omega;\mathbb{R}^{M\times 2}),\ \|\Phi(x)\|_{\sharp^*}\le 1\ \ \forall\,x\in\Omega\Big\}$   (4)

where $\|\cdot\|_{\sharp^*}$ is the dual norm of $\|\cdot\|_{\sharp}$. Although we do not always make use of the original VTV definition (4) in the discrete setting, we show that it helps to derive the closed-form soft-shrinkage with respect to the corresponding matrix norm $\|\cdot\|_{\sharp}$ in Appendix A.

3.2 Joint edge reconstruction

The main idea of this paper is to reconstruct the edges (gradients/Jacobian) of the multi-modality images jointly, and then assemble the final images from these edges. To that end, we let $V$ denote the “gradients” (or Jacobian) of $u$. If $u$ is differentiable, then $V(x)=\nabla u(x)$ is the Jacobian matrix of $u$ at $x$. For example, assuming there are three modalities ($M=3$) and $u(x)=(u_1(x),u_2(x),u_3(x))$ at every point $x\in\Omega\subset\mathbb{R}^2$, then $V(x)$ is a $3\times 2$ matrix

$V(x)=\nabla u(x)=\begin{pmatrix}\partial_1 u_1(x) & \partial_2 u_1(x)\\ \partial_1 u_2(x) & \partial_2 u_2(x)\\ \partial_1 u_3(x) & \partial_2 u_3(x)\end{pmatrix}$   (5)

where $(\nabla u(x))_{ij}=\partial_j u_i(x)$ at every $x\in\Omega$ for $i=1,\dots,M$ and $j=1,\dots,d$ ($M=3$ and $d=2$ here). As a result, the VTV of $u$ in (3) simplifies to $\int_{\Omega}\|V(x)\|_{\sharp}\,dx$. If $u$ is not differentiable, then $V$ may become the weak gradient of $u$ (when $u\in W^{1,1}$) or even a matrix-valued Radon measure (when $u\in BV$). However, in the common practice of approximating partial derivatives by finite differences in numerical implementations, such subtlety does not make much difference. Therefore, we treat $V$ as the Jacobian of $u$ throughout the rest of the paper and in the numerical experiments.

Assume that we can derive the relation between the Jacobian $V$ and the originally observed data, and form a data fidelity $g(V)$ of $V$ as an analogue of the data fidelity $h(u)$ of the image (a justification of this assumption is presented in Section 3.3). Then we can reformulate the VTV regularized image reconstruction problem (2) in $u$ as a matrix-norm regularized inverse problem in $V$ as follows:

$\min_{V}\ \int_{\Omega}\|V(x)\|_{\sharp}\,dx + \mu\, g(V)$   (6)

Now we seek to reconstruct $V$ instead of $u$. This reformulation (6) has two significant advantages compared to the original formulation (2):

  • It reduces to an $\ell_1$-type minimization (6) that can be solved effectively by accelerated proximal gradient methods. For example, using Algorithm 1 based on FISTA [1], we can attain an optimal convergence rate of $O(1/k^2)$ for solving (6), where $k$ is the iteration number. This is in sharp contrast to the best known $O(1/k)$ rate of primal-dual based methods (including the recent, successful PDHG and ADMM) for solving (2).

  • The per-iteration complexity of the $\ell_1$-type minimization (6), e.g., in Algorithm 1, is the same as or even lower than that of the VTV regularized minimization (2). In particular, the closed-form solution of the proximal subproblem (7), a matrix-norm variant of soft-shrinkage, is readily available and cheap to compute.

Input: Initial image $u^0$ and its Jacobian $V^0=\nabla u^0$. Step size $\tau>0$.
Set $W^1=V^0$, $t_1=1$, $k=1$, and iterate (7)–(9) below until a stopping criterion is met:
$V^k = \arg\min_{V}\ \int_{\Omega}\|V(x)\|_{\sharp}\,dx + \frac{1}{2\tau}\big\|V-\big(W^k-\tau\mu\nabla g(W^k)\big)\big\|_F^2$   (7)
$t_{k+1} = \frac{1}{2}\big(1+\sqrt{1+4t_k^2}\big)$   (8)
$W^{k+1} = V^k + \frac{t_k-1}{t_{k+1}}\big(V^k-V^{k-1}\big)$   (9)
Output: Reconstructed Jacobian (edge) $V^k$.
Algorithm 1 Joint edge reconstruction of (6) using FISTA
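To make the steps concrete, here is a minimal numpy sketch of Algorithm 1 for the Frobenius-norm case, written under our own assumptions (periodic forward differences, orthonormal FFTs, and the data fidelity (11) weighted by $\mu$); the names dhat, frob_shrink, and fista_edges are ours and not from the paper.

```python
import numpy as np

def dhat(shape):
    """Fourier symbols of the periodic forward-difference operators D1, D2."""
    n1, n2 = shape
    k1 = np.fft.fftfreq(n1)[:, None]
    k2 = np.fft.fftfreq(n2)[None, :]
    return np.exp(2j * np.pi * k1) - 1.0, np.exp(2j * np.pi * k2) - 1.0

def frob_shrink(V, tau):
    """Pixel-wise Frobenius-norm shrinkage (15); V has shape (M, 2, n1, n2)."""
    nrm = np.sqrt((np.abs(V) ** 2).sum(axis=(0, 1), keepdims=True))
    scale = np.where(nrm > 0, np.maximum(nrm - tau, 0.0) / np.maximum(nrm, 1e-12), 0.0)
    return scale * V

def fista_edges(f, mask, mu, n_iter=100):
    """Sketch of Algorithm 1 (FISTA) for (6) with the Frobenius norm.
    f    : (M, n1, n2) zero-filled k-space data, one slice per contrast
    mask : (n1, n2) binary sampling pattern"""
    M, n1, n2 = f.shape
    d1, d2 = dhat((n1, n2))
    V = np.zeros((M, 2, n1, n2), dtype=complex)
    W, V_old, t = V.copy(), V.copy(), 1.0
    tau = 1.0 / mu                          # valid step size since ||P F|| <= 1
    for _ in range(n_iter):
        grad = np.empty_like(W)             # gradient of mu*g(W) with g as in (11)
        for i in range(M):
            for j, d in enumerate((d1, d2)):
                r = mask * np.fft.fft2(W[i, j], norm='ortho') - d * f[i]
                grad[i, j] = mu * np.fft.ifft2(mask * r, norm='ortho')
        V = frob_shrink(W - tau * grad, tau)                  # proximal step (7)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))      # (8)
        W = V + ((t - 1.0) / t_new) * (V - V_old)             # (9)
        V_old, t = V, t_new
    return V
```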

In the remainder of this section, we answer the following three questions, which are critical to the proposed joint image reconstruction method based on (6):

  • In what situations/applications can the data fidelity of the image $u$ be converted into a data fidelity of the Jacobian $V$? (Section 3.3)

  • How do we obtain the closed-form solution of the $V$-subproblem (7) for matrix norms, particularly for the three matrix norms in Section 3.1? (Section 3.4)

  • How do we reconstruct the image $u$ from the Jacobian $V$ obtained by Algorithm 1? (Section 3.5)

3.3 Formulating fidelity of Jacobian

In an extensive variety of imaging technologies, especially in medical imaging, the image data are acquired in transform domains. Among the common transforms, the Fourier transform and the Radon transform are the two most widely used in medical imaging. In what follows, we show that the relation between an image and its (undersampled) data can easily be converted into a relation between the Jacobian and the data in the Fourier domain. This allows us to derive the data fidelity of the Jacobian and reconstruct it properly for MRI reconstruction problems. A similar idea can be applied to the case of Radon transforms, subject to modifying the data fidelity and the noise distribution in a straightforward manner. In this paper, we focus on the Fourier case only, as our data are multi-contrast MRI.

Data transformation. Imaging technologies such as magnetic resonance imaging (MRI) and radar imaging are based on the Fourier transform of images. The image data are the Fourier coefficients of the image acquired in the Fourier domain (also known as the $k$-space in the MRI community). The inverse problem of image reconstruction in such technologies often refers to recovering the image from partial (i.e., undersampled) Fourier data.

In the discrete setting, let $u\in\mathbb{C}^N$ denote the image to be reconstructed, $F\in\mathbb{C}^{N\times N}$ the (discrete) Fourier transform matrix (hence a unitary matrix), $P\in\{0,1\}^{N\times N}$ the diagonal matrix with binary values ($0$ or $1$) on its diagonal representing the undersampling pattern (also called the mask in $k$-space), and let $f\in\mathbb{C}^N$ be the observed partial data with $0$ at the unsampled locations. Therefore, the relation between the underlying image $u$ and the observed partial data $f$ is given by $PFu=f$.

The gradient (partial derivatives) of an image can be regarded as a convolution. The Fourier transform, on the other hand, is well known to convert a convolution into a simple point-wise multiplication in the transform domain. In the discrete setting, this simply means that $\hat{D}_j:=FD_jF^{-1}$ is a diagonal matrix, where $D_j$ is the discrete partial derivative (e.g., forward finite difference) operator along the $j$-th direction ($j=1,2$). This amounts to a straightforward formulation for the data fidelity of $V$: let $V_j=D_ju$ be the partial derivative of $u$ in the $j$-th direction; then we have

$PFV_j = PFD_ju = P\hat{D}_jFu = \hat{D}_jPFu = \hat{D}_jf$   (10)

where we used the fact that both $P$ and $\hat{D}_j$ are diagonal matrices and hence commute. Therefore, the data fidelity term $g(V)$ in (6) can be, for example, formulated as

$g(V) = \frac{1}{2}\sum_{j=1}^{2}\big\|PFV_j - \hat{D}_jf\big\|_2^2$   (11)

As long as Fourier data are concerned, the data fidelity of the corresponding gradients can be formulated in the same way as above.
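The commuting relation behind (10) is easy to verify numerically; the short check below uses periodic (circular) forward differences and is purely illustrative.

```python
import numpy as np

n1, n2 = 64, 64
u = np.random.rand(n1, n2)

# circular forward differences along the two axes
d1u = np.roll(u, -1, axis=0) - u
d2u = np.roll(u, -1, axis=1) - u

# Fourier symbols of the same operators (the diagonal of F D_j F^{-1})
dhat1 = np.exp(2j * np.pi * np.fft.fftfreq(n1))[:, None] - 1.0
dhat2 = np.exp(2j * np.pi * np.fft.fftfreq(n2))[None, :] - 1.0

# F(D_j u) equals Dhat_j * F(u): the k-space data of a partial derivative is a
# point-wise re-weighting of the k-space data of the image itself
assert np.allclose(np.fft.fft2(d1u), dhat1 * np.fft.fft2(u))
assert np.allclose(np.fft.fft2(d2u), dhat2 * np.fft.fft2(u))
```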

Noise transformation. We now consider the noise distribution in image reconstruction from Fourier data. Suppose the noise is due to acquisition in the Fourier transform domain, such that

$f(\omega) = \hat{u}(\omega) + n(\omega)$   (12)

for each sampled frequency $\omega$ in the Fourier domain. Here $\hat{u}=Fu$ is the Fourier transform of the image $u$. By multiplying $\hat{d}_j(\omega)$, the diagonal entry of $\hat{D}_j$ at frequency $\omega$, on both sides of (12), we see that the fidelity of $V_j$ can be obtained from $\hat{d}_j(\omega)f(\omega) = \widehat{V}_j(\omega) + \hat{d}_j(\omega)n(\omega)$.

Suppose that $n(\omega)=n_{\mathrm{re}}(\omega)+\mathrm{i}\,n_{\mathrm{im}}(\omega)$, where $n_{\mathrm{re}}(\omega)$ and $n_{\mathrm{im}}(\omega)$ are the real and imaginary parts of $n(\omega)$, respectively, with $n_{\mathrm{re}}(\omega)\sim N(0,\sigma_{\mathrm{re}}^2)$ and $n_{\mathrm{im}}(\omega)\sim N(0,\sigma_{\mathrm{im}}^2)$ independent for all $\omega$. Then the transformed noise terms $\hat{d}_j(\omega)n(\omega)$ are independent for different $\omega$ and distributed as the bivariate Gaussian

$\hat{d}_j(\omega)n(\omega)\ \sim\ N\!\left(0,\ \begin{pmatrix} a^2\sigma_{\mathrm{re}}^2+b^2\sigma_{\mathrm{im}}^2 & ab(\sigma_{\mathrm{re}}^2-\sigma_{\mathrm{im}}^2)\\ ab(\sigma_{\mathrm{re}}^2-\sigma_{\mathrm{im}}^2) & b^2\sigma_{\mathrm{re}}^2+a^2\sigma_{\mathrm{im}}^2 \end{pmatrix}\right)$   (13)

where $a=\mathrm{Re}(\hat{d}_j(\omega))$ and $b=\mathrm{Im}(\hat{d}_j(\omega))$. Therefore, we can readily obtain the maximum likelihood of $\widehat{V}_j(\omega)$ and hence the data fidelity of $V_j$. In particular, if $\sigma_{\mathrm{re}}=\sigma_{\mathrm{im}}=\sigma$, then the real and imaginary parts of the transformed noise are i.i.d. $N(0,|\hat{d}_j(\omega)|^2\sigma^2)$, i.e., $\hat{d}_j(\omega)n(\omega)\sim N(0,|\hat{d}_j(\omega)|^2\sigma^2 I_2)$. In this case, we denote $w_j(\omega)=|\hat{d}_j(\omega)|$; then the data fidelity (i.e., negative log-likelihood) of $V_j$ simply becomes $\frac{1}{2}\|(PFV_j-\hat{D}_jf)/w_j\|_2^2$ with point-wise division by $w_j$, in contrast to the unweighted fidelity term $\frac{1}{2}\|PFV_j-\hat{D}_jf\|_2^2$ in (11). In the remainder of this paper, we assume that the standard deviations of the real and imaginary parts are both $\sigma$. Furthermore, in the numerical experiments, we can pre-compute $w_j$ and only perform an additional point-wise division by $w_j$ after computing $PFV_j-\hat{D}_jf$ in each iteration.
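Under this equal-variance assumption, the adaptive weighting amounts to pre-computing $w_j=|\hat{d}_j|$ once and dividing the k-space residual by it point-wise. A small sketch (illustrative names, periodic forward differences, and the vanishing zero-frequency symbol clipped to keep the division finite):

```python
import numpy as np

def gradient_weights(shape, eps=1e-8):
    """w_j(omega) = |dhat_j(omega)|: standard deviation of the transformed noise up to sigma."""
    n1, n2 = shape
    w1 = np.abs(np.exp(2j * np.pi * np.fft.fftfreq(n1)) - 1.0)[:, None]
    w2 = np.abs(np.exp(2j * np.pi * np.fft.fftfreq(n2)) - 1.0)[None, :]
    # the derivative symbols vanish at the zero frequency; clip so the division stays finite
    w1 = np.broadcast_to(np.maximum(w1, eps), shape).copy()
    w2 = np.broadcast_to(np.maximum(w2, eps), shape).copy()
    return w1, w2

# usage in the weighted fidelity: replace the residual  mask*fft2(V_j) - dhat_j*f
# by its whitened version  (mask*fft2(V_j) - dhat_j*f) / w_j
```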

3.4 Closed form solution of matrix-valued shrinkage

As we can see, the algorithm step (7) requires, at every pixel, the solution of a problem of the type

$\min_{X}\ \tau\|X\|_{\sharp} + \frac{1}{2}\|X-B\|_F^2$   (14)

for a specific matrix norm $\|\cdot\|_{\sharp}$, a given matrix $B$, and a parameter $\tau>0$. In what follows, we provide the closed-form solutions of (14) when the matrix norm $\|\cdot\|_{\sharp}$ is the Frobenius norm, the induced 2-norm, or the nuclear norm, as introduced in Section 3.1. The derivations are provided in Appendix A.

  • Frobenius norm. This is the simplest case, since $\|X\|_F$ treats $X$ as a vector in (14), for which the shrinkage has a closed-form solution. More specifically, the solution of (14) is

    $X^* = \max\{\|B\|_F-\tau,\,0\}\,\dfrac{B}{\|B\|_F}$   (15)

    with the convention $X^*=0$ if $B=0$.
  • Induced 2-norm. This is the norm advocated for vectorial TV in [14], and we can provide a closed-form solution of (14) as

    $X^* = B - \tau\, u_1v_1^{\top}$   (16)

    where $u_1$ and $v_1$ are the left and right singular vectors corresponding to the largest singular value of $B$.

  • Nuclear norm. This norm promotes low rank and yields a closed-form solution of (14) as

    $X^* = U\max\{\Sigma-\tau I,\,0\}V^{\top}$   (17)

    where $B=U\Sigma V^{\top}$ is the singular value decomposition (SVD) of $B$ and the maximum is taken entry-wise.

The computation of (15) is essentially the shrinkage of a vector and hence very cheap. The computations of (16) and (17) involve a (reduced) SVD; however, explicit formulas also exist, as the matrices have the tiny size of $M$-by-$2$ ($M$-by-$3$ if the images are 3D), where $M$ is the number of image channels/modalities.

Remark. It is worth noting that the computation of (14) is carried out at every pixel independently of the others in each iteration. This allows straightforward parallel computing, which can further reduce real-world computation time.
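The following short numpy transcription of (15)–(17) for a single $M$-by-$2$ block B may help fix ideas; the function names are ours, and the induced 2-norm case follows the derivation of Appendix A with a conservative clip when $\tau$ exceeds the largest singular value (our assumption).

```python
import numpy as np

def shrink_frobenius(B, tau):
    """(15): treat B as a vector and apply standard soft-shrinkage."""
    nrm = np.linalg.norm(B)
    return np.zeros_like(B) if nrm == 0 else max(nrm - tau, 0.0) / nrm * B

def shrink_induced2(B, tau):
    """(16): subtract tau times the leading rank-one factor u1 v1^T of B."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return B - min(tau, s[0]) * np.outer(U[:, 0], Vt[0, :])   # clipped at s[0] (assumption)

def shrink_nuclear(B, tau):
    """(17): soft-threshold all singular values of B."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```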

3.5 Reconstruct image from gradients

Once the Jacobian $V$ is reconstructed from the data, the final step is to assemble the image from $V$. Since this step is performed for each modality separately, the problem reduces to the reconstruction of a scalar-valued image $u$ from its gradient $V=(V_1,V_2)$. In [20, 21], the image is reconstructed by solving a Poisson equation, since $V_1$ and $V_2$ are the partial derivatives of $u$. The boundary condition of this Poisson equation can be either Dirichlet or Neumann, depending on the property of the imaging modality. In medical imaging applications, such as MRI, CT, and PET, the boundary condition is simply $u=0$, since near the image boundary $\partial\Omega$ there is often just background.

Numerically, it is more straightforward to recover $u$ by solving the following minimization

$\min_{u}\ \frac{1}{2}\sum_{j=1}^{2}\|D_ju - V_j\|_2^2 + \frac{\alpha}{2}\|PFu - f\|_2^2$   (18)

with some parameter $\alpha>0$ weighting the data fidelity of $u$. The solution is easy to compute since the objective is smooth, and oftentimes a closed-form solution exists. For example, in the MRI case, where the data fidelity is $\frac{\alpha}{2}\|PFu-f\|_2^2$, the solution of (18) is given by

$u = F^{-1}\Big[\Big(\textstyle\sum_{j=1}^{2}|\hat{D}_j|^2+\alpha P\Big)^{-1}\Big(\textstyle\sum_{j=1}^{2}\hat{D}_j^{*}FV_j+\alpha Pf\Big)\Big]$   (19)

where $\hat{D}_j^{*}$ denotes the complex conjugate of $\hat{D}_j$, and the matrix $\sum_{j}|\hat{D}_j|^2+\alpha P$ is diagonal and hence trivial to invert. The main computations are just a few Fourier transforms.

In fact, it is often sufficient to retrieve the base intensity of $u$ from the data in order to reconstruct $u$ from $V$, and hence the result is not sensitive to $\alpha$ when solving (18). It is also worth pointing out that, by setting $\alpha=0$, the minimization (18) is just a least squares problem whose normal equation is the Poisson equation mentioned above.
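A short numpy sketch of the closed-form solve (19) for one contrast, under our assumptions of periodic forward differences, orthonormal FFTs, and a sampled k-space center (so the diagonal denominator never vanishes); names are illustrative.

```python
import numpy as np

def assemble_image(V1, V2, f, mask, alpha=1.0):
    """Solve (18): 0.5*||D1 u - V1||^2 + 0.5*||D2 u - V2||^2 + 0.5*alpha*||P F u - f||^2,
    diagonalized by the FFT as in (19)."""
    n1, n2 = f.shape
    d1 = (np.exp(2j * np.pi * np.fft.fftfreq(n1)) - 1.0)[:, None]
    d2 = (np.exp(2j * np.pi * np.fft.fftfreq(n2)) - 1.0)[None, :]
    numer = (np.conj(d1) * np.fft.fft2(V1, norm='ortho')
             + np.conj(d2) * np.fft.fft2(V2, norm='ortho')
             + alpha * mask * f)
    denom = np.abs(d1) ** 2 + np.abs(d2) ** 2 + alpha * mask   # diagonal; assumes DC is sampled
    return np.fft.ifft2(numer / denom, norm='ortho')
```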

To summarize, we propose the two-step Algorithm 2, which restores the image edges and then assembles the images for multi-contrast MRI reconstruction. The two steps are each executed only once (no outer iteration). Step 1 itself requires iterations, which converge quickly at the rate $O(1/k^2)$, where $k$ is the iteration number. Step 2 has the closed-form solution (19) for multi-contrast MRI reconstruction.

Input: Multi-contrast partial Fourier data $f_i$ and mask $P$ for contrasts $i=1,\dots,M$. Initial $u^0$ and $V^0=\nabla u^0$.
Step 1: Jointly reconstruct the Jacobian $V$ using Algorithm 1.
Step 2: Reconstruct the image $u_i$ of contrast $i$ from $V_i$ and $f_i$ by (19), for $i=1,\dots,M$.
Output: Multi-contrast image $u=(u_1,\dots,u_M)$.
Algorithm 2 Two-step edge-based reconstruction for multi-contrast MRI

4 Numerical Results

In this section, we conduct a series of numerical experiments on synthetic and real multi-contrast MRI datasets using Algorithm 2 (for short, we call it ER, standing for Edge-based Reconstruction, without the adaptive weighting, and ER-weighted for the variant with the adaptive weighting of Section 3.3). For comparison, we also obtained implementations of two state-of-the-art methods for joint image reconstruction: the multi-contrast MRI method Bayesian CS (BCS, code: http://martinos.org/~berkin/Bayesian_CS_Prior.zip) [2], and the Fast Composite Splitting Algorithm for multi-contrast MRI (FCSA-MT, code: http://ranger.uta.edu/~huang/codes/Code_MRI_MT.zip) [17], both publicly available online. We tuned the parameters of each method so that it performs nearly optimally. In particular, FCSA-MT appears to be sensitive to the noise level, and its TV and wavelet weights had to be set to very different values for the two noise levels, whereas the weights of BCS and of ER/ER-weighted, including the fidelity weight in (19) used to obtain the images from the edges, were kept the same for both noise levels.

The image datasets we use are a 2D multi-contrast Shepp-Logan phantom, a simulated multi-contrast brain image obtained from BrainWeb (http://brainweb.bic.mni.mcgill.ca/brainweb/), and a 2D in-vivo brain image included in the FCSA-MT code. All images contain three contrasts. In particular, the three contrasts in the BrainWeb and in-vivo brain datasets represent the T1, T2, and PD images. The undersampling patterns and ratios are presented below with the results. The experiments are performed in the Matlab computing environment on a Mac OS with an Intel i7 CPU and 16GB of memory. Complex-valued Gaussian white noise at two standard deviation levels was added to the k-space data for the simulated experiments. We use the full Fourier data to obtain the reference (ground truth) image $u_i^*$ of each contrast $i$. The relative ($L^2$) error of a reconstruction $u_i$ for contrast $i$ is defined by $\|u_i-u_i^*\|_2/\|u_i^*\|_2$ and is used to measure the error quantitatively for $i=1,\dots,M$.

4.1 Comparison of different matrix norms

Our first test concerns the performance of the three matrix norms, namely the Frobenius norm, the induced 2-norm, and the nuclear norm, when used in Algorithm 2. These norms correspond to three commonly used vectorial TV regularizations in multi-modal/channel/contrast joint image reconstruction. All three norms yield closed-form solutions for subproblem (7), as shown in Section 3.4. We apply the proposed Algorithm 2 with these three norms to different combinations of image data and noise levels, and observe very similar performance. For demonstration, we show a typical result using the BrainWeb data with no noise in the left panel of Figure 1. As we can see, all norms yield very similar accuracy in terms of reconstruction error.

In terms of real-world computational cost, however, the Frobenius norm treats the matrix as a vector, and hence the shrinkage (15) is very cheap to compute, while both the induced 2-norm and the nuclear norm require a (reduced) SVD in (16) and (17), which can be much slower despite the matrices having small size ($M$-by-$2$ for 2D images with $M$ modalities/contrasts). We show the same trajectories of relative error, but versus CPU time, in the right panel of Figure 1, and it appears that the Frobenius norm is the most cost-effective choice for this test. In the remainder of this section, we only use the Frobenius norm (corresponding to the vectorial TV $\int_{\Omega}\|\nabla u(x)\|_F\,dx$, i.e., the JTV) for the proposed Algorithm 2, and not the induced 2-norm (corresponding to $\int_{\Omega}\|\nabla u(x)\|_2\,dx$) or the nuclear norm (corresponding to $\int_{\Omega}\|\nabla u(x)\|_*\,dx$).

Figure 1: Comparisons of Frobenius norm, induced 2-norm, and nuclear norm for Algorithm 2 (ER) on the BrainWeb image without noise. (a) Relative error vs number of iterations; (b) Relative error vs CPU time.

4.2 Comparisons on multi-contrast MRI datasets

Now we compare the proposed algorithm with BCS and FCSA-MT on multi-contrast MRI datasets. We first test the comparison algorithms on the multi-contrast Shepp-Logan phantom image. To demonstrate the robustness of the proposed algorithm, we use a radial mask in Fourier space with a low sampling ratio (13.5%, shown in Figure 3), and add complex-valued white Gaussian noise (with the same standard deviation for the real and imaginary parts) to the undersampled Fourier data. We then apply all comparison algorithms, namely BCS, FCSA-MT, and the proposed ER and ER-weighted, to the undersampled noisy Fourier data to reconstruct the multi-contrast image. For each method, we record the progress of the relative error and show the relative error versus CPU time in Figure 2. From these plots, we can see that the proposed Algorithm 2 (ER/ER-weighted) achieves higher efficiency, as it reduces the reconstruction error faster than the comparison algorithms BCS and FCSA-MT. Moreover, the proposed algorithm with the weights $w_j$ (ER-weighted) incorporates the transformed noise distribution and further improves reconstruction accuracy. We observe that ER-weighted sometimes runs slightly slower than ER in terms of CPU time (although it is always faster in terms of iteration number, which is not shown here), since ER-weighted requires an additional point-wise division by $w_j$. We believe this can be resolved by further optimizing the code, because complexity-wise the additional computation required by ER-weighted is rather low compared to the other operations (such as Fourier transforms) in ER/ER-weighted.

Besides the plots of relative error versus CPU time, we also show the final reconstructed images produced by these methods in Figure 3 for one of the noise levels. In Figure 3, the radial mask of sampling ratio 13.5% is shown in the upper left corner (white pixels indicate sampled locations). The final reconstructed multi-contrast images produced by the four comparison algorithms are shown in the middle column with the algorithm names on the left. In the right column, we also show the corresponding error images $|u_i-u_i^*|$, where $u_i^*$ is the ground truth obtained using the full Fourier data. As we can see, the reconstruction error of the proposed algorithm is very small compared to the others, showing in particular less visible error along edges. This demonstrates the improved reconstruction accuracy of our method.

Figure 2: Relative error vs CPU time of the comparison algorithms on the Shepp-Logan phantom image at the two noise levels (top and bottom rows).
Figure 3: Undersampling mask, reconstructed images, and error images for the Shepp-Logan image at one of the noise levels.

We also conduct the same test using the BrainWeb image with a Poisson mask of sampling ratio 25%, again at the two noise levels. The results are shown in Figure 4. For this “near real” image, our method again shows a significant improvement in efficiency compared to the other methods. In particular, ER-weighted outperforms all methods in both cases, suggesting its superior robustness in multi-contrast image reconstruction. The final reconstructed images and error images, analogous to those for the Shepp-Logan image, are plotted in Figure 5 for one of the noise levels.

Figure 4: Relative error vs CPU time of the comparison algorithms on the BrainWeb image at the two noise levels (top and bottom rows).
Figure 5: Undersampling mask, reconstructed images, and error images for the BrainWeb image at one of the noise levels.

Finally, we conduct the same test on the in-vivo brain image with a radial mask of sampling ratio 25%. The relative error versus CPU time plots for the two noise levels are shown in Figure 6. For this image, our method again shows very promising performance, with significant improvements over the existing methods. The final reconstructed images and error images are shown in Figure 7 for one of the noise levels.

Figure 6: Relative error vs CPU time of the comparison algorithms on the in-vivo brain image at the two noise levels (top and bottom rows).
Figure 7: Undersampling mask, reconstructed images, and error images for the in-vivo brain image at one of the noise levels.

5 Conclusion

We proposed to reconstruct the Jacobian of a multi-modal/contrast/channel image, from which the underlying image can then be assembled. We showed the relation between the Jacobian and the observed data when the underlying transform is the Fourier transform, and formulated the reconstruction of the Jacobian as an $\ell_1$-type minimization. The new method exhibits an optimal convergence rate of $O(1/k^2)$, which outperforms the $O(1/k)$ rate of primal-dual based algorithms. We also derived closed-form solutions for the minimization subproblem as matrix-valued shrinkages, so the per-iteration complexity is very low. Numerical results demonstrated the promising performance of the proposed method when compared to state-of-the-art joint image reconstruction methods.

Acknowledgement

The authors would like to thank Yao Xiao and Yun Liang for assisting Fang to run the MATLAB code and collect numerical results. This research is supported in part by National Science Foundation grant DMS-1719932 (Chen), IIS-1564892 (Fang), DMS-1620342 (Ye) and CMMI-1745382 (Ye), and National Key Research and Development Program of China No: 2016YFC1300302 (Fang) and National Natural Science Foundation of China No: 61525106 (Fang).

Appendix

A Computation of matrix-valued shrinkage

We show that the minimization problem (14) has a closed-form solution when the matrix norm $\|\cdot\|_{\sharp}$ is the Frobenius norm, the induced 2-norm, or the nuclear norm.

  • Frobenius norm. In this case, all matrices can be considered as vectors, and hence the standard vector-valued shrinkage formula can be applied directly. We omit the derivation here.

  • Induced 2-norm. In this case, (14) becomes

    $\min_{X}\ \max_{\|p\|_2=\|q\|_2=1}\ \tau\,p^{\top}Xq + \frac{1}{2}\|X-B\|_F^2$   (20)

    where $p\in\mathbb{R}^m$ and $q\in\mathbb{R}^n$ are unit vectors, since $\|X\|_2=\max_{\|p\|_2=\|q\|_2=1}p^{\top}Xq$.

    Switching the min and max, and solving for $X$ with $p$ and $q$ fixed, we obtain $X=B-\tau pq^{\top}$, and the dual problem of (20) becomes

    $\max_{\|p\|_2=\|q\|_2=1}\ \tau\,p^{\top}Bq - \frac{\tau^2}{2}$   (21)

    where we used the facts that $\|pq^{\top}\|_F=1$ and $p^{\top}(pq^{\top})q=1$. Therefore, the optimal $p$ and $q$ are the left and right singular vectors of $B$ corresponding to its largest singular value, and hence the optimal solution of (20) is $X^*=B-\tau pq^{\top}$.

    To obtain $X^*$, one needs to compute the leading singular vectors of $B$, or the SVD of $B$. Note that $B$ has size $M$-by-$2$, and hence a reduced SVD of $B$ is sufficient. In particular, for two-column matrices there is a closed-form expression for the SVD of $B$, since $B$ has at most two singular values and its SVD is easy to compute (a small sketch of such a reduced SVD is given after this list).

  • Nuclear norm. This corresponds to the standard nuclear norm shrinkage: compute the SVD $B=U\Sigma V^{\top}$ and set $X^*=U\max\{\Sigma-\tau I,\,0\}V^{\top}$ as in (17). The computational complexity is comparable to the induced 2-norm case above.
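For completeness, here is a small sketch (our own, real-valued $B$ assumed) of such a reduced SVD of an $M$-by-$2$ block obtained from the eigen-decomposition of its $2\times 2$ Gram matrix.

```python
import numpy as np

def reduced_svd_2col(B):
    """Reduced SVD of an M-by-2 real matrix via its 2-by-2 Gram matrix B^T B."""
    G = B.T @ B                              # 2x2 symmetric positive semi-definite
    evals, V = np.linalg.eigh(G)             # eigenvalues in ascending order
    evals, V = evals[::-1], V[:, ::-1]       # sort descending
    s = np.sqrt(np.maximum(evals, 0.0))      # singular values of B
    U = B @ V / np.maximum(s, 1e-15)         # left singular vectors (columns), guarded at s = 0
    return U, s, V.T                         # B is approximately U @ np.diag(s) @ V.T
```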

References

  • [1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
  • [2] B. Bilgic, V. K. Goyal, and E. Adalsteinsson. Multi-contrast reconstruction with bayesian compressed sensing. Magnetic Resonance in Medicine, 66(6):1601–1615, 2011.
  • [3] P. Blomgren and T. F. Chan. Color tv: total variation methods for restoration of vector-valued images. IEEE transactions on image processing, 7(3):304–309, 1998.
  • [4] X. Bresson and T. F. Chan. Fast dual minimization of the vectorial total variation norm and applications to color image processing. Inverse problems and imaging, 2(4):455–484, 2008.
  • [5] S. Di Zenzo. A note on the gradient of a multi-image. Computer vision, graphics, and image processing, 33(1):116–125, 1986.
  • [6] J. Duran, M. Moeller, C. Sbert, and D. Cremers. Collaborative total variation: a general framework for vectorial tv models. SIAM Journal on Imaging Sciences, 9(1):116–151, 2016.
  • [7] M. Ehrhardt. Joint reconstruction for multi-modality imaging with common structure. PhD thesis, UCL (University College London), 2015.
  • [8] M. J. Ehrhardt and S. R. Arridge. Vector-valued image processing by parallel level sets. IEEE Transactions on Image Processing, 23(1):9–18, 2014.
  • [9] M. J. Ehrhardt and M. M. Betcke. Multicontrast mri reconstruction with structure-guided total variation. SIAM Journal on Imaging Sciences, 9(3):1084–1106, 2016.
  • [10] M. J. Ehrhardt, P. Markiewicz, M. Liljeroth, A. Barnes, V. Kolehmainen, J. S. Duncan, L. Pizarro, D. Atkinson, B. F. Hutton, S. Ourselin, et al. Pet reconstruction with an anatomical mri prior using parallel level sets. IEEE transactions on medical imaging, 35(9):2189–2199, 2016.
  • [11] M. J. Ehrhardt, K. Thielemans, L. Pizarro, D. Atkinson, S. Ourselin, B. F. Hutton, and S. R. Arridge. Joint reconstruction of pet-mri by exploiting structural similarity. Inverse Problems, 31(1):015001, 2015.
  • [12] M. J. Ehrhardt, K. Thielemans, L. Pizarro, P. Markiewicz, D. Atkinson, S. Ourselin, B. F. Hutton, and S. R. Arridge. Joint reconstruction of pet-mri by parallel level sets. In Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2014 IEEE, pages 1–6. IEEE, 2014.
  • [13] V. Estellers, S. Soatto, and X. Bresson. Adaptive regularization with the structure tensor. IEEE Transactions on Image Processing, 24(6):1777–1790, 2015.
  • [14] B. Goldluecke and D. Cremers. An approach to vectorial total variation based on geometric measure theory. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 327–333. IEEE, 2010.
  • [15] M. Grasmair and F. Lenzen. Anisotropic total variation filtering. Applied Mathematics & Optimization, 62(3):323–339, 2010.
  • [16] E. Haber and M. H. Gazit. Model fusion and joint inversion. Surveys in Geophysics, 34(5):675–695, 2013.
  • [17] J. Huang, C. Chen, and L. Axel. Fast multi-contrast mri reconstruction. Magnetic resonance imaging, 32(10):1344–1352, 2014.
  • [18] F. Lenzen and J. Berger. Solution-driven adaptive total variation regularization. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 203–215. Springer, 2015.
  • [19] A. Majumdar and R. K. Ward. Joint reconstruction of multiecho mr images using correlated sparsity. Magnetic resonance imaging, 29(7):899–906, 2011.
  • [20] V. M. Patel, R. Maleh, A. C. Gilbert, and R. Chellappa. Gradient-based image recovery methods from incomplete fourier measurements. IEEE Transactions on Image Processing, 21(1):94–105, 2012.
  • [21] E. Sakhaee, M. Arreola, and A. Entezari. Gradient-based sparse approximation for computed tomography. In Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, pages 1608–1611. IEEE, 2015.
  • [22] G. Sapiro. Vector-valued active contours. In Computer Vision and Pattern Recognition, 1996. Proceedings CVPR’96, 1996 IEEE Computer Society Conference on, pages 680–685. IEEE, 1996.
  • [23] G. Sapiro and D. L. Ringach. Anisotropic diffusion of multivalued images with applications to color filtering. IEEE transactions on image processing, 5(11):1582–1586, 1996.