Multiscale Higher Order TV Operators for $\ell_1$ Regularization


Toby Sanders and Rodrigo B. Platte
School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, USA.
Abstract

In the realm of signal and image denoising and reconstruction, $\ell_1$ regularization techniques have generated a great deal of attention with a multitude of variants. A key component of their success is that under certain assumptions, the solution of minimum $\ell_1$ norm is a good approximation to the solution of minimum $\ell_0$ norm. In this work, we demonstrate that this approximation can result in artifacts that are inconsistent with desired sparsity promoting properties, resulting in subpar results in some instances. With this as our motivation, we develop a multiscale higher order total variation (MHOTV) approach, which we show is related to the use of multiscale Daubechies wavelets. We also develop the tools necessary for MHOTV computations to be performed efficiently, via operator decomposition and alternatively by converting the problem into Fourier space. The relationship of higher order regularization methods with wavelets, which we believe has generally gone unrecognized, is shown to hold in several numerical results, although notable improvements are seen with our approach over both wavelets and classical HOTV.

1 Introduction

Over the past couple of decades, regularization techniques such as total variation (TV) have become increasingly popular methods for image and signal denoising and reconstruction problems. Along with TV [28], a large variety of similar regularization approaches have been proposed for an array of problems. Signal and image recovery methods continue to attract a great deal of interest due to the wide variety of potential applications and the ever increasing means of various sensing mechanisms to acquire data. To name a few, synthetic aperture radar (SAR) [46, 2], magnetic resonance imaging (MRI) [23, 24, 25], electron tomography [21, 33], and inpainting [32, 19] are all image recovery applications that have advanced in part due to regularization methods, and in each case the approach can be tailored to the challenges that the particular application poses. With many problems such as MRI and electron tomography, the challenge is often to acquire as little data as necessary, due to possible damage to the subject being imaged or because of time constraints, driving the need for inverse methods that can achieve the absolute best results from very limited and noisy data.

The mathematical description of the general problem we are interested in is to recover a signal or image $f \in \mathbb{R}^N$ from noisy measurements of the form $b = Af + \epsilon$, where $A$ is some sensing matrix that approximates the physical model of the particular problem. Then the regularized solution is given by

$$ \hat f = \operatorname*{arg\,min}_{f}\; \lambda \|Tf\|_1 + \frac{1}{2}\|Af - b\|_2^2, \qquad (1)$$

where $T$ is some sparsifying linear transform and $\lambda$ is a parameter that balances the effects of the data and regularization terms. The appropriateness of this approach is that some prior knowledge of the signal suggests that $Tf$ is sparse, and that the formulation with the $\ell_1$ norm encourages such sparsity [12, 5, 6]. In many applications some knowledge of the appropriate transform is available; with images and many other signals, this knowledge comes in the form of some “smoothness.”

In the case of TV, the sparsifying transform is given by $Tf = Df$, where $(Df)_j = f_{j+1} - f_j$. The general idea for this approach is that the signal is assumed to be piecewise constant with a few discontinuities, in which case $Df$ is sparse. If this is not precisely true, this approach still effectively reduces unwanted oscillations at the cost of the well documented stair-casing effect [8, 3]. However, for more general piecewise smooth functions higher order TV (HOTV) regularization methods are effective [8, 4, 17], and they do not suffer from the stair-casing effects. In this case the transform $T$ maps $f$ to approximations of discrete derivatives of $f$, e.g. higher order finite differences of $f$.

Another popular choice for $T$ are wavelet transforms [36, 26, 23]. For instance, such a transform can be written as $Tf = Wf$, where $W$ is orthogonal so that $W^T W = W W^T = I$. The idea here is that for appropriately smooth signals, most of the signal’s energy is captured in the few low frequency, larger scaled elements of the basis. Thus most of the coefficients can be neglected, and a sparse approximation of $f$ exists with respect to the basis.

1.1 Discussion and Contribution

The crux of general $\ell_1$ regularization methods is that recovering a signal with the most sparse representation, that is recovering the solution with the smallest so called $\ell_0$ norm, is often equivalent to its convexly relaxed variant of recovering the signal with the smallest $\ell_1$ norm, which is a field of study called compressed sensing (CS) [12, 5, 6]. Although convex optimization algorithms are useful in promoting sparsity, some small nonzero coefficients may still persist, an obvious sign that the assumptions needed for the exactness guarantees given by CS theory sometimes do not hold in practice. This observation is largely the original motivation for our present work in developing a multiscale HOTV approach related to multiscale wavelet regularization.

Much work has been devoted to understanding and developing sparsity promoting regularization methods, which are related to our current work. Numerous variants of higher order TV methods have been proposed [8, 1, 17]. For example, in [1] the authors propose an edge detection operator that annihilates polynomials, which leads them to operators close to finite difference matrices. In [8] a combination of a TV regularizer with a quadratic second order regularizer is developed in the continuous domain to eliminate staircasing effects. Likewise, several authors have shown combinations of first and second order methods to be beneficial [37, 4, 35, 7]. Unfortunately, since there are multiple regularization terms, these methods typically introduce additional parameters that need to be tuned. In terms of theory, it has been well documented that under certain conditions TV and HOTV are equivalent to reconstruction with splines [44, 38], i.e. the solution of such methods recovers a piecewise polynomial with a sparse set of jumps.

TV denoising in particular has several very interesting equivalences. It is well known that TV denoising and other more general first order denoising methods are equivalent to smoothing with certain nonlinear diffusion models [34], a typical result of writing the equivalent Euler-Lagrange equations. Perhaps discussed less frequently, and most related to the observations in our current work, TV denoising is equivalent to soft threshold denoising with the highest frequency basis elements of the Haar wavelets [39, 40], in particular with the so called cycle spinning [18]. In general however, the main difference between these methods is that with TV the smoothness analysis is limited to the finest scales, whereas wavelet regularizations promote function smoothness at multiple scales. A main contribution of this article is to expand further on the relationship between wavelets in regularization and those methods related to HOTV. In regards to extensions of wavelets, a number of multidimensional generalizations have been invented, including curvelets and shearlets [15, 20, 36], which are primarily used for sparse function approximation and improve the approximation rates in two and three dimensions compared with their one-dimensional counterparts.

The method we develop here is an alternative to HOTV regularization, which we refer to as multiscale HOTV (MHOTV). In contrast to previous work, our approach combines both a multiscale approach and higher order TV methods for the class of image reconstruction problems. The motivation for such an approach is the observable subpar results due to the relaxation of the sparsity promotion through the $\ell_1$ norm, contrary to the aforementioned results with splines [44, 38]. In light of this, we argue that this calls for analysis of the function behavior at multiple scales. As can be deduced, this multiscale strategy is similar to the treatment of wavelets, and we argue that our approach is indeed related to the use of Daubechies wavelets, with the main divergence coming in the orthogonality and/or frame conditions prescribed by the wavelets. Orthogonality may be unnecessary for general regularization techniques, although fundamental to thresholding denoising techniques, and the relaxation of this condition in our approach allows for better localization of the transform. In the development of MHOTV, we carefully address the computational concerns associated with our approach through the use of both the FFT and operator decompositions. We are able to show through several numerical examples that MHOTV provides a notable improvement over the current alternatives.

The organization of the remainder of the article is as follows. In section 2 we define the HOTV operators and the corresponding multiscale generalizations. We also motivate the approach via a numerical example, and make the connection with Daubechies wavelets. In section 2.1 we define the MHOTV regularization model precisely and give the normalizations needed for proper parameter selection. In section 3 we address the computational concerns associated with calculating MHOTV coefficients, devising two distinct ways that they can be calculated efficiently. In section 4 we provide numerical results for 1-D and 2-D reconstruction problems, showing that MHOTV is an improvement over the original HOTV and the related Daubechies wavelets. Some proofs and definitions are provided in the appendix.

2 HOTV and Multiscale Generalizations

As an alternative to TV regularization, general order TV methods have been shown to be effective for regularization [8, 4, 33, 1]. The TV transform can be thought of as a finite difference approximation of the first derivative, thus annihilating a function in locations where the function is a constant, i.e. a polynomial of degree zero. Likewise, higher order TV transforms can be considered higher order finite difference approximations to higher order derivatives, thus annihilating the corresponding degree polynomials. With this in mind, we have the following definition:

Definition 1 (Finite Differences).

Let $d^k \in \mathbb{R}^N$ be defined by

$$ d^k_j = \begin{cases} (-1)^j \binom{k}{j}, & 0 \leq j \leq k, \\[4pt] 0, & k < j \leq N-1. \end{cases} \qquad (2)$$

Then for $f \in \mathbb{R}^N$, the periodic $k$th order finite difference of $f$ is given by

$$ D^k f = d^k * f, $$

where $*$ denotes the discrete convolution.

Remark 1.

The convolution in this definition (and in general) can be represented by multiplication with a circulant matrix $D^k \in \mathbb{R}^{N \times N}$, where each row of $D^k$ holds a shifted version of $d^k$. An example of the matrix in the case $k = 2$ is given in (3). Note that this definition uses a periodic extension of $f$, which can be ignored by dropping the last $k$ rows of $D^k$.

$$ D^2 = \begin{pmatrix} 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \\ 1 & & & & 1 & -2 \\ -2 & 1 & & & & 1 \end{pmatrix} \qquad (3)$$
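As a brief illustration, the order $k$ difference can be applied without forming $D^k$ at all; a minimal MATLAB sketch (the helper name hotv_diff is ours, not part of any package, and the output matches $D^k f$ under the shift convention of (3)):

    function y = hotv_diff(f, k)
    % Periodic k-th order finite difference of Definition 1 (sketch);
    % f is a column vector in R^N
        y = f;
        for i = 1:k
            y = y - circshift(y, -1);   % periodic first difference, applied k times
        end
    end

Applying the loop twice, for instance, reproduces the second difference stencil [1 -2 1] of each row in (3).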

With this definition, the $k$th order HOTV model can be said to recover

$$ \hat f = \operatorname*{arg\,min}_{f}\; \lambda \|D^k f\|_1 + \frac{1}{2}\|Af - b\|_2^2. \qquad (4)$$

Unfortunately for many real world imaging problems the equivalence between the $\ell_0$ and $\ell_1$ solutions may not hold in practice, yet the regularization still tends to encourage favorable solutions. In terms of the sparsity promoting transform, this means that the transform of the recovered function may not be truly sparse; instead, most of the values are relatively close to zero. For HOTV, this means that a local Taylor expansion of the recovered function will still contain some small nonzero higher order coefficients, which are essentially unobservable at the very local scale. In other words, at some point $x_0$, there exists a polynomial expansion of $f$ given by

$$ f(x) = \sum_{j=0}^{\infty} c_j (x - x_0)^j, \qquad (5)$$

which holds for all $x$ within some small interval around the point $x_0$. Ideally the $k$th order HOTV model recovers a solution so that the coefficients $c_j$ vanish for $j \geq k$. The $\ell_1$ model allows these coefficients to remain, although very small, and the function still appears to essentially be a polynomial of degree less than $k$. However, when this behavior persists over many points at a larger scale, the result can be a function which looks more like a trigonometric polynomial rather than an algebraic one.

This phenomenon is demonstrated in Figure 1, where a piecewise polynomial of degree two was reconstructed from random noisy samples with 50% sampling (the number of samples is half the number of grid points) using TV and HOTV regularizations. The sampling matrix is constructed so that a random 10% of its entries are set to be nonzero, where these nonzero values are drawn from a uniform distribution. The samples were corrupted with normally distributed mean zero noise. Two different grid sizes are demonstrated, 256 and 1024, and it can be observed that these small oscillations become increasingly abundant with more grid points. However, in the bottom of the figure, the third order finite difference of the HOTV3 solution plotted in logarithmic scale shows that locally this oscillatory behavior results in almost exact low order polynomials, although very small amplitudes persist in the transformed domain, which is thus not truly sparse in the $\ell_0$ sense. Nevertheless, all regularization approaches should still be deemed useful, as evidenced by the least squares solution shown.
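For concreteness, the sampling setup just described can be sketched in MATLAB as follows (a minimal sketch under the stated assumptions; the signal generator my_piecewise_poly and the noise level sigma are hypothetical placeholders, and the unit interval for the nonzero entries is one plausible choice):

    % Random sparse sampling matrix with ~10% nonzero entries (sketch)
    N = 1024; m = N/2;                    % 50% sampling
    f = my_piecewise_poly(N);             % hypothetical test-signal generator
    sigma = 0.05;                         % assumed noise level
    mask = rand(m, N) < 0.10;             % ~10% of entries set to be nonzero
    A = sparse(m, N);
    A(mask) = rand(nnz(mask), 1);         % nonzero values drawn uniformly
    b = A*f + sigma*randn(m, 1);          % mean zero normally distributed noise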

Figure 1: Rows 2-4 and 7-9: reconstructions of a piecewise polynomial function of degree two, shown in the top row, over 256 (top five plots) and 1024 (bottom five plots) points from random sampling at 50%. The corresponding least squares solution is shown in the fifth plot, and the third order finite difference of the HOTV3 solution over the 1024 grid is shown on the bottom.

Due to this phenomenon we propose a multiscale HOTV approach, which considers the regularization transform at multiple scales. The idea is that a larger stencil would penalize these oscillations even with the $\ell_1$ norm. As TV generalizes to the Haar wavelet by stretching and scaling of the elements, we propose the same with HOTV. To this end we give the following definition.

Definition 2 (Multiscale Finite Differences).

Let $d^{k,m} \in \mathbb{R}^N$ be defined by

$$ d^{k,m}_j = \begin{cases} (-1)^{\lfloor j/m \rfloor} \binom{k}{\lfloor j/m \rfloor}, & 0 \leq j < (k+1)m, \\[4pt] 0, & (k+1)m \leq j \leq N-1. \end{cases} \qquad (6)$$

Then for $f \in \mathbb{R}^N$, the periodic $k$th order finite difference of scale $m$ of $f$ is given by

$$ D^{k,m} f = d^{k,m} * f, $$

where $*$ denotes the discrete convolution.

Remark 2.

Again, this convolution can be represented as multiplication with a circulant matrix $D^{k,m}$. An example of $D^{k,m}$ in the case $k = 1$ and $m = 2$ is given in (7).

$$ D^{1,2} = \begin{pmatrix} 1 & 1 & -1 & -1 & & \\ & 1 & 1 & -1 & -1 & \\ & & \ddots & \ddots & \ddots & \ddots \\ -1 & & & 1 & 1 & -1 \\ -1 & -1 & & & 1 & 1 \\ 1 & -1 & -1 & & & 1 \end{pmatrix} \qquad (7)$$
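The stencil of Definition 2 is simply the order $k$ stencil with each entry repeated $m$ times, which suggests the following sketch for applying $D^{k,m}$ (the zero padding and the FFT based periodic convolution are our implementation choices, not a prescribed algorithm):

    k = 1; m = 2; N = 8;
    dk  = (-1).^(0:k) .* arrayfun(@(j) nchoosek(k, j), 0:k);  % [1 -1] for k = 1
    dkm = kron(dk, ones(1, m));                % repeat entries: [1 1 -1 -1], cf. (7)
    f   = randn(N, 1);
    h   = [dkm, zeros(1, N - numel(dkm))]';    % zero-pad the stencil to length N
    Df  = real(ifft(fft(h) .* fft(f)));        % periodic convolution d^{k,m} * f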

2.1 MHOTV Reconstruction Model

We now present the general model for MHOTV reconstruction. Generally speaking, we still use the model presented in (1), where $A$ maps the unknown function $f$ to some perhaps noisy measurements given by $b$, from which we reconstruct $f$. Our sparsity promoting transforms are now given by the matrices $D^{k,2^{j-1}}$, for $j = 1, 2, \dots, L$, where $2^{L-1}$ is the maximum scaling of the operator used and $k$ is the chosen order. Setting our maximum scaling to $L = 1$ corresponds to the traditional HOTV approach. Although not completely necessary, we choose a dyadic scaling of the operators, similar to the treatment of wavelets. As with wavelets, we will show that this is convenient for computational purposes. Finally then our reconstruction model is given by

$$ \hat f = \operatorname*{arg\,min}_{f}\; \frac{\lambda}{2^{k-1} L} \sum_{j=1}^{L} 2^{-(j-1)} \left\| D^{k,2^{j-1}} f \right\|_1 + \frac{1}{2}\|Af - b\|_2^2. \qquad (8)$$

The factor of $2^{-(j-1)}$ is a normalization that accounts for the increasing norms of each operator, which would otherwise weigh too heavily toward the largest scaling operator (this is akin to the dyadic scaling of the wavelet basis elements after the dyadic stretching). The scaling of the parameter by $1/L$ simplifies the selection of $\lambda$, which would otherwise need to be manually scaled by such a factor to account for the number of scales being used. By similar reasoning, the additional scaling by $2^{-(k-1)}$ is used to account for the order of the method [31].
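As a minimal sketch, the regularization term in (8) may then be evaluated as follows (with the normalization factors as written above, which are our reconstruction; f, k, L, and lambda are assumed given):

    % Evaluate the MHOTV penalty in (8) over all L dyadic scales (sketch)
    N = numel(f); F = fft(f); reg = 0;
    for j = 1:L
        m  = 2^(j-1);
        dk = (-1).^(0:k) .* arrayfun(@(i) nchoosek(k, i), 0:k);
        h  = [kron(dk, ones(1, m)), zeros(1, N - (k+1)*m)]';
        reg = reg + 2^(1-j) * norm(real(ifft(fft(h) .* F)), 1);
    end
    reg = lambda * reg / (2^(k-1) * L);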

2.2 Relationship to Daubechies Wavelets

Wavelets can be characterized as an orthonormal basis that is generated through a multiresolution analysis [10, 26]. The multiresolution analysis leads to the shifting and dyadic stretching and scaling of a single generating mother wavelet, analogous to our treatment of MHOTV by shifting and stretching of a single row or element of the matrices $D^{k,m}$. From this very general characterization, there are a number of parameters in the design of the wavelets. For Daubechies wavelets the smoothness is characterized by the number of vanishing moments, i.e. the number of polynomial orders to which the wavelet is orthogonal. A wavelet with $p$ vanishing moments acts as a multiscale differential operator of order $p$. As a trade off, an increasing number of vanishing moments chosen for the wavelet basis results in an increase in the support of the wavelet functions, and Daubechies wavelets are designed to yield the orthonormal wavelet basis of minimal support given a selected number of vanishing moments [26].

To develop a basic mathematical description of a wavelet expansion, suppose we want to represent a uniform pixelated function $f$ with $N$ pixels on $[0,1]$ in terms of the wavelet basis. Then denoting our scaling function and mother wavelet with $p$ vanishing moments by $\phi$ and $\psi$ respectively, we have the following orthonormal wavelet representation

$$ f = \sum_{\ell} \langle f, \phi_{j_0,\ell} \rangle\, \phi_{j_0,\ell} + \sum_{j=j_0}^{\log_2 N - 1} \sum_{\ell} \langle f, \psi_{j,\ell} \rangle\, \psi_{j,\ell}. \qquad (9)$$

Here, $\psi_{j,\ell}(x) = 2^{j/2}\psi(2^j x - \ell)$ and similarly for $\phi_{j,\ell}$, i.e. shrinking and scaling of the generating wavelet functions. The parameter $j_0$ is a positive integer with $j_0 \leq \log_2 N$, and the value $\log_2 N - j_0$ is said to be the number of scales in the wavelet expansion (for $j_0 = \log_2 N$ it is understood that the second sum is removed). With the representation in (9), the coefficients for the scaling functions in the first sum capture most of the energy of the signal, and the wavelet coefficients $\langle f, \psi_{j,\ell}\rangle$ vanish for values of $\ell$ where $f$ is a polynomial of degree less than $p$ over the support of $\psi_{j,\ell}$. For regularization, we only need to be concerned with the wavelet coefficients in (9), and thus the coefficients for the scaling functions in the first sum are not included in the regularization.
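The increased support induced by orthogonality is easy to check numerically; a brief sketch, assuming the MATLAB Wavelet Toolbox is available:

    % Daubechies filters with p vanishing moments have length 2p, versus the
    % length p+1 HOTV stencil of the same order (here p = 3)
    [LoD, HiD] = wfilters('db3');    % decomposition low and high pass filters
    disp(numel(HiD))                 % prints 6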

Figure 2: The filters in Fourier space of wavelet and MHOTV convolution functions.

Connecting these ideas to HOTV, we see that these transforms are playing similar roles. Both are prescribed by the number of vanishing moments, or in the language of HOTV, the highest order polynomial that is annihilated by the approach. Furthermore, both are designed to yield minimal support given the number of vanishing moments. The crucial difference lies in the orthogonality condition prescribed by wavelets, which further increases the support of the wavelet elements. We emphasize again that this condition is fundamental to compression and threshold denoising methods, but not necessarily useful for general image reconstruction problems.

Finally, one additional technique utilized for regularization and denoising as well is the use of a wavelet frame, obtained by taking all possible shifts for each scaling of the wavelets, which is sometimes referred to as translation invariant cycle spinning [9, 42, 18]. This eliminates the lack of translation invariance of a wavelet basis, which can otherwise result in unwanted artifacts near discontinuities. With this in mind, we may define the wavelet frame elements by

$$ \psi_{j,s}(x) = 2^{j/2}\, \psi\!\left( 2^j \left( x - \tfrac{s}{N} \right) \right), \qquad s = 0, 1, \dots, N-1. $$

Then the averaged wavelet frame representation of $f$ may be written as

$$ f = \sum_{\ell} \langle f, \phi_{j_0,\ell} \rangle\, \phi_{j_0,\ell} + \sum_{j=j_0}^{\log_2 N - 1} \frac{1}{r_j} \sum_{s=0}^{N-1} \langle f, \psi_{j,s} \rangle\, \psi_{j,s}, $$

where $r_j$ denotes the redundancy of the frame at scale $j$. Hence a wavelet approach promotes sparsity with respect to the coefficient vectors $\{\langle f, \psi_{j,s}\rangle\}_s$ at each scale. Then a regularization norm in this setting takes the form

$$ \sum_{j=j_0}^{\log_2 N - 1} \frac{1}{r_j} \sum_{s=0}^{N-1} \left| \langle f, \psi_{j,s} \rangle \right|, \qquad (10)$$

which is analogous to our regularization norm in (8). For wavelets, the scalings are inherent to the function definitions, and the dyadic stretching of the elements is indicated by decreasing values of $j$, as opposed to increasing values of $m$ for MHOTV.

The case when $j_0 = \log_2 N - 1$, i.e. a single scale, would be most closely related to the original HOTV, and for smaller values of $j_0$ the wavelets are more comparable to the MHOTV development in this article.

Since both the MHOTV coefficients and the wavelet coefficients are computed by convolution, we may visualize the corresponding filters in Fourier space, providing another basis for comparison, which we have done in Figure 2. Each of these can be interpreted as a high pass filter, where the higher levels pass increasingly lower frequencies. A very close similarity between the wavelet filters and the MHOTV filters can be observed in Figure 2, providing strong visual confirmation of the preceding discussion on the close relationship between the two.

3 Fast Calculation of MHOTV Operators

Calculation of traditional HOTV coefficients is a computationally inexpensive task, due to the sparsity of the matrix operator. However, with increasing dyadic scales the cost of the direct calculation grows exponentially in the number of scales. Due to this, in what follows we develop two distinct approaches which show that these calculations can be carried out with only a linear increase in the flop count with respect to the number of scales used.

Fast computation of standard HOTV can be done in several ways. One can construct the sparse matrix and perform the matrix computations directly, a calculation with a runtime of $\mathcal{O}((k+1)N)$ flops. One could make use of other procedures, such as MATLAB’s “diff” command, which requires the same flop count without storing the matrix. With MHOTV, these approaches become less appealing. With matrix construction, if one is using several scales, then several matrices need to be computed and stored, and the matrices become significantly less sparse at larger scales. The “diff” command cannot be implemented directly for larger scale HOTV operators.

Another alternative is to use the Fourier convolution theorem to perform the convolution operation via a product in Fourier space. For the traditional HOTV operators, this can be fairly slow compared with the matrix and “diff” approaches, since the necessary two discrete Fourier transforms would require $\mathcal{O}(N \log_2 N)$ flops compared with the $\mathcal{O}((k+1)N)$ flops for the alternative implementations. However, this method is relatively comparable for MHOTV, since the forward Fourier transform only needs to be computed once to determine the coefficients at all scales.

We outline two procedures for efficient calculation of MHOTV. First, we describe the Fourier approach, where we derive precise formulas for the MHOTV Fourier filters. Second, we describe an alternative efficient approach by decomposition of the MHOTV matrix operators.

3.1 Computation via Fourier Transforms

By the Fourier convolution theorem, the MHOTV operators can be computed as multiplications in Fourier space, i.e.

$$ D^{k,m} f = \mathcal{F}^{-1}\left( \widehat{d^{k,m}} \odot \mathcal{F} f \right), \qquad (11)$$

where $\mathcal{F}$ denotes the discrete Fourier transform, $\widehat{d^{k,m}} = \mathcal{F} d^{k,m}$, and $\odot$ denotes the pointwise product. Although $\widehat{d^{k,m}}$ can be computed numerically, it is convenient to have an exact formula for the discrete Fourier transform of $d^{k,m}$. Moreover, analytic determination of $\widehat{d^{k,m}}$ allows us to generalize MHOTV to fractional orders.

Proposition 1.

The DFT of the vector $d^{k,m}$ defined in (6) has an explicit expression given by

$$ \widehat{d^{k,m}}_\omega = \frac{\left( 1 - e^{-2\pi i m \omega / N} \right)^{k+1}}{1 - e^{-2\pi i \omega / N}}, \qquad (12)$$

for $\omega = 1, \dots, N-1$, with $\widehat{d^{k,m}}_0 = 0$.

Proof.

The expression for the $\omega$th Fourier coefficient in the DFT of $d^{k,m}$ is given by

$$ \widehat{d^{k,m}}_\omega = \sum_{j=0}^{N-1} d^{k,m}_j\, e^{-2\pi i j \omega / N}. \qquad (13)$$

Notice that the terms with $j \geq (k+1)m$ vanish by definition of $d^{k,m}$. For the remaining terms, we make the substitution $j = \ell m + r$ with $0 \leq r < m$, where the index $\ell = 0$ corresponds to the first string of entries $d^{k,m}_0, \dots, d^{k,m}_{m-1}$ and the following indices $\ell = 1, \dots, k$ correspond to the subsequent strings, respectively. Notice that the values of $d^{k,m}_j$ repeat over strings of length $m$, so that each of these corresponding strings of exponential terms in (13) gets the same weight, leading to the following sum:

$$ \widehat{d^{k,m}}_\omega = \sum_{\ell=0}^{k} (-1)^{\ell} \binom{k}{\ell} \sum_{r=0}^{m-1} e^{-2\pi i (\ell m + r) \omega / N}. \qquad (14)$$

Here the inner sum represents the consecutive terms in (13) that receive the same weight, namely $(-1)^\ell \binom{k}{\ell}$. Switching the order of summation and factoring gives

$$ \widehat{d^{k,m}}_\omega = \left( \sum_{r=0}^{m-1} e^{-2\pi i r \omega / N} \right) \sum_{\ell=0}^{k} \binom{k}{\ell} \left( -e^{-2\pi i m \omega / N} \right)^{\ell}, \qquad (15)$$

and we recognize the sum over $\ell$ as a binomial expansion leading to

$$ \widehat{d^{k,m}}_\omega = \left( \sum_{r=0}^{m-1} e^{-2\pi i r \omega / N} \right) \left( 1 - e^{-2\pi i m \omega / N} \right)^{k}. $$

The remainder of the proof follows by elementary calculations, summing the geometric series over $r$. ∎
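A sketch of this approach in MATLAB follows (our variable names; for fractional orders the same lines apply with a noninteger value of k):

    % MHOTV coefficients at all L dyadic scales with one forward FFT, using
    % the filter formula (12); f is a column vector in R^N
    N = numel(f); w = (0:N-1)'/N;              % normalized frequencies omega/N
    F = fft(f); coef = zeros(N, L);
    for j = 1:L
        m = 2^(j-1);
        dhat = (1 - exp(-2i*pi*m*w)).^(k+1) ./ (1 - exp(-2i*pi*w));
        dhat(1) = 0;                           % limiting value at omega = 0
        coef(:, j) = real(ifft(dhat .* F));    % coefficients at scale m
    end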

3.2 Fast Computation via Operator Decomposition

In this section, we give a decomposition for the matrix operator $D^{k,m}$ and describe how this decomposition can be used for rapid calculation of the MHOTV operators. The decomposition is given in the following theorem.

Theorem 1.

Let the matrix $B_m \in \mathbb{R}^{N \times N}$ with entries $b_{ij}$ be defined by

$$ b_{ij} = \begin{cases} 1, & j - i \equiv 0 \text{ or } m \pmod{N}, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (16)$$

Then the following holds:

  1. The entries of $B_m^k$, which we denote by $b_{ij}^{(k)}$, are given by

     $$ b_{ij}^{(k)} = \binom{k}{\ell}, \qquad j - i \equiv \ell m \pmod{N}, $$

     where it is implied $\ell$ is an integer satisfying $0 \leq \ell \leq k$, and $b_{ij}^{(k)} = 0$ otherwise.

  2. $D^{k,2^{L-1}}$ has the decomposition

     $$ D^{k,2^{L-1}} = D^{k}\, B_1^{k+1}\, B_2^{k+1} \cdots B_{2^{L-2}}^{k+1}, \qquad (17)$$

     and therefore

     $$ D^{k,2^{j}} = D^{k,2^{j-1}}\, B_{2^{j-1}}^{k+1}, \qquad j = 1, \dots, L-1. \qquad (18)$$

  3. The equality in (17) holds for any rearrangement of the product of matrices.

The proof of this theorem is given in the appendix. The matrices $B_1$ and $B_2$ are shown below to illustrate the sparse structure of these operators:

$$ B_1 = \begin{pmatrix} 1 & 1 & & & \\ & 1 & 1 & & \\ & & \ddots & \ddots & \\ & & & 1 & 1 \\ 1 & & & & 1 \end{pmatrix}, \qquad B_2 = \begin{pmatrix} 1 & & 1 & & & \\ & 1 & & 1 & & \\ & & \ddots & & \ddots & \\ & & & 1 & & 1 \\ 1 & & & & 1 & \\ & 1 & & & & 1 \end{pmatrix}. $$

Proposition 2.

Direct calculation of $D^{k,m} f$ requires $\mathcal{O}((k+1)mN)$ flops. The same calculation using the decomposition in (17) requires $\mathcal{O}((k+1)N \log_2 m)$ flops. The same calculation using the Fourier method requires $\mathcal{O}(N \log_2 N)$ flops.

Proposition 2 is a direct result of Theorem 1, the Fourier convolution theorem combined with the FFT, and the flops required for the direct calculation. We assume that the FFT and inverse FFT can be computed in $\mathcal{O}(N \log_2 N)$ flops, although the exact count is somewhat vague, depending on the precise algorithm and whether $N$ is a power of 2. To compute the full set of operators, we can get away with fewer flops than adding the flops for each level. If we use the decomposition approach to calculate the operators as determined by (17), the associated computations are limited to those at the highest scale, since the intermediate scales are determined along the way, as pointed out in (18). If we use the Fourier approach for calculating the coefficients in (8), only one forward FFT is required for the function $f$. Then the product of $\widehat{d^{k,2^{j-1}}}$ and $\hat f$ must be computed for each $j$, as well as the inverse FFT for each of these products. These observations lead to the following corollary.

Corollary 1.

Let $D \in \mathbb{R}^{LN \times N}$ be the matrix containing the complete set of operators involved in the MHOTV regularization norm, so that

$$ Df = \left( \left( D^{k,1} f \right)^T, \left( D^{k,2} f \right)^T, \dots, \left( D^{k,2^{L-1}} f \right)^T \right)^T. $$

Then calculating $Df$ using the operator decomposition given in Theorem 1 requires $\mathcal{O}(L(k+1)N)$ flops. Calculating $Df$ using the Fourier approach requires a total flop count of $\mathcal{O}(L N \log_2 N)$.
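A sketch of the decomposition approach (our function name mhotv_decomp; the coefficients at every intermediate scale are produced while building the largest one, and since circulant matrices commute, statement 3 permits applying the $B$ factors after $D^k$):

    function coef = mhotv_decomp(f, k, L)
    % All dyadic-scale MHOTV coefficients of f via Theorem 1 (sketch)
        N = numel(f); coef = zeros(N, L);
        y = f;
        for i = 1:k
            y = y - circshift(y, -1);      % y = D^k f, i.e. scale m = 1
        end
        coef(:, 1) = y;
        for j = 2:L
            m = 2^(j-2);
            for i = 1:(k+1)                % apply B_m^{k+1}, as in (18)
                y = y + circshift(y, -m);  % each B_m adds entries offset by m
            end
            coef(:, j) = y;
        end
    end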

A few concluding remarks are in order.

Remark 3.

All of the results presented are for 1-D signals. For higher dimensions, say 2-D, the operators can be applied along each row and column, and the flop count per grid point is only doubled, disregarding the likely increase in the total number of grid points.

Remark 4.

To solve (8), we use the well established alternating direction method of multipliers (ADMM) [22, 14, 45]. This approach introduces splitting variables that allow one to split the objective functional into equivalent subproblems that can be solved relatively fast. Our algorithm can be downloaded at [29], and some of the simulations in the following section can also be found there.
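For completeness, a standard form of this splitting, written in our notation with a single scale shown for readability ($w$ is the splitting variable, $u$ the scaled multiplier, and $\beta > 0$ the penalty parameter, all introduced here for illustration), reads

$$ \min_{f, w}\; \lambda \|w\|_1 + \tfrac{1}{2}\|Af - b\|_2^2 \quad \text{subject to} \quad w = D^k f, $$

with the ADMM iterations

$$ w \leftarrow \operatorname{shrink}\!\left( D^k f + u,\; \lambda/\beta \right), \qquad f \leftarrow \operatorname*{arg\,min}_{f}\; \tfrac{1}{2}\|Af - b\|_2^2 + \tfrac{\beta}{2}\left\| D^k f - w + u \right\|_2^2, \qquad u \leftarrow u + D^k f - w, $$

where $\operatorname{shrink}(z, \tau) = \operatorname{sign}(z)\max(|z| - \tau, 0)$ is applied componentwise.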

4 Numerical Experiments

4.1 Repeat of 1-D Simulations

Figure 3: Reconstruction of a piecewise polynomial function of degree two over a 1024 point grid from random sampling at 50%. Three scales are used for both the Daubechies wavelets and multiscale HOTV.

To compare MHOTV and wavelet regularized reconstructions we repeat the numerical examples presented in Figure 1 with the same noisy data used for the HOTV reconstruction. The corresponding reconstructions with MHOTV and wavelets are presented in Figure 3. Recall that the measurements were collected at a 50% sampling rate and corrupted with normally distributed mean zero noise. For the multiscale HOTV and wavelets, three scaling levels were used. The regularization parameter was set to the same value for each order for HOTV and the wavelets, where we used a similar normalization approach for the wavelet coefficients as that presented in (8).

The results in Figure 3 were generated with orders 1, 2, and 3. The order is indicated by the number next to the approach in the legends, e.g. we denote the third order MHOTV approach by MHOTV3. For a baseline comparison, the least squares solution is shown as well. Compared with the corresponding reconstructions from HOTV in Figure 1, these solutions show clear improvements, particularly with the higher orders. As we expect, although the MHOTV1 and Haar wavelet coefficients are computed in a different manner, the resulting reconstructions are identical since the models are theoretically equivalent. They both exhibit the staircasing and noise effects in precisely the same locations. The higher order approaches also show many similar effects of the noisy features, exhibiting certain oscillatory features with the same general behavior in precisely the same locations. However, with the higher orders, these approaches are not equivalent, and MHOTV provides regularity information at finer scales due to the minimal support of the transform elements. The result appears to be a modest improvement in the resulting reconstructions.

Finally, in the bottom of the figure the third order finite difference of the MHOTV3 solution is plotted in logarithmic scale. Comparing this with the original HOTV3 finite difference in Figure 1, we observe that the solution exhibits much better sparsity with respect to this transform domain, as desired.

4.2 2-D Tomographic Simulations

In this section we investigate the regularization methods on the common 2-D tomographic image reconstruction problem [27]. The phantom test image is shown in Figure 4 (a). The data generated for tomography are 1-D line integrals of the image, well-known as Radon transform data. Formally, the Radon transform of a 2-D function or image $f$ is defined as

$$ \mathcal{R}f(\theta, t) = \int_{\Omega} f(x, y)\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy, \qquad (19)$$

where $\Omega$ is the image domain and $\delta$ is the Dirac delta function. As in many applications, the data collected for reconstruction are of the form known as parallel beam geometry. In this setting, full knowledge of noisy values of $\mathcal{R}f(\theta, t)$ is assumed for some finite set of angles $\theta$ (there is also a discretization over $t$, but it is fine enough to ignore). In this numerical experiment, we use a total of 29 angles that are equally distributed across the full 180 degree angular range, which are visualized as a sinogram in Figure 4 (b). Such a limited set of data is sometimes referred to as limited data tomography. Mean zero normally distributed noise was again added to the data values. Classically, tomographic reconstruction from parallel beam geometry can be done by first transforming the data into Fourier space by the Fourier slice theorem, and then applying a chosen ramp filter to this data and the inverse Fourier transform. This direct approach, called filtered backprojection, is sensitive to noise and is shown in Figure 4 (c).

The problem can however be discretized and approximated by a set of linear equations $Af = b$ (see for instance [30], pages 8-9 within section 1.5), where $A$ is a sparse matrix that is a discretized approximation of the Radon transform, $f$ is the vectorized image, and $b$ is a vector holding the data values. With this set up we can apply regularization models such as (1) and (8). We use a uniform pixelated mesh for the image domain in this experiment. The results from applying these models with HOTV, MHOTV, and Daubechies wavelets, all at orders one and three, are shown in Figure 5. Each of the models is also supplemented with a nonnegativity constraint, $f \geq 0$, which is carried out with a projected gradient method. A baseline comparison obtained by a conjugate gradient least squares solver is also shown in the figure. To ensure accurate comparison between the methods by appropriate parameter selection and algorithmic convergence, the relative data errors defined by $\|Af - b\|_2 / \|b\|_2$ are shown in the figure, and they confirm that each approach fits the data approximately equally well, with all of the errors contained within an interval of size 0.0129.

As has been observed previously [33], due to a number of reasons including undersampling, noise, fine details between the image features, and the nature of the regularization, the order 1 solutions (TV) can leave the fine features under resolved, even though the underlying image is truly piecewise constant, which classical TV was originally designed to recover. Each of these order 1 images appears relatively similar, with the MHOTV and Daubechies approaches showing modest improvements in resolving some features. As in the 1-D case, the HOTV3 solution exhibits some small local oscillations that appear as noise in the image. However, this image, as well as the other order 3 approaches, resolves the features notably more clearly than the order 1 approaches. Both of the order 3 multiscale approaches appear less noisy than the order 3 HOTV, while still maintaining a good approximation of the image features.

Figure 4: (a) Phantom image. (b) Noisy tomographic data in sinogram format, 29 projections in total. (c) Classical filtered backprojection reconstruction from data.
Figure 5: Reconstructions of phantom image from 29 tomographic projections. Orders 1 and 3 are shown for the regularization approaches. Top left: relative data fitting error from each approach shows approximately equivalent data fitting from each approach. Top right: least squares solution for baseline comparison.
SNR   Levels     Order 1           Order 2           Order 3           Order 1.5   Order 2.5
                 MHOTV   Daub      MHOTV   Daub      MHOTV   Daub      MHOTV       MHOTV
2     1 level    .1624   .1624     .2039   .1961     .2464   .2306     .1819       .2328
      2 levels   .1612   .1617     .1742   .1852     .2135   .2223     .1782       .2183
      3 levels   .1699   .1615     .1513*  .1778     .1745   .2149     .1776       .1975
      4 levels   .2001   .1647     .1584   .1745     .1764   .2104     .2031       .2102
5     1 level    .0864   .0864     .0971   .0914     .1293   .1090     .1025       .1287
      2 levels   .0858   .0857     .0761   .0838     .0946   .1004     .0987       .1172
      3 levels   .0926   .0864     .0668*  .0805     .0766   .0982     .1016       .1133
      4 levels   .1100   .0894     .0742   .0801     .0828   .0981     .1186       .1276
10    1 level    .0543   .0542     .0509   .0480     .0694   .0572     .0690       .0841
      2 levels   .0542   .0539     .0400   .0442     .0489   .0528     .0657       .0763
      3 levels   .0589   .0547     .0359*  .0430     .0399   .0522     .0694       .0776
      4 levels   .0696   .0570     .0413   .0436     .0442   .0535     .0802       .0880

Table 1: Average relative reconstruction error over 100 simulations, as a function of the order of the method and the number of levels in the multiscale approaches. The minimum for each SNR is marked with an asterisk.

4.3 Quantitative Results

We performed two sets of simulations to compare the methods in a more quantitative manner. The first set of results presented here involved setting up 100 different test problems and then running all of our methods on each of them for multiple noise levels, and the mean reconstruction error over all simulations is presented in Table 1, with the MHOTV results in the left of each column and the Daubechies wavelet results in the right of each column. It is important to note here that the parameter $\lambda$ in (8) was optimized in every reconstruction to yield the solution that minimized the true error between the test signal and the reconstruction, making for objective comparisons. In order to set up each test problem, a 1-D piecewise quadratic polynomial (presumably ideal for order 3) was randomly generated over a 1024 point grid, and the entries in the sampling matrix and the noise added to the data were randomly generated from a mean zero Gaussian distribution. Overall, these results show that MHOTV moderately outperforms Daubechies wavelets in each case, and the remaining comparisons between the order and number of levels are generally consistent between MHOTV and the wavelets.

For the single level case (original TV and HOTV), the error generally increases for higher orders, contrary to the results in previous work [1]. Multiple scales show notable improvement for the higher orders, whereas they show a mild reduction in accuracy for order 1. The most benefit for both orders 2 and 3 is seen when using 3 levels, and order 2 actually outperforms order 3. Finally, using the fact that (12) gives us a way to compute fractional orders of the method, we present also the results from orders 1.5 and 2.5. These are notably worse than the integer orders, a testament to the fact that these fractional order derivatives result in highly nonlocal differences (to observe these nonlocal stencils, one can compute the inverse Fourier transform of (12) for fractional orders).

In the second set of results presented here we ran a series of numerical simulations and measured the rate of successful recovery for each method as a function of the sampling rate. For each simulation we randomly generated a piecewise polynomial of specified maximal degree over a 1024 point grid. This function was randomly sampled at the specified sampling rate precisely as in the previous 1-D simulations in section 4.1, where the sampling rate is defined by the number of samples divided by the number of grid points. Each regularization procedure was then used for reconstruction, and the error between the true function and the reconstructed function was determined. If the error was less than a small fixed threshold, then the reconstruction was said to yield a successful recovery. This simulation was carried out for each sampling rate in 20 trials, and the fraction of those 20 trials that yielded successful recovery is set as our probability of success. In each case, the generated test functions had five discontinuities, and the locations of the jumps were drawn randomly from a uniform distribution on the approximation interval.

No noise was added for these simulations, since noise makes exact recovery unlikely. Therefore, for this case our general model as a modification of (1) is given by

$$ \hat f = \operatorname*{arg\,min}_{f}\; \|Tf\|_1 \quad \text{subject to} \quad Af = b, \qquad (20)$$

and similarly for our specific MHOTV model in (8). This constrained data fitting problem is solved by reformulating it as an unconstrained problem with an augmented Lagrangian function [16, 22].

The results for these simulations are shown in Figure 6. The results are separated in two ways, by the degree of the piecewise polynomial function that is sampled (varying along the rows) and the order of the regularization method (varying along the columns). In the first row are results for piecewise constant functions, in the second row are piecewise linear functions, and in the third row are piecewise quadratic functions. In all cases, MHOTV yields the highest probability of success, regardless of the degree of the polynomial or the order of the regularization, and the success of the Daubechies wavelets appears to generally lie somewhere between MHOTV and HOTV. The order 1 regularizations perform well only in the case of piecewise constant functions. On the other hand, the order 2 and 3 regularizations perform well for all function types, with order 2 again outperforming order 3 both with piecewise linear and quadratic signals.

Figure 6: Probability of success for HOTV, MHOTV, and Daubechies wavelets at orders 1 (left column), 2 (middle column) and 3 (right column). A successful recovery is deemed whenever the relative error between the reconstruction and the true signal is less than a small fixed threshold. Top row: piecewise constant functions. Middle row: piecewise linear functions. Bottom row: piecewise quadratic functions.

5 Summary

HOTV circumvents the staircasing often observed in TV solutions and has been shown to be more effective for problems with fine features, where resolution can be improved by increasing the order of derivatives in the regularization term [33]. In some applications, however, high order derivatives promote solutions with spurious local oscillations, as shown in Figure 1. The MHOTV regularization we introduce in this work is shown to mitigate unwanted oscillations while maintaining the resolution power of high order regularization.

Although the theory for MHOTV reconstructions remains underdeveloped when compared to wavelet regularization [11, 43, 15, 20, 36, 13, 41], our experiments indicate that MHOTV can outperform wavelet regularization in practical applications. Figure 3, for instance, shows fewer spurious oscillations in the MHOTV reconstruction than for the Daubechies wavelet penalization, a feature that can also be observed for the 2-D tomographic data. Moreover, our results show that MHOTV regularization requires fewer samples for successful reconstructions than HOTV and wavelets. Computational efficiency is achieved by performing the transformation in Fourier space or by matrix decomposition, as derived in section 3. The associated MATLAB algorithms can be downloaded at [29], and some of the simulations in the preceding sections can also be found there.

Appendix A Proof of Theorem 1

Lemma 1.

Let $k, s \in \mathbb{Z}_{\geq 0}$ with $s \leq 2k+1$. Then we have the following Vandermonde-like identity:

$$ \sum_{j=0}^{\min(s,k)} (-1)^j \binom{k}{j} \binom{k+1}{s-j} = (-1)^{q} \binom{k}{q}, \qquad (21)$$

where $q = s/2$ for even $s$ and $q = (s-1)/2$ for odd $s$.

Proof of lemma 1.

Consider the polynomial $p(x) = (1+x)(1-x^2)^k$, which can be factored as $p(x) = (1-x)^k (1+x)^{k+1}$. Both representations can be expanded using the binomial sum giving

$$ p(x) = \sum_{j=0}^{k} (-1)^j \binom{k}{j} \left( x^{2j} + x^{2j+1} \right) \qquad (22)$$

by the first representation and

$$ p(x) = \sum_{s=0}^{2k+1} \left( \sum_{j=0}^{\min(s,k)} (-1)^j \binom{k}{j} \binom{k+1}{s-j} \right) x^s \qquad (23)$$

by the second representation. Since (22) and (23) are equivalent for all $x$, the coefficients of any particular power of $x$ are equivalent, which is the equality we set out to prove. ∎

Proof of theorem 1.

Statement 3 is an immediate consequence of statement 2, since each matrix involved in the product is a convolution operator, and convolution operations are commutative and associative.

To prove statement 1, first observe that with increasing $m$, the nonzero entries in the rows of $B_m$ become increasingly spaced, and it is easy to see that the resulting product is essentially the same for each $m$, with different spacings between the nonzero entries. Thus it is enough to show statement 1 for $m = 1$. In the case $k = 1$, this calculation can be checked directly. So suppose statement 1 holds for some arbitrary $k$. Then we need to show that $B_1^{k+1} = B_1 B_1^{k}$ yields the desired entries given in statement 1. It is fairly easy to see that the resulting entries of this product are simply the addition of two neighboring entries (modulo $N$) in $B_1^{k}$. Any two such entries added together trivially yield the desired values by Pascal's rule, and the proper location of these values is also easy to confirm.

Similar arguments used for statement 1 also apply to statement 2. First, we can consider an inductive approach over $L$, where we need to show $D^{k,2^{L}} = D^{k,2^{L-1}}\, B_{2^{L-1}}^{k+1}$. Note that again due to the spacing of the entries of $B_{2^{L-1}}$, the argument for arbitrary $L$ is parallel to that for $L = 1$, with just different handling of the indices. Therefore the case $L = 1$ suffices for the inductive step, and this case, $D^{k,2} = D^{k}\, B_1^{k+1}$, is an immediate consequence of the previous lemma. ∎

Appendix B Definitions

If $f, g \in \mathbb{C}^N$, then the convolution of $f$ and $g$ is given by

$$ (f * g)_j = \sum_{\ell=0}^{N-1} f_\ell\, g_{j-\ell}, \qquad j = 0, 1, \dots, N-1, \qquad (24)$$

where for indices of $g$ running outside of the domain of $g$, a periodic extension of $g$ is assumed. The discrete Fourier transform (DFT) of $f$ is defined by

$$ \hat f_\omega = \sum_{j=0}^{N-1} f_j\, e^{-2\pi i j \omega / N}, \qquad \omega = 0, 1, \dots, N-1, \qquad (25)$$

and the inverse discrete Fourier transform (IDFT) of $\hat f$ is given by

$$ f_j = \frac{1}{N} \sum_{\omega=0}^{N-1} \hat f_\omega\, e^{2\pi i j \omega / N}, \qquad j = 0, 1, \dots, N-1. \qquad (26)$$

Acknowledgements

This work is supported in part by the grants NSF-DMS 1502640, NSF-DMS 1522639 and AFOSR FA9550-15-1-0152.

References

  • [1] R. Archibald, A. Gelb, and R. B. Platte. Image reconstruction from undersampled Fourier data using the polynomial annihilation transform. J. Sci. Comput., pages 1–21, 2015.
  • [2] S. Bhattacharya, T. Blumensath, B. Mulgrew, and M. Davies. Fast encoding of synthetic aperture radar raw data using compressed sensing. In 2007 IEEE/SP 14th Workshop on Statistical Signal Processing, pages 448–452. IEEE, 2007.
  • [3] P. Blomgren, T. F. Chan, P. Mulet, C.-K. Wong, et al. Total variation image restoration: numerical methods and extensions. In ICIP (3), pages 384–387, 1997.
  • [4] K. Bredies, K. Kunisch, and T. Pock. Total generalized variation. SIAM J. Imaging Sci., 3(3):492–526, 2010.
  • [5] E. Candès and J. Romberg. Sparsity and incoherence in compressive sampling. Inverse Probl., 23(3):969, 2007.
  • [6] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on information theory, 52(2):489–509, 2006.
  • [7] A. Chambolle and P.-L. Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188, 1997.
  • [8] T. Chan, A. Marquina, and P. Mulet. High-order total variation-based image restoration. SIAM J. Sci. Comput., 22(2):503–516, 2000.
  • [9] R. R. Coifman and D. L. Donoho. Translation-invariant de-noising. Springer, 1995.
  • [10] I. Daubechies et al. Ten lectures on wavelets, volume 61. SIAM, 1992.
  • [11] M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Lounsbery, and W. Stuetzle. Multiresolution analysis of arbitrary meshes. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 173–182. ACM, 1995.
  • [12] Y. C. Eldar and G. Kutyniok. Compressed sensing: theory and applications. Cambridge University Press, 2012.
  • [13] H.-Y. Gao. Wavelet shrinkage denoising using the non-negative garrote. Journal of Computational and Graphical Statistics, 7(4):469–488, 1998.
  • [14] T. Goldstein and S. Osher. The split Bregman method for l1-regularized problems. SIAM J. Imaging Sci., 2(2):323–343, 2009.
  • [15] K. Guo and D. Labate. Optimally sparse multidimensional representation using shearlets. SIAM journal on mathematical analysis, 39(1):298–318, 2007.
  • [16] M. R. Hestenes. Multiplier and gradient methods. Journal of optimization theory and applications, 4(5):303–320, 1969.
  • [17] Y. Hu and M. Jacob. Higher degree total variation (HDTV) regularization for image recovery. IEEE Transactions on Image Processing, 21(5):2559–2571, 2012.
  • [18] U. Kamilov, E. Bostan, and M. Unser. Wavelet shrinkage with consistent cycle spinning generalizes total variation denoising. IEEE Signal Processing Letters, 19(4):187–190, 2012.
  • [19] E. J. King, G. Kutyniok, and W.-Q. Lim. Image inpainting: theoretical analysis and comparison of algorithms. SPIE Optical Engineering+ Applications, page 885802, 2013.
  • [20] G. Kutyniok et al. Shearlets: Multiscale analysis for multivariate data. Springer Science & Business Media, 2012.
  • [21] R. Leary, Z. Saghi, P. A. Midgley, and D. J. Holland. Compressed sensing electron tomography. Ultramicroscopy, 131:70 – 91, 2013.
  • [22] C. Li, W. Yin, H. Jiang, and Y. Zhang. An efficient augmented lagrangian method with applications to total variation minimization. Comput. Optim. Appl., 56(3):507–530, 2013.
  • [23] M. Lustig, D. Donoho, and J. M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic resonance in medicine, 58(6):1182–1195, 2007.
  • [24] M. Lysaker, A. Lundervold, and X.-C. Tai. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing, 12(12):1579–1590, Dec 2003.
  • [25] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty. An efficient algorithm for compressed MR imaging using total variation and wavelets. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8, June 2008.
  • [26] S. Mallat. A wavelet tour of signal processing: the sparse way, pages 292–296. Academic press, 2008.
  • [27] F. Natterer. The Mathematics of Computerized Tomography. Society for Industrial and Applied Mathematics, 2001.
  • [28] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
  • [29] T. Sanders. Matlab imaging algorithms: Image reconstruction, restoration, and alignment, with a focus in tomography. http://www.toby-sanders.com/software, https://doi.org/10.13140/RG.2.2.33492.60801. Accessed: 2016-19-08.
  • [30] T. Sanders. Image Processing and 3-D Reconstruction in Tomography. PhD thesis, University of South Carolina, 2015. Chapter 1: Background.
  • [31] T. Sanders. Parameter selection for HOTV regularization. Applied Numerical Mathematics, 125:1–9, 2018.
  • [32] T. Sanders and C. Dwyer. Subsampling and inpainting approaches for electron tomography. Ultramicroscopy, 182:292–302, 2017.
  • [33] T. Sanders, A. Gelb, R. Platte, I. Arslan, and K. Landskron. Recovering fine details from under-resolved electron tomography data using higher order total variation regularization. Ultramicroscopy, 174:97–105, 2017.
  • [34] O. Scherzer and J. Weickert. Relations between regularization and diffusion filtering. Journal of Mathematical Imaging and Vision, 12(1):43–63, 2000.
  • [35] S. Setzer, G. Steidl, and T. Teuber. Infimal convolution regularizations with discrete L1-type functionals. Communications in Mathematical Sciences, 9(3):797–827, 2011.
  • [36] J.-L. Starck, F. Murtagh, and J. M. Fadili. Sparse image and signal processing: wavelets, curvelets, morphological diversity. Cambridge university press, 2010.
  • [37] W. Stefan, R. A. Renaut, and A. Gelb. Improved total variation-type regularization using higher order edge detectors. SIAM J. Imaging Sci., 3(2):232–251, 2010.
  • [38] G. Steidl, S. Didas, and J. Neumann. Splines in higher order TV regularization. International journal of computer vision, 70(3):241–255, 2006.
  • [39] G. Steidl and J. Weickert. Relations between soft wavelet shrinkage and total variation denoising. In Joint Pattern Recognition Symposium, pages 198–205. Springer, 2002.
  • [40] G. Steidl, J. Weickert, T. Brox, P. Mrázek, and M. Welk. On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and sides. SIAM Journal on Numerical Analysis, 42(2):686–713, 2004.
  • [41] C. Taswell. The what, how, and why of wavelet shrinkage denoising. Computing in science & engineering, 2(3):12–19, 2000.
  • [42] A. Temizel, T. Vlachos, and W. Visioprime. Wavelet domain image resolution enhancement using cycle-spinning. Electronics Letters, 41(3):119–121, 2005.
  • [43] F. C. Tenoudji. Wavelets; multiresolution analysis. In Analog and Digital Signal Analysis, pages 337–373. Springer, 2016.
  • [44] M. Unser, J. Fageot, and J. P. Ward. Splines are universal solutions of linear inverse problems with generalized TV regularization. SIAM Review, 59(4):769–793, 2017.
  • [45] Y. Wang, J. Yang, W. Yin, and Y. Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci., 1(3):248–272, 2008.
  • [46] S.-J. Wei, X.-L. Zhang, J. Shi, and G. Xiang. Sparse reconstruction for SAR imaging based on compressed sensing. Progress In Electromagnetics Research, 109:63–81, 2010.