
Smoothed-TV Regularization for Hölder Continuous Functions

Erdem Altuntac
Institute for Numerical and Applied Mathematics, University of Göttingen, Lotzestr. 16-18, D-37083 Göttingen, Germany
e.altuntac@math.uni-goettingen.de
Abstract

This work explores the regularity properties of smoothed-TV regularization for functions of Hölder continuous class. Over some compact and convex domain $\Omega \subset \mathbb{R}^{N}$ we study the construction of a multivariate function as the optimized solution to the following convex minimization problem,
$$\min_{\varphi}\left\{\frac{1}{2}\left\|T\varphi - f^{\delta}\right\|_{L^{2}(\Omega)}^{2} + \alpha J_{\beta}(\varphi)\right\},$$

where the penalizer $J_{\beta}$ is the smoothed total variation penalizer
$$J_{\beta}(\varphi) := \int_{\Omega}\sqrt{|\nabla\varphi(x)|^{2} + \beta}\,\mathrm{d}x,$$

for a fixed $\beta > 0$. We assume our target function $\varphi^{\dagger}$ to be Hölder continuous. With this assumption, we establish a relation between the total variation of our target function and its Hölder coefficient. We prove that the smoothed-TV regularization is an admissible regularization strategy by evaluating the discrepancy $\|T\varphi_{\alpha}^{\delta} - f^{\delta}\|$ for some fixed parameter choice $\alpha = \alpha(\delta)$. To do so, we need to assume the target function to be of class $C^{0,\gamma}(\Omega)$. From here, under the fact that the penalty is strongly convex, we move on to showing the convergence of $\varphi_{\alpha}^{\delta}$ to $\varphi^{\dagger}$ as $\delta \to 0$, where $\varphi_{\alpha}^{\delta}$ is the optimum and $\varphi^{\dagger}$ is the true solution for the given minimization problem above. We demonstrate that strong convexity yields $2$-convexity, i.e. a quadratic lower bound for the Bregman divergence. In addition to these facts, we make use of the Bregman divergence in order to be able to quantify the rate of convergence.


Keywords. Hölder continuity, bounded variation, smoothed total variation, Morozov discrepancy.


1 Introduction

As an alternative to the well-established Tikhonov regularization, [26, 27], studying convex variational regularization with a general penalizer has become important over the last decade. The introduction of a new image denoising method named total variation, [28], was the commencement of this study. Application and analysis of the method have been widely carried out in the inverse problems and optimization communities, [1, 3, 4, 8, 9, 10, 14, 15, 31]. Particularly, formulating the minimization problem as a variational problem and estimating convergence rates with variational source conditions has also become popular recently, [7, 18, 19, 20, 25]. Unlike the available literature, we define the discrepancy principle for the smoothed-TV regularization under a particular rule for the choice of the regularization parameter. Furthermore, still with the same regularization parameter, we manage to show that smoothed-TV regularization is an admissible regularization strategy with Hölder continuity.

We are tasked with constructing the regularized solution $\varphi_{\alpha}^{\delta}$, over some compact and convex domain $\Omega \subset \mathbb{R}^{N}$, for the following variational minimization problem,

$$\varphi_{\alpha}^{\delta} \in \arg\min_{\varphi}\left\{\frac{1}{2}\left\|T\varphi - f^{\delta}\right\|_{L^{2}(\Omega)}^{2} + \alpha J_{\beta}(\varphi)\right\}, \tag{1.1}$$

for the penalty term $J_{\beta}$ defined by

$$J_{\beta}(\varphi) := \int_{\Omega}\sqrt{|\nabla\varphi(x)|^{2} + \beta}\,\mathrm{d}x, \qquad \beta > 0, \tag{1.2}$$

and $\alpha > 0$ is the regularization parameter. It is expected that the perturbed given data $f^{\delta}$ lies in some ball of radius $\delta$ centered at the true data $f^{\dagger} = T\varphi^{\dagger}$, i.e. $\|f^{\delta} - f^{\dagger}\|_{L^{2}(\Omega)} \leq \delta$. The compact forward operator $T$ is assumed to be linear and injective. It is well known from the theory of inverse problems that a regularization strategy is admissible if the regularization parameter $\alpha = \alpha(\delta)$ satisfies,

$$\alpha(\delta) \to 0 \quad \text{and} \quad \sup\left\{\left\|\varphi_{\alpha(\delta)}^{\delta} - \varphi^{\dagger}\right\| \,:\, \left\|f^{\delta} - T\varphi^{\dagger}\right\| \leq \delta\right\} \to 0, \quad \text{as } \delta \to 0, \tag{1.3}$$

where $\varphi^{\dagger}$ denotes the true solution, [16, Eq. (4.57) and (4.58)], [24, Definition 2.3]. The regularized solution $\varphi_{\alpha}^{\delta}$ of the problem (1.1) must satisfy the following first order optimality condition,

$$T^{*}\left(T\varphi_{\alpha}^{\delta} - f^{\delta}\right) + \alpha\,\nabla J_{\beta}\left(\varphi_{\alpha}^{\delta}\right) = 0. \tag{1.4}$$

This work aims to answer two fundamental questions in the field of regularization theory: Is it possible to quantify the convergence in (1.3) when the penalizer is (1.2)? What is the rule for the choice of the regularization parameter, when the penalizer is (1.2), such that the smoothed-TV is also an admissible regularization strategy? We will be able to quantify the rate of the convergence of $\varphi_{\alpha}^{\delta}$ to $\varphi^{\dagger}$ by means of the Bregman divergence.
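Remark (numerical illustration). For intuition, the following minimal sketch evaluates a discretized version of the objective in (1.1) with the penalizer (1.2). It is illustrative only: the two-dimensional grid, the identity operator standing in for the forward operator, the helper names smoothed_tv and objective, and all parameter values are our own assumptions, not part of the analysis.

```python
import numpy as np

def smoothed_tv(phi, beta, h=1.0):
    """Discretization of J_beta(phi): integral of sqrt(|grad phi|^2 + beta) on a 2-D grid."""
    gx, gy = np.gradient(phi, h)
    return np.sum(np.sqrt(gx**2 + gy**2 + beta)) * h**2

def objective(phi, f_delta, alpha, beta, h=1.0):
    """Tikhonov-type functional (1.1), with the identity as a stand-in forward operator."""
    residual = phi - f_delta
    return 0.5 * np.sum(residual**2) * h**2 + alpha * smoothed_tv(phi, beta, h)

# Toy usage: a piecewise-constant image with additive noise.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
f_delta = truth + 0.05 * rng.standard_normal(truth.shape)
print(objective(truth, f_delta, alpha=1e-2, beta=1e-3))
```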

Existence of the solution to the TV minimization problem, i.e. the problem (1.1) with $\beta = 0$ in (1.2), has been discussed extensively [22, 29]. Moreover, an existence and uniqueness theorem for the minimizer of quadratic functionals with different types of convex integrands has been established in [11, Theorem 9.5-2]. As has been given by the minimal hypersurfaces problem in [13], the minimizer of the problem (1.1), for the smoothed-TV penalty, exists on a reflexive Banach space.

2 Notations and Prerequisite Knowledge

2.1 Vector calculus notations

We assume to be tasked with the reconstruction of some non-negative scalar function defined on a compact subset of $\mathbb{R}^{N}$, i.e. $\varphi : \Omega \subset \mathbb{R}^{N} \to \mathbb{R}_{+}$, where the spatial coordinate is $x = (x_{1}, \ldots, x_{N})$. Then the gradient of $\varphi$ is regarded as a vector with components
$$\nabla\varphi = \left(\frac{\partial\varphi}{\partial x_{1}}, \ldots, \frac{\partial\varphi}{\partial x_{N}}\right).$$

The magnitude of this gradient in the Euclidean sense is
$$|\nabla\varphi(x)| = \left(\sum_{i=1}^{N}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|^{2}\right)^{1/2}. \tag{2.1}$$

2.2 Functional analysis notations

We aim to approximate a function which belongs to a Hölder space. The Hölder space is denoted by $C^{0,\gamma}(\Omega)$, where $0 < \gamma \leq 1$, [17, Subsection 5.1]. If a multivariate function $\varphi \in C^{0,\gamma}(\Omega)$, then there exists a constant $L > 0$ such that the function satisfies the following Hölder continuity,

$$|\varphi(x) - \varphi(y)| \leq L\,|x - y|^{\gamma}, \quad \text{for all } x, y \in \Omega. \tag{2.2}$$

Here $|\varphi(x) - \varphi(y)|$ is the absolute value of the difference of the function values. The Hölder space is a Banach space endowed with the norm

$$\|\varphi\|_{C^{0,\gamma}(\Omega)} := \sup_{x \in \Omega}|\varphi(x)| + [\varphi]_{C^{0,\gamma}(\Omega)}, \tag{2.3}$$

where the Hölder coefficient $[\varphi]_{C^{0,\gamma}(\Omega)}$ is defined by

$$[\varphi]_{C^{0,\gamma}(\Omega)} := \sup_{\substack{x, y \in \Omega \\ x \neq y}}\frac{|\varphi(x) - \varphi(y)|}{|x - y|^{\gamma}}, \tag{2.4}$$

and the Euclidean norm of $x \in \mathbb{R}^{N}$ is

$$|x| = \left(\sum_{i=1}^{N} x_{i}^{2}\right)^{1/2}. \tag{2.5}$$

So that, we define the Hölder space by
$$C^{0,\gamma}(\Omega) := \left\{\varphi \in C(\Omega) \,:\, \|\varphi\|_{C^{0,\gamma}(\Omega)} < \infty\right\}.$$

In this work, we focus on the total variation (TV) of a function, [8, 28]. With (2.1), the TV of our multivariate function is explicitly,
$$TV(\varphi) := \int_{\Omega}|\nabla\varphi(x)|\,\mathrm{d}x.$$

Total variation type regularization targets the reconstruction of functions of the bounded variation (BV) class, [30],

$$BV(\Omega) := \left\{\varphi \in L^{1}(\Omega) \,:\, TV(\varphi) < \infty\right\}. \tag{2.6}$$
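Remark (numerical illustration). The sketch below estimates the Hölder coefficient (2.4) and the total variation of a sampled one-dimensional function. The grid, the example function $\sqrt{x}$, and the helper names are our own choices for illustration.

```python
import numpy as np

def empirical_holder_coeff(x, phi, gamma):
    """Empirical version of (2.4): max over sample pairs of |phi(x)-phi(y)| / |x-y|^gamma."""
    dx = np.abs(x[:, None] - x[None, :])
    dphi = np.abs(phi[:, None] - phi[None, :])
    mask = dx > 0
    return np.max(dphi[mask] / dx[mask] ** gamma)

def discrete_tv(phi):
    """Discrete total variation: sum of absolute increments of the samples."""
    return np.sum(np.abs(np.diff(phi)))

x = np.linspace(0.0, 1.0, 400)
phi = np.sqrt(x)  # Hölder continuous with gamma = 1/2, but not Lipschitz, on [0, 1]
print(empirical_holder_coeff(x, phi, gamma=0.5))  # close to 1, the Hölder coefficient
print(discrete_tv(phi))                           # close to 1, the TV of sqrt on [0, 1]
```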

2.3 Bregman divergence

The following formulation emphasizes the functionality of the Bregman divergence in proving the norm convergence of the minimizer of the convex minimization problem to the true solution.

Definition 2.1 (Total convexity and Bregman divergence).

[6, Definition 1]

Let $J$ be a smooth and convex functional. Then $J$ is called totally convex in $\varphi$ if, for all $\psi$ in the domain of $J$ and $t > 0$ with $\|\psi - \varphi\| = t$, it holds that
$$D_{J}(\psi, \varphi) := J(\psi) - J(\varphi) - \left\langle \nabla J(\varphi),\, \psi - \varphi \right\rangle > 0,$$

where $D_{J}(\psi, \varphi)$ represents the Bregman divergence.

It is said that $J$ is $q$-convex in $\varphi$ with a $q \geq 1$ if for all $t > 0$ there exists a constant $c = c(t) > 0$ such that for all $\psi$ with $\|\psi - \varphi\| \leq t$ we have

$$D_{J}(\psi, \varphi) \geq c\,\|\psi - \varphi\|^{q}. \tag{2.7}$$

Throughout our norm convergence estimations, we refer to this definition for the case of $2$-convexity.

In fact, another estimation similar to (2.7), for $q = 2$, can also be derived by making a further assumption about the functional, namely strong convexity with modulus $\kappa > 0$, [5, Definition 10.5]. Below is this alternative way of obtaining (2.7) when $q = 2$.

Proposition 2.2.

Let $J$ be strongly convex with modulus of convexity $\kappa > 0$, i.e. $J - \frac{\kappa}{2}\|\cdot\|^{2}$ is convex; then

$$D_{J}(\psi, \varphi) \geq \frac{\kappa}{2}\,\|\psi - \varphi\|^{2}. \tag{2.8}$$
Proof.

Let us begin with considering the second order Taylor expansion of $J$ about $\varphi$,

$$J(\psi) = J(\varphi) + \left\langle \nabla J(\varphi),\, \psi - \varphi \right\rangle + \frac{1}{2}\left\langle \nabla^{2} J(\xi)(\psi - \varphi),\, \psi - \varphi \right\rangle, \tag{2.9}$$
for some $\xi$ on the segment between $\varphi$ and $\psi$.

Then the Bregman divergence reads
$$D_{J}(\psi, \varphi) = J(\psi) - J(\varphi) - \left\langle \nabla J(\varphi),\, \psi - \varphi \right\rangle = \frac{1}{2}\left\langle \nabla^{2} J(\xi)(\psi - \varphi),\, \psi - \varphi \right\rangle.$$

Since $J$ is strictly convex, due to strong convexity $\nabla^{2} J(\xi) \succeq \kappa I$, and hence one obtains that

$$D_{J}(\psi, \varphi) \geq \frac{\kappa}{2}\,\|\psi - \varphi\|^{2}, \tag{2.10}$$

where $\kappa$ is the modulus of convexity.
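Remark (numerical illustration). The quadratic lower bound (2.8) can be checked on a finite-dimensional surrogate. In the sketch below, J sums $\sqrt{u_{i}^{2} + \beta}$ over the entries of a vector, a discrete analogue of the smoothed-TV integrand; the box bound R on the arguments and the modulus $\kappa = \beta/(R^{2}+\beta)^{3/2}$, the minimum of the integrand's second derivative on $[-R, R]$ (cf. Theorem 4.1 below), are our own assumptions.

```python
import numpy as np

beta, R = 0.1, 1.0
rng = np.random.default_rng(1)

def J(u):
    # Discrete analogue of the smoothed-TV integrand, summed over the entries.
    return np.sum(np.sqrt(u**2 + beta))

def grad_J(u):
    return u / np.sqrt(u**2 + beta)

def bregman(psi, phi):
    # D_J(psi, phi) = J(psi) - J(phi) - <grad J(phi), psi - phi>
    return J(psi) - J(phi) - grad_J(phi) @ (psi - phi)

kappa = beta / (R**2 + beta) ** 1.5  # modulus of convexity on the box {|u_i| <= R}
for _ in range(5):
    phi = rng.uniform(-R, R, size=50)
    psi = rng.uniform(-R, R, size=50)
    print(bregman(psi, phi) >= 0.5 * kappa * np.sum((psi - phi) ** 2))  # expect True
```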

2.4 Further Results on the Hölder Continuity

We have already reviewed in Subsection 2.2 that the Hölder space is a Banach space endowed with the norm, for all $\varphi \in C^{0,\gamma}(\Omega)$ and $\Omega \subset \mathbb{R}^{N}$ a compact domain,

$$\|\varphi\|_{C^{0,\gamma}(\Omega)} = \sup_{x \in \Omega}|\varphi(x)| + [\varphi]_{C^{0,\gamma}(\Omega)}. \tag{2.11}$$

Here the Hölder coefficient is obviously bounded by the full norm,

$$[\varphi]_{C^{0,\gamma}(\Omega)} \leq \|\varphi\|_{C^{0,\gamma}(\Omega)}. \tag{2.12}$$

Furthermore, following from (2.11), an immediate conclusion can be formulated as follows.

Proposition 2.3.

Over the compact domain $\Omega$, if $\varphi \in C^{0,\gamma}(\Omega)$ then $\varphi \in L^{p}(\Omega)$ for any $1 \leq p < \infty$.

Proof.

Since $\sup_{x \in \Omega}|\varphi(x)| \leq \|\varphi\|_{C^{0,\gamma}(\Omega)} < \infty$ and $|\Omega| < \infty$, then
$$\|\varphi\|_{L^{p}(\Omega)}^{p} = \int_{\Omega}|\varphi(x)|^{p}\,\mathrm{d}x \leq \|\varphi\|_{C^{0,\gamma}(\Omega)}^{p}\,|\Omega| < \infty. \tag{2.13}$$
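Remark (numerical illustration). A short check of the bound (2.13) as reconstructed above; the example function, the exponent, and the rectangle-rule quadrature are our own choices.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
phi = np.sqrt(x)                           # Hölder continuous with gamma = 1/2 on [0, 1]
p = 3.0
lp_norm_p = np.sum(np.abs(phi) ** p) * dx  # integral of |phi|^p over [0, 1]
sup_norm = np.max(np.abs(phi))             # sup |phi|, dominated by the Hölder norm
print(lp_norm_p <= sup_norm ** p * 1.0)    # True; here |Omega| = 1
```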

3 Hölder Continuity and TV of a Smooth Function

We now come to the point where we start establishing the relations between the Hölder continuity and the TV of a function on $\Omega$. The following theorems will also serve us for determining an implementable and unique regularization parameter $\alpha$ appearing in the minimization problem (1.1). We emphasize a very important assumption: we always work with a continuous function on a compact domain, which is hence uniformly continuous. This fact will allow us to interchange the necessary operations in order to obtain the desired results in what follows.

Theorem 3.1 (Morrey’s inequality).

[17, Subsection 5.6.2, Theorem 4] Let $\Omega \subset \mathbb{R}^{N}$ be the compact domain and let $N < p \leq \infty$. Then there exists a constant $C$, depending only on $p$ and $N$, such that

$$\|\varphi\|_{C^{0,\gamma}(\Omega)} \leq C\,\|\varphi\|_{W^{1,p}(\Omega)}, \tag{3.1}$$

for all $\varphi \in W^{1,p}(\Omega)$, where

$$\gamma = 1 - \frac{N}{p}. \tag{3.2}$$
Corollary 3.2.

Specifically, in the case $p = \infty$, the theorem implies that

$$\|\varphi\|_{C^{0,1}(\Omega)} \leq C\,\|\varphi\|_{W^{1,\infty}(\Omega)}, \tag{3.3}$$

since $\gamma = 1 - N/p \to 1$ as $p \to \infty$.
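Remark (numerical illustration). Morrey's inequality can be exhibited in one dimension: with $N = 1$ and $p = 2$, (3.2) gives $\gamma = 1/2$. The sketch below evaluates a grid approximation of the $W^{1,2}$ norm and of the empirical $C^{0,1/2}$ norm for $\varphi(x) = x^{3/4}$; it only shows that both sides are finite, the constant $C$ is not estimated, and all discretization choices are our own.

```python
import numpy as np

x = np.linspace(1e-6, 1.0, 20_001)
dx = x[1] - x[0]
phi = x ** 0.75                   # belongs to W^{1,2}(0, 1)
dphi = np.gradient(phi, dx)

w12_norm = np.sqrt(np.sum(phi**2) * dx + np.sum(dphi**2) * dx)

gamma = 0.5
sub, vals = x[::200], phi[::200]  # subsample the pairs entering the Hölder quotient
num = np.abs(vals[:, None] - vals[None, :])
den = np.abs(sub[:, None] - sub[None, :]) ** gamma
holder_norm = np.max(np.abs(phi)) + np.max(num[den > 0] / den[den > 0])

print(w12_norm, holder_norm)      # both finite, consistent with (3.1)
```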

Theorem 3.3.

Over the compact domain $\Omega$ with its volume $|\Omega|$, let $\varphi \in C^{0,\gamma}(\Omega) \cap C^{1}(\Omega)$. Then the Hölder coefficient of the function is bounded by its total variation as such,
$$[\varphi]_{C^{0,\gamma}(\Omega)}\,|\Omega| \leq TV(\varphi).$$

Proof.

Recall our vector notation in Subsection 2.1. Then, for a fixed $i \in \{1, \ldots, N\}$, the componentwise Hölder continuity in the direction $e_{i}$ is given by
$$\left|\varphi(x + h e_{i}) - \varphi(x)\right| \leq L_{i}\,|h|^{\gamma}, \qquad x,\ x + h e_{i} \in \Omega,$$
where $L_{i}$ denotes the componentwise Hölder coefficient.

By the definition of the Euclidean norm in (2.5), $|h e_{i}| = |h|$. So this implies
$$\frac{\left|\varphi(x + h e_{i}) - \varphi(x)\right|}{|h|^{\gamma}} \leq L_{i}.$$

Obviously, for any pair of points $x$ and $x + h e_{i}$ in $\Omega$ there exists, by the mean value theorem, some $s$ on the segment between them, i.e. $s = x + \theta h e_{i}$ for some $\theta \in (0, 1)$, such that
$$\varphi(x + h e_{i}) - \varphi(x) = \frac{\partial\varphi}{\partial x_{i}}(s)\,h.$$

Then, we have
$$\frac{\left|\varphi(x + h e_{i}) - \varphi(x)\right|}{|h|} = \left|\frac{\partial\varphi}{\partial x_{i}}(s)\right|.$$

Recall that our function is continuous over the compact domain $\Omega$, which makes it uniformly continuous on the same domain. Then we are allowed to interchange $\sup$ with $\lim$. Now, moving on to the limit on both sides with respect to each component, $h \to 0$,
$$L_{i} \leq \left|\frac{\partial\varphi}{\partial x_{i}}(x)\right| \leq |\nabla\varphi(x)|.$$

Again, the last inequality has been obtained by the fact that the sum of the squared components always remains greater than each squared component itself. Now, integrate both sides over the compact domain to yield
$$L_{i}\,|\Omega| \leq \int_{\Omega}|\nabla\varphi(x)|\,\mathrm{d}x,$$

which is, to be more precise,
$$[\varphi]_{C^{0,\gamma}(\Omega)}\,|\Omega| \leq TV(\varphi),$$

since $\int_{\Omega}|\nabla\varphi(x)|\,\mathrm{d}x = TV(\varphi)$ in $\Omega$.
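Remark (numerical illustration). The sketch below evaluates the two sides of the inequality in Theorem 3.3, as reconstructed above, for one example; the sampled function and $\gamma = 1/2$ are our own choices, and the comparison is an illustration, not a proof.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2_001)
phi = np.sqrt(x)                              # C^{0,1/2} on [0, 1], |Omega| = 1

tv = np.sum(np.abs(np.diff(phi)))             # close to 1
sub, vals = x[::20], phi[::20]
num = np.abs(vals[:, None] - vals[None, :])
den = np.abs(sub[:, None] - sub[None, :]) ** 0.5
holder = np.max(num[den > 0] / den[den > 0])  # close to 1

print(holder * 1.0, tv)  # Hölder coefficient times |Omega| versus TV(phi)
```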

This shows that the Hölder coefficient of a function is an approximation of the total variation of the same function. In the following theorems, we will establish the reverse direction of this statement. To do so, we will make use of the Lipschitz continuity, which is the specific case of the Hölder continuity in (2.2) for $\gamma = 1$.

Theorem 3.4.

Under the same conditions of Theorem 3.3, for $\gamma = 1$ in (2.2),

$$TV(\varphi) \leq N\,L\,|\Omega|. \tag{3.4}$$
Proof.

As we have introduced in Section 2 by (2.1),
$$|\nabla\varphi(x)| = \left(\sum_{i=1}^{N}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|^{2}\right)^{1/2} \leq \sum_{i=1}^{N}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|. \tag{3.5}$$

This inequality has been obtained by using the following simple identity,
$$\sqrt{a + b} \leq \sqrt{a} + \sqrt{b},$$
for $a, b \geq 0$. This implies
$$\left(\sum_{i=1}^{N} a_{i}^{2}\right)^{1/2} \leq \sum_{i=1}^{N}|a_{i}|.$$

To arrive at (3.5), set $a_{i} = \frac{\partial\varphi}{\partial x_{i}}(x)$ for $i = 1, \ldots, N$ and apply the identity recursively, lastly with $b = \left|\frac{\partial\varphi}{\partial x_{N}}(x)\right|^{2}$. Now, by the definition of the partial derivative in the componentwise sense,
$$\frac{\partial\varphi}{\partial x_{i}}(x) = \lim_{h \to 0}\frac{\varphi(x + h e_{i}) - \varphi(x)}{h}.$$

The gradient of the function over the compact domain is well defined for any component $i = 1, \ldots, N$. Therefore, we continue with our proof in the unified, componentwise form. First observe that, by the Lebesgue dominated convergence theorem,
$$\int_{\Omega}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|\mathrm{d}x = \int_{\Omega}\lim_{h \to 0}\left|\frac{\varphi(x + h e_{i}) - \varphi(x)}{h}\right|\mathrm{d}x = \lim_{h \to 0}\int_{\Omega}\left|\frac{\varphi(x + h e_{i}) - \varphi(x)}{h}\right|\mathrm{d}x. \tag{3.6}$$

Since $\varphi \in C^{0,1}(\Omega)$, the Hölder continuity given by (2.2) is satisfied for $\gamma = 1$,
$$\left|\varphi(x + h e_{i}) - \varphi(x)\right| \leq L\,|h|,$$

which is Lipschitz continuity. Then (3.6) reads
$$\int_{\Omega}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|\mathrm{d}x \leq L\,|\Omega|. \tag{3.7}$$

Summing over the components $i = 1, \ldots, N$ and using (3.5) yields (3.4).

We formulate the last result for this section, which is an immediate consequence of this theorem.

Corollary 3.5.

Under the same conditions of Theorem 3.4, it holds that

$$TV(\varphi) \leq \sqrt{N}\,L\,|\Omega|. \tag{3.8}$$
Proof.

Again, by the definition of the Euclidean norm of the gradient in Section 2 by (2.1),
$$|\nabla\varphi(x)| = \left(\sum_{i=1}^{N}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|^{2}\right)^{1/2} \leq \sqrt{N}\,\max_{1 \leq i \leq N}\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right|.$$

Analogous to the proof of Theorem 3.4,
$$TV(\varphi) = \int_{\Omega}|\nabla\varphi(x)|\,\mathrm{d}x \leq \sqrt{N}\,L\,|\Omega|, \tag{3.9}$$

since $\left|\frac{\partial\varphi}{\partial x_{i}}(x)\right| \leq L$ for each $i$ by the Lipschitz continuity of $\varphi$.
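Remark (numerical illustration). A quick check of the Lipschitz bound, under our reconstruction of (3.8) with $N = 1$; the test function is our own choice.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5_001)
phi = np.sin(2 * np.pi * x)

tv = np.sum(np.abs(np.diff(phi)))  # close to 4, the total variation on [0, 1]
L = 2 * np.pi                      # Lipschitz constant of sin(2 pi x)
print(tv <= np.sqrt(1) * L * 1.0)  # True: 4 <= sqrt(N) * L * |Omega| = 2 pi
```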

4 Smoothed-TV Regularization Is an Admissible Regularization Strategy with Hölder Continuity

We will define a regularization parameter which will simultaneously enable us to prove the convergence of the smoothed-TV regularization and to estimate the discrepancy for the corresponding regularization strategy, [10]. Unlike the available literature, [1, 3, 4, 8, 9, 10, 14, 15, 31], we define the discrepancy principle for the smoothed-TV regularization under a particular rule for the choice of the regularization parameter. Furthermore, still with the same regularization parameter, we manage to show that smoothed-TV regularization is an admissible regularization strategy with Hölder continuity. Throughout this section, the fact that our targeted solution function is Hölder continuous will be to our benefit in providing an implementable regularization parameter for a computerized environment. Hereafter, the argument $x$ is suppressed, only for the sake of simplicity.

To be able to show the convergence of $\varphi_{\alpha}^{\delta}$ to $\varphi^{\dagger}$, we will refer to the Bregman divergence. In Proposition 2.2, we have demonstrated the relation between strong convexity and $2$-convexity. Convexity of the smoothed total variation penalizer has been established in [1, Theorem 2.4]. We will ensure the strong convexity of the same penalizer in the following formulation.

Theorem 4.1.

For any fixed $\beta > 0$, the functional $J_{\beta}$ defined by (1.2) is strongly convex.

Proof.

It suffices to prove that the second derivative of the integrand of $J_{\beta}$ is strictly positive. To avoid confusion in the calculations, we will make an assignment $g(t) := \sqrt{t^{2} + \beta}$, where $t = |\nabla\varphi(x)|$. According to the Leibniz integral rule, calculating the derivatives of $J_{\beta}$ and of its integrand $g$ are equivalent to each other. Then
$$g'(t) = \frac{t}{\sqrt{t^{2} + \beta}},$$

and likewise
$$g''(t) = \frac{\beta}{\left(t^{2} + \beta\right)^{3/2}}.$$

Obviously, $g''(t) > 0$ for any $t$.
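Remark (symbolic verification). The derivative computations in the proof can be reproduced symbolically; a throwaway SymPy sketch (variable names are our own):

```python
import sympy as sp

t, beta = sp.symbols('t beta', positive=True)
g = sp.sqrt(t**2 + beta)

g1 = sp.simplify(sp.diff(g, t))     # t / sqrt(t**2 + beta)
g2 = sp.simplify(sp.diff(g, t, 2))  # beta / (t**2 + beta)**(3/2)
print(g1)
print(g2)
print(sp.simplify(g2 - beta / (t**2 + beta) ** sp.Rational(3, 2)))  # 0
```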

Theorem 4.2.

Over the compact domain $\Omega$, assume that $\varphi^{\dagger} \in C^{0,\gamma}(\Omega)$. Then there exists a dynamical positive real-valued functional $\alpha = \alpha(\delta)$ depending on the noise level $\delta$ such that
$$\left\|T\varphi_{\alpha}^{\delta} - f^{\delta}\right\|_{L^{2}(\Omega)} \to 0, \quad \text{as } \delta \to 0,$$

for $\varphi_{\alpha}^{\delta}$ defined by (1.1) and where $\varphi^{\dagger}$ satisfies (2.2) for $\gamma = 1$.

Proof.

By the definition of $\varphi_{\alpha}^{\delta}$ as a minimizer of the problem (1.1),
$$\frac{1}{2}\left\|T\varphi_{\alpha}^{\delta} - f^{\delta}\right\|_{L^{2}(\Omega)}^{2} + \alpha J_{\beta}\left(\varphi_{\alpha}^{\delta}\right) \leq \frac{1}{2}\left\|T\varphi - f^{\delta}\right\|_{L^{2}(\Omega)}^{2} + \alpha J_{\beta}(\varphi), \quad \text{for all } \varphi. \tag{4.1}$$

Now choose $\varphi = \varphi^{\dagger}$ to have,
$$\frac{1}{2}\left\|T\varphi_{\alpha}^{\delta} - f^{\delta}\right\|_{L^{2}(\Omega)}^{2} + \alpha J_{\beta}\left(\varphi_{\alpha}^{\delta}\right) \leq \frac{1}{2}\delta^{2} + \alpha J_{\beta}\left(\varphi^{\dagger}\right).$$

Estimate the penalizer to have,
$$J_{\beta}\left(\varphi^{\dagger}\right) = \int_{\Omega}\sqrt{\left|\nabla\varphi^{\dagger}(x)\right|^{2} + \beta}\,\mathrm{d}x \leq \sqrt{\beta}\,|\Omega| + \int_{\Omega}\left|\nabla\varphi^{\dagger}(x)\right|\mathrm{d}x,$$

since $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ for any $a, b \geq 0$. By Corollary 3.5, we have already obtained the upper bound for the second integral on the right hand side. Then,
$$J_{\beta}\left(\varphi^{\dagger}\right) \leq \left(\sqrt{\beta} + \sqrt{N}\,L\right)|\Omega|.$$