# Smoothed-TV Regularization for Hölder Continuous Functions

## Abstract

This work explores the regularity properties of smoothed-TV regularization for functions of Hölder class. Over a compact and convex domain $\Omega \subset \mathbb{R}^{N}$ we study the construction of a multivariate function $u$ as the minimizer of the following convex minimization problem

$$\min_{u}\ \frac{1}{2}\lVert \mathcal{T}u - f^{\delta}\rVert^{2} + \alpha J_{\beta}(u),$$

where the penalizer $J_{\beta}$ is the smoothed total variation penalizer

$$J_{\beta}(u) := \int_{\Omega} \sqrt{\lvert \nabla u \rvert^{2} + \beta}\,\mathrm{d}x$$

for a fixed $\beta > 0$. We assume our target function to be Hölder continuous. With this assumption, we establish a relation between the total variation of our target function and its Hölder coefficient. We prove that smoothed-TV regularization is an admissible regularization strategy by evaluating the discrepancy for some fixed coefficient; to do so, we need to assume that the target function belongs to a suitable smoothness class. From there, using the fact that the penalty is strongly convex, we show the convergence of the regularized minimizer to the true solution of the minimization problem above. We also discuss the relation between strong convexity and convexity. In addition to these facts, we make use of the Bregman divergence in order to quantify the rate of convergence.

**Keywords.** Hölder continuity, bounded variation, smoothed total variation, Morozov discrepancy.

## 1 Introduction

As an alternative to the well-established Tikhonov regularization, [26], studying convex variational regularization with a general penalizer has become important over the last decade. The introduction of a new image denoising method named *total variation*, [28], marks the commencement of this line of study. Application and analysis of the method have been widely carried out in the inverse problems and optimization communities, [1]. In particular, formulating the minimization problem as a variational problem and estimating convergence rates with variational source conditions has also become popular recently, [7]. Unlike the available literature, we define the discrepancy principle for smoothed-TV regularization under a particular rule for the choice of the regularization parameter. Furthermore, with the same regularization parameter, we manage to show that smoothed-TV regularization is an admissible regularization strategy under Hölder continuity.

We are tasked with constructing the regularized solution over a compact and convex domain $\Omega$ for the following variational minimization problem,

$$\min_{u}\ \frac{1}{2}\lVert \mathcal{T}u - f^{\delta}\rVert^{2} + \alpha J_{\beta}(u),$$

for the penalty term defined by

$$J_{\beta}(u) := \int_{\Omega} \sqrt{\lvert \nabla u \rvert^{2} + \beta}\,\mathrm{d}x, \qquad \beta > 0,$$

where $\alpha > 0$ is the regularization parameter. The perturbed given data $f^{\delta}$ is expected to lie in some ball centered at the true data $f$, *i.e.* $\lVert f - f^{\delta} \rVert \le \delta$. The compact forward operator $\mathcal{T}$ is assumed to be linear and injective. It is well known from the theory of inverse problems that a regularization strategy is admissible if the regularization parameter $\alpha = \alpha(\delta)$ satisfies

$$\lVert u_{\alpha(\delta)}^{\delta} - u^{\dagger} \rVert \to 0 \quad \text{as } \delta \to 0,$$

where $u^{\dagger}$ denotes the true solution of $\mathcal{T}u = f$, [16], [24]. The regularized solution of the problem (Equation 1) must satisfy the following first-order optimality condition,

$$\mathcal{T}^{*}(\mathcal{T}u - f^{\delta}) - \alpha\,\nabla \cdot \left( \frac{\nabla u}{\sqrt{\lvert \nabla u \rvert^{2} + \beta}} \right) = 0.$$

This work aims to answer two fundamental questions in the field of regularization theory: Is it possible to quantify the rate of convergence in (Equation 3) when the penalizer is (Equation 2)? What is the rule for the choice of the regularization parameter, when the penalizer is (Equation 2), such that smoothed-TV is also an admissible regularization strategy? We will quantify the rate of convergence by means of the Bregman divergence.
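As a concrete illustration of the minimization problem above, the following is a minimal numerical sketch, not the paper's algorithm: one-dimensional smoothed-TV denoising with the identity as forward operator, minimized by plain gradient descent. The names `smoothed_tv` and `denoise`, the step size, and all parameter values are illustrative assumptions.

```python
import numpy as np

def smoothed_tv(u, beta, h=1.0):
    """Discrete smoothed TV: sum of sqrt(|forward difference|^2 + beta) * h."""
    du = np.diff(u) / h
    return np.sum(np.sqrt(du**2 + beta)) * h

def denoise(f_delta, alpha, beta, steps=4000, lr=0.05):
    """Minimize 0.5*||u - f_delta||^2 + alpha * J_beta(u) by gradient descent
    (forward operator = identity, i.e. pure denoising, for simplicity)."""
    u = f_delta.copy()
    for _ in range(steps):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + beta)   # derivative of s -> sqrt(s^2 + beta)
        g = np.zeros_like(u)
        g[:-1] -= w                      # adjoint of the forward difference:
        g[1:] += w                       # g[k] = w[k-1] - w[k] = dJ/du[k]
        u -= lr * ((u - f_delta) + alpha * g)
    return u

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
truth = (x > 0.5).astype(float)          # a step edge, which TV should preserve
f_delta = truth + 0.1 * rng.standard_normal(100)
u = denoise(f_delta, alpha=0.1, beta=1e-3)
```

With a stable step size the objective decreases monotonically, and the denoised `u` is closer to the step than the noisy data while keeping the edge sharp.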

The existence of the solution to the TV minimization problem, *i.e.* the problem (Equation 1), has been discussed extensively [22]. Moreover, an existence and uniqueness theorem for the minimizer of quadratic functionals with different types of convex integrands has been established in [11]. As shown for the *Minimal Hypersurfaces* problem in [13], the minimizer of the problem (Equation 1) with the smoothed-TV penalty exists on a reflexive Banach space.

## 2 Notations and Prerequisite Knowledge

### 2.1 Vector calculus notations

We are tasked with the reconstruction of a non-negative scalar function $u$ defined on a compact subset $\Omega$ of $\mathbb{R}^{N}$, where the spatial coordinate is $x = (x_{1}, \ldots, x_{N})$. Then the gradient of $u$ is regarded as a vector with components

$$\nabla u = \left( \frac{\partial u}{\partial x_{1}}, \ldots, \frac{\partial u}{\partial x_{N}} \right).$$

The magnitude of this gradient in the Euclidean sense is

$$\lvert \nabla u \rvert = \sqrt{ \sum_{i=1}^{N} \left( \frac{\partial u}{\partial x_{i}} \right)^{2} }.$$
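The componentwise gradient and its Euclidean magnitude can be checked numerically. The sketch below is an illustration under discretization assumptions, not part of the text: finite differences on a grid for the linear function $u(x, y) = x + 2y$, whose gradient magnitude is $\sqrt{5}$ everywhere.

```python
import numpy as np

# u(x, y) = x + 2y has gradient (1, 2) and |grad u| = sqrt(5) everywhere.
x = np.linspace(0.0, 1.0, 51)
y = np.linspace(0.0, 1.0, 51)
X, Y = np.meshgrid(x, y, indexing="ij")
U = X + 2.0 * Y

# Componentwise partial derivatives by finite differences, then the
# Euclidean magnitude of the gradient vector (exact for a linear function).
ux, uy = np.gradient(U, x, y)
grad_mag = np.sqrt(ux**2 + uy**2)
```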

### 2.2 Functional analysis notations

We aim to approximate a function which belongs to a Hölder space. The Hölder space is denoted by $C^{0,\lambda}(\Omega)$, where $0 < \lambda \le 1$, [17]. If a multivariate function $u \in C^{0,\lambda}(\Omega)$, then there exists a constant $C > 0$ such that the function satisfies the following Hölder continuity condition

$$\lvert u(x) - u(y) \rvert \le C \lVert x - y \rVert^{\lambda} \quad \text{for all } x, y \in \Omega.$$

Here $\lvert \cdot \rvert$ denotes the absolute value. The Hölder space is a Banach space endowed with the norm

$$\lVert u \rVert_{C^{0,\lambda}(\Omega)} := \sup_{x \in \Omega} \lvert u(x) \rvert + \lvert u \rvert_{C^{0,\lambda}(\Omega)},$$

where the Hölder coefficient is defined by

$$\lvert u \rvert_{C^{0,\lambda}(\Omega)} := \sup_{\substack{x, y \in \Omega \\ x \neq y}} \frac{\lvert u(x) - u(y) \rvert}{\lVert x - y \rVert^{\lambda}},$$

and the Euclidean norm is

$$\lVert x - y \rVert = \left( \sum_{i=1}^{N} (x_{i} - y_{i})^{2} \right)^{1/2}.$$

Accordingly, we define the Hölder space by

$$C^{0,\lambda}(\Omega) := \left\{ u \in C(\Omega) : \lVert u \rVert_{C^{0,\lambda}(\Omega)} < \infty \right\}.$$

In this work, we focus on the total variation (TV) of a function, [8]. With (Equation 5), the TV of our multivariate function is, explicitly,

$$TV(u) = \int_{\Omega} \lvert \nabla u \rvert \,\mathrm{d}x.$$

Total variation type regularization targets the reconstruction of the bounded variation (BV) class of functions, [30],

$$BV(\Omega) := \left\{ u \in L^{1}(\Omega) : TV(u) < \infty \right\}.$$
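A small numerical sketch can make the relation between the smoothed penalizer and the TV concrete: for a piecewise-linear function on $[0,1]$ with total variation $1$, the discretized $J_\beta$ decreases to the TV as $\beta \to 0$. The function name `smoothed_tv`, the grid, and the values of $\beta$ are illustrative assumptions.

```python
import numpy as np

def smoothed_tv(u, x, beta):
    """Discrete J_beta(u) = integral of sqrt(|u'|^2 + beta) dx over [0, 1],
    using forward differences on the grid x (a 1-D sketch)."""
    h = x[1] - x[0]
    du = np.diff(u) / h
    return np.sum(np.sqrt(du**2 + beta)) * h

x = np.linspace(0.0, 1.0, 1001)
u = np.abs(x - 0.5)          # V-shaped function with exact TV = 1 on [0, 1]

# J_beta(u) = sqrt(1 + beta) here, so it decreases toward TV(u) = 1 as beta -> 0
tv_values = [smoothed_tv(u, x, beta) for beta in (1.0, 1e-2, 1e-4, 1e-8)]
```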

### 2.3 Bregman divergence

The following formulation emphasizes the functionality of the Bregman divergence in proving the norm convergence of the minimizer of the convex minimization problem to the true solution. For a convex functional $J$ with (sub)gradient $\partial J$, the Bregman divergence between $u$ and $v$ is

$$D_{J}(u, v) := J(u) - J(v) - \langle \partial J(v), u - v \rangle \ge 0.$$

Throughout our norm convergence estimations, we refer to this definition for the case of convexity.

In fact, another estimation similar to ( ?) can also be derived by making a further assumption on the functional, namely strong convexity with a modulus $\mu > 0$, [5]. Below is this alternative way of obtaining ( ?) under strong convexity.

Let us begin by considering the Taylor expansion of the functional $J$ around $v$,

$$J(u) = J(v) + \langle \nabla J(v), u - v \rangle + \frac{1}{2} \langle \nabla^{2} J(\xi)(u - v), u - v \rangle$$

for some $\xi$ on the line segment between $u$ and $v$.

Then the Bregman divergence becomes

$$D_{J}(u, v) = J(u) - J(v) - \langle \nabla J(v), u - v \rangle = \frac{1}{2} \langle \nabla^{2} J(\xi)(u - v), u - v \rangle.$$

Since $J$ is strongly convex, $\nabla^{2} J(\xi) \succeq \mu I$, and hence one obtains that

$$D_{J}(u, v) \ge \frac{\mu}{2} \lVert u - v \rVert^{2},$$

where $\mu$ is the modulus of convexity.
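The strong-convexity lower bound on the Bregman divergence can be sanity-checked numerically. The sketch below is illustrative, with assumed names: it uses the quadratic $J(u) = \tfrac12\lVert u\rVert^2$, which is strongly convex with modulus $\mu = 1$, so the divergence equals $\tfrac12\lVert u - v\rVert^2$ exactly.

```python
import numpy as np

def bregman(J, gradJ, u, v):
    """Bregman divergence D_J(u, v) = J(u) - J(v) - <grad J(v), u - v>."""
    return J(u) - J(v) - np.dot(gradJ(v), u - v)

# J(u) = 0.5 * ||u||^2 is strongly convex with modulus mu = 1, so
# D_J(u, v) >= (mu / 2) * ||u - v||^2 -- here even with equality.
J = lambda u: 0.5 * np.dot(u, u)
gradJ = lambda u: u

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
d = bregman(J, gradJ, u, v)   # equals 0.5 * ||u - v||^2 for this quadratic
```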

### 2.4 Further Results on the Hölder Continuity

We have already reviewed in subsection 2.2 that the Hölder space is a Banach space endowed with the norm, for all $x, y \in \Omega$ where $\Omega$ is a compact domain,

$$\lVert u \rVert_{C^{0,\lambda}(\Omega)} = \sup_{x \in \Omega} \lvert u(x) \rvert + \sup_{\substack{x, y \in \Omega \\ x \neq y}} \frac{\lvert u(x) - u(y) \rvert}{\lVert x - y \rVert^{\lambda}}.$$

Here the Hölder coefficient is obviously bounded by the full Hölder norm,

$$\lvert u \rvert_{C^{0,\lambda}(\Omega)} \le \lVert u \rVert_{C^{0,\lambda}(\Omega)}.$$

Furthermore, following from (Equation 12), an immediate conclusion can be formulated as follows.

Since and then

## 3 Hölder Continuity and TV of a Smooth Function

We now come to the point where we start establishing the relations between Hölder continuity and the TV of a function on $\Omega$. The following theorems will also serve us in determining an implementable and unique regularization parameter appearing in the minimization problem (Equation 1). We emphasize a very important assumption: we always work with a continuous function on a compact domain, which is therefore uniformly continuous. This fact will allow us to interchange the necessary operations in order to obtain the desired results in what follows.

Recall our vector notations in Section 2.1. Then, for a fixed component, the componentwise Hölder continuity is given by

By the definition of Euclidean norm in (Equation 9),

So this implies

Here, the last equality in the chain is rather convenient to present since Obviously, for any pair of points there exists such that s = Then,

we have

Recall that our function is continuous over the compact domain, which makes it uniformly continuous on the same domain. Then we are allowed to perform the necessary interchange of operations. Now, passing to the limit on both sides with respect to each component,

Again, the last inequality has been obtained from the fact that the sum of the components always remains greater than each individual component. Now, integrating both sides over the compact domain yields

which is, to be more precise,

since in

This shows that the Hölder coefficient of a function is an approximation to the total variation of the same function. In the following theorems, we will establish the reverse direction of this statement. To do so, we will make use of Lipschitz continuity, which is the specific case of Hölder continuity in (Equation 6) with $\lambda = 1$.
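The claimed interplay between the Hölder coefficient (here in the Lipschitz case $\lambda = 1$) and the total variation can be observed numerically. The sketch below is an illustration under discretization assumptions, not part of the proofs: for $u(x) = \sin(2\pi x)$ on $[0,1]$, the total variation is $4$ and is bounded by the Lipschitz coefficient times the measure of the domain.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
u = np.sin(2.0 * np.pi * x)      # Lipschitz on [0, 1] with constant 2*pi

# Discrete total variation: sum |u(x_{i+1}) - u(x_i)| -> integral of |u'| dx,
# which equals 4 for this function.
tv = np.sum(np.abs(np.diff(u)))

# Discrete estimate of the Lipschitz (lambda = 1 Hölder) coefficient.
lip = np.max(np.abs(np.diff(u)) / np.diff(x))

# The coefficient times the measure of the domain (here 1) bounds the TV.
assert tv <= lip * 1.0
```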

As introduced in Section 2 by (Equation 5),

This inequality has been obtained by using the following simple identity

for This implies

To arrive at (Equation 14), set and lastly Now by the definition of partial derivative in the componentwise sense,

The gradient of the functional over the compact domain is valid componentwise. Therefore, we continue with our proof in the unified form. First observe that, by the Lebesgue dominated convergence theorem,

Since the gradient is bounded on the compact domain, the Hölder continuity given by (Equation 6) is satisfied for $\lambda = 1$,

$$\lvert u(x) - u(y) \rvert \le C \lVert x - y \rVert,$$

which is Lipschitz continuity. Then (Equation 15) reads

We state the last formulation of this section, which is an immediate consequence of this theorem.

Again, by the definition of Euclidean norm in Section 2 by (Equation 5),

Analogous to the proof of Theorem ?,

since

## 4 Smoothed-TV Regularization Is an Admissible Regularization Strategy with Hölder Continuity

We will define a regularization parameter which will simultaneously enable us to prove the convergence of the smoothed-TV regularization and to estimate the discrepancy for the corresponding regularization strategy, [10]. Unlike the available literature, [1], we define the discrepancy principle for smoothed-TV regularization under a particular rule for the choice of the regularization parameter. Furthermore, with the same regularization parameter, we manage to show that smoothed-TV regularization is an admissible regularization strategy under Hölder continuity. Throughout this section, the fact that our targeted solution function is Hölder continuous will be to our benefit in providing an implementable regularization parameter for a computerized environment. Hereafter, the component notation is simplified for the sake of readability.

To be able to show the convergence of the regularized minimizer, we will refer to the Bregman divergence. In Proposition ?, we have demonstrated the relation between strong convexity and convexity. The convexity of the smoothed total variation penalizer has been established in [1]. We will ensure the strong convexity of the same penalizer in the following formulation.
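The convexity properties of the smoothed-TV integrand can be inspected directly on its one-dimensional profile $\varphi(s) = \sqrt{s^2 + \beta}$: its second derivative is $\beta/(s^2+\beta)^{3/2} > 0$, so $\varphi$ is strictly convex, and on any bounded range $\lvert s \rvert \le M$ it is strongly convex with modulus $\beta/(M^2+\beta)^{3/2}$. The following sketch checks this numerically; all constants are illustrative assumptions.

```python
import numpy as np

beta = 1e-2
phi = lambda s: np.sqrt(s**2 + beta)       # integrand of the smoothed-TV penalty

# phi''(s) = beta / (s^2 + beta)^(3/2) is strictly positive, so phi is
# strictly convex; on |s| <= M it is strongly convex with the modulus below.
phi_pp = lambda s: beta / (s**2 + beta) ** 1.5

s = np.linspace(-5.0, 5.0, 1001)
assert np.all(phi_pp(s) > 0)

# Midpoint strong-convexity check on |s| <= 5:
# phi((a+b)/2) <= phi(a)/2 + phi(b)/2 - (mu/8) * (a - b)^2
M = 5.0
mu = beta / (M**2 + beta) ** 1.5
a, b = -3.0, 4.0
lhs = phi((a + b) / 2)
rhs = 0.5 * phi(a) + 0.5 * phi(b) - (mu / 8) * (a - b) ** 2
assert lhs <= rhs
```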

It suffices to prove that To avoid confusion in the calculations, we will make an assignment where According to Leibniz integral rule, calculating and are equivalent to each other. Then

and likewise

Obviously, for any

By the definition of

Now choose to have,

Apply Hölder inequality to have,

since for any By Corollary ?, we have already obtained the upper bound for the second integral on the right hand side. Then,

Hence, the positive real-valued functional is defined by,

An immediate consequence, making use of the function space setting, is formulated below.

Since there exists a constant satisfying the bound above, it then follows from the calculations in the proof of Theorem ?,

### 4.1 Discrepancy principle for the smoothed-TV regularizer

We are able to evaluate the fixed coefficient in the discrepancy principle for the smoothed-TV penalty in the problem (Equation 1). To do so, we need to assume that the target function belongs to a suitable smoothness class.

Moreover, in order to obtain a precise upper bound, we will need to focus on our specified penalty. The regularized solution to the problem (Equation 1) is the minimizer of the objective functional over all admissible functions. In other words,

Then

Since the true data satisfying the operator equation lies in some ball, *i.e.* $\lVert f - f^{\delta} \rVert \le \delta$, then (Equation 17) reads,

Further development of this estimation will be carried out by means of Theorem ?, as formulated below.

From the calculations in (Equation 18) and a quick adaptation of Theorem ?, it is first obtained that

Then, with the given rule for the regularization parameter in ( ?),

which is the first result. It is not difficult to obtain the second part of the theorem. Analogous to Corollary ?, observe that there exists a constant such that
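To illustrate the mechanics of the Morozov discrepancy principle independently of the smoothed-TV penalty, the following sketch selects the regularization parameter for a simple diagonal linear operator with a quadratic (Tikhonov-type) penalty, where the regularized solution is available in closed form; it finds $\alpha$ with $\lVert \mathcal{T}u_\alpha - f^\delta\rVert = \tau\delta$ by bisection on the monotone residual. All names and constants are assumptions for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
t = np.linspace(0.1, 1.0, n)       # singular values of a diagonal forward operator
u_true = np.sin(3.0 * t)
delta = 1e-2
f = t * u_true
f_delta = f + delta / np.sqrt(n) * rng.standard_normal(n)   # ||f - f_delta|| ~ delta

def residual(alpha):
    """Closed-form Tikhonov-type solution for the diagonal operator,
    u_alpha = t * f_delta / (t^2 + alpha), and its data misfit."""
    u_alpha = t * f_delta / (t**2 + alpha)
    return np.linalg.norm(t * u_alpha - f_delta)

# Morozov discrepancy principle: pick alpha with ||T u_alpha - f_delta|| = tau * delta.
tau = 1.5
lo, hi = 1e-12, 1e3                # residual(alpha) is increasing in alpha
for _ in range(200):               # bisection (geometric mean) on the monotone residual
    mid = np.sqrt(lo * hi)
    if residual(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
alpha_star = np.sqrt(lo * hi)
```

The geometric-mean bisection is a convenient choice because the sensible range of $\alpha$ spans many orders of magnitude.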