Minimax theory for a class of non-linear statistical inverse problems

Kolyan Ray and Johannes Schmidt-Hieber
Leiden University
Email: k.m.ray@math.leidenuniv.nl, schmidthieberaj@math.leidenuniv.nl

The research leading to these results has received funding from the European Research Council under ERC Grant Agreement 320637.
Abstract

We study a class of statistical inverse problems with non-linear pointwise operators motivated by concrete statistical applications. A two-step procedure is proposed, where the first step smoothes the data and inverts the non-linearity. This reduces the initial non-linear problem to a linear inverse problem with deterministic noise, which is then solved in a second step. The noise reduction step is based on wavelet thresholding and is shown to be minimax optimal (up to logarithmic factors) in a pointwise function-dependent sense. Our analysis is based on a modified notion of Hölder smoothness scales that are natural in this setting.

AMS 2010 Subject Classification:

Primary 62G05; secondary 62G08, 62G20.

Keywords:

Non-linear statistical inverse problems; adaptive estimation; nonparametric regression; thresholding; wavelets.

1 Introduction

We study the minimax estimation theory for a class of non-linear statistical inverse problems where we observe the path arising from

(1.1)

Here, is a known strictly monotone link function, is a known linear operator mapping into a univariate function, and denotes a standard Brownian motion. The number plays the role of the sample size, and asymptotic statements refer to it tending to infinity. The statistical task is to estimate the regression function from the data. Model (1.1) includes the classical linear statistical inverse problem as the special case of a linear link function.

Various non-Gaussian statistical inverse problems can be rewritten in the form (1.1). A strong notion to make this rigorous is to prove that the original model converges to a model of type (1.1) with respect to the Le Cam distance. This roughly means that, for large sample sizes, such a model transformation does not lead to a loss of information about the unknown function. Most such results are established in the direct case where the operator is the identity, but as mentioned in [33], such statements can often be extended to the inverse problem setting (see [28] for an example). A particularly interesting example covered within this framework is Poisson intensity estimation, where we observe a Poisson process whose intensity is determined by the unknown function. This can be rewritten in the form (1.1); see Section 5. Reconstructing information from Poisson data has received a lot of attention in the literature due to its applications in photonic imaging, see Vardi et al. [36], Cavalier and Koo [11], Hohage and Werner [20], and Bertero et al. [2]. Other examples of models of type (1.1) include density estimation, density deconvolution, spectral density estimation, variance estimation, binary regression and functional linear regression. For further details on the approximation of models and additional examples, see Section 5.

For linear link functions, we recover the well-known linear statistical inverse problem. In this case, the classical approach is to project the data onto the eigenbasis of the operator and to use the first (weighted) empirical basis coefficients to reconstruct the signal (cf. the survey paper Cavalier [10]). For non-linear link functions, spectral methods cannot be applied to obtain rate optimal estimators and we propose to pre-smooth the data instead. Noise reduction as a pre-processing step for inverse problems has been proposed in different contexts by Bissantz et al. [3], Klann et al. [23], Klann and Ramlau [24] and Mathé and Tautenhahn [27, 26]. Let us briefly sketch the idea in the linear case. Suppose that the signal has a given Hölder (or Sobolev) regularity and that the operator has a known degree of ill-posedness, mapping smooth signals into a space of smoother functions. The minimax rate (possibly up to logarithmic factors) for estimating the smoothed signal from the data is then known. Using pre-smoothing, we thus have access to

(1.2)

where the remainder is small with high probability. Reconstruction of the regression function from the pre-smoothed data may now be viewed as a deterministic inverse problem with a known noise level. In such a deterministic setup, the regression function can be estimated at a rate (cf. Natterer [29], Engl et al. [15], Section 8.5) which coincides with the minimax rate of estimation (cf. [10]). Reducing the original statistical inverse problem to a deterministic inverse problem via pre-smoothing of the data therefore leads to rate-optimal procedures. The advantage of pre-smoothing the data is that it is quite robust to all sorts of model misspecifications, in particular outliers in the data. A drawback of the method is of course that for the second step we need to know the order of the noise level, and thus the smoothness of the underlying functions. We discuss various approaches to this in Section 4.
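
The following minimal sketch illustrates the two-step strategy under illustrative assumptions that are not taken from the text: the operator is one-fold integration on the unit interval, the link is a square-root-type link in the spirit of the Poisson case of Section 3.1, pre-smoothing is a simple moving average standing in for the wavelet thresholding of Section 3, and the Tikhonov regularisation parameter is an ad hoc choice rather than one calibrated to the noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 1_000                       # "sample size" and grid size
t = np.linspace(0, 1, m)
f = 1 + 0.5 * np.sin(2 * np.pi * t)        # unknown regression function
K = np.tril(np.ones((m, m))) / m           # K = one-fold integration, discretised
Kf = K @ f
y = 2 * np.sqrt(Kf) + rng.normal(scale=np.sqrt(m / n), size=m)  # discretised data of type (1.1)

# Step 1: pre-smooth the data and invert the (injective) link pointwise.
window = 25
g_hat = np.convolve(y, np.ones(window) / window, mode="same")
Kf_hat = (np.maximum(g_hat, 0.0) / 2) ** 2        # inverse link y -> y^2 / 4

# Step 2: treat Kf_hat as data of a deterministic inverse problem and
# invert the operator with Tikhonov regularisation.
alpha = 1e-3                                      # ad hoc regularisation parameter
f_hat = np.linalg.solve(K.T @ K + alpha * np.eye(m), K.T @ Kf_hat)
print(float(np.max(np.abs(f_hat - f))))           # sup-norm reconstruction error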

Following this two-step strategy, the statistical problem reduces to finding error bounds in the direct problem (i.e. the operator is the identity). To pre-smooth the data, we apply the following general plug-in principle: construct an estimator of the transformed signal by hard wavelet thresholding and take its image under the inverse link (recall that the link function is injective) to be the pre-smoothed version of the data. Analyzing this method for non-linear link functions is a challenging problem, since both the amount of smoothing and the minimax estimation rates are very sensitive to the local regularity properties of the signal. Due to this individual behaviour depending on the non-linear link function, we have restricted ourselves to those that are most relevant from a statistical point of view, namely the link functions arising in the Poisson, Bernoulli and Gaussian variance cases of Sections 3.1-3.3 (see Section 5 for more details). While we take a similar overall approach for these three link functions, the different natures of their singular points mean that the details of each case must be treated separately. Indeed, we believe that there is no feasible proof that covers them all at once.

We remark that two link functions in (1.1) can be considered equivalent if they can be transformed into one another via sufficiently smooth functions, in which case the minimax rates of estimation are identical up to constants. Since we concern ourselves with the rate and not the minimax constants, our results apply to any link function that is equivalent to the specific link functions we study. In particular, the multiplicative constants appearing in these three cases have no impact on the rates.

We seek to control the difference between the pre-smoothed data and the true transformed signal, that is, the noise level in the deterministic inverse problem (1.2). Due to the non-linearity of the link function, the rates are highly local and we establish matching pointwise upper and lower bounds that depend on the smoothness of the signal and on its local value. From this, bounds in integrated loss can be deduced. Local rates of this type allow one to capture spatial heterogeneity of the function and were considered only recently for density estimation [31], which is related to model (1.1) with a square-root-type link function.

Surprisingly, in each of the three cases considered here there are two regimes for the local convergence rate. We call a point irregular for the link function if the link is singular there. For example, a square-root link has the irregular point 0. Away from any irregular point, we obtain a local rate corresponding to the “classical” nonparametric minimax rate. However, close to an irregular point we obtain a different type of convergence rate which is related to estimation in irregular models. For the direct problem, we show that the derived rates are minimax optimal up to logarithmic factors.

When dealing with non-linear link functions, classical Hölder smoothness spaces can only be employed up to a certain degree. For instance, it is well known that if a non-negative function has Hölder smoothness β, then its square root has Hölder smoothness β/2; however, this breaks down for the usual definition of Hölder smoothness once β is too large (cf. [4]). So far, the common approach to this issue (for example within density estimation [30]) is to assume that the function is uniformly bounded away from zero, which is an artificial restriction from a modelling perspective. There has been recent progress in removing this constraint [31], though such results cannot be extended beyond small smoothness indices. We take a different approach here and consider a suitable modification of Hölder scales [32] for non-negative functions which maintains this relationship. This framework is natural in our setting and is expanded upon in Section 2.

The article is organized as follows. The modified Hölder spaces are introduced in Section 2, the main upper and lower bound results are presented in Section 3 with further discussion on extending this approach to the full inverse problem setting in Section 4. Section 5 provides additional motivation for model (1.1) and examples of statistical problems that can be rewritten in this form. Proofs and technical results are deferred to Sections A and B in the appendix.

Notation: For two sequences , of numbers, we write if is bounded away from zero and infinity as . For two numbers arbitrarily selected from some set , we also write if is bounded away from zero and infinity for all (the set will always be clear from the context). The maximum and minimum of two real numbers are denoted by and , while is the greatest integer strictly smaller than . Let be the Kullback-Leibler divergence for non-negative densities , relative to a dominating measure . Recall that in the Gaussian white noise model (1.1), the Kullback-Leibler divergence between two measures generating such processes with regression functions , equals .

2 Hölder spaces of flat functions

In this section, we consider a restricted Hölder class of functions whose derivatives behave regularly in terms of the function values. This class has better properties under certain pointwise non-linear transformations compared to classical Hölder spaces (cf. [32]) and is therefore appropriate as a parameter space for adaptive estimation in model (1.1). In particular, it allows one to express pointwise convergence rates in terms of the function values alone.

Let be the usual Hölder seminorm and consider the space of -Hölder continuous functions on ,

where denotes the -norm. Define the following seminorm on :

with defined as and for . The quantity measures the flatness of a function near zero in the sense that if the function value is small, then the derivatives must also be small in a neighbourhood of that point. Let

and consider the space of non-negative functions

This space contains, for instance, the constant functions, all functions that are uniformly bounded away from zero, and suitable powers of infinitely differentiable non-negative functions. Due to the non-negativity constraint, the class is no longer a vector space. However, it can be shown that it is a convex cone, that is, it is closed under linear combinations with positive weights. Moreover, the quantity above defines a norm in the sense of Theorem 2.1 in [32]. For small smoothness indices, the additional derivative constraint is in fact always satisfied: the class contains all non-negative functions that can be extended to a Hölder function of the same regularity (Theorem 2.4 of [32]).
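
To illustrate the flatness idea numerically in the simplest first-derivative case (this is only an informal check of the description above, not the precise seminorm):

```python
import numpy as np

# Flatness near zero: derivatives should be small wherever the function is small.
# f1(x) = x^2 is flat at 0, since f1'(x) = 2x = 2*sqrt(f1(x)), whereas the linear
# function f2(x) = x is not: its derivative stays equal to 1 however small f2 gets.
x = np.linspace(1e-6, 1, 1_000)
f1, d1 = x**2, 2 * x
f2, d2 = x, np.ones_like(x)
print(float(np.max(d1 / np.sqrt(f1))))   # bounded (= 2): derivative controlled by the function value
print(float(np.max(d2 / np.sqrt(f2))))   # ~ 1000: blows up near 0, so f2 is not flat in this sense
```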

The space allows one to relate the smoothness of the regression function to that of its image under the link in a natural way for certain non-linear link functions of interest (see Lemmas 5 and 7). In contrast, the classical Hölder spaces behave poorly in this respect. For example, there exist infinitely differentiable non-negative functions whose square roots fail to have classical Hölder smoothness beyond a fixed level (Theorem 2.1 of [4]). Based on observing (1.1) (with the identity operator), one cannot exploit higher order classical Hölder smoothness of the regression function; this is a fundamental limitation of the square-root transform. An alternative approach to exploit extra regularity would be to impose stronger flatness-type assumptions on only the local minima of the function, in particular forcing them all to be zero (e.g. [5, 6]). However, the approach taken here is more flexible, permitting functions to take small non-zero values.

For the Bernoulli case (3.7), we must make a further restriction since the function takes values in [0, 1]:

In particular, this implies that there exists such that for . The class simply ensures the same behaviour near the boundary value 1 as near 0, since both 0 and 1 are irregular points of the link function.

3 Main results for the direct case

In this section we derive matching upper and lower bounds for pointwise estimation in the pre-smoothing step of the full statistical model (1.1). We are thus concerned with the pointwise recovery of the transformed signal from observation (1.1) or, equivalently, with obtaining pointwise bounds for the noise level in the deterministic inverse problem (1.2). For this it suffices to consider the direct model where we observe the path with

(3.1)

where is the known strictly monotone link function.

Our estimation procedure is based on a wavelet decomposition of the observations. Let the scaling and wavelet functions be bounded, compactly supported and regular of a given positive integer order (for a general overview and more details see e.g. [19]). The scaled and dilated functions with support intersecting the unit interval can be modified to form an orthonormal wavelet basis of the same regularity by correcting for boundary effects, as explained in Theorem 4.4 of [12]. This basis can be written with the wavelet basis functions collected by resolution level j. Due to the compact support of the wavelet, the cardinality of the set of basis functions at resolution level j is bounded by a constant multiple of 2^j.

By projecting the observation (3.1) onto the wavelet basis, it is statistically equivalent to observe

(3.2)

where and are i.i.d. standard normal random variables.

We deal specifically with three cases (3.4), (3.7) and (3.9) below. In each case, we construct an estimator using a two-stage procedure involving thresholding the empirical wavelet coefficients in (3.2). In the Gaussian variance case an additional step is needed; see Section 3.3.

  1. Let satisfy and estimate by the hard wavelet thresholding estimator

    (3.3)

    for some .

  2. Consider the plug-in estimator obtained by applying the inverse link function (a minimal implementation sketch follows this list).
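
The sketch below illustrates the two steps under illustrative assumptions that are not prescribed by the text: the data are an equally spaced discretisation of (3.1), the wavelet family, resolution depth and threshold constant are ad hoc choices rather than those required by the theory, and the link is taken to be a square-root-type link as in the Poisson case of Section 3.1.

```python
import numpy as np
import pywt

def hard_threshold_estimator(y, n, wavelet="db4", kappa=2.0):
    """Step 1: hard wavelet thresholding of noisy observations y of the drift.

    y is assumed to be a discretisation of (3.1) on m equally spaced points, so
    that each rescaled empirical wavelet coefficient carries noise of standard
    deviation roughly 1/sqrt(n).  The threshold kappa*sqrt(log(n)/n) is an
    illustrative stand-in for the threshold in (3.3).
    """
    m = len(y)
    coeffs = pywt.wavedec(y, wavelet, mode="periodization")
    thr = kappa * np.sqrt(np.log(n) / n)
    coeffs = [coeffs[0]] + [
        np.sqrt(m) * pywt.threshold(c / np.sqrt(m), thr, mode="hard")
        for c in coeffs[1:]                      # keep scaling coefficients, threshold the rest
    ]
    return pywt.waverec(coeffs, wavelet, mode="periodization")

# Step 2: plug-in estimator for a square-root-type link g(x) = 2*sqrt(x).
rng = np.random.default_rng(1)
n, m = 50_000, 1_024
t = np.arange(m) / m
f = np.sin(np.pi * t) ** 2                        # non-negative test signal
y = 2 * np.sqrt(f) + rng.normal(scale=np.sqrt(m / n), size=m)
f_hat = (np.maximum(hard_threshold_estimator(y, n), 0.0) / 2) ** 2
```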

3.1 The Poisson case

For a square-root-type link function, model (3.1) can be written

(3.4)

This model corresponds to nonparametric density estimation [30, 8, 33], Poisson intensity estimation [8] and certain nonparametric generalized linear models with Poisson noise [18] (see Section 5 for more details). In model (3.4) we show that the rate of estimation for at a point is

(3.5)

giving two regimes depending on the size of the function value. Note that the transition between these two regimes occurs at a sample-size dependent threshold. The presence of two regimes is caused by the non-linearity of the link, which is smooth away from 0 but has an irregular point at 0. Indeed, the smoothness of the transformed function bears much less resemblance to that of the original function near 0 (see Lemma 5.3 of [32]).

The first regime occurs for small values of the function and gives convergence rates which can be faster than the parametric rate. For small densities, the variance strictly dominates the bias of the estimator so that the rate is purely driven by stochastic error, hence the lack of dependence on the smoothness. The second regime reflects the “standard” nonparametric regime and yields the usual nonparametric rate, albeit with pointwise dependence. Recall that the constant involved is the threshold parameter of the wavelet thresholding estimator.

Theorem 1.

For any and , the estimator satisfies

where is given in (3.5).

Thus, with high probability, the estimation error is bounded by a constant multiple of the rate simultaneously over all points. The estimator is fully adaptive, both spatially and over Hölder smoothness classes. Since the constant in Theorem 1 is uniform, we also obtain convergence rates in integrated loss,

which are optimal up to logarithmic factors.

The rate (3.5) is the same (up to logarithmic factors) as that attained in density estimation [31], which at first sight might be expected due to the well-known asymptotic equivalence of density estimation and the Gaussian white noise model (3.4), cf. [30, 8]. However, the situation is in fact more subtle, with asymptotic equivalence only holding in the second regime of (3.5) and under further conditions (see [33] for more details).

For density estimation, local rates of the type (3.5) have previously been obtained only for low smoothness [31]. This is consistent with our results, since for such smoothness indices our class and the space of all functions that can be extended to a non-negative Hölder function of the same regularity in fact coincide (Theorem 2.4 of [32]). The authors in [31] deal with higher derivatives in one specific situation, namely points near the support boundary, where the function necessarily satisfies a flatness type condition allowing one to quantify the behaviour of the higher order derivatives.

For practical applications, it may be useful to consider a bias corrected version of the estimator ,

To see this, recall that the inverse link function is quadratic. For each selected empirical wavelet coefficient, the squared noise therefore induces an upwards bias of order 1/n, which we correct by subtracting a corresponding term.
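
A quick Monte Carlo check of this effect (a sketch assuming, as the Poisson case suggests, a square-root-type link with inverse y ↦ y²/4 and coefficient noise of variance 1/n):

```python
import numpy as np

# Squaring a noisy coefficient theta + eps with eps ~ N(0, 1/n) inflates its
# expectation by the noise variance 1/n; after applying the inverse link
# y -> y^2/4 the induced upward bias is therefore 1/(4n) per kept coefficient.
rng = np.random.default_rng(2)
n, theta = 1_000, 0.3
eps = rng.normal(scale=1 / np.sqrt(n), size=1_000_000)
print(np.mean((theta + eps) ** 2 / 4) - theta ** 2 / 4)   # ~ 1/(4n) = 2.5e-4
```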

The rate (3.5) is an upper bound for the pointwise rate of estimation, and we have a corresponding lower bound

which agrees with the rate in (3.5) up to logarithmic factors.

Theorem 2.

For any , , and any sequence with ,

where the infimum is taken over all measurable estimators of .

In the Gaussian white noise model, it is well known that the adaptive estimation rate for pointwise loss differs from the minimax rate by a logarithmic factor (cf. [25] and Theorem 3 in [9]). Adaptation to the smoothness index therefore imposes an additional logarithmic factor that is unavoidable in the convergence rate. We believe that a similar phenomenon leads to the additional logarithmic terms in the upper bound and that this rate is sharp.

In Theorem 2, we require the sequence to be strictly in the interior of the parameter space. The proof of Theorem 2 gives some insight into the rates. In case (3.4), the Kullback-Leibler divergence corresponds to the Hellinger distance. Away from 0, this behaves like the L2-distance, thereby leading to the usual classical nonparametric rate. As the function approaches zero, however, testing between two functions depends on the L1-distance and we therefore obtain the same rates as for irregular models (cf. [21]).

The statement of Theorem 2 is considerably stronger than standard minimax lower bounds, as it holds for all local parameter spaces with an arbitrary sequence in the interior of the parameter space. To see the difference, consider a model and suppose we want to estimate the parameter with respect to some loss. Recall that a rate is a lower bound if there are two sequences of parameters such that

(3.6)

(cf. [34], Theorem 2.2 (iii)). This is quite a weak notion, since it says only that somewhere on the parameter space the estimation rate cannot be improved. We would of course prefer a notion for which the rate holds everywhere on the parameter space. In Theorem 2 we require the rate to be a lower bound over local parameter spaces around arbitrary sequences in the interior. In particular, this implies that for any such sequence we can find a second sequence satisfying (3.6).

This change of definition makes a big difference in terms of the lower bounds obtained. It is not hard to see, for instance, that the classical nonparametric rate is a minimax lower bound over a Hölder ball for pointwise estimation. This rate is attained if the true function is bounded away from zero at the point of interest. However, it is not a minimax lower bound in the local sense of Theorem 2. Consider a sequence of functions whose values at the point of interest tend to zero. Then, for large sample sizes, the local parameter space eventually contains only functions taking arbitrarily small values at that point. Since we know from our upper bound in Theorem 1 that for such functions the rate is faster than the classical nonparametric rate, the latter can no longer be a lower bound.

Obviously, one would like to restrict the parameter space to even smaller sets, by considering for instance shrinking Kullback-Leibler neighbourhoods around the truth. In this case, however, the rates obtained for the upper and local lower bounds no longer match in general. To see this, consider the extreme case where the local parameter space consists of a single element. The rate obtained from the lower bound is zero, but there is of course no method that achieves zero risk over all parameters.

In order to see that flatness type conditions are necessary to achieve the rate of estimation in (3.5), we consider the class of non-negative linear functions on the unit interval. This class contains functions that have arbitrary Hölder smoothness but which are not flat near zero. For this class, the pointwise rate of estimation is bounded from below as shown in the following result.

Theorem 3.

Denote by the class of non-negative linear functions on with bounded coefficients. Then there exists such that,

where the infimum is taken over all measurable estimators of .

This should be compared to the rate (3.5), which under the flatness constraint can be considerably faster in the corresponding regime. This example also shows that in model (3.4) we do not get the fastest reconstruction for very smooth functions, such as linear functions, but we do for functions that additionally satisfy flatness constraints near zero.

3.2 The Bernoulli case

For the link function arising in the Bernoulli case, model (3.1) can be written

(3.7)

which is motivated by binary regression [18] (see Section 5 for more details). We show that the rate of estimation for in (3.7) at a point is

(3.8)

again giving two regimes depending on the size of the function value. We could alternatively replace the function by one minus the function in the expression above. Note that the first regime occurs if and only if the function value is close to 0 or 1. We again see the effect of the non-linearity of the link on the rate, since both 0 and 1 are irregular points (caused by the corresponding factors in the link function).

Given that (3.7) arises from a binary response model [18], it is natural that the rate above behaves symmetrically in the function and its complement. The rates are similar to those in the Poisson case (3.5), differing only in the second regime due to this symmetry. This means we again observe a superefficiency phenomenon near 0 and 1.

Theorem 4.

For any and , the estimator satisfies

where is given in (3.8).

Again note that the estimator is fully adaptive, both spatially and over Hölder smoothness classes. We have a corresponding lower bound to the upper bound (3.8),

which again agrees with the upper bound up to logarithmic factors.

Theorem 5.

For any , , and any sequence with ,

where the infimum is taken over all measurable estimators of .

3.3 The Gaussian variance case

For the logarithmic link function, model (3.1) can be written

(3.9)

which is motivated by Gaussian variance estimation [18] and spectral density estimation [17] (see Section 5 for more details). We show that the rate of estimation for in (3.9) at a point is

(3.10)

where the additional factor grows strictly subpolynomially, that is, more slowly than any positive power of the sample size. For its exact expression, see the proof of Theorem 6 below.

The plug-in estimator is unable to adapt to small function values; small fluctuations in the function are magnified, rendering estimation by wavelet thresholding unstable. However, this magnification also permits easier detection of this regime and we take advantage of this to propose a modified (non-adaptive) estimator. Define the local averaging process:

where . For the boundary values, we truncate the integral so that for ,

with the corresponding upper truncation near the right endpoint, and extended to these ranges accordingly. The process is used to detect which regime of (3.10) occurs, since the zero estimator and the plug-in estimator are optimal in the first and second regimes respectively. This yields the estimator

where is as in (3.3) and .
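
A sketch of this regime-detection idea, under heavily simplified assumptions that are not taken from the text: the link is the plain logarithm (ignoring constants), the pre-smoother is a moving average standing in for (3.3), and both the bandwidth and the detection cutoff below are ad hoc choices, not the tuning of this section.

```python
import numpy as np

def moving_average(x, window):
    return np.convolve(x, np.ones(window) / window, mode="same")

def variance_case_estimator(y, n, h=0.05):
    """y: discretised observations of log(f) plus noise on a uniform grid."""
    m = len(y)
    local_avg = moving_average(y, max(1, int(h * m)))   # local averaging process
    g_hat = moving_average(y, 25)                       # stand-in for the estimator (3.3)
    plug_in = np.exp(g_hat)                             # plug-in estimator exp(g_hat)
    cutoff = -0.5 * np.log(n)                           # hypothetical detection threshold
    # small-value regime detected: use the zero estimator; otherwise plug in.
    return np.where(local_avg < cutoff, 0.0, plug_in)

rng = np.random.default_rng(3)
n, m = 20_000, 2_000
t = np.linspace(0, 1, m)
f = np.exp(-50 * (t - 0.5) ** 2) + 1e-6                 # variance function, tiny near the edges
y = np.log(f) + rng.normal(scale=np.sqrt(m / n), size=m)
f_hat = variance_case_estimator(y, n)
```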

Theorem 6.

For any and , the estimator satisfies

where is given in (3.10).

We note that the estimator is spatially adaptive, but does not adapt over Hölder classes. The difficulty in adapting to smoothness compared to the Poisson and Bernoulli models is due to the nature of the irregular point 0. In the other models, the singularity reduces the smoothness inherited from the original function, but still maps small values to small values. On the contrary, the logarithm is unbounded near 0, which causes the difference in the rate for small values. This highlights how the nature of the non-linearity affects estimation in the model.

The corresponding lower bound is

which agrees with the upper bound in (3.10) up to subpolynomial factors.

Theorem 7.

For any , , and any sequence with ,

where the infimum is taken over all measurable estimators of .

Since the logarithm magnifies changes near 0, the first regime in (3.10) depends much more closely on the true function value than in the Poisson or Bernoulli cases. In those cases one can always construct an alternative that has the required pointwise distance from the truth and Kullback-Leibler distance of order one from it. In contrast, here any alternative that is more than a constant multiple of the rate away from the truth at the given point will have diverging Kullback-Leibler distance from it. This is due to the extremely sensitive irregular point of the link function.

4 Extension to inverse problems

In the previous section, we derived local rates in the direct case. For the specific link functions considered, we therefore have explicit and rate optimal bounds on the local estimation error that hold with high probability. Except for the logarithmic link function, the rates are adaptive, that is, the smoothness index of the unknown function is not assumed known.

Recall the full inverse problem (1.1). In this case the drift function is the link applied to the image of the regression function under the operator, and by applying the wavelet thresholding procedure described above we obtain an estimator of it which, as explained in the introduction, can be thought of as pre-smoothed data. Viewing this estimator as the new observation in a deterministic inverse problem, we can control the local noise level in the sense that

with high probability. In order to solve the deterministic inverse problem, that is to reconstruct the function from the pre-smoothed data, we need to be able to compute the order of the noise level. This can then be used for the choice of the regularization parameter or the stopping criterion in the regularization procedure. The next result shows that, with high probability, a plug-in estimator of the noise level does not change its order.

Theorem 8.

On the event where is one of the three scenarios considered in Sections 3.1-3.3, there is a constant that is independent of such that

and

with as in (3.10).

We can therefore estimate the order of the noise level using the plug-in estimator, since by Theorems 1, 4 and 6 the required bound holds with high probability.

The order of the noise level still depends on the smoothness index of the transformed signal, which might be unknown. At the moment there is no completely satisfactory answer to this problem and we briefly outline a practical approach below. If the operator is iterated integration, then the smoothness of the image can be deduced from that of the original function, as proved in Theorem 4.1 of [32]. If the regression function has known smoothness, we can then conclude the smoothness of the transformed signal. For convolution type operators, we believe that similar results hold in view of the Fourier multiplier theorems on Besov spaces (and thus also for Hölder spaces) in [16], Section 4 and in [1].

From a practical point of view, we can guess the smoothness of by studying how many empirical wavelet coefficients are kept in the series estimator (3.3). For smoother signals, the wavelet coefficients decay more quickly and so fewer coefficients are taken into account by the estimator. Estimators based on such ideas typically work in practice but have poor properties in the minimax sense over all Hölder functions, since these function spaces contain rough functions that look much smoother at many resolution levels.
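
One possible implementation of this heuristic (a sketch only; the slope-based guess below is one way to read the idea and is not a procedure analysed in the paper): the level-wise maxima of the wavelet coefficients of a β-smooth signal decay geometrically in the resolution level, so regressing their logarithms on the level, over the levels where coefficients survive the threshold, gives a crude guess for β.

```python
import numpy as np
import pywt

def guess_smoothness(y, n, wavelet="db4", kappa=2.0):
    """Crude smoothness guess from the decay of retained wavelet coefficients.

    y: discretised noisy observations (length a power of two); n: sample size,
    so rescaled empirical coefficients carry noise of sd roughly 1/sqrt(n).
    For a beta-smooth signal the level-wise coefficient maxima decay roughly
    like 2^{-j(beta + 1/2)} in the (relative) resolution level j.
    """
    m = len(y)
    coeffs = pywt.wavedec(y, wavelet, mode="periodization")
    thr = kappa * np.sqrt(np.log(n) / n)
    levels, log_maxima = [], []
    for j, c in enumerate(coeffs[1:]):          # detail levels, coarse to fine
        peak = np.max(np.abs(c)) / np.sqrt(m)   # rescale to the continuous normalisation
        if peak > thr:                          # this level still retains coefficients
            levels.append(j)
            log_maxima.append(np.log2(peak))
    slope = np.polyfit(levels, log_maxima, 1)[0]    # ~ -(beta + 1/2)
    return -slope - 0.5

rng = np.random.default_rng(4)
n, m = 100_000_000, 1_024
t = np.arange(m) / m
y = np.abs(np.sin(2 * np.pi * t)) + rng.normal(scale=np.sqrt(m / n), size=m)
print(guess_smoothness(y, n))    # crude guess for a Lipschitz (beta = 1) type signal
```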

5 Details and examples for model (1.1)

To motivate model (1.1), let us first consider a regular parametric problem where we observe i.i.d. copies of a random variable whose distribution depends on a real-valued parameter of interest. The maximum likelihood estimator (MLE) is, under weak assumptions, efficient and asymptotically normal in the sense that the rescaled estimation error converges to a centred normal random variable with variance equal to the inverse Fisher information. If we can find a function satisfying

(5.1)

then, using the delta method (e.g. Chapter 3 of [35]), the transformed MLE is asymptotically normal with variance one. The remainder term does not carry any information about the parameter that is relevant for asymptotic statements, and we have thus rewritten the original problem as a Gaussian model, where we observe

(5.2)

Since the Fisher information is positive for regular problems, the transformation is a strictly monotone increasing function. By rewriting the model in the way just described, we do not lose information as the sample size tends to infinity and we may thus view the transformed model (5.2) as an equivalent representation of the original problem.
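
Condition (5.1) prescribes, in particular, that the squared derivative of the transformation equals the Fisher information. The following short symbolic check verifies this for the classical variance-stabilizing transformations of three families relevant for Sections 3.1-3.3; the parametrisations below (mean, success probability and variance equal to the parameter) are the standard textbook ones and are stated here as an assumption rather than taken from the text.

```python
import sympy as sp

x = sp.symbols("x", positive=True)

# Check (g')^2 = I for the classical variance-stabilizing transformations:
# Poisson(x), Bernoulli(x) and N(0, x) with variance parameter x.
cases = {
    "Poisson":           (2 * sp.sqrt(x),            1 / x),
    "Bernoulli":         (2 * sp.asin(sp.sqrt(x)),   1 / (x * (1 - x))),
    "Gaussian variance": (sp.log(x) / sp.sqrt(2),    1 / (2 * x**2)),
}

for name, (g, fisher_info) in cases.items():
    ok = sp.simplify(sp.diff(g, x) ** 2 - fisher_info) == 0
    print(name, ok)    # True in each case
```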

This concept can be extended to nonparametric problems but the situation becomes more subtle, since the MLE does not in general exist. A very strong notion of approximation of models is convergence in Le Cam distance (cf. [18, 30]). Suppose we observe independent random variables following a distribution from an exponential family, with real-valued parameters given by the values of an unknown regression function to be estimated. Grama and Nussbaum [18] showed that under some minimal smoothness assumptions, this model converges in Le Cam distance to the Gaussian model where we observe the path with

where the link function satisfies (5.1). This choice of link is related to the variance-stabilizing transformation for the exponential family (see e.g. [7] for more details). Such a result may be viewed as a nonparametric analogue of the parametric approximation explained above. The approximation becomes better for smoother functions and deteriorates as we approach irregular points of the link, where the Fisher information tends to infinity. For the specific case of Poisson intensity estimation, it has been shown in [33] that the convergence with respect to Le Cam distance still holds true near the irregular point under certain assumptions.

We now extend this to the full notion of inverse problems. Suppose we observe independent random variables following a distribution from an exponential family, with parameters given by the values of the image of an unknown function under a linear (ill-posed) operator mapping univariate functions to univariate functions. Using the same transformation as in the direct case, we obtain the approximating model (1.1), where we observe the path with

The quality of approximation can even be better than in the direct case, since the image under the operator is typically smoother than the original function. However, there are other obstacles in the formulation which make formal convergence proofs with respect to the Le Cam distance difficult. Nevertheless, it is believed that from a practical point of view the approximation is sufficiently accurate.

The above approach has been outlined for nonparametric exponential families with fixed uniform design but is in fact more general. For instance, it is well known that nonparametric density estimation, where we observe i.i.d. copies of a random variable drawn from a Lebesgue density, can be mapped to the model (3.4) (cf. [30], [8], [33]), which is model (1.1) with a square-root-type link function and the identity operator. Following the above arguments, we may extend this to density deconvolution, which is one of the most studied statistical inverse problems. Here, we observe i.i.d. copies of the sum of two independent random variables with Lebesgue densities, one of which is known, and we aim to reconstruct the other from the data. The density of an observation is then the convolution of the two densities, and we may thus rewrite deconvolution as a Gaussian shift model of type (1.1) in which the drift involves this convolution.
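
A small numerical illustration of this last point (assuming, as in the density estimation case, a square-root-type link; the densities and the crude Riemann-sum discretisation below are illustrative choices, not part of the text):

```python
import numpy as np

# Deconvolution setup: the observation density is the convolution of the
# unknown density f with the known error density f_eps; the drift of the
# corresponding Gaussian shift model is then 2*sqrt(f * f_eps).
m = 2_000
dx = 1.0 / m
x = np.arange(m) * dx
f = 6 * x * (1 - x)                                   # unknown density on [0, 1]
eps_grid = np.arange(-m // 4, m // 4) * dx            # error density on [-1/4, 1/4]
f_eps = np.exp(-0.5 * (eps_grid / 0.05) ** 2)
f_eps /= np.sum(f_eps) * dx                           # normalise the known error density

conv = np.convolve(f, f_eps, mode="same") * dx        # (f * f_eps)(x), crude Riemann sum
drift = 2.0 * np.sqrt(conv)                           # drift of the Gaussian shift model
```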