Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems


This paper deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms: variational and statistical. In the variational approach, the reconstructed signal is the solution of an optimization problem that establishes a tradeoff between fidelity to the data and smoothness conditions via a quadratic regularization associated with a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. We give a generic form of the reconstructed signals for both approaches, allowing for a rigorous comparison of the two. We fully characterize the conditions under which the two formulations yield the same solution, which is a periodic spline in the case of sampling measurements. We also show, on simulations, that this equivalence between the two approaches remains valid for a broad class of problems. This extends the practical range of applicability of the variational method.

Index Terms: Periodic signals, variational methods, representer theorem, Gaussian processes, MMSE estimators, splines.

I Introduction

This paper deals with inverse problems: one aims at recovering an unknown signal from its corrupted measurements. To be more specific, the motivation of this work is the reconstruction of an unknown continuous-domain and periodic signal f from its noisy measurements y_m = ⟨ν_m, f⟩ + n_m for m = 1, …, M, where the ν_m are measurement functions. The goal is then to build an output signal that is as close as possible to f.

I-A Inverse Problems in the Continuous Domain

Inverse problems are often formulated in the discrete domain [1, 2, 3, 4, 5]. This is motivated by the need to manipulate digital data on computers. Nevertheless, many naturally occurring signals depend on continuous variables (e.g., time or position). This leads us to attempt recovering a signal f that depends on a continuous variable t. In contrast with the classical discrete setting, our search space for this reconstructed signal is thus infinite-dimensional [6]. Moreover, we choose a regularization based on true derivatives (as opposed to finite differences) to impose some smoothness on the reconstructed signal, a concept that is absent in the discrete setting.

When considering continuous-domain reconstruction methods, a majority of works, typically in machine learning, deal with sampling measurements. The goal is then to recover f from its (possibly noisy) values at fixed locations t₁, …, t_M. In order to investigate a more general version of inverse problems, we shall consider generalized measurements [7, 8]. They largely exceed the sampling case and include Fourier sampling or convolution (e.g., MRI, x-ray tomography [9, 10]). Our only requirement is that the measurements depend linearly on, and evolve continuously with, the unknown signal up to some additive noise, so that y_m = ⟨ν_m, f⟩ + n_m.

I-B Variational vs. Statistical Methods

In the discrete domain, two standard strategies are used to reconstruct an input signal x from its noisy measurements y = Hx + n, where the matrix H models the acquisition process [5]. The first approach is deterministic and can be traced back to the ’60s with Tikhonov’s seminal work [11]. The ill-posedness of the problem usually imposes the addition of a regularizer. By contrast, Wiener filtering is based on the stochastic modelization of the signals of interest and the optimal estimation of the targeted signal x. This paper generalizes these ideas for the reconstruction of continuous signals from their discrete measurements.

In the variational setting, the reconstructed signal is a solution to an optimization problem that imposes some smoothness conditions [12]. More precisely, the optimization problem may take the form

min_f  Σ_{m=1}^{M} (y_m − ⟨ν_m, f⟩)² + λ ‖Lf‖²_{L₂},    (1)

where L is a linear operator. The first term in (1) controls the data fidelity. The regularization term ‖Lf‖²_{L₂} constrains the function to satisfy certain smoothness properties (for this reason, the variational approach is sometimes called a smoothing approach). The parameter λ in (1) quantifies the tradeoff between the fidelity to the data and the regularization constraint.

In the statistical setting, the signal is modeled as a random process and is optimally reconstructed using estimation theory [13]. More precisely, one assumes that the continuous-domain signal is the realization of a stochastic process s and that the samples are given by y_m = ⟨ν_m, s⟩ + n_m, where n_m is a random perturbation and ν_m a linear measurement function. In this case, one specifies the reconstructed signal as the optimal statistical estimator in the mean-square sense

s̃_{MMSE} = arg min_{s̃}  E[‖s − s̃‖²_{L₂}],    (2)

where the estimators s̃ are computed from the generalized samples y₁, …, y_M. The solution depends on the measurement functions ν_m and on the stochastic models specified for s and n. In our case, the random process s is characterized by a linear operator L that is assumed to have a whitening effect (it transforms s into a periodic Gaussian white noise), while the perturbation n is i.i.d. Gaussian.

I-C Periodic and General Setting

The variational and statistical approaches have been extensively studied for continuous-domain signals defined on the infinitely supported real line. However, it is often assumed in practice that the input signals are periodic. In fact, a standard computational approach in signal processing is to extend signals of otherwise bounded support by periodization. Periodic signals also arise naturally in applications such as the parametric representation of closed curves [14, 15, 16]. This has motivated the development of signal-processing tools and techniques specialized to periodic signals in sampling theory, error analysis, wavelets, stochastic modelization, or curve representation [17, 18, 19, 20, 21, 22, 23].

In this paper, we develop the theory of the variational and statistical approaches for periodic continuous-domain signals in a very general context, including the following aspects:

  • We consider a broad class of measurement functions, with the only assumptions that they are linear and continuous.

  • Both methods refer to an underlying linear operator that affects the smoothness properties of the reconstruction. We deal with a very broad class of linear operators acting on periodic functions.

  • We consider possibly non-quadratic data fidelity terms in the smoothing approach.

I-D Related Works

The topics investigated in this paper have already received some attention in the literature, mostly in the non-periodic setting.

Reconstruction over the Real Line

Optimization problems of the form (1) appear in many fields and receive different names, including inverse problems in image processing [5], representer theorems in machine learning [24], or sometimes interpolation elsewhere. Schoenberg was the first to show the connection between (1) and spline theory [25]. Since then, this connection has been extended to other operators [26] and to the interpolation of the derivative of the signal [27, 28]. Many recent methods deal with non-quadratic regularization, especially those aimed at the reconstruction of sparse discrete [29, 30] or continuous signals [6, 31, 32, 33]. We discuss this aspect more extensively in Section VI-B.

A statistical framework requires the specification of the noise and of the signal stochastic model. The signal is then estimated from its measurements. A classical measure of the quality of an estimator is the mean-square error. This criterion is minimized by the minimum mean-square error (MMSE) estimator [13, 34]. The theory has been developed mostly for Gaussian processes and in the context of sampling measurements [35]. We are especially interested in innovation models, for which one assumes that the signal can be whitened (i.e., transformed into a white noise) by the application of a linear operator [36, 37]. Non-periodic models have been studied in many situations, including the random processes associated with differential [38, 39] or fractional operators [40]. Extensions to non-Gaussian models are extensively studied by Unser and Tafti [41].

The statistical and variational frameworks are deeply connected. It is remarkable that the solution of either problem can be expressed as spline functions in relation with the linear operator involved in regularization (variational approach) or whitening (statistical approach). Wahba has shown that the two approaches are strictly equivalent in the case of stationary Gaussian models [42]. This equivalence has also been recognized by several authors since then, as shown by Berlinet and Thomas-Agnan [35], and Unser and Blu [43]. In the non-stationary case, this equivalence is not valid any more and the existence of connections has received less attention.

Reconstruction of Periodic Signals

Some strong practical concerns have motivated the need for an adaptation of the theory to the periodic setting. Important contributions in that direction have been proposed. Periodic splines are constructed and applied to sampling problems by Schoenberg [44] and Golomb [45]. The smoothing spline approach is studied in the periodic setting by Wahba [42] for derivative operators of any order. Although the periodic extension of the classical theory is briefly mentioned by several authors [35, 42, 46], we are not aware of a global treatment. Providing a general analysis in the periodic setting is precisely what we propose in this paper.

I-E Outline and Main Contributions

Section II contains the main notations and tools for periodic functions and operators. In Section III, we state the periodic representer theorem (Theorem 1). It fully specifies the form of the solution in the variational approach in a very general setting. For the specific case of sampling measurements, we show that this solution is a periodic spline (Proposition 5). Section IV is dedicated to the statistical approach. We introduce a class of periodic stationary processes (the Gaussian bridges) for which we specify the MMSE estimator in the case of generalized linear measurements (Theorem 2). We also provide a theoretical comparison between the variational and statistical approaches by reformulating the MMSE estimation as the solution of a new optimization problem (Proposition 7). This highlights the strict equivalence of the two approaches for invertible operators and extends known results from sampling to generalized linear measurements. For non-invertible operators, we complete our analysis with simulations in Section V. In particular, we give empirical evidence of the practical relevance of the variational approach for the reconstruction of periodic stationary signals. We provide in Section VI a comparison between our results in the periodic setting and the known results over the real line. Finally, we conclude in Section VII. All the proofs have been postponed to the Appendix sections.

II Mathematical Background for Periodic Signals

Throughout the paper, we consider periodic functions and random processes. Without loss of generality, the period can always be normalized to one. Moreover, we identify a periodic function over ℝ with its restriction to a single period, chosen to be [0, 1). We use the symbols f, s, and s̃ to specify a function, a random process, and an estimator of s, respectively.

We call 𝒮 the space of 1-periodic functions that are infinitely differentiable, 𝒮′ the space of 1-periodic generalized functions (dual of 𝒮), and L₂ the Hilbert space of square-integrable 1-periodic functions associated with the norm ‖f‖_{L₂} = (∫₀¹ |f(t)|² dt)^{1/2}. Working with 𝒮′ allows us to deal with functions with no pointwise interpretation, such as the Dirac comb defined by

Ш(t) = Σ_{k∈ℤ} δ(t − k),    (3)

where δ is the Dirac impulse. The duality product between an element f ∈ 𝒮′ and a smooth function φ ∈ 𝒮 is denoted by ⟨f, φ⟩. For instance, ⟨Ш, φ⟩ = φ(0) for every φ ∈ 𝒮. When the two real functions f and g are in L₂, we simply have the usual scalar product ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt. All these concepts are extended to complex-valued functions in the usual manner, with the convention that ⟨f, g⟩ = ∫₀¹ f(t) g(t)^* dt for square-integrable functions, where ^* denotes complex conjugation. The complex sinusoids are denoted by e_k(t) = exp(2iπkt) for any k ∈ ℤ and t ∈ ℝ. Any periodic generalized function f ∈ 𝒮′ can be expanded as

f = Σ_{k∈ℤ} f̂[k] e_k,    (4)

where the f̂[k] are the Fourier coefficients of f, given by f̂[k] = ⟨f, e_k⟩. Finally, the convolution between two periodic functions f and g is given by

(f ∗ g)(t) = ∫₀¹ f(s) g(t − s) ds.    (5)

If h = f ∗ g, we have that ĥ[k] = f̂[k] ĝ[k].
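The convolution theorem above, and the Parseval-type identity ⟨f, g⟩ = Σ_k f̂[k] ĝ[k]^*, can be checked numerically. The sketch below is illustrative only: for trigonometric polynomials sampled on a sufficiently fine uniform grid, the DFT divided by the number of samples gives the exact Fourier coefficients.

```python
import numpy as np

# Numerical sketch (illustrative grid and signals): for trigonometric
# polynomials, fft(samples)/N returns the exact Fourier coefficients, so both
# the convolution theorem and the Parseval-type identity can be verified.
N = 64
t = np.arange(N) / N

f = np.cos(2 * np.pi * t)          # fhat[1] = fhat[-1] = 1/2
g = np.cos(2 * np.pi * t)

f_hat = np.fft.fft(f) / N
g_hat = np.fft.fft(g) / N

# Convolution theorem: the periodic convolution has coefficients fhat * ghat.
conv = np.fft.ifft(f_hat * g_hat) * N        # samples of (f * g)(t)
assert np.allclose(conv.real, 0.5 * np.cos(2 * np.pi * t))

# Parseval: <f, g> = int_0^1 f(t) conj(g(t)) dt = sum_k fhat[k] conj(ghat[k]).
inner_time = np.mean(f * np.conj(g))
inner_freq = np.sum(f_hat * np.conj(g_hat))
assert np.allclose(inner_time, inner_freq)
```

Here (cos ∗ cos)(t) = cos(2πt)/2 over one period, which the assertion confirms.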

II-A Linear and Shift-Invariant Operators

Let L be a linear, shift-invariant (LSI), and continuous operator from 𝒮′ to 𝒮′. The shift invariance implies the existence of a sequence (L̂[k])_{k∈ℤ} such that

L{e_k} = L̂[k] e_k    (6)

for any k ∈ ℤ. We call (L̂[k])_{k∈ℤ} the frequency response of the operator L; it is also given by

LШ = Σ_{k∈ℤ} L̂[k] e_k.    (7)

The sequence (L̂[k])_{k∈ℤ} is the Fourier series of the periodic generalized function LШ, and is therefore of slow growth [47, Chapter VII]. This implies that L, a priori defined from 𝒮′ to 𝒮′, actually maps 𝒮 continuously into itself. This is a significant difference with the non-periodic setting; we discuss this point in the conclusion in Section VII. Therefore, one can extend it by duality from 𝒮′ to 𝒮′. Then, for every f ∈ 𝒮′, we easily obtain from (6) that

Lf = Σ_{k∈ℤ} L̂[k] f̂[k] e_k.    (8)
The null space of L is N_L = {f ∈ 𝒮′ : Lf = 0}. We shall only consider operators whose null space is finite-dimensional, in which case N_L can only be made of linear combinations of sinusoids at frequencies that are annihilated by L. We state this fact in Proposition 1 and prove it in Appendix -A.

Proposition 1.

Let L be a continuous LSI operator. If L has a finite-dimensional null space of dimension N, then the null space is of the form

N_L = span{e_{k_1}, …, e_{k_N}},    (9)

where the k_n ∈ ℤ are distinct.

From (6) and (9), we deduce that e_k ∈ N_L if and only if k = k_n for some n ∈ {1, …, N}. In the following, we consider real-valued operators. In that case, we have the Hermitian symmetry L̂[−k] = L̂[k]^*. Moreover, e_k ∈ N_L if and only if e_{−k} ∈ N_L. The orthogonal projection of f on the null space N_L is given by

P_{N_L} f = Σ_{n=1}^{N} f̂[k_n] e_{k_n}.    (10)

Let f, g ∈ L₂. Then, (4) can be re-expressed as f = Σ_{k∈ℤ} ⟨f, e_k⟩ e_k and we have that ⟨f, e_k⟩ = f̂[k], which yields the Parseval relation

⟨f, g⟩ = Σ_{k∈ℤ} f̂[k] ĝ[k]^*.    (11)
II-B Periodic L-Splines

Historically, splines are functions defined to be piecewise polynomials [48]. A spline is hence naturally associated with the derivative operator of a given order [49] in the sense that, for a fixed N₀, a spline function f satisfies D^{N₀} f = Σ_m a_m δ(· − t_m), with D^{N₀} the N₀th derivative. Splines have been extended to differential [50, 51, 52, 53], fractional [26, 54] or, more generally, spline-admissible operators [41]. We adapt here this notion to the periodic setting, where the Dirac impulse δ is replaced by the Dirac comb Ш.

Definition 1.

Consider an LSI operator L with finite-dimensional null space. We say that a function f is a periodic L-spline if

Lf = Σ_{m=1}^{M} a_m Ш(· − t_m)    (12)

for some integer M, weights a₁, …, a_M, and knot locations t₁, …, t_M ∈ [0, 1).

Periodic L-splines play a crucial role in the variational and statistical approaches for the resolution of inverse problems in the periodic setting. We represent some periodic splines associated with different operators in Figure 1.

Fig. 1: Illustrations of periodic L-splines for two different operators, panels (a) and (b). Dots: knots t_m. The spline in (a) corresponds to the periodization of an exponential B-spline (see Figure 1 in [52]).

III Periodic Representer Theorem

We now consider a continuous LSI operator L with finite-dimensional null space N_L. Let ν = (ν₁, …, ν_M) be the vector of the linear measurement functions ν_m. They usually are of the form ν_m = Ш(· − t_m) for time-domain sampling problems. Here, we consider general linear measurements to include any kind of inverse problems. In this section, our goal is to recover a function f from observed data y = (y₁, …, y_M) such that y_m ≈ ⟨ν_m, f⟩. To do so, we consider the variational problem

min_f  F(y, ν(f)) + λ ‖Lf‖²_{L₂},    (13)

where ν(f) = (⟨ν₁, f⟩, …, ⟨ν_M, f⟩) and F is a strictly convex and continuous function called the cost function. This function controls the fidelity to data. A special attention will be given to the quadratic data fidelity of the form

F(y, z) = ‖y − z‖²₂ = Σ_{m=1}^{M} (y_m − z_m)².    (14)

We give the solution of (13) over the relevant space of 1-periodic functions in Theorem 1. To derive this solution, we first introduce and characterize the space of functions on which (13) is well-defined.

III-A Search Space

The optimization problem (13) deals with functions f such that Lf is square-integrable, which leads us to introduce the space H_L = {f ∈ 𝒮′ : Lf ∈ L₂}. Due to (11), we have that

H_L = { f ∈ 𝒮′ : Σ_{k∈ℤ} |L̂[k]|² |f̂[k]|² < ∞ }.    (15)

Similar constructions have been developed for functions over ℝ or for sequences by Unser et al. [32, 55]. We now identify a natural Hilbertian structure on H_L. If L is invertible, then H_L inherits the Hilbert-space structure of L₂ via the norm f ↦ ‖Lf‖_{L₂}. However, when L has a nontrivial null space, f ↦ ‖Lf‖_{L₂} is only a semi-norm, in which case there exists a nonzero f (any element of the null space of L) such that ‖Lf‖_{L₂} = 0. To obtain a bona fide norm, we complete the semi-norm with a special treatment for the null-space components in Proposition 2.

Proposition 2.

Let L be a continuous LSI operator whose finite-dimensional null space is N_L = span{e_{k_1}, …, e_{k_N}}. We fix ε > 0. Then, H_L is a Hilbert space for the inner product

⟨f, g⟩_{H_L} = ε² ⟨P_{N_L} f, P_{N_L} g⟩ + ⟨Lf, Lg⟩.    (16)

The proof is given in Appendix -B. We have that ‖f‖²_{H_L} = ε² ‖P_{N_L} f‖²_{L₂} + ‖Lf‖²_{L₂}, where P_{N_L} is given by (10). The coefficient ε² balances the contribution of both terms.

III-B Periodic Reproducing-Kernel Hilbert Space

Reproducing-kernel Hilbert spaces (RKHS) are Hilbert spaces on which the evaluation maps are well-defined, linear, and continuous. In this section, we answer the question of when the Hilbert space H_L associated to an LSI operator L with finite-dimensional null space is a RKHS. This property is relevant to us because periodic function spaces that are RKHS are precisely the ones for which one can use sampling measurement functions of the form ν_m = Ш(· − t_m) in (13).

Definition 2.

Let H be a Hilbert space of 1-periodic functions and H′ be its dual. Then, we say that H is a RKHS if the shifted Dirac comb Ш(· − t₀) ∈ H′ for any t₀ ∈ [0, 1).

This implies that any element f of a RKHS has a pointwise interpretation as a function, with f(t₀) = ⟨Ш(· − t₀), f⟩. As is well known, for any RKHS H there exists a unique function g such that g(·, t₀) ∈ H and f(t₀) = ⟨f, g(·, t₀)⟩_H for every t₀ ∈ [0, 1) and f ∈ H. We call g the reproducing kernel of H.

Proposition 3.

Let L be a continuous LSI operator with finite-dimensional null space. The Hilbert space H_L (see (15)) is a RKHS if and only if

Σ_{k : L̂[k] ≠ 0} 1 / |L̂[k]|² < ∞.    (17)

Then, the reproducing kernel for the scalar product (16) is given by g(t, u) = h(t − u), where h is

h(t) = (1/ε²) Σ_{n=1}^{N} e_{k_n}(t) + Σ_{k : L̂[k] ≠ 0} e_k(t) / |L̂[k]|².    (18)
The proof is given in Appendix -C. Note that the reproducing kernel only depends on the difference t − u.
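For a concrete instance of condition (17) and of the series defining the kernel, consider the derivative operator L = D, assuming here the usual normalization L̂[k] = 2iπk (the null space then reduces to the constants, i.e., the frequency k = 0). The non-null-space part of the kernel can be evaluated by truncating the series, and for this operator it has a closed form related to the Bernoulli polynomial B₂. This is a hedged sketch, not the paper's code.

```python
import numpy as np

# Truncated evaluation of the non-null-space part of the kernel,
#   h0(t) = sum_{k != 0} e_k(t) / |Lhat[k]|^2,
# for L = D with assumed frequency response Lhat[k] = 2i pi k.
def kernel_D(t, K=2000):
    k = np.arange(1, K + 1)
    # the terms +k and -k pair into 2 cos(2 pi k t) / (2 pi k)^2
    terms = 2.0 * np.cos(2 * np.pi * np.atleast_1d(t)[:, None] * k) / (2 * np.pi * k) ** 2
    return terms.sum(axis=-1)

t = np.linspace(0.0, 1.0, 11)
h = kernel_D(t)

# Closed form on [0, 1]: sum_{k>=1} cos(2 pi k t)/k^2 = pi^2 (t^2 - t + 1/6),
# so h0(t) = (t^2 - t + 1/6) / 2.
assert np.allclose(h, (t ** 2 - t + 1.0 / 6.0) / 2.0, atol=1e-4)
```

With a nontrivial ε, the full kernel of (18) would additionally contain the constant 1/ε² contributed by the null-space frequency k = 0.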

III-C Periodic Representer Theorem

Now that we have defined the search space of the optimization problem (13), we derive the representer theorem that gives the explicit form of its unique periodic solution.

Theorem 1.

We consider the optimization problem

min_{f ∈ H_L}  F(y, ν(f)) + λ ‖Lf‖²_{L₂},    (19)

where:

  • F is strictly convex and continuous;

  • L is an LSI operator with finite-dimensional null space N_L;

  • ν = (ν₁, …, ν_M), with ν_m ∈ H_L′, is such that ν is injective on N_L;

  • y = (y₁, …, y_M) are the observed data; and

  • λ > 0 is a tuning parameter.

Then, (19) has a unique solution of the form

f̃ = Σ_{m=1}^{M} a_m (h ∗ ν_m) + Σ_{n=1}^{N} b_n e_{k_n},    (20)

where a = (a₁, …, a_M), b = (b₁, …, b_N), and h is given by (18). Moreover, the vector a satisfies the relation Eᴴ a = 0, with E the M × N matrix with entries [E]_{m,n} = ⟨ν_m, e_{k_n}⟩.

The proof of Theorem 1 is given in Appendix -D. The optimal solution depends on M + N coefficients, but the condition on a implies that there are only M degrees of freedom. In the case when F is quadratic of the form (14), the solution is made explicit in Proposition 4.

Proposition 4.

Under the conditions of Theorem 1, if F is given by (14), then the vectors a and b satisfy the linear system

(G + λ I_M) a + E b = y,   Eᴴ a = 0,    (21)

where E is defined by [E]_{m,n} = ⟨ν_m, e_{k_n}⟩ and G is a Gram matrix such that

[G]_{m,m'} = ⟨ν_m, h ∗ ν_{m'}⟩.    (22)
The proof is given in Appendix -E. In the case of sampling measurements, we show moreover in Proposition 5 that the optimal solution is a periodic spline in the sense of Definition 1. We recall that such measurements are valid as soon as the search space is a RKHS, a situation that has been fully characterized in Proposition 3.
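To make the quadratic case concrete, here is a hedged numerical sketch for an invertible operator, hypothetically L̂[k] = 1 + 2iπk, so that the null space is trivial and the quantities E and b disappear. With sampling measurements, the Gram matrix reduces to G[m, m'] = h(t_m − t_{m'}) and the system becomes (G + λI)a = y in one common normalization of (14); the exact scaling of λ depends on that normalization. The data misfit at the sample points then grows monotonically with λ, which is the smoothing tradeoff discussed later in Section V.

```python
import numpy as np

# Hedged sketch: quadratic smoothing with sampling measurements for an
# invertible operator, assumed here to have Lhat[k] = 1 + 2i pi k (L = Id + D).
# The kernel h is evaluated by truncating its Fourier series.
def kernel(t, K=500):
    k = np.arange(-K, K + 1)
    coeffs = 1.0 / np.abs(1.0 + 2j * np.pi * k) ** 2        # 1 / |Lhat[k]|^2
    return np.real(np.exp(2j * np.pi * np.asarray(t)[..., None] * k) @ coeffs)

rng = np.random.default_rng(0)
M = 20
t_m = np.sort(rng.uniform(0.0, 1.0, M))                     # sample locations
y = np.sin(2 * np.pi * t_m) + 0.1 * rng.standard_normal(M)  # noisy samples

G = kernel(t_m[:, None] - t_m[None, :])                     # G[m,m'] = h(t_m - t_m')

def misfit(lam):
    a = np.linalg.solve(G + lam * np.eye(M), y)             # spline coefficients
    return np.linalg.norm(G @ a - y)                        # residual at the samples

# more regularization (larger lam) means more smoothing, hence a larger misfit
assert misfit(1e-4) < misfit(1e-1)
```

The reconstructed spline itself is then t ↦ Σ_m a_m h(t − t_m), evaluated with the same `kernel` routine.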

Proposition 5.

Under the conditions of Proposition 4, if L satisfies (17) and if the measurement functions are of the form ν_m = Ш(· − t_m), m = 1, …, M, then the unique solution of (19) is a periodic L-spline with weights (a_m) and knots (t_m).

The proof is given in Appendix -F.

IV Periodic Processes and MMSE

In this section, we change perspective and consider the following statistical problem: given noisy measurements of a zero-mean, real, and periodic Gaussian process s, we look for the optimal estimator (for the mean-square error) of the complete process over [0, 1).

IV-A Non-Periodic Setting

In a non-periodic setting, it is usual to consider stochastic models where the random process s is a solution to the stochastic differential equation [41]

Ls = w,    (23)

where L is a linear differential operator and w a continuous-domain (non-periodic) Gaussian white noise. When the null space of the operator is nontrivial, it is necessary to add boundary conditions so that the law of the process is uniquely defined.

IV-B Gaussian Bridges

In the periodic setting, the construction of periodic Gaussian processes has to be adapted. We first introduce the notion of periodic Gaussian white noise, exploiting the fact that the law of a zero-mean periodic Gaussian process s is fully characterized by its covariance function c_s(t, u) = E[s(t) s(u)].

Definition 3.

A periodic Gaussian white noise w is a Gaussian random process whose covariance is c_w(t, u) = Ш(t − u).

For any periodic real function φ ∈ L₂, the random variable ⟨w, φ⟩ is therefore Gaussian with mean 0 and variance ‖φ‖²_{L₂}. Moreover, ⟨w, φ₁⟩ and ⟨w, φ₂⟩ are independent if and only if ⟨φ₁, φ₂⟩ = 0. Hence, the Fourier coefficients ŵ[k] = ⟨w, e_k⟩ of the periodic Gaussian white noise satisfy the following properties:

  • ŵ[−k] = ŵ[k]^* (the noise is real-valued);

  • ŵ[0] ∼ N(0, 1);

  • E[ŵ[k]] = 0 and E[|ŵ[k]|²] = 1 for every k ∈ ℤ;

  • Re(ŵ[k]) ∼ N(0, 1/2) and Im(ŵ[k]) ∼ N(0, 1/2) for k ≠ 0; and

  • Re(ŵ[k]) and Im(ŵ[k]), k > 0, are independent.

Put differently, for any nonzero frequency k, ŵ[k] follows a complex normal distribution with mean 0, covariance E[|ŵ[k]|²] = 1, and pseudo-covariance E[ŵ[k]²] = 0 [56].
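The properties above translate directly into a recipe for simulating the Fourier coefficients of a periodic Gaussian white noise. The following sketch uses arbitrary truncation order and seed; it is an illustration, not the paper's implementation.

```python
import numpy as np

# Generate what[k] for |k| <= K: what[0] ~ N(0, 1) real, and for k > 0 a
# circular complex Gaussian with E|what[k]|^2 = 1, mirrored by the Hermitian
# symmetry what[-k] = conj(what[k]) so that the noise is real-valued.
def white_noise_coeffs(K, rng):
    w0 = rng.standard_normal()
    wk = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2.0)
    k = np.arange(-K, K + 1)
    return k, np.concatenate([np.conj(wk[::-1]), [w0], wk])

rng = np.random.default_rng(1)
k, w_hat = white_noise_coeffs(4, rng)
assert np.allclose(w_hat[k == -3], np.conj(w_hat[k == 3]))  # Hermitian symmetry

# second-order statistics, checked empirically over many draws
draws = np.array([white_noise_coeffs(4, rng)[1] for _ in range(5000)])
assert abs(np.mean(np.abs(draws[:, -1]) ** 2) - 1.0) < 0.1  # E|what[K]|^2 = 1
assert abs(np.mean(draws[:, -1])) < 0.05                    # zero mean
```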

TABLE I: Realizations of Gaussian bridges for several operators.

When L has a nontrivial null space, there is no hope to construct a periodic process solution of (23) with a periodic Gaussian white noise. Indeed, the operator kills the null-space frequencies, which contradicts the fact that ŵ[k] ≠ 0 almost surely for every k. One should adapt (23) accordingly by giving a special treatment to the null-space frequencies. We propose here to consider a new class of periodic Gaussian processes: the Gaussian bridges. Given some operator L and ε > 0, we set

L_ε = L + ε P_{N_L},    (25)

where P_{N_L} is given by (10). Note that L_ε f = Lf for any f when the null space of L is trivial. Moreover, we remark that

‖L_ε f‖²_{L₂} = ‖f‖²_{H_L},    (26)

where ⟨·, ·⟩_{H_L} is given in (16) (with the same ε).

Definition 4.

A Gaussian bridge is a periodic Gaussian process s, solution to the stochastic differential equation

L_ε s = w,    (27)

with w a periodic Gaussian white noise and L_ε given by (25) for some LSI operator L with finite-dimensional null space and ε > 0. We summarize this situation with the notation s ∼ GB(L, ε). When the null space is trivial, in which case the parameter ε is immaterial, we write s ∼ GB(L).

The Gaussian-bridge terminology is inspired by the Brownian bridge, the periodic version of the Brownian motion. Several realizations of our Gaussian bridges for various operators are shown in Table I. The influence of the parameter ε is illustrated in Figure 2.

Proposition 6.

The covariance function of the Gaussian bridge s is

c_s(t, u) = E[s(t) s(u)] = h(t − u),    (28)

where h is defined in (18). It implies that, in the Fourier domain,

E[ŝ[k] ŝ[l]^*] = ĥ[k] δ[k − l].    (29)

In particular, we have that

E[s(t)²] = h(0).    (30)
The proof of Proposition 6 is given in Appendix -G. An important consequence is that a Gaussian bridge is stationary since its covariance function only depends on the difference .

IV-C Measurement Model and MMSE Estimator

For this section, we restrict ourselves to operators L for which the native space H_L is a RKHS. In that case, using (30) and (18), the Gaussian bridge satisfies

E[‖s‖²_{L₂}] = ∫₀¹ E[s(t)²] dt = h(0),    (31)

which is finite according to (17). Therefore, the Gaussian bridge is (almost surely) square-integrable.

The observed data are assumed to be generated as

y_m = ⟨ν_m, s⟩ + n_m,  m = 1, …, M,    (32)

where s is a Gaussian bridge (see Definition 4), ν = (ν₁, …, ν_M) is a vector of linear measurement functions, and the n_m are independent random perturbations such that n_m ∼ N(0, σ²). Given y in (32), we want to find the estimator s̃ of the Gaussian bridge s, imposing that it minimizes the quantity E[‖s − s̃‖²_{L₂}].

Theorem 2.

Let y = (y₁, …, y_M) be the noisy measurement vector (32) of the Gaussian bridge s, with measurement functions ν_m ∈ H_L′, m = 1, …, M. Then, the MMSE estimator of s given the samples y is

s̃(t) = E[s(t) | y] = Σ_{m=1}^{M} a_m (h ∗ ν_m)(t),    (33)

where a = (G + σ² I_M)^{−1} y, with h given by (18), and G is the Gram matrix defined in (22).

The proof is given in Appendix -H. Theorem 2 can be seen as a generalization of the classical Wiener filtering, designed for discrete signals, to the hybrid case where the input signal lives in a (periodic) continuous domain and the (finite-dimensional) measurements are discrete. A leading theme of this paper is that the form of the MMSE estimator is very close to the one of the solution of the representer theorem with a quadratic cost function. This connection is exploited in Section IV-D.
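The structure of Theorem 2, a linear system driven by a Gram matrix regularized by the noise variance, is exactly that of finite-dimensional Wiener filtering. The following toy sketch (a discrete analogue, not the paper's continuous-domain estimator) checks by Monte Carlo that the linear rule built from the covariance beats the raw measurements in the mean-square sense.

```python
import numpy as np

# Toy discrete analogue of Theorem 2: for a zero-mean Gaussian vector s with
# covariance C and measurements y = s + n, n ~ N(0, sigma2 I), the MMSE
# estimator is linear: shat = C (C + sigma2 I)^{-1} y.
rng = np.random.default_rng(2)
M, sigma2, trials = 5, 0.5, 20000

A = rng.standard_normal((M, M))
C = A @ A.T + 0.1 * np.eye(M)                  # a valid covariance matrix
W = C @ np.linalg.inv(C + sigma2 * np.eye(M))  # Wiener matrix

L = np.linalg.cholesky(C)
S = rng.standard_normal((trials, M)) @ L.T                   # s ~ N(0, C)
Y = S + np.sqrt(sigma2) * rng.standard_normal((trials, M))   # y = s + n

mse_mmse = np.mean(np.sum((Y @ W.T - S) ** 2, axis=1))
mse_raw = np.mean(np.sum((Y - S) ** 2, axis=1))

assert mse_mmse < mse_raw        # the Wiener rule beats the raw measurements
```

In Theorem 2, the covariance matrix of the measurements is precisely G + σ²I, with G built from the kernel h, which is why the same shrinkage pattern appears there.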

IV-D MMSE Estimation as a Representer Theorem

The MMSE estimator given in Theorem 2 can be interpreted as the solution of the optimization problem described in Proposition 7.

Proposition 7.

Consider an LSI operator L with finite-dimensional null space, ε > 0, λ > 0, and observed data y ∈ ℝ^M. We set L_ε as in (25). Then, the solution of the optimization problem

min_{f ∈ H_L}  Σ_{m=1}^{M} (y_m − ⟨ν_m, f⟩)² + λ ‖L_ε f‖²_{L₂}    (34)

exists, is unique, and given by

f̃ = Σ_{m=1}^{M} a_m (h ∗ ν_m),    (35)

where a = (G + λ I_M)^{−1} y and h is given by (18). In particular, the unique minimizer of (34) is the MMSE estimator given in Theorem 2 for λ = σ² and the same ε.

The proof of Proposition 7 follows the same steps as the ones of Theorem 1 (form of the minimizer for the periodic representer theorem) and Proposition 4 (explicit formulas in terms of system matrix for the vectors and ), with significant simplifications that are detailed in Appendix -I. Proposition 7 has obvious similarities with Theorem 1, but it also adds new elements.

  • Proposition 7 gives an interpretation of the MMSE estimator of a Gaussian bridge given its measurements as the solution to an optimization problem. This problem is very close to the periodic representer theorem (Theorem 1) for a quadratic cost function. However, (34) differs from (19) because the regularization also penalizes null-space frequencies.

  • If the null space is trivial, then

‖L_ε f‖_{L₂} = ‖Lf‖_{L₂}    (36)

    for every f ∈ H_L. This means that Theorem 1 (smoothing approach) and Theorem 2 (statistical approach) correspond to the same reconstruction method. This equivalence is well-known for stationary processes on the real line in the case of time-domain sampling measurements [42]. Our results extend this to the periodic setting and to the case of generalized linear measurements.

  • If the null space is nontrivial, then Theorem 1 and Proposition 7 yield different reconstructions. In particular, this implies that one cannot interpret the optimizer in Theorem 1 as the MMSE estimator of a Gaussian bridge. Yet, the solutions get closer and closer as ε → 0. In Section V, we investigate this situation more deeply.

V Quality of the Estimators on Simulations

We consider s̃_{λ,ε} as the linear estimator of s given y obtained by solving (34), where λ > 0, ε > 0, and the coefficient vector a is defined in Proposition 4. To simplify notations, we shall omit ε when it is immaterial and write s̃_λ. Each pair (λ, ε) gives an estimator. In particular, if s is a Gaussian bridge, then s̃_{σ²,ε} is the MMSE estimator, according to Theorem 2. The mean-square error (MSE) of s̃_{λ,ε} over N experiments is computed as (1/N) Σ_{i=1}^{N} ‖s_i − s̃_{λ,ε,i}‖²_{L₂}, where the s_i are independent realizations of s that yield a new noisy measurement vector y_i and s̃_{λ,ε,i} is the estimator based on y_i. We define the normalized mean-square error (NMSE) by

NMSE(λ, ε) = ( Σ_{i=1}^{N} ‖s_i − s̃_{λ,ε,i}‖²_{L₂} ) / ( Σ_{i=1}^{N} ‖s_i‖²_{L₂} ).    (37)

In this section, we first detail the generation of Gaussian bridges (Section V-A). We then investigate the role of the parameters λ (Section V-B) and ε (Section V-C) on the quality of the estimator s̃_{λ,ε}. We primarily focus on time-domain sampling measurements with ν_m = Ш(· − t_m), where the t_m are in [0, 1).
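The empirical NMSE is straightforward to compute from paired realizations and estimates. This is a hedged sketch assuming the normalization of (37): total squared error over the experiments divided by the total energy of the realizations.

```python
import numpy as np

# Empirical NMSE: total squared estimation error over the experiments,
# normalized by the total energy of the realizations.
def nmse(signals, estimates):
    num = sum(np.sum((s - e) ** 2) for s, e in zip(signals, estimates))
    den = sum(np.sum(s ** 2) for s in signals)
    return num / den

s_list = [np.array([1.0, 2.0]), np.array([0.0, 3.0])]
assert nmse(s_list, s_list) == 0.0                   # perfect estimation
zeros = [np.zeros_like(s) for s in s_list]
assert nmse(s_list, zeros) == 1.0                    # trivial zero estimator
```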

V-A Generation of Gaussian Bridges

We first fix the operator L, with null space of dimension N, and ε > 0. Then, we generate the Fourier coefficients ŵ[k], |k| ≤ K₀, of a Gaussian white noise according to Definition 3. Finally, we compute the Gaussian bridge as

s_{K₀}(t) = Σ_{|k| ≤ K₀} (ŵ[k] / L̂_ε[k]) e_k(t).    (38)

Since K₀ < ∞, (38) provides a mere approximation of the Gaussian bridge. However, the approximation error can be made arbitrarily small by taking K₀ large enough. In Figure 2, we generate realizations for four values of ε. For small values of ε, the null-space component dominates. When ε increases, the null-space component has a weaker influence.
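The synthesis (38) can be sketched as follows for the derivative operator L = D, an assumed example with L̂[k] = 2iπk and single null-space frequency k = 0, where L̂_ε takes the value ε. The time average of each realization equals ŵ[0]/ε, so small ε indeed makes the null-space (constant) component dominate: its standard deviation across realizations is 1/ε.

```python
import numpy as np

# Approximate Gaussian bridge (38) for L = D: filter white-noise coefficients
# by 1 / Lhat_eps[k], with Lhat_eps[k] = 2i pi k for k != 0 and eps at k = 0.
def gaussian_bridge_D(t, eps, K, rng):
    k = np.arange(-K, K + 1)
    wk = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2.0)
    w_hat = np.concatenate([np.conj(wk[::-1]), [rng.standard_normal()], wk])
    L_eps = np.where(k == 0, eps, 2j * np.pi * k)
    return np.real(np.exp(2j * np.pi * t[:, None] * k) @ (w_hat / L_eps))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200, endpoint=False)

# the time average of each realization is what[0] / eps, of std 1 / eps
eps = 0.5
means = [np.mean(gaussian_bridge_D(t, eps, 20, rng)) for _ in range(3000)]
assert abs(eps * np.std(means) - 1.0) < 0.2
```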

Fig. 2: Illustration of realizations of a Gaussian bridge for four different values of ε (panels (a)-(d)).

V-B Influence of λ

We evaluate the influence of the parameter λ in the case of an invertible operator L. In this case, we have that L_ε = L (since the null space is trivial), which simplifies (25). Hence, the parameter ε is immaterial and we denote by s̃_λ the estimator associated to λ. We fix the number of measurements M and the noise variance σ².

Time-Domain Sampling Measurements. We generated realizations of s. From each one, we extracted M noisy measurements. We then computed estimators s̃_λ for λ in a set of values obtained by uniform sampling of an interval containing σ². The plot of the NMSE (approximated according to (37)) as a function of λ is given in Figure 3 (a). The minimum error is obtained for λ = σ², which corresponds to the MMSE estimator. This result validates the theory presented in Theorem 2. Actually, when λ is small, the estimator interpolates the noisy measurements while, for large λ, the estimator tends to oversmooth the curve. The MMSE estimator makes an optimal tradeoff between fitting the data and smoothing the curve. These observations about λ retain their validity for other operators, including noninvertible ones.

Fourier-Domain Sampling Measurements. We consider complex exponential measurement functionals, inducing ν_m = e_{k_m}, where the frequencies k_m are in ℤ. We consider the measurements y_m = ŝ[k_m] + n_m. Note that these measurement functionals are complex, which calls for a slight adaptation of the framework presented so far. The noise n_m is then also complex and satisfies the properties:

  • E[n_m] = 0;

  • E[|n_m|²] = σ² and E[n_m²] = 0;

  • Re(n_m) ∼ N(0, σ²/2) and Im(n_m) ∼ N(0, σ²/2);

  • Re(n_m) and Im(n_m) are independent; and

  • n₁, …, n_M are independent.

This means that n_m follows a complex normal distribution with mean 0 and variance σ² for every m.

We repeated the experiment done with the time-domain sampling using exactly the same procedure and parameters. The experimental curve of the evolution of the NMSE with λ is given in Figure 3 (b). Again, the minimum is obtained for λ = σ². We now want to compare this curve to the theoretical one.

For the Fourier-sampling case, we were also able to derive the corresponding closed-form formulas for the NMSE (37).

Proposition 8.

Let s be a Gaussian bridge associated with an invertible operator L, and y_m = ŝ[k_m] + n_m, m = 1, …, M, with k_m the sampled frequencies and n_m a complex Gaussian noise with variance σ² as above. Then, the MSE of the estimator s̃_λ is given by

MSE(λ) = Σ_{m=1}^{M} ĥ[k_m] (λ² + ĥ[k_m] σ²) / (ĥ[k_m] + λ)²  +  Σ_{k ∉ {k_1, …, k_M}} ĥ[k],    (39)

where h is the reproducing kernel of H_L.

The proof is given in Appendix -J. Note that ĥ[k] is real-valued and strictly positive for every k. From (39), we also recover the property that the optimum is reached for λ = σ², since each of the terms that appear in the first sum is minimized for this value of λ.

The theoretical curve of the NMSE is given in Figure 3 (b) and is in good agreement with the experimental curve. We explain the slight deviation between the two by the fact that (37) is only an estimation of the theoretical NMSE.

Fig. 3: Evolution of the NMSE in terms of λ for (a) time-domain and (b) Fourier-domain sampling measurements.

V-C Influence of ε

In this section, we only consider noninvertible operators, since invertibility has already been addressed in Section IV-D (see (36)). In order to evaluate the specific influence of ε, we set λ = σ². We generated realizations of a Gaussian bridge, and from each one, we extracted M noisy measurements. We repeated this for several operators L and values of ε and σ². For each case, we compared the MMSE estimator s̃_{σ²,ε} to the estimators obtained with mismatched values of ε, and to the solution of (19) in (20), seen here as an additional estimator. The corresponding NMSEs (see (37)) are given in Table II. We make four observations.

1) In each case, the best result is obtained with the matched value of ε, as expected. We see, moreover, that the solution of (19) behaves like the limit case of vanishing ε. This is in line with the fact that the functional (19) to minimize in Theorem 1 corresponds to (34) with ε = 0.

2) For small values of (i.e., or ), we see that